Ruben Hakopian
Oct 7, 2018

Marriage of Application Delivery and Service Mesh


From a developer’s diary

I thought “why me?” when the business came back asking for a new set of microservices to replace existing ones, to cover the new direction they wanted to head in. It immediately reminded me of the hell of rewriting the original monolithic application into microservices two years back, only this time it was much worse.

Back then we had a humongous Java application and a single MySQL database. There was of course a lot of spaghetti in the code, but with the help of modern IDEs we could make sense of what was going on. Our deployment script was as if from the early Paleolithic period, but since there were so few moving parts, it was surprisingly manageable.

Breaking up the monolith into microservices was quite fun in the beginning. Lots of brand-name tools were adopted to provision cloud infrastructure, orchestrate services, and provide service discovery and service mesh features. With time, the use and configuration of those tools became more complex. Defining and linking new services got harder and harder. To avoid the hassle, many would break the rules of microservices and connect directly to a database owned by some other service. The whole code base was broken up into fifteen repositories, and there was no good way to see why and how services (our own as well as cloud-native services) were consumed. Our spaghetti evolved to a completely new level.

Root Cause

The cause of all these challenges is not the staff, but the environment the staff was put in. The dependencies that used to be part of the monolithic code base became dependencies across teams. Every single tool covering a certain delivery stage (provisioning, orchestration, discovery, monitoring) introduces additional complexity and requires precise configuration and babysitting.

Solution

How about we try a different approach? We will look at application delivery as a whole, including service mesh and monitoring.

Let’s consider the sample application below. The project is available in the https://github.com/berlioz-the/samples repository, top-level directory 04.Pharmacy.

This sample emulates a modern pharmacy. Access from the public internet goes through a web front-end service. The inventory service allows population and retrieval of available drugs. The clerk service accepts prescriptions from patients and posts the orders into the jobs message queue. The pharmacist service takes an order from the jobs message queue, prepares it, and notifies the waiting patient using the dashboard service. The dashboard service stores the names of patients whose prescriptions are ready for pickup in a DynamoDB table named dash. See the diagram below for reference.

The diagram above is generated using the berlioz command-line tool:

$ berlioz output-diagram

It is produced from Berliozfile definition files. Let’s have a deeper look inside.

Here is the definition file for the web front-end service. It is part of the pharm cluster. It exposes an HTTP endpoint and listens on port 3000. This endpoint needs to sit behind a load balancer.

We also want the web service to communicate with the inventory, clerk and dashboard services, so we declare that in the “consumes” section. A sketch of what this looks like follows below.
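The original post showed the file itself as an image; the sketch below is only an approximation of what such a Berliozfile might look like. The field names are assumptions, and the endpoint name client is inferred from the load-balancer URL shown later in the article:

    kind: service
    name: web
    cluster: pharm
    provides:
        client:
            port: 3000
            protocol: http
            load-balance: true
    consumes:
        - service: inventory
        - service: clerk
        - service: dashboard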

The inventory service definition is not very different from the web one, except that we want it to consume the database named drugs; see the sketch below.
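A hypothetical sketch of the inventory definition, under the same schema assumptions as above (the port number is purely illustrative):

    kind: service
    name: inventory
    cluster: pharm
    provides:
        default:
            port: 4000
            protocol: http
    consumes:
        - database: drugs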

This is how the definition of the drugs database looks. It represents an AWS DynamoDB table. A hash key is mandatory for the life cycle of a DynamoDB table, so the name and type of the hash column are defined.
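A sketch of what the drugs resource definition might look like; the class/subClass keys, the hash column name, and its type are all assumptions made for illustration:

    kind: database
    name: drugs
    cluster: pharm
    class: nosql
    subClass: dynamodb
    hashKey:
        name: name
        type: String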

The rest of the services and resources are defined in a similar fashion. As you can see, the declarations are pretty self-explanatory. Each contains exactly what is needed for the service to operate. Nothing more, nothing less.

What makes Berlioz so different from any other application delivery tool is that it natively provides service discovery and service mesh capabilities. Since it already knows the topology of the entire application, there is nothing else to configure. To communicate with peer services and benefit from service mesh features (retries, distributed tracing, etc.), a client-side SDK is used. The technical aspects of the SDK and discovery engine are not part of this article, but I will mention that peer resolution happens locally, without involving any proxies along the data path.

This is a code snippet from the web front-end service that sends an HTTP GET request to the inventory service to fetch the list of drugs. To access a service, one just has to use the aliases defined in the “consumes” section of the service definition. The following code is written in Node.js.
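The snippet appeared as an image in the original post; the sketch below is a hypothetical reconstruction, assuming the Node.js SDK exposes a request helper addressed by the consumed-service alias:

    // Hypothetical sketch; the package name and request signature are
    // assumptions, not the documented Berlioz Node.js API.
    const berlioz = require('berlioz-connector');

    function fetchDrugs() {
        // "inventory" is the alias declared in the consumes section;
        // peer resolution happens locally, with no proxy on the data path.
        return berlioz.request('inventory', { method: 'GET', path: '/drugs' });
    }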

Here is how the same code would look in Python.
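Again a hedged sketch, assuming a Python flavor of the same SDK:

    # Hypothetical sketch; the module name and request signature are
    # assumptions, not the documented Berlioz Python API.
    import berlioz

    def fetch_drugs():
        # "inventory" is the alias declared in the consumes section.
        return berlioz.request('inventory', method='GET', path='/drugs')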

And in Golang.
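A comparable Go sketch; the import path and function signature are assumptions:

    // Hypothetical sketch; the import path and Request signature are
    // assumptions, not the documented Berlioz Go API.
    package web

    import berlioz "github.com/berlioz-the/berlioz-go"

    func fetchDrugs() ([]byte, error) {
        // "inventory" is the alias declared in the consumes section.
        return berlioz.Request("inventory", "GET", "/drugs")
    }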

Access to AWS native resources follows a similar principle. The Berlioz client SDK provides service mesh capabilities for AWS cloud-native resources just as it does for home-grown microservices.

Below is a code snippet from the inventory service that returns the list of drugs. The sample is in Node.js; it would look very similar in other languages.
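The original snippet is not reproduced here; the following is a hypothetical sketch, assuming the SDK hands back a DynamoDB client that is already instrumented and bound to the drugs table declared in the “consumes” section:

    // Hypothetical sketch; the database accessor is an assumption.
    const berlioz = require('berlioz-connector');

    function listDrugs(req, res) {
        // The returned client is assumed to come pre-wired with tracing
        // and retries, and already bound to the resolved drugs table.
        berlioz.database('drugs')
            .scan()
            .then(items => res.json(items))
            .catch(err => res.status(500).send(err.message));
    }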

Live Demo!

I guess that is enough coding. How about we just run it and see it live in action? The following commands let you run the Pharmacy example on your local workstation. It is very exciting to see an application with five services, two databases and a message queue up and running in seconds.

# Install berlioz command line tools
$ npm install berlioz -g

# Since the sample uses AWS resources, provide the profile name
$ berlioz local account --profile <the-name-of-aws-profile-to-use>

# Download samples repository
$ git clone https://github.com/berlioz-the/samples.git
$ cd samples/04.Pharmacy

# Run the application on the workstation!
$ berlioz local push-run

# Your application is running at http://127.0.0.1:40005

The Pharmacy sample should look like this when the inventory is populated and Ibuprofen is ordered for Chuck Norris.

Berlioz also deploys a Zipkin distributed tracing collector and automatically instruments calls to microservices as well as calls to AWS native services, without any additional code changes. This is a screenshot from the Zipkin portal showing the trace of a web service render request. It is clearly seen that a call is made to the inventory service, which in turn calls the AWS DynamoDB table drugs to scan the list of available drugs. A similar call is made to the dashboard service to render the call board. One can clearly see that most of the time is spent making database calls, so there may be room for improvement. The trace also shows that by making the inventory and dashboard requests in parallel, it would be possible to cut the overall processing time in half.

Live Demo in AWS!!!

You might say that it looks good on localhost, but the purpose was to run the application in the cloud, right? Sure, let’s do that! The process is not very different. If you do not already have an AWS account, now is a good time to open one.

# Assuming you already have berlioz installed; if not, see above

# Register new account with berlioz
$ berlioz signup

# Link your AWS account credentials with berlioz
$ berlioz provider create --name myaws --kind aws \
        --key ... --secret ...

# Define a deployment
$ berlioz deployment create --name prod --provider myaws \
        --region us-east-1

# Build the images and push them to AWS
# execute from the same samples/04.Pharmacy directory
$ berlioz push

# Deliver the Pharmacy application to AWS
$ berlioz run --deployment prod --cluster pharm --region us-east-1

This will start a remote delivery process; the workstation is no longer needed. It will usually take about 3–5 minutes to complete. Note that no prior configuration of the AWS account is needed: the VPC, subnets, security groups, IAM policies, etc. will be created automatically.

To monitor the deployment progress, use:

# Check deployment status
$ berlioz status

# When status says "completed"
$ berlioz endpoints --deployment prod

# Web running at http://prod-pharm-web-client-elb.amazonaws.com:80
# Zipkin running at http://34.235.136.55:32768

And a few minutes later the app is running in AWS, just as it was running on the local workstation. The only difference is the URL, and that Chuck ordered Advil this time.

Distributed tracing in AWS works the same way, though with different timings, since the services and databases now run within the same data center.

This was an overview of the integrated service mesh capabilities of Berlioz. As you can see, no additional tools had to be deployed or configured to make this happen. Whatever does need configuring, Berlioz handles automatically, letting the application be deployed with a single command, whether locally to the workstation for development efficiency or directly to the cloud for production.

To learn more about Berlioz, check out https://github.com/berlioz-the/berlioz
