Deploying Microservices

Deployment is extremely important in Microservices. Remember the “Infrastructure Automation” attribute of the Microservices architecture? This is exactly what it refers to. A slow and complicated deployment process will render the whole system ineffective and useless.

In a Microservices system, we have lots of moving parts, and if we have to test and deploy each of them manually, we are going to have a problem. The whole cycle of building, testing and deploying must be as fast and as efficient as possible. If it is not, we will find that working with a monolith was actually simpler and quicker.

CI/CD (Continuous Integration and Continuous Delivery/Deployment)

CI/CD means full automation of the integration and delivery stages. In other words, after development is complete, the software lifecycle continues with the integration and delivery stages, and we want both of them to be automated.

What exactly are integration and delivery?

To understand what they are, let's walk through the five stages that follow development.

Build

We build the code written by the developers so that the machine it will be deployed to can run it.

Unit Tests

Next, we have unit tests, which test the smallest units of the code, usually individual methods.

Integration Tests

Then we have integration tests, which test the software itself, or in our case the service itself. In other words, they exercise the complete flow of a single feature.

Staging

Here the software is deployed to an environment that simulates the production environment, where verification tests are run against it.

Production

Finally, the software goes to production.

You might find timelines that look a little different, but let's keep it simple.

The first three steps, build, unit tests and integration tests, are collectively called integration. The integration stage is where the code is built and tested, and its result is a tested piece of software that is ready to be deployed.

The next two steps, staging and production, are called delivery or deployment. This stage is where we take the software the integration stage produced and deploy it to the environments we want: first to staging and, when we are satisfied with the results, to the production servers.

These two stages, integration and delivery, define the full lifecycle of the code once its development is done.

When we talk about automating integration and delivery, we are actually talking about CI/CD.

We want to be able to perform the integration and the delivery continuously, with no interruptions and very quickly. In fact, in a well-organized Microservices environment, updates are pushed to production very frequently. That is a major difference from the traditional monolith, which might get updated only once every few months.

Why use CI/CD?

1. Faster release cycle

We can accelerate our deployment process and push updates quickly and efficiently.

2. Reliability

With CI/CD, we have reliable tests at each step of the way, and its alerting will usually let us know about any problems that are found. This is much more reliable than manual testing, which often misses problems and approves defective software.

3. Reporting

With CI/CD, we have extensive reporting in place, showing us exactly what the state of the application is: which tests passed, which failed, what the reason was, and much more. It collects a huge amount of data, and a lot of insights can be generated from it.

Pipelines

To understand CI/CD, we must first understand the concept of pipelines. Pipelines are the heart of the CI/CD process. A pipeline defines the set of actions to perform as part of CI/CD. A typical pipeline begins with building the code, then testing it and, if everything went fine, deploying it, as in the sketch below.
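As an illustration, here is a minimal sketch of such a pipeline in Azure DevOps YAML syntax (Azure DevOps is one of the engines mentioned below). The build, test and deploy scripts are hypothetical placeholders; a real pipeline would call your actual tooling.

```yaml
# azure-pipelines.yml: a minimal build/test/deploy pipeline (sketch)
trigger:
  - main                          # run on every push to the main branch

pool:
  vmImage: 'ubuntu-latest'        # the agent image the pipeline runs on

steps:
  - script: ./build.sh            # integration: build the code (hypothetical script)
    displayName: 'Build'

  - script: ./run-tests.sh        # integration: unit and integration tests (hypothetical script)
    displayName: 'Run tests'

  - script: ./deploy.sh staging   # delivery: deploy to staging (hypothetical script)
    displayName: 'Deploy to staging'
    condition: succeeded()        # deploy only if the build and tests passed
```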

Conclusion

First, make sure there is a CI/CD engine in place in the organization. It does not really matter which one, just that there is one, whether a cloud service such as Azure DevOps or an on-premise one like Jenkins.

Then design your pipeline(s) in a way that fully automates the integration and delivery stages.

Containers

You might have heard about containers before. However, before we go further, we need to discuss how traditional deployments were performed. It is actually quite simple: the code was copied and built, and the resulting artifacts were transferred to production for deployment.

In other words, the piece of software running on the developers' machines is not the same as the software running in production. As a result, problems found on the production servers sometimes could not be reproduced on the development machines.

This led to the countless jokes you might have heard:

It works on my machine ;)

This is often literally true: the software worked in the developer's environment and, for some subtle reason, did not work on the production machine. This led to a lot of wasted effort trying to find the real root cause.

Containers to the rescue

Containers are a thin packaging model. They package together the software, its dependencies and its configuration files into a unit that can be copied between machines. This package forms an atomic unit that can be executed using only the software and files contained within it, completely independent of the rest of the machine hosting it.

However, the container does use the underlying operating system of its host.

What is the difference between containers and virtual machines?

It looks like they both implement the same idea: a package of software and files that runs independently on the host machine and does not use the host's file system and software.

With virtual machines, we have the infrastructure, which is the hardware and the host operating system. On top of it runs the hypervisor, the component responsible for running the virtual machines and for making the host's resources, such as disk and network, accessible to them. On top of the hypervisor run the virtual machines, each with its own guest operating system and applications. We can have a host running Windows Server 2016 with one virtual machine running Windows Server 2012 and another running Ubuntu.

This is not the case with containers. As with virtual machines, we have the infrastructure, which here is mainly the hardware. On top of it we have the host operating system, a regular operating system, and then the container runtime. The containers managed by the runtime are extremely lightweight in comparison to virtual machines, because they share the operating system with the host, as opposed to virtual machines, where each VM can have its own operating system.

If the host runs Ubuntu, Ubuntu will also be the operating system of the containers.

Why containers?

  1. Predictability — the same package is deployed from the dev machine, to test, to production.
  2. Performance — a container starts up in seconds, versus minutes for a VM.
  3. Density — one server can run thousands of containers versus dozens of VMs, since containers are much lighter.

Why not Containers?

  1. Isolation — containers share the same OS, so the isolation between them is weaker than between VMs, and it is much easier to cross the boundary between containers than between VMs. That means that if an application contains sensitive code or data, it should probably be deployed on an isolated VM. Of course, there are ways to harden containers too.

Docker

We have talked about containers, but how do we actually implement them? Docker is the most popular container environment today. It is so popular that when people say containers, they usually mean Docker.

(Docker architecture diagram: https://docs.docker.com/engine/images/architecture.svg)

Image

A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can keep for your own private use or share with others.
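For example, with the Docker CLI you can pull a published image and start containers from it; the image itself remains unchanged (the public `nginx` image is used here just as a convenient example):

```sh
# Pull a published image from the default registry (Docker Hub)
docker pull nginx

# Start a container from the image in the background,
# mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 nginx

# List the running containers
docker ps
```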

Dockerfile

We pull a base image from a repository and then add our own resources to it. This is done using a Dockerfile, a file that contains instructions for building custom images. It can instruct the Docker daemon to copy files, run commands, change the working directory and more. There are many base images to start from, so the Dockerfile is usually very small, as in the sketch below.
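For illustration, here is a minimal sketch of a Dockerfile for a Node.js service. The base image, file names and port are assumptions for the example, not a prescription:

```dockerfile
# Start from a published base image
FROM node:18-alpine

# Change the working directory inside the image
WORKDIR /app

# Copy the dependency manifests first so this layer is cached between builds
COPY package*.json ./

# Run a command inside the image to install production dependencies
RUN npm install --production

# Copy the application source into the image
COPY . .

# Document the port the service listens on
EXPOSE 3000

# The command executed when a container starts from this image
CMD ["node", "server.js"]
```

You would then build the image with `docker build -t my-service .` and run it with `docker run -p 3000:3000 my-service` (the tag `my-service` is hypothetical).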

Docker is supported on all major operating systems and by the major cloud providers, which offer managed container registries such as Amazon ECR and Azure ACR.

Container Management

Containers are a great deployment mechanism: easy, widely supported and predictable. But what happens when there are just too many of them? When the number of containers begins to grow, we run into problems managing them.

The major problems are:

  1. Deployment — we have lots of containers to deploy, and doing it manually is tiresome and error prone.
  2. Scalability — if the containers are under heavy load, we need to be able to scale out and add more containers to distribute the load more effectively.
  3. Monitoring — what happens when a container goes down? Someone has to know about it, and with lots of containers, watching them manually is not feasible.
  4. Routing — if we have more than one instance of a container, we need a routing mechanism, similar to a load balancer, that routes each request to one of the container instances.
  5. High availability — we need to make sure a container-based system can deal with crashes and errors.

Container management tools were designed to solve exactly these problems.

Kubernetes

Kubernetes is the most popular container management platform and is currently the de-facto standard for container management. It covers all aspects of it: routing, scaling, high availability, automated deployments, configuration management and more. The sketch below shows how some of the problems listed above map to Kubernetes objects.
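Here is a minimal sketch of a Deployment (scaling and self-healing) and a Service (routing). The service name, image and ports are hypothetical placeholders:

```yaml
# A Deployment runs and supervises a set of identical container instances
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                                # scalability: run three instances
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: myregistry/orders-service:1.0   # hypothetical image
          ports:
            - containerPort: 3000
          livenessProbe:                     # monitoring/high availability:
            httpGet:                         # restart the container if this
              path: /health                  # (hypothetical) endpoint stops
              port: 3000                     # responding
---
# A Service load-balances requests across the Deployment's instances
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders-service                      # routing: match the pods above
  ports:
    - port: 80
      targetPort: 3000
```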

I will not deep-dive into Kubernetes architecture here; you can find several of my previous articles on it.

Conclusion

Automated deployment is a must for an effective Microservices architecture, and Docker and Kubernetes are the de-facto industry standards for it.
