Since my last blog on DevOps, I have been brushing up on Continuous Delivery. Continuous delivery is the next milestone in our automated deployment pipeline, but it is not as easy as I make it sound here. Once the continuous integration system is well established and has become business as usual, the organization can move on to continuous delivery to mature its DevOps practice.
In the software industry, the agile model is usually adopted first. But agile without proper processes and tools delivers an unstable product. Continuous delivery, as a pipeline within DevOps, ensures consistency of product delivery. Its primary goal is to get a high-quality product to the end user fast. Continuous delivery aims to lower the risk of a release, make it cost-effective, and make the developer's job easier.
Tighten the Testing Process:
As we saw in the previous chapter, a continuous integration system deploys the code and runs automated tests, so the CI pipeline gives developers immediate feedback about the changes they made. Any issue identified in exploratory testing should be added to the automated test pipeline, and any problem found by an automated test should be covered by a unit test. In this way, unit test preparation always follows a hypothesis-driven or test-driven development style.
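To illustrate the idea of turning an exploratory-testing finding into a permanent automated check, here is a minimal sketch in Python. The pricing function and the bug (a discount being applied to an empty order) are hypothetical examples, not from the original post:

```python
# Hypothetical example: a bug found during exploratory testing
# (a discount applied to a zero-quantity order) is captured as a
# unit test so the CI pipeline catches any future regression.

def order_total(price: float, quantity: int, discount: float = 0.0) -> float:
    """Return the order total; an empty order always totals zero."""
    if quantity <= 0:
        return 0.0
    return price * quantity * (1.0 - discount)

def test_empty_order_gets_no_discount():
    # Regression test added after the exploratory-testing finding.
    assert order_total(10.0, 0, discount=0.5) == 0.0

def test_discount_applied_to_normal_order():
    assert order_total(10.0, 2, discount=0.5) == 10.0
```

Once such a test lives in the repository, every CI run re-verifies the fix, which is exactly the feedback loop described above.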
Revisit the Development Practices:
While we update the CI system with these learnings, it is good practice to review the development approach as well. Continuous delivery moves the same code through different environments, so there should be one code base for all deploys. The application's package dependencies should work on any platform and install easily. Keep the application configuration isolated from the code. Any service the application consumes should be an attachable, detachable component. The application code itself can be stateless, storing data in a stateful database service. The application should scale automatically, vertically or horizontally, as and when required, and it should not take long to start or stop. As part of the CI setup, we should already have ensured that configuration is consistent across environments. Use external tools such as Splunk or Prometheus to record the application's timed events and logs.
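The "configuration isolated from code" principle above can be sketched in a few lines of Python: the same build artifact runs everywhere, and only the environment variables differ per deploy. The variable names `DATABASE_URL` and `LOG_LEVEL` are illustrative assumptions:

```python
import os

# Minimal sketch: one code base, many environments. The application
# reads its configuration from the environment, so the same artifact
# can be promoted from dev to test to production unchanged.
# DATABASE_URL and LOG_LEVEL are hypothetical variable names.

def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# On a developer machine (no variables set) the defaults apply;
# in production the platform injects the real values.
prod_config = load_config({"DATABASE_URL": "postgres://prod-db/app",
                           "LOG_LEVEL": "WARN"})
```

Because nothing environment-specific is compiled into the code, the build that passed the tests is exactly the build that ships.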
Process alone will not help the organization mature in DevOps. Instead, review the traditional approaches and change them where required. To flow through the continuous delivery pipeline, the application build needs to be lightweight, so the organization may redesign the application and make it deployable using new tools and technology.
A shift from a monolithic architecture to a microservice architecture makes the deployment pipeline fast and smooth.
Microservices are an architectural style that structures an application as a set of small services instead of one large code base, so the application and its code become maintainable, testable, loosely coupled, and independently deployable.
The microservice architecture enables the continuous delivery of large applications. It also allows an organization to evolve its technology stack.
Package the Code:
Code development is the core activity, yet packaging the code in the right format is equally necessary. Every technology has a packaging module to wrap up the code: Maven packages Java code, npm packages Node.js modules, and PyInstaller bundles Python code. The result is a standalone artifact that can run in any environment.
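As a concrete sketch of such packaging metadata, here is a minimal `pyproject.toml` for a Python service. The project name, version, dependency, and entry point are all placeholders for illustration:

```toml
# Hypothetical pyproject.toml: name, version, dependency, and entry
# point are placeholders; the structure follows standard Python packaging.
[project]
name = "orders-service"
version = "1.0.0"
dependencies = ["flask>=2.0"]

[project.scripts]
orders-service = "orders_service.main:run"
```

With a file like this, a standard build tool can produce a versioned, installable artifact that the pipeline treats as the single deployable unit.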
A container packages up code and all of its dependencies, so the application starts quickly and can be deployed reliably from one environment to another. It includes everything needed to run the application: code, runtime, system tools, system libraries, and settings. Containerized applications can be orchestrated with a tool such as Kubernetes, which spins up a number of container instances and manages them for scaling and fault tolerance. Kubernetes also handles a wide range of management activities that would otherwise require separate solutions or custom code, including request routing, container discovery, health checking, and rolling updates.
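A minimal Dockerfile shows what "code plus all its dependencies" looks like in practice. The file names and port here are assumptions for a hypothetical Python service:

```dockerfile
# Sketch of a container image for a hypothetical Python service.
# File names and the port number are illustrative assumptions.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

The image built from this file is the artifact that moves through the pipeline; Kubernetes then runs and scales replicas of it.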
In a DevOps pipeline, the build moves, but the environment does not. Development, staging, test, and production should therefore share a standard setup in every respect: database, web server, environment variables, even file paths. Compromising on the environments will break the deployment pipeline. The environment setup made for the application on the developer's system should carry forward all the way to production. The purpose is not to measure human skill; it is to let the application behave the same way everywhere. Being strict about environment setup is what makes the application flexible. Serverless architecture is an excellent example of running such a compliant application, and Cloud Foundry is another platform that lets developers deliver a deployable application. In a continuous pipeline, the build should never fail merely because of environmental issues.
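One common way to pin down an identical topology across environments is a compose file, where only the injected values differ per environment. This is a hedged sketch; the service names, image tags, and variables are assumptions:

```yaml
# Hypothetical docker-compose.yml: the same service topology for dev,
# staging, and production, with environment-specific values injected
# through variables rather than edited into the file.
services:
  web:
    image: orders-service:${APP_VERSION}
    environment:
      - DATABASE_URL=${DATABASE_URL}
    ports:
      - "8080:8080"
  db:
    image: postgres:16
```

Because the file itself never changes between environments, a build that works on the developer's machine has a fair chance of behaving identically in production.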
While we ensure that environments behave identically, a replica of the production environment is also needed. In the internet world, a business reaches different geographical locations to sell its product, and wherever sales reach, the service must reach too. Replicating the production environment to another site allows recovery during a disaster. Plenty of methods and software exist today to replicate a production environment: rsync is a basic way to reproduce application binaries, Oracle offers GoldenGate, SQL Server offers Always On availability groups, and so on.
Disaster recovery (DR) is a must for uninterrupted service, but it only comes into play during a disaster. The organization should also keep its environments healthy day to day by balancing the traffic, which requires running the production environment on two identical servers. This kind of well-provisioned environment helps the organization achieve a successful CD pipeline.
Automation is the technology that drives the whole DevOps life cycle. It reduces human error and speeds up work. Automating a repeated task is always a sign of process improvement. Some traditional tasks are executed daily without anyone even knowing their purpose, so understanding a task and identifying its use is the first step in automation. Automation kicks in as soon as the developer checks code into the repository. Though the deployment steps are the same, the way of performing them might differ, so it is the DevOps engineer's responsibility to review a task before automating it. Plenty of tools and technologies are available in the current market, but it is best practice to consider cost, support, the implementation process, and finally the business fit. Some organizations develop their own automation tools, and such a tool should fit into any process flow.
Inner Views of the Continuous Delivery Pipeline:
The continuous delivery pipeline automatically delivers the software product to the end user. But when we construct the pipeline, plenty of questions come up:
- Should we release to all users at once?
- Do we need to track the user experience?
- Do all product features apply to all users?
- What if my product ships a bug to all users?
- How do we ensure the automated deployment pipeline delivers a bug-free product?
- Can the automated deployment pipeline manage downtime?
The principal benefits of continuous delivery are reduced deployment risk, believable progress, and user feedback. In the deployment pipeline, the trigger fires when the developer commits code to the repository. The CI system then compiles the code and pushes the compiled code to the next phase, the unit tests. Upon successful test execution, it performs automated code analysis and makes a build. The CI tool then collects the artifacts associated with that build and passes them on to the next phase, which executes the automated regression, acceptance, and performance tests. Once all the tests have run in a test environment and been approved, the build moves into a staging environment for deployment. At the final stage, live production, we need to minimize downtime and reduce the risk of introducing new changes.
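The stage sequence above can be sketched as a CI configuration. This example uses GitLab CI syntax, but the same shape applies to any CI tool; all job scripts and file paths are placeholders, not a real project's commands:

```yaml
# Sketch of the pipeline described above, in GitLab CI syntax.
# Scripts are placeholders; the stage order mirrors the text:
# compile -> unit test -> analysis and build -> acceptance -> staging -> production.
stages: [compile, test, build, acceptance, staging, production]

compile:
  stage: compile
  script: mvn -q compile

unit-test:
  stage: test
  script: mvn -q test

analyse-and-build:
  stage: build
  script:
    - mvn -q sonar:sonar          # automated code analysis
    - mvn -q package              # produce the build artifact
  artifacts:
    paths: [target/*.jar]         # passed to the later stages

acceptance:
  stage: acceptance
  script: ./run-acceptance-tests.sh   # regression, acceptance, performance

deploy-staging:
  stage: staging
  script: ./deploy.sh staging

deploy-production:
  stage: production
  script: ./deploy.sh production
  when: manual                    # final gate before live traffic
```

Each stage runs only if the previous one succeeded, which is what makes the artifact that reaches production trustworthy.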
Blue-green deployment is an approach that keeps staging and production identical. The term blue refers to the active production environment and green to the staging environment. When the changes move to the staging (green) environment, the traffic is shuffled so that some users are routed to green. Once the changes show a positive impact, the rest of the traffic is routed to green; if any issues are identified, it is very easy to roll back. On every release, green becomes production and the blue environment becomes the staging environment for the next version. By following this approach, the organization deploys with zero downtime.
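In Kubernetes terms, the blue-to-green switch can be as small as changing one label selector on a Service. This is a hedged sketch; the names and labels are illustrative:

```yaml
# Hypothetical Kubernetes Service for blue-green switching. The Service
# routes production traffic to whichever Deployment carries the matching
# "slot" label; names and labels here are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders
    slot: blue        # change to "green" to promote the new environment
  ports:
    - port: 80
      targetPort: 8080
```

Flipping `slot: blue` to `slot: green` redirects all traffic at once, and flipping it back is the instant rollback the text describes.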
A/B testing is an approach to understanding the customer experience from a business perspective. It mostly applies to changes that are exposed directly to the end user. In this approach, newly added features are released to one segment of users, and the resulting data is studied to understand the user experience. Based on the insights collected from the data, the feature is released to more segments of users. Analyze the data and compare the conversion rates of the old and new features; if user acceptance is comparably higher and the test results succeed, the feature is released to all end users.
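The mechanics of segmenting users and comparing conversion rates can be sketched in Python. The hashing scheme and the 10% rollout figure are assumptions for illustration, not a prescription:

```python
import hashlib

# Minimal A/B testing sketch. A stable hash of the user id assigns each
# user to a variant, so the same user always sees the same experience.
# The 10% default rollout is an illustrative assumption.

def variant_for(user_id: str, rollout_percent: int = 10) -> str:
    """Deterministically bucket a user into variant A (old) or B (new)."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "B" if bucket < rollout_percent else "A"

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted; zero when there is no traffic."""
    return conversions / visitors if visitors else 0.0

# Compare the new feature (B) against the control (A); release B to
# everyone only if its conversion rate is clearly higher.
rate_a = conversion_rate(50, 1000)
rate_b = conversion_rate(80, 1000)
release_to_all = rate_b > rate_a
```

Deterministic bucketing matters here: if assignment were random per request, one user could bounce between old and new behavior, corrupting both the experience and the data.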
As we have seen from this high-level flow of continuous delivery, the whole process ensures smooth production deployments. An organization that follows this process can make more deployments in a day than others manage in a month. Many of today's top technology providers follow this process to deliver quality, reliable, hassle-free service to the end user: Netflix is the topper in streaming, and Spotify is another giant in music. You can read about how they deliver their service to the customer and how it is made possible in the Spotify technology world.