Released my first DevOps book

After six months of effort, I am happy to release my first book, written about DevOps.



While writing down the experience I have gained supporting open source projects, I thought I should share it on my blog so people can learn the process that is extensively practiced in the open source industry.

In the open source community, DevOps is the process that is followed. Though developers and contributors are free to do anything, the tools and process make sure everyone does what is right.

In any open source community, when we step in to contribute, the first step is to learn the process.

Yes, I have also learned that process, and I wanted to capture the practice as a book with real-time examples (in the form of a story), which I have now published.

I encourage you to buy and read this book. If you really like the concept, please leave your valuable review.

If you do not have the reading habit, no worries! You can watch this book as a video with a presentation. But you will need to wait for each chapter, as producing it is an additional effort that I put in on weekends.

Here is the first chapter in the form of a video.

Continuous Monitoring

Monitoring is one of the primary means by which service owners keep track of a system’s health and availability. As such, the monitoring strategy should be constructed thoughtfully. Monitoring applications and systems is a usual practice that every organization follows. However, when an organization has matured enough to practice DevOps, a general health check of an application or system is not enough. So the continuous monitoring approach in DevOps encourages us to do full-stack monitoring.

In traditional monitoring practice, monitoring parameters are set up in a reactive manner. In some cases, monitoring is configured without purpose, and the effort that went into developing the system is not considered while monitoring it. However, “monitoring as a discipline” means ensuring the network, servers, applications, and so on are all stable, healthy, and running at peak efficiency. It means not just being able to tell that a system has crashed, but more importantly being able to predict a possible crash and intervening to avoid it.

Things to monitor:

In the DevOps world, watching everything is good practice. “Everything” includes:

  • Infrastructure monitoring,
  • Servers,
  • Databases,
  • Middleware,
  • Web servers,
  • App servers,
  • Storage,
  • Network connectivity,
  • Application monitoring,
  • Log monitoring,
  • API monitoring,
  • File process monitoring,
  • Batch process monitoring,
  • Transaction monitoring,
  • SQL transactions,
  • Code visibility monitoring,
  • CI/CD pipeline monitoring,
  • End-to-end application monitoring,
  • Gathering stats from the system,
  • Internal application monitoring,
  • External application monitoring,
  • Raising alerts before an adverse event occurs.

System Monitoring:

The system represents the server where the applications run. The servers may reside on-premises or in the cloud, but our monitoring solution should give us visibility into our infrastructure, so that we get a clear picture of the infrastructure and network on which our application runs.

We usually set up the monitoring parameters for servers around CPU usage, disk usage, memory usage, connectivity, port status, and other OS-related services. If an adverse event pushes a metric past its threshold, we get an alert to act upon. This kind of alerting system tells us to fix something rather than giving us details of the root cause, and the warnings are raised on thresholds estimated with a reactive approach. Reactive monitoring is not always the right system monitoring solution. In the modern DevOps world, monitoring can be set up to collect system stats and metrics and to watch the event log, syslog, performance data, application logs, and integrated systems. When we gather and watch all these system-related components, it becomes possible to understand how infrastructure metrics correlate to business transaction performance.
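Collecting these system stats is usually the job of an agent (such as the ones shipped with monitoring tools); purely as a minimal sketch, a few of them can be gathered with the Python standard library alone (the metric names here are illustrative, not from any specific tool):

```python
import os
import shutil

def collect_system_metrics(path="/"):
    """Gather a few basic host metrics using only the standard library."""
    disk = shutil.disk_usage(path)          # total/used/free bytes for the mount
    return {
        "disk_used_pct": round(disk.used / disk.total * 100, 1),
        "load_avg_1m": os.getloadavg()[0],  # 1-minute CPU load average (Unix only)
        "cpu_count": os.cpu_count(),
    }

print(collect_system_metrics())
```

A real setup would ship these numbers to a time-series store on a schedule instead of printing them.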

Metrics that need to be collected from the system:

There are important metrics that we need to obtain from the server to get a clue about how it performs. In general, the following metrics help us check the health of our servers.

  • Requests per second: how many requests the target server received and processed.
  • Error rate: measures the failures in the application and highlights the failure points.
  • Response time: the average response time lets us gauge the speed of the target web application.
  • Peak response time: measures the request that took the longest to respond.
  • Uptime: how many hours the server has been up and running.
  • CPU usage: the amount of CPU time used by the applications running on the server.
  • Memory usage: the amount of memory used by the application.
  • Thread count: applications usually create threads to process requests, so it is essential to count the number of threads per process, as the system limits it.
  • File I/O operations: in general, there is a per-process limit on I/O operations for file handling.
  • Disk usage: the amount of disk consumed on the server by any running service or application.
  • Network bandwidth: which service or application consumes the most network bandwidth.
  • Log file size: web server logs may grow suddenly in size due to an underlying application malfunction.
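Once metrics like these are collected, the simplest alerting scheme compares each one against a threshold. A minimal sketch (the threshold values and metric names here are hypothetical, chosen for illustration):

```python
# Hypothetical thresholds; real values depend on the service's baseline.
THRESHOLDS = {"cpu_pct": 85.0, "memory_pct": 90.0, "disk_pct": 80.0}

def evaluate(metrics, thresholds=THRESHOLDS):
    """Return an alert message for every metric that exceeds its threshold."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds threshold {limit}")
    return alerts

print(evaluate({"cpu_pct": 92.5, "memory_pct": 40.0, "disk_pct": 81.0}))
```

As the chapter argues, this is exactly the reactive style that continuous monitoring tries to go beyond: thresholds tell you something is wrong, not why.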

Application Monitoring:

Application monitoring often happens only in production, because development occurs without a monitoring plan. The production team then sets up monitoring based on the logs and stats observed from application behavior. However, in this setup, the production team lacks visibility inside the application, so the monitoring scope is limited to adverse events raised by the services. In the modern technology world, application monitoring starts from the development stage, so monitoring parameters can be set at the code level to get complete visibility. Monitoring tools such as AppDynamics, Datadog, and Prometheus give us more insight into applications through agents. An agent embedded in the code collects data and metrics at each stage, from web to application to database. With these data and parameters fed into an underlying system, we can see the flow of transactions.
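A hugely simplified stand-in for such an agent is a timing wrapper around request handlers. This sketch is not how AppDynamics, Datadog, or Prometheus implement their agents; the names and structure are illustrative only:

```python
import time
import functools

CALL_STATS = {}  # function name -> list of call durations in seconds

def instrument(func):
    """Minimal stand-in for an APM agent: record how long each call takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            CALL_STATS.setdefault(func.__name__, []).append(
                time.perf_counter() - start)
    return wrapper

@instrument
def handle_request():
    time.sleep(0.01)  # simulate some work
    return "ok"

handle_request()
print(CALL_STATS["handle_request"])
```

A real agent would also capture SQL calls, exceptions, and distributed trace context, and export the data to a backend instead of a dictionary.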

Metrics that need to be collected from the application:

The following are the types of data that can be stored and analyzed by application monitoring:

  • HTTP request rates, response times, and success rates.
  • Dependency (HTTP & SQL) call rates, response times, and success rates.
  • Exception traces from both server and client.
  • Diagnostic log traces.
  • Page view counts, user and session counts, browser load times, and exceptions.
  • Response times relative to success rates.
  • Server performance counters.
  • Custom client and server telemetry.
  • Segmentation by client location, browser version, OS version, server instance, and custom dimensions.
  • Availability test results.

Application Performance Monitoring:

Monitoring service availability is excellent, but response latency is also essential when we plan the next level of monitoring. When volume increases in the system, performance degradation is always possible, so we need to visualize the performance of the system with the collected stats and metrics. If there is any deviation in production, we need to identify the bottleneck to improve the system. Latency can be introduced at any level: code, operations, web/front-end, database, or network. In this case, end-to-end monitoring of application performance is best practice.
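The average and peak response times mentioned earlier can be derived from collected latency samples; a small sketch (the sample values are made up, and real tools would compute percentiles over much larger windows):

```python
import statistics

def latency_summary(samples_ms):
    """Summarize response times: average, peak, and approximate 95th percentile."""
    ordered = sorted(samples_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg_ms": statistics.mean(ordered),
        "peak_ms": ordered[-1],     # the single slowest response
        "p95_ms": ordered[p95_index],
    }

samples = [120, 95, 110, 102, 480, 99, 105, 101, 98, 103]
print(latency_summary(samples))
```

Note how one slow outlier (480 ms) barely moves the average but dominates the peak, which is why both numbers are worth watching.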

Network Monitoring:

Network monitoring is usually out of scope for the operational support team. Of course, network monitoring is part of network management, but for a full monitoring stack, it is good practice to bring it under one roof. From an issue-analysis perspective, the network is the last layer to check when no relevant information is found in other traces. So we need an idea of how our network is managed and monitored.

Network monitoring is done using both software and hardware. At the bare minimum, checks usually happen using ping, SNMP, ICMP, and logs. To get complete visibility into network management, it is good practice to:

  • Execute a script using an agent on the device to collect detailed information.
  • Track IP SLA between the devices in the network infrastructure.
  • Analyze bandwidth utilization and traffic using NetFlow.
  • Collect packet dumps using a network tap.
  • Check device performance.
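The bare-minimum reachability checks mentioned above can be sketched with a plain TCP connect. The example below starts its own listener so it is self-contained; a real probe would target actual hosts and ports:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Bare-minimum reachability probe: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a listener we start ourselves so the example is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))    # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
print(port_is_open("127.0.0.1", port))   # the socket is listening, so True
server.close()
```

This is the TCP-level equivalent of a ping; SNMP and NetFlow provide much richer data than a simple open/closed answer.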

Database Monitoring:

For any stateful application, the database is the core of transaction processing. Database availability and performance are vital for any operation. From an operations point of view, we want to ensure connectivity between the application and the database; to some extent, we also watch query response and throughput. Our database admins also monitor the behavior of the database, as it is their primary scope, so they set monitoring parameters on database management jobs, backups, performance, replication, storage, data files, and server health.
But from a continuous monitoring perspective, we should also get metrics about:
  • resource usage,
  • disk I/O,
  • tablespace,
  • cache and buffer pool usage,
  • negative error codes,
  • queries with delayed responses,
  • the number of threads established by the application and other services,
  • idle threads and running threads,
  • connection errors caused by server errors,
  • failed connections,
  • table growth,
  • index efficiency,
  • partition growth,
  • the state of stored procedures and functions,
  • database trigger status.

I believe we can collect and visualize even more metrics by thinking from a support perspective.
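Two of the metrics above, table growth and slow-query latency, can be sketched against SQLite purely for illustration; a real setup would query the production engine's own statistics views (or use the vendor's monitoring tooling) rather than count rows by hand:

```python
import sqlite3
import time

def table_growth(conn, table):
    """One metric a support engineer can track: row count per table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def timed_query(conn, sql):
    """Time a query so we can flag responses that exceed a latency budget."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return rows, elapsed_ms

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i,) for i in range(500)])
print(table_growth(conn, "orders"))                      # 500 rows inserted
rows, ms = timed_query(conn, "SELECT SUM(amount) FROM orders")
print(rows[0][0], round(ms, 3))
```

Sampling these numbers over time is what turns them into the growth and delay trends the chapter describes.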

CI/CD Monitoring:

When the DevOps team sets up a CI/CD pipeline model, there is a significant increase in the number of changes reaching production.
In the pipeline, everything moves in an automated manner: CI tools trigger when code is committed to the repository, and the CI tool takes the further steps to package, unit test, and perform other test cases. In this pipeline, there are chances of adverse events such as:
  • the trigger did not occur when the code commit happened,
  • something aborted while packaging the code,
  • the automated unit test did not trigger,
  • a unit test script malfunctioned,
  • a security scan failed,
  • code integration did not pass before the integration test ran,
  • deployment failed due to technical challenges.
Some more events can occur in the continuous delivery pipeline as well. So monitoring belongs at the beginning of the release during code creation, at the point of integration in continuous integration environments, and right before a production release.
When we monitor the flow of code and visualize the packaging, testing, and deployment process inside the CI/CD pipeline, we gain confidence in our full infrastructure and application stack.
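Turning adverse pipeline events like those listed above into alerts can be sketched as follows; the stage names and statuses are hypothetical and do not correspond to any particular CI server's API:

```python
# Hypothetical stage results, as a CI server might report them for one run.
PIPELINE_RUN = [
    ("checkout", "success"),
    ("package", "success"),
    ("unit-test", "success"),
    ("security-scan", "failure"),
    ("deploy", "skipped"),
]

def pipeline_alerts(run):
    """Turn every non-successful stage into an alert message for the team."""
    return [f"Stage '{name}' ended with status '{status}'"
            for name, status in run if status != "success"]

for alert in pipeline_alerts(PIPELINE_RUN):
    print(alert)
```

In practice, most CI servers expose this information through webhooks or a REST API, which a monitoring system would poll or subscribe to.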


Automated continuous monitoring will keep our continuous delivery pipelines flowing smoothly and efficiently. Monitoring the full stack should happen automatically, but alerts should be raised only when human intervention is needed.

Continuous Delivery in DevOps


Since my last blog post on DevOps, I have been brushing up my knowledge of continuous delivery. Continuous delivery is the next milestone in our automated deployment pipeline, but it is not as easy as I make it sound here. Once the continuous integration system is well established and has become business as usual, the organization can think of continuous delivery to mature in DevOps.

In the software industry, the agile model is followed in the initial stage. But agile without proper process and tools delivers an unstable product. Continuous delivery as a pipeline in DevOps ensures consistency of product delivery. The primary goal of continuous delivery is to offer a high-quality product to the end user quickly. Continuous delivery aims to lower the risk of a release, make it cost-effective, and make the developer’s job easier.

Tighten the testing process:

As we saw in the previous chapter, a continuous integration system deploys the code and runs more automated tests, so the CI pipeline gives developers immediate feedback about the changes they made. Issues identified in exploratory testing are added to the automated test pipeline, and problems found in automated tests are added to the unit tests. So unit test case preparation always follows hypothesis- or test-driven development.

Revisit the Development Practice:

While we update the CI system with these learnings, it is good practice to review the development approach. Continuous delivery moves the code through different environments, so there should be one codebase for all deploys. The package dependencies for the application should work on any platform and be easy to install. Keep the application configuration isolated from the code. The services the application consumes should be attachable and detachable components. The application code should be stateless, with data stored in a stateful database service. The application should be auto-scalable, vertically or horizontally, as and when required, but starting or stopping the application should not take much time. As part of the CI setup, we should have ensured similar configuration across environments. Use external tools like Splunk or Prometheus to record timed events and application logs.


Process alone will not help the organization mature in DevOps. Instead, review the traditional approaches and change them if required. To flow through the continuous delivery pipeline, the application build needs to be lightweight, so the organization can redesign the application and make it deployable using new tools and technology.

Microservices:

A shift from a monolithic architecture to a microservice architecture makes the deployment pipeline fast and smooth.

Microservices are an architectural style that structures an application as a set of small services instead of one large code set, so the application and code become maintainable, testable, loosely coupled, and independently deployable.

The microservice architecture enables the continuous delivery of large applications. It also allows an organization to evolve its technology stack.

Package the Code:

Code development is core, yet packaging the code in the right format is necessary. Every technology has a packaging module to wrap up the code: Maven packages Java code, npm handles Node.js, and PyInstaller packages Python code. This way the code becomes a standalone executable in any environment.


A container is software that packages up code and all its dependencies, so the application runs reliably when deployed from one environment to another. It includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. A containerized application can be orchestrated using a tool called Kubernetes. Kubernetes can spin up a number of container instances and manage them for scaling and fault tolerance. It also handles a wide range of management activities that would otherwise require separate solutions or custom code, including request routing, container discovery, health checking, and rolling updates.
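As a hedged illustration of the orchestration described above, a minimal Kubernetes Deployment manifest might look like the following; the application name, image tag, and health endpoint are hypothetical placeholders, not a recipe for any specific service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3               # Kubernetes keeps three container instances running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: demo-app:1.0.0      # the packaged container image
        ports:
        - containerPort: 8080
        readinessProbe:            # health checking used during rolling updates
          httpGet:
            path: /healthz
            port: 8080
```

Changing the `image` tag and re-applying this manifest is what triggers the rolling update Kubernetes performs for you.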


In the DevOps pipeline, the build can move, but not the environment. The development, staging, test, and production environments should have a standard setup. The environments include all facets such as the database, web server, environment variables, and even paths. Compromising on the environments will break the deployment pipeline. The environment setup made for the application on the developer’s system should carry forward till production. This is not meant to measure human skill; rather, it is so the application behaves consistently everywhere. Being strict about environment setup is what makes the application flexible. Serverless architecture is an excellent example of running such a compliant application, and Cloud Foundry is another example of letting the developer deliver a deployable application. In the continuous pipeline, the build should not fail just because of environmental issues.

Disaster Recovery:

While we ensure that environments behave the same, a replica of the production environment is also needed. In the internet world, businesses reach different geographical locations to sell their products, and wherever sales reach, the service should reach too. Hence the production environment is replicated to another site so it can be recovered during a disaster. Plenty of methods and software exist today to replicate a production environment: rsync is a basic method to reproduce application binaries, GoldenGate is available for Oracle, Always On is available for SQL Server, and so on.

Load Balancing:

DR is a must for uninterrupted service, but it is only exercised during a disaster. The organization should also ensure resilient environments by balancing the traffic; in that case, running the production environment on two identical servers is needed. This kind of environment helps the organization achieve a successful CD pipeline.


Automation is the technology that drives the whole DevOps life cycle. It helps reduce human error and speeds up work. Automating a repeated task is always a sign of process improvement. Some traditional tasks are executed daily without anyone even knowing their purpose, so understanding a task and identifying its use is the first step in automation. Automation comes into the picture once the developer checks the code into the repository. Though the deployment steps are the same, the way of doing them might differ, so it is the DevOps engineer’s responsibility to review a task before automating it. Plenty of tools and technologies are available in the current market, but it is best practice to consider cost, support, the implementation process, and finally business fit. Some organizations develop their own automation tools, and in any case the automation tool should fit into the process flow.

Inner views of the Continuous Delivery Pipeline:

The continuous delivery pipeline automatically delivers the software product to the end user. But when we construct the pipeline, there are lots of questions:

  • Should we release to all users at once?
  • Do we need to track the user experience?
  • Do all product features apply to all users?
  • What if a product bug affects all users?
  • How do we ensure the automated deployment pipeline delivers a bug-free product?
  • Can the automated deployment pipeline manage downtime?

The principal benefits of continuous delivery are reduced deployment risk, visible progress, and user feedback. In the deployment pipeline, the trigger starts when the developer commits code to the repository. When the CI system self-triggers, it compiles the code and pushes the compiled code to the next phase, the unit tests. Upon successful test execution, it performs automated code analysis and makes a build. Then the CI tool collects the artifacts associated with that build and passes them on to the next phase. The next course of action in the CD pipeline is to execute the automated regression, acceptance, and performance tests. Once all the tests have run in a test environment and been approved, the build moves into a staging environment for deployment. At the final stage, in live production, we need to reduce downtime and also reduce the risk of introducing new changes.

Blue-Green Deployment:

Blue-green deployment is an approach that keeps staging and production identical. The term blue refers to the active production environment, and green refers to the staging environment. When changes move to the staging (green) environment, traffic is shuffled to route some users to green. Once the changes show a positive impact, the rest of the traffic is routed to green. If any issues are identified, it is very easy to roll back. On every release, green becomes production and the blue environment becomes the staging environment for the next version. By following this approach, the organization can deploy with zero downtime.
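The traffic shuffle described above can be sketched as a routing-weight toggle. In practice this happens at the load balancer or DNS layer; the router state below is a hypothetical in-memory stand-in:

```python
# Hypothetical router state: what fraction of traffic each environment gets.
routing = {"blue": 1.0, "green": 0.0}   # blue is live, green holds the release

def shift_traffic(routing, green_share):
    """Route `green_share` of users to the new (green) environment."""
    routing["green"] = green_share
    routing["blue"] = round(1.0 - green_share, 4)
    return routing

shift_traffic(routing, 0.1)    # canary step: 10% of users see the new release
print(routing)
shift_traffic(routing, 1.0)    # release looks good: cut everyone over
print(routing)                 # green is now production; blue becomes staging
```

Rolling back is the same operation in reverse: set the green share back to zero.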

A/B Testing:

A/B testing is an approach to understanding the customer experience from a business perspective. A/B testing mostly applies to changes in the system that are exposed directly to the end user. In this approach, newly added features are released to one segment of users, and then the data is studied to understand the users’ experience. Based on the insights collected from the data, the feature is released to more segments of users. Analyze the data and compare the conversion rates of the old and new features. If user acceptance is comparably higher and the test result succeeds, the feature is released to all end users.
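Comparing conversion rates between the two variants, as described above, might look like this sketch; the experiment numbers are made up, and a real analysis would also test for statistical significance before deciding:

```python
def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the goal action."""
    return conversions / visitors if visitors else 0.0

# Hypothetical experiment data: A is the old feature, B is the new one.
variant_a = {"visitors": 2000, "conversions": 120}
variant_b = {"visitors": 2000, "conversions": 156}

rate_a = conversion_rate(variant_a["conversions"], variant_a["visitors"])
rate_b = conversion_rate(variant_b["conversions"], variant_b["visitors"])
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")   # A: 6.0%  B: 7.8%
if rate_b > rate_a:
    print("Release the new feature to more segments")
```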


Having seen the high-level flow of continuous delivery, the whole process ensures smooth production deployment. An organization that follows this process does more deployments in a day than another does in a month. In the current world, many top technology providers follow this process to deliver quality, reliable, hassle-free service to the end user. Netflix is a topper in streaming services, and Spotify is another giant in music services. You can read about how they deliver their service to customers and how it is made possible in the Spotify technology world.

Continuous Testing In DevOps

Testing is very important in the release segment, as it measures the level of product quality delivered to customers and end users.

Testing is a concept applied at every level of product manufacturing and development, from large-scale industries to small businesses. We have seen a “Test OK” symbol on products delivered by big brands and small companies alike, as assurance that the end user can use the product for its intended purpose. In fact, certain companies include a big pamphlet about “How to use it?”, “Who should use it?”, “What level of testing was performed on this particular product?”, “Does the product have any hazardous limits?”, and a final rating of product quality.

The final rating/QA result measures the product’s total quality, which controls whether the product is delivered to the market or rejected.


Testing in Software Development:

Quality analysis is not specific to manufacturing industries, nor is it an approach that excludes software development companies. Irrespective of the company, module, product, practice, or methodology, it applies to all deliverables that reach customers and end users, since it is a standard (ISO) that every manufacturer or developer should follow. Hence it is also one of the phases in the SDLC.

Usually, in the SDLC, QA comes into the picture after development is completed. But there is much confusion and debate about whether the QA team needs to be part of project workgroup discussions or the product design phase. It is still a debate in many companies; however, some companies make a cross-training decision and have the QA team be part of project discussions.

In the waterfall methodology, the QA team usually prepares test cases either during code development (if QA is part of project discussions) or after development. But it is all about how effective the prepared test case scenarios are and how many days are scheduled for the QA team to test the bulk of code deployed in the testing environment. Due to the large code delivery, there are many challenges: getting the environment ready with test data, system/functional integration issues while performing integration testing, incomplete regression test case scenarios due to time constraints, and so on; yet out of all these, the final QA result is what receives the least attention.

Types of Testing:

In the software testing world, many test approaches are followed to check the quality of code.

  • Unit Testing: testing done by the developer to validate the changes from their own point of view.
  • Regression Testing: irrespective of the code change, a regression test is executed on the whole application as a kind of sanity check; that is, testing the changes on top of the base code or base functionality to ensure no defects are introduced into the base product. Regression testing applies when upgrades and fixes are introduced.
  • Integration Testing: the changed code is combined with other software modules and tested as a group.
  • Functional Testing: the code change itself is not considered; rather, testing is done against the functionality/business/client requirements.
  • Load Testing: the code change is not considered; rather, the system is pumped with more traffic/load to see how it performs. With this test result, TPS (transactions per second) is measured.
  • Exploratory Testing: the test happens against out-of-the-box test case scenarios.
  • Acceptance Testing: the test confirms whether the requirements are met. Mostly this kind of test is performed by the client.

The testing approach in DevOps:

Though these different types of testing existed before the DevOps process became industrial practice, continuous testing is the approach that arranges them in the proper order and ensures that the testing actually happens.


Continuous testing is one approach to practicing the DevOps process, as it is intended to give fast feedback on the impact of changes. It is not just about automating the tests; rather, it emphasizes the concept of test early, test faster, test often, and then automate the tests.

As discussed earlier in this article, the combined team of developers and testers, called software engineers in the DevOps world, can prepare the test case scenarios as code in parallel, for the unit, regression, functional, UI, and load tests, in order to test early, test often, and automate.


Automated Test:

Automated testing is not new in the software industry, but what is to be automated is important: automation testing should be done at the right time and in the right way to ensure a high-quality release.

For instance, suppose a password policy change requirement is raised and the development is completed. As part of the CI pipeline, the code has been built, and the next phase is to perform the tests.

Continuous Tests in CI Pipeline:

Unit test: the automated unit tests can be executed in isolation by passing different test cases to the units or modules where the changes were made.

For example, if the “password_policy.js” script file changed, it can be tested with a simple script wrapping this .js module, or with test automation tools like Selenium or TestComplete, passing different passwords as input to validate the changes. There are language-specific frameworks such as JUnit, NUnit, and pytest to automate this.

Test-Driven Development: since the unit test is specific to a particular module, the developer should prepare test cases written as code; that code can be executed at every stage during development, and this routine repeated until the unit test cases pass.
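A unit test suite for the password policy example might look like this in Python. The text uses a .js module; this is an illustrative Python analogue, and the specific policy rules (length, digit, special character) are assumptions, not the rules from any real requirement:

```python
import re
import unittest

def password_is_valid(password):
    """Hypothetical policy: at least 8 chars, one digit, one special character."""
    return (len(password) >= 8
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

class PasswordPolicyTest(unittest.TestCase):
    def test_accepts_compliant_password(self):
        self.assertTrue(password_is_valid("s3cure!pass"))

    def test_rejects_short_password(self):
        self.assertFalse(password_is_valid("a1!b2"))

    def test_rejects_password_without_digit(self):
        self.assertFalse(password_is_valid("longpassword!"))

# Run the suite without exiting, so this works inside a larger script too.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PasswordPolicyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a TDD workflow these tests would exist before `password_is_valid` did, and the function is written to make them pass.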

Regression Test: if the unit tests pass, the build can move to the regression test. The automated regression suite should always be revisited and updated with new test cases. The test cases can be applied using scripts and tools against the complete base module. The tests should cover almost all the necessary base functionality/code to ensure the changes have not impacted the existing functionality.

In our example, the regression should be executed on the login page, the password reset page, the password encryption method, forced password resets, and the registration page, and it should verify the error patterns in the application/system logs.

Functional Test: the next phase is the functional test, where newly created test cases, prepared against the BRD/CR (business requirements/change request), are executed.

In our example, functional test cases can check whether the change forces all existing customers to reset their passwords per the new policy, what happens to existing customers whose passwords already comply with the new policy, and so on.

Mostly, functional tests are executed on the UI, so they can be automated using tools like Selenium or TestComplete.

Integration Test: in the integration test scenario, the pages/modules/services that went through changes should integrate with co-services and modules without any impact.

In our example, if the new password policy allows special characters, it should not cause the underlying system to malfunction. Nowadays, most system integration happens through APIs, so these test case scenarios can be automated using tools like SoapUI, JMeter, or REST Assured.
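An API-level integration check can be sketched by validating the response body of a (hypothetical) password service; the endpoint, field names, and payload below are all invented for illustration, and a real suite would exercise the live API with SoapUI, JMeter, or REST Assured:

```python
import json

def validate_password_change_response(payload):
    """Hypothetical integration check: the API must return a well-formed reply."""
    data = json.loads(payload)
    assert data.get("status") == "success", "unexpected status"
    assert "user_id" in data, "missing user_id"
    assert data.get("policy_version") == 2, "wrong policy version applied"
    return data

# Simulated response body from the (hypothetical) password service.
response_body = '{"status": "success", "user_id": 42, "policy_version": 2}'
print(validate_password_change_response(response_body))
```

Checks like this catch the case where the service answers 200 OK but with a payload that downstream modules cannot consume.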

Load Test: the load test is sometimes not added to the CI pipeline due to resource limitations. However, the test should happen at least once before the code moves to production.

In our example, the test case could be: does password_policy.js cause any latency issue and make customers wait a long time for the login page? Performance tests can be automated using JMeter and similar tools.

Benefits of the Automated test process:

  • With automated unit, regression, functional, integration, and load tests as part of the CI pipeline, timely and early feedback is sent to the developer.
  • Since all these tests are part of CI, if an issue is identified, corrective action can be taken the same day.
  • Reduced manual effort.
  • An automated process avoids manual error and ensures reliability.

Continuous Test in CD pipeline:

The continuous testing approach is not just about automating test cases; rather, it emphasizes finding problems as soon as possible, shrinking the mean time (MTTR) from code check-in to release in the deployment pipeline. Hence the automated tests are added as part of the CI pipeline. However, there are scenarios where the tester knows automation cannot be done, so those test cases are tested manually in the continuous delivery (CD) pipeline.

For example, in the banking domain, the reconciliation process is very important to check that the ledger balance equals the sum of the customers’ balances. If it does not, manual cross-verification should be done by reviewing a particular account’s activity to see what went wrong. Sometimes, the complete account details from the test environment can be sent as a report to the finance team, who can do a fourth-eye verification.

Exploratory testing:


The scenario explained above can be conducted as part of exploratory testing. In most product-based companies, before the product is deployed to production, the company engages external or internal testers who verify the application/product in different dimensions: thinking critically, observing the application carefully, creating inventive test case scenarios, and designing the tests systematically with diverse ideas. Exploratory testing is executed manually to perform ad-hoc tests quickly, capturing the test cases along the way.

Learnings in the CD pipeline: the issues/bugs identified, or the test cases captured, during exploratory testing can be added to the automated CI test pipeline for continuous testing.

Acceptance Testing:

As part of the continuous delivery pipeline, the code changes can be deployed to a UAT (User Acceptance Testing) environment, where the client has access to perform a set of use-case tests themselves. This test is considered the sign-off to deploy the application into production.

In some companies, this test takes the form of beta testing. Generally, service-based companies require a UAT sign-off from the client, while product-based companies follow beta testing.

If you are a developer or tester, I encourage you to attend this course to get into the continuous testing practice.

Continuous Integration in DevOps

We saw in the previous article that, at a high level, continuous integration is the process of automating the build and deployment and publishing the code to the repository tool in order to continuously validate the code.

In the open source development world, continuous integration is very well implemented through the GitHub repository together with team collaboration tools such as Slack, community forums, chat, and so on.

In this scenario, developers start the project in the IDEs on their own machines (most IDEs integrate with GitHub repositories). Once the code has been developed, clicking the build button in the IDE deploys the code to a branch in the repository. The audience watching that particular project can then clone the branch into their own environments and exercise it: a core functionality test by person X, a security scan by person Y, and a code review by person Z, who originally initiated the open source project. Either the developer performs unit testing by hand, or a watcher runs the tests and sends the test report back to the developer along with feedback from the community. Each project also follows versioning: some users may be happy with version xxxx.0.1, which they use in their production zone, while others might expect additional features to be released in the next version, xxxx.2.0.

The outcome of this collaborative working culture is that developers are encouraged to share their code in a common repository, integrate it with others' code, and share their unit test scenarios so peers can run them against the same code and feed back the test results. As a result, it delivers quality software to the end users.

When the same approach is followed in an in-house development environment, continuous integration paves the way to the next step: continuous delivery.

Implementing continuous integration:

Continuous integration relies on three foundational elements:

  1. Version control system
  2. Continuous integration system
  3. Automated build process

Version Control System:

In earlier days, version control was handled by a team called the VCU (version control unit), which was responsible for maintaining the code in folders structured as Branch and Trunk according to the environment: Test, UAT, and Production. Such manual code maintenance was accompanied by a paper-based process: the developer had to obtain approval from the respective team lead or manager on a code-release form, which was submitted to the VCU team, who collected the code from the developer and placed it in the respective environment folder. If a developer made even a small change on the fly in any one environment, that change had to be submitted manually to the VCU team.

Because of this entirely manual process, there was no concept of code sharing and packaging. Also, if person A changed login.jsp while person B was already changing it, it was entirely the VCU team's responsibility to ensure the code was integrated during deployment.

Centralized version control system:

To automate these processes and maintain the code properly, organizations started using centralized version control tools such as TFS (Team Foundation Server) and SVN (Subversion). In a centralized version control system, all the code is maintained in one place. These tools let the developer check out code to make changes and check it back into the branch to update it. The pull and push of code are kept in sync with the central server, so if another developer checks out to their local machine, they are notified that a particular file has been changed by person A. These tools offer a mature user interface and a single repository to maintain the code in common, and they enable the SCM (software configuration management) team to enforce a proper access control policy, which is also useful for audit purposes.

This kind of centralized version control system is particularly well suited to a large code base, and it is widely used in organizations that follow the waterfall methodology.

Distributed version control system:

The distributed version control system came into practice when organizations changed their methodology from waterfall to agile. With the agile process in place, the large code base is divided into smaller code bases, so the developers of each team (front-end, back-end, and database) can clone the common code into their development environments and start working on it themselves.

GitHub is currently the leading distributed open source repository service. It lets a developer fork the code base from another developer's public repo, and it has a solid access control policy: the repo owner fully controls who can commit to the master branch. Since the format is distributed, branching the code is very easy.

Currently, Git is the most popular repository tool in the open source world: almost all IDEs integrate properly with Git, and CI tools such as VSTS and Jenkins use token-based methods to access it. The processes of cloning, pushing commits to the master branch, viewing the history and stories of repo changes, and Git hooks are all associated with this kind of distributed tool.
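As a sketch, the day-to-day Git commands behind this workflow look like the following. The repository here is a throwaway local one; against GitHub you would start by running `git clone` on a fork instead of `git init`:

```shell
set -e
# Create a scratch repository to demonstrate the basic cycle.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email "dev@example.com"   # identity for this demo repo only
git config user.name  "Demo Dev"

# Edit, stage, and commit -- the unit that a watcher or reviewer pulls and tests.
echo "hello" > README.md
git add README.md
git commit -q -m "Add README"

# Inspect history; with a remote you would now `git push origin master`.
commits="$(git rev-list --count HEAD)"
git log --oneline
```

The same clone/commit/push cycle is what the CI system watches for when it triggers a build.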

Git automation:

Git has a special feature called hooks, which can perform client-side verification on files before the code is committed to the common repository. In other words, we can add a set of shell scripts on the client side to perform pre-commit verification: scanning code and comments for sensitive information, auto-fixing code alignment, rearranging configuration files, and removing unnecessary spaces. With server-side hooks, we can introduce a code reviewer just before the code is committed, and auto-merge based on certain commit policies.
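A minimal pre-commit verification might look like the sketch below, saved as `.git/hooks/pre-commit` and made executable. The secret-matching pattern is illustrative only, not a complete scanner:

```shell
#!/bin/sh
# Sketch of a client-side pre-commit hook.

# Succeed (exit 0) when the given text looks like it embeds a credential.
contains_secret() {
  printf '%s\n' "$1" | grep -qiE '(password|secret|api[_-]?key)[[:space:]]*='
}

# In the real hook, the input is the staged diff; guard the call so this
# sketch also runs outside a repository.
if git rev-parse --git-dir >/dev/null 2>&1; then
  if contains_secret "$(git diff --cached)"; then
    echo "Commit rejected: possible credentials in staged changes." >&2
    exit 1
  fi
fi
```

Because the hook runs before the commit is recorded, the sensitive value never reaches the shared repository.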

Continuous integration system:

The DevOps process helps the organization succeed in delivering a quality product to the customer as fast as possible. CI (continuous integration) is the key approach that lets developers work hassle-free without worrying about development environment design, platform certainty, release note preparation, package management, unit test integration, or test report preparation. The CI system takes care of every step from code build to code deployment, and it notifies the whole group about the result of the most recently checked-in code.

Automated Build process:

The main focus of the continuous integration system is to let developers concentrate on developing the code and new ideas, while the CI system establishes a pipeline that will:

  1. Get the source code from the repositories.
  2. Build the solution to compile the code.
  3. Pack the binaries for deployment.
  4. Deploy the project code to the CI server.
  5. Execute the tests as per the test case design.
  6. Publish the test/build report to the members of the team.

Continuous integration tool behavior:

There are many CI tools currently available in the market, of which VSTS (Visual Studio Team Services), JetBrains TeamCity, and Jenkins are the most popular and widely used.

Most CI systems are designed around an agent-and-queue concept. When a schedule fires, or a trigger occurs on a source code commit in the repository tool, the CI system initiates the build definition with the help of queue management and dispatches the build configuration to an agent server for the project-specific build to be executed. Every CI system also has plugins for Maven, Gradle, Ant, MSBuild, JUnit, Pylint, and so on, for automated builds, unit tests, functional tests, and regression tests.

Build definition/project build: The build definition or project (each tool uses a different term) can be created either through the web UI or as files in JSON or YAML format. A build definition generally consists of an agent, environment variables, stages, and the steps to follow in each stage.

Agent: The agent can be a local system, a virtual machine, a container, or a machine in the cloud.

Environment variables: These hold the values used globally across every stage of the CI pipeline.

Stages: The stages cover building, testing, deploying, and delivering the result or artifacts.

Steps: The steps under each stage are classified and derived according to the goal of the stage. For example, the steps for the "Build" stage could be MSBuild, a Maven build, or even a simple UNIX shell or Windows PowerShell script to compile the code.
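To make those four parts concrete, here is a tool-neutral sketch in plain shell; a real CI system would express the same structure in its own YAML or JSON build definition, and the stage bodies here are placeholders for real steps (a Maven build, a JUnit run, an artifact publish, and so on):

```shell
# Environment variable: shared across every stage of the pipeline.
APP_VERSION="1.0.0"

# Steps grouped into stages; each function body stands in for real steps.
stage_build()  { echo "build: compiling version $APP_VERSION"; }
stage_test()   { echo "test: running unit tests"; }
stage_deploy() { echo "deploy: publishing artifacts"; }

# The agent runs the stages in order and stops at the first failure.
run_pipeline() {
  for stage in build test deploy; do
    "stage_$stage" || { echo "stage '$stage' failed" >&2; return 1; }
  done
}

run_pipeline
```

The fail-fast loop is the essential behavior: a failed build stage stops the pipeline and the team is notified before any deployment happens.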

Moreover, CI tools offer multiple options such as build verification, build status notification, code analysis, code comparison, stories about the check-ins, and digital code approval.

Benefits of CI:

  • Improving code quality based on instant test reports
  • Auto Triggering / constant scheduling for automated testing upon every code change
  • Reducing build times for rapid feedback and early detection of problems (risk reduction)
  • Better managing technical debt and conducting code analysis
  • Reducing long, difficult, and bug-inducing merges
  • Increasing confidence in codebase health long before production

Monitoring the CI

If continuous integration is to be added to the continuous delivery pipeline, we have to validate whether the continuous integration process is set up correctly and whether it really benefits the organization.

Keep the questions below in mind to measure whether our continuous integration is ready to go the full mile to continuous delivery.

  • Do the developers commit code into the repository tool multiple times a day?
  • Does at least one commit on the branch automatically publish the code to the CI environment (development env), where the automated code build and unit tests happen?
  • Is the CI/development environment set up with a proper auto-build trigger and automated unit test scripts?
  • Do the developers give priority to the unit test results?
  • Do the developers improve and fix the code based on the CI environment test results?

Infrastructure as Code – IaC

As we saw in the previous article, configuration management is about how the environments are configured for the applications. With configuration management as a script or tool, the setup is much more maintainable: for example, all the release definitions are kept in the form of variables. If the configuration changes, only a few locations in the setup code or tool need to change, rather than every possible file.

It reduces mistakes: whenever an IP, port, or endpoint needs to change, it is corrected in a single configuration script or tool, instead of manually changing every possible file and missing some important file that still holds the previous environment's IP details and causes an error.

It brings more security, as the production database username, password, and connection strings are maintained in the deployment tool rather than in a text or config file. Provisioning infrastructure through a tool is also more reliable, so it avoids issues being repeated from one environment to another.

Two concepts come under configuration management:

  1. Infrastructure as code (IaC)
  2. Configuration as code (CaC)

1. Infrastructure as code:

There was a high-priority issue that I still remember. In a banking system, the transaction load was handled by two authorization servers for load balancing and high availability. The IT team was doing a routine OS patch upgrade on the application servers. During the activity, something unfortunate happened in the OS upgrade and caused server A to go down. With server A down, it was very hard to handle the transaction load on one server while the team worked to bring the other back up. As usual, the upgrade happened with all stakeholders on a call, so the delivery head asked the infrastructure manager to quickly bring up a fresh system by replicating server A. The manager asked the system admin to set up the new system by going through the manual steps written in an Excel sheet. The system admin tried to follow the steps as per that user manual, but after completing a certain number of them he could not continue, so he called another colleague who could supply the missing steps from memory and assist. While the system admin was getting the fresh system ready, the client's business day started; as expected, the transaction load grew and drove server B into a crashed state.

This was a good learning situation for the infrastructure manager to set up a proper process. At the time, I was thinking: what if that manual Excel sheet were a simple script or a configuration tool? Then it would take only a fraction of the time to provision a new server with the same configuration as server A.

In DevOps, infrastructure as code provides the solution for this type of situation, along with the process. When we develop the script or tool for infrastructure as code, we need to ensure that the code has been tested multiple times and runs consistently without error.

Infrastructure as code can be achieved with the many tools out there in the market: Vagrant, Ansible, Puppet, Chef, Docker, PowerShell DSC, Python scripting, and so on.
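Even a plain shell script can capture the idea, provided it is idempotent: running it twice leaves the server in the same desired state. Here is a minimal sketch, where the file paths and the config value are hypothetical:

```shell
set -e
# Create a directory only if it is missing.
ensure_dir() {
  [ -d "$1" ] || mkdir -p "$1"
}

# Append a config line only if it is not already present.
ensure_line() {
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# "Desired state" of a hypothetical app server, expressed as code.
base="$(mktemp -d)"
conf="$base/etc/app.conf"
ensure_dir "$base/etc"
ensure_line "$conf" "max_connections=200"
ensure_line "$conf" "max_connections=200"   # a second run changes nothing
```

Real IaC tools such as Ansible or Puppet provide this idempotency for you; the point is that the Excel sheet of manual steps becomes an executable, testable artifact.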

Benefits:

  • Consistent server setup across environments
  • Elements are easily created and scaled
  • Updates and creation of the environment infrastructure are fully automated

2. Configuration as code:

Configuration as code means defining the configuration of your servers, code, and other resources as a text file (script or definition) that is checked into version control and used as the base source for creating or updating those configurations. For instance, adding a new port to a firewall should be done by editing a text file and running the release pipeline, not by remoting into the environment and making the change manually.

During any monthly maintenance or server upgrade activity, the application support team had to disable the configuration of one server and enable the configuration of another. This disabling and enabling happened manually, which made the downtime window quite large, and since it was a manual activity we ended up with issues from wrong configuration every time. To reduce this effort and these issues, our team came up with a configuration-as-code solution: a script that could be executed on server A to route the traffic to server B, and vice versa.

Once the script was developed, executing the configuration setup was just a few simple steps, and the downtime window was drastically reduced. In the same way, we can automate the configuration of any server.
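The toggle described above can be sketched as a script that rewrites a routing entry in a config file; the file layout and server names are hypothetical:

```shell
set -e
# Point the load-balancer entry at the given server.
route_traffic_to() {
  target="$1"
  conf="$2"
  printf 'active_server=%s\n' "$target" > "$conf"
}

conf="$(mktemp)"
route_traffic_to server-B "$conf"   # drain server-A for maintenance
route_traffic_to server-A "$conf"   # switch back after the upgrade
cat "$conf"
```

Because the switch is a single scripted write rather than a hand-edited file, every drain and restore is identical, which is what shrinks the downtime window.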


Benefits:

  • Bugs are easily reproducible by consistently using these configuration-as-code scripts
  • Configuration changes become consistent

In some cases, "infrastructure as code" is used to describe both provisioning the machines (IaC) and configuring the machines (CaC), so the widely used umbrella term is simply infrastructure as code.

Treat the infrastructure as Software:

In DevOps, treating infrastructure as software lets organizations concentrate on delivering reliable products. With this concept in mind, many large internet-based companies provide Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), so you can concentrate solely on your customers' needs.

IaaS: These are the cloud computing service providers (AWS, Azure, GCP, etc.). You can choose a cloud service provider based on your application and technology. Nowadays almost all cloud service providers are up to the mark in supporting any kind of application on the cloud. To know more about the cloud service providers and what they support, you can go through their documentation.

With IaaS, the organization gets hardware such as servers, storage, and networking on the cloud, rather than spending on and setting up its own datacenter.

PaaS: The cloud service providers offer not only the hardware but additionally middleware, development tools, business intelligence, and database management systems. So you don't need to worry about infrastructure maintenance, middleware architecture design, or recruiting a skilled DBA to manage your database; you manage only the application or service you have developed.

However, with this service you might face limitations if your application runs on traditional technology.

SaaS: Software as a Service is provided by many organizations, from small scale to large scale. For example, if banks want to offer their customers plastic money, they get SaaS from an organization that runs a switch for transaction processing.

Revolution in Infrastructure Management:

Along with DevOps, the infrastructure is undergoing a drastic revolution in how hardware is utilized. Running a single application on a high-configuration physical server has given way to virtual machines running on top of physical machines using a firmware hypervisor, and now to containers running either directly on bare-metal physical machines or on virtual machines, using a platform such as Docker.

Provisioning Virtual Machine:

The virtual machine concept helps the infrastructure team provision new VMs in minutes, allocate memory, utilize the hardware resources effectively, and ease the system admin's workload. Tools like Puppet, Chef, Ansible, Vagrant, and Salt can use vSphere or vCenter, or VM templates through the vSphere web client, to provision new virtual machines per your application's needs on bare-metal physical machines or on cloud infrastructure.

Say the development team sends an SRF (system requirement form) to the infra operations team for a new project to be deployed on the UAT (user acceptance testing) environment. Since the project is going to UAT, the bare minimum virtual machine configuration is quite enough for this SRF. The VM specialist provisions the VM from an existing template through the vCenter tool, with information such as the VM name, OS type and version, the number of processors, disk format and datastore location, and network-level DNS and network name confirmation. The network infra team then allocates the IP manually in the VLAN and opens the WAF and the internal and external firewalls as needed, and the storage team creates a LUN ID on the SAN (storage area network) for the amount of storage to allocate to the new VM. After this, the system admin performs server hardening. Only once the infra operations team has completed all of this is the test environment server handed over to the application team to deploy and run the application.

In this whole process, everything except the new VM provisioning may still be manual and time-consuming. If we instead have an automation tool in place that captures the whole IT operations effort in one catalog (Puppet), cookbook (Chef), or generic desired state configuration script, a DevOps system engineer can provision the whole SRF as the desired virtual machines in one click, ready for the software engineers to run automated builds through continuous delivery.

Nowadays, there are organizations like Cloudera and Hortonworks that package the big data Hadoop environment in their distributions and use VMs extensively for their customers.


Containers:

The container is a platform by which software can be packaged and deployed on bare-metal machines or on VMs. Containerized applications are platform independent; they can be installed and run on any operating system.

Containers ease software development, deployment, and delivery, and make the application portable, secure, and cost-saving. Alongside containers there are further tools, such as Kubernetes for automating the deployment, orchestration, scaling, and management of containerized applications, and Prometheus for monitoring the containers.

Currently, Docker and CoreOS rkt are the well-known container runtimes to choose from.

Containers mainly come under the DevOps topic of packaging the software rather than infrastructure as code, so we will talk more about them in upcoming articles.

Red Hat OpenShift is a cloud application platform that can be used to develop and deploy containerized applications in the cloud or on-premises.

So in the agile DevOps world, the configuration of infrastructure for an application or product is well streamlined and automated with a variety of tools and technologies. The developers can opt for their own infrastructure via the SRF of their application to run on virtual machines, or they can package the application as a container with all the necessary application configuration, and they have complete freedom to plan even the load balancing for the application.

Tools and technologies can be used at any time without DevOps, but at the same time we need to ensure that the process we follow always keeps pace with the technology, and vice versa.

Practicing DevOps

In the previous articles, I wrote about the theory of DevOps transformation, which is followed in any organization that currently delivers quality software. DevOps transformation is not just the five levels; to sustain the DevOps culture, we need to practice it.

There are key concepts in DevOps that can be followed to make DevOps a BAU (business as usual) process. The concepts are:

  1. Configuration management
  2. Continuous integration
  3. Continuous delivery
  4. Continuous testing
  5. Continuous deployment
  6. Performance monitoring

When I wanted to learn about DevOps, the first book I chose to refer to was The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr, and George Spafford.

In that book, a very eye-catching topic discusses how frequently and effectively industries produce their products.

In many industries developing a product, whether consumable or non-consumable, the manufacturing process is completely automated, with fast delivery to the market.

Configuration management:

Configuration management is the management of infrastructure configuration. It emphasizes the encapsulation of configuration as code. Configuration management defines the environment's servers, network, and compute resources to be made ready for software deployment. It is like getting the whole infrastructure and tooling ready for the fast delivery of my product.

For example: in the potato chips industry, the complete production line of machines is kept ready in order to continuously deliver potato chips to the market.

In the same way, we need to ensure that the infrastructure and the respective resources are ready in the form of code, so that whenever new infrastructure is required, it is very simple to create or configure it by executing that script/code.


Continuous integration (CI):

Continuous integration is the process of automating the code build and unit testing; this process ensures that quality code is checked into the code repository tool. It also encourages developers to share their code and unit test scenarios in the repository tool for easy integration of code into the master branch and complete validation.

As a real-world analogy, it is the same process by which the potatoes are cleaned and placed on the tray for the next continuous step.

In the same way, continuous integration works when the automated build and tests make it easy to integrate with the code already committed by other developers to the branch or master in the repository tool. Since each change is committed to the repository and integrated cleanly, the master branch stays well organized, ready to deliver the software as a product.

The benefit of committing every change to the master branch is that the automated build runs the automated unit tests and provides immediate feedback to the developer about bugs. This makes the bug-fix process very easy and keeps the code quality consistent.

Continuous delivery (CD):

Continuous delivery is the process of establishing a single pipeline from code build to production deployment. It is an automated pipeline covering build, test, configure, and production deployment. The process of continuous delivery emphasizes continuous improvement.

Continuous delivery provides the benefits of a lower risk of bugs, faster delivery of the product, higher product quality due to continuous improvement, and reduced cost, as complete automation is involved in the CD pipeline.


Continuous testing:

Continuous testing is the process of synchronizing automated testing with the continuous deployment pipeline in order to achieve the business goal.

Continuous testing ensures we test early, test faster, test often, and automate the tests. This process benefits integration testing, regression testing, and acceptance testing.


Continuous Deployment:

Continuous deployment is the process that wraps continuous integration, continuous delivery, and continuous testing. The process looks like continuous delivery, but with a slight change in the approval process before the code is deployed into production: if our automated continuous delivery pipeline is well streamlined, then we can introduce auto-approval for code going into production, which is the next level of deployment, continuous deployment. The difference between continuous delivery and continuous deployment has been explained in simple form.

Performance Monitoring:

Performance monitoring is the process of continuously learning from production to improve and scale the business.

The goal of this process is to achieve high availability by minimizing the time to detect and the time to mitigate defects: each and every defect identified in performance monitoring is passed to the development team, which retrofits the issue in the test environment and provides the fix. In other words, "testing in production" happens before issues are reported by the users.

When we put the process of performance monitoring into the DevOps cycle, we get feedback about reliability, quality, and safety. There are many forms of monitoring, which we will discuss in upcoming articles.


Among the key concepts of DevOps, security and compliance are not negligible. There is another process called DevSecOps, which talks about security as code. With the security process in place, it ensures that customer data and customer privacy are not left behind.

With DevSecOps, we provide insights about security to the developer while the code is in the development stage. Penetration testing should not be the only stage at which security is verified in the code; rather, the DevSecOps process introduces it before the code build happens.

Each and every one of these processes is a key concept in DevOps, and with the right tools in place, they achieve DevOps best practice.