Our last summary installment focused on establishing automated testing practices that give developers fast feedback. This week, we look at mitigating the downward spiral that starts when developers work in isolated “branches” of the version control system.
Before we dive in, though, make sure you’re fully caught up on the previous installments in the series.
The DevOps Handbook Summary Series (so far):
The Big Problem-Solver: Continuous Integration
The idea behind having developers work in individual branches was to reduce the risk of committing changes directly to trunk/master/mainline, while allowing devs to work on different sections of the software system simultaneously.
In practice, though, this process initiates a downward spiral of increasing technical debt that begins when teams try to merge their changes back into the trunk after working in isolation. The problem intensifies as the number of branches and the volume of changes awaiting integration grow.
Practicing Continuous Integration (CI), by contrast, nullifies a surprising number of these problems by making small changes and trunk merges part of the normal flow of work. Rather than a dreaded calendar date that sparks a multitude of errors and rework, these integrations become daily non-events.
- “Small Batch Development and What Happens When We Commit Code to Trunk Infrequently”
Branching strategies can be categorized in two ways:
— To improve individual productivity, where everyone works in isolation and no one can disrupt another’s work. The result, though, is that collaboration is difficult, merging becomes a painstaking process, and overall visibility of work is non-existent.
— To improve team productivity, where everyone works in a common area. The result is an unbroken line of development in which everyone’s commits are visible. The downside is that a single commit can impact the entire project.
Merging branches back in sporadically only creates “merge hell,” resulting in chaos, delayed feedback, and rework. Merging in small batches while optimizing for team productivity mitigates this significantly.
- “Adopt Trunk-Based Development Practices”
Embrace the practice of CI by encouraging developers to commit code to the trunk daily. This reduced batch size allows automated tests to run frequently and lets teams detect merge problems more quickly. More importantly, smaller problems are much easier to swarm and fix.
“Gated commits” are another method to help teams retain a deployable state: the pipeline is configured to reject code commits that threaten it. These disciplines (daily code commits, small batch sizes, fast problem-solving, etc.) promote higher quality and faster deployment times.
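To make the gate concrete, here is a minimal Python sketch of the decision a gated-commit pipeline makes. The check names and commit fields are hypothetical illustrations, not from the Handbook; in a real pipeline each check would invoke the build, the automated test suite, or static analysis.

```python
def gate_commit(commit, checks):
    """Merge to trunk only if every check passes; otherwise reject."""
    failures = [name for name, check in checks.items() if not check(commit)]
    if failures:
        print(f"Commit {commit['id']} rejected: {', '.join(failures)}")
        return False
    return True

# Hypothetical checks -- stand-ins for a real build and test run.
checks = {
    "builds": lambda c: c["compiles"],
    "unit_tests_pass": lambda c: c["tests_green"],
}

good = {"id": "abc123", "compiles": True, "tests_green": True}
bad = {"id": "def456", "compiles": True, "tests_green": False}
```

A commit that fails any check never reaches trunk, which is what keeps the trunk in a deployable state.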
To establish CI and make code commits a low-risk process, deployments need to be “automated, repeatable, and predictable.” To achieve this, reduce friction in production deployments by extending the deployment pipeline.
- “Automate Our Deployment Process”
Document the process of code commits (as in the value stream mapping activity), then re-architect to streamline by reducing the number of handoffs. Encourage Devs and Ops to collaborate and:
— Use the same deployment method for every environment
— Smoke test deployments as well as all supporting systems
— Maintain consistently synchronized environments using the version control system implemented previously
Should any problems occur, pull the Andon cord and swarm as outlined before.
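The practices above can be sketched as one shared deploy routine run identically everywhere. Everything here (environment names, version strings, the trivial smoke test) is illustrative; a real smoke test would hit the health endpoints of the application and its supporting systems.

```python
ENVIRONMENTS = ["dev", "test", "production"]

def deploy(version, env, smoke_test):
    """Deploy `version` to `env` using one shared code path, then verify."""
    # ... push artifacts, apply version-controlled config (elided) ...
    if not smoke_test(env):
        # Failure means: pull the Andon cord and swarm the problem.
        raise RuntimeError(f"Smoke test failed in {env}")
    return f"{version} live in {env}"

def smoke_test(env):
    # Stand-in for checks against the app and its supporting systems.
    return True

# The same routine runs against every environment, dev through production.
results = [deploy("1.4.2", env, smoke_test) for env in ENVIRONMENTS]
```

Because production uses the exact code path that dev and test already exercised, the production deployment itself carries far less novelty and risk.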
- “Enable Automated Self-Service Deployments”
Being able to self-deploy code to production and fix issues independently of Ops has become unusual practice for Devs due to increased security and compliance measures; Ops therefore tends to perform code deployments to reduce these risks. To realize DevOps, however, both departments’ goals need to be aligned: shifting reliance to automated testing and deployment achieves the same risk mitigation while improving “transparency, responsibility, and accountability.” Ideally, code deployments can then be performed by either Devs or Ops without manual handoffs or holdups.
- “Integrate Code Deployment Into the Deployment Pipeline”
Automating the code deployment process makes it easier to integrate into the deployment pipeline. Furthermore, container technology also helps reduce the complexity of deployments. The combined benefits of these help teams to achieve deployment lead times of minutes or hours, rather than months.
- “Decouple Deployments From Releases”
To overcome the problems that occur during simultaneous production deployments and feature releases, it makes sense to decouple the two from each other.
The following two categories of release patterns can help ensure fast and frequent production deployments while reducing the impact and risk of any deployment errors:
- “Environment-Based Release Patterns”
— Blue-green deployment:
Deploy to two or more environments, while sending traffic (by configuring load balancers) to only one. Following new code commits, traffic can be moved to the corresponding environment, and little or no application changes are required. See image below.
Blue-Green Deployment Patterns: The DevOps Handbook (1st ed. 2016) Kim et al.
(For more on the different deployment types available, check out our article Docker & Continuous Delivery Deployment Types.)
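A minimal sketch of the blue-green switch, assuming two identical environments with a load balancer pointing at exactly one of them. The class, environment names, and version strings are all hypothetical.

```python
class LoadBalancer:
    """Toy model of the blue-green traffic flip."""

    def __init__(self):
        self.envs = {"blue": "v1.0", "green": None}
        self.live = "blue"           # all traffic goes here

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy_and_switch(self, version):
        target = self.idle()
        self.envs[target] = version  # deploy to the idle environment
        self.live = target           # flip traffic in one step

lb = LoadBalancer()
lb.deploy_and_switch("v1.1")
```

Note that the previously live environment still holds the old version, so rolling back is just flipping traffic again.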
— Database changes:
Address the problems that occur by having one database support different application versions by either creating two databases (each application version has its own database) or by decoupling the release of database changes from application changes.
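One common way to decouple database changes from application releases is an “expand/contract” sequence: each step stays backward compatible, so either application version can run against the schema at any point. The Handbook describes the decoupling; this particular sequence and its column names are an illustration.

```python
# Each tuple is (phase, description); order matters.
MIGRATION_PLAN = [
    ("expand",   "ADD COLUMN email_v2"),          # old app ignores new column
    ("migrate",  "backfill email_v2 from email"),
    ("release",  "new app reads/writes email_v2"),
    ("contract", "DROP COLUMN email"),            # only after old app retired
]

def safe_order(plan):
    """Destructive 'contract' steps must follow the new application release."""
    phases = [phase for phase, _ in plan]
    return phases.index("contract") > phases.index("release")
```

The key property is that the destructive step comes last, after no running application version depends on the old column.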
— Canary and cluster immune system release patterns:
The design behind this pattern is based on the miners’ tradition of using a canary to detect toxic gases. In code deployment, a “canary release” exposes the new version to a small portion of the environment first and monitors it closely. Rollbacks are then easier to implement should anything be flagged.
The Canary Release Pattern: The DevOps Handbook (1st ed. 2016) Kim et al.
- The cluster immune system is an extension of the canary release that automatically rolls back code should any user-facing performance results deviate from expectations.
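The cluster immune system’s decision can be sketched as a single comparison between the canary’s user-facing metrics and the baseline. The metric (error rate) and the threshold value are hypothetical choices for illustration.

```python
def canary_release(baseline_error_rate, canary_error_rate, threshold=0.01):
    """Promote the canary only if its error rate stays near the baseline."""
    if canary_error_rate - baseline_error_rate > threshold:
        return "rollback"   # automatic -- no human in the loop
    return "promote"        # safe to roll out to the rest of the cluster
```

In practice the same comparison would run continuously against several metrics (latency, error rate, conversion) while the canary serves a small slice of real traffic.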
- “Application-Based Patterns to Enable Safer Releases”
Modify the application so that specific features can be released selectively, on a per-feature basis, through small configuration changes alone.
— Feature toggles: A mechanism by which we can permit or restrict features without the need for a production code deployment. This capability enables teams to roll back with ease, degrade performance gracefully, and increase resilience thanks to a service-oriented architecture.
— Dark launches: An expanded version of feature toggling, dark launching deploys all functionality into production while keeping users in the dark. Large or risky changes can be fully tested under production-like loads to ensure confidence in the service before it is launched.
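A minimal sketch combining the two patterns: a feature toggle gates a dark-launched code path that runs under real traffic while users keep seeing the old behavior. All names (the flag, the search functions) are hypothetical.

```python
FEATURE_FLAGS = {"search_v2_dark": True}   # flipping this is a config change,
                                           # not a production deployment
shadow_log = []

def search_v1(query):
    return [query.lower()]

def search_v2(query):
    return sorted(set(query.lower().split()))

def search(query):
    if FEATURE_FLAGS.get("search_v2_dark", False):  # unknown flags default off
        try:
            shadow_log.append(search_v2(query))     # exercised under real load
        except Exception as exc:
            shadow_log.append(("error", exc))       # failures stay invisible
    return search_v1(query)                         # users see the old results
```

Once the shadow results look healthy under production load, launching the feature is just a toggle flip rather than a risky deployment.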
- “Survey of Continuous Delivery and Continuous Deployment in Practice”
Over the years, the definitions of these concepts have changed organically and should be defined individually by an organization according to its requirements. Form is less important than outcome. According to the Handbook, “Deployments should be low-risk, push-button events we can perform on demand.”
Choosing the Right Architecture
Many organizational near-death experiences can be attributed to architectural problems. Companies both large and small go through many rewrites in their efforts to evolve and address these issues. Overcoming the challenge of migrating from one architecture to another is necessary to achieve DevOps organizational goals.
- “An Architecture That Enables Productivity, Testability, and Safety”
As defined in Part 2, a loosely coupled architecture establishes a productive and safe environment for (two-pizza-sized) teams to make small changes with low risk. Service-oriented architecture, based on layers of dependable services, can be hugely beneficial for teams’ flexibility and scaling, and subsequently for productivity too.
- “Architectural Archetypes: Monoliths vs. Microservices”
Architectural choices are entirely dependent on the organization in question. Many factors (such as stage in the product life cycle, time to market, functionality, etc.) can influence the choice of monoliths over microservices. But as an organization changes, so will its architectural needs. By understanding the evolution of these changes, teams can adapt or migrate their architecture accordingly. See the table below for the three major architectural archetypes.
Architectural Archetypes (Source: Shoup, “From the Monolith to Microservices.”) The DevOps Handbook (1st ed. 2016) Kim et al.
- “Use the Strangler Application Pattern to Safely Evolve Our Enterprise Architecture”
“They seed in the upper branches of a fig tree and gradually work their way down the tree until they root in the soil. Over many years they grow into fantastic and beautiful shapes, meanwhile strangling and killing the tree that was their host.”
The “strangler application” practice, inspired by these Australian vines, places existing functionality behind an application’s API and accesses it through “versioned services.” The process seeks to slowly supplant a legacy, monolithic application with a new service-oriented architecture.
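The routing at the heart of the pattern can be sketched as a facade in front of the legacy monolith: migrated endpoints go to new services, and everything else falls through to the old code. The endpoint and service names are illustrative.

```python
def legacy_monolith(path):
    return f"legacy:{path}"

def orders_service(path):
    return f"orders-service:{path}"

# Grows over time as functionality moves out of the monolith;
# the monolith is "strangled" one endpoint at a time.
MIGRATED = {"/orders": orders_service}

def facade(path):
    """Route migrated paths to new services, all others to the legacy app."""
    handler = MIGRATED.get(path, legacy_monolith)
    return handler(path)
```

Callers only ever talk to the facade, so each endpoint can migrate independently without any client changes.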
Throughout Part 3, we have endeavored to implement architecture and technical practices that encourage the fast flow of work between Dev and Ops. Up next in Part 4, The DevOps Handbook looks at establishing The Second Way and the technical practices of feedback.
Want to kick-start your DevOps transformation? We invite you to consider a Caylent subscription. For a monthly fee, your team receives full architecture, scalability, CI/CD, and container orchestration capabilities plus assistance 24/7/365.
Free your team up to concentrate on application development and delivering value quickly and safely to your customers.
Kim, G., Debois, P., Willis, J., & Humble, J. (2016). The DevOps handbook: how to create world-class agility, reliability, and security in technology organizations. Portland, OR: IT Revolution Press, LLC.