The modern market is an ever-changing, ever-growing, hypercompetitive environment. Every day, thousands of new competitors appear worldwide, transformative technology shakes industries to their core, and new paradigms reshape the way business is done.
In this context, fast delivery matters more than ever, and clients want their projects to meet the demands of a competitive market that’s moving at breakneck speed. Continuous Integration and Continuous Delivery (CI/CD) is a popular methodology that seeks to do just that by automating the development process and creating a constant release cycle.
But even a method that’s designed to speed up your development cycle can have its fair share of bottlenecks, and that’s what we’re going to talk about today: the things we can do to optimize our CI/CD pipeline.
Build Only What You Need
It might be tempting to try to push as much as possible with each commit, but building five different modules or services in one go can often lead to more trouble than it’s worth.
A good policy is that each commit should be like a good email: short and to the point. Even projects built on a monolithic architecture have plenty to gain by sticking to one module at a time.
Focus your efforts on what’s absolutely necessary; prioritize, and keep it simple. Massive commits can often act as bottlenecks due to code reviews, QA, and testing. If 99% of the code is perfect but a single line raises a flag, the rest of the code can get stuck in the process while the bug is fixed.
If this sounds like someone selling microservices architectures, well, that’s exactly what it is. While monolithic approaches have their own strengths, usually, keeping everything micro and compartmentalized will save a lot of time in the long run.
Avoid Making Too Many Feature Changes at Once
A bit of a follow-up from the previous point. Feature changes are necessary, but at the same time, they present a risk. From a software development perspective, every feature change carries the risk of introducing bugs or unintended side effects.
From a user’s perspective, too many changes at once can confuse your end users, and it’s harder to assess which changes are working and which aren’t if they all land at the same time. Worse yet, you run the risk of one disliked change spoiling the whole update, as people tend to generalize bad experiences.
Just as with building new modules, if one feature fails a test, the rest of the features have to be put on hold while you hunt down the culprit. To make matters worse, the more you change at once, the harder it is to find the source of the problem.
It bears repeating: the best way to avoid this type of bottleneck is to think small. One change at a time keeps things organized and easier to handle for the whole team.
Run Jobs in Parallel
CI/CD pipelines can be a huge timesaver when used correctly, but just like a pipe, they can handle only so much. After a certain threshold, the continual cycle collapses and a bottleneck is formed.
For those who aren’t too savvy about software development: in most pipelines, build steps run sequentially. That is, each step runs on its own and only starts after the previous step has finished.
Sometimes we can break the process into independent steps and run them at the same time; this is what’s commonly known as running in parallel, or concurrently. Five jobs that each take a minute to complete would take five minutes in sequence, but just one minute when run in parallel.
Obviously, not every job can be run in parallel, nor does every job need to be. Some jobs are quick or lightweight enough to run as they are. Others require more time and resources, and in those cases the process can be sped up by spinning up additional instances to run in tandem.
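To make the arithmetic above concrete, here is a minimal sketch in Python using the standard library’s concurrent.futures. The job names and the one-second sleep are placeholders standing in for real pipeline steps, not part of any actual CI system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_job(name):
    """Simulate a pipeline job that takes one second (placeholder for a real build step)."""
    time.sleep(1)
    return f"{name}: done"

# Hypothetical jobs; in a real pipeline these would be independent build steps.
jobs = ["lint", "unit-tests", "integration-tests", "docs", "package"]

# Sequential: total time is roughly the sum of all job durations (~5s here).
start = time.perf_counter()
for job in jobs:
    run_job(job)
sequential = time.perf_counter() - start

# Parallel: independent jobs run concurrently, so total time is roughly
# the duration of the slowest single job (~1s here).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    results = list(pool.map(run_job, jobs))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.1f}s, parallel: {parallel:.1f}s")
```

The same principle applies in real CI systems, where it usually takes the form of declaring jobs without dependencies on each other so the runner can schedule them concurrently.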
Cache, Cache, Cache
Artifacts from previous CI/CD releases can be reused during new cycles. For instance, a package or container required by your app can be used in subsequent cycles.
To avoid downloading or rebuilding your entire resource pool for each cycle, cache everything and reuse it wherever possible. There are fantastic tools for this kind of task, such as Artifactory.
By reusing resources that you already have on hand, you can significantly increase the speed of your CI/CD pipeline. At the same time, you reduce the risks of problems arising from compatibility issues down the line.
While caches are useful, they aren’t meant to live forever. It’s important to invalidate or delete caches when their contents change or when they are no longer necessary; that way you avoid stale artifacts causing confusion down the line.
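The idea behind most CI caches can be sketched in a few lines: derive a cache key from the contents of a dependency manifest, reuse the stored artifact on a key match, and rebuild on a miss. This is an illustrative Python sketch, not any particular tool’s API; the cache directory name and function names are made up:

```python
import hashlib
import os
import shutil

CACHE_DIR = ".ci-cache"  # hypothetical local cache location

def cache_key(manifest_path):
    """Derive a cache key from the dependency manifest's contents,
    so the cache is invalidated automatically whenever dependencies change."""
    with open(manifest_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_or_build(manifest_path, build_fn):
    """Reuse a cached artifact when the manifest is unchanged; rebuild otherwise."""
    key = cache_key(manifest_path)
    cached = os.path.join(CACHE_DIR, key)
    if os.path.exists(cached):
        print("cache hit: reusing artifact")
        return cached
    print("cache miss: building")
    artifact = build_fn()  # the expensive step we want to skip on a hit
    os.makedirs(CACHE_DIR, exist_ok=True)
    shutil.copy(artifact, cached)
    return cached
```

Keying the cache on a hash of the manifest also addresses cleanup: when dependencies change, the key changes, old entries simply stop being hit, and they can be pruned safely.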
Use Canary Releases
Some DevOps teams release preview builds for a small subset of users and gather feedback before fully committing to a build. Canary releases can help you gather useful data about your changes without exposing your whole userbase to potential bugs or issues.
Big development teams can even build different canary releases, each aimed at a different subset of users. This way, different changes can be evaluated in parallel and potential issues can be more easily detected.
For example, if you have three sample user groups, A, B, and C, and the A group is reporting a bug, then you know that something is wrong with that canary release specifically. That makes it easier to trace the source of the problem.
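A common way to split users into canary groups like A, B, and C is deterministic hashing: each user always lands in the same group, so their experience is consistent across sessions. This is a minimal sketch under assumed parameters (the group names and the 10% share per group are illustrative):

```python
import hashlib

def assign_group(user_id, groups=("A", "B", "C"), canary_percent=10):
    """Deterministically assign a user to a canary group or the stable channel.
    Hashing the user ID keeps the assignment stable across sessions,
    with roughly canary_percent of users in each canary group."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < canary_percent * len(groups):
        return groups[bucket % len(groups)]
    return "stable"
```

With three groups at 10% each, about 70% of users stay on the stable release, and a bug report from group A points straight at that canary build.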
Analyze Your Pipeline
We left the best for last. We cannot stress just how important it is for a team to constantly monitor their CI/CD pipeline. Measure how long each cycle is taking and take a close look at the jobs that are taking the longest.
Is the time being used on high-value steps? If that’s not the case, then what can you do to reduce the overall time or reshuffle your steps?
Are there long steps that aren’t bringing value? Why are they in place? Can they be removed safely?
By understanding your own pipeline, you can make better decisions about how to speed up each cycle. Remember that designing a CI/CD pipeline is an iterative process: keep working on it and monitoring it constantly to find areas of improvement.
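The measurement habit described above doesn’t require heavy tooling. As a rough sketch, a small timing wrapper around each step, with a slowest-first report, is enough to show where the time goes; the step names and sleep durations here are stand-ins for real pipeline jobs:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed_step(name):
    """Record how long a pipeline step takes so slow steps stand out."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Hypothetical steps standing in for real pipeline jobs.
with timed_step("checkout"):
    time.sleep(0.05)
with timed_step("build"):
    time.sleep(0.2)
with timed_step("tests"):
    time.sleep(0.1)

# Report steps slowest-first so it's obvious where the time goes.
for name, secs in sorted(timings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:10s} {secs:.2f}s")
```

Once the slowest steps are visible, the earlier questions become answerable: is that time spent on high-value work, and if not, can the step be trimmed, parallelized, cached, or removed?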