Continuous Integration through a CMS-Coloured Lens, Part I
Something I have come to appreciate is continuous integration; for me it all started with an article by Martin Fowler. The idea is to have everyone working from the main line and committing often, so that code collisions happen and can be resolved early. I like to think of these collisions as events, and we as developers should subscribe to them. When the event fires, open up communication with the developer whose code you collided with. Talk with them: why are you both working on the same file? Are you creating similar features? There could be an opportunity to cut your workload, or to pick up something else from the backlog. In any case, if you don't communicate with the other developer your code will just keep butting heads.
In the past I have used feature branching. It seemed like a great idea when it was first introduced to me, but more often than not it's a good way to experience MERGE HELL! This hell is what you get when you merge your branch back into the main line once the feature is finished. You can mitigate it by pulling from the main line often, but it is simply not fun. The more I use continuous integration, the more it seems superior to feature branching, especially when you start to look at a continuous delivery pipeline.
I see continuous delivery as the evolved form of continuous integration; in fact, Martin Fowler describes continuous integration as the foundation and first step of continuous delivery. As I understand the concept, when you check in a code change, a pipeline of tasks is run on that change set. As an example, you could have the following tasks in your pipeline:
- Unit Tests.
- Automated Integration Tests.
- QA Deploy.
- QA Testing.
- UAT Deploy.
- UAT Testing.
- Production Deploy.
At the completion of each step you can be more and more confident about deploying into production. If any task fails, the cause is identified and fixed, and the pipeline starts again. It is important to add automated tests that would catch the same issue again (where possible), helping the pipeline to fail early. An early failure is good, as more effort has been spent the further the pipeline progresses through its later tasks. One thing I should point out is that you only build once; that build is then promoted up through the environments. At first I did not see why you could not just build for each environment, but after thinking about it, it makes a lot of sense: if you don't use the exact same binaries you can no longer be completely confident, as differences in build environments can alter the artefacts produced. It is even suggested that you promote an entire virtual system through the pipeline. Have a read of the free chapter of the book Continuous Delivery by Jez Humble and Dave Farley to get a better understanding of this.
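The flow above can be sketched as a tiny pipeline runner. This is a minimal sketch in Python, not a real CI tool: the stage names come from the list above, while `build`, `run_pipeline` and the artefact name are hypothetical stand-ins. The point it illustrates is that you build exactly once, promote the same artefact through every stage, and stop at the first failure.

```python
# A minimal sketch of "build once, promote through stages, fail early".
# The stage tasks here are placeholder lambdas; in a real pipeline each
# would invoke a test suite or a deployment script.

def build():
    """Build exactly once; the same artefact is promoted everywhere."""
    return "app-1.0.0.zip"  # hypothetical artefact name

def run_pipeline(artefact, stages):
    """Run each stage in order, stopping at the first failure."""
    for name, task in stages:
        if not task(artefact):
            # Fix the cause, add a test to catch it, then rerun from the start.
            return f"FAILED at {name}"
    return "deployed to production"

stages = [
    ("Unit Tests",                  lambda a: True),
    ("Automated Integration Tests", lambda a: True),
    ("QA Deploy",                   lambda a: True),
    ("QA Testing",                  lambda a: True),
    ("UAT Deploy",                  lambda a: True),
    ("UAT Testing",                 lambda a: True),
    ("Production Deploy",           lambda a: True),
]

print(run_pipeline(build(), stages))  # → deployed to production
```

Note that `run_pipeline` takes the artefact produced by the single `build` call; none of the stages rebuild it, which is what keeps you confident that what you tested is what you shipped.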
Continuous integration/delivery does not play nicely with a CMS, and I really want to talk about a pipeline, techniques and tools you can use when working with one. However, this post is getting a bit too long, so I will split it up over the next few entries.