Continuous Deployment with AWS CodeCommit, Docker Swarm and Jenkins

Igor Kashin

This article is still getting updated…

At the moment of writing, this blog is powered by WordPress, and it will probably stay that way for a long time. However, quite some time ago I moved this WordPress instance into a Docker swarm along with my sandbox. What’s wrong with Docker swarm? If you run only WordPress – nothing. However, I also have a reverse proxy, a few microservices, different databases and so on, built on different stacks as well.

It’s all nice and fun, but it becomes too complex without a fully automated deployment process. This article describes the automation that I have in place to keep all the fun and get rid of the complexity.

Manual Deployment or How it used to be

I have a local development environment where everything is easy: whatever I code runs in debug mode, and all prerequisites are started with different docker-compose files. When I need some piece, I run it and stop it when I don’t need it anymore, easy-peasy.

Every change is tracked with git, and I have remote repositories on AWS CodeCommit. Amazon gives it away for free without any restrictions that matter to me, which sounds really good. When I’m satisfied with the changes, I push and have production-ready code in place. The codebase alone is worth nothing since everything is dockerized, so I need to build a container. For every microservice I have a shell script residing on my computer that downloads the latest codebase from AWS, creates a slim container ready for deployment via a two-stage Dockerfile, uploads the container to Docker Hub and tags it as latest. It takes quite some time for certain projects, e.g. a basic Vapor-based API may require ten minutes or more to build. Builds do fail sometimes, usually when not all changes are committed, and then I have to restart the process.
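For reference, such a build script is essentially the following sketch. It is not my exact script: the region, repository name and image name are placeholders, and the Dockerfile is assumed to be a multi-stage one sitting in the repository root.

#!/bin/sh
# Sketch of a per-service build script (names and region are placeholders).
set -e

SERVICE=my-api
IMAGE=smartello/$SERVICE

# fetch the latest codebase from AWS CodeCommit
rm -rf build
git clone --depth 1 https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/$SERVICE build

# build a slim image via the multi-stage Dockerfile and push it to Docker Hub as :latest
docker build -t $IMAGE:latest build
docker push $IMAGE:latest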

Now it’s deployment time. I need to SSH to my swarm manager node, pull the latest images from Docker Hub, shut the stack down and deploy it again. I used to have two separate stacks: one for WordPress with MySQL and another one for everything else, but I later merged them into a single one because I want to have a single reverse proxy. Please let me know in the comments if I’m being stupid here and they could still stay separated (i.e. the two stacks would have to share a network). Either way, there is downtime and quite a bit of manual work involved in every update. If that’s not enough, updates DO fail sometimes, for different reasons, and then everything is down until I resolve the problem. These are pet projects, so downtime does not hurt my wallet, yet it definitely bothers me.
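For context, that manual ritual boils down to roughly the following commands; the host, stack and file names are placeholders, not my actual setup.

ssh user@swarm-manager                       # hypothetical manager node
docker pull smartello/my-api:latest          # repeated for every image in the stack
docker stack rm sandbox                      # takes everything down, hence the downtime
docker stack deploy -c docker-stack.yml sandbox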

Staging!

Well, it sounds obvious, but if you have more than a single machine, it helps to have a staging environment that is as close to your production as possible. Even if you don’t have spare machines, build the environment locally. Since updates do fail even with a highly diverse team of one person, it is hard to overvalue the chance to take down something only you know about instead of what is published. Since I had two servers, I moved one of them out of my “private cloud” and dedicated it to the deployment process instead. It would have been possible to use my local NAS for the same purpose, but I’d rather watch something via Plex while a machine in a datacenter does the heavy lifting. I still do backups onto the NAS though; I trust AWS, but backups are cool to have.

Long story short, there is a server that runs everything I develop along with the essential dependencies. It is all defined in a single docker stack whose .yml is very similar to what I have in production. I also have a Jenkins container running over there that does ALL the manual steps for me, and in the following paragraphs I will describe everything needed to make it work.
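To give an idea of the shape of that file, here is a heavily trimmed sketch of such a stack definition. The service names, images and network are illustrative, not my real configuration.

version: "3.7"
services:
  proxy:                          # single reverse proxy in front of everything
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - web
  wordpress:
    image: wordpress:latest
    networks:
      - web
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a docker secret in real life
    networks:
      - web
  my-api:                         # one of the microservices, hypothetical name
    image: smartello/my-api:latest
    networks:
      - web
networks:
  web:
    driver: overlay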

Automate Everything, Step-by-step

Run Jenkins

First of all, we need Jenkins to do all the boring things for us. Well, we need some tool, and I decided to go with Jenkins. Since everything on every machine runs in docker, I went with container-based Jenkins. It sounds good and easy until you realise that Jenkins must be able to use docker itself. One more time: Jenkins in a docker container, with docker installed into that same container. You can download the image from my Docker Hub; the readme section has a description of the Dockerfile as well.
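If you prefer to build such an image yourself, a minimal Dockerfile could look roughly like this. It is my sketch, not the exact Dockerfile behind smartello/jenkins-with-docker: it starts from the official Jenkins image and adds the docker client so that jobs can talk to the host daemon through the mounted socket.

FROM jenkins/jenkins:lts
USER root
# install docker (we only need the CLI inside the container; the daemon stays
# on the host and is reached via the mounted /var/run/docker.sock)
RUN apt-get update \
    && apt-get install -y docker.io \
    && rm -rf /var/lib/apt/lists/* \
    && usermod -aG docker jenkins
# note: the group id of the host's docker socket may differ and need adjusting
USER jenkins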

docker run --name jenkins-with-docker -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock smartello/jenkins-with-docker

Once Jenkins is installed and running, it is necessary to install the “AWS CodeCommit URL Helper” plugin to maintain AWS CodeCommit credentials and the “Docker Pipeline” plugin to call docker commands from a pipeline.
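With both plugins in place, a pipeline that reproduces the manual build script could look roughly like the sketch below. This only illustrates the Docker Pipeline plugin in general, not my final pipeline; the repository URL, credentials IDs and image name are placeholders.

// Declarative pipeline sketch: checkout from CodeCommit, build, push to Docker Hub.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/my-api',
                    credentialsId: 'codecommit-credentials'          // placeholder credentials ID
            }
        }
        stage('Build and push') {
            steps {
                script {
                    def image = docker.build('smartello/my-api:latest')   // placeholder image name
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        image.push('latest')
                    }
                }
            }
        }
    }
}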

… to be continued
