Automate the update of multiple containers
Introduction
Creating containers and working with Docker is a great skill and a good practice, especially important when working with one or more services that must be isolated. I will not go into how Docker works here (that could be a topic for another post, a great one I would say), but I will write about how using it can simplify other processes, like keeping a system updated.
At a certain point I needed to keep five services running and updated on several Raspberry Pi boards. Implementing Docker and creating containers to isolate those services made it possible to build a system that updates, and keeps updated, each container (the one running the service) on each board. This significantly improves version control and scalability, allowing each container to perform its own update based on a common base image shared by all boards, one image per service.
A quick note: I said I would not talk about Docker itself, but I think it is worth explaining why it matters here. When you develop a service based on code you wrote, in the usual development process that code is versioned in some remote repository. So why not create a service that periodically runs git pull, updates the code and restarts the service? The point of using Docker is to freeze the image and guarantee that the service will not crash because of some broken or deprecated package.
Container updates
- Manage credentials
- Download and run the image
To execute these steps you can create a bash script (.sh) for each service and define each step in it. For example: if I have five different containers running, I can create five folders, each one containing its own script (.sh) and docker-compose.yml file (at the end there is a bonus topic about Dockerfile and docker-compose.yml; if you have any doubt, check it out). I guess the easiest way to explain this is to share the script (update.sh) and describe each line:
#!/bin/bash
cd /docker-images/my-service
export DOCKER_CONFIG=~/.my-service
docker-compose pull my-service
docker-compose up -d
docker image prune -f
Before the explanation, here is the folder tree:
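Something along these lines, with each service in its own folder under /docker-images (my-service and my-service2 are the same placeholder names used in the rest of the post):

/docker-images
├── my-service
│   ├── docker-compose.yml
│   └── update.sh
└── my-service2
    ├── docker-compose.yml
    └── update.sh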
Now, let's go through each line:
cd /docker-images/my-service
First of all, it is necessary to change into the directory that contains the docker-compose.yml of the container to be updated. This matters because each container keeps its docker-compose file in a different folder, and docker-compose looks for that file in the current directory.
export DOCKER_CONFIG=~/.my-service
DOCKER_CONFIG is an environment variable that points to the directory holding the credentials (the token) used to pull your image. If you are using the GitLab Container Registry, for example, you must create an access token to be able to pull images. Once you have this token, this command defines that the credentials file will live in ~/.my-service, with the name config.json. The config file should look something like this (~/.my-service/config.json):
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "<base64 of username:token>"
    }
  }
}
Note: I used the same name throughout the whole process to make it easier to relate the resources that belong to the same container, but you can change it.
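If you prefer not to write this file by hand, one way to generate it is to point DOCKER_CONFIG at the folder and run docker login, which writes config.json with the base64-encoded credentials. This is just a sketch; the token username and value are placeholders, and for GitLab the token needs registry read access (for example a deploy token with the read_registry scope):

# write the credentials to ~/.my-service/config.json instead of the default ~/.docker/config.json
export DOCKER_CONFIG=~/.my-service
# log in to the GitLab registry; replace the placeholders with your token credentials
docker login registry.gitlab.com -u <token-username> -p <token-value>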
docker-compose pull my-service
Uses docker-compose to pull a new image, if one exists. If the local image has the same hash as the supposedly new image, nothing changes and the update effectively stops here.
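For reference, a minimal docker-compose.yml for this setup could look like the sketch below; the registry path, group name and tag are assumptions and depend on how you publish your images:

version: "3"
services:
  my-service:
    # image published to the GitLab Container Registry by your build process
    image: registry.gitlab.com/<group>/my-service:latest
    # bring the container back up after reboots or crashes
    restart: unless-stopped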
docker-compose up -d
Brings the container up from the new image, running in the background (-d option). If the image did not change, docker-compose leaves the running container as it is.
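To check that the container really came up, and which image it is using, something like this works (both are standard docker-compose subcommands):

# show the state of the containers defined in this docker-compose.yml
docker-compose ps
# show which image (and image ID) each container is running
docker-compose images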
docker image prune -f
Removes old images that are no longer tagged or used, saving disk space; the -f option skips the confirmation prompt.
Now, the main point that actually keeps the images updated. After creating all your scripts, you can create symbolic links so that cron executes them. This way your containers will check for updates every hour:

ln -sf /docker-images/my-service/update.sh /etc/cron.hourly/my-service
ln -sf /docker-images/my-service2/update.sh /etc/cron.hourly/my-service2
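Two details worth checking: the scripts must be executable, and on Debian-based systems (including Raspberry Pi OS) run-parts, which executes /etc/cron.hourly, skips files whose names contain a dot, which is why the links above drop the .sh extension. A quick sanity check could be:

# make sure the update scripts are executable
chmod +x /docker-images/my-service/update.sh /docker-images/my-service2/update.sh
# list what cron would actually run every hour, without running it
run-parts --test /etc/cron.hourly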
Bonus: Dockerfile vs docker-compose.yml
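In short, a Dockerfile describes how an image is built: the base image, the dependencies and the code copied into it. A docker-compose.yml describes how containers are run from images: which image to use, ports, volumes, environment and restart policy. In this post only docker-compose.yml is needed on the boards, because the image is built and pushed to the registry somewhere else and only pulled here. Just as an illustration, assuming a small Python service, the Dockerfile behind the pulled image could be something like:

# base image shared by all boards
FROM python:3-slim
# copy the service code into the image
WORKDIR /app
COPY . /app
# command executed when the container starts
CMD ["python", "main.py"]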
Conclusion
Again, creating a scalable system is very important, in every scenario. At the beginning, working directly with folders and files updated via git was enough, but at some point managing ten or more boards was no longer easy.
There is no doubt that Docker is a great tool, and in this case it makes implementing service updates much easier. It is worth pointing out that Docker consumes a fair amount of memory, so it is necessary to study each scenario and analyze whether you can handle all your services with Docker; that depends, of course, on your systems and how heavy they are.
