Automate the update of multiple containers

 


Introduction

Creating containers and working with Docker is a great skill and good practice, especially important when you have one or more services that must be isolated. I will not go into how Docker works (that could be a topic for another post, a great one I would say), but I will write about how using it can make other processes easier, like a system update.

At a certain point I needed to keep five services running and updated on several Raspberry Pi boards. Implementing Docker and creating containers to isolate those services made it possible to build a system that updates, and keeps updated, each container (each one running a service) on each board. This significantly improves version control and scalability, letting each container perform its own update based on a common base image shared by all boards, one image per service.

A quick note: I said I would not talk about Docker itself, but I thought it was important to explain why it matters here. When you develop a service based on code you wrote, in the usual development process that code is versioned in some remote repository. So why not make a service that periodically runs git pull, updates the code and restarts the service? The point of using Docker is to freeze the image and guarantee that the service will not crash because of some broken or deprecated package.


Updating the containers

Let's start from the assumption that you already have your images built, up to date and hosted somewhere (like the GitLab Container Registry, in my case). The idea is to update each container from those remote images every hour; this way you can just update the remote image and your boards will pick up the new image at some point.

In the board, two steps are necessary:
  1. Manage credentials
  2. Download and run the image

To execute these commands you can create a bash file (.sh) for each service and define each step in it. For example: if I have five different containers running, I can create five folders, each one containing its script (.sh) and docker-compose.yml file (at the end there is a bonus topic about Dockerfile and docker-compose.yml; if you have any doubts, check it out). I guess the easiest way to explain this is to share the script (update.sh) and describe each line:


#!/bin/bash

cd /docker-images/my-service
export DOCKER_CONFIG=~/.my-service
docker-compose pull my-service
docker-compose up -d
docker image prune -f


Before the explanation, here is the folder tree:

docker-images/
├── my-service
│   ├── docker-compose.yml
│   └── update.sh
└── my-service2
    ├── docker-compose.yml
    └── update.sh

Now, explaining each line:

cd /docker-images/my-service

First of all, it is necessary to change to the directory that contains the docker-compose.yml of the container to be updated. This is needed because each service keeps its docker-compose file in a different folder, and docker-compose reads the file from the current directory.
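If you prefer not to change directories, docker-compose also accepts the -f flag to point at a specific compose file. A roughly equivalent call (assuming the classic docker-compose v1 syntax used in this post) would be:

docker-compose -f /docker-images/my-service/docker-compose.yml pull my-service

I keep the cd approach here to keep the script short.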

export DOCKER_CONFIG=~/.my-service

DOCKER_CONFIG is an environment variable that tells Docker where to find the credentials used to pull your image. If you are using the GitLab Container Registry, for example, you must create an access token to be able to pull images. Once you have this token, the command above tells Docker that its configuration lives in ~/.my-service, in a file named config.json. The config file looks something like this (~/.my-service/config.json):

{
  "auths": {
    "registry.gitlab.com": {
      "auth": "your-token-here"
    }
  }
}
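The "auth" value is not the raw token: Docker stores the base64 encoding of the username:token pair there. The easiest way to generate this file is to let docker login write it for you while DOCKER_CONFIG points at the target folder; a minimal sketch, with placeholder username and token values:

export DOCKER_CONFIG=~/.my-service
echo "<your-token-here>" | docker login registry.gitlab.com -u <your-username> --password-stdin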

Note: I used the same name throughout the whole process to make it easier to relate the resources that belong to the same container, but you can change it.

docker-compose pull my-service

Uses docker-compose to pull the new image, if one exists. If the local image has the same hash as the remote one, nothing has changed and nothing new is downloaded.

docker-compose up -d

Brings the container up from the new image, running in the background (-d option). If the image did not change, the running container is left as it is.

docker image prune -f

Removes the old image that is no longer used or needed, saving space.
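If you want to confirm that a given run actually downloaded a new image, one optional sanity check (a small sketch using the image reference from the compose file shown later; adjust it to your registry path) is to compare the image ID before and after the pull:

before=$(docker inspect --format '{{.Id}}' registry.gitlab.com/user/repository:tag 2>/dev/null)
docker-compose pull my-service
after=$(docker inspect --format '{{.Id}}' registry.gitlab.com/user/repository:tag)
[ "$before" != "$after" ] && echo "new image downloaded"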


Now the main point, the part that actually triggers the updates. After creating all your scripts you can create symbolic links so that cron executes them. This way your containers will check for updates every hour:

ln -sf /docker-images/my-service/update.sh /etc/cron.hourly/my-service

ln -sf /docker-images/my-service2/update.sh /etc/cron.hourly/my-service2
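
One detail worth checking: on Debian-based systems such as Raspberry Pi OS, run-parts (which executes the files in /etc/cron.hourly) only runs scripts that are executable, and it ignores file names containing dots, which is why the symlinks above drop the .sh extension. A quick sanity check:

chmod +x /docker-images/my-service/update.sh /docker-images/my-service2/update.sh
run-parts --test /etc/cron.hourly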


Bonus: Dockerfile vs docker-compose.yml

I decided to cover this topic for two reasons: I had doubts at the beginning about the difference between these two resources (just because both contain "docker" in the name, I guess), and also because docker-compose.yml is used in the update process, so it is necessary to know what it is and how to use it.

A Dockerfile is the recipe to create your Docker image. You can define the base image, create and move folders, define the packages to be installed and their versions, and also describe the command to run when the container starts. Like I said before, this post does not aim to explain Docker itself, but it is nice to know that the Dockerfile produces an isolated image that can be hosted somewhere.
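As an illustration only, here is a minimal sketch of what such a recipe can look like for a hypothetical Python service (the base image, paths and commands are placeholders, not the actual services from this post):

FROM python:3.9-slim

# copy the service code into the image
WORKDIR /app
COPY . /app

# install pinned dependencies
RUN pip install --no-cache-dir -r requirements.txt

# command executed when the container starts
CMD ["python", "main.py"]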

On the other side, docker-compose.yml is also a recipe, not for creating the image but for running and accessing it. This file determines where the image created with the Dockerfile will be pulled from, and it also defines what the container can access: folders, serial devices, privileges and so on. To manage docker-compose.yml you use docker-compose commands. Here is an example:

version: '3'
services:
  my-service:
    container_name: my-service
    image: registry.gitlab.com/user/repository:tag
    network_mode: host
    restart: always
    volumes:
      - /service-1:/service-1
      - /etc/localtime:/etc/localtime:ro
    privileged: true
    tty: true

Conclusion

Again, creating a scalable system is very important, in every scenario. At the beginning, working directly with folders and files updated with git was enough, but at some point managing ten or more boards was no longer easy.

There is no doubt that Docker is a great tool, and in this case it makes implementing service updates much easier. It is worth pointing out that Docker consumes a fair amount of memory, so it is necessary to study each scenario and analyze whether you can handle all your services with Docker; that depends, of course, on your systems and how heavy they are.
