Keep your machines updated with GitLab; Ansible inside

One of the more mundane aspects of managing a server, or a bunch of servers, is running repeatable tasks on each one of them.

Most sysadmins nowadays have some sort of automation in place for this; however, not everyone has the time to keep their machines updated, for example.

So, this article is written as an absolute starting point for what can be achieved with pipeline scheduling for system automation.

How does a pipeline work in GitLab?

According to GitLab’s documentation:

Pipelines are the top-level component of continuous integration, delivery, and deployment.

Pipelines comprise:

  • Jobs, which define what to do. For example, jobs that compile or test code.
  • Stages, which define when to run the jobs. For example, stages that run tests after stages that compile the code.

Jobs are executed by runners. Multiple jobs in the same stage are executed in parallel, if there are enough concurrent runners.

If all jobs in a stage succeed, the pipeline moves on to the next stage.

If any job in a stage fails, the next stage is not (usually) executed and the pipeline ends early.

So basically all we need to start automating things is a runner.

Luckily, we’ve already published an article that covers this, for example using the Kubernetes executor.

Getting started with our repo

Since we’re using a pipeline, we’re also going to need a repository to save and commit our work.

Go to your GitLab instance and create a new repo. In this case we’re going to call it: vm-update

Once the repository has been created, we need to create a folder in it; let’s call it ansible:

mkdir ansible

Within that folder, we’re going to need a file called hosts and a folder called debian. Keep in mind that you can have several playbooks depending on the distro type, such as CentOS.

Actually, let’s create a centos folder too. You should end up with the following directories in the repo:

ansible/debian/
ansible/centos/
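Starting from the repository root, that layout (plus the hosts file we’ll fill in next) can be created in one go:

```shell
# Create the inventory file and one playbook folder per distro family
mkdir -p ansible/debian ansible/centos
touch ansible/hosts

# Verify the layout
find ansible -type d
```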

Ansible Hosts

For Ansible to know where to connect, it needs a hosts file. Create that file:

vi ansible/hosts

And add some machine descriptors to it (update with the IPs for your infra):

[debian]
120.0.120.200 # gitlab
120.0.120.201 # k3s
[centos]
120.0.120.202 # webserver
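Optionally — this is a convenience, not something the pipeline below requires — you can drop an ansible.cfg next to the repo root so you don’t have to pass the inventory path on every local run:

```ini
[defaults]
inventory = ansible/hosts
host_key_checking = False
```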

Ansible Playbooks

You’re going to need some playbooks for this to work.

Debian

Let’s say you have a Debian-type OS (Debian, Ubuntu, etc.) that needs to be updated. Create the following file:

vi ansible/debian/playbook.yml

Within that file place the following contents:

---
- hosts: debian

  tasks:

    - name: "Update repositories and upgrade packages"
      become: yes
      apt:
        update_cache: yes
        upgrade: yes
        force_apt_get: yes
        allow_unauthenticated: no
        autoremove: yes
        autoclean: yes
        install_recommends: no
        only_upgrade: yes
      tags: upgrade

CentOS

As for the CentOS based machines we can use this (more complete) playbook:

vi ansible/centos/playbook.yml

With the following contents:

---
- hosts: centos

  tasks:

    - name: check packages for updates
      shell: yum list updates | awk 'f;/Updated Packages/{f=1;}' | awk '{ print $1 }'
      register: updates
      changed_when: updates.stdout_lines | length > 0
      args:
        warn: false

    - name: display count
      debug:
        msg: "Found {{ updates.stdout_lines | length }} packages to be updated:\n\n{{ updates.stdout }}"

    - when: updates.stdout_lines | length > 0
      block:
        - name: install updates using yum
          yum:
            name: "*"
            state: latest

        - name: install yum-utils
          package:
            name: yum-utils

        - name: check if reboot is required
          shell: needs-restarting -r
          failed_when: false
          register: reboot_required
          changed_when: false

    - when: updates.stdout_lines | length > 0 and reboot_required.rc != 0
      block:
        - name: reboot the server if required
          shell: sleep 3; reboot
          ignore_errors: true
          changed_when: false
          async: 1
          poll: 0

        - name: wait for server to come back after reboot
          wait_for_connection:
            timeout: 600
            delay: 20
          register: reboot_result

        - name: reboot time
          debug:
            msg: "The system rebooted in {{ reboot_result.elapsed }} seconds."

Now you should have the following files in your repo:

ansible/debian/playbook.yml
ansible/centos/playbook.yml
ansible/hosts

SSH Connections

For this to work the GitLab runner is going to need a key pair to be able to connect to the servers.

I’m not going to go into much detail about it in this article, but you can create a folder in the repo to hold those keys:

mkdir ssh

And put both the private and the public key there.
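If you don’t already have a dedicated key pair, one can be generated straight into that folder. The key type and comment here are just an example; the file name matches what the pipeline below expects:

```shell
# Make sure the folder exists, then generate a passphrase-less key pair
mkdir -p ssh
ssh-keygen -t ed25519 -f ssh/id_rsa -N "" -C "gitlab-runner"

# The public key then goes into ~/.ssh/authorized_keys on each target host
cat ssh/id_rsa.pub
```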

NOTE: I know this can be made much more secure; let’s save that for another article.

GitLab Pipeline

For a GitLab pipeline to work you’re going to need a .gitlab-ci.yml file:

vi .gitlab-ci.yml

And add the following contents:

image: mullnerz/ansible-playbook

stages:
  - update_centos
  - update_debian

update_debian:
  stage: update_debian
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
    ANSIBLE_PRIVATE_KEY_FILE: "ssh/id_rsa"
  script:
    - chmod 600 ssh/id_rsa
    - ansible-playbook ansible/debian/playbook.yml -i ansible/hosts --tags upgrade -u vectops --private-key=ssh/id_rsa
  only:
    - master

update_centos:
  stage: update_centos
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
    ANSIBLE_PRIVATE_KEY_FILE: "ssh/id_rsa"
  script:
    - chmod 600 ssh/id_rsa
    - ansible-playbook ansible/centos/playbook.yml -i ansible/hosts -u vectops --private-key=ssh/id_rsa
  only:
    - master

In this YAML you can see we’re running a pre-built Docker image that ships all of the Ansible tooling we need, and that the commands connecting to the Ansible hosts use the vectops user.

Adjust it to your setup.

GitLab Scheduling

The whole idea is for this pipeline to run automatically on a scheduled day.

For this you can take advantage of GitLab’s pipeline scheduler. In your GitLab web interface, go to:

CI/CD > Schedules

Then click on New schedule, set up the properties, and save the schedule.

Et voilà! You can now let the pipeline do its job and keep your machines updated.

Now, I know that some critical infrastructure can’t be updated this way, because some package updates can break things. However, validation steps can be added between stages so the job stays automated but a human gets to check it before the upgrade happens.
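For example — purely a sketch, reusing the debian job from the pipeline above — a manual approval gate can be expressed in GitLab CI with when: manual, so the update only runs after someone clicks the play button on the pipeline view:

```yaml
update_debian:
  stage: update_debian
  when: manual          # a human has to trigger this job from the pipeline view
  allow_failure: false  # later stages wait until this job has run and passed
  script:
    - ansible-playbook ansible/debian/playbook.yml -i ansible/hosts --tags upgrade -u vectops --private-key=ssh/id_rsa
```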

Or you could modify the pipeline to apply only security patches, which shouldn’t break anything.
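On Debian-family hosts, one way to do that — assuming the unattended-upgrades package, which in its default configuration only pulls from the security sources — is to swap the full-upgrade task for something like this:

```yaml
- name: "Make sure unattended-upgrades is installed"
  become: yes
  apt:
    name: unattended-upgrades
    state: present

- name: "Apply pending security updates once"
  become: yes
  command: unattended-upgrade
  tags: upgrade
```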

This is just a starting point, it can be scaled or modified to suit your needs.
