Vectops: Ops with some vectoring on it

Terraform 1 and Proxmox; working as it should

And work it should…

A while ago we typed up a detailed article about using Terraform to provision VMs on Proxmox nodes; you can find it here.

That article depended on versions that are no longer supported, which can and eventually will cause problems.

So we updated the dependencies and versions, and refactored some definitions so that the process works with the latest versions at the time of writing.

Current versions:

  • Terraform 1.0
  • Proxmox 6.4

Hooray Terraform Registry!

The Terraform Registry finally hosts Telmate’s Proxmox provider, so we can declare it in the .tf file and forget about it.

You can find the provider here: Telmate’s Proxmox provider

This means you no longer need to worry about installing the provider manually or working out the kinks of building it with Go.

Terraform Definitions

There are some changes that need to be made to the old main.tf definition file. The first one is:

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.7.1"
    }
  }
}

Afterwards, we need to change some other entries that are no longer supported:

Disk definitions

These definitions no longer apply, as the provider now detects them automatically:

    id              = 0
    storage_type    = "lvm"
    iothread        = true

Network definitions

The same happens to some of the network definitions:

    id              = 0
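
For reference, a trimmed-down VM resource under provider 2.7.1 might look roughly like the following. This is a minimal sketch, not the actual definition from our repo: the attributes shown are the usual ones for proxmox_vm_qemu, but the node, template and storage names are made up, so check the linked GitHub repo below for the real, working file.

resource "proxmox_vm_qemu" "example_vm" {
  name        = "example-vm"          # illustrative VM name
  target_node = "pve01"               # your Proxmox node
  clone       = "debian10-template"   # template to clone from
  cores       = 2
  memory      = 2048

  disk {
    # note: no "id" or "storage_type" entries anymore
    type    = "scsi"
    storage = "local-lvm"
    size    = "20G"
  }

  network {
    # note: no "id" entry anymore
    model  = "virtio"
    bridge = "vmbr0"
  }
}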

Github repo

We’ve uploaded the updates to GitHub; you can also check out the commit history to see the changes for yourself:

https://github.com/galdorork/tf-Proxmox

Also, we’ve updated the Docker image we use to run pipelines that need Terraform and (maybe) Terragrunt (wink, wink):

https://github.com/galdorork/terragrunt-proxmox-module


We’ll try to keep updating old posts to work on newer versions, but this one really needed some love.

BAIIII 🙂

How to fking use Git; like a pro

TLDR; Just read this:

https://git-scm.com/book/en/v2


Dear Devs,

Some tasks are easily performed using a GUI, some are not, and some are simply more convenient there. However, not all tooling is available all the time.

I recently had an issue with a version control interface that was failing, throwing 500 errors everywhere, but guess what? The git services were still working.

It’s not only your responsibility, it’s your duty as a corporate developer to KNOW how to use git without all the bells and whistles of a graphical interface.

Using simple commands such as:

git branch
git checkout
git status
git merge
git commit --amend -m 

is extremely helpful and extremely important when you want to work fast.
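
As a minimal sketch of a GUI-free workflow (the branch and file names below are made up for the example):

git status                      # see where you stand
git checkout -b fix/login-bug   # create and switch to a new branch
# ...edit files...
git add app/login.py            # stage the change
git commit -m "Fix login bug"   # commit it
git checkout master             # go back to the main branch
git merge fix/login-bug         # merge your work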

Take this into account in your current and future jobs.

PD: yes, this is a rant.

First steps with git server; the right way

Yuup! What’s going on? In this article, we go over the first steps to take with your own git server. Don’t have one yet? Read this article.

Remember, any host with the git software and a network connection can be a git server; you just need SSH access to it to be able to use it.

You can install git on your Raspberry Pi, a virtual machine in your homelab, whatever you want; just install it!

Remote server

OK, right. Now you have the git software installed, but you need to create the repository first. How? Simple: just go to the folder where you want to put the repository (this directory needs to be accessible via SSH):

# change directory
cd /home/cooluser/repositories/

# make new directory with .git extension
mkdir coolrepo.git

# enter the new directory
cd coolrepo.git

# run the init command
git init --bare 

Local machine

Create the directory where you want to have this repository:

mkdir coolrepo
cd coolrepo

Now, you need to prepare this directory for git:

git init
touch README.md
git add .
git commit -m "First commit" -a
git remote add origin sshuser@server_ip:/home/cooluser/repositories/coolrepo.git
git push origin master

Et voilà, you have your own fully functional repository.

In the future, you can do a git clone like:

git clone sshuser@server_ip:/home/cooluser/repositories/coolrepo.git

Build your own git server on WD MyCloud EX2

If you want to build your own server to create and manage private repositories and you have a WD MyCloud EX2, this post may help you do it in a painless manner.

First of all, you need to enable SSH access on your server and install the Entware software.

After you have installed the Entware software, connect via SSH to your NAS and do the magic:

ln -s /opt/bin/opkg /usr/bin/opkg
opkg install git

Yes! Now you have the git software installed on your NAS.

Now we should create some symlinks so that git clone, push and the rest work without errors; just execute these commands:

ln -s /opt/bin/git /usr/bin/git
ln -s /opt/bin/git-receive-pack /usr/bin/git-receive-pack
ln -s /opt/bin/git-shell /usr/bin/git-shell
ln -s /opt/bin/git-upload-archive /usr/bin/git-upload-archive
ln -s /opt/bin/git-upload-pack /usr/bin/git-upload-pack

Finally, you can use the WD MyCloud as a git server.
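
As a quick usage sketch (the share path and SSH user below are placeholders; use whatever exists on your NAS), create a bare repository on the NAS and clone it from your workstation:

# on the NAS
mkdir -p /shares/repos/myproject.git
cd /shares/repos/myproject.git
git init --bare

# on your workstation
git clone ssh://youruser@nas_ip/shares/repos/myproject.git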

Nagios-like monitoring Linux system services within Grafana

Keeping it up with the monitoring series, I guess…

But this time the approach is slightly different.

Assumptions:

  • You have a Telegraf service up and running on the same host you want to monitor system services from
  • You have an InfluxDB instance up and running, receiving data from the previously mentioned Telegraf service
  • You have a Grafana instance up and running, making it possible to visualize data from InfluxDB
  • In this example, the Nginx web server will be monitored

INSTALL SYSDWEB

Follow the installation instructions provided in the repository’s README file.

Then save the sysdweb.conf file to /etc/sysdweb/sysdweb.conf:

sudo mkdir /etc/sysdweb
sudo wget https://raw.githubusercontent.com/ogarcia/sysdweb/master/sysdweb.conf -O /etc/sysdweb/sysdweb.conf
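
That config file is where the services exposed by Sysdweb are defined; the short name used later in the API path (ngx in this article) comes from a section in it. As a rough sketch of what the Nginx entry looks like (the exact key names are an assumption on my part; trust the file you just downloaded and the project README over this snippet):

[ngx]
title = Nginx web server
unit = nginx.service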

CREATE A SYSTEMD SERVICE UNIT FOR SYSDWEB

Save the following to /lib/systemd/system/sysdweb.service:

[Unit]
Description=Control systemd services through Web or REST API
Documentation=https://github.com/ogarcia/sysdweb
After=network.target
Requires=dbus.socket

[Service]
ExecStart=/usr/local/bin/sysdweb -c /etc/sysdweb/sysdweb.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the new Sysdweb service:

sudo systemctl daemon-reload
sudo systemctl enable --now sysdweb

Check if the service is working:

$ curl -u 'sysdweb:supersecretpassword' 127.0.0.1:10080/api/v1/ngx/status
{"status": "active"}

CREATE A SYSTEM USER FOR SYSDWEB

There are multiple ways of interacting with Sysdweb (just review the config file), but this time I will be using the system user method:

sudo adduser sysdweb

PREPARE TELEGRAF

Add the following block to /etc/telegraf/telegraf.conf:

[[inputs.http_response]]
  urls = ["http://localhost:10080/api/v1/ngx/status"]
  response_timeout = "5s"
  username = "sysdweb"
  password = "supersecretpassword"
  response_string_match = "{\"status\": \"active\"}"

Apply changes to the Telegraf configuration:

sudo systemctl restart telegraf
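
If you don’t want to wait for the next collection interval, you can also run Telegraf once in test mode against just this input (assuming the default config path):

telegraf --config /etc/telegraf/telegraf.conf --input-filter http_response --test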

LET GRAFANA DO THE REST

After adding a new panel and selecting InfluxDB as the data source, a query as simple as the following will show a "success" string in your panel (picking Stat as the visualization type):

SELECT result_type FROM http_response WHERE server =~ /ngx/

At this point, by adding a value mapping from the string "success" to "UP", you will then be able to monitor the current status of the Nginx service from your host at a glance.

In the same way, by adding another value mapping from the string "response_string_mismatch" to "KO", it will become pretty clear that something is wrong when Nginx goes down.

FINAL NOTES

I know there is a Telegraf plugin called inputs.systemd_units that could directly cover the need described in this article, but finding out that Sysdweb could be effectively used for the same purpose was hella fun.

Finally, shout out to Óscar García for bringing this useful tool to life!

Cheers!

Install Nextcloud with Apache2 on Debian 10

PRE-REQUISITES
  • Have your own domain and be able to configure DNS accordingly
  • Have access to a Debian host with root privileges

INSTALL DEPENDENCIES

apt update
apt install apache2 libapache2-mod-php php php-gd php-curl php-zip php-dom php-xml php-simplexml php-mbstring php-apcu php-mysql php-intl php-bcmath php-gmp php-imagick unzip mariadb-server certbot

CONFIGURE DATABASE

mysql -u root -p

CREATE DATABASE your_database;
GRANT ALL ON your_database.* TO 'your_user'@'localhost' IDENTIFIED BY 'your_password';
FLUSH PRIVILEGES;

DOWNLOAD NEXTCLOUD

Run the following commands:

cd /tmp
wget https://download.nextcloud.com/server/releases/latest.zip
unzip latest.zip
mv nextcloud/* /var/www/html/
mv nextcloud/.* /var/www/html/
rmdir nextcloud
chown -R www-data. /var/www/html/

SET UP NEXTCLOUD

Edit /var/www/html/config/config.php file and:

1) Declare your public access domain:

'trusted_domains' =>
  array (
    0 => 'your.domain.tld',
  ),

2) Disable new user registration:

'simpleSignUpLink.shown' => false,

3) Configure APCu as cache memory system:

'memcache.local' => '\\OC\\Memcache\\APCu',

SET UP APACHE2

1) Make it run at startup:

systemctl enable --now apache2

2) Enable HTTPS traffic:

a2enmod ssl

3) Issue a new Let’s Encrypt SSL certificate:

certbot certonly -d your.domain.tld

4) Set up Apache virtual host:

a2ensite your.domain.tld

Here’s a /etc/apache2/sites-available/your.domain.tld.conf file sample:

<VirtualHost *:80>
  ServerName your.domain.tld
  Redirect permanent "/" "https://your.domain.tld/"
</VirtualHost>

<VirtualHost *:443>
  ServerName your.domain.tld

  # Example SSL certificate path for Let's Encrypt
  SSLEngine On
  SSLCertificateFile /etc/letsencrypt/live/your.domain.tld/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/your.domain.tld/privkey.pem

  DocumentRoot /var/www/html

  CustomLog /var/log/apache2/your.domain.tld-access.log combined
  ErrorLog /var/log/apache2/your.domain.tld-error.log
</VirtualHost>

Apply changes by running:

apachectl configtest
systemctl reload apache2

At this point you should be able to open https://your.domain.tld in any web browser and follow the web installation wizard.

FIX SECURITY WARNINGS ON A FRESH INSTALLATION

1) Add HSTS header

Edit /etc/apache2/sites-available/your_vhost.conf and within HTTPS VirtualHost block add:

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
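
The Header directive relies on Apache’s mod_headers; if it is not already enabled on your host, enable it first:

a2enmod headers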

2) Increase default PHP memory_limit value

Edit /etc/php/7.3/apache2/php.ini and set:

memory_limit = 512M ; at least this much

3) Disable PHP output_buffering

Edit /etc/php/7.3/apache2/php.ini and set:

output_buffering = off

4) Fix missing database indices

Run these commands:

chmod +x /var/www/html/occ
sudo -u www-data /usr/bin/php /var/www/html/occ db:add-missing-indices

5) Fix webdav URLs

Edit /etc/apache2/sites-available/your_vhost.conf and within the HTTPS VirtualHost add:

  RewriteEngine On
  RewriteRule ^/\.well-known/carddav https://your.domain.tld/remote.php/dav/ [R=301,L]
  RewriteRule ^/\.well-known/caldav https://your.domain.tld/remote.php/dav/ [R=301,L]
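
These RewriteRule directives need mod_rewrite, so enable it as well if it isn’t already:

a2enmod rewrite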

Apply all these last changes by running:

systemctl restart apache2

Monitoring an OpenWRT router with Grafana and InfluxDB

ENVIRONMENT
  • A Debian 10 VM running InfluxDB (v1.7.10+) and Grafana (v6.7.2+) services
  • Network appliance running OpenWRT (v18.06+)

INSTALL AND CONFIGURE INFLUXDB (@ VM)

Install needed package and dependencies:

sudo wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian buster stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt update
sudo apt install -y influxdb

NOTE: if you receive the following error:

E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation

You need to install the ‘gpg’ package:

sudo apt install gpg

Once InfluxDB is installed, edit the /etc/influxdb/influxdb.conf file to enable the collectd plugin (locate and uncomment the following lines):

[[collectd]]
   enabled = true
   bind-address = ":25826"
   database = "${YOUR_DB_NAME}"
   retention-policy = ""
   typesdb = "/usr/local/share/collectd/types.db"
   security-level = "none"
   batch-size = 5000
   batch-pending = 10
   batch-timeout = "10s"
   read-buffer = 0

Create ‘types.db’ file:

sudo mkdir -p /usr/local/share/collectd/
sudo wget -O /usr/local/share/collectd/types.db https://raw.githubusercontent.com/CactusProjects/openwrt_influxdb/master/types.db

Start and enable the InfluxDB service on startup:

sudo systemctl enable --now influxdb

Create Influx database:

influx
CREATE DATABASE ${YOUR_DB_NAME}
exit

Assign a one-month retention policy to the database (run these from inside the influx shell as well):

DROP RETENTION POLICY "autogen" ON "${YOUR_DB_NAME}"
CREATE RETENTION POLICY "one_month" ON "${YOUR_DB_NAME}" DURATION 730h0m REPLICATION 1 DEFAULT

INSTALL AND CONFIGURE GRAFANA (@ VM)

Install needed package:

sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
sudo apt update
sudo apt install grafana

NOTE: if you receive the following error:

bash: add-apt-repository: command not found

You need to install the ‘software-properties-common’ package:

apt install software-properties-common
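
If apt update then complains about a missing signature for the Grafana repository, import Grafana’s signing key too (the key URL below is the one Grafana published at the time of writing; double-check it against their docs):

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -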

If you want to run Grafana service on port 80, edit the /etc/grafana/grafana.ini file:

http_port = 80

After that, you need to run this command so the Grafana service can bind to port 80 while running as a non-root user (the default user is ‘grafana’):

sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/grafana-server

Start and enable the Grafana service on startup:

sudo systemctl enable --now grafana-server.service

INSTALL AND CONFIGURE COLLECTD (@ OPENWRT)

Install needed packages:

sudo opkg install collectd collectd-mod-cpu collectd-mod-dns collectd-mod-interface collectd-mod-iwinfo collectd-mod-load collectd-mod-logfile collectd-mod-memory collectd-mod-network collectd-mod-openvpn collectd-mod-ping collectd-mod-rrdtool collectd-mod-thermal collectd-mod-uptime collectd-mod-wireless

Edit /etc/collectd/collectd.conf to configure the service:

BaseDir "/var/run/collectd"
Include "/etc/collectd/conf.d"
PIDFile "/var/run/collectd.pid"
PluginDir "/usr/lib/collectd"
TypesDB "/usr/share/collectd/types.db"
Interval 10
ReadThreads 2
Hostname "${YOUR_OPENWRT_HOSTNAME_HERE}"

LoadPlugin ping
<Plugin ping>
        TTL 127
        Interval 10
        Host "1.1.1.1"
</Plugin>

LoadPlugin memory
LoadPlugin cpu
LoadPlugin load
LoadPlugin uptime

LoadPlugin interface
<Plugin interface>
        IgnoreSelected false
        Interface "YOUR_INTERFACE_1_NAME_HERE" # i.e. "pppoe-wan"
        Interface "YOUR_INTERFACE_2_NAME_HERE" # i.e. "br-lan"
        Interface "YOUR_INTERFACE_3_NAME_HERE"
</Plugin>

LoadPlugin dns
<Plugin dns>
        Interface "YOUR_INTERFACE_1_NAME_HERE" # i.e. "pppoe-wan"
        Interface "YOUR_INTERFACE_2_NAME_HERE" # i.e. "br-lan"
        Interface "YOUR_INTERFACE_3_NAME_HERE"
        IgnoreSource "127.0.0.1"
</Plugin>

LoadPlugin thermal
<Plugin thermal>
        IgnoreSelected false
</Plugin>

LoadPlugin network
<Plugin network>
        Server "${YOUR_INFLUXDB_SERVER_ADDRESS_HERE}" "25826"
        CacheFlush 86400
        Forward false
</Plugin>

Start and enable Collectd service on startup:

sudo /etc/init.d/collectd start
sudo /etc/init.d/collectd enable

TESTING IF IT WORKS PROPERLY

Once OpenWRT is configured, check if it’s working. SSH to your Grafana/InfluxDB server and issue:

influx
use ${YOUR_DB_NAME}
show measurements
select * from uptime_value

And verify that the records get updated every 10 seconds.

GRAFANA DASHBOARD

Log in to your Grafana web UI and follow these steps:

CREATE NEW DATA SOURCE

  • Go to https://${YOUR_GRAFANA_SERVER_ADDRESS}/datasources and click on "Add"
  • Change "Name"
  • Fill "URL" field under HTTP settings
  • Fill "Database" field and set "HTTP method" to GET under "InfluxDB Details" settings

IMPORT AN EXISTING DASHBOARD

  • Place your mouse over the + sign at the left side of screen
  • Click on Import
  • In the Grafana.com Dashboard field, type "11858"
  • Click on the Load button

IMPORTANT!

There are things on that dashboard that won’t work by default right after you import it, such as the Wi-Fi related stuff and maybe other elements, depending on your setup.

At this point, it’s time to review the query definitions and modify them to suit your needs.

Keep your machines updated with GitLab; Ansible inside

One of the more mundane tasks involving a server, or a bunch of servers, is running repetitive tasks on each one of them.

Most sysadmins nowadays have some sort of automation in place to handle this; however, not everyone has the time to keep their machines updated, for example.

So, this article is written as an absolute starting point for what can be achieved using pipeline scheduling for system automation.

How does a pipeline work in GitLab

According to GitLab’s documentation:

Pipelines are the top-level component of continuous integration, delivery, and deployment.

Pipelines comprise:

  • Jobs, which define what to do. For example, jobs that compile or test code.
  • Stages, which define when to run the jobs. For example, stages that run tests after stages that compile the code.

Jobs are executed by runners. Multiple jobs in the same stage are executed in parallel, if there are enough concurrent runners.

If all jobs in a stage succeed, the pipeline moves on to the next stage.

If any job in a stage fails, the next stage is not (usually) executed and the pipeline ends early.

So basically all we need to start automating things is a runner.

Luckily we’ve already published an article that covers this using the Kubernetes executor; you can find it here.

Getting started with our repo

Since we’re using a pipeline, we’re also going to be needing a repository to save and commit our work.

Go to your GitLab instance and create a new repo. In this case we’re going to call it vm-update.

Once the repository has been created we need to create a folder in it; let’s call it ansible:

mkdir ansible

Within that folder, we’re going to need a file called hosts and a folder called debian. Take into account that you can have several playbooks depending on the distro type, such as CentOS.

Actually, let’s create a centos folder too. You should end up with the following directories in the repo:

ansible/debian/
ansible/centos/

Ansible Hosts

For Ansible to know where to connect, it needs a hosts file; create it:

vi ansible/hosts

And add some machine descriptors to it (update with the IPs for your infra):

[debian]
120.0.120.200 # gitlab
120.0.120.201 # k3s
[centos]
120.0.120.202 # webserver

Ansible Playbooks

You’re going to need some playbooks for this to work.

Debian

Let’s say you have a Debian-type OS (Debian, Ubuntu, etc.) that needs to be updated; create the following file:

vi ansible/debian/playbook.yml

Within that file place the following contents:

---
- hosts:  debian 

  tasks:

    - name: "Update repositories and upgrade packages"
      become: yes
      apt:
        update_cache: yes
        upgrade: yes
        force_apt_get: yes
        allow_unauthenticated: no
        autoremove: yes
        autoclean: yes
        install_recommends: no
        only_upgrade: yes
      tags: upgrade

CentOS

As for the CentOS based machines we can use this (more complete) playbook:

vi ansible/centos/playbook.yml

With the following contents:

---
- hosts: centos

  tasks:

    - name: check packages for updates
      shell: yum list updates | awk 'f;/Updated Packages/{f=1;}' | awk '{ print $1 }'
      changed_when: updates.stdout_lines | length > 0
      args:
        warn: false
      register: updates
    - name: display count
      debug:
        msg: "Found {{ updates.stdout_lines | length }} packages to be updated:\n\n{{ updates.stdout }}"
    - when: updates.stdout_lines | length > 0
      block:
        - name: install updates using yum
          yum:
            name: "*"
            state: latest
        - name: install yum-utils
          package:
            name: yum-utils
        - name: check if reboot is required
          shell: needs-restarting -r
          failed_when: false
          register: reboot_required
          changed_when: false
    - when: updates.stdout_lines | length > 0 and reboot_required.rc != 0
      block:
        - name: reboot the server if required
          shell: sleep 3; reboot
          ignore_errors: true
          changed_when: false
          async: 1
          poll: 0
        - name: wait for server to come back after reboot
          wait_for_connection:
            timeout: 600
            delay: 20
          register: reboot_result
        - name: reboot time
          debug:
            msg: "The system rebooted in {{ reboot_result.elapsed }} seconds."

Now you should have the following files on your repo:

ansible/debian/playbook.yml
ansible/centos/playbook.yml
ansible/hosts

SSH Connections

For this to work the GitLab runner is going to need a key pair to be able to connect to the servers.

I’m not going to go into much detail about it in this article, but you can create a folder in the repo to hold those keys:

mkdir ssh

And put both the private and the public key there.
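
One way to generate a dedicated, passphrase-less key pair for the runner (the file name and comment are just examples; remember to add the public key to the target servers’ authorized_keys):

ssh-keygen -t rsa -b 4096 -N "" -C "gitlab-runner" -f ssh/id_rsa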

NOTE: I know this can be way more secure, lets save it for another article.

GitLab Pipeline

For a GitLab pipeline to work you’re going to need a .gitlab-ci.yml file:

vi .gitlab-ci.yml

And add the following contents:

image: mullnerz/ansible-playbook 

stages:
  - update_centos
  - update_debian

update_debian:
  stage: update_debian
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
    ANSIBLE_SSH_PRIVATE_KEY_FILE: "ssh/id_rsa"
  script:
    - chmod 600 ssh/id_rsa
    - ansible-playbook ansible/debian/playbook.yml -i ansible/hosts --tags upgrade -u vectops --private-key=ssh/id_rsa
  only:
    - master

update_centos:
  stage: update_centos
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
    ANSIBLE_SSH_PRIVATE_KEY_FILE: "ssh/id_rsa"
  script:
    - chmod 600 ssh/id_rsa
    - ansible-playbook ansible/centos/playbook.yml -i ansible/hosts -u vectops --private-key=ssh/id_rsa
  only:
    - master

In this YAML you can see that we’re running a pre-built Docker image with all of the Ansible tooling we need, and that the command connecting to the Ansible hosts uses the vectops user.

Adjust it to your setup.

GitLab Scheduling

The whole idea is for this pipeline to run automatically on a scheduled day.

For this you can take advantage of GitLab’s job scheduler, on your GitLab web interface go on:

CI/CD > Schedules

Then click on New Schedule, set up the properties and save the schedule.

Et voilà! You can now let the pipeline do its job and keep your machines updated.

Now, I know that some critical infrastructure can’t be updated this way because certain package updates can break things; however, validation steps can be added between stages so the job stays automated but a human can check it before the upgrade happens, as in the sketch below.
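
For instance, a simple human gate can be sketched in .gitlab-ci.yml as a manual job that blocks later stages until someone triggers it (the job and stage names here are illustrative and the stage would also need to be added to the stages list):

approve_updates:
  stage: approve
  script:
    - echo "Updates reviewed and approved"
  when: manual
  allow_failure: false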

Or maybe modify the pipeline to just update security patches that shouldn’t break anything.

This is just a starting point; it can be scaled or modified to suit your needs.

Kubernetes executor on GitLab (not a GitLab managed cluster)

Those of us who work with GitLab, regardless of where we work or the role we thrive in (development, sysadmin, etc.), benefit hugely from GitLab’s ability to streamline and automate deployment processes.

Whether it’s a process that deploys an application or some tedious, time-consuming task that sysadmins have to perform, it can be automated.

Unfortunately, not everyone or every company can afford the enterprise version of GitLab. This is especially the case for GitLab instances that run on homelabs.

Usually this doesn’t affect the functionality of the platform; however, GitLab has a really awesome toolset for managing a Kubernetes deployment that is only included with the enterprise version.

We can work around this by using a GitLab Kubernetes executor.

Enter the Kubernetes Executor

The Kubernetes executor is basically what its name states: a pod deployed in a Kubernetes namespace that can run processes.

What kind of processes? Anything that can be done with any other executor can be done by this executor.

Once deployed, it’s a pretty hands-off experience. If you need to update it, you can do so with no downtime, and the main benefit is that, since it doesn’t run on a specific VM or piece of hardware, you don’t have yet another machine to add to your infrastructure and maintain. That last part can become overwhelming after a while (100 machines are easy to maintain; try 4000).

Requirements

This kind of executor needs some stuff to work:

  • A GitLab Instance
  • A Kubernetes instance; it doesn’t matter which type, it can be a full-fledged k8s cluster, an OpenShift cluster or even k3s.
  • A namespace on the kubernetes cluster.
  • Access to create secrets on the namespace.
  • kubectl and helm3

That’s basically all you need to do this.

Setup

Kubernetes’ side

Namespace

You’re going to need a namespace to install the executor; in this case we’re going with gitlab-executor (make sure you can access the Kubernetes instance from your machine):

kubectl create namespace gitlab-executor

GitLab registration Token

The registration token can be acquired from your GitLab instance, either instance-wide (as a GitLab admin) or defined for a project group or a single project.

The steps are pretty much the same.

For a project:

Go to the project settings -> CI/CD -> Runners -> Expand

And you’ll see the registration token.

Helm Chart

Once the namespace is created you need to set up Helm; start by adding the GitLab repo:

helm repo add gitlab https://charts.gitlab.io

Then update the repo:

helm repo update

Afterwards, you need a values.yaml file, which can be found here. From that file you’re going to need to focus on the following entries (to begin with):

gitlabUrl: https://gitlab.vectops.com/
runnerRegistrationToken: "XXXXXXXXXXXXXXXXX"
runners:
  image: ubuntu:16.04
  privileged: true

There are a lot more entries in that file and you can decide which ones you need; right now we’re focusing on the gitlabUrl, the registration token and whether the container is going to run in privileged mode (yes, this can run Docker-in-Docker).

Save the values.yaml on your local machine and then run the following command:

helm install gitlab-runner -f values.yaml -n gitlab-executor gitlab/gitlab-runner

Make sure you have the proper permissions to deploy an application in that namespace (although, if you’ve already created the namespace, you should already have those).
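
Once the chart is deployed, you can check that the runner pod came up before heading back to GitLab:

kubectl get pods -n gitlab-executor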

GitLab’s side

There’s not much to do on GitLab’s side; just wait until the runner is registered and you can use it.

Testing our shiny new runner

We’re going to perform an easy test, just a hello world to check that the runner runs a task.

For this you need a repo with the following files:

.gitlab-ci.yml

Yes, just one.

Within that .gitlab-ci.yml file you need the following contents (we’re borrowing from the official GitLab docs; the main article is here):

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

test-job2:
  stage: test
  script:
    - echo "This job tests something, but takes more time than test-job1."
    - echo "After the echo commands complete, it runs the sleep command for 20 seconds"
    - echo "which simulates a test that runs 20 seconds longer than test-job1"
    - sleep 20

deploy-prod:
  stage: deploy
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."

After you commit and push the file to your project repo, the pipeline should run automatically and show the echo results.

Using Icinga2 and Ansible: one playbook to monitor them all!

Have you ever thought of a way to monitor new hosts without having to spend much time adding the NRPE plugins, command check definitions and other custom configurations manually on each of them?

No problem, I have just faced that very same situation. Also, got tired of it pretty quickly. So how should we solve it?

The solution we are providing here is pretty simple: apply an Icinga2 monitoring template to a brand new, freshly installed machine thanks to Ansible.

NOTICE: for the examples provided we will be using Debian-like distros, so if yours is different you may have to adapt those affected parts, such as package manager related commands, specific Ansible plugins and so on.

INSTALLING DEPENDENCIES

The only things we need to set up on the new machine are the SSH keys (so we can apply our playbooks normally) and the sudo package.

For the SSH keys you can copy your public key with the following command:

ssh-copy-id -i path/to/your/key ${YOUR_USERNAME}@${YOUR_NEW_MACHINE}

In case you don’t have a key set up, you can create one as follows:

ssh-keygen -t rsa

Then fill in the information the shell is going to prompt for. After that, from inside your new machine, run the following as root (or using sudo):

apt-get update
apt-get install sudo -y

When the package is installed, be sure to run visudo and configure the user you will be using properly; otherwise the Ansible steps may fail. If you are using the root user directly (which I don’t recommend, *insert security disclaimer here*) these last steps are not needed at all.

Due to time constraints we’re not going to cover the Icinga2 installation in this article.

We’re going to assume you’ve already set it up and it’s running properly.

SETTING UP THE ANSIBLE STUFF

From the machine you’re going to be using for the Ansible deployments, you will need to have a directory structure such as this one:

|-- inventories
|   `-- my_machines
|       `-- hosts
|-- playbooks
    |-- icinga_add_host.yml
    |-- install_nrpe_client.yml
    |-- files
        |-- nrpe
            |-- nrpe.cfg.template
            `-- nrpe_local.cfg.template

In this configuration, two playbooks are set up. The first one, install_nrpe_client.yml, has the following content:

---
- hosts: "{{ host }}"

  tasks:

    - name: "Install NRPE client and monitoring plugins"
      apt:
        pkg: ["nagios-nrpe-server", "monitoring-plugins", "nagios-plugins-contrib"]
        force_apt_get: yes
        update_cache: yes
        state: present
      tags: install

    - name: Copy NRPE service core files
      copy: src={{ item.src }} dest={{ item.dest }}
      with_items:
        - { src: 'nrpe/nrpe_local.cfg.template', dest: '/etc/nagios/nrpe_local.cfg' }
        - { src: 'nrpe/nrpe.cfg.template', dest: '/etc/nagios/nrpe.cfg' }
      tags: copy

    - name: Restart nagios-nrpe-server service
      service: name=nagios-nrpe-server state=restarted
      tags: restart

The playbook just connects to the machine and performs the package setup needed to run the monitoring services on it.

The second one, icinga_add_host.yml, has the following content:

---
- hosts: ${YOUR_ICINGA2_SERVER}

  tasks:

    - name: "Add host to Icinga"
      copy:
        dest: /etc/icinga2/conf.d/homelab/{{ host }}.conf
        content: |
          object Host "{{ host }}" {
            import "generic-host"
            address = "{{ host }}"
            vars.os = "Linux"
            vars.disks["disk /"] = {
              disk_partitions = "/"
            }
            vars.notification["mail"] = {
              groups = [ "icingaadmins" ]
            }
          }
      tags: add-host-template

    - name: "Restart Icinga2 service"
      service: name=icinga2 state=restarted
      tags: restart

Note that we’re using the default way of adding a host in Icinga2; of course this can be further extended by adding new commands and services.

Finally, there is the inventories/my_machines/hosts file, which should only have one line for now:

new_machine_hostname

Keep this in mind for later.

SETTING UP ICINGA2

As you may have already seen, there are two other files in this setup; both templates are for the NRPE service configuration and the command check definitions.

The file nrpe.cfg.template is almost a clone of the default nrpe.cfg, as the only meaningful change needed to get things working is the allowed_hosts variable, where you must declare the address or FQDN of your Icinga2 server. You can leave the file intact except for that one bit (seriously, don’t forget this).

The rest is just a matter of custom preferences.

Also, nrpe_local.cfg.template is the file I chose to host all my custom command checks. However, you can also get things working just by copy-pasting the ones declared in the default nrpe.cfg file or directly uncommenting them right there.

If you choose to copy them, it would end up looking something like this:

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -r -w .15,.10,.05 -c .30,.25,.20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

### MISC SYSTEM METRICS ###
command[check_users]=/usr/lib/nagios/plugins/check_users $ARG1$
command[check_load]=/usr/lib/nagios/plugins/check_load $ARG1$
command[check_disk]=/usr/lib/nagios/plugins/check_disk $ARG1$
command[check_swap]=/usr/lib/nagios/plugins/check_swap $ARG1$
command[check_cpu_stats]=/usr/lib/nagios/plugins/check_cpu_stats.sh $ARG1$
command[check_mem]=/usr/lib/nagios/plugins/custom_check_mem -n $ARG1$

### GENERIC SERVICES ###
command[check_init_service]=sudo /usr/lib/nagios/plugins/check_init_service $ARG1$
command[check_services]=/usr/lib/nagios/plugins/check_services -p $ARG1$

### SYSTEM UPDATES ###
command[check_yum]=/usr/lib/nagios/plugins/check_yum
command[check_apt]=/usr/lib/nagios/plugins/check_apt

### PROCESSES ###
command[check_all_procs]=/usr/lib/nagios/plugins/custom_check_procs
command[check_procs]=/usr/lib/nagios/plugins/check_procs $ARG1$

### OPEN FILES ###
command[check_open_files]=/usr/lib/nagios/plugins/check_open_files.pl $ARG1$

### NETWORK CONNECTIONS ###
command[check_netstat]=/usr/lib/nagios/plugins/check_netstat.pl -p $ARG1$ $ARG2$

RUNNING THE PLAYBOOKS

At this point everything should be ready to start monitoring your new machine, so the way you would do this is by running the following commands from your super amazing laptop:

ansible-playbook -i ansible/inventories/my_machines/hosts ansible/playbooks/install_nrpe_client.yml --extra-vars "host=${new_machine_hostname}"
ansible-playbook -i ansible/inventories/my_machines/hosts ansible/playbooks/icinga_add_host.yml --extra-vars "host=${new_machine_hostname}"


IMPORTANT: The “new_machine_hostname” value must coincide with the one set in the hosts inventory file (I told you to keep that in mind for a reason!), else it will return an error message.
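
If either command fails to connect, a quick way to sanity-check SSH access and the inventory is Ansible’s ping module:

ansible -i ansible/inventories/my_machines/hosts all -m ping -u ${YOUR_USERNAME}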

CONCLUSION

As you can see, it can be pretty easy to monitor new hosts with Icinga2 when you let automation software do the work, and this is just a small example.

Hope you find it useful!
