Using Icinga2 and Ansible: one playbook to monitor them all!
https://vectops.com/2020/10/using-icinga2-and-ansible-one-playbook-to-monitor-them-all/ (published 24 Oct 2020)

Have you ever thought of a way to monitor new hosts without having to spend much time adding the NRPE plugins, command check definitions and other custom configurations manually on each of them?

No problem, I have just faced that very same situation, and I got tired of it pretty quickly. So how should we solve it?

The solution we are providing here is pretty simple: apply an Icinga2 monitoring template to a brand new, freshly installed machine thanks to Ansible.

NOTICE: for the examples provided we will be using Debian-like distros, so if yours is different you may have to adapt those affected parts, such as package manager related commands, specific Ansible plugins and so on.

INSTALLING DEPENDENCIES

The only things we need to configure on our machine are the SSH keys (so we can apply our playbooks normally), and to install the sudo package.

For the SSH keys you can copy your public key with the following command:

ssh-copy-id -i path/to/your/key ${YOUR_USERNAME}@${YOUR_NEW_MACHINE}

In case you don’t have a key set up, you can create one as follows:

ssh-keygen -t rsa

Then fill in the information the shell is going to prompt for. After that, from inside your new machine, run the following as root (or using sudo):

apt-get update
apt-get install sudo -y

When the package is installed, be sure to run visudo and configure the user you will be using properly, otherwise the Ansible steps may fail. If you are using the root user directly (which I don't recommend, *insert security disclaimer here*), these last steps are not needed at all.
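
For reference, a minimal sudoers entry for a hypothetical "ansible" deployment user (the username is just an example; adjust it, and restrict the allowed commands if you prefer something tighter) could look like this:

# Created with: visudo -f /etc/sudoers.d/ansible
# Lets the "ansible" user run any command via sudo without a password
ansible ALL=(ALL) NOPASSWD: ALL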

Due to time constraints we're not going to cover the Icinga2 installation in this article; we're going to assume you've already set it up and it's running properly.

SETTING UP THE ANSIBLE STUFF

From the machine you’re going to be using for the Ansible deployments, you will need to have a directory structure such as this one:

|-- inventories
|   `-- my_machines
|       `-- hosts
|-- playbooks
    |-- icinga_add_host.yml
    |-- install_nrpe_client.yml
    |-- files
        |-- nrpe
            |-- nrpe.cfg.template
            `-- nrpe_local.cfg.template

In this configuration, two playbooks are set up. The first one, install_nrpe_client.yml, has the following content:

---
- hosts: "{{ host }}"

  tasks:

    - name: "Install NRPE client and monitoring plugins"
      apt:
        pkg: ["nagios-nrpe-server", "monitoring-plugins", "nagios-plugins-contrib"]
        force_apt_get: yes
        update_cache: yes
        state: present
      tags: install

    - name: Copy NRPE service core files
      copy: src={{ item.src }} dest={{ item.dest }}
      with_items:
        - { src: 'nrpe/nrpe_local.cfg.template', dest: '/etc/nagios/nrpe_local.cfg' }
        - { src: 'nrpe/nrpe.cfg.template', dest: '/etc/nagios/nrpe.cfg' }
      tags: copy

    - name: Restart nagios-nrpe-server service
      service: name=nagios-nrpe-server state=restarted
      tags: restart

The playbook simply connects to the machine and installs and configures the packages needed to run the monitoring services on it.

The second one, icinga_add_host.yml, has the following content:

---
- hosts: ${YOUR_ICINGA2_SERVER}

  tasks:

    - name: "Add host to Icinga"
      copy:
        dest: /etc/icinga2/conf.d/homelab/{{ host }}.conf
        content: |
          object Host "{{ host }}" {
            import "generic-host"
            address = "{{ host }}"
            vars.os = "Linux"
            vars.disks["disk /"] = {
              disk_partitions = "/"
            }
            vars.notification["mail"] = {
              groups = [ "icingaadmins" ]
            }
          }
      tags: add-host-template

    - name: "Restart Icinga2 service"
      service: name=icinga2 state=restarted
      tags: restart

Note that we're using the default way of adding a host in Icinga2; of course, this can be further extended with additional commands and services.
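
As an illustration, a minimal sketch of an NRPE-based service applied to all Linux hosts could go in another file under conf.d on the Icinga2 server. This assumes the check_nrpe plugin is installed on the Icinga2 server, that the stock "nrpe" CheckCommand from the ITL and the default templates are available, and the service name and remote command are just examples:

apply Service "load" {
  import "generic-service"

  // Calls the check_load command remotely through the NRPE agent installed by the playbook
  check_command = "nrpe"
  vars.nrpe_command = "check_load"

  assign where host.vars.os == "Linux"
}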

Finally, there is the inventories/my_machines/hosts file, which should only have one line for now:

new_machine_hostname

Keep this in mind for later.

SETTING UP ICINGA2

As you may have already noticed, there are two other files in this setup: both are templates for the NRPE service configuration and the command check definitions.

The file nrpe.cfg.template is almost a clone of the default nrpe.cfg; the only meaningful change needed to get things working is the allowed_hosts variable, where you must declare the address or FQDN of your Icinga2 server. So you can leave the file intact except for that one bit (seriously, don't forget this); the rest is just a matter of custom preferences.

Also, nrpe_local.cfg.template is the file I chose to host all my custom command checks. However, you can also get things working just by copy-pasting the ones declared in the default nrpe.cfg file, or by uncommenting them directly there.

If you choose to copy them, it would end up looking something like this:

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -r -w .15,.10,.05 -c .30,.25,.20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

### MISC SYSTEM METRICS ###
command[check_users]=/usr/lib/nagios/plugins/check_users $ARG1$
command[check_load]=/usr/lib/nagios/plugins/check_load $ARG1$
command[check_disk]=/usr/lib/nagios/plugins/check_disk $ARG1$
command[check_swap]=/usr/lib/nagios/plugins/check_swap $ARG1$
command[check_cpu_stats]=/usr/lib/nagios/plugins/check_cpu_stats.sh $ARG1$
command[check_mem]=/usr/lib/nagios/plugins/custom_check_mem -n $ARG1$

### GENERIC SERVICES ###
command[check_init_service]=sudo /usr/lib/nagios/plugins/check_init_service $ARG1$
command[check_services]=/usr/lib/nagios/plugins/check_services -p $ARG1$

### SYSTEM UPDATES ###
command[check_yum]=/usr/lib/nagios/plugins/check_yum
command[check_apt]=/usr/lib/nagios/plugins/check_apt

### PROCESSES ###
command[check_all_procs]=/usr/lib/nagios/plugins/custom_check_procs
command[check_procs]=/usr/lib/nagios/plugins/check_procs $ARG1$

### OPEN FILES ###
command[check_open_files]=/usr/lib/nagios/plugins/check_open_files.pl $ARG1$

### NETWORK CONNECTIONS ###
command[check_netstat]=/usr/lib/nagios/plugins/check_netstat.pl -p $ARG1$ $ARG2$

RUNNING THE PLAYBOOKS

At this point everything should be ready to start monitoring your new machine, so the way you would do this is by running the following commands from your super amazing laptop:

ansible-playbook -i ansible/inventories/my_machines/hosts ansible/playbooks/install_nrpe_client.yml --extra-vars "host=${new_machine_hostname}"
ansible-playbook -i ansible/inventories/my_machines/hosts ansible/playbooks/icinga_add_host.yml --extra-vars "host=${new_machine_hostname}"


IMPORTANT: The "new_machine_hostname" value must match the one set in the hosts inventory file (I told you to keep that in mind for a reason!), otherwise Ansible will return an error message.

CONCLUSION

As you can see, monitoring new hosts with Icinga2 can be pretty easy once you get things done using automation software, and this is just a small example.

Hope you find it useful!

Run your own bandwidth speed test server thanks to iPerf
https://vectops.com/2020/10/run-your-own-bandwidth-speed-test-server-thanks-to-iperf/ (published 19 Oct 2020)

If you are reading this, I'm pretty sure you may want to have your own self-hosted speed test server. There are multiple reasons to run your own instance, so why not?

When it comes to this kind of analysis, iPerf is one of the first options I think of to perform network tests.

INSTALLING AND RUNNING IPERF

So, that said, let's get hands-on. This example assumes a Linux machine running wherever you want to host your speed test instance, so the only thing you need to do is install the iperf3 tool.
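
On Debian or Ubuntu, for instance, that would be something along these lines (use your distro's package manager otherwise):

sudo apt-get update
sudo apt-get install -y iperf3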

Then the next step would be running the speed test instance, which can be achieved by running this:

/usr/bin/iperf3 -s -p 5500

Where -s means "run in server mode" and -p 5500 means "listen on port 5500" (iperf3's default port is actually 5201, so feel free to change this to whatever you want). If the server is running behind a NAT firewall don't forget to set the proper port-forwarding rules.

USING THE IPERF SERVER

At this point, the only thing you have to do to start measuring your connection speed against this iPerf instance is running this command:

iperf3 -c ${IPERF_SERVER_ADDRESS} -p 5500 -P 8 -t 30

Where -c points to the server to connect to, -P sets the number of parallel client streams to run and -t the amount of time (in seconds) we want it to keep running.

You should get an output similar to this:

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  32.4 MBytes  27.2 Mbits/sec  444             sender
[  5]   0.00-10.00  sec  29.3 MBytes  24.6 Mbits/sec                  receiver
(...)
[SUM]   0.00-10.00  sec   276 MBytes   232 Mbits/sec  2663             sender
[SUM]   0.00-10.00  sec   259 MBytes   217 Mbits/sec                  receiver
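
By the way, if you also want to measure the opposite direction (the server sending, the client receiving), iperf3 has a reverse mode flag you can add to the same command:

# Reverse mode: the server transmits and the client receives
iperf3 -c ${IPERF_SERVER_ADDRESS} -p 5500 -P 8 -t 30 -R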

RUNNING IPERF AS A SYSTEMD SERVICE

Also, if you are running Systemd you can create a script file to make it run as a service on your system. For this, simply create the file /etc/systemd/system/speedtest.service and copy the following content into it:

[Unit]
Description=iPerf3 speed test server
After=network.target

[Service]
ExecStart=/usr/bin/iperf3 -s -p 5500

[Install]
WantedBy=multi-user.target

Once created just reload Systemd’s configuration and start it:

sudo systemctl daemon-reload
sudo systemctl start speedtest.service

Lastly, if you wish to get your iPerf3 instance running on system startup, go ahead and enable the new service:

sudo systemctl enable speedtest.service

And that's it, now you can start running bandwidth speed tests against your own server.

See you next time!

Install Docker Engine on Debian 10, as effortless as possible
https://vectops.com/2020/06/install-docker-engine-on-debian-10/ (published 21 Jun 2020)


Recently, I have been dealing with the deployment of multiple Debian servers which I had to configure in a pretty tailored way, and running Docker on them was a must. As I was performing the deployments by hand, after finishing the Docker service installation a few times I reached the conclusion that things needed to speed up.

That said, the quickest way I found to do it was to pipe every command from the installation process (which you can check in the official Docker documentation) directly into the machines through a snippet that I posted as a Gist, to make it available for anyone interested in it.

So yeah, as you can guess, this is another one-liner aimed at saving some time. So if you are thinking of installing Docker on a Debian system, just try:

curl -s https://gist.githubusercontent.com/hads0m/502cdb812caa25a32ddd994f6fbff0df/raw/e7ec8f706fc88623f0fb097b4d9704c4e0b4bd9a/install_docker_debian.bash | sudo bash

Of course, I encourage you to check the content of whatever you run before doing so, for many reasons. In this case, note that by default I am installing the latest available version of docker-ce and also docker-compose.

When the installation of Docker finishes, you can check if it is active on your machine by running:

sudo systemctl status docker

After that, you should see something like this:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-06-20 15:42:47 CEST; 33s ago
     Docs: https://docs.docker.com
 Main PID: 3769 (dockerd)
    Tasks: 12
   Memory: 47.9M
   CGroup: /system.slice/docker.service
           └─3769 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
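
As an additional sanity check, you can run the classic hello-world container; if it prints its welcome message, the engine is able to pull and run images correctly:

sudo docker run --rm hello-world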

Note that there is also an official script from Docker that provides an automated installation, located in this GitHub repository, so you can just follow whichever method you prefer; but keep in mind that, as the Docker team points out, that method is only recommended in TESTING AND DEVELOPMENT ENVIRONMENTS.

And that’s all, now you can start containerizing whatever you want! I hope you find this as useful as I have. As always, any feedback is welcome.

Cheers!

Provision Proxmox VMs with Terraform, quick and easy
https://vectops.com/2020/05/provision-proxmox-vms-with-terraform-quick-and-easy/ (published 7 May 2020)


Previously, I wrote an article about how to provision Proxmox VMs using Ansible, you can find it here.

That article went into the workings of a functional Ansible script that provisions Proxmox virtual machines in an easy and streamlined way that can be integrated into many other implementations.

This time, we’re going to delve into another way to do so, using Terraform.

Terraform allows us to streamline the process even further by the use of plugins in much the same way as Ansible would.

Both of these methods depend on a template to be created beforehand. You could, of course, just create an empty virtual machine and it would work, but this means you’ll have to perform the OS installation manually, where’s the fun in that?

Within Proxmox, the VM creation method (using the GUI) is:

Create VM -> Present operating system ISO to VM -> perform installation -> Enjoy

That process takes too long, it’s manual (eww) and honestly, it’s just boring. Especially when you have to create multiple VMs.

Let’s create a template to be used by Terraform.

Building a template

Within Proxmox, you can choose two ways (usually) to create a VM, from the GUI or from the Terminal console. For this example, you should use a bare minimum machine with the minimal resources allocated to it. This way it can easily be scaled in the future.

You’re going to need a VM with the following resources:

1 Core
1 GB RAM
10 GB HDD
1 Network Interface
1 Cloud-init drive
1 EFI Disk

Some of the properties noted above will have to be added after the VM creation process.

Manually creating the template

The process for creating a VM within the Proxmox GUI has been explained countless times on the internet. For the sake of completeness let’s mention the basic process:

1) Click on create VM
2) Input a name for the VM, you can tick the option for it to start at boot, your call. Click next
3) Select an ISO for the install and select the type and version of the OS that will be installed. Click next
4) Check the "Qemu Agent" option, you’ll use this later on. Click next
5) Select the Disk size, in this case 10 GB. You can also change some of the storage emulation options for this drive; we won't go into that in this example. Click next
6) Select how many Cores you want to use for the VM, in this case 1 Core. Click next
7) Input the amount of memory for the VM, in this case 1024 MB. I advise using the ballooning device so you can save memory resources on the node and oversell them, just like with the CPU. Note that memory actually in use by a VM can't be used by other VMs unless it's the exact same memory block; enter KSM (Kernel Same-page Merging). I won't go into detail about KSM, just know that it's awesome. Select the minimum memory for the ballooning device, in this case 256 MB. Click next
8) If you don’t have any custom network configurations on the node you can just Click next here. If you do, make sure that the configuration matches what you need.
9) Confirm the VM setup and click on "Finish". Don't start the VM yet.

After the VM is created, you’re going to need to change a few things on the VM. As you can see from the above steps, the Cloud-init drive wasn’t added. Select the VM on the left and click on Hardware then Add and finally on Cloud-Init Drive and select the storage where it will reside.

Afterward, edit the BIOS (double click on the BIOS entry on the Hardware tab) and select OVMF (UEFI)

Finally, the EFI disk: it's the same process as with the Cloud-init drive, but now select EFI Disk and choose the storage where it will reside. Proxmox won't let you create this drive before setting up the BIOS in the previous step.

Inside the VM's terminal, you can go ahead and install the Cloud-init packages so the VM is ready for use:

apt-get install cloud-init -y

Using Debian’s official image for Cloud-init.

If the manual process above takes too long and you don’t want to spend as much time with the OS installation, you can just download a pre-configured image from Debian’s official repositories.

From the proxmox node’s terminal run:

wget https://cdimage.debian.org/cdimage/openstack/current-10/debian-10-openstack-amd64.qcow2

Since we described the process through the GUI in the manual installation, let's go for the CLI way of doing things; the commands are as follows:

qm create 9000 -name debian-10-template -memory 1024 -net0 virtio,bridge=vmbr0 -cores 1 -sockets 1 -cpu cputype=kvm64 -description "Debian 10 cloud image" -kvm 1 -numa 1
qm importdisk 9000 debian-10-openstack-amd64.qcow2 lvm-thin
qm set 9000 -scsihw virtio-scsi-pci -virtio0 lvm-thin:vm-9000-disk-1
qm set 9000 -serial0 socket
qm set 9000 -boot c -bootdisk virtio0
qm set 9000 -agent 1
qm set 9000 -hotplug disk,network,usb,memory,cpu
qm set 9000 -vcpus 1
qm set 9000 -vga qxl
qm set 9000 -name debian-10-template
qm set 9000 -ide2 lvm-thin:cloudinit
qm set 9000 -sshkey /etc/pve/pub_keys/pub_key.pub

Please, please, please take into account that the disk needs to be resized to 10 GB so the VM has space to grow when it runs; you can do this from the Hardware tab on the GUI.
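
If you prefer the CLI, the same resize can be done with qm resize; a sketch, assuming the VMID (9000) and disk name (virtio0) used above:

# Grow the template's disk to 10G (disks can only be grown, not shrunk)
qm resize 9000 virtio0 10G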

Template setup

Ok, you created the template, now what?

The template needs to have some packages on it to run smoothly; not all of these packages are strictly necessary, but it's what I usually go for:

sudo apt install bmon screen ntpdate vim locate locales-all iotop atop curl libpam-systemd python-pip python-dev ifenslave vlan mysql-client sysstat snmpd sudo lynx rsync nfs-common tcpdump strace darkstat qemu-guest-agent

Defining the template

When the VM has been shut down cleanly, you can proceed to convert it to a template; this can be done in the Proxmox GUI by right-clicking on the VM and clicking on "Convert to Template".

In this case, let’s rename the VM template to: debian-cloudinit

Success, the template has been created.

Enter Terraform

Terraform works in a pretty straightforward way.

It uses a file in HCL format (kinda like JSON; Terraform also accepts an equivalent JSON syntax).

Within that file (or files), you can define an entire infrastructure. How simple or complex it can be is up to you. For this example, it’s going to be a pretty simple infrastructure definition, after all, we’re just creating one VM (for now).

The Terraform installation has been explained countless times online, for whichever operating system you might use, so I’m going to assume that you know how to install it (or how to google for: terraform install <insert OS here>).

Once it has been installed you need to install a provider so it can talk to the Proxmox API Server. Luckily there’s a provider that’s actively developed for this use.

Proxmox Provider

You can find the Proxmox provider for Terraform here.

The project is in active development and runs without hitches most of the time (99% of the time it works all the time).

To install it just run the following commands to install the dependencies:

go get -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go get -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
go install -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go install -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
make

And finally, copy the executables that the compilation gave us into a directory in your PATH, in my case:

sudo cp $GOPATH/bin/terraform-provider-proxmox /usr/local/bin/
sudo cp $GOPATH/bin/terraform-provisioner-proxmox /usr/local/bin/

Terraform Project

Now you can get started with the Terraform project and project definitions.
We’re going to use a directory structure like this one:

tfProxmox
|- main.tf

Just a single file. Remember this can be as complex or as simple as you need it to be.

Project Definition

Within that main.tf file you first need to set up the connection profile for the Proxmox node. In case you have a cluster, any of the nodes will suffice:

provider "proxmox" {
    pm_api_url = "https://$PROXMOXSERVERIP:8006/api2/json"
    pm_user = "root@pam"
    pm_password = "$SUPERSECRETPASSWORD"
    pm_tls_insecure = "true"
}

Remember to change the $PROXMOXSERVERIP and the $SUPERSECRETPASSWORD variables in the example.
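
If you'd rather not keep the password hard-coded in the file, one option is to turn it into an input variable and supply the value at run time through the TF_VAR_pm_password environment variable, so the provider block would become something like this (just a sketch, in the same style as above):

variable "pm_password" {}

provider "proxmox" {
    pm_api_url = "https://$PROXMOXSERVERIP:8006/api2/json"
    pm_user = "root@pam"
    pm_password = "${var.pm_password}"
    pm_tls_insecure = "true"
}

Then export TF_VAR_pm_password in your shell before running terraform plan or terraform apply, and the secret never has to live in the repository.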

SSH Keys

Since you’re using a Cloud-init image (in case you went for Debian’s official template image), it’s set up for passwordless login so you need to define an SSH key to be installed on the VM:

variable "ssh_key" {
  default = "#INSERTSSHHPUBLICKEYHERE"
}

Where #INSERTSSHHPUBLICKEYHERE is your super-amazing-laptop's SSH public key.

Now you can define the VM itself.

VM Definition

Below these definitions we can start defining our VM:

resource "proxmox_vm_qemu" "proxmox_vm" {
  count             = 1
  name              = "tf-vm-${count.index}"
  target_node       = "$NODETOBEDEPLOYED"
clone             = "debian-cloudinit"
os_type           = "cloud-init"
  cores             = 4
  sockets           = "1"
  cpu               = "host"
  memory            = 2048
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "scsi0"
disk {
    id              = 0
    size            = 20
    type            = "scsi"
    storage         = "data2"
    storage_type    = "lvm"
    iothread        = true
  }
network {
    id              = 0
    model           = "virtio"
    bridge          = "vmbr0"
  }
lifecycle {
    ignore_changes  = [
      network,
    ]
  }
# Cloud Init Settings
  ipconfig0 = "ip=10.10.10.15${count.index + 1}/24,gw=10.10.10.1"
sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

Remember to change the $NODETOBEDEPLOYED entry to the node name where the VM will be deployed and the storage entry (data2 in this example) to whatever storage resource you'll be using.

Let’s explain the resource definition. The main entries that you should take into account are:

count     <- The number of VMs to be created
name      <- The VM name; "${count.index}" lets you create more than
             one VM and it will just count up from there,
             e.g.: tf-vm-0, tf-vm-1, tf-vm-2, etc.
cores     <- The number of cores the VM will have
memory    <- The amount of RAM the VM will have
disk      <- The disk definitions for the VM; scale the size here.
network   <- The network bridge definition to be used.
ipconfig0 <- The IP for the VM; "${count.index}" lets you create more
             than one VM and it will just count up from there,
             e.g.: 10.10.10.151, 10.10.10.152, etc.

Running Terraform

Terraform uses 3 main stages to run:

  • Init - This step allows Terraform to be initialized and downloads the required plugins to run
  • Plan - This step performs planning for the deployment, using the tf file that you’ve defined. It focuses on the calculation for the deployment and conflict resolution in case such conflict exists, it’s going to show you all the changes, additions, and deletions to be performed.
  • Apply - After the planning stage, this is the stage that applies the changes to the infrastructure. It’s going to give you a summary of the changes, additions and/or deletions to be made and ask for confirmation to commit these changes.

Init

While on the project folder run:

terraform init

As stated before, this is going to initialize Terraform and install the needed plugins for the project, the output should be as follows:

terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Plan

This step will take care of all of the calculations that need to be run and conflict resolution with the infrastructure that might already be deployed.

victor@AMAZINGLAPTOP:~$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
# proxmox_vm_qemu.proxmox_vm[0] will be created
  + resource "proxmox_vm_qemu" "proxmox_vm" {
      + agent        = 0
      + balloon      = 0
      + boot         = "cdn"
      + bootdisk     = "scsi0"
      + clone        = "debian-cloudinit"
      + clone_wait   = 15
      + cores        = 4
      + cpu          = "host"
      + force_create = false
      + full_clone   = true
      + hotplug      = "network,disk,usb"
      + id           = (known after apply)
      + ipconfig0    = "ip=10.10.10.151/24,gw=10.10.10.1"
      + memory       = 2028
      + name         = "tf-vm-0"
      + numa         = false
      + onboot       = true
      + os_type      = "cloud-init"
      + preprovision = true
      + scsihw       = "virtio-scsi-pci"
      + sockets      = 1
      + ssh_host     = (known after apply)
      + ssh_port     = (known after apply)
      + sshkeys      = <<~EOT
              ssh-rsa ...
        EOT
      + target_node  = "pmx-01"
      + vcpus        = 0
      + vlan         = -1
+ disk {
          + backup       = false
          + cache        = "none"
          + format       = "raw"
          + id           = 0
          + iothread     = true
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + replicate    = false
          + size         = "20"
          + storage      = "data2"
          + storage_type = "lvm"
          + type         = "scsi"
        }
+ network {
          + bridge    = "vmbr0"
          + firewall  = false
          + id        = 0
          + link_down = false
          + model     = "virtio"
          + queues    = -1
          + rate      = -1
          + tag       = -1
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: planfile
To perform exactly these actions, run the following command to apply:
    terraform apply "planfile"

The plan states that a new resource will be created on the target node: pmx-01 (which is the node I’m using on my lab).

After you check the plan and everything seems to be alright, apply it.

Apply

To apply the Terraform plan, just run:

terraform apply

This will give you the summary from the plan and prompt for confirmation; type yes and it'll do its bidding.

When it’s done the output should be as follows:

victor@AMAZINGLAPTOP:~$ terraform apply
...
...
...
yes
...
...
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate

Final Thoughts

This example should work as a starting point for Terraform interacting with Proxmox.

Take into account that you can destroy the VM by changing the count in the main.tf file to zero and going through the plan and apply stages.
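
Alternatively, terraform destroy tears down everything tracked in the state file in one go (it shows the plan and asks for confirmation first):

terraform destroy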

Also, you can split the main.tf file into different files so it’s more organized when you decide to extend the infrastructure with different machine role definitions and different configurations for each one of them.

I’ve also uploaded the file on my GitHub here in case you just want the file.

Thanks for reading.

New Additions

As some folks have reported, there can be some issues getting the Proxmox plugin installed on their machines.

I've put together a Docker image all set up and ready for it. The repo is located at:

https://github.com/galdorork/terragrunt-proxmox-provisioner

And here’s the link to the Image on Docker’s public registry:

https://hub.docker.com/r/galdor1/terragrunt-proxmox-provisioner

Setting up Python virtual environments: The right way
https://vectops.com/2020/04/python-virtual-environments-the-rigth-way/ (published 16 Apr 2020)

Yuuup little hackers, a few days ago I was assigned to a new project.

The main thing about this project is Python, so it has to be set up on my work laptop, and the installation should also work on my teammates' laptops so they can help with writing some of the code and its dependencies.

At this point I had doubts about how my workload could be shared with them, and the answer was easy: use virtualenvs.

First of all, you should know a little bit about pip.

Pip is the package installer for Python; it's like apt, yum or pacman, but focused only on Python dependencies.

You have probably heard about virtualenvs in Python: they allow you to create a Python environment totally independent of the local system, where you can run specific versions of Python and its dependencies.

Install pipenv

Plain and simple; open your favorite terminal and issue this command:

$ pip install pipenv

Your virtualenvs packages will be saved at this location:

$HOME/.local/share/virtualenvs/

First environment with pipenv

Ok, let's make our first environment. Just create a folder where you can put code, or simply go to your git directory, and follow these steps:

$ mkdir myFirstApp
$ cd myFirstApp
$ pipenv shell

After you execute the last command, you will probably see a little change in your terminal. Yeap, your virtualenv has been created and you're in it.

You now have a new file called Pipfile; this file tracks all the dependencies that pipenv has installed in your virtualenv, as well as any dependencies that your project might need.

Let's try to install, for example, the AWS CLI tool (the package is called awscli on PyPI):

(myFirstApp) $ pip install awscli

After the installation has finished, run aws help to check that it works.

Now run exit to leave the virtualenv and test whether you can still execute the AWS CLI from your regular shell; unless you have it installed system-wide, you shouldn't be able to.

Activate your virtualenv again

$ source $HOME/.local/share/virtualenvs/myFirstApp-*/bin/activate

Save your packages

This is a golden feature and this is the main thing we want to use.

I can install a lot of packages on my local machine, but when my teammate tries to deploy the application in their local environment, there might be some dependency issues, mainly because they haven't installed any of them.

How would this work?

Easy: just save your local packages in a stupidly simple file called requirements.txt and push it to your repository with the source code of the app.

$ pipenv lock -r | awk '{ if (NR!=1) print $1 }' > requirements.txt
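
If that awk pipeline feels brittle, a simpler (though more verbose, since it also lists transitive dependencies) alternative from inside the activated virtualenv is:

pip freeze > requirements.txt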

After that, your teammate will be able to install the dependencies with the command described in the next step.

Install packages from file

$ pip install -r requirements.txt

Aaaaand there you go: you now know how to create and work with virtualenvs and, most importantly, how to work as a team with them!

Production-ready Kubernetes PaaS in 10 steps; IaaS included
https://vectops.com/2020/02/production-ready-kubernetes-paas-in-10-steps-iaas-included/ (published 25 Feb 2020)

Yeah, this isn’t a clickbaity title, it’s 10 steps. And no, this isn’t a Kubespray or kubeadm tutorial.

Easy, straightforward and can be performed by anyone with a basic understanding of production systems (read: basic).

But my advice, IF (<- big if) you want to use these steps on your infrastructure, is to adapt them to your needs.

At the very least you'll have a place to start without having to spend countless hours doing research, drinking more than what anyone would consider a healthy amount of coffee, and/or having to work out which articles out there are compatible with which.

Tech writers need to start specifying software versions on their articles to prevent this from happening.

I’ve written other articles about how to streamline the deployment for VMs using virtualization platforms such as Proxmox and automation tools like Ansible.

This step-by-step can and will deal with some of these technologies without too much customization. You could also adapt it to use some other alternatives to the tools used here.

Enough preamble, let’s go for it.

Step 1: Get some hardware

Take your pick. Whether it’s a very low-cost decommissioned desktop PC from an office or a laptop with a broken screen.

You can get a lot of good deals on ebay for decommissioned hardware, either tower PCs or “old” servers.

Or a US$ 20,000 latest-generation server with the latest and greatest hardware.

It's your call. I can't determine how much load your platform will have, but these steps should help you choose. So, please, adapt it to your needs.

Step 2: Let’s set up a Cloud-ish environment.

Let’s say you don’t have a lot of bare-metal machines or, maybe, you do. In my case I’ve had an issue with the bare-metal servers I have available to me. They’re too big.

Note: Yes, I know, this example can be set up with virsh. This is how I decided to do it, there’s a million ways to do it, this one’s easier for me 🙂

As in, 512 Gigs of RAM each. Now, Kubernetes workloads don't usually go well with huge servers: there's a pod limit on each node, and kernel limitations won't help either. So I've decided to go with a Proxmox cluster environment that will host a few VMs with different VM models on it.

This is what the Proxmox deployment will look like once it's done:

Proxmox Infrastructure

This example uses a 3 node Proxmox cluster. The setup for the cluster just needs you to install 3 machines with Proxmox, it can be Proxmox 5 or 6, your call. It makes no difference.

After you have installed the Proxmox distro on the machines, just create a cluster.

Create the Proxmox Cluster

In this example I’m going to reserve the following IPs for the installation (they’re also stated on the image above):

192.168.1.2 node1
192.168.1.3 node2
192.168.1.4 node3

The cluster creation is pretty well documented on Proxmox’s documentation here. On the newer versions, it can also be done with the GUI. I’m proceeding with the installation on Proxmox 6.1.

These steps should create a cluster pretty painlessly, let’s name it cloudish. Just log in via SSH to any of the nodes, in this case node1, and execute the following command:

node1:~# pvecm create cloudish

Then login into the other nodes and run the join command on each one of them:

node2:~# pvecm add 192.168.1.2
node3:~# pvecm add 192.168.1.2

The pvecm command will ask you for the node1 credentials, input them and you’re good to go.

Step 3: Create a bare-metal controller

To control and deploy the nodes I’m using MaaS (Metal as a Service), a bare metal controller that allows you to present the VMs you are going to create, or even bare metal servers to a cloud orchestrator such as Terraform or JuJu.

MaaS uses two elements:

  • A region Controller that handles: DNS, NTP, Syslog, and Squid Proxy
  • A rack Controller that handles: PXE, IPMI, NTP, TFTP, iSCSI, DHCP

Both of these elements can be deployed to the same machine.

In case you want/need high availability these components can be separated and installed in different machines, it’s well documented and straightforward, more info here.

In our case, we’re going to use the same machine for both things.

The deployment will look like this:

MaaS Deployment

Download an Ubuntu Server ISO from the Ubuntu website and install it on a VM created on the Proxmox cluster, in this case, Ubuntu 18.04 LTS.

The bare minimum MaaS deployment needs one machine with 1 core and 2 Gigs of RAM. This should suffice for a very small deployment with a couple of machines.

Let's amp that up and go all the way to 4 cores and 8 Gigs of RAM!

For this example the MaaS machine will have the following IP:

MaaS --- 192.168.1.10

After you’ve installed the OS on the VM, go ahead and install the MaaS Controller, in our case MaaS 2.6.2:

maas:~# sudo add-apt-repository ppa:maas/2.6
maas:~# sudo apt update
maas:~# sudo apt install maas
maas:~# sudo maas init

After the initialization is done you’ll see a URL on the terminal. Go ahead and log into it using your favorite browser and finally, proceed with the configuration stage. Read more here.

Please remember to set up your DHCP service by going into the MaaS interface and setting up the range you want.

To use MaaS DHCP you need it to be the only DHCP service on the network. This is very important. Also, reserve the ranges for the IPs you’re going to use elsewhere or the ones you’ve already used.

Step 4: Create VMs.

Yes, create Virtual Machines for Proxmox, a lot of them. I advise going with a for loop on the terminal using qm create so the process doesn't get tedious; remember to check what requirements you need.
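
As a rough sketch (the VM IDs, names, storage and sizes here are hypothetical; adapt them to the resource list below and to your own storage and bridge names), such a loop could look like this:

# Create VMs 101-110 with 2 cores / 2 GB RAM / 20 GB disk each,
# set to boot from the network first so MaaS can PXE-boot them later
for id in $(seq 101 110); do
  qm create "$id" -name "maas-vm-$id" -memory 2048 -cores 2 \
    -net0 virtio,bridge=vmbr0 -scsihw virtio-scsi-pci \
    -scsi0 local-lvm:20 -boot nc
done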

For this project, we’re going to create the bare minimum that Canonical recommends for a charmed Kubernetes deployment with HA, 10 machines.

Also the 3 VMs for JuJu.

The setup is as follows:

VM1    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=master
VM2    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=master
VM3    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=loadbalancer
VM4    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=easyrsa
VM5    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=etcd
VM6    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=etcd
VM7    2 CPU Cores | 2 GB RAM | 20 GB HDD   tags=etcd
VM8    4 CPU Cores | 4 GB RAM | 20 GB HDD   tags=worker
VM9    4 CPU Cores | 4 GB RAM | 20 GB HDD   tags=worker
VM10   4 CPU Cores | 4 GB RAM | 20 GB HDD   tags=worker
VM11   1 CPU Cores | 3 GB RAM | 20 GB HDD   tags=juju-controller
VM12   1 CPU Cores | 3 GB RAM | 20 GB HDD   tags=juju-controller
VM13   1 CPU Cores | 3 GB RAM | 20 GB HDD   tags=juju-controller

After you create the VMs you need to add them to MaaS, make sure they’re set up for Network boot first and HDD second, like this:

VM Boot order

Then boot them up so they go through the process to be added to MaaS.

Step 5: Configure MaaS

For MaaS to be able to control Proxmox VMs as if they were bare-metal machines, it needs a driver.

Luckily a guy by the name of Wojtek Rakoniewski wrote one for all of us to use and posted it on launchpad. It works with what I’ve tested: Proxmox 5.1, 6.1 and everything in between since it uses the Proxmox API.

I’ve taken the liberty of hosting it on a public Github repo with credits for him all around, just in case the guys from launchpad decide to archive the bug tracker.

You can download it here.

Or using wget:

wget https://raw.githubusercontent.com/galdorork/proxmox-maas/master/proxmox.py

After you download it you need to put the file in this path on the MaaS region controller:

/usr/lib/python3/dist-packages/provisioningserver/drivers/power/

And register the driver on the registry.py located here:

/usr/lib/python3/dist-packages/provisioningserver/drivers/power/registry.py

You need to edit the entry so it looks like this:

## Add this to the import headers
from provisioningserver.drivers.power.proxmox import ProxmoxPowerDriver
## The ProxmoxPowerDriver is the entry to be added
power_drivers = [
     ProxmoxPowerDriver(),
     ...
 ]

Then install the proxmoxer python package so the driver knows what to use when talking to the Proxmox API:

apt install python3-proxmoxer

Finally, restart the maas-regiond service to take the changes into account, systemctl restart maas-regiond should work.

Unfortunately, even with this driver MaaS doesn't know how to auto-detect the power controller for the machine, but it's easily set manually in the VM's config.

To do this you need to create a PVE user on the Proxmox GUI that has the VM.Audit and VM.PowerMgmt permissions.

User Definition

Group Definition

Then configure the VM on MaaS so the region controller knows how to use its power capabilities.

MaaS power type configuration

Input here the PVE user you have created, its password, one of the Proxmox nodes' IPs and the VM ID.

Step 6: Commission the VMs

MaaS works with this workflow:

The machine is added → Machine needs to be commissioned → Machine is deployed.

The workflow makes sense, since no machine can be accidentally formatted or deployed just by being present on the same network and getting PXE-booted.

When you boot up your Proxmox VMs for the first time, they will be added to MaaS as new machines:

New machine added

Then you need to commission them; this means each machine gets booted from MaaS (remember to configure the power type first), some tests are run on it, and an inventory is defined for that specific machine: CPU info, RAM info, storage info, etc.

After it’s commissioned it’ll be shown as ready:

MaaS VM in ready state

This means the VM is ready to be used by a controller, or to be manually deployed using the MaaS GUI, the MaaS API or maybe even JuJu.

Let’s go for JuJu.

DO NOT FORGET: JuJu needs "tags" to identify MaaS nodes, else it grabs them at random; please set up tags matching each VM's usage so you can use them as constraints later on:

MaaS VM Tag configuration

Step 7: Create your JuJu controller VMs.

This implementation requires a JuJu controller. For this example, we’re going to use 2.7.1-eoan-amd64.

We’re going to use a single machine for it. The VM needs at least 1 core and 3 GB of RAM.

You don't need to install anything on it; just set it up to boot from the network and the VM will boot up using iPXE.

After it boots with the Ubuntu image it will download some of the packages it needs to check the resources the VM has and report them to the MaaS controller.

Afterward, commission it and you're all set: you can now deploy your first machine from JuJu. This will be done from your PC.

On your PC, execute the following command:

yourawesomelaptop:~$ sudo snap install juju --classic

Now, add the MaaS cloud to your JuJu environment:

yourawesomelaptop:~$ juju add-cloud --local

The output will be as follows:

Cloud Types
  lxd
  maas
  manual
  openstack
  vsphere
Select cloud type: maas
Enter a name for your maas cloud: my-amazing-bare-metal-cloud
Enter the API endpoint url: http://192.168.1.10:5240/MAAS
Cloud "my-amazing-bare-metal-cloud" successfully added
You will need to add credentials for this cloud (juju add-credential my-amazing-bare-metal-cloud)
before creating a controller (juju bootstrap my-amazing-bare-metal-cloud).

Your JuJu installation needs to be able to connect to the API; it has the URL, now it needs the credentials:

yourawesomelaptop:~$ juju add-credential my-amazing-bare-metal-cloud

It'll ask for the MaaS secret; get it from the MaaS GUI by clicking on your username in the top right corner, it's the first field. Copy and paste it when it asks for maas-oauth:

Enter credential name: my-amazing-bare-metal-cloud-credentials
Using auth-type "oauth1".
Enter maas-oauth:
Credentials added for cloud my-amazing-bare-metal-cloud.

You can check the added credentials with:

yourawesomelaptop:~$ juju credentials --local

And finally create the main JuJu controller, using the JuJu tags and constraints:

yourawesomelaptop:~$ juju bootstrap --bootstrap-constraints tags=juju-controller my-amazing-bare-metal-cloud maas-controller

You’re done with JuJu for now.

If you want High Availability you can scale the controller and let it have more machines; 3 is advised for any production environment (remember to create the VMs first on Proxmox). For this example we're doing it:

yourawesomelaptop:~$ juju enable-ha
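
You can keep an eye on the extra controller machines as they come up with something like:

# The controller machines live in the "controller" model
juju status -m controller
juju controllers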

Step 8: JuJu the hell out of that MaaS instance

Once the empty VMs are ready, the MaaS controllers are ready and the JuJu controller is ready you’re all set.

You can even do these steps on a nice graphical interface using JuJu Gui. Don’t worry, you can still use the terminal to adjust some of it, or even customize the charm bundle so it adapts to your machines.

Charmed Kubernetes Bundle
Charmed Kubernetes Bundle

In this case, we're going to edit some of the constraints, so next to where it says untitled-model you can click on export and download the YAML descriptor file. In this file you need to focus on the machine descriptors at the end and add the constraints to each one, depending on its role. E.g.:

machines:
  '0':
    constraints: tags=easyrsa
  '1': 
    constraints: tags=etcd
  '2': 
    constraints: tags=etcd
  '3': 
    constraints: tags=etcd
  '4': 
    constraints: tags=loadbalancer
  '5': 
    constraints: tags=master
  '6': 
    constraints: tags=master
  '7': 
    constraints: tags=worker
  '8': 
    constraints: tags=worker
  '9': 
    constraints: tags=worker

Make sure the machine IDs match the ones referenced at the beginning of each service definition. Then import it into your GUI and let it process.

Then commit the changes. You should add your ssh public key on this step. Remember you can always add it later on.

Then relax, get a cup of coffee, or tea, or a beer, maybe all of them. Depending on the hardware you have available this could take a while, from 15 mins on. In the meantime, you can see the deployment status on the GUI.

If you decide to use the terminal the bundle can also be deployed like this:

yourawesomelaptop:~$ juju deploy myawesomekubernetescluster.yaml

Step 9: Get your Kubernetes config file

After the provisioning is done you just need to get your Kubernetes config file from the juju deployment:

yourawesomelaptop:~$ juju scp kubernetes-master/0:config ~/.kube/config

And connect to the deployment using kubectl. In case you need to install kubectl on your machine, I've written an article about it here: https://vectops.com/2019/12/set-up-your-machine-to-use-kubectl/ (it has instructions for the main OSs out there).

There, you’re all set.
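
A quick smoke test to confirm that the cluster answers and that all the nodes have joined:

kubectl cluster-info
kubectl get nodes -o wide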

Optionally, you can set up MetalLB as a way to expose your services, and use the nfs-client provisioner or whichever persistent storage solution you might want to use.
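
For reference, a minimal MetalLB layer-2 configuration (for the ConfigMap-based releases current at the time of writing; newer MetalLB versions use CRDs instead, and the address range below is just an example from the lab network used in this article) looks roughly like this:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250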

Step 10: Scale it away

Yes, this platform can be scaled out horizontally, using more VMs present in MaaS for this task.

From JuJu you can extend each of the cluster roles of your choosing; an example for a Kubernetes worker would be:

juju add-unit kubernetes-worker

Scale it A LOT? (take caution, this will add 16 worker nodes):

juju add-unit kubernetes-worker -n 16

Or scale it down (removing the second worker):

juju remove-unit kubernetes-worker/2

The same applies to all of the other cluster roles.

Et voilà, it's all set up. Now you can focus on developing the next million-dollar idea without having to worry about the platform you run on, and without paying premiums for cloud platforms that can be replicated on-site, maybe even using hardware you already have lying around.

If you're going to use this article as a base to deploy your own PaaS solution, don't forget that it can be extended a lot, using JuJu to scale the cluster nodes as well as Kubernetes autoscaling for pod-level horizontal scaling (pod replicas).

If you need any extra help you can get in touch with me using the comments on this article.

Force Apache show a 403 error when accessing via IP address
https://vectops.com/2020/02/forcing-an-apache-webserver-to-show-403-error-when-accessed-through-ip-address/ (published 10 Feb 2020)

Let's say we have a few websites hosted on our machine, but we need them to process requests only when accessed via a server name, not when accessed via IP address. This is very common behavior in shared hosting systems.

A real time-saving tip, and it only needs a couple of lines of configuration. For this purpose we tested it on an Apache 2.4 server running on a CentOS system.

To achieve this we only need to make a new file (e.g. default.conf under the /etc/httpd/conf.d directory) containing the following lines:

<VirtualHost _default_:80>
   ServerName $WEBSERVER_IP_ADDRESS
   UseCanonicalName Off
   Redirect 403 /
</VirtualHost>

Of course we should run a reload to apply the changes we made on the service:

systemctl reload httpd
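
A quick way to verify the behavior from any machine (substituting your server's IP address) is:

curl -I http://$WEBSERVER_IP_ADDRESS/
# The first response line should read: HTTP/1.1 403 Forbidden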

And that's literally all. From here on out, when anyone accesses the webserver through its IP address, the only thing that will show up is a beautiful 403 error message like this one (which is what we intended all along, right?), as long as you don't tune it up to show anything else or even a custom error message:

Forbidden

You don't have permission to access / on this server.

Cheers!

Provision Proxmox VMs with Ansible, quick and easy
https://vectops.com/2020/01/provision-proxmox-vms-with-ansible-quick-and-easy/ (published 10 Jan 2020)


Proxmox is an amazing virtualization solution for production, development, testing, etc.: basically anything you can think of that requires a virtual machine.

Whether you rely on an actual kernel (full blown VM) or just the userspace (LXC containers), it helps a lot to have a free tool that can perform. Not only that, but the fact that it can be clustered and have VMs migrated from node to node can help with availability issues that can and will happen when you have to perform maintenance on any specific virtualization node.

However, it can be cumbersome to provision VMs sometimes; the vanilla method is:

Create VM -> Present operating system ISO to VM -> perform installation -> Enjoy

This method can be time consuming depending on how many VMs you need, or what the OS installation process is like.

I know, I know… There’s new fancy tech such as Kubernetes that allows you to easily and swiftly deploy applications on a cloud environment but that kind of infrastructure is not always readily available and it can be hard to migrate some applications to it, depending on what has been developed.

Enter templates

The Proxmox system allows you to use and create VM templates, that can be set up with whatever operating system you want.

We're going to use a basic Debian 10 template for this example. Just go ahead and create a VM and pick low resources for the image so you can expand them later: CPU and memory are easily downsized, storage drives aren't, so take this into account.

I’ve created a VM with the following resources:

 1 Core
 1 GB RAM
 10 GB HDD
 1 Network Interface
 1 Cloud-init drive
 1 EFI Disk

Some of the properties noted above will have to be added after the VM creation process.

Creating the template manually

This process is pretty straightforward; here’s a step-by-step:

1) Click on "Create VM"
2) Input a name for the VM; you can tick the option to start it at boot, your call. Click next
3) Select an ISO for the install and select the type and version of the OS that will be installed. Click next
4) Check the "Qemu Agent" option, you’ll use this later on. Click next
5) Select the disk size, in this case 10 GB. You can also change some of the storage emulation options for this drive; we won’t go into that in this example. Click next
6) Select how many cores you want to use for the VM, in this case 1 core. Click next
7) Input the amount of memory for the VM, in this case 1024 MB. I advise using the Ballooning device so you can save memory resources on the node and oversubscribe them, just like CPU. Note that memory actually in use by a VM can’t be shared with other VMs unless it’s the exact same memory block; that’s where KSM (Kernel Samepage Merging) comes in. I won’t go into detail about KSM, just know that it’s awesome. Select the minimum memory for the Ballooning device, in this case 256 MB. Click next
8) If you don’t have any custom network configuration on the node you can just click next here. If you do, make sure the configuration matches what you need.
9) Confirm the VM setup and click on "Finish". Don’t start the VM yet.

After the VM is created, we need to add a couple of things.

After the creation

First, the Cloud-init drive: select the VM on the left, click on Hardware, then Add, and finally on CloudInit Drive, and select the storage where it will reside.

Second, edit the BIOS (double-click on the BIOS entry on the Hardware tab) and select OVMF (UEFI).

Third, the EFI disk: same process as the Cloud-init drive, but now select EFI Disk and pick the storage where it will reside. It won’t let you create this drive before setting up the BIOS from the second step.

Finally, start up the machine.

Let’s get it prepared for Cloud-init. Log in to the VM and run this command:

apt-get install cloud-init -y

That’s it, it’s set up now.
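
As a purely optional sanity check, you can confirm the package landed before shutting anything down:

# Should print the installed cloud-init version
cloud-init --version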

Using Debian’s official image for Cloud-init

You can also go for an even easier and more straightforward route than the manual installation: download a ready-to-go cloud image from Debian’s repositories. SSH into the node, or open a shell on the node through the GUI, and run:

wget http://cdimage.debian.org/cdimage/openstack/10.2.0/debian-10.2.0-openstack-amd64.qcow2

Afterwards, create a VM either through the GUI or through the command line. If you decide to do it with the graphical interface, just do as I wrote earlier; on the CLI the commands are as follows:

qm create 9000 --name debian-10-template --memory 1024 --net0 virtio,bridge=vmbr0 --cores 1 --sockets 1 --cpu cputype=kvm64 --description "Debian 10.2 cloud image" --kvm 1 --numa 1
qm importdisk 9000 debian-10.2.0-openstack-amd64.qcow2 lvm-thin
qm set 9000 --scsihw virtio-scsi-pci --virtio0 lvm-thin:vm-9000-disk-1
qm set 9000 --serial0 socket
qm set 9000 --boot c --bootdisk virtio0
qm set 9000 --agent 1
qm set 9000 --hotplug disk,network,usb,memory,cpu
qm set 9000 --vcpus 1
qm set 9000 --vga qxl
qm set 9000 --name debian-10-template
qm set 9000 --ide2 lvm-thin:cloudinit
qm set 9000 --sshkey /etc/pve/pub_keys/pub_key.pub

After you execute these commands you need to resize the disk to 10 GB. You can do this on the Hardware tab for the VM.
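
If you prefer to stay on the command line, the same resize can be done with qm; this is just a sketch assuming the VM ID 9000 and the virtio0 disk from the commands above:

# Grow the imported disk to 10 GB (use +NG instead to grow by an increment)
qm resize 9000 virtio0 10G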

Installation

Start up the machine and install some basic packages you’ll most likely use on all your machines. In my case I usually go for these:

sudo apt install bmon screen ntpdate vim locate locales-all iotop atop curl libpam-systemd python-pip python-dev ifenslave vlan mysql-client sysstat snmpd sudo lynx rsync nfs-common tcpdump strace darkstat qemu-guest-agent

After these packages are installed, shut down the VM with:

shutdown -h now

Defining the template

When the VM has been shut down cleanly, you can proceed and convert it to a template. This can be done in the Proxmox GUI by right-clicking on the VM and clicking on "Convert to Template".

Success, the template has been created.
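
For what it’s worth, the same conversion can also be done from the node’s shell, again assuming VM ID 9000:

# Mark VM 9000 as a template; from now on it can only be cloned, not started
qm template 9000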

Setting up the Proxmox node for Ansible communication

The Proxmox node has to be set up with a Python library called proxmoxer, which is what the Ansible Proxmox modules use to talk to the Proxmox API. You can either run a playbook for the install or log in over SSH and proceed with the installation manually.

With a console proceed with these commands and install the necessary packages on the node:

apt install -y python-pip python-dev build-essential
pip install --upgrade pip
pip install virtualenv
pip install proxmoxer
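
To make sure the library is actually visible to the Python interpreter Ansible will use on the node, a quick optional check is:

# Should print the installed proxmoxer version and its location
pip show proxmoxer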

Creating our Ansible directory structure

We’re going to use a simple Ansible setup with the following structure:

|-- hosts
|-- playbooks
|   `-- proxmox_deploy.yml
`-- roles
    `-- proxmox_deploy
        |-- defaults
        |   `-- main.yml
        |-- handlers
        |   `-- main.yml
        |-- meta
        |   `-- main.yml
        |-- tasks
        |   `-- main.yml
        |-- vars
        |   `-- main.yml
        `-- travis.yml

Ansible files

You need to define several things in the Ansible files; let’s define the hosts file first.

For the sake of this example we’re using two nodes, proxmox1 with the IP: 192.168.1.11 and proxmox2 with the IP: 192.168.1.12

This inventory only needs the Proxmox nodes (in case you have more than one) and a group to address them with:

hosts

[proxmox1]
proxmox1 ansible_ssh_host=192.168.1.11

[proxmox2]
proxmox2 ansible_ssh_host=192.168.1.12

[proxmoxs:children]
proxmox1
proxmox2
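
Before going any further, it’s worth confirming that Ansible can actually reach the nodes in this inventory; a minimal smoke test, run from the directory holding the hosts file, could look like this:

# Expect a "pong" back from each Proxmox node
ansible -i hosts proxmoxs -m ping -u root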

The main playbook is set up as follows (playbooks/proxmox_deploy.yml):

playbooks/proxmox_deploy.yml

- name: 'prep proxmox hosts for automation'
  hosts: 'proxmox1'
  vars_prompt:
  - name: PV_password
    prompt: "Node Password"
    private: yes
  - name: VM_name
    prompt: "VM name"
    private: no
  - name: VM_network
    prompt: "Network associated to ipconfig0"
    private: no
    default: vlan10
  - name: VM_IP
    prompt: "VM IP"
    private: no
    default: 192.168.1.100
  - name: VM_sockets
    prompt: "VM socket/s"
    private: no
    default: 1
  - name: VM_cores
    prompt: "VM core/s"
    private: no
    default: 1
  - name: VM_memory
    prompt: "VM RAM Memory (MB)"
    private: no
    default: 1024
  - name: VM_INCREASE_DISK
    prompt: "Increase virtio0 disk (20 GB) in"
    private: no
    default: 0
  - name: PV_node
    prompt: "Migrate Virtual Machine to"
    private: no
    default: none
  user: root
  gather_facts: false
  roles:
    - { role: proxmox_deploy, default_proxmox_node: proxmox1 }

This playbook defines the inputs that you, as a sysadmin/devops/computer-magician, will need to provide so the tasks can be completed successfully. Note: it asks for a "Node Password"; this is so the proxmoxer Python module can communicate with the node, using standard Linux PAM authentication.

These inputs encompass CPU sockets, CPU cores, memory, IP, disk size and a target node in case you want to migrate the VM to another node after the creation process has finished.

Then let’s define the role for this deployment, starting with the travis.yml file within the role directory (roles/proxmox_deploy/travis.yml):

roles/proxmox_deploy/travis.yml

---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' > ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/

Afterwards we set up the main.yml within the defaults directory (roles/proxmox_deploy/defaults/main.yml). Please note that it’s very important to adjust this file to match the template name you created during the template creation step:

roles/proxmox_deploy/defaults/main.yml

---
# defaults file for proxmox_deploy
VM_template: debian-10-template
default_disk: virtio0
default_interface: ens18
default_volume: /dev/vda
default_partition: 2
template_name: template-debian-deployment

The handlers main.yml file is basically empty but needs to be defined (roles/proxmox_deploy/handlers/main.yml):

roles/proxmox_deploy/handlers/main.yml

---
# handlers file for proxmox_deploy

Then define a very basic default template for the meta’s main.yml file; I’m just leaving it as a default (roles/proxmox_deploy/meta/main.yml):

roles/proxmox_deploy/meta/main.yml

galaxy_info:
    author: your name
    description: your description
    company: your company (optional)
    license: license (GPLv2, CC-BY, etc)
    min_ansible_version: 2.4
    galaxy_tags: []
dependencies: []

The main.yml file for the vars directory is as follows (roles/proxmox_deploy/vars/main.yml). Here, some of the variables that you might need for a VM are set up; in this case I’m going to use two VLAN setups as an example. Adjust it to your own infrastructure:

roles/proxmox_deploy/vars/main.yml

# vars file for proxmox_deploy
vlan10:
    params:
        netmask: 24
        vmbr: 0
        gateway: 192.168.2.1
        dnsservers: "192.168.2.253 192.168.2.254"
        searchdomain: vectops.com
vlan11:
    params:
        netmask: 24
        vmbr: 1
        gateway: 192.168.3.130
        dnsservers: "192.168.3.253 192.168.3.254"
        searchdomain: vectops.com

Finally, the main file: the tasks’ main.yml (roles/proxmox_deploy/tasks/main.yml). All the actual work goes here; the playbook uses this file to complete all of the deployment tasks:

roles/proxmox_deploy/tasks/main.yml

---
# tasks file for proxmox_deploy
- name: Cloning virtual machine from "{{ VM_template }}" with name "{{ VM_name }}"
  proxmox_kvm:
    api_user: root@pam
    api_password: "{{ PV_password }}"
    api_host: "{{ default_proxmox_node }}"
    name: "{{ VM_name }}"
    node: "{{ default_proxmox_node }}"
    clone: "{{ VM_template }}"
    timeout: 300
  tags: provision,test

- name: Increasing disk if it is necessary
  shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm resize $A {{ default_disk }} +{{ VM_INCREASE_DISK }}G
  when: VM_INCREASE_DISK != "0"
  tags: provision

- name: Waiting to apply cloud-init changes in disk
  wait_for:
    timeout: 5
  tags: provision

- name: Starting new virtual machine to change IPv4 configuration, if it is necessary
  proxmox_kvm:
    api_user: root@pam
    api_password: "{{ PV_password }}"
    api_host: "{{ default_proxmox_node }}"
    name: "{{ VM_name }}"
    node: "{{ default_proxmox_node }}"
    state: started
    timeout: 300
  when: VM_INCREASE_DISK != "0"
  register: wait
  tags: provision

- name: Waiting for the virtual machine to start up completely
  wait_for:
    timeout: 45
  when: wait.changed
  tags: provision

- name: Resize disk
  shell: growpart "{{ default_volume }}" "{{ default_partition }}"; pvresize "{{ default_volume }}{{ default_partition }}"
  when: VM_INCREASE_DISK != "0"
  delegate_to: "{{ template_name }}"
  tags: provision

- name: Stopping new virtual machine to change IPv4 configuration, if it is necessary
  proxmox_kvm:
    api_user: root@pam
    api_password: "{{ PV_password }}"
    api_host: "{{ default_proxmox_node }}"
    name: "{{ VM_name }}"
    node: "{{ default_proxmox_node }}"
    state: stopped
    timeout: 300
  when: VM_network != "vlan10" or VM_INCREASE_DISK != "0"
  tags: provision

- name: Loading set up for virtual machine. Assigning correct bridge in network interface
  shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --net0 'virtio,bridge=vmbr{{ item.value.vmbr }}'
  when: VM_network != "vlan10"
  with_dict: "{{ vars[VM_network] }}"
  tags: provision

- debug:
    msg: "item.key {{ item.key }} item.value {{ item.value }} item.value.netmask {{ item.value.netmask }} item.value.vmbr {{ item.value.vmbr }}"
  with_dict: "{{ vars[VM_network] }}"
  tags: provision

- name: Loading set up for virtual machine. Assigning IP, sockets, cores and memory for the virtual machine
  shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --ipconfig0 'ip={{ VM_IP }}/{{ item.value.netmask }},gw={{ item.value.gateway }}' --nameserver '{{ item.value.dnsservers }}' --searchdomain '{{ item.value.searchdomain }}' --memory '{{ VM_memory }}' --sockets '{{ VM_sockets }}' --cores '{{ VM_cores }}'
  when: VM_IP != "automatic"
  with_dict: "{{ vars[VM_network] }}"
  tags: provision

- debug:
    var: current_ip
  tags: provision

- name: Loading set up for virtual machine. Assigning IP automatically, sockets, cores and memory for the virtual machine
  shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --ipconfig0 'ip={{ current_ip.stdout }}/{{ item.value.netmask }},gw={{ item.value.gateway }}' --nameserver '{{ item.value.dnsservers }}' --searchdomain '{{ item.value.searchdomain }}' --memory '{{ VM_memory }}' --sockets '{{ VM_sockets }}' --cores '{{ VM_cores }}'
  when: VM_IP == "automatic"
  with_dict: "{{ vars[VM_network] }}"
  tags: provision

- debug:
    var: PV_node

- name: Migrating virtual machine if it is necessary
  shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm migrate $A "{{ PV_node }}"
  when: PV_node != "none"
  tags: provision

- name: Starting new virtual machine in the current proxmox node
  proxmox_kvm:
    api_user: root@pam
    api_password: "{{ PV_password }}"
    api_host: "{{ default_proxmox_node }}"
    name: "{{ VM_name }}"
    node: "{{ default_proxmox_node }}"
    state: started
    timeout: 300
  when: PV_node == "none"
  tags: provision

- name: Starting new virtual machine in the correct proxmox node
  proxmox_kvm:
    api_user: root@pam
    api_password: "{{ PV_password }}"
    api_host: "{{ PV_node }}"
    name: "{{ VM_name }}"
    node: "{{ PV_node }}"
    state: started
    timeout: 300
  delegate_to: "{{ PV_node }}"
  when: PV_node != "none"
  tags: provision

This tasks file is pretty straightforward; it goes through several steps, defined as follows:

1) Clones the template into a new VM
2) If you choose to increase the disk size, it does so on the hardware side.
3) Applies Cloudinit configurations
4) Starts the VM and waits for it to start
5) Resizes the partition so it fits the new disk size (in case you did change it)
6) Stops the VM so it can apply the IP configuration
7) Assigns the correct network hardware properties, in case they need to be changed.
8) Configures the necessary hardware properties for the VM (cpu, memory, etc.)
9) If you chose to migrate it on the prompt, it will perform a migration of the VM to a target node
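
If something looks off after a run, a good first step is to inspect the settings that actually landed on the clone from the Proxmox node; the VM ID below (101) is just an example, use whatever qm list reports for your new VM:

# Show the network, cloud-init and sizing settings applied to the clone
qm config 101 | grep -E 'ipconfig0|net0|memory|sockets|cores'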

That’s it

That’s it: with this playbook you can easily deploy VMs on Proxmox, fully configured to your needs, with a single Ansible command:

ansible-playbook -i hosts playbooks/proxmox_deploy.yml
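
As a side note, vars_prompt only asks for variables that are not already defined, so for unattended runs you could pass everything as extra vars instead; the values below are made up for illustration:

ansible-playbook -i hosts playbooks/proxmox_deploy.yml \
  -e "PV_password=YourNodePassword VM_name=web01 VM_network=vlan10 VM_IP=192.168.1.100" \
  -e "VM_sockets=1 VM_cores=2 VM_memory=2048 VM_INCREASE_DISK=10 PV_node=none"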

I hope this helps you out as much as it has helped me to simplify and speed up the process of creating new classic virtual instances on Proxmox.

The post Provision Proxmox VMs with Ansible, quick and easy appeared first on Vectops.

]]>
https://vectops.com/2020/01/provision-proxmox-vms-with-ansible-quick-and-easy/feed/ 12
Save a copy of all your running Docker containers, the lazy way https://vectops.com/2020/01/save-a-copy-of-all-your-running-docker-containers-the-lazy-way/ https://vectops.com/2020/01/save-a-copy-of-all-your-running-docker-containers-the-lazy-way/#comments Wed, 08 Jan 2020 09:11:39 +0000 https://vectops.com/?p=844 Ever wanted to save a snapshot of all your Docker containers but never had the time or even the energy to write a script that can do that for you? Don’t worry, we are here for that reason. Let’s just say you save these snapshots under the /backup directory of your system. Then you just […]

The post Save a copy of all your running Docker containers, the lazy way appeared first on Vectops.

]]>
Ever wanted to save a snapshot of all your Docker containers but never had the time or even the energy to write a script that can do that for you? Don’t worry, we are here for that reason.

Let’s just say you save these snapshots under the /backup directory of your system. Then you just run this magic one-liner:

for CONTAINER in $(docker ps -a --format '{{.Names}}'); do
    docker export $(docker ps -a |grep $CONTAINER \
    |awk '{print $1}') > /backup/backup_${CONTAINER}_$(date +%d-%m-%Y).tar
done

This will save the container’s current filesystem as a tarball.

However, remember that the docker export command DOES NOT include a copy of the contents of the volumes associated with the containers, as you can read in the official Docker documentation. For that purpose you will have to resort to other kinds of solutions, such as synchronizing the directories backing those volumes directly.
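
As a reference for that scenario, one common workaround is to mount the named volume into a throwaway container and tar its contents from there; the volume name and paths below are placeholders:

# Archive the contents of a named volume into /backup on the host
docker run --rm \
  -v myvolume:/data \
  -v /backup:/backup \
  alpine tar czf /backup/myvolume_$(date +%d-%m-%Y).tar.gz -C /data .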

Also, because of how it works, the docker export command strips the entrypoint and the container’s history, making the result useful if you want to access the container’s files, but not if you want to run the image again with its original configuration. The way you would access it is by first creating a new image from the tarball:

docker import /backup/backup_${CONTAINER}_$(date +%d-%m-%Y).tar ${IMAGE_NAME}

And then running a container based on the new image, like this:

docker run -ti ${IMAGE_ID} /bin/bash

If you are seeking to save a copy of an existing container while keeping its configuration as it is, you would have to use the commit function in order to generate a new image based on the current container, like this:

docker commit ${CONTAINER} ${IMAGE_NAME}_$(date +%d-%m-%Y)

Then you can export it to a new tarball:

docker save ${IMAGE_NAME}_$(date +%d-%m-%Y) > /backup/${IMAGE_NAME}_$(date +%d-%m-%Y).tar

So, if this is what you are looking to do with all your containers, you can run:

for CONTAINER in $(docker ps -a --format '{{.Names}}'); do
    docker commit $(docker ps -a |grep $CONTAINER \
    |awk '{print $1}') ${CONTAINER}_$(date +%d-%m-%Y)
    docker save ${CONTAINER}_$(date +%d-%m-%Y) > /backup/${CONTAINER}_$(date +%d-%m-%Y).tar
done

And finally if you want to import your images from the tar files you can run:

docker load  < /backup/${IMAGE_NAME}_$(date +%d-%m-%Y).tar

See you next time!

The post Save a copy of all your running Docker containers, the lazy way appeared first on Vectops.

]]>
https://vectops.com/2020/01/save-a-copy-of-all-your-running-docker-containers-the-lazy-way/feed/ 235
Set up your machine to use kubectl https://vectops.com/2019/12/set-up-your-machine-to-use-kubectl/ https://vectops.com/2019/12/set-up-your-machine-to-use-kubectl/#comments Mon, 23 Dec 2019 17:30:33 +0000 https://vectops.com/?p=827 At Vectops we like to dwell on new techologies, which includes Kubernetes (K8s from now on). Most of the infrastructure we use on a daily basis is currently tied to K8s in one way or another. As a result this means that our machines have to be set up for K8s in one way or […]

The post Set up your machine to use kubectl appeared first on Vectops.

]]>
At Vectops we like to dig into new technologies, and that includes Kubernetes (K8s from now on). Most of the infrastructure we use on a daily basis is currently tied to K8s in one way or another, which means our machines have to be set up accordingly to perform different tasks on it. If you decide to run, test or become proficient with K8s, you need kubectl set up in your environment.

K8s allows you to have different permissions depending on your role regarding the cluster, whichever it may be: admin, maintainer, operator, developer, tester, etc.

This article won’t go into detail on RBAC (Role-Based Access Control); if you want to learn more about it, check the official Kubernetes documentation.

Whatever task you need to perform on the cluster, you need some way to interact with it.

Enter kubectl

kubectl is a CLI tool that lets you run commands against K8s clusters, whether that’s getting information about a specific pod, deployment or service, or editing the definition of a resource.

It can be installed in different ways; let’s focus on the ones that can be maintained easily, for instance through the OS’s package manager.

After you install kubectl you need to configure it; just scroll down, you’ll get there. Above all, pick whichever installation method suits you best.

Linux

The easiest platform to install on: just copy and paste the commands below on the terminal for your distro.

Please take into account that you should check the contents of the repo file as well as the URLs, just to be safe.

We’re going to cover the package manager, snap and a straight binary download for the sake of completeness (JUST USE ONE OF THESE).

Debian/Ubuntu/other derivatives that use .deb

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

CentOS/RHEL/Fedora/other derivatives that use .rpm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl

Snap

sudo snap install kubectl --classic

Straight binary

Download the latest release from google:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Give it executable permissions

chmod +x ./kubectl

And move it to your binary path

sudo mv ./kubectl /usr/local/bin/kubectl

Make sure it works!

kubectl version

Windows

In the case of Windows you can choose from a few tools, including but not limited to WSL (Windows Subsystem for Linux), PowerShell, Chocolatey, etc.

WSL

Just use the same procedures as for Linux in your WSL console.

PowerShell

PSGallery

You can use PSGallery:

Install-Script -Name install-kubectl -Scope CurrentUser -Force
install-kubectl.ps1 [-DownloadLocation $PATH]

Replace $PATH with whichever path you want the binary downloaded to.

Curl

Plain PowerShell can also grab the kubectl binary with curl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/windows/amd64/kubectl.exe

Don’t forget to add the .exe to your PATH, otherwise you’ll have to navigate to the download location every time or use an absolute path to the binary.

Chocolatey

choco install kubernetes-cli

Mac

The number of systems and devops engineers that use Mac either grows or shrinks every year (depends on who you ask; to each their own, I don’t judge). However, there are plenty out there who use it daily, and just like Linux and Windows users they have a few ways to install kubectl.

Homebrew

For Homebrew it’s a simple one-liner:

brew install kubectl 

Macports

Same with MacPorts:

sudo port selfupdate; sudo port install kubectl

Straight binary on MAC

The binary method for macOS is basically the same as for Linux, but with a different binary package:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"

Give it executable permissions

chmod +x ./kubectl

And move it to your binary path

sudo mv ./kubectl /usr/local/bin/kubectl

Make sure it works!

kubectl version

Configuration

After the installation is done you need to put the config file for your Kubernetes cluster where kubectl expects it, so it can work against your cluster.

On Linux, MAC and WSL this can be done by copying your config file on the following route:

~/.kube/config

In the case of Windows the file should be copied on the following route:

%USERPROFILE%/.kube/config
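
If you’d rather keep the file somewhere else, kubectl also honours the KUBECONFIG environment variable; the path below is just an example:

# Point kubectl at a kubeconfig outside the default location (Linux, Mac or WSL)
export KUBECONFIG=$HOME/clusters/my-cluster.yaml
# Show the configuration kubectl is currently using
kubectl config view --minify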

Checking that everything works

You can test that kubectl has access to the cluster by issuing the command:

kubectl version

If the connection works you’ll see an output like this:

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-10T03:03:57Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Where the Client Version is the kubectl binary version and the Server Version is the Cluster version.
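
Another quick check that exercises the connection end to end (any read-only command will do):

# Lists the cluster nodes if authentication and connectivity are fine
kubectl get nodes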

Final words

This tutorial should help you configure kubectl on your machine. It can also be used to set up a control server (just a simple virtual machine that has access to the same network as the K8s cluster).

In case you need any help you can type in a comment; we try to answer them ASAP.

Good Luck!

The post Set up your machine to use kubectl appeared first on Vectops.

]]>
https://vectops.com/2019/12/set-up-your-machine-to-use-kubectl/feed/ 1