
Provision Proxmox VMs with Ansible, quick and easy


Proxmox is an amazing virtualization solution for production, development, testing, and basically anything else you can think of that requires a virtual machine.

Whether you rely on an actual kernel (full-blown VM) or just the userspace (LXC containers), it helps a lot to have a free tool that performs well. Not only that, but the fact that it can be clustered, with VMs migrated from node to node, helps with the availability issues that can and will happen when you perform maintenance on any specific virtualization node.

However, provisioning VMs can sometimes be cumbersome. The vanilla method is:

Create VM -> Present operating system ISO to VM -> perform installation -> Enjoy

This method can be time consuming depending on how many VMs you need, or what the OS installation process is like.

I know, I know… There’s new fancy tech such as Kubernetes that allows you to easily and swiftly deploy applications on a cloud environment but that kind of infrastructure is not always readily available and it can be hard to migrate some applications to it, depending on what has been developed.

Enter templates

The Proxmox system allows you to create and use VM templates, which can be set up with whatever operating system you want.

We’re going to use a basic Debian 10 template for this example. Just go ahead and create a VM and pick low resources for the image so you can expand them later: CPU and memory are easy to resize afterwards, but storage drives can only grow, not shrink, so take this into account.

I’ve created a VM with the following resources:

  - 1 Core
  - 1 GB RAM
  - 10 GB HDD
  - 1 Network Interface
  - 1 Cloud-init drive
  - 1 EFI Disk

Some of the properties noted above will have to be added after the VM creation process.

Creating the template manually

This process is pretty straightforward, here’s a step by step:

  1. Click on “Create VM”
  2. Input a name for the VM; you can check the option to start it at boot, your call. Click next
  3. Select an ISO for the install and select the type and version of the OS that will be installed. Click next
  4. Check the “Qemu Agent” option, you’ll use this later on. Click next
  5. Select the disk size, in this case 10 GB. You can also change some of the storage emulation options for this drive; we won’t go into that in this example. Click next
  6. Select how many cores you want to use for the VM, in this case 1 core. Click next
  7. Input the amount of memory for the VM, in this case 1024 MB. I advise using the Ballooning device so you can save memory resources on the node and oversubscribe them, just like with CPU. Note that memory actively used by a VM can’t be used by other VMs unless it’s the exact same memory block; enter KSM (Kernel Samepage Merging). I won’t go into detail about KSM, just know that it’s awesome. Select the minimum memory for the Ballooning device, in this case 256 MB. Click next
  8. If you don’t have any custom network configurations on the node you can just click next here. If you do, make sure that the configuration matches what you need.
  9. Confirm the VM setup and click on “Finish”. Don’t start the VM yet

After the VM is created, we need to add a couple of things.

After the creation

First, the Cloud-init drive: select the VM on the left, click on Hardware, then Add, and finally on Cloud-Init Drive, and select the storage where it will reside.

Second, edit the BIOS (double-click on the BIOS entry on the Hardware tab) and select OVMF (UEFI).

Third, the EFI Disk: same process as the Cloud-init drive, but now select EFI Disk and select the storage where it will reside. Proxmox won’t let you create this drive before setting up the BIOS in the second step.

Finally, start up the machine.

Let’s get it prepared for Cloud-init. Log into the VM and run this command:

apt-get install cloud-init -y

That’s it, it’s set up now.

Using Debian’s official image for Cloud-init

You can also go for the easier and even more straightforward option than the manual installation: download a ready-to-go image from Debian’s repositories. SSH into the node, or open a shell on the node through the GUI, and run:

wget http://cdimage.debian.org/cdimage/openstack/10.2.0/debian-10.2.0-openstack-amd64.qcow2

Afterwards, create a VM either through the GUI or through the command line. If you decide to do it with the graphical interface, just follow the steps I wrote earlier; on the CLI the commands are as follows:

qm create 9000 --name debian-10-template --memory 1024 --net0 virtio,bridge=vmbr0 --cores 1 --sockets 1 --cpu cputype=kvm64 --description "Debian 10.2 cloud image" --kvm 1 --numa 1
qm importdisk 9000 debian-10.2.0-openstack-amd64.qcow2 lvm-thin
qm set 9000 --scsihw virtio-scsi-pci --virtio0 lvm-thin:vm-9000-disk-1
qm set 9000 --serial0 socket
qm set 9000 --boot c --bootdisk virtio0
qm set 9000 --agent 1
qm set 9000 --hotplug disk,network,usb,memory,cpu
qm set 9000 --vcpus 1
qm set 9000 --vga qxl
qm set 9000 --name debian-10-template
qm set 9000 --ide2 lvm-thin:cloudinit
qm set 9000 --sshkey /etc/pve/pub_keys/pub_key.pub

After you execute these commands you need to resize the disk to 10 GB; you can do this on the Hardware tab for the VM.

Installation

Start up the machine and install some basic packages you’ll most likely use on all your machines. In my case I usually go for these:

sudo apt install bmon screen ntpdate vim locate locales-all iotop atop curl libpam-systemd python-pip python-dev ifenslave vlan mysql-client sysstat snmpd sudo lynx rsync nfs-common tcpdump strace darkstat qemu-guest-agent

After these packages are installed shutdown the VM with:

shutdown -h now

Defining the template

When the VM has been shut down cleanly, you can proceed and convert it to a template. This can be done in the Proxmox GUI by right-clicking the VM and clicking on “Convert to Template”.

Success, the template has been created.

Setting up the Proxmox node for Ansible communication

The Proxmox node has to be set up with a Python library called proxmoxer, which Ansible’s Proxmox modules use to communicate with the node. You can either run a playbook for the install or go in manually and proceed with the installation via SSH.

With a console, proceed with these commands to install the necessary packages on the node:

apt install -y python-pip python-dev build-essential
pip install --upgrade pip
pip install virtualenv
pip install proxmoxer

Creating our Ansible directory structure

We’re going to use a simple Ansible setup with the following structure:

hosts
playbooks
└── proxmox_deploy.yml
roles
└── proxmox_deploy
    ├── defaults
    │   └── main.yml
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── tasks
    │   └── main.yml
    ├── vars
    │   └── main.yml
    └── travis.yml
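If you prefer not to create these files by hand, the layout above can be scaffolded with a few shell commands. This is a convenience sketch, not part of the original workflow; run it from your Ansible project root:

```shell
# Create the role subdirectories and their (initially empty) main.yml files
for d in defaults handlers meta tasks vars; do
  mkdir -p "roles/proxmox_deploy/$d"
  touch "roles/proxmox_deploy/$d/main.yml"
done

# Inventory, playbook and travis.yml at their expected locations
mkdir -p playbooks
touch hosts playbooks/proxmox_deploy.yml roles/proxmox_deploy/travis.yml
```

You can then fill in each file with the contents shown in the following sections.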

Ansible files

You need to define several things in the Ansible files; let’s define the hosts file first.

For the sake of this example we’re using two nodes, proxmox1 with the IP: 192.168.1.11 and proxmox2 with the IP: 192.168.1.12

This definition only needs the Proxmox nodes (in case you have more than one) and a group that contains them:

hosts
[proxmox1]
proxmox1 ansible_ssh_host=192.168.1.11

[proxmox2]
proxmox2 ansible_ssh_host=192.168.1.12

[proxmoxs:children]
proxmox1
proxmox2

The main playbook is set up as follows (playbooks/proxmox_deploy.yml):

playbooks/proxmox_deploy.yml
- name: 'prep proxmox hosts for automation'
  hosts: 'proxmox1'
  vars_prompt:
  - name: PV_password
    prompt: "Node Password"
    private: yes
  - name: VM_name
    prompt: "VM name"
    private: no
  - name: VM_network
    prompt: "Network associated to ipconfig0"
    private: no
    default: vlan10
  - name: VM_IP
    prompt: "VM IP"
    private: no
    default: 192.168.1.100
  - name: VM_sockets
    prompt: "VM socket/s"
    private: no
    default: 1
  - name: VM_cores
    prompt: "VM core/s"
    private: no
    default: 1
  - name: VM_memory
    prompt: "VM RAM Memory (MB)"
    private: no
    default: 1024
  - name: VM_INCREASE_DISK
    prompt: "Increase virtio0 disk (20 GB) in"
    private: no
    default: 0
  - name: PV_node
    prompt: "Migrate Virtual Machine to"
    private: no
    default: none
  user: root
  gather_facts: false
  roles:
    - { role: proxmox_deploy, default_proxmox_node: proxmox1 }

This playbook defines the inputs that you, as a sysadmin/devops/computer-magician, will need to provide so the tasks can be completed successfully. Note: it asks for a “Node Password”; this is so the proxmoxer Python module can communicate with the node, which uses standard Linux PAM authentication.

These inputs encompass CPU sockets, CPU cores, memory, IP, disk size and a target node in case you want to migrate the VM to another node after the creation process has finished.

Then let’s define the roles for this deployment, first with the travis.yml file within the role directory (roles/proxmox_deploy/travis.yml):

roles/proxmox_deploy/travis.yml
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' > ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/

Afterwards we set up the main.yml within the defaults directory (roles/proxmox_deploy/defaults/main.yml). Please note that it’s very important to adjust this file to match the template name you chose during the template creation step:

roles/proxmox_deploy/defaults/main.yml
---
# defaults file for proxmox_deploy
VM_template: debian-10-template
default_disk: virtio0
default_interface: ens18
default_volume: /dev/vda
default_partition: 2
template_name: template-debian-deployment

The handlers main.yml file is basically empty but needs to be defined (roles/proxmox_deploy/handlers/main.yml):

roles/proxmox_deploy/handlers/main.yml
---
# handlers file for proxmox_deploy

Then define a very basic default template for the meta’s main.yml file; I’m just leaving it as a default template (roles/proxmox_deploy/meta/main.yml):

roles/proxmox_deploy/meta/main.yml
galaxy_info:
  author: your name
  description: your description
  company: your company (optional)
  license: license (GPLv2, CC-BY, etc)
  min_ansible_version: 2.4
  galaxy_tags: []
dependencies: []

The main.yml file for the vars directory is as follows (roles/proxmox_deploy/vars/main.yml). Here, some of the variables that you might need for a VM are set up; in this case I’m going to use two VLAN setups as an example. Adjust it to your own infrastructure:

roles/proxmox_deploy/vars/main.yml
# vars file for proxmox_deploy
vlan10:
  params:
    netmask: 24
    vmbr: 0
    gateway: 192.168.2.1
    dnsservers: "192.168.2.253 192.168.2.254"
    searchdomain: vectops.com
vlan11:
  params:
    netmask: 24
    vmbr: 1
    gateway: 192.168.3.130
    dnsservers: "192.168.3.253 192.168.3.254"
    searchdomain: vectops.com
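To make the later tasks easier to follow, here is how the values from the vlan10 entry end up composing the string that gets passed to qm set for the cloud-init network configuration. A plain shell sketch, using the example values from this file and the playbook prompts:

```shell
# Example values, taken from the vlan10 entry above and the playbook defaults
VM_IP="192.168.1.100"
NETMASK="24"
GATEWAY="192.168.2.1"

# The tasks file later builds exactly this kind of string for --ipconfig0
IPCONFIG0="ip=${VM_IP}/${NETMASK},gw=${GATEWAY}"
echo "$IPCONFIG0"   # prints ip=192.168.1.100/24,gw=192.168.2.1
```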

Finally, the main file: the tasks’ main.yml (roles/proxmox_deploy/tasks/main.yml). All the actual work goes here; the playbook uses this file to complete all of the deployment tasks:

roles/proxmox_deploy/tasks/main.yml
---
# tasks file for proxmox_deploy
    - name: Cloning virtual machine from "{{ VM_template }}" with name "{{ VM_name }}" 
      proxmox_kvm:
        api_user : root@pam
        api_password: "{{ PV_password }}"
        api_host : "{{ default_proxmox_node }}"
        name : "{{ VM_name }}"
        node : "{{ default_proxmox_node }}"
        clone: "{{ VM_template }}"
        timeout: 300
      tags: provission,test

    - name: Increasing disk if necessary
      shell: A=$(qm list |grep "{{ VM_name }}" | awk '{print $1}'); qm resize $A {{ default_disk }} +{{ VM_INCREASE_DISK }}G
      when: '"{{ VM_INCREASE_DISK }}" != "0"'
      tags: provission

    - name: Waiting to apply cloud-init changes to disk
      wait_for:
        timeout: 5
      tags: provission

    - name: Starting new Virtual Machine to change IPv4 configuration, if necessary
      proxmox_kvm:
        api_user : root@pam
        api_password: "{{ PV_password }}"
        api_host : "{{ default_proxmox_node }}"
        name : "{{ VM_name }}"
        node : "{{ default_proxmox_node }}"
        state : started
        timeout: 300
      when: '"{{ VM_INCREASE_DISK }}" != "0"'
      register: wait
      tags: provission

    - name: Waiting for the new Virtual Machine to start completely
      wait_for:
        timeout: 45
      when: wait.changed == true 
      tags: provission

    - name: Resize disk
      shell: growpart "{{ default_volume }}" "{{ default_partition }}"; pvresize "{{ default_volume }}""{{ default_partition }}"
      when: '"{{ VM_INCREASE_DISK }}" != "0"'
      delegate_to: "{{ template_name }}"
      tags: provission

    - name: Stopping new Virtual Machine to change IPv4 configuration, if necessary
      proxmox_kvm:
        api_user : root@pam
        api_password: "{{ PV_password }}"
        api_host : "{{ default_proxmox_node }}"
        name : "{{ VM_name }}"
        node : "{{ default_proxmox_node }}"
        state : stopped
        timeout: 300
      when: '"{{ VM_network }}" != "vlan10" or "{{ VM_INCREASE_DISK }}" != "0"'
      tags: provission

    - name: Loading set up for Virtual Machine. Assigning correct bridge in network interface
      shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --net0 'virtio,bridge=vmbr{{ item.value.vmbr }}'
      when: '"{{ VM_network }}" != "vlan10"'
      with_dict: "{{ vars[VM_network] }}"
      tags: provission

    - debug: 
        msg: "item.key {{ item.key }} item.value {{ item.value }} item.value.netmask {{ item.value.netmask }} item.value.vmbr {{ item.value.vmbr }}"
      with_dict: "{{ vars[VM_network] }}"
      tags: provission

    - name: Loading set up for Virtual Machine. Assigning IP, sockets, cores and memory for Virtual Machine
      shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --ipconfig0 'ip={{ VM_IP }}/{{ item.value.netmask }},gw={{ item.value.gateway }}' --nameserver '{{ item.value.dnsservers }}' --searchdomain '{{ item.value.searchdomain }}' --memory '{{ VM_memory }}' --sockets '{{ VM_sockets }}' --cores '{{ VM_cores }}'
      when: '"{{ VM_IP }}" != "automatic"'
      with_dict: "{{ vars[VM_network] }}"
      tags: provission

    - debug:
        var: current_ip
      tags: provission

    - name: Loading set up for Virtual Machine. Assigning IP automatically, sockets, cores and memory for Virtual Machine
      shell: A=$(qm list | grep "{{ VM_name }}" | awk '{print $1}'); qm set $A --ipconfig0 'ip={{ current_ip.stdout }}/{{ item.value.netmask }},gw={{ item.value.gateway }}' --nameserver '{{ item.value.dnsservers }}' --searchdomain '{{ item.value.searchdomain }}' --memory '{{ VM_memory }}' --sockets '{{ VM_sockets }}' --cores '{{ VM_cores }}'
      when: '"{{ VM_IP }}" == "automatic"'
      with_dict: "{{ vars[VM_network] }}"
      tags: provission

    - debug:
        var: PV_node

    - name: Migrating Virtual Machine if it is necessary
      shell: A=$(qm list |grep "{{ VM_name }}" | awk '{print $1}');qm migrate $A "{{ PV_node }}"
      when: '"{{ PV_node }}" != "none"'
      tags: provission

    - name: starting new Virtual Machine in current proxmox node
      proxmox_kvm:
        api_user : root@pam
        api_password: "{{ PV_password }}"
        api_host : "{{ default_proxmox_node }}"
        name : "{{ VM_name }}"
        node : "{{ default_proxmox_node }}"
        state : started
        timeout: 300
      when: '"{{ PV_node }}" == "none"'
      tags: provission

    - name: starting new Virtual Machine in correct proxmox node
      proxmox_kvm:
        api_user : root@pam
        api_password: "{{ PV_password }}"
        api_host : "{{ PV_node }}" 
        name : "{{ VM_name }}"
        node : "{{ PV_node }}"
        state : started
        timeout: 300
      delegate_to: "{{ PV_node }}"
      when: '"{{ PV_node }}" != "none"'
      tags: provission
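One subtlety in the “Resize disk” task above: the two adjacent quoted variables in the pvresize call are not a typo. In shell, adjacent quoted strings simply concatenate, so the volume and partition number join into one device path. A quick demonstration with the values from defaults/main.yml:

```shell
# Values from roles/proxmox_deploy/defaults/main.yml
default_volume="/dev/vda"
default_partition="2"

# Adjacent quoted strings concatenate, exactly as in the pvresize call
device="${default_volume}""${default_partition}"
echo "$device"   # prints /dev/vda2
```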

This tasks file is pretty straightforward; its steps are as follows:

  1. Clones the template into a new VM.
  2. If you chose to increase the disk size, it does so on the hardware side.
  3. Applies the cloud-init configuration.
  4. Starts the VM and waits for it to come up.
  5. Resizes the partition so it fits the new disk size (in case you did change it).
  6. Stops the VM so it can apply the IP configuration.
  7. Assigns the correct network hardware properties, in case they need to be changed.
  8. Configures the necessary hardware properties for the VM (CPU, memory, etc.).
  9. If you chose to migrate it at the prompt, it performs a migration of the VM to the target node.
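Several of the tasks resolve the VM name to its numeric VMID with the same qm list | grep | awk pipeline. Isolated, with a mocked-up qm list output (on a real node the output would come from the qm CLI itself), the idiom looks like this:

```shell
# Mocked `qm list` output; on a real Proxmox node you would run `qm list`
qm_list='      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
        101 web01                running    2048              20.00 1234
       9000 debian-10-template   stopped    1024              10.00 0'

VM_name="web01"
# Same pipeline the tasks use: match the name, print the first column (the VMID)
VMID=$(printf '%s\n' "$qm_list" | grep "$VM_name" | awk '{print $1}')
echo "$VMID"   # prints 101
```

Note that a plain grep on the name will also match longer names that contain it, so unique VM names help keep this lookup unambiguous.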

That’s it

That’s it! With this playbook you can easily deploy VMs on Proxmox, fully configured to your needs, with a simple Ansible command:

ansible-playbook -i hosts playbooks/proxmox_deploy.yml

I hope this helps you out as much as it has helped me to simplify and speed up the process of creating new classic virtual instances on Proxmox.
