Photo by Randy Fath on Unsplash

Provision Proxmox VMs with Terraform, quick and easy


Previously, I wrote an article about how to provision Proxmox VMs using Ansible; you can find it here.

That article went into the workings of a functional Ansible script that provisions Proxmox virtual machines in an easy and streamlined way that can be integrated into many other implementations.

This time, we’re going to delve into another way to do so, using Terraform.

Terraform allows us to streamline the process even further through the use of provider plugins, in much the same way that Ansible uses modules.

Both of these methods depend on a template being created beforehand. You could, of course, just create an empty virtual machine and it would work, but that means you'd have to perform the OS installation manually, and where's the fun in that?

Within Proxmox, the VM creation method (using the GUI) is:

Create VM -> Present operating system ISO to VM -> perform installation -> Enjoy

That process takes too long, it's manual (eww), and honestly, it's just boring, especially when you have to create multiple VMs.

Let’s create a template to be used by Terraform.

Building a template

Within Proxmox, you can usually create a VM in one of two ways: from the GUI or from the terminal console. For this example, use a bare-minimum machine with minimal resources allocated to it. That way it can easily be scaled up in the future.

You’re going to need a VM with the following resources:

1 Core
1 GB RAM
10 GB HDD
1 Network Interface
1 Cloud-init drive
1 EFI Disk

Some of the properties noted above will have to be added after the VM creation process.

Manually creating the template

The process for creating a VM within the Proxmox GUI has been explained countless times on the internet. For the sake of completeness let’s mention the basic process:

1) Click on "Create VM"
2) Input a name for the VM. You can also check the option to start it at boot; your call. Click next
3) Select an ISO for the install, and select the type and version of the OS that will be installed. Click next
4) Check the "Qemu Agent" option; you'll use this later on. Click next
5) Select the disk size, in this case 10 GB. You can also change some of the storage emulation options for this drive, but we won't go into that in this example. Click next
6) Select how many cores you want for the VM, in this case 1 core. Click next
7) Input the amount of memory for the VM, in this case 1024 MB. I advise using the Ballooning device so you can save memory resources on the node and oversell them, just as you can with CPU. Note that memory actively used by a VM can't be used by other VMs unless it's the exact same memory block; that's where KSM (Kernel Samepage Merging) comes in. I won't go into detail about KSM here, just know that it's awesome. Select the minimum memory for the Ballooning device, in this case 256 MB. Click next
8) If you don't have any custom network configurations on the node, you can just click next here. If you do, make sure the configuration matches what you need.
9) Confirm the VM setup and click on "Finish". Don't start the VM yet

After the VM is created, you're going to need to change a few things on it. As you can see from the steps above, the Cloud-init drive wasn't added. Select the VM on the left, click on Hardware, then Add, and finally on Cloud-Init Drive, and select the storage where it will reside.

Afterward, edit the BIOS (double-click on the BIOS entry in the Hardware tab) and select OVMF (UEFI).

Finally, add the EFI Disk. It's the same process as with the Cloud-init drive, but this time select EFI Disk and choose the storage where it will reside. Note that Proxmox won't let you create this disk until the BIOS has been set to OVMF in the previous step.
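These hardware changes can also be made from the node's shell with qm. A quick sketch, assuming your VM got ID 100 and your storage is named local-lvm (both are assumptions, adjust them to your setup):

```shell
# Add the Cloud-init drive on the ide2 bus
qm set 100 -ide2 local-lvm:cloudinit

# Switch the firmware to UEFI (OVMF)
qm set 100 -bios ovmf

# Add the EFI vars disk that OVMF needs
qm set 100 -efidisk0 local-lvm:1
```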

Inside the VM's terminal, you can go ahead and install the Cloud-init packages so the VM is ready for use:

apt-get install cloud-init -y

Using Debian's official image for Cloud-init

If the manual process above takes too long and you don’t want to spend as much time with the OS installation, you can just download a pre-configured image from Debian’s official repositories.

From the proxmox node’s terminal run:

wget https://cdimage.debian.org/cdimage/openstack/current-10/debian-10-openstack-amd64.qcow2

Since we described the GUI process for the manual installation, let's go the CLI way this time. The commands are as follows:

# Create the VM shell (ID 9000) with 1 core, 1 GB RAM and a virtio NIC
qm create 9000 -name debian-10-template -memory 1024 -net0 virtio,bridge=vmbr0 -cores 1 -sockets 1 -cpu cputype=kvm64 -description "Debian 10 cloud image" -kvm 1 -numa 1
# Import the downloaded cloud image into the lvm-thin storage
qm importdisk 9000 debian-10-openstack-amd64.qcow2 lvm-thin
# Attach the imported disk as virtio0
qm set 9000 -scsihw virtio-scsi-pci -virtio0 lvm-thin:vm-9000-disk-1
# Add a serial console (cloud images expect one)
qm set 9000 -serial0 socket
# Boot from the imported disk
qm set 9000 -boot c -bootdisk virtio0
# Enable the QEMU guest agent
qm set 9000 -agent 1
# Allow hot-plugging of devices
qm set 9000 -hotplug disk,network,usb,memory,cpu
qm set 9000 -vcpus 1
qm set 9000 -vga qxl
qm set 9000 -name debian-10-template
# Add the Cloud-init drive
qm set 9000 -ide2 lvm-thin:cloudinit
# Install your public SSH key for passwordless login
qm set 9000 -sshkey /etc/pve/pub_keys/pub_key.pub

Please, please, please take into account that the disk needs to be resized to 10 GB so the VM has room to grow when it runs. You can do this from the Hardware tab in the GUI.
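The resize can also be done from the node's shell; a sketch, assuming the imported disk is attached as virtio0 on VM 9000:

```shell
# Grow the disk to 10 GB total (disks can only be grown, never shrunk)
qm resize 9000 virtio0 10G
```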

Template setup

Ok, you created the template, now what?

The template needs some packages on it to run smoothly. Not all of these packages are strictly necessary, but this is what I usually go for:

sudo apt install bmon screen ntpdate vim locate locales-all iotop atop curl libpam-systemd python-pip python-dev ifenslave vlan mysql-client sysstat snmpd sudo lynx rsync nfs-common tcpdump strace darkstat qemu-guest-agent

Defining the template

When the VM has been shut down cleanly, you can proceed to convert it to a template. This can be done in the Proxmox GUI by right-clicking on the VM and clicking on "Convert to Template".

In this case, let’s rename the VM template to: debian-cloudinit

Success, the template has been created.
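The same rename and conversion can be done from the node's shell with qm, sketched here for the CLI-built VM with ID 9000:

```shell
# Give the VM its final template name
qm set 9000 -name debian-cloudinit

# Convert the stopped VM into a template (this is one-way)
qm template 9000
```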

Enter Terraform

Terraform works in a pretty straightforward way.

It uses files written in HCL (HashiCorp Configuration Language), a format that looks a bit like JSON and is also JSON-compatible.

Within that file (or files), you can define an entire infrastructure. How simple or complex it can be is up to you. For this example, it’s going to be a pretty simple infrastructure definition, after all, we’re just creating one VM (for now).

The Terraform installation has been explained countless times online, for whichever operating system you might use, so I’m going to assume that you know how to install it (or how to google for: terraform install <insert OS here>).

Once it has been installed, you need to install a provider so Terraform can talk to the Proxmox API server. Luckily, there's an actively developed provider for exactly this use.

Proxmox Provider

You can find the Proxmox provider for Terraform here.

The project is in active development and runs without hitches most of the time (99% of the time, it works all the time).

To install it, run the following commands to fetch and build the provider and provisioner:

go get -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go get -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
go install -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go install -v github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
# run make from within the provider's source tree (under $GOPATH/src)
make

And finally, copy the executables produced by the build into a directory on your PATH, in my case:

sudo cp $GOPATH/bin/terraform-provider-proxmox /usr/local/bin/
sudo cp $GOPATH/bin/terraform-provisioner-proxmox /usr/local/bin/

Terraform Project

Now you can get started with the Terraform project and project definitions.
We’re going to use a directory structure like this one:

tfProxmox
|- main.tf

Just a single file. Remember, this can be as complex or as simple as you need it to be.

Project Definition

Within that main.tf file, you first need to set up the connection profile for the Proxmox node. If you have a cluster, any of the nodes will do:

provider "proxmox" {
    pm_api_url = "https://$PROXMOXSERVERIP:8006/api2/json"
    pm_user = "$PROXMOXUSER"
    pm_password = "$SUPERSECRETPASSWORD"
    pm_tls_insecure = "true"
}

Remember to change the $PROXMOXSERVERIP, $PROXMOXUSER (e.g. root@pam), and $SUPERSECRETPASSWORD placeholders in the example.
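If you'd rather not keep credentials in the file at all, the Telmate provider can also read them from environment variables (PM_USER and PM_PASS, per its README); treat the exact variable names as something to verify against the provider version you built:

```shell
# Export the credentials before running terraform,
# then drop pm_user/pm_password from the provider block
export PM_USER="root@pam"
export PM_PASS="$SUPERSECRETPASSWORD"
```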

SSH Keys

Since you're using a Cloud-init image (if you went for Debian's official template image), it's set up for passwordless login, so you need to define an SSH key to be installed on the VM:

variable "ssh_key" {
  default = "$INSERTSSHPUBLICKEYHERE"
}

Where $INSERTSSHPUBLICKEYHERE is your super-amazing-laptop's SSH public key.
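Rather than pasting the key inline, you can also let Terraform read it from disk. A sketch using the file() and pathexpand() functions (Terraform 0.12+; the key path is an assumption, adjust it to yours):

```
locals {
  # Read the public key from the default OpenSSH location
  ssh_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}
```

You'd then reference it as local.ssh_key instead of var.ssh_key in the resource definition.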

Now you can define the VM itself.

VM Definition

Below these definitions, we can start defining our VM:

resource "proxmox_vm_qemu" "proxmox_vm" {
  count       = 1
  name        = "tf-vm-${count.index}"
  target_node = "$NODETOBEDEPLOYED"
  clone       = "debian-cloudinit"
  os_type     = "cloud-init"
  cores       = 4
  sockets     = "1"
  cpu         = "host"
  memory      = 2048
  scsihw      = "virtio-scsi-pci"
  bootdisk    = "scsi0"

  disk {
    id           = 0
    size         = 20
    type         = "scsi"
    storage      = "data2"
    storage_type = "lvm"
    iothread     = true
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # Cloud-init settings
  ipconfig0 = "ip=10.10.10.15${count.index + 1}/24,gw=10.10.10.1"
  sshkeys   = <<EOF
${var.ssh_key}
EOF
}

Remember to change the $NODETOBEDEPLOYED entry to the name of the node where the VM will be deployed, and the storage entries (data2 and lvm here) to whatever storage resource you'll be using.

Let’s explain the resource definition. The main entries that you should take into account are:

count     <- The number of VMs to be created
name      <- The VM name; "${count.index}" lets you create more
             than one VM and numbers them from zero,
             e.g.: tf-vm-0, tf-vm-1, tf-vm-2, etc.
cores     <- The number of cores the VM will have
memory    <- The amount of RAM the VM will have
disk      <- The disk definitions for the VM; scale the size here.
network   <- The network bridge definition to be used.
ipconfig0 <- The IP for the VM; "${count.index}" again lets you
             create more than one VM and counts up from there,
             e.g.: 10.10.10.151, 10.10.10.152, etc.

Running Terraform

Terraform uses 3 main stages to run:

  • Init - Initializes Terraform and downloads the plugins required to run
  • Plan - Performs the planning for the deployment using the tf file you've defined. It focuses on the calculations for the deployment and on resolving conflicts with infrastructure that already exists, and it shows you all the changes, additions, and deletions to be performed.
  • Apply - After the planning stage, this stage applies the changes to the infrastructure. It gives you a summary of the changes, additions, and/or deletions to be made and asks for confirmation before committing them.
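Put together, the whole cycle looks like this when run from the project folder (saving the plan to a file named planfile is optional, but it guarantees that apply executes exactly what you reviewed):

```shell
terraform init                  # initialize and download the provider plugins
terraform plan -out=planfile    # calculate and save the execution plan
terraform apply "planfile"      # apply exactly the saved plan
```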

Init

While on the project folder run:

terraform init

As stated before, this initializes Terraform and installs the plugins needed for the project. The output should be as follows:

terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Plan

This step will take care of all of the calculations that need to be run and conflict resolution with the infrastructure that might already be deployed.

user@host:~$ terraform plan -out=planfile
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
# proxmox_vm_qemu.proxmox_vm[0] will be created
  + resource "proxmox_vm_qemu" "proxmox_vm" {
      + agent        = 0
      + balloon      = 0
      + boot         = "cdn"
      + bootdisk     = "scsi0"
      + clone        = "debian-cloudinit"
      + clone_wait   = 15
      + cores        = 4
      + cpu          = "host"
      + force_create = false
      + full_clone   = true
      + hotplug      = "network,disk,usb"
      + id           = (known after apply)
      + ipconfig0    = "ip=10.10.10.151/24,gw=10.10.10.1"
      + memory       = 2048
      + name         = "tf-vm-0"
      + numa         = false
      + onboot       = true
      + os_type      = "cloud-init"
      + preprovision = true
      + scsihw       = "virtio-scsi-pci"
      + sockets      = 1
      + ssh_host     = (known after apply)
      + ssh_port     = (known after apply)
      + sshkeys      = <<~EOT
              ssh-rsa ...
        EOT
      + target_node  = "pmx-01"
      + vcpus        = 0
      + vlan         = -1
+ disk {
          + backup       = false
          + cache        = "none"
          + format       = "raw"
          + id           = 0
          + iothread     = true
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + replicate    = false
          + size         = "20"
          + storage      = "data2"
          + storage_type = "lvm"
          + type         = "scsi"
        }
+ network {
          + bridge    = "vmbr0"
          + firewall  = false
          + id        = 0
          + link_down = false
          + model     = "virtio"
          + queues    = -1
          + rate      = -1
          + tag       = -1
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: planfile
To perform exactly these actions, run the following command to apply:
    terraform apply "planfile"

The plan states that a new resource will be created on the target node: pmx-01 (which is the node I’m using on my lab).

After you check the plan and everything seems to be all right, apply it.

Apply

To apply the Terraform plan, just run:

terraform apply

This will give you the summary from the plan and prompt for confirmation; type: yes and it'll do its bidding.

When it’s done the output should be as follows:

user@host:~$ terraform apply
...
...
...
yes
...
...
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate

Final Thoughts

This example should work as a starting point for Terraform interacting with Proxmox.

Take into account that you can destroy the VM by changing the count in the main.tf file to zero and going through the plan and apply stages again.
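Alternatively, Terraform has a dedicated command that tears down everything the project manages, prompting for confirmation just like apply does:

```shell
# Preview the deletions, then ask for a "yes" before destroying
terraform destroy
```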

Also, you can split the main.tf file into different files so things stay organized when you decide to extend the infrastructure with different machine role definitions and different configurations for each of them.
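For instance, a split project might look like this (the file names are just a suggestion; Terraform reads every .tf file in the directory):

```
tfProxmox
|- provider.tf    <- connection profile for the Proxmox node
|- variables.tf   <- ssh_key and other shared variables
|- web.tf         <- one file per machine role
|- db.tf
```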

I’ve also uploaded the file on my GitHub here in case you just want the file.

Thanks for reading.

New Additions

As some folks have reported, there can be issues getting the Proxmox plugin installed on some machines.

I've set up a Docker image that's ready to go. The repo is located at:

https://github.com/galdorork/terragrunt-proxmox-provisioner

And here’s the link to the Image on Docker’s public registry:

https://hub.docker.com/r/galdor1/terragrunt-proxmox-provisioner