One recurring pattern that I’ve seen over the past few years is that organisations who adopt public cloud build out processes and workflows that allow them to build and deploy in a highly automated and reliable manner. Once they have things running more or less smoothly, they turn their attention to their on-premises datacenter and begin considering how they can apply the lessons they learned in public cloud to that environment.
One of the more immediate ways to do this is to adopt the consumption model of infrastructure-as-code and take a declarative approach to on-premises infrastructure.
There are a number of great “getting started” posts out there for Terraform on vSphere, but inevitably they use a series of bash commands, run via provisioners, to handle post-provisioning configuration. This isn’t a bad approach, but it isn’t representative of the way you would handle image bootstrapping in a public cloud. It also places the onus of dependency resolution on the person writing the code, rather than delegating it to a system that can resolve those dependencies programmatically. To sum up, the provisioner approach requires you to handle post-provisioning configuration imperatively, when everything else we are doing is handled declaratively.
This challenge is uniformly addressed by the public cloud vendors through the adoption of cloud-init as a pseudo-standard. Over the next couple of posts we are going to explore how we can take this standard from the public cloud and apply it to virtual machines running within the four walls of your datacenter, provisioned onto vSphere using Terraform.
What are we going to do?
This post is really just a “hello world” example. To demonstrate some simple capabilities, we will cover the following:
- Creating an Ubuntu template (née image) with cloud-init installed, using Packer
- Writing files that contain both user data and metadata
- Injecting configuration files into the guest with cloud-init
- Profit!
Prerequisites
- Perhaps it is no surprise that to use cloud-init on vSphere, you are going to need a template that has cloud-init installed. I have included a Packer definition that will build this for you. It also handles the installation of the next prerequisite.
- The relationship between the cloud-init OVF datasource and the default Perl-based vSphere customisation is… tricky. Thankfully, I am not the first person to recognise this, and a datasource that leverages guestinfo has been open-sourced by VMware. This hasn’t been merged into the official cloud-init bundle yet, but the repo includes instructions for installing it into your template, and in case you missed it, the Packer definition linked above will install it for you. A sketch of what that installation step can look like follows this list.
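By way of illustration, here is a minimal sketch of that provisioning step as a Packer HCL2 build block. The source name, the apt commands and the install script URL are illustrative assumptions; the repo’s actual Packer definition, and the datasource repo’s README, are the authoritative versions.

```hcl
# Sketch only: assumes a "vsphere-iso" source named "ubuntu" defined elsewhere,
# an apt-based guest, and the install script location described in the
# vmware/cloud-init-vmware-guestinfo README.
build {
  sources = ["source.vsphere-iso.ubuntu"]

  provisioner "shell" {
    inline = [
      # make sure cloud-init itself is present in the template
      "sudo apt-get update",
      "sudo apt-get install -y cloud-init",
      # add the guestinfo datasource; take the exact command from the repo's README
      "curl -sSL https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo/master/install.sh | sudo sh -",
    ]
  }
}
```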
The Example
To begin, we are going to work through some fairly simple ideas, leveraging the code in this repo:
- Configure our network interface with a static IP address; and
- Create a user, and assign a public SSH key for authentication.
Cloud Init
You can find a ton of examples for cloud-init on their readthedocs site.
We’ve pulled straight from these docs to populate the metadata.yaml file in the templates directory with the network configuration.
```yaml
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: false
      addresses:
        - 10.0.0.200/24
      gateway4: 10.0.0.1
      nameservers:
        addresses:
          - 192.168.1.5
local-hostname: ubuntu-01
instance-id: ubuntu01
```
Similarly, the userdata.yaml file in the templates directory captures a subset of the possible configuration options: it creates a new user during the provisioning process and installs jq.
```yaml
#cloud-config
users:
  - name: grant
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8rqxon4hRyV5cLNZczuJTe8dsZ33hpWHDU993r4iiY3t9bXqfmIHlIZ7dTL93nlvsgzVdOYMVGMOHMg/a1ZK0VRoKTS5BBhBGJejjDUfWRAtedZbM9JE5HHpks+L+nf8cOM14Os+Q3BV+z4MjYfIK5ZbV0IvUaY0kscQcE8cZoOTC2hHu/MPDneKJxG+HRQJfvqvnWz69/EXyi9iqtmOn0Xy9905qtbPNlDs1c4qF+zZ1qQCkMYP0Z4AVvLaPEJZlPmDnGqz5s1vVb130aXe1A11eq4RwgvZRxXW8i88pKqCGPuLRh7anqvSI15SLpA2KWvu7wD5CvhTisc/6TfVf
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
packages:
  - jq
```
Getting these files into the guest OS requires us to add a short stanza to the machine resource as follows (the repo contains the full code).
```hcl
extra_config = {
  "guestinfo.metadata"          = base64encode(file("${path.module}/templates/metadata.yaml"))
  "guestinfo.metadata.encoding" = "base64"
  "guestinfo.userdata"          = base64encode(file("${path.module}/templates/userdata.yaml"))
  "guestinfo.userdata.encoding" = "base64"
}
```
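For context, here is a minimal sketch of where that stanza sits within the vsphere_virtual_machine resource. The data source names and sizing are illustrative assumptions; the repo contains the full, working configuration.

```hcl
resource "vsphere_virtual_machine" "vm" {
  name             = "ubuntu-01"
  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 2048
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks.0.size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }

  # the guestinfo stanza shown above
  extra_config = {
    "guestinfo.metadata"          = base64encode(file("${path.module}/templates/metadata.yaml"))
    "guestinfo.metadata.encoding" = "base64"
    "guestinfo.userdata"          = base64encode(file("${path.module}/templates/userdata.yaml"))
    "guestinfo.userdata.encoding" = "base64"
  }
}
```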
You will note that not only do we encode the file contents, but we also specify the encoding type, per the datasource documentation.
As a personal note, something I love about Terraform is its maturity. A really simple demonstration of this is the built-in functions it gives you for things like reading files or encoding their contents as base64.
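Another small example of that maturity: if your YAML files grow large, Terraform’s base64gzip() function can compress them before encoding. This sketch assumes the guestinfo datasource accepts a gzip+base64 encoding value, so check the datasource documentation before relying on it.

```hcl
extra_config = {
  # assumption: "gzip+base64" is an accepted encoding for the guestinfo datasource
  "guestinfo.userdata"          = base64gzip(file("${path.module}/templates/userdata.yaml"))
  "guestinfo.userdata.encoding" = "gzip+base64"
}
```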
Using this content
To make use of this yourself, you will need to edit the files in the templates directory, as well as create your own terraform.tfvars file using the example as a starting point. I would recommend waiting for the next blog post, where I will walk you through the process of injecting variables into the userdata and metadata files.
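If you want a rough idea of where that post is heading, the general shape looks something like the sketch below, using Terraform’s templatefile() function. The variable names are purely hypothetical and not part of the current repo.

```hcl
extra_config = {
  # templatefile() renders ${...} placeholders in the YAML before it is encoded;
  # ip_address and hostname are illustrative variables, not part of the current repo
  "guestinfo.metadata" = base64encode(templatefile("${path.module}/templates/metadata.yaml", {
    ip_address = var.ip_address
    hostname   = var.hostname
  }))
  "guestinfo.metadata.encoding" = "base64"
}
```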
Conclusion
At this point, you should be able to see how simple it is to use cloud-init to bootstrap workloads on vSphere, even if we haven’t yet demonstrated how powerful it can be. Hold on tight; the next post is not too far away!