Introducing Virtual Machine Provisioning via Kubernetes with VM Service

April 26, 2021

Introduction

VMware as a company has always done one thing very well: virtual machines. We believe, as does the wider community, that modern applications will not exist in isolation as Kubernetes-only apps; rather, they will be made up of a mixture of containers, functions, VMs, and as-a-service offerings.

However, the way in which VMs have historically been provisioned is somewhat the antithesis of the desired-state model that Kubernetes stands for, which makes those parts of a solution much less attractive overall than converting everything to containers.

With vSphere 7.0 U2a we are changing that. I am happy to introduce VM Service and VM Operator. These two components work in unison on vSphere with Tanzu to offer Kubernetes users a VM provisioning workflow never before seen on vSphere.

VM Service has a whole bunch of goodies inside: declarative, Kubernetes CRD-based VM provisioning; automatic load balancing across multiple VMs; industry-standard cloud-init based guest OS customisation for dev users; and administrative controls over VM Images and VM Classes (think t-shirt sizing) for the vSphere administrator.

This is our first release of VM Service and the VM Operator (v1alpha1) so we would very much appreciate your feedback in helping to shape the future of this feature!

Without further ado, let's dive in.

What does it do?

VM Service allows Kubernetes users to provision VMs and their guest OSes declaratively, that is to say in a desired-state manner, just like anything else that is managed by Kubernetes.

How does it work?

VM Service is made up of two components, a vSphere side component and a Kubernetes side component.

The vSphere side is built right into vCenter and lets you manage VM Images (Content Libraries) and VM Classes (VM sizing). The Kubernetes side is called VM Operator; it creates and services the Kubernetes Custom Resources (CRs/CRDs), which we'll get into later, and tells K8s how to talk to vSphere.

Additionally, we are happy to announce that we are releasing the VM Operator as a completely open-source component on GitHub, and you can find that here.

So, let's have a look at each of these components in detail and see what makes it all tick!

VM Service in vSphere

The vSphere side of the VM Service is very straightforward: it is created as part of your vSphere with Tanzu namespaces once you enable the service in the new Workload Management "Services" tab:

Workload Management Services

For more details on setting up VM Service - check out the documentation here.

Inside the VM Service, you'll find where you can manage the components, namely the VM Images (Content Libraries) and VM Classes (t-shirt sizes).

VM Service Overview

If we take a look at the VM Classes tab, it'll give you a better feel for what these actually are. Essentially, at this point, they are CPU and memory presets, as well as resource reservations, from which developers will be able to create VMs. There is a whole bunch of presets available out of the box to get you started that should cover most use cases; however, if you want to create your own, you can do that too.

VM Classes
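Under the hood, each class also surfaces in Kubernetes as a VirtualMachineClass custom resource. A custom class would look roughly like the sketch below; the field layout matches the spec output we'll query later, but the class name and values here are my own illustrative choices, and note that in this first release classes are actually managed by the vSphere administrator in the vCenter UI, so treat this purely as a picture of the shape of the object:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: custom-medium        # hypothetical class name
spec:
  hardware:
    cpus: 2
    memory: 8Gi
  policies:
    resources:
      requests:              # reservations applied at the vSphere layer
        cpu: "0"
        memory: "0"
      limits:
        cpu: "0"
        memory: "0"
```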

Next up, Content Libraries are where the images available to developers are stored. When a developer deploys a VM via a VM Image, the images available to them are pulled from these content libraries; we'll take a closer look at that in a bit.

As of this release of VM Service and VM Operator API v1alpha1, we are only supporting a subset of images that we have built to work with VM Service; this will be changing to be more open in the near future. The current image you can deploy from can be found here.

Content Libraries

Finally, once you've had a look around the VM Service UI and created the Content Library, the only thing left to do is create a namespace and assign it some VM Classes and Content Libraries for the K8s users to deploy from. As you can see, I've created a vSphere namespace called vm-operator and assigned it all the preset VM Classes and the pre-built VM Image Content Library.

Namespace Assignment

From here, we can now go have a look at the K8s user's experience of VM Service.

VM Operator in Kubernetes

As I said at the beginning, there are two components of VM Service, and the second of these lives in Kubernetes, so we're going to take a look at that now. The VM Operator creates Kubernetes CRDs for a few different types that allow K8s users to discover and define the resources they need from the Operator. The most important types they would be interested in are VirtualMachine, VirtualMachineImage, VirtualMachineClass, and VirtualMachineService. We'll take a look at each of these in turn.

VirtualMachineClasses

The VirtualMachineClass type (vmclass for short) will show the K8s users what Classes they are able to choose to deploy their VMs against, for example: best-effort-2xlarge, guaranteed-small, etc. When they use kubectl to query these, they'll see the Classes that you assigned to them in the vCenter UI:

❯ kubectl get vmclass
NAME                  CPU   MEMORY   AGE
best-effort-2xlarge   8     64Gi     7d21h
best-effort-4xlarge   16    128Gi    7d21h
best-effort-8xlarge   32    128Gi    7d21h
best-effort-large     4     16Gi     7d21h
best-effort-medium    2     8Gi      7d21h
best-effort-small     2     4Gi      7d21h
best-effort-xlarge    4     32Gi     7d21h
best-effort-xsmall    2     2Gi      7d21h


They can also ask for more info from these types by describing the object. Below, you can see the entire specification of the vmclass object type, expressing the CPUs and RAM available as well as any resource guarantees or limits imposed at the vSphere layer on VMs provisioned from these classes.

❯ kubectl get vmclass best-effort-medium -o jsonpath='{.spec}' | jq
{
  "hardware": {
    "cpus": 2,
    "memory": "8Gi"
  },
  "policies": {
    "resources": {
      "limits": {
        "cpu": "0",
        "memory": "0"
      },
      "requests": {
        "cpu": "0",
        "memory": "0"
      }
    }
  }
}

VirtualMachineImages

The VirtualMachineImage type (vmimage for short) allows the K8s user to query what images are available to them; in the Content Library we are distributing with this release, there is one image: CentOS Stream 8. You'll also notice that when we query the available images, the output notes whether each image is supported or not. By default, only the distributed images are supported in the first release; as I mentioned at the start, this will be changing soon, and we'd love feedback on what images would be most useful to you!

❯ kubectl get vmimage -o wide
NAME                                          VERSION   OSTYPE              FORMAT   IMAGESUPPORTED   AGE
centos-stream-8-vmservice-v1alpha1.20210222             centos8_64Guest     ovf      true             7d1h

Again, if we describe one of these images, we can get some very useful information from them, like which OVF Environment keys are supported, allowing you to do one of the coolest things with VM Service: Guest OS Customisation.

❯ kubectl get vmimage centos-stream-8-vmservice-v1alpha1.20210222 -o jsonpath='{.spec}' | jq
{
  "imageSourceType": "Content Library",
  "osInfo": {
    "type": "centos8_64Guest",
    "version": "8"
  },
  "ovfEnv": {
    "hostname": {
      "default": "centosguest",
      "key": "hostname",
      "type": "string"
    },
    "instance-id": {
      "default": "id-ovf",
      "key": "instance-id",
      "type": "string"
    },
    "password": {
      "key": "password",
      "type": "string"
    },
    "public-keys": {
      "key": "public-keys",
      "type": "string"
    },
    "seedfrom": {
      "key": "seedfrom",
      "type": "string"
    },
    "user-data": {
      "key": "user-data",
      "type": "string"
    }
  },
  "productInfo": {
    "product": "Centos Stream 8 (64-bit) For VMware VM Service"
  },
  "type": "ovf"
}

As you can see from the above output, there are a few keys that we can populate to customise the Guest OS, but to me the most interesting is user-data. This allows native customisation of the Guest OS using the cloud-init specification, just as you would see on public clouds, meaning that if you have a cloud-init user-data template that works there, it will work on vSphere too! We'll check this out in more detail in a bit.

The other main OVF key of note is hostname, which, you guessed it, sets the hostname of the provisioned VM. Everything else can be handled through user-data, including passwords, SSH keys, users, groups, package installations, and running arbitrary commands; basically, anything.
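For instance, the package-installation and arbitrary-command capabilities mentioned above look like this in a user-data file. This is a minimal sketch using standard cloud-init directives; the package names and commands are just examples, not part of the distributed image:

```yaml
#cloud-config
## Install packages from the distro's repositories at first boot
packages:
  - nginx
  - git
## Run arbitrary commands once provisioning completes
runcmd:
  - systemctl enable --now nginx
  - echo "provisioned by cloud-init" > /etc/motd
```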

VirtualMachine

Now, on to the main event: the VirtualMachine type (CLI shorthand: vm). Of the types we've seen so far, this is the one the K8s user can actually create, and that's the whole point! So, what I'm going to do is give you some example manifests you can use to try out the VM provisioning for yourselves, including some Guest OS customisation via cloud-init.

I've created a GitHub repo here with a bunch of example manifests for you to play around with. I'll take the CentOS one as an example, and we'll take it apart here to see how it works.

# The API version and type for VM Service in the current release
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
# The name of the VM object in kubernetes and what vSphere namespace to deploy it into
  name: centos-cloudinit-example
  namespace: vm-operator
spec:
# What network the VM should be attached to
  networkInterfaces:
  - networkName: "primary"
    networkType: vsphere-distributed
# What VM Class the VM should be provisioned with
  className: best-effort-small
# What Image from the Content Library the VM should be provisioned with
  imageName: centos-stream-8-vmservice-v1alpha1.20210222
# What state the VM should be in after being provisioned
  powerState: poweredOn
# What StorageClass the VM should be provisioned against
  storageClass: wcpglobal-storage-profile
# Any Additional metadata to pass into the VM (such as cloud-init, hostname, etc) here,
# we are using a ConfigMap to store this information rather than in the VM object itself
  vmMetadata:
    configMapName: centos-cloudinit-test
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
# The name of the ConfigMap and the vSphere namespace to create it in
    name: centos-cloudinit-test
    namespace: vm-operator
# The data (OVF Keys and their values) to pass into the VM at provision time
data:
# This passes in a base64 encoded version of the cloud-init user-data file you will have
# built for the VM - the example below can be easily decoded to readable text by running
# echo 'Big Long String Here' | base64 -d
# on your CLI
  user-data: |
    I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgY2VudG9zOlZNd2FyZTEhCiAgICBleHBpcmU6IGZhbHNlCmdyb3VwczoKICAtIGRvY2tlcgp1c2VyczoKICAtIGRlZmF1bHQKICAtIG5hbWU6IGNlbnRvcwogICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgLSBzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFEQVFBQkFBQUJnUURDckxaV3RBbVRkNEY0WkVBVzIvOUR2VFY5RHNaeUZIVU5BSTk2SENGM2t4Q2M3ZlFmelp6YnllS09GTXFFQ25uZURlczFjbkkvVDVadlVrQ01qUzBkdldnWFZxdzFiclFSbVBmRzJPS1pUalZkYlpJaXE3SG9KdVd4REpmODFKZmpkY2pRQnBxNHZYNzdQNk1FekpxVVlNM2x0d3ZFbzdTUjVMMGIvNG5XUXNDWGxrQk9lZlJiaWowaVpPcjdHLy9rdVNIMG9JQW1jZlZ3elFHOVMxYUk4dHNlWm8reDgzdEViZDRiTzd4TlFQNzVySXU1bUZGUTlxK3IwYlduU2RDa1RxMUt0bjJEazJWYjRxRDdKN2c3UUlTTVRlVDVHandsTms5SHhzMVNVREsxS25NcFNGdjhjbld5VDZMZm5mSTFJTHczdHVEMjVvN0JNYkxSWGdFZXd2V2o1SExZQml0R0JZN2haNmtsV2x5QWRLS2Z6ckdTNnVWaUpMc2ZaNnlkd0lNcnM1L3cySXBKUi9xNUZKRjE3d1kvM2RPTVk1L3QvbDQ3aFpOdVRPM2hsV2pzSXQydzFZQkdRbmF4dnFFS1F5M2tyWE1MZ1JOS2NsMkhnTW1rdnJSMHd0YWJJTzk0Sm5DUXlWaUtEM204dkM2RVBqZ0l2V3RCYU4yVFA3TT0gbXlsZXNnQG15bGVzZy1hMDIudm13YXJlLmNvbQogICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgZ3JvdXBzOiBzdWRvLCBkb2NrZXIKICAgIHNoZWxsOiAvYmluL2Jhc2gKbmV0d29yazoKICB2ZXJzaW9uOiAyCiAgZXRoZXJuZXRzOgogICAgICBlbnMxOTI6CiAgICAgICAgICBkaGNwNDogdHJ1ZQ==
# The hostname to assign to the VM when it is provisioned
  hostname: centos-cloudinit-example

Networks

The network name to fill into the VM spec can be retrieved by querying the available networks in K8s (note: the networkName stanza is not required if you use NSX-T, since NSX-T automatically creates segments for you; simply set networkType: nsx-t):

❯ kubectl get network
NAME      AGE
primary   7d2h

We can, as usual, garner a bit more information by describing the object's spec:

❯ kubectl get network -o jsonpath='{.items[0].spec}' | jq
{
  "providerRef": {
    "apiGroup": "netoperator.vmware.com",
    "apiVersion": "v1alpha1",
    "kind": "VSphereDistributedNetwork",
    "name": "primary"
  },
  "type": "vsphere-distributed"
}
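For completeness, on an NSX-T based setup the note above means the network stanza in the VM spec reduces to just the type, since the segment is created for you. A sketch of that fragment, per the note above:

```yaml
spec:
  networkInterfaces:
  # No networkName needed; NSX-T creates the segment automatically
  - networkType: nsx-t
```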

cloud-init customisation

When it comes to customising Guest OSes, the industry standard has become cloud-init. Supported by every major Linux distro out there, it provides an easy way to get just about everything done on a VM at provision time. We've built support for cloud-init into the VM Operator, and as such, customising Guest OSes is as trivial as a few (more) lines of YAML.

Again, I've built out a few example user-data templates in the GitHub repo, and we're going to take one apart here.

#cloud-config
## Required syntax at the start of user-data file
## Create a user called centos and give a password of VMware1! and set it to not expire
chpasswd:
    list: |
      centos:VMware1!
    expire: false
## Create a docker user group on the OS
groups:
  - docker
users:
## Create the default user for the OS
  - default
## Customise the centos user created above by adding an SSH key that's allowed to log in to the VM
## In this case, it's the SSH public key of my laptop
  - name: centos
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCrLZWtAmTd4F4ZEAW2/9DvTV9DsZyFHUNAI96HCF3kxCc7fQfzZzbyeKOFMqECnneDes1cnI/T5ZvUkCMjS0dvWgXVqw1brQRmPfG2OKZTjVdbZIiq7HoJuWxDJf81JfjdcjQBpq4vX77P6MEzJqUYM3ltwvEo7SR5L0b/4nWQsCXlkBOefRbij0iZOr7G//kuSH0oIAmcfVwzQG9S1aI8tseZo+x83tEbd4bO7xNQP75rIu5mFFQ9q+r0bWnSdCkTq1Ktn2Dk2Vb4qD7J7g7QISMTeT5GjwlNk9Hxs1SUDK1KnMpSFv8cnWyT6LfnfI1ILw3tuD25o7BMbLRXgEewvWj5HLYBitGBY7hZ6klWlyAdKKfzrGS6uViJLsfZ6ydwIMrs5/w2IpJR/q5FJF17wY/3dOMY5/t/l47hZNuTO3hlWjsIt2w1YBGQnaxvqEKQy3krXMLgRNKcl2HgMmkvrR0wtabIO94JnCQyViKD3m8vC6EPjgIvWtBaN2TP7M= mylesg@mylesg-a02.vmware.com
## Add the centos user to the sudo group and allow it to escalate to sudo without a password
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
## Set the default shell of the user to bash
    shell: /bin/bash
## Enable DHCP on the default network interface provisioned in the VM
network:
  version: 2
  ethernets:
      ens192:
          dhcp4: true

So, all in all, pretty readable and simple. I highly encourage you to check out the cloud-init docs, or the blog I wrote a while back on my personal site, for far more comprehensive examples, including how to build a complete K8s node from scratch using cloud-init.

A testament to just how much easier this is now with VM Service is comparing the massive four-part series I wrote on doing this manually from vSphere with admin creds to a simple YAML manifest, with RBAC, quotas, and security out of the box.

With our cloud-init user-data file built, we simply need to base64-encode it and paste the output into the ConfigMap, as in the YAML example above.

❯ cat cloud-init/centos-user-data | base64
I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgY2VudG9zOlZNd2FyZTEhCiAgICBleHBpcmU6IGZhbHNlCmdyb3VwczoKICAtIGRvY2tlcgp1c2VyczoKICAtIGRlZmF1bHQKICAtIG5hbWU6IGNlbnRvcwogICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgLSBzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFEQVFBQkFBQUJnUURDckxaV3RBbVRkNEY0WkVBVzIvOUR2VFY5RHNaeUZIVU5BSTk2SENGM2t4Q2M3ZlFmelp6YnllS09GTXFFQ25uZURlczFjbkkvVDVadlVrQ01qUzBkdldnWFZxdzFiclFSbVBmRzJPS1pUalZkYlpJaXE3SG9KdVd4REpmODFKZmpkY2pRQnBxNHZYNzdQNk1FekpxVVlNM2x0d3ZFbzdTUjVMMGIvNG5XUXNDWGxrQk9lZlJiaWowaVpPcjdHLy9rdVNIMG9JQW1jZlZ3elFHOVMxYUk4dHNlWm8reDgzdEViZDRiTzd4TlFQNzVySXU1bUZGUTlxK3IwYlduU2RDa1RxMUt0bjJEazJWYjRxRDdKN2c3UUlTTVRlVDVHandsTms5SHhzMVNVREsxS25NcFNGdjhjbld5VDZMZm5mSTFJTHczdHVEMjVvN0JNYkxSWGdFZXd2V2o1SExZQml0R0JZN2haNmtsV2x5QWRLS2Z6ckdTNnVWaUpMc2ZaNnlkd0lNcnM1L3cySXBKUi9xNUZKRjE3d1kvM2RPTVk1L3QvbDQ3aFpOdVRPM2hsV2pzSXQydzFZQkdRbmF4dnFFS1F5M2tyWE1MZ1JOS2NsMkhnTW1rdnJSMHd0YWJJTzk0Sm5DUXlWaUtEM204dkM2RVBqZ0l2V3RCYU4yVFA3TT0gbXlsZXNnQG15bGVzZy1hMDIudm13YXJlLmNvbQogICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgZ3JvdXBzOiBzdWRvLCBkb2NrZXIKICAgIHNoZWxsOiAvYmluL2Jhc2gKbmV0d29yazoKICB2ZXJzaW9uOiAyCiAgZXRoZXJuZXRzOgogICAgICBlbnMxOTI6CiAgICAgICAgICBkaGNwNDogdHJ1ZQ==
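It's worth sanity-checking the encoding before pasting it into the ConfigMap by round-tripping it back through base64. A minimal sketch, assuming GNU coreutils (the -w0 flag disables line wrapping and isn't available on BSD/macOS base64) and a throwaway file path:

```shell
# Write a tiny user-data file (placeholder content for illustration)
cat > /tmp/centos-user-data <<'EOF'
#cloud-config
hostname: centos-cloudinit-example
EOF

# Encode without line wrapping (-w0 is GNU coreutils specific)
base64 -w0 /tmp/centos-user-data > /tmp/centos-user-data.b64

# Decode it back and confirm it matches the original byte-for-byte
base64 -d /tmp/centos-user-data.b64 > /tmp/centos-user-data.decoded
diff /tmp/centos-user-data /tmp/centos-user-data.decoded && echo "round-trip OK"
```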

Provisioning the VM

We've got our YAML built, so all we need to do is apply it and wait for the VM to spin up. If you're following along with the GitHub repo, you should be able to apply this directly:

❯ kubectl apply -f manifests/cloud-init-based/cloudinit-centos.yaml
virtualmachine.vmoperator.vmware.com/centos-cloudinit-example created
configmap/centos-cloudinit-test configured

And we can watch the resources to see when they're ready:

❯ kubectl get vm -o wide
NAME                       POWERSTATE   CLASS               IMAGE                                         AGE
centos-cloudinit-example   poweredOn    best-effort-small   centos-stream-8-vmservice-v1alpha1.20210222   32m

And we can dig a little further to retrieve the IP address:

❯ kubectl get vm centos-cloudinit-example -o jsonpath='{.status.vmIp}'
192.168.128.7

So clearly the DHCP request worked. Let's log in: if all has gone well, I shouldn't have to enter a password, as my SSH key is already trusted from the cloud-init customisation, and I should be able to elevate to sudo without a password:

❯ ssh centos@192.168.128.7
The authenticity of host '192.168.128.7 (192.168.128.7)' can't be established.
ECDSA key fingerprint is SHA256:A1/hsamPufdSysbPv63ODUC5/XHUhhdNcLjio1JA6aM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.128.7' (ECDSA) to the list of known hosts.
[centos@centos-cloudinit-example ~]$ sudo bash
[root@centos-cloudinit-example centos]#

Additionally, there should be a docker group present on the VM:

[centos@centos-cloudinit-example ~]$ groups
centos docker sudo

Success! Now, these are just very basic steps, but you can go much further with cloud-init. A very common pattern I've seen is using cloud-init to install prerequisites or bootstrap the VM for a config management system like Ansible, Chef, Puppet, or any other orchestration and management tool; cloud-init does the day-0 stuff, and your config management system does the rest.
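As a sketch of that hand-off pattern, the day-0 bootstrap is just a couple of extra user-data lines. The repository URL and playbook name below are made-up placeholders; substitute your own:

```yaml
#cloud-config
## Install the config management tool at first boot
packages:
  - ansible
## Then hand off: pull and apply a playbook from your own repo
runcmd:
  # Placeholder URL and playbook - replace with your real bootstrap repo
  - ansible-pull -U https://example.com/your-org/bootstrap.git site.yml
```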

If we have a look back on the vSphere side, we can see the VM successfully provisioned with the expected resource settings.


VirtualMachineServices

The last feature we'll look at is VirtualMachineService (CLI shorthand: vmservice). This allows you to create a load balancer that fronts one or many VMs. Again, in the GitHub repo, I've created an example deployment that uses everything we've looked at so far: cloud-init, VM objects, and now a vmservice object to front two CentOS VMs running NGINX, each with its own unique content.

So, let's take a look at the manifest and see how it works:

# The API version and type
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
# The name of the load balancer and the vSphere namespace we're deploying it into
  name: centos-lb
  namespace: vm-operator
spec:
# This selector tells the VMService how to match backend VMs across which to load
# balance, in this case - it's looking for VMs with a label of "app: lb-app" which
# the VMs in the example are labelled with
  selector:
    app: lb-app
# This tells the VM Operator to deploy a LoadBalancer service type
  type: LoadBalancer
  ports:
# Expose port 80 on the load balancer to port 80 on each of the VMs
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80


Not too much going on there: a fairly simple port 80 mapping right through to the VMs. The magic part comes from the label selector, which tells the VM Service how to choose which VMs are backed by the load balancer; in this case, it will look for any VMs with that label.
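For reference, the backend VMs simply need to carry the matching label in their own metadata, along these lines (a fragment sketched from the VM example earlier; the spec fields are elided):

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: lb-centos-1
  namespace: vm-operator
  labels:
    app: lb-app    # matches the vmservice selector above
spec:
  # ...className, imageName, networkInterfaces, etc., as in the earlier VM example
```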

If we check the VMs we have in our environment that match that label, we can see there are two:

❯ kubectl get vm -l app=lb-app
NAME          POWERSTATE   AGE
lb-centos-1   poweredOn    15m
lb-centos-2   poweredOn    15m


If we query the vmservice type, we can see the one we created above is present. Additionally, if we query the standard K8s Service and Endpoints objects, we can see the load balancer service that was created for us, its external IP address, and the IPs of the VMs backing that service.

❯ kubectl get vmservice
NAME        TYPE           AGE
centos-lb   LoadBalancer   6m53s

❯ kubectl get svc,endpoints
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/centos-lb   LoadBalancer   172.24.170.61   192.168.0.2   80:32395/TCP   6m58s

NAME                  ENDPOINTS                             AGE
endpoints/centos-lb   192.168.128.21:80,192.168.128.22:80   6m58s

Finally, if we curl the load balancer IP address, we can see that the HTML served back changes, so clearly the traffic is being round-robined between the backend VMs.

❯ curl 192.168.0.2
lb-centos-2
❯ curl 192.168.0.2
lb-centos-1

Equally, if you add more VMs or take them away, they will automatically be added to, or removed from, the load balancer, meaning that no manual reconfiguration of the load balancer objects is needed, thanks to our use of label selectors.

Conclusion

That about brings us to the end of our whistle-stop tour of the new VM Service and VM Operator in vSphere. As I mentioned at the start, we would love to hear your feedback on this feature: what you like, what you don't like, which OSes are most important to you and your organisation, pre-packaged apps distributed as OVAs, etc. Send Nikitha Suryadevara (product manager for the feature) and myself a note on Twitter, and it will help us guide the future of this feature!

For detailed how-to style info, check the official vSphere documentation. Also, don't forget to check out the VM Operator GitHub and give it a star to keep up to date with features and releases as they come out!

Until next time, happy VM provisioning!

For more info or questions on this, reach out to Myles on Twitter.


Myles Gray


Myles is a Staff Technical Marketing Architect focused on developer experience and building cloud native apps on the VMware Tanzu stack.