Supervisor Series: Ep 3. Deploying a 3-Tier Application with VM Service and TKG

Modern applications are versatile and ever-evolving, but one design principle remains constant: different workloads benefit from different deployment models. While some applications are better suited to a microservices architecture, broken into smaller, containerized units, others perform better as virtual machines. The true power of Supervisor comes from letting you deploy, run, and manage these different workload types on the same platform.

In this episode, we will look at deploying a 3-tier application using VM Service and Tanzu Kubernetes Grid.

The Database part of the application will be deployed as a Virtual Machine into a Supervisor Namespace, using VM Service.

The Backend and Frontend parts of the application will be deployed as Kubernetes pods into a Tanzu Kubernetes Grid (TKG) cluster. We have already uploaded the application images into a newly deployed Harbor repository in the previous episode.

[Diagram: 3-tier application architecture]

Configure a Supervisor Namespace

The first thing we need to do is to create a new namespace on our Supervisor cluster.

  • Navigate to Workload Management, click on Namespaces, and New Namespace.
  • Select Supervisor-1 and enter namespace-1 into the Name field.


  • Assign k8s-storage-policy Storage Policy to specify which datastore should be used by objects created in this namespace.
  • To configure VM Service, associate best-effort-small VM class and configure a Content Library which contains an Ubuntu VM image.

Note: We will use a custom Ubuntu 22.04 Jammy Jellyfish image we have created earlier. You can learn more about how to create your own images in this blog post: https://core.vmware.com/blog/vsphere-8-vmservice-bring-your-own-image-part-2

One of the new features creates a Tanzu Kubernetes Grid Service Content Library automatically during the Supervisor enablement process. This Content Library is automatically associated with every new namespace and, by default, is set to download content only when needed. If you wish to change the Content Library, you can do so under Workload Management -> Supervisor -> Configure -> General -> Tanzu Kubernetes Grid Service -> Content Library.


Deploy a MySQL DB VM using VM Service

Now that we have a namespace, we can proceed with deployment of the Database VM. We will use two files for this deployment:

  • cloud-config.yaml
  • mysql-vm.yaml

Let’s have a look at cloud-config.yaml first. This file contains the configuration details for our MySQL database, including user and database information, as well as the commands that should be executed on the VM during deployment. These commands install and configure our MySQL server.

cloud-config.yaml:

#cloud-config
ssh_pwauth: true

groups:
  - admingroup: [root,sys]

users:
  - name: dev
    gecos: Dev S. Ops
    lock_passwd: false
    passwd: $6$n/zJuy.x/O0oRKHp$sRK0wNmKkTRX26poRTVPIsXiz4u9SvVR2euzNV7ZXR9DTD.L3XgH0TgZZyxiGE1Mw.B6D8YcqCrLpwDCoRnBQ.
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, users, admin
    shell: /bin/bash

write_files:
  - content: |
       ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
    append: true
    path: /alteruser.txt

  - content: |
       CREATE USER 'devops'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
       CREATE DATABASE demo;
       GRANT ALL PRIVILEGES ON demo.* TO 'devops'@'%';

       CREATE TABLE demo.user (
         id INTEGER PRIMARY KEY AUTO_INCREMENT,
         username varchar(255) NOT NULL,
         password varchar(255) NOT NULL,
         UNIQUE (username)
       );


       CREATE TABLE demo.entry (
         id INTEGER PRIMARY KEY AUTO_INCREMENT,
         author_id INTEGER NOT NULL,
         created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
         title varchar(255) NOT NULL,
         body varchar(5400) NOT NULL,
         FOREIGN KEY (author_id) REFERENCES user (id)
       );

       INSERT INTO demo.user (username,password) VALUES('Jeremy','devops123');
       INSERT INTO demo.entry (author_id,title,body) VALUES(1,"Welcome to the request page, this is the first entry","This entry is owned by Jeremy and can only be modified by him. You can create your own post by registering and logging in!");

    append: true
    path: /init.sql

runcmd:
  - sudo apt update
  - sudo apt -y install mysql-server
  - sudo systemctl start mysql.service
  - sudo mysql < /alteruser.txt
  - mysql -u root -ppassword < /init.sql
  - sudo sed -i '0,/bind-address/s//#bind-address/' /etc/mysql/mysql.conf.d/mysqld.cnf
  - sudo systemctl restart mysql.service
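The last two runcmd steps open the database up for remote connections: the sed command comments out the first bind-address line in mysqld.cnf so MySQL stops listening only on localhost. The 0,/bind-address/ address range limits the substitution to the first matching line, which leaves the later mysqlx-bind-address line untouched. A minimal local sketch against a sample file (the file path and contents here are stand-ins for illustration):

```shell
# Create a sample file mimicking the relevant part of mysqld.cnf
cat > /tmp/mysqld-sample.cnf <<'EOF'
[mysqld]
bind-address = 127.0.0.1
mysqlx-bind-address = 127.0.0.1
EOF

# '0,/bind-address/' restricts the substitution to the range from the start
# of the file through the FIRST line matching /bind-address/, so only that
# line is commented out; the empty s// pattern reuses the same regex
sed -i '0,/bind-address/s//#bind-address/' /tmp/mysqld-sample.cnf

cat /tmp/mysqld-sample.cnf
# bind-address is now commented; mysqlx-bind-address is unchanged
```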

The second file, mysql-vm.yaml, is the Virtual Machine YAML manifest, defining the desired state of our DB VM. In it, we define the name, the namespace the VM should be deployed to, the image to use, and other configuration details. We will use CloudInit as the transport method.

The ConfigMap section contains the base64 encoded version of the cloud-config.yaml file.
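If you change cloud-config.yaml, the encoded user-data value needs to be regenerated. A minimal sketch (assuming GNU coreutils base64, where -w 0 disables line wrapping; a stub file stands in for the full cloud-config.yaml shown above):

```shell
# Stub cloud-config.yaml so this sketch is self-contained;
# in practice, use the full file shown above
printf '#cloud-config\nssh_pwauth: true\n' > cloud-config.yaml

# Encode the file for the ConfigMap's user-data field;
# -w 0 keeps the output on a single line
base64 -w 0 cloud-config.yaml

# Round-trip check: decoding must reproduce the original file exactly
base64 -w 0 cloud-config.yaml | base64 -d | diff - cloud-config.yaml && echo "round-trip OK"
```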

We will also deploy a Load Balancer to expose our DB VM on the required ports (22 and 3306).

mysql-vm.yaml:

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  labels:
    vm.name: db-vm
    role: db
  name: mysql-db
  namespace: namespace-1
spec:
  imageName: ubuntu-22.04
  className: best-effort-small
  powerState: poweredOn
  storageClass: k8s-storage-policy
  networkInterfaces:
  - networkName:
    networkType: nsx-t
  vmMetadata:
      configMapName: mysql-db-cm
      transport: CloudInit
---
apiVersion: v1
kind: ConfigMap
metadata:
    name: mysql-db-cm
    namespace: namespace-1
data:
  user-data: >-
    CiNjbG91ZC1jb25maWcKCnNzaF9wd2F1dGg6IHRydWUKCmdyb3VwczoKICAtIGFkbWluZ3JvdXA6IFtyb290LHN5c10KCnVzZXJzOgogIC0gbmFtZTogZGV2CiAgICBnZWNvczogRGV2IFMuIE9wcwogICAgbG9ja19wYXNzd2Q6IGZhbHNlCiAgICBwYXNzd2Q6ICQ2JG4vekp1eS54L08wb1JLSHAkc1JLMHdObUtrVFJYMjZwb1JUVlBJc1hpejR1OVN2VlIyZXV6TlY3WlhSOURURC5MM1hnSDBUZ1paeXhpR0UxTXcuQjZEOFljcUNyTHB3RENvUm5CUS4gCiAgICBzdWRvOiBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBncm91cHM6IHN1ZG8sIHVzZXJzLCBhZG1pbgogICAgc2hlbGw6IC9iaW4vYmFzaAoKd3JpdGVfZmlsZXM6CiAgLSBjb250ZW50OiB8CiAgICAgICBBTFRFUiBVU0VSICdyb290J0AnbG9jYWxob3N0JyBJREVOVElGSUVEIFdJVEggbXlzcWxfbmF0aXZlX3Bhc3N3b3JkIEJZICdwYXNzd29yZCc7CiAgICBhcHBlbmQ6IHRydWUKICAgIHBhdGg6IC9hbHRlcnVzZXIudHh0CgogIC0gY29udGVudDogfAogICAgICAgQ1JFQVRFIFVTRVIgJ2Rldm9wcydAJyUnIElERU5USUZJRUQgV0lUSCBteXNxbF9uYXRpdmVfcGFzc3dvcmQgQlkgJ3Bhc3N3b3JkJzsKICAgICAgIENSRUFURSBEQVRBQkFTRSBkZW1vOwogICAgICAgR1JBTlQgQUxMIFBSSVZJTEVHRVMgT04gZGVtby4qIFRPICdkZXZvcHMnQCclJzsKCiAgICAgICBDUkVBVEUgVEFCTEUgZGVtby51c2VyICgKICAgICAgICAgaWQgSU5URUdFUiBQUklNQVJZIEtFWSBBVVRPX0lOQ1JFTUVOVCwKICAgICAgICAgdXNlcm5hbWUgdmFyY2hhcigyNTUpIE5PVCBOVUxMLAogICAgICAgICBwYXNzd29yZCB2YXJjaGFyKDI1NSkgTk9UIE5VTEwsCiAgICAgICAgIFVOSVFVRSAodXNlcm5hbWUpCiAgICAgICApOwoKCiAgICAgICBDUkVBVEUgVEFCTEUgZGVtby5lbnRyeSAoCiAgICAgICAgIGlkIElOVEVHRVIgUFJJTUFSWSBLRVkgQVVUT19JTkNSRU1FTlQsCiAgICAgICAgIGF1dGhvcl9pZCBJTlRFR0VSIE5PVCBOVUxMLAogICAgICAgICBjcmVhdGVkIFRJTUVTVEFNUCBOT1QgTlVMTCBERUZBVUxUIENVUlJFTlRfVElNRVNUQU1QLAogICAgICAgICB0aXRsZSB2YXJjaGFyKDI1NSkgTk9UIE5VTEwsCiAgICAgICAgIGJvZHkgdmFyY2hhcig1NDAwKSBOT1QgTlVMTCwKICAgICAgICAgRk9SRUlHTiBLRVkgKGF1dGhvcl9pZCkgUkVGRVJFTkNFUyB1c2VyIChpZCkKICAgICAgICk7CgogICAgICAgSU5TRVJUIElOVE8gZGVtby51c2VyICh1c2VybmFtZSxwYXNzd29yZCkgVkFMVUVTKCdKZXJlbXknLCdkZXZvcHMxMjMnKTsKICAgICAgIElOU0VSVCBJTlRPIGRlbW8uZW50cnkgKGF1dGhvcl9pZCx0aXRsZSxib2R5KSBWQUxVRVMoMSwiV2VsY29tZSB0byB0aGUgcmVxdWVzdCBwYWdlLCB0aGlzIGlzIHRoZSBmaXJzdCBlbnRyeSIsIlRoaXMgZW50cnkgaXMgb3duZWQgYnkgSmVyZW15IGFuZCBjYW4gb25seSBiZSBtb2RpZmllZCBieSBoaW0uIFlvdSBjYW4gY3JlYXRlIHlvdXIgb3duIHBv
c3QgYnkgcmVnaXN0ZXJpbmcgYW5kIGxvZ2dpbmcgaW4hIik7CgogICAgYXBwZW5kOiB0cnVlCiAgICBwYXRoOiAvaW5pdC5zcWwKCnJ1bmNtZDoKICAtIHN1ZG8gYXB0IHVwZGF0ZQogIC0gc3VkbyBhcHQgLXkgaW5zdGFsbCBteXNxbC1zZXJ2ZXIKICAtIHN1ZG8gc3lzdGVtY3RsIHN0YXJ0IG15c3FsLnNlcnZpY2UKICAtIHN1ZG8gbXlzcWwgPCAvYWx0ZXJ1c2VyLnR4dAogIC0gbXlzcWwgLXUgcm9vdCAtcHBhc3N3b3JkIDwgL2luaXQuc3FsCiAgLSBzdWRvIHNlZCAtaSAnMCwvYmluZC1hZGRyZXNzL3MvLyNiaW5kLWFkZHJlc3MvJyAvZXRjL215c3FsL215c3FsLmNvbmYuZC9teXNxbGQuY25mCiAgLSBzdWRvIHN5c3RlbWN0bCByZXN0YXJ0IG15c3FsLnNlcnZpY2UK
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: mysql-db
  namespace: namespace-1
spec:
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    vm.name: db-vm
  type: LoadBalancer

In a command line of your choice, run the following commands to deploy the DB VM.

Log in to the Supervisor:

  • kubectl vsphere login --server=192.168.30.34 --vsphere-username administrator@vsphere.local

Set context to namespace-1:

  • kubectl config use-context namespace-1

Deploy the mysql-vm.yaml:

  • kubectl apply -f mysql-vm.yaml -n namespace-1

Verify that the VM and the Service have been deployed and configured:

  • kubectl get vm
  • kubectl get service

To prepare for the Backend deployment, take the External IP of the mysql-db load balancer service and base64 encode it:

  • echo -n "192.168.30.39" | base64 -w 0
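The -n flag matters here: without it, echo appends a newline that would change the encoded value. You can sanity-check the result by decoding it back:

```shell
# Encode the DB load balancer's External IP (example value from above);
# -n prevents a trailing newline from being encoded
echo -n "192.168.30.39" | base64 -w 0
# MTkyLjE2OC4zMC4zOQ==

# Decode to confirm the value round-trips
echo -n "MTkyLjE2OC4zMC4zOQ==" | base64 -d
# 192.168.30.39
```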

Deploy a Tanzu Kubernetes Grid (TKG) cluster

The Backend and Frontend pods will be deployed into a TKG cluster. We will define the desired state of the cluster in the tkg-cc-1.yaml file. For demo purposes, we will deploy a TKG cluster with one Control Plane node and two Worker nodes. The following Cluster definition uses ClusterClass, a new definition format introduced in vSphere 8. We define the name, namespace, network and storage configuration, and the Kubernetes version we wish to deploy. You can also use annotations to specify the OS you wish to use, choosing between Ubuntu and Photon OS.

tkg-cc-1.yaml:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tkg-cc-1
  namespace: namespace-1
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["192.168.192.0/18"]
    pods:
      cidrBlocks: ["192.168.128.0/18"]
    serviceDomain: "managedcluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.8---vmware.2-tkg.2-zshippable
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
      replicas: 1
    workers:
      # node pools
      machineDeployments:
        - class: node-pool
          name: node-pool-1
            # failureDomain: zone1
          replicas: 2
    variables:
      - name: vmClass
        value: best-effort-small
      # default storageclass for control plane and node pool
      - name: storageClass
        value: k8s-storage-policy

While logged in to the Supervisor, deploy the TKG cluster using the following command:

  • kubectl apply -f tkg-cc-1.yaml -n namespace-1

Once the cluster is deployed, log in:

  • kubectl vsphere login --server=192.168.30.34 --tanzu-kubernetes-cluster-name tkg-cc-1 --tanzu-kubernetes-cluster-namespace namespace-1 --vsphere-username administrator@vsphere.local

Set the context to tkg-cc-1:

  • kubectl config use-context tkg-cc-1

Apply the standard security policy:

  • kubectl apply -f pod-security-policy.yaml

Create a namespace within your TKG cluster, called app-ns:

  • kubectl create ns app-ns

Create a Docker registry secret to allow pulling images from the private Harbor image registry:

  • kubectl create secret docker-registry docker-hub-creds --docker-server=harbor.vmw.lab --docker-username=admin --docker-password=Harbor12345 -n app-ns

Deploy a Backend Application in TKG cluster

Now we are ready to deploy the Backend application. The backend-app.yaml manifest contains three sections. First, we create a Secret that holds the database connection information, including the encoded IP of the DB load balancer we noted earlier. The next section defines our Deployment, including the number of replicas, image information, and some configuration variables. Lastly, we define a Load Balancer Service to expose the backend on port 5000.
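The data values in the Secret are the base64-encoded connection details. They can be generated, or verified, with a quick shell check; again, echo -n prevents a trailing newline from being encoded:

```shell
# Generate the base64 values used in backend-app-secret
echo -n "devops"   | base64   # mysql_user: ZGV2b3Bz
echo -n "password" | base64   # db_passwd:  cGFzc3dvcmQ=
echo -n "demo"     | base64   # db_name:    ZGVtbw==

# Decode to double-check any value
echo -n "ZGV2b3Bz" | base64 -d   # devops
```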

backend-app.yaml:

apiVersion: v1
kind: Secret
metadata:  
  name: backend-app-secret
type: Opaque
data:
  mysql_user: ZGV2b3Bz
  db_passwd: cGFzc3dvcmQ=
  #mysql_host: <BASE64_ENCODED_IP_FOR_MYSQL_VM>
  mysql_host: MTkyLjE2OC4zMC4zOQ==
  db_name: ZGVtbw==

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-app-deployment
  labels:
    app: backend-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
  template:
    metadata:
      labels:
        app: backend-app
    spec:
      containers:
        - name: backend-app
          image: harbor.vmw.lab/3ta/backend:latest
          ports:
            - containerPort: 5000
          env:
          - name: MYSQL_HOST
            valueFrom:
              secretKeyRef:
                name: backend-app-secret
                key: mysql_host
                optional: false
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                name: backend-app-secret
                key: mysql_user
                optional: false 
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: backend-app-secret
                key: db_passwd
                optional: false 
          - name: DB_NAME
            valueFrom:
              secretKeyRef:
                name: backend-app-secret
                key: db_name
                optional: false 
      imagePullSecrets:
      - name: docker-hub-creds
---

apiVersion: v1
kind: Service
metadata:
  name: backend-app-service
spec:
  selector:
    app: backend-app
  ports:
    - name: web-app-port
      protocol: TCP
      port: 5000
      targetPort: 5000
  type: LoadBalancer

While logged in to the tkg-cc-1 cluster, apply the backend-app.yaml manifest in the app-ns namespace:

  • kubectl apply -f backend-app.yaml -n app-ns

Check that the backend pod is running in app-ns:

  • kubectl get pods -n app-ns

Get External IP of the backend-app-service:

  • kubectl get service -n app-ns

Verify that we get the right response on port 5000 to validate communication with the database:

  • curl -X GET 192.168.30.37:5000/api/index

Now we will take this IP and port, base64 encode them, and add the result to the frontend manifest:

  • echo -n "192.168.30.37:5000" | base64 -w0

Deploy a Frontend Application in TKG cluster

The last thing to deploy is the Frontend application, defined in the frontend-app.yaml manifest. Again we define a Secret, to allow communication with the backend, a Deployment, including the number of replicas and image information, and a Load Balancer Service to expose the frontend on port 5000. The Deployment also uses a pod anti-affinity rule so that the frontend pod is not scheduled on the same node as the backend pod.

frontend-app.yaml:

apiVersion: v1
kind: Secret
metadata:  
  name: frontend-app-secret
type: Opaque
data:
  #api_url: <BASE64_ENCODED_IP:PORT_FOR_backend-app-service>
  api_url: MTkyLjE2OC4zMC4zNzo1MDAw 

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app-deployment
  labels:
    app: frontend-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - backend-app
            topologyKey: kubernetes.io/hostname
      containers:
        - name: frontend-app
          image: harbor.vmw.lab/3ta/frontend:latest
          ports:
            - containerPort: 5000
          env:
          - name: API_URL
            valueFrom:
              secretKeyRef:
                name: frontend-app-secret
                key: api_url
                optional: false 
      imagePullSecrets:
      - name: docker-hub-creds
---

apiVersion: v1
kind: Service
metadata:
  name: frontend-app-service
spec:
  selector:
    app: frontend-app
  ports:
    - name: web-app-port
      protocol: TCP
      port: 5000
      targetPort: 5000
  type: LoadBalancer

While logged in to the tkg-cc-1 cluster, apply the frontend-app.yaml manifest in the app-ns namespace:

  • kubectl apply -f frontend-app.yaml -n app-ns

Check that the frontend pod is running in app-ns:

  • kubectl get pods -n app-ns

Get the External IP of frontend-app-service:

  • kubectl get service -n app-ns

Test the Application

To test that our application has been deployed successfully, take the External IP of frontend-app-service and open it in a browser on port 5000.


To validate the connection with the Database, register a new user and log in. The user information will be stored in the database. You can also click on Requests and add a New Request to verify that you can write into the database.

See this episode in action

Before you go

In this episode, we have shown you how you can deploy a 3-tier application on a Supervisor, using VM Service and Tanzu Kubernetes Grid.

For more episodes visit the Supervisor Series Activity Path.

Subscribe to our VMware vSphere YouTube channel to get notified as soon as we release new episodes.

Thank you and see you soon!
