Holo-Tanzu-TKC-Resize

Module 6 - Resize TKC Worker Nodes

When running Kubernetes workloads on Cloud Foundation with Tanzu, it's easy to allocate infrastructure for hosting TKCs and to expand that capacity over time as needed.

In Cloud Foundation with Tanzu, capacity can be added to a TKC in two ways.

  • You can scale the TKC vertically by increasing the size of the worker nodes.
  • You can scale the TKC horizontally by adding more worker nodes.

In this exercise we will show how to add capacity by increasing the size of the TKC worker nodes.

Prior to completing this module, complete Module 4 – Create a Tanzu Kubernetes Cluster (TKC).

Note: while the developer can increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere administrator retains control over the total amount of cluster resources (CPU, memory, storage) that can be consumed by the TKCs running on the cluster.  Resource utilization policies are defined on the vSphere Namespace.
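For reference, once you are logged in with kubectl and the ns01 context is set (this happens in Step 2), those administrator-defined limits surface as standard Kubernetes objects. A minimal check, assuming the ns01 namespace used in this lab:

    kubectl get resourcequota -n ns01      # any CPU/memory/storage limits set on the vSphere Namespace
    kubectl describe limitrange -n ns01    # per-container defaults, if the administrator defined any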

Step 1: Enable Additional VM Class


The size of the VMs, in terms of CPU and memory, that can be deployed as part of a TKC is determined by the VM Class defined in the YAML manifest. 
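For illustration, the fragment of a TKC manifest that carries this setting looks roughly like the excerpt below (a sketch only; the actual manifest for tkc01 is opened and edited in Step 2):

    spec:
      topology:
        nodePools:
        - name: workers
          replicas: 2
          storageClass: vsan-default-storage-policy
          vmClass: best-effort-small   # the VM Class sets the vCPU count and memory size of each worker VM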

Begin by reviewing the enabled VM Classes in the ns01 vSphere namespace.

From the vSphere client, expand the vSphere inventory tree:

  1. Navigate Menu -> Inventory
  2. Expand vcenter-mgmt.vcf.sddc.lab
  3. Expand mgmt-datacenter-01
  4. Expand mgmt-cluster-01
  5. Expand Namespaces
  6. Expand tkc01
  7. Click ns01
  8. Click Summary

VM Classes are managed from the VM Service Tile.

  1. Click MANAGE VM CLASSES


Currently, only the best-effort-small VM Class has been enabled, indicating that only VMs with 2 vCPU and 4 GB memory can be deployed.  To allow VMs with 2 vCPU and 8 GB memory to be deployed, enable the best-effort-medium VM Class.

  1. Click the checkbox next to best-effort-medium
  2. Click OK


With the added VM Class, you can now deploy VMs with 2 vCPUs and 8GB memory in the ns01 namespace.

Step 1 Summary

In this step we added the best-effort-medium VM Class to the ns01 vSphere Namespace in preparation for increasing the size of the tkc01 worker nodes from 2 vCPUs and 4GB of memory to 2 vCPUs and 8GB of memory.

Step 2:  Resize TKC Worker Nodes

With the additional VM Class enabled, we are ready to increase the size of the worker nodes in ‘tkc01’. 

Connect to the developer workstation.   

  1. Click the PuTTY icon on the Windows taskbar
  2. Click Tanzu WS
  3. Click Load
  4. Click Open
    1. Login: root
    2. Password: VMware123!

 


In the PuTTY terminal, run the kubectl vsphere login command shown below to authenticate to the Kubernetes control plane running on the supervisor cluster.

  1. kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.1

Next, run the kubectl config use-context ns01 command to set the context to the ns01 vSphere namespace.

  1. kubectl config use-context ns01
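If you want to confirm which contexts are available before (or after) switching, the standard kubectl command below lists them and marks the current one with an asterisk (an optional check, not required by the lab):

    kubectl config get-contexts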


Next, run the kubectl get virtualmachines -o wide command to identify the VM Class currently used for the VMs that comprise tkc01.

  1. kubectl get virtualmachines -o wide

Note that both the control plane and worker VMs were deployed using the best-effort-small VM Class.
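If you only want the class assignments, a narrower query can trim the output (a sketch; it assumes the VirtualMachine resource exposes the class under spec.className, which may vary between vSphere releases):

    kubectl get virtualmachines -o custom-columns=NAME:.metadata.name,CLASS:.spec.className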

Run the kubectl get virtualmachineclasses command to see the VM Classes enabled for the ns01 namespace.

  1. kubectl get virtualmachineclasses


The output shows that the best-effort-small and best-effort-medium VM Classes are enabled on the ns01 namespace.
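To confirm the vCPU and memory that a class maps to, you can describe it (an optional check; best-effort-medium is the class enabled in Step 1):

    kubectl describe virtualmachineclass best-effort-medium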

To increase the memory allocated to the two worker nodes, run the kubectl edit tkc tkc01 command and change the vmClass property for the worker nodes from best-effort-small to best-effort-medium.

  1. kubectl edit tkc tkc01


The YAML configuration for tkc01 opens in the vi editor.  Scroll down to the topology section to locate the vmClass property for the worker nodes.


Change the vmClass property to best-effort-medium.  Save the change and exit the editor.

If you are not familiar with the vi editor, the following table will guide you through the steps to make the change. 

Note: the vi editor has two modes.  By default, you start in command mode.  While in this mode you use the arrow keys to move through the file and run vi commands (commands that begin with ':' are typed at the bottom of the screen).  To change the contents of the file in this example, you press the lowercase letters 'c' and 'w' (change word) to delete the current word and switch to insert mode, where you then type the new word.  After making the change, you press the Esc key to return to command mode and enter the command :wq to save the changes and exit the vi editor.

To update the vmClass property for the worker nodes:

  1. Press the escape key (esc) to make sure the editor is in command mode.
  2. Use the arrow keys to navigate to the vmClass property for the worker nodes:

       nodePools:
       - name: workers
         replicas: 2
         storageClass: vsan-default-storage-policy
         tkr:
           reference:
             name: v1.23.8---vmware.3-tkg.1
         vmClass: best-effort-small

  3. Use the arrow keys to place the cursor on the lowercase "s" at the beginning of the word "small" in the string best-effort-small.
  4. Type the letters 'cw' (change word) and then type the word 'medium'.  This changes the word small to medium:

       nodePools:
       - name: workers
         replicas: 2
         storageClass: vsan-default-storage-policy
         tkr:
           reference:
             name: v1.23.8---vmware.3-tkg.1
         vmClass: best-effort-medium

  5. Press the escape key (esc) to switch back to command mode.
  6. Type :wq to save the changes and exit the vi editor.

After making the update, and just before saving the changes and exiting the vi editor, the PuTTY terminal should show vmClass: best-effort-medium for the worker node pool.


When you exit the vi editor, the changes are applied immediately.  You are notified that tkc01 has been edited.
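If you prefer to script this change rather than edit the manifest interactively, an equivalent one-liner could apply a JSON patch (a sketch only; it assumes the workers node pool is the first entry in spec.topology.nodePools, as shown in the manifest above):

    kubectl patch tkc tkc01 --type json \
      -p '[{"op":"replace","path":"/spec/topology/nodePools/0/vmClass","value":"best-effort-medium"}]'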

Run the command kubectl get virtualmachines -o wide to monitor the progress. 

  1. kubectl get virtualmachines -o wide


The output shows a new VM being deployed with the best-effort-medium VM Class.  After the new VM has been deployed, it replaces one of the existing worker nodes.  The process is then repeated for the second worker node; the update is applied one node at a time.
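To follow the rolling replacement without rerunning the command, you can add kubectl's watch flag and press Ctrl+C once both workers show the new class (optional):

    kubectl get virtualmachines -o wide -w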

You can also monitor the progress from the Recent Tasks pane in the vSphere client.

  1. Return to the vSphere Client
  2. Expand Recent Tasks


Wait for the new VMs to be deployed and the original VMs to be removed. It will take approximately five minutes for both VMs to be replaced.

Step 2 Summary

In this step we used the kubectl edit tkc tkc01 command to update the vmClass property for the worker nodes from best-effort-small to best-effort-medium.  In response, the existing worker nodes were replaced with new VMs deployed using the best-effort-medium VM Class.

Step 3:  Verify TKC Worker Node Resize


From the vSphere Client, review the VM properties to confirm the worker nodes are now running with 8GB of memory.

  • Click tkc01-workers-scrib-<id>
  • Click to expand the VM Hardware tile

Note that the worker VM now has 8GB of memory, which is the memory allocation assigned to the best-effort-medium VM Class.  Repeat this step for the second worker node to confirm that it also has 8GB of memory.


You can also confirm the worker nodes are using the updated VM Class from the PuTTY terminal.

  1. Click the PuTTY icon to return to the PuTTY terminal

Run the command kubectl get virtualmachines -o wide.

  1. kubectl get virtualmachines -o wide

The output confirms that the two worker nodes are running the best-effort-medium VM Class. 
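As an optional extra check (not part of the lab steps), you could log in to the workload cluster itself and read each node's memory capacity directly; the flags below assume the tkc01 cluster and ns01 namespace names used in this lab:

    kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.1 \
      --tanzu-kubernetes-cluster-namespace ns01 --tanzu-kubernetes-cluster-name tkc01
    kubectl config use-context tkc01
    kubectl get nodes -o custom-columns=NAME:.metadata.name,MEMORY:.status.capacity.memory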


Step 3 Summary

In this step we verified that the new worker node VMs were each deployed with 8GB of memory, effectively doubling the amount of memory assigned to the tkc01 worker nodes and confirming the change from the best-effort-small to the best-effort-medium VM Class.

Module Summary

Being able to dynamically allocate capacity on demand, and to grow that capacity over time, is a critical capability of the modern cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu, it's easy not only to allocate infrastructure for hosting TKCs, but also to resize TKCs as needed.

TKCs can be expanded in two ways. 

  • You can scale the TKC vertically by increasing the size of the worker nodes.
  • You can scale the TKC horizontally by adding more worker nodes.

In this exercise we saw how to add capacity to a TKC by changing the VM Class to increase the size of the TKC worker nodes.

Module Key Takeaways

  • To change the amount of CPU and/or memory assigned to a TKC node, edit the TKC and change the vmClass property.   
  • The list of available VM Classes is determined by the VM Classes that have been enabled by the vSphere administrator on the vSphere Namespace.
  • While the developer can increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) that the TKCs can consume.

 
