Holo-Tanzu-TKC-Resize
Module 6 - Resize TKC Worker Nodes
When running Kubernetes workloads on Cloud Foundation with Tanzu, it is easy to allocate infrastructure for hosting TKCs and to expand that capacity over time as needed.
In Cloud Foundation with Tanzu, capacity can be added to a TKC in two ways:
- You can scale the TKC vertically by increasing the size of the worker nodes.
- You can scale the TKC horizontally by adding more worker nodes.
In this exercise we will show how to add capacity by increasing the size of the TKC worker nodes.
Prior to completing this module, complete Module 4 – Create a Tanzu Kubernetes Cluster (TKC).
Note: while the developer can increase the size and number of virtual machines deployed as part of a TKC, the vSphere administrator retains control over the total amount of cluster resources (CPU, memory, storage) that the TKCs running on the cluster can consume. Resource utilization policies are defined on the vSphere Namespace.
Step 1: Enable Additional VM Class
The size of the VMs, in terms of CPU and memory, that can be deployed as part of a TKC is determined by the VM Class defined in the YAML manifest.
Begin by reviewing the enabled VM Classes in the ns01 vSphere namespace.
From the vSphere client, expand the vSphere inventory tree:
- Navigate Menu -> Inventory
- Expand vcenter-mgmt.vcf.sddc.lab
- Expand mgmt-datacenter-01
- Expand mgmt-cluster-01
- Expand Namespaces
- Expand tkc01
- Click ns01
- Click Summary
VM Classes are managed from the VM Service Tile.
- Click MANAGE VM CLASSES
Currently, only the best-effort-small VM Class has been enabled, indicating that only VMs with 2 vCPUs and 4 GB of memory can be deployed. To allow VMs with 2 vCPUs and 8 GB of memory to be deployed, enable the best-effort-medium VM Class.
- Click the checkbox next to best-effort-medium
- Click OK
With the added VM Class, you can now deploy VMs with 2 vCPUs and 8GB memory in the ns01 namespace.
Step 1 Summary
In this step we added the best-effort-medium VM Class to the ns01 vSphere Namespace in preparation for increasing the size of the tkc01 worker nodes from 2 vCPUs and 4 GB of memory to 2 vCPUs and 8 GB of memory.
Step 2: Resize TKC Worker Nodes
With the additional VM Class enabled, we are ready to increase the size of the worker nodes in ‘tkc01’.
Connect to the developer workstation.
- Click the PuTTY icon on the Windows taskbar
- Click Tanzu WS
- Click Load
- Click Open
- Login: root
- Password: VMware123!
In the PuTTY terminal, run the kubectl vsphere login command to authenticate to the Kubernetes control plane running on the Supervisor cluster.
kubectl vsphere login --vsphere-username sam@vsphere.local --server 172.16.10.1
Next, run the kubectl config use-context ns01 command to set the context to the ns01 vSphere Namespace.
kubectl config use-context ns01
Next, run the kubectl get virtualmachines -o wide command to identify the VM Class currently used for the VMs that comprise tkc01.
kubectl get virtualmachines -o wide
Note that both the control plane and worker VMs were deployed using the best-effort-small VM Class.
Run the kubectl get virtualmachineclasses command to see the VM Classes enabled for the ns01 namespace.
kubectl get virtualmachineclasses
The output shows that the best-effort-small and best-effort-medium VM Classes are enabled on the ns01 namespace:
To increase the memory allocated to the two worker nodes, run the kubectl edit tkc tkc01 command and change the vmClass property for the worker nodes from best-effort-small to best-effort-medium.
kubectl edit tkc tkc01
The YAML configuration for tkc01 opens in the vi editor. Scroll down to the topology section to locate the vmClass property for the worker nodes.
Change the vmClass property to best-effort-medium. Save the change and exit the editor.
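For reference, the relevant portion of the manifest looks roughly like the fragment below. This is a sketch, assuming the v1alpha3 TanzuKubernetesCluster API; the exact field names, replica counts, and the node pool name ("workers" here is hypothetical) vary by API version and by how the cluster was created:

```yaml
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small      # control plane is left unchanged
    nodePools:
    - name: workers                   # hypothetical node pool name
      replicas: 2
      vmClass: best-effort-medium     # changed from best-effort-small
```

Only the vmClass value under the worker node pool needs to change; everything else in the manifest stays as it was.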
If you are not familiar with the vi editor, the following table will guide you through the steps to make the change.
Note: the vi editor has two modes. By default, you start in the “command mode”. While in this mode you use the arrow keys to scroll through the file and run ‘vi’ commands (vi commands are typed at the bottom of the screen). To edit/change the contents of the file in this example, you press lowercase ‘c’ and ‘w’ (for change word) to delete the current word and switch to “input mode”, where you then type the new word. After making changes, you press the “esc” key to switch back to command mode and enter the command :wq to save the changes and exit the vi editor.
To update the vmClass property for the worker nodes:
| Action | Result |
| --- | --- |
| Press the escape key (esc) | Places the editor in command mode |
| Use the arrow keys to navigate to the vmClass property for the worker nodes | |
| Use the arrow keys to place the cursor on the lowercase "s" at the start of the word small | |
| Type the letters 'cw' (change word) and type the word 'medium'. This changes the word small to medium | |
| Press the escape key (esc) | Switches back to command mode |
| Type :wq | Saves the changes and exits the vi editor |
The following image shows what the PuTTY terminal will look like after making the update, just before saving the changes and exiting the vi editor.
When you exit the vi editor, the changes are applied immediately. You are notified that tkc01 has been edited.
Run the kubectl get virtualmachines -o wide command to monitor the progress.
kubectl get virtualmachines -o wide
The output shows a new VM being deployed with the best-effort-medium VM Class. After the VM has been deployed, it replaces one of the existing worker nodes. This process is then repeated to deploy a second VM to replace the second worker node. Note that the update is applied sequentially, one node at a time.
You can also monitor the progress from the Recent Tasks pane in the vSphere client.
- Return to the vSphere Client
- Expand Recent Tasks
Wait for the new VMs to get deployed and the original VMs removed. It will take approximately five minutes for both VMs to be replaced.
Step 2 Summary
In this step we used the kubectl edit tkc tkc01 command to update the vmClass property for the worker nodes from best-effort-small to best-effort-medium. In response, the existing worker nodes were replaced with new VMs deployed using the best-effort-medium VM Class.
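As an aside, the same change can be made non-interactively with kubectl patch instead of an interactive vi session. This is a sketch, assuming the v1alpha3 TanzuKubernetesCluster API with a single worker node pool; the node pool name ("workers") and field path are assumptions that vary by API version:

```
# A merge patch replaces the entire nodePools list, so the full
# node pool entry (name, replicas, vmClass) must be supplied.
kubectl patch tkc tkc01 --type merge -p \
  '{"spec":{"topology":{"nodePools":[{"name":"workers","replicas":2,"vmClass":"best-effort-medium"}]}}}'
```

This triggers the same sequential rolling replacement of the worker nodes as editing the manifest in vi.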
Step 3: Verify TKC Worker Node Resize
Review the VM properties to confirm the worker nodes are now running with 8GB of memory.
- Click tkc01-workers-scrib-<id>
- Click to expand the VM Hardware tile
Note that the worker VM now has 8 GB of memory, which is the memory allocation assigned to the best-effort-medium VM Class. Repeat this step for the second worker node to confirm that it also has 8 GB of memory.
You can also confirm the worker nodes are using the updated VM Class from the PuTTY terminal.
- Click the PuTTY icon to return to the PuTTY terminal
Run the kubectl get virtualmachines -o wide command.
kubectl get virtualmachines -o wide
The output confirms that the two worker nodes are now running the best-effort-medium VM Class.
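The VM Class in use can also be read directly from the cluster spec with a JSONPath query. This is a sketch, assuming the v1alpha3 API; the nodePools path is an assumption and differs on older API versions:

```
# Print the vmClass of every worker node pool in tkc01
kubectl get tkc tkc01 -o jsonpath='{.spec.topology.nodePools[*].vmClass}'
```

If the resize succeeded, this prints best-effort-medium for the worker node pool.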
Step 3 Summary
In this step we verified that the new worker node VMs were each deployed with 8 GB of memory, effectively doubling the amount of memory assigned to tkc01 and confirming the change from the best-effort-small to the best-effort-medium VM Class.
Module Summary
Being able to dynamically allocate capacity on demand, and to later grow that capacity over time, is a critical capability of the modern cloud. When running Kubernetes workloads on top of Cloud Foundation with Tanzu, it is easy not only to allocate infrastructure for hosting TKCs, but also to resize TKCs as needed.
TKCs can be expanded in two ways.
- You can scale the TKC vertically by increasing the size of the worker nodes.
- You can scale the TKC horizontally by adding more worker nodes.
In this exercise we saw how to add capacity to a TKC by changing the VM Class to increase the size of the TKC worker nodes.
Module Key Takeaways
- To change the amount of CPU and/or memory assigned to a TKC node, edit the TKC and change the vmClass property.
- The list of available VM Classes is determined by the VM Classes that have been enabled by the vSphere administrator on the vSphere Namespace.
- While the developer can increase the size and number of virtual machines that get deployed as part of a TKC, the vSphere administrator is able to control and manage the amount of vSphere cluster resources (CPU, memory, storage) they can consume.