Tanzu Proof of Concept Guide

POC Guide Overview

The purpose of this document is to act as a simple guide for proof-of-concept exercises involving vSphere with Tanzu as well as VMware Cloud Foundation (VCF) with Tanzu.

This document is intended for data center cloud administrators who architect, administer and deploy VMware vSphere and VMware Cloud Foundation technologies. The information in this guide is written for experienced data center cloud administrators.

This document is not a replacement for official product documentation; rather, it should be thought of as a structured guide to augment existing guidance throughout the lifecycle of a proof-of-concept exercise. Official documentation supersedes the guidance documented here wherever the two diverge.

Statements in this document regarding supported capabilities, minimums and maximums should be cross-checked against the official VMware configuration maximums at https://configmax.vmware.com/, which may contain more recent updates or amendments to what is stated here.

This document is laid out into several distinct sections to make the guide more consumable depending on the use case and proof of concept scenario:

 

Section 1: Overview & Setup
Product information and getting started

Section 2: App Deployment & Testing
Use-case defined testing with examples

Section 3: Lifecycle Operations
Scaling, upgrades and maintenance

Section 4: Monitoring
Essential areas of focus to monitor the system

 

A Github repository with code samples to accompany this document is available at:
https://github.com/vmware-tanzu-experiments/vsphere-with-tanzu-proof-of-concept-samples

 

Overview and Setup

In this guide we detail the two networking options available in vSphere with Tanzu, namely vSphere or NSX-T networking. With the latter, we show how VMware Cloud Foundation with Tanzu can be utilised to quickly stand up a private cloud with Tanzu enabled.

Note that Tanzu itself comes in three different flavours, or ‘Editions’; see https://tanzu.vmware.com/tanzu.

 

vSphere with Tanzu — vSphere Networking

Here, we will describe the setup of vSphere with Tanzu using vSphere Networking, with both the NSX Advanced Load Balancer (ALB) and the open-source HaProxy options.

 


 

Getting Started

The basic steps and requirements to get started with vSphere with Tanzu are shown below. For more information, please refer to the official documentation.


1. Network Requirements

In vCenter, configure a vDS with at least two port groups for ‘Management’ and ‘Workload Network’.


 

The following IP addresses are required:

Management Network:

5x consecutive routable IP addresses for Workload Management, plus one for the network appliance (i.e. either NSX ALB or HaProxy)

Workload Network:

For simplicity, one /24 routable network (which will be split into subnets). In the example below, we will use the network 172.168.161.0/24 with 172.168.161.1 as the gateway.

Next, decide on the network solution to be used, either:

2(a) NSX ALB, or
2(b) HaProxy


2(a) NSX Advanced Load Balancer Configuration

In vSphere 7.0 Update 2, a new load balancer option is available. The NSX Advanced Load Balancer (NSX ALB), also known as Avi, provides a feature-rich and easy-to-manage load balancing solution. The NSX ALB is available for download in OVA format from my.vmware.com.

 


Below, we will briefly run through the steps to configure the NSX ALB. For full instructions, please refer to the documentation, https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-AC9A7044-6117-46BC-9950-5367813CD5C1.html

The download link will redirect you to the AVI Networks Portal. Select the VMware Controller OVA:


For more details on download workflow, see https://kb.vmware.com/s/article/82049?lang=en_US

Once the OVA has been downloaded, proceed to your vCenter and deploy the OVA by supplying a management IP address.

Note, supplying a sysadmin login authentication key is not required.


Once the appliance has been deployed and powered on, log in to the UI using the supplied management IP/FQDN. Note that the UI will vary depending on the version used. At the time of writing, the latest version available is 20.1.5.

Create a username and password. Email is optional.


Add supplemental details, such as DNS, passphrase, etc.


Next, the Orchestrator needs to be set to vSphere. Select ‘Infrastructure’ from the menu on the top left:


Then select ‘Clouds’ from the menu at the top:


Edit ‘Default-Cloud’. On the pop-up window, navigate to ‘Select Cloud’ and set the orchestrator to ‘VMware’.


Follow the screens to supply the username, password and vCenter information so that the NSX ALB can connect to vCenter. For permissions, leave “Write” selected, as this will allow for easier deployment and automation between ALB and vCenter. Leave SDN Integration set to “None”.

Finally, on the Network tab, under ‘Management Network’, select the workload network as previously defined on the vDS. Provide the IP subnet, gateway, and IP address pool to be utilised. This IP pool is a range of addresses to be used for the Service Engine (SE) VMs.

Note, in a production environment, a separate 'data network' for the SEs may be desired. For more information, see the documentation, https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-489A842E-1A74-4A94-BC7F-354BDB780751.html

Here, we have created a block of 99 addresses in the workload network, from our /24 range:


After the initial configuration, we will need to either import a certificate or create a self-signed certificate to be used in Supervisor cluster communication. For the purposes of a PoC, a self-signed certificate should suffice.

Navigate to Administration by selecting this option from the drop-down menu on the upper left corner.

In the administration pane, select Settings and edit the System Access Settings by clicking on the pencil icon:


Remove the default certificates under ‘SSL/TLS Certificate’. Then click on the caret underneath to expand the options, and click on the green ‘Create Certificate’ button.


Create a self-signed certificate by providing the required information. You can add Subject Alternate Names if desired. Note, ensure the IP address of the appliance has been captured, either in the Name or in a SAN.


For more information on certificates, including creating a CSR, see the AVI documentation, https://avinetworks.com/docs/20.1/ssl-certificates/

Next, we need to create an IPAM Profile. This is needed to tell the controller to use the Frontend network to allocate VIPs via IPAM.
Navigate to Templates > Profiles > IPAM/DNS Profiles > create


In the IPAM profile, set the ‘Cloud for Usable Network’ to ‘Default-Cloud’, and set the usable network to the VIP network, in this case DSwitch-wld:


 

At this stage, if you have successfully deployed the NSX ALB, proceed to step 3.

 

2(b) HaProxy Configuration

As an alternative to the NSX ALB, VMware have packaged HaProxy in a convenient OVA format, which can be downloaded and deployed quickly. This is hosted on GitHub: https://github.com/haproxytech/vmware-haproxy

In the simplest configuration, the HaProxy appliance will need a minimum of two interfaces, one on the ‘Management’ network and the other on a ‘Workload’ network, with a static IP address on each. (An option to deploy with three networks, i.e. with an additional ‘Frontend’ network, is also available but is beyond the scope of this guide.)

Below we will go through the basic setup of HaProxy and enabling Workload Management to quickly get started.

First, download and configure the latest HaProxy OVA from the GitHub site.

Here, we will use the ‘Default’ configuration, which will deploy the appliance with two network interfaces:


The two port groups for Management and Workload Network should be populated with the appropriate values. The Frontend network can be ignored:


Use the following parameters as a guide, substituting the workload network for your own.

As per the table below, we subnet the Workload network to a /25 for the load-balancer IP ranges in step 3.1. In addition, the HaProxy will require an IP for itself in the workload network.

1.2   Permit Root Login                 True
2.1   Host Name                         <Set a Host Name>
2.2   DNS                               <DNS Server>
2.3   Management IP                     <IP in Mgmt range>
2.4   Management Gateway                <Mgmt Gateway>
2.5   Workload IP                       172.168.161.3
2.6   Workload Gateway                  172.168.161.1
3.1   Load Balancer IP Ranges (CIDR)    172.168.161.128/25
3.2   Dataplane API Management Port     5556
3.3   HaProxy User ID                   admin
3.4   HaProxy Password                  <set a password>

N.B.: Take special care with step 3.1; this must be in CIDR format. Moreover, it must cover the ‘IP Address Ranges for Virtual Servers’ which will be used later to enable Workload Management in vCenter (see below). Note that the vCenter wizard requires the range in a hyphenated format: from the example above, 172.168.161.128/25 covers the host range 172.168.161.129-172.168.161.254.

 

3. TKG Content Library

Before we can start the Workload Management wizard, we first need to set up the TKG Content Library to pull in the TKG VM images from the VMware repository. The vCenter where the TKG content library will be created should have internet access in order to connect to the repository.

Create a subscribed content library (Menu > Content Libraries > Create New Content Library) pointing to the URL:

https://wp-content.vmware.com/v2/latest/lib.json


For the detailed procedure, see the documentation: https://via.vmw.com/tanzu_content_library

 

4. Load Balancer Certificate

The first step is to obtain the certificate from the deployed network appliance.

For NSX ALB, export the certificate from the ALB UI by going to Templates > Security > SSL/TLS Certificates. Select the self-signed certificate you created and export it.


 


Copy the certificate and make a note of it for the steps below.

If using the HaProxy appliance, log into it using SSH. List the contents of the file /etc/haproxy/ca.crt.
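For example, assuming root SSH login was permitted during the OVA deployment (parameter 1.2 above), the certificate can be retrieved in one step; substitute your own HaProxy management IP:

# ssh root@<HaProxy mgmt IP> cat /etc/haproxy/ca.crt

Copy the entire output, including the BEGIN and END certificate lines.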

 

5. Configure Workload Management

In vCenter, ensure that DRS and HA are enabled for the cluster and a storage policy for the control plane VMs exists. In a vSAN environment, the default vSAN policy can be used.

Navigate to Menu > Workload Management and click ‘Get Started’ to start the wizard.


 


 

Below we’ll focus on the networking, i.e. step 5 onwards in the wizard. For more details, please see the documentation, https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-8D7D292B-43E9-4CB8-9E20-E4039B80BF9B.html

Use the following as a guide, again, replacing values for your own:

Load Balancer:

Name*: lb1
Type: NSX ALB | HaProxy
Data plane API Address(es): <NSX ALB mgmt IP>:443 | <HaProxy mgmt IP>:5556
Username: admin
Password: <password from appliance>
IP Address Ranges for Virtual Servers^: 172.168.161.129–172.168.161.240
Server Certificate Authority: <cert from NSX ALB or HaProxy>

* Note that this is a Kubernetes construct, not the DNS name of the HaProxy appliance.
^ HaProxy only. This must be within the CIDR range defined in step 3.1 of the HaProxy configuration
 

Management Network:

Network: <mgmt port group>
Starting IP: <first IP of consecutive range>
Subnet: <mgmt subnet>
Gateway: <management gateway>
DNS: <dns server>
NTP: <ntp server>

 

Workload Network:

Name: <any you choose>
Port Group: <workload port group>
Gateway: 172.168.161.1
Subnet: 255.255.255.0
IP Address Ranges*:  172.168.161.20–172.168.161.100

* These must not overlap with the load-balancer addresses

 

Note, it may be useful to use a tool such as ‘arping’ or ‘nmap’ to check which IPs are already in use. For example:

# arping -I eth0 -c 3 10.156.163.3
ARPING 10.156.163.3 from 10.156.163.10 eth0
Unicast reply from 10.156.163.3 [00:50:56:9C:5A:F5]  0.645ms
Unicast reply from 10.156.163.3 [00:50:56:9C:5A:F5]  0.891ms
Unicast reply from 10.156.163.3 [00:50:56:9C:5A:F5]  0.714ms
Sent 3 probes (1 broadcast(s))
Received 3 response(s)
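Similarly, a quick ping sweep with nmap (if installed on the jump host) will list which addresses in the workload range respond; substitute your own subnet:

# nmap -sn 172.168.161.0/24

The -sn option performs host discovery only, without a port scan.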

 

vSphere with Tanzu — NSX-T Networking

Overview

In this section, we show how to quickly deploy vSphere with Tanzu and NSX-T using VMware Cloud Foundation (VCF). NSX provides a container plug-in (NCP) that interfaces with Kubernetes to automatically serve networking requests (such as ingress and load balancer) from NSX Manager. For more details on NCP, visit: https://via.vmw.com/ncp.

In addition, NSX-T networking enables two further elements: ‘vSphere Pods’ and a built-in version of the Harbor registry. The vSphere Pod service enables services from VMware and partners to run directly on top of ESXi hosts, providing a performant, secure and tightly integrated Kubernetes environment.

For more details on vSphere Pods see https://via.vmw.com/vsphere_pods and https://blogs.vmware.com/vsphere/2020/04/vsphere-7-vsphere-pod-service.html

 


 

Once the VCF environment with SDDC manager has been deployed (see https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html for more details), Workload Management can be enabled. Note that both standard and consolidated deployments can be used.

Getting Started

Below is a summary of the detailed steps found in the VCF POC Guide.

First, in SDDC Manager, click on Solutions; this should show “Kubernetes – Workload Management”. Click on Deploy to bring up a window with the deployment prerequisites, i.e.:

  • Hosts are licensed correctly
  • An NSX-T based Workload Domain has been provisioned
  • NTP and DNS have been set up correctly
  • An NSX Edge cluster has been deployed with a ‘large’ form factor
  • The following IP addresses have been reserved for use:
    • a non-routable /22 subnet for pod networking
    • a non-routable /24 subnet for Kubernetes services
    • two routable /27 subnets for ingress and egress
    • 5x consecutive IP addresses in the management range for Supervisor services

 

Clicking on Begin will start the Kubernetes deployment wizard.



Select the appropriate cluster from the drop-down box. Click on the radio button next to the compatible cluster and click on Next:


The next screen will go through some validation checks.

Check that the validation succeeds. After clicking on Next again, check the details in the final Review window:


Click on ‘Complete in vSphere’ to continue the wizard in vCenter.

Ensure the correct cluster has been pre-selected:


 

To show the Storage section, click on Next. Select the appropriate storage policies for the control plane, ephemeral disks and image cache:


Click on Next to show the review window. Clicking on Finish will start the supervisor deployment process:


For an interactive guide of the steps above, visit:

https://core.vmware.com/delivering-developer-ready-infrastructure#step_by_step_guide_to_deploying_developer_ready_infrastructure_on_cloud_foundation_isim_based_demos

 

TKG Content Library

To later set up Tanzu Kubernetes clusters, we first need to set up the TKG Content Library to pull in the TKG VM images from the VMware repository.

Create a subscribed content library (Menu > Content Libraries > Create New Content Library) pointing to the URL:

https://wp-content.vmware.com/v2/latest/lib.json


For the detailed procedure, see the documentation: https://via.vmw.com/tanzu_content_library

 

 

Supervisor Cluster Setup

After the process has been completed, navigate to Cluster > Monitor > Namespaces > Overview to ensure the correct details are shown and the health is green. Note that whilst the operations are in progress, there may be ‘errors’ shown on this page, as it is monitoring a desired state model:


 

Configure Supervisor Cluster Namespace(s) with RBAC

Once the supervisor cluster has been configured, a namespace should be created in order to set permissions, storage policies, capacity limitations and so on. In Kubernetes, a namespace provides a logical grouping of resources such as pods, services and persistent volumes.

To create a namespace, navigate to Menu > Workload Management > Namespaces > New Namespace.
Fill in the necessary fields and click Create.


 

The new namespace area will be presented. This is where permissions, storage policies and other options can be set.


After clicking the “Got It” button, the summary will show a widget where permissions can be set.


Click on Add Permissions and fill in the necessary fields. It is important to note that the user/group to be added to this namespace should have been created ahead of time. This can be an Active Directory user/group (see https://via.vmw.com/ad_setup) or a ‘vsphere.local’ user/group:


After adding permissions, the summary screen will show who has permissions and of what type. Clicking the Manage Permissions link will take you to the Permissions tab for this namespace.


From the permissions tab, you can add/remove/edit permissions for a particular namespace. Thus, here we can enable access for a developer to be able to consume the namespace.


 

Configure Supervisor Cluster Namespace(s) Storage Policy

First, configure any storage policies as needed, either by defining a VM storage policy (as is the case for vSAN) or by tagging an existing datastore. Note that vSAN comes with a default storage policy ‘vSAN Default Storage Policy’ that can be used without any additional configuration.

To create a VM storage policy, navigate to Menu > Policies and Profiles > VM Storage Policies and click on ‘Create’. Follow the prompts for either a vSAN storage policy or tag-based policy under ‘Datastore Specific rules’.


To create a tag-based VM storage policy, reference the documentation: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-D025AA68-BF00-4FC2-9C7E-863E5787E743.html

Once a policy has been created, navigate back to the namespace and click on ‘Add Storage’:


Select the appropriate storage policy to add to the namespace:


 

 

 

Configure Supervisor Cluster Namespace(s) with Resource Limitations

Resource limitations such as CPU, memory, and storage can be tied to a namespace. Under the namespace, click on the Configure tab and select Resource Limits.


By clicking on the edit button, resources can be limited for this specific Namespace. Resource limitations can also be set at the container level.


Note that under the Configure tab, it is also possible to limit objects such as Replica Sets, Persistent Volume Claims (PVC), and network services among others.


 

 

 

Lab VM Setup

Whilst many of the operations in this guide can be performed on a standard end-user machine (be it Windows, macOS or Linux), it is a good idea to deploy a jump host VM that has the tools and configuration ready to work with. A Linux VM is recommended.

Conveniently, there is a TKG Demo Appliance fling that we can leverage for our purposes. Download and deploy the OVA file from the link below (look for the ‘offline download’ of the TKG Demo Appliance OVA): https://via.vmw.com/tkg_demo

Note that throughout this guide, we use Bash as the command processor and shell. 

 

Downloading the kubectl plugin

See https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-0F6E45C4-3CB1-4562-9370-686668519FCA.html

Once a namespace has been created (see steps above), a command-line utility (kubectl-vsphere) needs to be downloaded to be able to log in to the namespace. First, navigate to the namespace in vCenter: Menu > Workload Management > Namespaces, then select ‘Copy link’:


This will provide the VIP address needed to log in to the namespace. Make a note of this address. Then on your jump VM, download the zip file ‘vsphere-plugin.zip’, either using a browser or via wget, pointing to https://<VIP>/wcp/plugin/linux-amd64/vsphere-plugin.zip
 

For example:

# wget https://172.168.61.129/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate

Unzip this file and place the contents in the system path (such as /usr/local/bin). The zip file contains two files, namely kubectl and kubectl-vsphere. Remember to set execute permissions.
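As a minimal sketch (assuming the archive extracts into a bin/ directory, and adjusting paths as needed):

# unzip vsphere-plugin.zip
# cp bin/kubectl bin/kubectl-vsphere /usr/local/bin/
# chmod +x /usr/local/bin/kubectl /usr/local/bin/kubectl-vsphere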

To log into a namespace on the supervisor cluster, issue the following command, replacing the VIP IP with your own:

# kubectl vsphere login --server=172.168.61.129 --insecure-skip-tls-verify

Use the credentials of the user added to the namespace to login.

Note that the ‘insecure’ option needs to be specified unless the appropriate TLS certificates have been installed on the jump host. For more details see the ‘Shell Tweaks’ sub-section below.

Once logged in, perform a quick check to verify the health of the cluster using ‘kubectl cluster-info’:

# kubectl cluster-info
Kubernetes master is running at https://172.168.61.129:6443
KubeDNS is running at https://172.168.61.129:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 

 

Shell Tweaks (optional)

In order to have a better experience (with less typing and mistakes) it’s advisable to spend a little time further setting up our lab VM.

Installing Certificates:

In order to set up trust with vCenter, and to avoid having to skip TLS verification on every login, we need to download the certificate bundle and copy the certificates to the appropriate location.

The outline procedure for this is given in https://kb.vmware.com/s/article/2108294 with more details here, https://via.vmw.com/tanzu_tls

First, we download the certificate bundle from vCenter and unzip it:

# wget --no-check-certificate https://vCenter-lab/certs/download.zip
# unzip download.zip

 

Then copy the certificates to the correct location. This is determined by the operating system; in the case of the TKG Demo Appliance (Photon OS), it is /etc/ssl/certs:

# cp certs/lin/* /etc/ssl/certs

Finally, either use an OS utility to update the system certificates, or reboot the system.
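As a sketch, the utility and the trust-store location vary by distribution. On Debian/Ubuntu-based jump hosts (certificates placed in /usr/local/share/ca-certificates) the command is:

# update-ca-certificates

On RHEL/CentOS-based hosts (certificates placed in /etc/pki/ca-trust/source/anchors):

# update-ca-trust

On the TKG Demo Appliance itself, a reboot is the simplest option.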

 

Password as an environment variable:

We can store the password used to log in to the supervisor cluster in an environment variable. This can then be combined with the login command for quicker/automated logins, for example (here we have also installed the certificates, so we have a shorter login command):

# export KUBECTL_VSPHERE_PASSWORD=P@ssw0rd
# kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://172.168.161.101

For autocomplete:

# source <(kubectl completion bash)
# echo "source <(kubectl completion bash)" >> ~/.bashrc

To set the alias of kubectl to just ‘k’:  

# echo "alias k='kubectl'" >> ~/.bashrc
# complete -F __start_kubectl k

 

YAML validator

It is a good idea to check any manifest files for correct syntax before applying them. Tools such as ‘yamllint’ (or similar, including online tools) validate files quickly and point out where there may be errors.
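For example, assuming yamllint is installed on the jump VM (e.g. via pip), a manifest can be checked before applying it; the ‘relaxed’ preset tones down purely stylistic warnings:

# pip3 install yamllint
# yamllint -d relaxed TKG-deploy.yaml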

 

For more details and other tools see the following links:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://yamllint.readthedocs.io/

 

 

Tanzu Kubernetes Cluster Deployment

Once the Supervisor cluster has been enabled, and a Namespace created (as above), we can create an upstream-compliant Tanzu Kubernetes Cluster (TKC). This is done by applying a manifest on the supervisor cluster which will define how the cluster is setup. (Note that the terms TKC and TKG cluster are used interchangeably within this guide.)

First, make sure that the Supervisor Namespace has been correctly configured. A content library should have been created to pull down the TKG VMs. In vSphere 7 update 2a there is a further requirement to add a VM class.

Navigating to Hosts and Clusters > Namespaces > [namespace] will give you a view of the information cards. The card labelled ‘Tanzu Kubernetes Grid Service’ should have the name of the content library hosting the TKG VMs.


On the ‘VM Service’ card click on ‘Add VM Class’ to add VM class definitions to the Namespace:


This will bring up a window to enable you to add the relevant VM classes (or to create your own). Select all available classes and add them to the Namespace:


For more details on the sizing see: https://via.vmw.com/tanzu_vm_classes.

Next, we can proceed to log in to the supervisor namespace using ‘kubectl vsphere login’. If necessary, use the ‘kubectl config use-context’ command to switch to the correct supervisor namespace.

To get the contexts available (the asterisk shows the current context used):

# kubectl config get-contexts
CURRENT   NAME             CLUSTER           AUTHINFO             NAMESPACE
*         172.168.61.129   172.168.61.129    dev@vsphere.local
          ns01             172.168.61.129    dev@vsphere.local    ns01

And to switch between them:

# kubectl config use-context ns01
Switched to context "ns01".

 

If we have setup our TKC content library correctly, we should be able to see the downloaded VM images using the command ‘kubectl get tkr’:

# kubectl get tkr
NAME                                VERSION                      
v1.16.12---vmware.1-tkg.1.da7afe7   1.16.12+vmware.1-tkg.1.da7afe7
v1.16.14---vmware.1-tkg.1.ada4837   1.16.14+vmware.1-tkg.1.ada4837
v1.16.8---vmware.1-tkg.3.60d2ffd    1.16.8+vmware.1-tkg.3.60d2ffd
v1.17.11---vmware.1-tkg.1.15f1e18   1.17.11+vmware.1-tkg.1.15f1e18
v1.17.11---vmware.1-tkg.2.ad3d374   1.17.11+vmware.1-tkg.2.ad3d374
v1.17.13---vmware.1-tkg.2.2c133ed   1.17.13+vmware.1-tkg.2.2c133ed
v1.17.17---vmware.1-tkg.1.d44d45a   1.17.17+vmware.1-tkg.1.d44d45a
v1.17.7---vmware.1-tkg.1.154236c    1.17.7+vmware.1-tkg.1.154236c
v1.17.8---vmware.1-tkg.1.5417466    1.17.8+vmware.1-tkg.1.5417466
v1.18.10---vmware.1-tkg.1.3a6cd48   1.18.10+vmware.1-tkg.1.3a6cd48
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412
v1.18.5---vmware.1-tkg.1.c40d30d    1.18.5+vmware.1-tkg.1.c40d30d
v1.19.7---vmware.1-tkg.1.fc82c41    1.19.7+vmware.1-tkg.1.fc82c41
v1.20.2---vmware.1-tkg.1.1d4f79a    1.20.2+vmware.1-tkg.1.1d4f79a

Thus versions through to v1.20.2 are available to use.
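We can also list the VM classes from the same context; this is a sketch and assumes the VirtualMachineClass API is exposed in your vSphere version (vSphere 7 U2a and later). The names returned (e.g. guaranteed-small) are the values used for the ‘class’ fields in the manifest below:

# kubectl get virtualmachineclasses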

We then need to create a manifest to deploy the TKC VMs. An example manifest is shown below; this will create a cluster called ‘tkgcluster1’ in the ns01 supervisor namespace, consisting of one control plane node and three worker nodes, running Kubernetes version 1.17.8:

TKG-deploy.yaml

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgcluster1
  namespace: ns01
spec:
  distribution:
    version: v1.17.8
  topology:
    controlPlane:
      count: 1
      class: guaranteed-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: guaranteed-small
      storageClass: vsan-default-storage-policy

 

Let’s dissect this manifest to examine the components:


 

A: These lines specify the API version and the kind; these should not be modified. To get the available API versions for Tanzu, run ‘kubectl api-versions | grep tanzu’.

B: Tanzu Kubernetes cluster name is defined in the field ‘name’ and the supervisor namespace is defined in the ‘namespace’ field.

C: The K8s version (v1.17.8) is defined. This will depend on the downloaded TKG VMs from the content library. Use the command ‘kubectl get tkr’ to obtain the available versions.

D: The created VMs will use the ‘guaranteed-small’ VM class.

E: The storage policy (storageClass) to be used by the control plane and worker VMs.

For clarity, some fields have been omitted (the defaults will be used). For a full list of parameters, refer to the documentation: https://via.vmw.com/tanzu_params and further manifest file examples: https://via.vmw.com/tanzu_yaml

Once this file has been created, use kubectl to start the deployment. For example, we save our manifest file as ‘TKG-deploy.yaml’ (as above) and apply it:

# kubectl apply -f TKG-deploy.yaml

The supervisor cluster will create the required VMs and configure the TKC as needed. This can be monitored using the get and describe verbs on the ‘tkc’ noun:

# kubectl get tkc -o wide
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tkgcluster1   1               3        v1.17.8+vmware.1-tkg.1.5417466   28d   running

 

# kubectl describe tkc
Name:         tkgcluster1
Namespace:    ns01
Labels:       <none>
Annotations:  API Version:  run.tanzu.vmware.com/v1alpha1
Kind:         TanzuKubernetesCluster
.
.
Node Status:
    tkgcluster1-control-plane-jznzb:            ready
    tkgcluster1-workers-fl7x8-59849ddbb-g8qjq:  ready
    tkgcluster1-workers-fl7x8-59849ddbb-jqzn4:  ready
    tkgcluster1-workers-fl7x8-59849ddbb-kshrt:  ready
  Phase:                                        running
  Vm Status:
    tkgcluster1-control-plane-jznzb:            ready
    tkgcluster1-workers-fl7x8-59849ddbb-g8qjq:  ready
    tkgcluster1-workers-fl7x8-59849ddbb-jqzn4:  ready
    tkgcluster1-workers-fl7x8-59849ddbb-kshrt:  ready
Events:                                         <none>

For more verbose output and to watch the cluster being built out, select yaml as the output with the ‘-w’ switch:

# kubectl get tkc -o yaml -w
.
.
  nodeStatus:
    tkc-1-control-plane-lvfdt: notready
    tkc-1-workers-fxspd-894697d7b-nz682: pending
  phase: creating
  vmStatus:
    tkc-1-control-plane-lvfdt: ready
    tkc-1-workers-fxspd-894697d7b-nz682: pending

 

In vCenter, we can see the TKC VMs being created (as per the manifest) within the supervisor namespace:


Once provisioned, we should be able to see the created VMs in the namespace:

# kubectl get wcpmachines
NAME                                    PROVIDERID   IPADDR
tkgcluster1-control-plane-scsz5-2dr55   vsphere://421075449  172.168.61.33
tkgcluster1-workers-tjpzq-gkdn2         vsphere://421019aa  172.168.61.35
tkgcluster1-workers-tjpzq-npw88         vsphere://421055cf  172.168.61.38
tkgcluster1-workers-tjpzq-vpcwx         vsphere://4210d90c  172.168.61.36

 

Once the TKC has been created, log in to it by using ‘kubectl vsphere’ with the following options:

# kubectl vsphere login --server=<VIP> \
--insecure-skip-tls-verify \
--tanzu-kubernetes-cluster-namespace=<supervisor namespace> \
--tanzu-kubernetes-cluster-name=<TKC name>

For example:

# kubectl-vsphere login --server=https://172.168.61.129 \
--insecure-skip-tls-verify \
--tanzu-kubernetes-cluster-namespace=ns01 \
--tanzu-kubernetes-cluster-name=tkgcluster1

Log in using the credentials of the user assigned to the namespace. You can then change contexts between the TKC and the supervisor namespace with the ‘kubectl config’ command (as above).
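As a quick sanity check, the login normally creates and switches to a context named after the cluster; confirming the context and listing the nodes should show one control plane node and three workers in the Ready state:

# kubectl config use-context tkgcluster1
# kubectl get nodes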

 

Developer Access to TKCs

Once a TKG cluster has been provisioned, developers will need sufficient permissions to deploy apps and services.

A basic RBAC profile is shown below:

tkc-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all:psp:privileged
roleRef:
  kind: ClusterRole
  name: psp:vmware-system-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io

This can also be achieved using the kubectl command:

# kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

For more information, see the documentation to grant developer access to the cluster: https://via.vmw.com/tanzu_rbac

 

App Deployment and Testing

Deploy Kuard to verify setup

A very basic test to see if the K8s cluster is operational is to deploy KUARD (Kubernetes Up And Running demo).

Use the commands below to pull the KUARD image and assign an IP to it (the load balancer, e.g. HaProxy or NSX ALB, will serve the IP from the virtual server range):

# kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard
# kubectl expose pod kuard --type=LoadBalancer --name=kuard --port=8080

Once deployed, we can list the external IP assigned to it using the ‘get service’ command:

# kubectl get service
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
kuard      LoadBalancer   10.96.0.136   152.17.31.132   8080:30243/TCP   6s

 

Therefore, opening a browser to the ‘External-IP’ on port 8080, i.e. http://152.17.31.132:8080, should give us a webpage showing the KUARD output:

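When finished with the test, the pod and its load balancer service can be removed:

# kubectl delete service kuard
# kubectl delete pod kuard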

 

Persistent Volume Claims (PVC)

To create a PVC, first we need to map any storage policies (defined in vCenter) we wish to use to the supervisor namespace.

In this example, we describe how to do this with standard (block) vSAN volumes. Note, at the time of writing, using the vSAN File Service to provision RWX volumes for Tanzu is not supported.

First, create the storage policy in vCenter, under Menu > Policies and Profiles > VM Storage Policies. Note the convention of using lowercase names:


Then add them to the namespace by clicking on ‘Edit Storage’:


Select any additional storage policies. In the example below, we add the new ‘raid-1’ policy:


To list all of the available storage classes, we run:

# kubectl get storageclass
NAME     PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
raid-1   csi.vsphere.vmware.com   Delete          Immediate           true                  3m54s

We can then create a PVC using a manifest. In the example below, we create a 2Gi volume:

2g-block-r1.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc-r1-2g
spec:
  storageClassName: raid-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then apply this:

# kubectl apply -f 2g-block-r1.yaml

To see the details:

# kubectl get pvc
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS      AGE
block-pvc-r1-2g     Bound    pvc-0a612267  2Gi        RWO            raid-1            51m

Now that we have a volume, we can attach it to a pod. In the example below, we create a pod using BusyBox and mount the volume created above:

simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
spec:
  containers:
  - name: simple-pod
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: block-vol
      mountPath: "/mnt/volume1"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: block-vol
      persistentVolumeClaim:
        claimName: block-pvc-r1-2g

Once the pod has been created, we can examine the storage within it.

First we run a shell on the pod:

# kubectl exec -it simple-pod -- /bin/sh

Using the df command, we can see the volume has been attached and is available for consumption:

# df -h /mnt/volume1/
Filesystem                Size      Used Available Use%  Mounted on
/dev/sdb                  1.9G      6.0M      1.8G   0%  /mnt/volume1
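Still inside the pod's shell, a simple write and read confirms the volume is usable (the file name is arbitrary):

# echo "hello from simple-pod" > /mnt/volume1/test.txt
# cat /mnt/volume1/test.txt
hello from simple-pod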

Furthermore, we can see the PVCs created by a Kubernetes admin in vCenter by navigating to either Datacenter > Container Volumes or Cluster > Monitor > Container Volumes:


Clicking on the square next to the volume icon shows more information about the PVC and where it is used. From our example, we see the guest cluster, the pod name “simple-pod” and the PVC name given in the manifest:


 


Clicking on Physical Placement shows (as we are using a vSAN store) the backing vSAN details:


We can also see details of the PVC in vCenter under Cluster > Namespaces > Namespace > Storage > Persistent Volume Claims:


Here, we can see more details (specifically, the Kubernetes parameters) if we click on ‘View YAML’:


 


 

 

 

WordPress & MySQL app

The Kubernetes documentation has a practical example of using PVCs with WordPress and MySQL:
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

However, the PVCs in the example manifests do not specify a storage class (which is required for the PVC to be created). To successfully deploy this app, we must either add a default storage class to our TKC manifest or edit the app manifests to define a storage class explicitly.

The outline steps for this example app are as follows:

  1. Ensure that a TKC RBAC profile has been applied to the cluster (see the previous section on creating TKG clusters and granting developer access)
  2. Create a new directory on the jump VM
  3. Generate the kustomization.yaml file with a password
  4. Download the two manifest files for MySQL and WordPress using curl
  5. Add the two files to the kustomization.yaml as shown
  6. Follow one of the two options below to satisfy the storage class requirement. (For the quickest solution, copy and paste the awk line in option 2)

First, we apply the RBAC profile, as before:

# kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

We create a directory ‘wordpress’:

# mkdir wordpress; cd wordpress

As per the example, we generate the kustomization.yaml file, entering a password (we combine steps 3&5 for brevity):

# cat <<EOF > kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=P@ssw0rd
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
EOF

Then download the two manifests:

# curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
# curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml

Looking at the manifest file wordpress-deployment.yaml:

wordpress-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim

We notice that:

  • It creates a LoadBalancer service in the first instance. This will interact with the network provider we have provisioned (either HaProxy/NSX ALB, or NCP in the case of NSX-T).
  • A Persistent Volume Claim of 20Gi is instantiated
  • The WordPress container image is specified (to be pulled/downloaded)

However, no storage class is specified, so as-is this deployment will fail. There are two options to address this:

Option 1: Patch or Edit the TKC manifest to add a default StorageClass

Here, we will define a default storage class for our TKG cluster. First, change context to the namespace in which the TKG cluster resides. In the example below, this is ‘ns01’:

# kubectl config use-context ns01

Then patch with the storage class we want to make the default; in this case “vsan-default-storage-policy”:

# kubectl patch storageclass vsan-default-storage-policy -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Alternatively, another way to achieve this is to edit the tkc manifest for your TKG cluster, for instance:

# kubectl edit tkc/tkgcluster1

Then add the following lines under spec/settings:

storage:
   defaultClass: <storage policy>

For example, we add the ‘vsan-default-storage-policy’:

spec:
  distribution:
    fullVersion: v1.17.8+vmware.1-tkg.1.5417466
    version: v1.17.8
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 192.168.0.0/16
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 10.96.0.0/12
    storage:
      defaultClass: vsan-default-storage-policy

We should then see the effects when running a ‘get storageclass’:

# kubectl get storageclass
NAME                                    PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
vsan-default-storage-policy (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   40h

For more details on the default StorageClass, see the Kubernetes documentation, https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

For more details on editing the TKC manifest, see the documentation: https://via.vmw.com/tanzu_update_manifest

Option 2: Edit the app manifest files to explicitly add the storage class:

Add the following line to the two manifest files after the line ‘- ReadWriteOnce’:

storageClassName: <storage policy>

For example:

spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-default-storage-policy

We could also use a script to add this line in to both files. For example, using awk:

# for x in $(grep -l 'app: wordpress' *); do awk '/ReadWriteOnce/{print;print "  storageClassName: vsan-default-storage-policy";next}1' $x >> ${x/.yaml/}-patched.yaml; done

Patched versions are also available in the GitHub repository.

After the storage policy has been set, run the following command within the directory:

# kubectl apply -k ./

Once the manifests are applied, we can see that the PVCs have been created:

# kubectl get pvc
NAME              STATUS   VOLUME     CAPACITY   ACCESS    STORAGECLASS                  
mysql-pv-claim    Bound    pvc-6d9d   20Gi       RWO       vsan-default-storage-policy   
wp-pv-claim       Bound    pvc-1906   20Gi       RWO       vsan-default-storage-policy 

We can see that the LoadBalancer service has been created with a dynamic IP address. The external IP can be obtained from the service ‘wordpress’:

# kubectl get services wordpress
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
wordpress   LoadBalancer   10.101.154.101   172.168.61.132   80:31840/TCP   3m21s

If we were to have a look within the network provider, we would see our service there.

For example, in NSX ALB, if we navigate to Applications > Virtual Services:


Further settings, logs, etc. can then be explored inside of the network provider.

In vCenter, we can see that the PVC volumes have been created and tagged with the application name:


 

Finally, putting the external IP (in this case 172.168.61.132) into a browser should give the WordPress setup page:


 

To remove the app,

# kubectl delete -k ./

 

 

Re-deploy WordPress app with a Static Load balancer address

Earlier we saw that the load balancer address (in this case 172.168.61.132) had been automatically assigned. With NSX-T and NSX ALB, we can statically define the load balancer address.

We edit our load balancer spec, defined in wordpress-deployment.yaml, and add the extra line ‘loadBalancerIP’ pointing to the address 172.168.161.108:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
  loadBalancerIP: 172.168.161.108

 
Apply this again:

# kubectl apply -k ./

We can confirm that the service uses the static IP:

# kubectl get service wordpress
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
wordpress   LoadBalancer   10.107.115.82   172.168.161.108   80:30639/TCP   5m1s

 

For more information on using the load balancer service with a static IP address, see the example given in the official documentation (which also covers an important security consideration): https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-83060EA7-991B-4E1E-BBE4-F53258A77A9C.html

 

Developer Self-Service Namespace: Create a new Supervisor Namespace and TKC

Supervisor Namespaces provide logical segmentation between sets of resources and permissions.  Traditionally, a vSphere admin manages infrastructure resources that are then made available into environments for users to consume. Whilst this model ensures that the vSphere admin is able to fairly manage resources across the organisation, there is an operational overhead to this.

Here, we give a devops user the ability to create Supervisor Namespaces, using a resource template that has been created by the vSphere admin. Then we show how the devops user can make use of this to create another TKG cluster.

First, in vCenter, navigate to the cluster that has Workload Management enabled, then navigate to Configure > Namespaces > General. Expand the ‘Namespace Service’ box and toggle to enable:


This will then bring up a configuration window to define a new template for resource assignment:


Add permissions to an existing devops user:


And confirm:


The devops user (as assigned permissions by the vSphere admin) is now able to create supervisor namespaces.

First, we switch contexts to the supervisor cluster:

# kubectl config use-context 172.168.161.101
Switched to context "172.168.161.101"

Then create the namespace:

# kubectl create namespace ns3
namespace/ns3 created

To ensure the local information is synchronised, re-issue a login (a logout is not needed).
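For example (a sketch; substitute your own supervisor address, and note that the devops username shown here is an assumption based on the user assigned in the template above):

# kubectl vsphere login --server=https://172.168.161.101 \
--vsphere-username devops@vsphere.local \
--insecure-skip-tls-verify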

Switch to the new namespace:

# kubectl config use-context ns3
Switched to context "ns3"

To create our TKC, we define our manifest, as before:

TKG-deploy.yaml

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgcluster2
  namespace: ns3
spec:
  distribution:
    version: 1.20.2+vmware.1-tkg.1.1d4f79a
  topology:
    controlPlane:
      class: best-effort-small
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-small
      count: 3
      storageClass: vsan-default-storage-policy

And apply:

# kubectl apply -f TKG-deploy.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tkgcluster2 created

As before, we can watch the deployment:

# kubectl get tkc tkgcluster2 -o yaml -w

 

For more information on the self-service namespaces, visit: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-BEEA763E-43B7-4923-847F-5E0398174A88.html

 

 

Deploy a Private Registry VM using the VM Service and add to TKG Service Config

The VM Service is a new feature in vSphere 7 Update 2a which allows VMs to be provisioned using kubectl within a Supervisor Namespace, giving developers the ability to deploy and manage VMs in the same way they manage other Kubernetes resources.

Note that a VM created through the VM Service can only be managed using kubectl: vSphere administrators can see the VM in vCenter and display its details and resource usage, but cannot edit or otherwise alter the VM. For more information, see ‘Monitor VMs in the vSphere Client’ in the documentation.

We also have the ability, from vSphere 7 Update 2a, to use private registries for TKG clusters.

In this example, we will use the VM Service feature to deploy a VM as a devops user and then install a Harbor registry on it. Finally, we will use that Harbor instance as a private registry for a TKG cluster.

First, the VI admin must configure the VM Service in vCenter.

Similar to TKG, we need to set up a content library to pull from. At the time of writing, CentOS and Ubuntu images are available for testing from the VMware Marketplace:
https://marketplace.cloud.vmware.com

To obtain a subscription link, first sign in using your ‘myvmware’ credentials.


Clicking on ‘Subscribe’ will take you through the wizard to enter settings and accept the EULA:


The site will then create a Subscription URL:


See the VMware Marketplace documentation for more details, https://docs.vmware.com/en/VMware-Marketplace/services/vmware-marketplace-for-consumers/GUID-0BB96E5E-123F-4BAE-B663-6C391F57C884.html

Back in vCenter, create a new content library with the link provided:


We then proceed to configure a namespace. If needed, create a new namespace and note the ‘VM Service’ info box:


Add at least one VM class:


Further VM classes can be defined by navigating to Workload Management > Services > VM Service > VM Classes

Add the content library configured above:


Now that the service is ready, the rest of the steps can be performed as a devops user.

 

Deploy VM using VM Service

As usual, log in to the cluster and switch contexts to the configured namespace. We can then see the virtual machine images available (we exclude the TKG images for our purposes):

# kubectl get vmimage | grep -v tkg
NAME                                            OSTYPE                FORMAT   AGE
bitnami-jenkins-2.222.3-1                       otherLinux64Guest     ovf      2d2h
centos-stream-8-vmservice-v1alpha1.20210222     centos8_64Guest       ovf      2d2h

Here we will deploy the CentOS image.

First, we create a file named ‘centos-user-data’ that captures the user, password and any customisation parameters. Use the following as a guide, replacing the password and authorized keys, etc.:

#cloud-config
chpasswd:
    list: |
      centos:P@ssw0rd
    expire: false
packages:
  - wget
  - yum-utils
groups:
  - docker
users:
  - default
  - name: centos
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EA… root@tkg.vmware.corp
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
network:
  version: 2
  ethernets:
      ens192:
          dhcp4: true

Next, we encode that file in base64 (and remove any newlines):

# cat centos-user-data | base64 | tr -d '\n'
I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgdWJ1bnR1OlBAc3N3MHJkCiAgICBleHBpcmU6IGZhbHNlCnBhY2thZ2VfdXBncmFkZTogdHJ1ZQpwYWNrYWdlczoKICAtIGRvY2tlcgpncm91cHM6CiAgLSBk

For the next step, re-confirm the network name that was defined:

# kubectl get network
network-1

Then we create a manifest for the VM (cloudinit-centos.yaml) and add the encoded string from the previous step under ‘user-data’. Note the values for the namespace, network, class name, image name, storage class and hostname, and adjust accordingly:

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: centos-vmsvc
  namespace: ns2
spec:
  networkInterfaces:
  - networkName: network-1
    networkType: vsphere-distributed
  className: best-effort-small
  imageName: centos-stream-8-vmservice-v1alpha1.20210222
  powerState: poweredOn
  storageClass: vsan-default-storage-policy
  vmMetadata:
    configMapName: centos-vmsvc
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
    name: centos-vmsvc
    namespace: ns2
data:
  user-data: |
    I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgICAgdWJ1bn…
  hostname: centos-vmsvc

Note: ensure that the base64-encoded data is indented correctly. Use a YAML validator, such as yamllint, to make sure the format is correct.

We then apply this manifest:

# kubectl apply -f cloudinit-centos.yaml

We should see this now being created:

# kubectl get vm
NAME                        POWERSTATE   AGE
centos-vmsvc                             4s

Just like the TKC deployment, we can watch the status (and wait for the IP address):

# kubectl get vm centos-vmsvc -o yaml -w

Once the VM has been deployed, we can query the IP address:

# kubectl get vm centos-vmsvc -o yaml | grep Ip
        f:vmIp: {}
  vmIp: 172.168.161.6

We should now be able to log in to the VM. If the matching SSH public key was added to the user-data above, this should drop straight to a prompt:

# ssh centos@172.168.161.6
[centos@centos-vmsvc ~]$
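Optionally, we can confirm that cloud-init applied the user-data successfully. A quick check from inside the VM, assuming the image’s cloud-init version provides the ‘status’ subcommand:

❯ sudo cloud-init status
❯ sudo tail /var/log/cloud-init-output.log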

 

Prepare the deployed VM and Install Harbor

We need to prepare the VM by installing Docker:

❯ sudo yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

 

❯ sudo yum install -y docker-ce docker-ce-cli containerd.io

See https://docs.docker.com/engine/install/centos/ for further details on installing Docker on CentOS.
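Depending on the image, the Docker service may not be started automatically after installation. A minimal sketch to enable it and, optionally, allow the centos user to run docker without sudo (the group change only takes effect after logging out and back in):

❯ sudo systemctl enable --now docker
❯ sudo usermod -aG docker centos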

Next, within our new VM, we’ll download the Harbor installation script, as per the guide at https://goharbor.io/docs/2.0.0/install-config/quick-install-script/

❯ wget -O harbor.sh https://via.vmw.com/harbor-script

And set execute permissions and run it:

❯ chmod +x harbor.sh
❯ sudo ./harbor.sh

Follow the prompts (install using the IP address).

Next, copy the Harbor manifest template:

❯ sudo cp harbor/harbor.yml.tmpl harbor/harbor.yml

Edit the Harbor manifest file and update the hostname field with the IP address of the VM.
For example:

# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.168.161.6

For the next step, we will need to create a self-signed certificate, as per: https://goharbor.io/docs/1.10/install-config/configure-https/

First, generate the CA key and certificate, updating the subject fields as needed:

❯ openssl genrsa -out ca.key 4096

 

❯ openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=UK/L=UK/O=example/OU=Personal/CN=172.168.161.6" \
-key ca.key  -out ca.crt

Then generate the server certificate, updating the common name as needed:

❯ openssl genrsa -out testdmain.com.key 4096

 

❯ openssl req -sha512 -new \
    -subj "/C=CN/ST=UK/L=UK/O=example/OU=Personal/CN=172.168.161.6" \
    -key testdmain.com.key \
    -out testdmain.com.csr

 

❯ cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1=172.168.161.6
EOF

 

❯ openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in testdmain.com.csr \
    -out testdmain.com.crt
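Optionally, verify that the Subject Alternative Name extension was applied to the signed certificate, since clients will validate the registry IP address against it:

❯ openssl x509 -in testdmain.com.crt -noout -text | grep -A1 "Subject Alternative Name"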

We will then need to copy the cert files to the appropriate directory:

❯ sudo cp testdmain.com.* /etc/pki/ca-trust/source/anchors/

Run the following command to ingest the certificates:

❯ sudo update-ca-trust

Convert the .crt file into a .cert file for use by Docker and copy the files into Docker’s certificate directory:

❯ openssl x509 -inform PEM -in testdmain.com.crt -out testdmain.com.cert

 

❯ sudo mkdir -p /etc/docker/certs.d/testdmain.com/
❯ sudo cp testdmain.com.cert /etc/docker/certs.d/testdmain.com/
❯ sudo cp testdmain.com.key /etc/docker/certs.d/testdmain.com/

Restart Docker:

❯ sudo systemctl restart docker

Now, we must configure Harbor to use the certificate files:

❯ sudo vi harbor/harbor.yml

In the https section, update the certificate and private key lines to point to the correct files, for example:

  certificate: /etc/pki/ca-trust/source/anchors/testdmain.com.crt
  private_key: /etc/pki/ca-trust/source/anchors/testdmain.com.key

Next, we run the Harbor prepare script:

❯ cd harbor
❯ sudo ./prepare

Then restart the Harbor instance:

❯ sudo docker-compose down -v
❯ sudo docker-compose up -d
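Before moving on, the state of the Harbor containers can be checked (assuming the harbor directory is still the working directory); all services should eventually report as up/healthy:

❯ sudo docker-compose ps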

Wait for the services to start and log out of the CentOS VM.

 

Configure the TKG Service to Trust the Deployed Repository

Test the instance by using a browser to navigate to the IP address of the CentOS VM. The Harbor login page should be seen:


The default credentials are:

admin / Harbor12345

We can also test access using ‘docker login’. First obtain the certificate and store locally:

# echo | openssl s_client -connect 172.168.161.6:443 2>/dev/null -showcerts | openssl x509 > harbor.crt

Then move the certificate into the operating system’s certificate store. For Photon OS/TKG Appliance this is /etc/ssl/certs.

# mv harbor.crt /etc/ssl/certs/

Then update the OS to use the new certificate (a reboot may be needed).
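Alternatively, Docker on the jump host can be pointed at the certificate directly by placing it under /etc/docker/certs.d/&lt;registry address&gt;/ca.crt, which avoids a reboot. A sketch, assuming the registry is reached via the IP address used in this example:

# mkdir -p /etc/docker/certs.d/172.168.161.6
# cp /etc/ssl/certs/harbor.crt /etc/docker/certs.d/172.168.161.6/ca.crt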

Finally, log in to the Harbor instance (credentials are admin/Harbor12345); there should not be any certificate errors or warnings:

# docker login 172.168.161.6

Next, we will configure the TKG service to be able to use this registry.

Get the certificate from the CentOS VM in base64 format:

# echo | openssl s_client -connect 172.168.161.6:443 2>/dev/null -showcerts | openssl x509 | base64 | tr -d '\n'

We can then add this to a manifest to amend the TKG service configuration. Here we create ‘tks.yaml’. Add the certificate from the previous step:

tks.yaml

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  trust:
    additionalTrustedCAs:
      - name: harbor-ca
        data: [CERT GOES HERE]

As usual, apply:

# kubectl apply -f tks.yaml

Thus, any new TKG clusters created will automatically trust the registry.

 

For more information on the VM service, see: https://core.vmware.com/blog/introducing-virtual-machine-provisioning-kubernetes-vm-service. This blog article also includes a GitHub repository with examples.

For more information on private registry support, see: https://core.vmware.com/blog/vsphere-tanzu-private-registry-support

 

Pulling from a Private Repository

In the previous exercise, we created a private Harbor repository to use with any new TKG clusters created. Here, we will push an image to the private repository and pull it into our TKG cluster.

First, obtain a test container, for instance busybox:

# docker pull busybox

We can then push this to our Harbor instance. First log in to the Harbor instance (replacing the IP address with your own):

# docker login 172.168.161.6

Next, tag the image and provide a repository name to save to:

# docker tag busybox:latest 172.168.161.6/library/myrepo:busybox

Finally, push the image:

# docker push 172.168.161.6/library/myrepo:busybox

See the Harbor documentation for further details on pushing images, https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/

Looking at our Harbor UI, under Projects > library > myrepo we can see that the image has been pushed.


Click on the image to bring up the information screen:


Clicking on the copy icon (the overlapping squares) next to the image gives the pull command. Confirm that this is the same image we tagged above.

Next, we create a Namespace and a new TKG cluster (see the section earlier in this guide), then log in to this new TKG cluster.

We then create a simple manifest that will pull the container. Replace the image string with the name copied from the Harbor UI.
We’ll call this manifest bb.yaml:

bb.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - image: "172.168.161.6/library/myrepo:busybox"
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
  restartPolicy: Always

Then apply:

# kubectl apply -f bb.yaml

This should pull very quickly, and we can get and describe the pod:

# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
busybox      1/1     Running   0          28m
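To confirm that the image really came from the private registry, describe the pod and check the Image field and the pull events:

# kubectl describe pod busybox | grep -i image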

 

 

Further Examples

Further examples of workloads on Tanzu Kubernetes Clusters can be found in the official documentation:

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-E217C538-2241-4FD9-9D67-6A54E97CA800.html

 

 

Lifecycle Operations

Scale Out Tanzu Kubernetes Clusters

Scaling out Tanzu Kubernetes Clusters involves increasing the number of nodes. You can increase the number of control-plane VMs, worker VMs, or both at the same time.

There are a couple of methods to approach this.

Method 1: Edit the YAML file used for deployment and re-apply it, just as was done to create the TKC.

Method 2: Use kubectl edit to modify the live cluster specification directly. When the change is saved, it is applied immediately.

We will focus on Method 2, as it is the more direct approach.

First, switch to the context of the namespace where the TKC lives:

# kubectl config use-context tkgcluster1

Then list TKG clusters:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   1               3        1.18.15+vmware.1-tkg.1.600e412

Here we can see that there is only one cluster, and it has 1 control-plane VM and 3 worker VMs.

Edit the TKC manifest

# kubectl edit tkc/tkgcluster1

The cluster manifest will open in the text editor defined by your KUBE_EDITOR or EDITOR environment variable (vi by default).

Locate the ‘topology’ section and change controlPlane count from 1 to 3:

  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy

Save the file.

You can watch the VM creation using the watch command combined with jq:

# watch "kubectl get tkc -o json | jq -r '.items[].status.vmStatus'"

We can see that there are now 3 control-plane VMs:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               3        1.18.15+vmware.1-tkg.1.600e412

In vCenter, we see that the extra VMs have been created.


In the same manner, you can scale out by increasing the number of worker nodes.

First, switch to the Supervisor Namespace where the TKG cluster resides:

# kubectl config use-context ns1

Then list the available TKG clusters:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               3        1.18.15+vmware.1-tkg.1.600e412

Here we can see that there is only one cluster, and it has 3 control-plane VMs and 3 worker VMs.

Edit the TKC manifest

# kubectl edit tkc/tkgcluster1

The cluster manifest will open in the text editor defined by your KUBE_EDITOR or EDITOR environment variable (vi by default)

As before, locate the ‘topology’ section. Change workers count from 3 to 5 and save the file:

  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 5
      storageClass: vsan-default-storage-policy

We can see that there are now 5 worker VMs:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               5        1.18.15+vmware.1-tkg.1.600e412
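If you then log in to the TKG cluster itself and switch to its context (named tkgcluster1 in this example), the additional worker nodes can be seen joining:

# kubectl get nodes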

Again, in vCenter, the new VMs can be seen:


 

 

Scale-In Tanzu Kubernetes Clusters

Scaling in Tanzu Kubernetes Clusters is just as easy as scaling out. The same procedure applies, except that this time we will decrease the number of worker nodes. Note that the control plane cannot be scaled in.

First, switch to the Supervisor Namespace where the TKG cluster resides:

# kubectl config use-context ns1

Then list the available TKG clusters:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               5        1.18.15+vmware.1-tkg.1.600e412

Edit the TKC manifest

# kubectl edit tkc/tkgcluster1

The cluster manifest will open in the text editor defined by your KUBE_EDITOR or EDITOR environment variable (vi by default)

As before, locate the ‘topology’ section, decrease the number of worker nodes, and save the file:

  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy

We can see that the number of workers has scaled back in to 3:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               3        1.18.15+vmware.1-tkg.1.600e412

 

Update Tanzu Supervisor Cluster

To update one or more Supervisor clusters, including the version of Kubernetes for the environment and the infrastructure supporting TKG clusters, you perform a vCenter and Namespace upgrade.

Note: it is necessary to upgrade the Supervisor Cluster first before upgrading any TKG clusters.


Upgrade vCenter

There are several methods for upgrading the vCenter Server appliance. Follow VMware’s best practices while conducting this upgrade.

Upgrade details are located in the official documentation for upgrading the vCenter Server Appliance.

 

Procedure to upgrade Namespace:

  • Log in to the vCenter Server as a vSphere administrator.
  • Select Menu > Workload Management.
  • Select the Namespaces > Updates tab.
  • Select the Available Version that you want to update to.
  • For example, select the version v1.18.2-vsc0.0.5-16762486.

 

Note: You must update incrementally. Do not skip versions, such as going directly from 1.16 to 1.18; the path should be 1.16, then 1.17, then 1.18.

  • Select one or more Supervisor Clusters to apply the update to.
  • To initiate the update, click Apply Updates.
  • Use the Recent Tasks pane to monitor the status of the update.

 

Update Tanzu Kubernetes Clusters

As opposed to the Supervisor cluster, which is administered and upgraded in vCenter, the child TKG clusters need to be updated using the standard Kubernetes toolset.

Updating a Tanzu Kubernetes Cluster can involve several variables, such as the Kubernetes version, the virtual machine class, and the storage class. There are several methods of updating this information for TKG clusters; refer to the official documentation for further details.

These approaches utilize commands such as kubectl edit, kubectl patch, and kubectl apply.

For this guide, we will highlight the ‘patch’ method to perform an in-place update of the cluster.

To upgrade the Kubernetes version, we will build a patch body and apply it to the cluster using the patch command. The approach demonstrated here uses the shell built-in read with a here-document to assign the patch body to a variable named PATCH.

The kubectl patch command invokes the Kubernetes API to update the cluster manifest. The '--type merge' flag indicates that the data contains only those properties that are different from the existing manifest.

First, we set the ‘fullVersion’ parameter to null. The ‘version’ parameter is then set to the version of Kubernetes we want to upgrade to.

For this exercise, our TKG cluster is deployed at version v1.18.15 and will be upgraded to version v1.19.7.

We can inspect the current version of our TKG cluster:

# kubectl get tkc tkgcluster1 -o json | jq -r '.spec.distribution'
{
  "fullVersion": "1.18.15+vmware.1-tkg.1.600e412",
  "version": "1.18.15+vmware.1-tkg.1.600e412"
}

Looking at the available versions, we can see that we have versions from 1.16.8 to 1.20.2 available:

# kubectl get tkr
NAME                                VERSION                      
v1.16.12---vmware.1-tkg.1.da7afe7   1.16.12+vmware.1-tkg.1.da7afe7
v1.16.14---vmware.1-tkg.1.ada4837   1.16.14+vmware.1-tkg.1.ada4837
v1.16.8---vmware.1-tkg.3.60d2ffd    1.16.8+vmware.1-tkg.3.60d2ffd
v1.17.11---vmware.1-tkg.1.15f1e18   1.17.11+vmware.1-tkg.1.15f1e18
v1.17.11---vmware.1-tkg.2.ad3d374   1.17.11+vmware.1-tkg.2.ad3d374
v1.17.13---vmware.1-tkg.2.2c133ed   1.17.13+vmware.1-tkg.2.2c133ed
v1.17.17---vmware.1-tkg.1.d44d45a   1.17.17+vmware.1-tkg.1.d44d45a
v1.17.7---vmware.1-tkg.1.154236c    1.17.7+vmware.1-tkg.1.154236c
v1.17.8---vmware.1-tkg.1.5417466    1.17.8+vmware.1-tkg.1.5417466
v1.18.10---vmware.1-tkg.1.3a6cd48   1.18.10+vmware.1-tkg.1.3a6cd48
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412
v1.18.5---vmware.1-tkg.1.c40d30d    1.18.5+vmware.1-tkg.1.c40d30d
v1.19.7---vmware.1-tkg.1.fc82c41    1.19.7+vmware.1-tkg.1.fc82c41
v1.20.2---vmware.1-tkg.1.1d4f79a    1.20.2+vmware.1-tkg.1.1d4f79a

We construct our ‘PATCH’ variable:

# read -r -d '' PATCH <<'EOF'
spec:
  distribution:
    fullVersion: null    # set to null as just updating version
    version: v1.19.7
EOF

Then we apply the patch to the existing tkc that we are targeting. The system should return that the TKG cluster has been patched:

# kubectl patch tkc tkgcluster1 --type merge --patch "$PATCH"
tanzukubernetescluster.run.tanzu.vmware.com/tkgcluster1 patched

Check the status of the TKG cluster; we can see that the ‘phase’ is shown as ‘updating’:

# kubectl get tkc
NAME        CONTROL PLANE   WORKER   DISTRIBUTION                     AGE  PHASE    
tkgcluster1 1               3        v1.19.7+vmware.1-tkg.1.fc82c41   7m   updating
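The phase can also be watched until it returns to ‘running’. A small sketch, assuming the phase is exposed at .status.phase, as suggested by the column output above:

# watch "kubectl get tkc tkgcluster1 -o jsonpath='{.status.phase}'"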

In vCenter, we can see a rolling upgrade of the control-plane VMs as well as the workers: a new VM is created with the new version of Kubernetes and, once that completes, the VM running the old version is deleted. This is done one VM at a time, starting with the control plane, until all nodes have been replaced.


After a few minutes, the status will change from ‘updating’ to ‘running’, at which point you can verify the cluster by running:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION                 
tkgcluster1   3               5        1.19.7+vmware.1-tkg.1.fc82c41
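We can also re-run the earlier jq query to confirm that the new distribution is now reflected in the cluster spec:

# kubectl get tkc tkgcluster1 -o json | jq -r '.spec.distribution'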

 

 

 

Delete Operations

Destroy TKC and related objects

In order to delete a Tanzu Kubernetes Cluster, first switch to the Supervisor Namespace where the cluster is located. Visually, this can be seen in vCenter:


We change context to the Supervisor Namespace that contains the TKG cluster that we would like to destroy:

# kubectl config use-context ns1

Double-check the namespace is the correct one; a star next to the name indicates the currently selected context:

# kubectl config get-contexts
CURRENT   NAME                  CLUSTER           AUTHINFO      NAMESPACE
          172.168.161.101       172.168.161.101   wcp: ...
*         ns1                   172.168.161.101   wcp: ...      ns1
          ns2                   172.168.161.101   wcp: ...      ns2
          ns3                   172.168.161.101   wcp: ...      ns3

See which TKG cluster(s) reside in the namespace:

# kubectl get tkc
NAME          CONTROL PLANE   WORKER   DISTRIBUTION       AGE   PHASE 
tkgcluster1   1               3        v1.20.2+vmware...  10d   running

Prior to deletion, search for the TKG cluster in the vCenter search field to see all related objects.

Finally, to delete the TKG cluster, in this case named ‘tkgcluster1’:

# kubectl delete tkc tkgcluster1
tanzukubernetescluster.run.tanzu.vmware.com "tkgcluster1" deleted

vCenter will show tasks relating to the deletion of the TKG cluster and all related objects.

From vCenter, we can then confirm that there are no more resources relating to the TKG cluster.

Delete Namespaces

To delete namespaces from the UI, navigate to Menu > Workload Management > Namespaces, select the namespace to be removed, and click Remove.

Note: ensure that there are no TKG clusters contained within the namespace before removal; a quick check from kubectl is shown below.
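A minimal check, replacing ns2 with the namespace that is about to be removed; both commands should return no resources:

# kubectl get tkc -n ns2
# kubectl get pvc -n ns2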

 

 

 

 

Delete Supervisor Cluster and Confirm Resources are Released

The Supervisor Cluster is deleted when you disable Workload Management for a specific cluster. This action will also delete any existing Namespaces and TKG clusters that exist within the cluster, so proceed with caution when disabling Workload Management.

You can first verify the Supervisor Cluster members by using the following command:

# kubectl get nodes
NAME                               STATUS   ROLES    AGE   VERSION
421c2fba09ab60c0ffe80c27a82d04af   Ready    master   12d   v1.19.1+wcp.3
421c4fcf29033faecfb403bb13656a39   Ready    master   12d   v1.19.1+wcp.3
421cefbcbfaeb030defbb8fcec097c48   Ready    master   12d   v1.19.1+wcp.3

From vCenter, use the search field to look for ‘supervisor’. This will return the supervisor VMs. You can add the DNS Name field and compare this with the output from the ‘kubectl get nodes’ command:

Once you have verified the Supervisor Cluster, you can delete it and all objects within it by going to Menu > Workload Management > Clusters tab, selecting the cluster to be deleted, and clicking Disable to remove the cluster and all of its objects.


 

In this case, you can see that the Supervisor Cluster houses a namespace and a TKG cluster.

You will receive a confirmation prompt prior to continuing with the deletion task:

Once you select the check box and click Disable, you will see some tasks such as powering off the TKC workers, deleting these virtual machines, deleting related folders, and lastly shutting down and deleting the Supervisor Cluster VMs.

 


When the tasks complete, the Clusters tab will no longer show the previously selected cluster, and you will not be able to connect to it via kubectl as the cluster no longer exists.


 

 

Monitoring

Monitor Namespaces and Kubernetes Object Resource Utilization (vCenter)

Resource monitoring is an important aspect of managing a Tanzu environment. As part of the vSphere integration, monitoring the resource utilization of namespaces and Kubernetes objects is possible directly through vCenter.

At the cluster level, it is possible to monitor the different namespaces that exist within vCenter. The overview pane provides a high-level view of the health, Kubernetes version and status, as well as the control plane IP and node health.

Navigate to Cluster > Monitor > Namespaces > Overview.


 

Under the Compute tab for the namespace, the Tanzu Kubernetes clusters and Virtual Machines views display key information about the environment, such as version, IP address and phase.


For Tanzu Kubernetes Clusters, the Monitor tab also provides insights specific to the particular TKG cluster. Information such as the performance overview, tasks and events, and resource allocation helps the admin understand the state and performance of the cluster.
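Alongside the vCenter views, a few basic checks can also be run from kubectl while connected to the Supervisor Namespace. A sketch, assuming a namespace named ns1:

# kubectl get resourcequotas -n ns1
# kubectl get events -n ns1 --sort-by=.lastTimestamp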


 

 

Deploy Octant (optional)

Octant is a highly extensible Kubernetes management tool that, amongst many other features, allows for a graphical view of the Kubernetes environment. This is useful in a PoC environment to see the relationship between the different components. See https://github.com/vmware-tanzu/octant for more details.

Octant demo

If the TKG Demo Appliance is being used, Octant is already installed. Otherwise, download and install Octant, as described in the Octant getting started page:
https://reference.octant.dev/?path=/docs/docs-intro--page#getting-started

Launch Octant simply by running the ‘octant’ command:

# octant &

Open an SSH tunnel to port 7777 of the jump host.

For instance, from a Mac terminal:

 $ ssh -L 7777:127.0.0.1:7777 -N -f -l root <jump host IP>

On Windows, using PuTTY, navigate to Connection > SSH > Tunnels in the left panel. Enter ‘7777’ for the source port and ‘127.0.0.1:7777’ as the destination, then click ‘Add’ and open a session to the jump host VM:


Then, if we open a browser to http://127.0.0.1:7777 (note http, not https), we can see the Octant console:

