vSphere With Tanzu and NSX Advanced Load Balancer Quick Start Guide (V7.0.3)

By Michael West,  Technical Product Manager, VMware

This quick start guide is meant to provide users with an easy and prescriptive path to enabling vSphere with Tanzu and the NSX Advanced Load Balancer.  The goal is to get you up and running quickly; this guide is intended for proof-of-concept or other non-production environments.  Other documentation describes reference architectures and provides detailed explanations of the products.  The vSphere with Tanzu Configuration and Management guide is here:  vSphere with Tanzu Configuration and Management

Prerequisites for Enabling vSphere with Tanzu

Versions Validated For This Guide:

vCenter: 7.0.3

ESXi: 7.0.3

NSX Advanced Load Balancer 20.1.7

vSphere Hosts and Clusters

At least 3 ESXi hosts joined to a vSphere cluster (4 if using vSAN)

At least one datastore that is shared across all hosts

vSphere HA and DRS must be enabled on the vSphere cluster

Content Library and Storage Policy

A content library subscribed to the VMware CDN at http://wp-content.vmware.com/v2/latest/lib.json

A storage policy that describes your shared datastore(s)

Networking

There are some decisions to be made before you start.  You will need a network that carries management traffic between vCenter, the Supervisor Control Plane, the NSX ALB Controller and the NSX ALB Service Engines.  We are using the vCenter Management Network in this guide. 

You also need at least one Workload Network.  This is the private network that connects the VMs that make up the Supervisor and TKG cluster nodes.  Most environments will also have a Frontend network.  This network contains the Virtual IPs (VIPs) assigned by the NSX ALB load balancer and must be reachable from your users' client devices.  We call this the Three Network configuration.

The Frontend Network is optional because you can define a single network to handle both Workload and Frontend traffic.  In that case you will carve out an IP range for each of them in a single Workload Network.  We call this the Two Network configuration.   You may also choose to use Static IP allocation or DHCP.  We will show the static IP allocation in this guide, but point out where you can select DHCP.

Three Network Configuration:

One Network for Management Traffic: 

5 Contiguous Static IPs for the Supervisor Cluster

1 IP for the NSX ALB Controller

Range of contiguous IPs for the NSX ALB Service Engines.  Minimum of 2 for this setup

One Network for Workload Traffic

Range of contiguous IPs for the Supervisor and TKG cluster nodes.   Plan on a minimum of 10-20 IPs for a small lab; add more if you are creating multiple TKG clusters

One Network for Load Balancer Virtual IPs (Called Frontend in this guide)

Range of contiguous IPs for the Load Balancer Virtual IPs (VIPs).   5-10 IPs for a small lab.  You will need more if creating multiple TKG clusters and running many Kubernetes Load Balancer Services.

(Note: for Two Network configuration, this range would be in the Workload Network and would be a range outside of the one defined for the Workload IPs)

Workload and Frontend Networks must be routable to each other

Management and Workload Networks must be routable to each other

Frontend Network must be reachable from User's client device

A vSphere Distributed Switch (vDS 7.0)

Portgroups for Management, Workload and Frontend networks

Two Network Configuration:

Everything is the same as Three Network Configuration except there is no separate Frontend network for VIPs.  You will simply carve out a VIP range and a Workload Range from the Workload Network.  Make sure the IP ranges do not overlap.

DHCP:

If you choose DHCP IP allocation for the Management Network, Workload Networks or both, the following are required:

DHCP server must assign NTP and DNS IPs as well as the DNS Search Domain

DHCP server must be configured to support Client Identifiers (This is because VM MAC address of cluster nodes can change during upgrade or HA failure events)

DNS and NTP Servers

At least one NTP server reachable by all networks

At least one DNS server reachable by all networks

Create Content Library

The content library holds the base images for the TKG cluster nodes.  Each time a new image is created in our development pipeline, it is pushed to a public content delivery network (CDN).  The content library subscribes to the CDN and pulls down updates.  Those new images are made available to individual Namespaces for use in creating the TKG clusters.

From vCenter: Click on Menu -> Click on Content Library -> Click on Create

Give it a Name -> Click Next

Click on Subscribed Content Library Button

Enter http://wp-content.vmware.com/v2/latest/lib.json as the Subscription URL -> Click Next

Do not apply security policy -> Click Next

Choose Storage -> Click Next

Verify the information is correct -> Click Finish

image 184
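If you prefer to script this step, the same subscribed library can be created with the govc CLI.  This is a minimal sketch, assuming govc is installed and GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD point at your vCenter; the library name and datastore name are examples, and flag names can vary slightly between govc versions.

# Create a subscribed content library backed by a shared datastore
govc library.create -sub "http://wp-content.vmware.com/v2/latest/lib.json" -ds vsanDatastore Kubernetes

# Verify the library exists
govc library.ls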

Create Storage Policy

There are many ways to configure a storage policy.  You must have a policy that describes a datastore that is shared across all of your ESXi hosts.  If you don't already have such a policy created, the simplest method is to tag the datastore you want to use, then create a VM Storage Policy with Tag Based Placement Rules enabled.  Then add the specific tag you placed on your datastore.  The steps are documented here:  Create Storage Policy for vSphere with Tanzu

image 185
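If you want to script the tagging portion, govc can create the tag category, create the tag, and attach it to the datastore; the VM Storage Policy itself is then created in the UI as described above.  A sketch, assuming govc is configured for your vCenter; the category, tag, datacenter and datastore names are placeholders.

# Create a category and tag, then attach the tag to the shared datastore
govc tags.category.create -t Datastore k8s-storage
govc tags.create -c k8s-storage k8s-policy-tag
govc tags.attach k8s-policy-tag /YourDatacenter/datastore/vsanDatastore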

My Lab Configuration

As we go through the deployment and setup, it's useful to have an overview of the networking for the reference lab

There are three Networks: 

192.168.110.0/24 Management 
192.168.130.0/24 Workload
192.168.220.0/23 Frontend (Load Balancer VIPs)

 

Assignment of IPs and what they are used for:

192.168.110.32                       NSX ALB Controller IP
192.168.110.120 - 192.168.110.139    Range of IPs for Service Engines
192.168.110.101 - 192.168.110.105    Supervisor Cluster nodes
192.168.130.2 - 192.168.130.127      Range of Workload Network IPs for the cluster nodes
192.168.110.10                       DNS
192.168.100.1                        NTP
corp.tanzu                           DNS Search Domain
192.168.220.2 - 192.168.220.127      Range of IPs for Load Balancer VIPs

 

If you are going to use a Two Network configuration, it might look like this:

There are two Networks: 

192.168.110.0/24 Management 
192.168.130.0/24 Workload

 

Assignment of IPs and what they are used for:

192.168.110.32                       NSX ALB Controller IP
192.168.110.120 - 192.168.110.139    Range of IPs for Service Engines
192.168.110.101 - 192.168.110.105    Supervisor Cluster nodes
192.168.130.2 - 192.168.130.127      Range of Workload Network IPs for the cluster nodes
192.168.110.10                       DNS
192.168.100.1                        NTP
corp.tanzu                           DNS Search Domain
192.168.130.128 - 192.168.130.250    Range of IPs for Load Balancer VIPs

 

 

vSphere Distributed Switch (vDS)

There is a single vDS connected to 4 ESXi hosts.  We will be using the DSwitch-Management portgroup for Management traffic.  The k8s-Workload portgroup has a VLAN ID of 130 and will carry the traffic between nodes in the Supervisor and TKG clusters.  The k8s-Frontend will have VLAN ID 220 and will be the network from which the Load Balancer VIPs are assigned.  

image 186
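If the Workload and Frontend portgroups do not exist yet, they can also be created from the CLI.  A sketch using govc with the reference lab names and VLAN IDs; it assumes the vDS is named DSwitch and that govc is configured for your vCenter.

# Create the Workload and Frontend portgroups on the existing vDS
govc dvs.portgroup.add -dvs DSwitch -vlan 130 k8s-Workload
govc dvs.portgroup.add -dvs DSwitch -vlan 220 k8s-Frontend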

Deploy and Configure NSX Advanced Load Balancer

The NSX Advanced Load Balancer is deployed from an OVA that you can download from myvmware.com.  Log in and search for NSX Advanced Load Balancer Download.  There are two parts to the setup of NSX ALB.  The first is to deploy the Controller OVF and configure the VM storage and networking.  The second is to configure it to support your vSphere-based cloud environment.  NSX ALB (previously known as the Avi Load Balancer) is an enterprise-grade load balancer with a highly scalable architecture.  The Controller is the management platform for the environment and deploys a set of Service Engines.  The Service Engines are configured with the Virtual IPs as the Controller assigns them and also have routes to the Workload Networks.  User load balancer traffic goes through the Service Engines.  A more comprehensive architectural discussion is available here:  NSX ALB Architecture Documentation

Deploy NSX ALB OVF

From vCenter Inventory view :

Right Click on Cluster -> Click Deploy OVF Template

image 251

Select Local File -> Choose the correct file -> Click Next

image 267

Select the Name and Folder

Enter Virtual Machine Name:   NSX-ALB-Controller-01a -> Click Next

image 253

Select Compute Resource 

Choose your Cluster and Click Next

image 254

Review Details

Click Next

image 268

Select Storage

Select one of the Datastores

Click Next

image 256

Select Networks

Management Source Network: Choose your Management Portgroup (Ours is DSwitch-Management)

Click Next

image 257

Enter Management Interface IP: (Ours is 192.168.110.32)

Enter Management Subnet Mask

Enter Default Gateway

Click Next

image 269

Click Finish

image 260

 

After OVF finishes deploying -> Right click on VM  -> Select Power -> Power On

Note:  For small lab environments you may want to reduce the default resources for the VM prior to powering it on.

It deploys by default with 8 vCPUs and 24 GB RAM.    You can take it down to 4 vCPUs and 12 GB.

image 261
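If you would rather make this change from the CLI than through Edit Settings in the vSphere UI, here is a sketch with govc (the VM name matches the one used above; memory is specified in MB):

# Reduce the Controller to 4 vCPUs and 12 GB of RAM before the first power on
govc vm.change -vm NSX-ALB-Controller-01a -c 4 -m 12288

# Then power it on
govc vm.power -on NSX-ALB-Controller-01a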

 

Configure NSX-ALB Controller

After Powering on the NSX-ALB Controller it takes a few minutes for all of the services to become available.

From your browser, Connect to the Management IP you configured.  (Reference lab: https://192.168.110.32:443)

There are several things to configure in the Controller.  

You will first set up an admin account.

Then you need to configure your cloud.  That means establishing vSphere access, selecting the datacenter and cluster, and configuring the pool of IPs on your Management Network that can be assigned to Service Engine VMs.

After the cloud is configured, you will provide the certificate used to authenticate to the Controller and set up placement of the Service Engines.

You also have to tell the Controller which network to use for the VIPs and whether you will use a static IP range or DHCP.  Finally, you must define the route that the Service Engines will use to reach the Workload Network from the Frontend VIPs.

Setup Admin Account in NSX ALB

From the initial NSX ALB Controller Login Screen you will create the Administrator account for the Controller

Enter Username: (Reference Lab: Admin)

Enter Password: (Reference Lab: VMware1!)

Click Create Account

image 262

Add Passphrase: (Reference Lab: VMware1!)

Add DNS IP: (Reference Lab: 192.168.110.10)

Add DNS Search Domain: (Reference Lab: corp.tanzu)

Click Next

image 188

Email/SMTP 

Select None

Click Next

image 263

Multi-Tenant

Take Defaults

Check the Setup Cloud After box.  (Don't skip this, or you will have to go through the other setup screens manually.)

Click Save

image 190  image 191

 

Setup vSphere as Default Cloud Provider

Click on VMware Logo to select Cloud Infrastructure Type

Click Next

image 192  image 193

Enter vCenter Admin Username

Enter vCenter Admin Password

Enter vCenter IP or FQDN

Click Next

image 194

Choose the vCenter Datacenter

Click Next

image 195

Choose the Management Network that Service Engines will be placed on (Reference Lab: DSwitch-Management)

DHCP is not enabled.  We will use a Static Range

Enter your IP subnet: (Reference Lab: 192.168.110.0/24)

Enter your Gateway: (Reference Lab: 192.168.110.1)

Enter your Static IP Pool: (Reference Lab: 192.168.110.120 - 192.168.110.139).     These are the IPs that will be assigned to Service Engines from the Management Network

Click Save

image 196

 

Configure NTP

Default NTP servers are already configured.  If you need to change them, do it here.  

Click on Infrastructure

Click on Administration

Click on Settings

Click on DNS/NTP

Click Edit (Pencil Icon)

Add your NTP Servers.

Click on Save

image 270

Configure Authentication

The Controller has a default self-signed certificate that must be replaced in order to successfully deploy the Supervisor Cluster.  You may create and upload your own certificate; details can be found here:  Assigning Controller Cert.  Instead of creating and uploading a signed certificate, we will create a self-signed cert whose names are the IP address of the Controller.

Click on Administration

Click on Settings

Click on Access Settings

Click on Edit (Pencil Icon)

image 271

Find the SSL/TLS Certificates box and delete the two existing certificates (system-default-portal-cert and system-default-portal-cert-EC256)

image 199

Click the Dropdown and then Click on Create Certificate

image 200

Enter the IP address of your Controller into Name: (Reference Lab IP: 192.168.110.32)

Enter the IP address of your Controller into Common Name: (Reference Lab IP: 192.168.110.32)

Enter the IP address of your Controller into Subject Alternative Name: (Reference Lab IP: 192.168.110.32)

Click Save

You may need to refresh your browser and accept the new certificate.   If you don't do this, you will notice that you cannot go to any new screens because your browser doesn't trust this new certificate.
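You can also confirm from a CLI machine that the Controller is now serving the new certificate.  A quick check with openssl (assuming openssl is available on your client; the IP is the reference lab Controller address):

# Print the subject, issuer and validity dates of the certificate the Controller presents
echo | openssl s_client -connect 192.168.110.32:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates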

image 272

Click check box "Allow Basic Authentication"  

Click Save

Note: Basic authentication is required in the 7.0.3 release to enable some of the newly added health checks.  Supervisor Deployment will fail without this enabled.  It was not required in earlier versions and the need for it will be removed in an upcoming release.

image 202

 

Configure Service Engine Group

Service Engine VMs are highly configurable.  You can decide things like the number of SEs, the threads per SE, Min and Max to deploy, as well as sizing and placement.  For our lab setup, we will configure a single SE to be placed on our Workload Cluster.

Click on Administration (This is the top level Menu at the Upper Left of the screen)

Click on Infrastructure

Click on Service Engine Group

Click on Edit (pencil icon) for the Default Group

image 273

Nothing changes on the basic settings tab

Click on Advanced tab

Choose the Cluster on which to place the Service engines

Set Buffer Service Engines to 0

Click Save

image 265

 

Configure Frontend (VIP) Network

This section is to tell the controller which portgroup and IP ranges to use for the Load Balancer VIPs.  You can use DHCP, however we are configuring a static range in this setup.

Click on Infrastructure

Click on Networks

Choose your Frontend Network (Reference Lab: k8s-Frontend Portgroup).  If you are using the Two Network configuration, choose the Workload Network and set up the VIP range within it.

Click Edit (pencil Icon)

image 274

Click Add Subnet

Enter IP Subnet    

This is the Network associated with the Frontend Portgroup  entered in CIDR notation (Reference lab: 192.168.220.0/23)

Click Add Static IP Pool

This is the range of IPs that can be allocated as Load Balancer VIPs from the Frontend Network (Reference Lab: 192.168.220.2 - 192.168.220.127)

Click Save

Click Save again

image 207

 

Configure Static Routes For Service Engines

When users connect through the Load Balancer Virtual IPs (VIPs), they go through an interface configured on one of the Service Engines (SEs).  That SE needs to know how to route traffic from the Frontend Network to the Workload Network.   You need to configure the route.  You are telling it what the Next Hop is when traffic comes into the Frontend interface with a Destination on the Workload Network.  Usually this is going to be the Gateway for the Frontend network.  That Gateway must be able to route the traffic to the Workload Network.  Note:  If you are using the Two Network Configuration, Workload and Frontend are on the same Network so you do not need to configure a Route and can skip down to Configure IPAM for the Load Balancer VIPs.

Click on Infrastructure

Click on Routing

Click on Create

image 275

Under Gateway Subnet, enter the Workload network in CIDR format (Reference Lab: 192.168.130.0/24)

Under Next Hop, enter the Gateway for the Frontend Network (Reference Lab: 192.168.220.1)

Click Save

image 209

Configure IPAM for the Load Balancer VIPs

Once we have configured the VIP network, we must create an IPAM profile that includes that network, then assign it to our default cloud.  This ensures that the Controller will use the Frontend network configuration and Routing we just configured.

Click on Infrastructure in Menu

Click on Templates

Click on IPAM/DNS Profiles

Click on Create 

Select IPAM Profile

image 276

Enter Name: Default-IPAM

Choose Type: Avi Vantage IPAM

Click Add Usable Network

image 277

For Cloud for Usable Network, Select Default Cloud

For Usable Network, Select your Frontend Network (Reference Lab: K8s-Frontend)

Click Save

image 214

Now to add this IPAM profile to the Default Cloud.

Click on Templates

Click on Infrastructure

Click on Clouds

Click on Default Cloud

Click Edit (Pencil Icon)

image 278

Select Default-IPAM from the IPAM Profile Drop List

Click Save

image 216

Enable Workload Management

Retrieve Certificate from NSX ALB Controller

From the NSX ALB Controller

Click on Menu (Upper Left Corner)

Click Templates 

Click Security

Click SSL/TLS Certificates

Click the export icon (it looks like a download icon) for the cert you created earlier (Reference Lab:  Self-Signed 192.168.110.32)

image 280

Click Copy to Clipboard  for the certificate.  Make sure you are copying the certificate and not the private key

Click Done

You will need to paste this into the Load Balancer Portion of the Workload Management enablement wizard
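If you prefer to pull the certificate from the command line rather than copying it out of the UI, something like the following works from any Linux client that can reach the Controller (a sketch; the IP is the reference lab Controller and the output file name is arbitrary):

# Save the Controller's certificate in PEM format so it can be pasted into the wizard
echo | openssl s_client -connect 192.168.110.32:443 -showcerts 2>/dev/null | openssl x509 -outform PEM > nsx-alb-controller.pem
cat nsx-alb-controller.pem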

Start Workload Management Enablement Wizard

Open vCenter UI

Click on Menu 

Select Workload Management

Select Get Started

image 218

Choose vCenter and vDS Networking

If you are not running multiple vCenters and do not have NSX-T installed in your lab, vCenter and vDS networking will be selected for you.  Otherwise:

Select the vCenter you configured in the NSX ALB setup

Select vSphere Distributed Switch (vDS) networking

Click Next

image 219

Choose vSphere Cluster on which to enable Workload Management

Compatible clusters have HA and Fully Automated DRS enabled, valid licenses on at least two hosts, a vSphere 7 vDS and enough capacity.

Click the cluster on which you want to enable the Supervisor Cluster

Click Next

image 220

Choose vSphere Storage Policy

This policy will be used to place the Supervisor cluster Control Plane node VMs.

Click Select a Policy 

Choose Storage Policy you created earlier

Click Next

image 221

Add Load Balancer Information

You will enter information for the NSX ALB you just configured

Enter a Name: (Reference Lab: NSX-ALB)    This can be anything; it's just used in internal metadata

Load Balancer Type:  Choose NSX Advanced Load Balancer

Enter Controller IP:  (Reference Lab: 192.168.110.32:443)    This is the IP:443 of your Controller VM

Enter the Username and Password for your NSX ALB Controller

Paste in the Server Certificate you copied earlier

Click Next

image 229

Management Network Configuration

You may choose DHCP or Static IP assignment for the Management Network.  DHCP is the quickest configuration, however we are going to use Static IP ranges in this setup.

Click on Network Mode, choose Static

Click on Network, choose your Management network: (Reference Lab: DSwitch-Management)

Click on Starting IP Address:  Enter first IP in a set of 5 contiguous IPs on your Management Network: (Lab Reference: 192.168.110.101)

Click on Subnet Mask: Enter the Subnet Mask for your Management Network: (Reference Lab: 255.255.255.0)

Click on Gateway: Enter the Gateway for your Management Network: (Reference Lab: 192.168.110.1)

Click on DNS Server:  Enter a DNS Server that is reachable from the Management Network: (Reference Lab: 192.168.110.10)

Click on DNS Search Domain: Enter a valid search domain: (Reference Lab: corp.tanzu)

Click on NTP Server:  Enter an NTP Server reachable from the Management Network: (Reference Lab 192.168.100.1)

Click Next

image 281

Workload Network Configuration

You may choose DHCP or Static IP assignment for the Workload Network.  DHCP is the quickest configuration, however we are going to use Static IP ranges in this setup.

Click on Network Mode, choose Static

Click on Portgroup for your Workload Network  (Reference Lab: K8s-Workload)

Click on IP Address Range:  Enter range of IPs for the Supervisor and TKG cluster nodes: (Lab Reference: 192.168.130.2 - 192.168.130.127)

Click on Subnet Mask: Enter the Subnet Mask for your Workload Network: (Reference Lab: 255.255.255.0)

Click on Gateway: Enter the Gateway for your Workload Network: (Reference Lab: 192.168.130.1)

Click on DNS Server:  Enter a DNS Server that is reachable from the Workload Network: (Reference Lab: 192.168.110.10)

Click on NTP Server:  Enter an NTP Server reachable from the Workload Network: (Reference Lab 192.168.100.1)

Click Next

image 225

Assign Content Library

The content library contains the images that can be used to configure VMs as Kubernetes nodes.

Click Add

image 226

Select your Content Library

Click Ok

image 227

Click Next

image 228

You can leave the Advanced Settings unchanged.

Click Finish

 

Verify Supervisor Created Successfully

You can monitor the cluster creation process and verify successful creation through the vCenter UI.

From Menu, Click on Workload Management

Click on Supervisor Clusters

Check Config Status.  During creation, reconciliation messages will appear here.

When the Supervisor Cluster has created successfully, the Config Status will be "Running"

Identify the Control Plane Node Address IP.  This is the Load Balancer VIP that users will use to access the cluster. (Reference Lab: 192.168.220.2)

image 230
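A quick way to confirm the Supervisor VIP is reachable from your client machine is to request the CLI plugin bundle it serves (the same URL is used in the download step later; -k skips certificate verification):

# Expect an HTTP 200 from the Supervisor Cluster VIP (reference lab VIP shown)
curl -k -s -o /dev/null -w "%{http_code}\n" https://192.168.220.2/wcp/plugin/linux-amd64/vsphere-plugin.zip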

Create and Configure Namespace

Users are given access to the Supervisor Cluster Kubernetes API through Namespaces created in vCenter.  A full explanation of Namespaces is beyond the scope of this quick start, but more information can be found here:  vSphere with Tanzu Namespaces.

From the vCenter UI menu, 

Click on Workload Management

Click on Namespaces

Click on Create Namespace

image 231

Choose the Cluster for your Namespace.

Enter Namespace Name: (Reference Lab: tkg)    Must be lower case

Click Create

image 232

Add Permissions

DevOps users are given access by being added to a Namespace (users within the vSphere Administrators group have access by default)

Click on Add Permissions 

Select Identity Source: (Reference Lab: tanzu.corp)

Select User or Group: (Reference Lab: tkgadmin)

Select Role: Choose Edit.  The Edit role allows users to create TKG clusters in the Namespace

Click OK

image 233

Add Storage

You will add a Storage Policy to the Namespace.  This causes the creation of a storage class in the Supervisor cluster.  Any TKG clusters or Persistent Volumes that are created by users must reference a storage class bound to the Namespace.

Click Add Storage

Choose the Storage Policy you created earlier

Click OK

image 234

Add VM Classes

Cluster Node VM resources are configured based on the VMClass selected in the specification.  Only VMClasses added to a Namespace can be used in a User's TKG Cluster specification.

Click on Add VM Class under the VM Service Pane

Check the box to select All

Click Ok

image 235

Your Namespace is configured and users can begin to use it.

Download and Install kubectl and vSphere Plugin

A landing page has been created with the appropriate versions of kubectl and the vSphere plugin for this Supervisor Cluster.  For more information on the vSphere Plugin, check here: vSphere with Tanzu CLI Tools

From your Namespace page,  in the Status pane, under "Link to CLI Tools",  Click on Open

From the new page, Select the Operating System for your client environment: (Reference Lab: Linux)

Right Click on Download CLI Plugin and Copy Link Address

image 236

Go to your Client CLI machine

Download the file vsphere-plugin.zip

On Linux this might be wget https://"yourclusterVIP"/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate

ubuntu@cli-vm:~$ wget https://192.168.220.2/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate
--2021-11-01 05:30:47--  https://192.168.220.2/wcp/plugin/linux-amd64/vsphere-plugin.zip
Connecting to 192.168.220.2:443... connected.
WARNING: cannot verify 192.168.220.2's certificate, issued by 'CN=CA,OU=VMware Engineering,O=VMware,L=Palo Alto,ST=California,C=US':
  Unable to locally verify the issuer's authority.
HTTP request sent, awaiting response... 200 OK
Length: 18952256 (18M) [text/plain]
Saving to: 'vsphere-plugin.zip.'

vsphere-plugin.zip                                          100%[=================================================================================================================================================>]  18.07M  18.7MB/s    in 1.0s

2021-11-01 05:30:48 (18.7 MB/s) - 'vsphere-plugin.zip.' saved [18952256/18952256]

ubuntu@cli-vm:~$

Unzip vsphere-plugin.zip into a working directory

You will see two executables: kubectl and kubectl-vsphere

ubuntu@cli-vm:~$ unzip vsphere-plugin.zip
Archive:  vsphere-plugin.zip
  inflating: bin/kubectl-vsphere
  inflating: bin/kubectl
ubuntu@cli-vm:~$

Update your system path to point to these binaries.  In Linux the easiest thing to do is copy them to /usr/local/bin
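On the reference lab Linux client that looks roughly like this (kubectl version --client and kubectl vsphere --help are just quick checks that the binaries are on your path):

sudo cp bin/kubectl bin/kubectl-vsphere /usr/local/bin/
kubectl version --client
kubectl vsphere --help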

You are now ready to log in

 

Login and Create TKG Cluster

Users must be authenticated to the Supervisor cluster by logging in.

From the CLI:

Enter kubectl vsphere login --server "Supervisor Cluster VIP" -u "user you added to namespace or VC Admin user" --insecure-skip-tls-verify

ubuntu@cli-vm:~$ kubectl vsphere login --server 192.168.220.2 -u administrator@vsphere.local --insecure-skip-tls-verify

Logged in successfully.

You have access to the following contexts:
   192.168.220.2
   tkg

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
ubuntu@cli-vm:~$

You see that you have access to the Namespace you created.

Now you must set the context to point to that namespace.

Enter kubectl config use-context "Your Namespace"

ubuntu@cli-vm:~$ kubectl config use-context tkg
Switched to context "tkg".
ubuntu@cli-vm:~$

Creating a TKG cluster is simply submitting a specification file that defines the cluster.  Here is a sample file to get you started.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster
  namespace: tkg
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small
      storageClass: k8s-policy
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: tkg-cluster-nodeool-1
      replicas: 2
      vmClass: best-effort-medium
      storageClass: k8s-policy
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
  settings:
    network:
      pods:
        cidrBlocks:
        - 100.96.0.0/11
      services:
        cidrBlocks:
        - 100.64.0.0/13

You must verify a couple of things before submitting this specification.

Verify that the vmClass you are using has been assigned to your Namespace

Enter kubectl get vmclassbinding

ubuntu@cli-vm:~$ kubectl get vmclassbinding
NAME                                     VIRTUALMACHINECLASS   AGE
best-effort-2xlarge                      best-effort-2xlarge   2m4s
best-effort-4xlarge                      best-effort-4xlarge   2m4s
best-effort-8xlarge                      best-effort-8xlarge   2m4s
best-effort-large                        best-effort-large     2m4s
best-effort-medium                       best-effort-medium    103s
best-effort-small                        best-effort-small     119s
best-effort-xlarge                       best-effort-xlarge    103s
best-effort-xsmall                       best-effort-xsmall    2m2s
guaranteed-2xlarge                       guaranteed-2xlarge    119s
guaranteed-4xlarge                       guaranteed-4xlarge    119s
ubuntu@cli-vm:~$

Verify that the storage class name is the same as the Storage Policy you assigned to the Namespace. 

Enter kubectl describe namespace "Your Namespace Name" |grep storageclass

ubuntu@cli-vm:~$ kubectl describe namespace tkg |grep storageclass
  k8s-policy.storageclass.storage.k8s.io/requests.storage  0     9223372036854775807
ubuntu@cli-vm:~$

Verify that the image version is available.  In the spec, the image version is v1.21.2---vmware.1-tkg.1.ee25d55

Enter kubectl get tkr

ubuntu@cli-vm:~$ kubectl get tkr |grep v1.21.2
v1.21.2---vmware.1-tkg.1.ee25d55    1.21.2+vmware.1-tkg.1.ee25d55    True    True         4d13h
ubuntu@cli-vm:~$

Now to create the cluster, just apply the specification file

Enter kubectl apply -f "filename.yaml"

ubuntu@cli-vm:~$ kubectl apply -f demo-applications/tkgs-cluster.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tkgs-cluster created
ubuntu@cli-vm:~$

Follow the creation process by describing the cluster

Enter kubectl describe tkc "cluster name from the spec file"

Look for Phase at the bottom to be "Running" to know you have successfully created the cluster.

  API Endpoints:
    Host:  192.168.220.5
    Port:  6443
  Conditions:
    Last Transition Time:  2021-11-01T11:38:30Z
    Status:                True
    Type:                  Ready
    Last Transition Time:  2021-11-01T11:30:35Z
    Status:                True
    Type:                  AddonsReady
    Last Transition Time:  2021-11-01T11:30:37Z
    Status:                True
    Type:                  ControlPlaneReady
    Last Transition Time:  2021-11-01T11:38:30Z
    Status:                True
    Type:                  NodePoolsReady
    Last Transition Time:  2021-11-01T11:38:30Z
    Message:               1/1 Control Plane Node(s) healthy. 2/2 Worker Node(s) healthy
    Status:                True
    Type:                  NodesHealthy
    Last Transition Time:  2021-11-01T11:30:07Z
    Status:                True
    Type:                  ProviderServiceAccountsReady
    Last Transition Time:  2021-11-01T11:30:07Z
    Status:                True
    Type:                  RoleBindingSynced
    Last Transition Time:  2021-11-01T11:30:43Z
    Status:                True
    Type:                  ServiceDiscoveryReady
    Last Transition Time:  2021-11-01T11:30:40Z
    Status:                True
    Type:                  StorageClassSynced
    Last Transition Time:  2021-11-01T11:30:10Z
    Status:                True
    Type:                  TanzuKubernetesReleaseCompatible
    Last Transition Time:  2021-10-27T21:31:29Z
    Reason:                NoUpdates
    Status:                False
    Type:                  UpdatesAvailable
  Phase:                   running
  Total Worker Replicas:   2
Events:
  Type    Reason        Age    From                                                                                             Message
  ----    ------        ----   ----                                                                                             -------
  Normal  PhaseChanged  6m56s  vmware-system-tkg/vmware-system-tkg-controller-manager/tanzukubernetescluster-status-controller  cluster changes from creating phase to running phase
ubuntu@cli-vm:~$
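You can also check the high-level status at any time with a simple get against the same short name used above; add -w to watch for changes.

kubectl get tkc tkgs-cluster
kubectl get tkc tkgs-cluster -w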

 

Login To New TKG Cluster and Set Context

Once your cluster is ready, you must log in to generate the token needed to access it.

Enter kubectl vsphere login --server "Supervisor Cluster VIP" -u administrator@vsphere.local --tanzu-kubernetes-cluster-name tkgs-cluster --tanzu-kubernetes-cluster-namespace "Namespace Name" --insecure-skip-tls-verify

ubuntu@cli-vm:~$ kubectl vsphere login --server 192.168.220.2 -u administrator@vsphere.local --tanzu-kubernetes-cluster-name tkgs-cluster --tanzu-kubernetes-cluster-namespace tkg --insecure-skip-tls-verify

Logged in successfully.

You have access to the following contexts:
   192.168.220.2
   tkg
   tkgs-cluster

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
ubuntu@cli-vm:~$

Change context to your TKG cluster

Enter kubectl config use-context tkgs-cluster

ubuntu@cli-vm:~$ kubectl config use-context tkgs-cluster
Switched to context "tkgs-cluster".
ubuntu@cli-vm:~$

Verify you are pointed at your TKG Cluster

Enter kubectl get pods -A and you will see the system pods of the TKG cluster.  The Antrea overlay networking pods are used only in the TKG cluster, not the Supervisor.

ubuntu@cli-vm:~$ kubectl get pods -A
NAMESPACE                      NAME                                                                        READY   STATUS    RESTARTS   AGE
kube-system                    antrea-agent-jcr4n                                                          2/2     Running   0          12m
kube-system                    antrea-agent-npjbv                                                          2/2     Running   0          12m
kube-system                    antrea-agent-zsmh8                                                          2/2     Running   0          18m
kube-system                    antrea-controller-54d7f86756-lr298                                          1/1     Running   0          18m
kube-system                    antrea-resource-init-684f959cb8-ts5qc                                       1/1     Running   0          18m
kube-system                    coredns-7f5b944f6b-kt7nk                                                    1/1     Running   0          19m
kube-system                    coredns-7f5b944f6b-z8nxn                                                    1/1     Running   0          16m
kube-system                    docker-registry-tkgs-cluster-control-plane-n746c                            1/1     Running   0          19m
kube-system                    docker-registry-tkgs-cluster-tkg-cluster-nodeool-1-q7q7f-564856f6d8-b49bl   1/1     Running   0          12m
kube-system                    docker-registry-tkgs-cluster-tkg-cluster-nodeool-1-q7q7f-564856f6d8-t9xlz   1/1     Running   0          12m
kube-system                    etcd-tkgs-cluster-control-plane-n746c                                       1/1     Running   0          19m
kube-system                    kube-apiserver-tkgs-cluster-control-plane-n746c                             1/1     Running   0          19m
kube-system                    kube-controller-manager-tkgs-cluster-control-plane-n746c                    1/1     Running   0          19m
kube-system                    kube-proxy-cs69m                                                            1/1     Running   0          12m
kube-system                    kube-proxy-fz6kx                                                            1/1     Running   0          12m
kube-system                    kube-proxy-phznh                                                            1/1     Running   0          19m
kube-system                    kube-scheduler-tkgs-cluster-control-plane-n746c                             1/1     Running   0          19m
kube-system                    metrics-server-5bc9f76d99-8s9dq                                             1/1     Running   0          18m
vmware-system-auth             guest-cluster-auth-svc-84246                                                1/1     Running   0          16m
vmware-system-cloud-provider   guest-cluster-cloud-provider-c84648fbd-lzsvj                                1/1     Running   0          19m
vmware-system-csi              vsphere-csi-controller-549f4bf678-n4wfl                                     6/6     Running   0          19m
vmware-system-csi              vsphere-csi-node-kd5v8                                                      3/3     Running   0          19m
vmware-system-csi              vsphere-csi-node-twzhj                                                      3/3     Running   0          12m
vmware-system-csi              vsphere-csi-node-vrv2w                                                      3/3     Running   0          12m
ubuntu@cli-vm:~$

If you have gotten this far, your TKG cluster is ready for developers to deploy applications.
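As a quick smoke test of the new cluster and the NSX ALB integration, you can deploy a simple web server and expose it with a Kubernetes LoadBalancer Service; the Service should receive a VIP from the Frontend range you configured.  This is a sketch using standard kubectl commands.  Depending on your TKr version, pods may be blocked by the default pod security policy until you create a role binding; the binding shown below is the one documented for vSphere with Tanzu and is appropriate for lab environments only.

# Allow authenticated users to run workloads under the built-in privileged PSP (lab use only)
kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

# Deploy a test web server and expose it through the NSX ALB
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80

# EXTERNAL-IP should come from the Frontend VIP range (192.168.220.x in the reference lab)
kubectl get service nginx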
