Holo-Tanzu-vSphere-Pods

Module 3 - vSphere Pods

This module shows how to run vSphere Pods on a vSphere Supervisor Cluster that is part of a Cloud Foundation domain.

A vSphere Pod is a special type of virtual machine with a small footprint that runs one or more Linux containers. Each vSphere Pod is sized precisely for the workload that it accommodates and has explicit resource reservations for that workload. vSphere Pods are supported with Supervisor Clusters configured with NSX-T Data Center as the networking stack.

While vSphere Pods are unique to vSphere, they are deployed and managed just like Kubernetes pods on any upstream conformant Kubernetes cluster. 

A note on Kubernetes terminology: in the vSphere Client, the Kubernetes features are found under ‘Workload Management’. Enabling Kubernetes is referred to as enabling the ‘Workload Control Plane (WCP)’, and once Kubernetes has been enabled, the vSphere cluster is referred to as a ‘Supervisor Cluster’.

Step 1: Authenticate to the Kubernetes Control Plane


To access the Kubernetes instance running on the Supervisor Cluster, developers use the kubectl binary. 

vSphere with Tanzu provides a special version of the kubectl binary that includes a vCenter SSO plug-in.  Developers can download the kubectl binary, along with the SSO plug-in, from the “Kubernetes CLI Tools Download” link on the vSphere Namespace summary page.

Note that in this lab, the kubectl binary is already included on the ‘tanzu-ws’ VM deployed during the lab setup in module 1.

To connect to the Kubernetes Control Plane, you first need to identify the IP address that has been assigned to it.  To do this, connect to the vSphere Client and navigate to the Workload Management -> Clusters view.

From the Holo Console:

  • Open the vSphere Client
  • Navigate to Home -> Workload Management
  • Click the Clusters tab

The Kubernetes ‘Control Plane Node IP Address’ is shown for ‘mgmt-cluster-01’.  In this example, the IP is 10.80.0.2 (your lab may be different).  To connect to the Kubernetes control plane, we use PuTTY to SSH to the developer workstation:

  • Click the PuTTY icon in the windows taskbar
  • Click Tanzu WS
  • Click Load
  • Click Open
    • Login: sam
    • Password: VMware123!


In the PuTTY window, run the ‘kubectl vsphere login …’ command to log into the Kubernetes Control Plane listening on the IP address 10.80.0.2.  Authenticate as the user ‘sam@vsphere.local’. This account is a member of the ‘devteam’ group, which has been assigned edit privileges to the ‘ns01’ vSphere namespace.

# kubectl vsphere login --vsphere-username sam@vsphere.local --server 10.80.0.2 
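
If the login fails with a certificate warning (common in nested labs that use self-signed certificates), the SSO plug-in also accepts an option to skip TLS verification. The command below is an optional variation and may not be needed in your lab:

# kubectl vsphere login --vsphere-username sam@vsphere.local --server 10.80.0.2 --insecure-skip-tls-verify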

Next, set the context to the ‘ns01’ vSphere Namespace by running the ‘kubectl config use-context ns01’ command.

# kubectl config use-context ns01
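
To confirm which contexts are available and which one is currently in use, you can optionally run the ‘kubectl config get-contexts’ command (the exact list shown will vary by lab):

# kubectl config get-contexts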

The developer ‘sam@vsphere.local’ has successfully authenticated and set his context to the ‘ns01’ vSphere Namespace. Any Kubernetes objects deployed by the developer will be created in the ‘ns01’ namespace.

Step 1 Summary

In Step 1 we saw how developers access the Kubernetes instance running on the Supervisor Cluster. Developers must first download the Kubernetes CLI Tools, which include a version of the ‘kubectl’ command with an SSO authentication plug-in (in the Holodeck Toolkit, this is done as part of the lab setup in module 1). They then use the ‘kubectl’ command to authenticate and set their context to the vSphere Namespace.


Step 2: Deploy vSphere Pods using a Kubernetes Deployment

In this step we deploy a vSphere Pod consisting of a single Nginx container.  Along with the vSphere Pod, we will deploy a Kubernetes ‘Service’ of type LoadBalancer.  These components are defined in a YAML manifest and deployed as part of a Kubernetes ‘Deployment’.

The “demo” directory and related contents were pre-staged on the ‘tanzu-ws’ VM that was deployed as part of the lab setup in module 1.

Change directory to ‘/home/sam/demo’:

# cd /home/sam/demo

Review the contents of the “nginx-demo.yaml” file.

# cat nginx-demo.yaml

The "nginx-demo.yaml" manifest contains the resource definition for a Kubernetes Deployment and associated Kubernetes Service.  The deployment that consists of one pod containing a single Nginx container.  The accompanying service is defined as type LoadBalancer and is set to listen on port 80.  Take a minute to familiarize yourself with the contents of the YAML manifest.

Deploy the YAML manifest by running the ‘kubectl apply -f nginx-demo.yaml’ command:

# kubectl apply -f nginx-demo.yaml

The output shows the creation of the deployment and service. Wait approximately 3 minutes to allow the image to be pulled and the container to start. Then run the following commands to view details about the deployment:

# kubectl get deployments
# kubectl get replicasets
# kubectl get pods
# kubectl get services

Re-run the ‘kubectl get pods’ command until you see the pod enter a Running state.  In a nested lab this can take several minutes.
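
As an optional alternative to re-running the command manually, kubectl can watch the pods and stream status changes until you stop it with Ctrl+C:

# kubectl get pods --watch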

Details about the vSphere Pod are also available from the vSphere Client.  Return to the vSphere Client:

  • Navigate to Menu -> Inventory
  • Select ns01
  • Select Summary

Observe that the vSphere Pod VM is shown in the vCenter inventory under the ns01 namespace.

From the Compute tab, view details about the Kubernetes Deployment:

  • Click the Compute tab
  • Click vSphere Pods
  • Click Deployments
  • Click Replica Sets

Observe that while the Kubernetes Pod and Service are deployed and managed by the developer, the VCF administrator also has visibility into the objects running on the vSphere cluster.

Step 2 Summary

In Step 2, we showed how developers use familiar tools (e.g., kubectl with YAML manifests) to connect to and deploy container-based workloads on a Supervisor Cluster.  We also showed how the vSphere administrator has visibility into the workloads deployed by developers.  In this way, vSphere with Tanzu helps bring administrators and developers together by providing self-service access to Kubernetes infrastructure that looks and feels like Kubernetes to developers, but at the same time looks and feels like vSphere to the administrator.

 

Step 3: Scale a Kubernetes Deployment


In the previous step a single vSphere Pod was deployed as part of a Kubernetes deployment.  We will now scale the deployment by adding two additional vSphere pods, for a total of three.  This will provide redundancy for the Nginx webserver.

Return to the PuTTY SSH session (if the PuTTY SSH session timed out you may need to login again):

  • Click the PuTTY icon on the Windows taskbar

In the PuTTY window run the following command to increase the number of vSphere Pods (referred to in Kubernetes as ‘replicas’) from 1 to 3:

# kubectl scale deployment demo1 --replicas=3

Wait 30 seconds and then use the ‘kubectl get …’ commands to query the deployments, replicasets, and pods.  Note that there are now three pods running.

# kubectl get deployments
# kubectl get replicasets
# kubectl get pods
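
As an optional check, you can also ask kubectl to wait until the rollout completes; the command returns once all three replicas are available:

# kubectl rollout status deployment demo1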

Wait for all three pods to enter a running state before continuing.  It may take two or three minutes.

Return to the vSphere client to view the details of the new pods.

  • Click the “vSphere” browser tab to return to the vSphere Client:

Observe that all three pods are shown in the vSphere client.  Click through the vSphere Pods, Deployments, and Replica Set tabs to view details about the Kubernetes deployment.

Step 3 Summary

In Step 3 we showed how Kubernetes makes it easy for developers to scale their applications by simply increasing the number of replicas in the Kubernetes deployment.  We also showed how the vSphere administrator can view details about the new vSphere Pods as they are deployed.

Step 4: Kubernetes Integration with NSX

The YAML manifest used to deploy the Kubernetes deployment included a definition for a Kubernetes service resource of type LoadBalancer. In Kubernetes, the service resource enables external network access to the containers running in the vSphere Pods. In Cloud Foundation, this is done through integration with an NSX load balancer. 

The Kubernetes NSX integration comes through the NSX Container Plug-in (NCP).  The NCP runs as a container on the Supervisor Cluster.  The Kubernetes control plane sends requests for networking-related services to the NCP, which in turn invokes the necessary NSX APIs to create the desired components and reports the status back to the control plane.

 

In this step we will look at the Kubernetes integration with the NSX Load Balancer. 

Return to the PuTTY SSH session (if the PuTTY SSH session timed out you may need to login again):

  • Click the PuTTY icon on the Windows taskbar

In the PuTTY window run the ‘kubectl get services’ command to view details about the deployed services.

# kubectl get services

In the list of services, note the EXTERNAL IP assigned to the demo1 service (in this example, 10.80.0.4). This is the IP address assigned to the NSX load balancer that is hosting the server pool for the vSphere Pods.  We use this address to access the Nginx web servers running inside the vSphere Pods.
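
For additional detail on the service, including the load balancer ingress address and the ports and endpoints behind it, you can optionally run the ‘kubectl describe’ command (output will vary by lab):

# kubectl describe service demo1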

Return to the Chrome browser.

  • Click + to open a new browser tab
  • Enter the URL http://10.80.0.4

Confirm that you can connect to the web server.

Having confirmed that you can successfully connect to the Nginx webserver, let’s look at the Load Balancer configuration in NSX. 

Open a new browser window and connect to the NSX Manager:

  • Click + to open a new browser tab
  • Click Managed Bookmarks
  • Click Mgmt Domain
  • Click VMware NSX
  • If prompted by Chrome about the connection not being private:
    • Click Advanced
    • Click Proceed to nsx-mgmt.vcf.sddc.lab (unsafe)
  • Login:
    • Username: admin
    • Password:  VMware123!VMware123!
  • Click SKIP if you get the Welcome to NSX-T Data Center screen

From the NSX Home page:

  • Enter the Load Balancer IP address 10.80.0.4 in the search field and press enter:

In the search results, click the ‘Virtual Servers’ tab.

We see the virtual server that was configured for the Kubernetes Service resource defined in the YAML manifest.  To view additional details about the virtual server, click the name hyperlink (domain-c8-<id>-ns01-demo1-80).

We see the IP address for the virtual server, 10.80.0.4, and that it is configured as a layer 4 load balancer listening on port 80.  To view the members of the server pool, click the ‘Server Pool’ link (domain-c8-<id>-ns01-demo1-80).

The three vSphere Pods are listed as members of the server pool.  This confirms that the NSX load balancer is balancing network connections across the three vSphere Pods in our Kubernetes Deployment.
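
The same membership can be confirmed from the Kubernetes side: the service’s endpoints list the pod IP addresses behind it, which should correspond to the NSX server pool members (an optional check):

# kubectl get endpoints demo1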

  • Click CLOSE to close the Server Pool Members window

Return to the PuTTY SSH session:

  • Click the PuTTY icon on the Windows taskbar (if the PuTTY SSH session timed out you may need to login again)

Re-run the ‘kubectl scale deployment demo1 --replicas=##’ command, this time to scale the deployment from three replicas to ten.

# kubectl scale deployment demo1 --replicas=10

Wait 30 seconds and then query the deployments, replicasets, and pods.  Note that there are now ten vSphere Pods running.

# kubectl get deployments
# kubectl get replicasets
# kubectl get pods

Wait for all ten pods to enter a running state before continuing.  It may take two or three minutes.

Return to the NSX Manager UI:

  • Click the NSX browser tab
  • Click REFRESH
  • Click the Server Pool hyperlink

Verify there are now ten Pod VMs listed as members of the Server Pool, indicating that the load balancer is distributing connections across all ten vSphere Pods.

  • Click Close to close the Server Pool Members window.
  • Click the PuTTY icon to return to the PuTTY window

Run the following command to scale the deployment back down to 2 pods:

# kubectl scale deployment demo1 --replicas=2

Wait 30 seconds and then query the deployments, replicasets, and pods.  Note that there are now two vSphere Pods running.

# kubectl get deployments
# kubectl get replicasets
# kubectl get pods

Wait for the pods to enter a running state before continuing.  It may take two or three minutes.

Return to the NSX-T Manager browser window:

  • Click refresh at the bottom of the page.
  • Click the Server Pool link.

Confirm the Server Pool Members list has been updated to show the two remaining Pod VMs.

  • Click Close

Step 4 Summary

In Step 4 we looked at the Kubernetes integration with VMware NSX that is provided by the NSX Container Plug-in (NCP).  NCP makes it easy for developers to configure network-related services (such as load balancers) for their container-based applications. The developer simply defines the desired object in a YAML manifest.  When the manifest is applied, Kubernetes passes the desired state to the NCP to instantiate the required configuration inside NSX.  We observed that as changes are made over time (such as scaling a deployment up and down), the NCP correspondingly updates the NSX configuration as needed.

Step 5: Delete a Kubernetes Deployment

In this step we will remove the Kubernetes deployment and associated service.

  • Click the PuTTY icon on the Windows taskbar (if the PuTTY SSH session timed out you may need to login again)

Run the commands below to delete the Kubernetes deployment and related service.

# cd /home/sam/demo/
# kubectl delete -f nginx-demo.yaml
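
Deleting with the manifest removes every object that the manifest created. The same result could be achieved by deleting the objects individually; the commands below are shown only as an illustration and are not required if you used the manifest-based approach above:

# kubectl delete deployment demo1
# kubectl delete service demo1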

Wait ~30 seconds and then run the following commands to verify the Kubernetes objects are deleted:

# kubectl get deployments
# kubectl get replicasets
# kubectl get pods
# kubectl get services
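
As an optional convenience, a single command lists the remaining workload objects (deployments, replicasets, pods, and services) in the namespace:

# kubectl get all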

Return to the vSphere Client:

  • Click the vSphere client browser tab

Confirm that the vSphere Pods are no longer listed in the vCenter inventory.

 

Step 5 Summary

In this step we removed the Kubernetes deployment and service and confirmed that the vSphere Pods were removed from the vCenter inventory. 

Module Summary

In this module, we saw how vSphere with Tanzu enables developers to run containers directly on vSphere.  We also saw how developers use kubectl to authenticate and apply YAML manifests to deploy pods and related services.  We highlighted how the vSphere Pods created by the developer are visible to both the developer (from inside Kubernetes) and the vSphere administrator (from inside the vSphere Client).  We also looked at the NSX integration provided by the NCP and how it makes it easy for developers to expose network-related services through NSX.

Module Key takeaways

  • vSphere with Tanzu introduces a new vCenter construct called a vSphere Pod, which is the equivalent of a Kubernetes pod.
  • vSphere with Tanzu’s tight integration with VMware NSX, provided by the NSX Container Plug-in (NCP), makes it easy to deploy networking services for Kubernetes-based workloads.
  • vSphere with Tanzu helps bring developers and administrators together by providing Kubernetes infrastructure that looks and feels like Kubernetes to developers, but at the same time looks and feels like vSphere to the vSphere administrator.   
