With the launch of vSphere 7 U2, VMware introduces fully integrated and supported Kubernetes load balancing in vSphere with Tanzu. In vSphere 7 U1, VMware introduced support for vSphere Distributed Switch (vDS) based networking when deploying vSphere with Tanzu. Customers could get up and running quickly without deploying an entire Software Defined Networking (SDN) stack through NSX. Without NSX, however, there was a requirement to deploy an external load balancer based on an open-source HAProxy appliance created by VMware. That solution, while advantageous for PoC and lab environments, has limitations for production deployments. NSX Advanced Load Balancer Essentials in vSphere with Tanzu provides a production-ready load balancer. It should not be confused with the full NSX SDN: it is a production-class load balancer based on technology VMware acquired with Avi Networks. It does not require NSX and deploys as part of the vDS-based vSphere with Tanzu solution.
vSphere with Tanzu load balancer support encompasses access to the Supervisor Cluster, to Tanzu Kubernetes Grid (TKG) clusters, and to Kubernetes Services of type LoadBalancer deployed in the TKG clusters. Users are allocated a single Virtual IP (VIP) to access the Supervisor Cluster Kubernetes API, with traffic spread across the three Kubernetes control plane nodes that make up the Supervisor Cluster. As DevOps users create TKG clusters, new VIPs are allocated, and those VIPs likewise load balance traffic across the control plane nodes of the TKG clusters. Finally, as users deploy applications onto their TKG clusters, they may create Kubernetes Services of type LoadBalancer to enable access into the cluster. Each such Service is also allocated a VIP, allowing applications such as web servers to be reached by end users.
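As a minimal sketch, a Service of type LoadBalancer might look like the following manifest; the name, labels, and ports here are illustrative, not taken from any specific environment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # illustrative Service name
spec:
  type: LoadBalancer          # requests a VIP from the configured load balancer
  selector:
    app: web-frontend         # illustrative pod label to route traffic to
  ports:
    - port: 80                # port exposed on the allocated VIP
      targetPort: 8080        # illustrative container port behind the Service
```

Once the Service is applied (for example with `kubectl apply -f`), the allocated VIP appears in the Service's EXTERNAL-IP column in `kubectl get svc` output.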
The NSX Advanced Load Balancer provides dynamically scaling load balancing endpoints for Kubernetes clusters. It is separated into a Control Plane, which is the single point of management and control for the load balancing system, and a data plane handled by a scalable set of Service Engines that receive and execute instructions from the Controller over the Management Network. Users communicate over a Frontend Network with a Virtual Service defined in the Controller; a Virtual Service is a Virtual IP (VIP) and a port that together define the endpoint. A Virtual Service is created in the Controller for each Supervisor Cluster, each TKG cluster, and each Kubernetes LoadBalancer Service. Requests to the Virtual Service are received by a Service Engine, validated, and forwarded to pool members (the cluster control plane nodes) on the cluster network, which is the Workload Network in vSphere with Tanzu. There may be a single Workload Network, or admins can provide Namespace-level isolation by creating several.
If you are using vSphere with Tanzu on the vSphere Distributed Switch (vDS), you must configure your own load balancer before enabling the Supervisor Cluster on your vSphere cluster. Currently, you can choose between the HAProxy appliance and the fully supported NSX Advanced Load Balancer Essentials. The following video walks through deploying and configuring the NSX Advanced Load Balancer and then enabling the Supervisor Cluster to use it.
For more information, check out the NSX Advanced Load Balancer Essentials Setup Video: