Tanzu for Kubernetes Operations on VxRail
Solution Overview
In this era of modern apps built on microservices with multi-layer designs and distributed architectures, the operational model needs to evolve. With Kubernetes as the de facto standard for modern applications, customers want a consistent Kubernetes runtime across their deployments, no matter where they reside. They want consistent operations and management and fine-grained visibility into their Kubernetes and application frameworks. They want environments that are secure and easy to deploy, manage, and upgrade, while keeping the application sprawl of microservices under control.
Cloud native development is about how applications are created and deployed, not where. The model is built upon the concepts of DevOps, continuous development, integration, testing, and delivery of production-ready code, microservices, and containers. The cloud native application model suits many workloads, and an increasing number of companies are “born in the cloud” or migrating to the cloud. Cloud native is an approach to building and running applications that takes advantage of the cloud computing delivery model. When companies build and operate applications using a cloud native architecture, they bring new ideas to market and respond to customer demands faster.
While the cloud native development model is attractive to many organizations, it does not fit all application delivery models. As companies with cloud native applications grow across regions and borders, they may encounter regulatory standards or policies that require them to run their applications in an on-premises private cloud for closer governance and control.
This version of the document serves as the first release and the first building block of a multi-cloud solution. Future reference architectures will build upon this foundational on-premises architecture and expand it to hybrid and multi-cloud models. This first release of the reference architecture addresses the operational complexities and lifecycle management challenges that a modern Kubernetes application environment presents. With a focus on complete cluster lifecycle management and fine-grained observability, Tanzu for Kubernetes Operations simplifies operating Kubernetes for multi-cloud deployments by centralizing management and governance for clusters and teams across on-premises, public cloud, and edge environments. Tanzu for Kubernetes Operations delivers an open source-aligned Kubernetes distribution with consistent operations and management to support infrastructure and application modernization.
For end-to-end connectivity, load balancing, and ingress, NSX Advanced Load Balancer provides a robust networking stack that can support global DNS services as Kubernetes deployments grow from an on-premises private cloud to a multi-cloud environment. For security, NSX Advanced Load Balancer features an Intelligent Web Application Firewall (iWAF) that covers OWASP CRS protection, support for compliance regulations such as PCI DSS, HIPAA, and GDPR, and signature-based detection. It deploys a positive security model and application learning to prevent web application attacks. Additionally, built-in analytics provide actionable insights on performance, end-user interactions, and security events in a single dashboard (Avi App Insights) with end-to-end visibility.
Tanzu for Kubernetes Operations on VxRail is a future-proof solution that simplifies the transformation journey to modern applications for most customers. Whether it is a move from legacy to cloud native applications, repatriating cloud native applications to an on-premises private cloud, or architecting distributed applications across a multi-cloud environment, Tanzu for Kubernetes Operations on VxRail is the all-encompassing solution.
Audience
This white paper is intended for architects, engineers, consultants, and IT administrators who design and implement modern application environments on-premises or in the cloud. Readers with a strong understanding of technologies such as VMware NSX Advanced Load Balancer, vSphere with Tanzu, VMware vSAN, and cloud native concepts will benefit most from the content in this paper.
Architecture Overview
At a high level, this solution has two components. The first is the on-premises infrastructure and software that supports modern application development; the second is the SaaS services used to manage, monitor, and observe the on-premises deployment. In addition to the on-premises deployment, SaaS services such as Tanzu Mission Control and Tanzu Observability can also be used to manage and observe existing Tanzu Kubernetes Grid or other Kubernetes clusters on most cloud providers. A four-node Dell VxRail V570 cluster makes up the infrastructure foundation of this on-premises modern application solution. vSphere with Tanzu provides the capability to run Kubernetes workloads natively on the ESXi hypervisor and to create upstream-compliant Kubernetes clusters on demand. The NSX Advanced Load Balancer provides dynamically scaling load balancing endpoints for Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service. Along with its Avi Kubernetes Operator, NSX Advanced Load Balancer provides L4 and L7 ingress and load balancing to the deployed workloads. VMware vCenter Server, along with VxRail Manager, makes up the local infrastructure management domain. Harbor is used as the local registry and can be installed manually or via Tanzu Mission Control. In addition to Tanzu Observability, local monitoring and diagnostic tools such as Prometheus, Grafana, and Fluent Bit are installed.
Figure 1: Architecture Overview
VMware vSAN provides enterprise-class hyperconverged storage that is consistent across deployments and integrates fully with VMware Tanzu. From a storage perspective, vSAN future-proofs the solution through its integration with object storage offerings such as Dell ObjectScale.
Note: This document assumes that VxRail is already deployed and configured in your environment. Hence, setup steps are not discussed.
Key Components
This solution is built upon a solid foundation with Dell VxRail V570. When configured with VMware vSAN and NSX Advanced Load Balancer, it provides an enterprise-grade software-defined data center architecture that is agile, easy to manage, and secure. vSphere with Tanzu enhances these underlying qualities and delivers a developer-ready, modern application platform for upstream Kubernetes clusters. From a manageability perspective, Tanzu Mission Control and Tanzu Observability provide a solution that is future-proof and extensible from on-premises to the cloud. A description of the key components follows.
NSX Advanced Load Balancer
VMware NSX Advanced Load Balancer provides multi-cloud load balancing, web application firewall and application analytics across on-premises data centers and any cloud. The software-defined platform delivers applications consistently across bare metal servers, virtual machines, and containers to ensure a fast, scalable, and secure application experience. Learn more.
Tanzu Mission Control
VMware Tanzu Mission Control is a centralized management hub, with a robust policy engine, that simplifies multi-cloud and multi-cluster Kubernetes management. Whether you are new to Kubernetes or quite experienced, Tanzu Mission Control helps platform operators reduce complexity, increase consistency, and offer a better developer experience. Learn more.
Tanzu Observability
VMware Tanzu Observability by Wavefront is an observability platform specifically designed for enterprises needing monitoring, observability, and analytics for their cloud-native applications and environments. DevOps, SRE and developer teams use Tanzu Observability to proactively alert on, rapidly troubleshoot and optimize performance of their modern applications running on the enterprise multi-cloud. Learn more.
Dell VxRail
Whether accelerating data center modernization, deploying a hybrid cloud, or creating a developer-ready Kubernetes platform, VxRail delivers a turnkey experience that enables customers to continuously innovate. The only hyperconverged system jointly engineered by Dell Technologies and VMware, it is fully integrated, pre-configured, and pre-tested, automating lifecycle management and simplifying operations. Powered by VMware vSAN or VMware Cloud Foundation, VxRail transforms HCI networking and simplifies VMware cloud adoption, while meeting any HCI use case, including support for the most demanding workloads and applications. Learn more.
Tanzu Standard
Tanzu Standard gives enterprises what they need to build a consistent Kubernetes infrastructure across multiple clouds, with governance and efficiency in place. It offers a full Kubernetes runtime distribution that can be deployed on-premises, on public clouds, and at the edge, while giving platform operators a global control plane with which they can manage Tanzu clusters, as well as any other conformant Kubernetes clusters, consistently, securely, and efficiently at scale. Learn more.
Hardware Component Specifications
Table 1: Key hardware components.
Hardware Specifications | | |
Component | Description | Quantity |
Platform | Dell VxRail V570 version 7.0.241 | 4 per cluster |
Processor | Intel(R) Xeon(R) Platinum 8176 CPU @ 2.10GHz | 2 per node |
Memory | Samsung DRAM DDR4 2666 MHz (65536 MB) | 8 per node |
Disks (vSAN) | Dell 1.92 TB SSD (ST2000NX0463) | 6 per node |
Network Adapter | Intel(R) Ethernet 10G 4P X550 rNDC | 4-port integrated |
TOR Switch | Dell PowerSwitch S5248 | 2 |
Software Component Specifications
Table 2: Key software components
Software Specifications | |
Component | Version |
VMware ESXi | 7.0.2, build 18426014 |
VMware vCenter Server | 7.0.2.00500 |
NSX Advanced Load Balancer | 20.1.7 Enterprise |
vSphere with Tanzu | Tanzu Standard Runtime |
Cert-manager | 1.5.3+vmware.2-tkg.1 |
Contour | 1.18.2+vmware.1-tkg.1 |
Grafana | 7.5.7+vmware.2-tkg.1 |
Prometheus | 2.27.0+vmware.2-tkg.1 |
Harbor | 2.3.3+vmware.1-tkg.1 |
Fluent-bit | 1.7.5+vmware.1-tkg.1 |
Tanzu Mission Control | SaaS |
Tanzu Observability | SaaS |
Best Practices and Recommendations
– Configure NSX Advanced Load Balancer on the “Default-Cloud” instance only. For vSphere, custom cloud configurations are not supported.
– Place the Supervisor Cluster, workload cluster, and VIP networks on separate port groups or network segments. This minimum separation helps isolate traffic types and enables flexible firewall and security policies.
– Use the NSX Advanced Load Balancer IPAM service to assign VIPs and Service Engine IP addresses. DHCP is supported; IPAM, however, gives administrators more control over IP management.
– Place different types of Tanzu Kubernetes workload clusters on separate networks. This provides isolation for the different teams or groups using the workload clusters and enables finer-grained security policy implementation. In this configuration, different VIPs can use the Avi Kubernetes Operator for L7 services. With Tanzu Kubernetes Grid Service, however, the Avi Kubernetes Operator needs to be installed on each workload cluster.
– For production environments with Tanzu Mission Control, use the “Large” VM size for the Supervisor Cluster.
– Depending on your infrastructure requirements, use different storage tags and policies for different storage tiers. This provides the flexibility to assign storage per workload requirements.
– In vSphere with Tanzu, tenant separation and isolation are done at the namespace level. Tenant separation can be accomplished via permission assignments for each namespace.
Solution Configuration
Prerequisites
Fulfill the following prerequisites before starting the VxRail, vSphere with Tanzu, and NSX Advanced Load Balancer deployment:
- If you plan to use an external identity source along with vCenter Server Single Sign-On, ensure that vCenter is configured with the appropriate identity source and settings. For more information, see Identity Sources for vCenter Server.
- Ensure that the DNS server is available and that appropriate DNS records are configured for all the vSphere and VxRail components, including vCenter, ESXi hosts, NSX Advanced Load Balancer, and VxRail Manager and nodes.
- Configure layer 3 networking and appropriate routing for communication between the various infrastructure components and network segments per your network requirements.
- Confirm that VxRail has been deployed and is functional.
Networking Overview
VxRail deployment creates the vSphere Distributed Switch with the minimum required port groups, such as vCenter and VxRail management. Additional port groups for vSphere with Tanzu were created for NSX Advanced Load Balancer and Supervisor node management, and for the front-end and workload networks. Placing these networks on separate port groups provides isolation and enables application of granular security/firewall policies. Figure 2 depicts the high-level logical diagram of the network stack configured in the lab.
Figure 2: Logical Network Architecture
Table 3 lists the additional required networks/port groups with a brief description.
Table 3: Tanzu Kubernetes networks
Network | Description |
NSX ALB Management | NSX Advanced Load Balancer controllers and Service Engines connect to this network |
Supervisor Management | Tanzu Kubernetes Grid Service Supervisor nodes are placed on this network |
Front End | The network users connect to; holds the virtual services and VIPs |
Workload | Tanzu Kubernetes Grid workload cluster control plane and worker nodes connect here |
VMware NSX Advanced Load Balancer
NSX Advanced Load Balancer, formerly known as Avi, comes in two editions: Essentials and Enterprise. L7 load balancing with NSX Advanced Load Balancer requires the Enterprise edition, which was used for this reference architecture. The NSX Advanced Load Balancer provides dynamically scaling load balancing endpoints for Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service. Once you have configured the Controller, it automatically provisions load balancing endpoints for you. The Controller creates a virtual service and deploys Service Engine VMs to host that service. This virtual service provides load balancing for the Kubernetes control plane. The key components of NSX Advanced Load Balancer are explained below.
NSX Advanced Load Balancer Controller: As the name suggests, the NSX Advanced Load Balancer Controller controls and manages the provisioning of Service Engines, coordinates resources across Service Engines, and aggregates Service Engine metrics and logging. It interacts with vCenter Server to automate load balancing for Kubernetes clusters. It is deployed as an OVA and provides a web interface and CLI.
NSX Advanced Load Balancer Service Engine: The Service Engine is a data plane component that runs one or more virtual services as a virtual machine. Service Engines are provisioned and controlled by the Controller. Each Service Engine has two interfaces: one connects to the NSX Advanced Load Balancer Controller management network, and the second connects to the front-end network from which virtual services are accessed. For Service Engine sizing guidance, see Sizing Service Engines.
Avi Kubernetes Operator: The Avi Kubernetes Operator runs as a Kubernetes pod in the Supervisor cluster and workload clusters to provide ingress and load balancing.
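To illustrate how this works, the following minimal sketch shows a Kubernetes Service of type LoadBalancer. When such a Service is created on a cluster, the Avi Kubernetes Operator requests a VIP from the Controller and programs a Service Engine to serve it. The service name, namespace, and ports shown are hypothetical.
Sample yaml:
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # hypothetical application service
  namespace: demo           # hypothetical namespace
spec:
  type: LoadBalancer        # fulfilled by AKO via an Avi virtual service and VIP
  selector:
    app: web-frontend
  ports:
    - port: 80              # port exposed on the VIP on the front-end network
      targetPort: 8080      # container port of the backing pods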
Steps to configure NSX Advanced Load Balancer
The NSX Advanced Load Balancer Controller is deployed as a VM using the OVA, which can be downloaded from https://customerconnect.vmware.com/ using an account that has access to software downloads. Once the OVA is downloaded, import it into vCenter. For this reference architecture, version 20.1.7 was used with an Enterprise license. The process of deploying and configuring NSX Advanced Load Balancer follows. Screenshots are used where necessary to emphasize specific configurations.
Prerequisites:
- The VxRail cluster is already installed and configured.
- vSphere Distributed Switch port groups for the required networks are created as described in the Networking Overview section.
- A resource pool is created in vCenter that will hold the NSX Advanced Load Balancer virtual machines.
Controller Deployment:
- Import the controller OVA and provide a name for the controller.
Figure 3: Import OVA
- Select the resource pool for the controllers
Figure 4: Select resource pool
- For storage, select the vSAN datastore that was created during VxRail deployment.
Figure 5: Select storage
- Select the VDS port group designated for the NSX Advanced Load Balancer management interface. Ensure that the port group to which NSX Advanced Load Balancer is attached can communicate with the port group on which the vCenter Server management network resides.
Figure 6: Select management interface
- On the next screen, enter the required information and proceed to finish on the following screen.
Figure 7: Required information
Figure 8: Complete OVF deployment
Login and Initial Configuration
Once the controller is deployed and ready, access the admin portal from a browser using the previously configured hostname or IP address. Note that it takes a few minutes for the controller to become available for login.
- On the login screen, set a password and create the admin account.
- On the next screen, fill in the system settings, including a passphrase, DNS, domain, and SMTP information. Choose your multi-tenant settings and click Save. Figure 9 depicts this screen.
Figure 9: Initial login setup
Controller Configuration
Once the controller is deployed, several tasks need to be performed for NSX Advanced Load Balancer to work with Tanzu Kubernetes Grid Service. These tasks are summarized below.
- Configure the default cloud instance
- Configure settings for system access
- Configure Service Engine group
- Configure the VIP network
- Create IPAM Profile
- Add IPAM to Default-Cloud instance
- Create DNS service (optional)
- Add DNS virtual service to Default-Cloud instance
- Export the SSL/TLS certificate
- Create route between workload and front-end networks.
Configure default cloud instance
Currently, only the Default-Cloud instance is supported with vSphere. Access the Default-Cloud settings via Infrastructure > Clouds and click the pencil icon to edit the cloud configuration.
- On the Infrastructure tab, fill in the IP address, username, and password for vCenter Server. Ensure that “Write” is selected under access permissions. NSX Advanced Load Balancer requires write permissions to vCenter to create, modify, and remove Service Engines and other resources automatically as requirements change.
Figure 10: Add vCenter
- On the Data Center tab, select your data center.
Figure 11: Select Data center
- On the Network tab, select the network designated for NSX Advanced Load Balancer management, and configure the IP subnet, default gateway, and an IP range for the static pool.
Figure 12: Select network
Configure settings for system access
- Basic authentication can be set using the following process: navigate to Administration > Settings > Access Settings and check “Allow Basic Authentication”.
Figure 13: Set basic authentication
Staying on the same screen, delete the existing SSL/TLS certificate and create a new one with your organization-specific information. The Controller ships with a default self-signed certificate, but it does not have the correct SAN (Subject Alternative Name). The certificate must be replaced with a valid external or self-signed certificate that has the correct SAN. For step-by-step instructions, visit the NSX Advanced Load Balancer documentation page.
Figure 14: Create certificate
Configure Service Engine group
- From Infrastructure > Service Engine Group > Basic Settings, ensure that N+M (buffer) is selected under Elastic HA. This is the default mode, where “N” is the minimum number of Service Engines required to place virtual services in a Service Engine group and “M” is the number of additional Service Engines the Controller spins up to handle Service Engine failures without reducing the capacity of the group.
Figure 15: Create service engine group
Figure 16: Select cluster and vSAN datastore
Configure the VIP network
This is the network on which the Kubernetes control plane and Kubernetes applications receive load balancing services. In this case, the VIPs reside on the front-end network.
Figure 17: Configure VIP network
Create IPAM Profile
Create an IPAM profile for the VIP network created earlier to assign IPs to the virtual services. IPAM profiles can be accessed via Templates > Profiles > IPAM/DNS Profiles.
- Create a new IPAM Profile by using the Create button on the top right-hand side of the screen.
Figure 18: Create IPAM
- Enter a name. In the Type field, select Avi Vantage IPAM and add a usable network, which will be your VIP network.
Figure 19: Create IPAM
Add IPAM to Default-Cloud instance
The new IPAM profile needs to be added to the Default-Cloud instance.
- Navigate to Infrastructure > Default-Cloud and edit. Select the newly created IPAM profile from the drop-down list.
Figure 20: Add IPAM to Default-Cloud
Create DNS service (optional)
NSX Advanced Load Balancer provides a generic DNS virtual service that can be implemented with various functionalities to meet different requirements. The DNS virtual service can be used to load balance DNS servers, host static DNS entries, host DNS records for virtual service IP addresses, or host GSLB service DNS entries. For more information on NSX Advanced Load Balancer features, please visit NSX Advanced Load Balancer features.
For this reference architecture, a DNS virtual service was created that served as the DNS server for a subdomain of the primary Active Directory domain via DNS delegation. The generic process is outlined below. With this DNS configuration, along with IPAM, a DNS entry will be created for services. Delegation of the DNS domain will depend on your Active Directory architecture; please consult Microsoft® documentation for DNS delegation steps.
- Create DNS virtual service
To create the DNS virtual service, navigate to Applications > Virtual Services > Create Virtual Service. Give the service a name and select the TCP/UDP and application profiles. The application profile is of type “System DNS”. Click Save.
Figure 21: Create DNS service.
- Add DNS virtual service to Default-Cloud instance
Navigate to Administration > Settings > DNS Service and select the DNS virtual service.
Figure 22: Add virtual service
Export the SSL/TLS certificate
The certificate created in the earlier steps will be needed during Tanzu Workload Management deployment. Follow these steps to export the certificate for later use.
- Go to Templates > Security and select the certificate created earlier
- Click the down arrow on the right-hand side to export the certificate
Figure 23: Export certificate
- Copy the certificate to clipboard.
Figure 24: Copy certificate
Create route between workload and front-end networks
If the VIP and workloads are on separate networks, as is the case here, a route needs to be created between the front-end and workload networks.
- Navigate to Infrastructure > Routing > Static Route tab
- Create a new static route.
Figure 25: Create route
Enable vSphere with Tanzu Workload Management
Once NSX Advanced Load Balancer has been successfully deployed, vSphere with Tanzu Workload Management can be enabled. As a best practice, a Tanzu-specific storage policy needs to be defined and storage tagged prior to enabling Workload Management. vSphere with Tanzu uses storage policies and tags to assign storage to Kubernetes cluster nodes and persistent volumes. This storage policy ensures that Tanzu workloads are placed on the desired storage pool, separate from other vSphere workloads. The process is described below.
Note: VxRail deploys a vCenter Server internal to the host cluster. An existing, customer-managed vCenter Server can also be used with a VxRail deployment; please consult the VxRail documentation for more information. For this document, the vCenter Server installed by VxRail was used.
Create storage tag and policy
- Select your cluster in vCenter Server and go to Datastores. Select the Datastore to be used for Workload Management.
- Under “Tags” click “Assign”.
Figure 26: Create tag
- On the next screen, click “ADD TAG” and the Create Tag dialog opens. Enter a name for the tag and select a category. If desired, a new category can be created. Click CREATE to create the tag.
Figure 27: Create tag
- In vCenter Server navigate to Menu > Policies and Profiles > VM Storage Policies and select CREATE. On the next screen give the policy a name and click NEXT.
- For Policy Structure, check the “Enable tag-based placement rules” and click NEXT
Figure 28: Placement Rules
- Create a rule by selecting the tag category and “Use storage tagged with” as the usage option. Browse and select the tag created earlier. Click NEXT.
Figure 29: Create rule
- Select storage, click NEXT, and finish the policy creation.
Figure 30: Select storage
Create content library
The Tanzu Kubernetes Grid Service requires a content library that holds the images vSphere needs to deploy Supervisor and workload clusters. The subscription URL used for this content library is https://wp-content.vmware.com/v2/latest/lib.json. Create a content library with the given subscription URL. Please note that it will take some time before content is downloaded and available in the library.
Enable Workload Management
- Navigate to Menu > Workload Management. Review the prerequisites for setting up the Supervisor cluster and ensure that they are met before proceeding. Click “Get Started”.
Figure 31: Enable Workload Management
- Select the vSphere Distributed Switch and click NEXT.
Figure 32: vCenter Server and Network
- Select the cluster
Figure 33: Select cluster
- Select the storage policy created previously.
Figure 34: Storage Policy
- On the next screen, fill out the NSX Advanced Load Balancer details and paste the certificate exported earlier. Be sure to use the “<IP address>:443” format when entering the IP address for the controller. Click NEXT.
Figure 35: NSX Advanced Load Balancer details
- Fill in the management network details. Either DHCP or static assignment can be used. When using static IP address assignment, be sure to reserve a block of five IP addresses for the control plane VMs in the Supervisor cluster. When using DHCP, ensure that the DHCP server in your environment supports client identifiers to provide IP addresses for the Supervisor Cluster control plane VMs and floating IP. The DHCP server must also be configured with compatible DNS server(s), NTP server(s), and DNS search domain(s). Click NEXT.
Figure 36: Configure management network.
- vSphere namespaces on this Supervisor Cluster require workload networks to provide connectivity to the nodes of Tanzu Kubernetes clusters and the workloads that run inside them. Internal IP addresses are used to allocate Kubernetes services of type ClusterIP. These IP addresses are internal to the cluster but should not conflict with any other IP range. Configure the workload network information page for your specific network. Click NEXT.
Figure 37: Configure workload network
- Add content library.
Figure 38: Add content library
- Select the size of the control plane VM per your requirements and optionally enter the DNS name designated for the Kubernetes API server. For production deployments with Tanzu Mission Control integration, the large form factor is recommended for Supervisor control plane nodes. Click FINISH to start the configuration process.
Figure 39: Control plane size and API server
Authentication and Access
Authentication to Tanzu Kubernetes clusters can be accomplished in different ways depending on your architecture, user authentication, and access requirements. In an on-premises environment, a simple and reliable method is to use vCenter SSO to authenticate users or to add a local identity source such as Active Directory over LDAPS. For this reference architecture, Active Directory authentication with LDAPS was used to authenticate users. The domain controller was configured with a Certificate Authority, and the controller certificate was exported to be used in the identity source configuration process. What follows are the steps a vSphere administrator performs to give domain users access to the namespace created for the initial workload cluster deployment.
vSphere Administrator Tasks.
- Add Active Directory as an identity source to vCenter Server.
- Create a namespace for DevOps admins and developers to deploy clusters to.
- Assign permissions to DevOps engineers in the namespace.
- Assign storage policies, virtual machine classes, and quotas to the namespace.
- Provide namespace access information to DevOps and/or developers.
Add Active Directory as identity source to vCenter
- Navigate to vSphere menu > Administration > Single Sign On > Configuration > Identity Provider > Identity Sources and click ADD.
- Fill in the required information for the domain, upload the domain controller certificate, and connect to the domain controller using the LDAPS port (636).
Note: Use of domain names ending in “.local” is not supported. Please see the KB article.
Figure 40: Configure identity source
- To verify that the Active Directory integration was successful, navigate to Users and Groups. The domain should now be visible in the drop-down list, and a query to find a user should succeed.
Figure 41: Validate AD integration
Create namespace
The Tanzu Kubernetes Grid Service uses namespaces to provide tenant separation and isolation. Namespaces are defined on the Supervisor cluster and can be configured with user permissions, resource quotas, and storage policies. Depending on requirements, you assign VM classes and content libraries to the namespaces to download the latest Tanzu Kubernetes releases and VM images. The number of namespaces created depends on organizational requirements. For this reference architecture, a single namespace was created.
Figure 42: Namespace configuration
Provide namespace and login information to users
Once the namespace is configured, the administrator needs to provide DevOps with relevant information such as the username and password, the vCenter Server certificate, and the namespace URL so they can create clusters and deploy workloads on them. The user installs the certificate on the access machine where they intend to run Kubernetes commands. The namespace URL can be obtained from the namespace configuration status page as shown in Figure 43.
Figure 43: Access URL
The URL provides instructions for the user to download the vSphere CLI tools used to access the namespace.
Figure 44: Kubernetes CLI tools
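For example, a user can log in with the vSphere plugin for kubectl using a command of the form “kubectl vsphere login --server=<supervisor cluster address> --vsphere-username <user> --tanzu-kubernetes-cluster-namespace <namespace>”, and then switch to the namespace context with “kubectl config use-context <namespace>”. The server address, user, and namespace shown are placeholders for the values provided by the administrator.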
Lifecycle Management via Tanzu Mission Control
As companies grow their cloud native environments across multiple cloud providers, platform consistency and manageability become a challenge. Each cloud provider has its own management portal, and lifecycle management of such an environment can become a nightmare. Enterprises need a solution that helps platform operators efficiently expand control and provide Kubernetes environments with guardrails, so DevOps teams have consistency and developers can operate autonomously, in a self-service fashion. For user authentication, an identity source such as Microsoft Active Directory or another third-party identity source needs to be federated with Tanzu Mission Control. Please see “Self-Service Federation Setup” in the Tanzu Mission Control documentation.
VMware Tanzu Mission Control is a centralized management hub with cluster lifecycle management and a unified policy engine that simplifies multi-cloud and multi-cluster Kubernetes management across teams in the enterprise.
Administrators can perform several tasks to manage their on-premises or multi-cloud environments. Some of the tasks an administrator performs to administer their environment are listed and explained below.
- Create a cluster group
- Add management cluster to Tanzu Mission Control
- Create Kubernetes workload clusters
- Attach existing Kubernetes clusters
- Install Tanzu toolkit packages and extensions
- Configure policies
- DevOps access to clusters
Note: It is not the intent of this document to cover all aspects and features of Tanzu Mission Control. For more details please see Tanzu Mission Control documentation.
Create a Cluster Group
Creating a cluster group for different deployments or sites is an optional step. The advantage is that it organizes different cluster types, and policies can be applied to all clusters at the group level. Create cluster groups from the left menu pane in the Tanzu Mission Control portal.
Add Management Cluster to Tanzu Mission Control
For Tanzu Mission Control to manage the Tanzu Kubernetes Grid environment, the management cluster needs to be registered with it. The following steps depict the management cluster registration process.
- In the Tanzu Mission Control portal, navigate to Administration > Management clusters, click Register Management Cluster, and select the type of management cluster you are registering.
Figure 45: Register management cluster
- Enter the name, cluster group, description, and label information if desired. Labels help organize various Tanzu Mission Control objects so they can be sorted and displayed easily.
Figure 46: Register management cluster
- Enter proxy information if your management cluster is behind a proxy.
Figure 47: Enter proxy
- Copy the registration URL that contains the registration key and provide it to the vSphere administrator. The vSphere administrator will perform the next step in registering the management cluster with Tanzu Mission Control.
Figure 48: Copy registration URL
- As a vSphere administrator, log in to the Supervisor cluster and list the namespaces. Take note of the TMC service namespace.
Figure 49: TMC namespace
- Create and apply a .yaml file using the registration URL and the svc-tmc-xx namespace, as shown below.
Sample yaml:
apiVersion: installers.tmc.cloud.vmware.com/v1alpha1
kind: AgentInstall
metadata:
  name: tmc-agent-installer-config
  namespace: svc-tmc-c9
spec:
  operation: INSTALL
  registrationLink: https://org.tmc.cloud.vmware.com/installer?id=17e139c2ba3551axxxxxxxxx
- Apply the yaml via “kubectl create -f <filename.yaml>” to complete the registration process.
- In the Tanzu Mission Control console, verify that the connection to the Supervisor cluster is successful and that the cluster is added and functional.
Figure 50: Verify connection
Create Kubernetes Workload Clusters
- Navigate to Clusters and click Create Cluster.
Figure 51: Create cluster
- Select the management cluster and click Continue to Create Cluster.
Figure 52: Select management cluster
- Select the provisioner, which in this case is the namespace you created in Workload Management.
Figure 53: Select provisioner
- On the next screen, give the cluster a name and select a cluster group.
Figure 54: Cluster name and group
- Select a Kubernetes version and assign the network CIDR and storage class.
Figure 55: Configure parameters
- On the next screen, select a deployment model for your control plane nodes and select a VM class and storage policy. You can also create a volume at this point.
Figure 56: Configure control plane specifications.
- Modify the default node pool configuration, which has one node, to the desired number of worker nodes. Set the VM class and storage policy. Click Create Cluster to start the cluster creation process. A declarative equivalent of this flow is sketched below.
Figure 57: Modify default node pool
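For reference, the cluster can also be described declaratively against the Supervisor cluster, which the Tanzu Kubernetes Grid Service reconciles into a running cluster. Below is a minimal sketch of such a TanzuKubernetesCluster manifest (v1alpha1 API); the cluster name, namespace, VM classes, and storage class are placeholders for values assigned to your namespace.
Sample yaml:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo                   # hypothetical cluster name
  namespace: demo-ns               # vSphere namespace used as the provisioner
spec:
  distribution:
    version: v1.21                 # resolved against the Tanzu Kubernetes releases in the content library
  topology:
    controlPlane:
      count: 3                     # highly available control plane
      class: best-effort-small     # VM class assigned to the namespace
      storageClass: tanzu-storage-policy
    workers:
      count: 3
      class: best-effort-medium
      storageClass: tanzu-storage-policy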
Attach Existing Kubernetes Clusters
- In Tanzu Mission Control, navigate to Clusters > Attach Cluster and enter the desired information.
Figure 58: Attach cluster
- On the next screen, enter proxy information if your cluster is behind a proxy.
- On the “Install Agent” step copy the kubectl command.
Figure 59: Install agent
- Log in to the cluster and run the command. The cluster should be added and the policies created.
Figure 60: Run cli command
- In the Tanzu Mission Control console, verify that the cluster has been attached.
Figure 61: Verify
Tanzu Toolkit Packages and Extensions
Tanzu Mission Control operators can install, delete, and manage packages on Kubernetes clusters. Tanzu Mission Control uses Carvel for package management. The “Catalog” page shows the packages available to be installed on Kubernetes clusters.
Figure 63: Packages
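Behind the scenes, installing a catalog package results in a Carvel PackageInstall resource on the target cluster. As a rough sketch of what such a resource looks like (the namespace, service account, and values secret names are illustrative):
Sample yaml:
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: prometheus
  namespace: my-packages               # illustrative namespace
spec:
  serviceAccountName: prometheus-sa    # service account with permissions to install the package
  packageRef:
    refName: prometheus.tanzu.vmware.com
    versionSelection:
      constraints: 2.27.0+vmware.2-tkg.1   # version listed in Table 2
  values:
    - secretRef:
        name: prometheus-values        # optional configuration overrides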
Package repositories available for each cluster can be viewed, enabled, or disabled via the Cluster > Add-on tab. Custom package repositories can be added via the “Add Package Repository” button.
Figure 64: Repositories
Install Packages
Figure 63 shows the packages available with the Tanzu Standard repository. The method of deployment is the same for all packages; some packages, however, have more customizable fields in Tanzu Mission Control during installation. Below is an example of how to install Prometheus and Grafana using Tanzu Mission Control.
Install Prometheus and Grafana
- Navigate to Catalog, select a cluster, click Prometheus, and select Install Package.
- Give the package a name and select the version to be installed from the drop-down list. Under package configuration, fields that have a pencil icon can be modified and configured per your requirements.
Figure 65: Install Prometheus
- Some Carvel package settings, such as the Carvel resources namespace, can be modified via the “Carvel Settings” button.
Figure 66: Carvel settings
- Click Install Package, either leaving the settings at their defaults or modifying them as needed.
- Once Prometheus is installed successfully, install Grafana in the same way.
Figure 67: Prometheus and Grafana
- Verify that you can access Grafana via its external IP address. Grafana is installed in the “tanzu-system-dashboards” namespace. Use the “kubectl get svc -n tanzu-system-dashboards” command to get the external IP address Grafana is running on.
Figure 68: Grafana
Configure Policies
Various types of policies can be created by the platform administrator to manage the operation of Kubernetes environments or other organizational objects. The two policies most relevant to Kubernetes operations are Role-Based Access Control (RBAC) and security policies. Please note that security policies are supported on Kubernetes version 1.16 or higher. The application of these policies is discussed in the following section. For more information on policies, roles, and role bindings, please see Policy-Driven Cluster Management.
RBAC and Role Binding
Access policies control how users and groups access and manage resources, such as clusters, via Tanzu Mission Control. Organizations have predefined roles that govern access to an object based on granted permissions, whereas a role binding defines the scope of the access policy to which the role applies. Roles are bound to a given user or group, effectively granting that user or group permissions to the desired object. The following example binds a user identity to a cluster via the Tanzu Mission Control policy management engine.
- From the left pane in Tanzu Mission Control, navigate to Policies > Assignments > Access tab > Clusters and select the cluster or group of clusters you want to apply the policy to. Expand the cluster name under “Direct access policies”.
Figure 69: Apply role binding
- Create a role binding for a user and assign a cluster-level role.
Figure 70: Create role binding
- Click ADD and SAVE. The role binding will be created.
Figure 71: Role binding created
- Verify that the role binding is created on the cluster correctly. Use the “kubectl describe” command to view the configured role binding; a conceptual sketch of the resulting binding follows.
Figure 72: Verify
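Conceptually, the access policy results in a role binding on the cluster similar to the following sketch; the binding name and user identity are illustrative, and the names Tanzu Mission Control generates will differ.
Sample yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-admin-binding          # TMC-generated names will differ
subjects:
  - kind: User
    name: jdoe@example.com             # hypothetical user identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin                  # role granted through the access policy
  apiGroup: rbac.authorization.k8s.io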
Security Policies
Security policies allow you to manage the security context in which deployed pods operate in your clusters by imposing constraints that define what pods can do and which resources they have access to. Tanzu Mission Control security policies are not implemented using the Kubernetes-native “PodSecurityPolicy” object; Tanzu Mission Control uses the Gatekeeper project from Open Policy Agent (OPA Gatekeeper). The security-sensitive aspects of the pod specification that they control are, however, the same. For more information, see the OPA Gatekeeper documentation. Tanzu Mission Control with Tanzu Standard only supports the pre-defined “Basic” and “Strict” policies; custom policy implementation requires Tanzu Advanced. Security policies can be assigned via Policies > Assignments > Security tab. Below is an example of how to configure and verify security policies.
- Select the cluster or group of clusters the policy will be applied to.
Figure 73: Select cluster
- Under “Direct Security Policies”, click “Create Security Policy”. Select either the Basic or Strict security template per your requirements. Give the policy a name and enter label selector information if required.
Figure 74: Create policy
- Verify that the policy is applied to the cluster. Since policies are applied via Gatekeeper constraints and not the Kubernetes-native pod security policy, run the “kubectl get constraints” command to display the policies applied to the cluster. Each applied constraint carries the policy name appended to it, as sketched below.
Figure 75: Verify constraints
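For reference, the constraints behind these policies follow the standard OPA Gatekeeper shape. A simplified sketch of a constraint that blocks privileged containers is shown below; the constraint kind comes from the Gatekeeper constraint template library, and the name is illustrative.
Sample yaml:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer        # constraint template from the Gatekeeper library
metadata:
  name: example-policy-privileged      # illustrative; applied constraints carry the policy name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]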
Tanzu Observability Integration
Tanzu Mission Control provides centralized management, including full-stack observability and monitoring, through integration with Tanzu Observability. Tanzu Observability provides actionable insights into Tanzu infrastructure components. Tanzu Kubernetes Grid provides monitoring with the open source Prometheus and Grafana services. You deploy these services on your Kubernetes cluster and can then take advantage of Grafana visualizations and dashboards. The Tanzu Kubernetes implementation of Prometheus includes Alertmanager, which you can configure to notify you when certain events occur. For more information, visit the Tanzu Observability documentation.
Tanzu Observability integration with Tanzu Mission Control is done on a per-cluster basis. The process is outlined below.
Integration steps:
- Integrating Tanzu Observability with Tanzu Mission Control requires an API token. In the top-right corner of the Tanzu Observability console, click the gear icon and select your account.
- Select “API Access” tab. Copy an existing token or generate a new one.
Figure 76: API Token
- In Tanzu Mission Control, navigate to Administration > Accounts > Create Account Credentials and select Tanzu Observability under “Integrations”.
Figure 77: Integration
- Give the credentials a name, enter your Tanzu Observability URL, and paste the API token copied earlier.
Figure 78: Create credentials
- To add a cluster to Tanzu Observability, select a cluster, and from the “Actions” menu select Tanzu Observability > Add.
Figure 79: Add Tanzu Observability
- Select credentials created earlier and confirm to finish.
Figure 80: Select credentials
- Once the integration process is complete, you will be able to connect to Tanzu Observability from the link provided under “Integrations”.
Figure 81: Integration complete