September 20, 2022

TKG 2 - Choose Your Own Authentication: Integrated Pinniped OIDC in vSphere with Tanzu


By Michael West, VMware

With the launch of vSphere 8, vSphere with Tanzu has expanded your options for integrating external identity providers into your cluster authentication strategy. Pinniped is an authentication service for Kubernetes clusters, and OpenID Connect (OIDC) is a simple identity layer on top of the OAuth 2.0 protocol that allows clients to verify the identity of an end user based on the authentication performed by an identity provider. Using Pinniped, you can now bring your own OIDC provider to be the source of user identities in your Kubernetes clusters. You may also continue to use vSphere Single Sign-On (SSO) as you did in vSphere 7. In this blog you will see how to set up an external identity provider (IDP) and how the authentication process works.

Why do we need Pinniped and external identity providers (IDPs)?

Kubernetes clusters generally have two categories of users: service accounts, which are managed by Kubernetes and bound to namespaces directly through API calls, and normal users. Service accounts are tied to credentials that are stored as Secrets in Kubernetes and mounted into pods; they give in-cluster processes access to the Kubernetes API server. Normal users are just what you think they are: people and processes external to the Kubernetes cluster that need to be authenticated to the Kubernetes API. Kubernetes has no mechanism for creating and managing external users and their access to the API. That is done through an external service.
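As a quick, hedged illustration of the difference (the account and binding names here are hypothetical): a service account is a first-class API object you can create and bind, while there is no equivalent API object for a normal user.

# Service accounts are created and bound in a namespace through the API
kubectl create serviceaccount ci-runner -n tkg
kubectl create rolebinding ci-runner-edit --clusterrole=edit --serviceaccount=tkg:ci-runner -n tkg

# There is no "kubectl create user" -- normal users exist only as names asserted
# by an external authentication service (vSphere SSO or Pinniped in this post)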

In version 7 of vSphere with Tanzu, external user management and authentication for both the Supervisor Kubernetes cluster and Tanzu Kubernetes Grid (TKG) clusters was done through integration with vSphere Single Sign-On (SSO). The users either resided in vCenter or in another Identity Provider (IDP) that was added as an Identity Source in vCenter. An authenticating proxy in front of the Kubernetes API would route login requests to vCenter, which would return a JSON Web Token (JWT) that was added to a Kubernetes config file on the client. The token was presented with each subsequent Kubernetes API call to authenticate the user. This is a workable solution, but as customers moved to larger Kubernetes deployments with many clusters, a couple of limitations emerged. The vSphere SSO token exchange process was not designed to support the volume of exchanges needed when many users log in and generate tokens for large numbers of clusters. More importantly, development teams use a range of IDPs and have extensive investment in managing users and integrating other apps through their preferred IDP. They want to continue to use those IDPs natively, without introducing vCenter into the data path. Pinniped is the solution for this.

In this blog we will dig into the details of the vSphere Single Sign-On process and then see the implementation of an external IDP using the new Pinniped integration.

 

Cluster Authentication with vSphere Single Sign-On (SSO)

Let's begin by looking at using the vSphere plugin for kubectl to authenticate our user. This is the login that was available in vSphere 7 and continues to be available in vSphere 8. Notice that two users have been added to the TKG namespace in vCenter: mwest@vsphere.local uses vCenter as the identity source, and mwest@vmware.com will be externally authenticated through GitLab using OIDC.

[Screenshot: Permissions on the TKG namespace in vCenter, showing mwest@vsphere.local and mwest@vmware.com with the Edit role]

 

Both users have been assigned the Edit role on the TKG namespace. This means that once they are authenticated, they are authorized with the privileges of the Kubernetes edit ClusterRole through a RoleBinding in the TKG namespace. The external provider simply tells the Kubernetes API who the user is; there is no change in how the VI Admin assigns privileges on the clusters.

Running kubectl get rolebinding -n tkg shows that both mwest@vmware.com and mwest@vsphere.local have been bound to the ClusterRole edit.

root@423d6fa70ad5f6badc8cd365fd6d90c8 [ ~ ]# kubectl get rolebinding -n tkg
NAME                                         ROLE                     AGE
cc-01-mc5gd-ccm                              Role/cc-01-mc5gd-ccm     6d21h
cc-01-mc5gd-pvcsi                            Role/cc-01-mc5gd-pvcsi   6d21h
wcp:tkg:group:vsphere.local:administrators   ClusterRole/edit         5d
wcp:tkg:user::mwest@vmware.com               ClusterRole/edit         3d1h
wcp:tkg:user:vsphere.local:mwest             ClusterRole/edit         6m29s
root@423d6fa70ad5f6badc8cd365fd6d90c8 [ ~ ]#
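To see exactly what one of these bindings contains, you can dump it as YAML. The output below is a trimmed, illustrative sketch rather than a capture from this environment; in particular, the subject name format may differ, so check the real object in your cluster.

kubectl get rolebinding wcp:tkg:user:vsphere.local:mwest -n tkg -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: wcp:tkg:user:vsphere.local:mwest
  namespace: tkg
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                          # the built-in Kubernetes edit ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: sso:mwest@vsphere.local       # illustrative subject name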

 

In order to authenticate through vSphere SSO, developers log in using the kubectl vsphere login command. This command has two forms: one that builds contexts for the Supervisor namespaces the user has access to, and one that also builds a context to access a TKG cluster. You might ask why there are two separate commands instead of a single login that builds all contexts. This is really a function of the architecture of vSphere SSO and the token exchange service. DevOps users may have access to a large number of clusters, and with a linear token exchange process, login times would grow for each cluster a particular user needed access to. To prevent multi-minute login times, users initiate a separate login as they need to access a particular TKG cluster.
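For reference, the two forms look like this, using the Supervisor address, user, cluster and namespace from the examples later in this post:

# Form 1: log in to the Supervisor and build contexts for its namespaces
kubectl vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify

# Form 2: additionally build a context for a specific TKG cluster
kubectl vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-name cc-01 --tanzu-kubernetes-cluster-namespace tkg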

With output verbosity set to maximum, you can see the kubectl vsphere login command run as the mwest@vsphere.local user. This is the login to the Supervisor Cluster. Two contexts were created: the default cluster context and the context for the namespace that mwest@vsphere.local has access to. You can see these context creations in the login output below, with kubectl config set-cluster 192.168.220.67 --server=https://192.168.220.67:6443 and kubectl config set-context tkg --cluster=192.168.220.67 --user=wcp:192.168.220.67:mwest@vsphere.local --namespace=tkg.

kubectl vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify -v10

ubuntu@cli-vm:~$ kubectl vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify -v10
DEBU[0000] User passed verbosity level: 10
DEBU[0000] Setting verbosity level: 10
DEBU[0000] Setting request timeout:
DEBU[0000] login called as: /usr/local/bin/kubectl-vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify -v10
DEBU[0000] Creating wcp.Client for 192.168.220.67.
INFO[0000] Does not appear to be a vCenter or ESXi address.
DEBU[0000] Got response:


INFO[0000] Using mwest@vsphere.local as username.
DEBU[0000] Env variable KUBECTL_VSPHERE_PASSWORD is present
KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
Password:
DEBU[0011] Got response: [{"namespace": "tkg", "master_host": "192.168.220.67", "control_plane_api_server_port": 6443, "control_plane_DNS_names": []}]
DEBU[0012] Got response: {"session_id": "eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxYS5jb3JwLnRhbnp1XC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiU3lzdGVtQ29uZmlndXJhdGlvbi5BZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIl0sImV4cCI6MTY2MTgyNDY5MywiaWF0IjoxNjYxNzg4NjkzLCJqdGkiOiIxYjM2NzlkYi0yOTUyLTQ4ZTgtOTljNS00ZTc0M2QxNzJkNTMiLCJ1c2VybmFtZSI6Im13ZXN0In0.PKV7K_Rcuz4IGbsYvPxNTf2henN0xfTIyVyIiHSzOd8XfYis9TaoKM9rPIpb873z449UFKUih3O2Z9HKY2TMSqcoQ_zhLh-dHsUK2cZssdyidTOuq2x9NsPY2KYKBIm02BFosCr2BzXkm5DkeasN0PSjO6PYKxxdxch_ykoIObiTc1WmvoExxgW5PoVy09QwY1082V8lvahFSa0Abzm805SVgh1w6FGSvJKL7KRLpYL9-tIvDwo9lMnA4aiyv30mzVhyUX8LyfHDjkOcmv8HZQmUo6Rl9umydsrH8nV9_m0QYNzhcAzcf2a2InIyyM5PsjkkIsesKUk25liRUQ77RVqiZkKscr-sfN9jZcHb_j41LUAFnqVcO85ttjUBJJ97lDvWh6DSmUM_veTtA-jK8pCdGDu1-V-_H9eLSUdFYWrsSyC6oY8CHW7APkNcGOyr6nD86tD-41sRQwFIHiNsh_f-YArqfaQcOvkwSqRRoLZQAG1F9s3-rnLuVeUKfqaL"}
DEBU[0012] Found kubectl in $PATH
INFO[0012] kubectl version:
INFO[0012] Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
DEBU[0012] Calling `kubectl config set-cluster 192.168.220.67 --server=https://192.168.220.67:6443 --insecure-skip-tls-verify`
DEBU[0013] stdout: Cluster "192.168.220.67" set.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.
DEBU[0013] stdout: User "wcp:192.168.220.67:mwest@vsphere.local" set.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.
DEBU[0013] Calling kubectl.
DEBU[0013] Calling kubectl.
DEBU[0013] Calling `kubectl config set-context tkg --cluster=192.168.220.67 --user=wcp:192.168.220.67:mwest@vsphere.local --namespace=tkg`
DEBU[0013] stdout: Context "tkg" created.
DEBU[0013] stderr:
DEBU[0013] Calling `kubectl config use-context tkg`
DEBU[0013] stdout: Switched to context "tkg".
DEBU[0013] stderr:
Logged in successfully.
DEBU[0013] Calling `kubectl config set-context 192.168.220.67 --cluster=192.168.220.67 --user=wcp:192.168.220.67:mwest@vsphere.local`
DEBU[0013] stdout: Context "192.168.220.67" created.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.

You have access to the following contexts:
   192.168.220.67
   tkg

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

   

Before we look at the config files, let's confirm that the tokens in them came from vCenter. Authentication is facilitated through authenticating proxy pods in the Supervisor cluster that forward requests to vCenter. Looking at the logs of the appropriate auth proxy pod with kubectl logs wcp-authproxy-423d7e09122c75090c90a977f7772070 -n kube-system, we can see a connection to vCenter and that a bearer token was returned for user mwest@vsphere.local.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): vcsa-01a.corp.tanzu:443
DEBUG:urllib3.connectionpool:https://vcsa-01a.corp.tanzu:443 "POST /wcp HTTP/1.1" 200 262
DEBUG:wcp.resources:[139987200936336] Got response from WCP: [Summary(namespace='tkg', master_host='192.168.220.67', control_plane_api_server_port=6443, control_plane_dns_names=[])].
INFO:server:[139987200936336] "127.0.0.1" - - [29/Aug/2022:16:18:39 +0000] "GET /wcp/workloads HTTP/1.0" 200 125 "-" "kube-plugin-vsphere bld 18396598 - cln 9120757" "mwest@VSPHERE.LOCAL"
DEBUG:server:[139987200948560] Request: b'POST' b'/wcp/login' 127.0.0.1
INFO:vclib.sso:[139987200948560] Got bearer token for mwest@vsphere.local.

 

Note that we are going to look at details across several config files as a means of understanding the authentication process. There is a lot of complexity here, but users will simply execute login and use-context commands to switch contexts, without needing to be exposed to these details. The kubectl config file at ~/.kube/config shows our cluster and namespace contexts configured for user mwest@vsphere.local, as we would expect. Also note the auth token that was returned from the vSphere token exchange process.

ubuntu@cli-vm:~$ more .kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.220.67:6443
  name: 192.168.220.67
contexts:
- context:
    cluster: 192.168.220.67
    user: wcp:192.168.220.67:mwest@vsphere.local
  name: 192.168.220.67
- context:
    cluster: 192.168.220.67
    namespace: tkg
    user: wcp:192.168.220.67:mwest@vsphere.local
  name: tkg
current-context: tkg
kind: Config
preferences: {}
users:
- name: wcp:192.168.220.67:mwest@vsphere.local
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxY
S5jb3JwLnRhbnp1XC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiU3lzdGVtQ29uZmlndXJhdGlvbi5B
ZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIl0sImV4cCI6MTY2MTgyNTkyMCwiaWF0IjoxNjYxNzg5OTIwLCJqdGkiOiJmZWRlZTRiMi04ZDllLTQ4MTQtYWE0MS1lYzllMzg5ZjJkMmEiLCJ1c2VybmFtZSI6Im13ZXN0In0.kJIR5dkEn_uAfTkgwdzgRH5yp88l1u7Hqtce-gividTCdl9diCSbBo14HVrtySoJL20N8-M
yoR6h3_i42lbD-w8oYdXM61YOLk9GlvjKAqX2xRpS6-OBqrxeGyLDXjMa6L0pfMpw7vdT1P8QxYftZq5_ZE7zFTjpDT2yMsHsDktS2pa_SN7avSB2OlzB9KhjoG7o6DWfD-nZIVLgDakYNCFu0XqSKT17iX224em4TBqjqt3c_HTFvKzkhOKL52WKVPhF55D8Bp5uumwdI3rDX0NpLYSC5YbNtvipJaQvCK8JiOQyi7fSqH1fzB
VoMTfL3xUTbXJjG-m7ZyGXR-cNSPOfAommYBRJ-6bGi7ek0APkIYYYtSUJ6I5zsD6D6UyD_uZeVR0XgTAvKLNpnDEtnaDq-XT0_th3UGRPLnPBO9bLfMAVbIeLnszu7AKAQdJz781CEBxi-7tJZlITsVwb1o2cnTpKZZ7pwTODXSIkHOE4m2ozxGqe81UpEAeufS-U
ubuntu@cli-vm:~$

 

Digging further, we can decode the JSON Web Token (JWT, pronounced "jot") and see the payload. It is very clear that it was issued by vcsa-01a.corp.tanzu. A sketch of how to decode it yourself follows the payload below.

{
  "sub": "mwest@vsphere.local",
  "aud": "vmware-tes:vc:vns:k8s",
  "domain": "vsphere.local",
  "iss": "https://vcsa-01a.corp.tanzu/openidconnect/vsphere.local",
  "group_names": [
    "LicenseService.Administrators@vsphere.local",
    "Administrators@vsphere.local",
    "Everyone@vsphere.local",
    "SystemConfiguration.Administrators@vsphere.local"
  ],
  "exp": 1661825920,
  "iat": 1661789920,
  "jti": "fedee4b2-8d9e-4814-aa41-ec9e389f2d2a",
  "username": "mwest"
}
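If you want to reproduce the decode yourself, here is a minimal sketch, assuming the token value has been copied from ~/.kube/config into the TOKEN variable and that jq is installed (any JSON pretty-printer works):

TOKEN='<paste the token value from ~/.kube/config>'

# The payload is the second dot-separated field, base64url-encoded
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')

# Pad to a multiple of 4 characters so base64 will accept it
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done

echo "$PAYLOAD" | base64 -d | jq .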

 

Now let's log in to a TKG cluster. Incidentally, this cluster is using the ClusterClass-based reconciliation that is new with vSphere 8 and TKG 2.0. We execute kubectl vsphere login --server 192.168.220.67 -u mwest@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name cc-01 --tanzu-kubernetes-cluster-namespace tkg. You can see the new context for TKG cluster cc-01, with the token needed to present to its API server.

ubuntu@cli-vm:~$ cat .kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.220.67:6443
  name: 192.168.220.67
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2akNDQWRLZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EZ3lNakUyTWpZd00xb1hEVE15TURneE9URTJNekV3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT1VRCkF2YXBDL1hNaGZGYlA2bDNQTFcrRnhIVXpDRXdEWENSdE5SSnhDcHN0TUZzeGQvc1NYOERUSnl4cG9JdEpESjIKOWJzbnNhV0N5clp5Q2hvekpyWUNTUDhKWDZPOHFVN1o5Zk9VaTNVbnhHeldGaUFTdVUzVk5HcFVnenluK1E3eQo5QUNkRlBtQzVENDlDbDFuTHJOSk5ZcUpHVlkxUnlUNEZZWWNLdUNORU9FNEFQZDhJTzlqNmQwdWYwSVRtamJRCkVlTVMxU1d2RFl1WXBEU0t6aFJPR3hVa0NBb2plM2p4dTNSdG5sWkpyQU05WG9uekFmTzduUjVWVXpwajBLT0MKL2RhNmt4K1pNQ2pFKzF6WWJqamRBQWZkTHBGUXNwOXhqYUxTRnNoQzh5VjFpbWc1Mk01a3d2ODZlNkY4UmFGbgprT3B3bkozemU5UWVDbElhRHowQ0F3RUFBYU5GTUVNd0RnWURWUjBQQVFIL0JBUURBZ0trTUJJR0ExVWRFd0VCCi93UUlNQVlCQWY4Q0FRQXdIUVlEVlIwT0JCWUVGSHZvWTFEWDdZbUl2NEdzK2xlcFhBZnd5M2gwTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQkk3aHdJaEZNK1lGMkNkRVBlVzdNRjdNMG9FSG8ybW12K2VIS3lSR0o0czFtegpPSkZGbXNtamwrMEp6aHN1enNaYzlET29kNjBpb2hiR0Z6OTZkbDF5LzV6UnBWVE5kVFdLUWJ2ZUFZN1hVNVJZCm1wcS9vNFBMZ3d2K25OTExCaUV1N2hPYldGeXhWTGkydzZPREoxcS9UMGpkdkRCMGs5TitCTDVHYWk0Rk9KZjEKMkhkbkdFczZGT2RET2pPUThNV29Da0hyTlJRRjdiU2FiNlUvbHhCNEtUeXdQd2ZTMUJZeElGWGVsL1hTT3lySApiMGhSQjVOQ3dDZTUySG9QOVoxaUkxS0hpbTNwVDJ1ejRZQzExRC9GSGkyNWl0cFU4RDV3VzV5SmdwdnVoTFdYCmtaNDRJZVluSGpqNDhvTmt2OWdrREZoTWlCV2pIRXBYMjhPMHI5cmgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.220.69:6443
  name: 192.168.220.69
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.220.67:6443
  name: supervisor-192.168.220.67
contexts:
- context:
    cluster: 192.168.220.67
    user: wcp:192.168.220.67:mwest@vsphere.local
  name: 192.168.220.67
- context:
    cluster: 192.168.220.69
    user: wcp:192.168.220.69:mwest@vsphere.local
  name: cc-01
- context:
    cluster: 192.168.220.67
    namespace: tkg
    user: wcp:192.168.220.67:mwest@vsphere.local
  name: tkg
current-context: cc-01
kind: Config
preferences: {}
users:
- name: wcp:192.168.220.67:mwest@vsphere.local
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxYS5jb3JwLnRhbnp1XC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiU3lzdGVtQ29uZmlndXJhdGlvbi5BZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIl0sImV4cCI6MTY2MTgyODk3MiwiaWF0IjoxNjYxNzkyOTcyLCJqdGkiOiI4MzdhZTcwOS0zMTdlLTRhMTAtYjYzNi1lZjU0NGJlMWViNjQiLCJ1c2VybmFtZSI6Im13ZXN0In0.B7LTFWadGU1JjMpP7IraiX0ktM56Nwf72MBerq9R-pylNU-bns8GCrMHX_HY7Hpf9a4OpKB27esk4qZhSYHm8ud6PNQq33jzQk4DD4VEknsESLt5P-p8NFGbeoda8VoquF_WHByFRxgZIkRrkuL8PldOBcCxRP04loQkXXtK91GSpNrrckxi9lK-RC9uqj47L0DZNar4Sun7sXC3itVXezaHxE0isiC9QkSTksv5FDTeJX4nZZNLPDGTz5NYu9RwAjzZirJPbP7hkrecav_0NII9sQK6aHir7dNBsgk9sbInGD-pTRyBawkcTKraq2R54llENMycennunsVAAeP8Wsrqe3AqPGomnzlT4Uy9IdmVgOMfMg_CFD-Fr4GMIgJ28E_DqeO66LpFa9ynlfO7vf55yfz5AGDt1YfRu0O2kF7wjnPLDD1N_CcXg4zdXCOc_XVV-eSEbtGd6vsb88X3jey5m2MjrBGZFpron0dX-rIqinPj6qZaCSjn5i1P4Pbi
- name: wcp:192.168.220.69:mwest@vsphere.local
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzOjA1Mzc4YThjLWQ3MjgtNDZmZS1hMTNiLTE1OWFhMjkyZDhhMSIsImRvbWFpbiI6InZzcGhlcmUubG9jYWwiLCJpc3MiOiJodHRwczpcL1wvdmNzYS0wMWEuY29ycC50YW56dVwvb3BlbmlkY29ubmVjdFwvdnNwaGVyZS5sb2NhbCIsImdyb3VwX25hbWVzIjpbIkxpY2Vuc2VTZXJ2aWNlLkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJBZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIiwiRXZlcnlvbmVAdnNwaGVyZS5sb2NhbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCJdLCJleHAiOjE2NjE4Mjg5NzMsImlhdCI6MTY2MTc5Mjk3MywianRpIjoiY2I1NmQzYzQtMzg3My00MDUyLWEyMmUtNjI0M2IxMzUyYmM4IiwidXNlcm5hbWUiOiJtd2VzdCJ9.KqMemnoauGJZ-PSbRLwdu-pzeG86TCPGdTTD8RQex_N90rY4QMzuoDRP6TP8vT2TUlUCaKSE6-ZeIHpVSyLpP_Z1TNw6D1NhPbvxXQXnCVZv5n1nrjjkMxnrrrk1uGnY6nblgQnujczVrzEYVczsJbljHP8788DINE-iWqTT3xf5JDGesZDioXviDxw0uma4oNa8LVSQSwv2dGAaFVp1Aq1BBlgAR5rBjiz3S62MO97XmVlvkwKD4vv3tKbwpcYw1SfaPf5bE6IhZ25UFq_92-hYw0irZ7ggGHlmiVnIIzDkOpKOfWlJyhyihhKNQUOThwn_8NaUGYzMbw788v_husQWdVU7uIh_xjBV05lLHC-oCNr8VYwzzS65Qpet4utU9HhbbmsZj86M1ufxU1X4zTarYajF3EUJzUvPcWzVJhUll72JnePo_OXAJXRDsaBo5nq68qq0KmSXnUkNfiFpE5Ohl48KiUpHYTsrOZwBSQ672AsYwbRgbEwAmc29sCJD

 

kubectl config use-context cc-01 changes the context to point to the TKG cluster. A quick listing of the pods with kubectl get pods -n kube-system verifies that we are pointing at that TKG cluster. The easiest way to know it's a TKG cluster is to look for the Antrea CNI pods; the Supervisor does not use an Antrea overlay network.

ubuntu@cli-vm:~$ kubectl get pods -n kube-system
NAMESPACE                      NAME                                                       READY   STATUS      RESTARTS         AGE
kube-system                    antrea-agent-45vnp                                         2/2     Running     10 (5d3h ago)    7d
kube-system                    antrea-agent-8hnr4                                         2/2     Running     8 (5d12h ago)    7d
kube-system                    antrea-agent-sfrgs                                         2/2     Running     9 (5d3h ago)     7d
kube-system                    antrea-controller-c9b545758-z67kt                          1/1     Running     12 (5d3h ago)    7d
kube-system                    coredns-7d8f74b498-g4pj2                                   1/1     Running     4 (5d12h ago)    7d
kube-system                    coredns-7d8f74b498-q84fj                                   1/1     Running     4 (5d12h ago)    7d
kube-system                    docker-registry-cc-01-mgbpl-97dsm                          1/1     Running     0                7d
kube-system                    docker-registry-cc-01-node-pool-1-xxn9v-5ccb88b954-hvgh9   1/1     Running     0                7d
kube-system                    docker-registry-cc-01-node-pool-2-n6x66-784db4b47f-5zdjp   1/1     Running     0                7d
kube-system                    etcd-cc-01-mgbpl-97dsm                                     1/1     Running     7                7d
kube-system                    kube-apiserver-cc-01-mgbpl-97dsm                           1/1     Running     9 (5d3h ago)     7d
kube-system                    kube-controller-manager-cc-01-mgbpl-97dsm                  1/1     Running     821 (51m ago)    7d
kube-system                    kube-proxy-9wpgv                                           1/1     Running     0                7d
kube-system                    kube-proxy-gxlms                                           1/1     Running     0                7d
kube-system                    kube-proxy-xpkn7                                           1/1     Running     0                7d
kube-system                    kube-scheduler-cc-01-mgbpl-97dsm                           1/1     Running     816 (51m ago)    7d
kube-system                    metrics-server-fd7dbcdcf-sxk27                             1/1     Running     0                7d
 

 

Cluster Authentication with IDP through Pinniped

 

After deleting the ~/.kube/config file used in the vSphere SSO login, we will go through a similar process using an external IDP. For this testing, we will use GitLab as the IDP. Setup will obviously be slightly different for each IDP, but the concepts are generally the same. Each Supervisor can have a different Identity Provider associated with it, so we start by going to the Supervisor Config page in vCenter and finding the Pinniped callback URL that is specific to your Supervisor Cluster. In this case it's https://192.168.220.67/wcp/pinniped/callback. This is where the IDP response should be sent upon user authentication, and it needs to be added to your app in the IDP. Note that we have already added the GitLab IDP to this Supervisor, but we will come back to update the configuration once we go through the setup in GitLab itself.

[Screenshot: Supervisor Config page in vCenter showing the Pinniped callback URL]

 

We have now logged into https://gitlab.com to tell it about the Supervisor Cluster we want to authenticate against. In GitLab, we create a group called pinniped-testers that includes my user as a member. Authentication will happen via my primary email address.

[Screenshot: GitLab group pinniped-testers and its members]

 

We then create an application, name it Pinniped auth, and associate the callback URL for the Supervisor cluster. We will need the Application ID and Secret from this screen to complete the IDP setup in the Supervisor Cluster.

[Screenshot: GitLab application "Pinniped auth" showing the Supervisor callback URL, Application ID and Secret]

Add Identity Provider to Supervisor Cluster

 

From the Supervisor Cluster Config page, we Add Identity Provider. For GitLab we need to provide the GitLab URL https://gitlab.com, the client ID and the Secret. When working with Identity Providers and OIDC, you will see references to scopes and claims. Claims are simply pieces of user information returned to the client app during the auth process, and scopes are a way to group claims into categories. In the example above, we are requesting the openid, profile and email scopes. There may be other scopes you want to add that are specific to your environment and IDP, but those are generally optional. Now we are ready to try it out. Pretty simple setup.
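To make the relationship between scopes and claims concrete, an ID token issued after a login that requested the openid, profile and email scopes would carry a payload roughly like the illustrative sketch below. The values are made up; the name claim comes from the profile scope, the email claim from the email scope, and the exact claim set depends on your IDP.

{
  "iss": "https://gitlab.com",
  "sub": "1234567",
  "aud": "<Application ID from the IDP>",
  "exp": 1661825920,
  "iat": 1661789920,
  "name": "Michael West",
  "email": "mwest@vmware.com"
}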

[Screenshot: Add Identity Provider form on the Supervisor Config page with the GitLab issuer URL, client ID and secret]

 

The Tanzu CLI

Pinniped-based authentication is integrated with the Tanzu CLI. For users of vSphere with Tanzu in vSphere 7, this will be something new. The CLI provides a consistent experience for deploying TKG clusters across public and private cloud environments. You can learn more about the Tanzu CLI here. We start with the tanzu login command, specifying the endpoint for our Supervisor cluster and the external username we want to use, in this case mwest@vmware.com: tanzu login --endpoint https://192.168.220.67:443 --name mwest@vmware.com

The command returns a link to authenticate your user. Depending on your environment and/or IDP, a browser window may automatically open and route you to the login page for your IDP, or you may have to manually paste the link into your browser. You may also be presented with a code that must be pasted into the CLI to complete authentication.

ubuntu@cli-vm:~$ tanzu login  --endpoint https://192.168.220.67:443 --name mwest@vmware.com
ℹ  Detected a vSphere Supervisor being used
Log in by visiting this link:

    https://192.168.220.67/wcp/pinniped/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=Y1sunJLJdJfSrzuO81Uwgfo9EAscfeZbve1IOOvPU1c&code_challenge_method=S256&nonce=429665b0a88c4cec7cea5d9501c918c5&redirect_uri=http%3A%2F%2F127.0.0.1%3A41513%2Fcallback&response_mode=form_post&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=d9dd99f0edaddc1380d4c0045a717d00

    Optionally, paste your authorization code: 

 

Once redirected to the appropriate IDP login screen, we enter the valid GitLab credentials for mwest@vmware.com. The result is the creation of a JSON Web Token (JWT) and the return of the Certificate Authority (CA) cert that can be used for authentication to the Kubernetes API server.

[Screenshot: the GitLab sign-in page reached from the tanzu login --endpoint link]

 

Let's look at the configuration details. Once you are successfully logged in, you will have two separate configuration files used by the Tanzu CLI to authenticate to your Kubernetes clusters. There is a lot in this file, but let's focus on the ClientConfig. You can see that the user is mwest@vmware.com, as we expected from our login command. There is also a context, tanzu-cli......., and a path to another config file (~/.kube-tanzu/config) that contains context information in a format similar to the ~/.kube/config used by kubectl after the vSphere plugin based login.

ubuntu@cli-vm:~$ cat .config/tanzu/config.yaml
apiVersion: config.tanzu.vmware.com/v1alpha1
clientOptions:
  ----
current: mwest@vmware.com
kind: ClientConfig
metadata:
  creationTimestamp: null
servers:
- managementClusterOpts:
    context: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
    endpoint: https://192.168.220.67:443
    path: /home/ubuntu/.kube-tanzu/config
  name: mwest@vmware.com
  type: managementcluster

 

The ~/.kube-tanzu/config file contains the credentials needed to access the Kubernetes APIs of the clusters mwest@vmware.com has access to. The format is slightly different because Pinniped works differently from the vSphere SSO token exchange: the tokens are not stored in the file but are obtained and refreshed as part of the Pinniped deployment, as sketched after the file below. The Certificate Authority (CA) cert tells the Tanzu CLI that it can trust the cluster it wants to access. This kubeconfig file can be used in subsequent kubectl commands.

ubuntu@cli-vm:~$ cat .kube-tanzu/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZIekNDQTRlZ0F3SUJBZ0lKQU96dEFxT28zdFRMTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdhTVFzd0NRWUQKVlFRRERBSkRRVEVYTUJVR0NnbVNKb21UOGl4a0FSa1dCM1p6Y0dobGNtVXhGVEFUQmdvSmtpYUprL0lzWkFFWgpGZ1ZzYjJOaGJERUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEhEQWFCZ05WCkJBb01FM1pqYzJFdE1ERmhMbU52Y25BdWRHRnVlblV4R3pBWkJnTlZCQXNNRWxaTmQyRnlaU0JGYm1kcGJtVmwKY21sdVp6QWVGdzB5TWpBM01qa3hNak14TlRWYUZ3MHpNakEzTWpZeE1qTXhOVFZhTUlHYU1Rc3dDUVlEVlFRRApEQUpEUVRFWE1CVUdDZ21TSm9tVDhpeGtBUmtXQjNaemNHaGxjbVV4RlRBVEJnb0praWFKay9Jc1pBRVpGZ1ZzCmIyTmhiREVMTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnTUNrTmhiR2xtYjNKdWFXRXhIREFhQmdOVkJBb00KRTNaamMyRXRNREZoTG1OdmNuQXVkR0Z1ZW5VeEd6QVpCZ05WQkFzTUVsWk5kMkZ5WlNCRmJtZHBibVZsY21sdQpaekNDQWFJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dHUEFEQ0NBWW9DZ2dHQkFLWTFVQ2cxc1FSanEwYjI0QTlNCnVUTEd5dmVCVWZpelc4M2lxU0ZEUlNSL1Nld2xTVFNxMzlERkRXQVo4YXlnRTZYL1NOdzMzeXltMHJsb1daUGoKRkI1L0VWSEwrdVFZdi93OFdSNlZCb0NyWDhDOXMvZXhkM2ZuUVY0aDFDY3Y2UHlMNCttVm9wLy8wRVdOSWpCQgpWTm50dkFjamM1L0NSQjlDR29IZjhwbjBzMGRNZnRyZTZnaGRRMHAxYWdjMmpwYkx6TjNjd3NXSGFsT2dIcGtvCklvaDdBTVg3ZEVSem5UMnNObEFZcGE2aWM0NUQ2UVZpckRpZDR2bUZxQWltUm1nb3lzdjllb21ocmpiSldhbEsKVWhMNEwxczZhODhNQ2wwUFFDWlhmTEQ0eGl3UEtSeU9ia1ZmajJGeXBORm1LSDdMRlUzYVNvbWpCN3dBOUpMUwpqVUt5UEJUMnFiZzIrSlJPWGNUM1ZwUGIzNWl5QjFyNnB3U3FmUUZDQ0FZYmNHd2RmYTMrMVRMYzdpRVpnd0F2CnM2QUlBTWFaeHU0VDNmRXJraW1DTlFhRS91ZWszOUhqNy80WE1ONEhDVUUyRWkzamE3NG5XWjVuNVBjSnd0aEwKNU9zbWNRaVVwck42NDF3czgxdG5IR2Robkl4T1pTS21tdXAxeTR0a2xEd3M2d0lEQVFBQm8yWXdaREFkQmdOVgpIUTRFRmdRVWIvL0J3UUZ0Qmp3dVQzYjNQM1ZVbm5yVzYrOHdId1lEVlIwUkJCZ3dGb0VPWlcxaGFXeEFZV050ClpTNWpiMjJIQkg4QUFBRXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRQXcKRFFZSktvWklodmNOQVFFTEJRQURnZ0dCQUJHcnovME53L2VqMVNLZ1NWdVFWZndxZllRczE3SHVSMTNKNWFXcApEU0pQQW1JdEJqVFNYS3VOb1NWdVJjZUQxOXc1dlpkRVFTVmFiU1JEWGJCY0pRRFVWOURYMGdkWmVVZTBaVS9hCnN5YVRnWDFsWkxNZEVOejhHK1FQVEdjeTRGOGltRkJMMHFwWkdDbmJSK2FiMGh1S1hUU3NhKzFpdU10OUZnRWUKYXJrWG84b1FFdTBJZzNyelJBL0FpUU9ZaFRzZWt3aklIZWpEeDNtT3ZFaUJUQmlINlVqZ0puY00wVUJHRWlIeApnZDdsTnZVSzlDZFNueitxS0RDQ0F0QzVidGk5S2g1UHkwNEVUWUtUSlJjYW9qbzNmRnBJZmJZUHJOakFjL3orCmxoTGtBVENyREhERmtQMktadHlqN1VHeXc1Vml5TjBXeU0weUIxN2pZSEQwM056bzdNckh5MjlSSm91cTdFdlEKOE9CZHVOZXNTZFNiT2dXZ1NRK004KzVzcGF0eVJxZXdxcHNUNXJSTkk3L05raXpqOXlCNmtjSVBGeFVhemdpRwpaUTUvTGl5Nm5sWTJPSXNqRVdRczNIYmVnaVV2ZDdsY3VYZE1XYnRORi9UajhRZDdXOUFwM0RpNjZDOXdJdGtLCnUwUDhFV2ZYVXA4RWV3UUo2TWdweS91NjdBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.220.67:6443
  name: ac1495c2-1db5-48dc-b064-df533b221e2c
contexts:
- context:
    cluster: ac1495c2-1db5-48dc-b064-df533b221e2c
    user: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c
  name: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
current-context: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
kind: Config
preferences: {}
users:
- name: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - pinniped-auth
      - login
      - --enable-concierge
      - --concierge-authenticator-name=tkg-jwt-authenticator
      - --concierge-authenticator-type=jwt
      - --concierge-is-cluster-scoped=true
      - --concierge-endpoint=https://192.168.220.67:443
      - --concierge-ca-bundle-data=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZIekNDQTRlZ0F3SUJBZ0lKQU96dEFxT28zdFRMTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdhTVFzd0NRWUQKVlFRRERBSkRRVEVYTUJVR0NnbVNKb21UOGl4a0FSa1dCM1p6Y0dobGNtVXhGVEFUQmdvSmtpYUprL0lzWkFFWgpGZ1ZzYjJOaGJERUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEhEQWFCZ05WCkJBb01FM1pqYzJFdE1ERmhMbU52Y25BdWRHRnVlblV4R3pBWkJnTlZCQXNNRWxaTmQyRnlaU0JGYm1kcGJtVmwKY21sdVp6QWVGdzB5TWpBM01qa3hNak14TlRWYUZ3MHpNakEzTWpZeE1qTXhOVFZhTUlHYU1Rc3dDUVlEVlFRRApEQUpEUVRFWE1CVUdDZ21TSm9tVDhpeGtBUmtXQjNaemNHaGxjbVV4RlRBVEJnb0praWFKay9Jc1pBRVpGZ1ZzCmIyTmhiREVMTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnTUNrTmhiR2xtYjNKdWFXRXhIREFhQmdOVkJBb00KRTNaamMyRXRNREZoTG1OdmNuQXVkR0Z1ZW5VeEd6QVpCZ05WQkFzTUVsWk5kMkZ5WlNCRmJtZHBibVZsY21sdQpaekNDQWFJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dHUEFEQ0NBWW9DZ2dHQkFLWTFVQ2cxc1FSanEwYjI0QTlNCnVUTEd5dmVCVWZpelc4M2lxU0ZEUlNSL1Nld2xTVFNxMzlERkRXQVo4YXlnRTZYL1NOdzMzeXltMHJsb1daUGoKRkI1L0VWSEwrdVFZdi93OFdSNlZCb0NyWDhDOXMvZXhkM2ZuUVY0aDFDY3Y2UHlMNCttVm9wLy8wRVdOSWpCQgpWTm50dkFjamM1L0NSQjlDR29IZjhwbjBzMGRNZnRyZTZnaGRRMHAxYWdjMmpwYkx6TjNjd3NXSGFsT2dIcGtvCklvaDdBTVg3ZEVSem5UMnNObEFZcGE2aWM0NUQ2UVZpckRpZDR2bUZxQWltUm1nb3lzdjllb21ocmpiSldhbEsKVWhMNEwxczZhODhNQ2wwUFFDWlhmTEQ0eGl3UEtSeU9ia1ZmajJGeXBORm1LSDdMRlUzYVNvbWpCN3dBOUpMUwpqVUt5UEJUMnFiZzIrSlJPWGNUM1ZwUGIzNWl5QjFyNnB3U3FmUUZDQ0FZYmNHd2RmYTMrMVRMYzdpRVpnd0F2CnM2QUlBTWFaeHU0VDNmRXJraW1DTlFhRS91ZWszOUhqNy80WE1ONEhDVUUyRWkzamE3NG5XWjVuNVBjSnd0aEwKNU9zbWNRaVVwck42NDF3czgxdG5IR2Robkl4T1pTS21tdXAxeTR0a2xEd3M2d0lEQVFBQm8yWXdaREFkQmdOVgpIUTRFRmdRVWIvL0J3UUZ0Qmp3dVQzYjNQM1ZVbm5yVzYrOHdId1lEVlIwUkJCZ3dGb0VPWlcxaGFXeEFZV050ClpTNWpiMjJIQkg4QUFBRXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRQXcKRFFZSktvWklodmNOQVFFTEJRQURnZ0dCQUJHcnovME53L2VqMVNLZ1NWdVFWZndxZllRczE3SHVSMTNKNWFXcApEU0pQQW1JdEJqVFNYS3VOb1NWdVJjZUQxOXc1dlpkRVFTVmFiU1JEWGJCY0pRRFVWOURYMGdkWmVVZTBaVS9hCnN5YVRnWDFsWkxNZEVOejhHK1FQVEdjeTRGOGltRkJMMHFwWkdDbmJSK2FiMGh1S1hUU3NhKzFpdU10OUZnRWUKYXJrWG84b1FFdTBJZzNyelJBL0FpUU9ZaFRzZWt3aklIZWpEeDNtT3ZFaUJUQmlINlVqZ0puY00wVUJHRWlIeApnZDdsTnZVSzlDZFNueitxS0RDQ0F0QzVidGk5S2g1UHkwNEVUWUtUSlJjYW9qbzNmRnBJZmJZUHJOakFjL3orCmxoTGtBVENyREhERmtQMktadHlqN1VHeXc1Vml5TjBXeU0weUIxN2pZSEQwM056bzdNckh5MjlSSm91cTdFdlEKOE9CZHVOZXNTZFNiT2dXZ1NRK004KzVzcGF0eVJxZXdxcHNUNXJSTkk3L05raXpqOXlCNmtjSVBGeFVhemdpRwpaUTUvTGl5Nm5sWTJPSXNqRVdRczNIYmVnaVV2ZDdsY3VYZE1XYnRORi9UajhRZDdXOUFwM0RpNjZDOXdJdGtLCnUwUDhFV2ZYVXA4RWV3UUo2TWdweS91NjdBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
      - --issuer=https://192.168.220.67/wcp/pinniped
      - --scopes=offline_access,openid,pinniped:request-audience
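
Instead of a static token field, this users entry tells kubectl to run the pinniped-auth plugin whenever credentials are needed. Like any Kubernetes exec credential plugin, it prints an ExecCredential object on stdout; a rough sketch of its shape (values elided or made up) looks like this:

{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "status": {
    "expirationTimestamp": "2022-08-30T16:18:39Z",
    "token": "<short-lived, cluster-scoped token obtained through Pinniped>"
  }
}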

 

Accessing The Clusters 

To access resources available to you in your Supervisor Cluster Namespace, you simply add the --kubeconfig ~/.kube-tanzu/config flag to your kubectl commands.  In this case we list all of the clusters in the tkg Namespace.

ubuntu@cli-vm:~$ kubectl --kubeconfig ~/.kube-tanzu/config get clusters -n tkg
NAME    PHASE         AGE     VERSION
cc-01   Provisioned   7d21h   v1.23.8+vmware.2
ubuntu@cli-vm:~$
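If you prefer not to repeat the --kubeconfig flag on every command, the standard KUBECONFIG environment variable achieves the same thing for the current shell session:

export KUBECONFIG=~/.kube-tanzu/config
kubectl get clusters -n tkg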

 

We have spent a lot of time digging into the details of the config files. The reality is that you will almost never look at them, and interacting with the TKG clusters is easy; you simply need to generate the proper context. Remember that so far we have only generated a Supervisor Cluster and namespace context. tanzu cluster kubeconfig get cc-01 -n tkg generates the appropriate context for TKG cluster cc-01 in the ~/.kube/config file. This may be slightly confusing because you are now using the default Kubernetes config file rather than the ~/.kube-tanzu/config file you used previously. We then execute kubectl config use-context tanzu-cli-cc-01@cc-01 to switch our context.

ubuntu@cli-vm:~$ tanzu cluster kubeconfig get cc-01 -n tkg
ℹ  You can now access the cluster by running 'kubectl config use-context tanzu-cli-cc-01@cc-01'
ubuntu@cli-vm:~$ kubectl config use-context tanzu-cli-cc-01@cc-01
Switched to context "tanzu-cli-cc-01@cc-01".
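If you ever lose track of which contexts are available in ~/.kube/config, kubectl can list them and mark the current one; the output below is trimmed and illustrative:

kubectl config get-contexts

CURRENT   NAME                    CLUSTER   AUTHINFO
*         tanzu-cli-cc-01@cc-01   cc-01     tanzu-cli-cc-01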

 

And finally, we can verify that the context is our TKG cluster cc-01 by listing the pods in the kube-system namespace.  We see the Antrea container networking pods running.  Since the Supervisor does not use an overlay network with Antrea, we know the context has changed and any Kubernetes objects we wish to create will be deployed onto cluster cc-01.

ubuntu@cli-vm:~$ kubectl get pods -n kube-system
NAME                                                       READY   STATUS    RESTARTS          AGE
antrea-agent-45vnp                                         2/2     Running   10 (5d23h ago)    7d21h
antrea-agent-8hnr4                                         2/2     Running   8 (6d9h ago)      7d21h
antrea-agent-sfrgs                                         2/2     Running   9 (5d23h ago)     7d21h
antrea-controller-c9b545758-z67kt                          1/1     Running   12 (5d23h ago)    7d21h
coredns-7d8f74b498-g4pj2                                   1/1     Running   4 (6d9h ago)      7d21h
coredns-7d8f74b498-q84fj                                   1/1     Running   4 (6d9h ago)      7d21h
docker-registry-cc-01-mgbpl-97dsm                          1/1     Running   0                 7d21h
docker-registry-cc-01-node-pool-1-xxn9v-5ccb88b954-hvgh9   1/1     Running   0                 7d21h
docker-registry-cc-01-node-pool-2-n6x66-784db4b47f-5zdjp   1/1     Running   0                 7d21h
etcd-cc-01-mgbpl-97dsm                                     1/1     Running   7                 7d21h
kube-apiserver-cc-01-mgbpl-97dsm                           1/1     Running   9 (5d23h ago)     7d21h
kube-controller-manager-cc-01-mgbpl-97dsm                  1/1     Running   840 (8m20s ago)   7d21h
kube-proxy-9wpgv                                           1/1     Running   0                 7d21h
kube-proxy-gxlms                                           1/1     Running   0                 7d21h
kube-proxy-xpkn7                                           1/1     Running   0                 7d21h
kube-scheduler-cc-01-mgbpl-97dsm                           1/1     Running   836 (8m20s ago)   7d21h
metrics-server-fd7dbcdcf-sxk27                             1/1     Running   0                 7d21h

 

With vSphere 8, customers now have a choice for how they want users to authenticate to their Kubernetes environment. In this article, we have looked at the setup and configuration needed to allow users easy access to their TKG clusters using either the built-in vSphere Single Sign-On or an external IDP of their choice through integration with the Pinniped authentication service.
