September 20, 2022

TKG 2 - Choose Your Own Authentication: Integrated Pinniped OIDC in vSphere with Tanzu


By Michael West, VMware

With the launch of vSphere 8, vSphere with Tanzu has expanded your options for integrating external identity providers into your cluster authentication strategy. Pinniped is an authentication service for Kubernetes clusters, and OpenID Connect (OIDC) is a simple identity layer on top of the OAuth 2.0 protocol that allows clients to verify the identity of an end user based on the authentication performed by an identity provider. Using Pinniped, you can now bring your own OIDC provider as the source of user identities in your Kubernetes clusters. You may also continue to use vSphere Single Sign-On (SSO) as you did in vSphere 7. In this blog you will see how to set up an external identity provider (IDP) and how the authentication process works.

Why do we need Pinniped and external identity providers (IDPs)?

Kubernetes clusters generally have two categories of users: service accounts, which are managed by Kubernetes and bound to namespaces directly through API calls, and normal users. Service accounts are tied to credentials that are stored as secrets in Kubernetes and mounted into pods; they give in-cluster processes access to the Kubernetes API server. Normal users are just what you think they are: processes external to the Kubernetes cluster that need to be authenticated to the Kubernetes API. Kubernetes has no mechanism for creating and managing external users and their access to the API; this is done through an external service.
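As a quick sketch of the difference: a service account can be created and granted access entirely through the Kubernetes API, while there is no equivalent API call for creating a normal user. All names below are illustrative:

```shell
# Service accounts are first-class API objects; Kubernetes creates and
# stores their credentials itself (namespace and names are illustrative).
kubectl create serviceaccount build-bot -n demo
kubectl create rolebinding build-bot-edit --clusterrole=edit \
  --serviceaccount=demo:build-bot -n demo

# There is no "kubectl create user" -- a normal user such as
# mwest@vsphere.local must be authenticated by something outside the
# cluster (vSphere SSO, or an external IDP via Pinniped).
```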

In version 7 of vSphere with Tanzu, external user management and authentication for both the Supervisor Kubernetes cluster and Tanzu Kubernetes Grid (TKG) clusters was done through integration with vSphere Single Sign-On (SSO). The users resided either in vCenter or in another identity provider (IDP) that was added as an identity source in vCenter. An authenticating proxy in front of the Kubernetes API would route login requests to vCenter, which would return a JSON Web Token (JWT) that was added to a Kubernetes config file on the client. The token was presented with each subsequent Kubernetes API call to authenticate the user. This is a workable solution, but as customers moved to larger Kubernetes deployments with many clusters, a couple of limitations emerged. The vSphere SSO token exchange process was not designed to support the volume of exchanges that might be needed with many users logging in and generating tokens for large numbers of clusters. More importantly, development teams use a range of IDPs and have invested heavily in managing users and integrating other apps through their preferred IDP. They want to continue to use those IDPs natively, without introducing vCenter into the data path. Pinniped is the solution for this.

In this blog we will dig into the details of the vSphere Single Sign-On process and then see the implementation of an external IDP using the new Pinniped integration.


Cluster Authentication with vSphere Single Sign-On (SSO)

Let's begin by looking at using the vSphere plugin for kubectl to authenticate our user. This is the login that was available in vSphere 7 and continues to be available in vSphere 8. Notice that two users have been added to the TKG namespace in vCenter. mwest@vsphere.local uses vCenter as the identity source; the second user will be externally authenticated through Gitlab using OIDC.



Both users have been assigned the Edit Role on the TKG namespace. This means that once they are authenticated, they are authorized with the privileges in the Kubernetes edit ClusterRole through a RoleBinding in the TKG namespace. The external provider simply tells the Kubernetes API who the user is; there is no change in how the VI Admin assigns privileges on the clusters.
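Under the covers, the vCenter role assignment materializes as a standard Kubernetes RoleBinding. A hand-built equivalent would look like the following (the exact subject name vSphere generates may differ):

```shell
# Bind the built-in "edit" ClusterRole to a single user within the tkg
# namespace -- roughly what vSphere creates when the Edit Role is
# assigned in vCenter (binding name and subject are illustrative).
kubectl create rolebinding mwest-edit \
  --clusterrole=edit \
  --user=mwest@vsphere.local \
  --namespace=tkg
```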

kubectl get rolebinding -n tkg shows that both users, including mwest@vsphere.local, have been bound to the edit ClusterRole.

root@423d6fa70ad5f6badc8cd365fd6d90c8 [ ~ ]# kubectl get rolebinding -n tkg
NAME                                         ROLE                     AGE
cc-01-mc5gd-ccm                              Role/cc-01-mc5gd-ccm     6d21h
cc-01-mc5gd-pvcsi                            Role/cc-01-mc5gd-pvcsi   6d21h
wcp:tkg:group:vsphere.local:administrators   ClusterRole/edit         5d
                                             ClusterRole/edit         3d1h
wcp:tkg:user:vsphere.local:mwest             ClusterRole/edit         6m29s
root@423d6fa70ad5f6badc8cd365fd6d90c8 [ ~ ]#


In order to authenticate through vSphere SSO, developers log in using the kubectl vsphere login command. This command has two forms: one that builds contexts for the Supervisor namespaces the user has access to, and one that builds a context to access a TKG cluster. You might ask why there are two separate commands instead of a single login that builds all contexts. This is really a function of the architecture of vSphere SSO and the token exchange service. DevOps users may have access to a large number of clusters, and with a linear token exchange process, login times would grow for each cluster a particular user needed access to. To prevent multi-minute login times, users initiate a separate login when they need to access a particular TKG cluster.
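The two forms of the login look like this; SUPERVISOR-IP stands in for your Supervisor Cluster address:

```shell
# Form 1: Supervisor login -- builds a context for each Supervisor
# namespace the user can access (SUPERVISOR-IP is a placeholder).
kubectl vsphere login --server=SUPERVISOR-IP -u mwest@vsphere.local

# Form 2: TKG cluster login -- exchanges a fresh token and builds one
# additional context for a single workload cluster.
kubectl vsphere login --server=SUPERVISOR-IP -u mwest@vsphere.local \
  --tanzu-kubernetes-cluster-name cc-01 \
  --tanzu-kubernetes-cluster-namespace tkg
```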

With output verbosity set to max, you see the kubectl vsphere login command using the mwest@vsphere.local user. This is the login to the Supervisor Cluster. Two contexts were created: the default cluster context and the context for the namespace mwest@vsphere.local has access to. You see these context creations in the login command output below, with kubectl config set-cluster --server= and kubectl config set-context tkg --cluster= --user=wcp: --namespace=tkg

kubectl vsphere login --server -u mwest@vsphere.local --insecure-skip-tls-verify -v10

ubuntu@cli-vm:~$ kubectl vsphere login --server -u mwest@vsphere.local --insecure-skip-tls-verify -v10
DEBU[0000] User passed verbosity level: 10
DEBU[0000] Setting verbosity level: 10
DEBU[0000] Setting request timeout:
DEBU[0000] login called as: /usr/local/bin/kubectl-vsphere login --server -u mwest@vsphere.local --insecure-skip-tls-verify -v10
DEBU[0000] Creating wcp.Client for
INFO[0000] Does not appear to be a vCenter or ESXi address.
DEBU[0000] Got response:

INFO[0000] Using mwest@vsphere.local as username.
DEBU[0000] Env variable KUBECTL_VSPHERE_PASSWORD is present
KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
DEBU[0011] Got response: [{"namespace": "tkg", "master_host": "", "control_plane_api_server_port": 6443, "control_plane_DNS_names": []}]
DEBU[0012] Got response: {"session_id": "eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxYS5jb3JwLnRhbnp1XC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiU3lzdGVtQ29uZmlndXJhdGlvbi5BZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIl0sImV4cCI6MTY2MTgyNDY5MywiaWF0IjoxNjYxNzg4NjkzLCJqdGkiOiIxYjM2NzlkYi0yOTUyLTQ4ZTgtOTljNS00ZTc0M2QxNzJkNTMiLCJ1c2VybmFtZSI6Im13ZXN0In0.PKV7K_Rcuz4IGbsYvPxNTf2henN0xfTIyVyIiHSzOd8XfYis9TaoKM9rPIpb873z449UFKUih3O2Z9HKY2TMSqcoQ_zhLh-dHsUK2cZssdyidTOuq2x9NsPY2KYKBIm02BFosCr2BzXkm5DkeasN0PSjO6PYKxxdxch_ykoIObiTc1WmvoExxgW5PoVy09QwY1082V8lvahFSa0Abzm805SVgh1w6FGSvJKL7KRLpYL9-tIvDwo9lMnA4aiyv30mzVhyUX8LyfHDjkOcmv8HZQmUo6Rl9umydsrH8nV9_m0QYNzhcAzcf2a2InIyyM5PsjkkIsesKUk25liRUQ77RVqiZkKscr-sfN9jZcHb_j41LUAFnqVcO85ttjUBJJ97lDvWh6DSmUM_veTtA-jK8pCdGDu1-V-_H9eLSUdFYWrsSyC6oY8CHW7APkNcGOyr6nD86tD-41sRQwFIHiNsh_f-YArqfaQcOvkwSqRRoLZQAG1F9s3-rnLuVeUKfqaL"}
DEBU[0012] Found kubectl in $PATH
INFO[0012] kubectl version:
INFO[0012] Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
DEBU[0012] Calling `kubectl config set-cluster --server= --insecure-skip-tls-verify`
DEBU[0013] stdout: Cluster "" set.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.
DEBU[0013] stdout: User "wcp:" set.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.
DEBU[0013] Calling kubectl.
DEBU[0013] Calling kubectl.
DEBU[0013] Calling `kubectl config set-context tkg --cluster= --user=wcp: --namespace=tkg`
DEBU[0013] stdout: Context "tkg" created.
DEBU[0013] stderr:
DEBU[0013] Calling `kubectl config use-context tkg`
DEBU[0013] stdout: Switched to context "tkg".
DEBU[0013] stderr:
Logged in successfully.
DEBU[0013] Calling `kubectl config set-context --cluster= --user=wcp:`
DEBU[0013] stdout: Context "" created.
DEBU[0013] stderr:
DEBU[0013] Calling kubectl.

You have access to the following contexts:

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`


Before we look at the config files, let's verify that the tokens they contain came from vCenter. Authentication is facilitated through authenticating proxy pods in the Supervisor cluster that forward requests to vCenter. Looking at the logs for the appropriate auth proxy pod using kubectl logs wcp-authproxy-423d7e09122c75090c90a977f7772070 -n kube-system, we can see a connection to vCenter and that a bearer token was returned for user mwest@vsphere.local.

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): vcsa-01a.corp.tanzu:443
DEBUG:urllib3.connectionpool:https://vcsa-01a.corp.tanzu:443 "POST /wcp HTTP/1.1" 200 262
DEBUG:wcp.resources:[139987200936336] Got response from WCP: [Summary(namespace='tkg', master_host='', control_plane_api_server_port=6443, control_plane_dns_names=[])].
INFO:server:[139987200936336] "" - - [29/Aug/2022:16:18:39 +0000] "GET /wcp/workloads HTTP/1.0" 200 125 "-" "kube-plugin-vsphere bld 18396598 - cln 9120757" "mwest@VSPHERE.LOCAL"
DEBUG:server:[139987200948560] Request: b'POST' b'/wcp/login'
INFO:vclib.sso:[139987200948560] Got bearer token for mwest@vsphere.local.


Note that we are going to look at details across several config files as a means of understanding the authentication process. There is a lot of complexity here, but users will simply execute login and use-context commands to switch contexts without needing to be exposed to these details. The kubectl config file at ~/.kube/config shows our cluster and namespace contexts configured for user mwest@vsphere.local, as we would expect. Also note the auth token that was returned from the vSphere token exchange process.

ubuntu@cli-vm:~$ more .kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
contexts:
- context:
    user: wcp:
- context:
    namespace: tkg
    user: wcp:
  name: tkg
current-context: tkg
kind: Config
preferences: {}
users:
- name: wcp:
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxY


Digging further, we can decode the JSON Web Token (JWT), pronounced "jot", and see the payload. It is very clear that it was issued by vcsa-01a.corp.tanzu.

  "sub": "mwest@vsphere.local",
  "aud": "vmware-tes:vc:vns:k8s",
  "domain": "vsphere.local",
  "iss": "https://vcsa-01a.corp.tanzu/openidconnect/vsphere.local",
  "group_names": [
  "exp": 1661825920,
  "iat": 1661789920,
  "jti": "fedee4b2-8d9e-4814-aa41-ec9e389f2d2a",
  "username": "mwest"


Now let's log in to a TKG cluster. Incidentally, this cluster uses the ClusterClass-based reconciliation introduced with vSphere 8 and TKG 2.0. We execute kubectl vsphere login --server -u mwest@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name cc-01 --tanzu-kubernetes-cluster-namespace tkg. You see the new context for TKG cluster cc-01, with the token it will present to that cluster's API server.

ubuntu@cli-vm:~$ cat .kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
- cluster:
- cluster:
    insecure-skip-tls-verify: true
  name: supervisor-
contexts:
- context:
    user: wcp:
- context:
    user: wcp:
  name: cc-01
- context:
    namespace: tkg
    user: wcp:
  name: tkg
current-context: cc-01
kind: Config
preferences: {}
users:
- name: wcp:
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzIiwiZG9tYWluIjoidnNwaGVyZS5sb2NhbCIsImlzcyI6Imh0dHBzOlwvXC92Y3NhLTAxYS5jb3JwLnRhbnp1XC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiU3lzdGVtQ29uZmlndXJhdGlvbi5BZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIl0sImV4cCI6MTY2MTgyODk3MiwiaWF0IjoxNjYxNzkyOTcyLCJqdGkiOiI4MzdhZTcwOS0zMTdlLTRhMTAtYjYzNi1lZjU0NGJlMWViNjQiLCJ1c2VybmFtZSI6Im13ZXN0In0.B7LTFWadGU1JjMpP7IraiX0ktM56Nwf72MBerq9R-pylNU-bns8GCrMHX_HY7Hpf9a4OpKB27esk4qZhSYHm8ud6PNQq33jzQk4DD4VEknsESLt5P-p8NFGbeoda8VoquF_WHByFRxgZIkRrkuL8PldOBcCxRP04loQkXXtK91GSpNrrckxi9lK-RC9uqj47L0DZNar4Sun7sXC3itVXezaHxE0isiC9QkSTksv5FDTeJX4nZZNLPDGTz5NYu9RwAjzZirJPbP7hkrecav_0NII9sQK6aHir7dNBsgk9sbInGD-pTRyBawkcTKraq2R54llENMycennunsVAAeP8Wsrqe3AqPGomnzlT4Uy9IdmVgOMfMg_CFD-Fr4GMIgJ28E_DqeO66LpFa9ynlfO7vf55yfz5AGDt1YfRu0O2kF7wjnPLDD1N_CcXg4zdXCOc_XVV-eSEbtGd6vsb88X3jey5m2MjrBGZFpron0dX-rIqinPj6qZaCSjn5i1P4Pbi
- name: wcp:
  user:
    token: eyJraWQiOiJCODlFNkM4NTQ1M0VDM0UyRkQxNjlBM0JFNjUxOTExNjdFMkQ1RUUxIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJtd2VzdEB2c3BoZXJlLmxvY2FsIiwiYXVkIjoidm13YXJlLXRlczp2Yzp2bnM6azhzOjA1Mzc4YThjLWQ3MjgtNDZmZS1hMTNiLTE1OWFhMjkyZDhhMSIsImRvbWFpbiI6InZzcGhlcmUubG9jYWwiLCJpc3MiOiJodHRwczpcL1wvdmNzYS0wMWEuY29ycC50YW56dVwvb3BlbmlkY29ubmVjdFwvdnNwaGVyZS5sb2NhbCIsImdyb3VwX25hbWVzIjpbIkxpY2Vuc2VTZXJ2aWNlLkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJBZG1pbmlzdHJhdG9yc0B2c3BoZXJlLmxvY2FsIiwiRXZlcnlvbmVAdnNwaGVyZS5sb2NhbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCJdLCJleHAiOjE2NjE4Mjg5NzMsImlhdCI6MTY2MTc5Mjk3MywianRpIjoiY2I1NmQzYzQtMzg3My00MDUyLWEyMmUtNjI0M2IxMzUyYmM4IiwidXNlcm5hbWUiOiJtd2VzdCJ9.KqMemnoauGJZ-PSbRLwdu-pzeG86TCPGdTTD8RQex_N90rY4QMzuoDRP6TP8vT2TUlUCaKSE6-ZeIHpVSyLpP_Z1TNw6D1NhPbvxXQXnCVZv5n1nrjjkMxnrrrk1uGnY6nblgQnujczVrzEYVczsJbljHP8788DINE-iWqTT3xf5JDGesZDioXviDxw0uma4oNa8LVSQSwv2dGAaFVp1Aq1BBlgAR5rBjiz3S62MO97XmVlvkwKD4vv3tKbwpcYw1SfaPf5bE6IhZ25UFq_92-hYw0irZ7ggGHlmiVnIIzDkOpKOfWlJyhyihhKNQUOThwn_8NaUGYzMbw788v_husQWdVU7uIh_xjBV05lLHC-oCNr8VYwzzS65Qpet4utU9HhbbmsZj86M1ufxU1X4zTarYajF3EUJzUvPcWzVJhUll72JnePo_OXAJXRDsaBo5nq68qq0KmSXnUkNfiFpE5Ohl48KiUpHYTsrOZwBSQ672AsYwbRgbEwAmc29sCJD


kubectl config use-context cc-01 changes the context to point to the TKG cluster. A quick list of the pods with kubectl get pods -n kube-system verifies that we are pointing at that TKG cluster. The easiest way to know it's a TKG cluster is to look for the Antrea CNI pods; the Supervisor does not use an Antrea overlay network.

ubuntu@cli-vm:~$ kubectl get pods -n kube-system
NAMESPACE                      NAME                                                       READY   STATUS      RESTARTS         AGE
kube-system                    antrea-agent-45vnp                                         2/2     Running     10 (5d3h ago)    7d
kube-system                    antrea-agent-8hnr4                                         2/2     Running     8 (5d12h ago)    7d
kube-system                    antrea-agent-sfrgs                                         2/2     Running     9 (5d3h ago)     7d
kube-system                    antrea-controller-c9b545758-z67kt                          1/1     Running     12 (5d3h ago)    7d
kube-system                    coredns-7d8f74b498-g4pj2                                   1/1     Running     4 (5d12h ago)    7d
kube-system                    coredns-7d8f74b498-q84fj                                   1/1     Running     4 (5d12h ago)    7d
kube-system                    docker-registry-cc-01-mgbpl-97dsm                          1/1     Running     0                7d
kube-system                    docker-registry-cc-01-node-pool-1-xxn9v-5ccb88b954-hvgh9   1/1     Running     0                7d
kube-system                    docker-registry-cc-01-node-pool-2-n6x66-784db4b47f-5zdjp   1/1     Running     0                7d
kube-system                    etcd-cc-01-mgbpl-97dsm                                     1/1     Running     7                7d
kube-system                    kube-apiserver-cc-01-mgbpl-97dsm                           1/1     Running     9 (5d3h ago)     7d
kube-system                    kube-controller-manager-cc-01-mgbpl-97dsm                  1/1     Running     821 (51m ago)    7d
kube-system                    kube-proxy-9wpgv                                           1/1     Running     0                7d
kube-system                    kube-proxy-gxlms                                           1/1     Running     0                7d
kube-system                    kube-proxy-xpkn7                                           1/1     Running     0                7d
kube-system                    kube-scheduler-cc-01-mgbpl-97dsm                           1/1     Running     816 (51m ago)    7d
kube-system                    metrics-server-fd7dbcdcf-sxk27                             1/1     Running     0                7d


Cluster Authentication with IDP through Pinniped


After deleting the ~/.kube/config file used in the vSphere SSO login, we will go through a similar process using an external IDP. For this testing, we will use Gitlab as the IDP. Setup will obviously be slightly different for each IDP, but the concepts are generally the same. Each Supervisor can have a different identity provider associated with it, so we start by going to the Supervisor Config page in vCenter and finding the Pinniped callback URL that is specific to your Supervisor Cluster. This is where the IDP response is sent upon user authentication, and it needs to be added to your app in the IDP. Note that we have already added the Gitlab IDP to this Supervisor, but we will come back to update the configuration once we go through the setup in Gitlab itself.



We have now logged in to Gitlab to tell it about the Supervisor Cluster we want to authenticate against. In Gitlab, we create a group called pinniped-testers with myself as a member. Authentication will happen via my primary email address.



We then create an application, name it Pinniped auth, and associate the callback URL for the Supervisor cluster. We will need the Application ID and Secret from this screen to complete the IDP setup in the Supervisor Cluster.


 Add Identity Provider to Supervisor Cluster


From the Supervisor Cluster Config page, we Add Identity Provider. For Gitlab we need to provide the Gitlab URL, the client ID and the secret. When working with identity providers and OIDC, you will see references to scopes and claims. Claims are simply pieces of user information that are returned to the client app during the auth process, and scopes are a way to group related claims. In the example above, we are requesting the openid, profile and email scopes. There may be other scopes you want to add that are specific to your environment and IDP, but those are generally optional. Now we are ready to try it out. Pretty simple setup.
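If you want to see which scopes and claims your IDP actually supports, any OIDC provider publishes a discovery document at a well-known path; the hostname below is a placeholder for your own Gitlab instance:

```shell
# Standard OIDC discovery endpoint -- look for "scopes_supported" and
# "claims_supported" in the JSON response (hostname is a placeholder).
curl -s https://gitlab.example.com/.well-known/openid-configuration
```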



The Tanzu CLI

Pinniped-based authentication is integrated with the Tanzu CLI. For users of vSphere with Tanzu in vSphere 7, this will be something new. We have integrated this CLI to provide consistency in the experience of deploying TKG clusters across public and private cloud environments. You can learn more about the Tanzu CLI HERE. We start with the tanzu login command, which specifies the endpoint for our Supervisor cluster and the external username we want to use: tanzu login --endpoint --name

The command will return a link to authenticate your user. Depending on your environment and/or IDP, a browser window may automatically pop up and route you to the login page for your IDP, or you may have to manually paste this link into your browser. You may also be presented with a code that must be pasted into the CLI to complete authentication.

ubuntu@cli-vm:~$ tanzu login  --endpoint --name
 ¹  Detected a vSphere Supervisor being used
Log in by visiting this link:

    Optionally, paste your authorization code: 


Once redirected to the appropriate IDP login screen, we enter valid Gitlab credentials for the user. The result will be the creation of the JSON Web Token (JWT) and the return of the Certificate Authority (CA) cert that can be used for authentication to the Kubernetes API server.

Screenshot of tanzu login --endpoint


Let's look at the configuration details. Once you are successfully logged in, you will have two separate configuration files used by the Tanzu CLI to authenticate to your Kubernetes clusters. There is a lot of content in this file, but let's focus on the ClientConfig. You see that the user is the one we expected from our login command. There is also a context, tanzu-cli......., and a path to another config file (~/.kube-tanzu/config) that will contain context information in a format similar to what we saw in the ~/.kube/config used by kubectl after the vSphere plugin-based login.

ubuntu@cli-vm:~$ cat .config/tanzu/config.yaml
kind: ClientConfig
metadata:
  creationTimestamp: null
servers:
- managementClusterOpts:
    context: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
    path: /home/ubuntu/.kube-tanzu/config
  type: managementcluster


The ~/.kube-tanzu/config file contains the appropriate credentials needed to access the Kubernetes APIs of the clusters the user has access to. The format is slightly different because Pinniped functions differently from the vSphere SSO token exchange: the tokens are not contained in the file but are obtained and refreshed as part of the Pinniped deployment. The Certificate Authority (CA) cert tells the Tanzu CLI that it can trust the cluster it wants to access. This kubeconfig file can be used in subsequent kubectl commands.

ubuntu@cli-vm:~$ cat .kube-tanzu/config
apiVersion: v1
clusters:
- cluster:
  name: ac1495c2-1db5-48dc-b064-df533b221e2c
contexts:
- context:
    cluster: ac1495c2-1db5-48dc-b064-df533b221e2c
    user: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c
  name: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
current-context: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c@ac1495c2-1db5-48dc-b064-df533b221e2c
kind: Config
preferences: {}
users:
- name: tanzu-cli-ac1495c2-1db5-48dc-b064-df533b221e2c
  user:
    exec:
      args:
      - pinniped-auth
      - login
      - --enable-concierge
      - --concierge-authenticator-name=tkg-jwt-authenticator
      - --concierge-authenticator-type=jwt
      - --concierge-is-cluster-scoped=true
      - --concierge-endpoint=
      - --issuer=
      - --scopes=offline_access,openid,pinniped:request-audience
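This exec stanza uses the standard Kubernetes client-go credential plugin mechanism: kubectl runs the listed command and reads a JSON ExecCredential object from its stdout, then presents the returned token as a bearer token. A minimal sketch of that contract; the token value below is a dummy:

```shell
# The JSON shape kubectl expects on stdout from an exec credential
# plugin such as "tanzu pinniped-auth login" (token is a dummy value).
cred='{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": { "token": "eyJ-dummy-token" }
}'
printf '%s\n' "$cred"
```

Because the plugin is invoked on demand, tokens can be refreshed transparently instead of sitting statically in the kubeconfig, which is why no token field appears in this file.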


Accessing The Clusters 

To access resources available to you in your Supervisor Cluster Namespace, you simply add the --kubeconfig ~/.kube-tanzu/config flag to your kubectl commands.  In this case we list all of the clusters in the tkg Namespace.

ubuntu@cli-vm:~$ kubectl --kubeconfig ~/.kube-tanzu/config get clusters -n tkg
NAME    PHASE         AGE     VERSION
cc-01   Provisioned   7d21h   v1.23.8+vmware.2


We have spent a lot of time digging into the details of the config files. The reality is that you will almost never look at them, and interacting with the TKG clusters is easy; you simply need to generate the proper context. Remember that so far we have only generated a Supervisor cluster and namespace context. tanzu cluster kubeconfig get cc-01 -n tkg will generate the appropriate context for TKG cluster cc-01 in the ~/.kube/config file. This may be slightly confusing, because you are now using the default Kubernetes config file rather than the ~/.kube-tanzu/config file you used previously. We execute kubectl config use-context tanzu-cli-cc-01@cc-01 to switch our context.

ubuntu@cli-vm:~$ tanzu cluster kubeconfig get cc-01 -n tkg
 ¹  You can now access the cluster by running 'kubectl config use-context tanzu-cli-cc-01@cc-01'
ubuntu@cli-vm:~$ kubectl config use-context tanzu-cli-cc-01@cc-01
Switched to context "tanzu-cli-cc-01@cc-01".


And finally, we can verify that the context is our TKG cluster cc-01 by listing the pods in the kube-system namespace.  We see the Antrea container networking pods running.  Since the Supervisor does not use an overlay network with Antrea, we know the context has changed and any Kubernetes objects we wish to create will be deployed onto cluster cc-01.

ubuntu@cli-vm:~$ kubectl get pods -n kube-system
NAME                                                       READY   STATUS    RESTARTS          AGE
antrea-agent-45vnp                                         2/2     Running   10 (5d23h ago)    7d21h
antrea-agent-8hnr4                                         2/2     Running   8 (6d9h ago)      7d21h
antrea-agent-sfrgs                                         2/2     Running   9 (5d23h ago)     7d21h
antrea-controller-c9b545758-z67kt                          1/1     Running   12 (5d23h ago)    7d21h
coredns-7d8f74b498-g4pj2                                   1/1     Running   4 (6d9h ago)      7d21h
coredns-7d8f74b498-q84fj                                   1/1     Running   4 (6d9h ago)      7d21h
docker-registry-cc-01-mgbpl-97dsm                          1/1     Running   0                 7d21h
docker-registry-cc-01-node-pool-1-xxn9v-5ccb88b954-hvgh9   1/1     Running   0                 7d21h
docker-registry-cc-01-node-pool-2-n6x66-784db4b47f-5zdjp   1/1     Running   0                 7d21h
etcd-cc-01-mgbpl-97dsm                                     1/1     Running   7                 7d21h
kube-apiserver-cc-01-mgbpl-97dsm                           1/1     Running   9 (5d23h ago)     7d21h
kube-controller-manager-cc-01-mgbpl-97dsm                  1/1     Running   840 (8m20s ago)   7d21h
kube-proxy-9wpgv                                           1/1     Running   0                 7d21h
kube-proxy-gxlms                                           1/1     Running   0                 7d21h
kube-proxy-xpkn7                                           1/1     Running   0                 7d21h
kube-scheduler-cc-01-mgbpl-97dsm                           1/1     Running   836 (8m20s ago)   7d21h
metrics-server-fd7dbcdcf-sxk27                             1/1     Running   0                 7d21h


With vSphere 8, customers now have a choice for how they want users to authenticate to their Kubernetes environment. In this article, we have looked at the setup and configuration needed to allow users easy access to their TKG clusters using either the built-in vSphere Single Sign-On or an external IDP of choice through integration with the Pinniped authentication service.
