API Guide for Container Service Extension 4.2

The Container Service Extension (CSE) 4.2 has been released together with the VMware Cloud Director Extension for Tanzu Mission Control Self-Managed (TMC-SM for VCD). You can find the announcement blogs for CSE here and for TMC-SM for VCD here. We reviewed the API guide for the 4.0 release in the past. While the VCD API is supported, there is still a need for a detailed guide on using the Cluster API with VMware Cloud Director (VCD) to create and manage Tanzu Kubernetes Grid (TKG) clusters. Generating the necessary payload for these operations can be an involved process, as it requires manually adjusting the payload produced by the Cluster API. This blog post simplifies that process, providing a step-by-step guide to help our customers create the correct payload for their VCD infrastructure so they can effectively integrate and manage their TKG clusters.

This API guide applies to Tanzu Kubernetes clusters created by CSE 4.2.

The existing prerequisites for customers to create TKG clusters in their organizations also apply to the automation flow. These prerequisites are summarized here and can be found in the official documentation to onboard Provider and Tenant Admin users. The following sections provide an overview of the requirements for both cloud provider administrators and Tenant Admin users.

The steps to onboard customers are demonstrated in this video and documented here. Once the customer organization and its users are onboarded, they can use the next section to call the APIs directly or consume them to automate cluster operations.

As a quick summary, the following steps are expected to be performed by the cloud provider to onboard and prepare the customer:

  1. Review the CSE 4.2 Interoperability Matrix
  2. Allow necessary communication for the CSE server
  3. Start the CSE server (refer to the demo and official documentation)
  4. Onboard the customer (refer to the demo and official documentation)


Customer Org Admin/Kubernetes Cluster Author Steps

When the cloud provider has onboarded the customer onto the Container Service Extension, the organization administrator must create and assign users with the capability to create and manage TKG clusters for the customer organization. This documentation outlines the procedure for creating a user with the “Kubernetes cluster author” role within the tenant organization.

In this section we assume that the user “c1kubadmin” has obtained the necessary resources and access within the customer organization to execute Kubernetes cluster operations.

1. Collect VCD Infrastructure and Kubernetes Cluster details

This operation requires the following information from the VCD tenant portal. The tables below describe what information is required to generate the payload; the right column shows the example values used as reference in this blog post.

Infrastructure details:

| Input | Example Value for this blog |
|---|---|
| VCD_SITE | https://vcd01.vcf.corp.local |
| VCD_ORGANIZATION | Customer1 |
| VCD_ORGANIZATION_VDC | c1 |
| VCD_ORGANIZATION_VDC_NETWORK | network1 |
| VCD_CATALOG | cse shared catalog |

Table 1 – Infrastructure details

| Input | Example Value for this blog |
|---|---|
| VCD_TEMPLATE_NAME | Ubuntu 20.04 and Kubernetes v1.27.5+vmware.1 |
| VCD_CONTROL_PLANE_SIZING_POLICY | TKG small |
| VCD_CONTROL_PLANE_STORAGE_PROFILE | lab-shared-storage |
| VCD_CONTROL_PLANE_PLACEMENT_POLICY | capacity |
| VCD_WORKER_STORAGE_PROFILE | lab-shared-storage |
| CONTROL_PLANE_MACHINE_COUNT | 1 |
| WORKER_MACHINE_COUNT | 1 |
| VCD_REFRESH_TOKEN_B64 | "MHB1d0tXSllVb2twU2tGRjExNllCNGZnVWZqTm5UZ2U=" * |
* Encode your API token to base64. Refer to the VMware documentation to generate an API token before converting it to base64.

Table 2 – Kubernetes Cluster Properties
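As an illustration, on macOS or Linux the API token can be base64-encoded with a one-liner like the following (the token value is a placeholder to substitute with your own):

echo -n "<your-api-token>" | base64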

2. Install tools on local machine to generate the capiyaml

Once the tenant user has collected all the information, they will have to install the following components on their local machine: clusterctl 1.4.0, kind 0.17.0, and Docker 20.10.21. The following steps require the information collected above, but do not require access to the VCD infrastructure, to generate the capiyaml payload.
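As a quick sanity check (a minimal sketch, assuming the tools are already installed and on your PATH), the versions can be verified as follows:

clusterctl version
kind version
docker version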

3. Copy TKG Custom Resource Files locally

The CSE 4.2 release supports and recommends CAPVCD version 1.2.0. Copy the TKG CRS files locally by following the instructions from the official CAPVCD documentation to download the files and generate the capi yaml here. For completeness, the steps are described below.

In case the desired Kubernetes version from a particular TKG version is missing from the folder, use the script located here to collect product versions and create the files based on the instructions. The following table lists the supported etcd, CoreDNS, TKG, and TKr versions for the CSE 4.2 release.

| Kubernetes Version | etcd ImageTag | CoreDNS ImageTag | Complete Unique Version | OVA | TKG Product Version | TKr Version |
|---|---|---|---|---|---|---|
| v1.27.5+vmware.1 | v3.5.7_vmware.6 | v1.10.1_vmware.7 | v1.27.5+vmware.1-tkg.1 | ubuntu-2004-kube-v1.27.5+vmware.1-tkg.1-0eb96d2f9f4f705ac87c40633d4b69st.ova | 2.4 | v1.27.5---vmware.1-tkg.1 |
| v1.26.8+vmware.1 | v3.5.6_vmware.20 | v1.10.1_vmware.7 | v1.26.8+vmware.1-tkg.1 | ubuntu-2004-kube-v1.26.8+vmware.1-tkg.1-b8c57a6c8c98d227f74e7b1a9eef27st.ova | 2.4 | v1.26.8---vmware.1-tkg.1 |
| v1.25.13+vmware.1 | v3.5.6_vmware.20 | v1.10.1_vmware.7 | v1.25.13+vmware.1-tkg.1 | ubuntu-2004-kube-v1.25.13+vmware.1-tkg.1-0031669997707d1c644156b8fc31ebst.ova | 2.4 | v1.25.13---vmware.1-tkg.1 |

Table 3 – Kubernetes, etcd, CoreDNS, and other component versions for Tanzu Kubernetes releases in CSE 4.2

Create a folder structure for CAPVCD in your working directory. 


mkdir ~/infrastructure-vcd/ 

cd ~/infrastructure-vcd

mkdir v1.2.0

cd v1.2.0

Copy the contents of the templates directory to ~/infrastructure-vcd/v1.2.0/.

Copy metadata.yaml to ~/infrastructure-vcd/v1.2.0/
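One possible way to fetch these files, assuming you are copying them from a local clone of the CAPVCD repository (the repository URL and in-repo paths below are assumptions and may differ by release):

git clone https://github.com/vmware/cluster-api-provider-cloud-director.git
cp cluster-api-provider-cloud-director/templates/cluster-template*.yaml ~/infrastructure-vcd/v1.2.0/
cp cluster-api-provider-cloud-director/metadata.yaml ~/infrastructure-vcd/v1.2.0/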

After copying all files, the folder structure should look as follows:

ls -lrta ~/infrastructure-vcd/v1.2.0
total 472
-rw-r--r--@  1 bhatts  staff   9379 Feb 16 14:56 cluster-template.yaml
-rw-r--r--@  1 bhatts  staff   8934 Feb 16 14:56 cluster-template-v1.27.5.yaml
-rw-r--r--@  1 bhatts  staff   8996 Feb 16 14:56 cluster-template-v1.27.5-crs.yaml
-rw-r--r--@  1 bhatts  staff   8935 Feb 16 14:59 cluster-template-v1.26.8.yaml
-rw-r--r--@  1 bhatts  staff   8997 Feb 16 14:59 cluster-template-v1.26.8-crs.yaml
-rw-r--r--@  1 bhatts  staff   8933 Feb 16 14:59 cluster-template-v1.25.7.yaml
-rw-r--r--@  1 bhatts  staff   8989 Feb 16 14:59 cluster-template-v1.25.7-crs.yaml
-rw-r--r--@  1 bhatts  staff   8940 Feb 16 15:00 cluster-template-v1.24.10.yaml
-rw-r--r--@  1 bhatts  staff   9002 Feb 16 15:00 cluster-template-v1.24.10-crs.yaml
-rw-r--r--@  1 bhatts  staff   9009 Feb 16 15:00 cluster-template-v1.20.8.yaml
-rw-r--r--@  1 bhatts  staff   8983 Feb 16 15:00 cluster-template-v1.20.8-crs.yaml
-rw-r--r--@  1 bhatts  staff   8234 Feb 16 15:00 cluster-class-template.yaml
drwxr-xr-x   2 bhatts  staff     64 Feb 16 15:02 crs
drwxr-xr-x   4 bhatts  staff    128 Feb 16 15:05 cni
drwxr-xr-x   6 bhatts  staff    192 Feb 16 15:06 csi
drwxr-xr-x   4 bhatts  staff    128 Feb 16 15:06 cpi
-rw-r--r--@  1 bhatts  staff    332 Feb 16 15:07 metadata.yaml
-rw-r--r--@  1 bhatts  staff  77038 Feb 16 15:40 infrastructure-components.yaml
drwxr-xr-x   5 bhatts  staff    160 Feb 17 15:47 ..
-rw-r--r--@  1 bhatts  staff   6148 Feb 17 15:47 .DS_Store
-rw-r--r--@  1 bhatts  staff   3315 Feb 17 15:53 clusterctl.yaml
drwxr-xr-x  23 bhatts  staff    736 Feb 18 00:51 .

crs % ls -lrta
total 0
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:42 .
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:51 cni
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:54 cpi
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:55 csi
drwxr-xr-x  13 bhatts  staff  416 Jan 30 18:53 ..

Compose ‘clusterctl’ yaml to generate CAPI yaml

Copy ~/infrastructure-vcd/v1.2.0/clusterctl.yaml to ~/.cluster-api/clusterctl.yaml. The clusterctl command reads ~/.cluster-api/clusterctl.yaml to create the capiyaml payload. Update this file with the infrastructure details collected in step 1 before proceeding.
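For example:

mkdir -p ~/.cluster-api
cp ~/infrastructure-vcd/v1.2.0/clusterctl.yaml ~/.cluster-api/clusterctl.yaml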

Update providers.url in ~/.cluster-api/clusterctl.yaml to point to ~/infrastructure-vcd/v1.2.0/infrastructure-components.yaml (as an absolute path).


providers:
  - name: "vcd"
    url: "/Users/bhatts/infrastructure-vcd/v1.2.0/infrastructure-components.yaml" ## this must be an absolute path
    type: "InfrastructureProvider"

At this point, your ~/.cluster-api/clusterctl.yaml values should look as follows:


# This file needs to be copied to ~/.cluster-api/ for clusterctl init and generate commands to work.
# Replace the providers.url to your local repo
#
# Below are the sample commands
# clusterctl init --infrastructure vcd
# clusterctl generate cluster demo -i vcd:v1.2.0
# clusterctl generate cluster demo -i vcd:v1.2.0 -f v1.25.7

LatestRelease:
  URL: https://github.com/kubernetes-sigs/cluster-api/releases/tag/v1.4.0
  Version: v1.4.0
cert-manager:
  url: "https://github.com/cert-manager/cert-manager/releases/latest/cert-manager.yaml"
providers:
# provider name must correspond with provider url, as clusterctl follows semantic "path/infrastructure-${name}/v1.0.0/infrastructure-components.yaml"
# example url for name "vcdInfra" would be: /basepath/infrastructure-vcdInfra/v1.0.0/infrastructure-components.yaml
# if "v1.0.0" or "infrastructure-" prefix is omitted, there will be an error thrown expecting path format: {basepath}/{provider-name or provider-label}/{version}/{components.yaml}
# after the following path has been created, paste all cluster-templates inside the path specified provider url
# a fully functional folder will look similar to below:
# {basepath}/infrastructure-vcd/v1.0.0/infrastructure-components.yaml, clusterctl-template.yaml, clusterctl-template-v1.20.8.yaml
  - name: "vcd"
    url: "/Users/bhatts/infrastructure-vcd/v1.2.0/infrastructure-components.yaml"
    type: "InfrastructureProvider"

EXP_CLUSTER_RESOURCE_SET: true

# Mandatory VCD specific properties
VCD_SITE: "https://vcd01.vcf.corp.local"
VCD_ORGANIZATION: "Customer1"
VCD_ORGANIZATION_VDC: "c1"
VCD_ORGANIZATION_VDC_NETWORK: "network1"
VCD_CATALOG: "cse shared catalog"
VCD_TEMPLATE_NAME: "Ubuntu 20.04 and Kubernetes v1.27.5+vmware.1"
VCD_USERNAME_B64: "" # It is okay to leave username and password empty if VCD_REFRESH_TOKEN_B64 is specified
VCD_PASSWORD_B64: "" # It is okay to leave username and password empty if VCD_REFRESH_TOKEN_B64 is specified
VCD_REFRESH_TOKEN_B64: "" # API token of the VCD tenant user; it is okay to leave this empty if VCD_USERNAME_B64 and VCD_PASSWORD_B64 are specified

# Optional VCD specific properties
VCD_CONTROL_PLANE_SIZING_POLICY: "TKG small"
VCD_CONTROL_PLANE_STORAGE_PROFILE: "*"
VCD_CONTROL_PLANE_PLACEMENT_POLICY: ""
VCD_WORKER_SIZING_POLICY: "mTKG small"
VCD_WORKER_PLACEMENT_POLICY: ""
VCD_WORKER_STORAGE_PROFILE: "*"
DISK_SIZE: 20Gi
VCD_RDE_ID: "urn:vcloud:entity:vmware:capvcdCluster:UUID"
VCD_VIP_CIDR: ""

# Kubernetes cluster properties
CLUSTER_NAME: "cse42"
TARGET_NAMESPACE: default
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
KUBERNETES_VERSION: Ubuntu 20.04 and Kubernetes v1.27.5+vmware.1 # Ensure this matches with the version of the VCD_TEMPLATE_NAME
ETCD_VERSION: v3.4.13_vmware.14 # Ignore this property if you are using one of the existing flavors
DNS_VERSION: v1.7.0_vmware.12 # Ignore this property if you are using one of the existing flavors
POD_CIDR: "100.96.0.0/11"
SERVICE_CIDR: "100.64.0.0/13"
TKR_VERSION: v1.20.8---vmware.1-tkg.2 # Ignore this property if you are using one of the existing flavors
TKG_VERSION: v1.4.2 # Ignore this property if you are using one of the existing flavors
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: ""
SSH_PUBLIC_KEY: ""
WORKER_POOL_NAME: "worker-pool-1"

Create a kind cluster and use clusterctl to generate the capiyaml for the CSE API payload

cat > kind-cluster-with-extramounts.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF

 

Create a local cluster on your machine. The commands below are for macOS; they can be executed similarly on your operating system of choice.

 

kind create cluster --config kind-cluster-with-extramounts.yaml
kubectl cluster-info --context kind-kind
kubectl config set-context kind-kind
kubectl get po -A -owide

Initialize clusterctl and Generate capiyaml on the kind cluster


clusterctl init --core cluster-api:v1.4.0 -b kubeadm:v1.4.0 -c kubeadm:v1.4.0 -i vcd:v1.2.0

 

clusterctl generate cluster thursday31 --kubernetes-version v1.27.5-crs > thursday31.yaml

Update the "kind: Cluster" object to reflect the cluster type "TKG"


OLD Metadata:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    ccm: external
    cni: antrea
    csi: external
  name: api5
  namespace: default

New Metadata:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-role.tkg.tanzu.vmware.com/management: ""
    tanzuKubernetesRelease: v1.27.5---vmware.1-tkg.1
    tkg.tanzu.vmware.com/cluster-name: thursday31
  annotations:
    osInfo: ubuntu,20.04,amd64
    TKGVERSION: v2.4.0
  name: thursday31
  namespace: default

At this point, the capiyaml is ready to be consumed by the VCD APIs to perform various operations. For verification, make sure the cluster name and namespace values are consistent throughout the file. Copy the content of the capiyaml and convert it to a JSON string using a tool such as the one here, or execute the following command with jq.

jq -Rs '.' < capiyaml_filename.yaml

Please note that non-supported TKG versions can result in unexpected behavior when performing API calls.

List Clusters

List all clusters in the customer organization. For the CSE 4.2 release, the capvcdCluster RDE version is 1.2.0.

GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1.2.0
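A minimal curl sketch of this call (the bearer token variable and API version shown are placeholders; use a version supported by your VCD):

curl -sk -X GET \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  "https://vcd01.vcf.corp.local/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1.2.0"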

Get Clusters

Filter Clusters by name

GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername

Get Cluster by ID

GET https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id}

Get Kubeconfig of the cluster

In CSE 4.2, entity.status.capvcd.private.kubeconfig is marked as a secure field. A regular GET call will not return this property. Users need to invoke a behavior that decrypts the content.

Execute the Behavior invocation API

https://{{vcd}}/cloudapi/1.0.0/entities/{{cluster-id}}/behaviors/urn:vcloud:behavior-interface:getFullEntity:cse:capvcd:1.2.0/invocations
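A sketch of the invocation with curl follows; it assumes behavior invocations are POST calls with an empty argument body, and that ${VCD_TOKEN} and ${CLUSTER_ID} hold your token and cluster RDE ID:

curl -ski -X POST \
  -H "Accept: application/json;version=37.0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -d '{}' \
  "https://vcd01.vcf.corp.local/cloudapi/1.0.0/entities/${CLUSTER_ID}/behaviors/urn:vcloud:behavior-interface:getFullEntity:cse:capvcd:1.2.0/invocations"
# -i prints the response headers so the task href in the "Location" header is visible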

Fetch the task href from the "Location" response header.

Location: https://{{vcd}}/api/task/036d881c-3435-44f3-9ad2-2ee9b299cabd

Perform a GET call on that task to return the full decrypted contents of the RDE as part of the task result.

Create a Cluster

With the CSE 4.2 release, the secret object in the capiYaml is optional. Alternatively, users can supply "spec.vcdKe.secure.apiToken" in the API call. The advantage of this approach is that the apiToken is marked as a secure field and is not returned in regular GET payloads.


{
    "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.2.0",
    "name": "thursday31",
    "externalId": null,
    "entity": {
        "kind": "CAPVCDCluster",
        "spec": {
            "vcdKe": {
                "secure": {
                    "apiToken": "MHB1d0tXSllVb2twU2tGRjExNllCNGZnVWZqTm5UZ2U="
                },
                "isVCDKECluster": true,
                "autoRepairOnErrors": false,
                "defaultStorageClassOptions": {
                    "filesystem": "ext4",
                    "k8sStorageClassName": "default-storage-class-1",
                    "vcdStorageProfileName": "*",
                    "useDeleteReclaimPolicy": true
                }
            },
            "capiYaml": ""
        },
        "apiVersion": "capvcd.vmware.com/v1.1"
    }
}
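Assuming the payload above is saved to create-cluster.json (with the capiYaml field populated with the JSON-escaped capiyaml string produced earlier), a minimal curl sketch of the create call posts it against the capvcdCluster entity type, the same endpoint highlighted in the recommendations section at the end of this post:

curl -sk -X POST \
  -H "Accept: application/json;version=37.0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -d @create-cluster.json \
  "https://vcd01.vcf.corp.local/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.2.0"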

Resize a Cluster

GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername

  • Fetch the cluster ID ("id": "urn:vcloud:entity:vmware:capvcdCluster:...") from the above API call's output.
  • Copy the complete output of the API response.
  • Note down the eTag value from the API response header.
  • Modify the "capiYaml" with the following values:
    • To resize control plane VMs, modify KubeadmControlPlane.spec.replicas with the desired number of control plane VMs. Note that only odd numbers of control plane nodes are supported.
    • To resize worker VMs, modify MachineDeployment.spec.replicas with the desired number of worker VMs.
  • While performing the PUT API call, ensure the fetched eTag value is included as the If-Match header.


PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}

headers:
Accept: application/json;version=37.0 (use an API version supported by your VCD)
Authorization: Bearer {token}
If-Match: {eTag value from the previous GET call}

BODY: Copy the entire body from the previous GET call and modify the capiYaml values as described in the modify step above.
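A minimal curl sketch of the PUT, assuming the modified GET response body is saved to cluster-update.json and ${ETAG} holds the eTag captured from the GET response headers:

curl -sk -X PUT \
  -H "Accept: application/json;version=37.0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -H "If-Match: ${ETAG}" \
  -d @cluster-update.json \
  "https://vcd01.vcf.corp.local/cloudapi/1.0.0/entities/${CLUSTER_ID}"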

Upgrade a Cluster

To upgrade a cluster, the provider admin needs to publish the desired Tanzu Kubernetes templates to the customer organization in the catalog used by the Container Service Extension.

Collect the GET API response for the cluster to be upgraded as follows:

GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername

  • Fetch the cluster ID ("id": "urn:vcloud:entity:vmware:capvcdCluster:...") from the above API call's output.
  • Copy the complete output of the API response.
  • Note down the eTag value from the API response header.
  • The customer user performing the cluster upgrade will require access to the information in Table 3. Modify the capiYaml values (such as the etcd, CoreDNS, OVA template, and TKr versions) to match the target TKG version. For example, for TKG version 1.5.4, an upgrade from v1.20.15+vmware.1 to v1.22.9+vmware.1 would require updating these values.

While performing the PUT API call, ensure the fetched eTag value is included as the If-Match header.


PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET}

headers:
Accept: application/json;version=37.0 (use an API version supported by your VCD)
Authorization: Bearer {token}
If-Match: {eTag value from the previous GET call}

BODY: Copy the entire body from the previous GET call and modify the capiYaml values as described in the step above; the same curl pattern shown in the Resize section applies here.

Delete a Cluster


GET https://vcd01.vcf.corp.local/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1.2.0?filter=name==thursday31

 

PUT https://vcd01.vcf.corp.local/cloudapi/1.0.0/entities/urn:vcloud:entity:vmware:capvcdCluster:ae297ec4-5c60-4a7b-9546-3b25a508be7c

 

Insert the below key/value pairs to mark the cluster for deletion or force deletion.

"markForDelete": true (add or modify this field to delete the cluster)

"forceDelete": true (add or modify this field to force delete the cluster)
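As an illustration, assuming the GET response has been saved to cluster.json, jq can set these fields before the PUT (file names are placeholders):

jq '.entity.spec.vcdKe.markForDelete = true' cluster.json > cluster-delete.json
# or, to force delete:
jq '.entity.spec.vcdKe.forceDelete = true' cluster.json > cluster-delete.json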

An example snippet of the API call is below:

For the actual operation, the user must take the entire output of the GET API call and include the eTag value as described above.


{
    "id": "urn:vcloud:entity:vmware:capvcdCluster:41eb820a-ecef-45a1-b509-6d3cc756167a",
    "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.2.0",
    "name": "thursday26",
    "externalId": null,
    "entity": {
        "kind": "CAPVCDCluster",
        "spec": {
            "vcdKe": {
                "secure": "******",
                "forceDelete": true,
                "markForDelete": true,
                "isVCDKECluster": true,
                "autoRepairOnErrors": false,
                "defaultStorageClassOptions": {
                    "fileSystem": "ext4",
                    "k8sStorageClassName": "default-storage-class-1",
                    "vcdStorageProfileName": "*",
                    "useDeleteReclaimPolicy": true
                }
            },
            "capiYaml":

Recommendations for API usage during automation

DO NOT hardcode API URLs with RDE versions. ALWAYS parameterize RDE versions. For example:

POST https://{{vcd}}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.2.0

Declare 1.2.0 as a variable. This allows easy API client upgrades to future versions of CSE.
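For example, in a shell-based client the RDE version can live in a single variable (names below are illustrative):

VCD="vcd01.vcf.corp.local"
RDE_VERSION="1.2.0"
curl -sk -X POST \
  -H "Accept: application/json;version=37.0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -d @create-cluster.json \
  "https://${VCD}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:${RDE_VERSION}"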

Ensure the API client code ignores any unknown/additional properties while unmarshaling the API response.


# For example, the capvcdCluster 1.2.0 API payload looks like below
{
  status: {
     kubernetesVersion: 1.25.7,
     nodePools: {}
  }
}
# In the future, the next version of capvcdCluster (1.3.0) may add more properties ("add-ons") to the payload.
# The old API client code must ensure it does not break on seeing newer properties in future payloads.
{
  status: {
     kubernetesVersion: 1.25.7,
     nodePools: {},
     add-ons: {} // new property in the future version
  }
}

To summarize, we looked at CRUD operations for Tanzu Kubernetes clusters on the VMware Cloud Director platform using VMware Cloud Director supported APIs. Please feel free to check out other resources for the Container Service Extension:

  1. Generate API token using VMware Cloud Director
  2. CSE 4.0 Official Documentation
  3. Cluster API for VMware Cloud Director Platform official Documentation
