After my previous posts related to Argo CD (one about
argocd-autopilot and another with some
usage examples) I started to look into
Kluctl (I also plan to review Flux, but I'm more interested in the kluctl approach right now).
While reading an entry on the project blog about Cluster API I somehow ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or for running CI/CD tests before deploying things on shared clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running at any given time.
In this post I will deploy a vcluster using the k3d_argocd Kubernetes cluster (the one we created in the posts about argocd) as the host and will show how to:
- use its ingress (in our case traefik) to access the API of the virtual cluster (this removes the need to use the vcluster connect command to access it with kubectl),
- publish the ingress objects deployed on the virtual cluster on the host ingress, and
- use the sealed-secrets of the host cluster to manage the virtual cluster secrets.
Creating the virtual cluster
Installing the vcluster application
To create the virtual clusters we need the vcluster command; we can install it with arkade:
❯ arkade get vcluster
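If arkade is not available the binary can also be downloaded directly from the project releases; a minimal sketch, assuming the loft-sh/vcluster release asset naming for a Linux amd64 machine:
# Assumption: the latest release publishes a vcluster-<os>-<arch> binary
❯ curl -fsSLo vcluster \
    "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
❯ chmod +x vcluster
❯ sudo mv vcluster /usr/local/bin/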
The vcluster.yaml file
To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):
controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # Sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"
In the controlPlane section we've added the proxy.extraSANs entry with an extra hostname to make sure it is included in the cluster certificates when we access the API through an ingress.
The exportKubeConfig section creates a kubeconfig secret in the virtual cluster namespace on the host using the provided hostname; the secret can be used by GitOps tools, or we can dump it to a file to connect from our machine.
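For example, once its context has been merged into our local kubeconfig (see below) the virtual cluster could be registered on the Argo CD instance of the host with something like the following (a sketch, assuming the argocd CLI is installed and logged in):
# Register the vcluster context (created by the exportKubeConfig section) as an Argo CD cluster
❯ argocd cluster add my-vcluster_k3d-argocd --name my-vcluster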
In the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the host cluster:
- We copy the ingress definitions so they use the ingress server that runs on the host, making them reachable from the outside world.
- The service account synchronization is not really needed, but we enable it because it would be useful if we later test this configuration on EKS with IAM roles for the service accounts (see the sketch right after this list).
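For reference, on EKS that would mean annotating the service account inside the virtual cluster with the IAM role so that its synced copy on the host can be used with IRSA; a hypothetical sketch (the service account name, account id and role are placeholders, and the vkubectl alias is defined later in this post):
# Hypothetical IRSA example (EKS only): the ARN below is a placeholder
❯ vkubectl annotate serviceaccount dummyhttp \
    eks.amazonaws.com/role-arn=arn:aws:iam::111111111111:role/dummyhttp-role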
In the opposite direction (from the host to the virtual cluster) we synchronize:
- The IngressClass objects, to be able to use the host ingress server(s).
- The Nodes (we are not using the information right now, but it could be interesting if we want to have the real information of the nodes running the pods of the virtual cluster).
- The Secrets from the my-vcluster-default host namespace to the default namespace of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets which are copied automatically to the virtual one. Initially we only copy secrets for one namespace, but if the virtual cluster needs others we can add namespaces on the host and their mappings to the virtual one in the vcluster.yaml file.
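Once the virtual cluster is up and we have its kubeconfig (the vkubectl alias is defined later in this post) both sync directions can be verified with plain kubectl queries, for example:
# toHost: objects created inside the vcluster appear in its host namespace
❯ kubectl get ingress,serviceaccounts -n my-vcluster
# fromHost: host-level objects and mapped secrets are visible inside the vcluster
❯ vkubectl get ingressclasses,nodes
❯ vkubectl get secrets -n default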
Creating the virtual cluster
To create the virtual cluster we run the following command:
vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
  --values vcluster.yaml
It creates the virtual cluster in the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we don't pass that option the command adds an entry to our kubeconfig and launches a proxy to connect to the virtual cluster, which we don't plan to use).
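After a short wait we can confirm that the control plane is up using the vcluster CLI and kubectl against the host cluster:
# List the virtual clusters known to the CLI and their status
❯ vcluster list
# The control plane runs as a StatefulSet pod in the my-vcluster namespace
❯ kubectl get pods -n my-vcluster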
Adding an ingress TCP route to connect to the vcluster API
As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
  - websecure
  routes:
  - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
    services:
    - name: my-vcluster
      port: 443
  tls:
    passthrough: true
Once we apply those changes the cluster API will be available on the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough) that includes the hostname we use (we adjusted it in the vcluster.yaml file, as explained before).
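Because we are using TLS passthrough, the certificate returned on that hostname is the one generated by the vcluster itself; a quick way to double check that the extra SAN is present (assuming a reasonably recent OpenSSL that supports the -ext option):
# Print the subjectAltName extension of the certificate served on the vcluster API host
❯ openssl s_client -connect my-vcluster-api.lo.mixinet.net:8443 \
    -servername my-vcluster-api.lo.mixinet.net </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName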
Getting the kubeconfig for the vcluster
Once the vcluster is running we will have its kubeconfig available in the my-vcluster-kubeconfig secret in its namespace on the host cluster.
To dump it to the ~/.kube/my-vcluster-config file we can do the following:
❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
  --template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config
Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:
alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"
Or we can merge the configuration with the one referenced by the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only contains the path of a single file the merge can be done running the following:
# Use $HOME instead of ~, as the tilde is not expanded inside double quotes
KUBECONFIG="$KUBECONFIG:$HOME/.kube/my-vcluster-config" kubectl config view \
  --flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"
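After the merge, switching between the host and the virtual cluster is just a matter of changing the context; for example with kubectx (assuming it is installed and that the host context is the default k3d-argocd one):
# Switch to the virtual cluster context created by the exportKubeConfig section
❯ kubectx my-vcluster_k3d-argocd
# And back to the host cluster context
❯ kubectx k3d-argocd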
In the rest of this post we will use the vkubectl alias when connecting to the virtual cluster; i.e. to check that it works we can run the cluster-info subcommand:
❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Installing the dummyhttpd application
To test the virtual cluster we are going to install the dummyhttpd application using the following
kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
- name: dummyhttp-configmap
  literals:
  - CM_VAR="Vcluster Test Value"
  behavior: create
  options:
    disableNameSuffixHash: true
patches:
# Change the ingress host name
- target:
    kind: Ingress
    name: dummyhttp
  patch: |-
    - op: replace
      path: /spec/rules/0/host
      value: vcluster-dummyhttp.lo.mixinet.net
# Add reloader annotations -- it will only work if we install reloader on the
# virtual cluster, as the one on the host cluster doesn't see the vcluster
# deployment objects
- target:
    kind: Deployment
    name: dummyhttp
  patch: |-
    - op: add
      path: /metadata/annotations
      value:
        reloader.stakater.com/auto: "true"
        reloader.stakater.com/rollout-strategy: "restart"
It is quite similar to the one we used in the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:
❯ kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created
We can check that everything worked using curl:
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}
The objects available on the vcluster now are:
❯ vkubectl get all,configmap,ingress
NAME READY STATUS RESTARTS AGE
pod/dummyhttp-55569589bc-9zl7t 1/1 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp ClusterIP 10.43.51.39 <none> 80/TCP 24s
service/kubernetes ClusterIP 10.43.153.12 <none> 443/TCP 14m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dummyhttp 1/1 1 1 24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/dummyhttp-55569589bc 1 1 1 24s
NAME DATA AGE
configmap/dummyhttp-configmap 1 24s
configmap/kube-root-ca.crt 1 14m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 24s
While we have the following ones in the my-vcluster namespace of the host cluster:
❯ kubectl get all,configmap,ingress -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster 1/1 Running 0 45s
pod/my-vcluster-0 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp-x-default-x-my-vcluster ClusterIP 10.43.51.39 <none> 80/TCP 45s
service/kube-dns-x-kube-system-x-my-vcluster ClusterIP 10.43.91.198 <none> 53/UDP,53/TCP,9153/TCP 18m
service/my-vcluster ClusterIP 10.43.153.12 <none> 443/TCP,10250/TCP 19m
service/my-vcluster-headless ClusterIP None <none> 443/TCP 19m
service/my-vcluster-node-k3d-argocd-agent-1 ClusterIP 10.43.189.188 <none> 10250/TCP 18m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 19m
NAME DATA AGE
configmap/coredns-x-kube-system-x-my-vcluster 2 18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster 1 45s
configmap/kube-root-ca.crt 1 19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster 1 11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster 1 18m
configmap/vc-coredns-my-vcluster 1 19m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 45s
As shown, we have copies of the Service, Pod, ConfigMap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.
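The synced copies follow a predictable naming pattern (<name>-x-<namespace>-x-<vcluster name>), so we can easily filter them on the host; for example:
# Show only the objects synced from the virtual "default" namespace
❯ kubectl get all,configmap,ingress -n my-vcluster | grep -- '-x-default-x-my-vcluster'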
Creating a sealed secret for dummyhttpd
To use the host's sealed-secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add there the sealed secrets we want to have available as secrets in the default namespace of the virtual cluster:
❯ kubectl create namespace my-vcluster-default
❯ echo -n "Vcluster Boo" | kubectl create secret generic "dummyhttp-secret" \
--namespace "my-vcluster-default" --dry-run=client \
--from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
After running the previous commands we have the following objects available on the host cluster:
❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME STATUS SYNCED AGE
sealedsecret.bitnami.com/dummyhttp-secret True 34s
NAME TYPE DATA AGE
secret/dummyhttp-secret Opaque 1 34s
And we can see that the secret is also available on the virtual cluster with the content we expected:
❯ vkubectl get secrets
NAME TYPE DATA AGE
dummyhttp-secret Opaque 1 34s
❯ vkubectl get secret/dummyhttp-secret --template="{{.data.SECRET_VAR}}" \
| base64 -d
Vcluster Boo
But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}Installing the reloader application
To make reloader work on the virtual cluster we just need to install it as we did on the host using the following
kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
      - '--reload-on-create=true'
      - '--reload-on-delete=true'
      - '--reload-strategy=annotations'
We deploy it with kustomize and vkubectl:
❯ kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created
As the controller was not available when the secret was created, the pods linked to the Deployment are not updated, but we can force things by removing the secret on the host; after we do that the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the new output:
❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
❯ sleep 2
❯ vkubectl get pods
NAME READY STATUS RESTARTS AGE
dummyhttp-78bf5fb885-fmsvs 1/1 Terminating 0 6m33s
dummyhttp-c68684bbf-nx8f9 1/1 Running 0 6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}If we change the secret on the host systems things get updated pretty quickly now:
❯ echo -n "New secret" | kubectl create secret generic "dummyhttp-secret" \
--namespace "my-vcluster-default" --dry-run=client \
--from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}Pause and restore the vcluster
The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:
❯ kubectl get pods,statefulsets -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster 1/1 Running 0 4m39s
pod/my-vcluster-0 1/1 Running 0 128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster 1/1 Running 0 60m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 128m
Pausing the vcluster
If we don't need to use the virtual cluster we can pause it; after a short time all its Pods are gone because the statefulSet is scaled down to 0 (note that other resources like volumes are not removed, but all the objects that have to be scheduled and consume CPU cycles are no longer running, which can translate into noticeable savings on clusters from cloud platforms or, on a local cluster like the one we are using, frees resources like CPU and memory that can now be used for other things):
❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME READY AGE
statefulset.apps/my-vcluster 0/0 130m
Now the curl command fails:
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found
Although the ingress is still available (it returns a 404 because there is no pod behind the service):
❯ kubectl get ingress -n my-vcluster
NAME CLASS HOSTS ADDRESS PORTS AGE
dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 120m
In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate, because the 404 page uses the wildcard certificate instead of the self-signed one:
❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
* subject: CN=lo.mixinet.net
* subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"
Resuming the vcluster
When we want to use the virtual cluster again we just need to use the resume command:
❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster
Once all the pods are running the virtual cluster goes back to its previous state, although of course all of them have been restarted.
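A quick way to confirm that everything is back is to list the pods through the alias and repeat the curl request:
❯ vkubectl get pods
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .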
Cleaning up
The virtual cluster can be removed using the delete command:
❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted
That removes everything we used in this post except the sealed secrets and secrets that we put in the my-vcluster-default namespace, because it was created by us.
If we delete the namespace all the secrets and sealed secrets on it are also removed:
❯ kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deletedConclusions
I believe that the use of virtual clusters can be a good option for two use cases that I've encountered in real projects in the past:
- the need for short-lived clusters for developers or teams,
- execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand, or paused and resumed when needed; see the sketch after this list).
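As an illustration of the second case, a CI job could create a throwaway virtual cluster, run the tests against it and remove it afterwards; a minimal sketch, where the job identifier variable and the test script are placeholders:
# Hypothetical CI step: CI_JOB_ID and run-integration-tests.sh are placeholders
vcluster create "ci-${CI_JOB_ID}" --namespace "ci-${CI_JOB_ID}" --connect=false \
  --values vcluster.yaml
./run-integration-tests.sh   # run the tests against the virtual cluster
vcluster delete "ci-${CI_JOB_ID}" --namespace "ci-${CI_JOB_ID}"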
For both cases things can be set up using the Apache-licensed product, although evaluating the vCluster Platform offering could also be interesting.
In any case, when not everything is done inside Kubernetes we will also have to check how to manage the external services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way of creating, deleting, or pausing and resuming those services).