As a follow-up to my post about the use of argocd-autopilot, I'm going to deploy various applications to the cluster using Argo CD from the same repository we used in the previous post.
For our examples we are going to test a solution to the problem we had when we updated a `ConfigMap` used by the `argocd-server` (the resource was updated but the application Pod was not, because there was no change on the `argocd-server` deployment); our original fix was to kill the pod manually, but that manual operation is something we want to avoid.
The solution proposed in the helm documentation for this kind of issue is to add annotations to the Deployments with values that are a hash of the `ConfigMaps` or `Secrets` used by them; this way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a rollout of the pods is triggered.
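As a sketch of that pattern (taken from the Helm documentation; the template path is illustrative and assumes the chart keeps its ConfigMap in `configmap.yaml`), the Deployment's pod template gets a checksum annotation derived from the rendered ConfigMap manifest:

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Re-rendered on every change to configmap.yaml, so the hash (and with
        # it the pod template) changes whenever the ConfigMap content changes,
        # forcing a rollout of the pods
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```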
In this post we will install a couple of controllers and an application to show how we can handle `Secrets` with `argocd` and solve the issue with updates to `ConfigMaps` and `Secrets`. To do that we will execute the following tasks:
- Deploy the Reloader controller to our cluster. It is a tool that watches for changes in `ConfigMaps` and `Secrets` and does rolling upgrades on the `Pods` that use them from `Deployment`, `StatefulSet`, `DaemonSet` or `DeploymentConfig` objects when they are updated (by default we have to add some annotations to the objects to make things work).
- Deploy a simple application that can use `ConfigMaps` and `Secrets` and test that the `Reloader` controller does its job when we add or update a `ConfigMap`.
- Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.
Creating the test project for argocd-autopilot
As we did our installation using `argocd-autopilot` we will use its structure to manage the applications. The first thing to do is to create a project (we will name it `test`) as follows:
❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'
Now that the `test` project is available we will use it in our `argocd-autopilot` invocations when creating applications.
Installing the reloader controller
To add the `reloader` application to the `test` project as a `kustomize` application and deploy it on the `tools` namespace with `argocd-autopilot` we do the following:
❯ argocd-autopilot app create reloader \
--app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
--project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader
That command creates four files on the `argocd` repository:

- One to create the `tools` namespace (`bootstrap/cluster-resources/in-cluster/tools-ns.yaml`):

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
  creationTimestamp: null
  name: tools
spec: {}
status: {}

- Another to include the `reloader` base application from the upstream repository (`apps/reloader/base/kustomization.yaml`):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2

- The `kustomization.yaml` file for the `test` project; by default it includes the same configuration used on the `base` definition, but we could make other changes if needed (`apps/reloader/overlays/test/kustomization.yaml`):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: tools
resources:
- ../../base

- The `config.json` file used to define the application on `argocd` for the `test` project; it points to the folder that includes the previous `kustomization.yaml` file (`apps/reloader/overlays/test/config.json`):

{
  "appName": "reloader",
  "userGivenName": "reloader",
  "destNamespace": "tools",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/reloader/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}
We can check that the application is working using the `argocd` command line application:
❯ argocd app get argocd/test-reloader -o tree
Name: argocd/test-reloader
Project: test
Server: https://kubernetes.default.svc
Namespace: tools
URL: https://argocd.localhost.mixinet.net:8443/applications/test-reloader
Source:
- Repo: https://forgejo.mixinet.net/blogops/argocd.git
Target:
Path: apps/reloader/overlays/test
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: Synced to (2893b56)
Health Status: Healthy
KIND/NAME STATUS HEALTH MESSAGE
ClusterRole/reloader-reloader-role Synced
ClusterRoleBinding/reloader-reloader-role-binding Synced
ServiceAccount/reloader-reloader Synced serviceaccount/reloader-reloader created
Deployment/reloader-reloader Synced Healthy deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f Healthy
└─Pod/reloader-reloader-5b6dcc7b6f-vwjcx Healthy
Adding flags to the reloader server
The runtime configuration flags for the `reloader` server are described on the project `README.md` file; in our case we want to adjust three values:
- We want to enable the option to reload a workload when a `ConfigMap` or `Secret` is created.
- We want to enable the option to reload a workload when a `ConfigMap` or `Secret` is deleted.
- We want to use the `annotations` strategy for reloads, as it is the recommended mode of operation when using `argocd`.
To pass them we edit the `apps/reloader/overlays/test/kustomization.yaml` file to patch the pod container template; the text added is the following:
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
kind: Deployment
name: reloader-reloader
patch: |-
- op: add
path: /spec/template/spec/containers/0/args
value:
- '--reload-on-create=true'
- '--reload-on-delete=true'
- '--reload-strategy=annotations'
After committing and pushing the updated file the system launches the application with the new options.
The dummyhttp application
To do a quick test we are going to deploy the dummyhttp web server using an image generated with the following `Dockerfile`:
# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>
# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX
# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3
# Tool versions
ARG DUMMYHTTP_VERS=1.1.1
# Download binary
RUN ARCH="$(apk --print-arch)" && \
VERS="$DUMMYHTTP_VERS" && \
URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
wget "$URL" -O "/tmp/dummyhttp" && \
install /tmp/dummyhttp /usr/local/bin && \
rm -f /tmp/dummyhttp
# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]
The `kustomize` base application is available on a monorepo that contains the following files:

- A `Deployment` definition that uses the previous image but uses `/bin/sh -c` as its `entrypoint` (`command` in the k8s `Pod` terminology) and passes as its argument a string that runs the `eval` command to be able to expand environment variables passed to the `pod` (the definition includes two optional variables, one taken from a `ConfigMap` and another one from a `Secret`):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummyhttp
  labels:
    app: dummyhttp
spec:
  selector:
    matchLabels:
      app: dummyhttp
  template:
    metadata:
      labels:
        app: dummyhttp
    spec:
      containers:
      - name: dummyhttp
        image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
        command: [ "/bin/sh", "-c" ]
        args:
        - 'eval dummyhttp -b \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
        ports:
        - containerPort: 8080
        env:
        - name: CM_VAR
          valueFrom:
            configMapKeyRef:
              name: dummyhttp-configmap
              key: CM_VAR
              optional: true
        - name: SECRET_VAR
          valueFrom:
            secretKeyRef:
              name: dummyhttp-secret
              key: SECRET_VAR
              optional: true

- A `Service` that publishes the previous `Deployment` (the only relevant thing to mention is that the web server uses port `8080` by default):

apiVersion: v1
kind: Service
metadata:
  name: dummyhttp
spec:
  selector:
    app: dummyhttp
  ports:
  - name: http
    port: 80
    targetPort: 8080

- An `Ingress` definition to allow access to the application from the outside:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dummyhttp
  annotations:
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - host: dummyhttp.localhost.mixinet.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dummyhttp
            port:
              number: 80

- And the `kustomization.yaml` file that includes the previous files:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
- ingress.yaml
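The quoting in the Deployment's `args` entry is dense, so here is a local sketch of how it expands (we substitute `echo` for the `dummyhttp` binary, which is not needed for the demonstration): the shell first resolves the escaped quotes and the variables, then `eval` re-parses the result, so the server ends up receiving a well-formed JSON body.

```shell
#!/bin/sh
# Simulate the environment variables the Deployment injects
CM_VAR='Default Test Value'
SECRET_VAR='Boo'
# The args string from the Deployment, with `dummyhttp -b` replaced by `echo`
# so we can capture the body the server would be given
body=$(eval echo \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\")
echo "$body"
# prints: {"c": "Default Test Value", "s": "Boo"}
```

Without the `eval` the variables would still expand, but the escaped quotes would reach the binary verbatim and the body would not be valid JSON.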
Deploying the dummyhttp application from argocd
We could create the `dummyhttp` application using the `argocd-autopilot` command as we've done in the `reloader` case, but we are going to do it manually to show how simple it is.
First we've created the `apps/dummyhttp/base/kustomization.yaml` file to include the application from the previous repository:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
As a second step we create the `apps/dummyhttp/overlays/test/kustomization.yaml` file to include the previous file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
And finally we add the `apps/dummyhttp/overlays/test/config.json` file to configure the application as the `ApplicationSet` defined by `argocd-autopilot` expects:
{
"appName": "dummyhttp",
"userGivenName": "dummyhttp",
"destNamespace": "default",
"destServer": "https://kubernetes.default.svc",
"srcPath": "apps/dummyhttp/overlays/test",
"srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
"srcTargetRevision": "",
"labels": null,
"annotations": null
}
Once we have the three files we commit and push the changes and `argocd` deploys the application; we can check that things are working using `curl`:
❯ curl -s https://dummyhttp.localhost.mixinet.net:8443/ | jq -M .
{
"c": "",
"s": ""
}
Patching the application
Now we will add patches to the `apps/dummyhttp/overlays/test/kustomization.yaml` file:

- One to add annotations for `reloader` (one to enable it and another one to set the rollout strategy to `restart` to avoid touching the `deployments`, as that can generate issues with `argocd`).
- Another to change the ingress `hostname` (not really needed, but something quite reasonable for a specific `project`).
The file diff is as follows:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
+patches:
+# Add reloader annotations
+- target:
+ kind: Deployment
+ name: dummyhttp
+ patch: |-
+ - op: add
+ path: /metadata/annotations
+ value:
+ reloader.stakater.com/auto: "true"
+ reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+ kind: Ingress
+ name: dummyhttp
+ patch: |-
+ - op: replace
+ path: /spec/rules/0/host
+ value: test-dummyhttp.localhost.mixinet.net
After committing and pushing the changes we can use the `argocd` cli to check the status of the application:
❯ argocd app get argocd/test-dummyhttp -o tree
Name: argocd/test-dummyhttp
Project: test
Server: https://kubernetes.default.svc
Namespace: default
URL: https://argocd.localhost.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo: https://forgejo.mixinet.net/blogops/argocd.git
Target:
Path: apps/dummyhttp/overlays/test
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: Synced to (fbc6031)
Health Status: Healthy
KIND/NAME STATUS HEALTH MESSAGE
Deployment/dummyhttp Synced Healthy deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc Healthy
└─Pod/dummyhttp-55569589bc-qhnfk Healthy
Ingress/dummyhttp Synced Healthy ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp Synced Healthy service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl
As we can see, the `Deployment` and `Ingress` were updated, but the `Service` is unchanged.
To validate that the ingress is using the new `hostname` we can use `curl`:
❯ curl -s https://dummyhttp.localhost.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.localhost.mixinet.net:8443/
{"c": "", "s": ""}
Adding a ConfigMap
Now that the system is adjusted to reload the application when the `ConfigMap` or `Secret` is created, deleted or updated, we are ready to add one file and see how the system reacts.
We modify the `apps/dummyhttp/overlays/test/kustomization.yaml` file to create the `ConfigMap` using the `configMapGenerator` as follows:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+ literals:
+ - CM_VAR="Default Test Value"
+ behavior: create
+ options:
+ disableNameSuffixHash: true
patches:
# Add reloader annotations
- target:
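With `disableNameSuffixHash: true` the generator emits a `ConfigMap` with a stable name; a sketch of the object that `kustomize build` would produce for this overlay is:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dummyhttp-configmap
data:
  CM_VAR: Default Test Value
```

Without that option kustomize appends a content-hash suffix to the name (changing it on every edit), which is not what we want here, as we rely on Reloader detecting updates to a stable object referenced by the Deployment's `configMapKeyRef`.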
After committing and pushing the changes we can see that the `ConfigMap` is available, the pod has been deleted and started again, and the `curl` output includes the new value:
❯ kubectl get configmaps,pods
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                             READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.localhost.mixinet.net:8443 | jq -M .
{
"c": "Default Test Value",
"s": ""
}
Using helm with argocd-autopilot
Right now there is no direct support in `argocd-autopilot` to manage applications using `helm` (see issue #38 on the project), but we want to use a chart in our next example.
There are multiple ways to add the support, but the simplest one that allows us to keep using `argocd-autopilot` is to use `kustomize` applications that call `helm` as described here.
The only thing needed before being able to use this approach is to add the `kustomize.buildOptions` flag to the `argocd-cm` on the `bootstrap/argo-cd/kustomization.yaml` file; its contents are now as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
literals:
# Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
- kustomize.buildOptions="--enable-helm"
- |
repository.credentials=- passwordSecret:
key: git_token
name: autopilot-secret
url: https://forgejo.mixinet.net/
usernameSecret:
key: git_username
name: autopilot-secret
name: argocd-cm
# Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
literals:
- "server.insecure=true"
name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml
In the following section we will explain how the application is defined to make things work.
Installing the sealed-secrets controller
To manage `secrets` in our cluster we are going to use the sealed-secrets controller, and to install it we are going to use its chart.
As we mentioned in the previous section, the idea is to create a `kustomize` application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base `kustomization` files from a remote repository.
As there is no clear way to override helm Chart values using overlays, we are going to use a generator to create the helm configuration from an external resource and include it from our overlays (the idea has been taken from this repository, which was referenced from a comment on the `kustomize` issue #38 mentioned earlier).
The sealed-secrets application
We have created the following files and folders manually:
apps/sealed-secrets/
├── helm
│ ├── chart.yaml
│ └── kustomization.yaml
└── overlays
└── test
├── config.json
├── kustomization.yaml
└── values.yaml
The `helm` folder contains the generator template that will be included from our `overlays`.
The `kustomization.yaml` file includes the `chart.yaml` as a resource:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml
And the `chart.yaml` file defines the `HelmChartInflationGenerator`:
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml
For this chart the template adjusts the `namespace` to `kube-system` and adds the `fullnameOverride` on the `valuesInline` key because we want to use those settings on all the projects (they are the values expected by the `kubeseal` command line application, so we adjust them to avoid the need to add additional parameters to it).
We set the global values inline to be able to use the `valuesFile` from our overlays; as we are using a generator, the path is relative to the folder that contains the `kustomization.yaml` file that calls it, so in our case we need to have a `values.yaml` file in each `overlay` folder (if we don't want to override any values for a project we can create an empty file, but it has to exist).
Finally, our overlay folder contains three files: a `kustomization.yaml` file that includes the generator from the `helm` folder, the `values.yaml` file needed by the chart and the `config.json` file used by `argocd-autopilot` to install the application.
The `kustomization.yaml` file contents are:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm
The `values.yaml` file enables the `ingress` for the application and adjusts its `hostname`:
ingress:
enabled: true
hostname: test-sealed-secrets.localhost.mixinet.net
And the `config.json` file is similar to the ones used with the other applications we have installed:
{
"appName": "sealed-secrets",
"userGivenName": "sealed-secrets",
"destNamespace": "kube-system",
"destServer": "https://kubernetes.default.svc",
"srcPath": "apps/sealed-secrets/overlays/test",
"srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
"srcTargetRevision": "",
"labels": null,
"annotations": null
}
Once we commit and push the files the `sealed-secrets` application is installed in our cluster; we can check it using `curl` to get the public certificate used by it:
❯ curl -s https://test-sealed-secrets.localhost.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
The dummyhttp-secret
To create sealed secrets we need to install the `kubeseal` tool:
❯ arkade get kubeseal
Now we create a local version of the `dummyhttp-secret` that contains some value on the `SECRET_VAR` key (the easiest way to do it is to use `kubectl`):
❯ echo -n "Boo" | kubectl create secret generic dummyhttp-secret \
--dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
>/tmp/dummyhttp-secret.yaml
The secret definition in yaml format is:
apiVersion: v1
data:
SECRET_VAR: Qm9v
kind: Secret
metadata:
creationTimestamp: null
name: dummyhttp-secret
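Note that the `data` value in that `Secret` is only base64-encoded, not encrypted (which is why we don't want to commit plain `Secrets` to the repository); we can check the encoding locally:

```shell
#!/bin/sh
# Base64 is reversible: "Qm9v" is just the encoding of the value "Boo"
encoded=$(printf '%s' 'Boo' | base64)
echo "$encoded"   # Qm9v
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # Boo
```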
To create a sealed version using the `kubeseal` tool we can do the following:
❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml
That invocation needs to have access to the cluster to do its job; in our case it works because we modified the chart to use the `kube-system` namespace and set the controller name to `sealed-secrets-controller`, as the tool expects.
If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the public key:
❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
--cert https://test-sealed-secrets.localhost.mixinet.net:8443/v1/cert.pem
Or, if we don’t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.
The sealed version of the secret looks like this:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: dummyhttp-secret
namespace: default
spec:
encryptedData:
SECRET_VAR: [...]
template:
metadata:
creationTimestamp: null
name: dummyhttp-secret
namespace: default
This file can be deployed to the cluster to create the secret (in our case we will add it to the `argocd` application), but before doing that we are going to check the output of our `dummyhttp` service and get the list of `Secrets` and `SealedSecrets` in the default namespace:
❯ curl -s https://test-dummyhttp.localhost.mixinet.net:8443 | jq -M .
{
"c": "Default Test Value",
"s": ""
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.
Now we add the `SealedSecret` to the `dummyhttp` application, copying the file and adding it to the `kustomization.yaml` file:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
+- dummyhttp-sealed-secret.yaml
# Add the config map
configMapGenerator:
- name: dummyhttp-configmap
Once we commit and push the files Argo CD creates the `SealedSecret` and the controller generates the `Secret`:
❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME STATUS SYNCED AGE
sealedsecret.bitnami.com/dummyhttp-secret True 3s
NAME TYPE DATA AGE
secret/dummyhttp-secret Opaque 1 3s
If we check the `curl` output again we can see the new value of the secret:
❯ curl -s https://test-dummyhttp.localhost.mixinet.net:8443 | jq -M .
{
"c": "Default Test Value",
"s": "Boo"
}
Using sealed-secrets in production clusters
If you plan to use `sealed-secrets`, look into its documentation to understand how it manages the private keys and how to back things up, and keep in mind that, as the documentation explains, you can rotate your sealed version of the secrets, but that doesn't change the actual secrets.
If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys, your new sealed version will also be using a newer key, so you will be doing both things at the same time).
Final remarks
In this post we have seen how to deploy applications using the `argocd-autopilot` model, including the use of `helm` charts inside `kustomize` applications, and how to install and use the `sealed-secrets` controller.
It has been interesting and I've learnt a lot about `argocd` in the process, but I believe that if I ever want to use it in production I will also review the native `helm` support in `argocd` using a separate repository to manage the applications, at least to be able to compare it to the model explained here.