This post describes how I’ve put together a simple static content server for kubernetes clusters using a Pod with a persistent volume and multiple containers: an sftp server to manage contents, a web server to publish them with optional access control and another one to run scripts which need access to the volume filesystem.
The sftp server runs using
MySecureShell, the web
server is nginx and the script runner uses the
webhook tool to publish endpoints to call
them (the calls will come from other Pods that run backend servers or are
executed from Jobs
or CronJobs
).
History
The system was developed because we had a NodeJS
API with endpoints to upload
files and store them on S3 compatible services that were later accessed via
HTTPS, but the requirements changed and we needed to be able to publish folders
instead of individual files using their original names and apply access
restrictions using our API.
Thinking about our requirements, using a regular filesystem to keep the files and folders seemed a good option, as uploading and serving files is simple.
For the upload I decided to use the sftp protocol, mainly because I already had an sftp container image based on mysecureshell prepared; once we settled on that we added sftp support to the API server and configured it to upload the files to our server instead of using S3 buckets.
To publish the files we added an nginx container configured to work as a reverse proxy that uses the ngx_http_auth_request_module to validate access to the files (the sub request is configurable; in our deployment we have configured it to call our API to check if the user can access a given URL).
Finally we added a third container when we needed to execute some tasks directly on the filesystem (using kubectl exec with the existing containers did not seem a good idea, as that is not supported by CronJob objects, for example).
The solution we found, avoiding the NIH syndrome (i.e. writing our own tool), was to use the webhook tool to provide the endpoints to call the scripts; for now we have three:
- one to get the disk usage of a PATH,
- one to hardlink all the files that are identical on the filesystem,
- one to copy files and folders from S3 buckets to our filesystem.
Container definitions
mysecureshell
The mysecureshell
container can be used to provide an sftp service with
multiple users (although the files are owned by the same UID
and GID
) using
standalone containers (launched with docker
or podman
) or in an
orchestration system like kubernetes, as we are going to do here.
The image is generated using the following Dockerfile
:
ARG ALPINE_VERSION=3.16.2
FROM alpine:$ALPINE_VERSION as builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN apk update &&\
apk add --no-cache alpine-sdk git musl-dev &&\
git clone https://github.com/sto/mysecureshell.git &&\
cd mysecureshell &&\
./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man\
--localstatedir=/var --with-shutfile=/var/lib/misc/sftp.shut --with-debug=2 &&\
make all && make install &&\
rm -rf /var/cache/apk/*
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
COPY --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell
COPY --from=builder /usr/bin/sftp-* /usr/bin/
RUN apk update &&\
apk add --no-cache openssh shadow pwgen &&\
sed -i -e "s|^.*\(AuthorizedKeysFile\).*$|\1 /etc/ssh/auth_keys/%u|"\
/etc/ssh/sshd_config &&\
mkdir /etc/ssh/auth_keys &&\
cat /dev/null > /etc/motd &&\
add-shell '/usr/bin/mysecureshell' &&\
rm -rf /var/cache/apk/*
COPY bin/* /usr/local/bin/
COPY etc/sftp_config /etc/ssh/
COPY entrypoint.sh /
EXPOSE 22
VOLUME /sftp
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
The /etc/sftp_config file is used to configure the mysecureshell server to have all the user homes under /sftp/data, only allow the users to see the files under their home directory as if it were the root of the server, and close idle connections after 5m of inactivity.
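A minimal configuration along those lines could look like this sketch (the directives are standard MySecureShell options; the real file in the image may contain more settings):
# Default configuration block applied to all users
<Default>
    # Set the home directory of each user under /sftp/data
    Home            /sftp/data/$USER
    # Force users to stay inside their home directory
    StayAtHome      true
    # Show the home directory as if it were the root of the server
    VirtualChroot   true
    # Hide the real owner and group of the files in listings
    DirFakeUser     true
    DirFakeGroup    true
    # Close idle connections after 5 minutes
    IdleTimeOut     5m
</Default>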
The entrypoint.sh script is the one responsible for preparing the container for the users included in the /secrets/user_pass.txt file (it creates the users with their HOME directories under /sftp/data and /bin/false as their shell, and creates the key files from /secrets/user_keys.txt if available).
The script expects a couple of environment variables:
- SFTP_UID: UID used to run the daemon and for all the files; it has to be different from 0 (all the files managed by this daemon are going to be owned by the same user and group, even if the remote users are different).
- SFTP_GID: GID used to run the daemon and for all the files; it has to be different from 0.
And it can use the SSH_PORT and SSH_PARAMS values if present.
It also requires the following files (they can be mounted as secrets in kubernetes):
- /secrets/host_keys.txt: Text file containing the ssh server keys in mime format; the file is processed using the reformime utility (the one included on busybox) and can be generated using the gen-host-keys script included on the container (it uses ssh-keygen and makemime).
- /secrets/user_pass.txt: Text file containing lines of the form username:password_in_clear_text (only the users included on this file are available on the sftp server; in fact in our deployment we use only the scs user for everything).
And optionally it can use another one:
- /secrets/user_keys.txt: Text file that contains lines of the form username:public_ssh_ed25519_or_rsa_key; the public keys are installed on the server and can be used to log into the sftp server if the username exists on the user_pass.txt file.
The entrypoint.sh script implements the steps described above when the container starts.
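An abridged sketch of what such an entrypoint could look like (an illustration based on the description, not the original script; the useradd flags, the reformime extraction prefix and the sshd invocation are assumptions):
#!/bin/sh
set -e
# The UID/GID used for the daemon and the files are mandatory and can't be 0
[ "${SFTP_UID:-0}" != "0" ] || { echo "SFTP_UID must be set and not be 0"; exit 1; }
[ "${SFTP_GID:-0}" != "0" ] || { echo "SFTP_GID must be set and not be 0"; exit 1; }
# Extract the ssh host keys from the mime encoded secret
if [ -f "/secrets/host_keys.txt" ]; then
  reformime -x /etc/ssh/ < /secrets/host_keys.txt
  chmod 0600 /etc/ssh/ssh_host_*_key
fi
# Create one user per line of user_pass.txt; all of them share the same UID/GID
groupadd -g "$SFTP_GID" sftp-users
while IFS=':' read -r _user _pass; do
  useradd -M -o -u "$SFTP_UID" -g "$SFTP_GID" -d "/sftp/data/$_user" \
    -s '/bin/false' "$_user"
  echo "$_user:$_pass" | chpasswd
  mkdir -p "/sftp/data/$_user"
done < /secrets/user_pass.txt
# Install the public keys, if present, under /etc/ssh/auth_keys/<username>
if [ -f "/secrets/user_keys.txt" ]; then
  while IFS=':' read -r _user _key; do
    echo "$_key" >> "/etc/ssh/auth_keys/$_user"
  done < /secrets/user_keys.txt
fi
chown -R "$SFTP_UID:$SFTP_GID" /sftp/data
# Run the ssh daemon in the foreground on the requested port
exec /usr/sbin/sshd -D -e -p "${SSH_PORT:-22}" ${SSH_PARAMS}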
The container also includes a couple of auxiliary scripts; the first one can be used to generate the host_keys.txt file as follows:
$ docker run --rm stodh/mysecureshell gen-host-keys > host_keys.txt
The script itself is quite simple: it generates the keys with ssh-keygen and packs them into a single mime file using makemime.
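A minimal sketch of how that can be done (an illustration, not the original script; the makemime options shown are the busybox ones):
#!/bin/sh
set -e
# Generate the host keys on a temporary directory
HOST_KEYS_DIR="$(mktemp -d)"
ssh-keygen -q -N "" -t rsa -f "$HOST_KEYS_DIR/ssh_host_rsa_key"
ssh-keygen -q -N "" -t ed25519 -f "$HOST_KEYS_DIR/ssh_host_ed25519_key"
# Pack all the files into a single mime document printed on stdout
cd "$HOST_KEYS_DIR"
makemime -c "text/plain" ssh_host_rsa_key ssh_host_rsa_key.pub \
  ssh_host_ed25519_key ssh_host_ed25519_key.pub
# Clean up
cd /
rm -rf "$HOST_KEYS_DIR"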
And there is another script to generate a .tar
file that contains auth data
for the list of usernames passed to it (the file contains a user_pass.txt
file with random passwords for the users, public and private ssh keys for them
and the user_keys.txt
file that matches the generated keys).
To generate a tar
file for the user scs
we can execute the following:
$ docker run --rm stodh/mysecureshell gen-users-tar scs > /tmp/scs-users.tar
To see the contents and the text inside the user_pass.txt
file we can do:
$ tar tvf /tmp/scs-users.tar
-rw-r--r-- root/root 21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root 822 2022-09-11 15:55 user_keys.txt
-rw------- root/root 387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root 85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root 3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root 3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root 729 2022-09-11 15:55 id_rsa-scs.pub
$ tar xfO /tmp/scs-users.tar user_pass.txt
scs:20JertRSX2Eaar4x
The script generates a random password and a couple of ssh key pairs for each of the given usernames and prints the resulting tar archive on its standard output.
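A sketch of the idea (an illustration, not the original script; the original also includes a PEM version of the rsa key, which is omitted here):
#!/bin/sh
set -e
# Work on a temporary directory
USERS_TAR_DIR="$(mktemp -d)"
cd "$USERS_TAR_DIR"
for _user in "$@"; do
  # Random password for the user (pwgen is installed on the image)
  _pass="$(pwgen -s 16 1)"
  echo "$_user:$_pass" >> "user_pass.txt"
  # Generate one ed25519 and one rsa key pair for the user
  ssh-keygen -q -N "" -t ed25519 -f "id_ed25519-$_user" -C "$_user"
  ssh-keygen -q -N "" -t rsa -f "id_rsa-$_user" -C "$_user"
  # Add the public keys to the user_keys.txt file
  echo "$_user:$(cat "id_ed25519-$_user.pub")" >> "user_keys.txt"
  echo "$_user:$(cat "id_rsa-$_user.pub")" >> "user_keys.txt"
done
# Print the tar archive on stdout
tar cf - user_pass.txt user_keys.txt id_*
# Clean up
cd /
rm -rf "$USERS_TAR_DIR"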
nginx-scs
The nginx-scs
container is generated using the following Dockerfile
:
ARG NGINX_VERSION=1.23.1
FROM nginx:$NGINX_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
RUN rm -f /docker-entrypoint.d/*
COPY docker-entrypoint.d/* /docker-entrypoint.d/
Basically we are removing the existing docker-entrypoint.d
scripts from the
standard image and adding a new one that configures the web server as we want
using a couple of environment variables:
- AUTH_REQUEST_URI: URL to use for the auth_request; if the variable is not found on the environment auth_request is not used.
- HTML_ROOT: Base directory of the web server; if not passed the default /usr/share/nginx/html is used.
Note that if we don’t pass the variables everything works as if we were using
the original nginx
image.
The configuration script builds the nginx configuration from those variables when the container starts.
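A sketch of what that script could do (an illustration, not the original; the location names and proxy headers are assumptions, only auth_request and the two variables come from the description above):
#!/bin/sh
# Sketch: write /etc/nginx/conf.d/default.conf from the environment variables.
set -e
HTML_ROOT="${HTML_ROOT:-/usr/share/nginx/html}"
if [ -n "$AUTH_REQUEST_URI" ]; then
  # Protect every request with a subrequest against AUTH_REQUEST_URI
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen       80;
  server_name  localhost;
  location = /auth {
    internal;
    proxy_pass              $AUTH_REQUEST_URI;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI \$request_uri;
  }
  location / {
    auth_request /auth;
    root         $HTML_ROOT;
  }
}
EOF
else
  # No access control, just serve the files under HTML_ROOT
  cat >/etc/nginx/conf.d/default.conf <<EOF
server {
  listen       80;
  server_name  localhost;
  location / {
    root $HTML_ROOT;
  }
}
EOF
fi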
As we will see later the idea is to use the /sftp/data
or /sftp/data/scs
folder as the root of the web published by this container and create an
Ingress
object to provide access to it outside of our kubernetes cluster.
webhook-scs
The webhook-scs
container is generated using the following Dockerfile
:
ARG ALPINE_VERSION=3.16.2
ARG GOLANG_VERSION=alpine3.16
FROM golang:$GOLANG_VERSION AS builder
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
ENV WEBHOOK_VERSION 2.8.0
ENV WEBHOOK_PR 549
ENV S3FS_VERSION v1.91
WORKDIR /go/src/github.com/adnanh/webhook
RUN apk update &&\
apk add --no-cache -t build-deps curl libc-dev gcc libgcc patch
RUN curl -L --silent -o webhook.tar.gz\
https://github.com/adnanh/webhook/archive/${WEBHOOK_VERSION}.tar.gz &&\
tar xzf webhook.tar.gz --strip 1 &&\
curl -L --silent -o ${WEBHOOK_PR}.patch\
https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/${WEBHOOK_PR}.patch &&\
patch -p1 < ${WEBHOOK_PR}.patch &&\
go get -d && \
go build -o /usr/local/bin/webhook
WORKDIR /src/s3fs-fuse
RUN apk update &&\
apk add ca-certificates build-base alpine-sdk libcurl automake autoconf\
libxml2-dev libressl-dev mailcap fuse-dev curl-dev
RUN curl -L --silent -o s3fs.tar.gz\
https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/$S3FS_VERSION.tar.gz &&\
tar xzf s3fs.tar.gz --strip 1 &&\
./autogen.sh &&\
./configure --prefix=/usr/local &&\
make -j && \
make install
FROM alpine:$ALPINE_VERSION
LABEL maintainer="Sergio Talens-Oliag <sto@mixinet.net>"
WORKDIR /webhook
RUN apk update &&\
apk add --no-cache ca-certificates mailcap fuse libxml2 libcurl libgcc\
libstdc++ rsync util-linux-misc &&\
rm -rf /var/cache/apk/*
COPY --from=builder /usr/local/bin/webhook /usr/local/bin/webhook
COPY --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs
COPY entrypoint.sh /
COPY hooks/* ./hooks/
EXPOSE 9000
ENTRYPOINT ["/entrypoint.sh"]
CMD ["server"]
Again, we use a multi-stage build because in production we wanted to support functionality that is not yet available on the official versions (streaming the command output as a response instead of waiting until the execution ends); this time we build the image applying the patch included on this pull request against a released version of the source instead of creating a fork.
The entrypoint.sh script is used to generate the webhook configuration file for the existing hooks using environment variables (basically the WEBHOOK_WORKDIR and the *_TOKEN variables) and to launch the webhook service.
To generate the configuration file the script calls functions that print a yaml section for each hook and optionally add rules to validate access to them, comparing the value of an X-Webhook-Token header against predefined values.
The expected token values are taken from environment variables; we can define a token variable for each hook (DU_TOKEN, HARDLINK_TOKEN or S3_TOKEN) and a fallback value (COMMON_TOKEN); if no token variable is defined for a hook no check is done and everybody can call it.
The Hook Definition documentation explains the options you can use for each hook; the ones we have right now do the following (a sketch of the generated configuration for the du hook is shown after the list):
- du: runs on the $WORKDIR directory, passes as first argument to the script the value of the path query parameter and sets the variable OUTPUT_FORMAT to the fixed value json (we use that to print the output of the script in JSON format instead of text).
- hardlink: runs on the $WORKDIR directory and takes no parameters.
- s3sync: runs on the $WORKDIR directory and sets a lot of environment variables from values read from the JSON encoded payload sent by the caller (all the values must be sent by the caller even if they are assigned an empty value; if they are missing the hook fails without calling the script); we also set the stream-command-output value to true to make the script show its output as it is working (we patched the webhook source to be able to use this option).
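As an illustration, the yaml section generated for the du hook could look more or less like this (a sketch using standard webhook options; the script path is an assumption and the trigger-rule part is only added when a token is defined):
- id: du
  execute-command: /webhook/hooks/du.sh
  command-working-directory: /sftp/data/scs   # the value of WEBHOOK_WORKDIR
  response-headers:
  - name: Content-Type
    value: application/json
  include-command-output-in-response: true
  pass-arguments-to-command:
  - source: url
    name: path
  pass-environment-to-command:
  - source: string
    envname: OUTPUT_FORMAT
    name: json
  trigger-rule:
    match:
      type: value
      value: <value-of-DU_TOKEN>
      parameter:
        source: header
        name: X-Webhook-Token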
The du hook script
The du hook script checks if the argument passed is a directory, computes its size using the du command and prints the result in text format or as a JSON dictionary.
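A sketch of that logic (not the original script; busybox du has no bytes option, so the size is computed from kilobytes here):
#!/bin/sh
set -e
DU_PATH="${1:-.}"
# Fail early if the argument is not a directory
if [ ! -d "$DU_PATH" ]; then
  if [ "$OUTPUT_FORMAT" = "json" ]; then
    echo "{\"error\":\"The provided PATH ('$DU_PATH') is not a directory\"}"
  else
    echo "The provided PATH ('$DU_PATH') is not a directory"
  fi
  exit 1
fi
# Compute the disk usage (busybox du prints KiB, convert to bytes)
DU_KB="$(du -sk "$DU_PATH" | awk '{ print $1 }')"
DU_BYTES="$((DU_KB * 1024))"
if [ "$OUTPUT_FORMAT" = "json" ]; then
  echo "{\"path\":\"$DU_PATH\",\"bytes\":\"$DU_BYTES\"}"
else
  echo "$DU_BYTES $DU_PATH"
fi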
The hardlink hook script
The hardlink hook script is really simple: it just runs the util-linux version of the hardlink command on its working directory.
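A sketch of it can be as short as this (the original may differ slightly):
#!/bin/sh
set -e
# Run the util-linux hardlink command against the current working directory,
# which webhook sets to the configured WORKDIR before calling the script
exec hardlink .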
We use it to reduce the size of the stored content; to manage versions of files and folders we keep each version on a separate directory, and when one or more files have not changed this script turns them into hardlinks to the same file, reducing the space used on disk.
The s3sync hook script
The s3sync hook script uses the s3fs tool to mount a bucket and synchronise data between a folder inside the bucket and a directory on the filesystem using rsync; all the values needed to execute the task are taken from environment variables.
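A sketch of the idea (an illustration, not the original script; the variable names AWS_KEY, AWS_SECRET_KEY, S3_URL, S3_BUCKET, S3_PATH and SCS_PATH are assumptions):
#!/bin/sh
set -e
# Temporary mount point for the bucket
MNT_POINT="$(mktemp -d)/s3data"
mkdir -p "$MNT_POINT"
# s3fs reads the credentials from a password file (ACCESS_KEY:SECRET_KEY)
PASSWD_S3FS="$(mktemp)"
echo "$AWS_KEY:$AWS_SECRET_KEY" > "$PASSWD_S3FS"
chmod 0400 "$PASSWD_S3FS"
# Mount the bucket and copy the S3 folder contents to the SCS folder
s3fs "$S3_BUCKET" "$MNT_POINT" -o passwd_file="$PASSWD_S3FS" -o url="$S3_URL"
echo "Mounted bucket '$S3_BUCKET' on '$MNT_POINT'"
rsync -rlptv --stats --delete "$MNT_POINT/$S3_PATH/" "$SCS_PATH/"
# Unmount the bucket and clean up
umount "$MNT_POINT"
echo "Called umount for '$MNT_POINT'"
rm -f "$PASSWD_S3FS"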
Deployment objects
The system is deployed as a StatefulSet
with one replica.
Our production deployment is done on AWS and to be able to scale we use EFS for our PersistentVolume; the idea is that the volume has no size limit, its AccessMode can be set to ReadWriteMany and we can mount it from multiple instances of the Pod without issues, even if they are in different availability zones.
For development we use k3d and we are also able to scale the StatefulSet for testing because we use a ReadWriteOnce PVC, but it points to a hostPath that is backed by a folder mounted on all the compute nodes, so in reality Pods on different k3d nodes use the same folder on the host.
secrets.yaml
The secrets file contains the files used by the mysecureshell container; they can be generated using kubernetes pods as follows (we are only creating the scs user):
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-host-keys >"./host_keys.txt"
$ kubectl run "mysecureshell" --restart='Never' --quiet --rm --stdin \
--image "stodh/mysecureshell:latest" -- gen-users-tar scs >"./users.tar"
Once we have the files we can generate the secrets.yaml
file as follows:
$ tar xf ./users.tar user_keys.txt user_pass.txt
$ kubectl --dry-run=client -o yaml create secret generic "scs-secrets" \
--from-file="host_keys.txt=host_keys.txt" \
--from-file="user_keys.txt=user_keys.txt" \
--from-file="user_pass.txt=user_pass.txt" > ./secrets.yaml
The resulting secrets.yaml will look similar to the following (the base64 data would match the content of the files, of course):
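A sketch of the structure, with the data values replaced by placeholders:
apiVersion: v1
data:
  host_keys.txt: <base64-encoded-contents-of-host_keys.txt>
  user_keys.txt: <base64-encoded-contents-of-user_keys.txt>
  user_pass.txt: <base64-encoded-contents-of-user_pass.txt>
kind: Secret
metadata:
  creationTimestamp: null
  name: scs-secrets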
pvc.yaml
The persistent volume claim for a simple deployment (one with only one instance
of the statefulSet
) can be as simple as this:
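For example (a sketch; the requested storage size here is arbitrary):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi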
On this definition we don’t set the storageClassName
to use the default one.
Volumes in our development environment (k3d)
In our development deployment we create a PersistentVolume as required by the Local Persistence Volume Static Provisioner (note that the /volumes/scs-pv directory has to be created by hand; in our k3d system we mount the same host directory on the /volumes path of all the nodes and create the scs-pv directory there before deploying the persistent volume).
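A sketch of such a PersistentVolume (the storageClassName and the capacity are assumptions; adjust them to your provisioner setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: scs-pv
  labels:
    app.kubernetes.io/name: scs
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /volumes/scs-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux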
And to make sure that everything works as expected we update the PVC definition
to add the right storageClassName
:
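In our case something like this, where local-storage is again an assumption:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi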
Volumes in our production environment (aws)
In the production deployment we don’t create the PersistentVolume
(we are
using the
aws-efs-csi-driver which
supports Dynamic Provisioning) but we add the storageClassName
(we set it
to the one mapped to the EFS
driver, i.e. efs-sc
) and set ReadWriteMany
as the accessMode
:
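A sketch of the production claim (the requested size is a placeholder, as EFS does not enforce it):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scs-pvc
  labels:
    app.kubernetes.io/name: scs
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi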
statefulset.yaml
The statefulSet definition brings together the three containers described above, the secrets and the data volume.
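An abridged sketch of what that definition could look like (the image names for the nginx and webhook containers, the UID/GID values and the volume names are assumptions; the notes below explain the relevant choices):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs
  labels:
    app.kubernetes.io/name: scs
spec:
  serviceName: scs-svc
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: scs
  template:
    metadata:
      labels:
        app.kubernetes.io/name: scs
    spec:
      containers:
        - name: nginx
          image: stodh/nginx-scs:latest    # assumption: image name
          ports:
            - containerPort: 80
              name: http
          env:
            # No AUTH_REQUEST_URI is set, so no access control is done
            - name: HTML_ROOT
              value: /sftp/data
          volumeMounts:
            - name: scs-datastore
              mountPath: /sftp
        - name: mysecureshell
          image: stodh/mysecureshell:latest
          ports:
            - containerPort: 22
              name: ssh
          securityContext:
            capabilities:
              add:
                - IPC_OWNER                # optional, see the notes below
          env:
            - name: SFTP_UID
              value: "2020"                # assumption: any non-zero UID
            - name: SFTP_GID
              value: "2020"                # assumption: any non-zero GID
          volumeMounts:
            - name: scs-secrets
              mountPath: /secrets
            - name: scs-datastore
              mountPath: /sftp
        - name: webhook
          image: stodh/webhook-scs:latest  # assumption: image name
          securityContext:
            privileged: true               # needed by s3fs-fuse, see the notes
          ports:
            - containerPort: 9000
              name: webhook-http
          env:
            - name: WEBHOOK_WORKDIR
              value: /sftp/data/scs
          volumeMounts:
            - name: devfuse
              mountPath: /dev/fuse
            - name: scs-datastore
              mountPath: /sftp
      volumes:
        - name: devfuse
          hostPath:
            path: /dev/fuse
        - name: scs-secrets
          secret:
            secretName: scs-secrets
        - name: scs-datastore
          persistentVolumeClaim:
            claimName: scs-pvc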
Notes about the containers:
- nginx: As this is an example the web server is not using an AUTH_REQUEST_URI and uses the /sftp/data directory as the root of the web (to get to the files uploaded for the scs user we will need to use /scs/ as a prefix on the URLs).
- mysecureshell: We are adding the IPC_OWNER capability to the container to be able to use some of the sftp-* commands inside it, but they are not really needed, so adding the capability is optional.
- webhook: We are launching this container in privileged mode to be able to use s3fs-fuse, as it will not work otherwise for now (see this kubernetes issue); if the functionality is not needed the container can be executed with regular privileges. Besides, as we are not enabling public access to this service, we don't define *_TOKEN variables (if required the values should be read from a Secret object).
Notes about the volumes:
- The devfuse volume is only needed if we plan to use the s3fs command on the webhook container; if not, we can remove the volume definition and its mounts.
service.yaml
To be able to access the different services of the statefulSet we publish the relevant ports (22, 80 and 9000) using a Service object.
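A sketch of that object (the port names and the selector label follow the conventions used in the other manifests):
apiVersion: v1
kind: Service
metadata:
  name: scs-svc
  labels:
    app.kubernetes.io/name: scs
spec:
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: webhook-http
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: scs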
ingress.yaml
To download the scs
files from the outside we can add an ingress object like
the following (the definition is for testing using the localhost
name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: scs-ingress
  labels:
    app.kubernetes.io/name: scs
spec:
  ingressClassName: nginx
  rules:
  - host: 'localhost'
    http:
      paths:
      - path: /scs
        pathType: Prefix
        backend:
          service:
            name: scs-svc
            port:
              number: 80
Deployment
To deploy the statefulSet
we create a namespace and apply the object
definitions shown before:
$ kubectl create namespace scs-demo
namespace/scs-demo created
$ kubectl -n scs-demo apply -f secrets.yaml
secret/scs-secrets created
$ kubectl -n scs-demo apply -f pvc.yaml
persistentvolumeclaim/scs-pvc created
$ kubectl -n scs-demo apply -f statefulset.yaml
statefulset.apps/scs created
$ kubectl -n scs-demo apply -f service.yaml
service/scs-svc created
$ kubectl -n scs-demo apply -f ingress.yaml
ingress.networking.k8s.io/scs-ingress created
Once the objects are deployed we can check that all is working using kubectl
:
$ kubectl -n scs-demo get all,secrets,ingress
NAME READY STATUS RESTARTS AGE
pod/scs-0 3/3 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/scs-svc ClusterIP 10.43.0.47 <none> 22/TCP,80/TCP,9000/TCP 21s
NAME READY AGE
statefulset.apps/scs 1/1 24s
NAME TYPE DATA AGE
secret/default-token-mwcd7 kubernetes.io/service-account-token 3 53s
secret/scs-secrets Opaque 3 39s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/scs-ingress nginx localhost 172.21.0.5 80 17s
At this point we are ready to use the system.
Usage examples
File uploads
As previously mentioned in our system the idea is to use the sftp
server from
other Pods, but to test the system we are going to do a kubectl port-forward
and connect to the server using our host client and the password we have
generated (it is on the user_pass.txt
file, inside the users.tar
archive):
$ kubectl -n scs-demo port-forward service/scs-svc 2020:22 &
Forwarding from 127.0.0.1:2020 -> 22
Forwarding from [::1]:2020 -> 22
$ PF_PID=$!
$ sftp -P 2020 scs@127.0.0.1 1
Handling connection for 2020
The authenticity of host '[127.0.0.1]:2020 ([127.0.0.1]:2020)' can't be \
established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:2020' (ED25519) to the list of known \
hosts.
scs@127.0.0.1's password: **********
Connected to 127.0.0.1.
sftp> ls -la
drwxr-xr-x 2 sftp sftp 4096 Sep 25 14:47 .
dr-xr-xr-x 3 sftp sftp 4096 Sep 25 14:36 ..
sftp> !date -R > /tmp/date.txt 2
sftp> put /tmp/date.txt .
Uploading /tmp/date.txt to /date.txt
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt
sftp> ln date.txt date.txt.1 3
sftp> ls -l
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
sftp> put /tmp/date.txt date.txt.2 4
Uploading /tmp/date.txt to /date.txt.2
date.txt 100% 32 27.8KB/s 00:00
sftp> ls -l 5
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt
-rw-r--r-- 2 sftp sftp 32 Sep 25 15:21 date.txt.1
-rw-r--r-- 1 sftp sftp 32 Sep 25 15:21 date.txt.2
sftp> exit
$ kill "$PF_PID"
[1] + terminated kubectl -n scs-demo port-forward service/scs-svc 2020:22
1. We connect to the sftp service on the forwarded port with the scs user.
2. We put a file we have created on the host on the remote directory.
3. We create a hard link of the uploaded file.
4. We put a second copy of the file we created locally.
5. On the file list we can see that the first two files have two hardlinks.
File retrievals
If our ingress is configured right we can download the date.txt
file from the
URL http://localhost/scs/date.txt:
$ curl -s http://localhost/scs/date.txt
Sun, 25 Sep 2022 17:21:51 +0200
Use of the webhook container
To finish this post we are going to show how we can call the hooks
directly,
from a CronJob
and from a Job
.
Direct script call (du)
In our deployment the direct calls are done from other Pods; to simulate it we are going to do a port-forward and call the script with an existing PATH (the root directory) and a bad one:
$ kubectl -n scs-demo port-forward service/scs-svc 9000:9000 >/dev/null &
$ PF_PID=$!
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=.")"
$ echo $JSON
{"path":"","bytes":"4160"}
$ JSON="$(curl -s "http://localhost:9000/hooks/du?path=foo")"
$ echo $JSON
{"error":"The provided PATH ('foo') is not a directory"}
$ kill $PF_PID
As we only have files on the base directory we print the disk usage of the .
PATH and the output is in json
format because we export OUTPUT_FORMAT
with
the value json
on the webhook
configuration.
Cronjobs (hardlink)
As explained before, the webhook container can be used to run cronjobs; the one we use calls the hardlink script each minute from an alpine container (that setup is for testing, obviously).
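A sketch of such a CronJob (saved as webhook-cronjob.yaml, the file applied below; calling the hook with wget through the service is an assumption about how the script is triggered):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hardlink
  labels:
    cronjob: hardlink
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob: hardlink
        spec:
          restartPolicy: Never
          containers:
            - name: hardlink
              image: alpine:3.16.2
              command:
                - wget
                - "-q"
                - "-O-"
                - "http://scs-svc:9000/hooks/hardlink"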
The following console session shows how we create the object, allow a couple of executions and remove it (in production we keep it, but it runs once a day instead of each minute):
$ kubectl -n scs-demo apply -f webhook-cronjob.yaml 1
cronjob.batch/hardlink created
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" -w 2
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Pending 0 0s
hardlink-27735351-zvpnb 0/1 ContainerCreating 0 0s
hardlink-27735351-zvpnb 0/1 Completed 0 2s
^C
$ kubectl -n scs-demo logs pod/hardlink-27735351-zvpnb 3
Mode: real
Method: sha256
Files: 3
Linked: 1 files
Compared: 0 xattrs
Compared: 1 files
Saved: 32 B
Duration: 0.000220 seconds
$ sleep 60
$ kubectl -n scs-demo get pods -l "cronjob=hardlink" 4
NAME READY STATUS RESTARTS AGE
hardlink-27735351-zvpnb 0/1 Completed 0 83s
hardlink-27735352-br5rn 0/1 Completed 0 23s
$ kubectl -n scs-demo logs pod/hardlink-27735352-br5rn 5
Mode: real
Method: sha256
Files: 3
Linked: 0 files
Compared: 0 xattrs
Compared: 0 files
Saved: 0 B
Duration: 0.000070 seconds
$ kubectl -n scs-demo delete -f webhook-cronjob.yaml 6
cronjob.batch "hardlink" deleted
1. This command creates the cronjob object.
2. This checks the pods with our cronjob label; we interrupt it once we see that the first run has been completed.
3. With this command we see the output of the execution; as this is the first execution we see that date.txt.2 has been replaced by a hardlink (the summary does not name the file, but it is the only candidate, knowing the contents from the original upload).
4. After waiting a little bit we check the pods executed again to get the name of the latest one.
5. The log now shows that nothing was done.
6. As this is a demo, we delete the cronjob.
Jobs (s3sync)
To synchronise the contents of a directory in an S3 bucket with the SCS filesystem we use a Job that calls the s3sync hook.
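A sketch of such a Job (saved as webhook-job.yaml, the file applied below; posting the payload with curl from an alpine container and mounting the JSON from a secret under /secrets are assumptions):
apiVersion: batch/v1
kind: Job
metadata:
  name: s3sync
  labels:
    cronjob: s3sync
spec:
  template:
    metadata:
      labels:
        cronjob: s3sync
    spec:
      restartPolicy: Never
      containers:
        - name: s3sync
          image: alpine:3.16.2
          command:
            - sh
            - -c
            - |
              apk add --no-cache curl >/dev/null
              curl -s -X POST -H "Content-Type: application/json" \
                --data "@/secrets/s3sync.json" \
                "http://scs-svc:9000/hooks/s3sync"
          volumeMounts:
            - name: webhook-job-secrets
              mountPath: /secrets
              readOnly: true
      volumes:
        - name: webhook-job-secrets
          secret:
            secretName: webhook-job-secrets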
The file with parameters for the script must be something like this:
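The exact keys depend on how the s3sync hook maps the payload to environment variables; as an illustration, using the names assumed in the script sketch above (the bucket and folder match the ones shown in the logs below):
{
  "aws_key": "<aws-access-key-id>",
  "aws_secret_key": "<aws-secret-access-key>",
  "s3_url": "https://s3.eu-central-1.amazonaws.com",
  "s3_bucket": "s3fs-test",
  "s3_path": "test",
  "scs_path": "test"
}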
Once we have both files we can run the Job as follows:
$ kubectl -n scs-demo create secret generic webhook-job-secrets \ 1
--from-file="s3sync.json=s3sync.json"
secret/webhook-job-secrets created
$ kubectl -n scs-demo apply -f webhook-job.yaml 2
job.batch/s3sync created
$ kubectl -n scs-demo get pods -l "cronjob=s3sync" 3
NAME READY STATUS RESTARTS AGE
s3sync-zx2cj 0/1 Completed 0 12s
$ kubectl -n scs-demo logs s3sync-zx2cj 4
Mounted bucket 's3fs-test' on '/root/tmp.jiOjaF/s3data'
sending incremental file list
created directory ./test
./
kyso.png
Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74
sent 15,183 bytes received 74 bytes 30,514.00 bytes/sec
total size is 15,075 speedup is 0.99
Called umount for '/root/tmp.jiOjaF/s3data'
Script exit code: 0
$ kubectl -n scs-demo delete -f webhook-job.yaml 5
job.batch "s3sync" deleted
$ kubectl -n scs-demo delete secrets webhook-job-secrets 6
secret "webhook-job-secrets" deleted
1. Here we create the webhook-job-secrets secret that contains the s3sync.json file.
2. This command runs the job.
3. Checking the cronjob=s3sync label we get the Pods executed by the job.
4. Here we print the logs of the completed job.
5. Once we are finished we remove the Job.
6. And also the secret.
Final remarks
This post has been longer than I expected, but I believe it can be useful for someone; in any case, next time I’ll try to explain something shorter or will split it into multiple entries.