Last week I decided I wanted to try out forgejo actions to build this blog instead of using webhooks, so I looked at the documentation and started playing with it until I had it working as I wanted.

This post describes how I've installed and configured a forgejo runner, how I've added an `oci` organization to my instance to build, publish and mirror container images, and how I've added a couple of additional organizations (`actions` and `docker` for now) to mirror interesting actions.

The changes made to build the site using actions will be documented in a separate post, as I'll be using this entry to test the new setup on the blog project.
## Installing the runner
The first thing I've done is to install a runner on my server; I decided to use the OCI image installation method, as it seemed to be the easiest and fastest one.

The commands I've used to set up the runner are the following:

```sh
$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh
```
The `setup-runner.sh` script does multiple things:

- create a `forgejo-runner` user and group
- create the necessary directories for the runner
- create a `.runner` file with a predefined secret and the `docker` label

The `setup-runner.sh` code is available here.
After running the script the runner has to be registered with the forgejo server; it can be done using the following command:

```sh
$ forgejo forgejo-cli actions register --name "$RUNNER_NAME" \
    --secret "$FORGEJO_SECRET"
```
The `RUNNER_NAME` variable is defined in the `setup-runner.sh` script and the `FORGEJO_SECRET` must match the value used in the `.runner` file.
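As far as I know the shared secret is expected to be a 40-character hexadecimal string; a simple way to generate a valid one (assuming `openssl` is available, and the variable name is just the one used above) is:

```sh
# Generate a random 40-character hexadecimal secret for the runner;
# openssl rand -hex 20 prints 20 random bytes as 40 hex characters.
FORGEJO_SECRET="$(openssl rand -hex 20)"
echo "$FORGEJO_SECRET"
```

The same value then goes into the `.runner` file and the registration command.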
### Starting it with docker-compose
To launch the runner I'm going to use a `docker-compose.yml` file that starts two containers: a docker-in-docker (`dind`) service to run the containers used by the workflow jobs, and another one that runs the `forgejo-runner` itself.
The initial version used a TCP port to communicate with the `dockerd` server from the runner, but when I tried to build images from a workflow I noticed that the containers launched by the runner were not going to be able to execute another `dockerd` inside the `dind` one and, even if they were, it was going to be computationally expensive.

To avoid the issue I modified the `dind` service to use a unix socket on a shared volume that can be used by the `runner` service to communicate with the daemon; the volume is also re-shared with the job containers so the `dockerd` server can be used from them to build images.
**Warning:** Using the same docker server that runs the jobs from the jobs themselves has security implications, but this instance is for a home server where I am the only user, so I am not worried about it, and this way I can save some resources (in fact, I could use the host docker server directly instead of a `dind` service, but in case I want to run other containers on the host I prefer to keep the one used for the runner isolated from it).

For those concerned about sharing the same server, an alternative would be to launch a second `dockerd` only for the jobs (i.e. `actions-dind`) using the same approach (the volume with its socket would have to be shared with the `runner` service so it can be re-shared, but the `runner` itself does not need to use it).
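A sketch of what that extra service could look like in the compose file (the service name and socket path are illustrative, I have not deployed this variant):

```yaml
# Hypothetical additional entry under the "services:" key: a second dind
# daemon whose socket would be re-shared only with the job containers.
actions-dind:
  image: docker:dind
  container_name: 'actions-dind'
  privileged: 'true'
  command: ['dockerd', '-H', 'unix:///actions-dind/docker.sock', '-G', '$RUNNER_GID']
  restart: 'unless-stopped'
  volumes:
    - ./actions-dind:/actions-dind
```

With this variant `/actions-dind/docker.sock` would be the path to allow in the runner's `valid_volumes` configuration instead of the main daemon's socket.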
The final `docker-compose.yaml` file is as follows:
```yaml
services:
  dind:
    image: docker:dind
    container_name: 'dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./dind:/dind
  runner:
    image: 'data.forgejo.org/forgejo/runner:6.2.2'
    links:
      - dind
    depends_on:
      dind:
        condition: service_started
    container_name: 'runner'
    environment:
      DOCKER_HOST: 'unix:///dind/docker.sock'
    user: $RUNNER_UID:$RUNNER_GID
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      - ./dind:/dind
    restart: 'unless-stopped'
    command: '/bin/sh -c "sleep 5; forgejo-runner daemon -c /config.yaml"'
```
There are multiple things to comment on about this file:

- The `dockerd` server is started with the `-H unix:///dind/docker.sock` flag to use the unix socket to communicate with the daemon instead of a TCP port (as said, it is faster and allows us to share the socket with the containers started by the runner).
- We run the `dockerd` daemon with the `RUNNER_GID` group so the runner can communicate with it (the socket gets that group, which is the same one used by the runner).
- The runner container mounts three volumes: the `data` directory, the `dind` folder where docker creates the unix socket, and a `config.yaml` file used by us to change the default runner configuration.
The `config.yaml` file was originally created using the `forgejo-runner` image itself:

```sh
$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
    forgejo-runner generate-config > config.yaml
```
The changes to it are minimal: the runner `capacity` has been increased to `2` (that allows it to run two jobs at the same time) and the `/dind/docker.sock` value has been added to the `valid_volumes` key to allow the containers launched by the runner to mount it when needed; the diff against the default version is as follows:
```diff
@@ -13,7 +13,8 @@
   # Where to store the registration result.
   file: .runner
   # Execute how many tasks concurrently at the same time.
-  capacity: 1
+  # STO: Allow 2 concurrent tasks
+  capacity: 2
   # Extra environment variables to run jobs.
   envs:
     A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
   # If you want to allow any volume, please use the following configuration:
   # valid_volumes:
   #   - '**'
-  valid_volumes: []
+  # STO: Allow to mount the /dind/docker.sock on the containers
+  valid_volumes:
+    - /dind/docker.sock
   # overrides the docker client host with the specified one.
   # If "-" or "", an available docker host will automatically be found.
   # If "automount", an available docker host will automatically be found and ...
```
To start the runner we export the `RUNNER_UID` and `RUNNER_GID` variables and call `docker compose up` to start the containers in the background:

```sh
$ RUNNER_UID="$(id -u forgejo-runner)" RUNNER_GID="$(id -g forgejo-runner)" \
    docker compose up -d
```
If the server was configured correctly we are now able to start using actions with this runner.
## Preparing the system to run things locally
To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain our own actions and container images and mirror remote ones.

The rationale behind the use of mirrors is that they greatly reduce the need to connect to remote servers to download the actions and images, which is good for performance and security reasons.

In fact, we are going to build our own images for some things, installing the tools we want without needing to do it over and over again in the workflow jobs.
### Mirrored actions
The actions we are mirroring are on the `actions` and `docker` organizations; we have created the following ones for now (the mirrors were created using the forgejo web interface and we have manually disabled all the forgejo modules except the `code` one for them):
- actions/checkout: Action for checking out a repo.
- docker/login-action: Action to login against a Docker registry.
- docker/setup-buildx-action: Action to set up Docker Buildx.
- docker/build-push-action: Action to build and push Docker images with Buildx.
To use our actions by default (i.e., without needing to add the server URL on the `uses` keyword) we have added the following section to the `app.ini` file of our forgejo server:
```ini
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net
```
### Setting up credentials to push images
To be able to push images to the `oci` organization I've created a token with `package:write` permission for my own user, because I'm a member of the organization and authorized to publish packages on it (a different user could be created, but as I said this is for personal use, so there is no need to complicate things for now).
To allow the use of those credentials on the actions I have added a secret (`REGISTRY_PASS`) and a variable (`REGISTRY_USER`) to the `oci` organization.
I've also logged in with my local docker client to be able to push images to the `oci` group by hand, as it is needed for bootstrapping the system (as I'm using local images in the workflows I need to push them to the server before running the ones that are used to build the images).
### Local and mirrored images
Our images will be stored in the packages section of a new organization called `oci`; inside it we have created two projects (`images` and `mirrors`) that use forgejo actions to keep things in shape.

In the next sections we are going to describe the actions and images we have created and mirrored from those projects.
## The `oci/images` project
The `images` project is a monorepo that contains the source files for the images we are going to build and a couple of actions.

The image sources are in subdirectories of the repository; to be considered an image, a folder has to contain a `Dockerfile` that will be used to build the image.
The repository has two workflows:

- `build-image-from-tag`: Workflow to build, tag and push an image to the `oci` organization.
- `multi-semantic-release`: Workflow to create tags for the images using the `multi-semantic-release` tool.
As the workflows are already configured to use some of our images we pushed some of them from a checkout of the repository using the following commands:
```sh
registry="forgejo.mixinet.net/oci"
for img in alpine-mixinet node-mixinet multi-semantic-release; do
  docker build -t $registry/$img:1.0.0 $img
  docker tag $registry/$img:1.0.0 $registry/$img:latest
  docker push $registry/$img:1.0.0
  docker push $registry/$img:latest
done
```
In the next subsections we will describe what the workflows do and show their source code.
### The `build-image-from-tag` workflow
This workflow uses a `docker` client to build an image from a tag on the repository with the format `image-name-v[0-9].[0-9].[0-9]+`.
As the `runner` is executed on a container (instead of using `lxc`) it seemed unreasonable to run another `dind` container from that one; that is why, after some tests, I decided to share the `dind` service server socket with the `runner` container and enabled the option to mount it also on the containers launched by the runner when needed (I only do it on the `build-image-from-tag` action for now).
The action was configured to run using a trigger or when new tags with the right format were created, but when the tag is created by `multi-semantic-release` the trigger does not work for some reason, so now it only runs the job on triggers and checks if it was launched for a tag with the right format on the job itself.
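The check and the name/version parsing the job performs can be sketched in plain shell (the tag value here is invented for illustration):

```sh
# Simulate the job's checks against a made-up ref.
GITHUB_REF="refs/tags/alpine-mixinet-v1.0.1"
GITHUB_REF_NAME="${GITHUB_REF#refs/tags/}"

# The job's `if` condition boils down to: is it a tag and does it contain '-v'?
case "$GITHUB_REF" in
refs/tags/*-v*) echo "ref '$GITHUB_REF' would trigger a build" ;;
*) echo "ref '$GITHUB_REF' would be ignored" ;;
esac

# Same parameter expansions as the workflow's job_data step
echo "img_name: ${GITHUB_REF_NAME%%-v*}"  # -> alpine-mixinet
echo "img_tag:  ${GITHUB_REF_NAME##*-v}"  # -> 1.0.1
```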
The source code of the action is as follows:
```yaml
name: build-image-from-tag
on:
  workflow_dispatch:
jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(
            echo "${{ github.server_url }}" | sed -e 's%https://%%'
          )"
          echo "::set-output name=oci_registry_prefix::$(
            echo "${{ github.server_url }}/oci" | sed -e 's%https://%%'
          )"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/
```
Some notes about this code:

- The `if` condition of the `build` job is not perfect, but it is good enough to avoid wrong uses as long as nobody uses manual tags with the wrong format and expects things to work (it checks if the `REGISTRY_USER` and `REGISTRY_PASS` variables are set, if the `ref` is a tag and if it contains the `-v` string).
- To be able to access the `dind` socket we mount it on the container using the `options` key in the `container` section of the job (this only works if supported by the runner configuration, as explained before).
- We use the `job_data` step to get information about the image from the tag and the registry URL from the environment variables; it is executed first because all that information is available without checking out the repository.
- We use the `build_data` step to get the build dir and `Dockerfile` paths from the repository (right now we are assuming fixed paths and checking if the `Dockerfile` exists, but in the future we could use a configuration file to get them, if needed).
- As we are using a docker daemon that is already running there is no need to use the docker/setup-docker-action to install it.
- In the build and push step we pass the `OCI_REGISTRY_PREFIX` build argument to the `Dockerfile` to be able to use it on the `FROM` instruction (we are using it in our images).
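As a sketch of how such a `Dockerfile` might consume the build argument (the image name and the installed package are illustrative, not the actual contents of the repository):

```dockerfile
# OCI_REGISTRY_PREFIX is passed by the build-push step; an ARG must be
# declared before FROM to be usable in it. Defaulting it to empty keeps
# local builds against public registries working.
ARG OCI_REGISTRY_PREFIX=""
FROM ${OCI_REGISTRY_PREFIX}alpine-mixinet:latest
RUN apk add --no-cache curl
```

Note that the workflow passes the prefix with a trailing slash, so the `FROM` line concatenates it directly with the image name.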
### The `multi-semantic-release` workflow
This workflow is used to run the `multi-semantic-release` tool on pushes to the `main` branch.
It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a `Dockerfile` using a couple of template files available in the repository's `.forgejo` directory) and run the `multi-semantic-release` tool to create tags and push them to the repository if new versions are to be built.
Initially we assumed that the tag creation pushed by `multi-semantic-release` would be enough to run the `build-image-from-tag` workflow, but as it didn't work we removed the rule to run that action on tag creation and added code to trigger it using an API call for the newly created tags (we get them from the output of the `multi-semantic-release` execution).
The source code of the action is as follows:
```yaml
name: multi-semantic-release
on:
  push:
    branches:
      - 'main'
jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo "  - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file validating that it is a good json file with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(
            sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
              .multi-semantic-release.log
          )"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi
```
Notes about this code:

- The use of the `tmpl` tool to process the `multi-semantic-release` configuration templates comes from previous uses; in this case we could use a different approach (i.e. `envsubst` could be used), but we left it because it keeps things simple and can be useful in the future if we want to do more complex things with the template files.
- We use `tee` to show the output of the `multi-semantic-release` execution and dump it to a file at the same time.
- We get the list of pushed `tags` using `sed` against the output of the `multi-semantic-release` execution, and for each one found we use `curl` to call the forgejo API to trigger the build job; as the call is against the same project we can use the `GITHUB_TOKEN` generated for the workflow to do it, without creating a user token that has to be shared as a secret.
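To illustrate the extraction outside of the workflow, here is the same `sed` expression run against an invented log line with the shape the expression expects (two bracketed fields followed by the "Published release X.Y.Z on" text; the real `multi-semantic-release` output lines may differ):

```sh
# Made-up log line: timestamp, package name, publication message.
line='[10:42:13] [alpine-mixinet] > Published release 1.0.1 on default channel'
echo "$line" |
  sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p'
# -> alpine-mixinet-v1.0.1
```

The result matches the `image-name-vX.Y.Z` tag format used by the `build-image-from-tag` workflow.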
The `.forgejo/package.json.tmpl` file is the following one:
```json
{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": {{ .images | toJson }}
}
```
As can be seen it only needs a list of paths to the images as an argument (the file we generate contains the names and paths, but it could be simplified).
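For example, rendering the template with the image folders we pushed earlier would produce something like the following (the exact whitespace depends on the `jq` pass):

```json
{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": ["alpine-mixinet", "node-mixinet", "multi-semantic-release"]
}
```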
And the `.forgejo/ws-package.json.tmpl` file is the following one:
```json
{
  "name": "{{ .img_name }}",
  "license": "UNLICENSED",
  "release": {
    "plugins": [
      [
        "@semantic-release/commit-analyzer",
        {
          "preset": "conventionalcommits",
          "releaseRules": [
            { "breaking": true, "release": "major" },
            { "revert": true, "release": "patch" },
            { "type": "feat", "release": "minor" },
            { "type": "fix", "release": "patch" },
            { "type": "perf", "release": "patch" }
          ]
        }
      ],
      [
        "semantic-release-replace-plugin",
        {
          "replacements": [
            {
              "files": [ "{{ .img_path }}/msr.yaml" ],
              "from": "^version:.*$",
              "to": "version: ${nextRelease.version}",
              "allowEmptyPaths": true
            }
          ]
        }
      ],
      [
        "@semantic-release/git",
        {
          "assets": [ "msr.yaml" ],
          "message": "ci(release): {{ .img_name }}-v${nextRelease.version}\n\n${nextRelease.notes}"
        }
      ]
    ],
    "branches": [ "main" ]
  }
}
```
## The `oci/mirrors` project
The repository contains a template for the configuration file we are going to use with `regsync` (`regsync.envsubst.yml`) to mirror images from remote registries, and a workflow that generates the real configuration file from the template and runs the tool.
The initial version of the `regsync.envsubst.yml` file is prepared to mirror `alpine` containers from version `3.21` to `3.29` (we explicitly deny version `3.20`) and needs the `forgejo.mixinet.net/oci/node-mixinet:latest` image to run (as explained before, it was pushed manually to the server):
```yaml
version: 1
creds:
  - registry: "$REGISTRY"
    user: "$REGISTRY_USER"
    pass: "$REGISTRY_PASS"
sync:
  - source: alpine
    target: $REGISTRY/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2\\d+"
        - "3\\.2\\d+\\.\\d+"
      deny:
        - "3\\.20"
        - "3\\.20\\.\\d+"
```
### The `mirror` workflow
The `mirror` workflow creates a configuration file from the template, replacing the `REGISTRY` environment variable (computed by removing the protocol from the `server_url`), the `REGISTRY_USER` organization variable and the `REGISTRY_PASS` secret using the `envsubst` command, and then runs the `regsync` tool to mirror the images using that configuration file.
The action is configured to run daily, on push events when the `regsync.envsubst.yml` file is modified on the `main` branch, and it can also be triggered manually.
The source code of the action is as follows:
```yaml
name: mirror
on:
  schedule:
    - cron: '@daily'
  push:
    branches:
      - main
    paths:
      - 'regsync.envsubst.yml'
  workflow_dispatch:
jobs:
  mirror:
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Sync images
        run: |
          REGISTRY="$(echo "${{ github.server_url }}" | sed -e 's%https://%%')" \
          REGISTRY_USER="${{ vars.REGISTRY_USER }}" \
          REGISTRY_PASS="${{ secrets.REGISTRY_PASS }}" \
            envsubst <regsync.envsubst.yml >.regsync.yml
          regsync --config .regsync.yml once
          rm -f .regsync.yml
```
## Conclusion
We have installed a `forgejo-runner` and configured it to run actions for our own server, and things are working fine.

This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining personal projects and playing with things without needing SaaS platforms like github or gitlab.