
Sergio Talens-Oliag Technical Blog


Over the past few weeks I’ve been developing and using a personal command-line tool called gwt (Git Worktree) to manage Git repositories using worktrees. This article explains what the tool does, how it evolved, and how I used GitHub Copilot CLI to develop it (in fact, part of the point of building the script was to test Copilot itself).

The Problem: Managing Multiple Branches

I was working on a project with multiple active branches, including orphans; the regular branches are for fixes or features, while the orphans are used to keep copies of remote documents or to store processed versions of those documents. The project also uses a special orphan branch that contains the scripts and the CI/CD configuration used to store and process the external documents (it lives on a separate branch to avoid mixing its operation with the main project code). The plan is to trigger a pipeline against the special branch from remote projects to create or update the doc branch for each of them in our git repository, retrieving artifacts from the remote projects to get the files and put them on an orphan branch (initially I added new commits after each update, but I changed the system to use force pushes and keep only one commit, as the history is not really needed). The original documents have to be changed, so, after ingesting them, we run a script that modifies them and adds or updates another branch with the processed version; the contents of that branch are used by the main branch build process (there we use git fetch and git archive to retrieve its contents).

While working on the scripts to manage the orphan branches I discovered git’s worktree feature, which allows me to keep multiple branches checked out in parallel using a single .git folder, removing the need for git switch and git stash when changing between branches (until now I’ve been a heavy user of those commands). Reading about it I found that a lot of people use worktrees with the help of a wrapper script to simplify their management. After looking at one or two posts and the related scripts I decided to create my own, using a specific directory structure to simplify things. That’s how I started to work on the gwt script; as I also wanted to test Copilot, I decided to build it with its help (I have a pro license at work and wanted to play with the CLI version instead of the one integrated into an editor, as I didn’t want to learn a lot of new keyboard shortcuts).

The gwt Philosophy: Opinionated and Transparent

gwt enforces a simple, filesystem-visible model:

- Exactly one bare repository named bare.git (treated as an implementation detail)
- One worktree directory per branch, where the directory name matches the branch name
- Single responsibility: gwt doesn’t try to be a general git wrapper; it only handles operations that map cleanly to this layout

...
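
To give an idea of what the script automates, here is a minimal sketch of the layout it enforces using plain git commands; the repository URL, directory names and branch names are illustrative, not taken from the post:

```sh
# Create the bare repository that gwt treats as an implementation
# detail (URL and directory names are hypothetical).
git clone --bare https://forge.example.com/user/project.git project/bare.git
cd project

# One worktree directory per branch, named after the branch
# (feature-x is assumed to already exist in the repository).
git -C bare.git worktree add ../main main
git -C bare.git worktree add ../feature-x feature-x

# List all the checkouts that share the single git object store.
git -C bare.git worktree list
```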
When I configured forgejo-actions I used a docker-compose.yaml file to run the runner alongside a dind container configured in privileged mode so it could build images; as mentioned in my post about that setup, the use of privileged mode is not a big issue for my use case, but it reduces the overall security of the installation. On a work chat the other day someone mentioned that the GitLab documentation about using kaniko says it is no longer maintained (see the kaniko issue #3348), so we should look into alternatives for kubernetes clusters. I never liked kaniko much, but it works without privileged mode and does not need a daemon, which are good reasons to use it; if it is deprecated, though, it makes sense to look into alternatives, and today I tried some of them with my forgejo-actions setup.

I was going to try buildah and podman, but it seems that they need adjustments on the systems running them:

- When I tried to use buildah inside a docker container on Ubuntu I ran into the problems described in the buildah issue #1901, so I moved on.
- Reading the podman documentation I saw that I need to export the fuse device to run it inside a container (see the sketch below) and, as I had found other options, I skipped it as well.

...
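
For reference, this is roughly what the podman documentation suggests for running it inside a container without full privileged mode; this is only a sketch, the build command and image tag are illustrative, and additional options may be needed depending on the host:

```sh
# Pass the fuse device through instead of using --privileged
# (quay.io/podman/stable is the image the podman docs use; the
# registry, tag and build context here are hypothetical).
docker run --rm --device /dev/fuse \
  --user podman \
  -v "$PWD":/src -w /src \
  quay.io/podman/stable \
  podman build -t registry.example.com/test/image:latest .
```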
After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I’m more interested in the kluctl approach right now). While reading an entry on the project blog about Cluster API I somehow ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or for running CI/CD tests before deploying things on common clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running at any given time.

In this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created in the posts about argocd) as the host and will show how to:

- use its ingress (in our case traefik) to access the API of the virtual cluster (this removes the need to use the vcluster connect command to access it with kubectl),
- publish the ingress objects deployed on the virtual cluster on the host ingress, and
- use the sealed-secrets controller of the host cluster to manage the virtual cluster secrets.

...
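
As a starting point, creating and reaching a virtual cluster from the command line looks roughly like this (cluster and namespace names are illustrative); the ingress setup described in the post replaces the connect step:

```sh
# Create a virtual cluster inside the host cluster (the name and
# namespace are hypothetical).
vcluster create my-vcluster --namespace my-vcluster

# Without an ingress in front of the virtual API server, access goes
# through `vcluster connect`, which is the step we want to avoid.
vcluster connect my-vcluster --namespace my-vcluster -- kubectl get pods -A
```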
As a followup to my post about the use of argocd-autopilot, I’m going to deploy various applications to the cluster using Argo CD from the same repository we used in the previous post.

For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated, but the application Pod was not, because there was no change in the argocd-server deployment); our original fix was to kill the pod manually, but manual operations are something we want to avoid. The solution proposed in the helm documentation for this kind of issue is to add annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets they use; this way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a rollout of the pods is triggered.

In this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue with updates of ConfigMaps and Secrets. To do it we will execute the following tasks:

1. Deploy the Reloader controller to our cluster. It is a tool that watches for changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work, as shown in the sketch after this list).
2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

...
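
As a taste of what is coming, the default Reloader behaviour is opted into per object with an annotation like the following (the deployment name is illustrative; the annotation key comes from the Reloader project):

```sh
# Opt a Deployment into automatic rollouts when the ConfigMaps or
# Secrets it references change (the deployment name is hypothetical).
kubectl annotate deployment my-app reloader.stakater.com/auto="true"
```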
For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects I was working on. As I now have some spare time, I’ve decided to play a little with Argo CD, Flux and Kluctl to test them and be able to use one of them in a real project in the future if it looks appropriate.

In this post I will use Argo CD Autopilot to install argocd on a local k3d cluster created with OpenTofu, to test the autopilot approach of managing argocd and to test the tool itself (as it manages argocd using a git repository, it can be used to test argocd as well).

Installing tools locally with arkade

Recently I’ve been using the arkade tool to install kubernetes related applications on Linux servers and containers; I usually get the applications with it and install them on the /usr/local/bin folder. For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them on the $HOME/.arkade/bin folder if missing (I’m assuming that docker is already available, as it is not installable with arkade):

```sh
#!/bin/sh

# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"

# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Install or update arkade
if command -v arkade >/dev/null; then
  echo "Trying to update the arkade application"
  sudo arkade update
else
  echo "Installing the arkade application"
  curl -sLS https://get.arkade.dev | sudo sh
fi

echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
  app_path="$(command -v $app)" || true
  if [ "$app_path" ]; then
    echo "The application '$app' already available on '$app_path'"
  else
    arkade get "$app"
  fi
done

cat <<EOF

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF
```

...
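
Once the tools are in place, the autopilot bootstrap the post builds towards looks roughly like this; the repository URL and token are placeholders, not values from the post:

```sh
# argocd-autopilot reads the forge token from the GIT_TOKEN environment
# variable and creates the repository if it does not exist (URL and
# token below are hypothetical).
export GIT_TOKEN="<your-forge-token>"
argocd-autopilot repo bootstrap --repo "https://github.com/example/argocd-autopilot-repo"
```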