GitLab CI/CD Tips: Using Rule Templates

This post describes how to define and use rule templates with semantic names using extends or !reference tags, how to define manual jobs using the same templates and how to use gitlab-ci inputs as macros to give names to regular expressions used by rules.

Basic rule templates

I keep my templates in a rules.yml file stored on a common repository used from different projects, as I mentioned in my previous post, but they can be defined anywhere; the important thing is that the files that need them include their definition somehow.

The first version of my rules.yml file was as follows:

```yaml
.rules_common:
  # Common rules; we include them from others instead of forcing a workflow
  rules:
    # Disable branch pipelines while there is an open merge request from it
    - if: >-
        $CI_COMMIT_BRANCH &&
        $CI_OPEN_MERGE_REQUESTS &&
        $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never

.rules_default:
  # Default rules, we need to add the when: on_success to make things work
  rules:
    - !reference [.rules_common, rules]
    - when: on_success
```
...
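As a quick illustration of how jobs might consume these templates, here is a minimal sketch; the job names and the manual variant are illustrative assumptions, not the post's exact definitions:

```yaml
# Hypothetical jobs using the templates above (assumes rules.yml is included).
build-job:
  extends: .rules_default        # reuse the default rules by their semantic name
  script:
    - echo "Runs on_success with the common rules applied"

deploy-job:
  # Same common rules, but the final rule makes the job manual.
  rules:
    - !reference [.rules_common, rules]
    - when: manual
  script:
    - echo "Runs only when triggered manually"
```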

September 24, 2023 · 8 min · Sergio Talens-Oliag

GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description

When a .gitlab-ci.yml file includes files from a different repository its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want.

As an example, suppose that we have the following script in the assets/ folder of the common repository:

dumb.sh

```sh
#!/bin/sh
echo "The script arguments are: '$@'"
```

If we run the following job on the common repository:

```yaml
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
```
...
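To make the problem concrete, here is a hedged sketch of how another project might include those definitions; the project path, ref and file name are placeholders, not real values:

```yaml
# Hypothetical .gitlab-ci.yml of a project that includes the common repository.
include:
  - project: 'my-group/common-ci'   # placeholder project path
    ref: main
    file: '/jobs.yml'               # placeholder file defining the job above

# The included job runs $CI_PROJECT_DIR/assets/dumb.sh, but the assets/
# folder only exists in the common repository, not in the including
# project's checkout, so the script is not available at runtime.
```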

September 17, 2023 · 8 min · Sergio Talens-Oliag

Testing cilium with k3d and kind

This post describes how to deploy cilium (and hubble) using docker on a Linux system with k3d or kind to test it as CNI and Service Mesh.

I wrote some scripts to do a local installation and evaluate cilium to use it at work (in fact we are using cilium on an EKS cluster now), but I thought it would be a good idea to share my original scripts in this blog just in case they are useful to somebody, at least for playing a little with the technology.

Links

As there is no point in explaining here all the concepts related to cilium, I'm providing some links for the reader interested in reading about it:

- What is CNI?
- What is Cilium?
- What is eBPF?
- What is Hubble?
- Why use Cilium with Kubernetes?
...
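For context, a minimal sketch of a kind cluster configuration that leaves the CNI to cilium; this is illustrative and not taken from the post's scripts:

```yaml
# Hypothetical kind config: disable the default CNI so cilium can be
# installed as the cluster CNI afterwards (e.g. with helm or the cilium CLI).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  # kubeProxyMode: none   # optionally let cilium also replace kube-proxy
nodes:
  - role: control-plane
  - role: worker
```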

July 18, 2023 · 6 min · Sergio Talens-Oliag

Shared networking for Virtual Machines and Containers

This entry explains how I have configured a Linux bridge, dnsmasq and iptables to be able to run and communicate different virtualization systems and containers on laptops running Debian GNU/Linux. I've used different variations of this setup for a long time with VirtualBox and KVM for the Virtual Machines and Linux-VServer, OpenVZ, LXC and lately Docker or Podman for the Containers.

Required packages

I'm running Debian Sid with systemd and network-manager to configure the WiFi and Ethernet interfaces, but for the bridge I use bridge-utils with ifupdown (as I said, this setup is old; I guess ifupdown2 and ifupdown-ng will work too). To start and stop the DNS and DHCP services and add NAT rules when the bridge is brought up or down I execute a script that uses:

- ip from iproute2 to get the network information,
- dnsmasq to provide the DNS and DHCP services (currently only the dnsmasq-base package is needed and it is recommended by network-manager, so it is probably installed),
- iptables to configure NAT (for now docker kind of forces me to keep using iptables, but at some point I'd like to move to nftables).
...
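A minimal sketch of what the "bridge up" part of such a script might run, written here purely as an illustration; the bridge name and DHCP range are assumptions, and the bridge itself is assumed to be created by ifupdown:

```sh
#!/bin/sh
# Illustrative bridge "up" hook: start dnsmasq and add a NAT rule.
# BRIDGE and the DHCP range are placeholders, not the post's real values.
BRIDGE="br0"

# Use ip from iproute2 to get the bridge network (address/prefix)
NETWORK="$(ip -o -4 addr show dev "$BRIDGE" | awk '{print $4}')"

# DNS and DHCP for the bridge (the dnsmasq-base package provides the binary)
dnsmasq --interface="$BRIDGE" --bind-interfaces \
  --dhcp-range=10.0.4.10,10.0.4.200,12h

# NAT traffic from the bridge network out through any other interface
iptables -t nat -A POSTROUTING -s "$NETWORK" ! -o "$BRIDGE" -j MASQUERADE
```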

October 9, 2022 · 9 min · Sergio Talens-Oliag

Kubernetes Static Content Server

This post describes how I’ve put together a simple static content server for kubernetes clusters using a Pod with a persistent volume and multiple containers: an sftp server to manage contents, a web server to publish them with optional access control and another one to run scripts which need access to the volume filesystem.

The sftp server runs using MySecureShell, the web server is nginx and the script runner uses the webhook tool to publish endpoints to call them (the calls will come from other Pods that run backend servers or are executed from Jobs or CronJobs).

Note: This service has been developed for Kyso and the version used in our current architecture includes an additional container to index documents for Elasticsearch, but as it is not relevant for the description of the service as a general solution I’ve decided to ignore it in this post.

History

The system was developed because we had a NodeJS API with endpoints to upload files and store them on S3 compatible services that were later accessed via HTTPS, but the requirements changed and we needed to be able to publish folders instead of individual files using their original names and apply access restrictions using our API.

Thinking about our requirements, the use of a regular filesystem to keep the files and folders was a good option, as uploading and serving files is simple. For the upload I decided to use the sftp protocol, mainly because I already had an sftp container image based on mysecureshell prepared; once we settled on that we added sftp support to the API server and configured it to upload the files to our server instead of using S3 buckets.

To publish the files we added an nginx container configured to work as a reverse proxy that uses the ngx_http_auth_request_module to validate access to the files (the sub request is configurable; in our deployment we have configured it to call our API to check if the user can access a given URL).

Finally we added a third container when we needed to execute some tasks directly on the filesystem (using kubectl exec with the existing containers did not seem a good idea, as that is not supported by CronJob objects, for example). The solution we found, avoiding the NIH Syndrome (i.e. writing our own tool), was to use the webhook tool to provide the endpoints to call the scripts; for now we have three:

- one to get the disk usage of a PATH,
- one to hardlink all the files that are identical on the filesystem,
- one to copy files and folders from S3 buckets to our filesystem.
...
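A hedged sketch of what such a Pod could look like; the image names, ports, mount paths and claim name below are placeholders (assumptions), not the ones used in the real deployment:

```yaml
# Hypothetical multi-container Pod sharing one persistent volume.
apiVersion: v1
kind: Pod
metadata:
  name: static-content-server
spec:
  volumes:
    - name: content
      persistentVolumeClaim:
        claimName: static-content-pvc      # placeholder claim name
  containers:
    - name: sftp                           # MySecureShell, manages contents
      image: example/mysecureshell:latest  # placeholder image
      ports:
        - containerPort: 22
      volumeMounts:
        - name: content
          mountPath: /sftp/data
    - name: web                            # nginx, publishes the files
      image: nginx:stable
      ports:
        - containerPort: 80
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
          readOnly: true
    - name: scripts                        # webhook, exposes script endpoints
      image: example/webhook:latest        # placeholder image
      ports:
        - containerPort: 9000
      volumeMounts:
        - name: content
          mountPath: /srv/data
```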

September 26, 2022 · 29 min · Sergio Talens-Oliag