As promised, in this post I’m going to explain how I’ve configured this blog using hugo, asciidoctor and the papermod theme, how I publish it using nginx, how I’ve integrated the remark42 comment system and how I’ve automated its publication using forgejo and json2file-go.

It is a long post, but I hope that at least parts of it are interesting for some readers; feel free to skip it if that is not your case … :wink:

Hugo Configuration

Theme settings

The site uses the PaperMod theme; as I’m using asciidoctor to publish my content, I’ve adjusted the settings to improve how its output is rendered.

The current config.yml file is shown below (some of the settings are probably not required or in use right now, but I’m including the complete file so this post always reflects its latest version):

config.yml
baseURL: https://blogops.mixinet.net/
title: Mixinet BlogOps
paginate: 5
theme: PaperMod
destination: public/
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
minify:
  disableXML: true
  minifyOutput: true
languages:
  en:
    languageName: "English"
    description: "Mixinet BlogOps - https://blogops.mixinet.net/"
    author: "Sergio Talens-Oliag"
    weight: 1
    title: Mixinet BlogOps
    params:
      homeInfoParams:
        Title: "Sergio Talens-Oliag Technical Blog"
        Content: >
          ![Mixinet BlogOps](/images/mixinet-blogops.png)
    taxonomies:
      category: categories
      tag: tags
      series: series
    menu:
      main:
        - name: Archive
          url: archives
          weight: 5
        - name: Categories
          url: categories/
          weight: 10
        - name: Tags
          url: tags/
          weight: 10
        - name: Search
          url: search/
          weight: 15
outputs:
  home:
    - HTML
    - RSS
    - JSON
params:
  env: production
  defaultTheme: light
  disableThemeToggle: false
  ShowShareButtons: true
  ShowReadingTime: true
  disableSpecial1stPost: true
  disableHLJS: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowFullTextinRSS: true
  ShowToc: true
  TocOpen: false
  comments: true
  remark42SiteID: "blogops"
  remark42Url: "/remark42"
  profileMode:
    enabled: false
    title: Sergio Talens-Oliag Technical Blog
    imageUrl: "/images/mixinet-blogops.png"
    imageTitle: Mixinet BlogOps
    buttons:
      - name: Archives
        url: archives
      - name: Categories
        url: categories
      - name: Tags
        url: tags
  socialIcons:
    - name: CV
      url: "https://www.uv.es/~sto/cv/"
    - name: Debian
      url: "https://people.debian.org/~sto/"
    - name: GitHub
      url: "https://github.com/sto/"
    - name: GitLab
      url: "https://gitlab.com/stalens/"
    - name: Linkedin
      url: "https://www.linkedin.com/in/sergio-talens-oliag/"
    - name: RSS
      url: "index.xml"
  assets:
    disableHLJS: true
    favicon: "/favicon.ico"
    favicon16x16:  "/favicon-16x16.png"
    favicon32x32:  "/favicon-32x32.png"
    apple_touch_icon:  "/apple-touch-icon.png"
    safari_pinned_tab:  "/safari-pinned-tab.svg"
  fuseOpts:
    isCaseSensitive: false
    shouldSort: true
    location: 0
    distance: 1000
    threshold: 0.4
    minMatchCharLength: 0
    keys: ["title", "permalink", "summary", "content"]
markup:
  asciidocExt:
    attributes: {}
    backend: html5s
    extensions: ['asciidoctor-html5s','asciidoctor-diagram']
    failureLevel: fatal
    noHeaderOrFooter: true
    preserveTOC: false
    safeMode: unsafe
    sectionNumbers: false
    trace: false
    verbose: false
    workingFolderCurrent: true
privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true
services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true
security:
  exec:
    allow:
      - '^asciidoctor$'
      - '^dart-sass-embedded$'
      - '^go$'
      - '^npx$'
      - '^postcss$'

Some notes about the settings:

  • disableHLJS and assets.disableHLJS are set to true; the plan is to use rouge to highlight the adoc content, and including the hljs assets adds styles that collide with the ones used by rouge.
  • ShowToc is set to true and TocOpen to false so the ToC appears collapsed initially. My plan was to use the asciidoctor ToC, but after trying both I believe the theme one looks nice and needs no style adjustments. It has an issue with the html5s processor, though: the admonition titles use <h6> and are shown on the ToC, which is weird. To fix it I’ve copied layouts/partials/toc.html into my site repository and changed the range of headings to end at 5 instead of 6 (in fact 5 still seems a lot, but as I don’t expect to use that heading level on my posts it doesn’t really matter).
  • params.profileMode values are adjusted, but for now the mode is disabled (params.profileMode.enabled is set to false) and the homeInfoParams show more or less the same content, with the latest posts under it (I’ve added some styles to my custom.css style sheet to center the text and image of the first entry to match the look and feel of the profile).
  • On the asciidocExt section I’ve set the backend to html5s, added the asciidoctor-html5s and asciidoctor-diagram extensions and set workingFolderCurrent to true, which is needed for asciidoctor-diagram to work right (I haven’t tested it yet).
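
The change to the copied toc.html is minimal; in the PaperMod version I copied, the partial collects the headings with a findRE call, so the tweak boils down to something like the following (a sketch; the exact regular expression may differ between theme versions):

```
{{/* Original line: collect heading levels 1 to 6 for the ToC */}}
{{/* {{- $headers := findRE "<h[1-6].*?>(.|\n)+?</h[1-6]>" .Content -}} */}}
{{/* Changed line: stop at level 5 so the <h6> admonition titles produced by html5s are skipped */}}
{{- $headers := findRE "<h[1-5].*?>(.|\n)+?</h[1-5]>" .Content -}}
```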

Theme customisations

To write in asciidoctor using the html5s processor I’ve customised the theme adding the following files:

  1. As said before, I’ve added the file assets/css/extended/custom.css to make the homeInfoParams look like the profile page, and I’ve also tweaked some theme styles slightly to make things look better with the html5s output:

    custom.css
    /* Fix first entry alignment to make it look like the profile */
    .first-entry { text-align: center; }
    .first-entry img { display: inline; }
    /**
     * Remove margin for .post-content code and reduce padding to make it look
     * better with the asciidoctor html5s output.
     **/
    .post-content code { margin: auto 0; padding: 4px; }
  2. I’ve also added the file assets/css/extended/adoc.css with some styles taken from asciidoctor-default.css (see this blog post about the original file); mine is the same file after formatting it with css-beautify and editing it to use variables for the colors, so both the light and dark themes are supported:

    adoc.css
    /* AsciiDoctor*/
    table {
        border-collapse: collapse;
        border-spacing: 0
    }
    
    .admonitionblock>table {
        border-collapse: separate;
        border: 0;
        background: none;
        width: 100%
    }
    
    .admonitionblock>table td.icon {
        text-align: center;
        width: 80px
    }
    
    .admonitionblock>table td.icon img {
        max-width: none
    }
    
    .admonitionblock>table td.icon .title {
        font-weight: bold;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        text-transform: uppercase
    }
    
    .admonitionblock>table td.content {
        padding-left: 1.125em;
        padding-right: 1.25em;
        border-left: 1px solid #ddddd8;
        color: var(--primary)
    }
    
    .admonitionblock>table td.content>:last-child>:last-child {
        margin-bottom: 0
    }
    
    .admonitionblock td.icon [class^="fa icon-"] {
        font-size: 2.5em;
        text-shadow: 1px 1px 2px var(--secondary);
        cursor: default
    }
    
    .admonitionblock td.icon .icon-note::before {
        content: "\f05a";
        color: var(--icon-note-color)
    }
    
    .admonitionblock td.icon .icon-tip::before {
        content: "\f0eb";
        color: var(--icon-tip-color)
    }
    
    .admonitionblock td.icon .icon-warning::before {
        content: "\f071";
        color: var(--icon-warning-color)
    }
    
    .admonitionblock td.icon .icon-caution::before {
        content: "\f06d";
        color: var(--icon-caution-color)
    }
    
    .admonitionblock td.icon .icon-important::before {
        content: "\f06a";
        color: var(--icon-important-color)
    }
    
    .conum[data-value] {
        display: inline-block;
        color: #fff !important;
        background-color: rgba(100, 100, 0, .8);
        -webkit-border-radius: 100px;
        border-radius: 100px;
        text-align: center;
        font-size: .75em;
        width: 1.67em;
        height: 1.67em;
        line-height: 1.67em;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        font-style: normal;
        font-weight: bold
    }
    
    .conum[data-value] * {
        color: #fff !important
    }
    
    .conum[data-value]+b {
        display: none
    }
    
    .conum[data-value]::after {
        content: attr(data-value)
    }
    
    pre .conum[data-value] {
        position: relative;
        top: -.125em
    }
    
    b.conum * {
        color: inherit !important
    }
    
    .conum:not([data-value]):empty {
        display: none
    }
  3. The previous file uses variables from a partial copy of the theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions:

    theme-vars.css
    :root {
        /* Solarized base2 */
        /* --hljs-bg: rgb(238, 232, 213); */
        /* Solarized base3 */
        /* --hljs-bg: rgb(253, 246, 227); */
        /* Solarized base02 */
        --hljs-bg: rgb(7, 54, 66);
        /* Solarized base03 */
        /* --hljs-bg: rgb(0, 43, 54); */
        /* Default asciidoctor theme colors */
        --icon-note-color: #19407c;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #bf6900;
        --icon-caution-color: #bf3400;
        --icon-important-color: #bf0000
    }
    
    .dark {
        --hljs-bg: rgb(7, 54, 66);
        /* Asciidoctor theme colors with tint for dark background */
        --icon-note-color: #3e7bd7;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #ff8d03;
        --icon-caution-color: #ff7847;
        --icon-important-color: #ff3030
    }
  4. The previous styles use font-awesome, so I’ve downloaded the resources of version 4.7.0 (the one used by asciidoctor), storing the font-awesome.css file on the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (they will be served directly):

    FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
    curl "$FA_BASE_URL/css/font-awesome.css" \
      > assets/css/extended/font-awesome.css
    for f in FontAwesome.otf fontawesome-webfont.eot \
      fontawesome-webfont.svg fontawesome-webfont.ttf \
      fontawesome-webfont.woff fontawesome-webfont.woff2; do
        curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
    done
  5. As already said, the default highlighter is disabled (its assets add styles that collide with the ones used by rouge), so we need a css file to do the highlight styling; as rouge provides a way to export its themes, I’ve created the assets/css/extended/rouge.css file with the thankful_eyes theme:

    rougify style thankful_eyes > assets/css/extended/rouge.css
  6. To support the use of admonitions with the html5s backend I’ve added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:

    adoc-admonitions.js
    // replace the default admonitions block with a table that uses a format
    // similar to the standard asciidoctor ... as we are using fa-icons here there
    // is no need to add the icons: font entry on the document.
    window.addEventListener('load', function () {
      const admonitions = document.getElementsByClassName('admonition-block')
      for (let i = admonitions.length - 1; i >= 0; i--) {
        const elm = admonitions[i]
        const type = elm.classList[1]
        const title = elm.getElementsByClassName('block-title')[0]
        const label = title.getElementsByClassName('title-label')[0]
          .innerHTML.slice(0, -1)
        elm.removeChild(title)
        const text = elm.innerHTML
        const parent = elm.parentNode
        const tempDiv = document.createElement('div')
        tempDiv.innerHTML = `<div class="admonitionblock ${type}">
        <table>
          <tbody>
            <tr>
              <td class="icon">
                <i class="fa icon-${type}" title="${label}"></i>
              </td>
              <td class="content">
                ${text}
              </td>
            </tr>
          </tbody>
        </table>
      </div>`
        const input = tempDiv.childNodes[0]
        parent.replaceChild(input, elm)
      }
    })

    and enabled its use, minified, by adding the following lines to the layouts/partials/extend_footer.html file:

    {{- $admonitions := slice (resources.Get "js/adoc-admonitions.js")
      | resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
    <script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
      integrity="{{ $admonitions.Data.Integrity }}"></script>

Remark42 configuration

To integrate Remark42 with the PaperMod theme I’ve created the file layouts/partials/comments.html with the following content, based on the remark42 documentation and with extra code to sync the dark/light setting with the one used on the site:

comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: {{ .Site.Params.remark42Url }},
    site_id: {{ .Site.Params.remark42SiteID }},
    url: {{ .Permalink }},
    locale: {{ .Site.Language.Lang }}
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for(var i = 0; i < c.length; i++){
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] +'.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>

In development I use it with anonymous comments enabled, but to avoid SPAM the production site uses social logins (for now I’ve only enabled GitHub & Google; if someone requests additional services I’ll check them, but those were the easiest ones for me to start with).
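
Those providers are configured with environment variables on the env.prod file; a minimal sketch, using the variable names from the remark42 documentation and placeholder values:

```
# env.prod fragment (the values shown are placeholders)
AUTH_GITHUB_CID=github-oauth-client-id
AUTH_GITHUB_CSEC=github-oauth-client-secret
AUTH_GOOGLE_CID=google-oauth-client-id
AUTH_GOOGLE_CSEC=google-oauth-client-secret
```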

To support theme switching with remark42 I’ve also added the following inside the layouts/partials/extend_footer.html file:

{{- if (not site.Params.disableThemeToggle) }}
<script>
/* Function to change theme when the toggle button is pressed */
document.getElementById("theme-toggle").addEventListener("click", () => {
  if (typeof window.REMARK42 != "undefined") {
    if (document.body.className.includes('dark')) {
      window.REMARK42.changeTheme('light');
    } else {
      window.REMARK42.changeTheme('dark');
    }
  }
});
</script>
{{- end }}

With this code, if the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that is only needed here; on page loads the remark42 theme is synced with the main one using the code from layouts/partials/comments.html shown earlier).

Development setup

To preview the site on my laptop I’m using docker-compose with the following configuration:

docker-compose.yaml
version: "2"
services:
  hugo:
    build:
      context: ./docker/hugo-adoc
      dockerfile: ./Dockerfile
    image: sto/hugo-adoc
    container_name: hugo-adoc-blogops
    restart: always
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    user: ${APP_UID}:${APP_GID}
  nginx:
    image: nginx:latest
    container_name: nginx-blogops
    restart: always
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 1313:1313
  remark42:
    build:
      context: ./docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    container_name: remark42-blogops
    restart: always
    env_file:
      - ./.env
      - ./remark42/env.dev
    volumes:
      - ./remark42/var.dev:/srv/var

To run it properly we have to create the .env file with the current user ID and GID on the APP_UID and APP_GID variables (if we don’t, the files can end up owned by a user other than the one running the services):

$ printf "APP_UID=%s\nAPP_GID=%s\n" "$(id -u)" "$(id -g)" > .env

The Dockerfile used to generate the sto/hugo-adoc image is:

Dockerfile
FROM golang:1.21-alpine AS build
ARG HUGO_BUILD_TAGS=extended
ARG CGO=1
ENV CGO_ENABLED=${CGO}
ENV GOOS=linux
ENV GO111MODULE=on
WORKDIR /go/src/github.com/gohugoio/hugo
RUN apk update &&\
 # gcc/g++ are required to build SASS libraries for extended version
 apk add --no-cache curl gcc g++ musl-dev git &&\
 api_url="https://api.github.com/repos/gohugoio/hugo/releases/latest" &&\
 download_url="$(\
  curl -sL "$api_url" | sed -n "s/^.*tarball_url\": \"\\(.*\)\",$/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz -C . --strip-components=1 &&\
 go install github.com/magefile/mage@latest &&\
 mage hugo &&\
 mage install &&\
 cd / &&\
 rm -rf /tmp/hugo.tgz /go/src/github.com/gohugoio/hugo/*

FROM asciidoctor/docker-asciidoctor:latest
COPY --from=build /go/bin/hugo /usr/bin/hugo
RUN gem install --no-document asciidoctor-html5s &&\
 apk update &&\
 apk add --no-cache ca-certificates libc6-compat libstdc++ git &&\
 /usr/bin/hugo version &&\
 rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]

If you review it you will see that I’m using the docker-asciidoctor image as the base; the idea is that this image has everything I need to work with asciidoctor, and to use hugo I download its code and compile it on a builder container.

The image does not launch the server by default because I don’t want it to; in fact, I use the same docker-compose.yaml file to publish the site in production, simply running the container without the command arguments set on the compose file (see later).

When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch an nginx container and the remark42 service, so we can test everything together.

The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:

Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh

The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):

init.sh
#!/sbin/dinit /bin/sh

uid="$(id -u)"

if [ "${uid}" -eq "0" ]; then
  echo "init container"

  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" >/etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"

  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi

echo "prepare environment"

# replace {% REMARK_URL %} by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;

if [ -n "${SITE_ID}" ]; then
  #replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi

echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi

The environment file used with remark42 for development is quite minimal:

env.dev
TIME_ZONE=Europe/Madrid
REMARK_URL=http://localhost:1313/remark42
SITE=blogops
SECRET=123456
ADMIN_SHARED_ID=sto
AUTH_ANON=true
EMOJI=true

And the nginx/default.conf file used to publish the service locally is simple too:

default.conf
server {
  listen 1313;
  server_name localhost;
  location / {
    proxy_pass http://hugo:1313;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
  location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://remark42:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Production setup

The VM where I’m publishing the blog runs Debian GNU/Linux and uses binaries from local packages and applications packaged inside containers.

To run the containers I’m using docker-ce (I could have used podman instead, but docker was already installed on the machine, so I stayed with it).

The binaries used on this project come from the following packages of the main Debian repository:

  • git to clone & pull the repository,
  • jq to parse json files from shell scripts,
  • json2file-go to save the webhook messages to files,
  • inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
  • nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
  • task-spooler to queue the scripts that update the deployment.

And I’m using docker and docker compose from the Debian packages available on the docker repository:

  • docker-ce to run the containers,
  • docker-compose-plugin to run docker compose (it is a plugin, so there is no - on the command name).

Repository checkout

To manage the git repository I’ve created a deploy key, added it to forgejo and cloned the project on the /srv/blogops path (that path is owned by a regular user that has permissions to run docker).

Compiling the site with hugo

To compile the site we use the docker-compose.yaml file seen before; to be able to run it, first we build the container images and, once we have them, we launch hugo using docker compose run:

$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --

The compilation leaves the static HTML on /srv/blogops/public (we remove the directory first because hugo does not clean the destination folder as jekyll does).

The deploy script re-generates the site as described and moves the public directory to its final place for publishing.

Running remark42 with docker

On the /srv/blogops/remark42 folder I have the following docker-compose.yml:

docker-compose.yml
version: "2"
services:
  remark42:
    build:
      context: ../docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    container_name: remark42
    restart: always
    volumes:
      - ./var.prod:/srv/var
    ports:
      - 127.0.0.1:8042:8080

The ../.env file is loaded to get the APP_UID and APP_GID variables used by my version of the init.sh script to adjust file permissions, and the env.prod file contains the rest of the remark42 settings, including the social network tokens (see the remark42 documentation for the available parameters; I don’t include my configuration here because some of the values are secrets).

Nginx configuration

The nginx configuration for the blogops.mixinet.net site is as simple as:

server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80 ;
  listen [::]:80 ;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}

On this configuration the certificates are managed by certbot, and the server root directory is /srv/blogops/nginx/public_html instead of /srv/blogops/public; the reason for that is that I want to be able to compile the site without affecting the running one: the deployment script generates the site on /srv/blogops/public and, if all works well, renames folders to do the switch, making the change feel almost atomic.
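
The rename-based switch can be sketched like this (a minimal sketch, with the directories passed as parameters so the logic can be tried anywhere; the real deploy script works on the /srv/blogops paths and adds error handling and logging):

```shell
#!/bin/sh
# Sketch of the folder switch: keep the previous version around and replace
# the live directory with the freshly generated one using renames.
swap_site() {
  new_dir="$1"  # freshly compiled site (hugo output)
  live_dir="$2" # directory served by nginx
  old_dir="$3"  # previous version, removed on the next deploy
  rm -rf "$old_dir"
  if [ -d "$live_dir" ]; then
    mv "$live_dir" "$old_dir"
  fi
  mv "$new_dir" "$live_dir"
}

# Demonstration on a temporary directory
tmp="$(mktemp -d)"
mkdir -p "$tmp/public" "$tmp/public_html"
echo "new" >"$tmp/public/index.html"
echo "old" >"$tmp/public_html/index.html"
swap_site "$tmp/public" "$tmp/public_html" "$tmp/public_html.old"
```

Because both mv calls are renames inside the same filesystem, the window where the live directory is missing is tiny, which is what makes the change feel almost atomic.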

json2file-go configuration

As I have a working WireGuard VPN between the machine running forgejo at my home and the VM where the blog is served, I’m going to configure json2file-go to listen for connections on a high port of an IP address only reachable through the VPN, using a self-signed certificate.

To do it we create a systemd socket to run json2file-go and adjust its configuration to listen on a private IP (we use the FreeBind option in its definition to be able to launch the service even when the IP is not available, that is, when the VPN is down).

The following script can be used to set up the json2file-go configuration:

setup-json2file.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"

J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"

J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"

# ----
# MAIN
# ----

# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if ! command -v mkcert >/dev/null 2>&1; then
  sudo apt install -y mkcert
fi
sudo apt clean

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"

# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF

# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF

# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning:

The script uses mkcert to create the temporary certificates; to install the package on bullseye the backports repository must be available.
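
On bullseye that means having a backports entry available before running the script; a sketch (the file name is arbitrary, adjust the mirror to your taste):

```
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian bullseye-backports main
```

After an apt update the package can be installed with sudo apt install -t bullseye-backports mkcert.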

Forgejo configuration

To make forgejo use our json2file-go server we go to the project and enter the hooks/forgejo/new page; once there we create a new webhook of type forgejo, set the target URL to https://172.31.31.1:4443/blogops and put on the secret field the token generated with uuid by the setup script, which can be printed with:

sed -n -e 's/blogops://p' /etc/json2file-go/dirlist

The rest of the settings can be left as they are:

  • Trigger on: Push events
  • Branch filter: *
Warning:

We are using an internal IP and a self-signed certificate; that means we have to make sure that the webhook section of the app.ini of our forgejo server allows us to call the IP and skips the TLS verification (you can see the available options on the forgejo documentation).

The [webhook] section of my server looks like this:

[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true

Once we have the webhook configured we can try it; if everything works our json2file server will store the file on the /srv/blogops/webhook/json2file/blogops/ folder.

The json2file spooler script

With the previous configuration our system is ready to receive webhook calls from forgejo and store the messages on files, but we have to do something to process those files once they are saved in our machine.

An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify … we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_CLOSE_WRITE and IN_MOVED_TO events).

To avoid concurrency problems we are going to use task-spooler to launch the scripts that process the webhooks using a queue of length 1, so they are executed one by one in a FIFO queue.

The spooler script is this:

blogops-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"

WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"

# ---------
# FUNCTIONS
# ---------

queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}

# ----
# MAIN
# ----

INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi

[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"

echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done

# Use inotifywait to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done

# ----
# vim: ts=2:sw=2:et:ai:sts=2

To run it as a daemon we install it as a systemd service using the following script:

setup-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"

SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"

# ----
# MAIN
# ----

# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean

# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
[Install]
WantedBy=multi-user.target
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2

The forgejo webhook processor

Finally, the script that processes the JSON files does the following:

  1. First, it checks that the repository and branch match the expected ones,
  2. Then, it fetches and checks out the commit referenced in the JSON file,
  3. Once the files are updated, it builds the site using hugo with docker compose,
  4. If the build succeeds, the script renames directories to swap the old version of the site for the new one.

If any step fails the script aborts, but before doing so (or after a successful swap) it sends an email with a log of what happened to the configured address and/or to the user that pushed the updates to the repository.
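The directory swap in the last step boils down to two renames; this minimal standalone sketch (using a throwaway temporary tree instead of the real /srv/blogops paths) shows the idea:

```shell
#!/bin/sh
# Sketch of the swap step: keep the currently served tree aside under a
# timestamped name, then promote the freshly built tree in its place.
set -e
BASE="$(mktemp -d)"                  # stands in for the nginx base dir
PUBLIC_DIR="$BASE/public"            # freshly built site
PUBLIC_HTML_DIR="$BASE/public_html"  # tree served by nginx
mkdir "$PUBLIC_DIR" "$PUBLIC_HTML_DIR"
echo "new build" >"$PUBLIC_DIR/index.html"
echo "old build" >"$PUBLIC_HTML_DIR/index.html"

TS="$(date +%Y%m%d-%H%M%S)"
mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS"  # keep the old version aside
mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR"           # promote the new build
cat "$PUBLIC_HTML_DIR/index.html"             # nginx now serves the new tree
```

Each rename is atomic on the same filesystem, but the pair is not: there is a brief window between the two mv calls where public_html does not exist. That is acceptable here because deployments are serialized by the spooler and the window lasts a fraction of a second.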

The current script is this one:

blogops-webhook.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://forgejo.mixinet.net/mixinet/blogops.git"

MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# forgejo's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"

# Directories
BASE_DIR="/srv/blogops"

PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"

WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"

# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"

# Query to get variables from a forgejo webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(.           | @sh "gt_ref=\(.ref);"),' \
    '(.           | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher     | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher     | @sh "gt_pusher_email=\(.email);")'
)"

# ---------
# Functions
# ---------

webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}

webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}

webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}

webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}

webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}

webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}

webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}

print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the forgejo 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}

mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

# ----
# MAIN
# ----
# Check directories
webhook_check_directories

# Go to the base directory
cd "$BASE_DIR"

# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi

# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"

# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_repo_clone_url'"
fi

# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi

# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"

# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
  exit 0
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
  exit 0
fi

# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi

# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# Go back to the main branch, logging the output
git switch main >>"$WEBHOOK_LOGFILE_PATH" 2>&1 &&
  git pull >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
  exit 0
fi

# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
  exit 0
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi

# ----
# vim: ts=2:sw=2:et:ai:sts=2
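The double parameter expansion used in the print_mailto function is compact enough to deserve an isolated example. This sketch (with a made-up domain and addresses) shows how the two tests combine to decide whether a pusher address is worth mailing:

```shell
#!/bin/sh
# ${1##*@"$NO_REPLY_ADDRESS"} removes the longest prefix ending in
# "@<no-reply domain>": the result is empty exactly when the address is
# on that domain.  ${1##*@*} removes the longest prefix matching "*@*":
# the result is empty exactly when the string contains an "@" at all.
NO_REPLY_ADDRESS="noreply.example.org"

wants_mail() {
  [ -n "${1##*@"$NO_REPLY_ADDRESS"}" ] && [ -z "${1##*@*}" ]
}

wants_mail "jane@example.com" && echo "jane@example.com -> mailed"
wants_mail "u123@noreply.example.org" || echo "u123@noreply.example.org -> skipped"
wants_mail "not-an-address" || echo "not-an-address -> skipped"
```

The same pattern works in any POSIX shell, so the webhook script does not need bash-only string operators for this check.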