Sunday, September 28, 2025

Vorta’s “No Matter What, Keep All…” Setting

Since switching from Pop!_OS (GNOME) to Kubuntu (KDE) for work, I have also changed my backup GUI.  There is no question that Vorta is more powerful than Pika Backup, but the price of that power is a loss of simplicity.

One place I got confused was in the backup-retention rules, referred to as “pruning” by the GUI.  I have learned: when Vorta offers to “keep all backups made within…”, that is internally a separate, high-priority rule, and the archives it keeps do not count toward the totals of the other rules.  Therefore, when I set up my hourly backups to keep one week of hourly, two weeks of daily, and so on, but also “keep everything from the last six weeks,” I ended up with seven weeks of hourly backups, followed by the two weeks of daily: the hourly rule’s week began where the six-week window ended.
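
In terms of the underlying borg prune flags (Vorta is a front-end to Borg; more on that below), my settings amounted to roughly the following; the numbers and repository path are illustrative:

    # --keep-within is a separate, high-priority rule, and the archives
    # it keeps do not count toward the totals of the other rules; the
    # hourly week below therefore starts where the six-week window ends.
    # 168 hourly = one week; 14 daily = two weeks.
    borg prune \
        --keep-within 6w \
        --keep-hourly 168 \
        --keep-daily 14 \
        /path/to/repo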

I noticed the problem when my laptop fans spun up for a while, which turned out to be Vorta verifying 300+ archives.  The work laptop is on only for work, producing hourly backups for 8–9 hours per weekday, which works out to around 250 extra archives over those first six weeks.

Unrelated, but one nice thing about Vorta is that, like Pika, it is a front-end to Borg.  I gave it the same repository on disk.  Now I have continuous backup history across the two GUIs, and emergency CLI access if necessary.
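
If that day comes, emergency access is as simple as pointing the borg CLI at the same repository (the path here is a placeholder):

    # list the archives both GUIs have been writing
    borg list /path/to/backup-repo

    # pull one file out of a specific archive (extracts into the cwd)
    borg extract /path/to/backup-repo::archive-name home/me/some-file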

Sunday, September 21, 2025

Reflections on Breaking Something

Last week, I deployed some code, and then impossible phenomena followed on the website.  Ultimately, it was all my fault, because I have changed my design sensibilities over time.

Distant past me figured it would make for shorter commands if we left the .service suffix off the names.  It would be added automatically at the boundary, when actually invoking a systemctl command.

Present me is less tolerant of magic, hates checking at several places whether or not to add .service, and worries about whether the code works with other types of systemd units.

Hence, when I recently updated our deployment code, it also began passing the full service name to the reload script.  That script is a wrapper that sits between the deployment user and systemctl.  The deployment user has sudo rights to the script, which can only run very specific systemctl commands, and which validates the unit name against an “allowed” list.

For simplicity—because it is run through sudo—this wrapper script had zero magic. It expected the caller to give it an abbreviated name, to which it would add .service itself.  The change to the deployment code then broke that process.  Tests didn’t catch it, not only because there are none, but because the wrapper script lives outside of the deployment repository.  It’s an externally-provided service.
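
Reduced to its essentials, the wrapper worked something like this sketch (the script name, the units, and the allowed list are all made up for illustration):

    #!/bin/sh
    # local-reload: runs as root via sudo, so stay dumb and strict.
    # Zero magic: the caller passes the abbreviated name.
    unit="$1.service"
    case "$unit" in
        website.service|worker.service) ;;   # the allowed list
        *) echo "unit not allowed: $unit" >&2; exit 1 ;;
    esac
    exec systemctl reload "$unit"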

Consequently…

The “impossible phenomena” happened because the new files were unpacked, including templates, while the reload left the old code running.  The old code didn’t set the newly-used variables for the template to process, so the parts relying on those variables malfunctioned.  I had a lot of difficulty duplicating this effect, because out of habit, I restart the daemon with sudo systemctl ... after making code changes in dev.  I don’t use the wrapper script.  (Maybe I should.)

The first thing to do was fix the wrapper script to accept names with the .service suffix.
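
A sketch of that fix, in the same illustrative style: normalize the argument before validating it, so both spellings land on one canonical form.

    # accept "website" and "website.service" alike: strip the suffix
    # if present, then re-append it unconditionally
    unit="${1%.service}.service"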

But after that, the biggest thing is that the deployer needs to cancel the operation and issue a rollback if the final reload fails.  This will restore consistency between the old code and the original files on disk.
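
In the deploy script, that is roughly an error check around the final reload.  The symlink-flipping details here are assumptions about the layout, not the real code:

    # after unpacking the new release and repointing the "current" symlink:
    if ! sudo local-reload website.service; then
        # the old code is still running; point the files back at the old
        # release so code and templates agree again, then bail out
        ln -sfn "$previous_release" /srv/app/current
        exit 1
    fi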

I might also be able to improve robustness overall by using a relative path for the template root dir.  If we stay in a working directory below the symlink that is updated during deployment, instead of traversing that symlink on an absolute path, we’ll always get the templates that correspond to the running code. However, that’s more subtle and tricky than issuing a rollback, and hence, more likely to get broken in the future.
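
The idea, in terms of a typical symlinked layout (paths are illustrative):

    # /srv/app/current -> /srv/app/releases/2025-09-21   (repointed on deploy)
    #
    # Absolute: /srv/app/current/templates re-resolves the symlink on every
    # open, so mid-deploy the OLD code can read the NEW templates.
    #
    # Relative: if the daemon does cd /srv/app/current at startup, its
    # working directory is pinned to the release directory itself, and
    # ./templates keeps matching the code that is actually running.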

I like the sudo local-reload website.service approach.  The script can be tracked in version control easily, and the sudoers file remains as short and simple as possible.  Meanwhile, the deployment user isn’t given broad access to the entire set of subcommands that systemctl has to offer.
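
The sudoers side stays a single line, something like this (the user name and path are assumptions):

    # /etc/sudoers.d/deploy
    deploy ALL=(root) NOPASSWD: /usr/local/sbin/local-reload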

Sunday, September 7, 2025

Online Builds

As a long-time coder and tinkerer who views computers as deterministic if we understand them properly, I find that modern tooling feels wrong.

  • Python expects code to be distributed in a form where it has to contact PyPI for dependencies. (You can get around this, like the awscli installer does, but I never did figure out how they build that; see the sketch after this list.)
  • Python expects code to be distributed in a form where the installation process executes arbitrary code. This happens transitively with all dependencies.
  • composer install (usually) expects to be able to fetch code from GitHub.  Running it (non-interactively and with --no-dev, of course) as part of deployment makes deployment depend on the internet working.
  • Containerfile ADD and COPY will happily take URLs as sources, including URLs that are intended to be mutable, like GitHub /latest/ release artifact URLs. Projects may recommend using such URLs.
  • curl … | sudo sh also deeply connects the internet to the process, and treats the script itself as ephemera, discarding it as the process completes. If the script makes its own internet connections, the problem with preserving the canonical source is multiplied.
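
For the Python case, the escape hatch I know of is the wheelhouse pattern: download everything once, online, then install with the network forbidden.  A sketch:

    # online, once: collect the app and all dependencies as wheels
    pip download --dest ./wheelhouse -r requirements.txt

    # offline, at deployment time: refuse to contact PyPI at all
    pip install --no-index --find-links ./wheelhouse -r requirements.txt

Note that any sdists in the wheelhouse still execute code at install time; only pre-built wheels avoid that problem, too.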

Quite aside from “the internet connection has to be up,” the referenced URLs must keep working over time.  A Containerfile built as recommended for the docker-php-extension-installer inherently requires the up-to-date source of code to remain at the github.com site, and under the mlocati user.
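
Concretely, that is the difference between these two Containerfile approaches (the ADD line is roughly what the project’s README suggests; it may have drifted):

    # as recommended: fetch a mutable "latest" release URL at build time
    ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
    RUN chmod +x /usr/local/bin/install-php-extensions

    # vendored: the script lives in my build context, so the build no
    # longer depends on github.com or on the mlocati account existing
    COPY install-php-extensions /usr/local/bin/
    RUN chmod +x /usr/local/bin/install-php-extensions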

Building reliability and reproducibility into the process is left up to the user.  Those features can only be included if the thought, “what if…?” crosses someone’s mind.

However, while saving remote resources into a local build context protects them from loss, it requires the maintainer of that build to update those resources.  Probably manually.  If a resource can’t be changed out from under me the next time I run podman build, then it also isn’t getting updates to follow changes in the base image.  It takes some discipline to track where these things come from, and sometimes, how to reproduce them.  For instance, when the GPG keys for an Ubuntu PPA needed to be converted to binary before use, it wasn’t enough to leave only the URL written down.
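
The conversion itself is a one-liner, but only if it is written down somewhere next to the key (the keyserver URL and fingerprint are placeholders):

    # fetch the PPA signing key and convert the ASCII-armored form to the
    # binary keyring format apt expects; keep this alongside the key file
    curl -fsSL 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xFINGERPRINT' \
        | gpg --dearmor > ppa-archive-keyring.gpg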

Thus, it’s more work, but the result is stable, and that’s important to me.