Sunday, September 28, 2025

Vorta’s “No Matter What, Keep All…” Setting

Since switching from Pop!_OS (Gnome) to Kubuntu (KDE) for work, I have also changed my backup GUI.  There is no question that Vorta is more powerful than Pika Backup, but the price of that is the loss of simplicity.

One place I got confused was in the backup-retention rules, referred to as “pruning” by the GUI.  I have learned: when Vorta offers to “keep all backups made within…”, that is internally a separate rule with high priority. Therefore, when I set up my hourly backups to keep “one week” of hourly, two weeks of daily, and so on, but “keep everything from the last six weeks,” I ended up with seven weeks of hourly backups, followed by the two weeks of daily.
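Under the hood, Vorta hands these settings to borg prune, where the precedence is explicit: `--keep-within` is applied in addition to whatever the counted rules would retain. A hypothetical rendering of the settings above; the repository path and the exact counts are illustrative, not my actual configuration:

```shell
# --keep-within keeps *every* archive from its window, on top of
# what the hourly/daily/weekly counts retain. Hence: six weeks of
# everything, plus another week of hourly beyond that.
borg prune --list --dry-run \
    --keep-within 6w \
    --keep-hourly 168 \
    --keep-daily  14 \
    --keep-weekly 8 \
    /path/to/repo
```

Running it with `--dry-run --list` first shows what would be deleted without committing to anything.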

I noticed the problem when my laptop fans spun up for a while, which turned out to be Vorta verifying 300+ archives.  The work laptop is only powered on for work, producing backups for 8–9 hours per weekday, which added up to around 250 extra archives over those first six weeks.

Unrelated, but one nice thing about Vorta is that, like Pika, it is a front-end to Borg.  I gave it the same repository on disk.  Now I have continuous backup history across the two GUIs, and emergency CLI access if necessary.
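That emergency access is just plain borg pointed at the shared repository; a sketch, with a placeholder path and archive name:

```shell
# The GUIs and the CLI all speak to the same Borg repository.
borg list /path/to/repo                   # enumerate all archives
borg extract /path/to/repo::archive-name  # restore into the current directory
```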

Sunday, September 21, 2025

Reflections on Breaking Something

Last week, I deployed some code, and then impossible phenomena followed on the website.  Ultimately, it was all my fault, because I have changed my design sensibilities over time.

Distant past me figured it would make for shorter commands if we left the .service suffix off of names.  It would be added automatically at the boundary, when actually invoking a systemctl command.

Present me is less tolerant of magic, hates checking at several places whether or not to add .service, and worries about whether the code works with other types of systemd units.

Hence, when I recently updated our deployment code, it also began passing the full service name to the reload script.  That script is a wrapper that sits between the deployment user and systemctl.  The deployment user has sudo rights to the script, which can only run very specific systemctl commands, and which validates the unit name against an “allowed” list.

For simplicity—because it is run through sudo—this wrapper script had zero magic. It expected the caller to give it an abbreviated name, to which it would add .service itself.  The change to the deployment code then broke that process.  Tests didn’t catch it, not only because there are none, but because the wrapper script lives outside of the deployment repository.  It’s an externally-provided service.

Consequently…

The “impossible phenomena” happened because the new files were unpacked, including templates, while the reload left the old code running.  The old code didn’t set the newly-used variables for the template to process, so the parts relying on those variables malfunctioned.  I had a lot of difficulty duplicating this effect, because out of habit, I restart the daemon with sudo systemctl ... after making code changes in dev.  I don’t use the wrapper script.  (Maybe I should.)

The first thing to do was fix the wrapper script to accept names with the .service suffix.
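It is a small change; a minimal sketch of how such a wrapper might normalize and then validate names, with a hypothetical allowlist and function names (the real script's contents are not shown here):

```shell
# Sketch only: the allowlist and names are illustrative.
# Normalize first, then validate, so callers may pass
# "website" or "website.service" alike.
ALLOWED="website.service worker.service"

normalize_unit() {
    case "$1" in
        *.service) printf '%s' "$1" ;;
        *)         printf '%s.service' "$1" ;;
    esac
}

unit_allowed() {
    u=$(normalize_unit "$1")
    for ok in $ALLOWED; do
        [ "$u" = "$ok" ] && return 0
    done
    return 1
}

# The real wrapper would then do something like:
#   unit_allowed "$1" && exec systemctl reload "$(normalize_unit "$1")"
```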

But after that, the biggest thing is that the deployer needs to cancel the operation and issue a rollback if the final reload fails.  This will restore consistency between the old code and the original files on disk.
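A minimal sketch of that rollback, assuming the common current/releases symlink layout; the paths, function name, and reload command are placeholders, not my actual deployer:

```shell
# Flip the "current" symlink to the new release; if the reload
# fails, flip it back so the files on disk stay consistent with
# the daemon that is still running.
deploy_release() {
    root=$1        # e.g. /srv/website
    new=$2         # e.g. /srv/website/releases/2025-09-21
    reload_cmd=$3  # e.g. "sudo local-reload website.service"

    old=$(readlink "$root/current")
    ln -sfn "$new" "$root/current"

    if ! $reload_cmd; then
        ln -sfn "$old" "$root/current"
        return 1
    fi
}
```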

I might also be able to improve robustness overall by using a relative path for the template root dir.  If we stay in a working directory below the symlink that is updated during deployment, instead of traversing that symlink on an absolute path, we’ll always get the templates that correspond to the running code. However, that’s more subtle and tricky than issuing a rollback, and hence, more likely to get broken in the future.

I like the sudo local-reload website.service approach.  The script can be tracked in version control easily, and the sudoers file remains as short and simple as possible.  Meanwhile, the deployment user isn’t given broad access to the entire set of subcommands that systemctl has to offer.
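The sudoers side of this pattern stays at a single line. A hypothetical entry, assuming a deployment user named deploy and the wrapper installed as /usr/local/sbin/local-reload:

```
deploy ALL=(root) NOPASSWD: /usr/local/sbin/local-reload
```

The deployment user can run exactly one command as root, and that command decides for itself which units it will touch.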

Sunday, September 7, 2025

Online Builds

As a long-time coder and tinkerer who views computers as deterministic if we understand them properly, I find that modern tooling feels wrong.

  • Python expects code to be distributed in a form where it has to contact PyPI for dependencies. (You can get around this—like the awscli installer—but I never did figure out how they build that.)
  • Python expects code to be distributed in a form where the installation process executes arbitrary code. This transitively happens with all dependencies.
  • composer install (usually) expects to be able to fetch code from GitHub.  Running it (non-interactively and with --no-dev, of course) as part of deployment makes deployment depend on the internet working.
  • Containerfile ADD and COPY will happily take URLs as sources, including URLs that are intended to be mutable, like GitHub /latest/ release artifact URLs. Projects may recommend using such URLs.
  • curl … | sudo sh also deeply connects the internet to the process, and treats the script itself as ephemera, discarding it as the process completes. If the script makes its own internet connections, the problem with preserving the canonical source is multiplied.
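The Python bullets do have an offline escape hatch that I know of: vendor the dependencies ahead of time, then install with the index disabled. A sketch with placeholder paths, assuming pinned requirements:

```shell
# On a machine with network: snapshot every dependency locally.
pip download --dest vendor/ --requirement requirements.txt

# At install time: no index, no network, only the vendored files.
pip install --no-index --find-links vendor/ --requirement requirements.txt
```

The vendor/ directory then becomes part of what gets deployed, for better and worse.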

Quite aside from “the internet connection has to be up,” the referenced URLs must keep working over time.  A Containerfile built as recommended for the docker-php-extension-installer inherently requires the up-to-date source of code to remain at the github.com site, and under the mlocati user.

Building reliability and reproducibility into the process is left up to the user.  Those features can only be included if the thought, “what if…?” crosses someone’s mind.

Saving remote resources into a local build context protects them from loss, but it requires the maintainer of that build to update those resources.  Probably manually.  If it can’t be changed out from under me the next time I run podman build, then it also isn’t getting updates to follow changes in the base image.  It takes some discipline to track where these things come from, and sometimes, how to reproduce them.  For instance, when GPG keys for an Ubuntu PPA needed to be converted to binary before use, it wasn’t enough to leave only the URL written down.
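One way to keep that discipline honest is to record a checksum next to the URL, so provenance is written down and the build fails loudly if the vendored copy drifts. A sketch; the function name and layout are my own invention, not from any tool:

```shell
# Fetch a remote file into the build context once, then verify it
# against a recorded sha256 on every subsequent build.
fetch_vendored() {
    url=$1; dest=$2; want=$3
    [ -f "$dest" ] || curl -fsSL -o "$dest" "$url"
    got=$(sha256sum "$dest" | cut -d' ' -f1)
    if [ "$got" != "$want" ]; then
        echo "checksum mismatch for $dest" >&2
        return 1
    fi
}
```

The Containerfile then uses a plain COPY of the vendored file, never an ADD of the URL.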

Thus, it’s more work, but the result is stable, and that’s important to me.

Sunday, August 24, 2025

AI Erodes Knowledge

If one has an LLM “do the work,” then one does not actually learn anything.  Taking notes improves recall, and so does trying to remember the answer before looking something up.  In any case, to become more proficient, something has to happen inside the mind. If we skip that part in our rush to produce volume of output, we are trading away our future skills.

“Not gaining proficiency” would be indictment enough, but worse, unused skills decay. There is a reason that C is not on my résumé anymore!  I can’t imagine what would become of my PHP if I didn’t do the coding, and I utterly dread the potential outcome of using an LLM to make changes for an entire month.

An LLM will never produce original work, since it is trying to “predict” based on a corpus of past, public work.  It’s great for writing code like fanboys on the internet write code.  It’s not so great at cost optimization.

Overall, I wonder about how institutional knowledge might be affected by heavier LLM usage.  If we’re not doing the thinking, will we be able to remember anything about our own history?

Sunday, July 20, 2025

Finding Meaning

Two things came across my radar recently: cks talking about Job Vs Career, which is an old post, and apenwarr talking about Billionaire Math, which is not.  They’re very different, but they both give me the same “meaning of life” vibes, so let’s talk about that.

“Where am I going?” and “What do I do now that I have all this money?” are sort of the same question, just from different angles.  It’s a spiritual question, because the process of answering it has the shape of a spiritual journey.

What do you really, truly want?

It’s really hard to untangle from what everyone thinks we should want!  It’s a question that comes down to values and worth, and it takes time to uncover those values.  It takes thought, grit, commitment.  It takes looking inward to our own expectations, and deciding whether we need to hold onto those.

Sunday, July 6, 2025

There’s No HealthScore™

I don’t know who needs to hear this today, but there’s no single number that defines “healthy.”

Weight and BMI don’t work.  Neither total cholesterol, LDL, nor triglycerides cover it.  Blood glucose or A1c, as useful as they are for diabetes, do not have specific “Health” levels.  Exercise isn’t magic, either; there’s no step count or bike computer statistic that indicates perfection.

These are all data points in a larger picture, and should be regarded holistically.  There is a complex, interlinked system regulating them all, and trying to directly change one of the outputs is not likely to be helpful or sustainable.  At least, not if I don’t have a disease that is specifically related to those markers.

It took all my willpower, but I finally quit dieting.

I don’t have anything else.  There’s no general advice I can give on diet, exercise, or health care that I can be confident I will be able to stand behind for even five years.  The science isn’t there to give anyone (me included) individualized advice.  Regardless, the limited hypothesis of “there is no silver bullet to health” should withstand the test of time.  I can hope.

Sunday, June 22, 2025

Early Thoughts on Coding with an LLM

I temporarily displaced my misgivings about the Plagiarism Machine That Also Destroys Earth.  Since I haven’t yet spent a full week working with it, I have some first impressions only:

  1. All generators have the highest accuracy on the smallest tasks, where they are the least useful.
  2. The tools all seem to prefer the same model, so there’s less differentiation than one might hope for. Nothing is worse… but nothing is better.
  3. Writing a good prompt takes a lot of planning.
  4. Every change must be tediously reviewed.  This includes every tab-completable inline suggestion in the editor.
  5. Junie is not actually integrated very deeply into PHPStorm.
  6. Agent mode is full-on Sorcerer’s Apprentice.  One must always be standing by on the “Emergency Stop – Never Use” button.

Due to homogeneous model choices, UI is an important differentiator right now.  Using Control+Backslash to generate code in PHPStorm makes for results that are difficult for me to understand, because the diff is character-based.  Cursor is much better at this, with line-based diffs.

As for Junie’s lack of integration, it failed to recognize «run the "foo api unit tests"» as an instruction to run the pre-existing test configuration named “foo api unit tests”.  I let it try running foo api unit tests in the terminal to see what it would do.  After getting a command-not-found error, it attempted to find any test that it could run, and tried running that instead.  Fortunately, it needed permission to run further off the rails, so I denied it.

Summary

As for the overall experience…

The LLM removes the fun part of programming, the writing of the code, leaving the planning and debugging parts I am less fond of.  The incessant demands for attention from inline suggestions also fundamentally block entering a flow state. Meanwhile, hallucinations are always trying to stab me in the back; there is absolutely no meta-analysis of whether the prompt itself is misguided.

I don’t even see these tools as useful for exploring unfamiliar code.  My IDE already has a set of tools for that: Find in Files, Go to Definition, and Find References/Usages.  Since these aren’t constrained to a sidebar, the results are also far more usable.

Even if it had no downsides, at first blush, the (paid) systems still rate a solid “meh.”