Sunday, December 28, 2025

Two Thoughts on Ubuntu Signing Keys

Here’s something I don’t get: why is there a trusted “2012 CD signing key” on my Ubuntu 24.04 machines when there is also a “2018” signing key?  Shouldn’t that transition have completed within five years?  And shouldn’t we be able to tie the 2012 key to a specific repository set, instead of trusting it for all packages?  “All packages” includes PPAs, and I really wish neither of those CD signing keys were valid for that purpose.

The cryptographic domains should be separated:

  1. One CD signing key, tied to the CD/DVD packages
  2. One online release signing key, tied to the Ubuntu main/security sources
  3. One key per PPA, tied to that PPA

Deprecating globally-trusted keys for PPAs is a good step, but the globally-trusted release keys (especially ones that are over a decade old) should be cleaned out immediately as well.
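The scoping mechanism already exists: in the deb822 sources format, a Signed-By field ties a key to a single repository entry.  A minimal sketch, with a hypothetical repository and keyring path:

```
# /etc/apt/sources.list.d/example.sources (hypothetical repository)
Types: deb
URIs: https://example.com/apt
Suites: stable
Components: main
# this key is trusted for this repository only, not globally
Signed-By: /usr/share/keyrings/example-archive-keyring.gpg
```

A key referenced this way never lands in the global trust store, which is exactly the property the CD signing keys lack.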

Semi-related pro tip: extrepo

Many third-party repositories are supported by extrepo, which handles the signing keys for you.  There is no need for arcane gpg format-conversion commands, no worrying about whether the key goes into /usr (incorrect under Unix philosophy, but widely recommended) or /etc, no manual editing of sources files, and especially no cursed curl | bash invocation.

$ sudo apt install extrepo

And then you can do stuff like:

$ extrepo search github
$ sudo extrepo enable github-cli
$ sudo apt install --update gh

This is especially useful for upstreams that distribute an official deb package outside of PPAs.  I aim to get the code from as close to the source as possible, when the distro itself doesn’t suit my needs.
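Under the hood, enabling a repository amounts to writing a normal apt sources entry whose key is scoped to that one repository.  Conceptually (with placeholder paths, not extrepo’s exact layout) the result is something like:

```
# generated sources entry (conceptual sketch; real paths differ)
Types: deb
URIs: https://cli.github.com/packages
Suites: stable
Components: main
Signed-By: /path/to/github-cli-keyring.asc
```

In other words, extrepo gives you the per-repository key scoping from above without making you assemble the file yourself.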

Sunday, December 21, 2025

Adventures with my old iPod Touch

The iPod Touch (4th Gen) in the car could no longer be detected, so we pulled it out of the console to find it in DFU mode.  Yikes.

I had extremely little hope, but I took it and its USB-A to 30-pin dock cable (the only extant cable of this type in my collection) inside, plugged it into a USB-C to USB-A adapter, and plugged it into the Mac.  It… er… worked. Sure, it appeared to open in Finder and not iTunes (rip), but it actually worked (on the second try; the advice for “unknown error 9” is to try again.  What are we doing as a profession.)  I was able to restore iOS 6.1.6 to it, although I did not have the option to keep my data.

But I never moved my music onto the Mac.  I figured that with a hardware device that is fifteen years old, and back in factory state, surely Linux should be able to sync to it?

The first problem was even getting it connected, because Amarok threw an error from ifuse.  Copying the command out of the error message and running it in a terminal worked totally fine.  (I didn’t think of this at the time, but… Amarok logs in the systemd journal.  Maybe its permissions have been stripped down too far.)
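For reference, the manual mount went roughly like this.  This is a sketch from memory, not a verbatim transcript; the mount point is whatever directory you choose, and the pairing step is only needed the first time:

$ sudo apt install ifuse libimobiledevice-utils
$ idevicepair pair       # approve the prompt on the device, if any
$ mkdir -p ~/ipod
$ ifuse ~/ipod           # mount the media partition
$ ls ~/ipod              # iTunes_Control and friends
$ fusermount -u ~/ipod   # unmount when done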

Once that was up and running, I restarted Amarok a couple of times before I found out where it had hidden the iPod: it’s under “Local Collection.”

I then waited a long time for things to sync.  I waited so long that I wandered off and forgot to set “Don’t Sleep”, so the computer suspended.  The iPod made its ancient, discordant glissando when the computer woke up, and then Amarok—and any process trying to stat() the ifuse mount point—froze.  ifuse sat there burning 100% CPU for a couple of minutes, and then I restarted.

(Apparently the sleep interval was fifteen minutes, the longest time that doesn’t make KDE System Settings complain about “using more energy.”  Well… I paid for it, one way or the other.)

I got it going again.  Amarok carefully loaded gigabytes of tracks onto the iPod Touch, then started complaining about checksum errors for the database.  The database is the part that makes the tracks usable; without it, the iPod shows “No content” and a button for the iTunes Store.  That ended up being the final boss that I couldn’t beat.  The tracks are still there, apparently, showing up as “Other” data on the Mac.

Yeah.

I plugged the backup drive into the Mac, imported everything, and exported it to the iPod Touch.  The double copy was orders of magnitude faster than Amarok’s unidirectional efforts.  I should never have been so lazy.

  • Free Software: 0
  • Proprietary OS: 2

I don’t know how we got here.

Sunday, December 14, 2025

Three Zsh Tips

To expand environment variables in the prompt, make sure setopt prompt_subst is in effect.  It is on by default in sh/ksh emulation modes, but not in native zsh mode.
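A minimal sketch, assuming a hypothetical MY_STATUS variable that something else updates:

```
# in ~/.zshrc
setopt prompt_subst
# single quotes matter: the expansion must happen every time the prompt
# is drawn, not once when PROMPT is assigned
PROMPT='${MY_STATUS}%n@%m %~ %# '
```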

To automatically report the time a command took if it consumes more than a certain amount of CPU time, set REPORTTIME to the limit in seconds.  That is, REPORTTIME=1 (I am impatient) will act as if I had run the command under time, whenever it consumes more than a second of CPU time.

There’s a similar REPORTMEMORY variable to show the same (!) stats for processes that use more than the configured amount of memory during execution.  (Technically, RSS, the Resident Set Size.)  The value is in kilobytes, so REPORTMEMORY=10240 will print time statistics for processes larger than 10 MiB.  Relatedly, one should configure TIMEFMT to include “max RSS %M” in order to actually show the value that made the stats print.

Note that REPORTTIME and REPORTMEMORY do not have to be exported, as they’re only relevant to the executing shell.

# in ~/.zshrc
REPORTTIME=3
REPORTMEMORY=40960
TIMEFMT='%J  %U user %S system %P cpu %*E total; max RSS %M'

Sources: REPORTTIME and REPORTMEMORY are documented in the zshparam man page.  Prompt expansion is described in zshmisc, and the prompt_subst option is in zshoptions.

Sunday, December 7, 2025

Notes on an ECS Deployment

In order to try FrankenPHP and increase service isolation, we decided to split our API service off of our monolithic EC2 instances.  (The instances carry several applications side-by-side with PHP-FPM, and use Apache to route to the applications based on the Host header.  Each app is not supposed to meddle in the neighbor’s affairs, but there’s no technical barrier there.)

I finally got a working deployment, and I learned a lot along the way.  The documentation was a bit scattered, and searching for the error messages was nearly useless, so I wanted to pull all of the things that tripped me up together into a single post.  It’s the Swiss Cheese Model, except that everything has to line up for the process to succeed, rather than fail.

  1. Networking problems
  2. ‘Force Redeployment’ is the normal course of operation
  3. The health check is not optional
  4. Logs are obscured by default
  5. The ports have to be correct (Podman vs. build args)
  6. The VPC Endpoint for an API Gateway “Private API” is not optional
  7. There are many moving parts

Let’s take a deeper look.