Sunday, January 4, 2026

Lazy Init Only Scatters Latency

People report on the Internet that their “Hello World” NodeJS functions in AWS Lambda initialize in 100–150 ms on a cold start, while our real functions spend 1000–1700 ms.  Naturally, I went looking for ways to optimize, but what I found wasn’t exactly clear-cut.

A popular option is to have one function handle multiple event types, choosing the internal processing based on the shape of the event.  Maybe a large fraction of the events don’t need to handle a PDF, so the function can skip loading the PDF library up front.
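
In Node, that pattern usually means moving the heavy import out of module scope and into the handler path that needs it.  A minimal sketch, with pdf-lib standing in as a hypothetical heavy dependency:

// Module scope runs during init, so a top-level require/import of the
// PDF library would be paid on every cold start.
let pdfLib;  // cached across warm invocations

async function handlePdfEvent(event) {
  // Lazy: only the PDF code path pays the load cost, at run time.
  pdfLib ??= await import('pdf-lib');
  const doc = await pdfLib.PDFDocument.create();
  // ... build and return the PDF here
}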

Unfortunately, my function handles just two event types (git push and “EC2 instance state changed”), and in both cases the code needs to contact AWS services:

  1. git push will always fetch the chat token from Secrets Manager
  2. Instance-state changes track run-time using DynamoDB (and may need the chat token)

If I push enough of the AWS SDK initialization off to run time, all I’m actually doing is moving the delay from the init phase to the run phase.  Separating the SDK latency from average processing would require a relatively high-frequency request type that didn’t use the AWS SDK at all.  Even then, it still wouldn’t help if the first request needed AWS!
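
Concretely, “lazy” here just means memoizing the client on first use.  A sketch with AWS SDK v3 (the package and client names are real; the secret name is hypothetical):

let smClient;  // created on first call, reused across warm starts

async function getChatToken() {
  // Both the module load and the client construction are deferred to run time.
  const { SecretsManagerClient, GetSecretValueCommand } =
    await import('@aws-sdk/client-secrets-manager');
  smClient ??= new SecretsManagerClient({});
  const out = await smClient.send(
    new GetSecretValueCommand({ SecretId: 'chat-token' }));  // hypothetical name
  return out.SecretString;
}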

Nonetheless, I did the experiments, and as far as I can tell, lazy init does exactly what I predicted: causes more run time and less init time, for a similar total, on a cold start.  Feeding it 33% more RAM+CPU lets it run 22% faster and consume more memory total, which suggests that it’s doing less garbage collection.  It would be nice to have GC stats, then!
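
On that note, V8 can log its GC events.  Setting a function environment variable should dump one line per collection into the CloudWatch logs, assuming the managed Node runtime passes NODE_OPTIONS through to the process (I believe it does, but I haven’t verified it on Lambda specifically):

# function environment variable
NODE_OPTIONS=--trace-gc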

Warm-start execution, for what it’s worth, is about 10% of a cold start’s overall time, or about 25% of the cold-start run phase as it stood before any changes were made.  Either GC or CPU throttling is hampering cold-start performance.

(I’d also love to know how AWS is doing the bin-packing in the background.  Do they allocate “bucket sizes” and put 257–512 MB functions into 512 MB “reservations,” or do they actually try to fill hosts precisely?  Actually, it’s probably oversubscribed, but by how much?  “Run code without thinking about servers,” they said, and I replied, “Don’t tell me what to do!”)

The experiment I didn’t do was whether using esbuild to publish a 1.60 MB single-file CommonJS bundle, instead of a 0.01 MB zip with ESM modules, would do anything.  Most sources say that keeping the file size down is the number one concern for init speed.  At this point, I think if I wanted more speed, I would port to Go.
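
If I ever run that experiment, the bundling side is a one-liner.  A sketch, with hypothetical entry-point and output paths:

$ esbuild src/index.mjs --bundle --minify --platform=node \
    --format=cjs --outfile=dist/index.js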

Sunday, December 28, 2025

Two Thoughts on Ubuntu Signing Keys

Here’s something I don’t get: why is there a trusted “2012 CD signing key” on my Ubuntu 24.04 machines when there is also a “2018” signing key?  Shouldn’t that transition have completed within five years?  Shouldn’t we be able to tie the 2012 key to a specific repository set, instead of trusting it for all packages?  “All packages” includes PPAs, and I really wish neither of those CD signing keys were valid for that purpose.

The cryptographic domains should be separated, as the sketch after this list illustrates:

  1. One CD signing key, tied to the CD/DVD packages
  2. One online release signing key, tied to the Ubuntu main/security sources
  3. One key per PPA, tied to that PPA
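
For what it’s worth, apt can already express the per-repository scoping with the Signed-By field in a deb822-style source.  A sketch, with a hypothetical PPA and key path:

# /etc/apt/sources.list.d/example-ppa.sources (all names hypothetical)
Types: deb
URIs: https://ppa.launchpadcontent.net/example/ppa/ubuntu
Suites: noble
Components: main
Signed-By: /etc/apt/keyrings/example-ppa.gpg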

Deprecating globally-trusted keys for PPAs is a good step, but the globally-trusted release keys (especially ones that are over a decade old) should be cleaned out immediately as well.

Semi-related pro tip: extrepo

Many packages are supported in extrepo, which handles the keys for you.  There is no need for arcane gpg format-conversion commands, no worrying about whether the key goes into /usr (incorrect under Unix philosophy, but widely recommended) or /etc, no manual editing of sources files, and especially no cursed curl | bash invocation.

$ sudo apt install extrepo

And then you can do stuff like:

$ extrepo search github
$ sudo extrepo enable github-cli
$ sudo apt install --update gh

This is especially useful for upstreams that distribute an official deb package outside of PPAs.  When the distro itself doesn’t suit my needs, I aim to get the code from as close to the source as possible.

Sunday, December 21, 2025

Adventures with my old iPod Touch

The iPod Touch (4th Gen) in the car could no longer be detected, so we pulled it out of the console to find it in DFU mode.  Yikes.

I had extremely little hope, but I took it and its USB-A to 30-pin dock cable (the only extant cable of this type in my collection) inside, plugged it into a USB-C to USB-A adapter, and plugged that into the Mac.  It… er… worked.  Sure, it appeared in Finder and not iTunes (rip), but it actually worked, on the second try; the advice for “unknown error 9” is genuinely to try again.  (What are we doing as a profession?)  I was able to restore iOS 6.1.6 to it, although I did not have the option to keep my data.

But, I never moved my music onto the Mac.  I figured, with a hardware device that is fifteen years old, and back in factory state, surely, Linux should be able to sync to it?

The first problem was even getting it connected, because Amarok threw an error from ifuse.  Copying the command out of the error message and running it in a terminal worked totally fine.  (I didn’t think of this at the time, but… Amarok logs in the systemd journal.  Maybe its permissions have been stripped down too far.)
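
For reference, the manual version is short.  A sketch, assuming ~/ipod as the mount point:

$ mkdir -p ~/ipod
$ ifuse ~/ipod
$ fusermount -u ~/ipod   # to unmount afterward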

Once that was up and running, I restarted Amarok a couple of times, before I found out where it had hidden the iPod.  It’s under “Local Collection.”

I then waited a long time for things to sync.  I waited so long that I wandered off and forgot to set “Don’t Sleep”, so the computer suspended.  The iPod made its ancient, discordant glissando when the computer woke up, and then Amarok—and any process trying to stat() the ifuse mount point—froze.  ifuse sat there burning 100% CPU for a couple of minutes, and then I restarted.

(Apparently the sleep interval was fifteen minutes, the longest time that doesn’t make KDE System Settings complain about “using more energy.”  Well… I paid for it, one way or the other.)

I got it going again.  Amarok carefully loaded gigabytes of tracks onto the iPod Touch, then started complaining about checksum errors for the database.  The database is the part that makes the tracks useful, instead of having the iPod show “No content” and a button for the iTunes Store.  That ended up being the final boss that I couldn’t beat.  The tracks are still there, apparently, showing up as “Other” data on the Mac.

Yeah.

I plugged the backup drive into the Mac, imported everything, and exported it to the iPod Touch.  The double copy was orders of magnitude faster than Amarok’s unidirectional efforts.  I should never have been so lazy.

  • Free Software: 0
  • Proprietary OS: 2

I don’t know how we got here.

Sunday, December 14, 2025

Three Zsh Tips

To expand environment variables in the prompt, make sure setopt prompt_subst is in effect.  This is the default in sh/ksh emulation modes, but not in native zsh mode.
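
A minimal sketch, using a hypothetical AWS_PROFILE variable; without prompt_subst, the prompt would show the literal ${...} text instead of the value:

# in ~/.zshrc
setopt prompt_subst
PROMPT='${AWS_PROFILE:+[$AWS_PROFILE] }%~ %# '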

To automatically report the time a command took when it consumes more than a certain amount of CPU time, set REPORTTIME to the limit in seconds.  That is, REPORTTIME=1 (I am impatient) will act as if I had run the command under time, whenever it consumes more than a second of CPU.

There’s a similar REPORTMEMORY variable to show the same (!) stats for processes that use more than the configured amount of memory during execution.  (Technically, RSS, the Resident Set Size.)  The value is in kilobytes, so REPORTMEMORY=10240 will print time statistics for processes larger than 10 MiB.  Relatedly, one should configure TIMEFMT to include “max RSS %M” in order to actually show the value that made the stats print.

Note that REPORTTIME and REPORTMEMORY do not have to be exported, as they’re only relevant to the executing shell.

# in ~/.zshrc
REPORTTIME=3
REPORTMEMORY=40960
TIMEFMT='%J  %U user %S system %P cpu %*E total; max RSS %M'

Sources: REPORTTIME and REPORTMEMORY are documented in the zshparam man page.  Prompt expansion is described in zshmisc, and the prompt_subst option is in zshoptions.

Sunday, December 7, 2025

Notes on an ECS Deployment

In order to try FrankenPHP and increase service isolation, we decided to split our API service off of our monolithic EC2 instances.  (The instances carry several applications side-by-side with PHP-FPM, and use Apache to route to the applications based on the Host header.  No app is supposed to meddle in its neighbors’ affairs, but there’s no technical barrier there.)

I finally got a working deployment, and I learned a lot along the way.  The documentation was a bit scattered, and searching for the error messages was nearly useless, so I wanted to pull all of the things that tripped me up together into a single post.  It’s the Swiss Cheese Model, except that everything has to line up for the process to succeed, rather than fail.

  1. Networking problems
  2. ‘Force Redeployment’ is the normal course of operation
  3. The health check is not optional
  4. Logs are obscured by default
  5. The ports have to be correct (Podman vs. build args)
  6. The VPC Endpoint for an API Gateway “Private API” is not optional
  7. There are many moving parts

Let’s take a deeper look.
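
As a preview of items 3 and 5: both the container health check and the port mapping live in the task definition’s container definition, and they have to agree with what the image actually listens on.  A sketch, with hypothetical port and path:

"portMappings": [
  { "containerPort": 8080, "protocol": "tcp" }
],
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}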

Sunday, November 30, 2025

Container Friction

I find it inconvenient that a number of container settings are immutable.  Forget a volume?  Forget a port mapping?  Want to bring up a container, make a change, and then mark the root filesystem as read-only?  Start with a read-only root, and then realize it should have been read-write after all?

Too bad.  Go set up all of the other settings again, along with the desired change, and don’t miss anything or get any settings wrong this time.  If it’s “change something before going read-only,” it’s time to create a Containerfile and build a new image, too.  No matter what, don’t forget to delete the original container first, to free its name for reuse.

There’s no way to create a container based on an existing container’s settings: no direct option, and not even an indirect one to export a container’s settings and then reuse them as a template during creation later.
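
The closest thing I’ve found is reading the settings back by eye and retyping them.  A sketch, assuming a container named web (the field names follow the Docker-compatible inspect output):

$ podman container inspect web | jq '.[0].HostConfig.Binds'
$ podman container inspect web | jq '.[0].HostConfig.PortBindings'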

It’s worse in Podman Desktop, which has no memory.  It doesn’t even have separate “last used directory” memory for building from a Containerfile versus choosing a volume to mount.  Why not make everyone go back and forth through the filesystem every time?

At least in the CLI, the previous commands could be in the shell history.  “Could be,” because I might have been distracted for two months, and the commands got pushed out of the history in the meantime.  (Speaking of, “the container” has its Containerfile baked in, but doesn’t remember where on the filesystem that Containerfile came from.  Not even locally.  Maybe it’s just me, but it’s always a treasure hunt to resume a project.)

I can see the logic of “not allowing settings drift” in production container environments, but it seems like Podman Desktop should be optimized for the developer experience and experimentation instead.

Sunday, November 23, 2025

Firefox’s new Profile Manager: a lightning review

Back in Firefox 138, the new profile UI made it to the stable channel.  If “Profiles” isn’t already showing near the top of the main menu, interested users can go in through about:config and flip the browser.profiles.enabled option to true.

I created a new Shopping profile to check it out.  (Formerly, my shopping has happened in a mix of dedicated containers for commonly-shopped stores, and temporary containers for less-common stores, in my core profile.  However, that profile frequently breaks checkout with its high level of privacy settings and extensions.)

First off, the good: this is far more convenient to access than about:profiles, and much prettier.  What’s more, it adds the profile badge to the Firefox icon in the Dock and Cmd+Tab list (macOS)!  There’s no more guessing about which identical Firefox corresponds to which profile.

Passkeys, since they are stored in the system’s keyring, are available across profiles.  Signing in at the new profile didn’t require any password management.  Finally, as a particularly geeky note, these are just like old Profiles, with independent extensions, themes, settings, bookmarks, and history.  The meaning of the “profile” name hasn’t been changed by this.

That leaves the one thing that could be improved: this UI is completely separate from the traditional about:profiles.  Existing profiles do not import into the new UI, and profiles created under the new UI aren’t visible in the old one.  If my existing profiles had seamlessly imported, that would have been amazing.

Incidentally, if anyone needs to know, the new profile UI is at the URL about:profilemanager.

On the whole, the new system is a no-brainer.  It’s love at first sight.  I will probably retire Containers from my core profile, only retaining them in Shopping or AWS profiles to keep sites/accounts separate within those domains.  I do know that AWS has multi-session support now, but I’m used to the containers for that.