Sunday, January 11, 2026

Trying Stage Manager

I tried Stage Manager on my desktop in macOS 26.  There’s not much to say about it, because it didn’t click for me.  I just don’t work with that many big windows.

If there’s a truly huge window, like an image editor, it tends to be the only thing I’m using “at one time,” so there’s nothing to Stage-Manage through.  When I do need another window briefly, Cmd+Tab has worked well.

When I’m working hard on my personal website, it tends to involve four windows arranged spatially: Podman Desktop, MacVim, and iTerm2 non-overlapping on one workspace, and Firefox (and its dev tools) on the next.

I was confused about the order of apps in the sidebar.  I eventually realized the sidebar was “swapping” the app being restored with the one being minimized when changing apps, but that didn’t really help in terms of efficiency.  Using several apps in sequence means they keep moving around instead of having a consistent placement.

It might be more of a revelation on a laptop, where having half the screen dimensions in each direction means having a quarter of the area for individual windows.  Or maybe I’m just set in my ways after 30 years.

Sunday, January 4, 2026

Lazy Init Only Scatters Latency

People report on the Internet that their “Hello World” NodeJS functions in AWS Lambda initialize in 100–150 ms on a cold start, while our real functions spend 1000–1700 ms.  Naturally, I went looking for ways to optimize, but what I found wasn’t exactly clear-cut.

A popular option is to have a single function handle multiple event types, choosing an internal code path based on the shape of the event.  Maybe a large fraction of their events don’t need to handle a PDF, so they can skip loading the PDF library up front.
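That pattern can be sketched like this; a minimal sketch, where the `needsPdf` flag is a hypothetical event field, and `node:zlib` stands in for a heavy dependency like a PDF library:

```javascript
// Lazy, shape-based loading: pay the import cost only on the first
// event that actually needs the heavy module.
let pdfLib = null; // cached across warm invocations of the same container

async function handler(event) {
  if (event.needsPdf) {
    // node:zlib is a stand-in for a large third-party library.
    pdfLib ??= await import("node:zlib");
    return { loaded: true };
  }
  return { loaded: false };
}
```

Events that skip the flag never pay for the import, which is exactly why this only helps when a meaningful fraction of traffic takes the cheap path.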

Unfortunately, I need a function that handles just two event types (git push and “EC2 instance state changed”), and in both cases, the code needs to contact AWS services:

  1. git push will always fetch the chat token from Secrets Manager
  2. Instance-state changes track run-time using DynamoDB (and may need the chat token)
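Telling the two apart by shape might look like this; the EC2 branch matches the real EventBridge “EC2 Instance State-change Notification” envelope, while the git-push branch checks a hypothetical webhook payload (the actual field names depend on where the push event comes from):

```javascript
// Classify an incoming Lambda event by its shape.
function classify(event) {
  // EventBridge delivers EC2 state changes with these fields.
  if (
    event.source === "aws.ec2" &&
    event["detail-type"] === "EC2 Instance State-change Notification"
  ) {
    return "instance-state";
  }
  // Hypothetical git-push webhook shape: a ref plus a list of commits.
  if (event.ref && Array.isArray(event.commits)) {
    return "git-push";
  }
  return "unknown";
}
```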

If I push enough of the AWS SDK initialization off to run time, all I’m actually doing is moving the delay there.  To separate the SDK latency from average processing, I would need a relatively high-frequency request that didn’t use the AWS SDK at all.  Even then, it still wouldn’t work if the first request needed AWS!
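The lazy-init shape itself is just memoization; here is a minimal sketch, where `makeSecretsClient` is a stand-in for constructing a real SDK client, with a counter in place of the expensive work:

```javascript
// Memoized lazy init: the expensive constructor runs at most once per
// container, but the first request that needs it still pays the full cost.
let initCount = 0;
let secretsClient = null;

function makeSecretsClient() {
  initCount++; // the slow SDK construction would happen here
  return { getToken: () => "token" };
}

function getSecretsClient() {
  secretsClient ??= makeSecretsClient();
  return secretsClient;
}
```

When every cold-start request ends up calling `getSecretsClient()`, this just relabels init time as run time, which is the prediction the experiments confirmed.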

Nonetheless, I did the experiments, and as far as I can tell, lazy init does exactly what I predicted: causes more run time and less init time, for a similar total, on a cold start.  Feeding it 33% more RAM+CPU lets it run 22% faster and consume more memory total, which suggests that it’s doing less garbage collection.  It would be nice to have GC stats, then!

Warm-start execution, for what it’s worth, is 10% of a cold start’s overall runtime, or about 25% of what the cold-start run time was before any changes were made.  Either GC or CPU throttling is hampering cold-start performance.

(I’d also love to know how AWS is doing the bin-packing in the background.  Do they allocate “bucket sizes” and put 257–512 MB functions into 512 MB “reservations,” or do they actually try to fill hosts precisely?  Actually, it’s probably oversubscribed, but by how much?  “Run code without thinking about servers,” they said, and I replied, “Don’t tell me what to do!”)

The experiment I didn’t do was whether using esbuild to publish a 1.60 MB single-file CommonJS bundle, instead of a 0.01 MB zip with ESM modules, would do anything.  Most sources say that keeping the file size down is the number one concern for init speed.  At this point, I think if I wanted more speed, I would port to Go.
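For reference, that untried bundling step would be a short script against esbuild’s build API; a sketch only, where the entry point, output path, and Node target are assumptions:

```javascript
// Bundle the handler into a single CommonJS file for Lambda.
const esbuild = require("esbuild");

esbuild.build({
  entryPoints: ["src/handler.mjs"], // assumed entry point
  bundle: true,                     // inline all imports into one file
  platform: "node",
  target: "node20",                 // assumed to match the Lambda runtime
  format: "cjs",                    // CommonJS output, as described above
  minify: true,
  outfile: "dist/handler.js",
});
```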