Sunday, January 25, 2026

Identity Requires Long Term Secrets

One cannot remove all long-term credentials.  The process of establishing a session is one of identifying a stable entity (such as a user account) and giving that entity temporary access to the resource servers.  In its simplest form, this means providing a username (the entity) and password (the secret that authenticates the username), and receiving a session cookie (the temporary credential) for accessing the service.

Somewhere at the root of trust must be a long-term credential.  Otherwise, if all temporary credentials have expired, how is the user authenticated in order to generate a new one?  What would stop anyone else from going through the same process for the user?

An individual service can outsource user authentication: it can email a code, use SMS or a voice call, or integrate with a third-party service like Okta or any OAuth provider.  In those cases, the long-term credential still exists, but the key store itself is externalized, leaving the service at the mercy of that key store’s security.  Email is probably low risk for normal people, who have an account with multi-factor authentication at a large provider that’s going to notice ‘unusual’ logins, but my quirky personal email isn’t like that.

The other problem with outsourcing is that if the provider changes their mind about account requirements, users can get locked out of both their email and their service account at the same time.  (Ask someone how hard it is to maintain a secondary Google account.)

Everything else is a long-term credential stored by the service.  Passwords require a stored hash to check against.  Passkeys, authenticator-app secrets, and client certificates are likewise linked to a user account, so they must be stored with it.  The service cannot accept any of these for the wrong user.

Sunday, January 11, 2026

Trying Stage Manager

I tried Stage Manager on my desktop in macOS 26.  There’s not much to say about it, because it didn’t click for me.  I just don’t work with that many big windows.

If there’s a truly huge window like an image editor, it tends to be the only thing I’m using “at one time,” so there’s nothing to Stage Manage through.  When I do need to switch, Cmd+Tab has worked well.

When I’m working hard on my personal website, it tends to involve four windows arranged spatially: Podman Desktop, MacVim, and iTerm2 non-overlapping on one workspace, and Firefox (and its dev tools) on the next.

I was confused about the order of apps in the sidebar.  I eventually realized they were “swapping” between the app being restored and the one being minimized when changing apps, but that didn’t really help in terms of efficiency.  Using several apps in sequence means they keep moving around instead of having a consistent placement.

It might be more of a revelation on a laptop, where half the linear screen size means a quarter of the area for individual windows.  Or maybe I’m just set in my ways after 30 years.

Sunday, January 4, 2026

Lazy Init Only Scatters Latency

People report on the Internet that their “Hello World” NodeJS functions in AWS Lambda initialize in 100–150 ms on a cold start, while our real functions spend 1000–1700 ms.  Naturally, I went looking for ways to optimize, but what I found wasn’t exactly clear-cut.

A popular option is for the function to handle multiple events, choosing internal processing based on the shape of the event.  Maybe a large fraction of their events don’t need to handle a PDF, so they can skip loading the PDF library up front.

Unfortunately, my needs are for a function that handles just two events (git push and “EC2 instance state changed” events), and in both cases the code needs to contact AWS services:

  1. git push will always fetch the chat token from Secrets Manager
  2. Instance-state changes track run-time using DynamoDB (and may need the chat token)

If I push enough of the AWS SDK initialization off to run time, all I’m actually doing is moving the delay from init to the first invocation.  To separate the SDK latency from average processing, I would need a relatively high-frequency request path that didn’t use the AWS SDK at all.  Even then, it still wouldn’t work if the first request needed AWS!
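
A toy model makes the accounting concrete.  `HeavyClient` below stands in for an AWS SDK client with expensive construction; the total work is identical either way, and only the phase that pays for it changes:

```javascript
class HeavyClient {
  constructor() {
    // Busy-wait placeholder for expensive setup (credential chain,
    // config resolution, and so on).
    for (let i = 0; i < 1_000_000; i++) {}
    this.ready = true;
  }
}

// Eager: cost is paid at module load, i.e. "Init Duration" on a cold start.
const eagerClient = new HeavyClient();

// Lazy: cost is paid inside the first invocation ("Duration") instead.
let lazyClient;
function handler() {
  lazyClient = lazyClient ?? new HeavyClient();
  // Either way, the first cold request pays the full price exactly once.
  return lazyClient.ready && eagerClient.ready;
}
```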

Nonetheless, I did the experiments, and as far as I can tell, lazy init does exactly what I predicted: causes more run time and less init time, for a similar total, on a cold start.  Feeding it 33% more RAM+CPU lets it run 22% faster and consume more memory total, which suggests that it’s doing less garbage collection.  It would be nice to have GC stats, then!

Warm-start execution, for what it’s worth, takes about 10% of a cold start’s overall run time (it was about 25% of the cold-start run time before any changes were made).  Either GC or CPU throttling is hampering cold-start performance.

(I’d also love to know how AWS is doing the bin-packing in the background.  Do they allocate “bucket sizes” and put 257–512 MB functions into 512 MB “reservations,” or do they actually try to fill hosts precisely?  Actually, it’s probably oversubscribed, but by how much?  “Run code without thinking about servers,” they said, and I replied, “Don’t tell me what to do!”)

The experiment I didn’t do was whether using esbuild to publish a 1.60 MB single-file CommonJS bundle, instead of a 0.01 MB zip of ESM modules, would change anything.  Most sources say that keeping the file size down is the number one concern for init speed.  At this point, if I wanted more speed, I think I would port to Go.
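
For the record, that untried experiment would look something like the following.  The flags follow esbuild’s documented CLI; the `--external` pattern assumes the Lambda Node runtime already provides the AWS SDK, so it can be left out of the bundle:

```shell
# Bundle the ESM entry point into one minified CommonJS file,
# keeping the AWS SDK out of the bundle entirely.
esbuild index.mjs --bundle --platform=node --format=cjs --minify \
  --external:'@aws-sdk/*' --outfile=dist/index.js
```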