Sunday, December 8, 2024

Side Note: Firefox’s Primary Password is Local

When signing into Firefox Sync to set up a new computer, the primary password does not carry over; it has to be set again locally on each machine.  I usually forget this, and it takes a couple of runs before I remember to set it up.

That’s not enough for a post, so here are some additional things about it:

The primary password protects all saved passwords, but not other data.  If someone can read the profile directory, bookmarks and history are effectively stored in the clear.

The primary password is intended to prevent reading credentials… and the Sync password is one of those credentials.  That’s why a profile with both Sync and a primary password prompts for the password as soon as Firefox starts: it needs the Sync credential to check for new data.

The same limited protection applies to Thunderbird.  If someone has access to the profile, they can read all historic/cached email, but they cannot connect and download newly received email without the primary password.

The Primary Password never times out.  As such, it creates a “before/after first unlock” distinction.  After first unlock, the password is in RAM somewhere, and the Passwords UI asking for it again is merely re-authentication.  Firefox obviously still has the password available, because it can keep filling saved logins into forms.

Some time ago, the hash that turns the primary password into an actual encryption key was strengthened somewhat.  I believe it is now a 10,000-iteration PBKDF2, and not just one SHA-1 invocation.  The problem with upgrading it further is that the crypto is always applied; “no password” is effectively a blank password, and the encryption key still needs to be derived from it to access the storage.  Mozilla understandably doesn’t want to introduce a noticeable startup delay for people who did not set a password.
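
For a sense of the cost involved, here is a rough sketch using OpenSSL 3’s kdf subcommand.  This is not Firefox’s actual code path (that lives in NSS), and the password, salt, and iteration counts below are placeholders:

    # Time a PBKDF2-SHA256 derivation at 10,000 iterations, then at a much
    # higher count.  The cost scales linearly with iterations, and a blank
    # password is just as expensive to stretch as a real one.
    time openssl kdf -keylen 32 -kdfopt digest:SHA256 \
        -kdfopt pass:hunter2 -kdfopt salt:saltsalt \
        -kdfopt iter:10000 PBKDF2
    time openssl kdf -keylen 32 -kdfopt digest:SHA256 \
        -kdfopt pass:hunter2 -kdfopt salt:saltsalt \
        -kdfopt iter:1000000 PBKDF2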


Very recently (2024-10-17), the separate Firefox Sync authentication was upgraded.  Users need to log into Firefox Sync with their password again in order to take advantage of the change.

Sunday, December 1, 2024

Unplugging the Network

I ended up finding a use case for removing the network from something. It goes like this:

I have a virtual machine (guest) set up with nodejs and npm installed, along with @redocly/cli for generating some documentation from an OpenAPI specification. This machine has two NICs, one in the default NAT configuration, and one attached to a host-only network with a static IP. The files I want to build are shared via NFS on the host-only network, and I connect over the host-only network to issue the build command.

Meaning, removing the default NIC (the one configured for NAT) loses no functionality, but it does cut npm off from the internet. That’s an immediate UX improvement: npm can no longer complain that it is out of date! Furthermore, if the software I installed happened to be compromised and running a Bitcoin miner, it has been cut off from its C2 server, and can’t make anyone money.
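
As a sketch of what that looks like, assuming a VirtualBox guest with the NAT adapter as adapter 1 (the VM name is a placeholder):

    # "docs-builder" is a placeholder VM name.
    # Remove the NAT adapter permanently (guest must be powered off)...
    VBoxManage modifyvm docs-builder --nic1 none
    # ...or just unplug its virtual cable while the guest keeps running.
    VBoxManage controlvm docs-builder setlinkstate1 off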

An interesting side benefit is that it also cuts off everyone’s telemetry, indiscriminately.

I can’t update the OS packages, but I’m not sure that is an actual problem. If the installed code doesn’t already carry an exploit payload, there’s no way for one to arrive later. The vulnerabilities remain, but nothing can reach them.

P.S.: I could deactivate both NICs. Files could be shared using the hypervisor’s shared-folders system, and the actual build command could be run via console login. (If I could stand using Qwerty that long!) If the machine had a snapshot, I could shut it down by powering off and reverting to snapshot; then, I would not even need admin rights to run the appliance. The more I think about it… the more I like it.
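
A sketch of that fully offline variant, again assuming VirtualBox, with made-up VM, shared-folder, and snapshot names:

    # "docs-builder", "docs-src", and "clean-build" are all placeholders.
    # Drop both adapters and share the sources through the hypervisor instead.
    VBoxManage modifyvm docs-builder --nic1 none --nic2 none
    VBoxManage sharedfolder add docs-builder --name docs-src \
        --hostpath "$HOME/projects/docs" --automount
    VBoxManage snapshot docs-builder take clean-build
    # ...build via console login, then discard whatever the session changed.
    VBoxManage controlvm docs-builder poweroff
    VBoxManage snapshot docs-builder restore clean-build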

Sunday, November 24, 2024

Mac Mini (M4/2024) First Impressions

I bought an M4 Mac Mini (2024) to replace my Ivy Bridge (2012) PC.

It was difficult to choose a configuration, because of the need to see a decade into the future, and the cost of upgrades.  It is hard to believe that an additional half terabyte (internal) would cost more than a whole one-terabyte external drive (with board, USB electronics/port, case, cable, and retail box).

It feels pretty fast.  Apps open unexpectedly quickly.  Which is to say, on par with native apps on my 12C/16T Alder Lake work laptop.  Apparently, my expectations have been lowered by heavy use of Flatpaks.

It is quiet.  When I ejected the old USB drive I was using for file transfer, it spun down, and that was the noise I had been hearing all along.  The Mac itself is generally too quiet to hear.

It is efficient.  I have a power strip that detects when the main device is on, and powers an extra set of outlets for other devices.  Even with the strip moved from “PC” to “Netbook,” the Mini does not normally draw enough power to keep the other outlets on.  (I put the power strip on the desk and plugged it into the desk power, then turned off the Mac’s wake-on-sleep feature.  Now I can unplug the whole strip when not in use.)

It has been weird getting used to the Mac keyboard shortcuts again.  For two years, I haven’t needed to think about which computer I’m in front of; Windows and Linux share the basic structure for app shortcuts and cursor movement.  I don’t know how many times I have pressed Ctrl+T in Firefox on the Mac and waited a beat for the tab to open, before pressing Cmd+T instead.

It is extremely weird to me that the PC Home/End keys do nothing by default on the Mac.  It’s not like they do something better, or even different; they just don’t do anything. Why?
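
The standard workaround I know of is a user key-bindings file.  A minimal sketch follows; it only affects Cocoa text views (plenty of apps ignore it), and the mappings are just my guess at PC-style behavior:

    # \UF729 is Home, \UF72B is End; a leading '$' means Shift.
    # Newly launched apps pick the file up.
    mkdir -p ~/Library/KeyBindings
    printf '%s\n' \
        '{' \
        '    "\UF729"  = moveToBeginningOfLine:;' \
        '    "\UF72B"  = moveToEndOfLine:;' \
        '    "$\UF729" = moveToBeginningOfLineAndModifySelection:;' \
        '    "$\UF72B" = moveToEndOfLineAndModifySelection:;' \
        '}' \
        > ~/Library/KeyBindings/DefaultKeyBinding.dict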

I also had to search the web to find out why I couldn’t move files to the trash on an NTFS external drive after copying them onto the Mac.  It turns out the whole volume is read-only; macOS doesn’t have built-in support for writing to NTFS.  Meanwhile, I didn’t notice anything in the UI to suggest that the volume is read-only; some operations just don’t work (quietly, in the case of keyboard shortcuts).
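
The command line does admit it, at least.  The volume name below is a placeholder:

    # The mount flags show 'read-only' for NTFS volumes even when the
    # Finder gives no hint.
    mount | grep -i ntfs
    diskutil info "/Volumes/My Passport" | grep -i 'read-only'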

There was one time when I tried to wake the Mac up, and it didn’t want to talk to the keyboard. I unplugged and replugged the USB connections (both the keyboard from the C-to-A adapter, and the adapter from the Mac) and tried a different keyboard, to no avail.  I couldn’t find any way to open an on-screen keyboard with the trackpad alone.  I had to hard power off, but it has been fine ever since.

I guess that’s about it!  It doesn’t feel like “coming home” or anything, it just feels like a new computer to be set up.

Sunday, November 17, 2024

Fixing a Random ALB Alarm Failure

tl;dr: if an Auto Scaling Group’s capacity is updated on a schedule, the max instance lifetime is an exact number of days, and instances take a while to reach a healthy state after launching… Auto Scaling can terminate running-but-healthy instances before new instances are ready to replace them.  While the replacements warm up, the load balancer is short of healthy targets, which is what tripped the alarm.

I pushed our max instance lifetime 2 hours further out, so that the max-lifetime terminations happen well after scheduled launches.
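
The change itself is a one-liner with the AWS CLI; the group name and the 30-day base lifetime here are stand-ins for ours:

    # "web-asg" is a placeholder group name.  Max instance lifetime is
    # specified in seconds (minimum 86,400 = 1 day).
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --max-instance-lifetime $(( 30 * 86400 + 2 * 3600 ))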

Sunday, November 10, 2024

Ubuntu 24.10 First Impressions

I hit the button to upgrade Ubuntu Studio 24.04 to 24.10.  First impressions:

  1. The upgrade process was seriously broken.  Not sure if my fault.
  2. Sticky Keys is still not correct on Wayland.
  3. Orchis has major problems on Ubuntu Studio.

Sunday, October 6, 2024

Pulling at threads: File Capabilities

For unimportant reasons, on my Ubuntu 24.04 installation, I went looking for things that set file capabilities in /usr/bin and /usr/sbin.  There were three:

  • ping: cap_net_raw=ep
  • mtr-packet: cap_net_raw=ep
  • kwin_wayland: cap_sys_resource=ep
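
The search itself boils down to one command (getcap ships in Ubuntu’s libcap2-bin package):

    # Recursively list any file capabilities set under the two directories.
    getcap -r /usr/bin /usr/sbin 2>/dev/null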

The =ep notation means that the listed capabilities are set as “effective” and “permitted”, but not “inheritable.”  A process that executes the binary receives the capability, but does not pass it along to other programs it runs.

ping and mtr-packet are “as expected.”  They want to send unusual network packets, so they need that right.  (This is the sort of thing I would also expect to see on nmap, if it were installed.)

kwin_wayland was a bit more surprising to see.  Why does it want that?  Through reading capabilities(7) and running strings /usr/bin/kwin_wayland, my best guess is that kwin needs to raise its RLIMIT_NOFILE (the maximum number of open files).
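
A quick way to check whether the running compositor really ends up with a raised limit (this assumes a single kwin_wayland process):

    # Show the soft/hard open-files limits of the live process.
    prlimit --nofile --pid "$(pidof kwin_wayland)"
    # Same information straight from /proc:
    grep 'Max open files' "/proc/$(pidof kwin_wayland)/limits"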

There’s a separate kwin_wayland_wrapper file.  A quick check showed that it was not a shell script (a common form of wrapper), but an actual executable.  Could it have had the capability, set the limits, and launched the main process?  For that matter, could this whole startup sequence have been structured through systemd, so that none of kwin’s components needed elevated capabilities?

The latter question is easily answered: no.  This clearly isn’t a system service, and if it were run from the systemd user instance, that instance never had any elevated privileges to grant.  (The goal, as I understand it, is that a systemd user-session bug cannot lead to privilege escalation, and “not having those privileges” is the surest way to avoid malicious use of them.)

If kwin actually adjusts the limit dynamically, in response to the actual number of clients seen, then the former answer would also be “no.”  To exercise the capability at any time, kwin itself must retain it.

I haven’t read the code to confirm any of this.  Really, it seems like this situation is exactly what capabilities are for: allowing limited actions like raising resource limits, without giving away broad access to the entire system.  Even if I were to engineer a less-privileged alternative, it doesn’t seem like it would measurably improve the “security,” especially not for cap_sys_resource.  It was just a fun little thought experiment.

Sunday, September 29, 2024

Scattered Thoughts on Distrobox

Distrobox’s aim is to integrate a container with the host OS’s data, to the extent possible.  By default, everything (process, network, filesystem) is shared, and in particular, the home directory is mounted into the container as well.  It is not even trying to be “a sandbox,” even when using the --unshare-all option.
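
For reference, unsharing is opt-in at creation time; a minimal sketch with placeholder names:

    # "buildbox" and the image are placeholders.  The box keeps its own
    # process, network, and IPC namespaces, but the home directory is
    # still shared with the host.
    distrobox create --name buildbox --image ubuntu:24.04 --unshare-all
    distrobox enter buildbox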

I also found out the hard way that Distrobox integrates devices explicitly. If USB devices it expects are unplugged, the container will not start.  This happened because I pulled my laptop away from its normal dock area (with USB hub, keyboard, and fancy mouse) and tried to use a distrobox.  Thankfully, I wasn’t fully offline, so I was able to rebuild the container.  [Updated 2024-11-28: This danger is persistent.  Creating a container without the USB devices connected, then running it later with the devices present, will still leave it failing to start once the devices are unplugged again.  This ended up being impossible to live with.]

Before it stopped performing properly in Ubuntu Studio 23.10, I used distrobox to build ares and install it into my home directory.  This process yielded a ~/.local/share/applications/ares.desktop file, which my host desktop picked up, but which would not actually work from the host.  I always needed to be careful to click the “ares (on ares)” entry in the menu after exporting, to start it in the ares container.
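
That “(on ares)” entry is what distrobox-export produces; a sketch, assuming the app name matches its desktop file:

    # Run inside the container; writes an "ares (on ares)" launcher into
    # the shared ~/.local/share/applications.
    distrobox enter ares -- distrobox-export --app ares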

I have observed that distroboxes must be stopped to be deleted, but then distrobox will want to start them to un-export the apps.  Very recent distrobox versions ask whether to start the container and do the un-exporting, but there’s still a base assumption that you wanted the distrobox specifically to export GUI apps from it.  It clearly doesn’t track whether any apps are exported, because it always asks, even if none are.
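
If the export is cleaned up first, those prompts become harmless; a teardown sketch, assuming ares is the only exported app:

    # Un-export while the container can still run, then stop and delete it;
    # any "start it again?" question at removal time can be declined safely.
    distrobox enter ares -- distrobox-export --app ares --delete
    distrobox stop ares
    distrobox rm ares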