Thursday, September 22, 2022

Puppy Linux First Impressions

I’ve used a lot of mainstream systems.  For the past 10 years, my primary desktops have been Mac (work) and Windows (home); before then, I spent over a decade moving through Red Hat 7, FreeBSD 4, Gentoo, Kubuntu 7.04, and regular Ubuntu through 10.04.  I spent memorable chunks of time using fvwm2, WindowMaker, Gnome 1, and KDE 3.  After an experiment with Amazon Linux 1, work standardized on Ubuntu for servers, starting with 14.04.

I say all this to say, I had some warning that Puppy Linux™ was different; there was something about filesystem “layers” and, in theory, being able to build your own like a mixtape. Well, it delivered on being different.

Because first, I had to set up my network.  It turns out that a decent chunk of the help is actually online, so I was greeted with Pale Moon telling me “server not found” a few times.  I had the driver files available, but I didn’t have the devx or kernel-sources SFS layers.  I managed to find them on SourceForge, and once the driver installed (quite seamlessly, in no-dkms mode), I finally got a network configured with the most advanced of the wizards.  That wizard seems to be the only place that offers to allow WPA2 with the device, since it (or its driver?) isn’t already on the allow list.

I updated Pale Moon, and then, since I was online, why not visit tumblr?  The dash rendered fine, albeit chewing on 100% of a CPU core, but asking to make a new post crashed (white screen) once the editor loaded.  I did not get to announce to the world that I was using Puppy, from Puppy.

(Because my toolchain for Decoded Node posts is on Windows, I am not making this post from Puppy, either.)

The next thing I need to do is figure out what I want to do with this thing. I don’t really have a goal for it yet.

Tuesday, August 30, 2022

Downgrading macOS Catalina to High Sierra

According to Apple Support, my old iMac can be used in Target Display Mode if it’s running High Sierra or earlier. However, I had upgraded it to Catalina.  Getting it back down to High Sierra proved to be quite the challenge.

Part 1: Easy-to-find stuff, cataloged for posterity

I was able to find the installers via an Internet search, but after downloading one to the iMac, it wouldn’t run.  It claimed that Catalina was too old, which seems like a bad error message for a two-sided version check (Catalina is actually too new, but they apparently didn’t expect to need to say that).

However, once again, the Internet came to the rescue: I could use the command-line script bundled with the installer to write it to a USB stick.  It seems that Apple has this answer as well; in my case, it was this:

sudo "/Applications/Install macOS High Sierra.app/Contents/Resources/createinstallmedia" --volume /Volumes/BIT_BUCKET

(Any new-lines in the above command should be entered as spaces at the Terminal prompt.)

The command took about five minutes on my stick, though during one dead end I found that copying to a USB 2.0 stick takes closer to 30 minutes.

Part 2: The other roadblocks

The High Sierra installer acted like it couldn’t handle the Catalina disks, and did not quite know how to error out properly.  The installer would ask for the password twice, then… would not unlock the disk.  Clicking “Unlock…” to attempt to proceed would then do nothing.  Trying to mount the volumes first, using Disk Utility, didn’t improve the situation.

I tried to erase the volumes using the installer’s Disk Utility, but it hung on “deleting volume.”  Going to choose a startup disk when the volume was hypothetically deleted didn’t show it, but rebooting would prove that the volume was there.  Incidentally, quitting the startup disk selection would remove the menu bar and leave a featureless gray screen.  Oops.

I ran First Aid, turned off FileVault, and even reinstalled Catalina along the way, but none of that seemed to do anything.

What ended up working was to boot into Catalina’s recovery environment, select “show all devices” in its Disk Utility, then erase the entire APFS container and replace it with a “macOS Extended (Journaled)” disk.  After that point, booting the iMac would bring it into a new menu that looked like “choose startup disk” with no disks.

I had to unplug and re-plug the USB stick for it to show up, and then I could boot from it and install High Sierra in the usual manner.  Success!

Part 3: The cable failure

(Section added 2022-09-03.)

I missed the information that Target Display Mode specifically requires a Thunderbolt 2 cable. The Thunderbolt 3 to mini DisplayPort cable I actually bought fits physically, but does not function.

Moreover, the standard was completely revamped for version 3, so connecting from a Thunderbolt 3 port requires an active Thunderbolt 2 adapter, plus a proper Thunderbolt 2 cable.  Between them, the current price is about $140, while a new monitor can be had for $200.

I have decided to declare the experiment a failure, and abandon the effort to have an external display for the laptop.

The future may eventually hold a 5K display, shared between the work laptop and my personal desktop, once that requires an upgrade.  My current CPU (the third PCIe GPU died years ago, so I quit putting them in) is rated for 2K output over a port that the motherboard doesn't have, or 1920x1200 otherwise.

Bonus chatter

My USB stick was named BIT_BUCKET because, once in a while, the stick would forget it had data, or even a filesystem.  But, it managed to survive long enough to get the macOS installer running.

Saturday, August 6, 2022

Podman Fully Qualified Image Names

I installed Podman.  Then, of course, I tried to pull an image, but not the example ubuntu image.  This immediately generated an error:

Error: short-name "golang" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"

(The “ubuntu” example the tutorial used is actually configured as a short-name in the file /etc/containers/registries.conf.d/shortnames.conf on my system. It’s also possible that later Podman versions will prompt for a repository.)

Luckily, there’s a fairly straightforward way to convert the image names to fully-qualified ones that Podman will accept.

Two basic concepts apply:

  1. Docker images are always on docker.io
  2. Docker images without a prefix (as in “golang” or “postgres”) are under library

That is, once we rewrite Docker commands for Podman, they might look like:

podman pull docker.io/library/postgres:alpine
podman pull docker.io/gitea/gitea:latest

These are equivalent to pulling postgres and gitea/gitea with Docker.

If you’re looking at an image on Docker Hub, and its URL is something like hub.docker.com/_/golang, then the _ is the “lack of prefix.”  It indicates that the URL for Podman will be docker.io/library/golang.

If the URL is, instead, hub.docker.com/r/gitea/lgtm, then that shows the prefix (user gitea) and container (lgtm) after the /r/: the final URL for Podman will be docker.io/gitea/lgtm.

The final thing to note is that this information applies to the FROM directive in a Dockerfile as well, and it’s compatible with Docker.  Docker and Podman both happily accept FROM docker.io/library/golang:1.19-alpine.
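The two rules above are mechanical enough to sketch in a few lines of shell.  This `qualify` helper is purely illustrative (it is not part of Podman, and the name is made up); it just applies the “no prefix means library” rule:

```shell
# Hypothetical helper (not part of Podman): turn a Docker Hub
# short name into the fully-qualified form Podman accepts.
qualify() {
  case "$1" in
    */*) printf 'docker.io/%s\n' "$1" ;;          # has a prefix, e.g. gitea/gitea
    *)   printf 'docker.io/library/%s\n' "$1" ;;  # official image, e.g. golang
  esac
}

qualify postgres:alpine     # docker.io/library/postgres:alpine
qualify gitea/gitea:latest  # docker.io/gitea/gitea:latest
```

Note that tags never contain a slash, so matching on `*/*` is enough to tell an official image from a prefixed one.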

Monday, July 25, 2022

How does macOS Keychain determine which app is using it?

Thinking about the differences between macOS and Gnome keyrings, I began to wonder… how does macOS know which app is making the access?  It obviously can’t be by the filesystem path, or malware could steal keys by naming itself aws-vault and sneaking into the keychain.

According to this Stack Overflow answer, Apple is using the app’s bundle ID to determine this, authenticated by code signing.

That’s an infrastructure that doesn’t really exist on Linux.  Each distro signs its packages, but the on-disk artifacts are generally neither signed nor checked.  That turns it into a much larger problem for Gnome Keyring to tie secrets to individual apps, because first, we’d need to build a secure way to identify apps.

And then, culturally, Linux users would reject anything resembling a central authority.  Developers (including me) don’t usually bother even using GPG for decentralized code signing, and that’s not even an option with the fashionable curl|bash style installers.

Friday, July 22, 2022

Rescuing an Encrypted Pop!_OS 22.04 Installation

I wanted to access my encrypted Pop!_OS installation from a rescue environment.  I was ultimately successful, and this is how things unfolded.  Any or all of this information may be specific to version 22.04, or even 22.04 as it stands in July 2022.  Nonetheless…

I booted with SystemRescue.  At least as of version 9.03, if you want to change the keymap, read everything first because the UI erases the screen irretrievably (the console does not have scrollback.)

With that out of the way, I was kind of at a loss as to how to proceed.  I could fdisk -l /dev/nvme0n1 and determine that nvme0n1p3 was my root volume.  I tried mounting it with -t auto, to see if that could chain together everything on its own, but it simply failed with an error: mount: unknown filesystem type 'crypto_LUKS'

The command to decrypt it is fairly straightforward.  The final name appears to be arbitrary.  (Although the help/error seemed to indicate that it is optional, a name is definitely required.)

cryptsetup luksOpen /dev/nvme0n1p3 cryptodisk

Now, that created /dev/mapper/cryptodisk, but I couldn’t mount that; it was some sort of LVM type.  Ah.  I searched the web again, and arrived at the commands:

vgscan
vgchange -ay

These found the volume groups and brought them online, where they could be examined with vgs to get the group names, and lvs to get the volume names. As it turned out, there’s only one of each.  The VG was named data and the volume within it was named root, which makes the final command:

mount -t ext4 /dev/data/root /mnt

Had it not been inside LVM, the vg* commands would have been unnecessary, and I could have mounted it with something like mount -t ext4 /dev/mapper/cryptodisk /mnt (with “cryptodisk” being the name I gave it during the cryptsetup luksOpen command.)
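Put together, the whole sequence looks like the sketch below.  It’s wrapped in a function so nothing runs by accident; the device path (/dev/nvme0n1p3), the mapper name, and the data/root names are from my machine and will almost certainly differ on yours:

```shell
# Sketch of the full rescue sequence from this post.  Wrapped in a
# function so it doesn't run by accident; names are from my machine.
rescue_mount() {
  cryptsetup luksOpen /dev/nvme0n1p3 cryptodisk  # prompts for the passphrase
  vgscan                                         # discover LVM volume groups
  vgchange -ay                                   # bring them online
  vgs && lvs                                     # confirm names: VG "data", LV "root"
  mount -t ext4 /dev/data/root /mnt
}
```

Run as root from the rescue environment; when finished, the teardown is the reverse (umount, `vgchange -an`, `cryptsetup luksClose cryptodisk`).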

Tuesday, July 19, 2022

Gnome Keyring Security

In the ongoing process of moving to a new work laptop, I have been working to protect secrets from hypothetical malware on the device.  I am using Pop!_OS 22.04 at the moment, examining the default environment: Gnome and gnome-keyring[-daemon].

Using Secrets

I had assumed, given the general focus on security in Linux, that it would be broadly similar to macOS.  To illustrate the user experience there, starting iTerm2 requires the login password to open the password manager for the first time.  The login password is also required to view the passwords through Keychain Access, regardless of whether iTerm2 is running.

I found exactly one Linux terminal with a password manager, Tilix, so of course I installed it. Within Tilix, no password is required to use the password manager.  Using Seahorse (the apparent equivalent to Keychain Access), the passwords Tilix has stored appear in the “login” keyring, unlocked by default, and no password is needed to reveal them.

In short, the keyring in Gnome is glorified plain text.  Access to the session bus is unconditional access to all secrets in all unlocked keyrings.
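To see this concretely, libsecret ships a CLI, secret-tool, that any process in the session can use.  The sketch below is wrapped in a function because it needs a running gnome-keyring to do anything; the attribute names are invented for the demo:

```shell
# Sketch: any session process can read unlocked secrets, no prompt.
# Requires a running gnome-keyring; attribute names here are invented.
secret_demo() {
  printf 'hunter2' | secret-tool store --label='demo' service demo user me
  secret-tool lookup service demo user me  # prints hunter2, no password asked
  secret-tool clear service demo user me   # clean up
}
```

On my system, the lookup succeeds for any program in the session, which is exactly the “glorified plain text” problem.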

Who’s There?

Another major difference is that macOS seems to associate keyring entries with owners, such that each individual program gets its own settings in the OS about whether it can access particular secrets.  I can “always allow” aws-vault to access secrets in the aws-vault keyring, but I presume if awsthief tried to access them instead, I would get a new prompt.

Furthermore, if I uncheck “remember this password” on the Mac, it stays unchecked the next time the keyring is unlocked.  In Gnome, for the past 8 years, it re-checks itself every time, waiting for a moment of inattention to make the security of the alternative keyring (awsvault, of course) entirely moot.  It may be locked, but you can have D-Bus fetch the key from under the doormat.

Locking Up

I’m not certain yet whether the Gnome keyring can auto-lock collections, either.  My previous post on macOS’ security command includes how to lock the keyring after a timeout, or when the system is locked.  These capabilities are missing from Seahorse, but I haven’t fully analyzed the D-Bus interface.  (Still, I shouldn’t need to do so.)

Copying Microsoft Good Enough?

A cursory Web search suggests that the way Gnome handles the keyring is exactly like Windows.  Not only is Gnome chasing taillights, but it has chased the easiest ones to catch.

Overall, the quality of Gnome (GNU …) keyring lives up to the heuristic for bad cryptography.

Monday, July 18, 2022

Locking one single keyring in gnome-keyring from the terminal

Updated 2022-07-22: The original code didn't work in practice.  The default for --type changes if --print-reply is given, so it stopped working when I removed the latter after testing.  The command below has been updated to provide explicit values for all options: --type to make it work, and --session to future-proof it.  The original text follows.

I'm moving to a new work laptop, where I wanted to lock the aws-vault keyring when I close a shell/terminal window.  Previously, I did this on macOS, using security lock-keychain aws-vault.keychain-db in my ~/.zlogout file.  (I switched to zsh, then discovered zsh-syntax-highlighting, which is immensely useful.)

So anyway, what's the equivalent CLI command in Pop!_OS 22.04 (derived from Ubuntu 22.04)?

dbus-send --session --type=method_call --dest=org.gnome.keyring \
  /org/freedesktop/secrets \
  org.freedesktop.Secret.Service.Lock \
  array:objpath:/org/freedesktop/secrets/collection/awsvault

(Backslash-continuations and line breaks added for readability.  For debugging, adding the --print-reply option may be of use.  Also, the interface can be explored with D-Feet; first, point it to the session bus, then search for secrets, to get the needle out of the haystacks.)
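To mirror the macOS setup, the command can go in ~/.zlogout, just as security lock-keychain did.  This is a sketch of my own file, not anything gnome-keyring provides:

```shell
# ~/.zlogout — lock the awsvault keyring when the shell exits
dbus-send --session --type=method_call --dest=org.gnome.keyring \
  /org/freedesktop/secrets \
  org.freedesktop.Secret.Service.Lock \
  array:objpath:/org/freedesktop/secrets/collection/awsvault
```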

Now, if they ever fix that 8-year-old issue in gnome-keyring, we'll have fully-usable secondary keyrings.  I mean, what is the point to having another keyring, if the option is "No, but ask me again every time?"