Thursday, September 28, 2023

Using Cloudflare DNS over TLS

My current Linux distributions (Ubuntu 23.04 and the Ubuntu-derived Pop!_OS 22.04) use NetworkManager for managing connections, and systemd-resolved for resolving DNS queries.  I’ve set up Cloudflare’s public DNS service with DoT (DNS over TLS) support in two different ways… and I don’t really have a solid conclusion.  Is one “better”?  🤷🏻

Contents

  • Per-connection mode with NetworkManager only
  • Globally with systemd-resolved / NetworkManager
  • Useful background info
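For a taste of the global approach, the systemd-resolved side comes down to a small drop-in file.  A minimal sketch, assuming the drop-in filename is arbitrary and using Cloudflare’s public resolvers:

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/cloudflare-dot.conf >/dev/null <<'EOF'
[Resolve]
DNS=1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com
DNSOverTLS=yes
EOF
sudo systemctl restart systemd-resolved
resolvectl status    # should list the Cloudflare servers and show DNS over TLS enabled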

Wednesday, September 20, 2023

Update on earlyoom

Back in Linux Behavior Without Swap, I noted that the modern Linux kernel will still let the system thrash.  The OOM killer does not kick in until the kernel is extremely desperate, long after responsiveness has dropped to near zero.

Enough time has passed since installing earlyoom on our cloud instances that I did another stupid thing.  This time, earlyoom was able to terminate the script, and the instance stayed responsive.
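For anyone following along, getting earlyoom onto a Debian or Ubuntu instance is a one-liner; a quick sketch (the defaults are reasonable, and tuning is optional):

sudo apt install earlyoom
systemctl status earlyoom    # confirm the service is running
# thresholds can be adjusted via EARLYOOM_ARGS in /etc/default/earlyoom if the defaults don't suit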

I also mentioned “swap on zram” in that post.  It turns out the ideal use case for zram is when there is no other swap device.  When there’s a disk-based swap area (file or partition), one should activate zswap instead.  zswap acts as a front-end cache for swap, keeping compressible pages in RAM and letting the rest go on to the swap device.
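Since zswap is built into the stock Ubuntu kernel, turning it on is just a parameter; a sketch (persisting it means editing the kernel command line):

# enable immediately (does not survive a reboot)
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
# to persist, add zswap.enabled=1 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub
# inspect the current settings
grep -r . /sys/module/zswap/parameters/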

One other note: zswap is compiled into the default Ubuntu kernels, but zram is part of the rather large linux-modules-extra package set.  If there’s no other need for the extra modules, uninstalling that package saves a good amount of disk space.

Saturday, September 9, 2023

Upgrading a debootstrap'ped Debian Installation

For context, last year, I created a Debian 11 recovery partition using debootstrap for recovering from issues with the 88x2bu driver I was using.

This year, I realized while reading that I had never used anything but Ubuntu’s do-release-upgrade tool to upgrade a non-rolling-release system.  My Debian 12 desktop felt far less polished than Ubuntu Studio, so I reinstalled the latter, which means I once again don’t need the 88x2bu driver.

Therefore, if I trashed the Debian partition, it wouldn’t be a major loss.  It was time to experiment!

Doing the upgrade

The upgrade process was straightforward, if more low-level than do-release-upgrade.  There are a couple of different procedures online that vary in their details, so I ended up winging it by combining them:

  1. apt update
  2. apt upgrade
  3. apt full-upgrade
  4. apt autoremove --purge
  5. [edit my sources from bullseye to bookworm; see the sketch after this list]
  6. apt clean
  7. apt update
  8. apt upgrade --without-new-pkgs
  9. apt full-upgrade
  10. reboot
  11. apt autoremove
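Step 5 is the only hands-on part.  A minimal sketch of that edit, assuming the stock sources.list layout (check /etc/apt/sources.list.d/ too if you have extra repositories):

sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
grep -r bullseye /etc/apt/sources.list.d/ 2>/dev/null    # anything left here needs a manual look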

DKMS built the 88x2bu driver for the new kernel, and userspace appeared to be fine.

Fixing the Network

The link came up with an IP, but the internet didn’t work: there was no DNS.  I didn’t have systemd-resolved, named, dnsmasq, or nscd.  Now, to rescue the rescue partition, I rebooted into Ubuntu, chroot’ed into Debian, and installed systemd-resolved.
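The chroot itself is nothing exotic.  A sketch of what I mean, assuming the Debian root is on /dev/sdb3 (adjust the device to your layout):

sudo mount /dev/sdb3 /mnt
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf    # borrow the host's resolver; the chroot shares its network
sudo chroot /mnt apt install systemd-resolved
for fs in dev proc sys; do sudo umount /mnt/$fs; done
sudo umount /mnt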

Fixing Half of the Races

One of the Debian boots left me confused.  Output to the console appeared to have stopped after some USB devices were initialized.  I thought it had crashed.  I unplugged a keyboard and plugged it in, generating more USB messages on screen, so I experimentally pressed Enter.  What I got was an (initramfs) prompt!  The previous one had been lost in the USB messages printed after it had appeared.

It seems that the kernel had probed the SATA and USB buses in a different order this time, and /dev/sdb3 didn’t have the root partition on it.  I ended up rebooting (I don’t know how I would have gotten the boot to resume properly even if I had managed to mount /root by hand.)

When that worked, I updated the Ubuntu partition’s /boot/grub/custom.cfg to use the UUID instead of the device path for Debian.

It seems that the kernel itself only supports partition UUIDs (root=PARTUUID=…), but Debian and Ubuntu use initrds (initial RAM disks) that contain the code needed to find a filesystem by its UUID.  That’s why root=UUID={fs-uuid} has always worked for me!  Including this time.

os-prober (which generated the original entry) has to be more conservative, though, so it put root=/dev/sdb3 on the kernel command line instead.
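For anyone maintaining a similar custom.cfg, the UUID-based entry looks roughly like this; a sketch with a placeholder UUID (substitute the output of blkid for the Debian root; Debian keeps /vmlinuz and /initrd.img symlinks pointing at the current kernel):

menuentry "Debian 12 recovery" {
    search --no-floppy --fs-uuid --set=root 00000000-0000-0000-0000-000000000000
    linux  /vmlinuz root=UUID=00000000-0000-0000-0000-000000000000 ro
    initrd /initrd.img
}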

The Unfixed Race

Sometimes, the wlan0 interface can’t be renamed to wlx{MAC_ADDRESS} because the device is busy.  I tried letting wlan0 be an alias in the configuration for the interface (using systemd-networkd), but it didn’t seem to take.

I resort to simply rebooting if the login prompt doesn’t reset itself and show a DHCP IP address in the banner within a few seconds.

You have to admire the kernel and systemd teams’ dedication to taking a stable, functional process and replacing it with a complex and fragile mess.

A Brief Flirtation with X11

I installed Cinnamon.  The installation ran out of disk space partway through; I ran apt clean and then continued, successfully.  This is my fault; I had “only” given the partition 8 GiB, because I expected it to be a CLI-only environment.

Cinnamon, however, is insistent on NetworkManager, and I was already using systemd-networkd.  It’s very weird to have the desktop showing that there is “no connection” while the internet is actually working fine.

Due to the space issue, I decided to uninstall everything and go back to a minimal CLI setup.  With a desktop installed, I would definitely not have the room to perform another upgrade to Debian 13, for instance, and it was unclear whether I would even be able to apply normal updates.

In Conclusion

The Debian upgrade went surprisingly well, considering it was initially installed with debootstrap, and is therefore an unusual configuration.

Losing DNS might have been recoverable by editing /etc/resolv.conf instead, but I wasn’t really in a “fixing this from here is my only option” space.  One might also wonder what happened to the DHCP-provided DNS server; I don’t know either.

Trying to add X11 to a partition never designed for it did not work out, but it was largely a whim anyway.

Sunday, August 6, 2023

Sound for Firefox Flatpak on Ubuntu 23.04 with PipeWire

I reinstalled Ubuntu Studio recently, excised all Snaps, and installed Firefox from Flatpak.  Afterward, I didn’t have any audio output in Firefox.  Videos would just freeze.

I don’t know how much of this is fully necessary, but I quit Firefox, installed more PipeWire pieces, and maybe signed out and/or rebooted.

sudo apt install pipewire-audio pipewire-alsa

The pipewire-jack and pipewire-pulse packages were already installed.

AIUI, this means that “PipeWire exclusively owns the audio hardware” and provides ALSA, JACK, Pulse, and PipeWire interfaces into it.
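A quick way to confirm that the Pulse interface really is PipeWire underneath (the exact wording varies with the PipeWire version):

pactl info | grep -i 'server name'
# e.g.  Server Name: PulseAudio (on PipeWire 0.3.x)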

It’s not perfect.  Thunderbird (also a Flatpak; I’d rather have “some” sandbox than “none”) makes a bit of a cacophony when emails come in, but at least there’s sound for Firefox.

Tuesday, July 4, 2023

Boring Code Survives

Over on Wandering Thoughts, Chris writes about some fileserver management tools being fairly unchanged over time by changes to the environment.  There is a Python 2 to 3 conversion, and some changes when the disks being managed are no longer on iSCSI, “but in practice a lot of code really has carried on basically as-is.”

This is completely different from my experience with async/await in Python.  Async was new, so the library I used with it was still in 0.x, and for 1.0 the authors inverted the entire control structure: instead of being able to create an AWS client deep in the stack and return it upwards, clients could only be used as context managers.  It was quite a nasty surprise.

To allow testing for free, my code dynamically instantiated a module to “manage storage,” and whether that was AWS or in-memory was an implementation detail.  Suddenly, one of the clients couldn’t simply do self.client = c; return anymore.  The top level had to know about the change, and the other storage clients would have had to become context managers themselves, for no reason.

I held onto the 0.x version for a while, until the Python core team felt like “explicit event loop” was a mistake big enough that everyone’s code had to be broken.

Async had been hard to write in the first place, because so much example code out there was for the asyncio module’s decorators, which had preceded the actual async/await syntax.  What the difference between tasks and coroutines even was, and why one should choose one over the other, was never clear.  Why an explicit loop parameter should exist was especially unclear, but it was “best practice” to include it everywhere, so everyone did.  Then Python set it on fire.

(I never liked the Python packaging story, and pipenv didn’t solve it. To pipenv, every Python minor version is an incompatible version?)

I had a rewrite on my hands either way, so I went looking for something else to rewrite in, and v3 is in Go.  The other Python I was using in my VM build pipeline was replaced with a half-dozen lines of shell script.  It’s much less flexible, perhaps, but it’s clear and concise now.

In the end, it seems that boring code survives the changing seasons.  If you’re just making function calls and doing some regular expression work… there’s little that’s likely to change in that space.  If you’re coloring functions and people are inventing brand-new libraries in the space you’re working in, your code will find its environment altered much sooner.  The newer, fancier stuff is inherently closer to the fault-line of future shifts in the language semantics.

Sunday, June 25, 2023

Installing Debian 12 and morrownr's Realtek driver (2023-07-03 edit)

Due to deepening dissatisfaction with Canonical, I replaced my Ubuntu Studio installation with Debian 12 “bookworm” recently.

tl;dr:

  1. My backups, including the driver source, were compressed with lzip for reasons, which the fresh install couldn’t decompress out of the box, so I fell back on a previously-built rescue partition to get the system online.
  2. I ended up with an improper grub installation that couldn’t find /boot/grub/i386-pc/normal.mod.  I rebooted the install media in rescue mode, got a shell in the environment, verified the disk structure with fdisk -l, and then ran grub-install /dev/___ to fix it.  Replace the blank with your device, but beware: using the wrong device may make the OS on it unbootable.
  3. The USB stick doesn’t work directly with apt-cdrom for installing more packages offline.  I “got the ISO back” with dd if=/dev/sdd of=./bookworm.iso bs=1M count=4096 status=progress conv=fsync (1M * 4096 = 4G total, which is big enough for the 3.7G bookworm image; you may need to adjust to suit), then made it available with mount -o loop,ro ~+/bookworm.iso /media/cdrom0 (the mount point is the target of /media/cdrom.)
  4. Once finished, I found out the DVD had plzip, and if I’d searched for lzip, I would have found it (plzip is a parallel implementation of lzip).  I didn’t actually need the rescue partition.
  5. Once finished, I realized I hadn’t needed to dd the ISO back from the USB stick. The downloaded ISO was on my external drive all along, and I could have loop-mounted that.
  6. [Added 2023-07-02]: Letting the swap partition get formatted gave it a new UUID.  Ultimately, I needed to update the recovery partition’s /etc/fstab with the new UUID and follow up with update-initramfs -u to get the recovery partition working smoothly again (see the sketch after this list).
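The fix in item 6 amounts to something like this; a sketch with placeholder device and UUIDs (the recovery partition is mounted at /mnt here):

sudo blkid | grep swap                                         # note the swap partition's new UUID
sudo mount /dev/sdXN /mnt                                      # placeholder: the recovery root partition
sudo sed -i 's/UUID=OLD-SWAP-UUID/UUID=NEW-SWAP-UUID/' /mnt/etc/fstab
sudo chroot /mnt update-initramfs -u                           # may also want /dev, /proc, /sys bind-mounted first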

Full, detailed rambling with too much context (as usual) below.

Friday, March 31, 2023

Passing data from AWS EventBridge Scheduler to Lambda

The documentation lacked images, or even descriptions, of some screens ("Choose Next. Choose Next."), so I ran a little experiment to test things out.

When creating a new scheduled event in AWS EventBridge Scheduler, then choosing AWS Lambda: Invoke, a field called "Input" will be available.  It's pre-filled with the value of {}, that is, an empty JSON object. This is the value that is passed to the event argument of the Lambda handler:

export async function handler(event, context) {
  // "event" receives the schedule's Input JSON; "context" carries Lambda runtime information
  // handle the event
}

With an event JSON of {"example":{"one":1,"two":2}}, the handler could read event.example.two to get its value, 2.

It appears that EventBridge Scheduler allows one complete control over this data, and the context argument is only filled with Lambda-related information.  Therefore, AWS provides the ability to include the <aws.scheduler.*> values in this JSON data, to be passed to Lambda (or ignored) as one sees fit, rather than imposing any constraints of its own on the data format.  (Sorry, no examples; I was only testing the basic features.)
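The same Input can also be supplied when creating the schedule from the CLI.  A sketch, with placeholder names and ARNs (the Input string is just the JSON from above, escaped):

aws scheduler create-schedule \
  --name example-schedule \
  --schedule-expression 'rate(1 hour)' \
  --flexible-time-window Mode=OFF \
  --target '{
    "Arn": "arn:aws:lambda:us-east-1:123456789012:function:example-fn",
    "RoleArn": "arn:aws:iam::123456789012:role/example-scheduler-role",
    "Input": "{\"example\":{\"one\":1,\"two\":2}}"
  }'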

Note that the handler example above is written with ES Modules.  This requires a Lambda Node.js runtime recent enough to support them (I used Node 18.x), along with a filename of "index.mjs".