Sunday, April 24, 2022

snapd

The currently published snap for ripgrep is version 12.1.0; the snap was released 2020-05-17.  The current upstream version, 13.0.0, was released on 2021-06-12.  There’s no contact information listed for the snap’s publisher.

The currently published snap for alacritty is version 0.8.0; the snap was released 2021-06-17.  Meanwhile, the current upstream version is 0.10.1; the snap packagers (the “Snapcrafters”) haven’t packaged 0.9.0 (released 2021-08-03) or any later version.  The contact information listed for this snap is a GitHub repository that doesn’t allow issues to be reported; in fact, the contact link points to the pull request page.  [Updated 2022-07-29: fixed a typo in the snap release date.  Verified that neither snap has been updated since publishing.]

The Snap store itself only recognizes legal reasons to “report” a snap: violations of copyright or trademark, or of the store’s terms.  There’s no way to signal to the publishers that an update is desired.

It’s clear from their actions where Canonical stands: once again, they are interested in taking the work of the FOSS community and siloing it in a place where Canonical can profit and nobody else can.

The real issue isn’t snap itself; it’s Canonical’s core strategy.  Consider their other initiatives, like upstart, Unity, and Mir.  They wanted to own critical infrastructure, walled off from the whims of the community by a licensing agreement that gave Canonical, and only Canonical, the right to profit from the work of others that they did not pay for.

With snaps, Canonical erodes the community further by letting anyone take over any project’s name, publish under it, and (regardless of intentions) leave it to rot.  Then they don’t even want to clean up the results unless they are legally bound to do so.  This inflates the number of “available snap packages,” to be sure, but it sinks the utility of the whole system.  Users can’t place their trust in it.

It appears that I may need to shift distributions again, to something that is neither Ubuntu nor downstream from it.

Sunday, April 3, 2022

Research in progress: SSH

Part 1: Certificates

I read up on how to use ssh certificates.  I still ran into a couple of surprises:

My known_hosts file is hashed.  The easiest way to figure out which entry belongs to a server is to connect with ssh -v and look for the message saying the host key was found "in known_hosts:4", which means line 4 of the file.  The debug output also includes the full lookup key, such as '[server.example.org]:222' for an sshd listening on a non-standard port. With that, you can use the official command instead of counting line numbers:

ssh-keygen -R '[server.example.org]:222'
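
As an aside, ssh-keygen should also be able to look up a hashed entry directly with -F, skipping the debug output entirely.  Using the same example host and port:

ssh-keygen -F '[server.example.org]:222'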

Another part I did not understand going into this is that the certificate doesn't replace the keypair; it certifies the keypair.  The keypair itself is still used, and it must be available to the client.  Having been in computing for so long, I find it odd when a word keeps its actual English meaning.
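
To make that concrete, here's a minimal sketch of what signing looks like, assuming a hypothetical CA key named ca_key and an existing client keypair id_ed25519 (the file names are placeholders, not a real setup):

# Sign the existing public key; this writes id_ed25519-cert.pub alongside the keypair
ssh-keygen -s ca_key -I alice@example -n alice -V +52w id_ed25519.pub

# The client still needs the private key; the certificate only rides along with it
ssh -i ~/.ssh/id_ed25519 -o CertificateFile=~/.ssh/id_ed25519-cert.pub server.example.org

If the certificate sits next to the private key with the -cert.pub suffix, ssh should pick it up even without the explicit CertificateFile option.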

As far as I can tell, when it comes to creating the certificates, everyone is on their own to build a signing infrastructure, which is why companies like smallstep or teleport will provide that piece, for a fee.
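
The trust configuration itself is small; what's missing is everything around issuance, renewal, and revocation.  A sketch of the two trust anchors, assuming the same hypothetical CA as above (paths and host patterns are placeholders):

# Server side (sshd_config): accept user certificates signed by the CA's public key
TrustedUserCAKeys /etc/ssh/user_ca.pub

# Client side (known_hosts): accept host certificates signed by the CA
@cert-authority *.example.org <contents of the CA public key>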

Part 2: Multiplexing

I finally got around to looking at sslh a bit, but it didn't exactly work.

Whichever of sslh or nginx I start first, the other thinks the address is already in use.  But starting two netcat processes, one listening on 0.0.0.0:x and one on 127.0.0.1:x, works fine.  I could use nginx's stream module on my personal site, but the day job uses Apache, so I'm not sure how transferable all of this is.
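
If the netcat experiment means what I think it means, the likely fix is binding the two daemons to different addresses: sslh on the public interface, nginx on loopback only.  A sketch, assuming port 443, a made-up public address, and a reasonably recent sslh (older releases spell the last option --ssl):

# sslh owns the public address and demultiplexes by protocol
sslh --listen 192.0.2.10:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:443

# nginx then listens only on loopback, e.g. in its server block:
#   listen 127.0.0.1:443 ssl;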

It would be nice to get it going from a "neat trick!" perspective, but it's not entirely necessary.  Mainly, it would let us access our git repositories from corporate headquarters without having to request opening the firewall, but I've been there a total of one time.

Updated 2022-04-24: I ultimately chose to set up another hostname, using proxytunnel to connect, with Apache terminating TLS and passing the CONNECT request on to the SSH server. This further hides the SSH traffic inside a legitimate TLS session, in case it's not just port-blocking that we're facing.
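
For future reference, the client half of that fits in ssh_config.  A sketch, with the hostname and ports as placeholders rather than the real setup:

# ~/.ssh/config
Host git-tunnel
    HostName 127.0.0.1
    Port 22
    # TLS to Apache on the front host; Apache then CONNECTs onward to sshd
    ProxyCommand proxytunnel -E -p git.example.org:443 -d %h:%p

The Apache side also needs mod_proxy_connect loaded and an AllowCONNECT directive permitting the SSH port; I'm leaving the vhost details out here.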