Thursday, December 9, 2021

The Best Tool for the Job?

I've written three generations of memcached-to-DynamoDB server now.  That is, a server that speaks the memcached text protocol to its clients, but stores data in Amazon DynamoDB instead of memory.  (Why?)  Generation 1 was Perl, but Perl is dying, so generation 2 was written in Python, using the system version that was already in our base images.  But cultural issues plagued it, so I began thinking about generation 3.  What language to write in?

Perl is still dying.  PHP doesn't have a great async story; I could use stream_select, but I was hoping not to build even more of the server infrastructure myself.  Outside of that, we don't have any other interpreters or runtime environments pre-installed in our image, and I didn't want to bloat it more.

Of the compiled languages, then:

  1. Rust was known to be incomprehensible.
  2. C and C++ are not safe, and don't have package managers.
  3. Go... doesn't have generics?
  4. Nothing else in the category seems to have critical mass (e.g. a ready-made AWS SDK).

We have another project in Go that I wrote circa Go 1.4; since then, it has needed only a tiny bit of understandable work for the massive benefit of migrating to modules (and that became devproxy2.  I value stability.)  If you don't want to write a generic library, then Go is fine enough.

Go is still burdened by a self-isolating inner circle, making it faux-open-source at best.  But on the other hand, they have built a safe, concurrent, stable, popular, compiled language, with a standard package manager.  It even has an official AWS SDK.

Friday, August 27, 2021

Our brief use of systemd's TemporaryFileSystem feature

Late last year, we started having problems where services did not appear to pick up fresh data after a reload.  I now have a hypothesis about why.

We have a monolithic server build, where several services end up running under one Apache instance.  I thought it would be nice to make each one think it was the only service running.  Instead of enumerating badness, I switched to using TemporaryFileSystem= to eliminate all directories, then allowing access to the one needed for the service, roughly as sketched below.
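
A minimal sketch of the idea as a drop-in for the Apache unit (the paths are hypothetical stand-ins for our real layout; BindReadOnlyPaths= is the documented companion to TemporaryFileSystem= for re-exposing a single directory):

    # /etc/systemd/system/apache2.service.d/isolation.conf (hypothetical paths)
    [Service]
    # Mount an empty tmpfs over the parent, hiding every sibling service...
    TemporaryFileSystem=/srv/www
    # ...then bind the one directory this service actually needs back into view.
    BindReadOnlyPaths=/srv/www/service-a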

The problem is, I don’t think this gets reset or updated during a reload.  Deployment renames the working directory aside, and renames an all-new directory into place at the original name.  It’s possible that even though the reload happens, the processes never change their working directory, and so are still looking at the original one.

Things that have never failed were failing; in particular, Template Toolkit was still using stale template files.  I did some digging and found that it does have a cache by default, but it only holds entries for up to a second. Furthermore, because I was very strict about not having unexpected global state, we create a new Template object per request, and shouldn't be sharing the cache.

Anyway, systemd did not actually document what it does with TemporaryFileSystem on reload, so I can’t be sure.  I ended up abandoning TemporaryFileSystem= entirely.  I only have so much time to mess around, and there’s only so much “risk” that would be mitigated by improved isolation. It’s almost guaranteed that the individual applications have defects, and don’t need a multi-app exploit chain.

(Not in the sense that I know of any specific defects; in the sense that all software has defects, and some proportion of those can be leveraged into a security breach.)

Friday, August 20, 2021

More Fun with Linux I/O Stats

I did not disclose boot times or time spent reading in my previous post.  This is because the Linux drive is, in fact, on SATA 2 and not SATA 3.  My motherboard only has a single SATA 3 (6 Gbps) port, and Windows was already on it.  I installed a second drive for Linux, which means that the new SSD went on SATA 2 instead.

However, I did temporarily swap the cables, and take some numbers for Linux booting with SATA 3 instead.  This occurred on Ubuntu Studio 21.04, which is the version that I have installed on the bare metal, rather than inside a virtual machine.

The boot time for Ubuntu Studio from Grub-to-desktop was at most 1 second faster on SATA 3, although Linux did report half the time spent reading in /proc/diskstats.

From this, I have to conclude that the disk is not a limiting factor: whether the disk is spending 5.5 or 11 seconds to return data leads to boot times of 13.6 vs 14.5 seconds.  Cutting the bandwidth in half only raises the overall boot time by 6.6%.

In fact, the last boot before I uninstalled VirtualBox, but on SATA 3, recorded 40914 I/O requests, 1958574 sectors, and 10917 ms waiting for the disk, for a 16.0 second boot.  After merely uninstalling VirtualBox and rebooting, there were 16217 requests (60% fewer), 1256806 sectors (36% fewer), 5698 ms waiting (48% less), and 13.8 seconds wall time (14% less).  For what it’s worth, “used memory” reported just after the /proc/diskstats check dropped from 673 MB to 594 MB (12% less).

In other words, the interface is less relevant than whether VirtualBox is installed.

(The virt-manager + Cinnamon experiment was caused by the removal of VirtualBox, but happened after I/O numbers were finished.  Also undisclosed previously: the drive itself is a Samsung 870 EVO 250 GB, so its performance should be fairly close to the limits of SATA 3.)

The conclusion remains the same; more speed on the disk is not going to help my current PC.  My next PC (whether Windows 10 or the hardware gives way first) will support NVMe, but I probably won't use it unless there are a couple of M.2 slots.  Being able to clone a drive has been incredibly useful to me.

Friday, August 13, 2021

Making Sense of PHP HTTP packages

There are, at the moment, several PSRs (PHP Standard Recommendations) involving HTTP:

  1. PSR-7: HTTP messages.  These are the Request, Response, and Uri interfaces.
  2. PSR-17: HTTP message factories.  These are the interfaces for “a thing that creates PSR-7 messages.”
  3. PSR-18: HTTP clients.  These are the interfaces for sending out a PSR-7 request to an HTTP server, and getting a PSR-7 response in return.
  4. PSR-15: HTTP handlers.  These are interfaces for adding “middleware” layers in a server-side framework. The middleware sits “between” application code and the actual Web server, and can process, forward, and/or modify requests and responses.

I will avoid PSR-15 from here on; the inspiration for this post was dealing with the client side of the stack, not the server.

As for the rest, the two ending in 7 are related: PSR-17 is about creating PSR-7 objects, without the caller knowing the specific PSR-7 classes.  However, it’s a kind of recursive problem: how does the caller know which specific PSR-17 implementation to use?

There's also some confusion caused by having “Guzzle” potentially refer to multiple packages.  There’s the guzzlehttp/guzzle HTTP client, and there’s a separate guzzlehttp/psr7 HTTP message implementation.  Unrelated packages may use guzzle in their name, but not be immediately clear on which exact package they work with.

  • php-http/guzzle7-adapter is related to the client.
  • http-interop/http-factory-guzzle provides PSR-17 factories for the Guzzle PSR-7 classes, and has nothing to do with the client.

Additionally, whether these packages are needed has changed somewhat over time.  guzzlehttp/psr7 version 2 has PSR-17 support built in, and guzzlehttp/guzzle version 7 is compatible with PSR-18.  Previous major versions of these packages lacked those features.

Discovery

The problem of finding PSR-compatible objects is solved by an important non-PSR library from the HTTPlug (HTTP plug) project: php-http/discovery (and its documentation.)

  • It lets code ask for a concrete class by interface, and Discovery takes care of finding which class is available, constructing it, and returning it.
  • It includes its own list of PSR-17 factories and PSR-18 clients, and can return those directly, where applicable. When Guzzle 7 is installed, Discovery (of a recent enough version) can return the actual \GuzzleHttp\Client when asked to find a PSR-18 client.
  • It has additional interfaces for defining and finding asynchronous HTTP clients, where code is not required to wait for the response before processing continues.

At its most basic, Discovery can find PSR-17 factories and PSR-18 HTTP clients.  These would be loaded through the Psr17FactoryDiscovery and Psr18ClientDiscovery classes.  For the more advanced features like asynchronous clients, the additional adapter packages are required.
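
A rough sketch of that basic usage (the URL is only a placeholder; the concrete classes returned depend on which packages the project has installed):

    <?php
    use Http\Discovery\Psr17FactoryDiscovery;
    use Http\Discovery\Psr18ClientDiscovery;

    // Find whatever PSR-17 request factory and PSR-18 client are installed.
    $requestFactory = Psr17FactoryDiscovery::findRequestFactory();
    $client         = Psr18ClientDiscovery::find();

    // Build and send a request without naming Guzzle (or any other package).
    $request  = $requestFactory->createRequest('GET', 'https://example.com/status');
    $response = $client->sendRequest($request);

    echo $response->getStatusCode(), "\n";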

For example, to use Guzzle 7 asynchronously, php-http/guzzle7-adapter is required.  At that point, it can be loaded using HttpAsyncClientDiscovery::find().  This method then returns an adapter class, which implements php-http's asynchronous client interface, and passes the actual work to Guzzle.
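
A rough sketch of the asynchronous flavor, assuming guzzlehttp/guzzle and php-http/guzzle7-adapter are both installed (the URL is again a placeholder):

    <?php
    use Http\Discovery\HttpAsyncClientDiscovery;
    use Http\Discovery\Psr17FactoryDiscovery;

    $requestFactory = Psr17FactoryDiscovery::findRequestFactory();
    $asyncClient    = HttpAsyncClientDiscovery::find();

    // sendAsyncRequest() returns a promise immediately; other work can continue.
    $promise = $asyncClient->sendAsyncRequest(
        $requestFactory->createRequest('GET', 'https://example.com/slow-report')
    );

    // Block only at the point where the response is actually needed.
    $response = $promise->wait();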

In any case, library code itself would only require php-http/discovery in its composer.json file; a project making use of the library would need to choose the concrete implementation to use in its composer.json file.

An Important Caveat

Discovery happens at run time.  Since Discovery supports a lot of ways to find a number of packages, it doesn't depend on them all, and it doesn't even have a hard dependency on, say, the PSR-17 interfaces themselves.  This means that Composer MAY install packages, even though requirements aren't fully met to make all of them usable.

To be sure the whole thing will actually work in practice, it's important to make some simple, safe HTTP request.  In my case, I use the Mailgun client to fetch recent log events.

When code using Discovery fails, the error message may suggest installing “one of the packages” from a list, and providing a link to Packagist.  That link may include the name of a package that is installed.  Why doesn’t it work? It’s probably the version that is the culprit.  If guzzlehttp/psr7 version 1 is installed, but not http-interop/http-factory-guzzle, then the error is raised because there is genuinely no PSR-17 implementation available with the installed versions. However, the guzzlehttp/psr7 package will be shown on Packagist as providing the necessary support, because the latest version does, indeed, support PSR-17.

Things Have Changed a Bit Over Time

As noted above, prior to widespread support for PSR-17 and PSR-18, using the php-http adapters was crucial for having a functioning stack.  So was installing http-interop/http-factory-guzzle to get a PSR-17 adapter for the guzzlehttp/psr7 version 1 code.

For code relying only on PSR-17 and PSR-18, and using the specific Discovery classes for those PSRs, the latest Guzzle components should not need any other packages installed to work.

However, things can be different, if there is another library in use that causes the older version of guzzlehttp/psr7 to be used.  This happens for me: the AWS SDK for PHP specifically depends on guzzlehttp/psr7 version 1, so I need to include http-interop/http-factory-guzzle as well, for Mailgun to coexist with it.

One Last Time

If you’re writing a library, use and depend on php-http/discovery.

If you’re writing an application, you must also depend on specific components for Discovery to find, such as guzzlehttp/guzzle.  Depending on how the libraries you are using fit together, you may also need http-interop/http-factory-guzzle for an older guzzlehttp/psr7 version, or a package like php-http/guzzle7-adapter if a simple PSR-18 client isn’t suitable.
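
As a concrete illustration, an application’s composer.json might end up looking something like this (the version constraints are only examples, and the last entry is only needed when something pins guzzlehttp/psr7 to version 1):

    {
        "require": {
            "mailgun/mailgun-php": "^3.5",
            "guzzlehttp/guzzle": "^7.3",
            "http-interop/http-factory-guzzle": "^1.0"
        }
    }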

There are alternatives to Guzzle, but since my projects are frequently AWS-based, and their SDK depends on Guzzle directly, that’s the one I end up having experience with.  I want all of the libraries to share dependencies, where possible.

Friday, August 6, 2021

Cinnamon with Accelerated Graphics in libvirt

While exploring virt-manager and Linux Mint Cinnamon Edition, I kept getting the warning that software rendering was in use (instead of hardware acceleration.) The warning asks to open the Driver Manager, which then says no drivers are needed.

The open-source stack for doing accelerated graphics in this case is SPICE, but virt-manager had configured a different graphics stack by default for the “Ubuntu 20.04” OS setting I had used.

For the guest to use SPICE, it needed the spice-vdagent package to be installed.  After that, the guest was shut down, so that its hardware could be reconfigured.

First, the “video” element needed to be converted from QXL to the virtio driver, which includes a 3D acceleration option.  Once that option’s checkbox was ticked, I clicked Apply to save the configuration.

Next, the “display” needed to be changed to support direct OpenGL rendering.  I needed to change the Listen setting to none, and to tick the OpenGL checkbox.  (When I did this in the opposite order, an “invalid settings” message appeared, saying that OpenGL is not supported unless Listen is none.)  This revealed an option to select a graphics device, but I only have one, so I left it alone.  I then applied the display changes.

Finally, I booted the guest again.  Everything seems to have worked, and the warning did not appear.

Wednesday, July 28, 2021

Fun with Linux I/O Stats

A discussion with my wife about random I/O and how I’m not sure NVMe would make a measurable difference to my life got me to wondering, how much I/O does a modern Linux system perform during bootup?

I have Ubuntu Studio 21.04 installed on hardware, and it issues around 18,800 read requests for 1.5 million sectors to boot to the desktop (and to start Alacritty to cat /proc/diskstats.)

I then installed Linux Mint 20.2 Cinnamon Edition and Xubuntu 21.04 in virtual machines, and ran similar tests, using their native terminal emulators.  Mint issues around 19,000 read requests for 1.2 million sectors.

Xubuntu issues about 20,500 read requests for 1.4 million sectors.

To be fair, Xubuntu does win the “least RAM used” contest.  After checking the diskstats file, free -m indicated 428 MB in use on Xubuntu, 547 MB on Mint Cinnamon, and 637 MB on Ubuntu Studio.

Xubuntu was a surprising disappointment; besides issuing the most read requests to boot up, it was lacking in features.  The keyboard layout wouldn’t work.  I also discovered along the way that its live environment can’t control the backlight on a “late 2012” iMac, unlike mainline Ubuntu.

(Originally, I was hoping to get the info from the live environment, but the automatic media check dashed my hopes.)

Based on the information so far, I don’t have any new evidence that NVMe would be much of a performance boost.  Sure, the numbers are bigger, but I’m looking for a life-altering wow factor, like going from HDD to SSD was.

Added 2021-07-29: Synthetic benchmarks don’t seem to add up to real-world performance.  (As usual!)  NVMe makes great gains in CrystalDiskMark, but in actual tasks it turns game load times from 36 into 30 seconds, or loading a 1.8 GB Photoshop file from 18 into 11 seconds.  To be fair, those times are on the same order of magnitude as Linux booting up, which I’m largely interested in because it’s the most massive task I tend to put my hardware to.  It’s not like the SATA SSD is so slow that my mind wanders while waiting for Firefox to load.

For something that’s marketed as if it’s five to ten times faster, that’s a disappointingly weak effect in practice.  And worse, a PCIe 4.0 drive can cost more than double of what the SATA variant does, with the same manufacturer and capacity.  For that kind of money, I want the performance to double across the board.  But I’m getting older, wiser, and grumpier.

Sunday, June 20, 2021

What is @system-service?

Because it's subject to change, systemd does not officially publish what makes up "@system-service".  However, as of June 20th, 2021, the git repository defines the system calls allowed in src/shared/seccomp-util.c.  In addition to some more calls, @system-service (SYSCALL_FILTER_SET_SYSTEM_SERVICE) currently includes the following groups:

@aio, @basic-io, @chown, @default, @file-system, @io-event, @ipc, @keyring, @memlock, @network-io, @process, @resources, @setuid, @signal, @sync, and @timer.

It may be more interesting to consider what it doesn't include:

@clock, @cpu-emulation, @debug, @module, @mount, @obsolete, @pkey, @privileged, @raw-io, @reboot, and @swap.

This is not a list of what it "excludes," because system calls may appear on multiple lists; "chown" is permitted by @chown, and the non-inclusion of @privileged doesn't revoke that permission.

Some further observations:

  • @aio includes io_uring.
  • @raw-io is port-based I/O, and configuration of PCI MMIO.
  • @basic-io is read/write and related calls; @file-system is a much larger group that covers access, stat, chdir, open, and so on.
  • @io-event is evented I/O, that is, select/poll/epoll.

Essentially, @system-service is meant to permit a fairly wide range of operations.  This makes it easier to start using (less likely to break things), at the cost of potentially leaving the gates open more than necessary.

As for my own systems, I may start revoking @privileged and @memlock from daemons that run as a non-root user from the start.  Other than that, this exercise didn't turn out to be very informative.
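
If I do, the drop-in would look something like this (the unit name is a placeholder; the second SystemCallFilter= line subtracts its groups from the allow list established by the first):

    # /etc/systemd/system/mydaemon.service.d/seccomp.conf (hypothetical unit)
    [Service]
    # Start from the broad allow list...
    SystemCallFilter=@system-service
    # ...then remove groups that a daemon running as a non-root user shouldn't need.
    SystemCallFilter=~@privileged @memlock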

Update on 2021-06-25: The command systemd-analyze syscall-filter will show all of the built-in sets; systemd-analyze syscall-filter @privileged will show the calls in a specific set (in this example, the "@privileged" list of calls.)

Tuesday, May 4, 2021

aws-vault on macOS Catalina

I recently installed aws-vault on my old iMac, for malware hardening.  If any finds me, I don’t want it to be able to pick up ~/.aws/credentials and start mining cryptocurrency.

One problem: there were a lot of password prompts, and the “aws-vault” keychain did not appear in Keychain Access as promised.

But it’s okay; there’s a security command that works with it, if we know the secret internal name of the keychain: aws-vault.keychain-db.  (I found this by looking in my ~/Library directory for the keychains; most of them have several files, but aws-vault has only that one.)

As in:

  1. security show-keychain-info aws-vault.keychain-db
  2. security set-keychain-settings -l -u -t 900 aws-vault.keychain-db
  3. security lock-keychain aws-vault.keychain-db
  4. security set-keychain-password aws-vault.keychain-db (this will prompt for current and new passwords)

Regarding the set-keychain-settings subcommand: each option specifies a setting, namely -l to lock the keychain whenever the screen is locked, -u to lock the keychain after a time period, and -t {seconds} to set that time period.  So, in the example, we are setting the keychain to lock again after 900 seconds.  There do not seem to be documented opposites of these options, so I would guess that running the command without a flag removes that setting.  I haven’t tested that, though.

I added the lock-keychain subcommand to my logout file [~/.zlogout for zsh, but formerly ~/.bash_logout].  Whenever I exit an iTerm2 tab, everything is locked again.  (I’ve been clearing SSH keys from the agent in that file for a long time.)

I’ve chosen to “Always Allow” aws-vault to access the keychain, which means I need to put in the password only for the unlocking operation, not for every time the binary wants to access information within the keychain.

Thursday, April 8, 2021

I Don't Know What Grub2 Is Doing

Background: there are a couple of things I do that are far more convenient on Linux, so I decided to replace the USB stick I was using with an SSD.  I bought and installed the SSD, installed Ubuntu Studio 20.10, and let it install Grub to /dev/sda.  Big mistake.

Windows could not be booted.  I wiped the Linux SSD, and then nothing booted; Grub expected to find itself on that drive.  Booting the Linux installer from the USB stick would detect the ghost of Grub in the MBR, and jump into it, instead of starting the stick.  I had to pull out an old copy of SysRescueCd to put something else in Windows' MBR.

But at that point, Windows would still not boot, nor would it repair itself from the repair mode.  I ended up having to reinstall Windows through an unnecessarily confusing and pointless set of menus on the install media.  That's not what the story is about, though.

With a functional Windows on /dev/sda, I reinstalled Linux again, and told it to put the boot loader on /dev/sdb.  This works fine, with the little weird problem that Grub still can't boot Windows 10?

I haven't gone through and tried set root=(hd0) and chain-loading that, but you know, this is 2021? This should work out of the box.

Despite having a "Dual UEFI BIOS" on my motherboard, it boots Windows in legacy/BIOS mode, and not UEFI mode.  So it's not a case of Linux being booted with legacy/BIOS, then not realizing that it needed grub-efi.  I think.

At some level, I don't care.  If it doesn't work, it doesn't work.  The PC is now configured to boot Windows by default, or Linux when I ask for a boot menu and pick the second SSD.  And I'm really just grumpy that the installer/Grub can't figure out how to get Windows up and running properly.

Saturday, April 3, 2021

The libsodium rant

Back when the inclusion of libsodium in PHP 7.2 was announced, I was excited.  A modern crypto library, with a modern, easy-to-use API?  Unfortunately, calling the API "modern" meant something completely different to me than the libsodium binding actually delivers to its users.

Rather than a modern, mistake-proof API, it's a direct, low-level replacement for the unmaintained mcrypt library.  It has more modern primitives, obscure names, and very little documentation.  Although the extension was promoted to core years ago, the functions are all "Warning: this is undocumented" to this day.  (Extension documentation is here.)

As for the names, I actually attended a libsodium talk at a PHP conference.  The speaker covered which functions are asymmetric and which are symmetric, but I don't remember which is which without checking my notes.  The docs say "Encrypt a message" for sodium_crypto_box, sodium_crypto_box_seal, and sodium_crypto_secretbox.  They also say that about sodium_crypto_stream_xor, but that definitely sounds wrong.

I had been expecting an interface more like defuse/php-encryption (which I have used) or perhaps soatok/dhole-cryptography (which I have not.)  Even if all the low-level crypto bits were exposed, because interoperability with other systems is important, I still expected a higher level API.  I expected an "encrypt this text with that key" operation that took care of nonces, formatting details, and algorithm choices.

Wednesday, March 31, 2021

The sea change to vendoring and containers

Not so long ago, we would install code to the global runtime path, where it would be available machine-wide.  CPAN was perhaps the earliest to do this, but PEAR, rubygems, and probably others followed.  After all, CPAN was seen as a great strength of Perl.

Users of such a library would say something like use Image::EXIF; or include_once "Smarty/Smarty.class.php"; Notice that both of these rely on a mechanism that follows some include path to resolve relative filenames.

But as servers got larger faster than applications, co-installing those applications became attractive.  The problem was that they can have different dependency requirements, which complicates the management of a shared include path.

This is about where Bundler happened for Ruby, and its ideas were brought to other ecosystems: Perl got carton, and PHP got composer. These systems build a directory within the project to hold the dependencies, and basically ignore the global include path entirely.

At the cost of disk space, this bypasses the problems of sharing dependencies between projects.  It bypasses all the problems of working with multiple versions of dependencies that may be installed in different hosts’ include paths.  It also makes the concept of the language’s include path obsolete, at least in PHP’s case: the Composer autoloader is a more-capable “include path” written in code.  Finally, it allows piecemeal upgrades—a small service can make a risky upgrade and gain experience, before a larger or more important service on the machine needs to upgrade.
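
For PHP, the difference is visible at the top of any entry point.  A small sketch (the Acme\Reports\Generator class is made up purely for illustration):

    <?php
    // Old style: rely on the global include_path to resolve a relative filename.
    // include_once "Smarty/Smarty.class.php";

    // Composer style: one require pulls in the project-local autoloader, and every
    // vendored class is then resolved from vendor/ by code, not by the include path.
    require __DIR__ . '/vendor/autoload.php';

    $generator = new \Acme\Reports\Generator();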

Piecemeal upgrades also enable independent teams to make more decisions on their own.  Since their dependencies are not shared with other users, changes to them do not have to be coordinated with other users.

Containers are the next step in the evolution of this process.  Pulling even more dependencies into a container brings all the advantages of local management of packages into a broader scope.  It allows piecemeal upgrading of entire PHP versions.  And it makes an application cost an order of magnitude more space, once again.

In our non-containerized environment, I can switch the PHP version on a host pretty easily, using co-installable versions and either update-alternatives (for the CLI) or a2disconf/a2enconf (for FPM), but this means the services do not really have a choice about their PHP version.  It has been made before the application's entry point is reached.

Friday, February 5, 2021

What matters? Experimenting with Ubuntu Desktop VM boot time

It seemed slow to me that the "minimal" installation of Ubuntu Desktop 20.10 in a VirtualBox VM takes 37 seconds to boot up, so I ran some experiments.

The host system is a Late 2012 iMac running macOS Catalina.  It has 24 GB of RAM (2x4 GB + 2x8 GB) installed, and the CPU is the Intel Core i5-3470 (Ivy Bridge, 4 cores, 4 threads, 3.2 GHz base and 3.6 GHz max turbo.)  In all cases, the guests are running with VirtualBox Ubuntu Linux 64-bit defaults, which is to say, one AHCI SATA drive that is dynamically sized.  The backing store is on an APFS-formatted Fusion Drive.

The basic "37 seconds" number was timed with a stopwatch, from the point where the Oracle VM screen is replaced with black, to the point where the wallpaper is displayed.  Auto-login is enabled for consistency in this test.  This number should be considered ±1 second, since I took few samples of each system.  I'm looking for huge improvements.

So what are the results?

  • Baseline: 1 core, ext4, linux-generic, elevator=mq-deadline.  37 seconds.
  • XFS: 1 core, xfs root, linux-generic, elevator=mq-deadline.  37 seconds.
  • linux-virtual: 1 core, xfs root, linux-virtual, elevator=mq-deadline. 37 seconds.
  • No-op scheduler: 1 core, xfs root, linux-virtual, elevator=noop. 37 seconds.
  • Quad core: 4 cores, xfs root, linux-virtual, elevator=noop. 27 seconds!

The GUI status icons showed mostly disk access, with some heavy CPU usage for the last part of the boot, but only adding CPU resources made an appreciable difference.  I also tried the linux-kvm kernel, but it doesn't display any graphics or even any text on the console, so that was a complete failure.

(I've tried optimizing boot times in the past with systemd-analyze critical-chain and systemd-analyze blame, but it mostly tells me what random thing was starved for resources that boot, instead of consistently pointing to one thing causing a lot of delay.  Also, it has a habit of saying "boot complete in seven seconds" or whatever as soon as it has decided to start GDM, leaving a lot of time unaccounted for.  So I didn't bother on these tests.)

Correction: it was Ubuntu 20.10, not 20.04 LTS.  The post has been updated accordingly.

Wednesday, February 3, 2021

Stability vs Churn Culture

I’m working on rewriting memcache-dynamo in Go.  Why?  What was wrong with Python?

The problem is that the development community has diverged from my goals.  I’m looking for a language that’s stable, since this isn’t the company’s primary language.  (memcache-dynamo is utility code.) I want to write the code and then forget about it, more or less.  Python has made that impossible.

A 1-year release cycle, with “deprecated for 1 cycle before removal” as the policy, means a feature can be deprecated and removed entirely between Ubuntu LTS releases: users coming from the previous LTS, two Python versions behind, may find that something has been removed in the current Python without their old Python ever having offered the replacement.

But looking closer to home, it’s a trend that’s sweeping the industry as a whole, and fracturing communities in its wake.  Perl wants to do the same thing, if “Perl 7” ever lands.

Also, PHP 5.6 has been unsupported for two years, yet there are still code bases out there that support PHP 5, and people are (presumably) still running 5.6 in production.  With Enterprise Linux distributions, we will see this continue for years; RHEL 7 shipped PHP 5.4, with support through June 2024.

There’s a separate group of people that are moving ahead with the yearly updates.  PHPUnit for example; every year, a new major version comes out, dropping support for the now-unsupported PHP versions, and whatever PHPUnit functionality has been arbitrarily renamed in the meantime.  The people writing for 5.x are still using PHPUnit 4 or 5, which don’t support 8.0; it’s not until PHPUnit 8.5.12 that installation on PHP 8 is allowed, and that still doesn’t support PHP 7.0 or 7.1.

This is creating two ecosystems, and it’s putting pressure on the projects that value stability to stop doing that.  People will make a pull request, and write, “but actually, can you drop php 5 support? i had to work extra because of that.”

The instability of Linux libraries, from glibc on up, made building for Linux excessively complex, and AFAICT few people bother.  Go decided to skip glibc by default when building binaries, apparently because that was the easier path?

Now everyone thinks we should repeat that mistake in most of our other languages.

Friday, January 1, 2021

Daisywriter

Growing up, my dad had an old daisy wheel printer hooked up to our computers.  (We had various Commodore hardware; we did not get a PC until 1996, and I'm not sure the Amiga 500 was really decommissioned until the early 2000s.)

The daisy wheel was like a typewriter on a circle.  There were cast metal heads with the letters, numbers, and punctuation on them.  The wheel was spun to position, and then what was essentially an electric hammer punched the letter against the ribbon and the paper.  Then the whole wheel/ribbon/hammer carriage moved to the next position, and the cycle repeated.

This was loud; like a typewriter, amplified by the casing and low-frequency vibrations carried through the furniture.  No printing could be done if anyone in the house had gone to bed, or was napping.

It was also huge. It could have printed on legal paper in landscape mode.

Because of the mechanical similarity to typewriters, the actual print output looked like it was really typewritten.  Teachers would compliment me for my effort on that, and I'd say, "Oh, my dad has a printer that does that."

Nowadays, people send a command language like PostScript, PCL, or even actual PDF to the printer, and it draws on the page.  Everything is graphics; text becomes curves.  But the Daisywriter simply took ASCII in from the parallel port, and printed it out.