Saturday, August 8, 2020

On "Perl 7"

Previously:

  1. SawyerX: Announcing Perl 7.
  2. Grinnz: Perl 7: A Risk-Benefit Analysis.
  3. RJBS: On Perl 7 and the Perl Steering Committee.

My comments at this point:

Hey #perl core. Is there anywhere I can upload a cpanfile, so you can know what's important in production? Like Debian's popularity-contest, but specialized. (We install most everything via Ubuntu repositories, so you don't get downloads/test feedback.)

The Perl 7 thing is probably accelerating our plans to phase Perl out entirely. I'm not sure what we'll do if Ubuntu 22.04 ships a "Perl 7" that can't run Perl 5 code.

To be honest, ~20% of hits to our client-facing site are served by CGI.pm because of some failures on my part (should have used CGI::Emulate::PSGI?) and on management's (50% time savings is great, but let's work on features... forever).

Anyway, it's going to be real hard to sell "work on Perl 7 support" to a management team who have been ramping up the priority on "rewrite the application."

(Lightly edited from my Twitter thread.)

Now, I realize I'm nobody, and nobody has to listen to me, but I think the popularity-ranking thing is kind of important, and maybe we even need a "State of Perl" survey to find out what's actually broken. All I know from my vantage point is that Perl looks fairly dead already, despite how much I loved it. The Perl 7 crisis has so far been more anxiety-inducing than inspiring.

Thursday, April 30, 2020

pipenv's Surprise

  Warning: Python 3.6 was not found on your system
  You can specify specific versions of Python with:
    $ pipenv --python path/to/python
That leaves me with no clear path to making this project run with the system Python 3 across just two versions of Ubuntu LTS.  It doesn't work on Focal (Python 3.8) as-is, and if I update the Pipfile.lock, it won't run on Bionic (Python 3.6).  The project doesn't pin its shebangs to python3.6, because it expects to run on Python 3.6 or later; that's how SemVer is supposed to work!
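
For what it's worth, the pin isn't hidden anywhere exotic.  Pipfile.lock is plain JSON, and the interpreter requirement lives under _meta.requires; a quick sketch to print it (assuming a lockfile in the current directory):

  import json

  # Pipfile.lock is plain JSON; the interpreter pin that pipenv
  # enforces lives under _meta.requires.
  with open("Pipfile.lock") as f:
      lock = json.load(f)

  requires = lock["_meta"].get("requires", {})
  print(requires.get("python_version"))  # e.g. "3.6"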

Maybe the answer is to build my own tools in order to run this in a way that suits me. Which is: I really want a build phase that creates a tarball, which can be extracted and run in place later. All the complexity of vendoring should be finished by the time deployment (from the tarball) occurs, for reliability and reproducibility.
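
A minimal sketch of that build phase, assuming a requirements.txt and an app/ directory (both names are hypothetical stand-ins for this project's real layout):

  #!/usr/bin/env python3
  import shutil
  import subprocess
  import sys

  # Vendor every dependency into the tree at build time, so that
  # deployment is just "extract the tarball and run."
  subprocess.run(
      [sys.executable, "-m", "pip", "install",
       "--target", "app/vendor", "-r", "requirements.txt"],
      check=True,
  )

  # Pack the whole tree into app-bundle.tar.gz for deployment.
  shutil.make_archive("app-bundle", "gztar", root_dir=".", base_dir="app")

At run time, the entry point would only need to put app/vendor on sys.path (or PYTHONPATH) before importing anything: no virtualenv, no network access, no install step.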

I do not want to write this project a third time, probably in a language I hate, just because virtualenv is both wholly inadequate and central to Python tooling.

(Something like the way the awscli-bundle is packaged is an interesting start, but it still has to grind through a lot of code to "install" itself. It's not unzip-and-go. Also, I've got no idea how they build that in the first place.)

Wednesday, March 11, 2020

Unexpected Noise in Ardour

I've been learning to mix with Ubuntu Studio 19.10 and Ardour 5, but I've had a puzzling issue: noise.

My workflow has been to load in the .wav files I want to mix and then start working, but there was always some low-level noise on the channels.  Even when Ardour was not supposed to be playing anything, the noise would be there.

It turns out that Ardour still connects the hardware inputs to each track and passes them through the effects chain.  This is why the noise responds to the fader, noise-gate effects, and the like.  The solution is to go to the input on each mixer strip and switch it from 1 (or 2, or 1/2, etc.) to "Disconnected" instead.  Otherwise, at minimum, thermal noise from the input gets added to the channel.

I happened to discover this through a pair of accidents: I had a cheap headset mic connected, which made the noise worse than ever, and all of the meters jumped whenever there was background noise in the room.  Disconnecting the inputs fixed the noise without affecting the ability to play back the imported audio.

(This isn't the normal fare for my blog, but if I put it on tumblr instead, it'll never be seen again, and what good is that?)

Thursday, December 19, 2019

Informative Shell Prompts

Recently, /u/NotSoButFarOtherwise wrote about Starship, a cross-shell prompt generator:

  If you really forget your current directory, username, or host, there are commands to remind you. But you'd probably be surprised how often you don't.

I tried it, but I immediately noticed that I like having the context available in the shell history.  If I get interrupted and jump directories a couple of times to run a few brief commands, I rely on my prompt from a few lines or a few screens back to remind me of what I was doing.

Multi-tasking isn’t ideal, yet it remains inescapable, so I don’t want to strip away all of my assistive configurations.  It reminds me a lot of my experience returning to vim after using VS Code for 1-3 months: “You know what, having more information was actually rather useful.”

Sunday, November 3, 2019

Do the Hard Part First

Find the hardest part of the problem or task, and solve that first.  Search for the biggest risks, and explore them.  Shine light into the darkness of uncertainty.

It’s not as fun as plowing ahead and writing some code, but it’s a lot easier to design when the components are known.  It’s also easier to make a good estimate of the time remaining when fewer surprises are left.  So ferret out those surprises, and destroy them.

Friday, August 9, 2019

Doing Affects Thinking

I ultimately decided not to use Psalm, a static-analysis tool for PHP. Of the hundreds of errors I fixed when trying it, out of a corpus of 1,500+, only a handful would have had operational impact.

But ever since, I've been quietly noticing "Psalm errors," where the phpdoc doesn't match types in practice, or doesn't match the actual type declarations on the method.

(Of course, my API design has been strongly affected by PHP's type-declaration syntax; I am now trying to design "less convenient" interfaces that offer stronger type information for the IDE. I can't declare string|array|false in PHP, but I can declare ?array for an array or null. That happens to align with reducing the number of options Psalm has to deal with.)
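
The same pressure shows up under any type checker.  Here's the shape of the idea as a hypothetical Python lookup (not code from my app), where committing to one return shape beats a three-way union:

  from typing import List, Optional

  # Hypothetical example. A "convenient" version might return a
  # string for one match, a list for several, or False for none;
  # a checker can't say much useful about that union. Committing
  # to one shape (a list, or None for "not found") gives it, and
  # the IDE, something solid to work with.

  TAGS = {1: ["perl", "php"], 2: []}

  def find_tags(post_id: int) -> Optional[List[str]]:
      """Return the post's tags, or None if the post doesn't exist."""
      return TAGS.get(post_id)

  assert find_tags(1) == ["perl", "php"]
  assert find_tags(99) is None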

Thursday, August 1, 2019

Containers are Interop

I mean this in the same sense as “XML was created to solve the interoperability problem.”

The container craze is about the interoperability problem across environments. By vendoring an entire distribution and communicating only over the network, containers essentially provide isolation for all the dependencies of a service. Maybe that part is the same, in essence, as the Nix package manager.

But then containers have one more trick: they run anywhere with a “Linux syscall interface” underneath. Any environment with Docker support can run Docker containers (as long as the binaries run on the host, at least). It’s not entirely simple (orchestration is an issue, and Docker is working on that, too), but the containers themselves become highly portable, since they’re shipped as black boxes. They don’t depend on code outside themselves, and as such, that outside code cannot break them so easily.

And maybe, by entwining a Linux distro so fully with our app, we’re forgetting how to be cross-distro or cross-platform. And the old coder in me wants to grump about that. Yet that’s also a kind of freedom: not everyone has to learn how to write cross-platform code if the container environment defines exactly one platform to run on.

Maybe we’re losing something, but we’re also gaining ease of use and accessibility in the deal.