Friday, August 9, 2019

Doing Affects Thinking

I ultimately decided not to use Psalm. Of the hundreds of errors I fixed while trying it (out of a corpus of 1500+), only a handful would have had operational impact.

But ever since, I've been quietly noticing "Psalm errors," where the PHPDoc doesn't match the types in practice, or doesn't match the actual type declarations on the method.

(Of course, my API design has been strongly affected by PHP type declaration syntax; I am now trying to design "less convenient" interfaces that offer stronger type information for the IDE. I can't declare string|array|false in PHP, but I can declare ?array for an array or null. This just happens to align with reducing the number of options Psalm has to deal with.)

Thursday, August 1, 2019

Containers are Interop

I mean this in the same sense as “XML was created to solve the interoperability problem.”

The container craze is about the interoperability problem across environments. By vendoring the entire distribution and communicating only over the network, containers essentially provide isolation for all the dependencies of a service. Maybe that part is the same, in essence, as the Nix package manager.

But then containers have one more trick: they run anywhere with a “Linux syscall interface” underneath. Any environment with Docker support can run Docker containers. (As long as the binaries run on the host, at least.) It’s not entirely simple—orchestration is an issue, and Docker is working on that, too—but the containers themselves become highly portable, since they’re shipped as black boxes. They don’t depend on code outside themselves, and as such, that outside code cannot break them so easily.

And maybe, by so fully entwining a Linux distro with our app, we’re forgetting how to be cross-distro or cross-platform. And the old coder in me wants to grump about that. Yet, that’s also a kind of freedom. Not everyone has to learn how to write cross-platform code if the container environment defines exactly one platform to run on.

Maybe we’re losing something, but we’re also gaining ease-of-use and accessibility in the deal.

Saturday, July 20, 2019

Search and Filter are Distinct Concepts

This wasn’t evident until I tried to implement a “unified” search-and-filter box.  The plan was to submit a search once the user typed 3 characters, then filter that search result set as the user continued typing.  Brilliant, right? It turned out not to be so simple.

What happens if the user corrects a typo in those first three characters?  We would have to recognize it as a new search.

What happens if the user backspaces into the first three characters?  By default, the filter code would see we had a search active, and continue to display all of the results.

What about the default state?  Do we display all our results, removing the idea of search (and the client memory it conserves), or do we display recently modified records?  Can we filter those recently-modified results without causing a search to wipe them out?  When do we go back to displaying recently-modified items as the user deletes text?

Would the user be surprised and dismayed if they crossed an invisible boundary, and their search results vanished, replaced by a filter of that default view?

There’s another potential optimization missing: if the user adds a character at the beginning of the string, then we could theoretically be filtering the already-available search results.  However, this means we would need to track the “search window” as it wandered around in the word.  That would make it even harder to know, as a user, when the result list was going to jump around.
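Here is a minimal sketch of the state logic those questions imply, with invented names and a simplified in-memory backend (a hypothetical reconstruction, not the actual widget code): a search fires at three characters, later keystrokes filter the cached results, and any edit that stops extending the original query forces a new search.

```python
# Hypothetical sketch of a "unified" search-and-filter box (names invented).
# A search fires once 3 characters are typed; further typing filters that
# cached result set, but only while the text still extends the original query.
class SearchFilterBox:
    MIN_SEARCH_LEN = 3

    def __init__(self, backend_search):
        self.backend_search = backend_search  # callable: query -> list of records
        self.active_query = None              # query the cached results came from
        self.results = []

    def on_input(self, text):
        if len(text) < self.MIN_SEARCH_LEN:
            # Backspaced below the threshold: drop the search entirely,
            # rather than keep filtering a now-stale result set.
            self.active_query = None
            self.results = []
            return self.results

        if self.active_query is None or not text.startswith(self.active_query):
            # New search, or the user edited within the first characters:
            # the cached results are no longer a superset, so re-search.
            self.active_query = text[:self.MIN_SEARCH_LEN]
            self.results = self.backend_search(self.active_query)

        # The text extends the active query, so filtering the cache is safe.
        return [r for r in self.results if text in r]
```

Even this toy version has to special-case typo correction and backspacing; it also ignores the “search window” idea entirely, since tracking a moving prefix would make the behavior harder still to predict.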

Thinking about all of these details made me really appreciate the thought that went into NewEgg’s design, where “search terms” act as individual filters, exactly like product attributes.  (The difference is, NewEgg doesn’t have live filtering of the results; the “search within” UI reloads the page with an additional text term included.)

Tuesday, May 7, 2019

On Tooling

I used to think that tools didn’t matter.  In creative acts, it’s often the person that makes the difference.  Compare violins, and 80-90% of the difference in performance comes down to the player.  The $10,000,000 violin does sound a little better in anyone’s hands, but Benny can still get an excellent performance out of Brett’s violin.  Before I was interested in music, Ken Rockwell said the same thing about cameras—the camera doesn’t matter, the photographer does.

In college, we used Visual Studio as if it were “Notepad with a compile button,” so I really didn’t think that much of IDEs.

I went on to program in Vim and GVim for 20 years.  I finally started using plugins; first Syntastic, and then ALE.  I knew I was missing a proper debugging experience, but I didn’t want to give up everything else for that.

But then, things happened.  When Homebrew broke MacVim for a while, back in October or so, I used VS Code, and I began to really enjoy Intelephense.  It was kind of disorienting to be back in vim, without those omnipresent hints.

That was the turning point.  That was the key experience that made me think, “I really should try PHPStorm after all.”

Just as VS Code is more productive than vim (code completion and docs!), PHPStorm is another level beyond my VS Code setup.  It has far more static checks, and it has much more effective refactoring tools.  Oh, and its XDebug integration actually works, unlike everything else I ever tried.

There was some code I ported from Perl to PHP in vim, and didn’t have a good way to test.  I knew it was risky, so I tried extra hard to make sure it was right, then pushed it to production anyway.  By the time someone tried to use the feature, months later, it crashed before even being able to flag the job as “started”.  I opened the file in PHPStorm and fixed around a half-dozen bugs based on its warnings alone.  Then it ran fine.

There’s another project where I have been using the code navigation features heavily (open by class, go to test/implementation/definition) as well as the rename and “change signature” refactorings.  It’s a massive rewrite of an API implementation; we outsourced development for political reasons, which blew up in our face as usual.  But I figured I could clean it up when we took delivery.

Let’s just say, it’s a good thing I have PHPStorm for it, and it’s also clear the external team didn’t. I started out by generating a lot of PHPDoc blocks and locking down the types, just to give PHPStorm some traction on finding the next layer of bugs.

And I know editors are religious, and some would say that I could carefully configure VS Code or vim to do more, to be better at PHP or at Symfony or whatever.  The thing is, PHPStorm did it out of the box. (vim is at a special disadvantage here, because it was designed before IDEs, so it doesn’t have a whole lot of shortcuts available for IDE functionality.)

PHPStorm isn’t perfect, of course.  It’s missing a few warnings, the type analysis doesn’t always work, and it doesn’t seem to handle reworking the namespace if a file is moved around a PSR-4 root.  It’s not very good at understanding a collection of independent CLI scripts—definitions leak across files that don’t include each other.  But all in all, I can’t really imagine taking on the API project in vim or VS Code.  Even with the test suite, it would be slower going, or buggier, or both.

Sunday, February 3, 2019

Deployment May Be Stateful

Our deployment process can technically accept a commit hash or an alternate branch to deploy, but by default, it updates to the currently checked-out branch tip. This default also applies to the auto-update code that brings our pre-baked AMI up-to-date when it launches.

For the most part, this is fine.  We keep master in a deployable state, and that’s always the desired version to deploy.  Thus, the whole system is stateless…

But, it also means that we can’t use our fancy “change branch” or “deploy commit” operations very much.  If we do, then the desired version is no longer what the AMI will auto-deploy when new instances launch from it.  We have to either build a new AMI (for the branch) or restore the deployability of master before any new instances launch.

If we reached the “deploy from tarball” goal, life would be easier.  Builds could happen from any branch or commit naturally, and we could prevent a broken tarball from auto-deploying by simply deleting it.
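The difference can be sketched in a few lines, with invented names (our actual deploy scripts are not shown here): today, the default target is the checked-out branch tip, so any explicit one-off deploy leaves state behind that newly launched instances will not see; with tarballs, the target is just the newest surviving build artifact, and deleting a broken build removes it from consideration.

```python
# Hypothetical sketch of the two resolution strategies (names invented).

def resolve_deploy_target(explicit_ref=None, checked_out_branch="master"):
    """Current behavior: an explicit commit/branch wins, otherwise the
    checked-out branch tip.  Stateful: a one-off explicit deploy diverges
    from what a freshly launched AMI will auto-deploy."""
    return explicit_ref if explicit_ref is not None else checked_out_branch

def resolve_tarball_target(available_tarballs):
    """Tarball goal: deploy the newest surviving artifact (assuming names
    sort by build time).  Deleting a broken tarball stops it deploying."""
    return max(available_tarballs) if available_tarballs else None
```

With the tarball scheme, the branch a build came from no longer matters at deploy time, which is what makes the system stateless again.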