Tuesday, February 24, 2015

Broadband Asymmetry

Someone, somewhere on the intertubes, recently asked: “Why are we seeing 10:1 speed asymmetry?”

I’m pretty sure the speed asymmetry started for good technical reasons: with some signal magic, the dial-up modems of the day could deliver 48+ kbps down but only 33 kbps up. ADSL took advantage of the technological landscape of the time (browsing and email were asymmetric) to deliver faster speeds where the customers cared, and indeed, my first DSL services were (if memory serves) in the 1.5/0.38 Mbps range, only a 4:1 asymmetry. My current service is ADSL at 9.3:1 (7.0/0.75 Mbps), which is notably closer to 10:1, and which hasn’t qualified as broadband since the 4/1 Mbps definition went into effect.

Even though YouTube made video hit the web in a big way—they were there at the crossover point between better codecs and better bandwidth, plus some cleverness* on their part—most traffic was still downstream. The video being delivered was much larger than the return traffic that acknowledged receipt of the video.

The restricted upload rates are thus firmly grounded in historical reality, and I suspect they persist today for two reasons.

One, there’s obviously a chicken-and-egg problem where uploads are less frequent because upload rates are low, and the rates are lower because uploads are less frequent. There’s a natural tendency for uploads to be less frequent anyway (how many funny cat pictures do you look at per picture you upload?) but low upload rates discourage actual upload usage in and of themselves.

Two, I think ISPs are rewarded for keeping upload rates low. Settlement-free peering has traditionally required each side to send about as much traffic as the other. If an ISP strongly encourages downloads through 10:1 or greater asymmetry, then it will never come close to sending “about equal” traffic out of its network... and it can demand payment from anyone who wants access to its customers, such as Netflix.

I still believe that ISPs should be charging their customers enough to support those customers’ data requests, including adequate network investment, but that doesn’t invalidate the reality.

As for 10:1 specifically, I can only speculate. It may be that 10:1 is simply the ratio at which sending email and uploading to Facebook “doesn’t seem to take too long” for users. And if more people start sending video to Facebook, ISPs may well reshuffle their plans to provide more upstream “so you can Facebook.” Regardless, in the absence of an obvious technical reason, I must assume it serves a specific marketing purpose.

* At one time, they showed 320-pixel-wide video in a 425-pixel-wide player. Although upscaling technically hurts quality, it crossed the gap between “small” and “nicely sized,” looking much better in 1000-pixel-wide browser windows.

Monday, February 23, 2015

Perl is Dying

I like Perl. It's been a part of my life for over 14 years, since I had 5.6.0 on Red Hat 7.0. (Funny how Red Hat Enterprise is now 7.0, and the desktop project hasn't borne the name for over twenty releases.)

But I'm getting the distinct impression that the language is losing its community, and what remains seems too small to keep up with the nice things.

Case in point: AWS. Trawl metacpan.org, and there's no coherent set of packages for using Amazon services. There are some Net::Amazon modules, some Amazon modules, some AWS modules, and SimpleDB::Client doesn't even have Amazon in the name. UPDATE (2020-08-08): This update has been a long time coming, but I discovered and switched to Paws sometime in 2018. On the other hand, Paws::S3 (S3!!!) issues a warning on load that it's not finished. Yea, unto this very day.

Then there are the duplicate packages like Net::Amazon::DynamoDB and Amazon::DynamoDB, which are worlds apart. The former supports practically the original DynamoDB API, using signature version 2, and accepting numbers, strings, and sets of those types as data types. Not even binary types. Amazon::DynamoDB uses the version 4 signature and an updated data model, along with an async client built on IO::Async. Yes, seriously, in 2014 someone didn't use AnyEvent.

This wouldn't be so much of a problem if I hadn't sailed straight into RT#93107 while using it. The most insidious thing is that fds only get closed sometimes, so it mostly seems to work. Until it doesn't. But there's a patch... that's been open without comment for many months.

This is not unlike another patch, for AnyEvent::Connection, open even longer. I can understand if the maintainer thinks it's unimportant because it's "only a deprecation" or because an updated common::sense avoids the issue on affected Perl versions. But to not even comment? The package is apparently dead.

I recently ran into a little trouble finding any OAuth 2.0 server modules. I didn’t see anything general-purpose, nor anything that would let a Dancer app act as an OAuth server. We have plenty of clients and client integrations for providers, though, and it didn’t take my boss long to find a PHP library that he wanted to use.

But enough about CPAN. Let’s talk Perl core.

Perl has been adding numerous ‘experimental’ features over the years. Since 5.16, the current status of all of them has been gathered in the perlexperiment documentation. Though 5.20.1 has now been released, that list still contains items introduced in 5.10.0.

An experimental feature that is neither accepted nor rejected is dead weight. Authors are discouraged from using it, so it isn’t benefiting the majority of users, yet it requires ongoing maintenance and care from the perl5-porters. An experimental feature that stays experimental for far, far longer than the deprecation cycle is the worst possible outcome.

Just imagine being excited about Perl and having conversations about, “Can it do «cool thing»?” or “Have they fixed «wart» yet?” and having to respond “Well, it’s experimental, so technically yes, but you shouldn’t use it.”

Eventually, that conversation is going to come down to “Why don’t you use «Ruby|JavaScript|PHP» instead? They have more cool stuff, and only one object system.”

Even if Perl made a major push to accept experimental features in “the next release,” it would still be anywhere from months to years before that release was widely deployed as a baseline. Unless one wants to build their own Perl every week for fresh cloud images, which is a whole other can of worms.

Meanwhile, the more unique stuff about Perl—regex syntax, contexts, sigils and their associated namespaces—isn’t exactly a banner feature anymore. Other languages have been absorbing the regex syntax, and the rest tends to create bugs, even for authors who understand it all. That’s especially problematic when I change languages frequently, because the Perl-unique stuff takes longer to reload in my head than the rest of the language.

Overall, I’m thankful for what Perl has been for me. However, it seems like it has lost all its momentum, and it doesn’t have anything particularly unique to offer anymore. Certain parts of it like @_ and the choose-your-own-adventure object system feel particularly archaic, and I have to interact with them all the time.

At the same time, it seems like basic stuff (IO::Poll working with SSL, please?) and easy fixes aren’t being taken care of, while new code either isn’t showing up or isn’t high quality. If CPAN can’t keep up with the web, and the web is eating everything, then Perl will be eaten.

You are in a twisty little maze of event backends, all different.
Most of them can use most of the other backends as their own provider.

It pains me to write this. I don’t want to sound like a “BSD is dying” troll. I know I’m throwing a lot of criticism in the face of people who have put much more into Perl (and making Perl awesome) than I have. Miyagawa is perhaps singlehandedly responsible for why our API server started its life in Perl. But he can’t do everything, so the present signs are that no new Perl will be added to our codebase… because doing so would burden our future.

Sunday, February 8, 2015

Pointless metrics: Uptime

Back in the day when I was on Slashdot, a popular pastime was making fun of Windows. The 9x line had a time counter that rolled over after 30-45 days (I forget the exact figure and can’t dredge it up off the Internet now), which would crash the system. Linux could stay up, like, forever dude, because Unix is just that stable.

So I spent a while thinking that ‘high uptime’ was a good thing. I was annoyed, once upon a time, at a VPS provider that rebooted my system for security upgrades approximately monthly, because they were using Virtuozzo and needed to really, seriously, and truly block privilege escalations.

About monthly. As in 30-45 days…

I thought that was bad, but nowadays, the public-facing servers my employer runs live for less than a week. Maybe a whole week if they get abnormally old before Auto Scaling gets around to culling them. And I’m cool with this!

I try to rebuild an image monthly even if “nothing” has happened; definitely whenever upstream releases a new base image; and sometimes just because I know “major” updates were made to our repos (e.g. I just did composer update everywhere) and it’ll save some time in the self-update scripts when the image relaunches.

It turns out that the ‘uptime’ the Slashdot crowd was so proud of was basically counterproductive. I do not want to trade security, agility, or anything else just to make that number larger. There is no benefit from it! Nothing improves solely because the kernel has been running longer, and if something does, then the kernel needs to be fixed to provide that improvement instantly.

And if the business is structured around one physical system that Must Stay Running, then the business on that server is crucial enough to have redundancy, backups, failovers… and regular testing of the failover by switching onto fresh hardware with fresh uptimes.

Thursday, February 5, 2015

Constant-time String Comparison

I mentioned hash_equals in my last post. One of the things the documentation notes is that, should the string lengths not match, hash_equals will quickly return false and effectively reveal the length difference.
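For reference, here is a minimal usage sketch (the variable names and data are mine, not from the documentation):

<?php
// hash_equals() compares two strings in constant time when their lengths
// match; with mismatched lengths it returns false right away.
$secret   = 'example signing key';
$expected = hash_hmac('sha256', 'example payload', $secret);
$provided = 'too-short';                      // attacker-controlled input
var_dump(hash_equals($expected, $provided));  // bool(false), returned quickly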

It seems to be a fairly common perception that this is a problem.  Take this StackOverflow answer:
This function still has a minor problem here:
if(strlen($a) !== strlen($b)) { 
    return false;
}
It lets you use timing attacks to figure out the correct length of the password, which lets you not bother guessing any shorter or longer passwords.
I believe an implementation that doesn’t fail fast on different lengths still leaks information, though.  Most of them (i.e. every one I’ve seen, including ones I’ve written before having this insight) compare all characters through the shorter of the two strings.  If an attacker can time comparisons and control the length of one string, then when the ‘constant time’ algorithm quits taking longer for longer strings, the attacker knows their supplied string is the longer one.
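To illustrate, here is a sketch of that style of comparison (my own code, not any particular library's implementation):

<?php
// A "constant-time" compare that skips the fail-fast length check and
// walks the shorter string. The loop runs min(strlen($a), strlen($b))
// times, so an attacker who controls $b and can time calls still learns
// the length of $a: the running time stops growing once $b is the longer
// of the two.
function leaky_compare($a, $b) {
    $len  = min(strlen($a), strlen($b));
    $diff = strlen($a) ^ strlen($b);   // differing lengths still force false
    for ($i = 0; $i < $len; $i++) {
        $diff |= ord($a[$i]) ^ ord($b[$i]);
    }
    return $diff === 0;
}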

Therefore, I don’t believe “fail fast on different string lengths” is something to be concerned with. If the threat model is concerned with a timing attack, then simply moving the length check around within the function doesn’t actually form a defense.

Tuesday, February 3, 2015

When Did PHP Get That?

5.6.3

5.6.0
5.5.0
  • Generators
  • finally keyword for exception handling with try/catch
  • Password hashing API: password_hash and related functions
  • list() in foreach: foreach($nest as list($x, $y))
  • PBKDF2 support in hash and openssl (hash_pbkdf2, openssl_pbkdf2)
  • empty() accepts arbitrary expressions, not just lvalues
  • array and string literal dereferencing: "php"[2] or [1,2,10][2]
    • But is that useful?
  • ::class syntax: namespace A\B { class C { }; echo C::class; /* A\B\C */ }
  • OpCache extension added
  • MySQL-nd supports sha256-based authentication, which has been available since MySQL 5.6.6
  • curl share handles via curl_share_init and friends [added to this post on 2015-02-17]
  • DEPRECATED: preg_replace's /e modifier, in favor of preg_replace_callback
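
A quick, hedged sketch of a few of the 5.5 items above (the function names and data are mine, for illustration only):

<?php
// Generators: lazily yield values instead of building the whole array.
function countdown($from) {
    for ($i = $from; $i > 0; $i--) {
        yield $i;
    }
}
foreach (countdown(3) as $n) {
    echo $n, "\n";                            // 3, 2, 1
}

// finally: runs whether the try block throws or returns early.
function read_start($path) {
    $fh = fopen($path, 'r');
    try {
        return fread($fh, 64);
    } finally {
        fclose($fh);
    }
}
echo strlen(read_start(__FILE__)), " bytes read\n";

// Password hashing API: bcrypt by default, checked with password_verify().
$hash = password_hash('hunter2', PASSWORD_DEFAULT);
var_dump(password_verify('hunter2', $hash));  // bool(true)

// list() in foreach.
foreach ([[1, 2], [3, 4]] as list($x, $y)) {
    echo "$x,$y\n";
}

// ::class resolves to the fully qualified class name at compile time.
echo \DateTime::class, "\n";                  // DateTime
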
5.4.0
Removed:
  • safe_mode
  • register_globals
  • magic_quotes
  • call-time pass-by-reference
  • using TZ when guessing the default timezone
  • Salsa10 and Salsa20 hash algorithms
Added:
  • Traits
  • Class::{expr} syntax
  • Short array syntax: [2,3] for array(2,3)
  • Binary literal syntax: 0b1101
  • Closures can use $this in the body: function some_method () { return function () { return $this->protected_thing(); }; }
  • Using new objects without assigning them: (new Foo)->bar()
  • CLI server
  • session_status function
  • header_register_callback function
  • mysqli_result now implements Traversable: $r=$dbh->query(...); foreach ($r as $res) { ... }
  • htmlspecialchars() and friends default to UTF-8 instead of ISO-8859-1 for character set.
    • This would be changed to use the default_charset INI setting in 5.6.
    • 5.4's documentation pointed out that default_charset could be selected by passing an empty string, "", as the character set to htmlspecialchars—and said you didn't really want to do that.
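
A hedged sketch of several 5.4 items above (the class and trait names are invented for illustration):

<?php
// Traits: horizontal code reuse.
trait Greets {
    public function greeter() {
        // Closures may now use $this; this one binds to the calling object.
        return function () {
            return "Hello from " . get_class($this);
        };
    }
}

class Foo {
    use Greets;

    public function bar() {
        return 0b1101;               // binary literal (13)
    }
}

// Use a new object without assigning it first.
echo (new Foo)->bar(), "\n";         // 13

$hello = (new Foo)->greeter();
echo $hello(), "\n";                 // Hello from Foo

$xs = [2, 3];                        // short array syntax
var_dump($xs[1]);                    // int(3)
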
5.3.0
Added:
  • Namespaces
  • Late static binding and associated 'static::method()' call syntax
  • "Limited" goto support
  • Closures aka anonymous functions, although they cannot make use of $this until 5.4
  • __callStatic() and __invoke() magic methods; __invoke() enables the $obj($arg) call syntax
  • 'nowdoc' and double-quoted 'heredoc' syntax
    • $x=<<<'FOO' consumes text until FOO and does not do any interpolation / variable substitution on it
    • $x=<<<"FOO" acts just like traditional $x=<<<FOO, both consuming text until FOO and doing interpolation / variable substitution as if it were a double-quoted string.
  • Constants definable outside of classes with the const keyword, not just the define function
  • Short ternary operator: a ?: b equivalent to a ? a : b
  • $cls::$foo syntax for accessing static members of a class whose name is stored in a variable ($cls in this example)
  • Chained exceptions: the constructor sprouted an extra parameter for a previous exception (generally should mean "the cause of this exception being constructed")
  • DEPRECATED: many features that would be removed in 5.4.0, including register_globals, magic_quotes, and safe_mode.
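
And a hedged sketch of some 5.3 items above (again, the names are illustrative only):

<?php
namespace Demo;

const ANSWER = 42;                   // const outside a class

class Config {
    public static $env = 'production';
}

function risky() {
    try {
        throw new \RuntimeException('low-level failure');
    } catch (\RuntimeException $e) {
        // Chained exceptions: the previous exception rides along as the
        // third constructor argument.
        throw new \LogicException('higher-level failure', 0, $e);
    }
}

try {
    risky();
} catch (\LogicException $e) {
    echo $e->getMessage(), ' (caused by: ', $e->getPrevious()->getMessage(), ")\n";
}

$name = 'World';

$nowdoc = <<<'TXT'
No interpolation: $name stays literal.
TXT;

$heredoc = <<<"TXT"
Interpolated: hello, $name.
TXT;

echo $nowdoc, "\n", $heredoc, "\n";

$greeting = null;
echo $greeting ?: 'default greeting', "\n";   // short ternary

$cls = 'Demo\\Config';               // class name stored in a variable
echo $cls::$env, "\n";               // static member access via that variable

echo ANSWER, "\n";                   // 42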