Tuesday, July 14, 2015

TIL: HTTP Upgrade

I’m thinking about creating devproxy in a different language. Tracking down some relevant specs, I found the CONNECT RFC.

This RFC defines not only CONNECT but also an alternative: using an Upgrade header to convert a regular HTTP connection to TLS, with encryption either optional or mandatory. It was like STARTTLS for HTTP, in a way.

CONNECT won out in the real world, of course, but I find this lost feature kind of fascinating.

Quick comparison:

  • Upgrade is a hop-by-hop header. The browser/proxy and proxy/upstream connections MAY be using different levels of encryption.
  • When using Upgrade, the proxy needs a valid TLS certificate to handle encrypting traffic with its clients.
  • Also, this means the proxy can still view/cache/log the data that was encrypted on the wire.
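For the curious, the optional-upgrade flow from RFC 2817 looks roughly like this on the wire (the host name is made up; for mandatory encryption, the server instead rejects the plaintext request with 426 Upgrade Required):

```http
OPTIONS * HTTP/1.1
Host: internal.example.com
Upgrade: TLS/1.0
Connection: Upgrade

HTTP/1.1 101 Switching Protocols
Upgrade: TLS/1.0, HTTP/1.1
Connection: Upgrade

(TLS handshake proceeds on the same connection;
 subsequent requests travel encrypted)
```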

CONNECT is basically the opposite: once the request is made and the proxy allows it, the proxy reverts to being just as dumb as any router on the Internet. All it can do is shuttle the bytes, so the same bytes that leave the origin end up at the client without any caching or interpretation. Since CONNECT is mainly used for HTTPS, those bytes are most often encrypted, as well.
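Compare the CONNECT flow, roughly (the "Connection Established" reason phrase is convention; any 2xx means the tunnel is open):

```http
CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection Established

(proxy now blindly relays bytes; the TLS handshake
 runs end-to-end between client and origin)
```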

Google may have tried the Upgrade header when first developing SPDY, but they didn’t like the extra round trip or the ability for intermediate devices on the network to interfere (intentionally or otherwise). So it didn’t end up getting resurrected from the dustbin of history for that, either.

So maybe I didn’t learn about it today, but only rediscovered it.

Friday, July 10, 2015

Letting Go of Go

The things that originally attracted me to Go were the concurrency model, the interface system, and the speed. I was kind of meh about static typing (and definitely meh about the interface{} escape hatch) but figured the benefits might be worth the price?

But it hasn’t really turned out that way. I still like the concept of having no locks exposed to the user (safely hidden in the channel internals) à la Erlang or Clojure. But I’m not going to pay for it with err everywhere, static types, a profusion of channels, and a lack of generics.

Seriously, all of the synchronization choices of Go seem to come down to, “Use another channel.” Keeping track of so many channels among a few stages of processing is a whole new layer of heavy work. That would be pretty much unnecessary if channels could be “closed with error,” which could then be collected by the UI end.
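The usual workaround, as a sketch (all the names here are mine): since close() can’t carry an error, you smuggle it through the element type instead, and every consumer has to know to look for it.

```go
package main

import (
	"errors"
	"fmt"
)

// result bundles a value with an error, because a channel
// can't be "closed with error" -- close() carries no payload.
type result struct {
	val int
	err error
}

func produce(out chan<- result) {
	defer close(out)
	for i := 1; i <= 3; i++ {
		out <- result{val: i * 10}
	}
	// Signal failure in-band; this is the part close() can't do.
	out <- result{err: errors.New("upstream went away")}
}

func main() {
	ch := make(chan result)
	go produce(ch)
	for r := range ch {
		if r.err != nil {
			fmt.Println("error:", r.err)
			return
		}
		fmt.Println("got:", r.val)
	}
}
```

Multiply that wrapper by every stage in a pipeline and the overhead adds up fast.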

Then there is the whole problem of generics. The runtime clearly has them: basically, anything creatable through make() is generic. But there’s no way for Go code to define new types that make can create generically. There’s no way for Go code to accept a type name and act on it as a type, either.
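To be concrete about make’s built-in genericity:

```go
package main

import "fmt"

func main() {
	// make is effectively generic over slices, maps, and channels,
	// parameterized by whatever element types you name...
	ints := make([]int, 3)
	byName := make(map[string]float64)
	msgs := make(chan string, 1)

	// ...but user code can't define a new type or function with
	// that kind of type parameterization in (2015-era) Go.
	msgs <- "hi"
	fmt.Println(len(ints), len(byName), <-msgs)
}
```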

You can pretend to hack around it with interface{} and runtime type assertions, but you lose all of the static checking. The compiler itself knows that a map[string]int uses strings as keys and can only store integers, but an interface{}-based pseudomap won’t fail until runtime.
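A small illustration of the difference (my own toy example): the typed map rejects a bad value before the program ever runs, while the interface{} version happily stores it and only a runtime type assertion notices.

```go
package main

import "fmt"

func main() {
	// Statically typed map: the compiler enforces value types.
	typed := map[string]int{"answer": 42}
	// typed["oops"] = "not an int" // would be a compile error
	fmt.Println("typed:", typed["answer"])

	// interface{} pseudomap: anything goes in...
	loose := map[string]interface{}{"answer": 42, "oops": "not an int"}

	// ...and mistakes only surface at runtime, via type assertions.
	for _, k := range []string{"answer", "oops"} {
		if n, ok := loose[k].(int); ok {
			fmt.Printf("%s: %d\n", k, n)
		} else {
			fmt.Printf("%s: not an int (runtime discovery)\n", k)
		}
	}
}
```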

To get the purported advantages of static typing, the data has to be fit to the types that are already there.

I’d almost say it doesn’t matter to my code, but it seems to be a big deal for libraries. How they choose their data layout has effects on callers and integration with other libraries. I don’t want to write a tall stack of dull code to transform between one and the other.

The static types thing, I’m kind of ambivalent about. If the compiler can use the types to optimize the generated code, so much the better. But it radically slows down prototyping by forcing decisions earlier. On balance, it doesn’t seem like a win or a loss.

Especially with all the performance-optimization work on dynamic languages, refined (to an extent) in Java, C#, and JRuby, and now flowing into JavaScript. It’s getting crazy out there. I don’t know if static typing is going to hold onto its edge.

I think that brings us back around to err. Everywhere. I really want Lisp’s condition system instead. It seems like a waste to define a new runtime, with new managed stacks, that doesn’t have restarts and handlers. With the approach they’ve chosen, half of Go code is solving the problem, and the other half is checking and rethrowing err values.

Go isn’t supposed to have exceptions, but if you can live with its limitations, recover is a thing. (It’s still not Lisp’s condition system, though, and by convention, panic/recover isn’t supposed to leak across package boundaries.)
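The conventional shape of that, sketched with made-up function names: catch the panic at the package boundary and hand callers an ordinary error. Note what’s missing versus a condition system: by the time recover runs, the stack has already unwound, so there’s nothing left to restart.

```go
package main

import "fmt"

// mightPanic simulates a failure deep inside a package.
func mightPanic() {
	panic("something broke three frames down")
}

// do converts the panic back into an ordinary error at the
// package boundary, per the usual convention.
func do() (err error) {
	defer func() {
		if r := recover(); r != nil {
			// The stack between here and the panic is already
			// gone -- no Lisp-style restart is possible.
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	mightPanic()
	return nil
}

func main() {
	if err := do(); err != nil {
		fmt.Println(err)
	}
}
```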

I forgot about the mess that is vendoring and go get ruining everything, but I guess they’re working on fixing that. It’s a transient pain that’ll be gone in a couple more years, too late for my weary soul.

But am I wrong? What about the “go is the language of the cloud” thing that Docker, Packer, and friends have started? I don’t think Go is “natively cloud” because that’s meaningless. I think a few devs just happened to independently pick Go when they wanted to experiment, and their experiments became popular.

It surely helps that Go makes it easy to cross-compile machine-code binaries that will run anywhere without stupid glibc versioning issues, but you know what else is highly portable amongst systems? Anything that doesn’t compile to machine code. For instance, the AWS CLI is written in Python… while their Go SDK is still in alpha.


I find the limitations more troublesome than the good parts, on balance. I recently realized I don’t care about Go anymore, and haven’t written any serious code in it since 1.1 at the latest. It isn’t interesting from every angle the way Clojure is.