Editor's note: I found this in my drafts from 2012. By now, everything that can be reasonably converted to FastCGI has been, and a Perl-PHP bridge has been built to allow new code to be written for the site in PHP instead. However, the conclusion still seems relevant to designers working on frameworks, so without further ado, the original post follows...
The first conversions of CGI scripts to FastCGI have been launched into production. I have both the main login flow and six of the most popular pages converted, and nothing has run away with the CPU or memory in the first 50 hours. It’s been completely worry-free on the memory front, and I owe it to the PHP philosophy.
In PHP, users generally don't have the option of persistence. Unless something has been deliberately allocated in persistent storage by the PHP kernel (the C-level code), everything gets cleaned up at the end of the request. Persistent database connections are the classic example.
Perl is obviously different, since data can be trivially kept alive by stashing it in package-level variables, but my handler-modules (e.g. Site::Entry::login) don't use them. Each handler-module defines one well-known function, which returns an object instance that carries all the necessary state for the dispatch and optional post-dispatch phases. When this object is destroyed in the FastCGI request loop, so too are all of its dependencies.
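A minimal sketch of that handler-module shape, with all names beyond Site::Entry::login hypothetical (the real module surely does more):

```perl
package Site::Entry::login;
use strict;
use warnings;

# Hypothetical sketch of the pattern described above: one well-known
# function returns an object carrying all per-request state; no
# package-level variables are involved.
sub instance {
    my ($class, $request) = @_;
    return bless {
        request => $request,   # per-request input
        user    => undef,      # filled in during dispatch
    }, $class;
}

sub dispatch {
    my ($self) = @_;
    # Returns the response rather than printing it directly.
    return [200, ['Content-Type' => 'text/html'], ['<html>...</html>']];
}

# When the object returned by instance() falls out of scope at the end
# of a request-loop iteration, it and everything it references are
# reclaimed, giving the PHP-style clean slate.

1;
```

Because the object is the only place state lives, cleanup is just normal scope exit; there is nothing to remember to reset between requests.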
Furthermore, dispatch returns its response, WSGI-style, so that if dispatch dies, the FastCGI loop can return a generic error page to the browser. Dispatch isn't allowed to write anything to the output stream directly, headers included, which guarantees a blank slate for the main loop's error page. (I once wrote a one-pass renderer, then had to grapple with questions like "How do I know whether HTML has been sent?", "How do I close half-sent HTML?", and "What if it's not HTML?" in the error handler.)
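The error isolation this buys can be sketched as below; `run_request` and `render_response` are hypothetical helper names, and the surrounding FCGI accept loop (the FCGI module's `Accept()`) is elided to keep the sketch self-contained:

```perl
use strict;
use warnings;

# Dispatch returns the whole response (status, headers, body) and never
# prints, so if it dies the loop still has a blank slate to work with.
sub run_request {
    my ($dispatch) = @_;
    my $response = eval { $dispatch->() };
    if ($@ or !defined $response) {
        # Nothing has been written to the client yet, so a complete,
        # generic error page is always possible.
        $response = [500, ['Content-Type' => 'text/html'],
                     ['<html><body>Something went wrong.</body></html>']];
    }
    return $response;
}

# Only this one place serializes a response onto the wire.
sub render_response {
    my ($response) = @_;
    my ($status, $headers, $body) = @$response;
    my $out = "Status: $status\r\n";
    my @h = @$headers;
    while (my ($k, $v) = splice @h, 0, 2) {
        $out .= "$k: $v\r\n";
    }
    return $out . "\r\n" . join('', @$body);
}
```

With one-pass rendering, an error mid-dispatch leaves the serializer untouched; the "has HTML already been sent?" question simply never arises.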