Late last year, we started seeing problems where services did not appear to pick up fresh data after a reload. I have a hypothesis, though.
We have a monolithic server build, where several services end up running under one Apache instance. I thought it would be nice to make each one think it was the only service running. Instead of enumerating badness, I switched to using TemporaryFileSystem= to hide all the directories, then allowing access back to the one needed for each service.
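The arrangement looked roughly like this; the unit name and paths are made up for illustration, but the directive pairing is the documented pattern (TemporaryFileSystem= mounts an empty tmpfs over the path, and BindReadOnlyPaths= punches the one needed directory back through it):

```ini
# app1.service -- hypothetical unit; paths are illustrative
[Service]
# Mount an empty tmpfs over /srv, hiding every sibling app's tree...
TemporaryFileSystem=/srv
# ...then bind the one directory this service actually needs back in,
# read-only, so app1 can see only itself.
BindReadOnlyPaths=/srv/app1
```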
The problem is, I don’t think this gets reset or updated during a reload. Deployment renames the working directory aside, then renames an all-new directory into place under the original name. It’s possible that even though the reload happens, the processes never change their working directory, so they are still looking at the original (now renamed) tree.
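A quick shell sketch of why the rename matters (Linux-specific, with made-up paths): a process’s working directory follows the directory inode, not the path name, so after the deployment-style rename the process is still sitting in the old tree.

```shell
#!/bin/sh
# Hypothetical demonstration of rename-under-a-running-process behavior.
base=$(mktemp -d)
mkdir "$base/app"
cd "$base/app"                  # the "service" starts here
mv "$base/app" "$base/app.old"  # deployment renames the live tree aside
mkdir "$base/app"               # ...and installs a fresh tree at the old name
readlink /proc/self/cwd         # reports ".../app.old": still the old tree
```

Note that plain `pwd` can still print the original path here, because the shell caches `$PWD`; `/proc/self/cwd` shows where the kernel actually has the process anchored.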
Things that had never failed before were failing; in particular, Template Toolkit was still using stale template files. I did some digging and found that it does have a cache by default, but it only holds entries for up to a second. Furthermore, because I was very strict about not having unexpected global state, we create a new Template object per request, so we shouldn’t be sharing the cache across requests.
Anyway, systemd does not actually document what it does with TemporaryFileSystem= on reload, so I can’t be sure. I ended up abandoning TemporaryFileSystem= entirely. I only have so much time to mess around, and there’s only so much “risk” that would be mitigated by improved isolation. It’s almost guaranteed that the individual applications have defects of their own, so an attacker wouldn’t need a multi-app exploit chain.
(Not in the sense that I know of any specific defects; in the sense that all software has defects, and some proportion of those can be leveraged into a security breach.)