Why are device pixels so meaningful that we get stuck designing around pixels, even though we "know" we should design for device-independent units?
The main characteristic of a pixel is that it is crisp. When rendered on a display with 50% more PPI, a 1px line will either be thinner (its physical size reduced) or antialiased (blurred). Doubling the PPI, on the other hand, lets those 1px lines render exactly as crisply, on precisely 2 physical pixels. (Even sharper rendering is possible, but Apple's 2x approach has the virtue of not altering any of the art.)
If a user wants to zoom so that features are physically 50% larger, the same problem of rendering 1-pixel features across 1.5px areas occurs, but this time we know we can't tweak the physical size. Antialiasing happens instead, resulting in a zoomed but blurry UI. Worse, subpixel rendering adds color fringing when text isn't drawn onto precisely the intended subpixels — and on Linux, the font rendering has already happened by the time the zooming layer gets to see the output.
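To make the mechanism concrete, here's a minimal sketch (my own illustration, not any real renderer's code) of box-filter coverage for a 1-logical-pixel line rasterized at two scale factors. At 2.0x the line lands exactly on two physical pixels, fully covered; at 1.5x one pixel gets fractional coverage, which is exactly the antialiasing blur described above:

```python
def rasterize_line(start, width, scale, n_px):
    """Per-physical-pixel box-filter coverage of a logical-space
    span [start, start+width) rendered at the given scale factor."""
    a, b = start * scale, (start + width) * scale
    coverage = []
    for px in range(n_px):
        # overlap of [a, b) with the physical pixel [px, px+1)
        overlap = min(b, px + 1) - max(a, px)
        coverage.append(max(0.0, overlap))
    return coverage

# A 1-logical-px line starting at logical x=1:
print(rasterize_line(1, 1, 2.0, 6))  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0] — crisp
print(rasterize_line(1, 1, 1.5, 6))  # [0.0, 0.5, 1.0, 0.0, 0.0, 0.0] — blurred edge
```

The 2.0x case produces only coverages of 0 or 1, so every physical pixel is fully on or fully off; the 1.5x case is forced to emit a half-covered pixel no matter how the line is positioned.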
Unless everything is lovingly hinted and/or provided at multiple PPI steps, there's basically no solution to the problem. I'm willing to bet that people will skip properly handling multiple PPI settings if it's any more complicated than supporting power-of-2 sizes. As long as pixels matter, which they will up to 600 PPI or more, people are going to design for pixels.