Hypercritical


Declaration of resolution-independence

Scaling the Mac OS X user interface.

Dave Hyatt recently posted a detailed explanation of how Safari will handle what he calls “high DPI web sites.” The article contains a nice description of the pixel density issue, albeit from the perspective of a single application. The broader solution to this problem, “scalable UI” or “resolution-independent UI,” exists at the operating system level. Apple laid the groundwork for this technology in Mac OS X 10.4 Tiger. At the time of Tiger’s introduction, I wrote this:

Which will come first, the affordable 300dpi display or the resolution-independent version of Mac OS X? The race is on.

Apple has consciously chosen to stick to roughly 100dpi on all of its screens in the past few years. This bothers people like me who want higher-density displays. (The 1024x768 resolution of the 12-inch PowerBook was what pushed me to buy the 15-inch model instead.) But 100dpi is a reasonable compromise. And without a resolution-independent user interface across the entire OS, a compromise is definitely required. Although individual applications can provide their own means to scale content (e.g., increasing the text size in a web browser), this doesn’t help global elements like menus and buttons. The whole UI needs to scale, not just the content in individual applications.

The typical geek reaction to this problem can be summed up in one word: “Vectors!” Text scales nicely because the letter shapes are defined by a series of parameterized equations (vectors) that can be “solved” at any resolution, yielding smooth curves as the characters change size. So, the geek thinking goes, why not draw everything with vectors? Scrollbars, menus, buttons, icons, everything! Problem solved!
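
To make the “solved at any resolution” idea concrete, here’s a minimal sketch in Swift (purely illustrative; real font rasterizers add hinting, anti-aliasing, and much more) that evaluates a cubic Bézier curve, the basic primitive of glyph outlines, at two different scale factors:

```swift
import Foundation

struct Point { var x, y: Double }

// Evaluate a cubic Bézier curve at parameter t in [0, 1]. Because the
// shape is an equation, it can be re-solved at any scale and sampling
// density without ever losing smoothness.
func cubicBezier(_ p0: Point, _ p1: Point, _ p2: Point, _ p3: Point,
                 at t: Double) -> Point {
    let u = 1 - t
    return Point(
        x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
        y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y)
}

// The same arc at 100dpi and 300dpi: scale the control points,
// re-solve the equation, take more samples. The curve stays smooth
// because there are no pixels to blow up.
let (p0, p1, p2, p3) = (Point(x: 0, y: 0), Point(x: 25, y: 100),
                        Point(x: 75, y: 100), Point(x: 100, y: 0))
for scale in [1.0, 3.0] {
    let steps = Int(100 * scale)  // sample density tracks resolution
    let samples = (0...steps).map { i -> Point in
        let p = cubicBezier(p0, p1, p2, p3, at: Double(i) / Double(steps))
        return Point(x: p.x * scale, y: p.y * scale)
    }
    print("scale \(scale)x: \(samples.count) smooth samples")
}
```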

Congratulations. Now you have two more problems. Problem number one is finding a way to draw all those vectors efficiently. This is a “sort-of-solved” problem in that we know what to do, and even how to do it in hardware. Unfortunately, such hardware is not commonplace. Most video cards in use today accelerate only “old style” raster-based 2D graphics, and the 3D capabilities tend to favor speed over quality and consistency.

It’s possible that existing consumer video cards could be coerced into doing efficient vector drawing in hardware. Apple tried to do just that in Tiger, but then had to back off at the last minute and disable the feature in the shipping version of the OS. It remains disabled to this day.

Apple learned some important lessons from its efforts. It’s hard to make different consumer-level video cards produce exactly the same output when drawing hardware-accelerated, composited, vector graphics. It’s often difficult (and sometimes impossible) for some video cards to match the level of quality achieved when the CPU does all the drawing.

Perhaps most importantly, applications need to change their drawing behavior—the APIs they call and the frequency with which they call them—to take advantage of a hardware-accelerated drawing system. In fact, the optimal behavior of applications under acceleration is often the opposite of what works best for unaccelerated drawing. The performance of unmodified applications can even decrease under hardware acceleration in some cases.
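
To illustrate that flip in toy form (everything here is invented for the example, not a model of any real graphics API), compare per-primitive submissions against a single batched one:

```swift
// A fake device that just counts how many times work is submitted.
// Under acceleration, each submission carries fixed overhead (state
// setup, a trip through the driver), so fewer, larger submissions win.
struct CountingDevice {
    private(set) var submissions = 0
    mutating func submit(_ primitives: [String]) { submissions += 1 }
}

let primitives = (0..<1000).map { "rect\($0)" }

// The unaccelerated habit: draw each element as you go.
var immediate = CountingDevice()
for p in primitives { immediate.submit([p]) }

// The accelerated habit: accumulate, then flush once.
var batched = CountingDevice()
batched.submit(primitives)

print("immediate: \(immediate.submissions) submissions")  // 1000
print("batched:   \(batched.submissions) submission")     // 1
```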

Even in the best case, a move to “vectors everywhere” requires a transition period: for new consumer video cards to provide better support for the required features, for consumers to buy those cards (or new computers that include them), and for developers to change the behavior of their applications. That’s a lot to overcome. It’d take some time, but it’s possible.

On to problem number two. Text and other line art are well suited to being described by vectors. Photographs, on the other hand, are not. It’s possible to create photorealistic vector artwork, but it often degenerates into vector elements roughly the size of pixels. At that point, you’ve essentially created a computationally intensive bitmap. Even with hardware acceleration, that type of image can be taxing to draw. Now imagine that every icon in the Dock is a scaled photorealistic vector image. That’s a heck of a lot of GPU cycles to spend drawing a single interface element, and that’s before even considering the actual content on the rest of the screen.
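
A back-of-the-envelope comparison shows why. The numbers below are hypothetical (a 128x128 icon, 32 bits per pixel, a guessed 32 bytes of path description per vector element), but the shape of the problem is real:

```swift
let side = 128
let pixels = side * side                 // 16,384 pixels

// Bitmap: one block of memory, one blit.
let bitmapKB = pixels * 4 / 1024         // 64 KB at 32 bits/pixel

// "Photorealistic" vectors degenerated to pixel-sized elements: one
// path, color, and fill per pixel, at an assumed ~32 bytes each.
let vectorKB = pixels * 32 / 1024        // ~512 KB of path data
print("bitmap: \(bitmapKB) KB, 1 draw operation")
print("vector: ~\(vectorKB) KB, \(pixels) fill operations")
```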

These are all tractable problems, but at a certain point the approach bears reexamining. The goal is to be “scalable,” not necessarily to be able to scale indefinitely. There’s little need for a perfectly rendered, exquisitely detailed “OK” button to fill an entire 30-inch screen, for example. Practically speaking, “bounded scaling” is what we’re really talking about here. Let’s call it from 72dpi to 600dpi. That’d cover the next decade or so of display technology progress. Thinking longer-term, at around 1,200dpi there’s a point of very rapidly diminishing returns for increased density.

Given this newly refined goal of bounded scaling, there’s a solution that has nearly all of the benefits of “vectors everywhere,” but none of the drawbacks. It’s the same solution that Hyatt describes in his post about Safari. (See, I came back to it eventually.) Let’s continue to use bitmaps where they’re the most appropriate format. Simply provide higher-resolution bitmaps for high-DPI displays. Given a realistic upper-bound on pixel density, it’s possible to ensure a high-quality appearance with reasonably sized bitmaps.

As mentioned in my Tiger review, Apple has already doubled the maximum resolution of icon resources in Mac OS X. If Apple does the same thing for all other bitmaps that make up the user interface, then the Leopard UI can easily scale to accommodate twice the current pixel density: 200dpi screens. If Apple triples the resolution of all bit-mapped resources, then we’re ready for those fabled “affordable 300dpi screens,” whenever they arrive. No vectors required.
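
Here’s a sketch of what resource selection might look like under that scheme. The file-naming convention and the selection rule are my own invention, not a description of anything Apple ships; the key idea is to prefer scaling a larger bitmap down over scaling a smaller one up:

```swift
struct BitmapResource {
    let scale: Double     // the pixel density this variant was drawn for
    let filename: String
}

// Hypothetical variants: 1x (~100dpi), 2x (~200dpi), 3x (~300dpi).
let variants = [
    BitmapResource(scale: 1.0, filename: "button.png"),
    BitmapResource(scale: 2.0, filename: "button@2x.png"),
    BitmapResource(scale: 3.0, filename: "button@3x.png"),
].sorted { $0.scale < $1.scale }

// Pick the smallest variant at or above the requested UI scale;
// downsampling preserves detail, upsampling invents it.
func bestVariant(for uiScale: Double) -> BitmapResource {
    variants.first { $0.scale >= uiScale } ?? variants.last!
}

print(bestVariant(for: 1.0).filename)  // button.png
print(bestVariant(for: 1.5).filename)  // button@2x.png, scaled down
print(bestVariant(for: 3.0).filename)  // button@3x.png, used as-is
```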

Increased memory footprint is a drawback of this approach, of course. But I think it’s a sensible trade of RAM for CPU/GPU cycles. More importantly, sticking with bitmaps doesn’t require a potentially painful and time-consuming transition to newly capable consumer video hardware and a comprehensive overhaul of every application’s drawing code before the benefits can be realized.
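
To put a rough number on that trade (the icon size and pixel depth are illustrative): doubling linear resolution quadruples the pixel count, and tripling it is a factor of nine:

```swift
let baseSide = 128                       // a 128x128 icon at ~100dpi
for factor in [1, 2, 3] {
    let scaled = baseSide * factor
    let kb = scaled * scaled * 4 / 1024  // 32 bits per pixel
    print("\(factor)x (\(scaled)x\(scaled)): \(kb) KB")
}
// 1x: 64 KB, 2x: 256 KB, 3x: 576 KB. Real memory, but bought
// without new video hardware or rewritten application code.
```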

Finally, this solution is not the same thing as “vectors nowhere.” A lot of the Mac OS X UI does actually lend itself well to vector drawing. Aqua buttons, for example, can be created fairly easily using only a few shapes and gradients. When the hardware and software are ready, there’s nothing stopping Apple from transitioning individual UI elements to vectors. They’ve got a resolution-independent drawing API ready and waiting to do the job.

As Hyatt puts it in a follow-up to his original post, “The fact that Web content will be zoomed (either via a browser-level feature or an OS-level feature) is inevitable.” I’ve been going back and forth on whether I think Leopard will have interface scaling enabled, or if we’ll have to wait for 10.6 for everything to come together. The work that Hyatt and company are putting into getting Safari ready for high-DPI screens makes me lean towards a 10.5 coming-out party for scalable UI. I hope I’m right. After all, the longer I have to wait for higher-density Apple laptop screens, the closer I get to the age where I can’t read one at its native scale factor anyway.


Update: apparently there’s widespread misunderstanding about the basic principles behind Dave Hyatt’s post. Determined fellow that he is, Hyatt has posted a third explanation in an attempt to get everyone on board the clue train.


This article originally appeared at Ars Technica. It is reproduced here with permission.