Hypercritical
LLVM and OpenGL revisited
Credit where credit is due.
One of the few benefits of not attending WWDC is that I get to actually write about things that attendees cannot, since they’re bound by a scary non-disclosure agreement. The disadvantage, of course, is that I was never actually told any of this NDA-encumbered information. That makes writing about it kind of hard.
Fortunately for curious Mac geeks (and perhaps unfortunately for Apple…or perhaps not), WWDC information leaks out to the net in a steady stream. There’s just no way to keep 4,200+ jazzed-up developers quiet when it comes to cool new technology, I guess.
Leaks are hardly as reliable as getting the information straight from an Apple engineer on stage, however. There’s an element of divination to the task of sorting through all the claims. Sometimes one leak will contradict another, and sometimes there are big holes in the information.
But post I must (hey, I get excited too), so I take my best shot at summing up the leaks and distilling the information into a sensible story. Sometimes I nail it. Other times, not so much. Case in point: my recent post about LLVM and OpenGL performance. Yes, LLVM is being used in Leopard to speed up certain OpenGL functions (this much was confirmed publicly by an Apple engineer), and yes, OpenGL performance was shown to double in a demonstration of multi-threaded OpenGL. But that speed-up had nothing to do with LLVM.
How do I know this? Well, a nice thing about posting obscure technical information derived from WWDC leaks is that the quality and authority of those leaks tend to increase dramatically when some published information needs to be corrected. Who can stand to see an incorrect attribution of performance increases in the Mac OS X OpenGL stack? Certainly not an engineer.
The LLVM-optimized OpenGL code in Leopard handles software vertex processing only, and is not normally triggered on GPUs that support hardware vertex processing (this includes all shipping Mac GPUs except the Intel integrated graphics hardware used in the MacBook, Mac mini, and the education-only version of the iMac). The high-end GPU used in the multi-threaded OpenGL demo of World of Warcraft had support for hardware vertex processing. Therefore, LLVM was not a factor in the demo.
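To make the distinction concrete, here's a minimal sketch of the decision being described, written in C with entirely hypothetical function names. It is not Apple's actual driver code; it just illustrates that the LLVM-compiled code sits on the software vertex processing fallback path, a path the demo machine's GPU never needed.

```c
/*
 * Illustrative sketch only -- not Apple's OpenGL implementation.
 * The idea: the OpenGL stack picks a vertex-processing path based on
 * what the GPU can do, and the LLVM-based optimization applies only
 * to the software fallback. All function names are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical capability query; a real driver learns this from the GPU.
   Intel integrated graphics (MacBook, Mac mini, education iMac): no
   hardware vertex processing. The discrete GPUs in other Macs: yes. */
static bool gpu_has_hardware_vertex_processing(void) {
    return false; /* pretend we're on the Intel integrated graphics */
}

/* Hypothetical software path: per-vertex code is generated and optimized
   at runtime, which is where an LLVM JIT would earn its keep. */
static void run_software_vertex_pipeline_llvm(void) {
    printf("software vertex processing (LLVM-compiled path)\n");
}

static void run_hardware_vertex_pipeline(void) {
    printf("hardware vertex processing on the GPU\n");
}

int main(void) {
    if (gpu_has_hardware_vertex_processing())
        run_hardware_vertex_pipeline();   /* the WoW demo machine took this path */
    else
        run_software_vertex_pipeline_llvm();
    return 0;
}
```

On MacBook-class hardware the software branch is where LLVM can pay off; on the high-end GPU used in the demo, that branch is simply never taken.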
That said, the folks at Apple are still very excited about LLVM. I’m sticking with my speculation about its possible future role in Mac OS X, compiling a lot more than just OpenGL software vertex processing code, even if it didn’t turn out to be the hero of the WoW demo at WWDC 2006. There’s always next year, LLVM.
This article originally appeared at Ars Technica. It is reproduced here with permission.