A hard look at L4 in Leopard
Is the L4 microkernel a realistic option for Apple?
The suggestion that Apple could use the L4 microkernel in Leopard had a very brief honeymoon period, it seems. Skeptical readers immediately chimed in by email and in the comments section, some rejecting the idea outright. Here are the most popular objections.
Mach does a lot more than L4. L4 is a very minimal microkernel. All of Mach's features that are missing from L4 would have to be re-implemented.
Some Mach APIs are exposed and supported by Apple in Mac OS X. Any replacement kernel would not only have to do everything that Mach currently does, it'd also have to provide the same APIs.
Mach's weaknesses would not be improved upon by L4. Context switching and IPC overhead would be just as bad or worse in L4.
At the end of my last post, I linked to the "Darbat" project, an actual port of the core of Mac OS X to the L4 microkernel. As some readers noted, Darbat doesn't actually replace Mach with L4. Instead, it runs Mach on top of L4.
Keeping Mach around solves the feature and API issues cited above. But doesn't this setup add even more overhead? If there are doubts that an L4-based Mac OS X kernel would be any faster than a Mach-based one, surely a kernel that layers Mach on top of L4 would be slower still.
I recently exchanged email with Charles Gray, project manager for the Darbat project, about the feasibility and wisdom of using L4 in Darwin, and about the possibility that Apple might use L4 in Leopard. To start, here's what he had to say about the decision to put Mach on top of L4 instead of entirely replacing it.
Vast amounts of user-land code rely on Mach - getting rid of it completely would be a huge task. Also, there's no really good reason to do so. Mach provides a lot of complex services which work and don't appear to be a performance bottle-neck. Re-writing them would buy you nothing. The overall strategy for [Darbat] is to basically optimise out the bits of Mach where it counts.
Darbat is still in the “get it to work” stage, however. Is the Darbat team confident that it can actually outperform the stock Mac OS X kernel? If so, why? Here's Charles Gray's answer.
L4 has far better performance on raw IPC cost, thread operations and VM operations. Of course, L4 also does less for you, but that's because you don't need all the features all the time.
Don't so much think of it as replacing Mach, but lifting Mach up and making short-cuts in and around the edges. […] We expect to find lower CPU usage (and hence better throughput) in heavily multi-tasked or multi-threaded systems since L4 can do much more light-weight scheduling, switching, synchronisation and message passing.
This means [that] complex systems with webservers, databases, applications and what not should be clear winners.
Performance isn't the only goal of Darbat. Increased stability is on the menu as well. In Mac OS X today, the Mach and the BSD portions of the kernel run in the same address space for performance reasons. In Darbat, the entire Mac OS X kernel (XNU) runs as a fully de-privileged application on top of L4. The same goes for IOKit device drivers. It's all in user space.
This arrangement demands that we circle back to performance, however. With all these pieces now in user space, won't this just add even more costly transitions between kernel space and user space? How in the world can this ever be faster than the current arrangement, where Mach and BSD share an address space and device drivers run in kernel mode? Charles Gray answers.
Not all context switches cost the same. When you do a Mach IPC, basically you end up scrubbing your L1 cache and vast amounts of L2. You spend a lot of time just fetching things from further down the memory hierarchy. L4 is specifically designed to be as cache friendly as possible on operations such as IPC.
Though Mach is not used as a true microkernel in Mac OS X, Darwin's internal discipline with respect to Mach also helps Darbat's cause.
Only a naive implementation switches eagerly. Proper microkernel designs avoid needless context switches. Darwin already tries to do this, so it makes our job easier.
Setting aside performance, the stability benefits of putting XNU and IOKit in user space are obvious. Buggy drivers can no longer bring down the entire OS, and even bugs in the “kernel” (meaning the XNU portion, which comprises the vast majority of the code) aren't fatal. This kind of added insurance against total system failure is a boon to server applications.
For desktop users, however, the rewards are less clear. If XNU crashes, for example, it'll also take down all the applications running on top of it. The fact that the core operating system is still running and can recover from the crash will be cold comfort to a user who has just seen all his applications disappear.
Nevertheless, it's unwise to discount any increase in abstraction. Windows NT famously moved its graphics drivers from user space to kernel space in version 4.0, decreasing system stability in order to increase performance. But with Windows Vista, Microsoft has revisited that decision. The Vista driver model aims to move some drivers back into user space. In the long view, the NT4 decision can be seen as a blip on a graph whose overall slope leads, inexorably, to more abstraction over time.
Viewed in this light, Darbat's user space XNU and IOKit look a bit more prescient. It may still be too early for obvious applicability to Mac OS X as a consumer product, but the day will inevitably come when the current, less-abstracted arrangement is seen as a liability.
So what's the final score for the L4-based Darbat project? It does an admirable job of maintaining compatibility—and avoiding a ton of difficult and unnecessary work—by essentially retaining all the public-facing Darwin kernel code as-is. Moving IOKit and XNU to user space provides a healthy dose of future-proofing, if not necessarily huge benefits to end users in the short term. And if, as the Darbat folks confidently predict, this can all be accomplished while also increasing performance, it seems like we have a winner.
Well, in theory, anyway. Darbat is still a work in progress. And so we (finally) come back to Leopard. Darbat may turn out to be the bee's knees, but does it, or L4 in general, have anything to do with Leopard? I asked Charles Gray point-blank if Apple is using L4 in Leopard. His answer was succinct.
We have no idea. If Apple uses L4 in Leopard it's got nothing to do with us. We would find it surprising, however, since the L4 community is pretty close-knit.
This seems like a mortal blow to the L4-in-Leopard theory. People keeping secrets or under NDA usually either refuse to talk at all, or give the old “no comment” line when questioned on sensitive topics. A flat denial is pretty clear.
Perhaps more damning is the notion that the “L4 community” is entirely unaware of any Apple involvement or interest in L4. Granted, Apple is pretty good at keeping secrets, but this good? Even the x86 transition (or rather, the possibility thereof) was telegraphed as far back as 1999. But so far, Apple has given zero indication that it has any interest at all in L4.
Apple hasn't (as far as I know) hired any L4 developers, and it's not involved in the most prominent project that aims to add L4 to the core of Mac OS X. Heck, the entire idea that Mach is going to be replaced/supplemented to begin with is just a desperate attempt to explain Apple's very strange behavior surrounding the x86 Tiger kernel source code.
Ah well, kernel replacement was always a long shot. If it happens at all, I still think L4, particularly in the arrangement used by Darbat, would be a very good fit for Apple. But in the Leopard timeframe? Considering that Darbat itself isn't even close to meeting that deadline, I find it hard to believe that some super-secret L4 project inside Apple could do so.
That's life in the world of Mac rumors and speculation, I suppose. I can only imagine what Apple's kernel engineers would think of all this guesswork. Whether they'd be snorting with derision or giggling with glee, I can't help but think they're at least a little glad that there are other geeks out there who find the future of the Mac OS X kernel as interesting as they do.
Many thanks to Tom Birch and Charles Gray from Darbat, and to all my readers for their insights and enthusiasm.
This article originally appeared at Ars Technica. It is reproduced here with permission.