Hypercritical


April 2013


The Lottery

In a recent podcast, I rejected the idea of a lottery system for selling WWDC tickets as too random. I wanted to preserve at least some aspect of the process that rewarded the most enthusiastic Apple fans: the people who are willing to be roused from bed at 2 a.m. and rush to their computers to buy tickets; the crazy ones; the people who just want it more.

After yesterday’s experience of watching WWDC tickets sell out in what I measured to be less than 2 minutes, I’ve changed my mind. If the tickets had sold out in, say, 10 minutes (and assuming no server errors—more on that in a moment), then dedicated buyers would have been rewarded. If you couldn’t be bothered to be online until more than 10 minutes after the tickets went on sale, well, tough luck. Someone else wanted it more.

But tickets selling out in less than 2 minutes does not reward anyone’s dedication. We were all online at 10 a.m. PDT sharp, all ready to purchase, all equally dedicated. It was a de facto lottery, with an extra layer of pointless stress added on top.

Apple’s servers performed admirably…for about the first 5 seconds after tickets went on sale. After that, it was a crapshoot. Even if the tickets had sold out in an hour, it’d still effectively be a lottery if that hour was filled with server errors. You’d “win” if you happened to get through the purchase process with no errors.

An actual lottery, pre-announced, with no time pressure for entry, would be more equitable than what happened yesterday. That’s what I recommend for next year.

The Heart of the Matter

Many more people want to attend WWDC than the conference can accommodate. There has been no shortage of interesting suggestions for how to fix this. Broadly speaking, WWDC has not changed in decades. Apple and its developer ecosystem, on the other hand, are radically different than they were just five years ago. Something has to give.

I’ve heard many non-developers discuss the rush to get WWDC tickets as if the big draw is the keynote presentation, where Apple typically reveals new products. That is the most interesting part of the conference for the public, but it’s not why WWDC sells out so fast.

Developers flock to WWDC because it’s a rare opportunity to communicate with Apple directly, human to human. The best way to decrease the demand for WWDC tickets is for Apple to increase its communication with developers throughout the year. And by communication I don’t mean throwing documentation or even video presentations over the wall to developers; I mean staffing up for more real, personal, timely, informal contact with developers outside the court-like atmosphere of the App Store review process or the artificial scarcity of Technical Support Incidents.

Apple’s decision to release WWDC session videos to all registered developers during the conference was long overdue, but it clearly didn’t decrease demand for WWDC tickets enough to make a difference. Maybe next year, after developers have experienced their first tape-delayed WWDC, it will make a dent. But I really believe that increased, improved communication between Apple and developers on all fronts is the best long-term solution.


Code Hard or Go Home

Come at me, Bro

When Apple decided to make its own web browser back in 2001, it chose KHTML/KJS from the KDE project as the basis of its rendering engine. Apple didn’t merely “adopt” this technology; it took the source code and ran with it, hiring a bunch of smart, experienced developers and giving them the time and resources they needed to massively improve KHTML/KJS over the course of several years. Thus, WebKit was born.

In the world of open source software, this is the only legitimate way to assert “ownership” of a project: become the driving force behind the development process by contributing the most—and the best—changes. As WebKit raced ahead, Apple had little motivation to help keep KHTML in sync. The two projects had different goals and very different constraints. KDE eventually incorporated WebKit. Though KHTML development continues, WebKit has clearly left it behind.

When Google introduced its own web browser in 2008, it chose WebKit as the basis for its rendering engine. Rather than forking off its own engine based on WebKit, Google chose to participate in the existing WebKit community. At the time, Apple was clearly the big dog in the WebKit world. But just look at what happened after Google joined the party. (Data from Bitergia.)

[Chart: WebKit reviewed commits per company]
[Chart: WebKit active authors per company]

Given these graphs, and knowing the history between Apple and Google over the past decade, one of two things seemed inevitable: either Google was going to become the new de facto “owner” of WebKit development, or it was going to create its own fork of WebKit. It turned out to be the latter. Thus, Blink was born.

Google has already proven that it has the talent, experience, and resources to develop a world-class web browser. It made its own JavaScript engine, its own multi-process architecture for stability and code isolation, and has added a huge number of improvements to WebKit itself. Now it’s taken the reins of the rendering engine too.

Where does this leave Apple? All the code in question is open-source, so Apple is free to pull improvements from Blink into WebKit. Of course, Google has little motivation to help with this effort. Furthermore, Blink is a clearly declared fork that’s likely to rapidly diverge from its WebKit origins. From Google’s press release about Blink: “[W]e anticipate that we’ll be able to remove 7 build systems and delete more than 7,000 files—comprising more than 4.5 million lines—right off the bat.” (There’s some streamlining in the works on the other side of the fence too.)

Does Apple—and the rest of the WebKit community—have the skill and capacity to continue to drive WebKit forward at a pace that matches Google’s grand plans for Blink? The easy answer is, “Of course it does! Apple created the WebKit project, and it got along fine before Google started contributing.” But I look at those graphs and wonder.

The recent history of WebKit also gives me pause. Google did not want to contribute its multi-process architecture back to the WebKit project, so Apple created its own solution: the somewhat confusingly named WebKit2. While Google chose to put the process management into the browser application, Apple baked multi-process support into the WebKit engine itself. This means that any application that uses WebKit2 gets the benefits of multi-process isolation without having to do anything special.

This all sounds great on paper, but in (several years of) practice, Google’s Chrome has proven to be far more stable and resilient in the face of misbehaving web pages than Apple’s WebKit2-based Safari. I run both browsers all day, and a week rarely goes by where I don’t find myself facing the dreaded “Webpages are not responding” dialog in Safari that invites me to reload every single open tab to resume normal operation.

Princes of Android

Having the development talent to take control of foundational technologies is yet another aspect of corporate self-reliance. Samsung’s smartphone business currently relies on a platform developed by another company. Leveraging the work of others can save time and money, but Samsung would undoubtedly be a lot more comfortable if it had more control over the foundation of one of its most profitable product lines.

The trouble is, I don’t think Samsung has the expertise to go it alone with a hypothetical Android fork. Developing a modern OS and its associated toolchain, documentation, developer support system, app store, and so on is a huge task. Only a handful of companies in history have done it successfully on a large scale—and Samsung’s not one of them. Sure, it’s possible to staff up and build that expertise, but it’s not easy and it requires years of commitment. I’d bet against Samsung pulling it off.

Facebook Home can also be viewed through the lens of developer-based self-reliance. Facebook clearly wants to make sure it’s an important part of the future of mobile computing, but that’s not easy to do when you’re “just a website.” Home lets Facebook put itself front and center on existing Android-based smartphones.

It seems unwise for Facebook to build its mobile strategy on the back of a platform controlled by its mortal enemy, Google. But perhaps Home is just the first step of a long-term plan that will eventually lead to a Facebook fork of Android. If so, the question inevitably follows: can Facebook really take ownership of its own platform without help from Google?

Facebook has proven that it can expand its skill set. Over the past few years, it’s been hiring talented designers and acquiring companies with proven design chops. Facebook Home is the first result of those efforts, and by all accounts, the user interface exhibits a level of polish more commonly associated with Apple than Facebook.

Still, a lock screen replacement is a far cry from a full OS. Maybe Facebook just plans to ride the bear, relying on Google to do the grunt work of maintaining and advancing the platform for as long as it can, while Facebook slowly takes over an increasing amount of the user experience.

Some people wonder how Google can possibly have any power in the Android ecosystem if the source code is free. Facebook Home has been cited as an example of Google’s ineffectualness. Look at how one of Google’s fiercest enemies has played it for a fool, they say. Google did all the hard work, then Facebook came in at the last minute and co-opted it all for its own purposes.

But look again at the graphs above. Now imagine similar graphs for the Android source code. Any company with Android-based products that wants to be truly free from Google’s control has to be prepared—and able—to match Google’s output. Operating systems don’t write themselves; platforms don’t maintain themselves; developers need tools and support; technology marches on. It’s not enough just to fix bugs and support new hardware. To succeed with an Android fork, a company has to drive development in the same way that Apple did when it spawned WebKit from KHTML, just as Google is doing as it forks Blink from WebKit.

This is not a real-time strategy game. Companies like Samsung and Facebook can’t just mine for more resources and build new developer barracks. Building up expertise in a new domain takes years of concerted effort—and a little bit of luck on the hiring front doesn’t hurt, either.

Facebook may already be a few years into that process. Its recent acquisition of the mysterious, possibly-OS-related startup osmeta provides another data point. Samsung, meanwhile, has just joined an exploratory project to develop a new web rendering engine.

Google certainly has its own share of problems, but what may save it in the end is its proven ability to tackle ambitious software projects and succeed. The challenge set before Facebook, Samsung, and other pretenders to the Android throne is clear. And as a wise man once said, you come at the king, you best not miss.


Technological Conservatism

Technology can be a surprisingly ideological topic. In politics, the spectrum of belief is right on the surface: conservative/liberal, right/left. In tech, that same spectrum exists, but it’s rarely discussed. What’s more, unlike political beliefs, I’m not sure most people are even aware of their own core ideas about technology.

Anyone who’s read the past three months of posts on this site could be forgiven for pegging me as a technological ideologue. Though I draw the line at outright dogmatism, railing against technological conservatism has indeed been a recurring theme of mine.

To illustrate the concept, I’ll use myself as an example. Back in the early days of the operating system now known as OS X, I was not happy that the user-customizable Apple menu from classic Mac OS had been replaced with an anemic, non-customizable incarnation. In classic Mac OS, the Apple menu was how I quickly found and launched commonly used applications and Desk Accessories. Apple removed this feature in Mac OS X and replaced it with…nothing, really. The Dock attempted to cover some of the same bases, but the Apple menu could comfortably hold many more items, and in a much more compact form.

In this situation, a technological-conservative position is that Mac OS X needs something like the classic customizable Apple menu. It wouldn’t necessarily have to be an Apple icon in the upper-left corner of the screen. It could be a hierarchical menu spawned from the Dock or another screen corner. (This was actually a popular request back in the days before the Dock supported any form of hierarchy.) The old OS had a feature like this, and it was useful. The new OS needs a similar feature, or it will be less useful.

Beneath what seems like a reasonable feature request lurks the heart of technological conservatism: what was, and is, always shall be.

In my review of the public beta, I was self-aware enough to moderate my position, merely asking for “some sort of mechanism that equals or betters the functional merits of the Apple Menu.” But what my conservatism prevented me from seeing was that things like LaunchBar, Quicksilver, and (later) Spotlight would provide similar functionality in an entirely different way, and with far more efficiency and elegance.

No one wants to think of themselves as a Luddite, which is part of what makes technological conservatism so insidious. It can color the thinking of the nerdiest among us, even as we use the latest hardware and software and keep up with all the important tech news. The certainty of our own tech savvy can blind us to future possibilities and lead us to reject anything that deviates from the status quo. We are not immune.

Previously on Hypercritical…

Consider four of my recent posts, each of which, in its own way, pressed uncomfortably against the dark matter of technological conservatism among tech nerds.

In response to The Case for a True Mac Pro Successor, a few readers insisted that there’s no longer anything technically interesting about high-performance personal computers. A new Mac Pro would just be a pair of the latest Xeons, some ECC RAM, a few SSDs and/or hard drives, and a big, hot video card.

That’s what the Mac Pro has been, so that’s what it will always be, right? And there it is.

Even explicitly listing several technologies that debuted on Apple’s high-end Macs did not derail the people whose feedback was based on the premise that the Mac Pro will never be anything that it is not already. This assumption is counter to the entire purpose of a product like the Mac Pro. It’s meant to push the envelope, to seek out new frontiers of computing power.

In Don’t Stop Thinking About Tomorrow, I tackled technological conservatism head on—though without naming it—by addressing the surprisingly widespread notion that the iPhone 5 is “too light.” This criticism leans heavily on the seductive view of the present as an endpoint, rather than just another step in a journey towards something radically different. (For a long time, I avoided writing the post you’re reading now because it felt like a retread of this older one. But I eventually decided that these ideas bear repeating. Do not be surprised when both posts arrive at a similar conclusion.)

Fear of a WebKit Planet was a celebration of what turned out to be the tail end of peacetime in the browser wars. (Well, maybe it was really just a cold war turning hot again.) The post addressed the fear that “WebKit everywhere” would lead us into another dark age of web development. Even before Google’s fork of WebKit, I noted that WebKit was a lot more like Linux than IE6, and that “the products built with WebKit are as varied as those built with Linux.” Pondering that variety, the idea of a homogeneous, stagnating WebKit monoculture seemed extremely unlikely. I didn’t have to wait long for confirmation.

Uphill, Both Ways

Finally, the point of Annoyance-Driven Development was completely blotted out in the minds of a few readers by the audacious suggestion that a beloved service remains ripe for further improvement. This post revealed technological conservatism in its most virulent form: not only is the current state of affairs satisfactory, but wanting more is evidence of a character flaw, perhaps even a moral failing.

I find this idea absurd in its present-day context, and numerous analogous historical contexts immediately spring to mind as a means to persuade those who don’t. The trouble is, I can also imagine those same people taking the same technological-conservative positions in all the historical contexts as well. How far back in time do I have to go before it finally clicks?

Poor baby, you have to wait a whole day after a new episode airs on cable before it magically appears on your silent, $99, network-connected TV box.

Walking to the mailbox, unsealing an envelope, and sticking a disc into a slot under your TV is too much work, is it? Now you need to be able to start watching a movie without even picking your lazy ass up off the couch?

Oh no! There are rooms in your house where you don’t have instant access to the sum of all human knowledge! And running wires is just so hard, isn’t it? Those few cents for zip ties to keep yourself from tripping over the wires will obviously break the bank. The prince demands radio-based networking everywhere in his castle!

I guess it’s just too much work to walk out the front door five steps, pick up the newspaper that was delivered while you slept, and then bring it back to your kitchen table each morning to read the news of the world. Now you want it to appear instantly on your computer screen. OK, Mr. Fancypants Bigshot.

Yeah, pressing seven buttons in sequence is so much work. You need a faster way to call someone. Pressing just one button instead will be such a big change in your life, won’t it? You’ll finally have time to write that novel.

You’ve got a way to send a piece of paper from your home to anywhere in the entire country for literal pocket change, but that’s just too much work for you. You need to talk to someone right now, hearing an actual voice as if it’s in the same room instead of miles away.

You are warmed by the sun for nearly all your waking hours, but I guess that’s not good enough for you. No, you’re so important that you need to have light and heat at night as well. What you need, you precious snowflake, is a miniature artificial sun that’s under your control—obviously!

The Unreasonable Man

At some point, we’re all guilty of looking down upon things that have changed since our own formative years, but this attitude has no place in technology criticism—and it’s absolute poison for anyone trying to create great tech products and services. Not all new ideas represent progress. (Do I really need to spell this out? It seems so.) But ideas should not be rejected based merely on a lifetime of having lived without them. Today’s “unnecessary” frill is tomorrow’s baseline.

As the famous saying goes, the reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

Every great scientific and engineering triumph in human history has been a slap in the face of technological conservatism—the little ones, perhaps even more so. And yet each new step forward, no matter what the size, is inevitably met with a fresh crop of familiar objections. “Just look at what you have already, and it’s still not enough for you. Where does it end?”

It doesn’t. It never ends. Keep moving or get out of the way.