In the Spring of 2019, I was looking for a way to promote one of our time-limited merchandise sales for Accidental Tech Podcast. As part of these sales, we receive promo codes from our vendor for hitting certain milestones. Each promo code is good for a free t-shirt (including free shipping). I decided to give away these promo codes to fans on Twitter.
I wanted to do it in a fun way, perhaps with an Apple-themed trivia contest. Sadly, most trivia succumbs immediately to the power of a web search engine. I needed something that wasn’t so easy to Google. My first attempt was to post some hand-drawn line art, then ask people to identify it. Since I’d just created the drawing, I knew it wouldn’t be in any search results. And the crude nature of the art meant that a Google image search wouldn’t turn up any matching photos.
It worked (I think), but I couldn’t come up with anything to draw after that. Instead, I posted a small portion of a larger image which I asked people to identify. Again, success. The image I’d chosen happened to be a frame from a TV show, and that gave me an idea.
From that point on, I’d post a small portion of a frame and then ask people to identify the movie or TV show from which it was extracted. I created a notes document to keep track of everything, and I titled it “Frame Game.”
Since then, I’ve posted almost sixty frames over three years, including a few excursions into audio. People seem to enjoy it. Movies and TV shows are great, and who doesn’t like free stuff?
What I enjoy the most about Frame Game is the process of carefully selecting the frame and the crop such that people who are very familiar with the piece of media will be able to guess the answer, while people who are not will be absolutely dumbfounded that anyone was able to figure it out at all, let alone so quickly. The best example of this was when I posted a tiny, 64-pixel square from a 1920 x 800 frame that was guessed in one minute and four seconds.
Have some people figured out how to use computers or web searches to brute-force this game? Almost certainly. But it makes me happier to believe that most people are playing it legitimately. I’d like to humbly suggest that playing for real will make the players happier too.
Frame Game has taken place entirely on Twitter, and it’s meant to be played in real time. Unfortunately, the way I’ve chosen to chain the tweets does not make it particularly easy to follow in the Twitter archives. In an effort to better preserve the historical record, I’ve created my own archive, linked below.
There is no score-keeping, but you can “play” the game by attempting to guess the answer before clicking to reveal the full frame. If you cheat now, you’re only cheating yourself! Some frames also have hints that show ever-larger portions of the frame. (Hold down the Option key when clicking the button to reveal the full frame immediately without seeing any hints.)
I’ve had to resort to posting hints a few times during Frame Game, but the history viewer contains all the hint frames that I had prepared, regardless of whether or not they were needed. I’ve also linked to the original tweet, the declaration of the winner, and the winning tweet itself, if available. (Some winning tweets have since been deleted.) The time elapsed since the question was posted is also shown.
If you like this kind of thing and want to play something similar every day, check out the recently released, Wordle-inspired framed.wtf.
There is no schedule for Frame Game, other than usually coinciding with one of ATP’s seasonal merchandise sales. I’m not even sure if it helps increase sales at all. It’s just something fun that I like to do for the handful of fans who like to participate. If you want to play, follow me on Twitter and watch for a tweet that begins with the magic phrase, “The first person to identify…”
Frame Game can start at any time, so be vigilant!
My unsolicited streaming app spec has garnered a lot of feedback. I’m sure streaming app developers already gather feedback from their users, and I’m also sure that the tone of my post has skewed the nature of the feedback I received. Nevertheless, for posterity, here’s how people are feeling about the streaming video apps they use.
The number one complaint, by far, was that streaming apps make it too difficult to resume watching whatever you were already watching. As I noted earlier, conflicting incentives easily explain this, but people still hate it. A reader who wished to remain anonymous sent this story of how customer satisfaction gets sacrificed on the altar of “engagement.”
There was an experiment at Hulu last year to move “Continue Watching” below the fold (down 2 rows from where it was). This was done with a very small group of users. It was so successful that the increased engagement was projected to generate more than $20 million a year. The experiment was immediately ended and the new position was deployed to all users.
While I understand (and largely agree with) your frustration that your “in progress” show isn’t the top feature, you can argue that [making new content more prominent] provides the user more value as they discover content they wouldn’t have otherwise (hence the increased engagement).
This is definitely a case of “be careful what you measure.” I don’t doubt that whatever metric is being used to gauge “engagement” is indeed boosted by burying the “Continue Watching” section, but I must emphasize again, according to the feedback I received, people hate this practice with a fiery passion. It makes them dislike the app, and sometimes also the streaming service itself.
I don’t think any engagement-related metric is worth angering users in this way—even if it really does help users discover new content or stay subscribed longer. I’m reminded of the old saying, “People won’t remember what you said, but they will remember how you made them feel.” It applies to apps as well as people.
(Furthermore, given the fact that seemingly every popular streaming app does this to some degree, there’s an opportunity to seize a competitive advantage by becoming the first app to stop this user-hostile practice.)
The second biggest category of feedback was about detecting, preserving, and altering state. Apps that do a poor job of deciding when something has been “watched” drew much ire. (Hint: most people don’t sit through all the ending credits.) Compounding this is the inability to manually mark something as watched or unwatched. Failure to reliably sync state across devices is the cherry on top.
People don’t feel like they are in control of their “data,” such as it is. The apps make bad guesses or forget things they should remember, and the user has no way to correct them. Some people told me they have simply given up. They now treat their streaming app as a glorified search box, hunting anew each time for the content they want to watch, and keeping track of what they’ve already watched using other means, sometimes even using other apps. (I imagine this flailing on each app launch may read as “increased engagement.”)
Finally, there was a long tail of basic usability complaints: text that’s too small; text that’s truncated, with no way to see more; non-obvious navigation; inscrutable icons and controls; and a general lack of preferences or settings, leaving everyone at the mercy of the defaults. Oh yeah, and don’t forget bugs, of course. Multiple people cited my personal most-hated bug: pausing and then resuming playback only to have it start playing from a position several minutes in the past. Have fun trying to fast-forward to where you actually left off without accidentally spoiling anything for yourself by over-shooting!
While again acknowledging how the nature of my original post (and my audience in general) surely affects the feedback I receive, I think it’s worth noting that no one—not a single person—wrote to tell me how much they loved using their streaming app. I didn’t expect to get much pushback on a post criticizing something so widely maligned, but I did expect to get some. I’m sure many people do enjoy their streaming app of choice, especially if it’s one of the more obscure, tech-oriented ones like Plex or Channels, but the overall sentiment is clear. Do streaming services care? I think they should.
Thanks to either my opinionated nature or the fact that I have voiced my opinions on various podcasts for years, people often ask me to recommend products. Which Mac should I buy? What’s the best microwave oven? What kind of car should I get for a family of four?
Now, I’m no Wirecutter or Consumer Reports. I’m just one person. With a few exceptions, I don’t have personal experience with more than a handful of individual products in a given category. But I know a good product when I see it (and use it).
This page lists some products that I consider “good.” This may sound like a low bar, but sometimes “good” is as good as it gets for a certain type of product. Even with this lenient standard, the list is not long. As with my Great Games list, I will add products to this page over time. I may also remove or replace products if something better comes along.
If you buy something after following a product link on this page, I may receive money through the seller’s affiliate program. (Not all retailers have affiliate programs, and not all products are eligible for affiliate payments.)
I love toaster ovens, and I’ve personally tested many of them over the years. Casey Liss, my friend and ATP co-host, tells the tale of the strange confluence of events that led me to try so many toaster ovens, and provides links to listen to my (audio) reviews of each one, if you want all the gory details. If you just want my recommendation, it’s (still) the Breville 650 XL. (It’s also available at Amazon.)
There are two caveats about this toaster oven. First, it’s bigger than you might expect: 16.5 inches wide, 13 inches deep, and 9.5 inches high. Measure your counter space before purchasing this beast. Second, the knob-feel is terrible: loose, imprecise, unsatisfying.
As a product, this is merely a good toaster oven. But if you can get past its user-interface foibles, it does a great job actually toasting (or cooking) things. I’ve had mine for a decade, and I’ve still not found anything better.
If you have too little counter space for the Breville and want a toaster oven that can toast bread both well and quickly, consider the Panasonic FlashXpress. I think its user interface is subpar—confusing, poorly arranged buttons clustered below the door—but it’s a speed demon when it comes to making toast.
Breville also makes a smaller 450 XL model that is not quite as powerful as its big sister, and not quite as fast as the Panasonic, but it’s a good choice if you like the Breville’s proportions and UI.
(And, no, I don’t have any recommendations for slot toasters. Toaster ovens forever.)
The OXO Good Grips Solid Stainless Steel Ice Cream Scoop is (probably) the world’s greatest ice cream scoop. I know it looks just like the ones you’ve used before that can’t make a dent in hard-frozen ice cream and end up forming ugly, rusty pits in the well of the scoop, but I can assure you that this is a different class of product entirely.
As the name suggests, it’s made of solid stainless steel. It’s strong, uniform throughout (no coating to chip away), and pleasingly hefty. The pointed tip can defeat even the hardest ice cream. Soak it in warm water and the thermal mass of this heavy instrument will keep doing work, scoop after scoop, for as long as you need it. The handle is typical Oxo: soft, grippy rubber.
As I am writing this, I am ordering myself a backup scoop just in case Oxo ever stops making this product. (The only thing I can imagine damaging the one I already have is a trip into the garbage disposal…but that is a thing that has been known to happen in my house, so better safe than sorry.)
Update (January 2023): Like seemingly all the Oxo products that I love, it looks like this one is no longer available. In its place, there’s this scoop, which matches the shape of mine, but not the material finish, and this scoop, which matches the material, but not the shape. People have reported getting scoops that don’t match either photo on Amazon, however, so beware. One person suggested this scoop from SUMO, which he said arrived looking very much like the Oxo that I recommend.
The Victorinox Fibrox Pro Knife, 8-Inch is the best inexpensive chef’s knife I have ever used. There are better knives for (much) more money, but none in this price range come close. I own knives that cost twice as much and are not even half as good.
The grip is not quite up to Oxo’s standards in terms of materials, but it follows the same philosophy: grippy and comfortable, with no concern for how it looks. The blade is shaped perfectly and stays sharp for much longer than you would expect. And it’s easy to clean and sharpen: no weird seams or chamfers.
Like the ice cream scoop, this is a product I love so much that I’ve purchased backup copies just in case it’s ever discontinued. I still routinely purchase more-expensive chef’s knives (I love kitchen tools), but so far, none has displaced this $35 wonder for all-around utility.
The Breville BWM640XL Smart 4-Slice Waffle Maker is $350. This is a ridiculous amount of money to spend on a waffle maker. It’s huge and heavy. And I personally prefer thinner waffles with more, smaller squares. (The Breville makes four waffles that are over an inch thick, each with 25 squares.)
All of that said, it does a pretty amazing job. The waffles are evenly cooked and release easily from the non-stick surface. The gutter around the edge, meant to catch excess batter, does actually work. The controls and the LCD screen are surely overkill for what boils down to a fancy way to set the cooking time, but they work well and are easy to understand.
You might think the lack of removable heating surfaces would make it hard to clean, but cooked waffles leave almost nothing behind after they’re removed. Wiping the surfaces with a damp paper towel is usually all the cleaning that’s necessary. The permanently attached heating surfaces make the whole device feel sturdy, and they help prevent any batter from getting inside the machine.
I resisted buying this over-priced monstrosity for a long time. I purchased and returned several waffle makers that were just terrible. I could not find a reasonably priced model that was competent and consistent. I finally bit the bullet and bought the Breville. This price is (still) galling, and I (still) wish the waffles were thinner and had more, smaller squares. But within the size constraints inherent in its design, this damned thing makes perfectly cooked waffles every single time. It’s infuriating, really.
For a few years now, I’ve tracked the TV shows I’m watching using the iOS app Couchy, which integrates with the Trakt.tv service. Sadly, Couchy ceased development last year. I’ve kept using it since then, but in the past few weeks it’s finally started to fail.
I looked at (and purchased) many, many alternative apps back when Couchy’s demise was announced, but I could never find one that I liked as much. In particular, I haven’t found a match for the information density of Couchy’s main screen combined with its “smart” sort order.
Couchy’s main screen shows a scrollable grid of portrait-orientation poster images for each TV show, three to a row on my iPhone, each with text below it showing the show name, how many episodes behind I am, and the season, episode number, and title of the next episode. (I’d include a screenshot here, but poster images are no longer loading for me in Couchy, so it wouldn’t be much to look at.)
The sort order determines how the shows are placed in the grid. Within the app, Couchy describes its smart sort as follows:
Shows will be sorted in the following order:
- Episodes airing today
- Missing episodes
- Awaiting episode
- Ended shows
As I’ve tweeted about my search for a Couchy-replacement app, I’ve found it difficult to explain what I’m looking for in terms of sorting. And even Couchy’s sorting is sometimes not quite what I want. So I’d like to explain here instead, free from Twitter’s character limits.
I use an app like Couchy because I’m usually in the middle of watching many different TV shows. When I have some time to watch TV, I launch Couchy to remind myself what I’m currently watching, how far behind I am, and which shows have new episodes waiting for me. This is my most important use case: choosing a show to watch.
I have so many TV shows in my trakt.tv collection that sorting is essential to helping me select a show. I don’t want to scroll through dozens of shows to make a selection. I want to look at the top one or two screenfuls of shows on my phone and be sure that I’m seeing all the shows I’m most interested in watching now.
Most simple sort orders don’t work for my purposes. For example, consider sorting by the date of the latest episode. There are many shows in my collection that I’m not actively watching. Maybe I’ll get to them in the future, but for now, the unwatched episodes are just piling up. If those shows jump to the top of the sort order every time a new episode is released, it’s just noise to me. They’re obscuring the shows I actually want to watch.
Sorting by the number of unwatched episodes has similar problems. Sorting by the date I last watched an episode of a show might seem like it’d work, but I might really want to know about a newly released episode of a show that I’m caught up on but that hasn’t released an episode in a while.
If I had an actual, concrete algorithm in mind, I wouldn’t be writing all this. I could have explained it in a tweet. But I haven’t thought it through enough to nail it down at that level. What I can do instead is describe the desirable features of such an algorithm.
- If I’m not actively watching a show, it should be pushed down in the list. Deciding what “actively watching” means will surely involve some thresholds (e.g., “has watched an episode in the last N days”), and it would be nice if those were configurable.
- Shows that I’m actively watching should jump to the front of the list when a new episode is released.
- Shows that I really like but that are on a break (e.g., between seasons) should jump to the front of the list when a new episode or season is released. Again, determining which shows I “really like” is tricky. An easy out here is to just have me choose by marking them as favorites. A ranked list of favorites would be even better and would help with sorting decisions near the top of the list.
- When sorting shows that I’m actively watching (or really like) that just had a new episode or season released, favor shows with the smallest backlog—except in cases where a whole new season just dropped for a favorite show. For example, let’s say I’m one episode behind on Homeland, two episodes behind on Fargo, and caught up on The Expanse, which is a favorite show. If Homeland and Fargo both release new episodes and The Expanse releases a whole new season, all on the same day, the sort order should be: The Expanse first (even though it has the largest backlog), Homeland second (because it has a shorter backlog than Fargo), and Fargo third.
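To make those four points a bit more concrete, here’s a minimal sketch of one possible comparator that follows them, written in Swift. Everything in it is a stand-in invented for illustration (the Show type, the field names, the 30-day threshold); none of it comes from trakt.tv or from any real app, and it’s not the algorithm so much as the shape an algorithm might take.

```swift
import Foundation

// Hypothetical model: none of these names come from trakt.tv or any real app.
struct Show {
    let name: String
    let isFavorite: Bool          // explicitly marked by me
    let unwatchedCount: Int       // size of my backlog
    let lastWatchedDate: Date?    // when I last watched an episode
    let latestReleaseDate: Date?  // air date of the newest episode
    let newSeasonJustDropped: Bool
}

// Point 1: "actively watching" means I've watched an episode in the last
// N days. The threshold should really be user-configurable.
func isActivelyWatching(_ show: Show, within days: Int = 30,
                        now: Date = Date()) -> Bool {
    guard let last = show.lastWatchedDate else { return false }
    return now.timeIntervalSince(last) < Double(days) * 86_400
}

func smartSort(_ shows: [Show]) -> [Show] {
    shows.sorted { a, b in
        // Point 1: shows I'm not actively watching (and haven't favorited)
        // sink to the bottom.
        let aRelevant = isActivelyWatching(a) || a.isFavorite
        let bRelevant = isActivelyWatching(b) || b.isFavorite
        if aRelevant != bRelevant { return aRelevant }

        // Points 3 and 4: a favorite whose whole new season just dropped
        // jumps to the very top, even with the largest backlog.
        let aSeasonBoost = a.isFavorite && a.newSeasonJustDropped
        let bSeasonBoost = b.isFavorite && b.newSeasonJustDropped
        if aSeasonBoost != bSeasonBoost { return aSeasonBoost }

        // Points 2 and 4: among the rest, favor the smallest backlog.
        if a.unwatchedCount != b.unwatchedCount {
            return a.unwatchedCount < b.unwatchedCount
        }

        // Tie-breaker: most recently released episode first.
        return (a.latestReleaseDate ?? Date.distantPast) >
               (b.latestReleaseDate ?? Date.distantPast)
    }
}
```

Run against the Homeland/Fargo/The Expanse example above, this ordering comes out the way I described: The Expanse, then Homeland, then Fargo. The tie-breaking rules are where I’d expect most of the fiddling to happen.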
I could go on, but I think I’m getting into the weeds. The four points above capture most of it. I’m sure other people have their own preferred sorting orders, but this one is mine. I’ve seriously considered writing a trakt.tv client app for iOS just to scratch my own itch, but I don’t think I’m ready to tackle a task that large quite yet.
In the meantime, if you’re an author of one of the many trakt.tv client apps in the App Store, please consider implementing something like what I’ve described here. I’ve probably already purchased your app, but I’ll be extremely grateful on top of that.
Fumito Ueda’s first game, Ico, was a beautiful, moody masterpiece. Its spare depiction of a boy attempting to escape from a vast castle with the help of a mysterious companion discarded the gameplay and interface conventions of its day, delivering an almost meditative sense of immersion. Ueda’s next game, Shadow of the Colossus, added the bare minimum of status indicators to the screen to support its complex boss battles that required the player to clamber up and onto a succession of giant creatures.
In terms of both gameplay and mood, Ueda’s latest game, The Last Guardian, is a straightforward combination of its predecessors. It features a boy attempting to escape from a mysterious castle with the help of a giant creature. Like Ico, it eschews a conventional HUD, save system, inventory management, power-ups, and nearly every other modern gaming convention. And as in Shadow of the Colossus, players will find themselves scrambling up the back of a large, often uncooperative, incredibly life-like beast (cheekily named Trico).
Ico was able to deliver on the promise of its design by reducing complexity in other areas. It’s set in a largely rectilinear castle that the player navigates on foot. It has a small number of enemies. Its environmental puzzles are mechanically and conceptually simple. Similarly, Shadow of the Colossus manages to pull off its extremely ambitious boss battles by removing nearly everything from the game except those creatures.
While The Last Guardian attempts to combine the strengths of its predecessors, it’s burdened by the combination of their features. The environment and the player’s movement through it are far more complex than in Ico. The puzzles play fast and loose with their own rules at a few critical points. The giant creature, no longer confined to a limited engagement in a boss arena, sometimes pushes the game mechanics past their limits.
Nothing kills immersion more than an acute awareness of the game engine itself. In The Last Guardian, the camera often gets stuck on walls or briefly shows the view from inside Trico. (Spoiler alert: like all your favorite 3D-rendered characters, he’s hollow.) Arguably, Shadow of the Colossus had an even more frustrating camera and control scheme, but that game was released eleven years ago on a far less powerful console. The Last Guardian has made tremendous strides since then, but it’s still not quite enough to avoid illusion-breaking lapses.
These shortcomings are compounded by an uncharacteristic lack of faith in its design. Traditional (read: oppressive) on-screen prompts describing the control scheme mar the opening of the game and are impossible to completely banish. A voice-over extends beyond its narrative role to provide a dynamic hint system that is often too quick to reveal solutions. Several brief cutscenes in quick succession at the start of the game undercut player agency. It’s tempting to attribute these lapses to Ueda’s departure from the project several years before its release, but the reason is less important than the result.
All of that said, it’s important to remember the context of these criticisms. Ico and Shadow of the Colossus are two of the greatest video games ever created. Both pushed the limits of the hardware they were released on, and both have influenced video game designers, filmmakers, and other creative professionals far out of proportion with their modest sales numbers. That The Last Guardian fails to resoundingly best its distinguished parents is only disappointing because of how close it comes.
Let’s start with the obvious. The Last Guardian is a gorgeous game. The world design is in line with Ico and Shadow of the Colossus, but the increased fidelity of the PlayStation 4 really makes it shine. (PlayStation 4 Pro running at 1080p is recommended for best frame rates.) Lighting effects that Ico could only dream of add a poignancy to already majestic vistas. At so many points, I wished this game had the photo mode from Uncharted 4.
Trico is an amazing achievement: a building-sized NPC that truly feels alive. Its animations rarely feel canned or repetitive. Its behavioral inscrutability is completely in keeping with its character. Learning to read Trico’s moods and signals is a core part of the game. The experience smoothly transitions from frustration to a deep, intuitive understanding by the end.
Anyone who has finished Ico and Shadow of the Colossus will have no trouble completing The Last Guardian. I found the environmental puzzles a bit more challenging than those in Ico, but I never had to go to the Internet to look up a solution. Anyone who got stuck in Ico will almost certainly be even more stymied by The Last Guardian, however. The hand-eye coordination required is substantially lower than in Shadow of the Colossus, but the camera management and overall control-scheme finesse are much more demanding than in Ico.
Also keep in mind that these are comparisons to the difficulty of two much older games. The Last Guardian has a significant skill-barrier to enjoyment when compared to contemporary console games, especially those with such an artistic bent. Inexperienced gamers looking for a better match for their skills should try Journey instead.
Longtime console gamers who have never played Ico or Shadow of the Colossus should definitely do so, preferably before playing The Last Guardian. High-definition remakes of both games are available for the PlayStation 3 on a single game disc for a combined price of $25. If your taste in games is anything like mine, it is absolutely worth buying or borrowing a PlayStation 3 console just to play these two games. (Plus Journey for just $15 more.) [Update: Both games are also available on the PS4 and Windows PC via the PlayStation Now cloud gaming service, though I have not tried playing them this way.]
If you loved Ico and Shadow of the Colossus, The Last Guardian is well worth playing, but it bears the scars of its nearly decade-long development. Like The Force Awakens, there’s almost no way The Last Guardian could have lived up to the expectations accumulated during the long wait for its release. In the end, its reach exceeds its grasp, if only slightly. But, oh, what a reach it was. Like its star creature, The Last Guardian occupies a lofty perch—defiantly idiosyncratic and occasionally inscrutable, but a towering achievement nonetheless.
These are the canonical bagel flavors:
Also:
Most of the nonfiction books I read these days fall into two broad categories: books about people I admire and books about the creation of things I admire. Good books about the latter often turn into the former by the end.
The book I just finished, Creativity, Inc. by Ed Catmull, co-founder of Pixar, had a head start on both counts. My love of Pixar is not surprising or uncommon. As for Ed Catmull, I’ve been aware of him and his contemporaries for decades (I had an Alvy Ray Smith quote in my .sig for a while in the 90s), but my nerd crush really stepped into high gear when I saw a video of Catmull’s talk at the Stanford Graduate School of Business in 2007.
It’s difficult for me to describe my reaction to that talk—and to his new book—without sounding absurdly self-aggrandizing, but I’m going to give it a shot. Saying what other people are thinking is a proven formula for mass-market appeal employed by everyone from talk radio hosts to stand-up comedians. But as someone whose thoughts and interests have always been outside the norm, I’ve rarely heard excerpts from my own inner dialog voiced on a broader stage.
Ed Catmull does that for me. If you’ve listened to my Hypercritical podcast or read the article that inspired it, you will find many familiar topics and themes in Creativity, Inc. Now, believe me, I harbor no illusions about this overlap. I am not the guy who hears Louis C.K. tell a joke and thinks he could be just as funny because he had a similar thought once. But shared values and the fulfillment of common aspirations are at the heart of all hero worship.
Ed Catmull’s dream was to create the first fully computer-animated feature film. As a child, I also dreamed of such a thing; Catmull and the rest of the people at Pixar actually made it happen. Similarly, as an adult, I’ve clung to the notion that critical thinking can be both useful and powerful. Creativity, Inc. explains just how powerful it can be when practiced by a handful of the most brilliant technical and creative people alive today.
Ay, there’s the rub. It’s so easy to hear the vaguest echo of your own thoughts expressed by someone fantastically smart and accomplished and view that as a cosmic endorsement of your approach to life. But that absolutely would not be in keeping with the message of the book—a message Catmull tries again and again to communicate to readers he knows will resist it.
Indeed, Catmull most often uses himself as an example of someone who has failed to see through to the heart of a problem. This is the true strength of the book. Unlike so many other tech-industry memoirs and business books, Creativity, Inc. is not an abstract exploration of a philosophy, nor is it a list of accomplishments interspersed with bold commandments. Instead, it is a deep, thoughtful investigation of a never-ending series of failures—and the reactions to those failures that eventually led to success.
Think of it: the man who invented texture mapping, made computer-animated films possible, and led his studio to release a string of amazing, Oscar-winning examples of the form decides to write a book…and then builds it around an examination of his own mistakes. Ed Catmull may not be your kind of hero, but he sure is mine.
Thirty years ago today, Steve Jobs introduced Macintosh. It was the single most important product announcement of my life. When that upright beige box arrived in my home, it instilled in me an incredible sense of urgency. I greedily consumed every scrap of information about this amazing new machine, from books, magazines, audio cassettes, and any adult whose ear I could bend. This was the future—my future, if I could help it.
The death of Steve Jobs in 2011 brought back a lot of these same memories. What I wrote then echoes my thoughts on the Mac’s 30th anniversary.
I was 9 years old at the time. That year, my grandfather had changed my life by purchasing a Macintosh 128K and convincing my parents to do the same. My grandfather also had a subscription to Macworld magazine, including multiple copies of issue #1, two of which I took home with me. I cut the Macintosh team picture out of one [see above] and left the other intact. (I still have both.)
I pored over that magazine for years, long after the technical and product information it contained was useless. It was the Macintosh team that fascinated me. That’s why I’d chosen to cut out this particular picture, not a photo of the hardware or software. After seeing the Macintosh and then reading this issue of Macworld, I had an important realization in my young life: people made this.
That last part is the most important. It wasn’t just the product that galvanized me; it was the act of its creation. The Macintosh team, idealized and partially fictionalized as it surely was in my adolescent mind, nevertheless served as my north star, my proof that knowledge and passion could produce great things.
Memories are short in the tech industry. For most people, Apple and Steve Jobs will always be synonymous with the iPhone, an uncontested inflection point in our computing culture. For me, the introduction of the Macintosh will always be more important. Though people who didn’t live through it might not feel it as keenly as I do, the distance between pre-2007 smartphones and the iPhone is much smaller than the distance between MS-DOS and the Mac.
On a personal level, nothing will ever replace my tanned-plastic beauty, the greatest electronic gift I had ever received, or would ever receive. My attachment to the Mac explains why, in the late 1990s, I was desperate to know everything possible about the fate of Apple and the future of the Mac operating system. Almost fifteen years later—half the Mac’s life—I’ve reviewed every major release of OS X and zero releases of iOS. Don’t get me wrong, I love my iPad and iPod touch, but you never forget your first.
I’m eternally grateful to the people who created the Mac, and to the countless others who kept it alive and shepherded its rebirth. In this age of iOS, it’s heartening to hear Phil Schiller say, “Our view is, the Mac keeps going forever.” That’s just fine with me.
Ask a room of computer geeks how they came to deserve this appellation and you’ll likely hear many similar stories. “I got my first computer when I was very young. By the time I was a teenager, I’d logged thousands of hours at the keyboard doing everything imaginable with my computer: gaming, programming, networking, upgrades, the works.”
That’s certainly my story. I was lucky enough to get a Macintosh in 1984, and it changed my life. I spent so many hours in front of that computer, I often look back in wonder at how I found so much to do with so little. This was years before I had an Internet connection. I had very little software and no convenient way to get more. My dollar-a-week allowance didn’t go very far. The only other person I knew with a Mac was my grandfather who lived two hours away. Nevertheless, I put in the hours—willingly, joyfully—and became the seasoned Mac geek you see before you today.
My Macintosh origin story is part of who I am. Being there from the beginning (and staying with the Mac, even through the dark times) gives me a useful historical perspective on the platform. But this is not the only road to geekdom.
The Mac is actually one of the few things I’m a geek about that I’ve been in on since the start. Geekdom is not defined by historical entry points or even shared experiences. A geek must possess just two things: knowledge and enthusiasm.
I became interested in remote control cars in high school after seeing a friend drive one in his backyard. He’d been building and racing RC cars since he was in elementary school. I was fascinated by these machines, but I worried I’d never be a “real” RC car geek like my friend.
I saved my money, bought a car, built it (badly) myself—and then crashed it. Undaunted, I bought replacement parts, fixed it, learned to drive it with far less crashing, and eventually bought a better car. Most importantly, I subscribed to Radio Control Car Action magazine and read every issue from cover to cover as soon as they arrived at my house.
A year or so later, I found myself in my local hobby shop answering another customer’s questions about his car. It started to dawn on me that I now knew more about RC cars than the average hobby shop patron. I was no longer an outsider looking in.
Around the same time, I was enrolled in one of those cheap-music-for-membership marketing schemes, which left me picking out some CDs on a whim. I ended up getting Achtung Baby, and it knocked my socks off. I’d been aware of U2 for years and had probably heard the hits from The Joshua Tree on the radio dozens of times, but I’d never really been into the band—or any band, for that matter. Achtung changed that.
I started to work my way backwards through U2’s catalog, buying as many CD long boxes as I could get my hands on. I bought and read biographies of the band. At my local library, I devoured reviews of all their past albums in Rolling Stone and Spin. I found every magazine with a cover story about U2. When I couldn’t find anything else in the stacks of back issues, I turned to the library’s microfiche collection.
In college, I finally had easy access to singles, b-sides, and bootlegs, allowing me to complete my collection. I also had a fast, reliable Internet connection for the first time. This was beyond the local hobby shop; I was communicating with other U2 fans across the entire planet.
I learned to play the guitar (badly) and downloaded tab for my favorite U2 songs. Dissatisfied with the state of lyrics websites (some things haven’t changed), I transcribed every U2 album, single, b-side, and rarity, leading to the creation of my first public website, The U2 Lyrics Archive. This was my first claim to fame on the net. (The site is gone now, but when the official u2.com website launched a few years after mine, it contained lyrics copied from my site, typos and all.)
Remote control cars existed for decades before I got my first kit. Achtung Baby was U2’s seventh album. Yet I was once a serious RC car geek and an unassailable U2 geek. It started with enthusiasm. Given the opportunity, I channeled that energy into a dogged pursuit of knowledge.
You don’t have to be a geek about everything in your life—or anything, for that matter. But if geekdom is your goal, don’t let anyone tell you it’s unattainable. You don’t have to be there “from the beginning” (whatever that means). You don’t have to start when you’re a kid. You don’t need to be a member of a particular social class, race, sex, or gender.
Geekdom is not a club; it’s a destination, open to anyone who wants to put in the time and effort to travel there. And if someone lacks the opportunity to get there, we geeks should help in any way we can. Take a new friend to a meetup or convention. Donate your old games, movies, comics, and toys. Be welcoming. Sharing your enthusiasm is part of being a geek.
Anyone trying to purposely erect border fences or demanding to see ID upon entry to the land of Geekdom is missing the point. They have no power over you. Ignore them and dive headfirst into the things that interest you. Soak up every experience. Lose yourself in the pursuit of knowledge. When you finally come up for air, you’ll find that the long road to geekdom no longer stretches out before you. No one can deny you entry. You’re already home.
At the beginning of last year, I posted a list of things Apple can and should do during 2013. It’s time to settle up. Because I’m feeling scholastic, I’ll give a letter grade to each item.
Ship OS X 10.9 and iOS 7. Done and done, with only a few minor bumps in the road. A-
Diversify the iPhone product line. “There needs to be more than one iPhone,” I wrote. This is a drum I’ve been beating for many years. Apple finally made it happen in 2013 with the cleverly conceived iPhone 5C. I’m disappointed that the 5C doesn’t have more internal changes beyond a slightly larger-capacity battery, and I’m still anxiously awaiting an iPhone with a larger screen, but Apple got the important parts right. The 5C is a good phone, and it’s easily distinguished from the 5S. B+
Keep the iPad on track. The iPad Air is impressive, and the mini finally went Retina. On the downside, the creaky old iPad 2 lives on, the iPad Air really deserves more RAM, and a larger “iPad Pro” is still off in the hazy future. The iPad is “on track,” for sure, but exciting times are still ahead. A-
Introduce more, better Retina Macs. The latest Retina MacBook Pro has Intel’s Iris Pro 5200 graphics, finally giving the integrated GPU enough muscle to handle all those pixels. Apple also kept around an option for a discrete GPU on the high-end model. But the MacBook Air and iMac are still excluded from the Retina club, and even the mighty Mac Pro has extremely limited high-DPI options. We’ll get ’em next year, right Tim? B-
Make Messages work correctly. It’s difficult to measure the scope and frequency of problems in Messages based solely on blog posts and tweets, but I feel safe in saying that weird behavior still exists and is likely to be seen by anyone who uses Messages every day. Hope is fading. D
Make iCloud better. The iCloud Core Data team got a chance to regroup in Mavericks. It may be too little, too late, but at least it’s a step in the right direction. More broadly, iCloud still doesn’t have a good reputation for reliability, and debugging problems related to it remains difficult. If the only user-accessible control for a service is a single checkbox, it had better “just work.” iCloud has yet to earn that label. C
Resurrect iLife and iWork. Be careful what you wish for, I suppose. Apple did finally release new versions of the applications formerly known as the iLife and iWork suites, but the focus on simplicity and feature parity with the web and iOS versions left Mac users wanting more. It does not feel like an upgrade worthy of the years that have passed since the last major revisions of these applications. B-
Reassure Mac Pro lovers. Apple was thoroughly convincing in its rededication to the Mac Pro, presenting a dramatic introduction video at WWDC for its radical new high-performance hardware. It’s not for everyone, but it represents a hell of a turnaround for a once-neglected product. Let’s hope it doesn’t take 18 months for the next revision to appear. A
Do something about TV. Sigh. F
Out of the 10 items on my to-do list, Apple did 8 of them well enough to earn a checkmark. (The TV thing was always a bit of a reach, anyway.) I’d call that a solid year.
On two recent episodes of Accidental Tech Podcast, I talked about calibrating my new TV. The reactions of my co-hosts and the feedback from listeners have made it clear that the entire concept of calibrating a home TV is foreign to most people.
While a full-zoot ISF HDTV calibration is expensive and unnecessary for most people, there are some important steps that every TV owner should take to improve image quality. If you have an iOS device plus either an HDMI output cable (Lightning or 30-pin) or an Apple TV, you can use the simple THX tune-up application to dial in your color, contrast, brightness, and other basic settings.
Before calibrating, don’t forget to turn off all the “image enhancement” features of your TV. These are the things with names like Vivid Color, Color Remaster, Motion Interpolation, Brilliance Enhancer, Black Extension, C.A.T.S., AGC, and so on. Check your TV’s manual for explanations of what each setting does, if you’re curious, but you really do want to turn them all off. They all mess with the image in ways not intended by the creator, and they will make proper calibration more difficult or impossible.
There’s one setting in particular that anyone can adjust without requiring any skill or special software. Let’s say you buy a new 1080p HDTV with a native resolution of 1920×1080. Out of the box, that TV will most likely be configured to never show you a full 1920×1080 pixels of information. In computer parlance, it’s running at a non-native resolution by default, like a 1024×768 LCD display set to a resolution of 800×600.
Imagine this test image exactly matches the native resolution of your HDTV. (It doesn’t, so please don’t use it to test your actual TV. Use a real calibration app or image instead.)
If you’re viewing this post on a Retina display, the thin lines extending from the squares in the corners should be crisp and pixel-perfect. Send this image to your HDTV, however, and this is what you’re likely to see:
The green box is no longer visible; the squares in the corners are now rectangles; the fine lines are now blurred together, producing an unpleasant moiré pattern. You can read all about the origins of this terrible behavior in the Wikipedia entry on “overscan,” but all you need to know is that it’s no longer necessary in the age of HDTV.
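To put rough numbers on what the default costs you, here’s a back-of-the-envelope calculation. The 5% figure below is just a typical overscan amount chosen for illustration; it isn’t from any particular TV’s spec sheet, and real sets vary.

```swift
// Illustrative only: assume the TV crops 5% of the picture in each dimension.
let native = (width: 1920.0, height: 1080.0)
let overscan = 0.05  // fraction of each dimension thrown away

let visibleWidth  = native.width  * (1.0 - overscan)   // 1824
let visibleHeight = native.height * (1.0 - overscan)   // 1026

// The TV shows only this central region of the source image, then scales it
// back up to fill all 1920×1080 physical pixels, softening everything.
print("Source pixels actually shown: \(Int(visibleWidth)) × \(Int(visibleHeight))")
```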
You paid for all 1920×1080 pixels of your fancy new HDTV—use them! Most HDTVs have a setting somewhere to correct this problem. It may be called “Overscan,” “1:1 Pixel Mapping,” “Native,” “Screen Fit,” “Just Scan,” or something even more generic like “Size 1” or “Size 2.” Consult your TV’s manual to find out. (If you can’t find your paper manual, a Google search for your TV’s model number followed by “manual PDF” will usually lead to an online version.) Don’t give up; the setting is almost always there somewhere. For TVs with no dedicated setting, you may have to change the input label to “PC” or similar to force the issue.
The nerd-rage I feel at the thought of a display running in non-native resolution may not be something you can relate to, but everyone can appreciate a sharper image that shows more information. This holiday, after you’re done fixing all your relatives’ computer problems and updating their software, take a moment to correct the image size on their HDTV as well. Your relatives might not thank you for it, but I will.
When Apple was on the ropes sixteen years ago, there was no shortage of advice about what the company should do to save itself, much of it fueled by a deep love for Apple’s products. It takes a diehard Apple fanatic to create something like the iconic “Pray” cover from the June 1997 issue of Wired magazine—coupled with the faith that there are enough like-minded readers to appreciate the sentiment. A decade later, those of us who spent the 1990s worrying about Apple felt relieved, and maybe even a little nervous about Apple’s newfound power. It was a hell of a ride.
Nintendo engenders the same kind of affection and loyalty. Like Apple, it has a recent history of defeat followed by unlikely triumph. Nintendo’s dark times were not as bad as Apple’s; the N64 and GameCube were outgunned by the PlayStation and PlayStation 2, but Nintendo wasn’t days away from bankruptcy at any point, nor did it have to buy another company to save itself.
Now the roles appear reversed. Apple is in a bit of a slump (or so the narrative goes), but it’s a comparatively mild crisis of expectations. Apple’s products are still in demand and selling in large numbers. Nintendo, meanwhile, is experiencing one of the most disastrous console launches in its history—and that’s not even the worst news, according to some observers. It’s the handheld market where Nintendo is in the most trouble, they say.
As expected, people who don’t want to live in a world without a successful, thriving Nintendo feel compelled to offer their heartfelt suggestions for saving the company. It’s this same compulsion that has briefly driven me out of my months-long Mavericks-review-writing haze to offer my own perspective.
I agree that Nintendo is in trouble. Before considering possible solutions, I’m forced to ask a tougher question: can it be saved? Some say no, that it’s only a matter of time. I think it comes down to this. As long as there continues to be a market for devices that are primarily designed to play games, then it’s possible for Nintendo to live to fight another day.
If not, then I fear the worst. Nintendo is not equipped to produce and maintain a long-lived, general-purpose software platform. Precious few companies have ever done it. You know all their names: Microsoft, Apple, Google. I don’t expect to ever see Nintendo on that list.
I think there is still a market for game-only (or at least “game-mostly”) hardware products. I’m not sure how long it will last, but I’m betting this upcoming generation of consoles will sell well enough in the aggregate to maintain the status quo, at the very least.
Assuming I’m right, Nintendo has all the tools it needs to pull itself out of its current tailspin. To understand how, just look at how Nintendo has always done it: with hardware and software working together to provide new, fun experiences.
The NES was Nintendo’s first big home console success. After the game console crash of the 1980s, home video game software alone was not going to lead Nintendo to riches. Personal computers were still expensive and wouldn’t have mass-market penetration for years. Any attempt to field an Atari-2600-like hardware product would surely be met with skepticism.
Nintendo’s solution required hardware and software. The hardware: an Atari-like game console, yes, but also…a robot? Yep, and a light gun, too. Very few games used these accessories, but you can be sure they were featured heavily in all the initial advertising for the NES. They were hardware decoys, misdirections. They existed to get the NES into homes. Once there, a tiny mustachioed trojan plumber spilled out of the belly of the beast and conquered a generation of gamers.
Now consider the Nintendo 64, the company’s first 3D console. The Saturn and the PlayStation beat it to market by years, and both had the good sense to use optical discs instead of cartridges. Though the PlayStation came to dominate that generation, it was Nintendo that transformed 3D gaming forever with the potent combination of Super Mario 64 and the Nintendo 64 controller—hardware and software products that were designed together, and it showed.
Mario 64 taught the world how to make a good 3D game. Though it couldn’t save the N64 from an ignominious fate in the market, it left its mark on gaming history and perhaps singlehandedly kept Nintendo relevant. The idea of releasing a 3D gaming system today without a standard analog stick is absurd, but that’s just what Sega and Sony did in 1994. After the N64 was revealed to the world, analog sticks quickly appeared on both the Saturn and the PlayStation—hastily tacked onto the existing controller, in the latter case, but I’m sure that was only a temporary condition, right? (Sigh.)
Then there’s the Wii. Nintendo sacrificed hardware power for a novel input method and low price, then paired it with software that explained the value proposition to the world. After two generations of defeat at the hands of Sony, Nintendo put itself back on top of the game console market.
None of these examples would have been possible if Nintendo didn’t make both the hardware and the software. And I didn’t even mention the Game Boy product line or the dual-screened DS, two of the top three best-selling gaming platforms of all time. Again, impossible without hardware and software synergy. This is how Nintendo succeeds.
When I read the current crop of advice for Nintendo, much of it focused on how to survive in a world where iOS comes to dominate portable gaming, I think about how it would have helped Nintendo at its previous low points. Nintendo should make games for iOS, some say. If you can’t beat ’em, join ’em.
At the tail end of the GameCube’s life, Sony had sold many times more consoles and games than Nintendo over the course of a decade. Should Nintendo have started writing games for the overwhelmingly dominant Sony platform? Would that have helped Nintendo achieve Wii-like success? I don’t think so; no amount of software alone could have done that.
The game software business is tough. It’s hit-driven, like Hollywood. Most games lose money or break even. A few big winners fund all the others—if you’re lucky. A game development studio going out of business shortly after releasing a critically acclaimed game is not unheard of. (Hell, the best game released last year bankrupted its developer.)
Consolidation is rampant in game development. Small players are routinely snatched up by behemoths that have a better capacity to absorb the inevitable losses that come with games that are not monster sales successes.
This is not a world that Nintendo should aspire to enter. Better to stick with hardware platforms that it controls, profiting from both the hardware sales and the fees collected from third-party games sold on its platforms. That’s the kind of steady (and potentially enormous) income that will keep Nintendo afloat as it works on the next big thing.
Even if Nintendo sticks to its guns, and even if the market for game-focused hardware continues to exist, Nintendo still faces some big challenges. A gaming platform doesn’t have to compete with iOS on its own terms, but it does have to at least match it in the areas that are relevant to gaming.
Right now, Apple is crushing Nintendo when it comes to the software purchase, installation, and ownership experience. Hell, even Steam—a PC gaming platform—embarrasses Nintendo’s e-commerce efforts. My Nintendo games should not be tied to a piece of hardware. My purchases should transfer seamlessly to any new Nintendo device I purchase. Illegal emulation should not be the easiest way (or only way) to play classic Nintendo games. Nintendo needs to get much, much better at this stuff—fast.
Apple is also winning when it comes to market access. It’s much easier for a two-person team to write an iOS game and put it up for sale than it is for that same team to get a game onto a Nintendo platform. Expensive, formal, limited developer access has no place in the modern gaming world. Nintendo needs to wake up and smell the App Store.
A lot of things have to go right for Nintendo to get its mojo back. It’s worth reiterating: if the market for dedicated gaming hardware disappears, I fear it’s game over for Nintendo as we know it.
But if the time of the game console is not yet at an end (handheld or otherwise), then Nintendo has a lot of work to do. It needs to get better at all of the game-related things that iOS is good at. It needs to produce software that clearly demonstrates the value of its hardware—or, if that’s not possible, then it needs to make new hardware.
Any advice that leads in a different direction is a distraction. There’s no point in any plan to “save” Nintendo that fails to preserve what’s best about the company. Nintendo needs to do what Nintendo does best: create amazing combinations of hardware and software. That’s what has saved the company in the past, and it’s the only thing that will ensure its future.
Now that the Xbox One has been revealed, joining the already-released Wii U and the previously announced PlayStation 4, we can finally get a sense of what the next generation of game consoles will look like.
This used to be a simple business. Cutthroat and fiercely competitive, yes, but at least all the players were racing for the same prize. Every handful of years, we’d get a new crop of consoles, each claiming to be the most powerful and to have the best games.
Seven years ago, after being outsold by Sony in the two previous console generations, Nintendo broke from the pack and went after a new market: people who were not interested in—or were too intimidated by—traditional game consoles.
The Wii was startlingly less powerful than the other consoles in its generation. This helped make it the least expensive and the smallest, which only increased its appeal to non-gamers. The coup de grâce was the Wii’s novel control scheme, which let your dad, who couldn’t get past World 1-1 back in the 80s, make an improbable transformation into a hardcore gamer…of a sort.
And if the idea of “winning” a console generation with laughably underpowered hardware wasn’t enough, the Wii and its contemporaries also put an end to the idea of a game console that just plays games. Just a few years after launch, all of the consoles—even the dainty, standard-definition Wii—supported some kind of social networking, photo viewing, and one or more video streaming services.
Arguably, this movement started to gain momentum with the original PlayStation’s ability to play music CDs, and continued with the PlayStation 2’s secondary role as a DVD player. But the Wii, PS3, and Xbox 360 definitively moved the entire product category beyond gaming. In fact, the PlayStation 3 ended up as the most popular way to view Netflix on a TV.
This was all a natural consequence of the decreased cost of storage and computation combined with the ubiquity of wireless networking. It was inevitable that any TV-connected box would eventually support these features. But it also means the Xbox One, PlayStation 4, and Wii U lack the clarity of purpose enjoyed by the previous generations of game consoles. Here’s how things look to me at the dawn of the next generation.
Stop me if you’ve heard this one before. The Wii U is dramatically less powerful than the Xbox One and PlayStation 4. In place of hardware power, Nintendo is offering an unconventional multi-screen gaming experience using a tablet-style controller. Although pricing has not been announced for its competitors, there’s a reasonable chance the Wii U will end up being the least expensive console in this generation.
It sure looks like the Wii formula all over again, but there’s a difference this time. The Wii U’s GamePad controller is significantly more intimidating to non-gamers than the familiar-looking Wii remote. Wii accessories (and games) also work with the Wii U, which is nice, but the GamePad is the face of the new system to consumers. For former Wii buyers who are intimidated by the GamePad, Wii hardware and software compatibility may only make them further question what the new system really offers beyond the Wii. And though the Wii U expands on the Wii’s non-gaming features, its TV integration feels half-hearted and has thus far failed to impress.
The end result has been dismal Wii U sales coming out of the 2012 holiday season. Nintendo’s rumored consideration of allowing smartphone apps to run on the Wii U seems uncharacteristically desperate.
Thanks to the novelty and accessibility of the Wii remote and the universal appeal of launch titles like Wii Sports, the Wii sold in such huge numbers that third-party developers couldn’t afford to ignore it. They dutifully cut down the features and graphics quality of their most popular games to get them to run on the Wii. These games were often terrible, but at least they existed, giving the Wii’s game library “checkbox parity” with the rest of the market.
Like the Wii, the Wii U is not powerful enough to run the same games as its competitors. Unlike the Wii, the Wii U’s sales numbers aren’t high enough to motivate cut-down ports of new games. That leaves the Wii U with Nintendo’s franchise titles (many of which are not yet available), a scant few Wii U exclusives from third-party developers, and several ports of previous-generation games that Nintendo’s new hardware is finally able to run.
It’s still too early to call this race, but the Wii U certainly looks like it’s in trouble. It may be that Nintendo has just built the wrong machine. For the most part, the Wii succeeded despite its underpowered hardware, not because of it. Choosing to produce another “next-generation” console with previous-generation power isolates Nintendo.
New multi-platform titles can easily target the Xbox One, the PlayStation 4, and the PC simultaneously. The Wii U isn’t even in the running—unless it sells so well that a hobbled port is justified. The same goes for exclusives built around the Wii U’s unique features. No third-party developer wants to invest in a game that can only ever be sold on a single platform with a tiny installed base.
I own a Wii U, and I’m convinced that it really does offer new, fun gaming experiences not available on any other platform. I’m also a diehard fan of several of Nintendo’s popular franchises. But I’m not the kind of customer that carried the Wii to the head of the class in the previous generation. I’m the kind that would gladly pay twice the price of a Wii U for the ability to play a Zelda game on a console with the power of the PlayStation 4. The Wii U is not built for me. Whatever kind of customer it is built for, there sure don’t seem to be many of them.
Sony is the reigning king of overblown hardware hype, famously promising that the PS2’s Emotion Engine and the PS3’s Cell processor would change the face of computing forever. And maybe they did, in a tiny way. But their power was notoriously difficult to unlock. They became the standard-bearers for the gaming version of the ancient Chinese proverb: “May you develop for interesting hardware.”
Hardware eccentricity has been part and parcel of console development for decades. And the weirder the hardware, the more likely it is that a straightforward implementation of a game engine will run up against bottlenecks. The developer laments are familiar. “If only there were more bandwidth between the CPU and main memory.” “If only I had just 10% more RAM.” “If only this console had a much more powerful programmable GPU instead of a ring bus studded with custom SIMD processors, each with its own tiny local storage.”
The PlayStation 4 aims to repent for the sins of both its father and grandfather—and then some. Unlike its predecessors, it was designed in close cooperation with game developers. During the design process, new revisions of the PS4 architecture were presented to developers along with a challenge: find the bottleneck. Every aspect of the system was put through a similar gauntlet, from the shape and travel of the controller triggers to the accuracy of the gyroscopes.
All game consoles go through some version of this process, but the PlayStation 4 is defined by it. The hubris of the PS2 and PS3 is nowhere to be found in the PS4. This is a product of a newly humbled and rededicated Sony.
And the thing that Sony is rededicated to is gaming, plain and simple. Sony was the first console maker to really push the idea of a gaming system that does much more than just play games, but now it’s returning to its roots.
The PlayStation 4 is exactly the sort of thing that a hardcore gamer might have envisioned if presented with the product name back in the days when the original PlayStation reigned supreme. It’s got more of everything, and the vast majority of its resources are bent towards being the best system for developing and playing games. In this generation of consoles, that’s actually a radical notion.
The final entrant in this round of the console wars is the most ambitious. No longer content to walk the old paths blazed by Nintendo, Sega, and Sony, Microsoft is finally making its play for the entire living room.
Take a peek at the back of the box—a box that looks for all the world like a futuristic VCR—and you’ll find the hardware incarnation of this ambition: an HDMI input. Any form of entertainment that does not spring from the Xbox One is invited to at least flow through it, to be mediated and controlled by it. It’s all right there in the name: One box to rule them all.
The Xbox One announcement was unabashedly focused on everything but games. Microsoft promised more at E3, relying on the substantial goodwill it’s earned with gamers over the past decade to stave off any anxiety about the One’s gaming bona fides.
Indeed, at first glance, the core hardware architecture looks nearly identical to the PS4. But a closer look reveals a system designed to accommodate a much broader vision of home entertainment.
Where the PS4 uses high-speed GDDR5 RAM, the Xbox One opts for slower—but also less power-hungry—DDR3. And in the Xbox, that RAM is shared between two separate operating systems running simultaneously: one for games, and one for everything else.
These hardware features express two very different usage models. The PS4 expects to be turned on when in use, then turned “off” afterwards, entering a super-low-power mode during which a tiny auxiliary processor handles housecleaning chores like downloading game content and applying software updates.
The Xbox One, with its HDMI input and non-game-related OS and apps, expects to be fully powered whenever the television is on. Thus, Microsoft’s focus on idle power consumption—even at the cost of gaming performance.
To mitigate the bandwidth disadvantage of its slower DDR3 RAM, the Xbox One includes 32MB of low-latency embedded SRAM right on the SoC. This is a common technique, but it adds complexity. Game developers must now take care to ensure that the right data is in the tiny local eSRAM pool exactly when it’s needed. A single pool of uniformly fast memory (albeit with higher latency), as in the PS4, is a much simpler arrangement. Different priorities, different trade-offs.
(The eSRAM also consumes die space, which, along with power consumption and cost, may have contributed to Microsoft's decision to give the Xbox One 33% fewer GPU cores than the PS4.)
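To make that memory-management burden concrete, here’s a minimal sketch of the general “scratchpad” pattern, written in plain C with invented names and sizes. It illustrates the kind of explicit staging decision developers face on a split-memory design; it is not actual Xbox One or PS4 code.

```c
/* Hypothetical sketch of scratchpad-style memory management.
 * Names and sizes are invented for illustration; this is not console code. */

#include <stddef.h>
#include <string.h>

#define FAST_POOL_SIZE (32u * 1024u * 1024u)   /* a small, fast local pool */

static unsigned char fast_pool[FAST_POOL_SIZE];
static size_t fast_pool_used;

/* Stage a hot buffer (say, a render target) into the fast pool if it fits;
 * otherwise copy it into ordinary main memory supplied by the caller.
 * Deciding *which* buffers deserve the fast pool, and *when*, is the
 * bookkeeping burden that a single uniform pool avoids entirely. */
void *place_hot_buffer(void *main_memory_fallback, const void *src, size_t size)
{
    if (fast_pool_used + size <= FAST_POOL_SIZE) {
        void *dst = fast_pool + fast_pool_used;
        fast_pool_used += size;
        memcpy(dst, src, size);             /* explicit staging step */
        return dst;                         /* fast path */
    }
    memcpy(main_memory_fallback, src, size);
    return main_memory_fallback;            /* slower, but always works */
}
```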
Then there’s the Xbox One’s companion hardware, the next iteration of Microsoft’s Kinect motion control system. The first version of this technology, released as an add-on for the Xbox 360, was the proverbial dancing bear: it didn’t work well, but it was amazing that it worked at all.
The new incarnation comes bundled with every Xbox One, and it dances like a furry Fred Astaire. It surpasses its predecessor by many multiples in every specification: resolution, depth perception, motion tracking, latency, noise cancellation, local computation. This technology is no joke.
But does it make games more fun? Or, failing that, is it a better way to control a television than a remote control? Microsoft is betting a lot, in terms of both hardware cost and software support, that the new Kinect will be an essential component of at least one of these activities in a way that the first Kinect was not.
When I’m feeling optimistic about the Kinect, I think back to the many generations of terrible touch-screen devices that preceded the iPhone. The history of touch-based interfaces on consumer electronics wasn’t a gradual ramp up to acceptable quality. The iPhone wasn’t just the next iteration; it was a discontinuity. Once the technology passed some critical threshold of responsiveness and reliability, it went from a nerdy curiosity to completely mainstream in the blink of an eye.
I don’t know where that threshold is for multi-sensor full-body motion control and voice recognition, but I do believe it’s out there. Microsoft does too. Of course, that belief will be of little consolation to Xbox One owners if the “iPhone moment” is still many years in the future.
Last generation, Nintendo did something crazy—and it worked. This generation, everyone is taking big risks.
Nintendo tried to play the same hand that it won with in the last round, but now finds itself stranded with previous-generation hardware in a next-generation market. Like Apple in the 90s, Nintendo is a sentimental favorite. But it took more than just the iMac and the iPod to transform Apple. The Wii U still has the potential to be an excellent platform for Nintendo’s beloved first-party games, and a low-cost alternative to the PS4 and Xbox One. Nintendo should milk it for all it’s worth, and get busy on the next great thing.
Sony is betting that the market for game consoles made by and for hardcore gamers has not yet peaked. If it’s right, Sony is well-positioned to dominate this generation. If it’s wrong, the PS4 could be Sony’s Spruce Goose: the ne plus ultra of game consoles, remembered in equal parts as a technical marvel and a cautionary tale.
Finally, there’s Microsoft, offering us a brief glimpse of the boundless hunger that once defined the company. But as Microsoft knows all too well, the living room is littered with the bones of past suitors.
I applaud the technical prowess of the Xbox One’s software, particularly the focus on responsiveness. The demonstrated performance when switching between live TV, gaming, and other apps puts all previous efforts at “smart” TV interfaces to shame.
That said, I seriously question the public’s appetite for displaying any additional content alongside a TV show or movie. The “second screen” experience is already well established, and it happens with a device that’s in your hand or on your lap. Grabbing one third of a large, communal TV screen to look up an actor on IMDb isn’t just unappealing and cumbersome, it’s downright rude.
There are other contexts where the Xbox One’s unique abilities might shine: jumping in and out of a game to check a sports score, for example, or quickly hitting the web to watch an extended version of an interview after finishing an episode of The Daily Show. Yes, I can see that.
But will it be enough to crown the Xbox One the king of the living room? As with all TV-connected devices, content is the key. The Xbox One has games, live TV, and video streaming services covered, but it appears to lack any form of time-shifting functionality. Given how much popular content remains locked up in broadcast and cable TV packages, there’s no way any box without DVR-like functionality can ever be the One True Interface to “watching television.”
Luckily for all three companies, things change quickly in this industry. If a critical mass of programming becomes available on streaming services a few years down the road, the Xbox One could finally fulfill its destiny.
On the other hand, Microsoft’s new focus could be a giant turn-off to gamers who were expecting an “Xbox 720,” not a Kinect-powered “media center.” However brief and anecdotal it may be, a Wii U sales spike accompanying the Xbox One announcement has to have Microsoft at least a bit worried. If the gamers who bought the Xbox 360 don’t show up in the expected numbers to buy the Xbox One, I have a hard time believing this monstrous, sensor-festooned device will pull a Wii and capture the imaginations—and dollars—of non-gamers on a grand scale.
No matter what happens, I don't envision a future where the market is evenly divided between these three very different products. Game on.
If you’d like to hear an expanded audio discussion of these topics, including my take on the TV-related efforts of Apple and Google, check out episode 3 of the Ad Hoc podcast with Guy English and Rene Ritchie.
The prevailing wisdom about software design at Apple is that the pendulum has swung too far in the direction of simulated real-world materials, slavish imitation of physical devices, and other skeuomorphic design elements, producing a recent crop of applications that suffer from an uncomfortable tension between the visual design of the software and its usability and features. After the executive reshuffle six months ago, we Apple fans have been hoping that Jony Ive, now in charge of Human Interface for both hardware and software, will end this destructive conflict and bring order to the galaxy.
With iOS 7 and OS X 10.9 looming, we’re left to wonder exactly what kind of software designer Ive will turn out to be. Certainly, Apple’s software has been influenced by Ive’s hardware designs in the past—and perhaps vice versa—but this will be the first time Ive is officially in charge of the virtual bits as well as the physical ones.
We may not have much to go on when predicting Ive’s software tastes, but we do know a heck of a lot about his opinions on hardware design. Though Ive has historically spent his time at Apple keynotes in the audience rather than on the stage, he’s starred in many, many videos wherein he explains why Apple’s great new hardware product looks and works the way it does. In these videos, his message has been remarkably consistent.
Ive demands that the hardware be true to itself—its purpose, its materials, the way it looks, and the way it feels. Here’s a quote from one of Ive’s rare appearances outside an Apple press event, talking about hardware design at Apple.
When we’re designing a product, we have to look to different attributes of the product. Some of those attributes will be the materials that it’s made from and the form that’s connected to those materials. So for example, with the first iMac that we made, the primary component of that was the cathode ray tube, which was spherical. We would have an entirely different approach to designing something like that than the current iMac, which is a very thin, flat-panel display. […]
A lot of what we seem to be doing in a product like [the iPhone] is actually getting design out of the way. And I think when forms develop with that sort of reason, and they’re not just arbitrary shapes, it feels almost inevitable. It feels almost undesigned. It feels almost like, well, of course it’s that way. You know, why wouldn’t it be any other way?
Steve Jobs also subscribed to this philosophy. Witness his explanation of the design of the first iMac with an LCD display at Macworld in 2002. Here’s how Jobs described Apple’s solution to the inherent compromises (in 2002 technology) of putting an optical drive in a vertical orientation and trying to pack an entire computer behind an LCD display.
The big idea was that rather than glom these things all together and ruin them all—a lower-performance computer and a flat screen that isn’t flat anymore—why don’t we let each element be true to itself? If the screen is flat, let it be flat. If the computer wants to be horizontal, let it be horizontal.
It’s interesting that Jobs and Ive saw eye to eye on hardware design and yet seemed far apart, at least in Jobs’s final years, when it came to software design. While Jobs was reportedly a champion of rich Corinthian leather, Ive could only wince when asked about it in an interview.
I’m confident that we’ll see less leather, wood, felt, and animated reel-to-reel tapes in Apple’s future software products, but the question remains: what does it mean for an application or an OS to be true to itself?
I’m not sure how Ive will express that concept, but Loren Brichter, creator of Tweetie and Letterpress, offers one possible interpretation on an episode of the Debug podcast (starting at 6:10, and again at 1:02:26, specifically mentioning Ive). Letterpress is an exemplar of the so-called “flat design” aesthetic (and it’s also currently featured on the front page of Apple.com). Brichter designed the look and feel of Letterpress based on the things that modern graphics hardware is naturally good at doing: drawing and manipulating flat planes of mostly solid colors.
A design philosophy so tightly linked to nitty-gritty details of silicon chips and OpenGL APIs is unlikely to resonate with Ive as much as it does with a programmer like Brichter, but the end results may be similar. I expect Ive to focus on harmony between the look and feel of the software, the materials and finish of the hardware, and most importantly, the intended purpose of each specific application. (It’s kind of a shame that Apple’s already used the “Harmony” code name.) This is my message to Jony Ive and my hope for iOS 7, OS X 10.9, and each bundled application: to thine own self be true.
In a recent podcast, I rejected the idea of a lottery system for selling WWDC tickets as too random. I wanted to preserve at least some aspect of the process that rewarded the most enthusiastic Apple fans: the people who are willing to be roused from bed at 2 a.m. and rush to their computers to buy tickets; the crazy ones; the people who just want it more.
After yesterday’s experience of watching WWDC tickets sell out in what I measured to be less than 2 minutes, I’ve changed my mind. If the tickets had sold out in, say, 10 minutes (and assuming no server errors—more on that in a moment), then dedicated buyers would have been rewarded. If you couldn’t be bothered to be online until more than 10 minutes after the tickets went on sale, well, tough luck. Someone else wanted it more.
But tickets selling out in less than 2 minutes does not reward anyone’s dedication. We were all online at 10 a.m. PDT sharp, all ready to purchase, all equally dedicated. It was a de facto lottery, with an extra layer of pointless stress added on top.
Apple’s servers performed admirably…for about the first 5 seconds after tickets went on sale. After that, it was a crapshoot. Even if the tickets had sold out in an hour, it’d still effectively be a lottery if that hour was filled with server errors. You’d “win” if you happened to get through the purchase process with no errors.
An actual lottery, pre-announced, with no time pressure for entry, would be more equitable than what happened yesterday. That’s what I recommend for next year.
Many more people want to attend WWDC than the conference can accommodate. There has been no shortage of interesting suggestions for how to fix this. Broadly speaking, WWDC has not changed in decades. Apple and its developer ecosystem, on the other hand, are radically different than they were just five years ago. Something has to give.
I’ve heard many non-developers discuss the rush to get WWDC tickets as if the big draw is the keynote presentation, where Apple typically reveals new products. That is the most interesting part of the conference for the public, but it’s not why WWDC sells out so fast.
Developers flock to WWDC because it’s a rare opportunity to communicate with Apple directly, human to human. The best way to decrease the demand for WWDC tickets is for Apple to increase its communication with developers throughout the year. And by communication I don’t mean throwing documentation or even video presentations over the wall to developers; I mean staffing up for more real, personal, timely, informal contact with developers outside the court-like atmosphere of the App Store review process or the artificial scarcity of Technical Support Incidents.
Apple’s decision to release WWDC session videos to all registered developers during the conference was long overdue, but it clearly didn’t decrease demand for WWDC tickets enough to make a difference. Maybe next year, after developers have experienced their first tape-delayed WWDC, it will make a dent. But I really believe that increased, improved communication between Apple and developers on all fronts is the best long-term solution.
When Apple decided to make its own web browser back in 2001, it chose KHTML/KJS from the KDE project as the basis of its rendering engine. Apple didn’t merely “adopt” this technology; it took the source code and ran with it, hiring a bunch of smart, experienced developers and giving them the time and resources they needed to massively improve KHTML/KJS over the course of several years. Thus, WebKit was born.
In the world of open source software, this is the only legitimate way to assert “ownership” of a project: become the driving force behind the development process by contributing the most—and the best—changes. As WebKit raced ahead, Apple had little motivation to help keep KHTML in sync. The two projects had different goals and very different constraints. KDE eventually incorporated WebKit. Though KHTML development continues, WebKit has clearly left it behind.
When Google introduced its own web browser in 2008, it chose WebKit as the basis for its rendering engine. Rather than forking off its own engine based on WebKit, Google chose to participate in the existing WebKit community. At the time, Apple was clearly the big dog in the WebKit world. But just look at what happened after Google joined the party. (Data from Bitergia.)
Given these graphs, and knowing the history between Apple and Google over the past decade, one of two things seemed inevitable: either Google was going to become the new de facto “owner” of WebKit development, or it was going to create its own fork of WebKit. It turned out to be the latter. Thus, Blink was born.
Google has already proven that it has the talent, experience, and resources to develop a world-class web browser. It made its own JavaScript engine, its own multi-process architecture for stability and code isolation, and has added a huge number of improvements to WebKit itself. Now it’s taken the reins of the rendering engine too.
Where does this leave Apple? All the code in question is open-source, so Apple is free to pull improvements from Blink into WebKit. Of course, Google has little motivation to help with this effort. Furthermore, Blink is a clearly declared fork that’s likely to rapidly diverge from its WebKit origins. From Google’s press release about Blink: “[W]e anticipate that we’ll be able to remove 7 build systems and delete more than 7,000 files—comprising more than 4.5 million lines—right off the bat.” (There’s some streamlining in the works on the other side of the fence too.)
Does Apple—and the rest of the WebKit community—have the skill and capacity to continue to drive WebKit forward at a pace that matches Google’s grand plans for Blink? The easy answer is, “Of course it does! Apple created the WebKit project, and it got along fine before Google started contributing.” But I look at those graphs and wonder.
The recent history of WebKit also gives me pause. Google did not want to contribute its multi-process architecture back to the WebKit project, so Apple created its own solution: the somewhat confusingly named WebKit2. While Google chose to put the process management into the browser application, Apple baked multi-process support into the WebKit engine itself. This means that any application that uses WebKit2 gets the benefits of multi-process isolation without having to do anything special.
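As a rough illustration of why it matters that the isolation lives inside the engine rather than in each application, here’s a toy sketch in C using nothing but fork and a pipe: the “engine” runs content work in a child process, and a crash in that child leaves the embedding application alive. This is a generic process-isolation pattern with invented function names, not WebKit2’s actual design or API.

```c
/* Toy illustration of engine-managed process isolation (not WebKit2 code):
 * untrusted content work runs in a child process and reports back over a
 * pipe. If the child crashes, the embedding application keeps running. */

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical "render" step that could crash on hostile input. */
static void render_content(int out_fd, const char *html)
{
    char result[256];
    snprintf(result, sizeof(result), "rendered %zu bytes", strlen(html));
    write(out_fd, result, strlen(result) + 1);
}

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;

    pid_t child = fork();
    if (child == 0) {                       /* content process */
        close(fds[0]);
        render_content(fds[1], "<html>untrusted page</html>");
        _exit(0);
    }

    close(fds[1]);                          /* embedding (UI) process */
    char buf[256] = {0};
    read(fds[0], buf, sizeof(buf) - 1);

    int status = 0;
    waitpid(child, &status, 0);
    if (WIFSIGNALED(status))
        puts("content process crashed; the app itself survives");
    else
        printf("content process said: %s\n", buf);
    return 0;
}
```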
This all sounds great on paper, but in (several years of) practice, Google’s Chrome has proven to be far more stable and resilient in the face of misbehaving web pages than Apple’s WebKit2-based Safari. I run both browsers all day, and a week rarely goes by where I don’t find myself facing the dreaded “Webpages are not responding” dialog in Safari that invites me to reload every single open tab to resume normal operation.
Having the development talent to take control of foundational technologies is yet another aspect of corporate self-reliance. Samsung’s smartphone business currently relies on a platform developed by another company. Leveraging the work of others can save time and money, but Samsung would undoubtedly be a lot more comfortable if it had more control over the foundation of one of its most profitable product lines.
The trouble is, I don’t think Samsung has the expertise to go it alone with a hypothetical Android fork. Developing a modern OS and its associated toolchain, documentation, developer support system, app store, and so on is a huge task. Only a handful of companies in history have done it successfully on a large scale—and Samsung’s not one of them. Sure, it’s possible to staff up and build that expertise, but it’s not easy and it requires years of commitment. I’d bet against Samsung pulling it off.
Facebook Home can also be viewed through the lens of developer-based self-reliance. Facebook clearly wants to make sure it’s an important part of the future of mobile computing, but that’s not easy to do when you’re “just a website.” Home lets Facebook put itself front and center on existing Android-based smartphones.
It seems unwise for Facebook to build its mobile strategy on the back of a platform controlled by its mortal enemy, Google. But perhaps Home is just the first step of a long-term plan that will eventually lead to a Facebook fork of Android. If so, the question inevitably follows: can Facebook really take ownership of its own platform without help from Google?
Facebook has proven that it can expand its skill set. Over the past few years, it’s been hiring talented designers and acquiring companies with proven design chops. Facebook Home is the first result of those efforts, and by all accounts, the user interface exhibits a level of polish more commonly associated with Apple than Facebook.
Still, a lock screen replacement is a far cry from a full OS. Maybe Facebook just plans to ride the bear, relying on Google to do the grunt work of maintaining and advancing the platform for as long as it can, while Facebook slowly takes over an increasing amount of the user experience.
Some people wonder how Google can possibly have any power in the Android ecosystem if the source code is free. Facebook Home has been cited as an example of Google’s ineffectualness. Look at how one of Google’s fiercest enemies has played it for a fool, they say. Google did all the hard work, then Facebook came in at the last minute and co-opted it all for its own purposes.
But look again at the graphs above. Now imagine similar graphs for the Android source code. Any company with Android-based products that wants to be truly free from Google’s control has to be prepared—and able—to match Google’s output. Operating systems don’t write themselves; platforms don’t maintain themselves; developers need tools and support; technology marches on. It’s not enough to just fix bugs and support new hardware. To succeed with an Android fork, a company has to drive development in the same way that Apple did when it spawned WebKit from KHTML, just as Google is doing as it forks Blink from WebKit.
This is not a real-time strategy game. Companies like Samsung and Facebook can’t just mine for more resources and build new developer barracks. Building up expertise in a new domain takes years of concerted effort—and a little bit of luck on the hiring front doesn’t hurt, either.
Facebook may already be a few years into that process. Its recent acquisition of the mysterious, possibly-OS-related startup osmeta provides another data point. Samsung, meanwhile, has just joined an exploratory project to develop a new web rendering engine.
Google certainly has its own share of problems, but what may save it in the end is its proven ability to tackle ambitious software projects and succeed. The challenge set before Facebook, Samsung, and other pretenders to the Android throne is clear. And as a wise man once said, you come at the king, you best not miss.
Technology can be a surprisingly ideological topic. In politics, the spectrum of belief is right on the surface: conservative/liberal, right/left. In tech, that same spectrum exists, but it’s rarely discussed. What’s more, unlike political beliefs, I’m not sure most people are even aware of their own core ideas about technology.
Anyone who’s read the past three months of posts on this site could be forgiven for pegging me as a technological ideologue. Though I draw the line at outright dogmatism, railing against technological conservatism has indeed been a recurring theme of mine.
To illustrate the concept, I’ll use myself as an example. Back in the early days of the operating system now known as OS X, I was not happy that the user-customizable Apple menu from classic Mac OS had been replaced with an anemic, non-customizable incarnation. In classic Mac OS, the Apple menu was how I quickly found and launched commonly used applications and Desk Accessories. Apple removed this feature in Mac OS X and replaced it with…nothing, really. The Dock attempted to cover some of the same bases, but the Apple menu could comfortably hold many more items, and in a much more compact form.
In this situation, a technological-conservative position is that Mac OS X needs something like the classic customizable Apple menu. It wouldn’t necessarily have to be an Apple icon in the upper-left corner of the screen. It could be a hierarchical menu spawned from the Dock or another screen corner. (This was actually a popular request back in the days before the Dock supported any form of hierarchy.) The old OS had a feature like this, and it was useful. The new OS needs a similar feature, or it will be less useful.
Beneath what seems like a reasonable feature request lurks the heart of technological conservatism: what was and is always shall be.
In my review of the public beta, I was self-aware enough to moderate my position, merely asking for “some sort of mechanism that equals or betters the functional merits of the Apple Menu.” But what my conservatism prevented me from seeing was that things like LaunchBar, Quicksilver, and (later) Spotlight would provide similar functionality in an entirely different way, and with far more efficiency and elegance.
No one wants to think of themselves as a Luddite, which is part of what makes technological conservatism so insidious. It can color the thinking of the nerdiest among us, even as we use the latest hardware and software and keep up with all the important tech news. The certainty of our own tech savvy can blind us to future possibilities and lead us to reject anything that deviates from the status quo. We are not immune.
Consider four of my recent posts, each of which, in its own way, pressed uncomfortably against the dark matter of technological conservatism among tech nerds.
In response to The Case for a True Mac Pro Successor, a few readers insisted that there’s no longer anything technically interesting about high-performance personal computers. A new Mac Pro would just be a pair of the latest Xeons, some ECC RAM, a few SSDs and/or hard drives, and a big, hot video card.
That’s what the Mac Pro has been, so that’s what it will always be, right? And there it is.
Even explicitly listing several technologies that debuted on Apple’s high-end Macs did not derail the people whose feedback was based on the premise that the Mac Pro will never be anything that it is not already. This assumption is counter to the entire purpose of a product like the Mac Pro. It’s meant to push the envelope, to seek out new frontiers of computing power.
In Don’t Stop Thinking About Tomorrow, I tackled technological conservatism head on—though without naming it—by addressing the surprisingly widespread notion that the iPhone 5 is “too light.” This criticism leans heavily on the seductive view of the present as an endpoint, rather than just another step in a journey towards something radically different. (For a long time, I avoided writing the post you're reading now because it felt like a retread of this older one. But I eventually decided that these ideas bear repeating. Do not be surprised when both posts arrive at a similar conclusion.)
Fear of a WebKit Planet was a celebration of what turned out to be the tail end of peacetime in the browser wars. (Well, maybe it was really just a cold war turning hot again.) The post addressed the fear that “WebKit everywhere” would lead us into another dark age of web development. Even before Google’s fork of WebKit, I noted that WebKit was a lot more like Linux than IE6, and that “the products built with WebKit are as varied as those built with Linux.” Pondering that variety, the idea of a homogenous, stagnating WebKit monoculture seemed extremely unlikely. I didn’t have to wait long for confirmation.
Finally, the point of Annoyance-Driven Development was completely blotted out in the minds of a few readers by the audacious suggestion that a beloved service remains ripe for further improvement. This post revealed technological conservatism in its most virulent form: not only is the current state of affairs satisfactory, but wanting more is evidence of a character flaw, perhaps even a moral failing.
I find this idea absurd in its present-day context, and numerous analogous historical contexts immediately spring to mind as a means to persuade those who don’t. The trouble is, I can also imagine those same people taking the same technological-conservative positions in all the historical contexts as well. How far back in time do I have to go before it finally clicks?
Poor baby, you have to wait a whole day after a new episode airs on cable before it magically appears on your silent, $99, network-connected TV box.
Walking to the mailbox, unsealing an envelope, and sticking a disc into a slot under your TV is too much work, is it? Now you need to be able to start watching a movie without even picking your lazy ass up off the couch?
Oh no! There are rooms in your house where you don’t have instant access to the sum of all human knowledge! And running wires is just so hard, isn’t it? Those few cents for zip ties to keep yourself from tripping over the wires will obviously break the bank. The prince demands radio-based networking everywhere in his castle!
I guess it’s just too much work to walk out the front door five steps, pick up the newspaper that was delivered while you slept, and then bring it back to your kitchen table each morning to read the news of the world. Now you want it to appear instantly on your computer screen. OK, Mr. Fancypants Bigshot.
Yeah, pressing seven buttons in sequence is so much work. You need a faster way to call someone. Pressing just one button instead will be such a big change in your life, won’t it? You’ll finally have time to write that novel.
You’ve got a way to send a piece of paper from your home to anywhere in the entire country for literal pocket change, but that’s just too much work for you. You need to talk to someone right now, hearing an actual voice as if it’s in the same room instead of miles away.
You are warmed by the sun for nearly all your waking hours, but I guess that’s not good enough for you. No, you’re so important that you need to have light and heat at night as well. What you need, you precious snowflake, is a miniature artificial sun that’s under your control—obviously!
At some point, we’re all guilty of looking down upon things that have changed since our own formative years, but this attitude has no place in technology criticism—and it’s absolute poison for anyone trying to create great tech products and services. Not all new ideas represent progress. (Do I really need to spell this out? It seems so.) But ideas should not be rejected based merely on a lifetime of having lived without them. Today’s “unnecessary” frill is tomorrow’s baseline.
As the famous saying goes, the reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
Every great scientific and engineering triumph in human history has been a slap in the face of technological conservatism—the little ones, perhaps even more so. And yet each new step forward, no matter what the size, is inevitably met with a fresh crop of familiar objections. “Just look at what you have already, and it’s still not enough for you. Where does it end?”
It doesn’t. It never ends. Keep moving or get out of the way.
The xMac has been back in the news lately—the idea, if not necessarily the name. Whether it’s called a “Mac minitower” or a “Mac Pro mini,” we long-suffering Mac Pro fans are all looking forward to the “really great” thing Tim Cook told us to expect this year.
What almost no one expects is another straightforward revision of the existing Mac Pro, a gargantuan tower-style computer built with server-grade CPUs and RAM that pushes the limits of computing performance. Very few people want that kind of computer these days, and even fewer people actually need one.
On paper, the Mac Pro may no longer be a viable product, but it would be a mistake for Apple to abandon the concept that it embodies. Like the Power Mac before it, the Mac Pro was designed to be the most powerful personal computer Apple knows how to make. That goal should be maintained, even as the individual products that aim to achieve it evolve.
Why is this important? If Apple produces a new Mac that’s faster than any of its current models by leaps and bounds, will people suddenly buy it in huge numbers, choosing it over the laptops, tablets, and phones they prefer today? No. Is it because a very fast Mac can be sold for such a high price that its huge margins will make its profits significant, despite the expected low number of sales? No, that won’t happen either. Is a new, insanely fast Mac even guaranteed to make any money at all for Apple? Sadly, no.
So why bother creating a true Mac Pro successor at all? Good riddance, right?
In the automobile industry, there’s what’s known as a “halo car.” Though you may not know the term, you surely know a few examples. The Corvette is GM’s halo car. Chrysler has the Viper.
The vast, vast majority of people who buy a Chrysler car get something other than a Viper. The same goes for GM buyers and the Corvette. These cars are expensive to develop and maintain. Due to the low sales volumes, most halo cars do not make money for car makers. When Chrysler was recovering from bankruptcy in 2010, it considered selling the Viper product line.
Why wouldn’t a company want to get a low-volume, money-losing product line off its books, bankruptcy or no bankruptcy? If you can’t think of a reason, you may be what is known in the auto industry as a “bean counter.” Luckily for Viper fans, Chrysler had a few car guys left. Here’s a passage from Car and Driver’s preview of the 2013 SRT Viper—the Viper that almost didn’t exist.
“I knew the very last thing Chrysler needed during our bankruptcy was a 600-hp sports car,” says Ralph Gilles, the 42-year-old president and CEO of SRT and senior V-P of Chrysler Product Design. “But I’m an optimist. I wanted to fight for a chance. We discussed it for a year. I got Sergio [Marchionne, Chrysler CEO] to drive one of the last Vipers. He jumped in and disappeared to God knows where. He came back 15 minutes later and said, ‘Ralph, that’s a lot of work.’ He meant it was a brutal car. But he didn’t say, ‘Good riddance,’ or anything. Then in late ’09, I showed him a video of a Viper breaking the Nürburgring record. He watched all of it and was impressed. I gave him a list of the supercars the Viper had put away.”
The car guys won; Chrysler chose to keep the Viper.
Apple is not yet in bankruptcy, but every other reason that Chrysler should have run screaming from the Viper applies equally to the Mac Pro (except perhaps the lack of profitability; Apple doesn’t share that information about individual Mac lines). To understand Chrysler’s decision, let’s consider why halo cars exist at all.
One reason is prestige. Though few people can afford to buy a Viper, its mere existence makes the affordable cars from the same manufacturer that have even the mildest bit of sporting pretension slightly more attractive to buyers. Yes, this makes little logical sense, but it’s a very real phenomenon. (There’s a reason the term “halo effect” reportedly dates back to at least 1938.)
Halo cars also push car makers to their limits. Engineering teams must use all their powers and all their skills to create the very best car possible. This exercise inevitably leads to the exploration of new technologies. The failed experiments are forgotten, but the winners eventually find their way into more prosaic cars from the same manufacturer.
The Mac Pro is Apple’s halo car. It’s a chance for Apple to make the fastest, most powerful computer it can, besting its own past efforts and the efforts of its competitors, year after year. This is Apple’s space program, its moonshot. It’s a venue for new technologies to be explored.
Consider Larrabee, Intel’s project to create a massively multi-core x86-based GPU. Rumor has it that Apple was working on integrating the technology into a Mac Pro. Intel eventually scuttled the project, but consider what would have happened if it had taken off, reshaping the GPU market in the process. Apple would have had a head start on integrating the technology into its OS and application frameworks. Its drivers would have had their kinks worked out. When it became feasible to incorporate Larrabee technology into the rest of its product line, Apple would have been ready.
I intentionally chose a (rumored) failure as an example because that’s part of the point. Better to experiment on your niche product than your high-volume money-maker. There are plenty of success stories as well.
Think of all the technologies that debuted on Apple’s high-end Macs: hard drives, color, FireWire, multiple CPUs, multi-core CPUs, 64-bit CPUs, programmable GPUs, real-time video processing. All these features had a chance to get shaken out on machines that most people don’t buy. When they trickled down to “normal” Macs, Apple had enough experience under its belt to implement them competently.
As for prestige, perhaps you think the existence of the Mac Pro has precisely zero influence on the average MacBook buyer. The existence of the Corvette probably doesn’t affect the behavior of Chevy Malibu buyers either. But things change as you creep up the respective product lines, edging closer to the high end. The Titanium PowerBook G4 was all the more impressive for incorporating the CPU previously only available on Apple’s “supercomputer” Power Mac G4.
I used the present tense earlier when I said that the Mac Pro is Apple’s halo car, but that hasn’t actually been true for a while. By allowing the Mac Pro line to languish for so long, Apple has negated any possible prestige effect and abandoned an arena where it could safely push the limits of PC performance.
I know what you're thinking. That was then, this is now. The age of the high-end PC is over! But halo cars are even more absurd than high-end PCs. There are some pretty hard limits on car performance. Anything that carries a human around can only pull so many Gs before its fragile cargo gives up the ghost.
Compare this to computing power, which has no apparent useful limit. While car performance has increased by perhaps a factor of 5 in the past 50 years (and that's being generous), humanity has absorbed a million-fold increase in computing power during that same period without sating its appetite for more. (And that factor gets quite a bit larger if I add GPUs to the mix.) Computers are not “fast enough.” They weren’t when they were invented, nor when they got 10x faster, nor when they got 100,000x faster still. They never will be.
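For a rough sense of scale (my own back-of-the-envelope arithmetic, not a figure from any source): a million-fold increase over fifty years works out to a doubling of computing power roughly every two and a half years, right in line with the historical Moore’s-law cadence.

\[
10^6 \approx 2^{20}, \qquad \frac{50\ \text{years}}{20\ \text{doublings}} = 2.5\ \text{years per doubling}
\]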
To be clear, absolute performance is not the only worthy technological frontier. Apple continues to push the limits on many other fronts: miniaturization, power efficiency, manufacturing processes, materials, and, of course, user experience. The same is true for car manufacturing, where fuel efficiency, safety, reliability, and even comfort are arguably more important axes of innovation than absolute performance (the limits of which can’t be legally explored on public roads anyway). And yet there they all are, those absurd halo cars, laughing in the face of logic.
This brings us to the final, and perhaps most important reason that halo cars exist, and that the Mac Pro—or its spiritual equivalent—should continue to exist. Let’s talk about the Lexus LFA, a halo car developed by Toyota over the course of ten years. (Lexus is Toyota’s luxury nameplate.) When the LFA was finally released in 2010, it sold for around $400,000. A year later, only 90 LFAs had been sold. At the end of 2012, production stopped, as planned, after 500 cars.
Those numbers should make any bean counter weak in the knees. The LFA is a failure in nearly every objective measure—including, I might add, absolute performance, where it’s only about mid-pack among modern supercars.
The explanation for the apparent insanity of this product is actually very simple. Akio Toyoda, the CEO of Toyota, loves fast cars. He fucking loves them! That’s it. That’s the big reason. It’s why the biggest car maker in the world spent ten long years and well over a billion dollars developing a car that almost no one will ever own—or even know about, for that matter. It explains why Toyota scrapped the LFA’s frame design and essentially started over with carbon fiber midway through the development process. (Talk about a Steve Jobs move.)
And perhaps it also explains why the famously cantankerous Jeremy Clarkson of Top Gear, a man who has driven nearly every supercar produced in the last several decades, recently called the LFA “the best car I’ve ever driven.”
I’m not here to convince you that the LFA is a good car, that you should trust Jeremy Clarkson’s opinions on cars (or anything, really), or that you should buy a Mac Pro. All the common reasons you’ve heard for Apple to abandon the market for high-end PCs are logically and financially sound. They also don’t matter.
Apple should keep pushing the limits of PC performance because it’s a company that loves personal computers. If Apple can’t get on board with that, then all the other completely valid, practical reasons to keep chasing those demons at the high end are irrelevant. The spiritual battle will have already been lost.
I must confess, I was neither surprised nor disturbed by last month’s announcement that the Opera web browser was switching to the WebKit rendering engine. But perhaps I’m in the minority among geeks on this topic.
The anxiety about the possibility of a “WebKit monoculture” is based on past events that many of us remember all too well. Someday, starry-eyed young web developers may ask us, “You fought in the Web Standards Wars?” (Yes, I was once a Zeldi Knight, the same as your father.) In the end, we won.
As someone whose memory of perceived past technological betrayals and injustices is so keen that I still find myself unwilling to have a Microsoft game console in the house, my lack of anxiety about this move may seem incongruous, even hypocritical. I am open to the possibility that I’ll be proven wrong in time, but here’s how I see it today.
As much as I despised Internet Explorer for Windows, and what its simultaneous stagnation and dominance did to the web, I don’t think it’s the correct historical analog in this case. WebKit is not a web browser. It’s not even a product. It’s much more analogous to Linux, an open-source project that any company or individual is free to build on and enhance.
Linux, once a personal project created just for fun, now dominates the data center. It’s also in phones, tablets, game consoles, set-top boxes, and even (sometimes) PCs.
Is there a “Linux monoculture?” In some ways, yes. These days, it’s surprising if a startup creates a hardware product sophisticated enough to need an operating system and that operating system isn’t Linux. And let’s not forget that Linux has all but wiped out the proprietary Unix-based operating systems that once ruled the high end.
Linux is the canonical open source success story. It succeeded for reasons that are now so boring they’re accepted as common sense. There’s still plenty of room for variation and innovation, but now all the significant achievements are shared with the world. If a company improves Linux, it’s not just improving its own products; it’s making Linux better for everyone. Linux let us “put all the wood behind one arrowhead” (to borrow one of Scott McNealy’s favorite sayings), but on a global—instead of merely a corporate—scale. (Funny how things turn out, eh, Scott?) Linux solved the Unix problem—for everyone.
WebKit fills a similar role. Thanks to WebKit, anyone who needs a world-class web rendering engine can get one—for free. And the products built with WebKit are as varied as those built with Linux. Even products in the same category vary wildly. Chrome and Safari, for example, have different features, different extension mechanisms, different JavaScript engines, different process models, and very different user interfaces. Opera adds yet more variation. And these are all just standalone web browsers. Consider all the embedded applications of WebKit, from game consoles to theme-park kiosks, and the idea of a homogenous, stagnating WebKit monoculture seems even more unlikely.
I haven’t forgotten the past. A single, crappy web browser coming to dominate the market would be just as terrible today as it was in the dark days of IE6. But WebKit is not a browser. Like Linux, it’s an enabling technology. Like Linux, it’s free, open-source, and therefore beyond the control of any single entity.
Web rendering engines are extremely complex. There are very few companies that have the expertise to create and maintain one on their own. (Again, the similarity to Linux is strong here.) I’m glad all those developers at Apple and Google are working on improving the same open-source web rendering engine, rather than dividing their efforts between two totally different, proprietary engines. Adding Opera’s developers can only make things better. The proliferation of WebKit will be a rising tide that lifts all boats.
I’ve been watching House of Cards, the new TV series available exclusively on Netflix, which reportedly outbid HBO, Showtime, and others for the rights to the show. This is part of Netflix’s ongoing effort to “become HBO faster than HBO can become us.” That quote, from Netflix’s chief content officer Ted Sarandos, neatly draws the battle lines between the old and new worlds of TV.
Once the upstart, HBO now finds itself playing catch-up with Netflix in terms of pricing and distribution. Netflix, meanwhile, is shelling out its own money to try to overcome its historic inability to offer the very best content.
I’m not ready to predict a winner in this race—though the two-year wait for HBO to add AirPlay support to its HBO Go iOS app does not inspire confidence in the old guard. I’m more interested in what Netflix offers that HBO doesn’t.
The answer is obvious to anyone who has used the service. For a fixed, low monthly fee, Netflix lets customers watch TV shows and movies whenever they want, wherever they want, on phones, tablets, “smart” TVs, game consoles, streaming media boxes, Blu-ray players, even personal computers—remember those?
Netflix’s decision to release the entire first season of House of Cards all at once is in keeping with its disregard for the traditional limitations of TV. This is how products and services endear themselves to consumers: remove everything that gets in the way of what we want. We want to be entertained. We don’t want to arrange our schedules around your TV show. We don’t want to watch commercials. We don’t want to be forced to use a particular device. We just want it the way we want it.
But even Netflix has been unable to escape some of the trappings of the days of video past. A TV series like House of Cards that’s released a season at a time naturally lends itself to multi-episode viewing sessions. But as I recently tweeted, watching a minute and a half of opening credits before each episode can get tiresome.
This position proved somewhat controversial on Twitter. Hard-working people deserve credit, some said. Others said that the credits set the mood for the show. Some people just plain liked the credits, with no qualifiers.
But there were also people who agreed with me, people who routinely skip the opening credits (often lamenting the limited content-skipping tools provided by their chosen Netflix viewing device). One person even read my tweet while killing time as the House of Cards credits ran in another browser tab.
To be fair to Netflix, the existence of opening credits may not be entirely under its control, even when it’s paying for a series itself, given existing union contracts for actors, directors, writers, etc. But getting bogged down in the details of this debate misses the point.
Yes, opening credits are a longstanding part of traditional TV—but so were fixed broadcast schedules, commercial breaks, and viewing all TV shows on a television set. As the delivery mechanism changes, the content itself must also adapt to its changing context.
Not everyone binges on House of Cards four episodes at a time, but the people who do really love Netflix for making it possible. Every time I fast-forward through those 90-second opening credits (made more difficult by the occasional variable-length pre-credits scene), I get the opposite feeling about Netflix. It’s an unhappy reminder of the old world of TV. No explanation of contractual obligations or artistic credit is going to convince me that I’m mistaken about my own desires. I just want it the way I want it!
This may sound comically selfish, but true innovation comes from embracing this sentiment, not fighting it. For companies looking to get the best bang for their buck out of technology, this is the way forward. Find out what’s annoying the people you want to sell to. Question the assumptions of your business. Give people what they want and they will beat a path to your door.
This brings us, perhaps surprisingly, to the PlayStation 4, the newly announced successor to the six-year-old PlayStation 3. Six years is an eternity in the world of technology. For the first few decades of console gaming, each new hardware platform surpassed the capabilities of its predecessor by leaps and bounds. There was little question about what to do with technology. More, better, faster was an end in and of itself. If you build it, the games will come.
The Wii was the first console to break that cycle, directing a large chunk of its innovation toward a novel control scheme, sacrificing raw computing power to do so. It worked. The Wii became the best-selling console of its generation, and its competitors soon followed with non-traditional control schemes of their own.
Based on what’s been announced about the PlayStation 4 so far, it seems that Sony has learned at least some of the lessons of the Wii. While the PS4 will indeed be substantially more powerful than the PS3 (and embarrassingly more powerful than its competitor from Nintendo, the Wii U), Sony has not chosen to sink millions into developing a radical new CPU architecture like the PS3’s Cell processor in the hopes that raw MIPS will inexorably lead to market dominance.
Instead, Sony has built the PS4 using a nicely balanced arrangement of existing technology. All the time, money, and energy that would have otherwise gone toward a true Cell successor has been refocused on ensuring that the PS4 does things that make Sony’s customers happy.
Game developers are one kind of customer. There may not be many of them relative to the number of people Sony hopes will buy its products at retail, but developers can make or break a game console by choosing which games to develop for which platform, and when. And developers sure weren’t happy with the PS3, which was unlike any piece of gaming hardware that had come before it. Thanks to its familiar combination of an x86 CPU and an AMD GPU, the PS4 will be much easier to write games for.
Sony feels gamers’ pain as well. The PS4 appears to have been designed by identifying the parts of the PS3 experience that are annoying and deploying technology to eliminate them. Deciding to play a game and being delayed by 30 minutes of mandatory system updates is not fun, so Sony added a dedicated processor to handle background downloads, and a low-power state for the entire system to allow this to happen unattended. Resuming an interrupted gaming session only to find yourself back at the last checkpoint in the game is not fun, so Sony promises the ability to suspend a game’s state in its entirety and resume later at the instant you left off. Waiting an hour for a multi-gigabyte game to download before you can start playing it is not fun, so the PS4 will allow games to be played as they download.
Sony is providing new features as well. A dedicated video encoder allows gameplay to be recorded in real time with no loss of performance, and a “share” button on the controller allows that video to be uploaded (in the background, naturally), without leaving the game. That same video encoding hardware plus Sony’s game-focused social network will allow players to invite their friends to watch them play in real time. Sony even promises the ability to play games remotely. If a player is having trouble with some part of a game, he could invite one of his friends to remotely assume control for a bit to help out.
Now, anyone who remembers Sony’s promises about the PlayStation 3 knows all too well how far they can be from the eventual reality. I’m very skeptical about Sony’s ability to deliver all the announced PlayStation 4 capabilities in a competent and timely manner. And then there are all the areas where the interests of gamers and game developers may conflict (e.g., the market for used games).
But when I look at the PlayStation 4 hardware itself, I see a shrewd acknowledgement of the true nature of innovation. It doesn’t cost much to add dedicated silicon to handle background network transfers and video encoding and decoding, and it sure isn’t sexy, technologically speaking. Low-power sleep states, instant suspend/resume, progressive downloads, and remote play are all features that are a giant pain to implement and do precisely nothing to make games look, sound, or perform better. But it’s these things, not the number of CPU/GPU cores or the amount of RAM, that really have a chance of making the PS4 gaming experience stand head and shoulders above what has come before.
We nerds love technology for its own sake. Indeed, there’s always something to be gained by advancing the state of the art and providing more of a good thing. But the most profound leaps are often the result of applying technology to historically underserved areas. By all means, make everything better and faster, but also find the things that seem like minor annoyances, the things that everyone just accepts as necessary evils. Go after those things and you’ll really make people love you. Accentuate the positive. Eliminate the negative.
I didn’t just lead Apple to a record quarterly profit of $13.1 billion on sales of $54.5 billion, so I don’t expect to be consulted. But were Tim to ask me, here’s what I would tell him Apple should do in 2013—in broad strokes, and in no particular order. (We’ve got people to work out the details—right, Tim?) This is not a fantasy wish list. These are things I think Apple can and should do this year. This list is not exhaustive.
Ship OS X 10.9. Last year, Apple announced OS X’s move to an annual release cycle. Lion was released in 2011; Mountain Lion followed in 2012. Two points may make a line, but it’ll take three points to fulfill this promise. As tired as I get just thinking about writing another OS X review, it’s time to do it all over again. (Big cat name optional.)
Ship iOS 7. Apple’s mobile platform started out way ahead of the competition, and it’s stayed ahead thanks to relentless iteration: six releases in six years. Apple can’t let up now. What’s left to do in iOS? Plenty.
Diversify the iPhone product line. There needs to be more than one iPhone. Selling models from previous years at a discount is no longer good enough. Apple can make more attractive phones at similar prices if they’re purpose-built using modern parts and processes. Margins may go down, but sales will go up. Apple has done this before, with the Mac, the iPod, and now the iPad. It’s the iPhone’s turn. Cheaper, smaller, bigger, or multiple combinations of these attributes—it doesn’t matter. Write it down, Tim: more new iPhones in 2013.
Keep the iPad on track. Ship some new, slimmer, faster, lighter iPads, just like everyone expects. Cheaper wouldn’t hurt either. The mini was a great start. Now ditch the iPad 2 and make a new model to fill that role, if necessary. (A larger, more powerful “iPad Pro” would also be great, but this year is probably too soon.)
Introduce more, better Retina Macs. The first Retina MacBook Pro had a GPU that could barely handle all the pixels it was asked to push. Burn-in was also an issue. This year, the available CPU, GPU, and display options should make the existing 13- and 15-inch Retina MacBook Pros look like the first-generation MacBook Air: technical marvels, but also compromises that we’ll soon be happy to forget. Oh, and a Retina display on a non-laptop Mac would be nice too.
Make Messages work correctly. Apple’s iMessage service is rapidly approaching MobileMe levels of undesirable brand association. Fix it in 2013, or be ready for an iCloud-like rebrand/relaunch in 2014. Speaking of which…
Make iCloud better. iCloud beats the pants off MobileMe, but it’s still got plenty of room for improvement. Google should be the reliability and performance target. Decide which technologies and APIs under the giant umbrella term “iCloud” are working well, and fix or deprecate the ones that are not.
Resurrect iLife and iWork. Both application suites are in desperate need of some serious attention. The last new release of iLife was two years ago; iWork hasn’t had a major revision in four years. People still use these apps. Abandoning them is not an option (yet).
Reassure Mac Pro lovers. Fans of the Mac Pro did not get the new machine they wanted in 2012. After WWDC 2012, Tim Cook said, “Although we didn’t have a chance to talk about a new Mac Pro at today’s event, don’t worry as we’re working on something really great for later next year.” As I’ve frequently noted, this statement is not a promise for a new Mac Pro, but merely for something that customers disappointed in the stagnant Mac Pro will consider “really great.” 2013 has not gotten off to a good start on that front, but the year is young. Wow me, Tim.
Do something about TV. After years of steadily ramping up its rhetoric, it’s time for Apple to put up or shut up about TV. Make an actual Apple TV set; allow third-party apps on a massively revised Apple TV box; buy Netflix; whatever—you decide, Tim. I agree, it’s a hard problem and a tough market. But it’s time for action.
Should be a cinch, right? Too bad there are only two items on this list that will help Apple’s stock price recover from its calamitous 35% drop over the past four months. Uneasy lies the head that wears a crown.
The highlight of Nintendo’s video presentation this week was the announcement of a Wii U remake of The Legend of Zelda: The Wind Waker, a GameCube game originally released in the US a decade ago. As a dedicated Zelda fan, my reaction was predictably enthusiastic.
Elsewhere on the net, fretting about the content and appearance of the game started immediately. It made me think about why I’m such a fan of video game remakes while my default position on movie remakes is to turn up my nose at them. How can I hate the Star Wars special editions but love the HD remakes of Ico and Shadow of the Colossus? I think both sentiments have the same underlying motivation: I don’t want to lose the things I love.
In the case of Star Wars, I’m frustrated not so much by the existence of alternate versions of the movies, but by the disappearance of the original theatrical releases. I discussed this at length in episode 45 of the Hypercritical podcast (the topic starts at 35:57), but here’s a summary: Artists are often not the best stewards of their own work. Once an artistic creation reaches a certain level of cultural significance, it belongs to society at large more than it belongs to the creators—philosophically, if not legally. Cultural touchstones belong to all of us, and they deserve to be treasured and preserved, regardless of the creator’s wishes.
Video games are an odd art form in many ways, one of which is that they’re extremely dependent on their delivery platform. More established kinds of art like paintings, books, video, and audio recordings have all proven resilient to changes in technology. The novels of Charles Dickens did not disappear as book technology evolved. Most filmmakers have been vigilant about preserving and (eventually) digitizing movies that were shot on film. (Again, Star Wars stands out as a sad exception.) All these art forms have a clear path to move forward in time; they’ll always be with us.
Video games are a different story. Historically, video game platform owners have been unwilling or unable to preserve the works of art originally delivered on their platforms. When the Wii, PS3, and Xbox 360 all launched with some ability to play games made for the consoles they replaced, I was optimistic about the future. But the PS3’s ability to play PS2 games rapidly diminished, first losing dedicated hardware support and then disappearing completely. Similarly, the latest iteration of the Wii can’t play GameCube games. Hoarding and preserving console launch hardware started to make a lot more sense.
Today, Nintendo sells its own emulated versions of many of its classic games. Presumably this will extend to Wii U games when that hardware is eventually phased out. But I have little faith in Nintendo’s motivation to preserve its past beyond its function as an income source. And let’s not forget all the important video game makers that have gone out of business—or been acquired and re-acquired so many times that they might as well have.
Again, as in the case of Star Wars, it has fallen to the fans to preserve classic games, sometimes by preserving the original hardware, but most often through emulation. This doesn’t just apply to video games that are 30 years old. Games are becoming inaccessible so rapidly that even platforms created just a handful of years ago already have active emulation projects.
That’s the fear that HD remakes tap into. Though there are many things that can go wrong when an older video game is ported and “improved” for release on a newer hardware platform, the risks are vastly outweighed in my mind by the playable-lifespan extension that a remake bestows on a beloved game.
Right now, I can play Wind Waker on my GameCube and my Wii. Newer Wiis (and the Wii U) don’t play GameCube games. Both the GameCube and the Wii send their video signal over a component cable, at best. I suspect TVs will stop shipping with component video inputs in a few years, which will leave me at the mercy of video converter boxes. Eventually, no matter how well I care for them, my 12-year-old GameCube and my 7-year-old Wii will break. (The optical drives will probably go first.) But when that happens, my Wii U, with its HDMI connection and 2012 manufacture date, will probably still be working. Time extended!
Alas, things get even more complicated when you consider not just the software but also the controller hardware and the details of the display device. I’ve still got my N64 in the attic, but my son experienced Ocarina of Time by playing the GameCube port on the Wii connected to a plasma HDTV. Was it the same as playing the original using an N64 controller and an old CRT television? Well, not quite. This problem only gets worse as the hardware gets more novel.
In the end, I’m content to at least preserve the software in some playable form, even if the controller and display are slightly different. Just doing this is turning out to be enough of a fight. I hope my purchase of the Wii U remake of Wind Waker will help convince Nintendo and other game makers that older titles are valued by gamers long past the death of their original platforms.
I’m also a little afraid that remakes like this will delay or prevent the original version of the game from appearing in an officially sanctioned emulated form. But for now, I’ll take what I can get. I’m glad my son has already played the original GameCube version of Wind Waker—twice. I’m also excited to replay Wind Waker with him on the Wii U in HD. It won’t be exactly the same as it was, but I think it’ll still be great. Most importantly, I hope he can share both of these experiences with his children someday.
Watching the CES coverage out of the corner of my Internet eye, I’m reminded of exactly how bad most hardware makers are at writing software. Mat Honan summed it up nicely last month: No One Uses Smart TV Internet Because It Sucks. Amen to that. But it’s not just TVs. Who really likes the “software” in their car, microwave, or Blu-ray player?
All of this software is terrible in the same handful of ways. It’s buggy, unresponsive, and difficult to use. I actually think the second sin is the worst one, especially when it comes to appliances and consumer electronics. Dials and knobs respond to your touch right now. Anything that wants to replace them had better also do so. But just try finding and watching a YouTube video on your TV and see how far you get before your brain checks out. It’s faster to get up off the couch and walk to a computer—or, you know, whip out your iPhone.
The companies out there that know how to make decent software have been steadily eating their way into and through markets previously dominated by the hardware guys. Apple with music players, TiVo with video recording, even Microsoft with its decade-old Xbox Live service, which continues to embarrass the far weaker offerings from Sony and Nintendo. (And, yes, iOS is embarrassing all three console makers.)
Companies that make physical products that have only recently started sprouting sophisticated software features all find themselves in a similar bind. The obvious solution is to just make better software. If only. I have little faith that these companies are willing and able to transform themselves in the radical ways required to produce and support great software. Here’s what I see happening instead.
The long-term success of these companies now hinges on how difficult it is to create the hardware product that’s wrapped around their crappy software. Car makers, for example, are probably safe from software upstarts (if not from other car makers). The barrier to entry in the auto industry is immense, and the remaining successful car makers have deep expertise in their craft. If Tesla succeeds, for example, it won’t be because MyFord Touch is slow and unintuitive.
TV makers, on the other hand, should be worried. Most of the hardware they make is already a component of the industries dominated by the software guys. The proliferation of “smart” TV features is fueled by the fear of becoming a mere component supplier. Unfortunately for the companies involved, the terrible quality of these features may actually end up hastening the transition from “TV maker” to “panel maker.”
At this point, the only thing keeping the hounds at bay is the reality that a TV with non-crappy software requires a much deeper cooperation with content providers. So while Apple can whip up a TV running iOS in its sleep, giving that software something useful to do requires talking to content owners—and possibly also cable companies and ISPs, who are even more keen to keep the content owners in their camp, and who have barriers to entry that the auto industry would die for. And this is before even considering the fragmentation of TV and Internet access in the US and around the world.
The hardware barriers that protect ISPs and car makers will probably hold up (much to our detriment, in the case of US ISPs), but I think the TV content owners will eventually come around—or be routed around. When that happens, the market for formerly “software-neutral” hardware devices like TVs will rapidly follow the same path as the mobile phone market. If it happens soon enough, it may even be the same familiar handful of companies that gobble up all the losers: Apple, Samsung, Google, maybe even Microsoft.
Until then, we’ll all just have to suffer through—or find a way to ignore—this avalanche of software that’s slowly making our a/v equipment, appliances, and vehicles more annoying to use.
This article originally appeared in issue 2 of The Magazine on October 25, 2012.
Journey for the PlayStation 3 is the best video game I’ve played in a long time. I’m going to use it to illustrate a larger point about technology, and in doing so, I’m going to spoil the game. If you have any interest in video games at all, I strongly recommend that you do not read any further until you’ve played it.
Online discourse can be harsh. Nowhere is this more true than in multiplayer video games. It’s nearly impossible to play a popular online game without being exposed to — or worse, being the target of — the most vile kinds of behaviors and insults, including sexist, racist, and homophobic slurs.
This problem is not confined to video games. Even something as seemingly benign as a comment form on a popular technology blog can trigger profoundly bad behavior. A well-known Penny Arcade comic sums up the phenomenon nicely in the form of John Gabriel’s Greater Internet Fuckwad Theory, which states: Normal Person + Anonymity + Audience = Total Fuckwad.
Many remedies have been tried: moderation, the use of “real names” (whatever that means), increasingly complex privacy settings, user voting, karma scores, etc. Sometimes these things help, but often only a little — and they all require constant vigilance.
In frustration, many users and content creators choose to take out the big hammer and end discourse entirely. Eliminate blog comments. Mute all voice chat. Disable communication between players on opposing teams. The only winning move is not to play.
So goes the conventional wisdom. But then there’s Journey, a $15 video game for the PlayStation 3. When you start playing Journey, it’s not even obvious that it’s a multiplayer game. When other players appear, they are not announced in any way, nor are you directed to interact with them. Some players choose to ignore them and complete the game on their own. Others dismiss them as computer-controlled NPCs. This is the first part of Journey’s solution: interaction with others is optional.
Those who choose to engage with others have only a few choices. Players can move, jump, and “sing” by pressing a single button, causing a musical note to play and a unique glyph to appear on screen. The glyph is not selected or drawn by the player; it’s automatically chosen by the game (so penis-themed griefing is out of the question). There is no text or voice chat. Singing is the only way to communicate, and the only control the player has over the note that’s played is the volume and duration.
Most critically, none of these actions can harm other players. Even movement can’t be used as a weapon; players simply pass through each other, making it impossible to bump other players off a high ledge or otherwise perturb their progress. Movement can’t even be used to race ahead and steal a desirable in-game item before another player can get to it, because power-ups are not consumed when acquired: they remain in place for future players to receive.
All of this may sound like it stops just short of banning communication entirely. Will players even bother to interact with each other? Surely, such a limited palette of options will render the multiplayer aspects of Journey trite and inconsequential.
But that’s not what happens at all. Instead, Journey players find themselves having some of the most meaningful and emotionally engaging multiplayer experiences of their lives. How is this possible?
Though players can’t harm each other, they can help each other. Touching another player recharges the power used to leap and (eventually) fly. In cold weather, touching warms both players, fighting back the encroaching frost. More experienced players can guide new players to secret areas and help them through difficult parts of the game.
Journey players are not better people than Call of Duty players or Halo players. In fact, they’re often the same people. The difference is in the design of the game itself. By so thoroughly eliminating all forms of negative interaction, all that remains is the positive.
Players do want to interact; real people are much more interesting than computerized entities. In Journey, players inevitably find themselves having positive interactions with others. And, as it turns out, many people find these positive, cooperative interactions even more rewarding than their usual adversarial gaming experiences.
Does this mean that playing Journey turns players into relaxed, peace-loving, spiritually enlightened beings? Certainly not — but the limited communication system works in more ways than one.
In the same way that you can imagine the actors in a subtitled film (speaking in a language you don’t understand) are all giving Oscar-worthy performances, it’s natural to assume that every other Journey player has only the best intentions. After all, while we may judge ourselves by our motivations, we tend to judge others by their actions. The actions in Journey are all either neutral or positive, so that’s how players perceive each other.
Journey players are also anonymous during the game. The unique player glyphs are only shown next to PlayStation Network account names when the game is over, and they change on each play-through. Again, this plays into that subtitled-movie optimism. It’s much easier to believe that the anonymous player with the winged glyph is the most caring, thoughtful person in the world when you don’t know his PSN account name is K1LLSh0t99.
If you want some evidence of the deep feelings triggered by this game, look no further than the Journey Apologies thread in the official forum for the game. Here, players apologize to the anonymous others they feel they have disappointed in the game. It’s like missed connections for gamers. Here’s an example post:
To my friend in the fifth area: I never wanted to leave you. I just whiffed really badly on a jump. I miss you. And I’m sorry.
Journey may be just a game, but the lessons it teaches about ourselves and the things we’re capable of creating can be applied to all of human endeavor.
Throughout history, we humans have invented many different sets of rules for ourselves. Some have worked better than others, but all of them have been exploited. As anyone with children knows, if there’s one thing humans are good at, it’s finding loopholes.
When a system of rules is applied to many people, thoroughly codified, and consistently enforced, you have something approaching a government. But for governments, even the most successful, change occurs slowly and often painfully. This can lead even the most optimistic person to despair.
Human history is long, but how many different sets of rules have really been tried? In meatspace, it’s so difficult to establish a new set of rules or change the existing ones that the rate of design iteration is severely limited.
This is not so in the relatively consequence-free worlds of video games and the Internet. In the digital realm, wild experimentation and rapid iteration are the norm. It’s also much easier to establish and enforce an iron-clad set of rules in a virtual world than in the real one. This is the environment that created Journey, and its rarity is why it’s such a joy.
The lesson of Journey is that success is possible, even in an area like online multiplayer interaction, which has seemed so hopeless for so long over so many thousands of iterations. Success is possible.
But let’s go further. Our digital lives increasingly affect our real lives. Consider Twitter, another system for online interaction that has succeeded in large part thanks to its novel set of rules and limitations. There’s a whole world of bad behavior that doesn’t fit into 140 characters and doesn’t work when producer/consumer relationships are asymmetrical. Twitter isn’t just a game; its influence extends into the real world, in ways we don’t yet fully understand.
As another US presidential election season grinds on and I become freshly disillusioned with the seemingly intractable problems in our system of government, Journey and Twitter give me hope. They make me believe that maybe, just maybe, the digital world can be both a laboratory for new ideas and, eventually, a giant lever with which to change the formerly unchangeable.
This past year was an eventful one for someone like me who has already passed most of the common milestones of adulthood (college, marriage, home ownership, children). The highlights:
I started a weekly podcast with Dan Benjamin, named after this blog (which, in turn, was named after something I wrote for Ars Technica in 2009). I’ve been amazed by the popularity of the show and the quality of the listener feedback and participation. Special thanks to Jeremy Mack, creator of showbot.me, and Justin Michael, creator of 5by5illustrated.com.
I’ve also become a devoted fan of several other podcasts on the 5by5 network, co-hosted by Dan Benjamin: Back to Work with Merlin Mann, Build and Analyze with Marco Arment, The Ihnatko Almanac with Andy Ihnatko, and The Talk Show with John Gruber. And for dessert, Roderick on the Line with John Roderick and Merlin Mann.
Though it started in 2010, The Incomparable, a geek ensemble podcast on which I’m proud to be a semi-regular guest, really hit its stride in 2011, with some great episodes about Star Wars (ANH part 1 and part 2; ESB part 1 and part 2), Pixar (part 1 and part 2), giant fantasy novels (The Name of the Wind and The Wise Man’s Fear), plus a bushel of episodes about Dr. Who and other TV shows and movies.
I enjoy being on this podcast all out of proportion to the number of listeners it’s managed to gather. If you have even a fraction of the fun listening as I do recording this show, you should definitely give it a try. (And if you’re already a listener, why not rate it or write a review in iTunes?)
In June, I made my first trip to WWDC in San Francisco, which was also my first trip farther west than Colorado. Ostensibly, I made the trip because I was afraid that Mac OS X 10.7 Lion would be released after WWDC but before Apple published videos of the sessions for non-attendees. (I rely on the information presented at WWDC when writing my Mac OS X reviews for Ars Technica.) But really, going to WWDC is something I’d always wanted to do.
The trip was expensive, and I had to take time off work to do it, but it was so worth it. I saw what turned out to be Steve Jobs’s final keynote presentation. I met tons of people in person that I’d known for years online, and made several new friends. I also got to talk to a handful of famous (well, “nerd famous”) people in the Apple community that I’d never imagined I’d ever have any contact with. I refuse to name-drop them, lest it cheapen the experience (and no, sadly, Steve Jobs was not one of them), but suffice it to say that it exceeded all my expectations. I’m not sure when or if I’ll make it to WWDC again, but it’ll be extremely hard to top my first time.
Apple’s release of Mac OS X 10.7 Lion in July meant that my trip to WWDC was indeed a wise choice. In the two years since my last Mac OS X review at Ars Technica, the site has grown tremendously. Amazing feature stories on all sorts of subjects were pulling in huge traffic numbers, well beyond what my past Mac OS X reviews had drawn. I worried that the audience for my brand of tech writing was no longer significant enough to matter.
When my Lion review was published, I was grateful to be proven wrong. Thanks to everyone who continues to read what I write. Thanks for indulging my idiosyncrasies and continuing to hold me to the same high standards that I demand of the things I write about. And thanks to everyone at Ars for so many years of loyalty and for building an amazing publication that I’m proud to be even a small part of.
Steve Jobs died in October, and it affected me more than I’d expected it to. I wrote about it on Ars, talked about it on my podcast, and still think about it pretty regularly.
Some smaller 2011 milestones:
My seven-year-old son finished Ico and his first three Zelda games (Wind Waker, Ocarina of Time, and Twilight Princess), and is deep into his fourth (Skyward Sword), with only a little help from dad on the harder bosses. His gaming education is coming along nicely.
Hardware upgrades: MacBook Pro 15-inch replaced with a 13-inch MacBook Air and a 27-inch Thunderbolt Display; 4th generation iPod touch replaced with an iPhone 4S; Canon PowerShot S3-IS replaced with a Canon PowerShot S100. Hardware firsts: first SSD, first camera that can shoot RAW, first iPhone. (Note: the iPhone is my wife’s, not mine.)
I almost posted more than one thing to this blog.
I’ve never considered Obama a very good speaker. It may be because he speaks slowly and pauses a lot, all of which drives my fast-talking-Italian-New-York-native self up a wall. Whatever the reason, my low opinion of his speaking ability meant that I was willing to believe that the Obama teleprompter gibes could very well be indicative of a real problem. Those jokes fed my fear that Obama lacked substance, that he was just a pretty voice able to dazzle people (though not me, apparently) with speeches he didn’t write or fully understand.
That fear was put to rest by Obama’s recent performance in front of a gathering of Republicans. No teleprompter, no questions received ahead of time, no softballs. I was amazed at how well he did when I read the transcript. When I watched the video, I still didn’t like his delivery (maybe I should have watched it at 1.5x), but it’s good to know that our president has a brain in his head.
That’s what was important to me regarding the teleprompter issue, and that’s why I care little about what Sarah Palin does unless it changes my existing opinion of her. Learning that she wrote notes on her hand before a speech doesn’t do that, and it sure as hell has no effect on what I think Obama’s use of the teleprompter does or doesn’t signify, regardless of which situation is more likely to resonate with the American people.