March 5, 2015


Five years ago, Apple’s last new product category, the iPad, was unveiled. In just a few days, the company’s next new product category, the Apple Watch, will make its debut. So I’m going to do something I haven’t done here since 2010: I’m going to make some informed (if almost certainly wrong) speculation about one of the biggest remaining unknowns: in this case, pricing.

Pricing is an often-overlooked part of design, something we tend to think of as purely the domain of business, to be taken care of by MBAs rather than product designers. But pricing is just as interwoven with design constraints and user psychology as any other design choice, and I’m nearly certain Jonathan Ive’s team played a part in the process.

Mind the gap

The one thing we do know about the lineup’s pricing is its floor: Apple Watch Sport is $349. We have wild speculation for the ceiling: five, ten, even twenty thousand dollar price points have been proposed for the top-end, solid-gold Apple Watch Edition. That’s uncharted territory, but what’s most interesting in terms of price design is the mid-range, un-suffixed Apple Watch.

Consensus seems to be that the steel-clad Apple Watch will start at nearly twice the price of the aluminum Sport. While this is hardly unreasonable to expect for a semi-luxury item, it would leave a huge hole in Apple’s usual pricing structure: Ever since the introduction of the iPhone, Apple has priced all its mobile devices in $100 increments, and it’s become a proven strategy for the company. Edition notwithstanding, I can’t really see them abandoning it for the Watch.

Finding the right variable


Apple’s upsells for mobile products have included storage space, screen size, and in the case of previous years’ models, hardware revision, all priced with the same $100 increments. But the Apple Watch has none of these easily-tiered variables.

As for screen size, it’s highly unlikely the four-millimeter step from the 38mm to the 42mm version will be enough to carry a $100 premium (for aluminum and steel, at least); even $50 seems like a stretch. All Apple needs to cover are the costs of the additional metal (a few cents to a dollar?) and a few more pixels of OLED ($12?), for which a $30 bump seems just about right to maintain the product’s margin. And the $30 increment is a familiar one, used with iPads for which the $100 increment alone wasn’t sufficient to maintain the margin on cellular radios.

So that’s a $30 increment, but still no $100. How about color? Even though John Gruber has suggested a possible return of the 2006 MacBook’s “black tax,” Apple has been selling black versions of its hardware for the same price as all other colors since the iPhone 3G in 2008.

Stepping up


The magic of the repeated $100 increment is the negotiability it engenders in the buyer’s mind. Whatever you’re considering buying, the next step up is never more than $100 away – it’s right there, tempting you. It’s for this reason that Apple more or less has to have something at a $449 price point, and with no premium bands in sight for Sport, only a modest premium likely for screen size, and equal color pricing, that leaves the entry-level steel version to cover the gap.
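To make the ladder concrete, here is a rough sketch of how such a lineup might be computed. The tier names, the $30 size premium, and everything above the known $349 floor are my speculation, not announced pricing:

```python
# Illustrative only: a lineup built on the $100-increment strategy,
# with a speculative $30 bump for the larger 42mm case.
BASE_SPORT = 349     # the one known price: Apple Watch Sport, 38mm
STEP = 100           # the familiar $100 increment
SIZE_PREMIUM = 30    # speculative 38mm-to-42mm bump

def ladder(base, tiers):
    """Return (tier name, 38mm price, 42mm price) tuples, $100 apart."""
    return [(name, base + i * STEP, base + i * STEP + SIZE_PREMIUM)
            for i, name in enumerate(tiers)]

lineup = ladder(BASE_SPORT, ["Sport", "steel, entry band", "steel, premium band"])
# lineup[1] puts the hypothetical entry-level steel Watch at exactly $449.
```

The structural point is simply that each row sits exactly $100 from its neighbors, so the next step up is always within tempting reach.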

From there, however, I think Apple has plenty of flexibility to maintain its $100 increment strategy with premium bands, each tier falling into the groups outlined by Gruber in his piece.

The market versus the margin

There’s no doubt that premium watch buyers are used to spending more than this for a quality steel watch. Perhaps Apple could charge higher prices to signal competition with quality mechanical watches, but I think they will lowball simply because they can. A steel-housed smartwatch doesn’t cost twice as much to build as an aluminum-housed one, and I don’t think Apple will price it as such.

I could be entirely wrong about this, but historically, Apple doesn’t really seem to care about charging a lot as much as they care about charging enough to maintain good margins. Remember Steve Jobs’ deft anchoring of the iPad at $999 before announcing it would be half that? However the Apple Watch ends up being priced, I think it’s going to be based more on the margins Apple can make than on how high the market might be willing to go.

September 10, 2014

The newly-announced Apple Watch is a curious fusion. It blends ingredients of both an old and a new version of the largest company in the US: On one hand, it’s a throwback to the days when Apple experimented with products designed not to change your life, but to improve a very specific facet of it. On the other, it is loaded with subtle signals of a substantial change in the way Apple approaches the market. It is simultaneously an extension of the iPhone, in the same way the iPod was an extension of the Macintosh, and the start of a fascinating new direction for the post-Beats-acquisition Apple.

On repeat

The iPod was, in the big picture, not a game-changer. It was not a product that would usher in the digital age of several creative industries like the Mac did, or kickstart a variety of entire markets and even subcultures as did the iPhone. It was not a product that would enable you to do amazing new things like instantly translating words on a sign, but one that would simply take something you already do, listening to music, and make it slick, seamless, and more fun. It was a device that nobody needed, but one that millions grew to want.

In this way, the Apple Watch seems to closely follow the iPod playbook. Despite what many industry watchers have breathlessly proclaimed about the dawn of the wearable era, one would be hard-pressed to call the Watch, or indeed any smartwatch, a socio-technological shift on the level of the smartphone or the tablet. At the end of the day, all of them – Pebble, Android Wear, and now even the Apple Watch – are conceived of purely as extensions of the smartphone. By and large, they enable new modes of doing the same things you could do with a smartphone, where the smartphone enabled you to do new things altogether.

And that’s not a problem, at least for Apple. Because this time, they’re trying what appears to be an entirely new strategy to out-iPod the iPod.

The category path

“New product categories” were the enigmatic, ear-perking words with which Tim Cook teased the Apple Watch (among other things) earlier in the year. But beneath the glossy presentation of a new Apple product and category, between the lines of the messaging both verbal and visual, is where a new, uncharted Apple becomes apparent.

Each “new category” Apple has entered over the past thirteen years has seen the debut of a product with a singular cachet. The first iPods, at $400 (almost $540 in 2014), grew to become a pop-culture icon of consumer tech luxury, until the $250 iPod mini, and ultimately the $99 Shuffle, made them accessible to the mass market. The iPhone, introduced at the $500 price point on contract, was most visibly adopted by tastemakers for whom the Motorola Razr had become culturally and economically diluted, only to eventually have its past models offered free on contract. Even in a world with a 5.5″, gold iPhone 6 Plus, the mere presence of the cracked-glass iPhone 4 toted by a broke college student dilutes the iPhone’s cachet from its cultural height.

Built to last

It seems that Apple is attempting to build a new business that is immune to this dilution. An unabashed luxury (though as always, attainable luxury) sub-brand. If its desktop computer and mobile device segments sell you beautifully-designed options for things you need, Luxury Apple sells you the things you don’t need, but want.

It’s written all over the details. The new typographic standards, abandoning friendly Myriad for the more sophisticated and transatlantic DIN, all-caps at that. The new industrial design language that dispenses with the practical surfaces and materials of Apple’s computing hardware and adopts the language of jewelers, with not-entirely-necessary engraved detail text, mirror-finish metals, and extensive, first-party customization with the sorts of materials you might find composing products at Angela Ahrendts’ old employer. Make no mistake, Apple is speaking a new language with the Watch.

In this light, the Apple Watch is about more than smartwatches. It isn’t even particularly about wearables. It’s about building a new business on aspirational products now that its original aspirational products have become accessible. Will Apple succeed at this, or will annual updates of thinner, more capable devices from this new luxury side take the same path to the mass market as every other one of Apple’s successful premium devices?

An earlier version of this post misquoted Tim Cook and has been corrected.

May 31, 2013

Skeuo-flat-ic design

I don’t remember exactly where I first encountered it. But at some point in the past three years, I, along with a large contingent of user interface designers, fans, and industry followers, learned a new word: Skeuomorphic. A skeuomorph in the UI world, so the popular definition goes (even if the rigorous scientific definition of the word makes “skeuomorphic design” an oxymoron), is a GUI, or elements of a GUI, that borrow from a physical analog of their functionality. It seemed like a useful concept in defining the execution of user interfaces, with an added benefit of sounding exotic in conversation – but I think we may have outgrown it.

Everyone loves a feud

Somewhere along the way, this newly-popularized concept gained that one thing critical to capturing our collective imagination: a foil. Where skeuomorphism was tied to the familiar, the tactile, the rich, the warm, this dark horse was divorced from the familiar, lived in platonic ideals, was simple, cold, mathematical. Despite drawing most of its theory from the 20th Century’s signature design movement, Modernism, it was nonetheless given its own, rather less impressive (and more prescriptive) name, “flat design.”

As the story goes, each of these poles had its champion, with Apple raising the varnished-oak banner of its increasingly unified mobile and desktop design language, and Microsoft carrying the solid, rectilinear flag of what was briefly but indelibly called its Metro design language. A war was brewing in the UI design world between flat and skeu: Apple’s rumored “move to flat” would stir more design-office conversation than a betrayal in Game of Thrones.

Not so flat

There is a problem with this narrative. Much of the interaction that users have with iOS devices is with UI elements having no physical analogs apart from the most basic, localized physical metaphor, the button – most of these are even just black Helvetica on a white background. And for all of Microsoft’s eschewing of texture, shading, and object references, one cannot escape that its many boxes and encircled icons ultimately draw affordance from our associations with a physical object, the humble pushbutton.

So if much of iOS is “flat,” and Windows Phone is loaded with a thousand tiny skeuomorphs, what are we left with? An important realization: “Flat design” is not nearly as flat as it looks. Skeuomorphism is a critical part of interaction design, and is everywhere.

How, then, do we verbalize the many clear differences between the examples of iOS and Windows Phone? The answer is to build a more nuanced framework than “flat” versus “skeuomorphism.”

Building a more useful vocabulary

Instead of imagining a fun-to-follow, yet ultimately empty battle between the forces of skeuomorphism and flat design, a more productive pursuit would be to construct a vocabulary around the toolset these concepts offer. Here are a few tools I see:

Functional object reference (“skeuomorphism”)


Propellerhead Reason, Apple Calendar, Microsoft Windows Phone 8 dialer

This is the sort of visual metaphor that ties an object from the physical world to a virtual tool. Ideally it is for purposes of building affordance from familiarity (turning pages in iBooks), but it can easily be misused (non-functional pages in iOS Address Book). Regardless of how realistically it’s rendered, a physical object can be useful as a reference so long as it is recognizable by the user and responds in the same way.

Material/texture reference


Apple Game Center, Quantum WordPress theme from Themesorter, artist’s rendition of 1990s CD-ROM menu

The difference between this (which you may call skeuomorphism as well; I won’t stop you) and the previous example is that the metaphor is implicit, if present at all. Ideally, there is some implied metaphor: Apple’s Game Center may not actually play backgammon, but its material references to a vintage board game case can put the user’s mind in a conceptual space for gaming. Often, this tool is simply used for decoration, but tasteful decoration can still aid user experience.

Depth cues


Apple Notification Center settings, Apple Maps, GitHub for Mac

Whereas the previous two tools always use concrete references, depth cues may or may not have such a clear analog. Their primary purpose is to imply what can be done if the user interacts with the controls they adorn, not referencing real objects themselves, but rather mechanical aspects of them.

Shape / color cues

Calvetica settings, Android lock screen, Solve for iOS


Contrasts in shape and color are often used in conjunction with depth cues to further increase contrast and create visual hierarchy. The trend of “flat design” is to use them with minimal application of the other aforementioned tools, which can be successful so long as the application of shape and color contrasts is sufficient to create an affordance of user action.

Addition and subtraction

My current one-liner for when the subject comes up is “‘flat design’ is approximately as useful a term for user interfaces as ‘red design’ or ‘round design.’” Far from being a shot at the popular aesthetic that relies so heavily on flatness, though, it’s meant to provoke thought: Flatness and depth are tools, just like color and shape, affordances and analogs.

Don’t just practice flat design or skeuomorphic design. Use the tools that are right for the interface that’s right for your users.

November 30, 2012

I recently set aside my aging iPhone 3GS for a new iPhone 5. Naturally, the latter covers all the bullet points expected of an update to a consumer electronic device: It’s faster, thinner, bigger-screened. Yet as much as these iterative advances may improve the day-to-day experience of using the device, they actually add up to a tradeoff.

One gives up several things along with the exchange of a 2009 smartphone for a 2012 smartphone. It might sound obtuse to say the things given up include low pixel density and time spent waiting for things to load, but these are more than annoyances made perceptible by the march of technology: They are connections to the medium. They are the signatures of the technology we use, bonds to time and place forged in memory; over time they become the familiar sensations of home.

In exchange for these connections to the medium, upgrades give us abstraction from it, the ability to perform tasks less encumbered by the technology’s inherent compromises.

Dissolving with the pixels

The history of raster-based computer displays may be seen as a single thread of increasing medium-abstraction from the technology’s earliest green-phosphor text terminals through today’s Retina displays. The experience of using the oldest screens was deeply connected to the limitations of the technology: Far from reproducing photographs in the millions of colors discernible by humans, images were limited to a single color and two intensities; even such screens’ greatest strength, text, was far removed from capturing the subtleties of centuries’ worth of typographic refinement. In the use of these technologies, the medium itself was ever-present.

As graphics technology improved over the next few decades, the technology itself began to abstract away as images could be reproduced at greater fidelity to the human eye and typography could be rendered with at least a recognizable semblance of its heritage. With high-DPI displays, the presence of the medium is all but gone – while dynamic range and depth cues may yet evade modern LCDs, the once-constant reminder that you are viewing a computer display has become so subtle as to have disappeared.

Computation, time, and distance

Every time you wait for a computer to catch up with you, whether it’s a second or two for a disk cache or an hour for a ray-traced image to render, you experience a signature of the medium in which you are working. Waiting for a document to save in HomeWord on an 8088 was a strong reminder that you weren’t dealing with paper. Invisible, automatic saving in Apple Pages lends a physicality to the document on which you’re working, abstracting the volatile nature of the medium.

A significantly faster network connection, such as the leap from 3G to LTE, further abstracts the already unimaginably-abstracted distances of the Internet. As this abstraction increases, our expectations adjust accordingly, pointing to a change in our mental models. I still recall that first time in the 1990s when I loaded a web page from outside the US, imagining the text and images racing over transatlantic cables as they piled up in the browser. A 20-megabit connection leaves no temporal space for such imagination.
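The shrinking temporal window can be put in back-of-the-envelope terms. All figures in this sketch are assumed round numbers for illustration, not measurements:

```python
# Idealized transfer times for a 2 MB web page, ignoring latency,
# protocol overhead, and contention. All speeds are assumed round numbers.
PAGE_BITS = 2 * 8 * 10**6   # a 2-megabyte page, expressed in bits

def transfer_seconds(bits, mbps):
    """Seconds to move `bits` over a link of `mbps` megabits per second."""
    return bits / (mbps * 10**6)

for label, mbps in [("dial-up (~0.05 Mbps)", 0.05),
                    ("3G (~3 Mbps)", 3),
                    ("LTE (~20 Mbps)", 20)]:
    print(f"{label}: {transfer_seconds(PAGE_BITS, mbps):.2f} s")
# The same page shrinks from minutes of imagining transatlantic cables
# to under a second: less and less time in which to notice the medium.
```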

The last one you’ll ever need

For the past two years, during the ascendancy of “retina”-DPI displays, it has seemed plausible that the industry is at last approaching a point in display technology where further innovation won’t be necessary—displays could be “solved,” having reached the apotheosis of their abstraction. As Moore’s Law continues to conspire with faster networks and better UI design to melt away all the other aspects of the tool-ness of the digital tools we use, our consciousness of those tools predictably becomes less pronounced. In the long run, more responsive, more reliable, more accurate, more abstracted interfaces trend toward invisibility.

Given enough time and enough iterations, can the technology and design of an interface simply be solved, in totality, like the game of checkers? Can it be abstracted away entirely, leaving perceptible only user intent and system response? Can we ever become truly independent from a medium—visual information matched with the limits of human vision, latency for every network request below the threshold of human perception, and a UI with nearly zero cognitive load?

When we’ve lost the last traces of the “computer-ness” of a computer, will we have lost something meaningful? Or will our only loss be of fodder for nostalgia?

December 16, 2011

Getting data from a public place onto a mobile device—whether that data is a discount coupon, a museum map, a restaurant menu, or any other kind of mobile web site—is a problem with no shortage of solutions. Location-based services, NFC systems, and even Bluetooth 4.0 each offer a handful of promising possibilities, but the clear leader is the simplest: QR codes. Yet while the QR code has long been a staple in its native Japan, it has a ways to go to find popular adoption elsewhere.

The QR code likely owes no small part of its popularity in Japan to a long history of integration into mainstream mobile handsets. Even within the infamously labyrinthine UIs of popular clamshell handsets, QR code scanning generally isn’t much harder to find than the camera mode itself. Yet nearly everywhere else outside Japan, you’re on your own.

Working out of the box

Engagement with QR codes in the US is on the rise—to a towering 5%, according to a recent Forrester study. Percentages are higher with smartphones (pushing 20% with the iPhone, higher still on power-user-focused Android), but the Japanese example makes it clear: making QR codes a reliable way to connect with the majority of mobile users will require a better, more integrated user experience from phone manufacturers and mobile OS vendors. The question for them, then, is how to integrate that experience.

Most Japanese phones have the equivalent of a discrete menu item or app for QR code scanning. But there’s really no reason for it to be stuck outside the phone functionalities that bookend the QR code experience: the camera (the external end) and the browser (the internal end). Ideally, it should be integrated into one of them.

Exploration 1: The browser

As QR codes will generally lead to a mobile website, it follows to attach the acquisition of the code to the beginning of the web browsing experience. Why not integrate QR code scanning right into the browser’s address bar?

In this concept, the camera icon appears when Mobile Safari’s address field is clear, replacing the circular “X” button: The user has signaled an intent to enter a new address, and so the camera icon augments the affordance of the keyboard as an additional input channel for a web address. This vocabulary could be used for QR codes’ less-popular applications as well, as a channel inside a contacts app or calendar.

Exploration 2: The camera

Many phones have a dedicated camera key, and even the iPhone gained a soft-key to launch the camera in iOS 5. This rapid access to the camera makes it a great candidate for reducing friction to the acquisition of a QR code. But how do we balance ease of recognizing codes with the other, non-code-reading functionality of the camera?

One solution is to make it modal: The iPhone’s bundled camera app offers an obvious place for such a switch.

The ideal solution, though, might be to simply make the functionality transparent. Rather than require mode selection or input from the user at all, why not simply detect QR codes as they come into view? The potential pitfall here is unintentional activation, should a QR code accidentally come into the field of view while the user is trying to use the camera for something else. The intrusiveness of the scanning affordance must be minimized.

An augmented-reality-style pop-up with a preview of where the QR code will navigate to offers a clear path to the link, but without interrupting normal camera usage. This approach does, however, lack some affordances. Without a mode selector, how is the user to know the camera app is capable of scanning QR codes?

In the wild

Anecdotally, the camera app as starting point seems to be the most intuitive context for a QR code reader. I recently assisted a co-worker who was attempting to test a QR code one of the designers at the office had prepared. She was concerned the code wasn’t set up correctly, as it wasn’t registering on her iPhone.

The problem? She had never installed a reader app. The seamlessness of the iPhone experience and the growing popularity of QR codes logically led her to believe the built-in camera app would read them.

Perhaps it’s time for Apple, Google, and other leaders in the mobile industry to make that logic hold.