May 31, 2013

Skeuo-flat-ic design

I don’t remember exactly where I first encountered it. But at some point in the past three years, I, along with a large contingent of user interface designers, fans, and industry followers, learned a new word: skeuomorphic. A skeuomorph in the UI world, so the popular definition goes (even if the rigorous scientific definition of the word makes “skeuomorphic design” an oxymoron), is a GUI, or an element of a GUI, that borrows from a physical analog of its functionality. It seemed like a useful concept for describing the execution of user interfaces, with the added benefit of sounding exotic in conversation – but I think we may have outgrown it.

Everyone loves a feud

Somewhere along the way, this newly popularized concept gained the one thing critical to capturing our collective imagination: a foil. Where skeuomorphism was tied to the familiar, the tactile, the rich, the warm, this dark horse was divorced from the familiar, lived in platonic ideals, was simple, cold, mathematical. Despite drawing most of its theory from the 20th century’s signature design movement, Modernism, it was nonetheless given its own, rather less impressive (and more prescriptive) name: “flat design.”

As the story goes, each of these poles had its champion, with Apple raising the varnished-oak banner of its increasingly unified mobile and desktop design language, and Microsoft carrying the solid, rectilinear flag of what was briefly but indelibly called its Metro design language. A war was brewing in the UI design world between flat and skeu: Apple’s rumored “move to flat” would stir more design-office conversation than a betrayal in Game of Thrones.

Not so flat

There is a problem with this narrative. Much of the interaction users have with iOS devices is with UI elements that have no physical analog beyond the most basic, localized physical metaphor, the button – many of these are simply black Helvetica on a white background. And for all of Microsoft’s eschewing of texture, shading, and object references, one cannot escape the fact that its many boxes and encircled icons ultimately draw affordance from our associations with a physical object: the humble pushbutton.

So if much of iOS is “flat,” and Windows Phone is loaded with a thousand tiny skeuomorphs, what are we left with? An important realization: “Flat design” is not nearly as flat as it looks. Skeuomorphism is a critical part of interaction design, and is everywhere.

How, then, do we verbalize the many clear differences between the examples of iOS and Windows Phone? The answer is to build a more nuanced framework than “flat” versus “skeuomorphism.”

Building a more useful vocabulary

Instead of imagining a fun-to-follow, yet ultimately empty battle between the forces of skeuomorphism and flat design, a more productive pursuit would be to construct a vocabulary around the toolset these concepts offer. Here are a few tools I see:

Functional object reference (“skeuomorphism”)

Propellerhead Reason, Apple Calendar, Microsoft Windows Phone 8 dialer

This is the sort of visual metaphor that ties an object from the physical world to a virtual tool. Ideally it is for purposes of building affordance from familiarity (turning pages in iBooks), but it can easily be misused (non-functional pages in iOS Address Book). Regardless of how realistically it’s rendered, a physical object can be useful as a reference so long as it is recognizable by the user and responds in the same way.

Material/texture reference

Apple Game Center, Quantum WordPress theme from Themesorter, artist’s rendition of 1990s CD-ROM menu

The difference between this (which you may call skeuomorphism as well; I won’t stop you) and the previous example is that the metaphor is implicit, if present at all. Ideally, there is some implied metaphor: Apple’s Game Center may not actually play backgammon, but its material references to a vintage board game case can put the user’s mind in a conceptual space for gaming. Often, this tool is simply used for decoration, but tasteful decoration can still aid user experience.

Depth cues

Apple Notification Center settings, Apple Maps, GitHub for Mac

Whereas the previous two tools always use concrete references, depth cues may or may not have such a clear analog. Their primary purpose is to imply what can be done if the user interacts with the controls they adorn, not referencing real objects themselves, but rather mechanical aspects of them.

Shape/color cues

Calvetica settings, Android lock screen, Solve for iOS

Contrasts in shape and color are often used in conjunction with depth cues to further increase contrast and create visual hierarchy. The trend of “flat design” is to use them with minimal application of the other tools above, which can succeed so long as the shape and color contrasts are sufficient to create an affordance of user action.

Addition and subtraction

My current one-liner for when the subject comes up is “‘flat design’ is approximately as useful a term for user interfaces as ‘red design’ or ‘round design.’” Far from being a shot at the popular aesthetic that leans heavily on flatness, though, it’s meant to provoke thought: Flatness and depth are tools, just like color and shape, affordances and analogs.

Don’t just practice flat design or skeuomorphic design. Use the tools that are right for the interface that’s right for your users.

November 30, 2012

I recently set aside my aging iPhone 3GS for a new iPhone 5. Naturally, the latter covers all the bullet points expected of an update to a consumer electronic device: It’s faster, thinner, bigger-screened. Yet as much as these iterative advances may improve the day-to-day experience of using the device, they actually add up to a tradeoff.

One gives up several things along with the exchange of a 2009 smartphone for a 2012 smartphone. It might sound obtuse to say the things given up include low pixel density and time spent waiting for things to load, but these are more than annoyances made perceptible by the march of technology: They are connections to the medium. They are the signatures of the technology we use, bonds to time and place forged in memory; over time they become the familiar sensations of home.

In exchange for these connections to the medium, upgrades give us abstraction from it, the ability to perform tasks less encumbered by the technology’s inherent compromises.

Dissolving with the pixels

The history of raster-based computer displays may be seen as a single thread of increasing medium-abstraction from the technology’s earliest green-phosphor text terminals through today’s Retina displays. The experience of using the oldest screens was deeply connected to the limitations of the technology: Far from reproducing photographs in the millions of colors discernible by humans, images were limited to a single color and two intensities; even such screens’ greatest strength, text, was far removed from capturing the subtleties of centuries’ worth of typographic refinement. In the use of these technologies, the medium itself was ever-present.

As graphics technology improved over the next few decades, the technology itself began to abstract away as images could be reproduced at greater fidelity to the human eye and typography could be rendered with at least a recognizable semblance of its heritage. With high-DPI displays, the presence of the medium is all but gone – while dynamic range and depth cues may yet evade modern LCDs, the once-constant reminder that you are viewing a computer display has become so subtle as to have disappeared.

Computation, time, and distance

Every time you wait for a computer to catch up with you, whether it’s a second or two for a disk cache or an hour for a ray-traced image to render, you experience a signature of the medium in which you are working. Waiting for a document to save in HomeWord on an 8088 was a strong reminder that you weren’t dealing with paper. Invisible, automatic saving in Apple Pages lends a physicality to the document on which you’re working, abstracting away the volatile nature of the medium.

A significantly faster network connection, such as the leap from 3G to LTE, further abstracts the already unimaginably abstracted distances of the Internet. As this abstraction increases, our expectations adjust accordingly, pointing to a change in our mental models. I still recall the first time in the 1990s when I loaded a web page from outside the US, imagining the text and images racing over transatlantic cables as they piled up in the browser. A 20-megabit connection leaves no temporal space for such imagination.
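
To put rough numbers on that (my arithmetic, not the original post’s), consider a hypothetical 500 KB page, which is 4 Mbit on the wire:

    t = \frac{\text{payload}}{\text{bandwidth}}, \qquad
    t_{56\,\text{kbit/s}} = \frac{4{,}000\ \text{kbit}}{56\ \text{kbit/s}} \approx 71\ \text{s}, \qquad
    t_{20\,\text{Mbit/s}} = \frac{4\ \text{Mbit}}{20\ \text{Mbit/s}} = 0.2\ \text{s}

At dial-up speeds the transfer itself dominated for over a minute; at LTE speeds it falls below the threshold at which the mind bothers to model it.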

The last one you’ll ever need

For the past two years, during the ascendancy of “retina”-DPI displays, it has seemed plausible that the industry is at last approaching a point in display technology where further innovation won’t be necessary—displays could be “solved,” having reached the apotheosis of their abstraction. As Moore’s Law continues to conspire with faster networks and better UI design to melt away all the other aspects of the tool-ness of the digital tools we use, our consciousness of those tools predictably becomes less pronounced. In the long run, more responsive, more reliable, more accurate, more abstracted interfaces trend toward invisibility.
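
The plausibility can even be quantified (my gloss, not the original post’s). Taking normal visual acuity as roughly one arcminute, the pixel density beyond which finer detail is invisible at viewing distance d is

    \text{ppi}_{\max} = \frac{1}{d \tan(1')} \approx \frac{1}{12\ \text{in} \times 2.9 \times 10^{-4}} \approx 287\ \text{ppi}

so at a typical 12-inch handheld viewing distance, a panel around 300 ppi has, for practical purposes, matched the eye.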

Given enough time and enough iterations, can the technology and design of an interface simply be solved, in totality, like the game of checkers? Can it be abstracted away entirely, leaving perceptible only user intent and system response? Can we ever become truly independent from a medium—visual information matched with the limits of human vision, latency for every network request below the threshold of human perception, and a UI with nearly zero cognitive load?

When we’ve lost the last traces of the “computer-ness” of a computer, will we have lost something meaningful? Or will our only loss be of fodder for nostalgia?

December 16, 2011

Getting data from a public place onto a mobile device, whether that data is a discount coupon, a museum map, a restaurant menu, or any other kind of mobile web site, is a problem with no shortage of solutions. Location-based services, NFC systems, and even Bluetooth 4.0 each offer a handful of promising possibilities, but the clear leader is the simplest: QR codes. Yet while the QR code has long been a staple in its native Japan, it has a ways to go to find popular adoption elsewhere.

The QR code likely owes no small part of its popularity in Japan to a long history of integration into mainstream mobile handsets. Even within the infamously labyrinthine UIs of popular clamshell handsets, QR code scanning generally isn’t much harder to find than the camera mode itself. Nearly everywhere outside Japan, though, you’re on your own.

Working out of the box

Engagement with QR codes in the US is on the rise—to a towering 5%, according to a recent Forrester study. Percentages are higher with smartphones (pushing 20% with the iPhone, higher still on power-user-focused Android), but the Japanese example makes it clear: making QR codes a reliable way to connect with the majority of mobile users will require a better, more integrated user experience from phone manufacturers and mobile OS vendors. The question for them, then, is how to integrate that experience.

Most Japanese phones have the equivalent of a discrete menu item or app for QR code scanning. But there’s really no reason for it to be stuck outside the phone functionalities that bookend the QR code experience: the camera (the external end) and the browser (the internal end). Ideally, it should be integrated into one of them.

Exploration 1: The browser

As QR codes generally lead to a mobile website, it makes sense to attach the acquisition of the code to the beginning of the web browsing experience. Why not integrate QR code scanning right into the browser’s address bar?

In this concept, the camera icon appears when Mobile Safari’s address field is clear, replacing the circular “X” button: The user has signaled an intent to enter a new address, and so the camera icon augments the affordance of the keyboard as an additional input channel for a web address. This vocabulary could serve QR codes’ less-popular applications as well, such as a channel inside a contacts app or calendar.
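
For the curious, here is a minimal sketch of that address-field behavior in modern UIKit. Everything here is hypothetical (the class name, the hand-off method), Mobile Safari’s real implementation is Apple’s alone, and SF Symbols like “camera” postdate this post:

    import UIKit

    // Hypothetical address bar: a camera button occupies the right edge of the
    // field whenever it is empty, offering QR capture as an alternate input
    // channel; once text exists, the standard clear ("X") button takes over.
    final class AddressBarField: UITextField {
        private let cameraButton = UIButton(type: .system)

        override init(frame: CGRect) {
            super.init(frame: frame)
            cameraButton.setImage(UIImage(systemName: "camera"), for: .normal)
            cameraButton.sizeToFit()
            cameraButton.addTarget(self, action: #selector(scanTapped), for: .touchUpInside)
            rightView = cameraButton
            clearButtonMode = .whileEditing
            addTarget(self, action: #selector(textChanged), for: .editingChanged)
            textChanged()
        }

        required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

        @objc private func textChanged() {
            // The camera is visible only while the field is empty.
            rightViewMode = (text?.isEmpty ?? true) ? .always : .never
        }

        @objc private func scanTapped() {
            // Hand off to a QR capture flow, e.g. the AVFoundation sketch below.
        }
    }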

Exploration 2: The camera

Many phones have a dedicated camera key, and even the iPhone gained a soft key to launch the camera in iOS 5. This rapid access makes the camera a great candidate for reducing the friction of acquiring a QR code. But how do we balance ease of recognizing codes with the camera’s other, non-code-reading functionality?

One solution is to make it modal: The iPhone’s bundled camera app offers an obvious place for such a switch.

The ideal solution, though, might be to simply make the functionality transparent. Rather than require mode selection or input from the user at all, why not simply detect QR codes as they come into view? The potential pitfall here is unintentional activation, should a QR code accidentally come into the field of view while the user is trying to use the camera for something else. The intrusiveness of the scanning affordance must be minimized.
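
As it happens, AVFoundation eventually gained exactly this kind of continuous detection (the APIs arrived after this post was written). A minimal sketch in Swift, assuming camera permission has already been granted:

    import AVFoundation

    // A bare-bones always-on QR detector: no mode switch; the delegate simply
    // fires whenever a code enters the frame.
    final class TransparentQRScanner: NSObject, AVCaptureMetadataOutputObjectsDelegate {
        let session = AVCaptureSession()

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)
            guard session.canAddInput(input) else { return }
            session.addInput(input)

            let output = AVCaptureMetadataOutput()
            guard session.canAddOutput(output) else { return }
            session.addOutput(output)
            output.setMetadataObjectsDelegate(self, queue: .main)
            output.metadataObjectTypes = [.qr]  // ignore every other barcode family

            session.startRunning()
        }

        func metadataOutput(_ output: AVCaptureMetadataOutput,
                            didOutput metadataObjects: [AVMetadataObject],
                            from connection: AVCaptureConnection) {
            guard let code = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
                  let payload = code.stringValue else { return }
            // Surface a non-blocking preview of the destination here rather than
            // navigating immediately, so normal camera use is never interrupted.
            print("QR payload: \(payload)")
        }
    }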

An augmented-reality-style pop-up with a preview of where the QR code will navigate to offers a clear path to the link, but without interrupting normal camera usage. This approach does, however, lack some affordances. Without a mode selector, how is the user to know the camera app is capable of scanning QR codes?

In the wild

Anecdotally, the camera app as starting point seems to be the most intuitive context for a QR code reader. I recently assisted a co-worker who was attempting to test a QR code one of the designers at the office had prepared. She was concerned the code wasn’t set up correctly, as it wasn’t registering on her iPhone.

The problem? She had never installed a reader app. The seamlessness of the iPhone experience and the growing popularity of QR codes logically led her to believe the built-in camera app would read them.

Perhaps it’s time for Apple, Google, and other leaders in the mobile industry to make that logic hold.

July 31, 2011

Article image illustrating Mission Control vs. 10/GUI Con10uum

In the fall of 2009, I was afforded a rare opportunity: I got to start a lot of people thinking about something in a new way. A video I had spent my spare time producing that summer hit TechCrunch and subsequently all the major tech blogs, spreading the idea of a new kind of desktop user interface: after 25 years of mouse pointers, windows, and desktops, I proposed a new set of conventions that were in some ways radically new, and in other ways quite legacy-compatible. I called it 10/GUI, and it struck a chord with people across many disciplines.

The goal of 10/GUI was always twofold: to comfortably expand our tactile interaction with computers beyond the single pair of coordinates offered by the mouse, and to deal with the complexity of the modern multitasking operating system in a more elegant way than through fidgety stacks of windows. It solved some problems, but introduced others, perhaps the most prominent being “how would we actually transition to this?”

Indeed, the state of the desktop right now often seems unable to advance upon either of the fronts 10/GUI proposed, so bound are we to existing conventions and patterns. Or at least it seemed so—until Apple announced Mac OS 10.7 last year, releasing it this past month.

Window cleaning

Apple is no stranger to combating the scourge of messy desktops. Even as far back as System 7’s window-shade effect and OS 8’s “spring-loaded” Finder windows, designers and engineers in Cupertino have experimented with ways to ease multitasking’s cognitive burden. With OS 10.3 in 2003, Apple introduced Exposé, perhaps the first mainstream attempt to address the inherent clutter of the window paradigm, refining its behavior in subsequent OS releases.

By the late 2000s, Apple’s foray into the mobile world had paved its own course, demonstrating, as Steve Jobs proudly classified them in 2007, “desktop-class apps” that adopted the full-screen paradigm of decades of purpose-built devices, embedded systems, and terminals. Now, in 2011, OS 10.7 has adapted this approach back to the desktop, synthesizing full-screen apps and multitasking into a linear application space to be swiftly navigated via multi-touch gestures.

Swiped?

Some have found this combination of linear application management and gestural navigation familiar. A common theme in email and Twitter correspondence I’ve received recently has been the similarities between Lion’s “Mission Control” UI for managing full-screen applications and 10/GUI’s “Con10uum” UI. There are indeed similarities. But I’m reminded of the infamous “rich neighbor named Xerox” story, the rich neighbor in this case being pre-HP Palm.

Palm’s WebOS and its “cards” model inspired the Con10uum linear desktop. I think it’s fair to say it inspired Mission Control to some extent as well. When you don’t need to manage applications in two dimensions—which, barring a few edge cases, describes everyday use for most users—one dimension makes the most sense. Palm knew this, I learned from it, and so has Apple. Did 10/GUI inform Lion’s design? It’s flattering to imagine, but it’s impossible to know. The question that interests me is where Apple may take Mission Control next.

One away from 10

I don’t see this as a controversial claim: Apple is setting the stage to deprecate the windowed desktop. The timetable for that is open to debate, but given Lion’s emphasis on Mission Control, the new full-screen APIs, and what can be seen as an eventual replacement for the dock, the preparations are clearly in motion.

What is most fascinating to me is that with the full-screen APIs in Lion, Apple is really not that far away from a full, 10/GUI-style solution. Once every app can be relied upon to work in a scalable (by screen resolution), full-height UI, there’s no reason they can’t begin to coexist on the same screen. The three-finger swipe currently used to navigate between full-screen apps could, in a pinch, resize them. The present “desktop” space could become a compatibility pen for old apps, leaving the Mission Control spaces to become the new un-desktop. Given that the menu bar is now only a part-time resident of the screen, even that could be reconsidered.
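
Concretely, the opt-in Lion asks of developers is small. A sketch in today’s Swift terms (the 2011 API was Objective-C, but the constant is the same NSWindowCollectionBehaviorFullScreenPrimary):

    import AppKit

    // Opting a window into Lion's full-screen model: it gains the standard
    // full-screen control and, once toggled, occupies its own space in the
    // linear row that Mission Control manages.
    let window = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 800, height: 600),
                          styleMask: [.titled, .closable, .resizable],
                          backing: .buffered,
                          defer: false)
    window.collectionBehavior.insert(.fullScreenPrimary)
    window.toggleFullScreen(nil)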

Of course, if this happens, I can then truly consider 10/GUI Xeroxed. But such is the history of technology: Experimentalists will experiment; innovators will innovate.

June 30, 2011

Visual identity takes many forms, from the most superficial of trademarks to the most integrated of design signatures. Shapes are part of its language. At their most basic, shapes are universal, untetherable to any name, product, or brand. But in context, in their intersections and in the synthesis of forms, they are powerful.

Illustrated above are four shapes: a square, a roundrect, a squircle, and a circle. Each is bilaterally symmetrical and geometrically simple, and each shape’s popular associations are innumerable. Yet in a particular space, the context of a particular market, this is inverted: In the world of mobile software, each of these shapes has a definite association, some quite strong.
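
Three of the four even live on a single curve, the superellipse (a note of my own, not the original post’s; the roundrect, built from arcs and line segments, is the exception):

    \left|\frac{x}{a}\right|^{n} + \left|\frac{y}{a}\right|^{n} = 1,
    \qquad n = 2\ \text{(circle)}, \quad n = 4\ \text{(squircle)}, \quad n \to \infty\ \text{(square)}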

To the victor go the squircles

Microsoft’s Metro UI owns the square. Apple has a corner on the roundrect, from the Springboard launcher to the iPhone hardware itself. Nokia, despite its late entry with MeeGo’s Harmattan UI, found the squircle unclaimed and ran with it beautifully. Palm has used the circle from the early days of PalmOS, and in WebOS, HP continues the tradition with care (one might even note that both Palm and HP structure their wordmarks around the circle).

And yet there are pretenders to every throne. Samsung’s Bada may use the square, but it can’t hold a candle to Microsoft’s Mondrian-esque masterwork. RIM may use roundrects pleasantly enough, but not with the subtle consistency Apple shows. The odd one out is Android, which doesn’t really have a unifying shape – a symptom of fragmentation?

Like color, which despite its limitless associations in general also has a history of strong associations within a market, shape is a powerful yet subtle differentiator. Owning a shape isn’t easy – by itself, as Samsung and RIM demonstrate, a shape is hardly potent. Those who have successfully laid claim to a shape have used it as a building block rather than as window dressing. Use the power of shape to reinforce good design with coherence and identity – and that shape may one day be yours.

Ray writes in with some insights on symmetry, better describing the fundamentality of these shapes:

I thought I would just point out that while all these shapes are bilaterally symmetric, they all exhibit much higher symmetry than that – the first three have four fold and two fold rotational symmetry as well as another mirror plane at 45 degrees to the vertical and the circle has infinite rotational symmetry and mirror planes.
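
In group-theoretic terms (my gloss on Ray’s observation, not his wording): the square, roundrect, and squircle all share the eight-element symmetry group of the square, the dihedral group D4, while the circle’s symmetry group is the full orthogonal group O(2):

    \mathrm{Sym}(\text{square}) = \mathrm{Sym}(\text{roundrect}) = \mathrm{Sym}(\text{squircle}) = D_4
    \quad (4\ \text{rotations},\ 4\ \text{reflections}), \qquad \mathrm{Sym}(\text{circle}) = O(2)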