December 16, 2011
Getting data from a public place onto a mobile device—whether that data is a discount coupon, a museum map, a restaurant menu, or any other kind of mobile web site—is a problem with no shortage of solutions. Location-based services, NFC systems, and even Bluetooth 4.0 each offer a handful of promising possibilities, but the clear leader is the simplest: QR codes. Yet while the QR code has long been a staple in its native Japan, it has a ways to go to find popular adoption elsewhere.
The QR code likely owes no small part of its popularity in Japan to a long history of integration into mainstream mobile handsets. Even within the infamously labyrinthine UIs of popular clamshell handsets, QR code scanning generally isn’t much harder to find than the camera mode itself. Yet nearly everywhere else outside Japan, you’re on your own.
Working out of the box
Engagement with QR codes in the US is on the rise—to a towering 5%, according to a recent Forrester study. Percentages are higher with smartphones (pushing 20% with the iPhone, higher still on power-user-focused Android), but the Japanese example makes it clear: making QR codes a reliable way to connect with the majority of mobile users will require a better, more integrated user experience from phone manufacturers and mobile OS vendors. The question for them, then, is how to integrate that experience.
Most Japanese phones have the equivalent of a discrete menu item or app for QR code scanning. But there’s really no reason for it to be stuck outside the phone functionalities that bookend the QR code experience: the camera (the external end) and the browser (the internal end). Ideally, it should be integrated into one of them.
Exploration 1: The browser
As QR codes will generally lead to a mobile website, it follows to attach the acquisition of the code to the beginning of the web browsing experience. Why not integrate QR code scanning right into the browser’s address bar?
In this concept, the camera icon appears when Mobile Safari’s address field is clear, replacing the circular “X” button: The user has signaled an intent to enter a new address, and so the camera icon augments the affordance of the keyboard as an additional input channel for a web address. This vocabulary could be used for QR codes’ less-popular applications as well, as a channel inside a contacts app or calendar.
Exploration 2: The camera
Many phones have a dedicated camera key, and even the iPhone gained a soft-key to launch the camera in iOS 5. This rapid access to the camera makes it a great candidate for reducing friction to the acquisition of a QR code. But how do we balance ease of recognizing codes with the other, non-code-reading functionality of the camera?
One solution is to make it modal: The iPhone’s bundled camera app offers an obvious place for such a switch.
The ideal solution, though, might be to simply make the functionality transparent. Rather than require mode selection or input from the user at all, why not simply detect QR codes as they come into view? The potential pitfall here is unintentional activation, should a QR code accidentally come into the field of view while the user is trying to use the camera for something else. The intrusiveness of the scanning affordance must be minimized.
An augmented-reality-style pop-up with a preview of where the QR code will navigate to offers a clear path to the link, but without interrupting normal camera usage. This approach does, however, lack some affordances. Without a mode selector, how is the user to know the camera app is capable of scanning QR codes?
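One way to keep such passive scanning unobtrusive is to require that the same code be decoded in several consecutive frames before surfacing the pop-up, filtering out codes that merely drift through the viewfinder. A hypothetical sketch of that debouncing logic in Python (the class name, threshold, and frame model are all assumptions for illustration, not any vendor's actual implementation):

```python
class QRDebouncer:
    """Suppress the scanning affordance until a code persists in view.

    A pop-up that fired the instant any QR code entered the frame would
    interrupt normal camera use; requiring the same payload across
    several consecutive frames filters out incidental glimpses.
    Hypothetical sketch: the threshold is an assumed tuning value.
    """

    def __init__(self, frames_required=10):
        self.frames_required = frames_required  # roughly 1/3 s at 30 fps
        self._last_payload = None
        self._streak = 0

    def observe(self, payload):
        """Feed the decoded payload for one frame (None if no code seen).

        Returns the payload once it has persisted long enough to show
        the pop-up; otherwise returns None.
        """
        if payload is not None and payload == self._last_payload:
            self._streak += 1
        else:
            # A different code (or no code) resets the streak.
            self._streak = 1 if payload is not None else 0
        self._last_payload = payload
        if payload is not None and self._streak >= self.frames_required:
            return payload
        return None
```

A brief flash of a code on a passing poster would never reach the threshold, while a code the user is deliberately framing would trigger the pop-up in a fraction of a second.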
In the wild
Anecdotally, the camera app as starting point seems to be the most intuitive context for a QR code reader. I recently assisted a co-worker who was attempting to test a QR code one of the designers at the office had prepared. She was concerned the code wasn’t set up correctly, as it wasn’t registering on her iPhone.
The problem? She had never installed a reader app. The seamlessness of the iPhone experience and the growing popularity of QR codes logically led her to believe the built-in camera app would read them.
Perhaps it’s time for Apple, Google, and other leaders in the mobile industry to make that logic hold.
July 31, 2011
In the fall of 2009, I was afforded a rare opportunity: I got to start a lot of people thinking about something in a new way. A video I had spent my spare time producing that summer hit TechCrunch and subsequently all the major tech blogs, spreading the idea of a new kind of desktop user interface: after 25 years of mouse pointers, windows, and desktops, I proposed a new set of conventions that were in some ways radically new, and in other ways quite legacy-compatible. I called it 10/GUI, and it struck a chord with people across many disciplines.
The goal of 10/GUI was always twofold: to comfortably expand our tactile interaction with computers beyond the single pair of coordinates offered by the mouse, and to deal with the complexity of the modern multitasking operating system in a more elegant way than through fidgety stacks of windows. It solved some problems, but introduced others, perhaps the most prominent being “how would we actually transition to this?”
Indeed, the state of the desktop right now often seems unable to advance upon either of the fronts 10/GUI proposed, so bound are we to existing conventions and patterns. Or at least it seemed so—until Apple announced Mac OS 10.7 last year, releasing it this past month.
Apple is no stranger to combating the scourge of messy desktops. Even as far back as System 7’s window-shade effect and OS 9’s “spring-loaded” Finder windows, designers and engineers in Cupertino have experimented with ways to ease multitasking’s cognitive burden. With OS 10.3 in 2003, Apple introduced Exposé, perhaps the first mainstream attempt to address the inherent clutter of the window paradigm, refining its behavior in subsequent OS releases.
By the late 2000s, Apple’s foray into the mobile world had paved its own course, demonstrating, as Steve Jobs proudly classified them in 2007, “desktop-class apps” that adopted the full-screen paradigm of decades of purpose-built devices, embedded systems, and terminals. Now, in 2011, OS 10.7 has adapted this approach back to the desktop, synthesizing full-screen apps and multitasking into a linear application space to be swiftly navigated via multi-touch gestures.
Some have found this combination of linear application management and gestural navigation familiar. A common theme in email and Twitter correspondence I’ve received recently has been the similarities between Lion’s “Mission Control” UI for managing full-screen applications and 10/GUI’s “Con10uum” UI. There are indeed similarities. But I’m reminded of the infamous “rich neighbor named Xerox” story, the rich neighbor in this case being pre-HP Palm.
Palm’s WebOS and its “cards” model inspired the Con10uum linear desktop. I think it’s fair to say it inspired Mission Control to some extent as well. When you don’t need to manage applications in two dimensions—which, barring a few edge cases, describes everyday use for most users—one dimension makes the most sense. Palm knew this, I learned from it, and so has Apple. Did 10/GUI inform Lion’s design? It’s flattering to imagine, but it’s impossible to know. The question that interests me is where Apple may take Mission Control next.
One away from 10
I don’t see this as a controversial claim: Apple is setting the stage to deprecate the windowed desktop. The timetable for that is open to debate, but given Lion’s emphasis on Mission Control, the new full-screen APIs, and what can be seen as an eventual replacement for the dock, the preparations are clearly in motion.
What is most fascinating to me is that with the full-screen APIs in Lion, Apple is really not that far away from a full, 10/GUI-style solution. Once every app can be relied upon to work in a scalable (by screen resolution), full-height UI, there’s no reason they can’t begin to coexist on the same screen. The three-finger swipe currently used to navigate between full-screen apps could, in a pinch, resize them. The present “desktop” space could become a compatibility pen for old apps, leaving the Mission Control spaces to become the new un-desktop. Given that the menu bar is now only a part-time resident of the screen, even that could be reconsidered.
Of course, if this happens, I can then truly consider 10/GUI Xeroxed. But such is the history of technology: Experimentalists will experiment; innovators will innovate.
June 30, 2011
Visual identity takes many forms, from the most superficial of trademarks to the most integrated of design signatures. Shapes are part of its language. At their most basic, shapes are universal, untetherable to any name, product, or brand. But in context, in their intersections and in the synthesis of forms, they are powerful.
Illustrated above are four shapes: a square, a roundrect, a squircle, and a circle. Each is bilaterally symmetrical and geometrically simple, and each shape’s popular associations are innumerable. Yet in a particular space, the context of a particular market, this is inverted: In the world of mobile software, each of these shapes has a definite association, some quite strong.
To the victor go the squircles
Microsoft’s Metro UI owns the square. Apple has a corner on the roundrect, from the Springboard launcher to the iPhone hardware itself. Nokia, despite its late entry with MeeGo’s Harmattan UI, found the squircle unclaimed and ran with it beautifully. Palm has used the circle from the early days of PalmOS, and in WebOS, HP continues the tradition with care (one might even note that both Palm and HP structure their wordmarks around the circle).
And yet there are pretenders to every throne. Samsung’s Bada may use the square, but it can’t hold a candle to Microsoft’s Mondrian-esque masterwork. RIM may use roundrects pleasantly enough, but not with the subtle consistency Apple does. The lone standout is Android, which doesn’t really have a unifying shape – a symptom of fragmentation?
Like color, which despite limitless general associations has a history of strong ones within a given market, shape is a powerful yet subtle differentiator. Owning a shape isn’t easy – by itself, as Samsung and RIM demonstrate, a shape is hardly potent. Those who have successfully laid claim to a shape have used it as a building block rather than as window dressing. Use the power of shape to reinforce good design with coherence and identity – and that shape may one day be yours.
Ray writes in with some insights on symmetry, better describing the fundamentality of these shapes:
I thought I would just point out that while all these shapes are bilaterally symmetric, they all exhibit much higher symmetry than that – the first three have four fold and two fold rotational symmetry as well as another mirror plane at 45 degrees to the vertical and the circle has infinite rotational symmetry and mirror planes.
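Ray’s observation is easy to check numerically. Taking the squircle as the quartic curve |x|⁴ + |y|⁴ = 1 (one common definition among several; an assumption here), every one of the eight symmetries of the square—the dihedral group D4 he describes—maps points on the curve back onto the curve:

```python
def on_squircle(x, y, tol=1e-9):
    # Quartic "squircle": |x|^4 + |y|^4 = 1 (a common parameterization;
    # other superellipse-style definitions exist).
    return abs(abs(x) ** 4 + abs(y) ** 4 - 1.0) < tol

# The eight symmetries of the square (dihedral group D4):
# rotations by 0, 90, 180, 270 degrees plus four mirror reflections.
D4 = [
    lambda x, y: (x, y),    # identity
    lambda x, y: (-y, x),   # rotate 90
    lambda x, y: (-x, -y),  # rotate 180
    lambda x, y: (y, -x),   # rotate 270
    lambda x, y: (-x, y),   # mirror about the vertical axis
    lambda x, y: (x, -y),   # mirror about the horizontal axis
    lambda x, y: (y, x),    # mirror about the 45-degree diagonal
    lambda x, y: (-y, -x),  # mirror about the other diagonal
]

def invariant_under_d4(points):
    """True if every D4 image of every point stays on the squircle."""
    return all(on_squircle(*g(x, y)) for x, y in points for g in D4)

# Sample points on the curve: for each x, solve y = (1 - x^4)^(1/4).
samples = [(x / 10, (1 - (x / 10) ** 4) ** 0.25) for x in range(0, 11)]
```

The circle passes the same test for rotations by any angle whatsoever, which is Ray’s point about its infinite rotational symmetry.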
May 3, 2011
It’s that time of year again—speculation is running rampant as to the makeup of the next iPhone, even if this one might be late. Will we see a leap in 3D graphics processing as with the iPad? Will Apple embrace 4G wireless for a new generation of data-intensive apps? What new capabilities will developers have at their disposal? There’s no doubt that the new iPhone will be a powerful device, but there’s one thing it won’t be like: The first.
The old world of apps
Four years ago, such questions about a new phone might have been hard to even imagine. Mobile software was an established market, but its products tended to fall into one of a few limited categories: largely offline-focused apps from the PDA era of PalmOS and PocketPC; simple, BREW-based featurephone apps emphasizing carrier-selected games; and a smaller market of newer, connectivity-focused apps for smartphone-equipped business users.
This is the world into which the iPhone arrived in 2007, rightly earning criticism that it was not truly a smartphone, as it was not set up to run third-party binaries. Today, with 350,000 apps in the App Store, 2007 looks like a brief transitional period. But to see it that way is to fail to recognize what was lost when the iPhone became a smartphone: a phone that was, in many ways, an almost platonic ideal of a telephone, personal media player, and internet communication device. It wasn’t a smartphone, yet it wasn’t a featurephone. It was a smart-enough phone.
When the iPhone SDK was announced in 2008, it had already been clear for months that to compete in the smartphone market (a market the iPhone had always seemed to belong to technologically, if not from a practical, end-user perspective), the iPhone would need more than the original “sweet solution” of web apps proposed at its unveiling. That expectation rested on two assumptions, one obvious and one less so: first, that the iPhone was always intended to be a smartphone, and second, that the market for smartphones could expand beyond the business world and into the general public. The second has obviously been proven many times over in the past three years. The first? Despite the iPhone’s unparalleled success as a smartphone, it’s not so clear.
Before iPhone users had a dozen home screens full of everything from Twitter and Angry Birds to Deluxe Kitchen Timer HD Universal and Angry Birds St. Patrick’s Day Edition, the iPhone was a marvel of simplicity. Twelve icons, meticulously arranged into a harmonious, balanced grid, covered the core of any user’s needs while away from a desktop, while four more at the bottom forged a novel bridge between the concepts of mobile phone, internet phone, and portable media player. Anything else you might want was out there beyond the Safari icon, but everything you needed was all right there, unchanging, unambiguous: A smart-enough phone.
Once in the highlands
The market has clearly judged the smart-enough phone’s user experience less desirable than a device packed with third-party features and software. But has the market as a whole really spoken? The smartphone is obviously preferable for the segment that bought iPhones and other iPhone-paradigm devices. But what of the people who don’t want apps, for whom a smartphone is superfluous and a distraction? The featurephone market has tried to remake its product in the image of the iPhone-era smartphone, with large touch screens and minimal physical buttons, but nothing has approached the formal and functional purity of the 2007 iPhone.
Will we see a smart-enough phone again? There have been attempts at radical simplicity, such as John’s Phone or the Peek, but these have been highly niche. And a minimalist phone or a minimalist email device is a far cry from a minimalist phone-media-player-internet-device. Perhaps the “smart-enough phone” will be a Brigadoon—lost to time until another company of forward-thinkers happens upon its misty fields.
August 26, 2010
Next week, it’s anticipated that Apple will announce a successor to the Apple TV based on iOS. While this predictably inspires more speculation than a Rod Blagojevich verdict, the most interesting question it poses is how the onscreen user interface will be controlled.
While some sort of optional iPhone integration is a given, Apple’s multimedia standby, the traditionally bundled Apple Remote, starts to look a little inelegant in light of the usability strides made on Apple’s mobile devices. Directional pads are older than Apple itself, and their usefulness falls off quickly with the number of elements they must navigate through. Click gestures such as long presses and double-taps can help, but they remain stop-gaps for a limited interaction technology.
Dan Provost’s suggestion of a Click Wheel remote is a very intuitive one. A logical progression of decoupled input for Apple, it works very well for navigating hierarchical lists, and to be sure, it even has plenty of creative potential for use in an iTV App Store. But the question of how to build a better remote is one that has intrigued me for some time, and I think it’s possible to go much further.
You mustn’t be afraid to dream a little bigger, darling
The one-to-one motion and gestural speed control of the Click Wheel make navigating in one dimension much nicer. But a trackpad handles both dimensions at once. And with Apple’s mastery of inertial scrolling on mobile devices, it’s a perfect match for two-dimensional menuing on a new iTV.
It’s not hard to imagine an onscreen UI built around swipe gestures, inertial scrolling, and the snap-to-rest behavior of something like the iOS spinner input, across a two-dimensional menu system like Sony’s XMB. Add the Magic Trackpad surface-click to navigate, and you have a lot of potential in a very simple remote control.
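That inertial, snap-to-rest feel can be sketched as a simple simulation: a flick imparts velocity, per-frame friction decays it, and as the motion dies the position eases onto the nearest menu item. A hypothetical Python model, where the friction and snap constants are illustrative assumptions rather than anything Apple ships:

```python
def settle(position, velocity, item_spacing=1.0,
           friction=0.95, min_speed=0.01, snap_rate=0.2, dt=1.0):
    """Simulate a flick with inertial scrolling and snap-to-rest.

    Each frame, friction decays the velocity; once the motion is
    nearly dead, the position eases toward the nearest item center.
    Returns the final resting position, always an integer multiple
    of item_spacing. All constants are illustrative assumptions.
    """
    # Inertial phase: coast, decaying velocity until it's negligible.
    while abs(velocity) > min_speed:
        position += velocity * dt
        velocity *= friction
    # Snap phase: ease toward the nearest item boundary.
    target = round(position / item_spacing) * item_spacing
    while abs(position - target) > 1e-6:
        position += (target - position) * snap_rate
    return target
```

A gentle nudge settles back to the current item, while a harder flick coasts across several items before locking onto one—the same distinction the iOS spinner input makes between a drag and a flick.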
With the addition of an iOS-standard home screen button and perhaps a play/pause button, this hypothetical remote could not only make 10-foot navigation as pleasant as using an iPhone, but extend much of its interaction vocabulary. Such a system might just be the closest one could get to mirroring the iPhone’s direct interaction in a decoupled context.
As the above might suggest, I had something in my head which begged to be mocked up.
Trackpad on top, surface click on both top and bottom, home key in the middle. Slight resemblance to sushi.
Playback or play
Dan Provost suggested the very slim possibility of a game controller for the new iTV. While it’s unlikely that Apple will make gaming a central pillar of iTV, Apple is nonetheless making serious inroads in the market. So how might they make this part of the out-of-the-box experience?
With something like the concept envisioned above, there’s already a game controller: Just turn the remote sideways.
Even without any other features, a trackpad and two primary buttons allow for plenty of latitude in designing gameplay. Add an accelerometer or another button or two, and you have something that can easily rival the twitch-friendly controls on major consoles.
Will Apple take the adventurous route, or will they play it safe with iTV? We’ll find out soon enough.