Multi-touch: Why the iPhone Matters

By Noah | Jan 28

The introduction of the iPhone heralded the mainstreaming of a new interface paradigm. Features and form factor aside, the multi-touch interface represents the first major interface change since the introduction of the Macintosh GUI in 1984, and a notable shift in the right direction.

Twenty years ago, Donald Norman described the relationship between a control and its effect as mapping. “Natural mapping, by which I mean taking advantage of physical analogies and cultural standards, leads to immediate understanding.” (Norman, D. 1990. The Design of Everyday Things. Doubleday/Currency, p. 23.) Unfortunately, when there is not a “natural mapping,” understanding is anything but immediate.

Technology interfaces are difficult to design and learn because they have moved further and further away from natural mappings. When the tool in question is an axe or a spoon, the relationship between the control and its effect is clear and direct. Similarly, for simple mechanical tools such as food grinders, adjustable wrenches, latches, and the like, it’s not too difficult to divine their function without documentation. The interface is inseparable from the tool or device, and the mapping is strong.

Electrical devices are a little trickier. The function of simple electric tools with very few controls, such as power drills, can be figured out, even with no labels. Anything more complex than, say, a multi-speed blender benefits from having a clear label on each button. The reason is that when a device’s interface is that far removed from its function, there is no longer a truly natural mapping between control and effect.

When we move from electrical devices to electronic ones, labels and documentation become necessary to make the interface understandable. A microwave or VCR with no labels on the buttons would be totally unusable. There are simply no “natural” mapping conventions for users to draw on, and it becomes more important than ever for the interface designer to do a good job of creating and conveying the interface metaphor.

Finally, at the level of virtual or information-based interfaces, the interface is completely removed from any effect it may have. This is most evident in text-based interfaces; contemporary GUIs are a fancier presentation layer but retain the same underlying problems. Playing a game, shopping, and everything else onscreen is accomplished with the same clicks. It is only by virtue of the provided context that we are able to understand what we’re doing. Consequently, when interaction designers don’t do a good job of creating a metaphor, the provided context is insufficient and users get confused.

The windows-icons-menus-pointer (WIMP) GUI, popularized by the Macintosh in 1984, gave us a very limited physical metaphor. The mouse allowed us to indicate the noun we wanted to do something to, and then apply a verb to it. Most verbs were menu-based, and a few (selecting, dragging, clicking) could be executed directly. That amounted to poking at our simplistic virtual worlds with a stick through a narrow window. This limited ability to affect the virtual environment is often frustrating.

The iPhone is special and noteworthy because it takes us a step back down the path toward the physical. Steve Jobs notes that “there are no ‘verbs’” in the iPhone interface. Instead of selecting nouns and then indicating a verb, we can simply do the physical action to the virtual thing as though it were a physical thing. Gestures for scrolling, dragging, and pinch-resizing work as we’d expect, as the sketch below illustrates. The result is an interface that toddlers learn in seconds and experienced users find delightful.
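To make the contrast with noun-then-verb interaction concrete, here is a minimal, hypothetical sketch of “verb-less” direct manipulation using UIKit’s gesture recognizers (an API that arrived after this post was written; the DirectManipulationView class is invented purely for illustration and is not how Apple’s own software is built). A pinch scales the view and a drag moves it, with no separate select-then-command step:

    import UIKit

    // A hypothetical view that responds to gestures directly: pinch to resize,
    // drag to move. There is no separate "verb" step; the touch itself is the action.
    class DirectManipulationView: UIView {

        override init(frame: CGRect) {
            super.init(frame: frame)
            addGestureRecognizer(UIPinchGestureRecognizer(target: self,
                                                          action: #selector(handlePinch(_:))))
            addGestureRecognizer(UIPanGestureRecognizer(target: self,
                                                        action: #selector(handlePan(_:))))
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // Pinch: scale the view as though stretching a physical object.
        @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
            transform = transform.scaledBy(x: gesture.scale, y: gesture.scale)
            gesture.scale = 1.0 // reset so each update applies an incremental scale
        }

        // Drag: the view simply follows the finger.
        @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
            let translation = gesture.translation(in: superview)
            center = CGPoint(x: center.x + translation.x, y: center.y + translation.y)
            gesture.setTranslation(.zero, in: superview)
        }
    }

The point is that the gesture is the action: the view responds continuously while the fingers move, rather than waiting for a command to be chosen and applied afterward.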

It’s not perfect; there are still many places where buttons and lists of options must be used. But it’s the most interesting and genuinely different change to interfaces in decades. It’s also just the first drop in a big, big bucket, as evidenced by the recent introduction of the MacBook Air, which supports multi-touch gestures on its trackpad. Gesture-based interfaces will spread to more devices, and devices that don’t support gestures will eventually seem antiquated.

Apple made a great choice by commercially introducing a radical new interface paradigm on a finite, portable, accessible platform, and building on it from there. As a herald of the next wave of interface technology, the iPhone represents much more than the sum of its pretty parts.

Filed in UX

2 Comments

  1. Great points about the need for appropriate metaphor and context for the user, particularly in virtual environments.

    I find, especially with technical software, that there are times when it is apparent that the designer was far too close to the project and did not take the time to reflect on initial interactions with the design. While the appropriate context can be learned, the result is often far from intuitive, and operating the interface is less than fluid.

    This can create problems when dealing with mission-critical operations in high-pressure situations. Under stress, a user will be inclined to revert to a more natural metaphor, which can lead to operational failure with poorly designed interfaces.

    Comment by Jennifer — March 4, 2008 @ 1:07 am

  2. I’m totally in love with the multi-touch trackpad on my MacBook. I’m waiting for someone to release a full-size keyboard with the gesture touchpad embedded so I can use my notebook more ergonomically.

    There’s a whole body of HCI R&D toward post-WIMP interfaces that map more closely to the ways in which we interact with our physical environments. Interfaces for devices like the iPhone, the Microsoft Surface, and Perceptive Pixel’s multi-touch wall are examples of tangible user interfaces (TUIs) with surface-based, gestural interfaces. There are other types of TUIs, such as constructed assemblies and token+constraint systems (e.g., Siftables, MetaDesk), that take advantage of physical interaction even more so. Check out the related work section of B. Ullmer’s dissertation from the MIT Media Lab on Tangible User Interfaces.

    The move toward post-WIMP interaction will be necessary for computation to truly fade into the background of our environments, per the vision of ubiquitous computing. This is becoming more of a reality as we develop and utilize the ability to sense and tag physical objects and associate them with digital representations.

    Comment by Cornelius — April 9, 2009 @ 10:12 am
