What we can, governs what we wish

User interfaces define the way we think about how to get what we want.

Let’s take the mouse. It was invented four decades ago by Douglas C. Engelbart, and since then it has become ubiquitous in the design of computer user interfaces (you can watch the first public appearance of the mouse in a series on YouTube). Actually, it has become so ubiquitous that we would have a hard time imagining an interface without a mouse to click our way around.

The first time I used a mouse I was 14 years old, and I still remember the slight initial confusion. It is not really straightforward to grasp that movement on a horizontal surface gets translated into movement on a vertical screen. And the distinction between left click and right click was again not that obvious, given that I held the mouse with just my right hand.

The interesting part is that while back then I had a hard time finding my way around with the mouse, now I have a hard time finding my way around without it.

Until not so long ago, I had forgotten about my initial stumbling. What made me remember was a pointer from Adrian to the clickless interface. The clickless interface challenges the way we have come to perceive the mouse: it does not allow you to click, yet it still provides the full functionality of the environment. I highly recommend taking a couple of minutes to play with the prototype. I find it a beautiful exercise because it reminds me of a quote from Tom DeMarco:

It’s not what you don’t know that kills you, it’s what you know that isn’t so.
Tom DeMarco

I believe and hope that after 40 years we are on the verge of rethinking many parts of user interface design to obtain a completely new way of experiencing the digital world. I will name here three examples that caught my attention.

The first one is the so-called multi-touch interface that captured the hype this year due to the Apple iPhone. The first public demo was held by Jeff Han at TED 2005 (8 min), and it showed how we can interact directly with the screen using all our fingers. The iPhone put it to nice but rather limited use, allowing the user to zoom, but it was Microsoft that went further along the lines of Han’s demo and built the Microsoft Surface, a fresh and entertaining device. In any case, the innovation here is in the ability to have several points of action simultaneously. In retrospect this sounds like an obvious idea, but a couple of years ago we would not even have thought about it.

The computer desktop, reminiscent of the physical desktop, is yet another concept at the core of computer user interface design. When the desktop metaphor became popular with the Macintosh, it was a breakthrough over the text interface. Nothing really new has happened in the mainstream since then, and again we might be tempted to say that nothing happened because it is such a good design. In its time it was revolutionary, but in our times more can be done. In the video below (5 min), Anand Agarawala presents BumpTop, a desktop design that takes the physical metaphor to an extreme and provides a joyful experience.

While the previous two designs carry the physical metaphor into the digital one, my absolute favorite recent user interface innovation comes from Microsoft under the name of Seadragon, and it redefines the way we think of a surface by allowing information to be stored at various levels of zoom granularity. Words can hardly do this idea justice. To get an impression of it, you can watch the first part of Blaise Aguera y Arcas’s presentation at TED 2007 (8 min). In the second part of the presentation he shows Photosynth, an incredible algorithm that constructs a 3D environment from 2D pictures.

Posted by Tudor Girba at 26 September 2007, 8:08 pm