What Comes After The Touchscreen?

The next great gadget might be one you don't even touch. Here are five experts' thoughts on what that shift means and what the future might look like.

Much of the current crop of gadgetry runs on touchscreens, but it won't always be that way. We're already seeing a generation of gadgets that do away with touchscreens entirely, starting with the early success of the Kinect. A more precise gesture-tracking module, the Leap Motion controller, is shipping out to nearly 30,000 developers this fall, planting seeds for a post-touch takeover in the next few years. In an interview this summer, Valve's Gabe Newell put it this way:

You have to look at what’s going to happen post-tablet. If you look at the mouse and keyboard, it was stable for about 25 years. I think touch will be stable for about 10 years. I think post-touch, and we’ll be stable for a really long time — for another 25 years.

But one big question still hasn't been answered: what is it good for? Post-touch hasn't found the killer use case that the mouse found with GUIs and the touchscreen found with mobile web browsing and apps — but it's not for lack of trying. We've had a flood of prototypes, demos and art projects, any one of which could flourish into an industry — that is, once every laptop comes with a near-field depth camera. As for which will take off...it's anyone's guess. But some guesses are better than others:

Post-Touch Means Smaller Gestures

Michael Buckwald, CEO of Leap Motion

This technology is a fundamental transformation akin to the mouse because, if done correctly, it can be just an unambiguously better way of doing a large number of things. It's everything from the way people interact with their social graph and see their Facebook connections, to the way surgeons interact with things in the operating room, to how engineers build and interact with 3D models. We expect all those things to change.

What's bad about touch is that it has to be one-to-one to make sense, so if I want to move something from the top right corner to the bottom left corner, I have to move my finger that distance. Even on a tablet that starts to feel a little inefficient, and when you get to a giant touchscreen like a 22-inch monitor or a touch-TV, it's radically inefficient and extremely tiring. What we're able to do, because the user is back from the screen and not physically touching it, is have that same feeling of connectedness. We envision people moving their fingers just a couple of millimeters, really, and moving the cursor across the entire screen based on those movements.
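
To make that concrete, here is a minimal sketch of the kind of non-one-to-one mapping Buckwald is describing. It's written in Python purely as illustration: the "air zone" dimensions and screen size below are made-up numbers, and this is not Leap Motion's actual API.

```python
# Illustrative only: stretch a small "air zone" of finger movement
# (a few centimeters of space) across an entire screen, so a couple
# of millimeters of finger travel moves the cursor a long way.

SCREEN_WIDTH_PX = 1920     # assumed screen resolution
SCREEN_HEIGHT_PX = 1080

ZONE_WIDTH_MM = 40.0       # hypothetical active zone in front of the sensor
ZONE_HEIGHT_MM = 25.0

def finger_to_cursor(finger_x_mm, finger_y_mm):
    """Map a fingertip position inside the air zone to screen pixels."""
    # Normalize to the 0..1 range inside the zone, clamping at the edges.
    nx = min(max(finger_x_mm / ZONE_WIDTH_MM, 0.0), 1.0)
    ny = min(max(finger_y_mm / ZONE_HEIGHT_MM, 0.0), 1.0)
    return int(nx * (SCREEN_WIDTH_PX - 1)), int(ny * (SCREEN_HEIGHT_PX - 1))

print(finger_to_cursor(2.0, 1.0))    # (95, 43): 2 mm of travel is already ~100 px
print(finger_to_cursor(40.0, 25.0))  # (1919, 1079): the far corner of the screen
```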

Post-Touch Cameras Will Come With Every Laptop

Doug Carmean, Researcher-at-Large for Intel Labs

As soon as next year, you'll be able to see Logitech near-field depth cameras integrated everywhere there's a standard webcam in laptops. And they'll have the capabilities to do fine-motor control detection. Think about what you could do with that. You can use them for feature recognition. You can start doing emotion detection. Those are all things that I've seen that are in R&D today, that you could project forward.

Another aspect of that is that with Kinect, people are going beyond skeletal tracking and doing full-on 3D maps of bodies and they're projecting them into space. And the 3D mapping stuff allows you to create much more compelling systems for both augmented reality and virtual reality than we've seen in the past.

Post-Touch Is Better At 3D

James Alliban, interaction designer

I think post-touch will be best for creative software – like 3D packages and Photoshop. Navigating around a 3D environment and tweaking vertices and polygons makes far more sense in a gesture-enabled space than with standard 2D-only input devices. I suspect it will also make sense for casual web browsing. I've been banging on about augmented reality eyewear and HUDs for a couple of years now, so I'm fascinated to see how Google gets on with Project Glass. I'm fairly certain the first iteration will be disappointing (at least for what I have in mind), but I'm looking forward to 5-10 years down the line, when we have embedded depth-sensing tech that allows for gesture-controlled interfaces, when the digital layer is seamlessly integrated into your surroundings and the high-resolution image and wide field of vision allow for a fully immersive experience.

Whenever we see gesture-enabled interface demos, they tend to be computer science guys moving and zooming stock photos, waving frantically at a large screen. This isn't a great look for the future of the interface. There's a term for the physical effects that long-term exposure to gestural input can have on a person: Gorilla Arm. Tom Cruise apparently suffered terribly from this while filming Minority Report. The argument against most gesture-enabled computing is that it looks exhausting. Great for blasting zombies but complete overkill for updating a spreadsheet.

Post-Touch Could Use Your Voice, Or Your Eyes

Video: Pigeon Sim

Andrew Hudson-Smith, Director of the Centre for Advanced Spatial Analysis

Post-touch has the potential for instant information retrieval based on eye tracking, voice recognition and augmented reality display technologies. By simply 'looking' at an object for a set amount of time — say three seconds — information can be retrieved and displayed. You could compare prices in supermarkets by "eye-scanning" objects, for instance.
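
As a rough sketch of how that kind of dwell-to-select trigger could work, assuming some gaze tracker reports what the eyes are resting on each frame. The gaze stream and price lookup below are stand-ins for illustration, not any real eye-tracking API.

```python
# Illustrative only: fire a lookup once gaze has rested on the same
# object for DWELL_SECONDS, the "look at it for three seconds" idea.

DWELL_SECONDS = 3.0

def watch_for_dwell(gaze_stream, lookup):
    """gaze_stream yields (timestamp, object_id or None); lookup(id) fetches info."""
    current, since = None, None
    for timestamp, object_id in gaze_stream:
        if object_id != current:
            current, since = object_id, timestamp    # gaze moved: restart the clock
        elif current is not None and timestamp - since >= DWELL_SECONDS:
            yield lookup(current)                    # held long enough: retrieve info
            since = timestamp                        # don't re-trigger every frame

# Simulated gaze: staring at the same cereal box for a few seconds at 1 Hz.
fake_gaze = [(t, "cereal_box") for t in range(5)]
prices = {"cereal_box": "£2.49"}
for price in watch_for_dwell(iter(fake_gaze), prices.get):
    print(price)   # prints £2.49 once the three-second dwell has elapsed
```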

Post-Touch Can Capture The Whole Body

Video: Myron Krueger's Videoplace, 1985

Casey Reas, co-creator of Processing

The history of human-computer interaction moves toward interfaces that respond to our complete bodies. The new class of touchscreens is an extraordinary step after decades of the keyboard and mouse as the primary interfaces, but it uses only a narrow part of what hands can do.

The Videoplace installation (1985) by Myron Krueger set us on this new path decades ago, controlled by the full silhouette of a body in motion. I have no idea where full-body and gestural interfaces will lead us, but I do know that artists using Processing, Cinder, OpenFrameworks and other related frameworks are discovering where they will lead.

Where does that leave us? Well, depth cameras are available, but whether you'll be training them on your fingertips, your eyeballs or your whole body is up for debate, just like the question of whether you'll be using them to make art, play games or retouch photos.

The only thing they need is momentum, the kind of inevitability touchscreens got after the first iPhone launch. It could come from Microsoft, Leap, or somewhere we haven't even heard of — but however it happens, it's going to take some getting used to.
