The future of human/machine interaction

I’ve been thinking about this one for a while as well. We all know the scene from Minority Report…

This has been held up as the way in which people will interact with machines, and – indeed – some people have been working to make it a reality.

But is this the way that people will interact with technology in the future? A big part of me thinks no – too much work! Sci-fi tells lots of different stories, and one of the main things people imagine is voice control.

My own thinking runs down a few different paths. I should flag that my agency has clients involved in a few of these fields – Logitech on the more traditional machine interaction side and Nuance on the voice recognition side – but these views are my own and uninformed by discussions with those guys.

1. Traditional man/machine interaction isn’t going away for a while. Mice and keyboards are very effective at getting through many of the tasks we’ve made for ourselves and are very well entrenched.

2. Voice is going to continue to develop. Whilst voice control has always had its fans and its critics, two key things will both limit it and send it on its way. The limitation is accuracy: in the near to mid term it’s unlikely to reach the 90+% accuracy levels you get when typing. The driving force is the need for hands-free operation. There are always going to be contexts in which hands-free control over a machine is important, more so as mobile computing entrenches itself in modern society. So whether it’s in an industrial context, in a car or on a mobile device, there are platforms on which voice would be an optimum control mechanism.

3. Touch. The ‘hot’ interface right now. As someone who owns and uses an iPad and an iPhone, I can tell you that I am a convert. Initial mediocre experiences on tediously inadequate Windows Mobile devices, unresponsive and stylus-driven, made me very sceptical indeed, but the potential of this for innovative and interesting interaction with different applications is tremendous. But I can’t help but feel that the limitation here is the screen…

Which leads me to…

4. AR interaction. I have no idea how far this will go – at the moment augmented reality provides wonderful toys for marketers to play with and the potential for some retail novelty. But if you’ve read Charlie Stross’s Halting State (as you know, I have), you’ll have read of a world in which everyone wears AR-enabled glasses and can overlay ‘layers’ of Internet reality on the real world. So – an overlay of Google Maps on your current view of the street, complete with turn-by-turn navigation. An overlay of SquareMeal’s restaurant reviews. An overlay of World of Warcraft avatars, if you are so inclined. An overlay of the police criminal database, giving you information on individuals, crime scenes, etc. Whilst that’s a fun extrapolation, I think there’s scope for more everyday applications, and – as ever – I have no doubt that marketers will be amongst the first to pick them up. Imagine an AR iPhone app, for example, that allowed you to view special offers on a poster and interact with them to choose the one you wanted to download (the app would recognise a QR code, or some such, download the relevant reality overlay from the Internet alongside an interaction protocol, and let you play – roughly the flow sketched below). Or imagine a gaming context – in which you could run around, Laser Quest style, interacting with phantoms like the one in the Lynx ad.
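To make that poster example a bit more concrete, here’s a minimal sketch of the flow in Python: the QR payload points at an overlay manifest, the manifest bundles overlay assets with an interaction protocol, and a tap fetches the chosen offer. Every name in it – the manifest fields, the render hook – is invented for illustration; this is an assumption-laden sketch, not a real AR SDK.

```python
# Hypothetical sketch of the poster-offer flow described above.
# The manifest format, field names and render/interaction hooks are
# all invented for illustration, not taken from any real AR toolkit.
import json
import urllib.request


def handle_qr_scan(qr_payload: str) -> None:
    """Treat the QR code's payload as a URL pointing at an overlay manifest."""
    with urllib.request.urlopen(qr_payload) as response:
        manifest = json.load(response)

    # The manifest bundles the visual overlay with an interaction
    # protocol: which offers exist and what tapping each one does.
    overlay_url = manifest["overlay_url"]  # assets to draw over the camera view
    offers = manifest["offers"]            # list of {id, label, download_url}

    render_overlay(overlay_url, offers)


def on_offer_tapped(offer: dict) -> None:
    """User picked an offer from the overlay; fetch the voucher it points at."""
    urllib.request.urlretrieve(offer["download_url"], offer["id"] + ".voucher")


def render_overlay(overlay_url: str, offers: list) -> None:
    # Placeholder: a real app would hand the overlay assets to its AR
    # view and wire each offer's screen region to on_offer_tapped.
    for offer in offers:
        print("Tappable offer:", offer["label"])
```

The interesting design point is that the poster itself stays dumb – all the interactivity lives in the downloaded manifest, so a brand could change the offers without reprinting anything.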

AR is exciting for much the same reason that the Wii was exciting – it involves everyday people in an interactive experience – in the real world. There may be screens or bits of tech to support the interaction, but over time they will fade into routine mundanity (is that a word? computer says no). Although I do think that perhaps Gmail’s new features might be taking the concept a bit further than it should go.

5. Direct neural interface. Still far away? I’ve not read anything in the mainstream media about this one. A lot of sci-fi features subvocalisation to intelligent digital agents (Peter F Hamilton calls them ‘u-shadows’). I’ve never been quite sure what subvocalisation actually is (oh, that’s interesting, wonder what Nuance is doing there…), and over the years of meeting people, the workings of whose minds completely evade me, I’m cynical about the capacity of a machine to interpret the synaptic instructions of a broad subset of humanity. Not without the Cylons taking over, anyway.

One thing’s for sure – there’s a lot going on in this space and it’s massively exciting. Have I missed any particularly interesting ones? Always interested to read more.