As we move further into the 21st century, our thinking about how we interact with computers, and our understanding of interfaces, is evolving at a rapid pace. Despite the proliferation of electronic visual displays, other compelling forms of engaging with technology are emerging under the umbrella of Natural User Interfaces (NUIs). NUI technologies such as voice, gaze and gesture are transforming our interactions with technology by letting us go beyond the screen. This newfound agency has great potential to empower us, and it is not as far away as we may think.
Most of us are familiar with smart home devices such as Amazon's Echo, which lets us play songs, make purchases online and more, all through voice control. To some this may seem frightening: the feeling of always being listened to. Nevertheless, we can't ignore its usefulness and its accessibility potential. Furthermore, Amazon is planning to release at least eight new devices that can be connected to Echo and controlled by voice. These devices include:
- a microwave oven
- a smart plug
- a wall clock
If coming home and telling Echo to start cooking dinner and play your favourite song doesn't sound like the future, what does? These are more than entertainment devices: they can also give more independence to people with disabilities such as impaired eyesight, mobility or dexterity, removing the need to interact with a screen or buttons to use certain appliances. Instead, you could just ask Echo to do it.
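To make the idea concrete, here is a toy sketch of the step that sits between a transcribed voice command and an appliance: matching an utterance to an intent. The intent names and keyword lists are made-up illustrations, not Amazon's actual system, and real assistants use far richer language models than keyword matching.

```python
# Hypothetical intents for voice-controlled appliances; the names and
# keywords are illustrative assumptions, not a real Alexa skill.
INTENTS = {
    "start_microwave": ("microwave", "heat", "cook"),
    "play_music": ("play", "song", "music"),
    "toggle_plug": ("plug", "switch", "lamp"),
}

def match_intent(utterance):
    """Return the first intent whose keywords appear in the utterance,
    or None if nothing matches."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return None
```

Even this crude matcher shows why voice is an accessibility win: the user's end of the interface is just a sentence, with no screen or buttons in the loop.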
Gaze, or eye tracking, determines where the user is looking by measuring eye position and movement. Although its use is not as widespread as the computer mouse, it has advantages for selecting items on screen-based interfaces, for example when a user is on the move and cannot sit down with a keyboard and mouse. An interesting example of this technology being used to enhance the customer experience is in IKEA, where it records where customers look while shopping.
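The core of any gaze-based interface is turning raw eye measurements into a point on the screen. The sketch below assumes a tracker that reports normalised pupil coordinates in [0, 1] and fits a simple per-axis linear calibration from two known points; real trackers fit much richer models, so this is only the shape of the idea.

```python
def calibrate(samples):
    """Fit per-axis linear maps (scale, offset) from two calibration
    samples: pairs of (pupil_xy, known_screen_xy)."""
    (p0, s0), (p1, s1) = samples
    coeffs = []
    for axis in (0, 1):
        scale = (s1[axis] - s0[axis]) / (p1[axis] - p0[axis])
        offset = s0[axis] - scale * p0[axis]
        coeffs.append((scale, offset))
    return coeffs

def gaze_to_screen(pupil_xy, coeffs):
    """Map a raw pupil position to an estimated on-screen point."""
    return tuple(scale * p + offset
                 for p, (scale, offset) in zip(pupil_xy, coeffs))
```

For example, calibrating on a top-left and a bottom-right target lets the system estimate that a centred pupil corresponds to the centre of a 1920x1080 display, which is exactly the signal a kiosk or in-store display needs for gaze-based selection.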
Cockpit electronics provider Visteon has also incorporated similar technology into their electric cars. When the user gets into the vehicle, infrared cameras and facial recognition are used to identify them. The automated vehicle is also built with gaze-tracking technology to assess whether the user is fit to drive. If the vehicle deems they aren't ready to drive, it withholds control from them. In some respects this takes away our freedom to do what we like with our own property; however, gaze tracking combined with an autonomous vehicle could greatly reduce the number of accidents caused by drink-driving.
Sitting for long periods pressing keys on a keyboard is unnatural for us, and it can lead to physical injuries such as carpal tunnel syndrome. One way around this is gesture recognition: technology that lets a computer recognise your hands and interpret what you want to achieve from specific gestures. Why click and drag when you can just grab?
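At its simplest, gesture interpretation means turning a stream of tracked hand positions into a discrete command. The toy classifier below assumes the sensor delivers a time-ordered list of (x, y) palm positions in normalised units; the travel threshold is an illustrative assumption, not taken from any real SDK.

```python
def classify_swipe(path, min_travel=0.3):
    """Label a hand path as a left/right swipe if the hand travelled
    far enough horizontally (and mostly horizontally); otherwise
    report that no gesture was recognised."""
    dx = path[-1][0] - path[0][0]  # net horizontal travel
    dy = path[-1][1] - path[0][1]  # net vertical travel
    if abs(dx) >= min_travel and abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return None
```

A real system would look at velocity, timing and finger pose as well, but the principle is the same: continuous motion in, symbolic intent out.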
Leap Motion (founded in 2010) is constantly making progress and pushing the limits of how we can interact with technology through motion. Its main product, a motion controller, uses infrared to determine the size, orientation and motion of the user's hands, allowing the user to bypass the keyboard and mouse. Currently the motion controller is most often used to interact with something on a screen, such as a video game, but it has the potential to be used for so much more.
Giovanni Leal gives us a glimpse of what the future could hold. He used a Leap Motion Controller and an Arduino to move a robotic arm around in space. At the moment it is just a toy, but scaled up, one can imagine people controlling large machinery without needing to read a long manual. Instead, they could move their hands naturally and intuitively to operate the equipment, greatly reducing the learning curve.
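The glue in a demo like this is a mapping from tracked hand coordinates to servo angles the Arduino can drive. The sketch below is a hedged illustration of that mapping only: the coordinate ranges, the two-joint arm and the joint names are assumptions for the example, not Giovanni Leal's actual code.

```python
def to_servo_angle(value, in_min, in_max, out_min=0, out_max=180):
    """Linearly rescale a sensor reading into a servo's 0-180 degree
    range, clamping first so out-of-range readings can't slam the arm
    past its limits."""
    value = max(in_min, min(in_max, value))
    span = (value - in_min) / (in_max - in_min)
    return round(out_min + span * (out_max - out_min))

def hand_to_arm(palm_x, palm_y):
    """Map horizontal palm position to the base joint and palm height
    to the shoulder joint of a hypothetical two-joint arm. Ranges are
    in millimetres relative to the sensor and are illustrative."""
    return {
        "base": to_servo_angle(palm_x, -200, 200),
        "shoulder": to_servo_angle(palm_y, 100, 400),
    }
```

The resulting angles would then be sent over serial to the Arduino each frame. The clamping is the important design choice: hand trackers occasionally report wild positions, and a hard limit on each joint keeps a glitch from becoming a physical crash.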