Embodying the Interface

User interfaces have been changing over time. As the field of user experience grows, so do the possible interfaces that people interact with. While there are no set standards for the future, the technological world seems to be moving toward body-based interactions. We already have examples of body-based technology, such as Microsoft’s Kinect and Leap Motion.

The Imaginary Interface

The Nielsen Norman Group recently wrote about the “imaginary interface” presented by Sean Gustafson, Bernhard Rabe, and Patrick Baudisch. An “imaginary interface” is fully integrated into the body; in the example they provide, a mobile phone’s main functionality is operated through the palm of your hand.

I certainly wouldn’t say that this example is what SHOULD be happening, especially since it lacks a visual component and depends heavily on a computerized voice to tell you what is happening (according to the example provided).

Gustafson, Rabe, and Baudisch conducted research in which participants used both a touchscreen device and the “imaginary interface”.

Experiment Results

Under normal use, people were about equally fast selecting functions from a regular touchscreen phone and from the palm-based system. However, blindfolded users were almost twice as fast when touching themselves as when touching the glass surface of the phone.

The researchers also tested several variations:

- A phone that provided tactile feedback when touched, rather than a stiff pane of glass. Using the tactile phone, users were 17% faster, though the difference wasn’t statistically significant given the study’s sample size.

- Having users wear a finger cover to remove the finger’s sense of touch. This made no appreciable difference.

- Having users touch a fake hand, rather than their own, to remove the palm’s sense of touch. This condition slowed users down by 30%.

Why!? Embodied Cognition!

In short, the researchers offer a simple explanation for these outcomes:

…the key benefit of using the hand as a “touchscreen” is that you can feel when and where you’re being touched.

While that may satisfy some people, I hope that examining this from a cognitive perspective may shed some light on why that feedback is more effective.

The performance difference between the bodily-based interactions and the device-based interactions is potentially due to embodied cognition. One theory of embodied cognition states that the same processes that perceive the initial experience of an environment are used to recreate that experience when the stimuli are no longer present. This dependency establishes the importance of the sensory modalities in both “online” cognition (perception and understanding of a present stimulus) and “offline” cognition (a thought about a stimulus that is not physically present).

Anything that can be interacted with externally will be processed in an embodied manner, allowing for offline simulations that reconstruct a person’s motor, cognitive, and emotional representations of that object. The brain’s sensorimotor resources are used to simulate a physical world that is absent from a person’s environment, or even imaginary. When sensory information is removed, a person cognitively recreates the sense at a lower level of fidelity, which aids performance.

The question is: why are simulations of the body stronger than simulations of an external interface? The body supplies a great deal of constant sensory information, providing enough feedback for proprioception, or awareness of one’s self. External devices do become embodied, but the accuracy of the simulation is much lower since there isn’t continuous feedback. That’s why the fake hand throws users off: it is an object that has been perceived and interacted with, but it has not been embodied as accurately as their own hand. I would expect that, given more practice, participants would improve, but not quite to the level of using their own body.

Body-based interfaces may be the future

Body-based interfaces appear to be the future of UI, but it remains to be seen whether they can be implemented in a way that is unobtrusive, fits easy body movements (because I know people who can’t wink, Google Glass developers!), and takes advantage of the senses. Embodied cognition would allow these interfaces to become easier to use over time, and would permit full usage even when some sensory resources aren’t available.

We must keep in mind that the overall experience should not be sacrificed simply to use the newest interface technology. Just because a new interface is available doesn’t necessarily mean it is the right one.
