Gestural User Interface

Gestural UI refers to the use of specific gestures, such as scrolling, pinching, and tapping, to operate an interface. It also covers gesture recognition, including tipping, tilting, shaking, and eye motion. Gestural user interfaces and gesture recognition technology have evolved from very basic motions and applications to complex ones, and they are now part of everyday life for a huge number of people. As the technology continues to evolve, the future possibilities are incredibly exciting.

Smartphones and Tablets

Currently, smartphones and tablets are where everyday consumers most commonly encounter gestural UI. From Apple's iPhone to Samsung's Galaxy line, the vast majority of contemporary phones incorporate some element of gesture UI, from the swiping and scrolling common to most phones and tablets to orientation recognition.
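At the software level, a touch gesture such as a swipe is typically detected by comparing where a touch begins and where it ends. The sketch below is a minimal illustration in TypeScript, assuming a standard browser touch-event API; the distance thresholds are arbitrary values chosen for the example, not figures from any particular phone.

    // Minimal swipe detection: compare where a touch starts and where it ends.
    // Threshold values are illustrative only.
    const SWIPE_MIN_DISTANCE = 50; // horizontal pixels the finger must travel
    const SWIPE_MAX_DRIFT = 40;    // vertical drift allowed before it no longer counts as a swipe

    let startX = 0;
    let startY = 0;

    document.addEventListener("touchstart", (e: TouchEvent) => {
      startX = e.touches[0].clientX;
      startY = e.touches[0].clientY;
    });

    document.addEventListener("touchend", (e: TouchEvent) => {
      const dx = e.changedTouches[0].clientX - startX;
      const dy = e.changedTouches[0].clientY - startY;
      if (Math.abs(dx) >= SWIPE_MIN_DISTANCE && Math.abs(dy) <= SWIPE_MAX_DRIFT) {
        console.log(dx > 0 ? "swipe right" : "swipe left");
      }
    });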

The Samsung Galaxy S4 is one smartphone that has taken gesture UI a step further: it tracks and recognizes the user's eye movement and automatically scrolls down as the eyes reach the bottom of the page.

Current research and experimentation suggest that, in the future, gestural UI in phones and tablets will use more cameras and sensors, increasing responsiveness and recognition capabilities. Experts assert that phones and tablets will not only understand non-touch gestures and facial expressions, but will also be context-aware, meaning these devices will be able to anticipate and predict what users want with greater accuracy than is currently possible.

Gaming

Gaming is another area where gesture UI is already commonplace. The Xbox 360 and Xbox One consoles, for example, use the Kinect system to track player movements via cameras and sensors. The PlayStation 3 and 4 consoles use PlayStation Move in a similar manner, while the Nintendo Wii and Wii U use motion sensors and remote controllers to track player gestures and movement.
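Under the hood, systems like Kinect report the player's body to the game as a set of tracked joint positions each frame, and the game interprets those positions as gestures. The sketch below is a simplified, library-free illustration in TypeScript; the Joint and Skeleton types are hypothetical stand-ins rather than any console SDK, and the "hand raised above head" test is just one example of a threshold-based gesture check.

    // Hypothetical per-frame joint data, as a motion-tracking camera might report it.
    interface Joint { x: number; y: number; z: number; }  // metres, with y pointing up
    interface Skeleton { head: Joint; leftHand: Joint; rightHand: Joint; }

    // True when either hand is held clearly above the head: a simple
    // threshold-based gesture check of the kind motion games rely on.
    function isHandRaised(skeleton: Skeleton, margin = 0.1): boolean {
      return skeleton.rightHand.y > skeleton.head.y + margin
          || skeleton.leftHand.y > skeleton.head.y + margin;
    }

    // Example frame: the right hand is 25 cm above the head, so the gesture fires.
    const frame: Skeleton = {
      head:      { x: 0.0,  y: 1.60, z: 2.0 },
      leftHand:  { x: -0.3, y: 1.10, z: 2.0 },
      rightHand: { x: 0.3,  y: 1.85, z: 2.0 },
    };
    console.log(isHandRaised(frame)); // true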

The future of gestural user interfaces in gaming includes the use of gesture-capable touchscreens that wrap around game pads for greater gaming control, particularly for 3D gaming.

Additionally, according to both Microsoft and Sony, their vision for the future of gaming includes the combined use of gaze tracking, gesture recognition, and brain waves, or technological telekinesis, for a far more immersive gaming experience that borders on virtual reality.

Medicine

In the world of medicine, the future of the gestural user interface is very promising. One of the most notable innovations is the introduction of gestural UI in surgery: plans include hand gesture recognition systems that let surgeons review images and patient records during surgery. The ability to manipulate the interface via non-touch gestures would reduce surgery time, as surgeons would no longer be forced to exit the operating theater to access traditional computer terminals.
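One way such a system could work internally is to have a recognition layer (camera plus classifier) emit gesture labels, which are then mapped onto image-review commands. The sketch below, in TypeScript, is purely illustrative; the gesture labels, the ImageViewer interface, and the specific mappings are hypothetical and not drawn from any actual surgical product.

    // Minimal stand-in for the image viewer so the example is self-contained.
    interface ImageViewer {
      nextImage(): void;
      previousImage(): void;
      zoom(factor: number): void;
      togglePatientRecord(): void;
    }

    // Hypothetical gesture labels emitted by the recognition layer.
    type Gesture = "swipe_left" | "swipe_right" | "pinch_in" | "pinch_out" | "hold";

    // Illustrative mapping from recognized non-touch gestures to review commands,
    // letting a surgeon browse scans and records without touching a terminal.
    const commands: Record<Gesture, (viewer: ImageViewer) => void> = {
      swipe_left:  (v) => v.nextImage(),
      swipe_right: (v) => v.previousImage(),
      pinch_out:   (v) => v.zoom(1.25),
      pinch_in:    (v) => v.zoom(0.8),
      hold:        (v) => v.togglePatientRecord(),
    };

    function handleGesture(gesture: Gesture, viewer: ImageViewer): void {
      commands[gesture](viewer);
    }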

The biggest benefit is the potential for the reduction of the risk of infection. If surgeons can manipulate UIs without having to make physical contact, the risk of passing harmful bacteria to the patient is greatly reduced.

Robotic scrub nurses, still in the developmental stages, would operate on the same gestural UI and could conceivably take the place of human scrub nurses in a range of medical functions, thus increasing the potential to improve surgical outcomes by reducing the likelihood of human error.