Wherever you go in the world people are different. And so is the way they talk with their hands. Now what if you could operate your TV by using gestures? Would it understand you?
A growing number of gesture-controlled consumer electronics are coming to market. Some of the latest smart TVs, smartphones, cameras and game consoles are able to capture our hand movements and translate them into user interface commands.
This poses new challenges for user experience design for globally merchandized devices, as gesture behavior is presumably influenced by local cultures.
As specialists in culture-specific user experience research, UX Fellows wanted to find out which semantic gestures people from various cultures would spontaneously use to control consumer electronics such as interactive TVs.
We talked with 360 electronics-savvy users in 18 different countries to find out which semantic gestures fit best with basic and advanced smart TV commands.
Thumbs up to gesture-controlled consumer electronics?
A cross-cultural study spanning 18 countries on spontaneous gesture behavior, by UX Fellows
In just a few short years, we have all learned to use and love touchscreens. The interactions provided by touchscreens feel very natural and, as the name implies, involve physical interaction with a device through touch. However, as technology continues to evolve, we are moving into an era where human-computer interaction can move beyond touch in the form of gesture control.
This type of technology has already been seen in sci-fi movies such as Minority Report and introduced to users through game consoles such as the Wii and Xbox Kinect. First-generation gesture-controlled devices such as digital cameras and televisions are currently available from manufacturers such as Samsung. In July 2013, the Leap Motion controller will be released, offering a completely new user experience based on gesture control, even for classic computers. Together, these products have the potential to trigger a wave of new gesture-based applications and programs.
Gesture control has typically been depicted as an experience in which the hand, moving in three-dimensional space, replaces the mouse to control a graphical user interface; we call these pointing gestures. A step beyond this is the use of what we call semantic gestures (i.e. gestures with an associated meaning), which are understood by machines without the need for an additional GUI.
Gestures are a natural and often unconscious aspect of our daily communication and interaction with our friends, family and colleagues. Applying them to globally merchandized devices such as cameras and TVs potentially poses a huge challenge for user experience designers, as gestures are presumably influenced by local cultures. A gesture that is widely understood and acceptable in one culture may hold no meaning for, or even be offensive to, another. We also know that some cultures are more predisposed to the use of gestures in their everyday communication than others.
As specialists in culture-specific user experience research, the UX Fellows network decided to investigate this interesting topic and explore the gestures people from a variety of cultures would spontaneously use in order to control consumer electronic devices such as TVs.
The key things we wanted to understand included:
- What are the most common gestures for typical TV-related commands?
- Do any of the chosen gestures have high international commonality? Which, if any, are particularly culture-specific?
- Do any common symbols or metaphors underpin the chosen gestures?
- Do any markets or regions differ from others? Is it possible to identify any clusters amongst regions?
- How difficult is it for potential users to imagine gestures that could be used as commands for controlling CE devices?
The UX Fellows gesture study concentrates on spontaneously generated semantic hand gestures, nominated by users, for controlling typical TV functions. Importantly, these gestures are not associated with manipulating an on-screen menu; all gestures are intended to be independent of such an on-screen display. Pointing gestures, in which the hand acts as a mouse substitute to interact with a menu system, were not examined.
The study took the form of one-on-one interviews between a moderator and a participant. Each session was conducted in a room that contained a comfortable chair for the participant and a switched-off flat-screen TV. Participants sat in front of the TV, were instructed to imagine that the TV could understand their gestures, and were asked to demonstrate the gestures they would use to complete a series of commands associated with using the TV. Each interview took approximately 15 minutes to complete.
Participants were instructed that they could use one or both hands to demonstrate their preferred gestures. The moderators did not use any gestures themselves, to avoid influencing participants' choices. The participants were also asked to verbalize their thoughts as they imagined and produced each gesture, giving the moderator some insight into how each gesture was decided upon. Each participant was asked to assume they were using gestures to communicate in their own country, and if they felt there was more than one possible gesture for a specific command, they were instructed to choose the gesture they thought would be best understood by someone else. We told the participants that they did not have to develop a coherent system of gestures; we were simply interested in the gestures they would choose for each command, and they were free to repeat gestures if they wanted.