What’s in a Face?

Roughly 100,000 years ago, humans are thought to have developed spoken language. Speech allowed people to share complex information about people, places, and abstract ideas in a highly efficient way. Spoken words aren’t the only part of communication, though. Eye gaze, body language, gestures, vocalizations, and facial expressions allow us to express emotions, adding an extra layer of non-verbal information to speech. Because it combines speech with this non-verbal information, face-to-face social communication is the most effective way for people to share and understand each other’s thoughts and feelings.


Facial expressions are a key part of social communication, and they’re universal across all human cultures. In fact, there are six emotions that people all over the world can display and recognize: happiness, surprise, sadness, fear, disgust, and anger.


But some people, particularly individuals with autism, struggle to learn social communication skills. Many people with autism process sights, sounds, or other sensations more intensely than others. Some may feel overwhelmed by all of the sensory information they receive during a one-on-one conversation. This experience can interfere with their ability to focus on other people’s faces, the main source of speech and emotional communication cues.


Research has shown that people with autism can learn more effectively when they use technology, which can stimulate their interest in and attention to a task. InnerVoice’s 3D avatars help people with autism focus on eye gaze, facial expressions, speech, and written language.


InnerVoice teaches people with autism how to communicate the old-fashioned way, using a combination of spoken language and facial expressions — but with a modern-era twist.


InnerVoice’s photo-based 3D avatars demonstrate eye gaze, facial expressions, speech movements, and tone of voice so that people with autism can learn from the main source of social communication: a human face. InnerVoice, the only communication app designed specifically for people with autism, teaches the connections among facial expressions, spoken words, and written words.


Listening with Your Eyes…

Have you ever approached someone who is watching a video and started talking to them? You might feel that they’re ignoring you because they’re hyper-focused on what they’re watching. Although you may have said their name over and over, they continue to look at the screen as if you weren’t even there. Interestingly, they might not be able to hear you at all, thanks to a phenomenon that perceptual load theory helps explain.


Researchers studying perceptual load have found that our eyes and ears can process only so much at once. Most people first notice the sounds and sights that are relevant to whatever they are attending to, and only then process anything unrelated to their current focus. In other words, even though you may be standing next to someone saying their name as they watch a video, their name isn’t relevant to the video, so their brain may never process your voice.


People with autism process sights and sounds very differently. Surprisingly, they are better than most people at detecting unexpected and expected sounds in their environments. Dr. Temple Grandin, a professor of animal science and a person with autism herself, has described people with autism as having “ears like microphones” that detect all the surrounding sounds in an environment — whether or not they are relevant to the task at hand.


However, researchers Anna Remington and Jake Fairnie (2017) found that an increased capacity to detect and process a variety of sounds at once can also decrease attention to important social information such as speech. Remington and Fairnie added that to “…reduce the impact of unwanted distraction in autism that results from increased capacity, we need to reduce [irrelevant] background noise but also increase the level of perceptual load in a given task.” This is especially important when learning spoken language, an area in which many children with autism struggle.


Remington and Fairnie’s 2017 findings contradict an outdated, all-too-common idea that educational activities and materials should be simplified for children with autism. In reality, these kids simply need the right strategies to tap into their potential: tools that present auditory and visual information relevant to the task at hand.


InnerVoice is exactly this type of tool: one that displays relevant auditory and visual stimuli for learning speech, language, and social communication. InnerVoice’s 3D avatars capture the attention of children with autism and present facial expressions, speech movements, and spoken words all in one place, combining just the right amount of speech and language stimuli so that kids with autism can build upon their innate strengths as learners.

Remington, A., & Fairnie, J. (2017). A sound advantage: Increased auditory capacity in autism. Cognition.

Text, Speech, and Emotions :)

Until the invention of writing, spoken words, gestures, vocalizations, and facial expressions were the most popular ways to share ideas and feelings. Newer ways to communicate, such as cell phones, text messaging, email, and social media, became available only relatively recently. And times change quickly: as people have come to rely on these modern tools, they have gradually shifted toward text-based communication.


For millions of individuals, text-based communication has begun to take the place of telephone and face-to-face conversations. Text messages and other media, such as GIFs and short videos, offer quick and condensed interactions that often allow people to multitask as they communicate. In an attempt to share quick emotional content or avoid miscommunication, many users add picture-based emojis or text-based emoticons: lol 🙂


These new symbols can add some emotional content, but both parties have to know what the symbols represent (IMHO). Despite their inconvenience in today’s world, phone and face-to-face conversations still have an advantage in accurately conveying emotional information, because text alone does not supply the auditory and visual cues that we rely on to communicate not just the meaning of our words but the intention behind them. One way of communicating that underlying intention is speech prosody, or tone of voice.


Prosody uses pitch, volume, and word emphasis to convey emotional tone, clarify word meanings, and signal sentence types (e.g., the difference between a statement and a question). Communicating emotions through speech is complex: pitch, speech tempo, rhythm, voice quality, loudness, and pronunciation all vary with a given emotional state (Mozziconacci, 2002). For example, a person who is sad may speak slowly, using a soft voice. Yet tone of voice can be misleading without context, and faces often provide an emotional context for tone of voice.
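
To make this concrete, here is a minimal sketch, in Python, of how a speech synthesizer might be told to vary tone of voice. The emotion-to-prosody mappings are illustrative assumptions, not InnerVoice’s actual settings; the <prosody> markup itself comes from the W3C’s Speech Synthesis Markup Language (SSML), which many synthesizers accept.

```python
# Hypothetical emotion-to-prosody settings, loosely following the kinds of
# variation in pitch, tempo, and loudness that Mozziconacci (2002) describes.
EMOTION_PROSODY = {
    "sad":   {"rate": "slow", "pitch": "low",  "volume": "soft"},
    "happy": {"rate": "fast", "pitch": "high", "volume": "medium"},
    "angry": {"rate": "fast", "pitch": "low",  "volume": "loud"},
}

def to_ssml(text: str, emotion: str) -> str:
    """Wrap text in a standard SSML <prosody> tag for the given emotion."""
    p = EMOTION_PROSODY[emotion]
    return ('<speak><prosody rate="{rate}" pitch="{pitch}" volume="{volume}">'
            '{text}</prosody></speak>').format(text=text, **p)

print(to_ssml("I lost my keys.", "sad"))
# <speak><prosody rate="slow" pitch="low" volume="soft">I lost my keys.</prosody></speak>
```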


Facial expressions share an enormous amount of emotional information that is easy for most people to understand. In fact, Ekman and Friesen (1969) found that there are six universally recognizable human emotions: anger, disgust, fear, happiness, sadness, and surprise. Ekman’s (1977) Facial Action Coding System was developed to code universal human facial expressions into general patterns regardless of race, age, sex, or ethnicity. The system is built on the finding that facial expressions are composed of specific muscle movements, called facial action units.


Each facial action unit is assigned a number, which corresponds to a specific set of muscles and the movements they create when activated. For example, for a person to be viewed as “happy,” they have to display a combination of facial action units: 6 + 12 + 25 (cheek raiser, lip-corner puller, and parted lips).
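
As an illustration, here is a minimal sketch, in Python, of how software might match detected action units against a known combination. Only the happiness pattern (6 + 12 + 25) comes from the example above; the function name and the extra action unit in the demo are hypothetical.

```python
# Minimal sketch: checking detected FACS action units against the "happy"
# combination named above (AU 6 + 12 + 25). A real recognizer would score
# many more emotions, action units, and intensity levels.
HAPPY_AUS = {6, 12, 25}  # cheek raiser, lip-corner puller, lips part

def looks_happy(detected_aus: set[int]) -> bool:
    """Return True if every action unit in the 'happy' pattern is present."""
    return HAPPY_AUS <= detected_aus  # subset test

print(looks_happy({6, 12, 25, 43}))  # True: all happy AUs present
print(looks_happy({12, 25}))         # False: AU 6 (cheek raiser) is missing
```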


While texts, GIFs, emojis, and other tools can express emotions to some degree, the combination of facial expressions, spoken words, and tone of voice seems to be the most effective, and most innately human, system for sharing and understanding feelings. But what if a person struggles to learn social cues by participating in conversations, spending time in groups, or watching others in their environment?

Recent advances in facial recognition and speech synthesis offer potential solutions for these individuals and many others. Research shows that a combination of facial expressions and tone of voice can improve the perception of emotional content within speech (Johnstone et al., 2006; Brady & Guggemos, 2018). As educational tools, synthesized emotional communication systems that include facial expressions and tone of voice could help people with autism learn to share and understand emotional content in social settings. By using these tools, people with autism could have more opportunities to participate in the rapidly changing physical and virtual social communities in which we all now live.