Voice UI AND MENTAL HEALTH

As part of our Better, Simpler & Smarter series, Common Good explore designing Voice User Interfaces for people with anxiety or depression


DESIGNING EXPERIENCES TO SUPPORT WELLBEING

Here at Common Good, we help organisations rapidly identify, prototype and realise innovative, human-centred design solutions.

We recently spent time working alongside Wellcome Trust to help identify and define critical design challenges within the field of Mental Health. We recognised the limited support for people suffering from anxiety and depression – and that voice assistant technology provides a significant opportunity to create experiences which support vulnerable people and their mental wellbeing.


VOICE UI TODAY

Exploring and interacting with Voice UI services


WHAT DID WE LEARN?

There are limitations across digital platforms and services for people in need of mental health support. Creators of Voice UI products are overlooking three important areas when applying voice technology to mental health support.


  1. BEHAVIOURAL UNDERSTANDING

    Services lack understanding of a user’s typical behaviour pattern and the ability to identify when behaviours change.

  2. SENTIMENT ANALYSIS

Services are unable to distinguish between a negative and a positive tone of voice.

  3. SENSITIVE ERROR HANDLING

System errors are extremely difficult and frustrating to recover from, and often leave the user feeling stupid.


Empathising with people who have lived experience of anxiety or depression helps designers truly understand their specific needs and typical day-to-day behaviours. This sets the foundation for identifying opportunities to create useful experiences considering Behavioural Understanding, Sentiment Analysis and Sensitive Error Handling.

How can we design and learn simultaneously to introduce a truly human experience instead of robotic interactions?


The Rise of Emotion AI

Moving beyond command and control to create a more human experience


IT’S NOT WHAT YOU SAY, BUT HOW YOU SAY IT

Humans use many non-verbal cues, such as facial expressions, gestures, body language and tone of voice, to communicate their true emotions.

Most voice interfaces for supporting mental health ask people to express their emotions verbally in a clear and concise way. However, not everyone is emotionally expressive. Approximately one in ten people have the alexithymia personality trait, meaning they struggle to identify and express their feelings in words. Voice applications that rely on users to self-report their feelings can therefore become a barrier to some people getting the support they need.

 

OPPORTUNITY TO MEET THE NEEDS OF PEOPLE WHO ARE LESS EMOTIONALLY EXPRESSIVE


Artificial emotional intelligence, or Emotion AI, is also known as emotion recognition or emotion detection technology. This emerging technology has speech-analysis capability that examines not what is said, but how it is said. By observing changes in speech paralinguistics such as tone, loudness, tempo and voice quality, it can distinguish speech events, emotions and gender.
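To make the idea concrete, here is a minimal sketch of extracting simple paralinguistic cues (loudness, loudness variation, a rough voice-quality proxy and a pausing proxy) from a raw audio waveform. The features and thresholds are illustrative assumptions of ours, not a real Emotion AI system, which would use far richer features and trained models.

```python
import math

def prosodic_features(samples, sample_rate):
    """Extract a few simple paralinguistic cues from a mono waveform
    (a list of floats). Illustrative sketch only."""
    frame_len = int(0.025 * sample_rate)  # 25 ms analysis frames
    rms_values, zcr_values = [], []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        # Root-mean-square energy: how loud this frame is
        rms_values.append(math.sqrt(sum(x * x for x in frame) / frame_len))
        # Zero-crossing rate: a crude proxy for voice quality / noisiness
        crossings = sum(
            1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
        )
        zcr_values.append(crossings / (frame_len - 1))

    mean_rms = sum(rms_values) / len(rms_values)
    variance = sum((r - mean_rms) ** 2 for r in rms_values) / len(rms_values)
    voiced = sum(1 for r in rms_values if r > 0.5 * mean_rms)

    return {
        "mean_loudness": mean_rms,                       # overall level
        "loudness_variation": math.sqrt(variance),       # monotone vs expressive
        "zero_crossing_rate": sum(zcr_values) / len(zcr_values),
        "voiced_ratio": voiced / len(rms_values),        # crude pausing proxy
    }
```

A flat, quiet reading and an agitated, loud one would produce visibly different feature values, which is the signal an emotion classifier would build on.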

This opens up opportunities to create a more human experience and move away from command and control style interactions.

 

WHERE TO START?

Designing experiences for people who expect to be understood


DESIGN PRINCIPLES

Design principles are a useful starting point for guiding design decisions and making informed choices about the approach you take.

Designing for a voice interface is very different from designing for a graphical one. First, there are no visual cues to guide user interaction. Second, users are unsure what to expect from a voice assistant, because people associate 'speaking' with interpersonal communication rather than with technology.


1. SENSITIVE ERRORS

Ensure error recovery is thoughtfully designed to avoid asking people to repeat trauma or negative feelings, as this can have a harmful effect on their mental health. Never make users feel stupid; this is especially important for people in a vulnerable state of mind.
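One way to sketch this principle in code is an escalating re-prompt strategy: the system takes the blame, offers progressively easier alternatives, and never asks the user to restate difficult feelings. The prompt wording below is a hypothetical example of ours, not copy from any shipped product.

```python
# Each retry takes responsibility and offers an easier path, rather than
# repeating "I didn't understand, please say that again".
RECOVERY_PROMPTS = [
    "Sorry, that was my mistake. Would you like me to suggest a few options instead?",
    "I'm still having trouble on my end. You can just say 'yes', 'no', or 'something else'.",
    "Let's take a break from that. Would you like a breathing exercise, or to talk later?",
]

def recovery_prompt(error_count: int) -> str:
    """Pick a prompt by how many consecutive errors have occurred,
    settling on the gentlest fallback rather than looping forever."""
    index = min(error_count - 1, len(RECOVERY_PROMPTS) - 1)
    return RECOVERY_PROMPTS[index]
```

The key design choice is that the burden of recovery shifts onto the system with each failure, instead of onto the user.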


 

2. INFORMATION CHUNKS

Be mindful about giving out large volumes of advice at once. People have limited capacity to remember a string of information, and typically a user cannot remember more than about seven auditory items at a time. Instead, break the information into smaller steps and tasks for the user to complete.
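A simple sketch of this chunking idea: split a list of advice points into small groups, so each spoken turn stays well under the roughly seven-item auditory memory limit. The advice strings are illustrative placeholders.

```python
def chunk_advice(items, chunk_size=3):
    """Split advice points into small groups, one group per spoken turn,
    so the user never has to hold more than a few items in memory."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

# Each sub-list becomes one spoken turn, followed by a check-in
# such as "Shall we continue?" before moving to the next.
steps = chunk_advice([
    "Breathe slowly for a minute",
    "Name five things you can see",
    "Drink some water",
    "Step outside if you can",
    "Message a friend",
])
```

Pausing between chunks also gives the user a natural point to stop, which matters more here than in a general-purpose assistant.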


3. MAKE THE FIRST MOVE

Voice technology that relies on the user to give the first command poses difficulties for people who may have low motivation and confidence. We suggest linking up with other sensors, such as Emotion AI, wearables and smart home tech to switch from a command and control relationship to one where the technology can look out for the user and make the first move.
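As a sketch of what "making the first move" could look like, here is a hypothetical proactive trigger combining a wearable heart-rate signal with interaction history. The sensor names, thresholds and the night-time guard are all assumptions of ours for illustration, not a real wearable or smart-home API.

```python
def should_check_in(resting_heart_rate, current_heart_rate,
                    hours_since_last_interaction, is_nighttime):
    """Decide whether the assistant should gently initiate contact,
    e.g. 'I noticed this might be a tough moment. Want to talk?'"""
    # Elevated heart rate relative to the user's own baseline
    elevated = current_heart_rate > 1.25 * resting_heart_rate
    # Prolonged silence can signal withdrawal and low motivation
    withdrawn = hours_since_last_interaction > 48
    # Never wake the user: proactive support must not become intrusive
    return (elevated or withdrawn) and not is_nighttime
```

Any real deployment would need explicit consent, tuning per user, and a way to say "not now" that the system respects.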


 

ADDITIONAL CONSIDERATIONS

Complementing the design principles with additional guidelines helps deliver a safe and secure environment for users to interact with a Voice UI.

Considerations to keep in mind:

  • PRIVACY

    Users sharing personal information with Voice UI will want to understand how their information is going to be used.

  • TRANSPARENCY

    Tell users how long their information will be stored and where the information will be saved. Let them know about security and encryption.

  • SEPARATION

    Make a distinction between technology and humans and remain open about who the user is interacting with.


Design the capability to recognise changes in daily routines and behaviour, so the service can offer support in moments of need. Anxiety and depression are expressed in many different ways; recording and adapting to new behaviours helps the service meet the user's needs and creates a more human-like experience.
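A minimal sketch of routine-change detection: compare recent days against the user's own baseline and flag a deviation of more than a chosen number of standard deviations. The metric (daily interactions, sleep hours, steps) and the threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def routine_changed(baseline_days, recent_days, threshold=2.0):
    """Flag a change in routine when the recent average deviates from the
    user's own baseline by more than `threshold` standard deviations.
    Illustrative sketch; any real signal would need per-user tuning."""
    mu, sigma = mean(baseline_days), stdev(baseline_days)
    if sigma == 0:
        return mean(recent_days) != mu
    return abs(mean(recent_days) - mu) / sigma > threshold
```

Comparing the user against their own history, rather than a population norm, is what lets the same code accommodate very different individual routines.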

 
 

THE FUTURE

There’s a huge opportunity in Voice UI to create experiences that support vulnerable people who suffer from anxiety and/or depression by adopting a more human approach.

Conversational interaction is more than just an exchange of audible words: our language is the most powerful, useful and effective communication mechanism we have available. There are layers to what we say, how we say it and the meaning behind our words. Humans also communicate in other forms, such as physical cues, social awareness and personality.

We’re excited to see how the future of Voice technology develops, and look forward to continuing our Voice UI journey to understand how it will aid those who will benefit most – for example, users with anxiety and depression.

 
 

Explore more of our work on voice interfaces


DRIVE CALM

Our Voice UI product that assists drivers in moments of anxiety or distress.


VOICE UI FOR DRIVERS

We’ll be sharing our research and insights on Voice UI for drivers soon.

 
 

LET’S MAKE SOMETHING GREAT TOGETHER

Want to learn more about our Voice UI work? Is there a design or business challenge your organisation needs solving? We would be happy to talk about how Common Good can help with our services or Design Sprint.

Send us a message or say hello.
