DESIGNING EXPERIENCES TO SUPPORT WELLBEING
Here at Common Good, we help organisations rapidly identify, prototype and realise innovative, human-centred design solutions.
We recently spent time working alongside Wellcome Trust to help identify and define critical design challenges within the field of Mental Health. We recognised the limited support for people suffering from anxiety and depression – and that voice assistant technology provides a significant opportunity to create experiences which support vulnerable people and their mental wellbeing.
WHAT DID WE LEARN?
There are limitations across digital platforms and services for people in need of mental health support. Creators of Voice UI products are overlooking three important areas of Voice technology for mental health support.
BEHAVIOURAL UNDERSTANDING
Services lack understanding of a user’s typical behaviour pattern and the ability to identify when behaviours change.
SENTIMENT ANALYSIS
Services are unable to distinguish between a negative and a positive tone of voice.
SENSITIVE ERROR HANDLING
System errors are extremely difficult and frustrating to recover from, and often make the user feel stupid.
Empathising with people who have lived experience of anxiety or depression helps designers truly understand their specific needs and typical day-to-day behaviours. This sets the foundation for identifying opportunities to create useful experiences considering Behavioural Understanding, Sentiment Analysis and Sensitive Error Handling.
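The first of these themes, Behavioural Understanding, can be illustrated with a simple statistical check: compare today's interaction level against the user's own baseline and flag large deviations. This is only an illustrative sketch; the function, the interaction counts and the threshold are all hypothetical, not clinical indicators.

```python
import statistics

def behaviour_changed(baseline_counts, today_count, threshold=2.0):
    """Flag a change when today's interaction count deviates from the
    user's baseline by more than `threshold` standard deviations.
    All numbers here are illustrative, not clinical measures."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.stdev(baseline_counts)
    if stdev == 0:
        return today_count != mean
    z_score = abs(today_count - mean) / stdev
    return z_score > threshold

# e.g. a user who normally interacts 10-12 times a day
baseline = [10, 11, 12, 10, 11, 12, 11]
print(behaviour_changed(baseline, 11))  # a typical day
print(behaviour_changed(baseline, 2))   # a sharp drop in engagement
```

A real service would use richer signals (time of day, sleep, speech patterns) and a learned model rather than a single z-score, but the principle of comparing a person against their own baseline is the same.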
How can we design and learn simultaneously to introduce a truly human experience instead of robotic interactions?
The Rise of Emotion AI
Moving beyond command and control to create a more human experience
IT’S NOT WHAT YOU SAY, BUT HOW YOU SAY IT
Humans use a lot of non-verbal cues, such as facial expressions, gestures, body language and tone of voice, to communicate their true emotions.
Most voice interfaces for supporting mental health ask people to express their emotions verbally in a clear and concise way. However, not everyone is emotionally expressive. Approximately one in ten people have the alexithymia personality trait, which means they struggle to put their feelings into words. Voice applications that rely on the user to self-diagnose their feelings can therefore be a barrier to some people getting the support they need.
OPPORTUNITY TO MEET THE NEEDS OF PEOPLE WHO ARE LESS EMOTIONALLY EXPRESSIVE
Artificial emotional intelligence, or Emotion AI, is also known as emotion recognition or emotion detection technology. This emerging technology analyses not what is said, but how it is said: by observing changes in speech paralinguistics such as tone, loudness, tempo and voice quality, it can distinguish speech events, emotions and gender.
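To make "how it is said" concrete, here is a toy sketch of two classic paralinguistic features, RMS energy (a proxy for loudness) and zero-crossing rate (which rises with pitch), applied to synthesised tones. The feature thresholds and the "calm"/"agitated" labels are invented for illustration; real Emotion AI systems learn such mappings from large labelled audio datasets.

```python
import math

def rms(samples):
    # Root-mean-square energy: a crude proxy for loudness
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    # Fraction of adjacent sample pairs that change sign;
    # rises with pitch, one cue emotion-recognition systems use
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def tone(samples):
    # Toy rule: loud, higher-pitched speech reads as "agitated".
    # Thresholds are placeholders, not trained values.
    if rms(samples) > 0.5 and zero_crossing_rate(samples) > 0.02:
        return "agitated"
    return "calm"

SR = 16000  # sample rate in Hz
quiet = [0.2 * math.sin(2 * math.pi * 120 * t / SR) for t in range(SR)]
loud = [0.9 * math.sin(2 * math.pi * 300 * t / SR) for t in range(SR)]
print(tone(quiet), tone(loud))  # calm agitated
```

Production systems extract dozens of such features (pitch contours, jitter, spectral shape) and feed them to trained models, but the underlying idea is the same: the signal carries emotional information beyond the words.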
This opens up opportunities to create a more human experience and move away from command and control style interactions.
WHERE TO START?
Designing experiences for people who expect to be understood
Design principles are a useful starting point for guiding design decisions and making informed choices about the approach you take.
Designing for a voice interface is very different from a graphical interface. First, there are no visual cues to guide user interaction. And second, users are unsure of what they can expect from a voice assistant because people associate ‘speaking’ with interpersonal communication rather than with technology.
1. SENSITIVE ERRORS
Ensure error recovery is thoughtfully designed to avoid asking people to repeat descriptions of trauma or negative feelings, as this can harm their mental health. Never make the user feel stupid; this is especially important for people in a vulnerable state of mind.
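One way to apply this principle is to escalate through blame-free recovery prompts that offer alternatives instead of demanding the user repeat themselves, then hand over gracefully. The function and wording below are purely illustrative, not a recommended script.

```python
def recovery_prompt(failure_count):
    """Escalating, blame-free recovery prompts. Wording is illustrative;
    the aim is to offer alternatives rather than ask the user to repeat
    distressing content or imply the error is their fault."""
    prompts = [
        "I want to make sure I get this right. Would you like to try "
        "different words, or shall I read out some options?",
        "No problem at all. I can list a few things I can help with, "
        "or connect you with a person. Which would you prefer?",
    ]
    # After repeated failures, stop re-prompting and hand over
    if failure_count < len(prompts):
        return prompts[failure_count]
    return "Let me connect you with someone who can help directly."

print(recovery_prompt(0))
print(recovery_prompt(5))
```

Note that the prompts never say "I didn't understand you" or "please repeat that", and the final fallback routes to a human rather than looping.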
2. INFORMATION CHUNKS
Be mindful about giving out large volumes of advice at once. People have limited capacity to remember a string of information, and typically a user cannot remember more than about seven auditory items at a time. Instead, break the information into smaller steps and tasks for the user to complete.
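The chunking idea above can be sketched in a few lines: split a list of spoken steps into short turns well under the roughly-seven-item auditory limit. The three-items-per-turn choice and the example advice are illustrative assumptions.

```python
def chunk_advice(steps, max_items=3):
    # Keep each spoken turn well under the ~7-item auditory limit;
    # 3 items per turn is a conservative, illustrative choice.
    return [steps[i:i + max_items] for i in range(0, len(steps), max_items)]

advice = [
    "Find a quiet space", "Sit comfortably", "Close your eyes",
    "Breathe in for four counts", "Hold for four", "Breathe out for four",
    "Notice how you feel",
]
for turn in chunk_advice(advice):
    print("; ".join(turn))
```

In a real dialogue the assistant would also wait for confirmation after each turn before moving on, so the user sets the pace.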
3. MAKE THE FIRST MOVE
Voice technology that relies on the user to give the first command poses difficulties for people who may have low motivation and confidence. We suggest linking up with other sensors, such as Emotion AI, wearables and smart home tech to switch from a command and control relationship to one where the technology can look out for the user and make the first move.
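A "first move" trigger of this kind could combine signals from hypothetical wearable and usage sensors into a simple rule. Everything below, including the function, the 1.3x heart-rate multiplier and the 48-hour silence window, is a placeholder sketch, not a clinically validated threshold.

```python
def should_check_in(resting_hr, current_hr, hours_since_last_interaction):
    """Toy rule combining hypothetical wearable and usage signals so the
    assistant can make the first move. Thresholds are placeholders,
    not clinically validated values."""
    elevated = current_hr > resting_hr * 1.3
    withdrawn = hours_since_last_interaction > 48
    return elevated or withdrawn

# Elevated heart rate -> offer a gentle check-in
print(should_check_in(60, 85, 5))
# Normal heart rate but a long silence -> reach out
print(should_check_in(60, 62, 72))
```

A deployed system would weigh many more signals, and any proactive contact would need careful consent and privacy design, which leads directly to the guidelines below.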
Complementing these design principles with additional guidelines helps deliver a safe and secure environment for users interacting with the Voice UI.
Considerations to keep in mind:
Users sharing personal information with a Voice UI will want to understand how their information is going to be used.
Tell users how long their information will be stored and where the information will be saved. Let them know about security and encryption.
Maintain a clear distinction between technology and humans, and be transparent about which one the user is interacting with.