When to speak?

You often hear, when Voice is promoted as the new interface and medium: "Users don't have to learn anything new — they are already fluent speakers." That is true. However, we still need to adapt to voice technology and its conversational turn-taking capabilities.

When in a "conversation" with a smart speaker, users often start to speak when the device is not listening, or while the device is still speaking. When Google Assistant or Alexa hasn't registered what they said, users often think it's their own fault. This leaves them unsure of themselves, and their immediate reaction is to adjust the way they speak: their choice of words, tone of voice, loudness, articulation, and so on. These adjustments are not natural responses; they are adaptations to the limits of the technology. As a brand or company, this is not the result you want. You don't want users to feel uncertain, insecure, or frustrated about their conversational skills.

We're still waiting for voice technology to reach human-like conversational turn-taking. Until then, earcons — short, distinctive sounds — can help guide users through the experience, because they have the power to activate and orient users. Great sound design can drastically enhance conversational experiences and clarify turn-taking: when the device is listening, when it is thinking, and when it is done.
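In practice, a turn-taking cue can be as simple as mapping each dialogue state to one distinct sound that plays on every transition into that state. The sketch below illustrates the idea in Python; the state names and sound file names are invented for illustration, not taken from any particular voice platform.

```python
# A minimal sketch: one distinct earcon per turn-taking state.
# State names and .wav file names are hypothetical examples.
from enum import Enum, auto

class DialogState(Enum):
    LISTENING = auto()   # wake word heard; the user may speak now
    THINKING = auto()    # request is being processed; user should wait
    ERROR = auto()       # the device failed to understand
    DONE = auto()        # response finished; the floor is open again

# Consistent state-to-sound mapping, so users learn the cues quickly.
EARCONS = {
    DialogState.LISTENING: "rising_chime.wav",
    DialogState.THINKING: "soft_tick.wav",
    DialogState.ERROR: "low_buzz.wav",
    DialogState.DONE: "falling_chime.wav",
}

def earcon_for(state: DialogState) -> str:
    """Return the earcon file to play when the device enters `state`."""
    return EARCONS[state]
```

The key design choice is consistency: the same sound always marks the same transition, so users learn when it is their turn without having to watch the device.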

Written by: Phoebe Ohayon, Co-founder @ Voicebranding.ai