Why voice input is flawed as a primary interface input mode | by Chris Ashby | Jan, 2024


The r1 companion, recently released by Rabbit and Teenage Engineering, has sold out repeatedly, and it is one of the few devices challenging the established app-based conventions of interface design in smart devices.

Not only that, but the device's primary input method is voice, which for many feels like a natural evolution of how we interact with smart devices. It also pushes large language models in a new direction: focused, task-specific interactions that collapse multi-app workflows into simple, natural conversations.

But is voice really the right choice for the next generation of interface design? And can we truly adopt voice as a primary input mode across our smart devices?

I would argue that we aren’t there yet, and that the r1, along with devices like Humane’s AI Pin, will not be the product that makes voice mainstream, however successfully it challenges convention.

Maybe voice will never become the primary input mode.

Here are the three main flaws I foresee (or, depending on how optimistic you are, the three main hurdles we have to overcome)…

The unavoidable fact is that voice simply isn’t private unless you are alone in a soundproof room. And there are certain tasks that users will never want to do without both the privacy of silence and the privacy of not being seen.

I’m sure you can think of many things you use your phone for day-to-day that you wouldn’t want the general public knowing about — not for any seedy or creepy reason, but simply because nobody wants everyone around them knowing exactly what they’re browsing or doing, even when those people are close family or friends.
