A look at designing emotive AI. UX design ideas. | by Adrian Chan | Apr, 2024

Hume AI’s EVI chatbot

A sidebar on why gen AI design is social interaction design

We could debate whether AI should be designed with human social interaction and communication in mind, or with machine interaction in mind: commands, instructions, and functionality.

To date, LLMs have excelled at coding, document analysis, and many of the functions and operations designed into software tools. In short, they've proven good at learning automation.

But user interaction with LLMs tends in a different direction: it leans into our social behaviors, our speech, talk, and conversation. LLMs want to read and participate in social platforms, to recommend products and services to us in conversation, to replace web search with a conversational interaction model.

The use of emotions in generative AI behavior settles any debate about whether human social interaction is a reasonable source of interaction design concepts and principles. If AI is to make progress in conversational user experience, its design will, of course, draw on concepts from actual human social interaction.

Clearly, we integrate emotions into chat agents and robots because it humanizes them. Clearly, we do this to make them more expressive as characters or entities, and also to make them more meaningful. Emotions add meaning.

The issue is what meaning to add, when, how much of it, and in response to which incoming signals.

Emotions can express an internal state — mental state, or feelings — in which case the emotions expressed by an agent would be simply “how it’s feeling.” This would be the case of an agent, alone and engaged in its own thoughts. Since there’s really no such thing yet as agents being left alone, we can assume that the emotions expressed by an agent pertain to the run of interaction we’re engaged in with them. In this case, then, the emotions expressed by an agent are not merely signs of internal states, but are communicative. They’re meant to mean something. These kinds of emotions are cues: cues to pay attention, to interact, to speak, to listen, to agree or disagree…
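To make the cue idea concrete, here is a minimal sketch in TypeScript of how expressed emotions could be modeled as communicative cues rather than raw internal state. All names and values are illustrative assumptions for this article, not any vendor's actual API:

```typescript
// Hypothetical sketch: emotions as communicative cues, not just internal state.

// The interaction move a cue is meant to prompt in the user.
type InteractionCue =
  | "attend"        // pay attention: something important is coming
  | "yield-turn"    // I'm done speaking; your turn
  | "hold-turn"     // I'm still thinking or speaking; please listen
  | "invite"        // I'd like your input or agreement
  | "signal-doubt"; // I'm uncertain; feel free to push back

// An expressed emotion pairs a display (tone, face, phrasing) with the
// cue it is intended to communicate about the interaction itself.
interface ExpressedEmotion {
  display: "warm" | "curious" | "hesitant" | "emphatic" | "neutral";
  cue: InteractionCue;
  intensity: number; // 0..1, how strongly to render the display
}
```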

In human social interaction, emotions expressed are both an expression of a participant’s feelings and also a reflection on the state of the interaction. In the case of verbal communication, this means two things: the state of the information content being communicated, and the feelings participants have towards each other about the state of the communication. This is essential: emotions displayed by participants in a conversation (in our case a chat agent and a human user) are meta-communication. Emotions are meaningful information about the conversation.
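Read in design terms, every agent turn then carries two channels: the content itself, and meta-communication about that content and about the participants. A hedged sketch of such a message shape, again with purely illustrative names:

```typescript
// Sketch: an agent turn as content plus meta-communication.
// The affect layer comments on the state of the conversation,
// not just on how the agent "feels."
interface AgentTurn {
  content: string; // the information being communicated
  affect: {
    aboutContent: "confident" | "tentative" | "correcting"; // state of the information
    aboutUser: "encouraging" | "apologetic" | "neutral";    // stance toward the participant
  };
}

const turn: AgentTurn = {
  content: "Your flight was rebooked to the 9:40 departure.",
  affect: { aboutContent: "confident", aboutUser: "apologetic" },
};
```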

Emotions exist, in part, to help align and coordinate the behaviors and disposition of participants when the information content alone is not sufficient. Emotions are an additional layer of meaning, so to speak, available to us in social situations for the purpose of resolving ambiguities.
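As a sketch of that disambiguating role, reusing the ExpressedEmotion type from the earlier snippet: when the agent's confidence in the content channel drops, the affect channel can do the extra coordinating work. The thresholds and mapping here are assumed for illustration only:

```typescript
// Sketch: using affect to resolve ambiguity the content alone can't.
// confidence: how sure the agent is it understood the user's intent (0..1).
function chooseResponseAffect(confidence: number): ExpressedEmotion {
  if (confidence > 0.8) {
    // Content is clear; affect can simply mark turn-taking.
    return { display: "neutral", cue: "yield-turn", intensity: 0.3 };
  }
  if (confidence > 0.5) {
    // Mild ambiguity: signal tentativeness so the user knows to confirm.
    return { display: "hesitant", cue: "signal-doubt", intensity: 0.6 };
  }
  // High ambiguity: invite the user to restate or elaborate.
  return { display: "curious", cue: "invite", intensity: 0.8 };
}
```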

So you can see both the opportunity and the risk in designing emotions. If you're going to add them to an agent, they had better mean something. Or at least do a very good job of pretending to.
