Emerging UX patterns in Generative AI experiences
By Ryan Tang | March 2024

In the image above, inpainting, threaded conversations, and highlighting are all emergent examples of interactions that let users curate specific parts of the information to create more relevant context and get better outcomes.

Take writing a well-researched report as another example. A user’s journey often begins with broad research, leading to the discovery of key points that warrant deeper investigation. As they gather and assess information, they gradually compile and synthesize it into their final piece. In this process, moments of highlighting or selecting specific content act as crucial anchors, guiding the AI to deliver more pertinent results and context. This path requires ways for users to both save and consume highlights.

Users need to save specific highlights and then use those highlights to refine their experience. Supporting this requires a deep understanding of the outcomes users are after, along with feedback mechanisms that capture these curation signals.
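As a minimal sketch of what such a feedback mechanism might look like, the example below stores explicit highlights and folds them into the next prompt as priority context. The Highlight structure, the build_context helper, and the prompt wording are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Highlight:
    """A snippet the user explicitly marked as relevant, plus where it came from."""
    text: str
    source: str     # e.g. a document title or URL
    note: str = ""  # optional user annotation ("verify this", "key stat", ...)

def build_context(question: str, highlights: List[Highlight]) -> str:
    """Fold saved highlights into the prompt so the model treats them as priority context."""
    curated = "\n".join(
        f'- "{h.text}" (from {h.source})' + (f" (user note: {h.note})" if h.note else "")
        for h in highlights
    )
    return (
        "The user has highlighted the following passages as especially relevant:\n"
        f"{curated}\n\n"
        f"Using these highlights as primary context, answer: {question}"
    )
```

The important part is not the wording but that user-curated snippets are treated as first-class context rather than lost in the chat scroll.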

User curation reveals that for generative AI to effectively support complex creative tasks, it must not only understand but also anticipate the nuanced ways users interact with information. By recognizing and responding to these ‘curation signals,’ AI tools can offer more targeted assistance, enriching the overall user experience and outcome.

Designing for Just Enough Trust

While generative AI has made interacting with technology easier for users, trust remains a significant barrier to widespread adoption. This has been true in the past and remains true today. Addressing trust is key to building and encouraging the adoption of new AI tools.

Among the many frameworks for understanding how people accept and use new technology, two frameworks were particularly inspiring: the Unified Theory of Acceptance and Use of Technology (UTAUT) and Fogg’s Behavior Model (FBM).

As a useful oversimplification: UTAUT suggests that usage intention is influenced by performance expectancy, effort expectancy, social influence, and facilitating conditions. For example, someone might decide to start using a client management tool because they believe it will effectively help them achieve their sales goals (performance expectancy), they find the app straightforward and user-friendly (effort expectancy), their co-workers and mentors also use and recommend it (social influence), and their organization's database is accessible through it (facilitating conditions).

A parallel theory, FBM, simplifies behavior into a function of motivation, ability, and a prompt (or trigger). For example, the act of buying coffee is driven by the desire for caffeine, the presence of money and a nearby coffee shop, and the coffee shop sign serving as a prompt.

Generative AI reduces the perceived effort required to achieve outcomes. Anecdotally, many users have overcome activation inertia with generative AI. However, ensuring more users try it and stay engaged is where trust plays a crucial role.

In the context of designing for trust, there are many perspectives and frameworks like the ones mentioned above. Here we will further simplify and think about trust as being shaped by: previous experiences, risk tolerance, interaction consistency, and social context.

Previous Experiences: We must recognize that users have baggage. They arrive with expectations shaped by previous experiences. To influence this foundation of trust, we simply need to not re-invent the wheel. Familiar interfaces and interactions allow users to transfer the trust of the past into the present, and it is much easier to build on this foundation than to work against it. For example, in the context of conversational AI, rather than telling a user to input a prompt, we can leverage the subconscious tendency to mirror in conversation, using the system's responses to shape the way users interact.

Risk Tolerance: Understand that users want to avoid negative outcomes. The key is understanding which risks users will not take; we must bring risk below the user's risk tolerance. Methods to influence perceived risk include increasing transparency, user control, user consent, and compliance. Polished experiences can also leverage the aesthetic-usability effect to lower perceived risk, but product-specific approaches will always be more effective. As an example, imagine a conversational AI providing diagnostic support to doctors. The risk tolerance is very low: a misdiagnosis would be extremely consequential for both doctors and patients. Ensuring output transparency with references, prompt breakdowns, and conflicting perspectives would be effective in reducing the perceived risk.
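As a hedged sketch of what that output transparency could look like at the data level, the example below packages an answer together with its references, the interpreted prompt, and any conflicting perspectives, so the interface can show the evidence instead of implying it. The class and field names are assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reference:
    title: str
    excerpt: str  # the passage a claim is grounded in

@dataclass
class TransparentAnswer:
    """One way to package a high-stakes response so the user can audit it."""
    answer: str
    prompt_breakdown: str = ""                                  # how the request was interpreted
    references: List[Reference] = field(default_factory=list)  # sources behind the claims
    conflicting_views: List[str] = field(default_factory=list) # where the evidence disagrees

def render(a: TransparentAnswer) -> str:
    """Render the answer with its supporting evidence visible rather than implied."""
    refs = "\n".join(f'  [{i + 1}] {r.title}: "{r.excerpt}"' for i, r in enumerate(a.references))
    conflicts = "\n".join(f"  - {c}" for c in a.conflicting_views) or "  (none surfaced)"
    return (
        f"Interpreted request: {a.prompt_breakdown}\n\n"
        f"{a.answer}\n\n"
        f"References:\n{refs}\n\n"
        f"Conflicting perspectives:\n{conflicts}"
    )
```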

Interaction Consistency: Interaction is both the output and the way a user arrives there. Users shouldn't have to wonder whether different words, situations, or actions mean the same thing. To improve interaction consistency, ensure that internal and external consistency is maintained, from the layouts down to the button text. In the context of a conversational AI, interaction consistency may look like responses having similar formats and words keeping the same meaning across the conversation. If a user requests a summary of a topic, it should not look like an essay in one interaction and a bullet list in another, unless the user specifically asks.
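One way to keep that kind of consistency is to treat the response format as shared configuration rather than something each reply improvises. The sketch below is illustrative only; FORMAT_SPECS and build_instruction are assumed names, and the override parameter preserves the escape hatch for users who explicitly ask for a different shape.

```python
# Reusable format specs so the same request type always produces the same shape of reply.
# FORMAT_SPECS and build_instruction are illustrative names, not an established API.
FORMAT_SPECS = {
    "summary": "Respond with a one-line title followed by 3-5 bullet points.",
    "comparison": "Respond as a list of options, each followed by its main trade-off.",
}

def build_instruction(task_type: str, user_request: str, user_override: str = "") -> str:
    """Apply one consistent format per task type unless the user explicitly asks otherwise."""
    spec = user_override or FORMAT_SPECS.get(task_type, "Respond in plain prose.")
    return f"{spec}\n\nUser request: {user_request}"
```

Calling build_instruction("summary", ...) then yields the same bullet-list shape every time, while a non-empty user_override lets the user change it deliberately.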

Social Context: Potentially the most visible layer. Social context can include endorsements from trusted sources, like a manager, or facilitation within a trusted network, like being connected to pre-approved enterprise software. It can be influenced through social-proofing strategies and by creating social-proofing opportunities within the interaction itself. In the context of an LLM for internal databases, this may mean highlighting work done by the user and their direct team. Pointing out that the system has visibility into internal data helps build trust that the system is approved within this social context.
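A small sketch of how that social proofing might surface in a retrieval layer: tag each result with its social proximity to the user so the interface can say "from your team" rather than presenting every source anonymously. The author field and the provenance labels here are assumptions for illustration.

```python
from typing import Dict, List

def rank_with_social_proximity(results: List[Dict], user: str, team: List[str]) -> List[Dict]:
    """Tag each retrieved item with its social proximity to the user and surface the
    closest items first. The 'author' metadata field is an assumption for illustration."""
    def proximity(item: Dict) -> int:
        author = item.get("author")
        if author == user:
            return 0  # the user's own work
        if author in team:
            return 1  # their direct team
        return 2      # the wider organization
    for item in results:
        item["provenance"] = ("you", "your team", "organization")[proximity(item)]
    return sorted(results, key=proximity)
```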
