Prompts should be designed — not engineered | by Alex Klein | Apr, 2024
This is how I see the difference between “engineering” and “designing” a prompt.
A wire prompt is engineered to complete a job — while a cloth prompt intuitively understands a user’s needs and is designed to provide a fluid and supportive experience for the user.
Anyone can access an API and prompt a foundation model to execute a task. Sophistication, in prompting, comes from deeply understanding the user — their attitudes and behaviors towards this experience — and integrating that knowledge into the prompt.
Cloth “mother” (left), Wire “mother” (right)
This is even more important than with traditional software interactions because of the open-ended nature of conversational experiences. As designer Emily Campbell notes, “the more autonomy that users have to direct computers to their personal whim, the better they will expect computers to understand them and anticipate their needs.”
Logan Kilpatrick, head of developer relations for OpenAI, boils it down to “context.” The model needs context to perform a specific task well. This makes prompting an “inherently human” task because it requires that we understand the nuances of what people are trying to achieve.
At risk of stating the obvious: this. is. what. we. do.
Without design, the next wave of AI applications will be brittle little things that waste an organization’s precious AI resources.
There’s a fundamental difference between traditional software engineering and prompt engineering that is worth highlighting.
In traditional software engineering, we can pinpoint the lines of code necessary to produce the desired outcome. For example, we know what kind of function to use in order to validate a user’s credentials against a database.
In prompt engineering, we can’t predict how the model will respond to the prompts we craft. Instead of direct programming, we’re steering the model towards the outcomes we aim for.
This is why an iterative prompt design process is required. In this process, we continuously design and refine, based on user and model insights. (I’ve written recently about the “two-sided user test” where a researcher studies the user and the model’s behaviors simultaneously.)
The discipline of prompt engineering is still in its infancy, with its best practices yet to be discovered. For instance, even OpenAI defines only three prompt categories in its playground: system, user, and assistant.
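Those three playground categories map directly onto the message roles in OpenAI's chat API. A minimal sketch of that structure (the copilot wording here is hypothetical):

```python
# The three roles from the playground, expressed as a chat message list.
# The system message frames the model; user and assistant messages record turns.
messages = [
    {"role": "system", "content": "You are a helpful clothing copilot."},
    {"role": "user", "content": "I need a dress for prom."},
    {"role": "assistant", "content": "Great! Tell me a bit about your style."},
]

# Each new turn is appended, so the model always sees the full
# conversation as its context.
messages.append({"role": "user", "content": "I like neutral colors."})
print(len(messages))
```

Everything the model knows about the conversation lives in this list, which is why each role's content deserves deliberate design.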
Personally, I use six categories to structure the prompt design process:
- Flow
- Role
- Mission
- Guidelines
- Examples
- Output parameters
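One way to picture how the six categories come together: each becomes a labeled section of a single system prompt. A sketch, with hypothetical content for the clothing copilot described below:

```python
# Hypothetical section contents for each of the six categories.
PROMPT_SECTIONS = {
    "Flow": "1. Identify the occasion. 2. Discern the user's style. 3. Present options.",
    "Role": "You are a friendly clothing copilot for a fashion retailer.",
    "Mission": "Help the user find an outfit they will love.",
    "Guidelines": "Use the exact wording the user provides; do not paraphrase.",
    "Examples": 'User: "I need a dress for prom." -> occasion: prom',
    "Output parameters": "Respond with a short, numbered list of outfit options.",
}

def build_system_prompt(sections):
    """Concatenate each labeled category into one system prompt."""
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())

print(build_system_prompt(PROMPT_SECTIONS))
```

Keeping the categories separate like this makes the iterative process easier: each user or model insight maps to one section to revise.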
I’m working on a prompt design guide that outlines each of these. You can subscribe for access when it’s complete. In this article, my goal is to bring the iterative nature of the prompt design process to life.
I’m not implying that engineering is unnecessary. For example, consider a ‘clothing copilot’ designed to recommend outfits to a fashion company’s customers. Creating an architecture to support this functionality demands significant engineering effort; however, the prompts within this architecture require meticulous design.
Flow
A designer’s first job is to map the steps that go into achieving the end goal. Conversational AI interactions are far more open-ended than standard software workflows, yet the copilot still needs to lead users through a logical sequence. Without this, the model and your users can become lost. Or as designer Paz Perez highlights, users “don’t know what to do or ask for when interacting with an AI-powered chat.”
The iterative prompt design process…
Let’s imagine that the designer initially outlined a flow consisting of three steps: 1) identify the occasion for the outfit, 2) discern the user’s style, and 3) present options to the user. However, during testing, the designer notices that users frequently provide extra context about the inspiration behind their outfit (e.g. “I always wear neutral colors, but I really want to start wearing more funky patterns”). Because of this, the designer recommends adding a fourth prompt to the flow, wherein the copilot probes into the user’s goal for the outfit.
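The flow revision can be sketched as an ordered list of steps. The step wording, and placing the new probing step before options are presented, are my assumptions:

```python
# The designer's initial three-step flow (wording is illustrative).
flow = [
    "identify the occasion for the outfit",
    "discern the user's style",
    "present options to the user",
]

# Testing reveals users volunteer their inspiration, so a probing step
# is added. One plausible position: just before options are presented.
flow.insert(2, "probe into the user's goal for the outfit")
print(flow)
```

Treating the flow as explicit, ordered data like this keeps each iteration cheap: an insight from testing becomes a one-line change.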
Examples
The most powerful method to guide a non-deterministic model towards desired outcomes is by providing examples of what you want the model to do. This approach, known as few-shot prompting, allows the model to mimic the examples provided.
The iterative prompt design process…
To help the model identify the occasion for the outfit, the designer creates a single example within the prompt, illustrating how the model should respond to a user’s input.
- User input: “I need a dress for prom.”
- Model response: occasion, prom
During testing, however, the designer notices a pattern: sometimes users mention a specific event (e.g. “my birthday”) and other times they mention a motive for buying (e.g. “I broke up with my boyfriend”). This insight leads the designer to develop two categories of examples for the prompt: one for event-based occasions and another for motivation-based reasons.
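The revised few-shot setup could look like the sketch below: two groups of examples, one per category. The extra examples and labels are hypothetical, invented to illustrate the pattern:

```python
# Few-shot examples split into the two categories the designer discovered.
# All wording beyond the article's own examples is illustrative.
FEW_SHOT_EXAMPLES = [
    # Event-based occasions
    {"user": "I need a dress for prom.", "label": "occasion: prom"},
    {"user": "I'm shopping for my birthday.", "label": "occasion: birthday"},
    # Motivation-based reasons
    {"user": "I broke up with my boyfriend.", "label": "motivation: fresh start"},
    {"user": "I just got a big promotion.", "label": "motivation: celebration"},
]

def format_examples(examples):
    """Render the examples as the prompt text the model will mimic."""
    return "\n".join(f'User: "{e["user"]}"\nModel: {e["label"]}' for e in examples)

print(format_examples(FEW_SHOT_EXAMPLES))
```

Covering both categories in the examples steers the model to handle whichever kind of input the user happens to give.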
Guidelines
Examples are a powerful tool, but occasionally, the model requires extra direction. This is where guidelines become useful. A guideline helps further refine the model’s behavior, more explicitly than an example alone.
The iterative prompt design process…
In testing, the designer notices that the model is paraphrasing a user’s response and losing important detail.
For example:
- User input: “I need a dress for prom.”
- Model response: occasion, the user needs a dress for a dance
The designer adds a guideline to the prompt that explicitly specifies: “use the exact wording that the user provides; try not to paraphrase their response.” This guideline can provide additional context to shape the outcome.
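In prompt terms, the guideline is simply appended to the instruction text. A sketch (the base prompt wording is my assumption, the guideline is the article's):

```python
# Hypothetical base instruction for the occasion-identification step.
base_prompt = "Identify the occasion for the user's outfit."

# The guideline the designer adds after observing paraphrasing in testing.
guideline = (
    "Use the exact wording that the user provides; "
    "try not to paraphrase their response."
)

system_prompt = f"{base_prompt}\n\nGuidelines:\n- {guideline}"
print(system_prompt)
```
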
I’m hoping these simple examples help highlight how crucial designers and the iterative design process are in creating AI experiences that are usable and valuable.
Let’s face it: no one is going to sit around waiting for designers to prove their value in the AI era, but companies need design more than ever.
When imposter syndrome strikes, just remember, nobody has all the answers. We’re all trying to figure it out as we go, together.
This article was originally published in Empathy & AI; follow for more human-centered AI content or reach out on LinkedIn.