Generative AI and the future of problem-solving


How thinking machines will change the way innovators work.

Illustrative image of winding abstract paths. Created by the author using Dall-e.

A recent study conducted by Fabrizio Dell’Acqua and Ethan Mollick in partnership with BCG found that consultants using AI performed significantly better on a number of common tasks than those without access. For lower performers, the jump in performance, measured by the time it takes to complete tasks, the number of tasks completed overall and the quality of the outputs, was a stunning 43%.

For those who were not yet convinced of the power of generative AI, this study should provide the final proof: AI will radically transform problem-solving disciplines as we know them. And yet, despite all the powerful tools at our disposal, from OpenAI’s latest ChatGPT release to the thousands of specialized applications on the market, we have yet to see a true, end-to-end autonomous problem-solving tool. Many are talking about “autonomous innovation”. But what would autonomous innovation even look like? Can we really imagine machines that autonomously create and deploy original solutions to novel problems? And what elements are still missing before AIs can autonomously address and solve complex problems?

Problem solving and the benefits of gen-AI

Problem-solving disciplines require their practitioners to solve novel problems, at least some of which can’t simply be addressed by following an established procedure. They include professions like management consulting, strategy and design. And while these professions might sound very different from each other, they all involve similar steps:

● Framing the problem
● Modeling the problem — breaking the problem down into manageable components
● Collecting data — desk research, interviews, data analysis
● Ideating a solution
● Prototyping the solution
● Testing the solution and iterating

So far, these steps have mostly been done by humans. The gen-AI revolution now allows machines to take over a few of these stages, unlocking huge value for companies that adopt it. This new value essentially falls within three categories, or value levers: increased efficiency, increased variability and enablement of ubiquity. Let’s look at each of them:

Increasing efficiency and speed — This one is pretty straightforward: gen-AI makes problem-solvers faster and better at their job, allowing them to get more work done in the same amount of time. This is evident, for example, in:

  • Automating data collection, clustering, etc.
  • Prototyping with generative AI tools, e.g. for visualization or website design
  • Overall reduction of time to project delivery and personnel cost

Enabling more variations — This is quite radical. Right now, after identifying a number of possible solutions, problem solvers have to “converge”: prune the tree of ideas and select a few to move forward with. Gen-AI may eliminate the need to do this:

  • Eliminating path dependence on the problem formulations and frameworks adopted early in the process: every possible variation of an idea can be reasoned through to its ultimate consequences or even brought to market
  • Generating more options during the ideation phase: we all know that in brainstorming, quantity comes before quality. Now the quantity of ideas can increase exponentially.
  • Enabling the prototyping and testing of an arbitrary number of variations
  • Enabling infinite customization of solutions, to the point of potentially tailoring them for an audience of one.

Enabling expert ubiquity — Until now, problem solvers could only be in one place at a time. Generative AI promises to allow their expertise to be uploaded and “replicated” an unlimited number of times. And just as problem solvers can be replicated, so, for many use cases, can the target users of a given solution. This development allows, for example, for:

  • Facilitated data collection with synthetic subjects or experts
  • Disseminating problem-solving expertise and making it available on demand

A world in which generative AI’s three levers of value are fully exploited is one in which problem solving looks completely different. In terms of process, as we eliminate the need for convergence, our “double diamond” starts to look like a cone, projecting infinite variations of possible solutions from a single problem statement. In terms of scope, AI’s low-marginal-cost ubiquity will enable us to apply the problem-solving method to a host of new problems, large and small. Anyone will be able to deploy an AI against any ultra-specific problem, on demand.
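To make that “cone” of variations concrete, here is a minimal sketch of prompt-level mutation: the same ideation request is sent to a model many times, each instance injected with a different random constraint to force divergence. The call_llm helper, the base prompt and the mutation list are hypothetical placeholders for illustration, not a reference to any specific product or API.

```python
import random

# Hypothetical helper: wrap whatever LLM API you use (a hosted model, a local one, etc.)
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

BASE_PROMPT = "Propose a product idea that helps commuters reduce food waste."

# Random "mutations" injected into each instance to force divergence
MUTATIONS = [
    "Assume the budget is under $1,000.",
    "The solution must work offline.",
    "Target an audience of exactly one person.",
    "Borrow a mechanism from the airline industry.",
]

def generate_variations(n: int) -> list[str]:
    """Send n mutated copies of the same ideation request instead of converging early."""
    ideas = []
    for _ in range(n):
        mutated = f"{BASE_PROMPT}\nAdditional constraint: {random.choice(MUTATIONS)}"
        ideas.append(call_llm(mutated))
    return ideas
```

Each returned idea could then be prototyped or tested in parallel rather than being pruned at an early convergence step.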

There is no shortage of solutions on the market that enable companies to harvest at least some of this value. ChatGPT is a generalist tool that can be used for most of the points illustrated. Dall-e, Midjourney and the like can help rapidly prototype a whole range of products and interfaces. Then there is a host of specialized tools:

  • Some are meant to increase productivity, essentially enabling search and action triggering with natural language instead of a complicated query language or interface (an example in the product space is Tagado)
  • Some assist specific stages of the problem-solving process, like brainstorming (e.g. Stormz) or prototyping (e.g. Diagram)
  • Some enable the replication of individual minds, as users (e.g. Synthetic Users) for testing purposes, and potentially as problem-solvers (e.g. various bots created with the latest ChatGPT GPTs feature)
So what we have are piecemeal solutions that enhance human work in most parts of the problem-solving process. But most current applications are missing the opportunity to truly use gen-AI to the full extent of its problem-solving capabilities.

I think the gap between where we are now and end-to-end autonomous problem solving has to do with three critical challenges:

  1. Framing the problem
  2. Classifying the type of problem
  3. Adopting an agent concatenation approach

1. From prompt engineering to problem framing

Over the past few months, we have been hearing a lot about prompt engineering as a key skill for the future. But prompt engineering is, in a sense, only the cover of the actual book. As Oguz A. Acar pointed out in HBR, we should rather think of the more general issue of formulating a good problem statement as the most fundamental human input into AI-powered problem solving.

As Acar explains, this skill involves diagnosing the problem from the available evidence, decomposing it into sub-problems, reframing it in different terms and designing the constraints that will inform the solution. These four activities play such a large role in determining how the problem will be solved that anyone interested in applying AI to problem solving should design them intentionally into processes and interfaces. I don’t see this being done much at the moment.

A simple exercise designers do to decompose and reframe problems is challenge mapping: from a starting problem statement, a moderator repeatedly asks “what is stopping us” to break down the problem into sub-problems, and “why does this need to be solved” to zoom out and focus on more strategic-level questions.

At least a couple of ways come to mind in which an AI agent, or its interaction with a human user, could reproduce this kind of exercise:

  • With some kind of template that forces us to tell the machine what it needs to be told — context, level of analysis, known constraints on the problem space, etc. This approach is shown to some extent, for example, by Board of Innovation’s strategy advice bot.
  • Alternatively, the agent could help us run a challenge mapping exercise by asking us questions about the problem as we set it up. This approach is present, for example, in MIT’s Ideator, and is sketched in the snippet below.
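As a rough illustration of that second approach, here is a minimal sketch of a challenge-mapping loop in which an agent alternates the two classic questions and records the resulting tree of sub-problems and reframings. The ask_user helper and the recursion depth are hypothetical choices made for illustration; this is not how MIT’s Ideator or any other tool is actually implemented.

```python
# Hypothetical helper: in a real tool this could be a chat interface, or an LLM
# playing the role of the moderator.
def ask_user(question: str) -> str:
    return input(f"{question}\n> ")

def challenge_map(statement: str, depth: int = 2) -> dict:
    """Build a small tree of sub-problems (drill down) and strategic reframings (zoom out)."""
    node = {"statement": statement, "sub_problems": [], "reframings": []}
    if depth == 0:
        return node
    # Drill down: "What is stopping us?"
    blocker = ask_user(f"What is stopping us from solving: '{statement}'?")
    node["sub_problems"].append(challenge_map(blocker, depth - 1))
    # Zoom out: "Why does this need to be solved?"
    node["reframings"].append(ask_user(f"Why does '{statement}' need to be solved?"))
    return node
```

Each node of the resulting map is itself a candidate problem statement that an AI (or a human) could pick up and solve, which is exactly where the choice of framing starts shaping the solution space.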
An illustration of how the selection of the right problem statement ends up affecting the solution space. Author’s creation

Read more about creating good problem statements in my Modeling in Problem Solving series

2. Classifying problem types

Not all problems are equal. Think about the difference between optimizing a manufacturing process, setting the strategy for a firm and reacting to a sudden crisis like COVID-19.

Depending on factors like time constraints, availability of information and the presence of feedback loops in the problem system, the right approach may be to apply an existing best practice, to rapidly test a number of solutions, or to go all in with one and hope to learn more about the environment in the process. These different categories of problems — simple, complicated, complex and chaotic — were described by Dave Snowden in his seminal Cynefin framework.

An autonomous innovation process should be able to differentiate between problem types and take the right action, or suggest the right path, accordingly. This can, to some extent, be simulated with something like ChatGPT, but it doesn’t seem to have been incorporated into most available tools. Future AI problem-solving tools will likely either be specialized in one type of problem — simple, complicated, complex or chaotic — or have hard-coded if-then conditions that direct the process to the right set of steps.
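As a hedged sketch of what such hard-coded routing could look like, the snippet below asks a model to place a problem statement in one of the four Cynefin domains and then dispatches to a domain-appropriate playbook. The call_llm helper and the playbook text are hypothetical placeholders; a real tool would need far more robust classification than a single prompt.

```python
# Hypothetical helper: call_llm sends a prompt to whatever model you use and returns text.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

DOMAINS = ("simple", "complicated", "complex", "chaotic")

def classify_problem(statement: str) -> str:
    """Ask the model to place the problem in one of the four Cynefin domains."""
    answer = call_llm(
        "Classify the following problem as exactly one of "
        f"{', '.join(DOMAINS)} (Cynefin framework). Problem: {statement}"
    ).strip().lower()
    return answer if answer in DOMAINS else "complex"  # default to probing when unsure

def route(statement: str) -> str:
    # Hard-coded if-then routing to a domain-appropriate approach
    playbook = {
        "simple": "Apply the established best practice.",
        "complicated": "Bring in expert analysis, then decide.",
        "complex": "Run several cheap probes in parallel and amplify what works.",
        "chaotic": "Act immediately to stabilize, then reassess.",
    }
    return playbook[classify_problem(statement)]
```

The interesting design question is the fallback branch: defaulting to “complex” (probe, sense, respond) is one conservative choice when the classification is uncertain.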

The Cynefin framework. Source: Wikipedia

3. Agent concatenation approach

The shift to autonomous problem solving and innovation requires moving from the currently predominant paradigm of AI as a copilot to thinking of AI as a set of concatenated agents. The first key difference is that agents are, of course, more independent of human input: they have their own sensors, which trigger them to act given a set of conditions, and actuators, i.e. ways to intervene in the world. OpenAI itself seems to have tilted a bit this way with the latest ChatGPT release (Nov 2023), which includes the ability to build something like specialized AI agents. These are currently fairly limited in terms of what they can sense (mainly your chat input and webpages) and how they can act (mainly triggering actions in various pieces of software), but we can expect many more capabilities on this front soon (what about live camera sensing, 3D printing something, triggering IoT workflows, and so on?).

The second aspect of the difference is concatenation. Problem solving is typically a collective effort that involves multiple people playing different roles, sometimes based on their skills, sometimes on personality, sometimes deliberately, almost as roleplay (think of the famous Six Thinking Hats ideation technique). This is essential for problem-solving success: different human agents optimizing for slightly divergent preferred outcomes ensures that more robust and innovative solutions are developed.

This mechanism could be emulated by AI. Rather than chatting with a single agent, our interaction with AI could look more like witnessing the innovation process unfold among different AI agents. Some of this interplay could be sequential (think: the ideation bot feeds its output forward to the idea selection bot, which hands it over to the prototyping bot), while some could be parallel: for example, several bots could ideate or prototype based on a single prompt that gets procedurally “injected” with a random mutation in each instance, enabling divergence. Or one bot could systematically critique the content another bot is generating, while a third bot synthesizes their results. Microsoft AutoGen seems to be a step in this direction.
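Here is a minimal sketch of what sequential concatenation plus a critic could look like, assuming a generic call_llm helper rather than any specific framework such as AutoGen; the role descriptions are hypothetical and exist only to illustrate the pattern.

```python
# Hypothetical helper: call_llm wraps whatever model API you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def agent(role: str):
    """Create a single-purpose agent as a closure over a role description."""
    def act(task: str) -> str:
        return call_llm(f"You are the {role}. {task}")
    return act

ideator = agent("ideation bot: propose three distinct solutions")
critic = agent("critic bot: point out the weakest assumption in each proposal")
synthesizer = agent("synthesis bot: merge the strongest elements into one recommendation")

def solve(problem: str) -> str:
    # Sequential concatenation: each agent's output feeds the next agent's input
    ideas = ideator(f"Problem: {problem}")
    critique = critic(f"Proposals:\n{ideas}")
    return synthesizer(f"Proposals:\n{ideas}\n\nCritique:\n{critique}")
```

The same pattern extends to parallel branches: several ideator instances could run on mutated prompts (as in the earlier sketch) before the critic and synthesis steps converge their outputs.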
