AI Tasks: Context and Open-Endedness

How do you know a task is a good candidate for AI automation?

The most accurate way to answer that is to go and build the AI tool. But let’s assume we don’t want to jump headfirst into it, because there’s time and money at stake.

We want to weed out those tasks where we wouldn’t expect current AI to have a fighting chance. For that, we can draw up a framework that looks at how well-defined the task’s outcome is and how much it depends on an overall context, leading us to a 2x2 matrix. (Can’t do consulting without sprinkling those matrices around every once in a while, after all.)

Let’s go through the four quadrants one by one.

Simple Automation

Tasks that do not require a lot of context and have narrowly defined success criteria are good candidates for simple automation. They might not even need any machine learning or AI. Or, if they do, they will be straightforward to implement, with AI engineering mainly focused on finding the right prompt and processing the model’s output.

Examples

  • Summarizing a news article

  • Small-scale code refactorings (“Change this Python dictionary into a dataclass”)
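
For illustration, a task in this quadrant can often be handled with a single prompt call. Here is a minimal sketch for the article-summarization example, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are placeholders, not recommendations.

```python
# Minimal sketch of "simple automation": one prompt in, one processed output back.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize_article(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the article in three sentences."},
            {"role": "user", "content": article_text},
        ],
    )
    # "Processing the output" here is just extracting the text.
    return response.choices[0].message.content.strip()
```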

Precision Automation

Some tasks have a clearly and narrowly defined outcome but require a lot of context, and that context might be implicit rather than easily passed to the AI tool. To handle such tasks with AI, you need a way to provide the appropriate context. That means a lot of data engineering behind the scenes and, before any work on the actual tool can begin, “downloading” the implicit knowledge from the subject matter experts. This is also where retrieval-based techniques (basically, intelligent search for source material) play an important role.

Examples

  • Reviewing a legal document and flagging problematic clauses. What counts as problematic depends on the context, but once that context is defined, it’s a narrowly defined task.

  • Implementing a straightforward feature in a codebase while adhering to the company’s coding guidelines and following their chosen practices.
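
The retrieval step mentioned above can be sketched in a few lines: embed the source material, find the passages most relevant to the task, and put them into the prompt as context. This is a bare-bones illustration, assuming the OpenAI embeddings endpoint and plain cosine similarity; a real system would add chunking, metadata filters, and evaluation, and the model names are placeholders.

```python
# Bare-bones retrieval sketch: find the passages most relevant to a question
# (e.g., coding guidelines or contract clauses) and hand them to the model as
# context. Assumes the OpenAI Python SDK; model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def retrieve(query: str, passages: list[str], top_k: int = 3) -> list[str]:
    passage_vecs = embed(passages)
    query_vec = embed([query])[0]
    # Cosine similarity between the query and every passage.
    scores = passage_vecs @ query_vec / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [passages[i] for i in np.argsort(scores)[::-1][:top_k]]

def answer_with_context(question: str, passages: list[str]) -> str:
    context = "\n\n".join(retrieve(question, passages))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```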

Creative Exploration

Moving on to the two “high open-endedness” quadrants, let’s first define what we mean by that. We define open-endedness as an inability to state a universally accepted definition of done. Or, in short, you can’t tell in advance what a good solution to an open-ended task looks like, but you’ll know it when you see it. With a narrow task, you could let the tool judge whether the task was completed. With an open-ended task, you have to be the final judge.

If the task requires such open-endedness but does not require much context, there’s a good chance existing off-the-shelf generative AI tools are just what you need: ChatGPT, Claude, and co. for text; Midjourney for images; Runway for videos; and countless more for bespoke requirements (company logos, marketing copy, etc.).

Example

  • Creating a visual to go with your blog post. Context dependence is low (paste in the blog post), but you must iterate over several variations to find something you like.

Guided Intelligence

We’ve left the hardest nut to crack for last: a highly open-ended task that also requires intimate knowledge of your unique context. It combines the challenges of all the previous quadrants. We need lots of prep work to get the right data (i.e., context) into the system, and we need intuitive interfaces that let you iterate and refine seamlessly so you can steer the exploration in a fruitful direction.

Example

  • Generating marketing copy that takes brand voice, target audience, corporate strategy, and legal requirements into consideration.
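
As a rough sketch of what “iterate and refine” can mean here: keep the context pulled from your own sources (brand voice, legal requirements) fixed, generate a draft, and feed human feedback back into the conversation until the human is satisfied. The names below are illustrative, not a prescribed architecture, and the model name is a placeholder.

```python
# Rough sketch of a guided-intelligence loop: fixed context from your own
# sources, plus a human in the loop who steers each iteration.
# Assumes the OpenAI Python SDK; the model name and brand_context are placeholders.
from openai import OpenAI

client = OpenAI()

def refine_copy(brief: str, brand_context: str) -> str:
    messages = [
        {"role": "system", "content": f"Write marketing copy. Follow these rules:\n{brand_context}"},
        {"role": "user", "content": brief},
    ]
    while True:
        draft = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        print(draft)
        feedback = input("Feedback (leave empty to accept): ").strip()
        if not feedback:
            return draft  # the human, not the tool, decides when it's done
        # Feed the critique back in and generate the next variation.
        messages += [
            {"role": "assistant", "content": draft},
            {"role": "user", "content": feedback},
        ]
```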

Why it matters

You’ll want to know which kind of task you’re dealing with before deciding what AI system (if any) to build. For example, if you try to develop a “fire-and-forget” system for a highly open-ended task, you’ll waste a lot of time hunting for that one magical prompt that gets the AI to produce the perfect outcome.

Pick the simplest approach for the problem you have, but not simpler.
