The question that determines whether an AI project succeeds

Date: May 11, 2026
Topics: AI & Tech
Contributor: Anastasia Gritsenko

There’s a pattern that comes up in almost every early-stage AI conversation we have with organisations. By the time we’re in the room, a tool has usually been identified. Sometimes a licence has already been purchased. The thing that hasn’t been defined yet is the problem.

This tends to be the real source of difficulty. AI amplifies the problem it’s given. A precise, well-scoped brief produces something genuinely useful. A vague one produces output that looks plausible, gets presented in a demo, and then quietly gets shelved three months later because nobody can connect it to anything that actually matters in the business.

The question that determines whether an AI project succeeds is the one that should come first, before any tool discussion: what problem are we actually trying to solve?

What a false good problem looks like

There’s a category of brief that sounds reasonable but isn’t specific enough to build anything on. “We want to use AI to improve our processes.” “We want to automate repetitive tasks.” “We want to be more data-driven.” These feel like problem statements — they carry real frustration, they reference genuine business pressure, and they’re written by intelligent people who care about improving how their organisations work.

The gap is that they describe a direction, not a problem. They tell you where someone wants to go without identifying what’s broken, who it affects, or what it costs the business today.

The research reflects how widespread this is. A Gallup survey from late 2024 found that only 15% of US employees report their workplace has communicated a clear AI strategy. MIT’s GenAI Divide study found that AI success consistently comes when organisations tackle one specific pain point at a time rather than pursuing broad, unfocused rollouts. And Gartner predicts that 30% of generative AI projects will be abandoned after proof of concept, with unclear business value cited as a primary cause.

The system contributes to this. Tool vendors don’t ask for a problem definition before selling a licence, and the pressure to “do something with AI” arrives faster than the organisational clarity needed to do it well. Managers aren’t getting this wrong through carelessness; the step that matters most has been largely removed from the buying process.

The formula that changes the question

The methodology we use to move from a vague intention to a workable brief comes from our AI in Practice workbook, and it’s built around one prompt:

“How could we [action] in order to [outcome] while [constraint]?”

The structure is simple by design. Each clause does specific work. The action names what needs to change. The outcome connects it to a measurable result. The constraint, the “while”, is where most of the value lives, because it forces the trade-off into the open before anyone has touched a tool.

The difference between a weak brief and a strong one becomes clear quickly. A weak brief sounds like: “How could we use AI to improve our procurement process?” It has a direction. It names a function. There is nothing in it that tells you what’s broken, what success looks like, or what the solution has to work within.

A strong brief sounds like: “How could we guide vendors through the onboarding request process, in order to reduce first-time-right errors and shorten lead times, while working within our existing SharePoint infrastructure and without requiring vendors to install new software?”

That second brief is, in essence, the problem definition behind Harry, the AI-powered procurement assistant we built for HEINEKEN. The original process was fragmented across complex Excel forms with no validation, driven by requestors who were often unaware of the steps involved, producing low first-time-right rates and delayed deliveries. The problem was real, observable, and expensive. The constraint, working within existing infrastructure, shaped the entire technical direction. Harry didn’t emerge from a technology conversation. It emerged from a problem that had been defined precisely enough to build something against.

The “while” clause is what most vague briefs are missing. Without a stated constraint, any solution is technically valid, which means no solution can be properly evaluated. With one, the scope becomes real.

The full methodology, with worked examples across different business functions, is in Chapter 5 of the AI in Practice workbook, free to download and built for managers working through this before any vendor conversation begins.

The four questions worth asking before going further

Once a brief exists in the formula structure, there’s a quick check worth running before taking it further. These are the questions that tell you whether the problem is real enough to build on.

  1. Is it linked to an observable situation in daily operations? 

Something that actually happens, not something that might happen or could theoretically be improved. If you can’t point to a specific moment in the working week when this problem shows up, the brief needs more grounding.

  2. Does it happen often enough to justify the investment? 

A problem that surfaces twice a year is a different kind of problem from one that creates friction every day. Frequency is a proxy for impact, and impact is what makes a business case.

  3. Does it cost something measurable? 

Time, money, energy, or quality. If the cost is hard to articulate, the return will be equally hard to demonstrate, and equally hard to defend when someone asks whether the project was worth it.

  4. Does it affect more than one person? 

Problems that live in a single person’s workflow tend to produce solutions that die when that person leaves or changes role. Broader impact means broader ownership, which matters for adoption as much as for the build.

When the answer to any of these is unclear, the brief needs more work. That’s useful information. It’s faster and cheaper to find it now than six months into a build.

McKinsey’s State of AI research confirms the underlying principle: organisations that redesign workflows before selecting tools are twice as likely to report significant financial returns from AI. The sequence matters as much as the solution.

The work we did with Crisis Cognition illustrates how a well-formed constraint shapes a project from the start. The problem was clear: field teams lose decision support when connectivity fails, and in crisis response, that’s not a minor inconvenience. The constraint, “while operating without internet access, on low-power hardware, without requiring technical setup in the field”, shaped every technical decision that followed. It was the definition that made the project specific enough to build.

What happens when the problem is right

A well-defined problem makes the tool conversation significantly simpler. The requirements are already embedded in the brief. The success criteria exist before the build begins. The people who need to use the solution have contributed to defining it, which changes how they receive it.

It also occasionally reveals that AI is not the right answer — that a process redesign, a better workflow, or a different kind of investment would create more value. We tell clients this when we see it. It’s not a comfortable thing to say early in a relationship, but it tends to build more trust than any demo.

Where to take this from here

Problem definition is the first thing we work through with every organisation we partner with. It regularly changes the direction of a project. Occasionally it changes the project entirely.

The formula and the four questions above are drawn from our AI in Practice workbook, which maps this process with guided prompts for managers working through it with their teams. It’s free to download.

If you’d like to work through problem definition with your team before any tool decision is made, our AI strategists can help you get from a vague intention to a scoped, ready-to-run use case. Get in touch to find out how we work.

Download the free workbook →


Anastasia Gritsenko

Anastasia is our head of UX and Design. She was born into a family of designers, so you could say that creativity is quite literally in her blood. During her free time, she enjoys reading everything from sci-fi and fantasy novels to the latest on UX and design.

Working Machines

An executive’s guide to AI and Intelligent Automation
