How we built an AI guide for managers (and why implementation doesn't start with tools)

Date
April 13, 2026
Hot topics 🔥
AI & Tech, Entrepreneurship, How-to Guides
Contributor
Paula Ferrai

The most common question we hear from managers who are ready to adopt AI: “Should we go with ChatGPT or Copilot?”

It’s an easy place to start, because the conversation about AI in business has been almost entirely tool-led. Most managers begin their AI journey by comparing products rather than defining problems.

The data reflects this. According to McKinsey’s State of AI research, despite 88% of enterprises now using AI in at least one function, over 80% report no meaningful impact on EBIT. The gap between adoption and value is wide, and it opens at the very first step: reaching for a tool before articulating a problem worth solving.

At WeAreBrain, we kept seeing this pattern in our workshops and client engagements: motivated managers with genuine ambition, stalled because they'd skipped the foundational work. So we developed a structured process to address it. Here's what it looks like, and why tools don't appear until much later.

The cost of starting in the wrong place

McKinsey research consistently shows that organisations which redesign their workflows before selecting AI tools are twice as likely to report significant financial returns. Most teams approach it the other way around: they buy tools first, then try to construct a use case around them. The result is a cycle of promising pilots and disappointing follow-through, with teams holding subscriptions they can't fully justify.

The framing managers most often encounter is a procurement one, when the decision that actually matters is strategic. AI is a capability, and like any capability, it creates value when pointed at something specific. Most of the work of “implementing AI” happens before a single product is evaluated.

A 2025 McKinsey workplace report identified leadership, not technology, as the biggest barrier to AI success in organisations. The managers who make genuine progress are those who take ownership of problem definition, rather than delegating it to IT or waiting for a vendor to lead the conversation.

Start with the problem, not the platform

The first thing we hand managers in our workshops is a sentence structure: the problem-definition methodology from our AI in Practice workbook.

It's built around a single prompt:

“How could we [action] in order to [outcome] while [constraint]?”

The structure is intentionally simple. The power sits in the third clause. The “while” forces managers to name the constraint they’re unwilling to compromise on, which immediately separates vague AI ambitions from use cases worth pursuing.
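If it helps to capture these statements somewhere structured, the formula maps naturally onto a small data structure. The sketch below is illustrative only, not code from the workbook, and every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    """One AI use case, expressed in the three-clause formula."""
    action: str      # what we could do
    outcome: str     # the result we want it to produce
    constraint: str  # the thing we're unwilling to compromise on

    def __str__(self) -> str:
        return (f"How could we {self.action} "
                f"in order to {self.outcome} "
                f"while {self.constraint}?")

# A hypothetical support-team example:
draft = ProblemStatement(
    action="triage inbound support tickets automatically",
    outcome="get first responses out within the hour",
    constraint="keeping a human reviewer on every refund decision",
)
print(draft)
```

If the constraint field is the hardest one to fill in, that's the formula doing its job.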

The methodology surfaces misalignment early. Teams that use it often discover that proposed solutions have been built on assumptions about the problem rather than the problem itself, and that’s worth knowing before any tool enters the conversation.

It also creates a measurable brief. Once a manager can complete it in one sitting, they have the foundation for everything that follows: what to measure, who to involve, and what constraints any solution needs to respect. The full methodology, with worked examples across different business functions, is in Chapter 5 of the AI in Practice workbook.

From problem to plan: the AI Canvas

A well-formed problem statement is the starting point. The next step is scoping the use case before any tools are evaluated. For this, we use the AI Canvas, a structured one-page framework from Chapter 10 of the workbook that covers the ten dimensions of a viable AI use case.

In practice, we group the canvas sections into three themes:

Theme       | What it covers
------------|---------------------------------------------------------------------------
Clarity     | What are we solving? For whom? What does success look like, and how will we know?
Feasibility | What data, people and processes do we need? What are the most likely failure points?
Impact      | How do we measure it? What changes operationally once this is running?
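For teams that prefer working from a checklist, the grouping can be written down as a simple structure. The sketch below covers the three themes only, not the workbook's ten-dimension canvas, and the questions paraphrase the table above:

```python
# The three canvas themes as guided-question checklists.
# Illustrative only: the workbook's full AI Canvas covers ten dimensions.
CANVAS_THEMES = {
    "clarity": [
        "What are we solving?",
        "For whom?",
        "What does success look like, and how will we know?",
    ],
    "feasibility": [
        "What data, people and processes do we need?",
        "What are the most likely failure points?",
    ],
    "impact": [
        "How do we measure it?",
        "What changes operationally once this is running?",
    ],
}
```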

The canvas works because it slows the process down in the right places. A Gartner survey on AI maturity found that identifying the right use case is a top barrier for 37% of organisations with low AI maturity. In our experience, the underlying problem isn't a shortage of ambition; it's the absence of a structure for translating ambition into something specific enough to act on.

A completed canvas also changes the internal conversation around AI investment. Rather than asking “should we use AI for this?”, teams start asking “what would need to be true for this to work?” — a far more productive question at the scoping stage. The workbook walks through each section with guided prompts for every theme, available to download here.

What changes when you use a framework

Managers who begin with tool comparisons tend to stay in evaluation mode for months, uncertain how to justify investment or move a pilot forward. Managers who begin with a problem statement and a completed canvas tend to move faster, because they’ve already answered the questions that typically stall implementation.

Harvard Business Review notes that most AI initiatives fail because organisations aren’t structured to sustain them, and the structural gap we see most often is the absence of a defined use case at the outset. A framework addresses this directly by creating clarity before commitment.

The output quality of any AI tool is proportional to the quality of thinking brought to it. A vague brief produces a generic output. A precise, well-scoped problem produces something genuinely useful. As I noted in a recent LinkedIn post on AI and authenticity, AI is most useful for organising and expanding your thinking, but only when you’ve done the thinking first. The same principle scales from individual content creation to organisation-wide adoption. AI amplifies the clarity you bring to it, and a framework is what builds that clarity.

Where to start this week

Three things are worth doing before evaluating another tool:

  1. Write a problem statement using the “How could we… in order to… while…” formula. If you can’t complete it in one sitting, the problem needs more definition before any solution is considered.
  2. Score your use case against the three canvas themes: clarity, feasibility and impact. A theme that's mostly blank is where the implementation risk sits (see the sketch after this list).
  3. Involve the people who own the workflow. The canvas only surfaces the right answers when the people doing the work have contributed to it.
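For the scoring step, a "mostly blank" check is simple enough to sketch in code. This continues the hypothetical CANVAS_THEMES structure above and is an illustration of the idea, not a tool we ship:

```python
def blank_themes(answers: dict[str, dict[str, str]]) -> list[str]:
    """Flag themes where fewer than half the guided questions are answered."""
    flagged = []
    for theme, questions in CANVAS_THEMES.items():
        filled = [q for q in questions if answers.get(theme, {}).get(q, "").strip()]
        if len(filled) * 2 < len(questions):
            flagged.append(theme)
    return flagged

# A team that has scoped the problem but not the operational side:
answers = {
    "clarity": {
        "What are we solving?": "Slow ticket triage",
        "For whom?": "Tier-1 support agents",
        "What does success look like, and how will we know?": "First response under an hour",
    },
}
print(blank_themes(answers))  # ['feasibility', 'impact']: where the risk sits
```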

The full guide, with prompts for both the problem-definition methodology and the AI Canvas, is in our AI in Practice workbook. It’s free to download and built for managers who want to move from “we should be doing something with AI” to a use case they can actually run.

If you’d like support working through this process with your team, our AI strategists can help managers go from problem definition to a scoped, ready-to-run use case. Get in touch to find out how we work.

Download the free workbook →


Paula Ferrai

Paula leads our Marketing & Communications team. She’s a brand strategy expert and is perpetually excited about connecting the dots. She loves scuba-diving, yoga, and having fun with her son.

Working Machines

An executive’s guide to AI and Intelligent Automation
