Before you build an AI agent, ask yourself these five questions

Date: March 23, 2026
Contributor: Mario Grunitz

We’ve run over 100 AI agent experiments at WeAreBrain. More hallucinations than a tired toddler. More failed builds than we’ll ever put in a case study. And somewhere in all that mess, a framework worth sharing.

Right now, every vendor has an AI agent. Every pitch deck leads with one. Decision-makers are committing budget before they’ve asked whether an agent is actually the right tool, and that’s where the money disappears.

This isn’t a piece about why agents are overhyped. They’re genuinely powerful in the right context. But before you spend a single euro building one, here are the five questions we ask every client first.

What is an AI agent, actually?

Worth being precise before we evaluate anything.

An AI agent is a system where a model dynamically directs its own process, choosing tools and deciding next steps based on what it finds, to complete a goal with minimal human intervention at each step.

That’s different from automation. Anthropic draws a clear line between workflows (tools and models following predefined code paths) and agents (the model deciding how to proceed). Most problems that look like agent problems are actually workflow problems. A well-built workflow is faster, cheaper, and easier to maintain.
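That line between the two can be sketched in a few lines of Python. Everything here is illustrative: the stub functions and the `pick_next_step` hook are assumptions for the sketch, not a real framework API.

```python
# Sketch of the workflow-vs-agent distinction (illustrative stubs, not a real API).

def fetch_ticket(ticket_id):
    # Stub data source standing in for a real ticketing system.
    return {"id": ticket_id, "text": "Invoice #42 is wrong", "topic": "billing"}

def classify(ticket):
    # Stub standing in for a model call that labels the ticket.
    return ticket["topic"]

def route(ticket, queue):
    return f"ticket {ticket['id']} -> {queue}"

# Workflow: the code path is fixed in advance. A model may fill in values
# (like the classification), but it never chooses the next step.
def workflow(ticket_id):
    ticket = fetch_ticket(ticket_id)
    topic = classify(ticket)
    queue = {"billing": "finance", "bug": "engineering"}.get(topic, "general")
    return route(ticket, queue)

# Agent: the model chooses which tool to call next based on what it has
# seen so far, looping until it decides the goal is met.
def agent(goal, tools, pick_next_step, max_steps=10):
    history = [("goal", goal)]
    for _ in range(max_steps):
        step = pick_next_step(history)  # the model decides; here a caller-supplied stub
        if step["tool"] == "finish":
            return step["result"]
        result = tools[step["tool"]](**step["args"])
        history.append((step["tool"], result))
    raise RuntimeError("agent did not converge within max_steps")
```

The workflow's control flow is fully known before the first call; the agent's only emerges at runtime, which is exactly what you pay for in inference cost and oversight.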

This matters because “AI agent” has become a catch-all label. Chatbots get rebranded as agents. RPA tools get called agentic. Gartner estimates only around 130 of the thousands of vendors claiming to offer agentic AI are building the real thing. Knowing what you’re actually buying matters.

Agent, automation, or human workflow?

Before picking an architecture, pick the right category. We use this table as a starting point in every scoping conversation.

| Factor | AI agent | Automation | Human workflow |
| --- | --- | --- | --- |
| Task variability | High, unpredictable | Low, repeatable | High, requires judgement |
| Steps required | Multi-step, dynamic | Predefined sequence | Flexible, context-driven |
| Error tolerance | Medium (needs oversight) | High (deterministic) | Low (high-stakes decisions) |
| Cost to run | Higher (inference costs) | Lower | Highest (staff time) |
| Best suited for | Research, triage, multi-system orchestration | Data transfers, notifications, processing | Client relationships, novel situations, complex negotiation |

Most operational problems sit in the middle column. If the process can be mapped in advance and the steps don’t change based on context, automation is the better choice, nearly every time.

The five questions

These came from years of client conversations and, more usefully, our own failed builds.

1. Is the task genuinely dynamic, or just complex?

Complex and dynamic sound similar. They’re not.

A complex task has many steps, but if those steps are always the same, in the same order, with the same logic, that’s an automation problem. A dynamic task changes shape depending on context. It requires the system to decide what to do next rather than follow a script.

If you can write out the full decision tree in advance, you don’t need an agent.

Red flag: You’re describing the task as “it depends” without being able to say what it depends on.

2. Can you define what “done” looks like?

Agents work best when success is measurable. If you can’t define a clear, verifiable output, something you can check and say “yes, this is right,” you’ll struggle to evaluate performance and you’ll struggle to catch failures early.

Anthropic identifies this as one of the core conditions for successful agentic deployment: the output must be verifiable, ideally through automated tests or explicit criteria.
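In practice, that can be as simple as encoding "done" as executable checks. A minimal sketch, assuming a hypothetical invoice-extraction agent whose output is a dict (the field names and accepted currencies are illustrative):

```python
# "Definition of done" as executable checks for a hypothetical
# invoice-extraction agent. Field names are assumptions for the sketch.

REQUIRED_FIELDS = {"invoice_id", "amount", "currency", "due_date"}

def is_done(output: dict) -> bool:
    """Verifiable success criteria: every required field present and plausible."""
    if not REQUIRED_FIELDS <= output.keys():
        return False
    if not isinstance(output["amount"], (int, float)) or output["amount"] <= 0:
        return False
    if output["currency"] not in {"EUR", "USD", "GBP"}:
        return False
    return True
```

If you can't write a function like this for your task, that's the signal to revisit the problem definition before revisiting the technology.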

Red flag: Success involves subjective judgement that varies person to person.

3. What happens when it goes wrong?

Every agent will make mistakes. The question is whether your process can tolerate them, catch them, and recover.

We worked with a scale-up that wanted an agent to handle their entire inbound sales qualification process autonomously. After scoping the failure modes, we recommended a structured automation with a human review step for edge cases instead. Half the build cost, twice the reliability, and a cleaner audit trail.

High-stakes decisions (legal, financial, reputational) need a human in the loop. That changes both the architecture and the cost calculation considerably.

Red flag: The failure mode is “we send the wrong contract to a client.”

4. Do you have clean data and accessible systems?

An agent is only as capable as the information and tools available to it. This is consistently the biggest blocker in real deployments: not the model, not the architecture, but the data. A 2025 IBM study found that 42% of organisations cannot properly customise AI models due to poor-quality data. Agents amplify that problem because they make decisions based on what they find. Messy inputs, confident wrong answers.

Before building: is the data accurate and structured well enough for a model to use reliably? Are the systems it needs to access available via API, or does this require significant infrastructure work first?
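Before committing to a build, it's worth running a crude audit of that data. A minimal sketch, with hypothetical field names and an arbitrary 5% missing-value threshold; tune both to your own tolerance:

```python
# Quick data-readiness audit before pointing an agent at a dataset.
# Field names and the 5% threshold are illustrative assumptions.

def audit(records, required_fields, max_missing_ratio=0.05):
    """Return (ready, report), where report maps each required field
    to the share of records in which it is missing or empty."""
    report = {}
    for field in required_fields:
        missing = sum(1 for r in records if not r.get(field))
        report[field] = missing / len(records)
    ready = all(ratio <= max_missing_ratio for ratio in report.values())
    return ready, report
```

A failing audit doesn't mean the project dies; it means the first deliverable is data work, not agent work.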

Red flag: Your team says “the data is a bit messy” or “we’d need to build an integration first.”

5. Is this a capability gap or a capacity gap?

Most people skip this one.

A capacity gap means your team can do the work but doesn’t have enough time or people to do it at scale. Agents can genuinely help here. A capability gap means the work requires skills or knowledge your team doesn’t have. Agents will simulate that capability convincingly enough to cause problems, and the oversight required often costs more than the original problem.

Red flag: The brief is “the agent should do what our best consultant does.”

When agents genuinely make sense

We build agents regularly. The use cases where they earn their complexity tend to share a few things: the task is genuinely open-ended, the steps can’t be predicted in advance, and the system needs to interact across multiple tools or data sources in real time.

In practice, that means research and synthesis tasks (where the path to the answer changes every time), dynamic support triage (routing and responding to a wide variety of request types, not just answering an FAQ), and multi-system orchestration (where the logic of what to do next depends on state across several platforms simultaneously).

A rigid workflow would either fail in these contexts or require constant manual maintenance. That’s when an agent earns its cost.

Better questions before better technology

There’s no AI magic bullet. There’s hard work, trial and error, better questions, and human oversight. After 100+ experiments, that’s still the framework that holds.

Work through these five questions with your team before committing budget. Be honest about the answers. If the task is predictable, build a workflow. If the data isn’t ready, fix the data first. If success is hard to define, go back to the problem before touching the technology.

And if an agent genuinely is the right tool, build it with clear success criteria, human oversight at the right checkpoints, and a plan for what happens when it gets things wrong.

Key takeaways:

  • Most “agent problems” are actually workflow problems
  • The five questions cover: task variability, definition of done, failure tolerance, data readiness, and capability vs capacity
  • Agents earn their complexity when tasks are open-ended, dynamic, and span multiple systems
  • Start with better questions

Not sure which approach fits your problem? We run scoping sessions to help you figure that out before you spend anything. Let’s talk.


Mario Grunitz

Mario is a Strategy Lead and Co-founder of WeAreBrain, bringing over 20 years of rich and diverse experience in the technology sector. His passion for creating meaningful change through technology has positioned him as a thought leader and trusted advisor in the tech community, pushing the boundaries of digital innovation and shaping the future of AI.

Working Machines

An executive’s guide to AI and Intelligent Automation
