The four structural AI challenges that determine whether implementation works

Date
April 28, 2026
Hot topics 🔥
AI & Tech, Entrepreneurship
Contributor
Mario Grunitz

Vendor demos focus on capability and cost. Both matter. But the challenges that actually determine whether an AI project succeeds sit in a different set of questions entirely — ones that rarely feature in technology briefings and almost always surface after a tool has been selected rather than before.

In our experience working with businesses at the early stages of AI adoption, four areas come up consistently: legal, regulatory, organisational, and operational. They’re structural. And working through them before implementation is what separates a project that lands well from one that has to be unpicked later.

Each challenge comes down to a core question:

Legal: Who is liable, and who owns what?
Regulatory: Which rules apply to your use case, and when?
Organisational: Who owns AI inside your business?
Operational: How does AI change your workflows, and is that change designed?

Challenge 1: Legal

According to Morgan Lewis’s global AI legal overview, AI legal risk now spans privacy, employment, intellectual property, cybersecurity, and antitrust — and industry-specific scrutiny is intensifying across financial services, healthcare, and retail.

For managers, the immediate questions are practical. Who owns content or analysis that an AI system produces? What is the organisation’s liability if that output is wrong? If an AI-assisted process affects a hiring decision or a credit assessment, who is accountable?

The right time to raise these questions is before a workflow is built around a tool that assumes the answers. The terms under which a vendor can use your data to improve their model, the employment implications of automating tasks people currently perform, and the contracts governing AI tool usage — these are areas where assumptions tend to be expensive. A short checklist for any use case under consideration: what does the tool do with our data, who is liable for its outputs, and what changes for the people whose work it touches.

Challenge 2: Regulatory

For businesses operating in Europe or serving European customers, the regulatory landscape has shifted materially. The EU AI Act entered into force in August 2024 and is rolling out in phases. Prohibited practices have been enforceable since February 2025. Full enforcement follows in August 2026, with fines for violations reaching up to €35 million or 7% of global annual turnover.

The Act classifies AI systems across four risk tiers. For most organisations, the immediate task is understanding which tier a proposed use case sits in, because compliance obligations vary significantly depending on the classification. A tool used for internal content drafting carries different requirements than one used in recruitment or credit assessment.
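The tier-mapping exercise is, in essence, a use-case inventory. A minimal sketch in Python, for illustration only and not legal advice: the tier names follow the Act, but the example use cases and their assignments are hypothetical and would need review against the Act's actual criteria.

```python
# Illustrative only, not legal advice: a minimal use-case inventory mapped
# onto the EU AI Act's four risk tiers. The tier names follow the Act;
# the use cases and their assignments here are hypothetical examples.

EU_AI_ACT_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical inventory for a mid-sized business.
use_case_inventory = {
    "social scoring of customers": "unacceptable",  # prohibited practice (Art. 5)
    "CV screening in recruitment": "high",          # employment is a high-risk area
    "customer-facing chatbot": "limited",           # transparency obligations
    "internal content drafting": "minimal",         # no specific obligations
}

def tier_of(use_case: str) -> str:
    """Return the assigned risk tier, or flag the use case for review."""
    return use_case_inventory.get(use_case, "unclassified: needs review")

print(tier_of("CV screening in recruitment"))  # prints "high"
```

The point of the sketch is the default: anything not yet classified is flagged for review rather than silently treated as low-risk.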

There is a deeper layer too. As I noted in a recent post on AI sovereignty, the debate about data tends to stop at infrastructure questions, such as where servers run, while the harder questions live in data models and decision logic. Who controls how your data is structured, accessed, and used to make decisions? Who can audit those decisions after the fact? These are regulatory questions as much as technical ones, and they warrant a clear answer before any contract is signed.

If your organisation hasn’t mapped its AI use cases against the Act’s risk tiers, that process is worth starting now. Compliance preparation typically requires 12 to 18 months, and August 2026 is close. The AI in Practice workbook includes a practical guide to working through your use case inventory — free to download.

Challenge 3: Organisational

The organisational challenge is the one most consistently underestimated, because it presents as a people problem when it’s a governance problem. The central question: who owns AI inside your business?

Deloitte’s State of AI in the Enterprise 2026 report, which surveyed 3,235 senior leaders across 24 countries, found that insufficient worker skills remain the biggest barrier to integrating AI into existing workflows, and that only 34% of organisations are genuinely reimagining their business around AI. McKinsey’s State of AI research shows that organisations are now actively managing an average of four AI-related risks, compared to two in 2022 — the governance burden is growing faster than most structures are adapting to it.

The practical elements of organisational readiness are decision rights, accountability, and capability. Who decides which tools the business adopts? Who is responsible when a model produces an output that affects a customer? Does the team have enough working knowledge to make informed decisions about where AI should and should not be applied? Clarity on these points, before tools go into production, is the difference between adoption that holds and adoption that drifts.

Challenge 4: Operational

AI changes workflows. The question is whether that change is designed or accidental.

Protiviti’s Global Top Risks 2026 report found that 31% of executives rank integrating AI with existing technologies and processes as a top-three risk concern. The concern is well-founded. Tools embedded into workflows create dependency; the operational implications need to be mapped before implementation, not discovered during it.

The practical questions at this stage: which parts of a workflow does the tool touch, what happens to the steps it replaces, who is affected, and how will performance be measured? Workflow redesign is part of the implementation, not a follow-on task. Every AI-assisted process also needs a defined fallback — a clear point at which a human reviews, overrides, or takes over. Designing that in advance is considerably less costly than discovering you need one under pressure.

Where to take this from here

The four challenges rarely arrive in sequence. A regulatory question tends to surface an organisational one, which surfaces an operational question, which surfaces a legal one. They interact, and the order in which they’re addressed shapes the quality of decisions made across all four.

The AI in Practice workbook maps each challenge area with practical prompts designed for managers rather than technical teams. It’s a useful companion to the picture this post describes, and it’s free to download.

Navigating these four areas is where we spend most of our time with clients in the early stages of AI adoption. Getting the structure right before the tools go in is what separates a successful implementation from an expensive lesson. If you’d like to work through these challenges with your team, our AI strategists can help you build that structure from the ground up. Get in touch to find out how we work.

Download the free workbook →


Mario Grunitz

Mario is a Strategy Lead and Co-founder of WeAreBrain, bringing over 20 years of rich and diverse experience in the technology sector. His passion for creating meaningful change through technology has positioned him as a thought leader and trusted advisor in the tech community, pushing the boundaries of digital innovation and shaping the future of AI.

Working Machines

An executive’s guide to AI and Intelligent Automation
