
We’re seeing a troubling pattern. Companies rush into LLM deployments with grand visions, only to hit walls they never saw coming. Not because the technology failed, but because they skipped the readiness assessment.
More and more demo and intro calls come in from folks who are disillusioned and fed up. Not with what AI can do, but with how misleading so many wanna-be AI software hustlers and automation gurus are.
The grift is real, and it’s costing companies time and money.
We’ve guided 12+ organisations through LLM implementations, from enterprise software to crisis response systems. Every successful deployment started the same way: with a thorough AI readiness assessment across data, infrastructure, team capabilities, and realistic costs. Here’s the framework we use before deploying any solution.
Data readiness: The foundation most companies overlook
Data readiness means your data is accessible, high-quality, and properly structured for AI consumption. It’s not about having massive datasets, it’s about having usable ones.
Here’s the reality: according to AIIM’s report, 77% of organisations rate their data as poor or average quality for AI. We saw this firsthand with Crisis Cognition, where we built an offline AI prototype for crisis response in challenging environments with limited connectivity and fragmented data sources.
The data constraints were brutal. Information was scattered across multiple systems, formats were inconsistent, and we had to work within strict security requirements. But we assessed and prepared the data requirements upfront, which made the difference between success and failure.
Our data readiness checklist includes four critical dimensions:
- Data accessibility: Can your AI actually reach the data it needs? We’ve seen companies with perfect datasets locked in systems their LLM can’t access. Break down those silos first. IBM’s research found only 29% of tech leaders say their data meets accessibility standards for generative AI.
- Data quality benchmarks: Accuracy, completeness, consistency, and timeliness. For Crisis Cognition, we had to establish quality thresholds before deployment because poor data would have meant poor crisis decisions.
- Data governance and security: Who can access what? How is PII protected? For regulated industries, this adds 5-15% overhead to your infrastructure costs.
- Volume and diversity assessment: Do you have enough representative data? Diversity matters more than volume for most LLM applications.
| Data Readiness Criteria | Ready | Needs Work | Not Ready |
|---|---|---|---|
| Data is centrally accessible | Single source of truth exists | Multiple systems, some integration | Completely siloed |
| Quality meets AI standards | >95% accuracy, complete | 80-95% accuracy, gaps exist | <80% accuracy, major gaps |
| Governance policies in place | Clear ownership, security, compliance | Policies exist, inconsistent enforcement | No formal policies |
| Sufficient volume and diversity | Representative dataset for use case | Limited coverage, some gaps | Insufficient data |
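The checklist above can be sketched as a simple scoring helper. The four dimensions and the Ready / Needs Work / Not Ready verdicts mirror the table; the 0-2 scoring scale and the class itself are our own illustrative scaffolding, not a standard tool.

```python
# Illustrative data-readiness scorer. Dimension names and verdict labels
# follow the checklist table; the 0-2 scale is an assumption for this sketch.
from dataclasses import dataclass

READY, NEEDS_WORK, NOT_READY = 2, 1, 0

@dataclass
class DataReadiness:
    accessibility: int      # 2 = single source of truth, 1 = partial integration, 0 = siloed
    quality: int            # 2 = >95% accuracy, 1 = 80-95%, 0 = <80%
    governance: int         # 2 = enforced policies, 1 = inconsistent, 0 = none
    volume_diversity: int   # 2 = representative, 1 = gaps, 0 = insufficient

    def verdict(self) -> str:
        scores = [self.accessibility, self.quality,
                  self.governance, self.volume_diversity]
        if NOT_READY in scores:
            return "Not ready: fix the red dimensions before model selection"
        if NEEDS_WORK in scores:
            return "Needs work: close the gaps, then reassess"
        return "Ready: proceed to infrastructure planning"

# A team with a quality gap lands in "Needs work" regardless of other scores.
print(DataReadiness(READY, NEEDS_WORK, READY, READY).verdict())
```

The weakest-link logic is deliberate: one "Not Ready" dimension blocks deployment, because (as with Crisis Cognition) poor data in any dimension means poor decisions downstream.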
Infrastructure requirements: Compute, storage, and capacity planning
AI infrastructure requirements include your hardware, cloud resources, networking capacity, and the ability to scale as usage grows. Get this wrong, and you’ll either overspend or create bottlenecks.
Let’s talk realistic costs. In 2025:
- Entry-level deployments with 7B-13B parameter models cost €600-€3,000 monthly
- Mid-tier setups with 30B-70B models run €15,000-€40,000 monthly
- H100 cloud instances run €1.90-€3.50 per hour
Our AI readiness framework for infrastructure assessment includes:
Current infrastructure audit: What compute and storage do you have? What networking capacity? Most companies underestimate their baseline requirements by 40-60%.
Projected usage patterns: How many concurrent users? What request volume? Peak vs average load? According to NVIDIA’s benchmarking analysis, for chat applications you need to keep average time to first token at or below 250ms to ensure responsiveness.
Latency requirements: Real-time chat needs sub-second responses. Batch processing can tolerate minutes or hours. Your infrastructure choices depend entirely on this.
Scaling strategy: Start small, prove value, scale deliberately. We’ve seen companies provision for enterprise load before validating their use case.
| Model Size | User Load | Monthly Cost Range | Hardware Example |
|---|---|---|---|
| 7B-13B | 100-500 users | €600-€3,000 | Single GPU, cloud instance |
| 30B-70B | 500-2,000 users | €15,000-€40,000 | 4-8 GPUs, multi-node |
| 70B+ | 2,000+ users | €50,000-€150,000 | 8+ GPUs, distributed |
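As a first-pass sizing exercise, the table maps straightforwardly to a tier picker. The tiers and cost ranges come directly from the table above; treat the function as a starting point, not a quote, since real sizing also depends on request volume, latency targets, and peak-vs-average load.

```python
# Back-of-the-envelope deployment sizing from concurrent user count.
# Tier boundaries and euro ranges are taken from the sizing table; everything
# else (the function shape, the dict output) is illustrative.
def size_deployment(concurrent_users: int) -> dict:
    tiers = [
        (500, "7B-13B", (600, 3_000), "Single GPU, cloud instance"),
        (2_000, "30B-70B", (15_000, 40_000), "4-8 GPUs, multi-node"),
        (float("inf"), "70B+", (50_000, 150_000), "8+ GPUs, distributed"),
    ]
    for max_users, model, cost_eur, hardware in tiers:
        if concurrent_users <= max_users:
            return {"model_tier": model,
                    "monthly_cost_eur": cost_eur,
                    "hardware": hardware}

print(size_deployment(800))  # 800 users lands in the 30B-70B tier
```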
The key insight: infrastructure costs don’t scale linearly. With smart optimisation (quantisation, batching, right-sizing), you can cut costs by 40-60% without sacrificing performance.
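Quantisation is the clearest example of why those savings are possible: halving bits per weight roughly halves GPU memory, which translates directly into fewer or cheaper GPUs. A rough footprint estimate, assuming ~20% overhead for KV cache and activations (a working assumption, not a benchmark):

```python
# Rough GPU memory footprint: parameters x bytes-per-weight, plus overhead.
# The 20% overhead figure for KV cache/activations is an assumption for
# illustration; real overhead varies with context length and batch size.
def gpu_memory_gb(params_billions: float, bits_per_weight: int,
                  overhead: float = 0.2) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return round(weights_gb * (1 + overhead), 1)

for bits in (16, 8, 4):
    print(f"70B model @ {bits}-bit: ~{gpu_memory_gb(70, bits)} GB")
```

Dropping a 70B model from 16-bit to 4-bit cuts the estimated footprint from ~168 GB to ~42 GB, which is the difference between a multi-node cluster and a pair of large GPUs.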
Team capabilities: Skill gaps and realistic staffing
PwC’s Global AI Jobs Report found 74% of SMB employees feel unprepared for AI tools. The skills gap is real, and it extends beyond just hiring data scientists.
Across our 12+ implementations, we’ve identified essential roles that most companies overlook. MLOps engineers average €135,000 annually, and they’re critical for managing AI infrastructure at scale. Data engineers handle pipeline management and quality monitoring. Domain experts who understand your business context are often more valuable than AI specialists.
For Crisis Cognition, we needed people who understood crisis response patterns, not just machine learning. For Pulsr, enterprise software expertise mattered more than cutting-edge research knowledge.
Your team readiness assessment should answer three questions:
Can your current team manage AI infrastructure, or do you need external partners? We typically recommend starting with partners who have proven track records, then building internal capabilities over time.
What training is required, and what’s the realistic timeline? Upskilling takes 3-6 months minimum. Plan accordingly.
Do you have the right vendor selection criteria? Look for partners who demonstrate proven results across multiple industries, not just prototypes. Ask for measurable business outcomes and delivery of proof of value within 90 days, not 12-month discovery phases. Technical flexibility to integrate with your existing stack matters, as does compliance discipline around GDPR and EU AI Act alignment.
Cost expectations: Beyond the model API bill
AI costs extend far beyond API tokens. We break down total cost of ownership into five components:
- Compute and hosting make up the largest slice, typically 60-70% of total costs. This includes GPU time, cloud instances, or on-premise hardware depreciation.
- Storage and data management account for 10-15%. Vector databases, data lakes, and backup systems add up quickly.
- Team costs include MLOps engineers, data engineers, and ongoing management. For self-hosted solutions, staff costs can match or exceed infrastructure costs within 12 months.
- Compliance and security overhead adds 5-15% for regulated industries. HIPAA, GDPR, and industry-specific requirements don’t come free.
- Vendor and consulting fees vary widely but typically run 20-30% of total budget for implementation phases.
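Since compute and hosting anchor the budget at 60-70% of the total, a quick way to sanity-check a budget is to back out total TCO from the compute line alone. The 65% midpoint below comes from the range in the text; this is a sketch for gut-checking budgets, not a budgeting tool.

```python
# If compute is ~60-70% of TCO, total ~ compute / share. The 0.65 default
# is the midpoint of the range quoted above; adjust to your own split.
def estimate_total_monthly(compute_eur: float, compute_share: float = 0.65) -> int:
    return round(compute_eur / compute_share)

# A €20,000/month compute bill (mid-tier 30B-70B territory) implies roughly
# €30k/month all-in once storage, team, compliance, and vendor fees land.
total = estimate_total_monthly(20_000)
print(f"Estimated total monthly TCO: ~€{total:,}")
```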
Cost optimisation strategies we use:
- Start with smaller models: prove value with a 7B parameter model before jumping to 70B.
- Use a hybrid approach: API access for experimentation, self-hosted for scale. The breakeven point typically hits around 2 million tokens per day.
- Batch process where possible: real-time inference is expensive.
- Monitor token usage patterns religiously.
| Deployment Model | Monthly Cost | Best For | Breakeven Point |
|---|---|---|---|
| API-only (e.g., GPT-4) | €500-€5,000 | Low volume, experimentation | <1M tokens/day |
| Hybrid (API + small self-hosted) | €3,000-€15,000 | Medium volume, mixed workloads | 1-2M tokens/day |
| Fully self-hosted | €15,000-€150,000+ | High volume, data sovereignty needs | >2M tokens/day |
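The breakeven logic behind that table is simple enough to sketch. The default per-1k-token price of €0.25 is back-solved from the table’s own figures (€15,000/month of self-hosting breaking even near 2M tokens/day), not a quoted provider rate; plug in your actual API pricing.

```python
# API vs self-hosted breakeven check. The €0.25/1k-token default is implied
# by the table (15,000 / (2,000 x 30) = 0.25), not a real provider rate.
def cheaper_option(tokens_per_day: float,
                   api_price_per_1k_eur: float = 0.25,
                   selfhost_monthly_eur: float = 15_000) -> str:
    api_monthly = tokens_per_day / 1_000 * api_price_per_1k_eur * 30
    return "API" if api_monthly < selfhost_monthly_eur else "self-hosted"

print(cheaper_option(500_000))    # low volume: API wins
print(cheaper_option(5_000_000))  # high volume: self-hosting wins
```

Run the numbers monthly: token volumes tend to grow after launch, and a deployment that started cheapest as API-only can quietly cross the breakeven line.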
Our pre-deployment checklist
Before deploying any LLM solution, run through this AI readiness framework across four critical dimensions:
✅ Data readiness: Audit accessibility, quality, governance, and volume before selecting a model. Poor data quality derails more projects than poor model selection.
✅ Infrastructure capacity: Size for realistic usage, not aspirational growth. Plan to scale, but don’t overprovision upfront.
✅ Team capabilities: Assess internal skills honestly. Use vendor selection criteria that prioritise proven results and speed to value.
✅ Total cost planning: Budget for ongoing operational costs, not just initial deployment. Include team, security, and scaling in your calculations.
Start small, prove value within 90 days, then scale deliberately. We’ve seen this pattern succeed across enterprise software, crisis response systems, and e-commerce applications.
Before deploying your next LLM solution, take the time to assess readiness across these dimensions. The companies that skip this step end up disillusioned. The ones that do it properly? They’re the ones still running successful AI implementations 12 months later.
Need help assessing your readiness? We’ve guided 12+ companies through this process, from data audits to production deployment.
