Article — Position paper · Open access

AI Investment ROI: Why 95% "Failure" Rate Says More About Our Measuring Tools

Traditional ROI metrics fall short for AI platforms — a three-level evaluation framework

Jérôme Vetillard · 8 pages

The real story behind the MIT study

A widely cited MIT study claims that 95% of AI projects fail to deliver ROI. The statistic is alarming — and misleading. The researchers defined success as measurable P&L impact within six months of deployment. Under this criterion, most traditional IT investments — ERP rollouts, CRM deployments, cloud migrations — would qualify as failures too. Executive business cases typically project a 12-month ROI horizon, and actual break-even is more often reached after 24 to 48 months. Applying a six-month yardstick to AI is a double standard that says more about the measurement instrument than about the technology.

What the six-month lens automatically excludes: skills and knowledge acquired by teams, process improvements that compound over time, brand differentiation, customer experience gains, and innovation capabilities that enable future projects. The problem is not that AI doesn’t create value — the measuring stick is simply the wrong one.

Why traditional ROI metrics fall short for AI platforms

Traditional ROI frameworks were designed for discrete, siloed investments: a new factory line, a store opening, a departmental IT system. Costs and benefits trace linearly to a single business unit. AI platforms break this model. They are cross-BU by design, financed centrally yet delivering value simultaneously to marketing, sales, operations, R&D, and compliance. Both cost allocation and benefit attribution become orders of magnitude more complex.

A second factor is systematically overlooked: it is never “AI” in isolation — it is always “Data + AI”. Before an organisation can deploy AI at scale, it must modernise its data foundations, harmonise data models, improve governance and lineage, expose secure APIs, and ensure regulatory compliance. Without this structural investment, ROI calculations understate value because they ignore the enabling infrastructure.

The old model vs. the new reality

The siloed project model assumed one project equals one department equals one clear cost centre, with benefits appearing quickly and staying within the same perimeter. AI platforms introduce shared cloud infrastructure across departments, data science teams working on multiple initiatives, and compliance costs distributed across all use cases. On the benefit side, a single customer scoring model may simultaneously serve marketing (targeting), sales (lead prioritisation), customer service (personalisation), and risk management (fraud detection). Traditional accounting frameworks are structurally unable to handle this cross-department value creation.

A three-level evaluation framework

Level 1 — Individual use case performance. Traditional metrics remain valid but require realistic timeframes: cost per prediction, time savings valued at loaded hourly rates, error rate reductions, cycle time improvements, and customer satisfaction increases. The key difference: measure over 18 to 36 months rather than six, so that NPV projections capture enough of the benefit stream to be reliable.
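To make the horizon effect concrete, here is a minimal sketch with hypothetical figures (the costs, benefits, and discount rate are illustrative, not drawn from the study) showing how the same use case flips from apparent failure to success when the NPV window widens:

```python
# Hypothetical Level 1 appraisal: the same monthly benefit stream
# judged over 6 months vs. 36 months (all figures are illustrative).

def npv(cash_flows, annual_rate=0.10):
    """Net present value of monthly cash flows (month 0 = initial outlay)."""
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    return sum(cf / (1 + monthly_rate) ** t for t, cf in enumerate(cash_flows))

initial_cost = -500_000      # build + deployment
monthly_benefit = 25_000     # e.g. time savings at loaded hourly rates

six_months  = [initial_cost] + [monthly_benefit] * 6
three_years = [initial_cost] + [monthly_benefit] * 36

print(f"NPV at 6 months:  {npv(six_months):,.0f}")   # negative: looks like a failure
print(f"NPV at 36 months: {npv(three_years):,.0f}")  # positive: same project, longer lens
```

Under the six-month lens the project never recovers its outlay; over 36 months the identical benefit stream turns clearly positive.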

Level 2 — Portfolio and platform benefits. Infrastructure costs shared across multiple use cases reduce the marginal cost of new capabilities by 70 to 80%. Data quality improvements benefit all models simultaneously. Cross-pollination of insights between departments generates network effects in which each new use case makes the existing ones more powerful. Portfolio ROI formula: total benefits from all use cases divided by shared platform costs.
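A short sketch of that portfolio formula, with hypothetical benefit and cost figures, shows why per-project accounting misleads: charge the full platform cost to any single use case and each looks unprofitable, while the portfolio as a whole does not.

```python
# Hypothetical Level 2 view: one shared platform, several use cases.
# The portfolio measure is total use-case benefits over shared platform costs.

shared_platform_cost = 2_000_000   # data foundation, MLOps, compliance (illustrative)
use_case_benefits = {
    "marketing_targeting": 900_000,
    "sales_prioritisation": 700_000,
    "service_personalisation": 500_000,
    "fraud_detection": 600_000,
}

total_benefits = sum(use_case_benefits.values())
portfolio_ratio = total_benefits / shared_platform_cost

# Naive per-project accounting: charge the full platform cost to each
# project, and every single one looks unprofitable (ratio below 1).
for name, benefit in use_case_benefits.items():
    print(f"{name}: benefit/cost = {benefit / shared_platform_cost:.2f}")

print(f"portfolio: benefit/cost = {portfolio_ratio:.2f}")  # 1.35
```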

Level 3 — Strategic capabilities and options value. Speed to deploy new AI capabilities when opportunities arise, organisational learning and skill development, regulatory compliance infrastructure, and competitive differentiation that is difficult for rivals to replicate. This is the “AI readiness” dimension — the ability to quickly capitalise on opportunities because the foundation is already in place.

Portfolio management: from project budgets to platform investment

Smart organisations are shifting from project-level budgeting (each AI project fights for its own budget, success measured individually, redundant infrastructure) to platform-level investment (AI capabilities funded centrally, success measured across the portfolio, costs shared and benefits amplified). Financial operations (FinOps) for AI brings real-time cost monitoring with automatic optimisation, usage-based internal billing, and resource allocation that adapts to actual demand.
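The usage-based internal billing mentioned above can be sketched as a simple chargeback rule. This is one possible allocation scheme (metering predictions served per department is an assumption, and the figures are illustrative), not a prescribed FinOps implementation:

```python
# Sketch of usage-based internal billing: shared platform costs recharged
# in proportion to metered consumption (all figures are illustrative).

shared_monthly_cost = 120_000   # cloud + data science platform
metered_usage = {               # e.g. predictions served per department
    "marketing": 4_000_000,
    "sales": 2_500_000,
    "operations": 2_000_000,
    "risk": 1_500_000,
}

total_usage = sum(metered_usage.values())
chargeback = {
    dept: shared_monthly_cost * usage / total_usage
    for dept, usage in metered_usage.items()
}

for dept, cost in chargeback.items():
    print(f"{dept}: {cost:,.0f}")
```

The recharged amounts sum back to the shared cost by construction, which keeps the platform budget-neutral while making each department's consumption visible.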

How TweenMe addresses the ROI problem

TweenMe, the universal digital twin generator by Qualees, is architecturally designed to deliver portfolio-level ROI. Rather than one-off AI projects, TweenMe enables creation, deployment, and maintenance of digital twins at scale with controlled marginal costs (automated generation at a fraction of custom development), accelerated time-to-market (pipeline automation reduces deployment to weeks), and built-in portfolio value (shared infrastructure, cross-model learning, automatic maintenance, and data enrichment improving with each use case). Costs become transparent and usage-based; benefits measurable at both individual and portfolio levels.

What this means for business leaders

CFOs and finance teams must develop capabilities in shared-cost allocation, portfolio-level ROI calculation, and longer-term value assessment. Budget processes must shift from projects to platforms, with multi-level reporting that captures different types of value. CEOs face a strategic risk: organisations that impose six-month ROI requirements on AI platforms will systematically under-invest in transformative capabilities, ceding competitive advantage to rivals that evaluate at portfolio level.

The bottom line

For standalone AI projects, traditional metrics mostly work with longer timeframes and allowance for indirect benefits. For AI platforms serving multiple departments, new evaluation frameworks are essential — the old methods systematically show negative ROI even when massive value is being created. AI doesn’t have a value problem. It has a measurement problem. The future of AI ROI lies in building, managing, and measuring value-creating ecosystems that compound benefits over time.
