
Why 88% of Enterprises Use AI But Only 6% See Real Returns: A CTO's Guide to AI ROI

QuarLabs Team · January 24, 2025 · 8 min read

The numbers tell a sobering story. According to McKinsey's 2025 State of AI report, 88% of organizations now use AI in at least one business function—up from 78% in 2024. Yet here's the troubling part: only 6% qualify as "AI high performers" achieving significant EBIT impact.

This isn't a technology problem. It's a value realization problem. And for CTOs under pressure to demonstrate AI returns, understanding this gap—and how to close it—has never been more critical.

The AI ROI Paradox

Widespread Adoption, Limited Impact

The disconnect between AI adoption and business impact is stark:

| Metric | Statistic | Source |
| --- | --- | --- |
| Organizations using AI | 88% | McKinsey 2025 |
| Organizations scaling AI enterprise-wide | 33% | McKinsey 2025 |
| AI high performers with EBIT impact | 6% | McKinsey 2025 |
| Companies abandoning most AI projects | 42% | S&P Global 2025 |
| CEOs happy with AI investment returns | <30% | Gartner 2025 |

This gap has real consequences. S&P Global data shows that the share of companies abandoning most of their AI projects jumped to 42% in 2025 (from just 17% the year prior), often citing cost and unclear value as top reasons.

The Investment Reality

Organizations are spending significantly on AI:

  • Average GenAI initiative spend: $1.9 million (Gartner 2024)
  • Enterprise AI spending forecast: $1.5 trillion worldwide (Gartner 2025)
  • Generative AI market size: $59.01 billion in 2025

Yet despite these investments, over 80% of respondents reported no meaningful impact on enterprise-wide EBIT (McKinsey 2025).

"Most organizations are still navigating the transition from experimentation to scaled deployment, and while they may be capturing value in some parts of the organization, they're not yet realizing enterprise-wide financial impact." — McKinsey State of AI 2025

What Separates AI High Performers?

Deloitte's AI ROI Performance Index

Deloitte created a comprehensive AI ROI Performance Index combining four key metrics:

  1. Direct financial return
  2. Revenue growth from AI
  3. Operational cost savings
  4. Speed of results achievement

By this index's overall score, only around one in five surveyed organizations qualifies as a true AI ROI Leader. These leaders outperform peers by:

  • Treating AI as an enterprise transformation
  • Embedding revenue-focused ROI discipline
  • Making early strategic bets on both generative and agentic AI

Characteristics of High Performers

| Trait | High Performers | Average Performers |
| --- | --- | --- |
| Clear KPIs before deployment | Yes | Often an afterthought |
| Business problem focus | Start with problem | Start with technology |
| Cross-functional integration | Deep | Siloed |
| Change management investment | Significant | Minimal |
| Governance maturity | Established | Ad hoc |

The KPI Difference

McKinsey's research identifies tracking well-defined KPIs as the single most important factor for AI success. This goes beyond basic usage metrics to include:

  • Business impact measurement
  • ROI tracking
  • Performance optimization over time

"Successful organizations establish measurement frameworks before AI deployment rather than trying to define success metrics after implementation begins." — McKinsey State of AI 2025

Why AI Projects Fail to Deliver ROI

Common Failure Patterns

1. Technology-First Approach

Organizations leading with "we need AI" rather than "we need to solve this problem" consistently underperform. The most successful deployments start with specific business problems.

2. Pilot Proliferation

Many organizations have dozens of AI pilots but struggle to scale winners. This "pilot purgatory" consumes resources without delivering enterprise impact.

3. Measurement Misalignment

Different executives define success differently:

  • CFOs emphasize ROI
  • CIOs focus on EBITDA
  • CTOs track technical KPIs

When each leader optimizes for a different measure, value gets stranded between them: a project can look successful by one metric while failing every other.

4. Infrastructure Gaps

AI requires:

  • AI-ready data pipelines
  • Cloud-native (or hybrid) architectures
  • Modern API layers
  • Scalable compute

Many legacy infrastructures can't support production AI at scale.

5. Governance Deficits

Without proper governance:

  • AI outputs can't be trusted
  • Regulatory compliance is at risk
  • Bias goes undetected
  • Decisions lack transparency

The Time Factor

According to Deloitte, the majority of organizations acknowledge they need at least a year to resolve ROI and adoption challenges such as governance, training, talent, trust, and data issues.

Building an AI ROI Framework

Phase 1: Foundation (Before Deployment)

Define Success Metrics

| Category | Example Metrics |
| --- | --- |
| Efficiency | Time saved, automation rate, processing speed |
| Quality | Error reduction, accuracy improvement, defect rate |
| Revenue | Conversion lift, deal size increase, new revenue |
| Cost | Labor savings, resource optimization, waste reduction |
| Strategic | Competitive advantage, market position, innovation rate |

Establish Baselines

Before deploying AI, measure current performance:

  • Document existing processes
  • Quantify current costs and time
  • Capture quality metrics
  • Record customer satisfaction

Phase 2: Implementation Tracking

Leading Indicators

Track early signals of value creation:

  • Adoption rates
  • User satisfaction
  • Process compliance
  • Output quality

Lagging Indicators

Measure business outcomes:

  • Revenue impact
  • Cost reduction
  • Productivity gains
  • Customer metrics

Phase 3: Value Realization

ROI Calculation

AI ROI = (Value Generated - Total Investment) / Total Investment × 100

Value Generated Includes:

  • Direct cost savings
  • Revenue increase
  • Productivity gains
  • Risk reduction
  • Quality improvements

Total Investment Includes:

  • Technology costs (licenses, infrastructure)
  • Implementation costs (consulting, integration)
  • Operational costs (maintenance, monitoring)
  • Change management costs (training, adoption)
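The formula above can be applied directly once the value and investment buckets are tallied. The dollar figures below are purely illustrative, not drawn from any cited survey:

```python
def ai_roi(value_generated: float, total_investment: float) -> float:
    """AI ROI = (Value Generated - Total Investment) / Total Investment x 100."""
    return (value_generated - total_investment) / total_investment * 100

# Illustrative figures only.
# Value generated: cost savings + revenue increase + productivity gains.
value_generated = 450_000 + 300_000 + 150_000            # $900k
# Total investment: technology + implementation + operations + change mgmt.
total_investment = 250_000 + 200_000 + 100_000 + 50_000  # $600k

print(f"{ai_roi(value_generated, total_investment):.0f}%")  # → 50%
```

Note that softer buckets such as risk reduction and quality improvement still have to be monetized (e.g., expected loss avoided, cost of rework saved) before they can enter the numerator.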

The Deloitte ROI Leaders Approach

Almost all organizations report measurable ROI with GenAI in their most advanced initiatives:

| ROI Level | % of Organizations |
| --- | --- |
| Exceeding 30% ROI | 20% |
| Meeting/exceeding expectations | 74% |
| Below expectations | 26% |

The difference? ROI Leaders embed measurement from day one.

Practical Recommendations

For CTOs Starting AI Initiatives

1. Start with Business Problems

The most telling trend from Fortune's 2025 analysis: companies fail when they lead with AI and succeed when they lead with the problem they are trying to solve.

2. Pick Measurable Use Cases

Choose initial deployments where:

  • Success can be quantified
  • Baselines exist
  • Impact is significant
  • Timeline is reasonable

3. Build Measurement Infrastructure

Before deployment:

  • Define KPIs
  • Establish baselines
  • Create dashboards
  • Plan regular reviews

4. Invest in Governance

Per Deloitte, organizations need to resolve governance challenges before expecting scaled ROI.

For CTOs Struggling with ROI

1. Audit Current Projects

  • Which projects have clear metrics?
  • What value has been demonstrated?
  • Where are measurement gaps?

2. Ruthlessly Prioritize

Focus resources on:

  • Projects with demonstrated value potential
  • Use cases with clear metrics
  • Areas with business sponsor commitment

3. Address Infrastructure Gaps

AI-ready infrastructure is a prerequisite, not an afterthought.

4. Strengthen Change Management

Technology is only part of the equation. People and processes matter equally.

The Path Forward

2025-2026: Foundation Building

  • Establish measurement frameworks
  • Address data and infrastructure gaps
  • Build governance capabilities
  • Focus on high-impact use cases

2027-2028: Scaled Impact

  • Expand proven use cases
  • Integrate AI across functions
  • Automate ROI tracking
  • Achieve enterprise-wide EBIT impact

Key Success Factors

| Factor | Why It Matters |
| --- | --- |
| Executive sponsorship | Resources and cultural change require leadership |
| Business-IT alignment | Technology must serve business goals |
| Measurement discipline | What gets measured gets managed |
| Patience with persistence | Meaningful AI ROI takes time to realize |

The QuarLabs Perspective

At QuarLabs, we believe AI should deliver measurable value—not become another technology experiment. Our products are designed with ROI in mind:

  • Letaria: 10x faster test case generation with measurable coverage improvements
  • Vetoid: Three assessment tools (Bid/No-Bid, Vendor Assessment, Post-Mortem) with trackable decision outcomes and lessons learned database

We build for practical value, not vanity metrics.


Sources

  1. McKinsey: The State of AI in 2025 - 88% adoption, 6% high performers, 39% EBIT impact
  2. Deloitte: AI and Tech Investment ROI - ROI Performance Index, 20% exceeding 30% ROI
  3. Deloitte: State of Generative AI in the Enterprise - 74% meeting/exceeding expectations
  4. Gartner: AI Investment Returns - Less than 30% CEO satisfaction
  5. S&P Global/WalkMe: Enterprise AI Adoption - 42% project abandonment rate
  6. Fortune: Three Trends in Enterprise AI 2025 - Problem-first approach insights

Ready to implement AI with measurable ROI? Contact us to learn how QuarLabs delivers practical AI value.