Table of Contents
- Why aren’t companies seeing measurable AI productivity gains despite heavy spending on generative AI?
- Are we seeing early signs?
- Constraint 1: Data centers need trades, not slogans
- Constraint 2: Energy demand is a hard ceiling
- Constraint 3: “Value” is not showing up consistently
- A practical example of limited gains
- OpenAI’s finances: risk framing without hype
Why aren’t companies seeing measurable AI productivity gains despite heavy spending on generative AI?
Are we seeing early signs?
Yes—there are credible early signals that parts of the AI hype cycle are colliding with real-world constraints: build capacity (skilled trades), operating capacity (electricity), and business capacity (provable ROI).
That does not mean “AI is over”; it means the easiest narratives (“infinite scaling” and “instant productivity”) are being stress-tested by economics and infrastructure.
Constraint 1: Data centers need trades, not slogans
Large-scale AI expansion depends on building and upgrading data centers, and that work requires electricians, plumbers, and HVAC technicians at scale.
WIRED reports the US is facing a shortage of these skilled trades as AI data center construction ramps up, creating a bottleneck that money alone can’t solve quickly.
Constraint 2: Energy demand is a hard ceiling
AI workloads are pushing up electricity demand, and multiple analyses warn that AI-related compute growth is colliding with grid limits and public tolerance for higher energy use.
One peer-reviewed estimate cited by the BBC projects AI could consume 85–134 TWh per year by 2027 (roughly comparable to a country-scale electricity footprint).
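To make the 85–134 TWh/year figure more concrete, it can be converted into average continuous power draw, the unit grid planners reason in. The conversion below is simple arithmetic on the cited range; nothing beyond the BBC-cited numbers is assumed.

```python
# Convert an annual energy total (TWh/year) into the average continuous
# power draw (GW) it implies, using 1 TWh = 1000 GWh and 8760 hours/year.

HOURS_PER_YEAR = 365 * 24  # 8760

def twh_per_year_to_avg_gw(twh_per_year: float) -> float:
    """Average power in GW implied by an annual energy total in TWh."""
    return twh_per_year * 1000 / HOURS_PER_YEAR

# The BBC-cited projection for AI by 2027:
low, high = 85, 134
print(f"{twh_per_year_to_avg_gw(low):.1f}–{twh_per_year_to_avg_gw(high):.1f} GW average draw")
# → 9.7–15.3 GW average draw
```

Roughly 10–15 GW of continuous draw is on the order of a dozen large power plants running around the clock, which is why analysts frame this as a grid-level constraint rather than a data-center-level one.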
Constraint 3: “Value” is not showing up consistently
A Forrester principal analyst told The Register that AI’s impact is “nowhere in recent productivity statistics,” and argued that much enterprise genAI “isn’t really working” yet.
PwC’s 29th Global CEO Survey findings (as reported by heise) describe a gap between expectations and outcomes, with only a small minority of companies achieving measurable AI results.
A practical example of limited gains
In an experiment reported by The Register, an insurance-focused chatbot (“Axlerod”) saved only seconds per lookup on average, which the article frames as potentially meaningful at high volume but still modest per task.
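The article’s framing, that seconds saved per task only matter at volume, is easy to check with a back-of-envelope calculation. The sketch below uses illustrative numbers: the seconds saved and annual lookup volume are assumptions for demonstration, not figures from The Register’s report.

```python
# Back-of-envelope: per-task savings of a few seconds only add up at volume.
# seconds_saved and lookups_per_year below are illustrative assumptions,
# not numbers reported for the "Axlerod" chatbot.

def annual_hours_saved(seconds_saved: float, lookups_per_year: int) -> float:
    """Total staff-hours saved per year for a given per-lookup saving."""
    return seconds_saved * lookups_per_year / 3600

# Example: 5 seconds saved per lookup across 2 million lookups per year.
hours = annual_hours_saved(5, 2_000_000)
print(f"{hours:.0f} hours/year")  # → 2778 hours/year
```

At roughly 2,000 working hours per full-time employee, that hypothetical volume would free up on the order of one to one-and-a-half FTEs, meaningful in aggregate, but consistent with the article’s point that the per-task gain is modest.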
OpenAI’s finances: risk framing without hype
Reports based on internal forecasts, via coverage of The Information, say OpenAI projected a $14 billion loss for 2026, highlighting how expensive frontier AI can be even with strong revenue growth.
Treat “bankruptcy soon” claims as scenario risk: the credible point is not certainty of failure, but that funding needs and operating costs remain unusually high versus typical software economics.