95% of Gen AI pilots fail
- Mike Dzierza
- Sep 16
- 2 min read
If your company is currently building a gen AI project, this post might save you some time and money.

A recent MIT report reveals that 95% of generative AI pilot projects in companies are failing to deliver rapid revenue growth or measurable impact.
The study, based on interviews, surveys, and analysis of public AI deployments, highlights a stark divide: while a few startups and select large companies succeed, the vast majority of enterprise initiatives stall.
That doesn't necessarily mean the projects themselves are bad. It's more about how they've been deployed.
Here are some key findings from the report:
- Only about 5% of AI pilots achieve significant revenue acceleration. (I think the word "significant" is key here and widens the context.)
- Most failures are not due to the quality of AI models, but to a "learning gap": organisations and tools struggle to adapt to real workflows.
- Companies often misallocate resources, spending heavily on sales and marketing AI tools, while the biggest returns are found in back-office automation.
- Purchased AI solutions from specialised vendors succeed far more often (about 67%) than internally built tools.
- Success is more likely when line managers drive adoption and when tools are deeply integrated and adaptable.
- Workforce disruption is happening, especially in customer support and administrative roles, but mostly through not replacing vacated positions rather than mass layoffs.
- "Shadow AI" - the unsanctioned use of tools like ChatGPT - is widespread, and measuring AI's impact remains a challenge.
- Advanced organisations are now experimenting with agentic AI systems that can learn and act independently.
Looking at the less sexy workflows first, checking whether they would benefit from AI-powered support, and then identifying an external vendor that specialises in implementing similar solutions might be the easier way forward.