Observations:
1. The AI Hype vs. Reality
AI-enabled projects and proofs of concept are now underway in almost every organisation and industry sector. Yet the hype cycle has created unrealistic expectations and an irrational fear of missing out: boards demand AI strategies, vendors overpromise and under-deliver, and competitors promote their AI innovations. In this rush, many organisations launch AI projects or pilots without a clear understanding of the technology’s limitations or of the business problems they aim to solve.
Now that the AI wave has been underway for almost three years, several recent credible studies reveal the reality of many AI initiatives.
2. Key Studies on AI Project Failure
- MIT’s ‘State of AI in Business 2025’
  - Finding: 95 per cent of generative AI pilots fail to deliver return on investment (ROI).
  - Causes: Projects lack strategic alignment, suffer from weak data infrastructure, and stall in the pilot phase.
- RAND Corporation Report (2024)
  - Finding: AI projects fail at twice the rate of traditional IT projects.
  - Causes:
    - Misunderstanding the problem AI is meant to solve.
    - Poor infrastructure for deployment.
    - Applying AI to problems too complex for current capabilities.
- Bain & Company Technology Report (2025)
  - Finding: Generative AI in software development yields only modest gains.
  - Challenge: Productivity boosts (10–15 per cent) don’t translate into ROI due to time spent correcting AI errors.
  - Forecast: 40 per cent of agentic AI projects will be cancelled by 2027.
- Scale AI & Enquirer360 (2025)
  - Finding: Most companies see zero financial return from AI investments.
  - Cause: Misapplication of AI to inappropriate problems and oversimplified expectations.
- The Conference Board & ESGAUGE (2025)
  - Finding: 72 per cent of S&P 500 companies now disclose AI as a material risk.
  - Risks Identified:
    - Reputational fallout from failed implementations.
    - Cyber security vulnerabilities.
3. What Successful AI Projects Do Differently
1. Clear Problem Definition with Measurable Outcomes
One of the most common reasons AI projects fail is misunderstanding the problem. AI should be a tool for solving specific, well-defined business challenges, yet many initiatives are technology-led rather than business-driven. Technical teams may pursue cutting-edge models such as deep learning, transformers, and generative AI when simpler solutions would suffice. This adds complexity, cost, and risk without a corresponding benefit case or outcome.
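As a simple illustration of this discipline, the sketch below (using a synthetic, stand-in dataset) benchmarks a more complex model against a basic baseline before any commitment is made. If the simpler model already meets the agreed success metric, the added complexity is rarely justified.

```python
# Sketch: benchmark a complex model against a simple baseline on the same
# task before committing to it. The synthetic dataset is a stand-in for a
# real business problem; the point is the comparison, not the models.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

baseline = LogisticRegression(max_iter=1000)
complex_model = GradientBoostingClassifier(random_state=0)

baseline_score = cross_val_score(baseline, X, y, cv=5).mean()
complex_score = cross_val_score(complex_model, X, y, cv=5).mean()

print(f"Baseline accuracy:      {baseline_score:.3f}")
print(f"Complex model accuracy: {complex_score:.3f}")
# If the uplift does not clear the agreed success metric, the simpler model
# (cheaper to run and easier to explain) is often the better business choice.
```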
2. Data Governance and High-Quality Data Sets
AI models need clean, structured, relevant, and representative datasets. Many organisations underestimate the effort required to prepare data for AI. They assume existing data is sufficient, only to discover it’s fragmented, inconsistent, or biased.
Some common data-related issues (illustrated in the sketch after this list) include:
- Poor data quality and missing values.
- Lack of labelled data for supervised learning.
- Inadequate data governance and privacy controls.
- Siloed or inconsistent data definitions across departments.
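A minimal sketch of the kind of upfront data quality audit that surfaces these issues is shown below; the file name, column names, and expected category labels are hypothetical placeholders, not a prescribed schema.

```python
# Minimal data quality audit sketch (illustrative only).
# The file name, column names, and expected segment labels below are
# hypothetical placeholders.
import pandas as pd

df = pd.read_csv("customer_records.csv")  # hypothetical data extract

# 1. Missing values: proportion of nulls per column.
missing = df.isna().mean().sort_values(ascending=False)
print("Proportion of missing values per column:\n", missing)

# 2. Duplicates: records that would silently skew model training.
print("Duplicate rows:", df.duplicated().sum())

# 3. Inconsistent definitions: category labels that differ across departments.
expected_segments = {"SMB", "Enterprise", "Government"}
observed_segments = set(df["segment"].dropna().unique())
print("Unexpected segment labels:", observed_segments - expected_segments)

# 4. Labelled data coverage: rows that actually carry the target label
#    required for supervised learning.
if "churned" in df.columns:
    print("Labelled rows:", df["churned"].notna().sum(), "of", len(df))
```

Even a basic audit like this, run before model selection, quickly reveals whether the data preparation effort has been underestimated.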
3. Build Solid Infrastructure Before Model Deployment
Piloting an AI solution is only the beginning. Deploying it into production, integrating it with existing systems, achieving scalability, and maintaining ongoing performance are the key challenges.
AI systems impose computational and architectural requirements that will test existing IT infrastructure. Models require significant processing power for training and inference; they generate vast amounts of data that must be stored, versioned, and retrieved efficiently; and they depend on real-time data pipelines to deliver timely outputs. They also need monitoring systems that track model performance and detect model drift as data distributions shift over time.
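As one illustration of the monitoring requirement, the sketch below applies a two-sample Kolmogorov-Smirnov test (one common technique among many) to flag drift in a single numeric input feature; the feature values and significance threshold are assumptions made for the example.

```python
# Minimal drift-detection sketch: compare the training-time distribution of
# one numeric feature against recent production data using a two-sample
# Kolmogorov-Smirnov test. The feature values below are simulated.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(training_values, production_values, alpha=0.05):
    """Return (drifted, statistic, p_value) for the two samples."""
    statistic, p_value = ks_2samp(training_values, production_values)
    return p_value < alpha, statistic, p_value

# Hypothetical example: a baseline captured at training time versus a
# shifted sample observed in production.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=100.0, scale=15.0, size=5_000)
recent = rng.normal(loc=110.0, scale=18.0, size=1_000)

drifted, stat, p = feature_drifted(baseline, recent)
if drifted:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p:.4f}); review or retrain the model.")
else:
    print("No significant drift detected for this feature.")
```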
Organisations with legacy or on-premises systems face particular challenges. Constrained processing capacity, siloed databases, and batch-oriented processes cannot easily support the real-time, integrated data flows that AI applications require. Modernising this infrastructure can represent a substantial investment that outweighs the cost of deploying the AI model itself. Requirements for a robust security infrastructure present further challenges to successful deployment.
4. Start With a Pilot, But Plan to Manage It Actively
Many organisations face internal pressure to utilise AI to innovate and deliver business value. A traditional approach to exploiting a new technology wave is to launch a proof of concept (POC) to answer “can it work?”, then progress to a pilot that answers “does it work well in our environment, and is it worth rolling out?”. However, most pilot projects operate in artificial conditions that mask real-world complexity. Pilot environments often feature curated data, and models run standalone rather than integrated into existing ecosystems as they would be in production. This can result in a portfolio of AI projects stuck in pilot mode, consuming resources without delivering the required business value.
While pilot projects may demonstrate technical feasibility, key executive sponsors will withdraw their commitment unless there is a clear path to deployment or business value. AI pilots, in particular, need to be planned and managed proactively from the outset if they are to avoid the POC trap.
5. Manage Organisational Change Actively and Build Capability
In IBRS’s hands-on experience with technology transformations, a similar pattern has emerged with each technology-enabled wave of innovation, from the internet and mobility to IoT and AI: a hype cycle of heightened promise and expectations, many failed investments, and eventually the realisation that real transformation is primarily a human challenge, not just a technical one.
Executive sponsorship and change management remain the bedrock of any initiative. AI, in particular, changes how work is done: it automates tasks, augments decision-making, and shifts roles. Yet many organisations fail to prepare their workforce for this change. Resistance from employees, fear of job displacement, and a lack of AI literacy can derail projects.
Additionally, there is a shortage of AI talent: data scientists, machine learning (ML) engineers, and change managers are all in high demand, and without the right skills, organisations struggle to build and maintain AI solutions. Use explainable AI (xAI) techniques to develop consistent AI literacy across the organisation, so that business stakeholders can collaborate effectively with technical teams. Invest in both recruiting specialised talent and developing existing employees; building these skills and capabilities is critical.
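As one illustrative xAI technique (an assumption for this example, not a prescribed toolset), permutation feature importance can show business stakeholders, in plain terms, which inputs most influence a model’s predictions; the dataset and model below are placeholders.

```python
# Illustrative explainability sketch: permutation feature importance on a
# placeholder classification model, so stakeholders can see which inputs
# most influence its predictions. Dataset and model are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# How much does randomly shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda item: item[1], reverse=True)

print("Top five most influential features:")
for name, score in ranked[:5]:
    print(f"  {name}: {score:.4f}")
```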
Next Steps
To improve the success rate of AI projects, IT and business executives need to exercise leadership and cut through the noise. They should consider the following:
- Start with the Problem, Not the Technology: Ensure every AI initiative is grounded in a clear business need. Engage stakeholders early to define success metrics and use cases.
- Invest in Data Readiness and Governance: Build robust data pipelines, enforce quality standards, and ensure ethical data use. Treat data as a strategic asset.
- Build Cross-Functional Teams and Foster AI Literacy: Combine technical talent with domain expertise. Upskill staff and create a culture of experimentation and learning.
- Establish AI Governance and Feedback Mechanisms: Monitor model performance, address bias, and ensure transparency. Create feedback loops to improve the AI model (a minimal monitoring sketch follows this list).
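A minimal sketch of such a feedback loop, assuming predictions and eventual outcomes are logged to a simple file (the log schema and accuracy threshold here are hypothetical), might look like the following.

```python
# Minimal feedback-loop sketch: compare logged predictions with eventual
# outcomes and flag the model for review when accuracy degrades.
# The log file, its columns, and the 0.85 threshold are hypothetical.
import csv

ACCURACY_THRESHOLD = 0.85

def review_prediction_log(path="prediction_log.csv"):
    """Compute accuracy over logged predictions whose outcomes are known."""
    correct = total = 0
    with open(path, newline="") as log_file:
        for row in csv.DictReader(log_file):  # expects columns: prediction, actual
            if row["actual"]:                 # only rows with a known outcome
                total += 1
                correct += row["prediction"] == row["actual"]
    if total == 0:
        return None
    accuracy = correct / total
    if accuracy < ACCURACY_THRESHOLD:
        print(f"Accuracy {accuracy:.2%} is below threshold; trigger review or retraining.")
    else:
        print(f"Accuracy {accuracy:.2%} is within tolerance.")
    return accuracy
```

Closing the loop in this way keeps model performance visible to both technical teams and business sponsors, rather than leaving it to degrade silently after deployment.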


