According to research from the Massachusetts Institute of Technology (MIT), a staggering 95% of GenAI pilot programmes are delivering no discernible impact on their companies’ profit and loss.
As AI investment grows, many enterprises risk wasting resources on projects that fail to deliver measurable value. The real question for leaders across a wide range of industries is why projects that delivered promising results in the test phase then failed to live up to expectations when reality kicked in. I believe I know the answer.
Regulation plays a role in this situation. Our own research found that the same percentage (95%) of EU businesses say that complex regulatory requirements have held back their GenAI projects. It’s obviously important that governments ensure AI is used responsibly, but for businesses, those fast-evolving requirements can cause confusion and hesitation. Should we crack on and risk being non-compliant down the line? Or do we wait to see how legislation changes and risk falling behind our competitors?
But ultimately, AI isn’t failing because of flaws in the models themselves. Rather, the issue lies with the underlying data AI depends on. Poor data management leads to poor results: bad input, bad output. And it’s not just a question of feeding the right information into the system. Even when firms have clean, high-quality data, it often lacks the context, connectivity, and governance needed to power AI successfully. This includes metadata and semantic context, as well as ontologies that map relationships between business terms.
All of this is critical to providing domain-specific, enterprise-specific, and use-case-specific context. In short, AI needs to be fed with data that’s not just accurate, but relevant, responsible, and reliable. A truly holistic approach to the data underpinning AI is needed if businesses are to do better than a 5% success rate.
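To make that concrete, here is a minimal, hypothetical Python sketch of the kind of semantic context an ontology can attach to a business term before it ever reaches a GenAI prompt. The term, its relationships, and the owner shown here are illustrative placeholders, not a prescribed model.

```python
# Hypothetical sketch: attaching governed, semantic context to a business term.
# The term names, relationships, and owner are placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class BusinessTerm:
    name: str
    definition: str
    related_terms: list[str] = field(default_factory=list)  # ontology relationships
    owner: str = "unassigned"                                # governance metadata

GLOSSARY = {
    "churn_rate": BusinessTerm(
        name="churn_rate",
        definition="Share of customers who cancel within a billing period.",
        related_terms=["active_customers", "cancellations"],
        owner="customer-analytics",
    ),
}

def build_context(term: str) -> str:
    """Assemble domain-specific context for a term, if a governed definition exists."""
    entry = GLOSSARY.get(term)
    if entry is None:
        return f"No governed definition found for '{term}'."
    related = ", ".join(entry.related_terms) or "none recorded"
    return (
        f"{entry.name}: {entry.definition} "
        f"Related terms: {related}. Data owner: {entry.owner}."
    )

print(build_context("churn_rate"))
```

The point of the sketch is simply that the same data point means more to a model when it arrives with its definition, its relationships to other business terms, and a clear owner attached.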
Many leaders feel pressured to adopt AI quickly, increasing the risk of joining the 95% who fail. In the rush to get projects live and into the company’s workflow, work on the right data strategy often falls by the wayside. Yet this groundwork is a crucial part of the project’s overall development.
Companies should start by defining clear business use cases and identifying the data needed to support them. Business leaders must be actively involved in this process and become data literate, rather than treating this as solely a data office or IT responsibility.
From there, the next step is to identify all potential data sources and design a data management architecture that can draw from them. This architecture should compile, clean, standardise, and deliver the right data in the right format at the right time. Achieving this requires the ability to capture and analyse metadata at scale.
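As a rough illustration only, the hypothetical Python sketch below shows one such stage: records from two placeholder sources are compiled, cleaned, standardised into a common schema, and delivered with basic lineage metadata attached. The source names, fields, and rules are assumptions made for the example, not a reference architecture.

```python
# Hypothetical sketch of one pipeline stage: compile, clean, standardise, deliver.
# Source names, fields, and cleaning rules are placeholders for illustration.
from datetime import datetime, timezone

RAW_SOURCES = {
    "crm": [{"customer": " Acme Ltd ", "revenue": "1,200"}],
    "billing": [{"customer": "acme ltd", "revenue": "950"}],
}

def standardise(record: dict) -> dict:
    """Clean and standardise a single record into a common schema."""
    return {
        "customer": record["customer"].strip().lower(),
        "revenue": float(record["revenue"].replace(",", "")),
    }

def run_pipeline(sources: dict) -> list[dict]:
    """Compile records from every source and attach lineage metadata."""
    delivered = []
    for source_name, records in sources.items():
        for raw in records:
            row = standardise(raw)
            row["_metadata"] = {  # metadata captured alongside the data itself
                "source": source_name,
                "ingested_at": datetime.now(timezone.utc).isoformat(),
            }
            delivered.append(row)
    return delivered

for row in run_pipeline(RAW_SOURCES):
    print(row)
```

Even in a toy example like this, the metadata travelling with each record is what later allows teams to trace where an answer came from and whether the underlying data can be trusted.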
#enterprise #GenAI #pilots #failing