AI has never been more accessible. In just the past few years, organizations have gone from experimenting with machine learning to rapidly building applications powered by large language models and generative AI.
It’s now possible to create something that looks like a fully functional AI solution in a matter of days.
But that speed is also part of the problem.
More companies are starting AI projects than ever before, yet more of those projects are failing to deliver real impact.
In a recent webinar, Dr. Tim Oates breaks down why this is happening. Drawing on years of experience in AI consulting, he explains where organizations go wrong and what it actually takes to build AI systems that work in production and drive measurable business value.
Organizations are not lacking ambition when it comes to AI. According to research referenced in the webinar, most executives are pursuing AI to:
Among these, efficiency consistently stands out as the top priority. Companies want to reduce friction in their workflows and allow employees to spend more time on meaningful work.
But while the goals are clear, the results often aren’t.
Dr. Oates highlights a key finding from industry research:
46% of organizations investing in generative AI report that no single enterprise objective has seen a strong positive impact.
At the same time, failure rates are increasing. The percentage of companies abandoning AI initiatives before production has jumped significantly, from 17% to 42% year over year.
This doesn’t mean AI isn’t valuable. It means that organizations are struggling with execution.
And in most cases, the issues show up long before a model is ever deployed.
One of the most common patterns Dr. Oates sees is organizations starting with vague, top-down directives:
“We need to use AI.”
“We need to be more data-driven.”
While these goals are well-intentioned, they lack the specificity needed to guide a project. Teams often jump straight into building something (a chatbot, a model, a prototype) without clearly defining the business outcome they’re trying to improve.
This leads to a subtle but important shift:
Success becomes about the output, not the impact.
In practice, this creates several problems:
Dr. Oates emphasizes that every successful AI project should start with a clear answer to a simple question:
What decision or workflow are we trying to improve?
From there, teams need to define:
Without this structure, projects tend to drift and eventually stall.
Another major issue is the gap between how organizations perceive their data readiness and the reality of working with it.
At a high level, many companies feel “somewhat ready” for AI. But when projects begin, data challenges quickly surface.
According to the webinar, only 22% of organizations consider their data fully ready, while the majority fall into the “somewhat ready” category.
That distinction matters.
Because in practice, “somewhat ready” often means:
In rare cases, the data simply doesn’t exist. But more often, the issue is usability, not availability.
Dr. Oates points out that data challenges consistently rank among the most difficult aspects of AI projects, including:
These are not small issues; they are foundational blockers.
As AI tools become more powerful, there’s a growing tendency to apply them everywhere.
But not every problem requires generative AI.
Dr. Oates highlights a common mistake: using large language models for tasks that could be solved more effectively with simpler methods.
In some cases, the problem is not even an AI problem at all. It may be:
Applying a complex model in these situations adds cost and unpredictability without improving outcomes.
Instead, he recommends matching the approach to the problem:
He also emphasizes the importance of right-sizing the solution. Smaller, simpler models are often faster, cheaper, and more reliable, and in many cases just as effective.
One of the most important insights from the webinar is how dramatically the effort changes across different stages of an AI project.
Coming up with a good idea takes some effort.
Building a working demo takes a bit more, but not much.
But moving that demo into production? That’s where the real work begins.
Dr. Oates describes this as a massive gap.
Modern tools make it easy to create something that looks finished. A demo can be built quickly and generate excitement across the organization. But once teams try to deploy it, new challenges emerge:
This is where many projects break down.
Executives see a working demo and expect rapid deployment. But the transition to production requires significant additional effort, and without planning for it early, projects can stall or fail entirely.
Even when the technical pieces are in place, organizational misalignment can prevent success.
Different groups within a company often have very different priorities:
When these priorities are not aligned, projects struggle to move forward.
Dr. Oates notes that this often results in:
To avoid this, organizations need:
Without these elements, even strong technical work can fail to translate into real value.
One example shared in the webinar highlights how a well-defined AI project can drive measurable results.
In an example sales development workflow, the goal was to improve how leads were prioritized.
Instead of starting with technology, the team defined:
- A clear KPI: meeting conversion rate
- A baseline: approximately 8%
- A target: a 15% improvement
- Guardrails: ensuring customer experience did not degrade
They then tested the AI system against the existing approach using A/B testing.
This structure made it possible to clearly measure impact and ultimately demonstrate ROI.
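To make the structure above concrete, here is a minimal sketch of how such a KPI comparison might be wired up. It assumes the figures from the example (an 8% baseline meeting conversion rate and a 15% relative improvement target); the lead counts in the A/B arms are made up for illustration and are not from the webinar.

```python
# Hypothetical sketch of the KPI framing in the sales development example.
# Baseline and target mirror the article; the A/B counts are illustrative.

def target_rate(baseline: float, relative_improvement: float) -> float:
    """Convert a relative improvement goal into an absolute KPI target."""
    return baseline * (1 + relative_improvement)

def conversion_rate(conversions: int, leads: int) -> float:
    """Observed meeting conversion rate for one arm of the A/B test."""
    return conversions / leads

baseline = 0.08
target = target_rate(baseline, 0.15)  # 0.08 * 1.15 = 0.092

# Illustrative A/B results (invented counts):
control_rate = conversion_rate(40, 500)    # existing lead prioritization
treatment_rate = conversion_rate(48, 500)  # AI-assisted prioritization

lift = (treatment_rate - control_rate) / control_rate
met_target = treatment_rate >= target

print(f"target rate: {target:.3f}")
print(f"observed lift: {lift:.1%}, target met: {met_target}")
```

With explicit numbers like these, "did the AI help?" becomes a yes/no check against a pre-agreed threshold rather than a matter of opinion, which is what makes the ROI demonstrable.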
AI projects don’t fail because organizations lack access to powerful tools. They fail because of gaps in definition, data, alignment, and execution.
The organizations that succeed tend to follow a consistent pattern:
Perhaps the most important takeaway from the webinar is this:
AI is not a shortcut to results; it’s a capability that requires structure, discipline, and iteration.
The technology is powerful. But without the right foundation, even the most promising AI project will struggle to deliver real value.
Looking for help ensuring your AI project or initiative is successful?
Synaptiq can help you identify high-value use cases, launch focused pilots, define clear KPIs, and build and deploy AI solutions for your business challenges, so you can measure results, demonstrate ROI, and confidently scale what works.
Contact me to learn more or discuss your next project.