88% of Companies Are Using AI. Only 6% Are Seeing Results. Here Is Why.

McKinsey's latest global survey puts the number at 88%. Nearly nine in ten companies say they are actively using AI. But only 6% report seeing real impact on their bottom line. That is not a technology problem.

If It Is Not the Tools, What Is It?

The companies pulling ahead are using the exact same tools as everyone else. Same models. Same platforms. Same API access. And yet they are generating measurably greater revenue growth.

After 15 years in enterprise technology, watching organizations buy, deploy, and quietly abandon wave after wave of promising technology, I find the answer familiar. It is not new. AI just makes it more expensive and more visible.

Here are the five reasons most AI initiatives fail, and what it actually takes to fix them.

1. The Strategy Was Never Precise Enough to Fail Clearly

If I asked you right now what your AI strategy is specifically designed to produce, and how you will know it is working, could you answer in one sentence? Most leaders cannot.

Vague strategy has always been a problem in organizations. Historically you could survive it. A capable manager and enough institutional knowledge could fill in the gaps. People knew what "improve the customer experience" meant in practice even if nobody defined it precisely.

AI removes that safety net entirely. When you give AI a vague directive, it does not push back. It does not use 20 years of institutional context to figure out what you actually meant. It executes exactly what you specified. And if what you specified was fuzzy, you get fuzzy output. Confidently delivered at machine speed across your entire organization.

In 2025, roughly 42% of enterprises quietly abandoned most of their AI initiatives, up from 17% the year before. Not because the tools failed. Because nobody could tie the initiative back to a clear, measurable result. The strategy was never precise enough to tell anyone what winning looks like.

The fix is not complicated. Stop using words like "create efficiencies" or "accelerate innovation." Replace them with a specific problem, a specific method, a specific metric, and a specific deadline. The difference between "use AI to improve customer experience" and "reduce invoice processing from 14 steps to 5 with a measurable reduction in error rate by Q3" is the difference between a pilot that dies and an initiative that scales.
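
If that still feels abstract, here is a minimal sketch in Python of what "precise enough to fail clearly" could look like. Every name in it, the fields, the banned words, the invoice example, is illustrative rather than a prescription.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInitiative:
    """A strategy statement precise enough to fail clearly."""
    problem: str       # the specific workflow being changed
    method: str        # how AI changes it
    metric: str        # the number that moves if this works
    target: float      # where that number needs to land
    deadline: date     # when we check

    def validate(self) -> None:
        # Reject the vague words that let pilots die quietly.
        banned = {"efficiencies", "innovation", "transformation", "synergies"}
        text = f"{self.problem} {self.method} {self.metric}".lower()
        if any(word in text for word in banned):
            raise ValueError("Too vague to fail clearly: name the workflow, method, and metric.")

invoice_pilot = AIInitiative(
    problem="Invoice processing takes 14 manual steps",
    method="AI extraction and matching collapse it to 5 steps",
    metric="invoice error rate",
    target=0.02,                  # 2% error rate or better
    deadline=date(2026, 9, 30),   # end of Q3
)
invoice_pilot.validate()          # a vague initiative fails this check before it spends a dollar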

2. You Automated a Bad Process and Made It Worse

There is one line worth remembering from this entire post. The only thing worse than a bad process is an automated bad process.

The bolt-on approach looks like this. Take an existing workflow, find the slowest or most painful step, drop an AI tool on top of it, and call it transformation. But you have not changed how the work gets done. You have made the old way faster without asking what happens when a human is removed from that step.

Air Canada launched a chatbot to handle customer inquiries without rethinking how their support process was designed. When a passenger whose grandmother had just passed away asked about bereavement fares, the bot gave him incorrect information. When he tried to claim what he had been promised, Air Canada argued the chatbot was a separate legal entity. A tribunal ruled otherwise. That is what bolt-on looks like.

The companies seeing real results are not bolting AI onto existing workflows. They are rebuilding those workflows around what AI actually does well: searching, synthesizing, summarizing, and drafting. And they are keeping humans focused on what humans do well: judgment, relationships, and decisions that carry consequence.

McKinsey found that top performing companies are nearly three times as likely to have fundamentally redesigned their workflows around AI, and that factor made one of the strongest contributions to business impact of anything they tested.

The question to ask is not "where can we add AI?" It is "if we were building this from scratch today, knowing what AI can do, what would it look like?"
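
To make that division of labor concrete, here is a rough sketch with stand-in functions. The point is the structure, not the code: AI handles retrieval and drafting, and nothing reaches a customer without passing through the human step.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    owner: str                 # "ai" for search/synthesis/drafting, "human" for judgment
    run: Callable[[str], str]

def retrieve_policy(query: str) -> str:
    return f"[retrieved policy text for: {query}]"    # stand-in for real retrieval

def draft_response(context: str) -> str:
    return f"[reply drafted from: {context}]"         # stand-in for a model call

def human_review(draft: str) -> str:
    # In a real system this is a review queue, not a function call.
    print(f"REVIEW REQUIRED before anything reaches a customer:\n{draft}")
    return draft

workflow = [
    Step("find the relevant policy", "ai", retrieve_policy),
    Step("draft the answer", "ai", draft_response),
    Step("approve or correct", "human", human_review),
]

payload = "bereavement fare, booked after travel"
for step in workflow:
    payload = step.run(payload)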

3. Your Data Is Held Together with Duct Tape and Your Governance Has Holes in It

Think about where your organization's knowledge actually lives right now. Not where it is supposed to live. Where it actually lives.

It is in spreadsheets one person in finance maintains and no one else can find. It is in email threads from 2019 that contain the only record of why a decision was made. It is in different systems that do not talk to each other.

In a survey of 600 chief data officers by Informatica, 43% said data quality, completeness, and readiness are among the biggest obstacles keeping AI projects from reaching production. That is what kills initiatives before they start.

But even if your data is clean and connected, there is a second gap that most organizations have not closed. They do not have the frameworks to trust what AI produces, or to protect themselves when it goes wrong.

Six questions every organization should be able to answer for any AI deployment:

What is this deployment producing, and what metric proves it?
What data can it access, and who approved that access?
How is confidential information handled at every step?
Who is accountable for what it outputs?
How are its decisions logged and audited?
What are the exit criteria if it is not working?

If you cannot answer all six, you have a governance gap. That is where you start.

Here is the part most organizations miss. Governance is not just about what you have approved. It is about the tools your people are already using whether you know about them or not. Every tool carries a different risk profile. A locally hosted model and a free browser plug-in are not in the same conversation. If you do not have visibility into what is running across your organization, you cannot assess the risk. And if you cannot assess the risk, no policy document is going to fix that.
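
Here is a deliberately small sketch of what that visibility gap looks like in practice, with hypothetical tool names and risk descriptions:

```python
# Hypothetical inventory: you cannot assess risk for tools you have not logged.
APPROVED_TOOLS = {
    "internal-hosted-llm": "low risk: data never leaves your network",
    "enterprise-copilot": "medium risk: vendor contract governs data handling",
    "free-browser-plugin": "high risk: no contract, unknown data retention",
}

def assess(tool: str) -> str:
    # An unknown tool is not "medium risk by default" -- it is unassessed,
    # which is its own category of problem.
    return APPROVED_TOOLS.get(tool, "UNASSESSED: no visibility, no policy applies")

for tool in ["internal-hosted-llm", "free-browser-plugin", "shadow-ai-notetaker"]:
    print(f"{tool}: {assess(tool)}")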

Your customers are going to start asking about this if they have not already. What are your AI policies? How is their confidential information being handled? What guardrails are in place? The companies that can answer those questions clearly will win business their competitors cannot.

4. Leadership Has Not Done the Reps

Here is the uncomfortable truth. AI leadership is now a skill. And most leadership teams do not have it yet. Not because they are not capable. Because they have not done the work.

Leaders who do not use AI do not develop the judgment to evaluate it. And leaders who do not feel confident evaluating it do not prioritize using it. The organization pays the price. Initiatives get approved based on vendor pitches rather than organizational readiness. Pilots get funded that never had a realistic path to scale. And when they fail, leadership draws the wrong conclusion.

McKinsey found that AI top performers are three times more likely to have senior leaders who actively demonstrate ownership of AI initiatives. Cisco found that when leaders actively engage with AI, team adoption doubles.

The signal you send as a leader is not a soft cultural thing. It directly determines whether your people actually adopt the tools you are investing in.

This does not mean becoming technical. It means becoming dangerous enough to evaluate what is put in front of you. Ask to see the data that is supposed to power an initiative before it gets approved. Sit with the people doing the work. Ask them what is actually slowing them down.

The questions you are building toward are specific. Is this use case well-defined, or does it just sound good in a deck? Do we have the data to support it? What does success look like in 90 days? Who is accountable if it does not work? And what are the exit criteria if it is not working?

If you cannot answer those independently, without relying on the people proposing the initiative to also be the ones evaluating it, you are not governing AI investment. You are rubber stamping it.

5. You Are Teaching People to Use Tools When You Should Be Teaching Them to Think in Systems

OpenAI's enterprise data shows that the heaviest AI users save over 10 hours per week. More than a full extra workday. But they are not using a different tool. They are using the same models as everyone else.

BCG found that fewer than one in three companies have upskilled even a quarter of their workforce to use AI effectively.

Picture two analysts on the same team. One opens the AI tool when they need to draft a report, gets a generic result, edits it for an hour, and moves on. The other has mapped their entire reporting workflow, from data sources and assumptions to decision points and output formats, into a system they can run and refine. Same tool. Same job. One saves an hour; the other has fundamentally changed what their role looks like.

The difference is how they see their work. The second analyst is not just using AI. They are thinking in systems. They have looked at their entire process, broken it into steps, identified what is repeatable and what requires judgment, and defined inputs and expected outputs. That is a fundamentally different skill than knowing how to write a good prompt.
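
For illustration, here is roughly what that system looks like written down, with invented file names, thresholds, and steps:

```python
# The report as a system: declared inputs, repeatable steps, marked judgment points.
REPORT_SPEC = {
    "data_sources": ["sales_export.csv", "pipeline_snapshot.csv"],
    "assumptions": ["revenue recognized at close", "FX rates from month-end"],
    "repeatable": [
        "load and reconcile the sources",
        "summarize variances over 5% against last quarter",
        "draft commentary in the standard template",
    ],
    "judgment": [
        "decide which variances are signal and which are noise",
        "approve the narrative before it ships",
    ],
}

def run_report(spec: dict) -> None:
    print(f"Inputs: {spec['data_sources']}")
    print(f"Assumes: {'; '.join(spec['assumptions'])}")
    for step in spec["repeatable"]:
        print(f"  [AI-assisted] {step}")     # the part the model runs every cycle
    for step in spec["judgment"]:
        print(f"  [HUMAN]       {step}")     # the part that stays with the analyst

run_report(REPORT_SPEC)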

Teaching people how to use a tool is not the same as teaching them to think differently about their work. That is like teaching someone to type and calling it writing.

BCG found that future-built companies are four times more likely to dedicate structured time for AI skill building than lagging firms. Four times. If you do not carve out actual hours, none of the above happens.

What This Actually Requires

Every one of these five failures has a root cause that sits underneath the technology. Unclear strategy. Unredesigned workflows. Ungoverned data. Untrained leadership. Undertrained teams.

The organizations closing the gap are not winning because they have better models. They are winning because they have paired the right infrastructure with the discipline to deploy it precisely. The software enforces the governance. The practice builds the capability. And the two reinforce each other.

Governance cannot be aspirational. A policy document that depends on humans remembering to follow it will fail at scale. What scales is governance that is architectural. Mechanically enforced. Auditable by design.
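
To show the shape of that, here is a deliberately tiny sketch with an invented policy table. Two properties matter: unknown actions fail closed, and every decision leaves an audit record whether it was allowed or not.

```python
import json
import time

# One choke point: every AI action is classified, checked, and logged.
# The action names and policy table are invented for illustration.
POLICY = {"draft": "allow", "summarize": "allow", "send_external": "deny"}
AUDIT_LOG = []

def governed_call(action: str, payload: str) -> str:
    decision = POLICY.get(action, "deny")    # unknown actions fail closed
    AUDIT_LOG.append({"ts": time.time(), "action": action, "decision": decision})
    if decision != "allow":
        raise PermissionError(f"'{action}' blocked by architecture, not by memory of a PDF")
    return f"[model output for {action}: {payload}]"

governed_call("draft", "quarterly summary")
try:
    governed_call("send_external", "customer data")
except PermissionError as error:
    print(error)

print(json.dumps(AUDIT_LOG, indent=2))       # the audit trail exists by design
```

Because the policy lives in the call path, following it is not something anyone has to remember to do.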

The question worth sitting with is this: if your AI initiatives stopped tomorrow, could anyone in your organization explain exactly what they were producing, how it was being governed, and what it would have taken to declare them a success?

If the answer is no, that is where to start.

Governance That Is Architectural, Not Aspirational

Lancelot enforces AI governance through architecture, not policy documents. Every action classified, every decision auditable, every outcome verified.

See How It Works