Most of the leaders I work with can point to AI initiatives that are running. Pilots completed. Vendors engaged. Teams reporting real productivity gains at the project level. And yet when they step back and look at the business as a whole, the picture is far less clear. Revenue hasn’t moved materially. Margins haven’t improved in ways the board can point to. The AI activity is real. The business impact is not.
This is not an isolated experience. PwC’s 2026 Global CEO Survey of more than 4,400 chief executives found that 56% saw neither higher revenues nor lower costs from AI.[1] RAND Corporation research found that more than 80% of AI projects fail, roughly twice the failure rate of conventional IT projects.[2] McKinsey found that only 31% of organizations are scaling AI enterprise-wide, and only 39% report any measurable effect on enterprise-level performance.[3]
I have watched this pattern play out across industries, and I saw a version of it unfold during the IoT adoption cycle a decade ago. The instinct then, as now, is to look at the technology: a better model, a different vendor, a more sophisticated use case. But in most organizations I work with, the technology is not the problem. Something structural is missing. And until that gap is addressed, the pattern repeats: more pilots, more activity, and business impact that never quite arrives.
Here is what actually needs to change.
Don’t Be Fooled by a Successful Pilot
The most costly misunderstanding I encounter in enterprise AI is treating a successful pilot as evidence that scaling will follow. It won’t, and the reasons are structural rather than technical.
Pilots are designed for controlled conditions. Data is curated manually. Integrations are one-off connections. Workflows that look automated often still rely on human review. These shortcuts are appropriate for validating a concept quickly. But they create a predictable gap when the pilot moves into a real operational environment shaped by legacy systems, fragmented data sources, governance requirements, and organizational dependencies the pilot never had to contend with. Forrester estimates that only 10-15% of AI projects reach sustained production use and that over 60% fail to scale beyond controlled environments.[4] In most cases I see, the blocker is not the AI. It is that the infrastructure, integration work, and operational upgrades required for enterprise deployment were never funded or planned as part of the pilot.
The pilot proved the AI could work. Scaling revealed the enterprise was not prepared to support it.
When I start working with a new client, one of the first things I do is ask a set of questions that most organizations have not answered before a pilot launches. Are the data sources this initiative depends on actually accessible at scale? Can the integrations support production use, or were they built for a single use case? Who owns the scaling work if this pilot succeeds, and do they have the authority and budget to actually do it? If those questions are hard to answer before the pilot starts, they will be much harder once it is done.
What I recommend: Before any pilot is approved, require answers to three scaling-readiness questions: who will own the operationalization work if this succeeds, what infrastructure investments are required to support production deployment, and how those investments will be funded. If those answers don’t exist before the pilot launches, make finding them part of the pilot’s scope and success criteria. A pilot that cannot answer these questions isn’t done, regardless of what the model performance numbers say.
The Data and Infrastructure Gap Is Usually Bigger Than It Looks
Beneath most of the stalled AI initiatives I work through is a data and connectivity problem that was never properly addressed. AI capabilities depend on something more fundamental than advanced models: reliable, consistent access to the right operational data. In most enterprises, that access is considerably less solid than it appears during a controlled pilot.
Enterprise data is distributed across internal applications, operational systems, and physical assets spread across multiple sites and geographies. Many of those assets were built for a pre-connectivity world. Legacy equipment operates in proprietary networks outside traditional IT domains, generating data that remains trapped on the asset itself or stranded in siloed systems. In my experience, the data challenges that block AI at scale fall into three categories: data that does not exist because assets were never instrumented to collect it, data that exists but is stranded and inaccessible, and data constrained by rights and governance issues that nobody mapped before the initiative launched.
Beyond data, I consistently see four operational infrastructure barriers break the transition from pilot to production regardless of industry. Handcrafted integrations were never designed for enterprise reliability or reuse. Legacy operational assets cannot provide the data AI models require. AI-generated insights cannot trigger action because operational systems are not connected into execution workflows, producing what I call insight without execution. And infrastructure that is technically running is still not managed for the continuous data flow that AI-driven operations require.
These are not IT problems. As AI becomes embedded in operational decision-making, these gaps move from manageable inefficiencies to sources of operational, financial, and competitive risk. The question I put to every leadership team before they approve the next AI initiative is this: are the systems that generate the data this initiative depends on truly accessible, connected, and governable at enterprise scale? If the answer is uncertain, the initiative may be dead on arrival regardless of the model.
For organizations that want to go deeper on this, we have addressed the data and infrastructure readiness challenge in detail in two earlier posts: “Your AI initiatives may be dead on arrival” and “Your AI pilot worked. So why isn’t it scaling?”
What I recommend: Before committing further investment, conduct a focused data and connectivity audit of any operational area where AI is planned or already in use. The goal is not a comprehensive enterprise data inventory, which tends to become a project in itself, but a specific answer to one question: is the data this initiative depends on accessible, connected, and governable at the scale we need? In my experience, this audit surfaces blockers in days that would otherwise derail a program months later at considerably greater cost.
Managing AI as a Business Capability, Not a Collection of Projects
Even organizations that have made real progress on data and infrastructure often still cannot connect AI activity to enterprise performance. When I work through why, the answer is almost always the same: AI is being managed as a series of independent experiments rather than as a coordinated business capability.
Without a formal structure governing the portfolio, I see the same problems compound across every organization. Multiple initiatives compete for the same limited IT resources and budget with no principled way to decide which ones move forward. Teams in different departments procure overlapping tools because no one has visibility across the portfolio. When a pilot shows promise, no one has clear authority or budget to take it from experiment to operation. When AI-related risks surface, including compliance issues, biased outputs, or operational failures, accountability is undefined. And the gap between what AI is doing at the project level and what it is contributing to business performance stays impossible to close, because no one is measuring at that level.
What I help organizations build is the management infrastructure to govern AI the same way they govern any other critical business function: with defined strategy, clear accountability, investment frameworks, risk oversight, and performance management mechanisms that operate at the level of the whole organization rather than individual projects. We call this the AI Operating Function, and our approach is grounded in the Strategy of Things 9-Layer AI Operating Model. The details of that model are worth a separate conversation, but the core principle is straightforward: AI that is directed, prioritized, governed, and measured as a coordinated business capability produces fundamentally different outcomes than AI managed as a collection of experiments.
McKinsey’s research on high-performing AI organizations reinforces what I observe in practice. Nearly half of respondents in those organizations report that senior leaders show clear ownership and long-term commitment to AI, sponsoring initiatives, protecting budgets, and modeling usage, compared with only 16% in lagging organizations.[3] Those organizations did not get better results by finding better models. They got better results by building better management infrastructure around AI.
This structure does not need to look the same at every organization. A company running five AI initiatives needs a lighter version than a global enterprise managing thirty. But some version of it is necessary at every scale, because the absence of it creates the same problems regardless of organization size. For many organizations, particularly those without a dedicated AI leadership function, a fractional Chief AI Officer is how they get this structure in place quickly, without the cost and overhead of building a permanent executive function from the ground up.
What I recommend: Map your current AI portfolio in a single document: every active initiative, who owns it, what it is intended to achieve, how it is being funded, and how prioritization decisions are currently being made. If that exercise is harder than it should be, the difficulty itself tells you something important about the state of your governance. Most organizations that do this for the first time find the coordination gaps become immediately visible, and that visibility is the starting point for building the structure that closes them.
You Don’t Have to Build Everything Yourself
The AI ecosystem is moving fast and remains fragmented. Vendor capabilities are evolving rapidly, new tools are emerging constantly, and the gap between what is available off the shelf and what most organizations are actually using is significant. Trying to build everything in-house is one of the fastest ways to stall an AI program, particularly for organizations operating with constrained IT resources and limited budget.
The build-buy-partner decision is one of the most consequential choices organizations make in their AI programs, and most are making it by default rather than by design. I have watched organizations spend eighteen months and significant budget building capabilities that were available off the shelf, and I have watched others buy solutions that didn’t fit their operational environment because no one with sufficient technical and business judgment was in the room when the contract was signed. Neither outcome is inevitable, but both are common when the decision isn’t treated with the strategic discipline it deserves.
For organizations that want to go deeper on how to approach this decision, we have addressed the build-buy-partner framework in detail in our post “The AI Build–Buy–Partner Decision: A Strategic Framework for Executives.”
What I recommend: Before committing internal resources to building an AI capability, ask honestly whether the same outcome could be achieved faster and more reliably through a buy or partner approach. The organizations I work with that make the most consistent progress treat the build-buy-partner decision as a deliberate strategic choice rather than a default, and they revisit it regularly as the market evolves.
A Quick Diagnostic: Where Does Your Organization Stand?
Before I close, here is a practical test I use at the start of every engagement. The questions do not require a formal assessment, just an honest conversation with the right people in the room.
On pilot design and scaling readiness: For every active AI initiative in your portfolio, can you name the person specifically responsible for the scaling work if the pilot succeeds, and confirm they have the authority and budget to do it?
On data and infrastructure: Before your last AI initiative launched, did anyone confirm that the data it depends on is accessible, connected, and governable at enterprise scale? Or did that question surface later, after the pilot was already underway?
On portfolio governance: When multiple AI initiatives compete for the same IT resources and budget, how does your organization decide which ones move forward? Is that decision made systematically, or by whoever makes the strongest case in the room?
On accountability: When an AI-related risk arises, whether a compliance issue, a biased output, or an operational failure, is it immediately clear who is accountable? Or does that conversation have to happen first?
On business impact: Can you draw a direct line between your current AI investments and measurable business outcomes, not at the pilot level, but at the level of overall business performance?
If several of these questions are harder to answer than they should be, that gap is worth a conversation. I work with leadership teams to assess where their AI program stands against these foundations and what needs to change to move from activity to impact. Reach out at Strategy of Things. We would welcome the conversation.
References
[1] “PwC’s 29th Global CEO Survey: Leading through uncertainty in the age of AI.” PwC. January 2026.
[2] Ryseff, J., De Bruhl, B., and Newberry, S. “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed.” RAND Corporation. August 2024.
[3] “The State of AI in 2025: Agents, Innovation, and Transformation.” McKinsey & Company / QuantumBlack. November 2025.
[4] Pandey, T. “Forrester picks holes in IT’s AI story, say just 10–15% pilots scale.” The Economic Times. January 22, 2026.
This article is part of a continuing series aimed at providing senior leaders and managers with a practical working knowledge of artificial intelligence and how to manage it as a business capability. It draws on direct observations from fractional Chief AI Officer engagements and reflects patterns encountered consistently across industries and organization sizes.
Thanks for reading. If you found this post useful, please share it with your network. Please subscribe to our newsletter to be notified when we publish new articles. You can also follow us on Twitter (@strategythings), LinkedIn, or Facebook.
Related posts:
The AI Build–Buy–Partner Decision: A Strategic Framework for Executives
AI Is Everywhere. Enterprise Impact Isn’t. Here’s the Structure That Closes the Gap.
