"We're going to see the largest disruption ever in 2026 from companies that don't make this change."
That warning comes from Matt Fitzpatrick, who spent more than a decade at McKinsey, rising to Global Head of QuantumBlack Labs, the firm's AI software development, R&D, and global AI products division. A year ago he joined Invisible Technologies as CEO; the company trains large language models for most major providers and builds custom AI workflows for enterprises.
In a recent episode of the Moonshots podcast, Fitzpatrick laid out a comprehensive framework for enterprise AI success—and an unsparing diagnosis of why most companies are failing to capture value from their AI investments.
His perspective matters. At McKinsey, he led AI implementation across hundreds of enterprise clients. At Invisible, he works directly with the hyperscalers building foundation models and the enterprises deploying them. Few people have a clearer view of what actually works in enterprise AI—and what doesn't.
The gap between AI adoption and AI transformation isn't closing. It's widening. And 2026 is the year that gap becomes a competitive chasm.
The 2026 Thesis: Why This Year Is Different
Fitzpatrick's core argument is that 2026 represents an inflection point for enterprise AI. The technology has matured enough that the efficiency gains are real and measurable—but most organizations haven't adapted their operations to capture them.
"The efficiency gain between the company that uses AI versus the one that doesn't is just too insurmountable to try and make up for if you're not using these technologies," he notes. "With AI, there's not going to be any going back to the way things used to be."
But the impact won't be uniform across industries. Fitzpatrick draws a clear distinction between sectors facing imminent disruption and those where AI's effects will be more gradual.
High-impact sectors include media, legal services, and business process outsourcing—"areas where the structure of what the industry does is going to change." The common thread: knowledge work involving the production of large amounts of documentation.
Lower-impact sectors include oil and gas and real estate, where "the function of what they do is going to stay pretty consistent." The decision on which office building to buy, Fitzpatrick observes, "is going to function pretty similar to what it did five, six years ago."
This differential matters for resource allocation. Not every part of your business needs to transform at the same pace.
Why Most Companies Are Getting AI Wrong
Fitzpatrick identifies four fundamental challenges that explain why most AI initiatives fail to deliver enterprise value. Each represents a common pattern that organizations must break.
Challenge #1: The Data Foundation Problem
"The first thing I would start with is making sure you have the data right before you can even start with AI," Fitzpatrick states bluntly. "If you tried to build an AI agent on fragmented customer and product data, it's going to break by definition."
This might seem obvious, but most organizations skip this step. They rush to deploy AI tools on top of messy, siloed data infrastructure—then wonder why the results disappoint.
The good news: you don't need to fix all your data. "If you start with the question of 'what data do I need for this specific use case,'" Fitzpatrick explains, "you can probably have five to six core data variables you need." The key is being tactical about what data needs to be right, rather than attempting a boil-the-ocean data transformation.
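To make "tactical" concrete, a use-case-scoped data check can be a few dozen lines of code rather than a data transformation program. Here's a minimal sketch, assuming a hypothetical order-history use case; the field names, coverage thresholds, and pandas-based approach are illustrative, not something Fitzpatrick prescribes:

```python
# Minimal sketch: a data readiness check scoped to one use case.
# Field names and coverage thresholds are hypothetical illustrations.
import pandas as pd

# The five or six core variables this particular use case actually needs.
REQUIRED_FIELDS = {
    "customer_id": 1.00,  # minimum fraction of non-null rows required
    "order_date": 0.99,
    "sku": 0.99,
    "quantity": 0.95,
    "unit_price": 0.95,
}

def check_readiness(df: pd.DataFrame) -> list[str]:
    """Return the problems blocking this specific use case, if any."""
    problems = []
    for field, min_coverage in REQUIRED_FIELDS.items():
        if field not in df.columns:
            problems.append(f"missing column: {field}")
            continue
        coverage = df[field].notna().mean()
        if coverage < min_coverage:
            problems.append(
                f"{field}: only {coverage:.1%} populated (need {min_coverage:.0%})"
            )
    return problems
```

If this returns an empty list, the data is right for this use case, even if the rest of the warehouse is still a mess.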
Challenge #2: The "Thousand Flowers" Trap
"I would not start with letting a thousand flowers bloom," Fitzpatrick advises. "I would start with what are two to three things that if you do them well, materially move the needle for your business."
Most organizations make the opposite choice. Excited about AI's potential, they encourage experimentation across every function. The result: scattered pilots, duplicated efforts, and no concentrated impact.
Fitzpatrick offers a vivid analogy for this failure mode: "It's like, 'Here's a million people for free and they're all geniuses and it fails.'" The problem isn't capability—it's focus. When you give teams unlimited AI potential without strategic direction, you get activity without impact.
Challenge #3: Wrong Ownership Model
Perhaps Fitzpatrick's most counterintuitive recommendation: "Do not locate this in your technology organization. Take your best operator, your best ops person, give them an operational KPI and track it to that."
The failure mode he's addressing is common: AI initiatives led by IT teams with technical metrics, disconnected from business outcomes. The alternative is giving ownership to operational leaders who understand the business goals, know what good work looks like, and can make the necessary tradeoffs.
"If you have a clear sense of which operational person is leading it and how they're marshaling resources around it, and you have a clear KPI, you're going to make progress," Fitzpatrick observes. "If you let a thousand flowers bloom, none of them have an operational metric, and you kind of end up with a science project dynamic."
Challenge #4: No Real Benchmarks
The AI industry is obsessed with benchmarks—coding tests, reasoning evaluations, general intelligence measures. But Fitzpatrick argues these are largely irrelevant for enterprise deployment.
"Most of the public focus to date has been on the large public benchmarks for things like coding," he notes. "I think the problem is, though, if you think about enterprises or small businesses, your benchmark for most cases is not a broad-based cognitive benchmark—it's accuracy or human equivalence on a specific task."
This means enterprises need to build their own evaluation frameworks. If you're deploying AI for claims processing, you need a benchmark that tests claims processing accuracy against your specific standards. Generic LLM performance tells you nothing about whether the AI will work for your use case.
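In practice, such a benchmark can be as simple as a golden set of historical cases scored against your own human baseline. A minimal sketch; the claims, labels, baseline figure, and `model_fn` interface are all hypothetical:

```python
# Minimal sketch: a task-specific benchmark in place of public ones.
# The golden set and the human baseline figure are illustrative.
from collections.abc import Callable

# Each case pairs an input with the decision your own experts made on it.
GOLDEN_SET = [
    {"claim": "Water damage, covered under policy section 4.2 ...", "label": "approve"},
    {"claim": "Undisclosed pre-existing condition ...", "label": "deny"},
    # ... hundreds more, sampled from real historical claims
]

HUMAN_BASELINE = 0.94  # measured accuracy of the current human process

def evaluate(model_fn: Callable[[str], str]) -> float:
    """Score a model on the golden set and compare to the human baseline."""
    correct = sum(model_fn(case["claim"]) == case["label"] for case in GOLDEN_SET)
    accuracy = correct / len(GOLDEN_SET)
    print(f"accuracy {accuracy:.1%} vs human baseline {HUMAN_BASELINE:.0%}")
    return accuracy
```

The point is the denominator: you are measuring against your standards on your cases, not against a leaderboard.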
A Framework for Success
Based on patterns from organizations that do capture AI value, Fitzpatrick outlines a four-step approach.
Step 1: Focus on Value
Start by identifying two to three use cases that will materially impact your business. Not the easiest to implement—the most valuable if done well.
Common high-value areas include customer service, forecasting, digital marketing, and inventory management. But the specifics depend on your business model and competitive dynamics.
Fitzpatrick offers a useful test: "Would you bet your annual bonus that whatever use case you deploy works?" If you can't answer yes, you probably don't have clear enough success criteria.
Step 2: Get a Real Proof of Concept
"Not a strategy document," Fitzpatrick emphasizes. An actual working prototype.
The paradigm for AI deployment differs fundamentally from traditional software development. "If you take the paradigm of how machine learning is deployed where you spend months and months building something and then it works—this is kind of the exact opposite paradigm in that you can get a prototype up and running in a month, but you have to do a lot of testing and validation to make sure you can trust it."
For your first use case, Fitzpatrick recommends running an RFP and selecting a third-party vendor that gets compensated based on results. "If you do it in-house, the odds are the in-house team has not had a lot of experience with this. And so you also can't hold them accountable in the same way of 'you get paid if it works.'" Tying compensation to outcomes limits your risk while you're still learning.
Step 3: Establish Operational Ownership with Clear KPIs
Give ownership to operational leaders, not technology teams. Define success in operational terms: CSAT scores, time per call, inventory days, stockouts—whatever metrics actually matter for the business function you're transforming.
"Make sure you know which operational person is leading it and how they're marshaling resources around it," Fitzpatrick advises. "That should be your guide."
Step 4: Buy or Rent—Don't Build
For most companies, building AI capabilities in-house is the wrong approach.
"The idea that everyone can buy, everyone can hire people to do this is challenging," Fitzpatrick observes. "The challenge of trying to adapt an existing IT function to do this is—many of the skill sets that people hire for, even like 'do they know Python'—there are gaps in that."
The practical implication: partner with vendors who can deliver outcomes, rather than trying to develop expertise you don't have. "I think the answer that most companies I've seen who don't have the resources in-house, who are being directive about how to push this forward, is they are finding ways to rent or buy this externally and to partner with folks that can allow them to do it."
Why Human-in-the-Loop Persists
One of Fitzpatrick's strongest convictions runs against the prevailing narrative of full AI autonomy.
"Human in the loop is going to be a feature, not a bug, for a long, long time," he states. "The entire red herring of the enterprise is that autonomous agents will do all of this with no human involvement. I actually think you're going to need more and more humans at every step."
The reasoning is practical. LLMs are trained on precedent data. They excel at tasks where historical patterns provide reliable guidance. But many enterprise situations lack that precedent—complex edge cases, novel situations, judgment calls where context matters enormously.
Fitzpatrick cites the Klarna cautionary tale as evidence. The fintech company announced a move to fully AI-powered customer service, then "about 8 to 12 months later, they basically announced they were rolling the whole thing back and moving back entirely to human contact center agents."
The failure mode: trying to eliminate humans entirely, rather than optimizing human-AI collaboration. "You actually would never want to move to doing everything agentic," Fitzpatrick argues. "You're going to want humans in the loop in almost every industry, in almost any topic."
Swiss Gear: A Brief Example
While Fitzpatrick shared several case studies, one illustrates the framework particularly well.
Swiss Gear, the luggage brand, struggled with inventory forecasting across a fragmented data landscape—750 separate data tables across products, customers, and operations that couldn't be unified for analysis.
Using Invisible's data platform, they consolidated those tables and tuned forecasting to minimize stockouts while holding inventory in check. The result: reliable demand predictions for twice as many SKUs, achieved in "a couple of months."
The example demonstrates the pattern: focus on a specific high-value use case (inventory forecasting), get the required data right (consolidate the 750 tables), measure against operational metrics (stockouts, coverage), and work with an external partner rather than building in-house.
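The article doesn't describe Invisible's forecasting method, but the stockout-versus-inventory tradeoff it optimizes has a standard textbook form: the safety stock you must hold scales with forecast error, so tighter per-SKU predictions directly shrink the buffer needed to hit a given service level. A minimal sketch, assuming normally distributed demand; all figures are invented:

```python
# Minimal sketch of the stockout/inventory tradeoff, using the textbook
# safety-stock formula. Not Invisible's actual method; figures are invented.
import math
from statistics import NormalDist

def safety_stock(service_level: float, demand_std: float, lead_time_days: float) -> float:
    """Units held beyond the forecast to hit a target service level."""
    z = NormalDist().inv_cdf(service_level)  # e.g. ~1.64 for 95%
    return z * demand_std * math.sqrt(lead_time_days)

def reorder_point(daily_forecast: float, lead_time_days: float,
                  service_level: float, demand_std: float) -> float:
    """Inventory level at which to reorder a given SKU."""
    expected_demand = daily_forecast * lead_time_days
    return expected_demand + safety_stock(service_level, demand_std, lead_time_days)

# Halving per-SKU forecast error halves the safety stock outright,
# which is how better prediction reduces stockouts and inventory at once.
print(reorder_point(daily_forecast=40, lead_time_days=14,
                    service_level=0.95, demand_std=12))
```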
What's Coming in 2026
Looking ahead, Fitzpatrick identifies three developments that will reshape enterprise AI this year.
Multi-Agent Teams
"One of the challenges is if you're a large enterprise or medium-sized company implementing a use case, you won't necessarily have one decisioning agent that does everything," Fitzpatrick explains. Instead, companies will "train task-specific agents for individual tasks, usually orchestrated by an LLM."
This architecture allows pinpoint accuracy on specific tasks while using broader LLM logic to coordinate across them. "We're just starting to see the green shoots of more and more folks having success with that."
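A minimal sketch of the pattern: narrow, individually testable agents coordinated by a routing step that in practice would be an LLM call. The agent names and the `llm_route` stub are hypothetical, not a specific framework's API:

```python
# Minimal sketch: task-specific agents orchestrated by a routing LLM.
# Agents are stubbed; llm_route stands in for a real planning call.
from collections.abc import Callable

AGENTS: dict[str, Callable[[str], str]] = {
    "extract_invoice": lambda doc: "...fields pulled by a fine-tuned extractor...",
    "check_policy":    lambda doc: "...verdict from a narrow policy checker...",
    "draft_reply":     lambda doc: "...reply drafted by a tone-tuned model...",
}

def llm_route(request: str) -> list[str]:
    """Placeholder for the LLM call that plans which agents run, in what order."""
    return ["extract_invoice", "check_policy", "draft_reply"]

def handle(request: str) -> str:
    result = request
    for step in llm_route(request):
        result = AGENTS[step](result)  # each agent stays narrow and testable alone
    return result
```

Because each agent is scoped to one task, it can be measured against the kind of task-specific benchmark described earlier, while the orchestrator carries the broader logic.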
The Multimodal Leap
"More and more video, images, audio are going to become a bigger and bigger part of how people engage with these models," Fitzpatrick predicts. Audio interfaces may be particularly transformative—enabling natural conversation rather than text-based interaction.
"The way you'll be able to speak to them, interact with them, visualize them is going to be a really interesting moment for 2026. And I don't think that will all be text-based like it has predominantly historically."
RL Gyms and Mirror Worlds
Perhaps the most technically significant development: simulated environments for testing AI before production deployment.
"Think of that as actually creating simulated environments or digital twins for tasks you might want to test," Fitzpatrick explains. The applications span coding environments, contact centers, manufacturing—anywhere you want to "simulate a series of function calls, tasks, or environments" before risking production systems.
This capability addresses one of the core challenges of enterprise AI: validating that systems work before they touch real customers, real data, real operations.
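As a sketch of the idea, a "gym" for a contact-center task can be a small simulated environment with the conventional reset/step interface, so an agent policy can be scored on thousands of synthetic tickets before it touches production. The tickets, actions, and scoring below are all invented for illustration:

```python
# Minimal sketch: a simulated contact-center environment for pre-production
# testing. Tickets, expected actions, and rewards are hypothetical.
import random
from collections.abc import Callable

class ContactCenterSim:
    TICKETS = [
        ("refund request", "issue_refund"),
        ("password reset", "send_reset_link"),
        ("billing dispute", "escalate_to_human"),  # edge case: keep a human in the loop
    ]

    def reset(self) -> str:
        self.ticket, self.expected = random.choice(self.TICKETS)
        return self.ticket  # the observation the agent sees

    def step(self, action: str) -> tuple[float, bool]:
        reward = 1.0 if action == self.expected else -1.0
        return reward, True  # one-step episodes, for simplicity

def evaluate(policy: Callable[[str], str], episodes: int = 1000) -> float:
    """Average reward over many simulated tickets; run before any deployment."""
    env, total = ContactCenterSim(), 0.0
    for _ in range(episodes):
        observation = env.reset()
        reward, _ = env.step(policy(observation))
        total += reward
    return total / episodes
```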
The Bottom Line
Fitzpatrick's framework distills to four imperatives for enterprise AI success:
- Focus ruthlessly on two to three use cases that materially impact the business
- Get the data right for those specific use cases—not everything, but what you need
- Give operational ownership with clear KPIs to business leaders, not IT
- Buy or rent capabilities rather than building from scratch
The gap between companies that execute on this framework and those stuck in perpetual experimentation is widening. As Fitzpatrick warns, the efficiency differential between AI-enabled and non-AI-enabled companies is becoming "insurmountable."
The blueprint is public. The frameworks are well-documented. The only remaining question is execution.
2026 is the year that question gets answered—for better or worse—for most enterprises.
At OuterEdge, we help organizations move from AI experimentation to enterprise transformation. Our approach mirrors what Fitzpatrick describes: operational ownership, clear KPIs, focused use cases, and deep integration rather than scattered pilots. If you're ready to close the gap between AI adoption and AI value, book a strategy call to discuss your AI transformation.