AI is everywhere right now. Every week there’s a new tool, a new model, a new promise that this one will change everything.
We've found that the problem usually isn’t a lack of ambition or budget; it’s that organizations start with the technology and work backwards to the problem.
(And not in the best "Working Backwards" kind of way!)
The AI Impact Methodology was created to help leaders slow down just enough to make better decisions—so AI investments actually improve the business and the way people work, instead of becoming expensive experiments.
Most AI conversations start with the technology:
“What tools should we use?”
“Should we build or buy?”
“What’s our AI strategy?”
But those questions skip the most important one:
Where are the real bottlenecks, frustrations, or missed opportunities that, if we fixed them, would meaningfully change results—not just add another tool to the stack?
Before any talk of models or platforms, we need a sharp, shared answer to that question.
Clarifying this isn’t a theoretical exercise—it’s the filter that determines where AI can genuinely move the needle, and where it would just create more noise.
So this methodology starts with impact: real, tangible improvements in the outcomes and metrics the business already cares about.
There’s a common pattern we see across organizations and industries: AI gets introduced before the business problem is truly understood, before teams are prepared, and before core processes are stable or well-designed.
Leaders feel pressure to “do something with AI,” so they layer it on top of unclear problems, unprepared teams, and unstable or poorly designed processes.
Research consistently shows this is a recipe for disappointment. McKinsey has found that only a small subset of companies capture a majority of the value from AI, largely because they anchor AI efforts in clear business objectives, operating-model changes, and adoption plans—not just technology choices ([McKinsey Global Survey on AI]).
Similarly, MIT Sloan and BCG have reported that organizations realizing significant AI value are those that redesign processes and roles, rather than simply “bolting AI on” to existing ways of working ([MIT Sloan Management Review / BCG: The Cultural Benefits of AI in the Enterprise]).
When results don’t show up—when pilots stall, adoption lags, or metrics don’t move—the conclusion often becomes:
“AI didn’t work for us.”
More often, the real reasons are the absence of a clear problem, misaligned incentives, shallow change management, and no plan for how AI will actually create and capture value in day-to-day operations.
The AI Impact Methodology is built around a simple idea: real impact only happens when the business, the people, and the technology are addressed together.
Miss one, and the whole thing falls apart.
This starts by getting very clear on the business side—not at a buzzword level, but concretely enough that you can point to specific metrics, teams, and workflows and say: “This is where we need to see change.”
What outcomes are we trying to drive?
Are we trying to shorten sales cycles, increase win rates, improve renewal and expansion, reduce support volume, speed up content, cut costs, or improve forecast accuracy?
“Better” is not specific enough. You should be able to name the few outcomes that matter most for the next 6–12 months.
Which KPIs actually matter?
For each outcome, there should be a small set of measurable indicators—pipeline created, tickets resolved per rep, handle time, CSAT, NPS, churn, time-to-decision, margin, error rates, or time from idea to launch.
These are where improvement will be obvious—and where AI efforts can be judged as successful or not.
Where are we losing time, money, or momentum today? This is where you get honest about reality.
This step is about focus.
Not everything needs AI. Not everything should be automated. Some problems are better solved by clearer processes, better training, or simpler tools. The hard part is saying “no” to use cases that are interesting but not material.
The goal is to identify where AI could meaningfully change the business—not just where it’s technically possible.
You’re looking for intersections where the pain is real, the potential impact is measurable, and AI is genuinely suited to the work.
Do this well, and you end up with a short, prioritized list of business-critical opportunities where AI can make a measurable difference.
That list becomes the foundation for everything that follows. It keeps AI efforts anchored in what the business actually needs, so when you move on to people and technology, you’re solving the right problems—not just chasing the latest capability.
People are not a “change management step.”
If teams don’t trust the tools, don’t understand them, or don’t see how AI helps them do better work, adoption will stall—no matter how good the technology is.
And if management doesn't trust its teams, genuine adoption, application, and the discovery of new opportunities will never happen.
The accuracy of the story of the Frito-Lay janitor who pitched his billion-dollar idea to the CEO is debated, but the point stands: the people closest to the work often see the biggest opportunities.
This methodology looks closely at:
How work actually gets done today
Where people feel friction or overload
What would genuinely make their jobs easier or more effective
What support and skills are needed for adoption
When people experience AI as something that helps them—not something being forced on them—everything changes.
Most of the problems we see—pilots that never scale, tools nobody uses, anxiety about where AI fits—show up right here, in the gap between how work actually happens and how leaders imagine it happens.
By going straight into the reality of workflows, handoffs, and daily frustrations, the AI Impact Methodology makes those invisible gaps visible.
Instead of guessing where AI might help, you see exactly where people are stuck, where context is lost, and where effort is being wasted—so you can design AI into the work in ways that actually remove friction instead of adding more.
When teams are already stretched, any “new thing” that feels like extra work or surveillance is going to be quietly ignored, no matter how powerful it is on paper.
This methodology surfaces where people are at capacity, where trust is low, and where incentives don’t line up, then uses that insight to shape AI use cases, rollouts, and training.
Instead of dropping a tool into a broken or unclear process, the methodology ensures that roles, expectations, and guardrails are in place before AI is scaled.
That’s how you avoid “AI didn’t work for us” and replace it with a cycle where every deployment increases confidence, surfaces new opportunities, and builds momentum.
At this stage, technology decisions become much simpler, because you’re no longer asking, “What can this tool do?” in the abstract—you’re asking:
Which tools fit our goals?
What integrates with what we already have?
Where does automation help, and where does it create risk?
What needs to be validated before scaling?
Instead of browsing endless options, you can quickly narrow in on tools that map to the few, clearly defined use cases and KPIs you’ve already prioritized.
You’re matching capabilities to real problems, not trying to invent problems that justify a tool.
You evaluate how AI fits into your current systems, data, and processes so you’re not creating yet another silo or workaround. The question becomes: “How do we extend and strengthen our existing stack?” not “What do we need to rip and replace?”
You can draw a clear line between tasks that are repetitive, pattern-based, and safe to automate—and those that require human judgment, oversight, or relationship-building.
This is where you decide what should be fully automated, what should be human-in-the-loop, and what should stay human-led with AI in a supporting role.
Because you know which outcomes matter, you can define what “good” looks like up front: what metrics should move, what guardrails are required, what edge cases need to be tested, and what feedback loops are needed from the people doing the work. You pilot with purpose, then scale what proves real value.
AI becomes a tool in service of the business—not the center of attention.
A lot of organizations think moving fast means jumping straight into AI.
In practice, moving fast means:
Avoiding the wrong use cases
Avoiding tools no one adopts
Avoiding rework after failed pilots
Avoiding “AI theater” that looks good but delivers nothing
The AI Impact Methodology helps leaders make better bets earlier—so effort compounds instead of getting reset every six months.
This mindset is especially critical for larger, more complex organizations with multiple teams and functions—and fast-growing companies—where:
Systems are complex, with many moving parts, dependencies, and edge cases that aren’t always visible from the outside.
Changes affect many teams, meaning even a small tweak in one area can ripple across functions, regions, and customer touchpoints.
Mistakes are expensive, not just in dollars, but in lost trust, wasted time, and the opportunity cost of fixing what went wrong instead of moving forward.
The methodology provides a repeatable way to evaluate, build, integrate, validate, and adopt AI—without gambling on hype.
At the end of the day, this isn’t about “doing AI.” It’s about improving how the business runs and how people work.
If you’re reading this and recognizing your own organization in these patterns—the stalled pilots, the skepticism, the sense that you’re “doing AI” without really feeling the impact—that’s the signal to pause, not to panic.
The good news is that you don’t need a ten-year roadmap or a massive transformation program to start changing the trajectory.
You need a clearer way to decide where AI belongs, how it should show up in day-to-day work, and what “better” actually means for your teams and customers.
The AI Impact Methodology is one way to do that, but it doesn’t have to be the only way.
Use it as a lens with your own leadership team: identify one or two critical areas of the business, talk honestly with the people closest to the work, and map specific outcomes before you touch a single tool.
Then, let that clarity—not hype—guide your next AI move.