Marketing Strategy in an Agentic World
AI agents are executing parts of the customer journey that used to belong to your brand. Most marketing frameworks weren't built for that. MCM was.
Something shifted in the past twelve months that most marketing teams haven't fully internalized yet. It's not that AI can write your copy faster, or that it can segment your audience more precisely. Those capabilities have been around for years. What's new is different in kind, not degree.
AI agents are now completing transactions on behalf of consumers. A user tells ChatGPT to "book a restaurant for Saturday" and it books one — without visiting any website, reading any review, or seeing any advertisement. A procurement manager delegates vendor research to an AI assistant that evaluates options, requests quotes, and drafts recommendations — without a single human sales conversation. A consumer shopping for insurance gets a comparison and a recommendation from an agent that has never interacted with your brand's marketing team at all.
Your brand is being represented, recommended, or overlooked by AI systems you don't control. And most of the strategic frameworks your team is using right now were designed for a world where the customer was always a human who could be reached through media, persuaded through messaging, and retained through experience.
That world is not over. But it is no longer the only one that matters.
The Problem with Most Marketing Frameworks
Most strategic marketing frameworks produce narrative outputs. Positioning statements. Insight memos. Priority lists. Opportunity maps. These documents are useful for human teams and can be summarized by AI — but they cannot be computed by it.
When an AI agent tries to work with your marketing strategy, it finds prose. It can read the prose. It can extract themes from the prose. But it cannot route on it, calculate from it, or use it to make decisions that are consistent across cycles and contexts. The strategy exists as text in a slide deck, not as a system an agent can actually operate.
This matters for two reasons that are easy to underestimate.
First, if your strategy can't be processed by AI, you can't accelerate the strategy cycle. A traditional framework executed manually takes four to six weeks. That means you run it once a year. In a world where market conditions, competitive positions, and customer behaviors are shifting on quarterly cycles, running your strategy once a year is not a cadence — it's a lag.
Second, if your strategy can't be computed, it can't be consistently applied. Every time a new team member joins, every time a different consultant facilitates, every time someone "adapts" the framework to the situation, variance enters. The strategy becomes whatever the most experienced person in the room thinks it should be. That's not strategy. That's informed intuition with a slide deck attached.
Why MCM Was Built for This Moment
The Marketing Canvas Method was designed to be machine-readable from the beginning. Not as an afterthought. From the first version, every step produces structured outputs: coded dimensions, numerical scores, deterministic routing logic, quantified gaps, sequenced initiatives. The method speaks in numbers, codes, and decision trees. That's the language machines are best at.
An AI agent with access to financial databases, industry reports, and competitive intelligence can pre-populate the ten M-parameters of Step 1 in hours. The same process takes four to six weeks in a manual engagement. The archetype selection logic — M3 × M4 × Revenue Lever — runs in seconds. The gap calculation is exact. Initiative candidates are generated from the method's own routing rules.
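Deterministic routing of this kind is simple to express in code. The sketch below is illustrative only: the parameter encodings, thresholds, and archetype names are hypothetical stand-ins, not MCM's actual tables — the point is that the same coded inputs always produce the same output.

```python
# Hypothetical sketch of deterministic archetype routing.
# Inputs are coded dimensions (here: signs of M3 and M4 plus a revenue lever);
# the values and archetype labels are illustrative, not MCM's actual logic.

def select_archetype(m3_market_growth: int, m4_relative_share: int,
                     revenue_lever: str) -> str:
    """Route to an archetype from coded inputs; no facilitator judgment involved."""
    growing = m3_market_growth > 0
    leading = m4_relative_share > 0
    if growing and leading:
        return "Expander" if revenue_lever == "new_customers" else "Deepener"
    if growing and not leading:
        return "Challenger"
    if not growing and leading:
        return "Defender"
    return "Repositioner"

print(select_archetype(1, 1, "new_customers"))  # → Expander
```

Because the function is pure, two teams in different cities feeding it the same coded inputs get the identical archetype — which is exactly the consistency property discussed below.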
The 60% of elapsed time that constitutes preparation work in a traditional engagement compresses to an afternoon. What remains — scoring, debating, committing, deciding — takes the same time as before, because it's irreducibly human. But now you can run the full cycle quarterly, not annually.
"The agent handles the complexity. The human handles the consequence. That is the contract." (Marketing Strategy, Programmed, Chapter 18)
Three things change when an agent runs the method:
1. Consistency eliminates facilitator variance. Two teams in different cities, using the same inputs, receive identical outputs. Variance lives where it should — in the human judgments about scores and gate decisions — not in the computation.
2. Longitudinal memory transforms strategy from event to process. An agent running the method across multiple cycles knows that the Positioning initiative in Cycle 1 moved the score from −2 to −1 in four months, but Experience took seven. It surfaces the pattern: "In the last three cycles, your initial scores averaged 1.2 points higher than the scores you converged on after evidence review." That calibration is impossible without perfect memory.
3. The expertise barrier disappears. Running a structured strategy method well has always required someone who has done it fifty times. An agent that has internalized the method's logic provides that expertise on demand — not by replacing the facilitator, but by removing the need for the facilitator to have memorized the method.
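The calibration pattern in point 2 is ordinary arithmetic once the scores are structured. A toy sketch, with illustrative dimension names and numbers rather than real cycle data:

```python
# Toy sketch of the calibration an agent with longitudinal memory can surface:
# compare each cycle's initial scores to the scores the team converged on
# after evidence review. Dimensions and values here are illustrative.

cycles = [
    {"initial": {"Experience": -1, "Positioning": 0},
     "converged": {"Experience": -2, "Positioning": -1}},
    {"initial": {"Experience": 0, "Positioning": 1},
     "converged": {"Experience": -1, "Positioning": 0}},
]

# Positive difference = the team scored itself higher than the evidence supported.
diffs = [
    cycle["initial"][dim] - cycle["converged"][dim]
    for cycle in cycles
    for dim in cycle["initial"]
]
avg_optimism = sum(diffs) / len(diffs)
print(f"Initial scores average {avg_optimism:+.1f} points above converged scores")
```

Trivial to compute — but only if every cycle's scores were recorded as data rather than buried in slide decks, which is the longitudinal-memory point.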
The Three Decisions That Must Stay Human
None of this means handing strategy to the machine. The research is unambiguous: AI systems produce better outcomes when they augment human judgment, not when they replace it. In a randomized field experiment of 6,255 customers, disclosure of AI involvement before a conversation reduced purchase rates by 79.7% — not because the AI was less competent, but because customers perceived it as less knowledgeable and empathetic. The resistance is to substitution, not to assistance.
MCM is explicit about which three decisions must remain human, and why.
The scoring commitment. When a team member assigns −2 to Experience (420), they are making an organizational statement with career-risk implications. An agent can prepare the evidence that supports −2. It cannot make the commitment to write it. That commitment requires a human who will be accountable for the strategy that follows.
The gate decision. Opening the gate from FIX to ALIGN signals to the organization that the foundation is sound and 60% of resources are shifting to growth. The data says the Fatal Brakes are positive. The judgment about whether the organization is actually ready for the shift belongs to the people who will live with the consequences.
The strategic pivot. When the annual re-run produces a different archetype, the team faces an identity question: "We thought we were a disruptor. The method says we're a stagnant leader now. Do we accept this?" No agent should ever make that decision on a team's behalf.
This division of labor is not a philosophical preference — it's what works. The Nokia error — playing three archetypes simultaneously rather than accepting an uncomfortable diagnosis — destroyed over €100 billion in value. The agent would have routed correctly. The humans overrode it because the answer was uncomfortable. The governance contract exists precisely to prevent that.
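One way to make that governance contract concrete is to encode it: the agent computes gate eligibility deterministically, but the gate cannot open without a named human approver. This is a sketch of the pattern, not MCM's implementation; the field names are mine.

```python
# Illustrative sketch of the governance contract: the agent computes whether
# the FIX -> ALIGN gate is eligible to open (all Fatal Brakes positive), but
# opening it requires an explicit, attributable human decision.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GateRecommendation:
    eligible: bool   # computed by the agent from the data
    rationale: str

def compute_gate(fatal_brake_scores: dict) -> GateRecommendation:
    """Agent side: deterministic check that every Fatal Brake score is positive."""
    blockers = [dim for dim, score in fatal_brake_scores.items() if score <= 0]
    if blockers:
        return GateRecommendation(False, f"Fatal Brakes not yet positive: {blockers}")
    return GateRecommendation(True, "All Fatal Brakes positive; FIX -> ALIGN eligible")

def open_gate(rec: GateRecommendation, approved_by: Optional[str]) -> bool:
    """Human side: even an eligible gate stays closed without a named approver."""
    return rec.eligible and approved_by is not None

rec = compute_gate({"Experience": 1, "Pricing": 2})
print(open_gate(rec, approved_by=None))   # False — no human commitment yet
print(open_gate(rec, approved_by="CMO"))  # True — eligible and committed
```

The design choice is the asymmetry: the agent can block the gate on its own (data says no), but it can never open it on its own (data says yes is necessary, not sufficient).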
When Your Customer Is a Machine
There is a deeper disruption ahead that most marketing strategy frameworks have not yet confronted: what happens when the Lead Segment is not human?
An AI procurement agent selecting a SaaS vendor does not feel emotions. A hospital robot choosing consumable supplies does not have brand loyalty. An algorithmic media buyer allocating budget across platforms does not respond to storytelling. These are not edge cases. They are the emerging reality of B2B purchasing, industrial logistics, and platform ecosystems.
MCM's six-step architecture is indifferent to whether the segment is human or algorithmic. You still need one company, one market, one geography, one segment. You still decompose revenue into Active Operating Customers × Transactions × Value × 12. The archetype selection logic still routes on M3 + M4 + Goal. What changes is not the engineering. What changes is the content of the dimensions.
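The decomposition above is the same whether the customer is a person or an agent. A minimal sketch — the formula is the method's, the function and example figures are mine:

```python
# Revenue decomposition per the method:
# annual revenue = active operating customers x transactions per month
#                  x average transaction value x 12 months.
# Function name and example numbers are illustrative.

def annual_revenue(active_customers: float,
                   transactions_per_month: float,
                   avg_value: float) -> float:
    return active_customers * transactions_per_month * avg_value * 12

# Example: 5,000 active customers, 2 transactions each per month, $40 per transaction
print(annual_revenue(5_000, 2, 40.0))  # → 4800000.0
```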
When the buyer is a machine, Emotions (320), Stories (520), Influencers (540), and Visual Identity (240) collapse toward zero strategic weight. Features (310), Pricing (330), Proofs (340), and Magic (440) become disproportionately dominant. JTBD stops being a human need statement and becomes an optimization function. Experience becomes API latency. Positioning becomes structured metadata. Customer lifetime becomes switching cost architecture.
The strategic questions are the same. The answers look very different.
What to Do Right Now
You don't need a different strategy framework for the agentic world. You need a strategy framework that produces outputs an agent can actually use — and a governance architecture that is explicit about where human judgment is irreplaceable.
Three questions worth putting to your current approach:
Can your strategy be computed? If your strategic outputs exist only as slides and documents, an AI agent cannot work with them. Coded dimensions, numerical scores, and deterministic routing are not bureaucratic overhead — they are what makes strategy legible to the machines that will increasingly execute it.
Is your strategy cycle fast enough? A framework that requires six weeks of preparation and runs once a year is already too slow. The preparation work can be compressed to hours. The human commitment work cannot — and should not. The question is whether you've separated these two things.
Do you know which decisions must stay human? Governance is not about limiting AI. It's about knowing precisely which decisions require human commitment and making sure they are never delegated. The scoring commitment, the gate decision, and the strategic pivot are not AI tasks. Everything else increasingly is.
The agentic world is not coming. It's here. The brands that will navigate it well are not the ones with the most advanced AI tools. They're the ones with the clearest strategy — expressed in a language that both humans and agents can work with.