Your Brand's Biggest Marketing Problem Just Got Automated.

Oguz Acar and David Schweidel just published a piece in Harvard Business Review that should be on every marketing leader's desk this week. "Preparing Your Brand for Agentic AI" draws on thousands of consumer interviews across the US and UK and on AI adoption frameworks developed with companies and startups. The argument is straightforward and commercially urgent: AI agents are no longer just tools consumers use to ask questions. They are increasingly the entities that research, shortlist, compare, and complete purchases on a consumer's behalf, with little or no human involvement along the way.

Their headline figure: 60% of shoppers expect to use AI agents to make purchases within the next 12 months. OpenAI is already integrating with Stripe, PayPal, Walmart and Shopify to enable complete purchase journeys inside ChatGPT. The agent doesn't just advise. It acts.

This is not a future problem. It is a present one. And the brands that will navigate it best are not the ones with the most sophisticated AI infrastructure. They are the ones with the clearest, most rigorously structured marketing strategy.

Every problem the article identifies corresponds to a specific dimension of how marketing strategy is built and assessed. Here is the precise mapping.

What Acar & Schweidel Actually Said

The authors describe three interaction modes already operating in the market. In the first, a brand's own agent engages directly with customers — Capital One's automotive agent completes most of the buying journey before the customer enters a dealership. In the second, an independent consumer agent acts on behalf of the customer across multiple brands — Claude's "computer use" capability autonomously navigates screens, fills forms and completes purchases. In the third, full AI intermediation, both sides of the transaction are AI — ChatGPT's agent already books restaurants end-to-end.

The authors then propose a three-stage readiness model. Stage 1: decide whether you need to deploy an AI agent at all. Stage 2: get customers to use your agent rather than a generic one. Stage 3: ensure that independent consumer agents recommend your brand when given a choice.

Their central finding: most brands are unprepared for any of these three stages. And the reason, in every case, traces back to the same root cause — a lack of strategic clarity about what the brand stands for, who it serves, and why a machine should choose it over the alternatives.

Why This Maps Directly onto the Marketing Canvas Method

The Marketing Canvas Method was built on the premise that strategic clarity is not a soft ambition — it is a scored, evidence-based capability. Each of the 24 dimensions is assessed against specific evidence, not internal conviction. The result is a map of exactly where a brand is strong enough to defend its position and where it is not.

What the HBR article reveals is that AI agents are, in effect, running their own version of that audit on every brand, every day — and making recommendation decisions based on the results. A brand with a clear, specific, machine-readable strategic position passes the audit. A brand with vague positioning, undocumented features and no systematic listening fails it — silently, invisibly, at scale.

The article does not prescribe a technology solution. It prescribes strategic discipline. That is exactly what the Marketing Canvas Method is designed to produce.

AI Agents Are a New External Force. Score It As One.

The article describes AI agents as a force reshaping discovery behaviour across virtually every category. But it makes a distinction that most companies miss: the same force affects different brands differently. For some, AI-mediated discovery is an accelerator — AI agents recommend them accurately and prominently. For others, it is a brake — the brand is absent, misrepresented, or outcompeted by brands with clearer, better-structured information.

In the Marketing Canvas Method, M10 (External Forces) is the parameter that classifies these forces explicitly — as Accelerators or Brakes — and assesses their disruption level. A High Disruption M10 force changes not just one initiative priority but the entire strategic sequence. Agentic AI is precisely that kind of force for most categories. It is not enough to note it as "important." It must be classified: is it currently working for your brand or against you?

You should run the M10 classification for agentic AI this week. Open ChatGPT, Gemini and Perplexity. Ask the three questions your best customer would ask when researching your category. For each response: does your brand appear? Is the description accurate? Is your positioning reflected correctly? The gap between your intended position and your actual AI representation is your M10 assessment. If it is a Brake at High Disruption level, it moves to the top of your strategic priority order — above any growth initiative you are currently running.
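The week-one exercise above can be sketched as a simple scoring rubric. This is a minimal, illustrative sketch: the response records are assumed to be hand-scored after prompting each model yourself, and the thresholds are placeholders I chose for illustration, not part of the Marketing Canvas Method.

```python
# Minimal sketch of the M10 gap check described above. Each response is
# hand-scored after prompting ChatGPT, Gemini and Perplexity with the
# three questions your best customer would ask. Thresholds are illustrative.

def classify_m10(responses):
    """Classify agentic AI as an Accelerator or Brake for the brand.

    Each response is a dict with two hand-scored booleans:
      'appears'  -- did the brand show up in the AI answer?
      'accurate' -- was the description consistent with the intended position?
    """
    total = len(responses)
    if total == 0:
        raise ValueError("no responses scored")

    present = sum(r["appears"] for r in responses)
    correct = sum(r["appears"] and r["accurate"] for r in responses)

    coverage = present / total   # share of answers where the brand appears
    accuracy = correct / total   # share where it appears AND is accurate

    if accuracy >= 0.7:
        return "Accelerator"
    if coverage >= 0.5:
        return "Brake: Medium Disruption (present but misrepresented)"
    return "Brake: High Disruption (largely invisible)"
```

Nine records (three questions across three models) is enough to make the classification concrete rather than impressionistic; the point is the discipline of scoring, not the exact cutoffs.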

"Share of Model" Is Your Positioning Score in a New Channel.

Pernod Ricard's team discovered that a leading LLM was describing Ballantine's Scotch as a prestige product. It is not a prestige product. It is an accessible, mass-market Scotch. The miscategorisation was steering the wrong customers toward the brand and the right customers away from it. The team's systematic response — prompting all major AI models regularly, logging responses, and updating copy to correct misrepresentations — is what they call managing their "share of model."

Research from Carnegie Mellon makes the revenue impact concrete: wording changes alter the likelihood of AI recommendation by up to 78.3%. The same product, described with more or less precision, produces radically different AI outcomes.

This is a Dimension 220 (Positioning) problem. The MCM's Positioning scoring standard is direct: score negative if the positioning statement could apply to three or more competitors unchanged. Score positive when the positioning is specific enough to exclude alternatives, validated by customer evidence, and visible at every touchpoint. In 2026, "every touchpoint" includes what AI agents say when a customer asks them to recommend in your category.

You should score your Positioning (220) in the AI channel. Does the AI description of your brand match your intended position — price tier, core benefit, target customer, competitive differentiation? If it does not, your Positioning score is negative in the channel where an increasing share of your Lead Segment forms their first impression. A +2 on Positioning requires that the position is clear, specific, and reflected consistently — including in the sources AI agents learn from.

Features and Proofs Must Be Machine-Readable to Score.

Sephora's AI system draws on a product catalogue with detailed shade and formula taxonomies, Color IQ technology mapping 140,000 different skin tones, and profiles from 34 million loyalty members. When a consumer's AI agent requests foundation recommendations, Sephora's information is specific, structured and verifiable. The result: customers using these tools are three times more likely to complete a purchase, and product returns have dropped by 30%.

The advantage is not better products. It is better-documented products — described in specific, verifiable terms that a machine can parse and act on.

In the Marketing Canvas Method, this maps precisely onto Dimension 310 (Features) and Dimension 340 (Proofs). The MCM's scoring standards ask: can you name the one feature that would make a customer choose you, and do customers confirm it? Is that feature documented specifically enough to be verified by a sceptical outside party? Are your proofs — case studies, certifications, benchmarks — accessible and structured?

You should assess your Features (310) and Proofs (340) against machine-readability. Are your differentiating features documented in structured, accessible formats — or buried in PDFs and sales decks? Is your proof architecture visible at the points where AI agents learn about your brand? A +2 on Features requires that the differentiating feature is clearly stated and customer-confirmed. A +2 on Proofs requires that multiple proof types — demonstration, endorsement, benchmark — reinforce the claim. In AI-mediated discovery, any feature or proof that is not machine-accessible does not exist competitively.

Your Value Type Determines How Much AI Should Intermediate Your Relationships.

The article's most strategically useful finding is one that most companies will skip past. The authors make explicit that AI agent deployment is not appropriate for all brands. Lamborghini CEO Stephan Winkelmann put it directly: "The purpose of a car like a Lamborghini is to drive it, not be driven in it." For a customer purchasing a Patek Philippe watch or a Hermès bag, the research process, the anticipation, and the in-store expertise are not obstacles. They are the product. Automating that journey would not improve the brand. It would destroy it.

At the other end, Amazon's Subscribe & Save — where 23% of US customers have delegated routine replenishment to automation — shows what full AI integration looks like when the value is efficiency and predictability. AG1 sits in the middle: AI handles 99% of routine queries, human teams focus on the emotionally significant interactions where the relationship quality is the product.

In the Marketing Canvas Method, this decision is determined by M4 (Economic Value) — the parameter that classifies where your brand sits on the value progression from Commodity to Experience. This is not an academic exercise. M4 is a direct archetype-selection input: it determines whether your strategy should be built around cost leadership, feature differentiation, outcome delivery, or transformation. And as the HBR article now makes clear, it also determines how much AI should intermediate your customer relationships.

You should use M4 to make your AI deployment decision. If your M4 is Commodity or Products, AI agent presence in the discovery channel is a competitive necessity — absence is invisibility. If your M4 is Services, the hybrid model is the answer: AI for efficiency, human for complexity and emotion. If your M4 is Experience, AI supports discovery but must never replace the human relationship that defines the brand. Getting this wrong in either direction — over-automating an Experience brand, or under-investing in AI for a Commodity brand — produces the same result: competitive disadvantage in the channel where an increasing share of purchase decisions now begin.

Your Lead Segment Determines Your AI Strategy, Not the Other Way Around.

The article is precise about who is driving AI-mediated discovery: two-thirds of Gen Z and more than half of Millennials were already using LLMs to research products before this article was published. These customers are forming impressions of your brand — and shortlisting or eliminating you — without visiting your website, without seeing your advertising, without reading your content.

Instacart's response illustrates the strategic discipline required. When OpenAI introduced plug-ins in 2023, they did not build a generic AI strategy. They asked: when our specific customer is in a conversation with an AI agent and needs groceries, how do we make ourselves the obvious answer? They built both Ask Instacart within their app and a ChatGPT plug-in simultaneously — because they knew their customer well enough to anticipate where the decision would happen.

That precision starts with Step 0 (Lead Segment Junction) — the Marketing Canvas Method's foundational decision. One company, one market, one geography, one customer segment. Not "our customers" as a composite. The specific group whose decisions matter most to the current revenue goal. Step 0 is the foundation on which every other strategic choice rests — including your AI strategy. A brand that has not made this choice with the required specificity cannot make the right call about where to invest in AI presence and where to invest in human expertise.

You should define your Lead Segment precisely enough to answer one question: does this segment currently use AI agents to research or purchase in your category — and if so, at what moments in their journey? The answer shapes everything: which channels require AI optimisation, which moments require human protection, and which investment is more urgent. A blurred Lead Segment produces a blurred AI strategy. There is no AI tool that compensates for that.

Listening Now Includes What AI Says About You.

Danone monitors how LLMs portray its brands in real time. When discrepancies arise, the team adjusts marketing communications and tracks measurable improvements in how AI agents describe and recommend their products. This is not a technology project. It is a listening discipline — systematic, structured, and connected to communications decisions.

The best marketing organisations have always maintained listening processes across multiple sources: customer surveys, review monitoring, social listening, sales conversation analysis. The HBR article adds a new and urgently important source to that set: AI models themselves. Not monitoring what AI agents say about your brand is the equivalent of never reading customer reviews. You are managing a brand without knowing what your audience encounters when they look for you.

In the Marketing Canvas Method, Dimension 510 (Listening/VOC) is scored as a strategic capability, not a background activity. The scoring standard: score positive when multiple listening channels feed a structured process that visibly influences product, marketing, and service decisions. Score negative if customer understanding relies on assumptions or single-source data. AI model monitoring is now a mandatory source in that set.

You should add AI model monitoring to your Dimension 510 process. Once a month, prompt the four major LLMs with the three most common research questions in your category. Log the responses. Compare with your intended positioning. Track changes after website or copy updates. A +2 on Listening (510) in 2026 requires evidence that AI model output is being systematically reviewed and that misrepresentations are being corrected. A team that cannot demonstrate this is operating a listening process with a structural blind spot.
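The monthly loop above can be sketched as a small logging script. This is a sketch under stated assumptions: the model responses are assumed to be collected by hand (or via each vendor's API) and pasted in, and the claim names, keywords, and CSV layout are hypothetical placeholders, not part of the MCM's 510 standard.

```python
# Minimal sketch of the monthly Dimension 510 logging loop: record each
# LLM response, flag which intended positioning claims it fails to
# reflect, and keep a CSV trail for month-over-month comparison.
import csv
import datetime

INTENDED_CLAIMS = {            # hypothetical positioning claims to track
    "price_tier": ["accessible", "affordable"],
    "core_benefit": ["everyday", "reliable"],
}

def missing_claims(response_text, claims=INTENDED_CLAIMS):
    """Return the claim names with no keyword match in the AI response."""
    text = response_text.lower()
    return sorted(
        name for name, keywords in claims.items()
        if not any(k in text for k in keywords)
    )

def log_response(path, model, prompt, response_text):
    """Append one monitoring record to the CSV log and return the gaps."""
    gaps = missing_claims(response_text)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), model, prompt,
             response_text, ";".join(gaps)]
        )
    return gaps
```

Keyword matching is deliberately crude; its job is to surface candidates for human review, in the same way Danone's team reviews discrepancies before adjusting communications.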

The Principle That Ties It All Together

Acar and Schweidel close with a sentence worth reading carefully: "Connections that once formed the foundation of brand relationships are being reshaped, often mediated, and sometimes entirely managed, by AI."

The brands that will navigate this transition well have something in common. It is not the sophistication of their AI tools. It is the quality of their strategic foundations — the clarity of their positioning, the specificity of their features and proofs, the honesty of their value classification, the precision of their customer focus, and the rigour of their listening processes.

These are not new capabilities. They are the core dimensions of a well-executed marketing strategy. What agentic AI changes is the speed and scale at which the gap between brands that have done this work and brands that haven't becomes commercially visible.

| Article Finding | MCM Component | What to Score |
| --- | --- | --- |
| AI agents as market force | M10 External Forces | Accelerator or Brake — at what disruption level? |
| "Share of model" misrepresentation | Positioning (220) | Is your position accurate in AI channels? |
| Sephora's structured data advantage | Features (310) + Proofs (340) | Are your claims machine-readable and verified? |
| Lamborghini vs. Amazon vs. AG1 | M4 Economic Value | What level of AI should intermediate your relationships? |
| Instacart's customer-specific strategy | Step 0 Lead Segment | Does your segment use AI — and where? |
| Danone's real-time monitoring | Listening (510) | Is AI model output in your listening process? |

One Test You Can Run This Week

Take your three most important brand claims — the ones on your homepage, your positioning statement, your LinkedIn company page. Ask one AI agent — ChatGPT, Gemini, or Perplexity — to describe your brand to someone considering your category.

Compare the AI's description with your three claims. For each claim: does the AI reflect it, ignore it, or contradict it?

If even one of the three is absent or inaccurate in the AI response, you have an active gap in either Positioning (220), Features (310), or Proofs (340) — in the channel where your Lead Segment increasingly begins its purchase journey.

That gap is not a technology problem. It is a strategic clarity problem. And strategic clarity is what the Marketing Canvas Method is built to produce — dimension by dimension, scored against evidence, connected into a system that tells you exactly where to focus first.

Source: Oguz A. Acar & David A. Schweidel. "Preparing Your Brand for Agentic AI." Harvard Business Review, March–April 2026.

Laurent Bouty

A C-level international marketing and strategy professional, Laurent Bouty brings 20 years of international experience in marketing, sales, strategy and leadership. His marketing experience is broad, from marketing strategy to communication, and includes the latest trends in analytics, social networks and mobile, gained in the telecommunications, advertising and financial sectors. Laurent has a strong marketing-execution orientation in highly complex industries, built through team development and the implementation of best practices.

As a speaker and Academic Director, Laurent shares his enthusiasm and passion for marketing. He also developed the Marketing Canvas as a simple yet efficient tool for building your marketing strategy.

As a trainer and Strategic Marketing Expert at Virtuology Academy, Laurent helps brands benefit from entrepreneurial tools, models and tactics.

https://laurentbouty.com