BLOG
A collection of articles and ideas that help Smart Marketers become Smarter
Marketing Canvas - Listening
Most companies listen reactively — processing complaints, running annual surveys, reading reviews when they arrive. The Marketing Canvas demands proactive listening. This article on dimension 510 explains the difference, shows why Listening is a Fatal Brake for Pivot Pioneers, and names the most expensive sentence in marketing.
About the Marketing Canvas Method
This article covers dimension 510 — Listening, part of the
Conversation meta-category. The Marketing Canvas Method structures
marketing strategy across 24 dimensions and 9 strategic archetypes.
Full framework reference at
marketingcanvas.net →
·
Get the book →
In a nutshell
Listening (dimension 510) is the Voice of the Customer (VOC) infrastructure — not a single survey, but a system that captures everything customers say across every channel, translates it into data, and feeds it into strategic decisions.
The distinction that defines this dimension: listening without action is surveillance. Listening with action is strategy.
Most organisations believe they listen to customers. Most are listening reactively — processing complaints when they arrive, running annual satisfaction surveys, reading reviews when a notification appears. The method demands something harder: proactive listening that generates data before it is needed, feeds it into decisions before problems compound, and closes the loop between what customers say and what the company does.
In the Marketing Canvas, Listening sits within the Conversation meta-category alongside Stories (520), Media (530), and Influencers (540). It is the first of the four Conversation dimensions — and it comes first deliberately. The meta-category header says it plainly: listening comes before stories, before media, before influencers. You cannot communicate effectively with people you haven't systematically understood.
Reactive vs. proactive: the canonical distinction
This is the distinction that separates a company with VOC processes from a company with a VOC system.
Reactive listening processes information when it arrives. Customer complains — the complaint is logged. Customer writes a review — someone reads it. Annual survey goes out — results are compiled. NPS score is reported quarterly. Each of these is listening. None of them is proactive. The information arrives at the company's pace, on the company's schedule, filtered through the customers who bothered to respond.
Proactive listening generates information continuously, systematically, and before it is urgently needed. Ongoing customer interviews on a regular cadence — not just when there is a problem to investigate. Social listening infrastructure monitoring what is said about the brand, the category, and competitors across platforms. Support ticket analysis that extracts pattern data from thousands of micro-interactions. Behavioural data from digital touchpoints that reveals what customers actually do, not just what they say. Structured feedback loops at defined journey stages that close the circle between hearing a concern and confirming the fix.
The gap between reactive and proactive is the gap between responding to problems and preventing them. Between knowing what customers said last quarter and knowing what they are saying now. Between confirming assumptions and challenging them.
The canonical test: if the company stopped sending surveys tomorrow, would customer understanding continue to improve? If yes, the listening system is proactive. If no — if surveys are the primary input — the system is reactive, and dimension 510 cannot score above +1.
The most expensive sentence in marketing
"We know what customers want."
This sentence costs more than any misaligned campaign, any failed product launch, or any churned enterprise account. It is the signal that internal assumptions have been allowed to substitute for external evidence — that the listening loop has been closed not by data but by conviction.
The canonical position of the Marketing Canvas on this: if the data contradicts the assumption, the assumption must yield. Not the data. Not the interpretation. The assumption.
This sounds obvious. It is routinely violated. Teams that have operated in a category for years develop a fluency with their customers that feels like understanding but is actually pattern recognition. They know what last year's customers said about last year's product. They extrapolate. The market moves. The extrapolation drifts.
The VOC system exists to correct the drift before it becomes a strategy gap. It is the institutional mechanism that keeps the company's model of its customers honest — continuously updated, data-grounded, and resistant to the internal assumptions that are far more comfortable to rely on.
The four properties of an effective VOC system
The Marketing Canvas scores Listening against four properties. Together they describe not just whether a company has listening tools, but whether those tools form a functioning system:
Capture scope (511) — does the VOC system hear everything customers are saying? Not everything worth hearing — everything. The signal that matters is often not in the formal feedback. It is in the support ticket that uses unusual language. The social media comment that frames the category differently. The customer interview that introduces a word the team has never used. A VOC system with limited capture scope is a VOC system with systematic blind spots.
Data discipline (512) — is the VOC process entirely data-driven, with no point where assumptions substitute for evidence? The failure mode here is not fraudulent data. It is filtered data — interview questions that lead to expected answers, survey scales that cluster around mid-range because respondents are conflict-averse, analysis that confirms the hypothesis the team walked in with. Data discipline means designing the listening system to surface inconvenient truths, not just validate comfortable ones.
Journey integration (513) — does the VOC process map to the customer lifecycle? Listening at only one stage of the journey is like taking a patient's temperature once and pronouncing on their health for the entire year. The research that matters for acquisition decisions is different from the research that matters for retention decisions. A journey-integrated VOC system has different listening mechanisms at different stages — capturing the before-purchase research experience, the onboarding moment, the ongoing use patterns, and the renewal conversation separately, because each reveals different strategic information.
Methodological breadth (514) — are multiple research techniques used together? Each technique has a different blind spot. Surveys capture stated preferences but miss revealed behaviour. Interviews surface nuance but are prone to social desirability bias. Behavioural analytics reveal what customers do but not why. Support ticket analysis captures the most frustrated customers but underweights the quietly satisfied ones. No single technique is sufficient. The system that combines four or more creates a triangulated picture that is harder to misread.
Beyond the four scored properties, one failure mode deserves its own warning, and it is harder to detect because it is dressed in data. Reactive companies filter data through assumptions; the mirror failure is the company that tracks macro trends attentively but never validates them at the individual customer level. A strong market trend is not the same as a validated consumer job: "the market is moving toward X" does not mean your specific customer's job has changed. A company can detect a trend correctly and still deploy capital in a direction its specific customer does not need, because it never ran the validation step between signal and decision: a JTBD check at the customer level before committing capital. The tell is that VOC data is being used to confirm a direction already chosen, rather than to test it before capital is committed. Volume of consumer data does not protect against this failure; only validation discipline does. Listening without validation is still surveillance, just at a more sophisticated level.
Listening in the Marketing Canvas
The canonical question
Do you systematically capture, analyse, and act on what customers are saying about your brand, products, and market?
Listening is a Fatal Brake for A5 (Pivot Pioneer) — the most strategically consequential placement of any Conversation dimension.
The rationale is direct: you cannot pivot successfully if you don't know where the market is going and whether your specific customer is moving with it. Listening is how you find out both — and the second question matters more than the first.
The Fujifilm and Kodak cases provide the sharpest possible contrast. Both companies faced the same crisis in the early 2000s: digital technology was destroying the photographic film market. Both had data. Kodak had commissioned research in 1981 predicting film's decline — and then calculated how many years they could milk film revenue before needing to act. They listened, and then filtered the listening through their assumption that they had more time. Fujifilm conducted an 18-month technology audit — described in the canonical case library as "the most sophisticated VOC exercise in the book" — mapping every capability they had against every market need they could identify. They listened, and then let the data direct the strategy. Fujifilm still exists. Kodak destroyed over €100B in value.
For A5, Listening is a Fatal Brake because the pivot direction is unknown until the market reveals it. An A5 company that is listening well will identify the new job before competitors do. An A5 company that is listening reactively will discover it in competitors' press releases.
Listening is also a Growth Driver for A9 (Category Creator) — the dimension through which category language is discovered. Green Clean's voice-of-customer language mining is the canonical example: extracting the exact phrases customers used to describe the indoor health protection job and feeding those phrases directly into marketing copy. Customers teach you the vocabulary of the category they are joining. Listening is how you learn it.
Statements for self-assessment
Rate your agreement on a scale from −3 (completely disagree) to +3 (completely agree). There is no zero — the Marketing Canvas forces a directional position on every dimension.
Note on Detailed Track scoring: if averaging sub-question scores produces a mathematical zero, the method rounds to −1. A split score means the dimension is not clearly helping your goal — and "not clearly helping" requires the same investigation as "hurting."
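The zero-rounding rule can be sketched in a few lines. This is a hypothetical illustration only — the method prescribes no code, and the function name and validation checks here are invented for the sketch:

```python
def dimension_score(sub_scores):
    """Average Detailed Track sub-question scores (each -3..+3, no zero),
    rounding a mathematically zero average down to -1: a split score is
    treated as 'not clearly helping', never as neutral."""
    if not sub_scores:
        raise ValueError("at least one sub-question score is required")
    if any(s == 0 or not -3 <= s <= 3 for s in sub_scores):
        raise ValueError("each score must be in -3..+3, and zero is not allowed")
    average = sum(sub_scores) / len(sub_scores)
    return -1 if average == 0 else average

dimension_score([+2, -2])  # a perfect split rounds to -1, not to neutral
dimension_score([+2, +1])  # 1.5
```

The point of the rule is visible in the first call: even when positives and negatives cancel exactly, the method forces a directional (negative) reading rather than a comfortable zero.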
Interpreting your scores
Negative scores (−1 to −3): Customer understanding relies on assumptions, single-source data, or reactive feedback that arrives too late to be strategic. "We know what customers want" is the operating assumption. The likely result: strategy decisions are made on the basis of internal conviction rather than external evidence. Problems compound before they are detected. For A5, this score is existential — a pivot built on assumed market direction is a rebrand, not a transformation.
Positive scores (+1 to +3): Multiple listening channels feed a structured process that visibly influences product, marketing, and service decisions. Every significant strategy decision can be traced back to a specific customer insight from a specific source. The VOC system generates evidence before it is urgently needed, corrects internal assumptions when data contradicts them, and closes the loop between what customers say and what the company does.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean's listening consists of a post-service satisfaction email sent to every customer after each visit. The response rate is 19%. The four questions (overall satisfaction, cleaner performance, product quality, likelihood to recommend) produce scores the team reviews monthly. No action has been taken based on these scores in the past six months — they are tracked but not acted on. Customer interviews have never been conducted. Social media is monitored by the founder personally, approximately once a week, without a systematic process for capturing or analysing what is found. Support tickets are answered and then closed, with no aggregation or pattern analysis. "We know what our customers want" is the informal position of the team. The VOC system exists in form. It does not function as strategy.
Score: +1 to +2 (Developing) Green Clean has introduced quarterly customer feedback sessions — 45-minute conversations with a rotating group of 8–10 customers focused on the full service journey. The sessions are structured but not scripted: customers describe specific moments rather than rate abstract attributes. Two rounds of sessions have already produced one significant insight: customers consistently describe the moment they realise the Family Health Report is personalised to their specific home as the point when they first trusted the brand. This insight was not available from the satisfaction survey. The team has started acting on it: the first Health Report for new customers is now delivered with a phone call rather than an email, specifically to confirm the personalisation in conversation. Social listening is now monitored daily using a basic tool. Support ticket language is being reviewed weekly for recurring patterns. Proactive listening is forming. It is not yet systematic.
Score: +2 to +3 (Strong) Green Clean's VOC system operates at four levels simultaneously. Satisfaction data (post-service NPS) provides the quantitative baseline. Quarterly customer interviews provide the qualitative depth, including specific language analysis — the team has documented the exact phrases health-conscious parents use to describe the indoor health protection job and has fed those phrases directly into website copy, sales conversations, and the Family Health Report narrative. Social listening captures every mention of Green Clean and its category terms in the region, updated daily. Support ticket analysis is reviewed weekly and produces a monthly "friction report" — specific interaction patterns that indicate friction in the journey. Each of these data streams feeds into monthly strategy reviews where at least one decision is required to trace back to VOC evidence. The system has produced three product changes and two messaging updates in the past twelve months. When the team states what customers want, they can cite the specific data source, the sample size, and the date the insight was captured.
Connected dimensions
Listening does not operate in isolation. Five dimensions connect most directly:
110 — JTBD: Listening enables the initial evidence base for the job definition — and, more critically, maintains its accuracy over time. A company can define the job well in year one and then watch it silently decay if no VOC system is actively testing whether the definition still holds. Without 510, a once-correct 110 sits preserved in amber while the customer's actual job evolves. 510 is how you build 110. It is also how you keep it honest.
130 — Pains & Gains: VOC validates pain mapping. The pains identified in journey research (dimension 130) are hypotheses until the VOC system confirms them with data across a sufficient sample. Pains that appear in one customer interview may be individual; pains that appear in twelve are systemic. Listening is how the difference is established.
140 — Engagement: VOC systems feed engagement data. The promoter/detractor ratios that dimension 140 scores are produced by the listening infrastructure. Without a functioning VOC system, Engagement can only be measured by satisfaction surveys — which, as noted in dimension 140, is not the same as measuring engagement.
420 — Experience: Listening reveals what the experience actually feels like from the customer side. A team that believes the onboarding experience is +2 on Experience may discover through customer interviews that the specific moment the substitute cleaner arrives without prior notice is scoring −2 in the customer's head. Without the listening system, the Experience score is a self-assessment. With it, it becomes evidence-based.
520 — Stories: Listening provides the customer language that makes stories resonate. The most effective content uses the words customers use to describe their own problems — not the words the marketing team uses to describe the product. VOC language mining is the process that produces the raw material for story strategy.
Conclusion
Listening is the first Conversation dimension because it is the prerequisite for all the others. A brand cannot tell credible stories without knowing what customers actually experience. It cannot design effective media without knowing which messages resonate. It cannot identify the right influencers without knowing which voices customers trust.
The strategic test is not whether the company has feedback mechanisms. It is whether those mechanisms are proactive, multi-technique, journey-integrated, and action-connected. A company that sends satisfaction surveys and reads the results is listening. A company that conducts ongoing interviews, monitors social conversation, analyses support ticket patterns, tracks behavioural data, and ties every decision to a specific customer insight is listening strategically.
The difference between those two companies is not tools. It is discipline — the discipline of requiring data to yield when it contradicts assumption, rather than requiring assumption to explain away inconvenient data.
Sources
Harvard Business Review, "Everyone Says They Listen to Their Customers — Here's How to Really Do It", October 2015 — hbr.org
McKinsey & Company, "Are You Really Listening to What Your Customers Are Saying?", McKinsey Quarterly — mckinsey.com
Marketing Canvas Method, Appendix E — Dimension 510: Listening (VOC), Laurent Bouty, 2026
About this dimension
Dimension 510 — Listening (VOC) is part of the Conversation meta-category (500) in the Marketing Canvas Method. The Conversation meta-category contains four dimensions: Listening (510), Stories (520), Media (530), and Influencers (540).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - Proof
Every brand makes claims. Few build proof systems. Dimension 340 of the Marketing Canvas identifies four types of proof — demonstration, logical explanation, endorsement, and reputation — and explains why stacking all four is the only way to convert sceptical prospects into convinced ones.
About the Marketing Canvas Method
This article covers dimension 340 — Proof, part of the
Value Proposition meta-category. The Marketing Canvas Method structures
marketing strategy across 24 dimensions and 9 strategic archetypes.
Full framework reference at
marketingcanvas.net →
·
Get the book →
In a nutshell
Proof (dimension 340) scores the evidence layer of your value proposition — the demonstrations, endorsements, explanations, and reputation markers that make your claims credible. The foundational distinction: proofs are not the same as claims.
Saying "we're the best eco-friendly cleaning service in the city" is a claim. Showing a customer saying "they changed how I think about what clean actually means" is proof. The dimension scores whether evidence exists and whether it is deployed effectively — not whether the brand believes its own story.
In the Marketing Canvas, Proof sits within the Value Proposition meta-category alongside Features (310), Emotions (320), and Prices (330). It is the credibility layer that makes everything else believable: Features describe what the product does; Proof demonstrates it.
Claims vs. proof: the foundational distinction
Every brand makes claims. Few build proof systems.
A claim is a statement the brand makes about itself. Proof is evidence that exists independently of the brand's desire to be believed. The gap between them is the gap between what a brand says and what a prospect believes — and in most markets, that gap is large and widening.
The reason: customers have become systematically sceptical of self-assertion, particularly around sustainability, quality, and expertise claims. "Award-winning," "industry-leading," "eco-friendly," "best-in-class" — these phrases have been used so frequently, by brands of such varying quality, that they carry almost no credibility signal. They are the background noise of value proposition communication.
What breaks through is evidence that exists independently of the brand making the claim: a third party that validated it, a customer who confirmed it, a before/after result that demonstrated it, a mechanism that explains how it works. That is proof. And the dimension that scores whether your value proposition has it is 340.
Score negative if claims are unsupported or if proof relies entirely on self-assertion. Score positive when multiple proof types reinforce each other and customers cite specific evidence when recommending the brand.
The four canonical proof types
The Marketing Canvas identifies four types of proof. The most effective strategies use all four — each type covers a different dimension of credibility, and they stack:
Demonstration — showing the product working in a real context. Not a polished commercial. A before/after air quality result. A live installation. A customer tour. A product in use under realistic conditions. Demonstration answers "does it actually work?" It is the most visceral form of proof because it bypasses scepticism about the brand's motives — the outcome is visible.
Logical explanation — clarifying how and why it works. The mechanism. Why is this formula non-toxic? Because it uses X chemistry instead of Y. How does it eliminate toxins? Here is the molecular process. Why does this hold up better than alternatives? Here is the engineering rationale. Logical explanation answers "can I understand why it works?" It converts the sceptical-but-open prospect — the one who wants to believe but needs a reason — into a convinced one.
Endorsement — third-party validation. Certifications, awards, analyst recognition, celebrity ambassadors, peer recommendations. In B2C: certifications like B-Corp or EcoCert, customer reviews, media coverage, social proof numbers ("550 families served"). In B2B: Gartner Magic Quadrant placement, ISO certifications, named client case studies, analyst endorsements. Endorsement answers "who else believes this?" It transfers credibility from a trusted external source to the brand.
Reputation — established credibility that precedes any specific claim. Years in business. Volume of customers served. Industry recognition over time. The credibility that arrives before a prospect reads a single word of marketing. Reputation answers "can I trust this brand in general?" It is the slowest proof type to build and the most durable once established.
Stacking: why one proof type is never enough
Each proof type addresses a different dimension of credibility. A single proof type is credible on one dimension and silent on the others — leaving gaps a sceptical prospect will fill with doubt.
A brand that has only endorsement (certified, award-winning) but no demonstration (show me it works) can be dismissed as buying certifications. A brand with strong demonstration but no logical explanation raises the question "yes, but how?" A brand with deep reputation but no current endorsement is vulnerable to the claim that past performance is no longer relevant.
The proof stack that makes a category claim genuinely credible combines all four:
Here is what it does (demonstration)
Here is why it works (logical explanation)
Here is who else validates it (endorsement)
Here is the track record behind us (reputation)
For Green Clean as an A9 Category Creator — a company asking the market to believe in a category that didn't previously exist — the stacking principle is existential. The burden of proof for creating a new category is ten times higher than for competing within one. Every claim they make is unfamiliar. Every endorsement they earn legitimises the category, not just the company. Every demonstration they run teaches the market that the job is real.
Laurent Bouty - Marketing Canvas Method - Proofs
B2B and B2C: proof types work differently
The four proof types apply universally but manifest differently by context.
In B2B, proof often determines whether you make the shortlist before any sales conversation begins. Gartner Magic Quadrant placement, ISO certifications, named client case studies with verifiable outcomes, and analyst endorsements function as purchase prerequisites — the deal never begins without them. A B2B buyer who cannot show their CFO a Gartner ranking or a named enterprise reference cannot internally justify the purchase, regardless of the product's quality. Proof here is a gatekeeping mechanism, not just a persuasion tool.
In B2C, proof works through different channels. Customer reviews (demonstration by proxy), before/after results (direct demonstration), media coverage (earned endorsement), social proof numbers ("over 1 million families have switched"), and visible certifications on packaging all contribute to the credibility system. The scale of endorsement matters differently: a single enterprise case study moves a B2B deal; 500 five-star reviews move a B2C conversion. The mechanism is the same — independent validation — but the format and threshold differ.
The implication for scoring: a B2B company that scores its proof stack against B2C norms (focusing on reviews and social media rather than analyst coverage and certifications) will systematically misdiagnose the dimension.
Proof in the Marketing Canvas
The canonical question
Why should customers believe your claims?
Proof appears in the Vital 8 of four archetypes — spanning a wide range of strategic urgency:
Primary Accelerator for A8 (Niche Expert): Expert authority must be demonstrable, not claimed. A niche expert whose expertise cannot be independently verified is simply a specialist with good self-confidence. The proof stack — certifications, published work, client outcomes, peer recognition — is the mechanism that converts internal confidence into external authority. For A8, Proof is the dimension that transforms "we know this space deeply" into "the market knows we know this space deeply." Hermès' resale values (Birkin bags appreciating faster than gold) are a form of proof: independent market validation that the quality claim is real.
Secondary Brake for A3 (Brand Evangelist): Tribal trust is built on values and shared belief — but it is sustained by proof that the brand lives what it claims. Patagonia's "Don't Buy This Jacket" campaign worked because the proof of environmental commitment was already established through a decade of verified actions: 1% for the Planet donations (independently tracked), Worn Wear repairs data (published), B-Corp certification (audited). Without the proof stack underneath, the campaign would have been dismissed as marketing theatre. For A3, credibility gaps erode tribal trust faster than any competitive threat.
Secondary Brake for A4 (Stagnant Leader): A stagnant leader's most valuable asset is the credibility accumulated over years of market presence. When that credibility starts to decay — when proof points become dated, when case studies reference old products, when certifications lapse — the legacy position that was the primary competitive defence begins to dissolve. Proof maintenance is as important as proof creation for A4.
Secondary Brake for A9 (Category Creator): The unique challenge here is proving something works in a category that doesn't exist yet. Green Clean cannot reference ten years of "health-first home care" competitors because the category is new. Every proof point they build — the university formula validation, the B-Corp certification, the Family Health Report, the air quality before/after results — is simultaneously proving the company and defining the standards of the category. For A9, Proof is the physical evidence that the new category is real, not just a repositioning exercise.
Statements for self-assessment
Rate your agreement on a scale from −3 (completely disagree) to +3 (completely agree). There is no zero — the Marketing Canvas forces a directional position on every dimension.
Note on Detailed Track scoring: if averaging sub-question scores produces a mathematical zero, the method rounds to −1. A split score means the dimension is not clearly helping your goal — and "not clearly helping" requires the same investigation as "hurting."
Interpreting your scores
Negative scores (−1 to −3): Claims are unsupported or rely entirely on self-assertion. Proof types are absent or single-layer. Sceptical prospects — particularly in categories where greenwashing is common — have no independent reason to believe the value proposition. Conversion rates are lower than the product quality justifies. For archetypes where Proof is a Strategic Brake, a negative score here explains why the strategy is not generating the expected traction.
Positive scores (+1 to +3): Multiple proof types reinforce each other. Demonstration, explanation, endorsement, and reputation are all present and deployed at the moments in the customer journey where scepticism is highest. Customers cite specific evidence when recommending the brand — not because they were asked to, but because the proof is memorable and specific enough to pass on.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean's proof system is entirely self-asserted. The website states "non-toxic cleaning you can trust" and "safe for your family." No demonstration: no before/after air quality data, no ingredient testing results, no customer outcome evidence. No logical explanation: the website says the formula is "plant-based" but does not explain what that means for toxin elimination or why it is safer than conventional products. No endorsement: no certifications, no third-party validation, no named customer testimonials. No reputation: Green Clean is four years old and has not systematically built a credibility track record. When health-conscious parents research the brand, they find claims that every competitor also makes. There is nothing that distinguishes a Green Clean claim from an EcoPure claim from a NatureFresh claim. The proof gap is the primary barrier to conversion for the Early Believer segment — the very customers who care most about evidence.
Score: +1 to +2 (Developing) Green Clean has begun building a proof stack. The B-Corp certification (first in the region for cleaning services) is the strongest endorsement they have — it is independently audited and competitively rare. The university partnership behind the formula is publicly referenced but not yet explained: the website says "developed with a university chemistry department" without specifying the institution, the testing methodology, or what the validation showed. Customer testimonials are present but anonymous — "a satisfied parent in [city]" — which reduces their credibility impact. The Family Health Report exists and provides per-visit demonstration data but is only seen by existing customers, not by prospects during the research phase. The proof stack is forming but is not yet deployed at the moments that matter most: the first three minutes of a prospect's research.
Score: +2 to +3 (Strong)
Green Clean's proof stack covers all four types and is deployed at the right journey stages. Demonstration: the Family Health Report excerpt (average toxin load reduction across 550 customer visits) is visible on the website homepage before any sales conversation, and a before/after air quality result from a real customer home (anonymised but with verifiable methodology) appears on the booking page. Logical explanation: a plain-language technical summary explains precisely why the university-validated formula eliminates specific chemical classes that conventional eco-cleaning products do not address. Endorsement: B-Corp certification displayed prominently, EcoCert certification in process, 127 named customer testimonials with first name and suburb, and local health journalist coverage. Reputation: four years of service data, 550 active customers, and a 35% referral rate cited explicitly as a trust signal. When a prospect asks "why should I believe you over EcoPure?", the answer is specific, layered, and independently verifiable at every level.
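The three scoring bands above follow a repeatable pattern: which of the four proof types are present at all, and how many of them are actually deployed where prospects do their research. The Python sketch below models that audit. The `ProofItem` type, the point-based heuristic, and the sample Green Clean items are illustrative assumptions for this article, not the Marketing Canvas Method's official scoring rubric.

```python
from dataclasses import dataclass

# The four proof types described in this dimension.
PROOF_TYPES = ("demonstration", "explanation", "endorsement", "reputation")

@dataclass
class ProofItem:
    proof_type: str              # one of PROOF_TYPES
    description: str
    deployed_to_prospects: bool  # visible during research, not only post-sale

def audit_proof_stack(items):
    """Crudely map a proof stack onto the canvas's -3..+3 scoring band.

    Heuristic (an assumption, not the official rubric): one point per proof
    type present, one more per type deployed at the prospect-research stage.
    """
    covered = {i.proof_type for i in items if i.proof_type in PROOF_TYPES}
    deployed = {i.proof_type for i in items
                if i.deployed_to_prospects and i.proof_type in PROOF_TYPES}
    if not covered:
        return -2  # entirely self-asserted claims: the Weak band
    return min(3, len(covered) + len(deployed) - 3)

# Roughly the "Developing" scenario: three types present, only the
# endorsement (B-Corp badge on the website) visible to prospects.
stack = [
    ProofItem("endorsement", "B-Corp certification", True),
    ProofItem("demonstration", "Family Health Report", False),
    ProofItem("explanation", "university formula summary", False),
]
print(audit_proof_stack(stack))  # -> 1 (Developing band)
```

The useful output of an audit like this is not the number itself but the gap it exposes: in the Developing scenario, two proof types exist yet are invisible during the first three minutes of a prospect's research, which is exactly where the Strong scenario deploys them.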
Connected dimensions
Proof does not operate in isolation. Four dimensions connect most directly:
310 — Features: Proofs demonstrate features work. The unique feature (the university-validated formula) is only as strong as the evidence behind it. Without the proof, the formula is a claim like every competitor's. With the proof, it is a category-defining differentiator.
330 — Prices: Proofs justify premium pricing. A customer who has encountered the full proof stack (demonstration data, logical explanation, B-Corp endorsement, reputation track record) is less price-sensitive than one who has not. Proof shifts perceived value upward and expands the willingness-to-pay (WTP) range.
520 — Stories: Stories are the delivery vehicle for proof. A case study is a story with demonstration. A customer testimonial is a story with endorsement. A founder origin narrative is a story with reputation. Proof is the evidence; Stories (520) is the format that makes evidence compelling and memorable.
530 — Media: Earned media is a form of proof. A journalist covering Green Clean's health-first positioning in a local parenting publication is providing endorsement at scale — more credible than any paid placement because the editorial decision is independent. Media strategy and proof strategy should be planned together.
Conclusion
The gap between a brand that has good features and a brand that is believed to have good features is exactly the width of dimension 340.
The most capable product in the market cannot sell itself if prospective customers have no independent reason to trust the claims made about it. Every market has category-level scepticism built up by years of overclaimed marketing — "eco-friendly," "expert," "world-class" — that has trained buyers to discount self-assertion reflexively.
The proof stack is the mechanism that breaks through that scepticism. Demonstration shows. Explanation clarifies. Endorsement validates. Reputation precedes. Together, they convert claims into credibility — and credibility into the willingness to buy, recommend, and pay a premium.
Sources
Robert Cialdini, Influence: The Psychology of Persuasion, Harper Business, revised edition 2021
Nielsen, Trust in Advertising, Nielsen Consumer Research, 2023 — nielsen.com
Marketing Canvas Method, Appendix E — Dimension 340: Proof, Laurent Bouty, 2026
About this dimension
Dimension 340 — Proof is part of the Value Proposition meta-category (300) in the Marketing Canvas Method. The Value Proposition meta-category contains four dimensions: Features (310), Emotions (320), Prices (330), and Proof (340).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Innovation Bootcamp @Besix
Below is a video about the Innovation Bootcamp I co-created and facilitated with Solvay Brussels School for a great Belgian company (they built the tallest tower in the world, in Dubai). Great team, great people, great energy, and fantastic ideas.
Innovation Bootcamp