Marketing Canvas - User Lifetime
Lifetime measures how long customers stay — scored as 1/churn rate. Learn the four properties, the CRC/CAC benchmark, and why a leaky bucket makes every other marketing investment less efficient.
About the Marketing Canvas Method
This article covers dimension 630 — User Lifetime, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net · Get the book.
In a nutshell
Lifetime measures how long customers remain active — expressed as 1 divided by the churn rate. A 10% annual churn rate produces an average customer lifetime of 10 years. A 50% churn rate produces a lifetime of 2 years. The dimension scores four properties: measurement capability (can you calculate churn?), churn level (is it below market average?), trend (is churn improving?), and cost efficiency (is Customer Retention Cost proportionate to Customer Acquisition Cost?).
Lifetime is the Retention lever's primary metric. When the strategic goal is to grow revenue by keeping customers longer rather than acquiring new ones, Lifetime is the scoreboard. A leaky bucket makes every other marketing investment less efficient — acquisition, ARPU growth, brand building — because each one is partially undone by customers who leave before they return their full value.
Introduction
Acquisition brings customers in. Retention determines how long they stay. The relationship between the two is not symmetrical: what you invest to acquire a customer only pays back over time, and the longer the customer stays, the more time there is for that investment to compound. Shorten the lifetime, and the economics of acquisition become structurally harder to justify.
The Marketing Canvas treats Lifetime as a metrics discipline, not a loyalty programme design exercise. The dimension scores whether the company knows its churn rate, how that rate compares to market benchmarks, whether it is improving, and whether the investment in retention is proportionate — not excessive, not negligent.
The churn mathematics
The core formula is simple and worth holding precisely:
Customer Lifetime = 1 ÷ Churn Rate
5% annual churn → 20-year average lifetime
10% annual churn → 10-year average lifetime
25% annual churn → 4-year average lifetime
50% annual churn → 2-year average lifetime
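The conversion table above can be reproduced with a one-line helper. A minimal sketch (Python; the function name is mine, used purely for illustration):

```python
def avg_lifetime_years(annual_churn_rate: float) -> float:
    """Average customer lifetime in years: Lifetime = 1 / churn rate."""
    if not 0 < annual_churn_rate <= 1:
        raise ValueError("annual churn rate must be a fraction in (0, 1]")
    return 1 / annual_churn_rate

for churn in (0.05, 0.10, 0.25, 0.50):
    # 5% -> 20 years, 10% -> 10 years, 25% -> 4 years, 50% -> 2 years
    print(f"{churn:.0%} annual churn -> {avg_lifetime_years(churn):.0f}-year lifetime")
```

The reciprocal is an average across the whole base, not a guarantee for any individual customer — which is exactly why the method also scores trend and cohort-level measurement.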
The revenue mathematics of churn reduction are powerful and non-linear. Reducing annual churn by 5 percentage points — from 20% to 15%, for example — can increase total lifetime value per customer by 25 to 95%, depending on the business model and ARPU level. The range is wide because the compounding effect of extended lifetime interacts differently with high-ARPU versus low-ARPU relationships, and with businesses that generate more value from long-tenure customers through upsell and cross-sell than from short-tenure ones.
The practical implication: a 5-point improvement in churn is rarely a 5% improvement in commercial outcome. It is frequently a 30–60% improvement in the total value the acquired customer base will generate over its lifetime. This asymmetry — small churn improvements producing large value changes — is why Retention-focused archetypes treat Lifetime as a Fatal or Primary dimension rather than a supporting metric.
What the Marketing Canvas scores in Lifetime
The dimension scores four properties — measurement capability, churn level, trend, and CRC/CAC relationship — each addressing a distinct layer of retention health.
Measurement capability is the prerequisite that must be met before any other Lifetime property can be managed. Can you calculate your churn rate, because you know who is buying and using your products and services? A company that cannot identify which customers have stopped purchasing — because it lacks a direct customer relationship, because purchase identity is not tracked, or because "churn" has never been formally defined for the business model — cannot manage the other three properties. Defining churn requires first agreeing on what "active" means. In subscription models, it is straightforward: did the customer renew? In transactional models, it requires a defined activity window: a customer who has not purchased within 12 months when the average purchase cycle is 6 months is churned. The definition must exist before the measurement can.
Churn level — is your churn rate below or equal to average market churn for your category? Churn benchmarks vary dramatically by industry — SaaS businesses might target 5–7% annual churn; consumer subscription services often run 20–30%; transactional retail models have different definitions entirely. The method scores relative to industry, not absolute thresholds. A 15% annual churn rate in a category where competitors average 25% is a positive score. The same rate in a category where the benchmark is 8% is negative.
Trend scores direction, not just position. A churn rate that is above industry average but visibly improving is a different strategic situation from a rate that is average but deteriorating. The method scores both the current level and the momentum independently — because a company that is losing ground on retention is in a different position from one that is gaining it, even when the current absolute numbers look similar.
The CRC/CAC relationship — is your Customer Retention Cost proportionate to your Customer Acquisition Cost, with the combined total running at 20–30% of revenue for mature businesses? This property diagnoses the investment balance between finding customers and keeping them. Below 20% combined, the company is likely underinvesting in one or both. At 20–30%, the economics are proportionate. Above 30%, the signal is that something upstream is broken: when retention cost is high, the root cause is almost never a retention spending problem — it is a product, experience, or fit problem. You are paying to hold customers who would leave without the financial incentive, rather than retaining customers who stay because the value is genuine. If CRC is rising without a corresponding improvement in churn trend, the spending is compensating for a deeper problem rather than solving it. The correct response is to investigate dimension 420 (Experience) and 140 (Engagement) — not to increase the retention budget further.
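The 20–30% benchmark lends itself to a simple screening check. A minimal sketch (Python; the thresholds are taken from the paragraph above, and the example figures are Green Clean's "Developing" numbers from the case study later in this article):

```python
def spend_balance(cac: float, crc: float, revenue: float) -> str:
    """Classify combined CAC + CRC against the 20-30%-of-revenue benchmark."""
    share = (cac + crc) / revenue
    if share < 0.20:
        verdict = "likely underinvesting in acquisition, retention, or both"
    elif share <= 0.30:
        verdict = "proportionate"
    else:
        verdict = "investigate upstream (experience, engagement), not the budget"
    return f"{share:.0%} of revenue: {verdict}"

# Green Clean 'Developing': CAC 14% of revenue, CRC 8% -> combined 22%.
print(spend_balance(cac=14, crc=8, revenue=100))  # 22% of revenue: proportionate
```

The function deliberately returns advice, not a score: above 30%, the method's position is that the excess spend is a symptom, and the fix lives in dimensions 420 and 140.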
The leaky bucket consequence
The strategic framing the method applies to Lifetime is architectural, not tactical. A leaky bucket — high or rising churn — creates a compounding drag on every other marketing investment:
Acquisition becomes less efficient. The CLTV/CAC ratio (610) falls as customer lifetime shrinks. The acquisition spend that was justified by a 4-year lifetime is no longer justified by a 2-year lifetime at the same CAC. The acquisition engine keeps running; the economics quietly deteriorate.
ARPU growth is partially cancelled. Investments in cross-sell, upsell, and frequency programmes (620) build value in the existing base. If churn removes 30% of that base annually, the ARPU growth achieved in the retained segment is offset by the lost revenue from departing customers. The Stimulation lever loses efficiency every time the Retention lever is leaking.
Brand investment returns less. Customers who experience the brand, develop loyalty, and become advocates — the highest-value customers in any archetype — are disproportionately long-tenure. High churn eliminates the customers most likely to generate word-of-mouth, referral, and community value before those effects compound.
The canonical formulation: every 1% improvement in churn releases capacity across the entire marketing system. Every 1% worsening locks it.
Statements for self-assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Interpreting your scores
Negative scores (−1 to −3): Churn is unmeasured, above industry benchmark, deteriorating, or the CRC/CAC balance signals over-spending to compensate for an upstream product or experience problem. The leaky bucket is draining value from every other marketing investment. The priority is measurement first, then diagnosis of root cause, then targeted retention investment.
Positive scores (+1 to +3): Churn is tracked at cohort level, below industry average, improving through deliberate retention strategy, and the CRC/CAC ratio is proportionate. The Retention lever is functioning. Lifetime is extending and with it the total value generated by the acquired customer base.
Strategic Role
Fatal Brake for A4 (Stagnant Leader): The Stagnant Leader has a large installed base and a growth problem. In this context, churn is the existential threat: the customer base that the strategy depends on for ARPU growth and market share maintenance is being depleted. A weak 630 for A4 means the strategy is trying to grow value from an asset that is shrinking. Sage and Peloton both faced this dynamic — large bases, rising churn in the core segment, requiring fundamental retention intervention before any growth strategy could take hold. The leaky bucket is A4's most dangerous structural problem.
Primary Accelerator for A7 (Scale-Up Guardian): Hypergrowth creates a retention stress test. The service and experience that earned loyalty at 10,000 customers often strains at 100,000. New customers are acquired faster than the service model can be extended to them. Churn rises not because the product has degraded but because the delivery system hasn't scaled alongside the customer base. Airbnb and Spotify both navigated this: the core experience had to be systematically re-engineered at each order of magnitude of scale to prevent churn from rising with growth. For A7, Lifetime is a Primary Accelerator because protecting it during hypergrowth is the strategic capability that separates sustainable scale from growth that exhausts itself.
Secondary Accelerator for A3 (Brand Evangelist): The Brand Evangelist archetype depends on deep customer relationships that generate advocacy, word-of-mouth, and community identity. These effects compound over time — a customer in year five generates more referral value, more community participation, and more brand evangelism than a customer in year one. High churn truncates the compounding before it reaches full value. A strong 630 for A3 doesn't just protect revenue; it protects the community depth that makes the evangelism archetype function.
Secondary Accelerator for A6 (Value Harvester): In a declining market, the customer base is the asset being harvested. Every churned customer is an irreplaceable unit of that asset — they cannot be replaced by acquisition in a contracting market. Lifetime extension is the primary mechanism for extracting more value from the existing base before it naturally erodes. Combined with ARPU growth (620), extended Lifetime is what allows an A6 to generate increasing value from a shrinking pool.
Growth Driver for A6 (Stability Lock-in): When the Value Harvester deploys the Stability Lock-in growth driver, Lifetime extension is the primary mechanism. The strategy: make it structurally easier to stay than to leave — through contract architecture, integration depth, switching cost design, and service quality that makes alternatives unattractive. The 630 score for A6 measures whether this lock-in is producing measurable lifetime extension, not just whether the tactic exists.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean has never formally defined what constitutes a churned customer. The founder believes churn is "low" based on the intuition that most regular customers seem to keep booking — but this is not measured. There is no definition of what counts as "active": a customer who booked six cleans last year and none this year is not flagged anywhere in the system. The CRM migration that improved ARPU measurement has created a transaction log, but no cohort analysis has been run. The team cannot state its churn rate, cannot compare it to any benchmark, and has no historical trend data. Retention activities consist of a birthday discount email sent to customers on the anniversary of their first booking — not a strategy, but a single tactic with no measured impact. The leaky bucket is running; the size of the leak is unknown.
Score: +1 to +2 (Developing) Green Clean has defined its churn metric: a customer is considered churned if they have not booked a clean within 90 days when their historical booking frequency was fortnightly or more often. Applying this definition retroactively, the team has calculated a 12-month churn rate of 22%. A benchmark research exercise has established that comparable residential cleaning services in the region average 28–32% annual churn, placing Green Clean's current rate below market average — a stronger position than the team expected. The churn trend over the past six months shows improvement: the monthly churn rate has fallen from 2.1% to 1.7% since the introduction of the subscription model (which provides an explicit renewal commitment that reduces passive drift). CRC has been formally calculated for the first time: the total cost of the birthday discount programme, the proactive re-engagement emails, and the subscription management time runs at approximately 8% of revenue. CAC runs at approximately 14% of revenue. Combined, CAC + CRC is 22% — within the 20–30% mature business benchmark. The measurement exists. The trend is positive. The investment ratio is sound.
Score: +2 to +3 (Strong) Green Clean's churn management is cohort-level and predictive. Monthly cohort analysis tracks churn by acquisition channel, service tier, and customer tenure — revealing that customers acquired through the referral programme have a 12-month churn rate of 11%, versus 31% for customers acquired through paid social. This channel-level insight has redirected acquisition investment: referral programme budget has increased, paid social has been reduced, and the mix shift is producing compounding lifetime improvement. Annual churn has fallen from 22% to 14% over 24 months — from slightly below the market average benchmark to substantially below it. The 14% rate produces an average customer lifetime of 7.1 years, compared to 4.5 years at the 22% baseline: a 58% increase in expected lifetime at the same ARPU, without acquiring a single additional customer. CRC has risen slightly to 11% of revenue as the proactive at-risk customer programme has been built out — but combined with CAC of 12%, the total remains within the 20–30% benchmark at 23%. The churn model now includes a predictive layer: customers who miss two consecutive bookings are flagged and receive a personal outreach call within 7 days. The at-risk recovery rate is 41%.
Connected dimensions
Lifetime does not operate in isolation. Four dimensions connect most directly:
140 — Engagement: Engagement predicts lifetime. The most reliable leading indicator of churn is declining engagement — a customer who is using the product less, participating in fewer touchpoints, and showing reduced activity before formally cancelling or lapsing. A strong 140 score functions as an early-warning system for 630: engagement data identifies at-risk customers before they appear in churn statistics. When 630 scores are weak despite retention investment, the diagnostic starts at 140.
420 — Experience: Experience quality determines whether customers stay. Churn that cannot be explained by price sensitivity, competitive alternatives, or life circumstances is almost always an experience failure — something in the journey is consistently disappointing customers in a way that accumulates until departure. A rising CRC without a corresponding improvement in 633 is the signal: the retention spend is compensating for an experience problem that 420 needs to solve. Spending more to keep customers who are leaving because of a broken experience is the wrong lever.
610 — Acquisition: CAC must be justified by Lifetime. The CLTV/CAC ratio (610) depends on how long the acquired customer stays. A short lifetime makes an otherwise healthy CAC structurally unprofitable. The two dimensions must be scored and managed in relation to each other: improving 630 improves the return on 610 investment without changing the acquisition economics.
620 — ARPU: ARPU × Lifetime = total customer value. This is the fundamental identity connecting the two Stimulation and Retention lever metrics. Growing ARPU in a high-churn environment is a partial strategy. Extending Lifetime with flat ARPU is also partial. The combination — ARPU rising and Lifetime extending simultaneously — is the full expression of customer value maximisation, and the strategic goal of the archetypes where both dimensions appear in the Vital 8.
Conclusion
Lifetime is the dimension that determines how much time each customer relationship has to generate value. Every investment in acquisition, ARPU growth, experience quality, and brand building operates inside the window that Lifetime defines. Shorten that window and every upstream investment returns less. Extend it and the compounding begins.
The diagnostic test is the churn arithmetic: calculate your current churn rate, convert it to a customer lifetime using the 1/churn formula, and then multiply that lifetime by your ARPU. The result is the total expected value of a newly acquired customer. Now reduce the churn rate by 5 percentage points and recalculate. The difference between those two numbers — achievable with deliberate retention investment — is what Lifetime management is worth commercially.
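Under the simplifying assumptions of flat ARPU and no discounting (both simplifications), the two-number comparison described above takes a few lines. The figures are invented for illustration:

```python
def expected_customer_value(annual_arpu: float, annual_churn: float) -> float:
    """Total expected value of a newly acquired customer: ARPU x (1 / churn)."""
    return annual_arpu * (1 / annual_churn)

ARPU = 1_200.0  # hypothetical annual revenue per customer

current = expected_customer_value(ARPU, annual_churn=0.25)   # 4-year lifetime
improved = expected_customer_value(ARPU, annual_churn=0.20)  # 5-year lifetime
print(f"current expected value: {current:,.0f}")             # 4,800
print(f"with 5pp less churn:    {improved:,.0f}")            # 6,000
print(f"difference: +{improved / current - 1:.0%}")          # +25%
```

Even this undiscounted sketch shows the asymmetry: a 5-point churn change is a 25% change in the value of every customer the acquisition engine brings in.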
If you have not run that calculation, 631 scores negative. Everything else follows from measurement.
Sources
Frederick F. Reichheld, The Loyalty Effect: The Hidden Force Behind Growth, Profits, and Lasting Value, Harvard Business School Press, 1996 — foundational churn-to-value mathematics
Robbie Kellman Baxter, The Forever Transaction, McGraw-Hill Education, 2020 — subscription and retention architecture
Marketing Canvas Method, Appendix E — Dimension 630: Lifetime, Laurent Bouty, 2026
About this dimension
Dimension 630 — Lifetime is part of the Metrics meta-category (600) in the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget/ROI (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - ARPU
ARPU measures whether you are maximising revenue from each customer through frequency, spend, and value growth. Learn the four properties, the revenue equation, and why measurement capability is the prerequisite everything else depends on.
About the Marketing Canvas Method
This article covers dimension 620 — ARPU, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net · Get the book.
In a nutshell
ARPU — Average Revenue Per User — is the metric that scores whether you are extracting maximum value from each customer relationship, not just from your customer base in aggregate. The dimension scores four properties: measurement capability (do you know who is buying and how much?), purchase frequency (are customers buying often enough?), average spend per transaction (is the value per purchase competitive?), and trend (is ARPU growing over time?).
ARPU is the Stimulation lever's primary metric. When the strategic goal is to grow revenue by getting more value from existing customers rather than acquiring new ones, ARPU is the scoreboard. Revenue can grow with a flat or even shrinking customer base if ARPU is rising. That possibility is only accessible to companies that can measure it.
Introduction
Every marketing strategy has a revenue growth direction. Acquiring more customers (Acquisition lever). Keeping them longer (Retention lever). Getting more value from each one (Stimulation lever). ARPU is what the Stimulation lever measures — the revenue generated per active customer, and whether it is moving in the right direction.
The dimension is not about whether you understand the concept of average revenue. It is about whether your business has the instrumentation to know, at the individual customer level, who is buying what, how often, and at what transaction value — and whether deliberate strategies are moving those numbers upward over time.
What does the Marketing Canvas score in ARPU?
The dimension scores four properties — measurement capability, purchase frequency, average spend per transaction, and trend — each a distinct layer of revenue-per-customer health.
Measurement capability is the prerequisite that everything else depends on. Can you measure ARPU, because you know who is buying and using your products and services? Companies that sell through intermediaries — retailers, distributors, resellers, channel partners — frequently cannot measure ARPU at the customer level. They know what they ship to the channel. They do not know who buys it, how frequently that person returns, or what they spend across the relationship. The method's position is unambiguous: strategy built on unmeasurable metrics is fiction. If you cannot measure ARPU, you cannot manage it, benchmark it, or improve it with any precision. A negative measurement capability score is not a data problem — it is a business model problem. The route to a positive score typically requires a direct relationship with the customer, whether through owned channels, a loyalty programme, direct distribution, or subscription architecture.
Purchase frequency — is the average number of purchases per customer per period above industry average? Frequency is one of the two levers within ARPU that can be deliberately moved, the other being average transaction value. Frequency improvement strategies — subscription models, loyalty programmes, replenishment triggers, behavioural nudges, service bundling — all work by increasing the number of times a customer transacts, not the size of each transaction. A weak frequency score relative to industry benchmarks suggests the customer's potential buying rhythm is not being captured.
Average spend per purchase — is the average transaction value per customer above industry average and above direct competitors? Transaction value improvement strategies — upselling to premium tiers, cross-selling complementary products, bundling, value-based pricing discipline — all work by increasing the revenue extracted from each interaction, independent of how often it occurs. A weak score here often traces upstream to dimension 330 (Prices) or 310 (Features): either the pricing architecture is not capturing full willingness to pay, or the product range does not provide sufficient upsell surface.
Trend is the most strategic of the four properties because it reveals direction, not just position. A current ARPU above industry average is a position. A rising ARPU trend is a momentum signal. The method scores both: where you are (frequency and spend, benchmarked against industry) and where you are going (trajectory over time). A company with below-average ARPU but a strongly positive trend is in a different strategic position from one with above-average ARPU that has been flat for two years.
ARPU in the revenue equation
The Marketing Canvas places ARPU explicitly in the revenue model. In the method's framework:
Revenue = AOP × NT × ATV × 12 (for subscription or recurring models)
Where:
AOP = Active Operating Periods (the number of active customers)
NT = Number of Transactions per customer per period
ATV = Average Transaction Value per purchase
× 12 = annualisation factor
ARPU captures the NT × ATV components. When ARPU grows — either through frequency (NT) or transaction value (ATV) — revenue grows, even if AOP is flat or declining. This is the commercial logic that makes ARPU the primary growth mechanism for archetypes whose customer base is stable or contracting.
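The identity can be sketched directly, assuming NT is measured per month (which is what the ×12 annualisation implies); the numbers here are invented:

```python
def annual_revenue(aop: int, nt_per_month: float, atv: float) -> float:
    """Revenue = AOP x NT x ATV x 12, the method's recurring-revenue form."""
    return aop * nt_per_month * atv * 12

# Flat customer base, but ARPU (NT x ATV) moving: revenue still grows.
baseline = annual_revenue(aop=10_000, nt_per_month=1.0, atv=80.0)
stimulated = annual_revenue(aop=10_000, nt_per_month=1.2, atv=88.0)
print(f"revenue growth with zero new customers: {stimulated / baseline - 1:.0%}")
```

A 20% frequency lift combined with a 10% transaction-value lift compounds to roughly 32% revenue growth here, with AOP untouched — the Stimulation lever in isolation.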
The implication: a business that is not growing its customer count can still grow revenue if it is managing ARPU deliberately. This is not a consolation prize for low-acquisition businesses — it is the preferred growth strategy for several archetypes, particularly A6 (Value Harvester), where the customer base is the asset to be maximised before it erodes.
The measurement prerequisite in practice
Measurement capability has a compounding effect on all other ARPU properties. A company that cannot measure ARPU cannot validly assess frequency, average spend, or trend — because all three require knowing who is buying and at what level.
The diagnostic questions are practical: Do you have a direct relationship with your end customers, or does an intermediary sit between you and them? Can you identify individual customers across multiple transactions and aggregate their behaviour over time? Do you have a system — CRM, loyalty programme, subscription platform, or equivalent — that captures purchase identity at the transaction level? Can you calculate, for any given customer, how many times they have purchased and at what average value?
If the answer to any of these is no, measurement capability scores negative. The consequence is not just a low ARPU score — it is the strategic constraint that Stimulation lever strategies are inaccessible without the infrastructure to identify and act on individual customer behaviour.
Statements for self-assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Interpreting your scores
Negative scores (−1 to −3): ARPU is unmeasured, below industry benchmark, declining, or all three. The most common root cause is 621 — the measurement infrastructure does not exist, making deliberate ARPU strategy impossible. If 621 is negative, it must be resolved before 622, 623, or 624 can be meaningfully improved.
Positive scores (+1 to +3): ARPU is tracked at the individual customer level, above competitive benchmarks on frequency and transaction value, and showing a positive trend driven by deliberate cross-sell, upsell, or subscription strategies. The Stimulation lever is active and measurable.
Strategic Role
Primary Accelerator for A6 (Value Harvester): The Value Harvester archetype faces a structurally declining customer base — through market contraction, category disruption, or strategic wind-down. The core mission is to extract maximum revenue from the remaining base before it erodes further. ARPU is the primary instrument: if you cannot grow the customer count, you must grow what each customer generates. Nokia's PC division, IBM's legacy hardware operations, the physical media businesses of the early 2000s — all faced this equation. ARPU is not a growth story in A6; it is a survival and value extraction strategy. A weak 620 score for A6 means the value in the existing base is being left on the table.
Secondary Accelerator for A2 (Efficiency Machine): Efficiency businesses win on cost structure, but ARPU discipline prevents the trap of growing volume at declining transaction values. A2 companies that allow average spend per purchase to drift below market — through discount dependency, race-to-bottom pricing, or failure to develop premium tiers — sacrifice the margin that makes operational efficiency commercially meaningful. ARPU keeps the revenue per unit healthy while the cost structure is being optimised.
Secondary Accelerator for A4 (Stagnant Leader): A stagnant leader has a large installed base that is not growing. ARPU is the mechanism through which that base generates increasing revenue without acquisition investment. Upsell programmes, premium tier introduction, frequency stimulation through loyalty architecture — these are the A4 ARPU strategies. Sage and Peloton both faced this challenge: large customer bases with flat or declining ARPU, requiring deliberate Stimulation lever investment to restore revenue growth from existing relationships.
Secondary Accelerator for A8 (Niche Expert): In a niche, customer count is bounded by market definition. ARPU is the primary revenue growth mechanism once the addressable niche has been substantially penetrated. Deep expertise enables premium pricing (623) and expanded service scope that generates frequency (622). Hermès cannot grow by acquiring more customers — the niche is intentionally small. It grows ARPU by deepening the relationship, expanding the product universe, and maintaining pricing discipline that competitors in adjacent categories cannot match.
Growth Driver for A4 (Premium Stimulation): When the A4 archetype deploys the Stimulation growth driver, ARPU is the scorecard. The strategic question shifts from "how do we acquire more customers?" to "how do we get more value from the customers we have?" Premium service tiers, bundle architecture, frequency programmes — all converge on the NT × ATV components of the revenue equation. A positive 624 trend is the evidence that the Stimulation strategy is working.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean operates a direct service model — customers book cleans through the website and pay directly — so the measurement capability question should be straightforward. In practice, bookings are tracked in a spreadsheet by date and postcode, not by named customer. The team cannot produce a list of customers sorted by revenue, frequency, or tenure. They know the total revenue per month; they do not know which customers generate that revenue or how it has changed at the individual level. Purchase frequency is estimated at "every two to three weeks per regular customer" — an informal observation, not a measured figure. Average spend per clean is known (€89 average booking value) but not benchmarked against competitors in any formal way. There is no deliberate strategy to increase either frequency or transaction value. ARPU is in the system conceptually but is not being managed.
Score: +1 to +2 (Developing) Green Clean has migrated customer bookings to a CRM system that associates every transaction with a named customer. For the first time, the team can calculate individual-level purchase frequency and annual revenue per customer. The results are diagnostic: the top 20% of customers (by annual revenue) generate 61% of total revenue; the bottom 30% have purchased only once. Average frequency for regular customers is 2.1 cleans per month; the industry benchmark for comparable residential services is estimated at 1.8, placing Green Clean slightly above average. Average transaction value is €89, against a benchmarked competitor average of €82 — above market. The ARPU trend for the past 12 months is flat: frequency has been stable, average spend has not moved. The measurement is now in place. The strategy to move the trend is the next step: a bundled subscription offer (quarterly commitment at a discount) is under development to convert sporadic customers into regular ones and improve frequency among the bottom segment.
Score: +2 to +3 (Strong) Green Clean's ARPU management is fully instrumented and actively growing. The subscription model introduced 18 months ago has migrated 44% of active customers to monthly or quarterly commitments, increasing average purchase frequency from 2.1 to 2.7 cleans per month across the base. Average transaction value has grown from €89 to €104, driven by a tiered service architecture — Standard Clean, Deep Clean, and the Full Indoor Health Audit — that provides deliberate upsell surface at every booking interaction. The Indoor Health Audit, priced at €220, is purchased by 28% of active customers at least once per year, contributing significantly to ATV uplift. ARPU trend for the past 12 months shows 17% year-on-year growth. The method's revenue equation is operating as designed: AOP is growing modestly (+8%), but the NT × ATV component is growing at more than twice that rate, meaning revenue growth outpaces customer acquisition growth. The Stimulation lever is doing its work.
Connected dimensions
ARPU does not operate in isolation. Four dimensions connect most directly:
310 — Features: Features enable cross-sell and upsell. The product or service range must provide sufficient depth to give customers a reason to increase their transaction value or expand their relationship. A company with a single product at a single price point has no upsell surface. Features (310) is the upstream dimension that determines the ceiling of what ARPU can reach through 623 (average spend) improvement.
330 — Prices: Pricing architecture directly affects ARPU. A pricing structure with only one tier and no premium options constrains transaction value regardless of customer willingness to pay. Value-based pricing discipline — ensuring that price reflects the full value delivered, not the competitor floor — is the upstream condition for 623 to score positively. The 330 and 623 scores move together: weak pricing architecture produces a ceiling on transaction value that no frequency strategy can compensate for.
420 — Experience: Better experience supports higher ARPU. Customers who have an outstanding experience are more likely to purchase more frequently, less likely to resist premium tier offers, and more resistant to competitor alternatives that might siphon frequency away. The 420 score is an upstream predictor of 622 and 624 performance. Experience degradation is typically visible in ARPU trend data before it appears in churn data.
630 — Lifetime: ARPU × Lifetime = total customer value. This is the fundamental identity that connects the two Metrics dimensions most directly. A high ARPU with low lifetime produces a different strategic outcome than a moderate ARPU with high lifetime. The method requires both to be scored and interpreted in relation to each other — and the CLTV/CAC ratio (610) cannot be calculated without knowing both components.
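The identity can be made concrete with a minimal sketch. The figures and function names below are illustrative (not the method's own tooling), and the model is the simple undiscounted form: lifetime as 1 divided by churn, customer lifetime value as ARPU × lifetime, and the CLTV/CAC ratio from both.

```python
def customer_lifetime(churn_rate: float) -> float:
    """Average customer lifetime in years, given an annual churn rate (0 < churn <= 1)."""
    return 1.0 / churn_rate

def cltv(annual_arpu: float, churn_rate: float) -> float:
    """CLTV = ARPU x Lifetime (undiscounted simplification)."""
    return annual_arpu * customer_lifetime(churn_rate)

# Illustrative figures, not taken from the article:
annual_arpu = 1200.0   # EUR per customer per year
churn = 0.25           # 25% annual churn -> 4-year average lifetime
cac = 900.0            # customer acquisition cost

value = cltv(annual_arpu, churn)   # 1200 * 4 = 4800
ratio = value / cac                # 4800 / 900 ~ 5.33
print(f"Lifetime: {customer_lifetime(churn):.1f} years")
print(f"CLTV: {value:.0f}, CLTV/CAC: {ratio:.2f}")
```

The sketch also shows why the two components must be read together: halving churn doubles lifetime, which doubles CLTV at constant ARPU, so the same CLTV/CAC target can be reached through either lever.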
Conclusion
ARPU is the dimension that determines whether the customer base you have is generating the revenue it is capable of generating. Every acquired customer represents a revenue potential. The gap between that potential and actual revenue is the ARPU opportunity — the difference between what the customer could spend with you and what they do.
The strategic discipline the method requires begins with measurement: knowing who is buying, at what frequency, at what transaction value. Without that, every ARPU strategy is a hypothesis. With it, the Stimulation lever becomes the most capital-efficient growth mechanism available — growing revenue without the cost and risk of acquiring new customers.
The single most diagnostic question: can you name your top 20% of customers by annual revenue right now, without running a manual query? If the answer is no, the measurement prerequisite hasn't been met. That is where 620 improvement begins.
Sources
Robbie Kellman Baxter, The Membership Economy, McGraw-Hill Education, 2015 — foundational framework for frequency and recurring revenue strategy
Madhavan Ramanujam & Georg Tacke, Monetizing Innovation, Wiley, 2016 — pricing architecture and willingness-to-pay instrumentation
Marketing Canvas Method, Appendix E — Dimension 620: ARPU, Laurent Bouty, 2026
About this dimension
Dimension 620 — ARPU (Average Revenue Per User) is part of the Metrics meta-category (600) in the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget/ROI (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - Influencers
The Influencers dimension of the Marketing Canvas scores four properties — purpose alignment, goal clarity, authenticity, and long-term measurement. Learn why follower count is the wrong selection criterion.
About the Marketing Canvas Method
This article covers dimension 540 — Influencers, part of the Conversation meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes.
Full framework reference at marketingcanvas.net → · Get the book →
In a nutshell
Influencers is the dimension that scores whether the people carrying your brand's message to new audiences are doing so with genuine conviction — or merely performing it for a fee. The distinction matters strategically because an influencer reading a script is advertising with a human face. It produces awareness. It creates no trust. An influencer genuinely using and recommending the product in their own language creates the most powerful form of proof available: peer endorsement.
The Marketing Canvas scores four properties — purpose alignment, goal clarity, authenticity, and long-term measurement — plus a sustainability criterion. The single most diagnostic question: does your company allow influencers creative freedom, or does it script and control the content until the authenticity is gone?
Introduction
Influencer marketing has matured from a novelty tactic into a structural component of how brands earn credibility at scale. But the term "influencer" has been so narrowly associated with social media content creators that it often obscures the more strategically significant question: who are the people whose opinions your target customers actually trust — and are those people carrying your brand's message?
The Marketing Canvas definition is deliberately broad. An influencer is anyone whose voice carries authority with your target audience. That includes social media creators with large followings. It also includes industry analysts, thought leaders, professional advisors, satisfied customers with relevant networks, and community leaders. The dimension applies universally across industries; only the cast changes.
What does the Marketing Canvas mean by Influencers?
The dimension scores four canonical properties, plus a fifth sustainability criterion:
541 — Purpose alignment: Are you working with influencers whose values genuinely match your brand purpose, and who function as authentic ambassadors rather than paid distribution channels? The selection criterion the method scores is not follower count — it is audience alignment. An influencer with 8,000 followers who are all parents concerned about home safety is more strategically valuable to Green Clean than an influencer with 800,000 general lifestyle followers. Purpose alignment is also a safeguard: an influencer who doesn't believe in the brand will eventually say so, or simply perform inauthentic content that the audience can detect.
542 — Goal clarity: Have you defined clear and actionable objectives for your influencer activity, connected to your overall marketing goals? Influencer activity without defined goals produces vanity metrics — reach, impressions, likes — that feel significant and are difficult to connect to commercial outcomes. The method scores whether goals are specific (what change in brand perception, consideration, or behaviour is the influencer activity targeting?) and whether those goals are aligned with the archetype's strategic priorities.
543 — Authenticity: Do you let influencers develop content for their audience in their own voice? This is the criterion that separates peer endorsement from advertising-with-a-face. Scripted influencer content is recognisable to audiences, produces the engagement metrics of organic content, and delivers the trust levels of a display ad. Authentic content — where the influencer has genuine experience with the product and describes it in their own language, to their own community, with their own perspective — transfers the influencer's credibility to the brand. The method scores whether the company has the discipline to allow this, or whether legal, brand, and marketing review processes have controlled the authenticity away.
544 — Long-term measurement: Have you set long-term metrics for your influencer relationships, prioritising indicators of brand impact and community engagement over short-term campaign performance? Transactional influencer strategies — one campaign, pay-per-post, move on — optimise for reach and produce no compounding value. Long-term relationships with purpose-aligned influencers compound: the influencer's knowledge of the brand deepens, the audience's association between influencer and brand strengthens, and the credibility transfer accumulates over time. Annual ROI measured in brand consideration and community growth is the right measurement frame. Post-level engagement rates are a signal; they are not the strategy.
545 — Sustainability: Are you working with influencers whose behaviour is consistent with sustainability principles, and are you minimising the environmental and ethical footprint of your influencer activity? This includes both the influencer's public conduct (a sustainability brand partnering with an influencer whose behaviour contradicts environmental values is a proof problem, not just a PR problem) and the operational sustainability of the programme.
The authenticity criterion in detail
The canonical distinction the method draws is worth holding precisely:
Influencer as advertising vehicle: The brand provides a brief, often a script, product talking points, and required disclosures. The influencer posts. The audience receives brand messaging delivered through a trusted human face. Awareness is built. Trust is not transferred — the audience recognises the commercial transaction and adjusts their interpretation accordingly. This is paid media with a warmer tone. It is scored as paid media efficiency, not as peer endorsement.
Influencer as genuine ambassador: The influencer has direct experience with the product or brand. They speak about it in their own language, to their own community, from their own perspective. They may be compensated, but the compensation does not dictate the content. The audience receives a recommendation from someone they trust, and that recommendation carries the influencer's personal credibility. Trust is transferred. This is the most powerful form of proof available — it is scored under dimension 340 (Proof) as well as 540, because it functions as both endorsement and content strategy.
The strategic failure the method diagnoses is companies that start with the second intention — genuine ambassadors — and then systematically dismantle it through approval workflows, mandatory messaging, legal review, and creative constraints until what arrives in the feed is indistinguishable from sponsored content. The 543 score measures whether the company has allowed authenticity to survive the internal process.
Influencers in B2B
The framing of influencer strategy as a consumer social media tactic obscures one of the most commercially significant applications of the dimension: B2B influence.
In B2B contexts, influencers look different but function identically. The trusted voice whose opinion shapes purchase decisions is not a content creator with an Instagram following — it is the Gartner analyst who classifies your platform in the Magic Quadrant, the industry conference speaker who cites your methodology in a keynote, the experienced CTO who posts about their implementation experience on LinkedIn, or the respected consultant who recommends your approach to their clients.
Each of these operates on the same structural logic as consumer influencer marketing: they have an audience that trusts them, and their endorsement transfers credibility to the brand. The selection criteria are the same — purpose alignment, authenticity, goal clarity, long-term relationship orientation. The content format is different. The strategic function is identical.
A B2B company that scores 540 by only considering social media creators has misunderstood the dimension. The question is: who do your buyers trust before they make a decision, and are those people encountering your brand in a way that earns their authentic endorsement?
Statements for self-assessment
Score each of the five sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
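The scoring rule can be sketched as a small helper. This is one reading of the rule as stated (a straight mean, with an exact-zero average forced to −1); the function name and validation are illustrative, not official Marketing Canvas tooling.

```python
def dimension_score(sub_scores):
    """Average the sub-question scores; each must be in [-3, +3] with zero excluded.
    An average of exactly zero rounds to -1, per the method's no-neutral rule."""
    if not sub_scores:
        raise ValueError("at least one sub-score is required")
    for s in sub_scores:
        if s == 0 or not -3 <= s <= 3:
            raise ValueError("each sub-score must be in [-3, +3], zero excluded")
    avg = sum(sub_scores) / len(sub_scores)
    return -1.0 if avg == 0 else avg

# Five sub-scores for 541-545 (illustrative values):
print(dimension_score([2, 1, -1, 2, 1]))    # 1.0
print(dimension_score([3, -3, 2, -1, -1]))  # zero average rounds to -1.0
```

The forced round-down on a zero average reflects the method's design: a dimension is never neutral, so an evenly split score is treated as a weakness to address rather than a pass.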
Interpreting your scores
Negative scores (−1 to −3): Influencer activity is transactional, follower-count-selected, or script-controlled. Awareness may be being generated; trust is not being transferred. The target audience's most trusted voices are not carrying the brand's message. Commercial outcomes from influencer spend are difficult to attribute and likely low.
Positive scores (+1 to +3): Influencer relationships are purpose-aligned, long-term, and authenticity-preserving. The people your target customers trust are encountering your brand, understanding it at depth, and endorsing it in their own voice. The endorsement functions as peer proof (340), not just reach. Measurement is oriented toward long-term brand impact rather than campaign-level vanity metrics.
Strategic Role
Growth Driver for A1 (Disruptive Newcomer): A disruptor introduces something the market hasn't seen before. The brand has no heritage credibility to draw on, and paid media cannot manufacture trust for an unknown proposition. Third-party voices — early adopters, category-adjacent influencers, industry observers — are the primary mechanism through which trust is established before the brand has earned it through scale. For A1, 540 scores whether the company has deliberately seeded credible voices with genuine product access, or is relying on paid awareness campaigns that the market hasn't yet decided to trust.
Growth Driver for A7 (Scale-Up Guardian): Rapid growth creates a credibility maintenance challenge. The influencer community that endorsed the brand at launch may not be the right community at scale. New segments require new trusted voices. New markets require locally credible advocates. 540 for A7 scores whether the influencer programme is scaling in proportion to the business — maintaining authentic third-party validation as the brand reaches audiences that have no prior relationship with it.
Growth Driver for A9 (Category Creator): Creating a category requires teaching the market that the category exists and why it matters. Influencers are category educators — trusted voices who explain the new concept to their communities in terms those communities can understand. Green Clean's indoor health protection category is taught more effectively by a parent blogger who has experienced the Family Health Report than by any brand-produced content. For A9, 540 is the dimension that converts category language (510) and category stories (520) into peer-endorsed understanding at scale.
Secondary Brake for A3 (Brand Evangelist): The Brand Evangelist archetype is built on authentic community and tribal identity. The wrong influencer partnerships — commercial, follower-count-selected, scripted — can actively dilute the authenticity that the tribe values. Patagonia's community credibility would be undermined by paid lifestyle influencers who don't genuinely share environmental convictions. Harley-Davidson's tribal identity would be weakened by sponsored content from celebrities who don't ride. For A3, 540 is a brake rather than an accelerator: the risk is not absence of influencers but the wrong influencers, who signal to the tribe that the brand has prioritised reach over authenticity.
Secondary Accelerator for A8 (Niche Expert): A niche expert's authority rests on being recognised as the best-in-segment option by the people whose opinion the segment trusts. Expert influencers — analysts, specialists, practitioners with deep credibility in the niche — validate that authority in ways the brand cannot self-certify. A Gartner mention, a specialist publication citation, a respected practitioner's recommendation: these carry the proof weight that a niche expert's positioning requires.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean has run two influencer campaigns in the past year, both sourced through a micro-influencer marketplace. The selection criterion was follower count and cost-per-post. Neither influencer had demonstrated prior interest in indoor health, family safety, or sustainability. Both received a product brief, required talking points, and a mandatory disclosure script. The resulting posts were published, received moderate engagement from the influencers' general lifestyle audiences, and generated eleven visits to the Green Clean booking page. No relationship continues beyond the campaign. The brand paid for reach. It received no credible endorsement. The audience that matters — parents actively researching indoor health protection — did not encounter Green Clean through any voice they trust on the subject.
Score: +1 to +2 (Developing) Green Clean has identified three micro-influencers whose existing content demonstrates genuine alignment with the indoor health protection job: a parent blogger who writes about reducing chemical exposure in family environments, a wellness content creator who has reviewed cleaning product ingredients, and a local community leader active in sustainable home practices. All three have been approached with a relationship brief rather than a campaign brief — the brand explained its mission, offered product access and service experience, and gave full creative freedom. Two of the three have published content. The content is recognisably authentic: it uses the influencers' own language, references their personal experience with the Family Health Report, and frames the endorsement around their own concerns rather than Green Clean's messaging. Goals are partially defined — brand consideration in the target segment — but measurement is informal. The compounding value of long-term relationships has not yet been built.
Score: +2 to +3 (Strong) Green Clean's influencer programme functions as an ambassador system rather than a campaign channel. Eight long-term partners — all purpose-aligned, all with genuine indoor health or sustainability credibility — have direct experience with the brand's service and the Eco-Proof Report. Each creates content in their own format, on their own schedule, in their own language. Green Clean provides product access, behind-the-scenes access to methodology, and early information about service developments. Creative briefs are replaced by relationship conversations. The audience each influencer reaches is the specific segment Green Clean most needs to reach: parents who are already researching indoor air quality and family health. Annual measurement tracks brand consideration uplift and community growth rather than post-level engagement. Several influencers have become genuine advocates — their personal endorsement pre-dates and exists independently of any commercial arrangement, which their audiences can distinguish. The programme has generated earned media: two of the influencers' Family Health Report posts were cited by a national parenting publication, extending the endorsement to a credibility tier the brand could not have accessed through paid media.
Connected dimensions
Influencers does not operate in isolation. Four dimensions connect most directly:
520 — Stories: Influencers tell stories. The content an influencer creates is a story — about their own experience, about the brand's relevance to their audience, about the job the product helped them get done. A strong 520 (content strategy) creates the narrative framework; a strong 540 extends that framework through voices the brand doesn't own. Influencer content that follows the customer-as-protagonist arc (520) is more compelling than brand-prompted product description.
340 — Proof: Influencer endorsement is a form of proof. Peer endorsement is the highest-credibility proof type available — it carries the influencer's personal reputation as collateral. A strong 543 (authenticity) score means the influencer content is functioning as genuine endorsement, not sponsored content. The overlap between 540 and 340 is significant: the same influencer relationship that scores in 540 is simultaneously generating proof assets (testimonials, case study narratives, third-party validation) that score in 340.
530 — Media: Influencers operate across shared and earned media. Organic influencer content is shared media when it generates community conversation and earned media when it results in press coverage or independent citation. A strong 530 (media system) is built to receive and amplify authentic influencer content — the owned media infrastructure captures the referral traffic, the email system nurtures the audience that arrives, and the earned media compounds from publications that cite influencer endorsements.
230 — Values: Influencers must share brand values — not just claim to. The 545 (sustainability) sub-question is the clearest expression of this, but the values alignment requirement extends to the full 230 dimension. An influencer whose public behaviour contradicts the brand's stated values is not just a PR risk; it is a proof problem. The audience infers that the brand's values are performative if the people it aligns with don't live them.
Conclusion
The Influencers dimension scores something more fundamental than campaign reach or follower count. It scores whether the people whose opinions your target customers trust are carrying your brand's message — and whether they are doing so because they genuinely believe it, or because they were paid to say it.
The strategic test is the authenticity question: if the brand removed all mandatory messaging and creative constraints, what would the influencer say? If the answer is "probably the same thing, in their own words" — the relationship is an asset. If the answer is "nothing, or something very different" — the brand has a paid distribution channel, not an ambassador.
Building the second type of relationship takes longer, demands more selectivity, and requires internal discipline to resist the temptation to control the message. The commercial return — trust transferred, proof generated, community formed — is structurally more valuable than reach purchased and forgotten.
Sources
Jonah Berger, Contagious: Why Things Catch On, Simon & Schuster, 2013
Mark Schaefer, Known: The Handbook for Building and Unleashing Your Personal Brand in the Digital Age, Schaefer Marketing Solutions, 2017
Marketing Canvas Method, Appendix E — Dimension 540: Influencers, Laurent Bouty, 2026
Marketing Canvas - Media Strategy
Media is the distribution layer of the Marketing Canvas. Learn how the four media types — owned, earned, shared, paid — work as a system, not silos, and why sequence matters.
About the Marketing Canvas Method
This article covers dimension 530 — Media Strategy, part of the Conversation meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes.
Full framework reference at marketingcanvas.net → · Get the book →
In a nutshell
Media is the distribution layer of the Marketing Canvas Method — the system that determines how far your stories travel, who receives them, and at what cost. The dimension scores four media types: owned, earned, shared, and paid. The method's critical insight is that these four types must function as an orchestrated system, not independent silos. When they do, each reinforces the others. When they don't, you are paying to compensate for what a system would have delivered for free.
The sequencing principle is canonical: build owned first, then use it to earn credibility, generate sharing, and amplify with paid. Companies that start with paid media before building owned media are paying rent on someone else's attention.
Introduction
Every marketing story needs distribution. Dimension 520 (Stories) answers what to say and how to structure it. Dimension 530 (Media) answers where those stories go and how they reach the right people at the right moment.
This is not a channel selection exercise. The Marketing Canvas treats media as an architecture question: what is the role of each media type in your strategy, how do they connect to each other, and are they sequenced correctly? A strong media score requires more than presence across four types — it requires deliberate orchestration with each type performing a distinct function in a coherent whole.
The four media types
The Marketing Canvas organises media into four categories, adapted from the PESO model (Paid, Earned, Shared, Owned). Each type has a distinct strategic function.
531 — Owned media is the foundation. Your website, blog, email list, app, and any platform you control without paying for distribution. Owned media is the only type where you hold both the content and the audience relationship. It cannot be algorithmically deprioritised, editorially rejected, or priced out of your reach. Everything else in the media system should be built to drive traffic back to owned. A weak or inconsistent owned media base means the rest of the system has no home base to return to.
532 — Earned media is authority you cannot buy. Press coverage, analyst mentions, organic search rankings, third-party reviews, award recognition. Earned media carries more credibility weight than owned because the source is independent — the company did not pay for the endorsement, and the audience knows it. The strategic goal of earned media is not coverage volume; it is the specific credibility signals that reach the specific decision-makers who will not trust owned media alone. In B2B, an analyst firm citing your methodology is earned media. In consumer markets, a major publication's review is earned media. Both perform the same function: borrowed authority.
533 — Shared media is engagement and community. Social platforms, forums, user-generated content, communities where your audience participates. The strategic function of shared media is conversation — it is the media type where the flow is bidirectional and where brand advocates can amplify content beyond the brand's own reach. The critical distinction: shared media with an engaged community is a multiplier. Shared media without community is a broadcast channel you don't control, and a less efficient one than paid. The score for 533 measures whether community actually exists — not whether the brand has social media accounts.
534 — Paid media is targeted amplification. Advertising across digital and offline channels — search, social, display, video, print, broadcast. Paid media's strategic function is reach that the other three types cannot yet deliver, or speed that organic growth cannot match. The diagnostic question the method applies: is paid media being used to amplify what is already working organically, or is it being used to substitute for owned, earned, and shared foundations that don't exist? The first use is leverage. The second is dependency — and dependency on paid becomes structurally expensive as soon as budgets contract.
535 — Sustainability: Is the media strategy compatible with sustainability principles? This includes both the sustainability of the media mix itself (a strategy built entirely on paid is not sustainable as a business model) and the environmental and ethical considerations of media choices (platforms, production practices, carbon footprint of digital advertising).
PESO model adapted from Spinsucks (source: https://spinsucks.com/communication/peso-model-breakdown/)
The system logic: why sequence matters
The four types are not interchangeable. They serve different functions at different costs, with different credibility profiles and different dependencies. The method's sequencing principle is not a suggestion — it is a structural constraint that most organisations violate in the direction of paid-first.
The correct sequence:
Build owned. Without a functioning website, a content infrastructure, and an email relationship with your audience, you have no home base. Stories you earn, share, or pay for have nowhere to land that you control. Every campaign that drives traffic to a weak owned infrastructure is writing a cheque you can't cash.
Earn credibility. Once owned media is solid, third-party validation becomes possible and compounding. Press coverage links back to your site. Analyst mentions send audiences to your content. SEO rankings are a form of earned media built on owned content. Earned media is slow but non-depleting — a strong article from three years ago continues to rank and generate credibility without further investment.
Generate sharing. When owned and earned are functional, community forms around real value rather than manufactured engagement. Customers share because the content genuinely helps them. The shared media layer amplifies without additional cost.
Amplify with paid. Paid media is most efficient when it amplifies content and propositions that are already proven to resonate organically. Paid budget spent on content that hasn't earned any organic engagement is a signal that something upstream in the system is broken.
The pathology the method diagnoses: companies that reverse this sequence, starting with paid because it produces immediate, measurable results, and then discovering that they have built an audience they rent rather than own. When the paid budget stops, the audience disappears. This is not a media strategy problem — it is a media architecture problem.
Companies that start with paid media before building owned media are paying rent on someone else's attention. Stopping the rent means leaving the property.
Media and acquisition cost
The 530 score has a direct, measurable relationship with the 610 (Acquisition) score. A well-orchestrated media system — strong owned base, compounding earned authority, engaged shared community — systematically reduces the cost of acquiring each new customer over time. Paid media efficiency improves when prospects arrive having already encountered the brand through earned or shared touchpoints. The trust is partially built before the first paid impression.
A media strategy that is entirely paid-dependent produces a flat or rising acquisition cost curve. Every new customer costs approximately the same as the last, because there is no compounding infrastructure. The paid-first company runs faster to stay in the same place.
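The contrast between the two cost curves can be illustrated with toy numbers. All figures below are hypothetical, and `blended_cac` is an illustrative helper rather than a formula from the method: with a fixed paid budget and constant paid yield, the blended cost per new customer falls only as owned, earned, and shared channels contribute customers of their own.

```python
def blended_cac(paid_spend: float, paid_customers: int, organic_customers: int) -> float:
    """Blended CAC: total paid spend divided by all new customers, paid plus organic."""
    return paid_spend / (paid_customers + organic_customers)

# Toy scenario: paid budget and paid yield stay flat, while the compounding
# media system contributes a growing share of new customers each year.
spend, paid_new = 50_000.0, 500
for year, organic_new in enumerate([0, 125, 300, 500], start=1):
    print(f"Year {year}: blended CAC = {blended_cac(spend, paid_new, organic_new):.0f}")
```

In the paid-only case the curve is flat at spend divided by paid yield; every organic customer added by the maturing system pulls the blended figure down without any extra spend, which is the compounding effect the 530 score is meant to capture.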
Statements for self-assessment
Score each of the five sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Interpreting your scores
Negative scores (−1 to −3): Media types are siloed, invested in the wrong sequence, or structurally dependent on paid without owned foundations. Likely result: acquisition costs are flat or rising; brand credibility is low because no independent voices have validated it; community doesn't exist because there is nothing to gather around.
Positive scores (+1 to +3): The four media types are orchestrated into a coherent system. Owned is the foundation. Earned is compounding. Shared is generating community conversation. Paid is amplifying proven content rather than compensating for absent foundations. Acquisition cost trends downward as the system matures.
Strategic Role
Media rarely appears as a Fatal or Primary dimension in any archetype — it is the amplification layer that makes other dimensions' work visible to the market. Its absence is rarely the primary reason a strategy fails; its weakness is usually the reason a strategy that should be working isn't reaching its potential audience.
Secondary Accelerator for A1 (Disruptive Newcomer): A disruptor's story needs distribution to reach beyond the early adopter fringe. New brands have no earned media heritage, limited owned infrastructure, and no community yet. Building the media system quickly — prioritising owned first, then using early press coverage and community formation to reduce paid dependency — determines how fast the disruption can scale. A weak 530 for A1 means the product is good and the story is clear, but no one beyond the founding circle hears it.
Secondary Accelerator for A7 (Scale-Up Guardian): Scale-up creates the opposite problem: rapid growth can outpace the media system's capacity to maintain brand coherence. New audiences encounter the brand through inconsistent channels. Paid spend scales faster than owned infrastructure can receive. The earned media narrative hasn't kept pace with what the company has become. A strong 530 for A7 means the media architecture has scaled alongside the business — new owned properties in new markets, earned authority in new categories, community forming around the expanded brand.
Secondary Accelerator for A9 (Category Creator): Creating a category requires persistent category education across multiple media touchpoints. A category cannot be taught in a single paid impression. The owned media library builds the intellectual case. Earned media validates it through independent voices. Shared media spreads the language through community adoption. Paid media introduces the category to cold audiences who then continue their education through owned and earned. All four types are required for category creation. A weak 530 for A9 means the category story is being told inconsistently, too narrowly, or only for as long as the paid budget lasts.
Growth Driver for A3 (Brand Evangelist): In the Brand Evangelist archetype, media amplification of member advocacy is the primary growth engine. Patagonia's earned media (documentary filmmaking, environmental activism coverage) and shared media (customer-generated content, community activism) are not marketing support functions — they are the growth mechanism. The brand earns media because its customers do things worth reporting. The 530 score for A3 measures whether the media system is built to receive and amplify the advocacy the brand has earned, or whether it is ignoring it.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean's media footprint is almost entirely owned — a website and an email list of past customers. The website is irregularly updated. The email list has not been used for content distribution in six months. Earned media does not exist: the brand has never been featured in a publication, has no search rankings for any competitive keyword, and has received no independent reviews. Shared media consists of a Facebook page and an Instagram account with a combined following of 340 people, almost exclusively friends and family of the founder, generating no community conversation. Paid media has been used sporadically — two Facebook campaigns in the past year, each running for two weeks, each terminated when the budget ran out. There is no system. There is presence in three types with no architecture connecting them. The paid campaigns had nowhere coherent to send traffic.
Score: +1 to +2 (Developing) Green Clean has rebuilt its owned media foundation: the website now publishes "Safe Home" content weekly, the email list is active with a fortnightly digest, and the blog is indexed and generating modest organic traffic. Earned media is beginning to form: one local parenting magazine has featured the brand, a sustainability blogger with a relevant audience has written an unprompted review, and the brand now appears in Google results for "eco-friendly cleaning service [city]." Shared media has shifted from broadcast to conversation: Instagram posts about the Family Health Report now consistently generate comments from customers sharing their own indoor air quality concerns. Paid media is used to amplify the Safe Home content to cold audiences in the target demographic, driving traffic to the owned blog rather than directly to a booking page. The system is forming. The sequencing is approximately correct. Owned is the foundation; paid is amplifying content that is already earning organic engagement.
Score: +2 to +3 (Strong) Green Clean's media system is fully orchestrated. Owned media is the anchor: the website serves as a resource hub for the indoor health protection category, generating consistent organic traffic through search and content. The email list has grown to 4,200 subscribers through content-led lead generation, and the sequence from first-touch content to first booking is documented and measured. Earned media is compounding: the brand is regularly cited in national parenting and sustainability publications, has been featured in two podcast interviews, and its Eco-Proof Report has been referenced by an independent environmental research organisation — generating credibility that paid media cannot buy. Shared media carries authentic community conversation: customers post Family Health Reports, tag Green Clean, and share indoor air quality content unprompted. The community amplifies without the brand paying for reach. Paid media is used surgically — retargeting known visitors and amplifying the highest-performing organic content to lookalike audiences. The acquisition cost curve has been falling for 18 months as the owned and earned infrastructure compounds.
Connected dimensions
Media does not operate in isolation. Four dimensions connect most directly:
520 — Stories: Media distributes stories. The quality of the 520 content determines whether distribution delivers value or noise. Strong stories with weak distribution stall. Weak stories with strong distribution produce reach without conversion. The combination — strong content, strong distribution — is what makes campaigns compound rather than decay.
430 — Channels: Media and channels overlap in digital contexts. An e-commerce brand's paid social media is simultaneously a media channel and a sales channel. The distinction the method maintains: channels (430) are where transactions happen; media (530) is where audience attention is built before the transaction moment. The line blurs in digital; the diagnostic question remains which function is primarily being served.
340 — Proof: Earned media is a form of proof. A press mention, an analyst citation, and an independent review all function as third-party validation of the brand's claims — which is the same function as proof in the value proposition. A strong 532 (earned media) score and a strong 340 (proof) score tend to move together, because the same credibility-building activities produce both.
610 — Acquisition: Media effectiveness directly drives acquisition cost. The compounding media system — owned growing organically, earned building without additional investment, shared amplifying for free — produces a falling cost-per-acquisition curve. Paid-only media produces a flat or rising curve. The 530 score is a leading indicator of where 610 is heading.
Conclusion
Media is the dimension that determines whether everything else in the Marketing Canvas reaches the people it was designed for. A precise JTBD, a compelling positioning, an exceptional experience — none of it creates commercial value if the audience the brand needs never encounters it.
The strategic discipline the method requires is architectural, not tactical. The question is not which platform to post on this week. It is whether the media system — all four types, in the right sequence, with the right roles — is built to compound over time. Paid-first strategies produce visible results quickly and structural weakness quietly. Owned-first strategies are slower and produce compounding returns that paid-first companies eventually cannot afford to replicate.
The test: if you stopped all paid media today, what would remain? The answer to that question is your real media foundation score.
Sources
Gini Dietrich, Spin Sucks: Communication and Reputation Management in the Digital Age, Que Publishing, 2014 — the origin of the PESO model framework
Mark W. Schaefer, Marketing Rebellion: The Most Human Company Wins, Schaefer Marketing Solutions, 2019
Marketing Canvas Method, Appendix E — Dimension 530: Media, Laurent Bouty, 2026
About this dimension
Dimension 530 — Media is part of the Conversation meta-category (500) in the Marketing Canvas Method. The Conversation meta-category contains four dimensions: Listening (510), Stories (520), Media (530), and Influencers (540).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - Content and Stories
Stories is the content strategy dimension of the Marketing Canvas Method. Learn the five properties of effective brand storytelling — and why the most common failure is narcissism.
About the Marketing Canvas Method
This article covers dimension 520 — Content & Stories, part of the Conversation meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net · Get the book.
In a nutshell
Stories is the content strategy dimension of the Marketing Canvas Method. It scores whether a brand's narratives serve both the organisation and the user — structured around how customers think and speak, equipped with clear calls to action, distributed through the right medium, and grounded in truthfulness.
The most common storytelling failure is narcissism: brands that tell their own story rather than their customer's story. Effective brand narratives cast the customer as the protagonist and the brand as the guide. The dimension scores whether your content has made that shift — or is still performing a company monologue to an audience that has already moved on.
Introduction
Every organisation produces content. The strategic question the Marketing Canvas asks is not whether you produce content — it is whether your content does work. Does it educate? Does it move the audience toward a decision? Does it make the brand more credible, more human, more trustworthy?
Stories is the dimension that answers those questions systematically. It is not about production volume or creative quality. It is about whether the narratives you create are oriented toward your customer's world or your company's world — and whether they are designed with intention, not improvised under deadline pressure.
Marketing Canvas by Laurent Bouty - Stories
What does the Marketing Canvas mean by Stories?
In the Marketing Canvas Method, Stories is not synonymous with social media content or blog output. It is the entire content strategy infrastructure: the narratives the brand creates and shares to educate, persuade, and connect across every channel and every stage of the customer journey.
The dimension scores five properties:
521 — Reflection: Do content goals serve both the organisation and the user? Content that only serves the organisation is advertising. Content that only serves the user is journalism. Stories that score well do both simultaneously — they advance the brand's objectives while genuinely answering a question, solving a problem, or articulating an aspiration the customer already holds.
522 — Structure: Is content organised around how users think and speak — not around how the company is structured? The most structurally weak content reads like an internal org chart. Products are described in product management language. Services are segmented by department. Stories that score well are structured around the customer's decision process, their language, their sequence of questions. The company's internal logic is irrelevant to the reader.
523 — Call to Action: Does every piece of content have a clear next step? Content without a CTA is a conversation that ends before it reaches the point. The CTA doesn't need to be "buy now." It can be "read this next," "share with a colleague," "download the reference," or "book a call." The question the method asks is whether the content was designed with intent — was there a deliberate decision about what the reader should do next, and does the content deliver it?
524 — Medium selection: Is the format appropriate for the content type and the available resources? A complex methodology needs different treatment than a single customer insight. A B2B technical audience needs different formats than a consumer lifestyle audience. Medium selection scores whether the company has made conscious choices about format — or whether everything becomes a blog post by default because that is the path of least resistance.
525 — Truthfulness: Are your stories truthful, and do they communicate honestly about sustainability? The sustainability dimension is not an add-on. It is the anchor for all content credibility. Brands that overstate environmental credentials destroy the trust that authentic content builds. The method scores whether stories reflect what the brand actually does — not what the brand would like to claim.
The canonical narrative arc
The most important structural insight in the Stories dimension is this: the customer is the protagonist. The brand is the guide.
This is not a stylistic preference. It is the architecture of every effective brand narrative, from the simplest testimonial to the most complex thought leadership series.
The arc follows three moves:
The job: The customer has a problem they need to solve — a job to be done (dimension 110). The story opens here, in the customer's situation, using the customer's language.
The solution: The brand provides a path to resolution — features (310), experience (420), proof (340). The brand doesn't rescue the customer; it equips them.
The transformation: The customer achieves what they were aspiring to (120). They are not just satisfied — they have become a version of themselves that was not possible before the solution existed.
When this arc is intact, content resonates. Readers recognise themselves in step one, lean toward step two, and want step three. When this arc is missing — when the brand puts itself at the centre, leads with features rather than jobs, or skips the transformation entirely — the content performs for the company's ego while leaving the customer unmoved.
The red flag: content that leads with "We are proud to announce..." is the arc inverted. The brand is announcing its own importance. The customer has no reason to care.
Stories as the delivery vehicle for Proof
The connection between 520 and 340 (Proof) is one of the most underused insights in the Marketing Canvas.
Proof establishes credibility. Stories make proof compelling. The combination is what converts sceptics.
A case study is a story with evidence — the narrative arc applied to a real customer situation, with measurable outcomes.
A testimonial is a story with social proof — a peer narrator whose credibility transfers to the brand.
A "how it works" demonstration is a story with logical explanation — the brand's claim tested against a realistic scenario.
A brand with strong proof (340) but weak stories (520) has evidence that no one reads. A brand with strong stories (520) but weak proof (340) has compelling content that doesn't survive scrutiny. The dimension combination score — both above +1 — is what produces the content that drives both conversion and trust.
Statements for Self-Assessment
Score each of the five sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Your content and stories goals reflect both your organisation's goals and your users' needs (521)
Your content and stories are created and structured based on your understanding of how users think and speak about a subject (522)
Your content and stories have clear calls to action — you know exactly what you want your users to do after reading (523)
You have chosen your content and stories medium appropriately for your type of story and your available resources, such as time and money (524)
Your content and stories are truthful and communicate honestly about sustainability (525)
Interpreting your scores
Negative scores (−1 to −3): Content is disconnected from the customer's job, organised around internal company logic, missing calls to action, or lacks credibility. The likely result: content is produced but doesn't convert; the audience it reaches doesn't recognise themselves; trust is not built because proof is absent or unconvincing.
Positive scores (+1 to +3): Content is structured around how customers think and speak. The brand serves as guide, not protagonist. Every piece has a deliberate next step. Medium selection is intentional. Proof and story are integrated. Content measurably contributes to acquisition, retention, or category education.
Strategic Role
Stories appears in the Vital 8 more frequently than any other Conversation dimension. Its archetype footprint covers both the growth and the evangelism archetypes — where narrative is not a marketing support function but the primary strategic mechanism.
Primary Accelerator for A1 (Disruptive Newcomer): A disruptor's product is often unfamiliar. The market doesn't know it needs it yet. In this context, stories are not marketing — they are the primary mechanism for market education. Canva, Odoo, Tesla at launch: none of these brands could rely on category familiarity. Each had to teach the market what they were disrupting and why it mattered. Stories is the engine. A weak 520 score for A1 means the market never learns the lesson.
Primary Accelerator for A9 (Category Creator): Creating a category requires naming it, teaching it, and repeating it until the market adopts the language. Nespresso didn't launch a coffee machine — it created a premium home espresso ritual. Salesforce didn't sell software — it taught the market that software could live in the cloud. The narrative was the strategy. The company that tells the category story most consistently owns the category. A weak 520 for A9 means the category is left undefined — and a competitor will define it instead.
Secondary Accelerator for A3 (Brand Evangelist): Evangelism is the archetype where customers carry the story further than the brand can. The brand's role is to create stories so authentic, so charged with shared identity, that customers want to retell them. Patagonia's documentary filmmaking, Harley-Davidson's customer mythology — these are brand stories that customers adopted as their own. The 520 score for A3 measures whether the brand's stories are evangelism-ready or whether they stop at brand awareness.
Secondary Brake for A5 (Pivot Pioneer): A brand in pivot faces a story problem: the existing narrative no longer serves the new direction, but the new narrative isn't yet credible. A weak 520 during a pivot creates a dangerous gap — the market is told the company has changed, but the stories still tell the old version. LEGO's pivot from failing toy company to platform for creativity required a complete narrative reconstruction. The dimension scores whether the pivot story has been rebuilt, not just the business model.
Growth Driver for A1 and A3: In both archetypes, viral storytelling directly drives revenue growth — not as a side effect but as the primary acquisition mechanism. The customer story that spreads is worth more than any paid media campaign.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean produces content regularly — a monthly blog, occasional social posts, a product page for each cleaning product. The content describes the products, explains the ingredients, and mentions eco-certification. It is accurate. It is also entirely company-centric: every piece begins with what Green Clean offers, not with what the customer is trying to accomplish. There are no calls to action beyond "add to cart." The customer segment Green Clean most wants to reach — parents concerned about indoor health — cannot find themselves in the content. The stories don't start in their world. There is no arc from job to transformation. No case studies. No customer voices. The content is a product catalogue dressed as a blog.
Score: +1 to +2 (Developing) Green Clean has begun restructuring its content around the customer's job. A series called "The Safe Home Guide" now leads with the parent's concern — "what am I exposing my children to when I clean?" — rather than with product features. The narrative arc is partially present: posts open with the customer situation, introduce the relevant Green Clean solution, but often stop before delivering the transformation. CTAs have been added to most articles, though they vary in clarity — some are specific ("book your first clean"), others are vague ("learn more"). A first customer case study has been published, featuring a family who switched from conventional products after a child's respiratory reaction. Medium selection is improving: longer content has moved to the blog; short-form testimonials are now used on Instagram. Stories are beginning to do work. The architecture is still uneven.
Score: +2 to +3 (Strong) Green Clean's content strategy is fully structured around the customer-as-protagonist arc. The "Safe Home" narrative series follows families through the discovery-to-commitment journey — opening with the indoor health concern, demonstrating the Green Clean methodology, and closing with the family's reported change in confidence and peace of mind. Each piece has a deliberate CTA mapped to the reader's stage: first-contact content leads to a "calculate your home's risk" tool; mid-funnel content leads to a trial booking; post-service content leads to referral sharing. Case studies now include before/after air quality measurements from the Eco-Proof Report, converting the brand's proprietary tool from a service feature into a content asset. VOC language sourced directly from dimension 510 (Listening) feeds the content briefs — customers' own phrasing about "knowing what my family breathes" appears verbatim in headlines and section openers. Stories and Proof (340) are fully integrated. Content measurably drives acquisition and retention.
Connected dimensions
Stories does not operate in isolation. Six dimensions connect most directly:
110 — JTBD: Stories narrate job resolution. The most effective content opens in the customer's job situation — using their language, not the brand's. A strong JTBD definition (110) is the raw material that makes story structure (522) possible. Without a clear job statement, content defaults to product description.
220 — Positioning: Stories deliver positioning in narrative form. The positioning claim (220) is the argument. The story is the demonstration. Positioning without stories is a claim without evidence. Stories without positioning is content without a point.
320 — Emotions: Stories create emotional connection. The emotional job (320) defines what the customer is trying to feel. The story is the mechanism that delivers that feeling. A story that is technically accurate but emotionally inert does not produce advocacy.
340 — Proof: Stories are the delivery vehicle for proof. A case study is a story with evidence. A testimonial is a story with social proof. Proof without story is data. Story without proof is claim. The combination is what converts sceptics into buyers.
510 — Listening: VOC language mining (510) produces the raw material for story structure. The most effective content uses the words customers use to describe their own problems — not the words the marketing team uses to describe the product. Listening tells you how the customer speaks (522). Stories build the structure around it.
530 — Media: Stories need distribution. Media (530) is the system that determines how far and to whom stories travel. Strong stories distributed through the wrong media reach the wrong audience. The quality of 520 determines what is worth distributing; the quality of 530 determines whether distribution works.
Conclusion
Stories is the dimension that turns strategy into language the market can receive. Every other dimension in the Canvas — the job, the positioning, the proof, the experience — exists as internal knowledge until stories make it external and human.
The strategic test is not whether you produce content. It is whether your content starts in the customer's world, serves the customer's job, and moves them toward an outcome they want. If it does, the brand becomes a guide. If it doesn't, the brand becomes a company talking about itself — and customers learned to scroll past company monologues a long time ago.
The architecture check is simple: read your last five pieces of content. Count how many sentences begin with the customer's situation versus the company's product. The ratio tells you where 520 actually stands.
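The check above can be roughed out programmatically. This is a deliberately crude heuristic: the opener word lists are my own illustrative assumptions, and a real content audit needs human judgement, not keyword matching:

```python
# Crude heuristic for the architecture check: classify sentence openers
# as company-centric ("we", "our") or customer-centric ("you", "your").
# The word lists are illustrative assumptions, not part of the method.
import re

COMPANY_OPENERS = {"we", "our", "i"}
CUSTOMER_OPENERS = {"you", "your"}

def opener_ratio(text):
    """Return (company-led, customer-led) sentence counts for a text."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    company = customer = 0
    for s in sentences:
        first = s.split()[0].lower().strip(",;:'\"")
        if first in COMPANY_OPENERS:
            company += 1
        elif first in CUSTOMER_OPENERS:
            customer += 1
    return company, customer

sample = ("We are proud to announce our new service. "
          "Your family deserves a safe home. We built it for you.")
print(opener_ratio(sample))  # → (2, 1): two company-led, one customer-led
```

A ratio tilted heavily toward company-led openers is the "We are proud to announce..." red flag from the arc section, counted rather than eyeballed.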
Sources
Donald Miller, Building a StoryBrand, HarperCollins Leadership, 2017
Robert McKee & Thomas Gerace, Storynomics: Story-Driven Marketing in the Post-Advertising World, Twelve, 2018
Joe Pulizzi, Content Inc., McGraw-Hill Education, 2015
Marketing Canvas Method, Appendix E — Dimension 520: Stories, Laurent Bouty, 2026
About this dimension
Dimension 520 — Stories is part of the Conversation meta-category (500) in the Marketing Canvas Method. The Conversation meta-category contains four dimensions: Listening (510), Stories (520), Media (530), and Influencers (540).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
How to get real insights for your Marketing Strategy?
A collection of videos about finding real customer insights.
Richard Thorogood of Colgate-Palmolive describes how new technology is transforming market research, and how firms will need to adapt.
What has been Airbnb's strategy for understanding its customers and adapting to their needs? Chip Conley, Head of Hospitality at Airbnb, explains the process.
Using a real-world case study featuring Timberland, one of the most iconic clothing brands, TEC Executive-in-Residence Kurian Tharakan shows how the company leveraged the motives, needs, wants and desires of its core customer.
Malcolm Gladwell gets inside the food industry's pursuit of the perfect spaghetti sauce, and makes a larger argument about the nature of choice and happiness.



