Marketing Canvas - Budget
Budget is the 24th Marketing Canvas dimension — scoring not how much you spend, but how deliberately. Learn the four properties, the 3-Cycle allocation logic, and the 90/10 innovation reserve principle.
About the Marketing Canvas Method
This article covers dimension 640 — Budget, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net, or in the book.
In a nutshell
Budget is the 24th and final dimension of the Marketing Canvas — and the one that governs all the others. It scores not how much you spend on marketing, but how deliberately you spend it. The dimension measures four properties: allocation logic (is the budget based on strategic priorities, not inertia?), planning integration (is it a component of the overall business plan with a defined timeframe?), monitoring discipline (do you reallocate when something is not working?), and innovation reserve (do you protect a portion — typically 10% — for experimental approaches?).
The canonical framing: a company that allocates 100% of its marketing budget to proven tactics will never discover the channel, message, or format that produces breakthrough results. A company that cannot defend its budget allocation against its own strategic priorities has substituted familiarity for strategy.
Introduction
Every initiative identified across the 23 preceding dimensions — every fix, every accelerator, every growth driver — competes for the same finite resource: the marketing budget. Budget is the dimension that determines which of those initiatives actually happen, in what sequence, and at what scale.
This is why the Marketing Canvas positions Budget as a discipline question, not a quantum question. The amount of budget available is a constraint. How that budget is allocated — against which priorities, in which cycle, with what monitoring — is the strategic choice. Two companies with identical budgets can produce radically different outcomes depending on whether their allocation logic follows strategic evidence or historical habit.
The most expensive budget decision a marketing team makes is the one it doesn't consciously make: replicating last year's allocation because changing it requires a conversation no one wants to have.
What the Marketing Canvas scores in Budget
The dimension scores four properties — allocation logic, planning integration, monitoring discipline, and innovation reserve — each addressing a distinct layer of resource discipline.
Allocation logic — is the budget allocated based on multiple factors (industry benchmarks, business capacity, strategic goals, and urgency) rather than inertia? The canonical failure mode is the "same as last year" allocation: the prior year's budget is reproduced with a percentage adjustment, with no systematic re-examination of whether the distribution still reflects the strategic priorities. Allocation logic scores whether the budget follows the strategy or whether the strategy is reverse-engineered to justify the budget. A company in Cycle 1 of the Strategic Action Engine (fixing Fatal Brakes) should have a materially different allocation from the same company in Cycle 3 (scaling growth drivers). If the allocation doesn't shift as the strategy evolves, the budget is not serving the strategy — it is constraining it. Industry benchmarks provide the calibration reference: typically 6–12% of revenue for established businesses, up to 20% for growth-stage companies. The benchmark is not a target; it is a diagnostic.
Planning integration — is the marketing budget a component of the overall business plan, with defined costs linked to specific goals within a defined timeframe? Budget that exists as a standalone line item, detached from the strategic plan it is supposed to fund, produces the most common budget dysfunction: money is available, but there is no explicit connection between what is being bought and what outcome is expected. The chain of accountability the method scores runs from budget item to initiative (Step 4) to dimension score (Step 3) to strategic goal (Step 2). If any link is missing, the budget is partially floating.
Monitoring discipline — do you constantly monitor marketing performance and reallocate spending from underperforming initiatives to those that are working? The test is not whether performance is tracked — most marketing teams track something. The test is whether tracking produces reallocation. A team that reviews campaign performance monthly and continues funding underperforming initiatives for the remainder of the budget cycle because "it's already in the plan" does not have monitoring discipline; it has reporting. Monitoring without reallocation authority is observation without consequence.
Innovation reserve — do you protect a portion of the budget, typically 10%, for exploring new approaches, testing new channels, and discovering what the existing allocation misses? The 90/10 principle is the canonical structure: 90% on proven activities that are already demonstrably working; 10% on experimental approaches where the outcome is uncertain. The 10% is not a luxury allocation for when the primary budget is performing well — it is insurance against strategic rigidity. A company that allocates 100% to proven tactics has committed its entire resource base to yesterday's understanding of what works. The failure mode runs in both directions: below 5% for innovation protects the status quo at the cost of adaptability; above 25% starves the proven activities that generate current returns.
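As a sketch, the 90/10 guardrails above (10% target, 5% floor, 25% ceiling) can be expressed as a simple check. The function name and return strings are illustrative, not terminology from the method:

```python
def check_innovation_reserve(total_budget: float, reserve: float) -> str:
    """Classify an innovation reserve against the 90/10 principle.

    Thresholds (10% target, 5% floor, 25% ceiling) come from the article;
    the wording of the verdicts is an illustrative paraphrase.
    """
    share = reserve / total_budget
    if share < 0.05:
        return "too low: protects the status quo at the cost of adaptability"
    if share > 0.25:
        return "too high: starves the proven activities that fund current returns"
    return "within range: experimentation is funded without destabilising the core"

print(check_innovation_reserve(18_000, 1_800))  # 10% -> within range
```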
Budget and the 3-Cycle Roadmap
The most strategically significant connection in the Budget dimension is its relationship to the 3-Cycle Strategic Roadmap (Step 5). The canonical cycle allocations determine how the budget should be distributed across the three action streams — FIX, ALIGN, and SCALE — at each phase of strategy execution:
Cycle 1 — Foundation: 80% FIX (Fatal Brakes) / 10% ALIGN (Accelerators) / 10% SCALE (Growth Drivers)
The dominant allocation is repair. Fatal Brakes cannot be papered over with growth investment. A brand with a broken positioning (220) cannot be fixed by doubling the media budget (530). A product with weak features (310) cannot be rescued by an influencer campaign (540). In Cycle 1, the budget's primary function is to fund the foundational work that makes everything else possible. The 10% in ALIGN and SCALE is not wasted — it maintains momentum and tests the growth thesis — but it is not the primary investment.
Cycle 2 — Build: 20% FIX (maintenance) / 60% ALIGN / 20% SCALE
The Fatal Brakes have been addressed. The budget shifts to funding the accelerators — the dimensions that drive the archetype's primary mission. Brand positioning is being sharpened. The value proposition is being refined. The customer experience is being systematically improved. The FIX allocation drops to maintenance level because the structural problems have been resolved, not because they no longer need monitoring.
Cycle 3 — Scale: 10% FIX (maintenance) / 30% ALIGN / 60% SCALE
The budget now funds growth at scale. The Growth Drivers identified in the Vital Audit receive the majority of the investment. The risk in Cycle 3 is premature allocation: companies that skip Cycle 1 and move directly to Cycle 3 investment discover that growth spend on a broken foundation produces volume without compounding returns.
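The three cycle splits above translate mechanically into euro allocations. A minimal Python sketch (the dictionary and function names are my own, not the method's):

```python
# FIX / ALIGN / SCALE shares for each cycle, as given in the article.
CYCLE_SPLITS = {
    1: {"FIX": 0.80, "ALIGN": 0.10, "SCALE": 0.10},  # Foundation
    2: {"FIX": 0.20, "ALIGN": 0.60, "SCALE": 0.20},  # Build
    3: {"FIX": 0.10, "ALIGN": 0.30, "SCALE": 0.60},  # Scale
}

def cycle_allocation(total_budget: float, cycle: int) -> dict:
    """Translate a total marketing budget into amounts per action stream."""
    return {stream: round(total_budget * share, 2)
            for stream, share in CYCLE_SPLITS[cycle].items()}

print(cycle_allocation(18_000, 1))
# {'FIX': 14400.0, 'ALIGN': 1800.0, 'SCALE': 1800.0}
```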
The Budget dimension (640) scores whether the company's actual allocation reflects the cycle it is in — or whether the allocation is driven by what is most visible, most politically comfortable, or most familiar.
Statements for self-assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
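The scoring rule above, including the round-to-−1 convention for a mathematical zero, can be sketched in a few lines of Python (names are illustrative):

```python
def dimension_score(sub_scores: list) -> float:
    """Average four sub-question scores (-3 to +3, no zero).

    Per the method's convention described above, a mathematically
    zero average is rounded to -1.
    """
    assert all(-3 <= s <= 3 and s != 0 for s in sub_scores)
    avg = sum(sub_scores) / len(sub_scores)
    return -1.0 if avg == 0 else avg

print(dimension_score([2, -2, 1, -1]))  # mathematically zero -> -1.0
print(dimension_score([1, 2, 2, 3]))   # -> 2.0
```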
Interpreting your scores
Negative scores (−1 to −3): Budget allocation is driven by inertia, prior year habit, or political comfort rather than strategic evidence. Planning integration is absent or superficial — the budget is not connected to specific initiatives with defined outcomes. Monitoring produces reporting but not reallocation. There is no innovation reserve, or it has been absorbed into existing line items. The budget is not serving the strategy; the strategy is being reverse-engineered to justify the budget.
Positive scores (+1 to +3): Allocation logic follows strategic priorities and cycle position. The budget is integrated into the business plan with traceable connections between spend, initiatives, dimensions, and goals. Monitoring has reallocation authority and exercises it. The 10% innovation reserve is protected and generating learning. The budget is an active strategic instrument, not a historical artefact.
Strategic Role
Fatal Brake for A6 (Value Harvester): In a declining market, every euro of marketing spend must demonstrate return. There is no budget slack to absorb misallocation — the market contraction is simultaneously compressing the revenue base from which the budget is drawn and raising the pressure on each remaining customer relationship. A Value Harvester with weak budget discipline is compounding the market problem with a spending problem. Waste is not a nuisance in A6; it is existential. The 641 score — allocation based on strategic evidence rather than inertia — is the most critical sub-question for A6.
Secondary Accelerator for A2 (Efficiency Machine): The Efficiency Machine archetype wins on cost structure and operational discipline. Budget discipline in A2 is not just a financial governance function — it is a strategic signal. A marketing team that cannot maintain budget discipline while the operations team is optimising every cost line sends a structural contradiction to the organisation. A2 companies with strong budget scores reinforce the operational excellence narrative; A2 companies with weak budget scores undermine it from within marketing.
Growth Driver for A2 (Margin Extraction): When the Efficiency Machine deploys the Margin Extraction growth driver, budget discipline is the mechanism. Reducing marketing spend on activities that produce low marginal return — while protecting spend on activities that generate efficient acquisition and retention — directly improves margin without reducing commercial output. The 643 score (monitoring and reallocation discipline) is the specific property that makes Margin Extraction possible: you can only reallocate away from low-return activities if you know which activities are low-return.
In most other archetypes, Budget operates as a constraint discipline rather than a strategic lever — necessary hygiene, but not the dimension that defines the archetype's strategy. The exceptions are A6, where budget discipline is survival, and A2, where it is a competitive differentiator.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak)
Green Clean's marketing budget for the current year is €18,000 — approximately 7.2% of revenue, within the benchmark range for a service business at this stage. The allocation was set by the founder in January by reviewing the prior year's spend and making minor adjustments based on what felt underfunded. No benchmark comparison was conducted. No connection was made between the budget allocation and the strategic priorities identified from the dimension scores. The largest line item (€7,200, 40%) is paid social advertising — the same proportion as last year — despite the fact that the acquisition analysis (610) has shown paid social produces the highest CAC and the lowest customer lifetime of any channel. The budget has not been connected to the decision to build the owned media foundation first (530). Monitoring is monthly in theory; in practice, the budget is reviewed when campaigns end. There is no reallocation mechanism — budget is committed to campaigns at the start of the quarter and not adjusted. There is no innovation reserve. The 10% that would fund channel experiments is absorbed into the paid social budget by default.
Score: +1 to +2 (Developing)
Green Clean has restructured its budget allocation for the first time using strategic evidence. Following the Vital Audit, the budget has been connected to the three action streams: €10,800 (60%) allocated to Cycle 1 FIX and ALIGN priorities — primarily the owned media content infrastructure, the subscription model architecture, and the Family Health Report development; €5,400 (30%) to proven acquisition and retention activities; €1,800 (10%) protected as an innovation reserve to test two new channel hypotheses (a partnership with a paediatric clinic network and a podcast sponsorship in the indoor health category). The allocation has been formally documented in the business plan with expected outcomes for each stream. Monitoring is now monthly with a defined reallocation trigger: any initiative performing below 70% of its target for two consecutive months is paused and the budget redirected. The innovation reserve has already produced one useful finding: the clinic partnership generated a cost-per-lead 40% below the paid social benchmark. The 10% is earning its place.
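Green Clean's reallocation trigger (below 70% of target for two consecutive months) can be sketched as follows; the function signature is illustrative, not part of the method:

```python
def should_pause(monthly_actuals, monthly_targets, threshold=0.70, run=2):
    """Return True when performance falls below `threshold` of target for
    `run` consecutive months -- a sketch of Green Clean's pause trigger."""
    consecutive = 0
    for actual, target in zip(monthly_actuals, monthly_targets):
        consecutive = consecutive + 1 if actual < threshold * target else 0
        if consecutive >= run:
            return True
    return False

# Two consecutive months at ~60% of target triggers a pause:
print(should_pause([90, 60, 58], [100, 100, 100]))  # True
print(should_pause([90, 60, 75], [100, 100, 100]))  # False
```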
Score: +2 to +3 (Strong)
Green Clean's budget management operates at the cycle level with quarterly allocation reviews. The company is in Cycle 2 of its Strategic Action Engine: the 20/60/20 split is in effect, with 20% on FIX maintenance (ongoing content production, subscription system upkeep), 60% on ALIGN (deepening the Family Health Report as a differentiation asset, building the referral programme, strengthening the earned media infrastructure), and 20% on SCALE (amplifying the indoor health category narrative through paid media targeted to lookalike audiences of the referral cohort). The 10% innovation reserve — now a protected line that is not subject to reallocation pressure — has cycled through six experiments in 18 months. Three have been discontinued after failing to outperform the control. Two have been graduated into the main budget after demonstrating positive ROI. One is in its second testing cycle. The 641 allocation logic is explicitly benchmarked against Gartner CMO survey data for comparable service businesses annually, with a documented rationale for any deviation. The budget is not a constraint on the strategy — it is an expression of it.
Connected dimensions
Budget connects to every dimension in the Marketing Canvas through resource allocation — every initiative in the 15-slot Strategic Action Engine draws on budget. Four connections are most direct:
610 — Acquisition: Budget funds acquisition. The size and composition of the acquisition budget determines CAC, channel mix, and the rate at which new customers enter the base. A weak 641 allocation that over-invests in high-CAC channels while under-investing in owned media infrastructure is a budget problem expressed as an acquisition problem.
620 — ARPU: Budget funds Stimulation initiatives. The upsell programmes, subscription architecture, and loyalty mechanics that improve purchase frequency and transaction value all require investment. An ARPU strategy without a budget line is a goal without a mechanism.
630 — Lifetime: Budget funds retention. The CRC component of the 634 sub-question is a budget allocation question: how much of the marketing budget is being directed toward keeping customers versus finding new ones? An under-funded retention programme produces the churn consequences that 630 measures.
All 24 dimensions: Budget is the dimension that determines which of the other 23 dimensions receive attention in the current strategic cycle. A dimension that scores −2 in the Vital Audit but receives no budget allocation in the Action Engine plan will still score −2 in the next cycle. The budget is the bridge between the diagnosis and the improvement.
Conclusion
Budget is the dimension that closes the Marketing Canvas cycle. Every insight generated across the other 23 dimensions — every job definition, every positioning choice, every experience design decision, every channel strategy — ultimately requires a budget allocation to move from understanding to action.
The strategic discipline the method requires is not about the size of the budget. It is about the clarity of its connection to strategy. A small budget allocated with precision against the right priorities at the right cycle stage will outperform a large budget allocated by inertia. The 90/10 principle is the practical expression of this: fund what is proven at 90%, fund the discovery of what comes next at 10%, and have the monitoring discipline to know which is which.
The test that closes the review: open the current year's marketing budget. For each line item, identify which dimension score it is designed to improve, which initiative in the Action Engine it funds, and what outcome improvement it is expected to produce. If any line item cannot be traced to a specific strategic purpose, it is funding inertia, not strategy. That is where 641 improvement begins.
Sources
Gartner CMO Spend and Strategy Survey, annual — gartner.com (benchmark reference for marketing spend as % of revenue by industry)
Christine Moorman, The CMO Survey: Highlights and Insights, Deloitte/Duke Fuqua/AMA, annual — cmosurvey.org
Marketing Canvas Method, Appendix E — Dimension 640: Budget, Laurent Bouty, 2026
About this dimension
Dimension 640 — Budget is the final dimension of the Metrics meta-category (600) and the 24th dimension of the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - User Lifetime
Lifetime measures how long customers stay — scored as 1/churn rate. Learn the four properties, the CRC/CAC benchmark, and why a leaky bucket makes every other marketing investment less efficient.
About the Marketing Canvas Method
This article covers dimension 630 — User Lifetime, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net, or in the book.
In a nutshell
Lifetime measures how long customers remain active — expressed as 1 divided by the churn rate. A 10% annual churn rate produces an average customer lifetime of 10 years. A 50% churn rate produces a lifetime of 2 years. The dimension scores four properties: measurement capability (can you calculate churn?), churn level (is it below market average?), trend (is churn improving?), and cost efficiency (is Customer Retention Cost proportionate to Customer Acquisition Cost?).
Lifetime is the Retention lever's primary metric. When the strategic goal is to grow revenue by keeping customers longer rather than acquiring new ones, Lifetime is the scoreboard. A leaky bucket makes every other marketing investment less efficient — acquisition, ARPU growth, brand building — because each one is partially undone by customers who leave before they return their full value.
Introduction
Acquisition brings customers in. Retention determines how long they stay. The relationship between the two is not symmetrical: what you invest to acquire a customer only pays back over time, and the longer the customer stays, the more time there is for that investment to compound. Shorten the lifetime, and the economics of acquisition become structurally harder to justify.
The Marketing Canvas treats Lifetime as a metrics discipline, not a loyalty programme design exercise. The dimension scores whether the company knows its churn rate, how that rate compares to market benchmarks, whether it is improving, and whether the investment in retention is proportionate — not excessive, not negligent.
The churn mathematics
The core formula is simple and worth holding precisely:
Customer Lifetime = 1 ÷ Churn Rate
5% annual churn → 20-year average lifetime
10% annual churn → 10-year average lifetime
25% annual churn → 4-year average lifetime
50% annual churn → 2-year average lifetime
The revenue mathematics of churn reduction are powerful and non-linear. Reducing annual churn by 5 percentage points — from 20% to 15%, for example — can increase total lifetime value per customer by 25 to 95%, depending on the business model and ARPU level. The range is wide because the compounding effect of extended lifetime interacts differently with high-ARPU versus low-ARPU relationships, and with businesses that generate more value from long-tenure customers through upsell and cross-sell than from short-tenure ones.
The practical implication: a 5-point improvement in churn is rarely a 5% improvement in commercial outcome. It is frequently a 30–60% improvement in the total value the acquired customer base will generate over its lifetime. This asymmetry — small churn improvements producing large value changes — is why Retention-focused archetypes treat Lifetime as a Fatal or Primary dimension rather than a supporting metric.
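Both calculations follow directly from the formula. A minimal sketch, assuming ARPU is held constant (the article notes the real uplift range is wider, because ARPU level and upsell effects vary by business model):

```python
def lifetime_years(annual_churn: float) -> float:
    """Average customer lifetime = 1 / churn rate."""
    return 1 / annual_churn

def ltv_uplift(churn_before: float, churn_after: float) -> float:
    """Relative change in lifetime value from a churn reduction,
    holding ARPU constant (LTV = ARPU x lifetime = ARPU / churn)."""
    return churn_before / churn_after - 1

print(lifetime_years(0.10))               # 10.0 years
print(round(ltv_uplift(0.20, 0.15), 3))   # 0.333: a 5-point cut yields +33% LTV
```

Note how the 5-percentage-point improvement produces a 33% value change, the asymmetry the paragraph above describes.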
What the Marketing Canvas scores in Lifetime
The dimension scores four properties — measurement capability, churn level, trend, and CRC/CAC relationship — each addressing a distinct layer of retention health.
Measurement capability is the prerequisite that must be met before any other Lifetime property can be managed. Can you calculate your churn rate, because you know who is buying and using your products and services? A company that cannot identify which customers have stopped purchasing — because it lacks a direct customer relationship, because purchase identity is not tracked, or because "churn" has never been formally defined for the business model — cannot manage the other three properties. Defining churn requires first agreeing on what "active" means. In subscription models, it is straightforward: did the customer renew? In transactional models, it requires a defined activity window: a customer who has not purchased within 12 months when the average purchase cycle is 6 months is churned. The definition must exist before the measurement can.
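A transactional-model churn flag of the kind described above might look like this; the 12-month window mirrors the article's example, and the function name is illustrative:

```python
from datetime import date, timedelta

def is_churned(last_purchase: date, today: date,
               activity_window_days: int = 365) -> bool:
    """Flag a customer as churned when no purchase falls inside the
    activity window. The window must be set per business model (the
    article's example: 12 months idle on a 6-month purchase cycle)."""
    return (today - last_purchase) > timedelta(days=activity_window_days)

print(is_churned(date(2023, 1, 15), date(2024, 6, 1)))  # True: over 12 months idle
print(is_churned(date(2024, 3, 1), date(2024, 6, 1)))   # False: recently active
```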
Churn level — is your churn rate below or equal to average market churn for your category? Churn benchmarks vary dramatically by industry — SaaS businesses might target 5–7% annual churn; consumer subscription services often run 20–30%; transactional retail models have different definitions entirely. The method scores relative to industry, not absolute thresholds. A 15% annual churn rate in a category where competitors average 25% is a positive score. The same rate in a category where the benchmark is 8% is negative.
Trend scores direction, not just position. A churn rate that is above industry average but visibly improving is a different strategic situation from a rate that is average but deteriorating. The method scores both the current level and the momentum independently — because a company that is losing ground on retention is in a different position from one that is gaining it, even when the current absolute numbers look similar.
The CRC/CAC relationship — is your Customer Retention Cost proportionate to your Customer Acquisition Cost, with the combined total running at 20–30% of revenue for mature businesses? This property diagnoses the investment balance between finding customers and keeping them. Below 20% combined, the company is likely underinvesting in one or both. At 20–30%, the economics are proportionate. Above 30%, the signal is that something upstream is broken: when retention cost is high, the root cause is almost never a retention spending problem — it is a product, experience, or fit problem. You are paying to hold customers who would leave without the financial incentive, rather than retaining customers who stay because the value is genuine. If CRC is rising without a corresponding improvement in churn trend, the spending is compensating for a deeper problem rather than solving it. The correct response is to investigate dimensions 420 (Experience) and 140 (Engagement) — not to increase the retention budget further.
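The 20–30% band can be sketched as a simple diagnostic; the thresholds are from the article, while the labels are my paraphrases:

```python
def diagnose_retention_spend(cac_pct: float, crc_pct: float) -> str:
    """Classify combined CAC + CRC (as a share of revenue) against the
    20-30% band for mature businesses. Verdict wording is illustrative."""
    combined = cac_pct + crc_pct
    if combined < 0.20:
        return "under 20%: likely underinvesting in acquisition, retention, or both"
    if combined <= 0.30:
        return "20-30%: proportionate investment"
    return "over 30%: look upstream at product, experience, or fit"

# Green Clean's Developing-stage numbers (14% CAC + 8% CRC = 22%):
print(diagnose_retention_spend(0.14, 0.08))  # 20-30%: proportionate investment
```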
The leaky bucket consequence
The strategic framing the method applies to Lifetime is architectural, not tactical. A leaky bucket — high or rising churn — creates a compounding drag on every other marketing investment:
Acquisition becomes less efficient. The CLTV/CAC ratio (610) falls as customer lifetime shrinks. The acquisition spend that was justified by a 4-year lifetime is no longer justified by a 2-year lifetime at the same CAC. The acquisition engine keeps running; the economics quietly deteriorate.
ARPU growth is partially cancelled. Investments in cross-sell, upsell, and frequency programmes (620) build value in the existing base. If churn removes 30% of that base annually, the ARPU growth achieved in the retained segment is offset by the lost revenue from departing customers. The Stimulation lever loses efficiency every time the Retention lever is leaking.
Brand investment returns less. Customers who experience the brand, develop loyalty, and become advocates — the highest-value customers in any archetype — are disproportionately long-tenure. High churn eliminates the customers most likely to generate word-of-mouth, referral, and community value before those effects compound.
The canonical formulation: every 1% improvement in churn releases capacity across the entire marketing system. Every 1% worsening locks it.
Statements for self-assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Interpreting your scores
Negative scores (−1 to −3): Churn is unmeasured, above industry benchmark, deteriorating, or the CRC/CAC balance signals over-spending to compensate for an upstream product or experience problem. The leaky bucket is draining value from every other marketing investment. The priority is measurement first, then diagnosis of root cause, then targeted retention investment.
Positive scores (+1 to +3): Churn is tracked at cohort level, below industry average, improving through deliberate retention strategy, and the CRC/CAC ratio is proportionate. The Retention lever is functioning. Lifetime is extending and with it the total value generated by the acquired customer base.
Strategic Role
Fatal Brake for A4 (Stagnant Leader): The Stagnant Leader has a large installed base and a growth problem. In this context, churn is the existential threat: the customer base that the strategy depends on for ARPU growth and market share maintenance is being depleted. A weak 630 for A4 means the strategy is trying to grow value from an asset that is shrinking. Sage and Peloton both faced this dynamic — large bases, rising churn in the core segment, requiring fundamental retention intervention before any growth strategy could take hold. The leaky bucket is A4's most dangerous structural problem.
Primary Accelerator for A7 (Scale-Up Guardian): Hypergrowth creates a retention stress test. The service and experience that earned loyalty at 10,000 customers often strains at 100,000. New customers are acquired faster than the service model can be extended to them. Churn rises not because the product has degraded but because the delivery system hasn't scaled alongside the customer base. Airbnb and Spotify both navigated this: the core experience had to be systematically re-engineered at each order of magnitude of scale to prevent churn from rising with growth. For A7, Lifetime is a Primary Accelerator because protecting it during hypergrowth is the strategic capability that separates sustainable scale from growth that exhausts itself.
Secondary Accelerator for A3 (Brand Evangelist): The Brand Evangelist archetype depends on deep customer relationships that generate advocacy, word-of-mouth, and community identity. These effects compound over time — a customer in year five generates more referral value, more community participation, and more brand evangelism than a customer in year one. High churn truncates the compounding before it reaches full value. A strong 630 for A3 doesn't just protect revenue; it protects the community depth that makes the evangelism archetype function.
Secondary Accelerator for A6 (Value Harvester): In a declining market, the customer base is the asset being harvested. Every churned customer is an irreplaceable unit of that asset — they cannot be replaced by acquisition in a contracting market. Lifetime extension is the primary mechanism for extracting more value from the existing base before it naturally erodes. Combined with ARPU growth (620), extended Lifetime is what allows an A6 to generate increasing value from a shrinking pool.
Growth Driver for A6 (Stability Lock-in): When the Value Harvester deploys the Stability Lock-in growth driver, Lifetime extension is the primary mechanism. The strategy: make it structurally easier to stay than to leave — through contract architecture, integration depth, switching cost design, and service quality that makes alternatives unattractive. The 630 score for A6 measures whether this lock-in is producing measurable lifetime extension, not just whether the tactic exists.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak)
Green Clean has never formally defined what constitutes a churned customer. The founder believes churn is "low" based on the intuition that most regular customers seem to keep booking — but this is not measured. There is no definition of what counts as "active": a customer who booked six cleans last year and none this year is not flagged anywhere in the system. The CRM migration that improved ARPU measurement has created a transaction log, but no cohort analysis has been run. The team cannot state its churn rate, cannot compare it to any benchmark, and has no historical trend data. Retention activities consist of a birthday discount email sent to customers on the anniversary of their first booking — not a strategy, but a single tactic with no measured impact. The leaky bucket is running; the size of the leak is unknown.
Score: +1 to +2 (Developing)
Green Clean has defined its churn metric: a customer is considered churned if they have not booked a clean within 90 days when their historical booking frequency was fortnightly or more often. Applying this definition retroactively, the team has calculated a 12-month churn rate of 22%. A benchmark research exercise has established that comparable residential cleaning services in the region average 28–32% annual churn, placing Green Clean's current rate below market average — a stronger position than the team expected. The churn trend over the past six months shows improvement: the monthly churn rate has fallen from 2.1% to 1.7% since the introduction of the subscription model (which provides an explicit renewal commitment that reduces passive drift). CRC has been formally calculated for the first time: the total cost of the birthday discount programme, the proactive re-engagement emails, and the subscription management time runs at approximately 8% of revenue. CAC runs at approximately 14% of revenue. Combined, CAC + CRC is 22% — within the 20–30% mature business benchmark. The measurement exists. The trend is positive. The investment ratio is sound.
Score: +2 to +3 (Strong)
Green Clean's churn management is cohort-level and predictive. Monthly cohort analysis tracks churn by acquisition channel, service tier, and customer tenure — revealing that customers acquired through the referral programme have a 12-month churn rate of 11%, versus 31% for customers acquired through paid social. This channel-level insight has redirected acquisition investment: referral programme budget has increased, paid social has been reduced, and the mix shift is producing compounding lifetime improvement. Annual churn has fallen from 22% to 14% over 24 months — from slightly below the market average benchmark to substantially below it. The 14% rate produces an average customer lifetime of 7.1 years, compared to 4.5 years at the 22% baseline: a 58% increase in expected lifetime at the same ARPU, without acquiring a single additional customer. CRC has risen slightly to 11% of revenue as the proactive at-risk customer programme has been built out — but combined with CAC of 12%, the total remains within the 20–30% benchmark at 23%. The churn model now includes a predictive layer: customers who miss two consecutive bookings are flagged and receive a personal outreach call within 7 days. The at-risk recovery rate is 41%.
Connected dimensions
Lifetime does not operate in isolation. Four dimensions connect most directly:
140 — Engagement: Engagement predicts lifetime. The most reliable leading indicator of churn is declining engagement — a customer who is using the product less, participating in fewer touchpoints, and showing reduced activity before formally cancelling or lapsing. A strong 140 score functions as an early-warning system for 630: engagement data identifies at-risk customers before they appear in churn statistics. When 630 scores are weak despite retention investment, the diagnostic starts at 140.
420 — Experience: Experience quality determines whether customers stay. Churn that cannot be explained by price sensitivity, competitive alternatives, or life circumstances is almost always an experience failure — something in the journey is consistently disappointing customers in a way that accumulates until departure. A rising CRC without a corresponding improvement in 633 is the signal: the retention spend is compensating for an experience problem that 420 needs to solve. Spending more to keep customers who are leaving because of a broken experience is the wrong lever.
610 — Acquisition: CAC must be justified by Lifetime. The CLTV/CAC ratio (610) depends on how long the acquired customer stays. A short lifetime makes an otherwise healthy CAC structurally unprofitable. The two dimensions must be scored and managed in relation to each other: improving 630 improves the return on 610 investment without changing the acquisition economics.
620 — ARPU: ARPU × Lifetime = total customer value. This is the fundamental identity connecting the metrics of the Stimulation and Retention levers. Growing ARPU in a high-churn environment is a partial strategy. Extending Lifetime with flat ARPU is also partial. The combination — ARPU rising and Lifetime extending simultaneously — is the full expression of customer value maximisation, and the strategic goal of the archetypes where both dimensions appear in the Vital 8.
Conclusion
Lifetime is the dimension that determines how much time each customer relationship has to generate value. Every investment in acquisition, ARPU growth, experience quality, and brand building operates inside the window that Lifetime defines. Shorten that window and every upstream investment returns less. Extend it and the compounding begins.
The diagnostic test is the churn arithmetic: calculate your current churn rate, convert it to a customer lifetime using the 1/churn formula, and then multiply that lifetime by your ARPU. The result is the total expected value of a newly acquired customer. Now reduce the churn rate by 5 percentage points and recalculate. The difference between those two numbers — achievable with deliberate retention investment — is what Lifetime management is worth commercially.
If you have not run that calculation, 631 scores negative. Everything else follows from measurement.
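The diagnostic calculation above can be sketched in a few lines. The churn and ARPU figures below are illustrative, not taken from any real dataset:

```python
def customer_lifetime_years(annual_churn_rate: float) -> float:
    """Expected customer lifetime in years, via the 1/churn approximation."""
    return 1 / annual_churn_rate

def expected_customer_value(annual_churn_rate: float, annual_arpu: float) -> float:
    """Total expected value of a newly acquired customer: ARPU x Lifetime."""
    return customer_lifetime_years(annual_churn_rate) * annual_arpu

# Illustrative figures: 22% annual churn, EUR 1,000 annual ARPU
current = expected_customer_value(0.22, 1000)   # ~ EUR 4,545
# Reduce churn by 5 percentage points and recalculate
improved = expected_customer_value(0.17, 1000)  # ~ EUR 5,882

print(f"Value of the churn reduction, per customer: EUR {improved - current:,.0f}")
```

The difference between the two numbers is what deliberate retention investment is worth commercially, before acquiring a single additional customer.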
Sources
Frederick F. Reichheld, The Loyalty Effect: The Hidden Force Behind Growth, Profits, and Lasting Value, Harvard Business School Press, 1996 — foundational churn-to-value mathematics
Robbie Kellman Baxter, The Forever Transaction, McGraw-Hill Education, 2020 — subscription and retention architecture
Marketing Canvas Method, Appendix E — Dimension 630: Lifetime, Laurent Bouty, 2026
About this dimension
Dimension 630 — Lifetime is part of the Metrics meta-category (600) in the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget/ROI (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - ARPU
ARPU measures whether you are maximising revenue from each customer through frequency, spend, and value growth. Learn the four properties, the revenue equation, and why measurement capability is the prerequisite everything else depends on.
About the Marketing Canvas Method
This article covers dimension 620 — ARPU, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net · Get the book.
In a nutshell
ARPU — Average Revenue Per User — is the metric that scores whether you are extracting maximum value from each customer relationship, not just from your customer base in aggregate. The dimension scores four properties: measurement capability (do you know who is buying and how much?), purchase frequency (are customers buying often enough?), average spend per transaction (is the value per purchase competitive?), and trend (is ARPU growing over time?).
ARPU is the Stimulation lever's primary metric. When the strategic goal is to grow revenue by getting more value from existing customers rather than acquiring new ones, ARPU is the scoreboard. Revenue can grow with a flat or even shrinking customer base if ARPU is rising. That possibility is only accessible to companies that can measure it.
Introduction
Every marketing strategy has a revenue growth direction. Acquiring more customers (Acquisition lever). Keeping them longer (Retention lever). Getting more value from each one (Stimulation lever). ARPU is what the Stimulation lever measures — the revenue generated per active customer, and whether it is moving in the right direction.
The dimension is not about whether you understand the concept of average revenue. It is about whether your business has the instrumentation to know, at the individual customer level, who is buying what, how often, and at what transaction value — and whether deliberate strategies are moving those numbers upward over time.
What does the Marketing Canvas score in ARPU?
The dimension scores four properties — measurement capability, purchase frequency, average spend per transaction, and trend — each a distinct layer of revenue-per-customer health.
Measurement capability is the prerequisite that everything else depends on. Can you measure ARPU, because you know who is buying and using your products and services? Companies that sell through intermediaries — retailers, distributors, resellers, channel partners — frequently cannot measure ARPU at the customer level. They know what they ship to the channel. They do not know who buys it, how frequently that person returns, or what they spend across the relationship. The method's position is unambiguous: strategy built on unmeasurable metrics is fiction. If you cannot measure ARPU, you cannot manage it, benchmark it, or improve it with any precision. A negative measurement capability score is not a data problem — it is a business model problem. The route to a positive score typically requires a direct relationship with the customer, whether through owned channels, a loyalty programme, direct distribution, or subscription architecture.
Purchase frequency — is the average number of purchases per customer per period above industry average? Frequency is one of the two levers within ARPU that can be deliberately moved, the other being average transaction value. Frequency improvement strategies — subscription models, loyalty programmes, replenishment triggers, behavioural nudges, service bundling — all work by increasing the number of times a customer transacts, not the size of each transaction. A weak frequency score relative to industry benchmarks suggests the customer's potential buying rhythm is not being captured.
Average spend per purchase — is the average transaction value per customer above industry average and above direct competitors? Transaction value improvement strategies — upselling to premium tiers, cross-selling complementary products, bundling, value-based pricing discipline — all work by increasing the revenue extracted from each interaction, independent of how often it occurs. A weak score here often traces upstream to dimension 330 (Prices) or 310 (Features): either the pricing architecture is not capturing full willingness to pay, or the product range does not provide sufficient upsell surface.
Trend is the most strategic of the four properties because it reveals direction, not just position. A current ARPU above industry average is a position. A rising ARPU trend is a momentum signal. The method scores both: where you are (frequency and spend, benchmarked against industry) and where you are going (trajectory over time). A company with below-average ARPU but a strongly positive trend is in a different strategic position from one with above-average ARPU that has been flat for two years.
ARPU in the revenue equation
The Marketing Canvas places ARPU explicitly in the revenue model. In the method's framework:
Revenue = AOP × NT × ATV × 12 (for subscription or recurring models)
Where:
AOP = Active Operating Periods (the number of active customers)
NT = Number of Transactions per customer per period
ATV = Average Transaction Value per purchase
× 12 = annualisation factor
ARPU captures the NT × ATV components. When ARPU grows — either through frequency (NT) or transaction value (ATV) — revenue grows, even if AOP is flat or declining. This is the commercial logic that makes ARPU the primary growth mechanism for archetypes whose customer base is stable or contracting.
The implication: a business that is not growing its customer count can still grow revenue if it is managing ARPU deliberately. This is not a consolation prize for low-acquisition businesses — it is the preferred growth strategy for several archetypes, particularly A6 (Value Harvester), where the customer base is the asset to be maximised before it erodes.
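The revenue identity can be made concrete with a short sketch. The figures are illustrative (they echo the frequency and transaction-value numbers used in the Green Clean case study below, with an assumed customer count):

```python
def annual_revenue(aop: int, nt: float, atv: float) -> float:
    """The method's revenue identity: Revenue = AOP x NT x ATV x 12."""
    return aop * nt * atv * 12

def monthly_arpu(nt: float, atv: float) -> float:
    """ARPU captures the NT x ATV component of the identity."""
    return nt * atv

# Illustrative: 1,000 active customers, 2.1 transactions/month, EUR 89 average value
baseline = annual_revenue(1000, 2.1, 89)
# Same customer count (AOP flat), but frequency and transaction value both improved
stimulated = annual_revenue(1000, 2.7, 104)

print(f"Revenue growth with zero new customers: {stimulated / baseline - 1:.0%}")
```

With AOP held flat, the entire revenue increase comes from the NT × ATV component — the arithmetic behind the claim that a stable or contracting base can still grow revenue.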
The measurement prerequisite in practice
Measurement capability has a compounding effect on all other ARPU properties. A company that cannot measure ARPU cannot validly assess frequency, average spend, or trend — because all three require knowing who is buying and at what level.
The diagnostic questions are practical: Do you have a direct relationship with your end customers, or does an intermediary sit between you and them? Can you identify individual customers across multiple transactions and aggregate their behaviour over time? Do you have a system — CRM, loyalty programme, subscription platform, or equivalent — that captures purchase identity at the transaction level? Can you calculate, for any given customer, how many times they have purchased and at what average value?
If the answer to any of these is no, measurement capability scores negative. The consequence is not just a low ARPU score — it is the strategic constraint that Stimulation lever strategies are inaccessible without the infrastructure to identify and act on individual customer behaviour.
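The diagnostic questions above reduce to a simple aggregation over identified transactions. A minimal sketch, with hypothetical customer names and amounts standing in for a real CRM or loyalty-programme export:

```python
from collections import defaultdict

# Hypothetical transaction log: (customer_id, amount). A real system would
# pull this from a CRM, loyalty programme, or subscription platform.
transactions = [
    ("anna", 89.0), ("anna", 95.0), ("bruno", 120.0),
    ("anna", 89.0), ("carla", 75.0), ("bruno", 110.0),
]

per_customer = defaultdict(list)
for customer_id, amount in transactions:
    per_customer[customer_id].append(amount)

# The two numbers measurement capability must produce for any customer:
for customer_id, amounts in per_customer.items():
    frequency = len(amounts)                  # how many times they purchased
    avg_value = sum(amounts) / len(amounts)   # at what average value
    print(f"{customer_id}: {frequency} purchases, avg EUR {avg_value:.2f}")
```

If transactions cannot be associated with an identified customer in the first place — the intermediary problem described above — this aggregation is impossible, and measurement capability scores negative.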
Statements for self-assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
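The scoring rule — average the four sub-scores, with a zero average rounding to −1 — can be expressed directly. A minimal sketch:

```python
def dimension_score(sub_scores: list[int]) -> float:
    """Average four sub-scores, each in -3..+3 with zero excluded.
    Per the method's rule, a mathematically zero average rounds to -1."""
    for s in sub_scores:
        if s == 0 or not -3 <= s <= 3:
            raise ValueError("sub-scores must be in -3..+3, zero excluded")
    avg = sum(sub_scores) / len(sub_scores)
    return -1 if avg == 0 else avg

print(dimension_score([2, 1, -1, 2]))   # 1.0
print(dimension_score([2, -2, 1, -1]))  # zero average rounds to -1
```

The forced rounding to −1 reflects the method's stance that a balanced score is not neutral: ambivalence on a Metrics dimension is itself a weakness.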
Interpreting your scores
Negative scores (−1 to −3): ARPU is unmeasured, below industry benchmark, declining, or all three. The most common root cause is 621 — the measurement infrastructure does not exist, making deliberate ARPU strategy impossible. If 621 is negative, it must be resolved before 622, 623, or 624 can be meaningfully improved.
Positive scores (+1 to +3): ARPU is tracked at the individual customer level, above competitive benchmarks on frequency and transaction value, and showing a positive trend driven by deliberate cross-sell, upsell, or subscription strategies. The Stimulation lever is active and measurable.
Strategic Role
Primary Accelerator for A6 (Value Harvester): The Value Harvester archetype faces a structurally declining customer base — through market contraction, category disruption, or strategic wind-down. The core mission is to extract maximum revenue from the remaining base before it erodes further. ARPU is the primary instrument: if you cannot grow the customer count, you must grow what each customer generates. Nokia's PC division, IBM's legacy hardware operations, the physical media businesses of the early 2000s — all faced this equation. ARPU is not a growth story in A6; it is a survival and value extraction strategy. A weak 620 score for A6 means the value in the existing base is being left on the table.
Secondary Accelerator for A2 (Efficiency Machine): Efficiency businesses win on cost structure, but ARPU discipline prevents the trap of growing volume at declining transaction values. A2 companies that allow average spend per purchase to drift below market — through discount dependency, race-to-bottom pricing, or failure to develop premium tiers — sacrifice the margin that makes operational efficiency commercially meaningful. ARPU keeps the revenue per unit healthy while the cost structure is being optimised.
Secondary Accelerator for A4 (Stagnant Leader): A stagnant leader has a large installed base that is not growing. ARPU is the mechanism through which that base generates increasing revenue without acquisition investment. Upsell programmes, premium tier introduction, frequency stimulation through loyalty architecture — these are the A4 ARPU strategies. Sage and Peloton both faced this challenge: large customer bases with flat or declining ARPU, requiring deliberate Stimulation lever investment to restore revenue growth from existing relationships.
Secondary Accelerator for A8 (Niche Expert): In a niche, customer count is bounded by market definition. ARPU is the primary revenue growth mechanism once the addressable niche has been substantially penetrated. Deep expertise enables premium pricing (623) and expanded service scope that generates frequency (622). Hermès cannot grow by acquiring more customers — the niche is intentionally small. It grows ARPU by deepening the relationship, expanding the product universe, and maintaining pricing discipline that competitors in adjacent categories cannot match.
Growth Driver for A4 (Premium Stimulation): When the A4 archetype deploys the Stimulation growth driver, ARPU is the scorecard. The strategic question shifts from "how do we acquire more customers?" to "how do we get more value from the customers we have?" Premium service tiers, bundle architecture, frequency programmes — all converge on the NT × ATV components of the revenue equation. A positive 624 trend is the evidence that the Stimulation strategy is working.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean operates a direct service model — customers book cleans through the website and pay directly — so the measurement capability question should be straightforward. In practice, bookings are tracked in a spreadsheet by date and postcode, not by named customer. The team cannot produce a list of customers sorted by revenue, frequency, or tenure. They know the total revenue per month; they do not know which customers generate that revenue or how it has changed at the individual level. Purchase frequency is estimated at "every two to three weeks per regular customer" — an informal observation, not a measured figure. Average spend per clean is known (€89 average booking value) but not benchmarked against competitors in any formal way. There is no deliberate strategy to increase either frequency or transaction value. ARPU is in the system conceptually but is not being managed.
Score: +1 to +2 (Developing) Green Clean has migrated customer bookings to a CRM system that associates every transaction with a named customer. For the first time, the team can calculate individual-level purchase frequency and annual revenue per customer. The results are diagnostic: the top 20% of customers (by annual revenue) generate 61% of total revenue; the bottom 30% have purchased only once. Average frequency for regular customers is 2.1 cleans per month; the industry benchmark for comparable residential services is estimated at 1.8, placing Green Clean slightly above average. Average transaction value is €89, against a benchmarked competitor average of €82 — above market. The ARPU trend for the past 12 months is flat: frequency has been stable, average spend has not moved. The measurement is now in place. The strategy to move the trend is the next step: a bundled subscription offer (quarterly commitment at a discount) is under development to convert sporadic customers into regular ones and improve frequency among the bottom segment.
Score: +2 to +3 (Strong) Green Clean's ARPU management is fully instrumented and actively growing. The subscription model introduced 18 months ago has migrated 44% of active customers to monthly or quarterly commitments, increasing average purchase frequency from 2.1 to 2.7 cleans per month across the base. Average transaction value has grown from €89 to €104, driven by a tiered service architecture — Standard Clean, Deep Clean, and the Full Indoor Health Audit — that provides deliberate upsell surface at every booking interaction. The Indoor Health Audit, priced at €220, is purchased by 28% of active customers at least once per year, contributing significantly to ATV uplift. ARPU trend for the past 12 months shows 17% year-on-year growth. The method's revenue equation is operating as designed: AOP is growing modestly (+8%), but the NT × ATV component is growing at more than twice that rate, meaning revenue growth outpaces customer acquisition growth. The Stimulation lever is doing its work.
Connected dimensions
ARPU does not operate in isolation. Four dimensions connect most directly:
310 — Features: Features enable cross-sell and upsell. The product or service range must provide sufficient depth to give customers a reason to increase their transaction value or expand their relationship. A company with a single product at a single price point has no upsell surface. Features (310) is the upstream dimension that determines the ceiling of what ARPU can reach through 623 (average spend) improvement.
330 — Prices: Pricing architecture directly affects ARPU. A pricing structure with only one tier and no premium options constrains transaction value regardless of customer willingness to pay. Value-based pricing discipline — ensuring that price reflects the full value delivered, not the competitor floor — is the upstream condition for 623 to score positively. The 330 and 623 scores move together: weak pricing architecture produces a ceiling on transaction value that no frequency strategy can compensate.
420 — Experience: Better experience supports higher ARPU. Customers who have an outstanding experience are more likely to purchase more frequently, less likely to resist premium tier offers, and more resistant to competitor alternatives that might siphon frequency away. The 420 score is an upstream predictor of 622 and 624 performance. Experience degradation is typically visible in ARPU trend data before it appears in churn data.
630 — Lifetime: ARPU × Lifetime = total customer value. This is the fundamental identity that connects the two Metrics dimensions most directly. A high ARPU with low lifetime produces a different strategic outcome than a moderate ARPU with high lifetime. The method requires both to be scored and interpreted in relation to each other — and the CLTV/CAC ratio (610) cannot be calculated without knowing both components.
Conclusion
ARPU is the dimension that determines whether the customer base you have is generating the revenue it is capable of generating. Every acquired customer represents a revenue potential. The gap between that potential and actual revenue is the ARPU opportunity — the difference between what the customer could spend with you and what they do.
The strategic discipline the method requires begins with measurement: knowing who is buying, at what frequency, at what transaction value. Without that, every ARPU strategy is hypothesis. With it, the Stimulation lever becomes the most capital-efficient growth mechanism available — growing revenue without the cost and risk of acquiring new customers.
The single most diagnostic question: can you name your top 20% of customers by annual revenue right now, without running a manual query? If the answer is no, the measurement prerequisite hasn't been met. That is where 620 improvement begins.
Sources
Robbie Kellman Baxter, The Membership Economy, McGraw-Hill Education, 2015 — foundational framework for frequency and recurring revenue strategy
Madhavan Ramanujam & Georg Tacke, Monetizing Innovation, Wiley, 2016 — pricing architecture and willingness-to-pay instrumentation
Marketing Canvas Method, Appendix E — Dimension 620: ARPU, Laurent Bouty, 2026
About this dimension
Dimension 620 — ARPU (Average Revenue Per User) is part of the Metrics meta-category (600) in the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget/ROI (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.
Marketing Canvas - User Acquisition
Acquisition scores four metrics — CAC, conversion rate, CLTV/CAC ratio, and time to conversion. Learn the canonical diagnostic range and why the ratio matters more than the absolute number.
About the Marketing Canvas Method
This article covers dimension 610 — User Acquisition, part of the Metrics meta-category. The Marketing Canvas Method structures marketing strategy across 24 dimensions and 9 strategic archetypes. Full framework reference at marketingcanvas.net · Get the book.
In a nutshell
Acquisition — formally, Acquisition (Gross Adds) — is the dimension that scores whether your customer acquisition engine is efficient: acquiring new customers at a cost and rate that supports your business goals, not just growing the customer count. The dimension scores four metrics: Customer Acquisition Cost (CAC), conversion rate, CLTV/CAC ratio, and time to conversion.
These are not vanity metrics. They are the structural indicators of whether growth is sustainable or being bought at a loss. The most diagnostic is the CLTV/CAC ratio: below 1:1, you lose money on every customer acquired. At 3:1, the unit economics work. Above 5:1, you are almost certainly underinvesting in growth.
Introduction
Every business acquires customers. The strategic question is not whether acquisition is happening — it is whether the economics of acquisition are healthy enough to sustain the strategy. A company can grow its customer base rapidly while systematically destroying value, if the cost of acquiring each customer exceeds what that customer will ever return.
The Marketing Canvas treats Acquisition as a metrics discipline, not a channel selection exercise. The dimension doesn't score which platforms you advertise on or how many leads your campaigns generate. It scores the four numbers that determine whether the acquisition engine is structurally sound: how much each customer costs to acquire, how many prospects convert, whether lifetime value justifies acquisition spend, and how long the conversion process takes.
Acquisition is the first of four Metrics dimensions (610, 620, 630, 640) that form the measurement backbone of the Canvas. Without functioning Metrics dimensions, the other five meta-categories produce strategic intent without commercial accountability.
What the Marketing Canvas scores in Acquisition
The dimension scores four metrics, each a distinct diagnostic layer of acquisition health.
CAC (Customer Acquisition Cost) — is your cost of acquiring a new customer below industry average and below direct competitors? CAC is the total investment in marketing and sales divided by the number of new customers acquired in a period. The critical framing the method applies: CAC is only meaningful relative to what the acquired customer returns. A high CAC is not automatically a problem. A CAC that exceeds the lifetime value of the customer it acquired is always a problem. Before scoring CAC in isolation, the method cross-references it with the CLTV/CAC ratio. The ratio matters more than the absolute number.
Conversion rate — is the rate at which prospects become buyers above industry average? A low conversion rate is a signal that something in the middle of the funnel is failing — the proposition, the proof, the experience, the pricing, or the channel. The fault rarely lies in the acquisition funnel itself; the root cause is almost always upstream in the Canvas.
CLTV/CAC ratio — does the lifetime value customers generate justify the investment in acquiring them? This is the canonical diagnostic of acquisition health. Below 1:1, the business is losing money on every customer acquired, structurally unprofitable regardless of revenue growth. At 3:1, the economics work — customers return three times their acquisition cost over their lifetime, the threshold widely recognised as the minimum for sustainable growth investment. Above 5:1, the company is likely underinvesting in growth: excess margin that could be redeployed into acquisition is sitting idle while the market may be growing faster than the company is. The method flags both failure modes: below-1:1 as structurally broken, above-5:1 as a growth opportunity signal.
Time to conversion — is the time elapsed between first contact and first purchase shorter than industry average? Time to conversion is both a commercial metric (faster conversion means capital cycles more quickly) and a diagnostic signal. Slow conversion typically indicates friction in the sales or onboarding process, insufficient proof at the decision stage, or a mismatch between channel and buyer readiness. It is one of the most sensitive indicators of experience (420) and proof (340) gaps, because the last obstacles to conversion are almost always credibility and confidence.
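The CLTV/CAC thresholds above can be sketched as a small classifier. How to label the band between 1:1 and 3:1 is my interpretation — the text defines the boundaries, not that band's name:

```python
def diagnose_cltv_cac(cltv: float, cac: float) -> str:
    """Classify acquisition health using the method's canonical thresholds.
    The 1:1 to 3:1 band label is an interpretation: not a structural loss,
    but below the sustainable-growth threshold."""
    ratio = cltv / cac
    if ratio < 1:
        return "structurally broken: losing money on every customer acquired"
    if ratio < 3:
        return "below the 3:1 sustainable-growth threshold"
    if ratio <= 5:
        return "healthy: unit economics support growth investment"
    return "likely underinvesting in growth"

# Illustrative: EUR 3,000 lifetime value against EUR 800 acquisition cost
print(diagnose_cltv_cac(cltv=3000, cac=800))  # ratio 3.75 -> healthy
```

Note that the input must be lifetime value, not first-year value — which is why the method requires 610 and 630 to be scored together.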
The B2B translation
The four metrics apply universally, but their absolute values vary enormously by context. The method applies one interpretive rule: score relative to industry and competitive benchmarks, not absolute thresholds.
In B2B, CAC includes sales team compensation, RFP response costs, proof-of-concept investments, executive relationship-building, and the full duration of a multi-month sales cycle. A CAC of €50,000 is not inherently high for a contract worth €500,000 annually. The ratio remains the diagnostic. A CAC of €5,000 for the same contract is exceptional efficiency. A CAC of €50,000 for a contract worth €40,000 is a structural loss regardless of how many deals are being closed.
Time to conversion in B2B enterprise can extend to 12–18 months for complex deals. The relevant benchmark is not a consumer e-commerce conversion window — it is the industry standard for equivalent deal complexity. Scoring time to conversion requires knowing that benchmark.
Why low CAC can be a warning signal
The method flags a counterintuitive risk: a CAC that is dramatically below competitors, without a corresponding explanation in channel efficiency or product virality, may indicate that the company is acquiring customers from segments that do not generate sufficient lifetime value.
The mechanism: the cheapest customers to acquire are often the least qualified. They convert quickly because the proposition appears to solve a problem it doesn't actually solve at depth. They churn early. CLTV is low. The CLTV/CAC ratio that looked healthy at acquisition looks broken six months later.
This is why 610 and 630 (Lifetime) must be scored together. A 611 score of +3 with a 630 score of −2 is not a success story. It is a churn problem being temporarily obscured by acquisition volume.
Statements for Self-Assessment
Score each of the four sub-questions from −3 to +3 (no zero), then average for the dimension score. If the average is mathematically zero, round to −1.
Interpreting your scores
Negative scores (−1 to −3): Acquisition metrics are unmeasured, above industry average in cost, or producing a CLTV/CAC ratio below the 3:1 sustainability threshold — and below 1:1, growth is being purchased at a structural loss. Conversion rate and time to conversion point to friction that is not being identified or addressed. The acquisition engine is running without a dashboard.
Positive scores (+1 to +3): CAC is tracked, benchmarked, and competitive. The CLTV/CAC ratio sits in the 3:1–5:1 range or, if above 5:1, is being actively used to justify increased acquisition investment. Conversion rate is above industry benchmarks and time to conversion is below them. The acquisition engine is instrumented and improving.
Strategic Role
Fatal Brake for A2 (Efficiency Machine): Cost-efficient customer acquisition is the core strategic capability of the Efficiency Machine archetype. A2 competes on operational excellence — the ability to serve customers at a cost structure competitors cannot match. If CAC is above industry average for an A2, the strategic foundation is cracked: the business that is supposed to win on cost efficiency is paying more than its competitors to acquire each customer. No operational efficiency downstream compensates for that. Acquisition is the one dimension where A2 cannot afford a weak score.
Secondary Brake for A7 (Scale-Up Guardian): Hypergrowth creates acquisition pressure: the company needs to acquire customers faster than before, often in new segments or geographies, using channels that haven't yet been optimised. CAC tends to rise during scale-up because the cheapest, most efficient acquisition channels (organic, referral) have been saturated. If 610 is not actively managed during the scale-up phase, the unit economics that justified growth at €X per customer begin to look different at €2X per customer across a larger base.
Secondary Accelerator for A1 (Disruptive Newcomer): A disruptor needs early customers at a cost that doesn't exhaust runway before product-market fit is confirmed. The acquisition metrics for A1 are diagnostic: if CAC is rising as the early adopter segment is saturated and the company tries to reach mainstream customers, it is a signal that the proposition hasn't yet crossed the chasm. A1 uses 610 scores as a product-market fit indicator, not just a marketing efficiency metric.
Secondary Accelerator for A5 (Pivot Pioneer): A company in strategic pivot is effectively re-entering the acquisition problem with a new proposition, new segment, or new channel. The metrics reset. Old CAC benchmarks may not apply. 610 for A5 scores whether the new acquisition engine is being built with the right unit economics from the start, rather than inheriting the assumptions of the previous strategic direction.
Growth Driver for A5 and A7: In both archetypes, new customer acquisition directly drives the growth engine. For A7, the scale-up is the growth engine — more customers, faster. For A5, the new direction's viability is validated by whether it can acquire customers at sustainable economics. In both cases, 610 is not a maintenance metric; it is the primary growth indicator.
Case study: Green Clean
Green Clean is a fictional eco-friendly residential cleaning service used as the recurring worked example throughout the Marketing Canvas Method.
Score: −2 to −1 (Weak) Green Clean has never formally calculated its CAC. The founder estimates it is "around €80 per new customer" based on a rough calculation of advertising spend divided by bookings — but this excludes time spent on social media, the cost of the free introductory clean offered to first-time customers, and the referral credits paid to existing customers who recommend the service. The real CAC, once fully loaded, is likely closer to €160. At an average first-year contract value of €420, this produces a CLTV/CAC ratio that depends entirely on how long customers stay — and Green Clean has not calculated churn. Conversion rate is not tracked: the team knows how many bookings it receives but not how many website visitors or enquiries did not convert. Time to conversion is unknown. None of the four metrics is being actively managed. The acquisition engine is operating without instrumentation.
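The gap between the founder's €80 estimate and the likely €160 fully loaded figure can be sketched as follows. The article gives only those two endpoints; the individual cost components and customer count below are illustrative assumptions chosen to reproduce them:

```python
# Naive CAC: advertising spend / new customers only.
# Fully loaded CAC adds the components the founder's estimate excluded.
# All amounts are illustrative, not from the article.
new_customers = 50
costs = {
    "advertising": 4_000,        # alone, yields the naive ~EUR 80 estimate
    "social_media_time": 1_500,  # assumed value of unbilled founder time
    "intro_clean_cost": 1_250,   # assumed cost of free introductory cleans
    "referral_credits": 1_250,   # assumed credits paid to referrers
}
naive_cac = costs["advertising"] / new_customers  # 80.0
loaded_cac = sum(costs.values()) / new_customers  # 160.0
print(f"naive EUR {naive_cac:.0f} vs fully loaded EUR {loaded_cac:.0f}")
```

The point is structural rather than numeric: every acquisition cost that is excluded from the calculation flatters the ratio and hides the structural loss.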
Score: +1 to +2 (Developing) Green Clean has instrumented its acquisition funnel for the first time. CAC has been calculated at €138 using a fully loaded methodology (advertising, social media time, referral credits, introductory clean cost). Industry benchmarks for residential home services in the region suggest an average CAC of €180, making Green Clean competitive but not exceptional. Conversion rate from enquiry to first booking is 31%, compared to an estimated industry average of 28% — marginally above benchmark, consistent with the Family Health Report serving as a credibility accelerator at the decision stage. CLTV has been estimated at €1,200 over an average 3-year customer lifetime, producing a CLTV/CAC ratio of approximately 8.7:1 — above the 5:1 threshold, signalling that Green Clean is likely underinvesting in acquisition relative to the lifetime value it generates. Time to conversion from first contact to first booking averages 11 days. The metrics exist. The strategic implications are beginning to be drawn: the above-5:1 ratio suggests the acquisition budget should be increased, not managed for efficiency.
Score: +2 to +3 (Strong) Green Clean's acquisition economics are fully instrumented and actively managed against strategic targets. CAC is tracked by channel — organic search (€62), referral programme (€89), paid social (€147), partnership (€104) — enabling deliberate reallocation toward the lowest-cost, highest-quality channels. The CLTV/CAC ratio has been recalculated using cohort data: customers acquired through the referral programme have a 4.2-year average lifetime versus 2.8 years for paid social acquisitions, making referral the highest-value channel by ratio even though its CAC is higher than organic search's in absolute terms. Conversion rate has improved to 38% following a redesign of the enquiry-to-booking sequence, including a same-day response protocol and the Family Health Report preview offered at enquiry stage. Time to conversion has fallen to 7 days. The CLTV/CAC ratio now sits at 6.4:1 across all channels combined, prompting a deliberate decision to increase acquisition investment rather than manage CAC downward — the economics justify acceleration.
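The channel-level logic above can be sketched as a small calculation. One assumption is mine: annual revenue per customer of roughly €400, derived from the €1,200 CLTV over a 3-year average lifetime quoted at the developing stage; the article does not state per-channel CLTV figures.

```python
def channel_ratio(cac, lifetime_years, annual_revenue=400):
    """CLTV/CAC for one acquisition channel, with CLTV taken as
    annual revenue x average customer lifetime (a simplification)."""
    return (annual_revenue * lifetime_years) / cac

# Green Clean's strong-score figures: referral EUR 89 CAC / 4.2-year
# lifetime, paid social EUR 147 CAC / 2.8-year lifetime.
referral = channel_ratio(89, 4.2)
paid_social = channel_ratio(147, 2.8)
print(f"referral {referral:.1f}:1 vs paid social {paid_social:.1f}:1")
```

Under this assumption referral comes out several times stronger than paid social, which is the article's point: ranking channels by CAC alone hides the lifetime difference that cohort data reveals.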
Connected dimensions
Acquisition does not operate in isolation. Five dimensions connect most directly:
330 — Prices: Pricing directly affects conversion rate (612) and time to conversion (614). A price that is misaligned with perceived value creates friction at the decision stage that no acquisition optimisation can overcome. The 330 score is often the upstream root cause of a weak 612 score.
430 — Channels: Channel selection determines acquisition cost (611). The channels used to reach prospects determine both the CAC and the quality of acquired customers. A channel that produces low-CAC customers who churn quickly may score well in 611 while producing a weak 613. Channel-level CLTV/CAC analysis is the most granular form of 610 assessment.
530 — Media: Media mix efficiency drives acquisition cost. The compounding media system (owned → earned → shared → paid amplification) systematically reduces CAC over time as organic and referral channels grow. A company dependent on paid media will see CAC plateau or rise; a company with strong owned and earned media infrastructure will see CAC fall as the system matures.
620 — ARPU: ARPU must justify CAC. A low ARPU combined with a high CAC drags the CLTV/CAC ratio below 3:1 unless customer lifetime is exceptionally long. Before investing in acquisition growth, the method checks whether the revenue each acquired customer generates is sufficient to make the investment worthwhile.
630 — Lifetime: Lifetime value makes acquisition cost sustainable. The CLTV in the CLTV/CAC ratio is a function of both ARPU and how long customers stay. A weak 630 (high churn) can turn a healthy-looking 611 (low CAC) into a structural loss. The two dimensions must be scored and interpreted together.
Conclusion
Acquisition is the dimension that connects marketing strategy to commercial viability. Every other dimension in the Canvas — the job definition, the positioning, the features, the experience, the stories — ultimately expresses itself in whether customers are acquired at a cost and rate that makes the business sustainable.
The strategic discipline the method requires is not campaign optimisation. It is instrumentation: knowing the CAC, knowing the conversion rate, knowing the CLTV/CAC ratio, and making deliberate decisions based on what those numbers mean relative to industry benchmarks and strategic goals.
The single most actionable diagnostic: calculate your CLTV/CAC ratio. If it is below 3:1, fix it before investing further in growth. If it is above 5:1, you are almost certainly leaving growth on the table. The ratio tells you whether to optimise for efficiency or invest for acceleration — and getting that choice wrong is among the most expensive strategic mistakes a marketing function can make.
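That decision rule is mechanical enough to write down. A minimal sketch, with the function name and wording of the recommendations being mine:

```python
def cltv_cac_action(cltv, cac):
    """Map the CLTV/CAC ratio to the method's recommended posture:
    below 3:1 fix economics, 3:1-5:1 optimise, above 5:1 accelerate."""
    ratio = cltv / cac
    if ratio < 3:
        return ratio, "fix unit economics before investing further in growth"
    if ratio <= 5:
        return ratio, "healthy band: optimise for efficiency"
    return ratio, "likely underinvesting: consider accelerating acquisition"

# Green Clean's developing-stage figures: EUR 1,200 CLTV, EUR 138 CAC.
ratio, action = cltv_cac_action(1_200, 138)
print(f"{ratio:.1f}:1 -> {action}")
```

Fed the developing-stage figures, the rule lands in the above-5:1 branch, matching the article's conclusion that Green Clean should increase acquisition investment rather than manage CAC downward.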
Sources
David Skok, "SaaS Metrics 2.0 — A Guide to Measuring and Improving What Matters", For Entrepreneurs blog — forentrepreneurs.com (foundational CLTV/CAC framework)
Ilya Volodarsky, "The Startup Metrics You Need to Monitor", Harvard Business Review, 2016 — hbr.org
Marketing Canvas Method, Appendix E — Dimension 610: Acquisition (Gross Adds), Laurent Bouty, 2026
About this dimension
Dimension 610 — Acquisition (Gross Adds) is part of the Metrics meta-category (600) in the Marketing Canvas Method. The Metrics meta-category contains four dimensions: Acquisition (610), ARPU (620), User Lifetime (630), and Budget/ROI (640).
The Marketing Canvas Method is a complete marketing strategy framework built around 6 meta-categories, 24 dimensions, and 9 strategic archetypes. Learn more at marketingcanvas.net or in the book Marketing Strategy, Programmed by Laurent Bouty.