Complete Study Guide · PM Course

Product Management
Exam Notes

All sessions distilled: frameworks, formulas, MCQ prep, and descriptive answer guides. Everything you need for tomorrow.


Quick Jump

S2 Product Thinking S3 Segments & Market Sizing S4 Competitive & Positioning S5 Qualitative Research S6 Quant & Analytics S7 Metrics, North Star & A/B S9 Business Models & Pricing S10 Discovery & Validation S11 Prioritization & Stakeholders S12 Roadmapping & Product Ops S13 PRDs, UX & Quality S15 Enterprise, Platform & API MCQ Practice Bank Case Study Questions Quick Templates
02

Product Thinking & Problem Framing

JTBD (Jobs-to-be-Done) · Outcome Statements · Constraints · Opportunity Brief

Core Idea

Problem framing is the highest-leverage PM skill - it sets the user, the metric, and the scope boundaries before any solution is discussed. 42% of startups cite "no market need" as the top failure reason. If you can't write the problem + metric in 1 minute, you're not ready to build.

Rule If you can't measure the impact, you don't understand the problem yet.

Key Terminology Cheat Sheet

Segment
Targetable group you can reach and measure
Persona
Empathy story/archetype (NOT a market size)
ICP (Ideal Customer Profile)
B2B (Business-to-Business) best-fit buyer profile (org type + readiness)
JTBD (Jobs-to-be-Done)
The progress someone wants to make in a situation
Outcome Metric
Measure of impact/adoption - not output/features
Guardrail
Metric that must NOT degrade while improving success
Constraint
Real boundary: time, compliance, capacity, platform
Non-goal
Explicitly out of scope - prevents scope creep

Problem Statement Formula

Problem Statement Structure [Segment] + [Pain] + [Context] + [Measurable Impact]

Example: "Mid-market ops teams lose 3+ hours/week on approval workflows because email-based processes lack traceability, leading to compliance risk and decision delays."

Outputs vs Outcomes (Critical Distinction)

Output (BAD)
"Ship dashboard redesign"
"Launch AI assistant"
"Add team analytics page"
Outcome (GOOD)
"Increase activation from 32% to 45%"
"Reduce time-to-value for new teams"
"Improve paid conversion from trial"

Opportunity Brief Structure (A2 Deliverable)

  • Segment: Who specifically?
  • Problem statement: Pain + context + measurable impact
  • Why now: What changed? Urgency driver?
  • Constraints: Real limits on solution space
  • Non-goals: What we will NOT solve
  • Success metric + baseline: What moves? From where?
  • Guardrail: What must not worsen?
  • Top 3 assumptions: Ranked by risk

Product Thinking Lens: What Good Balances

Value
Does it create real user value?
Usability
Can users actually use it?
Feasibility
Can we build it?
Viability
Does it work for the business?
03

Customers, Segments & Market Sizing

TAM (Total Addressable Market) · SAM (Serviceable Addressable Market) · SOM (Serviceable Obtainable Market) · Bottom-up · Segment Scorecard · Beachhead

Core Idea

"The best segment isn't the biggest - it's the one you can win, and prove." Sizing is a decision tool: it helps you say "no" with confidence and allocate scarce resources to markets worth building for.

TAM / SAM / SOM

TAM
Total Addressable Market
All buyers for this specific job-to-be-done (NOT the whole industry)
SAM
Serviceable Addressable Market
Segment/geo/constraints you CAN serve with your current capabilities
SOM
Serviceable Obtainable Market
What you can WIN in 12–24 months, constrained by GTM capacity
Key Rule SOM must be constrained by your GTM (Go-to-Market) capacity (pipeline, conversion rate, rollout bandwidth), not just market share wishes.

Market Sizing Formula

Base Formula Market Size = (# of customers) × (revenue per customer)

Then apply reality filters: market boundary, adoption & switching friction, pricing/packaging, GTM (Go-to-Market) capacity, competition intensity.
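The arithmetic is simple enough to sanity-check in a few lines. Below is a minimal bottom-up sketch; every input is an illustrative placeholder rather than a figure from the course. The point it encodes is that SOM is capped by GTM capacity, not by a share-of-market wish.

```python
# Minimal bottom-up TAM/SAM/SOM sketch. All inputs are illustrative placeholders.

def bottom_up_sizing(total_accounts, serviceable_share, arpa,
                     pipeline_per_year, win_rate):
    """Size a market from auditable unit assumptions."""
    tam = total_accounts * arpa                       # all buyers for this job
    sam = total_accounts * serviceable_share * arpa   # segment/geo you can serve today
    # SOM is capped by GTM capacity: deals you can actually work and win in 12-24 months
    winnable_accounts = min(total_accounts * serviceable_share,
                            pipeline_per_year * win_rate)
    som = winnable_accounts * arpa
    return tam, sam, som

tam, sam, som = bottom_up_sizing(
    total_accounts=40_000,      # e.g. mid-market ops teams in target geos
    serviceable_share=0.25,     # share you can serve with current capabilities
    arpa=6_000,                 # annual revenue per account (pricing assumption)
    pipeline_per_year=800,      # qualified opportunities your GTM can work
    win_rate=0.20,              # realistic conversion, not aspiration
)
print(f"TAM ${tam:,.0f} | SAM ${sam:,.0f} | SOM ${som:,.0f}")
```

Running a top-down estimate alongside this and comparing the two is the triangulation check described below.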

Sizing Methods - Use At Least 2 + Triangulate

Method | How It Works | When to Use | Watch Out
Top-down | Start from credible total, apply filters | Quick direction, investor decks | Easy to inflate; must sanity-check with bottom-up
Bottom-up | Units × Price × Adoption % (fully auditable) | Most credible for defensible models | Labor intensive; assumptions must be sourced
Value theory | Value created → willingness to pay | Pricing discovery phase | Theoretical; needs behavioral validation
Analogs | Comparable product's penetration + ARPU | New market with established proxies | Context differences matter; justify the analogy
Triangulation Rule If your two methods differ by more than 2–3×, your assumptions are wrong - or your market boundary is.

Market Boundary Canvas (Fill Before Any Math)

Field | What to Define
Job (outcome) | What outcome is the customer purchasing?
Buyer (budget owner) | Who controls the budget decision?
Context | Workflow/constraints/industry
Geography + timeframe | Where and in what time horizon?
Substitutes | What competes for the same budget/job?
Unit for sizing | Accounts / users / transactions?
Red Flags in Sizing "Fintech is $X trillion" · Users ≠ buyers · 100% adoption assumptions · Double counting · No pricing logic

Segment Scorecard (Beachhead Selection)

Criterion | What to Look For
Pain severity | High urgency + high cost of status quo
Willingness to pay | Budget exists + ROI story is credible
Reachability | You can find and reach them (channels, communities)
Time-to-value | Visible win quickly; low setup friction
Differentiation | Clear wedge vs alternatives
Retention potential | Recurring usage; switching costs or habit
Competition intensity | Not a knife fight with 10 incumbents
Strategic fit | Matches your team's strengths and roadmap
04

Competitive Landscape & Positioning

Competitors vs Alternatives · Value Proposition · Positioning Statement · Message Map · ICP (Ideal Customer Profile)

The Big Insight

Most deals are lost to the buyer's default alternative (DIY, spreadsheets, status quo) - not to direct competitors. If you can't name the #1 alternative buyers compare you to, you can't control the category lens.

Competitors vs Alternatives - Two Different Games

Competitors (Same Category)
  • Same buyer expectations (feature checklist + pricing norms)
  • You win by outperforming on the category's decision drivers
  • Drives parity expectations
  • Messaging: "better/faster/cheaper than X"
Alternatives (Different Category, Same Job)
  • Includes DIY, spreadsheets, email, WhatsApp, "do nothing"
  • Drives switching fear
  • Messaging: "why this approach is smarter"
  • Where category choice and narrative matter most

Differentiation Framework: Parity → Differentiator → Proof

Level | What It Is | Example
Parity (table stakes) | Must-have just to enter shortlist | Contacts/pipeline/email sync
Differentiator (wedge) | Meaningful, defensible, provable advantage | Prebuilt workflows for specific ICP (Ideal Customer Profile)
Proof (reasons to believe) | Evidence that makes the claim credible | Time-to-first-value metric, customer story
Moat Test Is it Meaningful (moves a decision driver)? Defensible (hard to copy)? Provable (you can show evidence fast)? If not all 3 - discard it.

Positioning Statement Template

Positioning OS (Discovery Version) For [ICP (Ideal Customer Profile)]
Who see us as [category/type of solution]
We help them [achieve outcome]
Because [mechanism / capability]
Unlike [alternative], we [trade-off / focus]

Message Map Structure

  • 1 Core message: The single claim you want buyers to remember
  • 3 Proof points: Evidence buckets (data, story, demo, trust signal)
  • Top 3 Objections + Answers: Switching fears + mitigation
Quality Check Every claim must either (a) map to a buyer decision driver or (b) remove a switching fear - and be backed by a proof asset you can actually show.

6-Step Competitive Workflow

1. Define ICP (Ideal Customer Profile)
2. Map comp set
3. Inventory diff
4. Switching costs
5. Category choice
6. Position + Message

Switching Cost Types

Financial
Sunk cost, contract penalties, migration cost
Process
Workflow re-training, change management effort
Relational
Vendor trust built, account history, integration depth
Technical
API integrations, data formats, ecosystem lock-in

To reduce switching friction early: CSV (Comma-Separated Values) import + templates, guided setup, freemium trials, migration concierge, prebuilt integrations.

05

Qualitative Research & Insight Synthesis

Interview Design · Affinity Mapping · Insight Statements · Opportunities · JTBD (Jobs-to-be-Done)

Core Idea

Qual research reduces decision risk by making customer behavior legible. Better evidence beats better opinions. If your output is only quotes, you didn't synthesize.

Expensive Loop
Assumption → Build → Launch → Learn (too late)
Cheap Loop
Evidence → Mechanism → Bet → Measure

Key Terminology

Term | Definition | Strength
Observation | What people DO | Strongest evidence
Self-report | What people SAY they do | Weaker
Preference | What people say they WANT | Weakest
Finding | A fact observed in data | -
Theme | A recurring pattern of findings | -
Insight | Theme + implication (so what?) | Decision-ready
Opportunity | An insight framed as a bet | Actionable

Qual Method Chooser

Goal | Best Method | Sample Size | Output
Explore needs | Problem interviews | 6–12 | Themes / JTBD
Workflow/context | Contextual inquiry | 4–8 | Friction map
Validate UI | Usability tests | 5–8 per segment | Severity issues
Over time | Diary study | 10–20 | Triggers / timeline
Messaging | Concept test | 8–15 | Objections + language
Heuristic Behavior-in-context → Observational methods. Meaning/trade-offs → Conversational methods.

Interview Best Practices

Gold-Standard Questions
  • "Tell me about the last time…"
  • "Walk me through the steps…"
  • "What did you do when that failed?"
  • "What did that cost you (time/money/risk)?"
Avoid These
  • "Would you use…?" (hypothetical)
  • "Do you like…?" (leading)
  • "Wouldn't it be great if…?" (leading)
  • "Would you pay…?" (hypothetical)

Bias Types (Know These for MCQs)

Confirmation Bias
Hearing what you want to hear
Social Desirability
Respondent being polite, not honest
Availability Bias
Recent story feels more common than it is
Framing Effects
Your wording changes the answer

Mitigation: Ask for examples, artifacts, receipts. Keep returning to specifics: "What happened next?"

Insight Statement Formula

High-Integrity Insight Evidence (quotes, clips, counts) + Mechanism (why it happens) + Implication (so what / what changes)

Synthesis pipeline: Raw notes → Clusters → Themes → Insights → Opportunities/Bets

06

Quantitative Research & Analytics Foundations

Surveys · NPS (Net Promoter Score) / CSAT (Customer Satisfaction Score) · Funnels · Cohorts · Tracking Plans

Quant vs Qual Decision Matrix

Need | Best Tool | Why
Actual behavior at scale | Telemetry / product analytics | Observed; low self-report bias
Attitudes / preferences / awareness | Surveys | Can size across segments; captures stated constraints
Causality (impact of a change) | Experiments / A/B tests | Estimates lift; reduces confounding
Mechanisms / context / new needs | Qualitative | Explains "why"
Best PMs use qual to explain "why" and quant to size "how much".

4 Types of Quant Research

Descriptive
"What happened?"
Funnels, cohorts, segments, trends
Diagnostic
"Why did it happen?"
Decomposition, regression, attribution
Predictive
"What will happen?"
Propensity models, time series, scoring
Causal
"What is the impact of X?"
A/B tests, holdouts, diff-in-diff
Rule Pick the minimum type that answers the decision. Don't jump to causal if you still don't understand baseline behavior.

Survey Types for PMs

Type | Best For | Typical Output
CSAT (Customer Satisfaction) / pulse | Fixing a specific journey moment | Top-2 box + segment deltas
NPS (Net Promoter Score) | Macro trend & segmentation | NPS score + drivers (directional)
Concept test | Compare positioning options | Preference share + objections
MaxDiff (Maximum Difference Scaling) / trade-offs | Prioritization (when everything is "important") | Ranked importance + winners/losers
Pricing / WTP (Willingness to Pay) | Range/elasticity direction | Bands + sensitivity; validate with behavior
Important! NPS (Net Promoter Score) is a relationship metric, NOT a diagnostic UX (User Experience) tool. Don't use it to fix a specific journey problem.
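For reference, the two scores are computed differently. A minimal sketch, assuming raw responses are simple lists of integers (values here are illustrative):

```python
# Sketch: scoring NPS (0-10 "likelihood to recommend") and CSAT top-2-box
# (1-5 satisfaction) from raw survey responses.

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat_top2(scores):
    """CSAT top-2 box = share of respondents answering 4 or 5 on a 1-5 scale."""
    return 100 * sum(1 for s in scores if s >= 4) / len(scores)

recommend = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]     # illustrative responses
satisfaction = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]
print(f"NPS: {nps(recommend):.0f} | CSAT top-2 box: {csat_top2(satisfaction):.0f}%")
```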

Survey 7-Step Process (with Gate Logic)

  1. Brief: Decision + hypothesis + population
  2. Constructs: Define what you're measuring
  3. Questionnaire: Draft + bias check
  4. Sampling: Quotas + n targets
  5. Pilot: Test for clarity + drop-off
  6. Field + clean: Quality monitoring + cleaning
  7. Report → Memo → Backlog: Action or next test

Tracking Plan Essentials

  • Events: User actions to capture (at least 10 core events)
  • Properties: Key attributes per event (user_id, plan_type, etc.)
  • Owner: Who is responsible for each event
  • Funnel steps: Ordered journey with defined entry/exit
  • Cohort definition: Group by same start event + time window
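A tracking plan stays honest when it lives as a reviewable artifact rather than tribal knowledge. A minimal sketch of one plan entry plus a funnel readout, with hypothetical event and property names:

```python
# Sketch: a tracking-plan entry and a simple funnel readout.
# Event and property names are hypothetical examples, not a prescribed schema.

TRACKING_PLAN = {
    "invite_started": {
        "properties": ["user_id", "workspace_id", "plan_type", "device_type"],
        "owner": "growth-pm",
        "funnel_step": 2,
    },
    "invite_completed": {
        "properties": ["user_id", "workspace_id", "invitee_count"],
        "owner": "growth-pm",
        "funnel_step": 3,
    },
}

def funnel_conversion(step_counts):
    """Step-over-step conversion for an ordered funnel, e.g. [1000, 640, 410]."""
    return [round(after / before, 3)
            for before, after in zip(step_counts, step_counts[1:])]

print(funnel_conversion([1000, 640, 410]))  # -> [0.64, 0.641]
```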
07

Metrics, North Star & Experimentation

NSM (North Star Metric) · Metric Trees · HEART / AARRR (Pirate Metrics) · A/B Testing · MDE (Minimum Detectable Effect) · SRM (Sample Ratio Mismatch)

Types of Product Metrics (AARRR: Acquisition, Activation, Retention, Revenue, Referral)

Acquisition
Traffic, Signups, CAC (Customer Acquisition Cost), CPA (Cost Per Acquisition), Conversion Rate
Activation
First user success: account created, first purchase, first project
Engagement
DAU (Daily Active Users) / MAU (Monthly Active Users), Session length, Feature usage, Stickiness (DAU/MAU)
Retention
Retention curves, Churn rate, Cohort analysis, Renewal rate
Monetization
ARPU (Average Revenue Per User), LTV (Lifetime Value), MRR (Monthly Recurring Revenue) / ARR (Annual Recurring Revenue), Gross margin, Conversion to paid
Satisfaction
NPS (Net Promoter Score), CSAT (Customer Satisfaction), CES (Customer Effort Score), App Store ratings

NSM (North Star Metric)

The NSM (North Star Metric) captures sustained value delivered to customers and correlates with long-term business success. It should be: measurable, sensitive to product improvement, and hard to game.

NSM Definition Checklist Unit (user/session/trip) + Time Window (per week, D7, W4) + Eligibility (who counts?) + Data Source

Cab booking example NSM: "Completed rides per active rider per week"

Anti-patterns Vanity metrics (total downloads, page views) that don't reflect actual value delivery. A metric that can be gamed easily.

Metric Tree Structure

NSM → Driver Metrics (leading) → Controllable Levers, protected by Guardrails

Example (cab booking):
NSM: Completed rides/active rider/week
Driver: Booking success rate → Lever: simplify pickup confirmation
Driver: On-time pickup % → Lever: better ETA model
Guardrails: cancellation rate, refunds, safety/SOS usage
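The same tree can be written down as a small data structure so guardrail checks are explicit rather than implicit. A sketch using the cab-booking example above; thresholds and readings are illustrative:

```python
# Sketch: the cab-booking metric tree as a data structure, plus a guardrail check.

METRIC_TREE = {
    "north_star": "completed_rides_per_active_rider_per_week",
    "drivers": {
        "booking_success_rate": ["simplify pickup confirmation"],
        "on_time_pickup_pct": ["better ETA model"],
    },
    "guardrails": {  # metric -> maximum acceptable value (illustrative ceilings)
        "cancellation_rate": 0.08,
        "refund_rate": 0.02,
    },
}

def guardrail_breaches(tree, readings):
    """Return guardrails whose current reading exceeds the allowed ceiling."""
    return [metric for metric, ceiling in tree["guardrails"].items()
            if readings.get(metric, 0) > ceiling]

print(guardrail_breaches(METRIC_TREE,
                         {"cancellation_rate": 0.11, "refund_rate": 0.015}))
# -> ['cancellation_rate']
```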

Measurement Frameworks - When to Use Each

Framework | What It Measures | Best For
HEART | Happiness, Engagement, Adoption, Retention, Task success | UX (User Experience) quality questions
AARRR (Pirate Metrics) | Acquisition, Activation, Retention, Revenue, Referral | Growth bottleneck diagnosis
OMTM | One Metric That Matters (short-term focus) | Team-level execution phase (4–8 weeks)
Metric Tree | NSM → Drivers → Levers → Guardrails | Strategy-to-execution system

A/B Testing Fundamentals

MDE (Minimum Detectable Effect)
Minimum Detectable Effect - smallest change worth detecting. Higher MDE = smaller sample needed.
SRM (Sample Ratio Mismatch)
Sample Ratio Mismatch - when control/treatment split differs from intended. Always check before reading results.
Power
Probability of detecting a real effect. Typical standard: 80% power. More power = longer runtime or bigger sample.
Guardrails
Counter-metrics you won't sacrifice. Pre-specify these before starting the test.
A/B Testing Anti-patterns Peeking early and stopping when significant · No guardrails · Running on poorly defined metrics · Forgetting SRM check
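Two of these checks are easy to automate before and during a test. A rough sketch using the normal approximation for sample size and a chi-square test for SRM; the baseline, MDE, and traffic counts are illustrative:

```python
# Sketch: rough per-arm sample size and an SRM check for a 50/50 A/B test.
from statistics import NormalDist

def sample_size_per_arm(baseline, mde_abs, alpha=0.05, power=0.80):
    """Approximate users per variant to detect an absolute lift of mde_abs."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + mde_abs
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

def srm_suspected(control_n, treatment_n):
    """Chi-square goodness-of-fit vs a 50/50 split; True if the mismatch is significant."""
    expected = (control_n + treatment_n) / 2
    chi2 = ((control_n - expected) ** 2 + (treatment_n - expected) ** 2) / expected
    return chi2 > 3.841  # critical value for df=1 at alpha=0.05

print(sample_size_per_arm(baseline=0.42, mde_abs=0.08))  # checklist example: 42% -> 50%
print(srm_suspected(50_400, 49_100))  # True -> investigate before reading results
```

A higher MDE shrinks the required sample, which is the trade-off noted above.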

End-to-End Measurement OS

Decision → North Star → Metric Tree → Instrument → Diagnose → Experiment → Readout → Rollout

Key principle: Experiments are not the starting point. Teams must first define value, define metrics, ensure data reliability, diagnose - and only then run causal tests.

09

Business Models, Pricing & Packaging

Revenue Models · Value Metric · Pricing Tiers · Freemium · Unit Economics (LTV, CAC, NRR)

Key Terminology

Business Model
How the product creates, delivers, AND captures value. Defines value loop + cost-to-serve + scaling behavior.
Revenue Model
The charging mechanism (subscription, usage, transaction, ads, licensing) - just ONE component of the business model.
Pricing
Value metric + price points + discount rules + renewals
Packaging
Tiers, bundles, add-ons and fences that enable self-selection and upgrade triggers

Business Model Taxonomy

Model | Core Logic | Key Metrics | Failure Mode
Subscription SaaS (Software as a Service) | Ongoing access; seat-based | NRR (Net Revenue Retention), churn, activation depth, CAC (Customer Acquisition Cost) payback | Metric mismatch (seat vs usage), shelfware
Usage-based | Value scales with consumption intensity | Gross margin/unit, usage retention, expansion rate | Bill shock, budgeting complexity
Transactional / Take-rate | Value per event/exchange | GMV (Gross Merchandise Value), take-rate, conversion, fraud rate | Disintermediation risk
Marketplace | Enable exchange; charge % of GMV | GMV, take-rate, market health | Leakage, trust/safety cost
Freemium / PLG (Product-Led Growth) | Free tier drives adoption; convert via limits | Freemium conversion %, support load, activation | Support load, low conversion rate
Advertising | Monetize attention; payer is advertiser | RPM (Revenue Per Mille/Thousand), CTR (Click-Through Rate), ad quality | User experience degradation

How to Choose a Business Model

  • What is the primary unit of value delivered?
  • What is the primary unit of cost-to-serve?
  • Is value continuous (subscription) or episodic (transaction)?
  • Does value scale with usage intensity? (→ usage-based)
  • Is there a network effect? (→ marketplace)
  • Long procurement cycles? (→ enterprise licensing)
Rule Name the dominant engine (subscription/usage/transaction) AND the distribution pattern (freemium, open core, ecosystem) that drives adoption. Also name the ROI (Return on Investment) story for the buyer.

SpendWise Example (from Session 13 Materials)

Free Tier
1 bank, 3-month history
Plus £4.99/mo
Unlimited banks + exports
Pro £9.99/mo
Forecasting + multi-currency + priority support

Target: 8–12% freemium conversion by month 12. CAC payback <6 months. LTV:CAC >3:1.

Unit Economics Essentials

LTV (Lifetime Value)
Lifetime Value: ARPU (Avg. Revenue Per User) × Gross Margin % ÷ Churn Rate
CAC (Customer Acquisition Cost)
Customer Acquisition Cost: total sales+marketing ÷ new customers
LTV:CAC Ratio
Target >3:1. Below 1:1 = unsustainable
Payback Period
CAC ÷ Monthly Gross Profit per customer. Target <12 months (consumer), <18 months (B2B, Business-to-Business)
NRR (Net Revenue Retention)
Net Revenue Retention: (MRR (Monthly Recurring Revenue) start + expansions - downgrades - churn) ÷ MRR start. >100% = net expansion
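These definitions translate directly into a few lines of arithmetic. A sketch with placeholder figures (not SpendWise numbers):

```python
# Sketch of the unit-economics formulas above. All figures are placeholders.

def ltv(arpu_monthly, gross_margin, monthly_churn):
    """LTV = ARPU x gross margin % / churn rate (monthly basis)."""
    return arpu_monthly * gross_margin / monthly_churn

def cac(sales_marketing_spend, new_customers):
    return sales_marketing_spend / new_customers

def payback_months(cac_value, arpu_monthly, gross_margin):
    """Months of gross profit needed to recover the acquisition cost."""
    return cac_value / (arpu_monthly * gross_margin)

def nrr(mrr_start, expansion, downgrades, churned_mrr):
    """Net Revenue Retention; > 1.0 means the existing base expands on its own."""
    return (mrr_start + expansion - downgrades - churned_mrr) / mrr_start

customer_ltv = ltv(arpu_monthly=9.99, gross_margin=0.80, monthly_churn=0.03)
acq_cost = cac(sales_marketing_spend=50_000, new_customers=2_000)
print(f"LTV £{customer_ltv:.0f} | CAC £{acq_cost:.0f} | "
      f"LTV:CAC {customer_ltv / acq_cost:.1f} | "
      f"payback {payback_months(acq_cost, 9.99, 0.80):.1f} months")
print(f"NRR {nrr(100_000, 12_000, 3_000, 5_000):.0%}")
```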
10

Discovery & Validation

Assumption Mapping · Hypothesis Writing · MVP (Minimum Viable Product) Types · Experiment Design

Discovery vs Delivery - Two Operating Systems

Discovery Mode
Goal: reduce uncertainty
Output: evidence
Cadence: fast loops
Artifacts: cheap + reversible
Question: what to learn next?
Delivery Mode
Goal: ship reliably
Output: working software
Cadence: planned execution
Artifacts: durable + scalable
Question: how to deliver with quality?
Key Rule Move from discovery to delivery only when risk is meaningfully lower. Start with the biggest assumption, not the biggest roadmap item.

Assumption Mapping

Map assumptions across Desirability (do users want this?), Feasibility (can we build it?), and Viability (does it work for the business?). Then rank by Uncertainty × Consequence of being wrong.

Rule Test the highest-risk, lowest-evidence assumption first. Don't build until you've de-risked the most dangerous assumption.

Hypothesis Writing Formula

Strong Hypothesis Template We believe [target user] has [problem or job].
If we introduce [change], then [behavior X] will improve
by [metric Y] within [time window Z].

Example: "If we add a first-step checklist, first-key-action completion will rise from 42% to 50% in 7 days."

Weak Hypothesis
"Users will love the new dashboard." (No segment, behavior, metric, or time frame)
Strong Hypothesis
Names: User group + Intervention + Behavior + Threshold + Time window

MVP (Minimum Viable Product) Types - Pick by Risk

MVP Type | Best For | Signals to Watch | Common Mistake
Prototype | Testing flow, copy, and structure. Do users understand and navigate? | Task completion, confusion points, time to complete | Treating prototype clicks as proof of WTP (Willingness to Pay) or sustained usage
Concierge | Testing real value with manual delivery backstage. High-touch services. | Repeat usage, WTP (Willingness to Pay), quality expectations, service friction | Mistaking manual heroics for a scalable business model
Fake-door | Testing demand/intent fast before building capability | CTR (Click-Through Rate), signup rate, segment differences, drop after clarification | Treating shallow clicks as proof of lasting value
Wizard of Oz | Testing automated-feeling experience when automation isn't ready (AI) | Trust/confidence, task completion, latency tolerance | Hiding risks that would change user consent
Quick Checkpoint Logic Premium service, nothing built → Fake-door | AI assistant but model not ready → Wizard of Oz | Complex high-touch service → Concierge | Test flow clarity in 24 hrs → Prototype

Experiment Card Structure

  • Hypothesis: Specific belief to test
  • Test method: How you'll test it
  • Primary metric: One outcome reflecting core behavior
  • Success threshold: Pre-agreed number that changes the decision (e.g., 42% → 50%)
  • Guardrail: What must not worsen
  • Decision owner: Named person
  • Decision rules: Scale / Iterate / Stop criteria
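Writing the decision rules down before the test starts makes peeking and post-hoc rationalization harder. A minimal sketch with illustrative values:

```python
# Sketch: turning pre-agreed experiment-card decision rules into code,
# so the call is made by the threshold, not by post-hoc debate.

def experiment_decision(observed, baseline, success_threshold, guardrail_ok):
    """Return 'scale', 'iterate', or 'stop' from pre-registered rules."""
    if not guardrail_ok:
        return "stop"      # a guardrail worsened: do not ship regardless of lift
    if observed >= success_threshold:
        return "scale"     # hit the pre-agreed number that changes the decision
    if observed > baseline:
        return "iterate"   # directional improvement, but below the bar
    return "stop"

print(experiment_decision(observed=0.47, baseline=0.42,
                          success_threshold=0.50, guardrail_ok=True))  # -> iterate
```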
11

Prioritization, Stakeholders & Decision Memos

RICE (Reach, Impact, Confidence, Effort) · WSJF (Weighted Shortest Job First) · Kano · MoSCoW (Must, Should, Could, Won't) · DACI (Driver, Approver, Contributor, Informed) · Decision Memos

PM Decision Stack

1. Frame
2. Prioritize
3. Align stakeholders
4. Write the memo
5. Decide + review

Prioritization Framework Chooser

Framework | Best When | Why It Helps
RICE | Many comparable ideas with estimable inputs | Combines upside with cost; discounts weak evidence via confidence
ICE (Impact, Confidence, Ease) | Quick workshop when only directional estimates available | Lightweight; great for PLG (Product-Led Growth) / growth experiment lists
Impact–Effort 2×2 | Early sorting; visual alignment needed | Honest roughness beats fake rigor; good for exec conversations
WSJF (Weighted Shortest Job First) | Waiting is expensive; delivery programs | Makes urgency and job size visible together
Kano | Shaping feature set / release scope | Separates must-haves from performance features and delighters
MoSCoW | Time-boxed release scope negotiation | Forces commitment on "must" vs "could" - prevents everything being urgent
Narrative thesis | Large, discontinuous strategic bets | Better than feature math for existential choices

RICE (Reach, Impact, Confidence, Effort) Formula

RICE Score RICE = (Reach × Impact × Confidence) ÷ Effort
Reach
How many users/accounts affected in the time period? (Easiest place for hidden optimism)
Impact
How strongly does it move the target metric? (Scale: 0.25 = minimal, 3 = massive)
Confidence
How trustworthy are the inputs? Lower confidence aggressively when evidence is thin.
Effort
Team time/capacity cost. Always pair score with a narrative explanation.
Usage Rules Use RICE when options are similar enough to compare. Do NOT force onto executive strategy bets or existential choices. Always pair score with narrative.
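A quick sketch of RICE applied to a small backlog; the items and estimates are illustrative, and the score should always travel with the narrative behind those estimates:

```python
# Sketch: RICE scoring for a small backlog of illustrative items.

def rice(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort (effort in person-weeks)."""
    return reach * impact * confidence / effort

backlog = [
    # (item, reach per quarter, impact 0.25-3, confidence 0-1, effort person-weeks)
    ("Onboarding checklist", 8_000, 1.0, 0.8, 4),
    ("Team analytics page",  2_500, 2.0, 0.5, 10),
    ("CSV import wizard",    5_000, 0.5, 0.9, 3),
]

for item, r, i, c, e in sorted(backlog, key=lambda row: -rice(*row[1:])):
    print(f"{item:22s} RICE = {rice(r, i, c, e):,.0f}")
```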

MoSCoW (Must, Should, Could, Won't) Categories

Must
Critical for this release; launch cannot proceed without it
Should
Important but launch can proceed without it
Could
Nice-to-have; include if capacity remains
Won't (this release)
Explicitly excluded; high effort relative to current window
Warning Too many "Musts" means the exercise failed. Protect the Must category rigorously.

Stakeholder Mapping

Map stakeholders on Power × Interest grid. Key questions:

  • Who decides? Who contributes? Who needs to know?
  • Who can accelerate, block, or reshape the call?
  • Pre-wire key stakeholders before the decision meeting

DACI Framework: D = Driver (moves it forward), A = Approver (final call), C = Contributor (input), I = Informed (kept updated)

Decision Memo Structure

  1. Context: Why is this decision happening now?
  2. Options: What were the alternatives considered?
  3. Recommendation: What did you decide and why?
  4. Risks: What could go wrong? Mitigation?
  5. Review date: When do we revisit?
Strong product teams leave legible choices, durable rationale, and a trail that lets others understand why the roadmap moved.
12

Roadmapping & Product Ops

Outcome Roadmaps · Now/Next/Later · Cadence · Governance · RAID (Risks, Assumptions, Issues, Dependencies)

What a Roadmap Actually Does

A roadmap is a communication device for strategic direction, not a guarantee sheet or feature promise list. It must achieve 4 things:

Signal Direction
Turns themes into visible focus; connects strategy to small number of bets
Show Trade-offs
Now vs later is legible; portfolio comparisons become possible
Create Narrative
Shared story for stakeholders; reduces rework and surprises
Enable Review
Becomes a management artifact; governance is explicit

Roadmap Types - One Does Not Fit All

Type | Shows What | Best For
Outcome roadmap | Strategic change view (desired results) | Strategy translation, leadership alignment
Feature roadmap | Capability view (which capabilities are being built) | Internal visibility, team-level alignment
Release roadmap | Launch view (what ships when) | GTM (Go-to-Market) coordination, sales/support readiness
Portfolio roadmap | Cross-product view | Leadership trade-offs across teams
Platform roadmap | Infrastructure/reusable capability view | Platform teams, dependency management
Selection Logic Start with the audience → Clarify the decision the artifact supports → Choose the level of detail that matches → Use more than one type when needed.

Now / Next / Later Framework

Now
Current sprint/quarter - committed, well-defined outcomes with metrics
Next
Next 1–2 quarters - directional bets; still subject to learning
Later
Horizon items; intentionally low certainty - avoid false precision

The fastest way to raise roadmap quality: shift language from feature shipment to change creation.

Product Ops Cadence

Rhythm | Purpose | Key Artifacts
Weekly | Execution review, blocker resolution | Standup notes, decision log updates
Monthly | Roadmap review, intake processing | Metric review, roadmap updates, RAID log
Quarterly | Strategy review, OKR (Objectives & Key Results) check-in, kill criteria | Business review, roadmap reset, OKR scoring

RAID: R = Risks, A = Assumptions, I = Issues, D = Dependencies , track all four in a living log.

13

PRDs, UX Collaboration & Quality

PRD (Product Requirements Document) · User Stories · AC (Acceptance Criteria) · Edge Cases · NFRs (Non-Functional Requirements)

"Ready to Build" - What It Really Means

Ready to Build ✓
  • Problem, desired outcome, and user are named clearly
  • Team knows what is in scope now, later, and not at all
  • Design intent is concrete enough that implementation trade-offs are visible
  • Quality expectations, analytics, accessibility (a11y), and support needs are explicit
Not Ready ✗
  • Goals are vague or conflict with other priorities
  • Stories exist but AC (Acceptance Criteria) and edge cases do not
  • Design and engineering making different assumptions
  • Testing, metrics, or support expectations postponed

PRD (Product Requirements Document) Structure

Section | Why It Exists | Typical Contents
Context / Problem | Explain why the work matters now | User problem, business context, evidence, dependencies
Goals / Non-goals | Prevent scope drift and false assumptions | Desired outcomes, what success is, what this will NOT solve
Scope | Clarify the deliverable envelope | In scope, out of scope, phased delivery, constraints
Experience requirements | Describe what users should be able to do | Flows, states, design references, content rules
Quality + data requirements | Make execution measurable and testable | Acceptance criteria, analytics, NFRs, support, QA scenarios
Risks / Open questions | Expose uncertainty early | Edge cases, unresolved decisions, dependencies

Acceptance Criteria Patterns

Given / When / Then "Given [context], When [action], Then [observable result]" - for flow-based behavior
State-based For permissions, visibility, and transitions. Explicit thresholds for timeouts, messaging, empty / error behavior.

Strong AC (Acceptance Criteria) describes observable behavior in the user's context, covers success states + validation rules + failure states, and makes "done" agreeable across design, engineering, and QA (Quality Assurance).
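Given/When/Then criteria map naturally onto automated tests. A sketch in pytest style; the invite flow, the Workspace object, and the permission rule here are hypothetical stand-ins, not examples from the course materials:

```python
# Sketch: a Given/When/Then acceptance criterion expressed as a test.
# invite_teammate, Workspace, and the rules below are hypothetical.

class Workspace:
    def __init__(self, pending_invites=0, seat_limit=5):
        self.pending_invites = pending_invites
        self.seat_limit = seat_limit

def invite_teammate(workspace, actor_role):
    # Given a workspace, When a non-admin tries to invite,
    # Then the invite is rejected with an explanatory error.
    if actor_role != "admin":
        return {"ok": False, "error": "Only admins can send invites"}
    if workspace.pending_invites >= workspace.seat_limit:
        return {"ok": False, "error": "Seat limit reached - upgrade to invite more teammates"}
    workspace.pending_invites += 1
    return {"ok": True}

def test_non_admin_cannot_invite():
    result = invite_teammate(Workspace(), actor_role="member")
    assert result == {"ok": False, "error": "Only admins can send invites"}

test_non_admin_cannot_invite()
```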

Edge Case Types (Classic Exam Question)

Type | What to Ask | Example
Missing data | What if required info is absent or stale? | Import list but one email is invalid
Permissions | Who can see, edit, retry, or cancel? | Only admins can resend pending invites
System failure | What happens when an API/dependency fails? | Save fails after network interruption
State mismatch | What if the user returns later or uses another device? | Checklist shows completed state after mobile refresh
Abandon / retry | What if the user leaves halfway through? | Draft setup progress persists for 24 hours

NFRs (Non-Functional Requirements)

Performance
Response time, load time, throughput. Example: page loads in <2s on standard broadband
Security / Privacy
Access control, data handling, auditability. Example: respects org permissions, logged for admin actions
Reliability
Error handling, recovery, retries. Example: submission failures retryable without duplicates
Scalability
Volume and future load assumptions. Example: support batches of up to 500 teammates
NFR Rule NFRs (Non-Functional Requirements) prevent "it works" from being mistaken for "it's ready for production."

Analytics Requirements in the PRD (Product Requirements Document)

Need | Example
Primary outcome metric | % of new admins who invite 3 teammates in day 1
Funnel events | Checklist viewed → invite started → invite completed → project created
Properties / segments | Workspace size, plan type, device type, setup source
Experiment / rollout fields | Variant, feature flag state, beta cohort
Ownership | Analytics owner, event spec reviewer, dashboard owner
Rule If a team cannot measure the change, it cannot learn from the release. Analytics requirements belong in the PRD - not in an afterthought.
15

Enterprise, Platform & API Product Management

Enterprise Motion · Pilots · SLAs (Service Level Agreements) · Platform vs Product · API (Application Programming Interface) Contracts · DX (Developer Experience)

Why Enterprise PM Needs a Distinct Playbook

Revenue Quality
Enterprise deals are won/lost in pilots, security reviews, procurement, and renewal readiness - not just features.
Leverage Model
Platforms and APIs determine whether each new customer becomes bespoke work or reusable capability.
Cross-functional Gravity
PM sits at intersection of engineering, security, solutions, legal, support, and sales. Convert ambiguity into contracts + rollout policies.

Key Terminology

Term | Definition
Enterprise motion | Pilots, security review, procurement, rollout, and renewal around a deal
Platform | Reusable capability many teams or partners can configure and repeat
Product surface | End-user workflow customers directly use and pay for
API (Application Programming Interface) contract | Stable promise of resources, fields, auth, limits, errors, and behavior
DX (Developer Experience) | How fast a developer can discover, test, troubleshoot, and launch
Versioning | Disciplined way to introduce change without breaking consumers
SLO (Service Level Objective) vs SLA (Service Level Agreement) | SLO = internal target; SLA = external promise that may carry remedies if missed
Deprecation | Still works, but customers are told to migrate on a defined timeline
Sunset / EOL (End of Life) | Capability is no longer supported or available
Brownout | Planned temporary disruption before cutoff to reveal hidden dependencies
Ecosystem | Partners, integrators, and add-ons that extend platform value

Who Influences an Enterprise Product Decision

Economic Buyers
Care about business value, payback, and risk transfer
Technical Validators
Care about architecture, security, data handling, integration fit
Operational Adopters
Care about rollout effort, support burden, administration, change management
Key Insight The product must satisfy enough of ALL three groups to become contractable and renewable.

Enterprise Lifecycle

Discovery / Qualification → Pilot + Security Review → Contracting → Rollout → Renewal
Pilot Best Practices Narrow scope + explicit success criteria + named assumptions to prove/disprove. Protect learning quality as much as customer excitement. Avoid agreeing to custom work that teaches nothing reusable.

API (Application Programming Interface) Contract - What to Define

  • Resources: Entities and their fields
  • Auth: Authentication mechanism - OAuth (Open Authorization), API key, etc.
  • Rate limits: Per-client and per-endpoint limits
  • Error codes: Standard error format and recovery guidance
  • Versioning policy: How breaking vs non-breaking changes are handled
  • Deprecation timeline: Advance notice + migration path + brownout plan
API (Application Programming Interface) PM Rule API product management is not about shipping endpoints. It's about durable contracts, DX (Developer Experience), and change management over time.
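On the consumer side, a well-behaved client honors exactly these contract terms. A sketch using the `requests` library; the endpoint URL is a placeholder, and the Deprecation/Sunset headers are only meaningful if the provider actually sends them:

```python
# Sketch: a client that backs off on rate limits and surfaces deprecation notices.
import time
import requests

def call_api(url, token, max_retries=3):
    for attempt in range(max_retries):
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
        if resp.status_code == 429:                      # rate limited
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)                             # honor the server's backoff hint
            continue
        if resp.headers.get("Deprecation") or resp.headers.get("Sunset"):
            print("Warning: this endpoint is deprecated - plan the migration now")
        resp.raise_for_status()                          # surface 4xx/5xx per the error contract
        return resp.json()
    raise RuntimeError("rate limited after retries")

# call_api("https://api.example.com/v2/invoices", token="...")  # placeholder URL
```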

Platform vs Product Metrics (Common Exam Checkpoint)

Product Metrics
End-user adoption, activation, retention, NPS, feature usage, conversion
Platform Metrics
# of consuming teams, API (Application Programming Interface) call volume, reuse rate, integration health, time-to-integration, ecosystem attach rate, duplicate building eliminated
Red Flag Many teams say they are "running a platform" while still only measuring end-user feature outcomes. Platform success = leverage: one investment helps many consumers repeatedly.

MCQ Practice Bank

20 High-probability Questions
Q1. Which of the following is the strongest hypothesis?
  • A. Users will love the improved onboarding flow.
  • B. If we add a first-step checklist, first-key-action completion will rise from 42% to 50% in 7 days.
  • C. AI should improve the new user experience.
  • D. People probably need more guidance.
✓ B - Only B defines: user group, intervention, behavior, threshold, and time window. The others lack measurability or specificity.
Q2. A team's product disappeared tomorrow. Customers would use spreadsheets + WhatsApp instead. In the competitive landscape, spreadsheets are classified as:
  • A. Direct competitors
  • B. Alternatives (different category, same job)
  • C. Substitutes in the same category
  • D. Not relevant to positioning
✓ B - Alternatives solve the same job differently (DIY, status quo, adjacent tools). They drive switching fear, not parity comparisons.
Q3. The prioritization framework RICE stands for:
  • A. Reach, Impact, Cost, Efficiency
  • B. Reach, Impact, Confidence, Effort
  • C. Revenue, Impact, Confidence, Execution
  • D. Reach, Implementation, Clarity, Effort
✓ B - RICE = (Reach × Impact × Confidence) ÷ Effort
Q4. You need to validate whether a premium AI service has demand, but nothing is built yet. Which MVP (Minimum Viable Product) type is best?
  • A. Prototype
  • B. Concierge
  • C. Fake-door
  • D. Wizard of Oz
✓ C - Fake-door: show the promise before capability exists to test demand/intent fast.
Q5. Which metric type answers the question "Why did it happen?"
  • A. Descriptive
  • B. Diagnostic
  • C. Predictive
  • D. Causal
✓ B - Diagnostic (decomposition, regression, attribution) answers "Why did it happen?"
Q6. SOM (Serviceable Obtainable Market) must be constrained by:
  • A. The total addressable market size
  • B. The number of direct competitors
  • C. Your GTM capacity (pipeline, conversion, rollout)
  • D. Your current ARR
✓ C - SOM = what you can WIN in 12-24 months, limited by GTM capacity - not just market share aspirations.
Q7. In an A/B test, the abbreviation SRM stands for:
  • A. Sample Rate Measure
  • B. Sample Ratio Mismatch
  • C. Statistical Risk Model
  • D. Segment Response Metric
✓ B - SRM (Sample Ratio Mismatch) occurs when control/treatment split differs from intended allocation. Always check before reading results.
Q8. Which qual research method is best when behavior unfolds over days/weeks with triggers that don't show in a single session?
  • A. Problem interviews
  • B. Contextual inquiry
  • C. Usability tests
  • D. Diary study
✓ D - Diary studies capture triggers and behaviors over time (10-20 participants), ideal when the job spans days or weeks.
Q9. "TAM (Total Addressable Market) is $5 trillion" in a fintech deck is a red flag because:
  • A. The boundary isn't defined by job-to-be-done; it's the whole industry
  • B. Fintech is too small to be meaningful
  • C. TAM should always be expressed in users, not dollars
  • D. The timeframe is not specified
✓ A - "Fintech" is an industry, not a market. Real TAM must be defined by: job, buyer with budget, context, geo/timeframe, and substitutes.
Q10. The difference between an SLO (Service Level Objective) and an SLA (Service Level Agreement) is:
  • A. SLO is for enterprise; SLA is for SMB
  • B. SLO is an internal reliability target; SLA is an external promise that may carry remedies
  • C. SLO is stricter than SLA
  • D. They are interchangeable terms
✓ B - SLO = internal target (operations). SLA = external promise, often commercial, with consequences if missed.
Q11. Which prioritization framework is best when "waiting is expensive" and queue order (sequencing) matters?
  • A. RICE
  • B. Kano
  • C. WSJF (Weighted Shortest Job First)
  • D. MoSCoW
✓ C - WSJF uses Cost of Delay ÷ Job Size, making it ideal for delivery programs where sequencing based on urgency and size matters.
Q12. What does NRR (Net Revenue Retention) > 100% indicate?
  • A. Error in the revenue calculation
  • B. CAC is too high
  • C. Existing customers are expanding faster than they churn
  • D. The product has reached market saturation
✓ C - NRR (Net Revenue Retention) >100% means expansion revenue from existing customers exceeds churn + downgrades. This is the holy grail of SaaS.
Q13. In a NSM (North Star Metric) tree for a cab booking app, "driver acceptance rate" is best classified as:
  • A. North Star Metric
  • B. A driver metric (leading indicator)
  • C. A guardrail metric
  • D. A vanity metric
✓ B - Driver metrics are leading indicators that causally influence the NSM (completed rides/active rider/week). Guardrails protect against harm (e.g., safety incidents).
Q14. "Readiness is about execution clarity, not just strategic approval." This statement refers to:
  • A. The pilot approval process
  • B. The "ready to build" standard for PRDs
  • C. OKR completion criteria
  • D. Launch approval gates
✓ B - A feature idea can be strategically valid but still be unready. "Ready to build" requires execution clarity: AC (Acceptance Criteria), scope, analytics, NFRs (Non-Functional Requirements), and QA (Quality Assurance) scenarios all defined.
Q15. Freemium conversion rate, CAC (Customer Acquisition Cost) payback period, and LTV (Lifetime Value):CAC ratio are all examples of:
  • A. Acquisition metrics
  • B. Platform metrics
  • C. Unit economics / business model health metrics
  • D. Engagement metrics
✓ C - These are unit economics metrics that determine whether the business model is sustainable and scalable.
Q16. "Insight theater" in product research refers to:
  • A. Presenting research in a theatrical setting
  • B. Pretty quotes and themes that don't lead to decisions or actions
  • C. Sharing research insights with the executive team
  • D. A type of usability testing method
✓ B - Insight theater = synthesis that looks good but doesn't change roadmap decisions. Real synthesis ends with: opportunity → hypothesis → experiment → metric.
Q17. Which section of a PRD (Product Requirements Document) specifically prevents "trying to solve adjacent problems in the same release"?
  • A. Context / Problem
  • B. Goals / Non-goals
  • C. Acceptance Criteria
  • D. Risks / Open Questions
✓ B - Non-goals explicitly state what this work will NOT solve, preventing stakeholders from expanding scope mid-execution.
Q18. An AI assistant product is being tested. The experience feels automated to users, but a team member is manually responding backstage. This MVP (Minimum Viable Product) type is:
  • A. A fake-door test
  • B. A concierge MVP
  • C. A Wizard of Oz MVP
  • D. A prototype
✓ C - Wizard of Oz: the front-stage experience feels automated but a human powers the backend. Best for testing trust in the automated experience before the automation is ready.
Q19. In the SpendWise BMC (Business Model Canvas), what is the primary revenue model?
  • A. Usage-based pricing per transaction
  • B. Freemium subscription (Free → Plus → Pro tiers)
  • C. Advertising revenue
  • D. One-time purchase
✓ B - SpendWise uses Freemium Subscription: Free (1 bank, 3-month history) → Plus £4.99/mo → Pro £9.99/mo, with annual plans at 20% discount.
Q20. "Deprecation" in API product management means:
  • A. The API is immediately unavailable
  • B. The API still works, but customers are told to migrate on a defined timeline
  • C. A security vulnerability has been found in the API
  • D. The API version has been permanently retired
✓ B - Deprecation ≠ Sunset. Deprecation = still functional + migration notice. Sunset/EOL = no longer supported or available.

Case Study Questions & Model Answers

Scenario-based MCQs (Multi-select) · Escalation Judgment · Sunk Cost · Marketplace Diagnostic

Case 1 - Fintech Retention & False PMF (Product-Market Fit) Signal

Scenario A fintech product has achieved strong retention among its first 2,000 users - D30 (Day 30) retention sits at 58%, well above the category benchmark of 31%. The founding team concludes they have PMF (Product-Market Fit) and begins aggressive paid acquisition, spending $180K in the first month.

Results after 60 days:
• New user D30 retention: 22%
• CAC (Customer Acquisition Cost) increased from $18 to $74
• First cohort retention remains stable at 57%

The CEO says: "We just need to optimize the ads to get better-fit users."

Multi-select - pick 2 correct answers: As PM, what are the two most important things to investigate before authorizing further spend?

A. ✓ CORRECT
Whether the original 2,000 users were a self-selected, high-motivation cohort whose behavior doesn't generalize to a paid acquisition audience - making early retention a misleading PMF signal
B. ✓ CORRECT
Whether the onboarding experience was designed for users who already understood the product's value proposition and fails for cold-traffic users who lack that context
C. ✗ WRONG
Whether the ad creative and targeting need to be refined to attract users more similar to the original cohort
D. ✗ WRONG
Whether the $74 CAC is above industry benchmarks for the fintech category
Why A & B The core issue is whether early retention was a valid PMF signal or a cohort artifact. The first 2,000 users were likely early adopters - people who actively sought out the product, had high motivation, and may have already understood the problem space. Their 58% D30 retention reflects their engagement, not the market's.

A investigates who those early users were. If they were self-selected (word-of-mouth, waitlist, Product Hunt, founder network), their behavior predicts nothing about cold paid-acquisition users. PMF with a self-selected cohort ≠ PMF with a general market audience.

B investigates what happens post-click. If onboarding assumed users already knew why the product matters, cold-traffic users will churn at first friction. The 22% D30 retention suggests the activation funnel breaks for users without pre-existing context.

Why not C? Ad optimization is a downstream tactic, not a root cause investigation. The CEO's instinct ("just optimize the ads") is the exact trap this question tests. Better ads can't fix a product that fails to activate cold users.

Why not D? Comparing CAC to industry benchmarks is a surface-level check that doesn't diagnose the problem. A $74 CAC could be fine if LTV (Lifetime Value) supported it - but the question is about why retention collapsed, not whether the cost is "normal."
Key lesson: Early cohort retention can be a survivorship bias trap. Always ask: "Would this metric hold if our users were strangers who found us through a paid ad?" If you can't answer yes with evidence, you don't have PMF - you have a fan club.

Case 2 - Platform Migration Escalation Failure

Scenario A PM is leading a platform migration. The old system will be deprecated in 90 days. Three enterprise customers - accounting for 41% of ARR (Annual Recurring Revenue) - have not yet begun migration. Each has cited a different reason:

Customer A: "We're waiting for your SSO (Single Sign-On) integration to be ready"
Customer B: "Our legal team needs to review the new data processing agreement"
Customer C: "We don't have internal bandwidth until Q3"

The PM escalates to leadership: "We have a migration risk." Leadership responds: "Just send them reminder emails."

Single-select: What is the most accurate diagnosis of the PM's failure in this escalation?

A. ✗ WRONG
The PM escalated too early , 90 days is sufficient time and leadership's response is appropriate
B. ✓ CORRECT
The PM presented a status update rather than a structured risk: the escalation lacked quantified business impact, root cause differentiation across the three accounts, and a proposed mitigation plan for each blocker
C. ✗ WRONG
The PM should have involved the account management team before escalating to leadership, since customer relationships are outside PM scope
D. ✗ WRONG
The 90-day deprecation timeline was set without adequate customer input, making the migration risk a planning failure rather than an execution failure
Why B The PM said "We have a migration risk" - which is a status update, not an escalation. An effective escalation must include:

1. Quantified business impact: "41% of ARR ($X.XM) is at risk if these 3 accounts don't migrate by [date]."
2. Root-cause differentiation: Each customer has a different blocker - one is a product dependency (SSO), one is a legal/compliance process, one is a resourcing constraint. Treating them as one "risk" is lazy framing.
3. Proposed mitigation per blocker:
• Customer A: Ship SSO by day 60 or offer a workaround + extension
• Customer B: Pre-send the DPA (Data Processing Agreement) and have legal do a joint review call
• Customer C: Offer a dedicated migration engineer or a paid extension
4. Decision needed: "I need approval for [specific resource/timeline/exception]."

Leadership gave a weak response ("send reminders") because the PM gave them a weak input. The quality of the decision you get is bounded by the quality of the escalation you write.

Why not A? 90 days is NOT sufficient when blockers are structural (product dependency, legal process, resourcing). Time alone doesn't unblock these.
Why not C? While account management should be involved, the PM failure was in escalation quality, not in who was involved.
Why not D? While timeline planning matters, the question specifically asks about the escalation failure, not the planning process.
Key lesson: Escalation ≠ status update. An escalation must include: quantified impact + differentiated root causes + proposed mitigations + specific decision needed. If leadership can respond with "just send emails," your escalation wasn't structured enough.

Case 3 - Sunk Cost Fallacy in Feature Development

Scenario A PM is three months into building a new enterprise reporting feature. Engineering has invested ~600 hours. During a routine customer advisory call, the PM learns that two of the five design partners have recently standardized on a competing tool for reporting and no longer plan to use this feature even if built.

The remaining three design partners express moderate interest but none have committed to using it post-launch.

The PM says: "We're too far in to stop now."

Multi-select - pick 2 correct answers: What should the PM do?

A. ✓ CORRECT
Treat the sunk cost as irrelevant to the forward-looking decision - the question is whether the expected value of completing the feature justifies the remaining investment, not whether the past investment is recovered
B. ✗ WRONG
Immediately kill the initiative to stop further resource drain, since two of five design partners have dropped out
C. ✓ CORRECT
Run an explicit reassessment: revalidate demand with the remaining three partners, identify whether the feature has a viable ICP (Ideal Customer Profile) beyond the original design partners, and model the revised business case before deciding
D. ✗ WRONG
Complete the feature as scoped since stopping mid-build creates technical debt and team morale issues that outweigh the demand uncertainty
Why A & C A is the mindset correction. "We're too far in to stop" is the textbook sunk cost fallacy. The 600 hours are spent regardless - they cannot be recovered. The only rational question is: "Given what we know NOW, does the expected value of completing this feature justify the REMAINING cost?" Past investment is irrelevant to forward-looking decisions.

C is the action step. Recognizing sunk cost isn't enough - you need a structured reassessment:
Revalidate remaining partners: "Moderate interest" ≠ commitment. Get explicit: will they deploy it? When? What's the activation metric?
Expand the ICP search: Is there demand beyond the original 5 design partners? Check support tickets, feature requests, competitor analysis.
Model the revised business case: With 2/5 partners gone and 3 uncommitted, what's the realistic adoption? Does it still justify the remaining engineering weeks?

Why not B? Immediately killing is also reactive. You might discover the remaining 3 partners have strong intent, or that the feature serves a broader market. The right response is structured reassessment, not a knee-jerk kill.
Why not D? This is the sunk cost fallacy dressed in operational language. "Technical debt" and "morale" are real concerns but don't justify completing a feature nobody will use. Shipping unused features creates more long-term debt (maintenance, complexity, support).
Key lesson: When new evidence challenges the original thesis, run an explicit reassessment - don't default to either "keep going" or "kill it." The right framework: (1) Acknowledge sunk costs are irrelevant, (2) Revalidate demand with evidence, (3) Model the forward-looking business case, (4) Make the decision with fresh eyes.

Case 4 - B2B (Business-to-Business) Marketplace Health Diagnostic

Full Scenario A venture-backed B2B marketplace connects independent legal consultants (supply) with growing startups and SMBs (Small & Medium Businesses) that need on-demand legal guidance (demand). The platform charges a 20% take rate on completed engagements. Over the past year, the company grew GMV (Gross Merchandise Value) 3× through aggressive consultant acquisition and paid demand-side campaigns.

However, the business is now showing stress:
• Repeat buyer rate dropped from 44% → 27% in two quarters
• Average consultant utilization fell to 31% (despite more supply than ever)
• Support tickets related to "consultant quality" have tripled
• NPS (Net Promoter Score) among buyers: 18 | Consultant NPS: -4
• Finance: CAC (Customer Acquisition Cost) rose 2.4×, LTV (Lifetime Value) declined

Stakeholder opinions:
• CEO: "We're becoming a race to the bottom on price"
• Head of Supply: "We need more supply"
• Head of Demand: "We need better search and matching"
• CTO: "Let's rebuild the matching algorithm"
• CFO: "Raise the take rate"

You are the PM responsible for marketplace health and growth.

Model Answer - Part 1: Problem Framing Using Marketplace Dynamics

The symptoms (falling repeat rate, rising CAC, collapsing NPS) are not independent problems - they are outputs of a marketplace in quality decay. To diagnose correctly, separate three distinct failure modes:

Supply-Side Issues
• Utilization at 31% means oversupply relative to demand
• Consultant NPS at -4 suggests frustration (idle inventory, price competition, poor lead quality)
• Undifferentiated supply floods the marketplace, lowering perceived quality
Demand-Side Issues
• Repeat buyer rate halved (44% → 27%) = trust erosion
• "Consultant quality" tickets tripled = buyers aren't getting matched to the right supply
• Buyer NPS at 18 = passive at best - no organic growth loop
Match Quality / Platform Issues
• The marketplace is growing GMV but destroying match quality
• More supply + undifferentiated matching = buyers see a commodity, not a curated service
• The take rate isn't the issue - value delivered per transaction is declining
Core Diagnosis This is a match-quality crisis masked by GMV growth. The marketplace prioritized volume (more consultants, more buyers) without ensuring each match created value. The result: supply is diluted, buyers are disappointed, and the platform is losing its position as a trusted intermediary.

Model Answer - Part 2: Data & Signals to Pull First

Data Point | What It Tells You | Where to Get It
Repeat rate by consultant quality tier | Are high-quality consultants driving repeat business? Is the drop concentrated in matches with low-rated supply? | Transaction + rating data
Match-to-completion rate | What % of matches result in a completed engagement vs. abandonment? | Funnel analytics
Time-to-first-response / time-to-match | Is matching speed degrading despite more supply? | Platform event logs
Support ticket text clustering | What specifically are "quality" complaints about - expertise mismatch? Communication? Deliverable quality? | Support / NLP (Natural Language Processing) clustering
Consultant cohort analysis | Are recently onboarded consultants rated lower than the original cohort? Is aggressive supply acquisition lowering the quality bar? | Cohort + rating data
Buyer segment retention | Which buyer segments still have high repeat rates? Where is value being created? | Segment-level retention curves
Revenue concentration / Pareto | What % of GMV comes from the top 20% of consultants? Are they being utilized or buried? | Revenue + utilization data
Disintermediation signals | Are buyers and consultants transacting off-platform after the first match? | Repeat patterns, payment data, survey

Model Answer - Part 3: Marketplace Health Metrics Beyond GMV

GMV alone is a vanity metric for a marketplace. True marketplace health requires measuring:

Liquidity
Can supply and demand find each other quickly?
• Time-to-match
• Search-to-booking conversion
• % of buyer requests that get at least 1 qualified response within 24h
Match Quality
Are matches creating value for both sides?
• Post-engagement rating (both sides)
• Repeat engagement rate with same consultant
• NPS (Net Promoter Score) by engagement type
Trust Signals
Do participants trust the platform?
• Dispute rate & resolution satisfaction
• Off-platform leakage rate
• Willingness to recommend (NPS)
• Return rate after a bad experience
Unit Economics Health
Is each transaction sustainable?
• Revenue per match (not per signup)
• LTV (Lifetime Value) by buyer cohort
• CAC payback at a per-segment level
• Consultant earnings vs. opportunity cost

Model Answer - Part 4: Segmentation - Where Value Is Created vs. Destroyed

Supply Segmentation
Segment by: specialization depth, rating, utilization, tenure, repeat-client rate

Hypothesis: A small cohort of high-quality, specialized consultants drives most repeat business and buyer satisfaction. The recently acquired long-tail supply is diluting match quality, confusing buyers, and depressing everyone's utilization.
Demand Segmentation
Segment by: company stage, legal need type, engagement frequency, spend tier, retention status

Hypothesis: Startups with recurring legal needs (contracts, IP, fundraising) retain well when matched with specialists. One-off, low-value requests churn regardless and inflate CAC without generating LTV.
Key Insight Value is likely concentrated in a specific supply × demand intersection: specialized consultants matched to repeat-need buyers. The rest is noise that costs money and erodes quality signals.
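One way to make that intersection visible is a repeat-rate matrix of supply tier × demand segment. A sketch under an assumed engagement-level export (column names are illustrative):

```python
import pandas as pd

df = pd.read_csv("engagements.csv")  # assumed columns: buyer_id, buyer_segment, consultant_tier

# Share of buyers in each (supply tier × demand segment) cell who came back more than once.
repeat_matrix = (
    df.groupby(["consultant_tier", "buyer_segment", "buyer_id"])
    .size()                                   # engagements per buyer per cell
    .gt(1)                                    # did the buyer repeat?
    .groupby(["consultant_tier", "buyer_segment"])
    .mean()                                   # repeat rate per cell
    .unstack("buyer_segment")                 # rows: supply tiers, columns: demand segments
)
print(repeat_matrix.round(2))
```

If the hypothesis holds, one or two cells (specialized supply × recurring-need demand) should stand out sharply from the rest of the matrix.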

Model Answer, Part 5: Hypotheses & How to Test Them

Hypothesis | Test Method | Success Signal
H1: Aggressive supply onboarding lowered average quality, causing buyer churn | Compare buyer retention & NPS when matched with pre-2024 vs. post-2024 consultants | Pre-2024 matches show >2× repeat rate
H2: Matching algorithm doesn't weight specialization, causing expertise mismatches | Audit 50 "quality complaint" tickets; categorize mismatch type (expertise, communication, price) | >60% are expertise mismatch
H3: Low-intent buyers (one-off needs) depress metrics and aren't worth the CAC | Segment buyer cohorts by intent (recurring legal need vs. one-time); compare LTV and repeat rate | Recurring-need buyers have 3×+ LTV
H4: Top consultants are demoralized by low utilization and price competition, at risk of leaving | Survey top-20% consultants by revenue; interview 10 about satisfaction and alternatives | >30% considering leaving the platform
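For H1, the repeat-rate gap between the two consultant cohorts can be sanity-checked with a two-proportion z-test before anyone declares a ">2× repeat rate" signal. A self-contained sketch with placeholder counts (not case data):

```python
from math import sqrt
from statistics import NormalDist

# Placeholder counts: buyers who repeated / buyers matched, per consultant cohort.
repeats_pre, buyers_pre = 420, 1000     # matched to pre-2024 consultants
repeats_post, buyers_post = 310, 1500   # matched to post-2024 consultants

p_pre = repeats_pre / buyers_pre
p_post = repeats_post / buyers_post
p_pool = (repeats_pre + repeats_post) / (buyers_pre + buyers_post)

# Two-proportion z-test under the pooled null hypothesis of equal repeat rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / buyers_pre + 1 / buyers_post))
z = (p_pre - p_post) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"repeat rate pre={p_pre:.1%}, post={p_post:.1%}, z={z:.2f}, p={p_value:.4f}")
```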

Model Answer, Part 6: Prioritized Interventions (Top 3)

Intervention 1: Supply Quality Gate
What: Introduce a quality tier system. New consultants enter a "provisional" tier with limited visibility. Promotion to "verified" requires: rating threshold, response time, and completion rate.

Why: Stops the dilution problem at source. Protects buyer experience. Gives top consultants differentiation and visibility.

Metric: Buyer repeat rate on verified-tier matches vs. provisional
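A sketch of what the promotion gate could look like as a rule; the thresholds (rating ≥ 4.5, median response ≤ 12h, completion ≥ 90%, ≥ 5 completed engagements) are illustrative assumptions, not numbers from the case:

```python
from dataclasses import dataclass

@dataclass
class ConsultantStats:
    avg_rating: float            # 1-5 scale (assumed)
    median_response_hours: float
    completion_rate: float       # completed / accepted engagements
    completed_engagements: int

def is_verified(c: ConsultantStats) -> bool:
    """Promotion from provisional to verified requires clearing every gate."""
    return (
        c.completed_engagements >= 5
        and c.avg_rating >= 4.5
        and c.median_response_hours <= 12
        and c.completion_rate >= 0.90
    )

print(is_verified(ConsultantStats(4.7, 6.0, 0.95, 8)))   # True
print(is_verified(ConsultantStats(4.2, 6.0, 0.95, 8)))   # False: fails the rating gate
```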
Intervention 2: Specialization-Weighted Matching
What: Rebuild matching to prioritize specialization fit over availability. Add structured intake (legal need type, industry, urgency) and match against consultant expertise tags.

Why: The #1 quality complaint is likely expertise mismatch. Better matching → higher satisfaction → higher repeat rate → higher LTV.

Metric: Post-match NPS, repeat engagement rate, quality ticket volume
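A toy scoring function showing how specialization fit can dominate availability when ranking candidates; the weights and field names are assumptions for illustration, not the platform's actual algorithm:

```python
def match_score(request: dict, consultant: dict) -> float:
    """Scores a candidate match; higher is better. Specialization outweighs availability."""
    need_tags = set(request["need_tags"])                      # from the structured intake form
    overlap = len(need_tags & set(consultant["expertise_tags"])) / max(len(need_tags), 1)
    availability = 1.0 if consultant["available_this_week"] else 0.3
    rating = consultant["avg_rating"] / 5.0

    # Weight specialization fit far above availability, per the intervention's intent.
    return 0.6 * overlap + 0.25 * rating + 0.15 * availability

request = {"need_tags": ["fundraising", "ip"]}
candidates = [
    {"name": "A", "expertise_tags": ["fundraising", "ip"], "available_this_week": False, "avg_rating": 4.8},
    {"name": "B", "expertise_tags": ["employment"], "available_this_week": True, "avg_rating": 4.9},
]
best = max(candidates, key=lambda c: match_score(request, c))
print(best["name"])  # "A": specialization fit beats immediate availability
```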
Intervention 3: Demand-Side Segment Focus
What: Shift paid acquisition from broad targeting to high-LTV segments only (startups with recurring legal needs: contracts, IP, fundraising). Reduce or stop acquiring one-off, low-value demand.

Why: CAC rose 2.4× because acquisition was untargeted. Focusing on high-LTV segments improves payback and match quality.

Metric: CAC payback period, D30 & D90 repeat rate for new cohort
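The payback comparison that motivates this shift is simple arithmetic: CAC divided by monthly gross margin per buyer. A sketch with made-up segment numbers:

```python
# Illustrative segment economics: (CAC, monthly gross margin per buyer).
segments = {
    "recurring-need startups": (900, 180),
    "one-off requests":        (600, 40),
}
for name, (cac, monthly_margin) in segments.items():
    payback_months = cac / monthly_margin
    print(f"{name}: payback ~{payback_months:.1f} months")
```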

Model Answer, Part 7: Navigating Short-Term GMV vs. Long-Term Health

This is the central tension. Every stakeholder is proposing a fix that protects their domain but may worsen the system:

Stakeholder | Proposal | Risk
CEO | "Race to bottom on price" | Correct observation, but the cause is quality dilution, not pricing
Head of Supply | "More supply" | More supply without quality gates accelerates the death spiral
Head of Demand | "Better search/matching" | Partially right, but matching can't fix bad supply
CTO | "Rebuild algorithm" | Algorithm rebuild is 3–6 months; structured intake can ship in weeks
CFO | "Raise take rate" | Raising the take rate when NPS is 18 and −4 will accelerate churn on both sides
Navigation Strategy Short-term (protect GMV): Don't remove supply overnight; tier it instead. Give top consultants premium visibility immediately. This protects GMV while improving match quality at the top.

Medium-term (bend the curve): Ship specialization-weighted matching + demand segment focus. These reduce CAC and improve retention within 1–2 quarters.

Long-term (rebuild trust): Earn the right to raise take rate by first demonstrating better outcomes. Higher buyer satisfaction → higher willingness to pay → sustainable take-rate increase.

What to STOP doing: Stop indiscriminate supply acquisition. Stop broad paid demand campaigns. Stop measuring success by GMV alone.

Model Answer, Part 8: 90-Day Plan & Executive Recommendation

Weeks 1–2: Diagnose
• Pull all data from Part 2 (segmented retention, quality tickets, consultant cohort analysis)
• Run hypothesis tests H1–H4
• Interview 10 top consultants + 10 churned buyers
• Deliverable: Diagnostic memo with root cause attribution
Weeks 3–6: Intervene
• Launch supply quality tiers (provisional / verified / expert)
• Ship structured intake form for demand-side (need type, industry, urgency)
• Shift 60% of demand acquisition budget to high-LTV segments
• Pause new supply acquisition until quality gates are live
Weeks 7–10: Measure
• Track: repeat rate by tier, match NPS, quality ticket volume, CAC payback by segment
• Run A/B: specialization-weighted matching vs. current algorithm
• Survey top consultants: satisfaction + retention intent
Weeks 11–13: Readout & Scale
• Executive readout: what worked, what didn't, revised marketplace health scorecard
• Decision: scale verified-tier matching or iterate
• Revised 6-month roadmap tied to health metrics, not GMV alone
• Recommendation on take-rate adjustment (only if NPS has improved)
Executive Recommendation (Summary) Our marketplace is in a quality crisis disguised as growth. GMV tripled but match quality collapsed: repeat buyers halved, quality complaints tripled, and both sides of the marketplace are unhappy. The root cause is undifferentiated supply growth without quality gates, combined with untargeted demand acquisition.

I recommend: (1) Implement supply quality tiers immediately, (2) Ship specialization-weighted matching within 6 weeks, (3) Refocus acquisition on high-LTV demand segments. We should STOP indiscriminate supply acquisition and STOP measuring success by GMV alone. I'll present a diagnostic readout in 2 weeks and intervention results at the 90-day mark.

Quick Reference Templates

Copy-paste structures for descriptive exam answers

Positioning Statement Template

Use this in any positioning question: For [specific ICP (Ideal Customer Profile) / target segment],
who see us as [category / type of solution],
we help them [achieve specific outcome]
because [our mechanism / unique capability],
unlike [top alternative], we [trade-off / focus / what we refuse to do].

Problem Statement Template

Problem Framing [Target segment] struggles with [specific pain] when [context/trigger],
resulting in [measurable impact / consequence].
Current alternatives [what they use today] fail because [why they're inadequate].

Strong Hypothesis Template

Discovery Hypothesis We believe [target user] experiences [problem/job].
If we [specific intervention],
then [behavioral metric] will [improve/increase/decrease]
by [specific amount] within [time window].

Decision Memo Template (1-page)

Decision Memo Structure CONTEXT: Why does this decision matter now?
OPTIONS: Option A | Option B | Option C
RECOMMENDATION: We recommend [X] because [evidence + rationale]
TRADE-OFFS: What we gain vs what we sacrifice
RISKS: What could go wrong + mitigation
REVIEW DATE: When to revisit this decision

RICE (Reach, Impact, Confidence, Effort) Scoring

Initiative | Reach | Impact | Confidence | Effort | RICE Score
[Your item] | # users/period | 0.25–3.0 | 0.5–1.0 | person-weeks | R × I × C ÷ E

Common impact scale: 0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive
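A quick sketch of the scoring arithmetic applied to two hypothetical backlog items (the items and numbers are illustrative):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort, with effort in person-weeks."""
    return (reach * impact * confidence) / effort

backlog = {
    "Structured intake form": rice(reach=4000, impact=2.0, confidence=0.8, effort=3),   # ~2133
    "Algorithm rebuild":      rice(reach=4000, impact=3.0, confidence=0.5, effort=20),  # ~300
}
for item, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:,.0f}")
```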

Key Formulas to Memorize

RICE (Reach, Impact, Confidence, Effort)
= (R × I × C) ÷ E
WSJF (Weighted Shortest Job First)
= Cost of Delay ÷ Job Size
LTV (Lifetime Value)
= ARPU (Avg. Revenue Per User) × Margin% ÷ Churn Rate
NRR (Net Revenue Retention)
= (Start MRR (Monthly Recurring Revenue) + Expansion - Downgrades - Churn) ÷ Start MRR
Market Size
= # Customers × Revenue per Customer
Stickiness
= DAU (Daily Active Users) ÷ MAU (Monthly Active Users)
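If it helps to sanity-check these under exam pressure, here is each formula as one line of Python with made-up inputs:

```python
# Illustrative inputs only; each line mirrors the formula above it in the notes.
arpu, margin, monthly_churn = 20.0, 0.80, 0.05
ltv = arpu * margin / monthly_churn                                 # LTV = ARPU × Margin% ÷ Churn

start_mrr, expansion, downgrades, churned = 100_000, 12_000, 3_000, 5_000
nrr = (start_mrr + expansion - downgrades - churned) / start_mrr    # Net Revenue Retention

cost_of_delay, job_size = 8, 3
wsjf = cost_of_delay / job_size                                     # WSJF = Cost of Delay ÷ Job Size

customers, revenue_per_customer = 50_000, 400
market_size = customers * revenue_per_customer                      # # Customers × Revenue per Customer

dau, mau = 30_000, 100_000
stickiness = dau / mau                                              # DAU ÷ MAU

print(f"LTV=${ltv:,.0f}  NRR={nrr:.0%}  WSJF={wsjf:.2f}  Market=${market_size:,}  Stickiness={stickiness:.0%}")
```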

Frameworks Quick-Reference Card

Framework | Use For | Key Question It Answers
RICE (Reach, Impact, Confidence, Effort) / ICE (Impact, Confidence, Ease) / PIE (Potential, Importance, Ease) | Backlog prioritization scoring | Which option has best value per effort?
WSJF (Weighted Shortest Job First) | Delivery queue sequencing | Which should we do first given cost of delay?
MoSCoW (Must, Should, Could, Won't) | Release scope negotiation | What must ship vs. what can wait?
Kano | Feature category analysis | Must-have vs. performance vs. delight?
TAM (Total Addressable Market) / SAM (Serviceable Addressable) / SOM (Serviceable Obtainable) | Market sizing | How big is the prize? What can we realistically win?
AARRR (Acquisition, Activation, Retention, Revenue, Referral) | Growth funnel metrics | Where in the funnel is the biggest drop?
HEART (Happiness, Engagement, Adoption, Retention, Task success) | UX (User Experience) quality metrics | Is the user experience improving?
DACI (Driver, Approver, Contributor, Informed) | Decision roles clarity | Who decides, contributes, and is informed?
JTBD (Jobs-to-be-Done) | Problem framing | What progress is the user trying to make?
Now/Next/Later | Roadmap communication | What are we doing and when, at the right precision?
BMC (Business Model Canvas) | Business model design | How does the whole system create and capture value?

Last-Minute Exam Tips

For Descriptive Questions Always structure answers with: (1) Define the concept, (2) Explain the reasoning/why it matters, (3) Give a concrete example, (4) Note common mistakes or trade-offs. Use frameworks but always explain the "why" behind them.
For MCQs Watch for: "competitor vs alternative" traps · Output vs outcome language · SLO (Service Level Objective) vs SLA (Service Level Agreement) · Deprecation vs Sunset · Strong hypothesis = all 5 elements (user, intervention, behavior, metric, time window)
SpendWise Reference Your course uses SpendWise (AI expense tracker) as a running example. Know: BMC (Business Model Canvas) structure, Story Map (Release 1=Must, Release 2=Should, Release 3=Nice), pricing tiers (Free/Plus/Pro), key assumptions (8–12% conversion, <6mo CAC (Customer Acquisition Cost) payback, >3:1 LTV (Lifetime Value) : CAC ratio).