Product Thinking & Problem Framing
Core Idea
Problem framing is the highest-leverage PM skill - it sets the user, the metric, and the scope boundaries before any solution is discussed. 42% of startups cite "no market need" as the top failure reason. If you can't write the problem + metric in 1 minute, you're not ready to build.
Key Terminology Cheat Sheet
Problem Statement Formula
[Specific segment] loses [quantified pain] on [task/context] because [root cause], leading to [measurable impact].
Example: "Mid-market ops teams lose 3+ hours/week on approval workflows because email-based processes lack traceability, leading to compliance risk and decision delays."
Outputs vs Outcomes (Critical Distinction)
Outputs (what you ship):
- "Launch AI assistant"
- "Add team analytics page"
Outcomes (what changes for users or the business):
- "Reduce time-to-value for new teams"
- "Improve paid conversion from trial"
Opportunity Brief Structure (A2 Deliverable)
- Segment: Who specifically?
- Problem statement: Pain + context + measurable impact
- Why now: What changed? Urgency driver?
- Constraints: Real limits on solution space
- Non-goals: What we will NOT solve
- Success metric + baseline: What moves? From where?
- Guardrail: What must not worsen?
- Top 3 assumptions: Ranked by risk
Product Thinking Lens: What Good Balances
Customers, Segments & Market Sizing
Core Idea
"The best segment isn't the biggest - it's the one you can win, and prove." Sizing is a decision tool: it helps you say "no" with confidence and allocate scarce resources to markets worth building for.
TAM / SAM / SOM
- TAM (Total Addressable Market): All buyers for this specific job-to-be-done (NOT the whole industry)
- SAM (Serviceable Addressable Market): Segment/geo/constraints you CAN serve with your current capabilities
- SOM (Serviceable Obtainable Market): What you can WIN in 12–24 months, constrained by GTM capacity
Market Sizing Formula
Bottom-up baseline: Units (accounts / users / transactions) × Price × Adoption % = revenue opportunity.
Then apply reality filters: market boundary, adoption & switching friction, pricing/packaging, GTM (Go-to-Market) capacity, competition intensity.
Sizing Methods - Use At Least 2 + Triangulate
| Method | How It Works | When to Use | Watch Out |
|---|---|---|---|
| Top-down | Start from credible total, apply filters | Quick direction, investor decks | Easy to inflate; must sanity-check with bottom-up |
| Bottom-up | Units × Price × Adoption % (fully auditable) | Most credible for defensible models | Labor intensive; assumptions must be sourced |
| Value theory | Value created → willingness to pay | Pricing discovery phase | Theoretical; needs behavioral validation |
| Analogs | Comparable product's penetration + ARPU | New market with established proxies | Context differences matter; justify the analogy |
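As a rough illustration of the "use at least 2 methods + triangulate" advice above, the sketch below sizes SOM bottom-up and cross-checks it top-down. All figures (account counts, prices, shares) are hypothetical placeholders, not benchmarks.

```python
# Illustrative bottom-up sizing with a top-down cross-check (triangulation).
# All numbers are hypothetical placeholders - replace with sourced assumptions.

def bottom_up_som(target_accounts: int, reachable_share: float,
                  adoption_rate: float, acv: float) -> float:
    """Units x Price x Adoption %, narrowed to what GTM capacity can actually reach."""
    return target_accounts * reachable_share * adoption_rate * acv

def top_down_check(total_market: float, segment_share: float,
                   obtainable_share: float) -> float:
    """Start from a credible total, then apply segment and win-rate filters."""
    return total_market * segment_share * obtainable_share

som_bottom_up = bottom_up_som(target_accounts=40_000, reachable_share=0.25,
                              adoption_rate=0.05, acv=6_000)              # = $3.0M
som_top_down = top_down_check(total_market=1_200_000_000,
                              segment_share=0.02, obtainable_share=0.15)  # = $3.6M

# Triangulate: if the two estimates diverge by much more than ~2x, revisit assumptions.
print(f"Bottom-up SOM: ${som_bottom_up:,.0f}  |  Top-down SOM: ${som_top_down:,.0f}")
```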
Market Boundary Canvas (Fill Before Any Math)
| Field | What to Define |
|---|---|
| Job (outcome) | What outcome is the customer purchasing? |
| Buyer (budget owner) | Who controls the budget decision? |
| Context | Workflow/constraints/industry |
| Geography + timeframe | Where and in what time horizon? |
| Substitutes | What competes for the same budget/job? |
| Unit for sizing | Accounts / users / transactions? |
Segment Scorecard (Beachhead Selection)
| Criterion | What to Look For |
|---|---|
| Pain severity | High urgency + high cost of status quo |
| Willingness to pay | Budget exists + ROI story is credible |
| Reachability | You can find and reach them (channels, communities) |
| Time-to-value | Visible win quickly; low setup friction |
| Differentiation | Clear wedge vs alternatives |
| Retention potential | Recurring usage; switching costs or habit |
| Competition intensity | Not a knife fight with 10 incumbents |
| Strategic fit | Matches your team's strengths and roadmap |
Competitive Landscape & Positioning
The Big Insight
Most deals are lost to the buyer's default alternative (DIY, spreadsheets, status quo) - not to direct competitors. If you can't name the #1 alternative buyers compare you to, you can't control the category lens.
Competitors vs Alternatives - Two Different Games
Direct competitors (same category):
- Same buyer expectations (feature checklist + pricing norms)
- You win by outperforming on the category's decision drivers
- Drives parity expectations
- Messaging: "better/faster/cheaper than X"
Alternatives (the buyer's default):
- Includes DIY, spreadsheets, email, WhatsApp, "do nothing"
- Drives switching fear
- Messaging: "why this approach is smarter"
- Where category choice and narrative matter most
Differentiation Framework: Parity → Differentiator → Proof
| Level | What It Is | Example |
|---|---|---|
| Parity (table stakes) | Must-have just to enter shortlist | Contacts/pipeline/email sync |
| Differentiator (wedge) | Meaningful, defensible, provable advantage | Prebuilt workflows for specific ICP (Ideal Customer Profile) |
| Proof (reasons to believe) | Evidence that makes the claim credible | Time-to-first-value metric, customer story |
Positioning Statement Template
For [target segment] who see us as [category/type of solution]
We help them [achieve outcome]
Because [mechanism / capability]
Unlike [alternative], we [trade-off / focus]
Message Map Structure
- 1 Core message: The single claim you want buyers to remember
- 3 Proof points: Evidence buckets (data, story, demo, trust signal)
- Top 3 Objections + Answers: Switching fears + mitigation
6-Step Competitive Workflow
Switching Cost Types
To reduce switching friction early: CSV (Comma-Separated Values) import + templates, guided setup, freemium trials, migration concierge, prebuilt integrations.
Qualitative Research & Insight Synthesis
Core Idea
Qual research reduces decision risk by making customer behavior legible. Better evidence beats better opinions. If your output is only quotes, you didn't synthesize.
Key Terminology
| Term | Definition | Strength |
|---|---|---|
| Observation | What people DO | Strongest evidence |
| Self-report | What people SAY they do | Weaker |
| Preference | What people say they WANT | Weakest |
| Finding | A fact observed in data | - |
| Theme | A recurring pattern of findings | - |
| Insight | Theme + implication (so what?) | Decision-ready |
| Opportunity | An insight framed as a bet | Actionable |
Qual Method Chooser
| Goal | Best Method | Sample Size | Output |
|---|---|---|---|
| Explore needs | Problem interviews | 6–12 | Themes / JTBD |
| Workflow/context | Contextual inquiry | 4–8 | Friction map |
| Validate UI | Usability tests | 5–8 per segment | Severity issues |
| Over time | Diary study | 10–20 | Triggers / timeline |
| Messaging | Concept test | 8–15 | Objections + language |
Interview Best Practices
Ask about past behavior and specifics:
- "Tell me about the last time…"
- "Walk me through the steps…"
- "What did you do when that failed?"
- "What did that cost you (time/money/risk)?"
Avoid hypotheticals and leading questions:
- "Would you use…?" (hypothetical)
- "Do you like…?" (leading)
- "Wouldn't it be great if…?" (leading)
- "Would you pay…?" (hypothetical)
Bias Types (Know These for MCQs)
Mitigation: Ask for examples, artifacts, receipts. Keep returning to specifics: "What happened next?"
Insight Statement Formula
Synthesis pipeline: Raw notes → Clusters → Themes → Insights → Opportunities/Bets
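A minimal sketch of that pipeline in Python, assuming notes have already been coded by hand; the quotes, codes, and theme labels are hypothetical examples, and the "so what?" that turns a theme into an insight still has to be written by a person.

```python
# Minimal sketch of the synthesis pipeline: coded notes -> themes -> evidence counts.
# Quotes, codes, and the code-to-theme mapping below are hypothetical examples.
from collections import defaultdict

notes = [
    {"quote": "I re-ping approvers on email every day", "code": "chasing_approvals"},
    {"quote": "No idea who still has to sign off",       "code": "no_visibility"},
    {"quote": "I forward the thread again and again",    "code": "chasing_approvals"},
]

theme_for_code = {
    "chasing_approvals": "Manual follow-up burden",
    "no_visibility": "No approval status visibility",
}

themes = defaultdict(list)
for note in notes:
    themes[theme_for_code[note["code"]]].append(note["quote"])

for theme, evidence in themes.items():
    # An insight adds the implication ("so what?") on top of the theme.
    print(f"{theme}: {len(evidence)} supporting notes")
```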
Quantitative Research & Analytics Foundations
Quant vs Qual Decision Matrix
| Need | Best Tool | Why |
|---|---|---|
| Actual behavior at scale | Telemetry / product analytics | Observed; low self-report bias |
| Attitudes / preferences / awareness | Surveys | Can size across segments; captures stated constraints |
| Causality (impact of a change) | Experiments / A/B tests | Estimates lift; reduces confounding |
| Mechanisms / context / new needs | Qualitative | Explains "why" |
4 Types of Quant Research
- Descriptive: Funnels, cohorts, segments, trends
- Diagnostic: Decomposition, regression, attribution
- Predictive: Propensity models, time series, scoring
- Causal: A/B tests, holdouts, diff-in-diff
Survey Types for PMs
| Type | Best For | Typical Output |
|---|---|---|
| CSAT (Customer Satisfaction) / pulse | Fixing a specific journey moment | Top-2 box + segment deltas |
| NPS (Net Promoter Score) | Macro trend & segmentation | NPS score + drivers (directional) |
| Concept test | Compare positioning options | Preference share + objections |
| MaxDiff (Maximum Difference Scaling) / trade-offs | Prioritization (when everything is "important") | Ranked importance + winners/losers |
| Pricing / WTP (Willingness to Pay) | Range/elasticity direction | Bands + sensitivity; validate with behavior |
Survey 7-Step Process (with Gate Logic)
- Brief: Decision + hypothesis + population
- Constructs: Define what you're measuring
- Questionnaire: Draft + bias check
- Sampling: Quotas + n targets
- Pilot: Test for clarity + drop-off
- Field + clean: Quality monitoring + cleaning
- Report → Memo → Backlog: Action or next test
Tracking Plan Essentials
- Events: User actions to capture (at least 10 core events)
- Properties: Key attributes per event (user_id, plan_type, etc.)
- Owner: Who is responsible for each event
- Funnel steps: Ordered journey with defined entry/exit
- Cohort definition: Group by same start event + time window
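The sketch below expresses the tracking-plan essentials above as plain data and adds a small funnel-conversion helper; the event names, properties, and owners are hypothetical examples, not a prescribed schema.

```python
# Hedged sketch of a tracking plan expressed as data; event names, properties,
# and owners are hypothetical examples.
tracking_plan = {
    "events": {
        "signup_completed":      {"properties": ["user_id", "plan_type", "source"], "owner": "growth_pm"},
        "invite_sent":           {"properties": ["user_id", "workspace_size"],      "owner": "growth_pm"},
        "first_project_created": {"properties": ["user_id", "device_type"],         "owner": "activation_pm"},
    },
    "funnel": ["signup_completed", "invite_sent", "first_project_created"],
    "cohort": {"start_event": "signup_completed", "window_days": 7},
}

def funnel_conversion(step_counts: list[int]) -> list[float]:
    """Step-over-step conversion for an ordered funnel."""
    return [round(b / a, 3) for a, b in zip(step_counts, step_counts[1:])]

print(funnel_conversion([1000, 420, 180]))  # e.g. [0.42, 0.429]
```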
Metrics, North Star & Experimentation
Types of Product Metrics (AARRR: Acquisition, Activation, Retention, Revenue, Referral)
NSM (North Star Metric)
The NSM (North Star Metric) captures sustained value delivered to customers and correlates with long-term business success. It should be: measurable, sensitive to product improvement, and hard to game.
Cab booking example NSM: "Completed rides per active rider per week"
Metric Tree Structure
Example (cab booking):
NSM: Completed rides/active rider/week
Driver: Booking success rate → Lever: simplify pickup confirmation
Driver: On-time pickup % → Lever: better ETA model
Guardrails: cancellation rate, refunds, safety/SOS usage
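A minimal sketch of the same tree as a data structure, which keeps drivers, levers, and guardrails explicitly linked to the NSM during reviews; the names simply mirror the cab-booking example above.

```python
# Minimal sketch of the cab-booking metric tree above as a data structure.
metric_tree = {
    "nsm": "completed_rides_per_active_rider_per_week",
    "drivers": [
        {"metric": "booking_success_rate", "lever": "simplify pickup confirmation"},
        {"metric": "on_time_pickup_pct",   "lever": "better ETA model"},
    ],
    "guardrails": ["cancellation_rate", "refund_rate", "sos_usage"],
}

def review_checklist(tree: dict) -> list[str]:
    """Flatten the tree into the metrics a weekly review should look at."""
    drivers = [d["metric"] for d in tree["drivers"]]
    return [tree["nsm"], *drivers, *tree["guardrails"]]

print(review_checklist(metric_tree))
```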
Measurement Frameworks - When to Use Each
| Framework | What It Measures | Best For |
|---|---|---|
| HEART (Happiness, Engagement, Adoption, Retention, Task success) | Happiness, Engagement, Adoption, Retention, Task success | UX (User Experience) quality questions |
| AARRR (Pirate Metrics) | Acquisition, Activation, Retention, Revenue, Referral | Growth bottleneck diagnosis |
| OMTM (One Metric That Matters) | One Metric That Matters (short-term focus) | Team-level execution phase (4–8 weeks) |
| Metric Tree | NSM → Drivers → Levers → Guardrails | Strategy-to-execution system |
A/B Testing Fundamentals
End-to-End Measurement OS
Key principle: Experiments are not the starting point. Teams must first define value, define metrics, ensure data reliability, diagnose - and only then run causal tests.
Business Models, Pricing & Packaging
Key Terminology
Business Model Taxonomy
| Model | Core Logic | Key Metrics | Failure Mode |
|---|---|---|---|
| Subscription SaaS (Software as a Service) | Ongoing access; seat-based | NRR (Net Revenue Retention), churn, activation depth, CAC (Customer Acquisition Cost) payback | Metric mismatch (seat vs usage), shelfware |
| Usage-based | Value scales with consumption intensity | Gross margin/unit, usage retention, expansion rate | Bill shock, budgeting complexity |
| Transactional / Take-rate | Value per event/exchange | GMV (Gross Merchandise Value), take-rate, conversion, fraud rate | Disintermediation risk |
| Marketplace | Enable exchange; charge % of GMV (Gross Merchandise Value) | GMV, take-rate, market health | Leakage, trust/safety cost |
| Freemium / PLG (Product-Led Growth) | Free tier drives adoption; convert via limits | Freemium conversion %, support load, activation | Support load, low conversion rate |
| Advertising | Monetize attention; payer is advertiser | RPM (Revenue Per Mille/Thousand), CTR (Click-Through Rate), ad quality | User experience degradation |
How to Choose a Business Model
- What is the primary unit of value delivered?
- What is the primary unit of cost-to-serve?
- Is value continuous (subscription) or episodic (transaction)?
- Does value scale with usage intensity? (→ usage-based)
- Is there a network effect? (→ marketplace)
- Long procurement cycles? (→ enterprise licensing)
SpendWise Example (from Session 13 Materials)
Target: 8–12% freemium conversion by month 12. CAC payback <6 months. LTV:CAC >3:1.
Unit Economics Essentials
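A minimal sketch of the two unit-economics checks referenced above (CAC payback and LTV:CAC), using a simple margin-over-churn LTV approximation; the input numbers are illustrative placeholders, not SpendWise data.

```python
# Hedged sketch of CAC payback and LTV:CAC. Inputs are illustrative placeholders.

def cac_payback_months(cac: float, monthly_margin_per_customer: float) -> float:
    """Months of gross margin needed to recover the cost of acquiring a customer."""
    return cac / monthly_margin_per_customer

def ltv_to_cac(arpu: float, gross_margin: float, monthly_churn: float, cac: float) -> float:
    """LTV approximated as (ARPU x margin) / churn, then compared to CAC."""
    ltv = (arpu * gross_margin) / monthly_churn
    return ltv / cac

print(cac_payback_months(cac=120, monthly_margin_per_customer=25))                    # 4.8 months (< 6 target)
print(round(ltv_to_cac(arpu=40, gross_margin=0.8, monthly_churn=0.05, cac=120), 1))   # ~5.3 (> 3:1 target)
```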
Discovery & Validation
Discovery vs Delivery - Two Operating Systems
Discovery:
- Output: evidence
- Cadence: fast loops
- Artifacts: cheap + reversible
- Question: what to learn next?
Delivery:
- Output: working software
- Cadence: planned execution
- Artifacts: durable + scalable
- Question: how to deliver with quality?
Assumption Mapping
Map assumptions across Desirability (do users want this?), Feasibility (can we build it?), and Viability (does it work for the business?). Then rank by Uncertainty × Consequence of being wrong.
Hypothesis Writing Formula
If we introduce [change], then [behavioral metric X] will improve
by [amount Y] within [time window Z].
Example: "If we add a first-step checklist, first-key-action completion will rise from 42% to 50% in 7 days."
MVP (Minimum Viable Product) Types - Pick by Risk
| MVP Type | Best For | Signals to Watch | Common Mistake |
|---|---|---|---|
| Prototype | Testing flow, copy, and structure. Do users understand and navigate? | Task completion, confusion points, time to complete | Treating prototype clicks as proof of WTP (Willingness to Pay) or sustained usage |
| Concierge | Testing real value with manual delivery backstage. High-touch services. | Repeat usage, WTP (Willingness to Pay), quality expectations, service friction | Mistaking manual heroics for a scalable business model |
| Fake-door | Testing demand/intent fast before building capability | CTR (Click-Through Rate), signup rate, segment differences, drop after clarification | Treating shallow clicks as proof of lasting value |
| Wizard of Oz | Testing automated-feeling experience when automation isn't ready (AI) | Trust/confidence, task completion, latency tolerance | Hiding risks that would change user consent |
Experiment Card Structure
- Hypothesis: Specific belief to test
- Test method: How you'll test it
- Primary metric: One outcome reflecting core behavior
- Success threshold: Pre-agreed number that changes the decision (e.g., 42% → 50%)
- Guardrail: What must not worsen
- Decision owner: Named person
- Decision rules: Scale / Iterate / Stop criteria
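A sketch of the experiment card above as a small data structure with the Scale / Iterate / Stop rule made explicit; the field values reuse the illustrative 42% → 50% example, and the owner name is a placeholder.

```python
# Sketch of an experiment card with an explicit decision rule. Values are illustrative.
from dataclasses import dataclass

@dataclass
class ExperimentCard:
    hypothesis: str
    primary_metric: str
    baseline: float
    success_threshold: float
    guardrail_metric: str
    guardrail_limit: float
    decision_owner: str

    def decide(self, observed: float, guardrail_observed: float) -> str:
        if guardrail_observed > self.guardrail_limit:
            return "stop"        # guardrail worsened - do not scale
        if observed >= self.success_threshold:
            return "scale"       # pre-agreed threshold met
        return "iterate" if observed > self.baseline else "stop"

card = ExperimentCard("Checklist lifts first-key-action completion",
                      "first_key_action_rate", 0.42, 0.50,
                      "support_tickets_per_100_signups", 4.0, "growth PM")
print(card.decide(observed=0.47, guardrail_observed=3.2))  # -> "iterate"
```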
Prioritization, Stakeholders & Decision Memos
PM Decision Stack
Prioritization Framework Chooser
| Framework | Best When | Why It Helps |
|---|---|---|
| RICE | Many comparable ideas with estimable inputs | Combines upside with cost; discounts weak evidence via confidence |
| ICE (Impact, Confidence, Ease) | Quick workshop when only directional estimates available | Lightweight; great for PLG (Product-Led Growth) / growth experiment lists |
| Impact–Effort 2×2 | Early sorting; visual alignment needed | Honest roughness beats fake rigor; good for exec conversations |
| WSJF (Weighted Shortest Job First) | Waiting is expensive; delivery programs | Makes urgency and job size visible together |
| Kano | Shaping feature set / release scope | Separates must-haves from performance features and delighters |
| MoSCoW | Time-boxed release scope negotiation | Forces commitment on "must" vs "could" - prevents everything being urgent |
| Narrative thesis | Large, discontinuous strategic bets | Better than feature math for existential choices |
RICE (Reach, Impact, Confidence, Effort) Formula
RICE score = (Reach × Impact × Confidence) ÷ Effort
MoSCoW (Must, Should, Could, Won't) Categories
- Must have: the release fails without it
- Should have: important, but the release can ship without it
- Could have: nice to have if capacity allows
- Won't have (this time): explicitly out of scope for this release
Stakeholder Mapping
Map stakeholders on Power × Interest grid. Key questions:
- Who decides? Who contributes? Who needs to know?
- Who can accelerate, block, or reshape the call?
- Pre-wire key stakeholders before the decision meeting
DACI Framework: D = Driver (moves it forward), A = Approver (final call), C = Contributor (input), I = Informed (kept updated)
Decision Memo Structure
- Context: Why is this decision happening now?
- Options: What were the alternatives considered?
- Recommendation: What did you decide and why?
- Risks: What could go wrong? Mitigation?
- Review date: When do we revisit?
Roadmapping & Product Ops
What a Roadmap Actually Does
A roadmap is a communication device for strategic direction, not a guarantee sheet or feature promise list.
Roadmap Types - One Does Not Fit All
| Type | Shows What | Best For |
|---|---|---|
| Outcome roadmap | Strategic change view (desired results) | Strategy translation, leadership alignment |
| Feature roadmap | Capability view (what capabilities are being built) | Internal visibility, team-level alignment |
| Release roadmap | Launch view (what ships when) | GTM (Go-to-Market) coordination, sales/support readiness |
| Portfolio roadmap | Cross-product view | Leadership trade-offs across teams |
| Platform roadmap | Infrastructure/reusable capability view | Platform teams, dependency management |
Now / Next / Later Framework
The fastest way to raise roadmap quality: shift language from feature shipment to change creation.
Product Ops Cadence
| Rhythm | Purpose | Key Artifacts |
|---|---|---|
| Weekly | Execution review, blocker resolution | Standup notes, decision log updates |
| Monthly | Roadmap review, intake processing | Metric review, roadmap updates, RAID log |
| Quarterly | Strategy review, OKR (Objectives & Key Results) check-in, kill criteria | Business review, roadmap reset, OKR scoring |
RAID: R = Risks, A = Assumptions, I = Issues, D = Dependencies - track all four in a living log.
PRDs, UX Collaboration & Quality
"Ready to Build" - What It Really Means
- Problem, desired outcome, and user are named clearly
- Team knows what is in scope now, later, and not at all
- Design intent is concrete enough that implementation trade-offs are visible
- Quality expectations, analytics, accessibility (a11y), and support needs are explicit
- Goals are vague or conflict with other priorities
- Stories exist but AC (Acceptance Criteria) and edge cases do not
- Design and engineering making different assumptions
- Testing, metrics, or support expectations postponed
PRD (Product Requirements Document) Structure
| Section | Why It Exists | Typical Contents |
|---|---|---|
| Context / Problem | Explain why the work matters now | User problem, business context, evidence, dependencies |
| Goals / Non-goals | Prevent scope drift and false assumptions | Desired outcomes, what success is, what this will NOT solve |
| Scope | Clarify the deliverable envelope | In scope, out of scope, phased delivery, constraints |
| Experience requirements | Describe what users should be able to do | Flows, states, design references, content rules |
| Quality + data requirements | Make execution measurable and testable | Acceptance criteria, analytics, NFRs, support, QA scenarios |
| Risks / Open questions | Expose uncertainty early | Edge cases, unresolved decisions, dependencies |
Acceptance Criteria Patterns
Strong AC (Acceptance Criteria) describes observable behavior in the user's context, covers success states + validation rules + failure states, and makes "done" agreeable across design, engineering, and QA (Quality Assurance).
Edge Case Types (Classic Exam Question)
| Type | What to Ask | Example |
|---|---|---|
| Missing data | What if required info is absent or stale? | Import list but one email is invalid |
| Permissions | Who can see, edit, retry, or cancel? | Only admins can resend pending invites |
| System failure | What happens when API/dependency fails? | Save fails after network interruption |
| State mismatch | What if user returns later or uses another device? | Checklist shows completed state after mobile refresh |
| Abandon / retry | What if user leaves halfway through? | Draft setup progress persists for 24 hours |
NFRs (Non-Functional Requirements): performance, reliability, security, accessibility, scalability, compliance
Analytics Requirements in the PRD (Product Requirements Document)
| Need | Example |
|---|---|
| Primary outcome metric | % of new admins who invite 3 teammates in day 1 |
| Funnel events | Checklist viewed → invite started → invite completed → project created |
| Properties / segments | Workspace size, plan type, device type, setup source |
| Experiment / rollout fields | Variant, feature flag state, beta cohort |
| Ownership | Analytics owner, event spec reviewer, dashboard owner |
Enterprise, Platform & API Product Management
Why Enterprise PM Needs a Distinct Playbook
Key Terminology
| Term | Definition |
|---|---|
| Enterprise motion | Pilots, security review, procurement, rollout, and renewal around a deal |
| Platform | Reusable capability many teams or partners can configure and repeat |
| Product surface | End-user workflow customers directly use and pay for |
| API (Application Programming Interface) contract | Stable promise of resources, fields, auth, limits, errors, and behavior |
| DX (Developer Experience) | How fast a developer can discover, test, troubleshoot, and launch |
| Versioning | Disciplined way to introduce change without breaking consumers |
| SLO (Service Level Objective) vs SLA (Service Level Agreement) | SLO = internal target; SLA = external promise that may carry remedies if missed |
| Deprecation | Still works, but customers are told to migrate on a defined timeline |
| Sunset / EOL (End of Life) | Capability is no longer supported or available |
| Brownout | Planned temporary disruption before cutoff to reveal hidden dependencies |
| Ecosystem | Partners, integrators, and add-ons that extend platform value |
Who Influences an Enterprise Product Decision
Enterprise Lifecycle
API (Application Programming Interface) Contract - What to Define
- Resources: Entities and their fields
- Auth: Authentication mechanism - OAuth (Open Authorization), API key, etc.
- Rate limits: Per-client and per-endpoint limits
- Error codes: Standard error format and recovery guidance
- Versioning policy: How breaking vs non-breaking changes are handled
- Deprecation timeline: Advance notice + migration path + brownout plan
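A hedged sketch of two contract details worth pinning down early: a stable error envelope and rate-limit signalling. The field names, limit values, and docs URL are illustrative conventions invented for this sketch, not any specific vendor's API.

```python
# Illustrative contract details: a stable error shape and explicit rate limits.
from typing import Optional

# Per-endpoint limits a contract should state explicitly (illustrative values).
RATE_LIMITS = {"default": {"requests_per_minute": 600}, "bulk_export": {"requests_per_minute": 60}}

def error_envelope(code: str, message: str, retry_after_seconds: Optional[int] = None) -> dict:
    """Stable, versioned error shape so consumers branch on `code`, not message text."""
    body = {"error": {"code": code, "message": message,
                      "doc_url": f"https://docs.example.com/errors/{code}"}}
    if retry_after_seconds is not None:
        body["error"]["retry_after_seconds"] = retry_after_seconds
    return body

# A 429 response should tell the consumer how to recover, not just that it failed.
print(error_envelope("rate_limit_exceeded", "Too many requests", retry_after_seconds=30))
```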
Platform vs Product Metrics (Common Exam Checkpoint)
MCQ Practice Bank
Case Study Questions & Model Answers
Case 1 - Fintech Retention & False PMF (Product-Market Fit) Signal
Results after 60 days:
• New user D30 retention: 22%
• CAC (Customer Acquisition Cost) increased from $18 to $74
• First cohort retention remains stable at 57%
The CEO says: "We just need to optimize the ads to get better-fit users."
Multi-select - pick 2 correct answers: As PM, what are the two most important things to investigate before authorizing further spend?
A investigates who those early users were. If they were self-selected (word-of-mouth, waitlist, Product Hunt, founder network), their behavior predicts nothing about cold paid-acquisition users. PMF with a self-selected cohort ≠ PMF with a general market audience.
B investigates what happens post-click. If onboarding assumed users already knew why the product matters, cold-traffic users will churn at first friction. The 22% D30 retention suggests the activation funnel breaks for users without pre-existing context.
Why not C? Ad optimization is a downstream tactic, not a root cause investigation. The CEO's instinct ("just optimize the ads") is the exact trap this question tests. Better ads can't fix a product that fails to activate cold users.
Why not D? Comparing CAC to industry benchmarks is a surface-level check that doesn't diagnose the problem. A $74 CAC could be fine if LTV (Lifetime Value) supported it - but the question is about why retention collapsed, not whether the cost is "normal."
Case 2 - Platform Migration Escalation Failure
• Customer A: "We're waiting for your SSO (Single Sign-On) integration to be ready"
• Customer B: "Our legal team needs to review the new data processing agreement"
• Customer C: "We don't have internal bandwidth until Q3"
The PM escalates to leadership: "We have a migration risk." Leadership responds: "Just send them reminder emails."
Single-select: What is the most accurate diagnosis of the PM's failure in this escalation?
1. Quantified business impact: "41% of ARR ($X.XM) is at risk if these 3 accounts don't migrate by [date]."
2. Root-cause differentiation: Each customer has a different blocker - one is a product dependency (SSO), one is a legal/compliance process, one is a resourcing constraint. Treating them as one "risk" is lazy framing.
3. Proposed mitigation per blocker:
• Customer A: Ship SSO by day 60 or offer a workaround + extension
• Customer B: Pre-send the DPA (Data Processing Agreement) and have legal do a joint review call
• Customer C: Offer a dedicated migration engineer or a paid extension
4. Decision needed: "I need approval for [specific resource/timeline/exception]."
Leadership gave a weak response ("send reminders") because the PM gave them a weak input. The quality of the decision you get is bounded by the quality of the escalation you write.
Why not A? 90 days is NOT sufficient when blockers are structural (product dependency, legal process, resourcing). Time alone doesn't unblock these.
Why not C? While account management should be involved, the PM failure was in escalation quality, not in who was involved.
Why not D? While timeline planning matters, the question specifically asks about the escalation failure, not the planning process.
Case 3 - Sunk Cost Fallacy in Feature Development
The remaining three design partners express moderate interest but none have committed to using it post-launch.
The PM says: "We're too far in to stop now."
Multi-select - pick 2 correct answers: What should the PM do?
C is the action step. Recognizing sunk cost isn't enough - you need a structured reassessment:
• Revalidate remaining partners: "Moderate interest" ≠ commitment. Get explicit: will they deploy it? When? What's the activation metric?
• Expand the ICP search: Is there demand beyond the original 5 design partners? Check support tickets, feature requests, competitor analysis.
• Model the revised business case: With 2/5 partners gone and 3 uncommitted, what's the realistic adoption? Does it still justify the remaining engineering weeks?
Why not B? Immediately killing is also reactive. You might discover the remaining 3 partners have strong intent, or that the feature serves a broader market. The right response is structured reassessment, not a knee-jerk kill.
Why not D? This is the sunk cost fallacy dressed in operational language. "Technical debt" and "morale" are real concerns but don't justify completing a feature nobody will use. Shipping unused features creates more long-term debt (maintenance, complexity, support).
Case 4 - B2B (Business-to-Business) Marketplace Health Diagnostic
However, the business is now showing stress:
• Repeat buyer rate dropped from 44% → 27% in two quarters
• Average consultant utilization fell to 31% (despite more supply than ever)
• Support tickets related to "consultant quality" have tripled
• NPS (Net Promoter Score) among buyers: 18 | Consultant NPS: -4
• Finance: CAC (Customer Acquisition Cost) rose 2.4×, LTV (Lifetime Value) declined
Stakeholder opinions:
• CEO: "We're becoming a race to the bottom on price"
• Head of Supply: "We need more supply"
• Head of Demand: "We need better search and matching"
• CTO: "Let's rebuild the matching algorithm"
• CFO: "Raise the take rate"
You are the PM responsible for marketplace health and growth.
Model Answer - Part 1: Problem Framing Using Marketplace Dynamics
The symptoms (falling repeat rate, rising CAC, collapsing NPS) are not independent problems - they are outputs of a marketplace in quality decay. To diagnose correctly, separate three distinct failure modes:
• Consultant NPS at -4 suggests frustration (idle inventory, price competition, poor lead quality)
• Undifferentiated supply floods the marketplace, lowering perceived quality
• "Consultant quality" tickets tripled = buyers aren't getting matched to the right supply
• Buyer NPS at 18 = passive at best - no organic growth loop
• More supply + undifferentiated matching = buyers see a commodity, not a curated service
• The take rate isn't the issue - value delivered per transaction is declining
Model Answer - Part 2: Data & Signals to Pull First
| Data Point | What It Tells You | Where to Get It |
|---|---|---|
| Repeat rate by consultant quality tier | Are high-quality consultants driving repeat business? Is the drop concentrated in matches with low-rated supply? | Transaction + rating data |
| Match-to-completion rate | What % of matches result in a completed engagement vs. abandonment? | Funnel analytics |
| Time-to-first-response / time-to-match | Is matching speed degrading despite more supply? | Platform event logs |
| Support ticket text clustering | What specifically are "quality" complaints about - expertise mismatch? Communication? Deliverable quality? | Support / NLP (Natural Language Processing) clustering |
| Consultant cohort analysis | Are recently onboarded consultants rated lower than the original cohort? Is aggressive supply acquisition lowering the quality bar? | Cohort + rating data |
| Buyer segment retention | Which buyer segments still have high repeat rates? Where is value being created? | Segment-level retention curves |
| Revenue concentration / Pareto | What % of GMV comes from top 20% of consultants? Are they being utilized or buried? | Revenue + utilization data |
| Disintermediation signals | Are buyers and consultants transacting off-platform after first match? | Repeat patterns, payment data, survey |
Model Answer - Part 3: Marketplace Health Metrics Beyond GMV
GMV alone is a vanity metric for a marketplace. True marketplace health requires measuring:
• Time-to-match
• Search-to-booking conversion
• % of buyer requests that get at least 1 qualified response within 24h
• Post-engagement rating (both sides)
• Repeat engagement rate with same consultant
• NPS (Net Promoter Score) by engagement type
• Dispute rate & resolution satisfaction
• Off-platform leakage rate
• Willingness to recommend (NPS)
• Return rate after a bad experience
• Revenue per match (not per signup)
• LTV (Lifetime Value) by buyer cohort
• CAC payback at a per-segment level
• Consultant earnings vs. opportunity cost
Model Answer - Part 4: Segmentation - Where Value Is Created vs. Destroyed
Supply-side hypothesis: A small cohort of high-quality, specialized consultants drives most repeat business and buyer satisfaction. The recently acquired long-tail supply is diluting match quality, confusing buyers, and depressing everyone's utilization.
Demand-side hypothesis: Startups with recurring legal needs (contracts, IP, fundraising) retain well when matched with specialists. One-off, low-value requests churn regardless and inflate CAC without generating LTV.
Model Answer - Part 5: Hypotheses & How to Test Them
| Hypothesis | Test Method | Success Signal |
|---|---|---|
| H1: Aggressive supply onboarding lowered average quality, causing buyer churn | Compare buyer retention & NPS when matched with pre-2024 vs. post-2024 consultants | Pre-2024 matches show >2× repeat rate |
| H2: Matching algorithm doesn't weight specialization, causing expertise mismatches | Audit 50 "quality complaint" tickets; categorize mismatch type (expertise, communication, price) | >60% are expertise mismatch |
| H3: Low-intent buyers (one-off needs) depress metrics and aren't worth the CAC | Segment buyer cohorts by intent (recurring legal need vs. one-time); compare LTV and repeat rate | Recurring-need buyers have 3×+ LTV |
| H4: Top consultants are demoralized by low utilization and price competition, at risk of leaving | Survey top-20% consultants by revenue; interview 10 about satisfaction and alternatives | >30% considering leaving the platform |
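A minimal sketch of the H1 check: compare buyer repeat rates for matches with pre-2024 vs. post-2024 consultants. The records below are illustrative placeholders standing in for the transaction and rating data named in Part 2.

```python
# Minimal sketch of the H1 cohort comparison; records are illustrative placeholders.
engagements = [
    {"consultant_cohort": "pre_2024",  "buyer_repeated": True},
    {"consultant_cohort": "pre_2024",  "buyer_repeated": False},
    {"consultant_cohort": "post_2024", "buyer_repeated": False},
    {"consultant_cohort": "post_2024", "buyer_repeated": False},
]

def repeat_rate_by_cohort(rows: list[dict]) -> dict[str, float]:
    """Share of engagements per consultant cohort where the buyer came back."""
    counts: dict[str, tuple[int, int]] = {}
    for row in rows:
        hits, n = counts.get(row["consultant_cohort"], (0, 0))
        counts[row["consultant_cohort"]] = (hits + int(row["buyer_repeated"]), n + 1)
    return {cohort: round(hits / n, 2) for cohort, (hits, n) in counts.items()}

print(repeat_rate_by_cohort(engagements))  # H1 supported if pre_2024 rate is >2x post_2024
```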
Model Answer - Part 6: Prioritized Interventions (Top 3)
1. Supply quality tiers (verified vs. provisional consultants)
Why: Stops the dilution problem at source. Protects buyer experience. Gives top consultants differentiation and visibility.
Metric: Buyer repeat rate on verified-tier matches vs. provisional
2. Specialization-weighted matching
Why: The #1 quality complaint is likely expertise mismatch. Better matching → higher satisfaction → higher repeat rate → higher LTV.
Metric: Post-match NPS, repeat engagement rate, quality ticket volume
3. Refocus demand acquisition on high-LTV segments
Why: CAC rose 2.4× because acquisition was untargeted. Focusing on high-LTV segments improves payback and match quality.
Metric: CAC payback period, D30 & D90 repeat rate for new cohort
Model Answer - Part 7: Navigating Short-Term GMV vs. Long-Term Health
This is the central tension. Every stakeholder is proposing a fix that protects their domain but may worsen the system:
| Stakeholder | Proposal | Risk |
|---|---|---|
| CEO | "Race to bottom on price" | Correct observation, but the cause is quality dilution, not pricing |
| Head of Supply | "More supply" | More supply without quality gates accelerates the death spiral |
| Head of Demand | "Better search/matching" | Partially right, but matching can't fix bad supply |
| CTO | "Rebuild algorithm" | Algorithm rebuild is 3–6 months; structured intake can ship in weeks |
| CFO | "Raise take rate" | Raising take rate when NPS is 18 and -4 will accelerate churn on both sides |
Medium-term (bend the curve): Ship specialization-weighted matching + demand segment focus. These reduce CAC and improve retention within 1–2 quarters.
Long-term (rebuild trust): Earn the right to raise take rate by first demonstrating better outcomes. Higher buyer satisfaction → higher willingness to pay → sustainable take-rate increase.
What to STOP doing: Stop indiscriminate supply acquisition. Stop broad paid demand campaigns. Stop measuring success by GMV alone.
Model Answer - Part 8: 90-Day Plan & Executive Recommendation
• Run hypothesis tests H1–H4
• Interview 10 top consultants + 10 churned buyers
• Deliverable: Diagnostic memo with root cause attribution
• Ship structured intake form for demand-side (need type, industry, urgency)
• Shift 60% of demand acquisition budget to high-LTV segments
• Pause new supply acquisition until quality gates are live
• Run A/B: specialization-weighted matching vs. current algorithm
• Survey top consultants: satisfaction + retention intent
• Decision: scale verified-tier matching or iterate
• Revised 6-month roadmap tied to health metrics, not GMV alone
• Recommendation on take-rate adjustment (only if NPS has improved)
I recommend: (1) Implement supply quality tiers immediately, (2) Ship specialization-weighted matching within 6 weeks, (3) Refocus acquisition on high-LTV demand segments. We should STOP indiscriminate supply acquisition and STOP measuring success by GMV alone. I'll present a diagnostic readout in 2 weeks and intervention results at the 90-day mark.
Quick Reference Templates
Positioning Statement Template
For [target segment] who see us as [category / type of solution],
we help them [achieve specific outcome]
because [our mechanism / unique capability],
unlike [top alternative], we [trade-off / focus / what we refuse to do].
Problem Statement Template
[Specific segment] struggles with [pain] when [task / context],
resulting in [measurable impact / consequence].
Current alternatives [what they use today] fail because [why they're inadequate].
Strong Hypothesis Template
If we [specific intervention],
then [behavioral metric] will [improve/increase/decrease]
by [specific amount] within [time window].
Decision Memo Template (1-page)
CONTEXT: Why this decision, why now
OPTIONS: Option A | Option B | Option C
RECOMMENDATION: We recommend [X] because [evidence + rationale]
TRADE-OFFS: What we gain vs what we sacrifice
RISKS: What could go wrong + mitigation
REVIEW DATE: When to revisit this decision
RICE (Reach, Impact, Confidence, Effort) Scoring
| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| [Your item] | # users/period | 0.25–3.0 | 0.5–1.0 | person-weeks | R×I×C÷E |
Common impact scale: 0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive
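A small scoring sketch that applies the formula and impact scale above to a hypothetical backlog; the items and their inputs are made up for illustration.

```python
# RICE scoring sketch: (Reach x Impact x Confidence) / Effort. Inputs are hypothetical.

def rice(reach: float, impact: float, confidence: float, effort_weeks: float) -> float:
    return (reach * impact * confidence) / effort_weeks

backlog = [
    {"name": "Guided onboarding checklist", "reach": 4000, "impact": 1.0, "confidence": 0.8, "effort": 3},
    {"name": "AI summary beta",             "reach": 1500, "impact": 2.0, "confidence": 0.5, "effort": 8},
]

# Rank the backlog by descending RICE score.
for item in sorted(backlog, key=lambda i: rice(i["reach"], i["impact"], i["confidence"], i["effort"]),
                   reverse=True):
    score = rice(item["reach"], item["impact"], item["confidence"], item["effort"])
    print(f'{item["name"]}: RICE = {score:,.0f}')
```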
Key Formulas to Memorize
Frameworks Quick-Reference Card
| Framework | Use For | Key Question It Answers |
|---|---|---|
| RICE (Reach, Impact, Confidence, Effort) / ICE (Impact, Confidence, Ease) / PIE (Potential, Importance, Ease) | Backlog prioritization scoring | Which option has best value per effort? |
| WSJF (Weighted Shortest Job First) | Delivery queue sequencing | Which should we do first given cost of delay? |
| MoSCoW (Must, Should, Could, Won't) | Release scope negotiation | What must ship vs. what can wait? |
| Kano | Feature category analysis | Must-have vs performance vs delight? |
| TAM (Total Addressable Market) / SAM (Serviceable Addressable) / SOM (Serviceable Obtainable) | Market sizing | How big is the prize? What can we realistically win? |
| AARRR (Acquisition, Activation, Retention, Revenue, Referral) | Growth funnel metrics | Where in the funnel is the biggest drop? |
| HEART (Happiness, Engagement, Adoption, Retention, Task success) | UX (User Experience) quality metrics | Is the user experience improving? |
| DACI (Driver, Approver, Contributor, Informed) | Decision roles clarity | Who decides, contributes, and is informed? |
| JTBD (Jobs-to-be-Done) | Problem framing | What progress is the user trying to make? |
| Now/Next/Later | Roadmap communication | What are we doing and when, at right precision? |
| BMC (Business Model Canvas) | Business model design | How does the whole system create and capture value? |