▶ What is a north-star metric and why does it matter?
A north-star metric is the single metric that most directly measures the customer value your product creates: the one metric the entire company optimizes for. Examples: Slack = DAU (daily active users), Stripe = transaction volume, Netflix = hours watched. A metric earns north-star status when it's (1) not vanity (moving it moves business fundamentals), (2) measurable daily, (3) actionable (teams can influence it), and (4) tied to revenue (growing it grows the business). Without a north star, teams optimize for local maxima (like signup growth) that quietly destroy retention. Picking the wrong north star (like page views for a news site) leads to clickbait and churn.
▶ OKRs vs metric trees — what's the difference?
OKRs (Objectives and Key Results) are the goals and outcomes you want to move. Metric trees are the input/output relationships showing how to move them. Example OKR: 'Objective: increase user engagement. Key Result: grow DAU from 10k to 15k.' A metric tree for DAU shows DAU = (Signups × Activation %) + (Existing Users × Retention %): if retention is the bottleneck, you focus on retention, not signups. OKRs are 'what we want to move'; metric trees are 'how things connect.' Use both: OKRs for direction and accountability, metric trees for diagnosis and prioritization.
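To make the tree concrete, here is a minimal Python sketch of that DAU decomposition (all numbers are hypothetical, chosen only for illustration). Nudging each input by the same relative amount shows which lever actually moves the north star:

```python
# Toy metric tree: DAU = (signups x activation) + (existing x retention).
# Every number below is hypothetical, for illustration only.

def dau(signups: float, activation: float, existing: float, retention: float) -> float:
    """Newly activated users plus retained existing users."""
    return signups * activation + existing * retention

baseline = dict(signups=2_000, activation=0.30, existing=12_000, retention=0.78)
print(f"baseline DAU: {dau(**baseline):,.0f}")

# Nudge each input by +5% (relative) and see which moves DAU the most.
for lever in baseline:
    nudged = dict(baseline, **{lever: baseline[lever] * 1.05})
    print(f"+5% {lever:>10}: +{dau(**nudged) - dau(**baseline):,.0f} DAU")
```

With these toy numbers, a 5% relative lift in retention adds roughly 15x more DAU than the same lift in signups, which is exactly the prioritization signal the tree exists to surface.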
▶ Leading vs lagging indicators — when do I use each?
Lagging indicators are historical (30d retention, revenue, churn): they tell you whether something worked, but too late to change course. Leading indicators are predictive (signup rate, activation %, feature adoption %): they tell you today whether tomorrow will be good. In dashboards, use leading indicators for daily/weekly decision-making (is onboarding healthy?) and lagging indicators for monthly/quarterly reviews (did we hit our goal?). Example funnel: signups (daily, leading) → 7d activation % (leading) → 30d retention (lagging). If activation drops, you know retention will drop about 30 days later.
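Here is a hedged pandas sketch of monitoring that leading indicator; the table shape and column names (signup_date, activated_within_7d) are assumptions about how your events are stored, and the data is a toy:

```python
import pandas as pd

# Toy signups table: one row per user, with signup date and whether the
# user activated within 7 days (a leading indicator for 30d retention).
signups = pd.DataFrame({
    "signup_date": pd.to_datetime(
        ["2024-05-01"] * 40 + ["2024-05-02"] * 50 + ["2024-05-03"] * 45
    ),
    "activated_within_7d": [True] * 28 + [False] * 12   # May 1: 70%
                         + [True] * 30 + [False] * 20   # May 2: 60%
                         + [True] * 18 + [False] * 27,  # May 3: 40%
})

# Daily activation rate: something you can act on today.
daily = signups.groupby("signup_date")["activated_within_7d"].mean()

# Flag cohorts where the leading indicator predicts a retention dip.
THRESHOLD = 0.55
for day, rate in daily.items():
    if rate < THRESHOLD:
        print(f"{day.date()}: activation {rate:.0%} is below {THRESHOLD:.0%}; "
              "expect this cohort's 30d retention to dip")
```

The point is the lag structure: the May 3 cohort's weak activation is visible within a week, a month before it would show up as a lagging retention number.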
▶ What makes a metric 'vanity' and how do I spot it?
A vanity metric sounds impressive but doesn't predict business value. Examples: total signups (meaningless if 90% churn), page views (don't correlate with retention), daily active users (if none of them pay). Vanity metrics go up even when the business is broken. To spot one, ask: (1) would losing 50% of this number hurt the business? (2) can any team directly influence it? (3) does it correlate with revenue, retention, or engagement? If the answer to any of these is 'no', it's vanity. Real metrics are boring: MRR, LTV, retention rate, NPS, support ticket volume. Use vanity metrics for PR; use real metrics for decisions.
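Question (3) can be checked directly. A minimal sketch, assuming you have weekly values for the candidate metric and for revenue side by side (the numbers here are toys, and with so few points the correlation is only indicative):

```python
import pandas as pd

# Toy weekly series; in practice, pull these from your warehouse.
weekly = pd.DataFrame({
    "page_views": [100_000, 120_000, 150_000, 180_000, 210_000, 250_000],
    "mrr":        [50_000,  50_500,  49_800,  50_200,  49_900,  50_100],
})

# Page views climb every week while MRR stays flat: the classic vanity shape.
corr = weekly["page_views"].corr(weekly["mrr"])
print(f"page_views vs MRR correlation: {corr:.2f}")
if abs(corr) < 0.3:
    print("weak correlation with revenue: treat as a vanity metric for decisions")
```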
▶ Cohort analysis: when to use retention tables vs survival curves?
A retention table is a simple grid (cohort × week/month, cells = % retained): best for quick Monday-morning dashboards and spotting seasonal patterns. A survival curve is a smooth line showing retention decay over weeks or months: best for understanding long-term trends, comparing product versions, and predicting LTV. Use both: tables for daily monitoring, curves for strategy meetings. If your retention rows flatten out (a steady 85% period after period), you have a product-market-fit signal. If they keep declining (80% → 70% → 60%), you have a churn crisis; diagnose it with exit surveys plus usage segmentation.
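A minimal pandas sketch of a retention table, assuming an activity log with user_id, signup_month, and active_month columns (the names and the tiny dataset are illustrative):

```python
import pandas as pd

# Toy activity log: one row per (user, month in which they were active).
events = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5],
    "signup_month": ["2024-01"] * 8 + ["2024-02"] * 3,
    "active_month": ["2024-01", "2024-02", "2024-03",   # user 1
                     "2024-01", "2024-02",              # user 2
                     "2024-01", "2024-02", "2024-03",   # user 3
                     "2024-02",                         # user 4
                     "2024-02", "2024-03"],             # user 5
})

events["signup_month"] = pd.PeriodIndex(events["signup_month"], freq="M")
events["active_month"] = pd.PeriodIndex(events["active_month"], freq="M")
# Months elapsed since signup (period 0 = the signup month itself).
events["period"] = (events["active_month"] - events["signup_month"]).map(lambda d: d.n)

cohort_sizes = events.groupby("signup_month")["user_id"].nunique()
table = (events.pivot_table(index="signup_month", columns="period",
                            values="user_id", aggfunc="nunique")
               .div(cohort_sizes, axis=0))
print(table.round(2))  # rows = cohorts, columns = months since signup, cells = % retained
```

Plotting each row of this table as a line gives the survival-curve view; for a proper estimator that handles users who are still active, a Kaplan-Meier fit (e.g. via the lifelines library) is the usual next step.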
▶ What makes a good dashboard and how do I design one?
A good dashboard has: (1) one metric per glance (no scrolling to find what matters); (2) clear hierarchy (north star on top, diagnostics below); (3) 3-5 tiles only, not 20 (less is more); (4) contextual sparklines (is this metric up or down this week?); (5) drilldown capability (click the funnel to see it step by step); (6) automated alerts (ping Slack if a metric drops more than 10% versus last week). A bad dashboard: 100+ tiles, equal emphasis on everything, no context. Example layout: top row = [MRR sparkline | LTV trend | churn rate], second row = [signup funnel | retention curve | NPS distribution]. Every metric should answer a business question; remove anything decorative.
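Point (6) takes only a few lines. A minimal sketch using Slack's incoming-webhook JSON format; the webhook URL is a placeholder you'd substitute, and the metric values are made up:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = ""  # placeholder: paste your Slack incoming-webhook URL here

def check_and_alert(metric: str, this_week: float, last_week: float,
                    threshold: float = 0.10) -> None:
    """Alert when a metric drops more than `threshold` week over week."""
    drop = (last_week - this_week) / last_week
    if drop <= threshold:
        return
    text = (f":rotating_light: {metric} down {drop:.0%} vs last week "
            f"({last_week:,.0f} -> {this_week:,.0f})")
    print(text)  # always log locally
    if SLACK_WEBHOOK_URL:  # POST to Slack only when a webhook is configured
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps({"text": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

# MRR fell from 52k to 45k, a ~13% drop, so this fires.
check_and_alert("MRR", this_week=45_000, last_week=52_000)
```

Run it from a daily scheduled job so the alert arrives without anyone watching the dashboard.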
▶ Data democratization — why does it matter and how do I set it up?
Data democratization means every team has self-service access to the metrics it needs (via Looker, Mode, Tableau, etc.) instead of waiting on analysts. Why it matters: (1) speed (PMs get answers in hours, not days), (2) ownership (teams own their metrics, not analysts), (3) learning (teams spot patterns themselves). How to set it up: (1) define a metric taxonomy (standardized names: signup_rate, not signup/day/new), (2) build a semantic layer in a BI tool, (3) create dashboards per team (Growth = funnel + segments; Support = ticket volume + CSAT), (4) train teams on SQL and metrics basics, (5) centralize calculations (e.g. computed metrics in dbt). Without democratization, analytics is a bottleneck; with it, analytics scales.
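For steps (1) and (5), a lightweight precursor to a full semantic layer is a single, code-reviewed registry of metric definitions. A hedged sketch (the metric names, owners, and definitions are illustrative, not prescribed by any tool):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str        # canonical name every dashboard must use
    owner: str       # team accountable for this number
    definition: str  # plain-language or SQL definition of the calculation

# One registry, imported everywhere, so "signup_rate" means the same thing
# in Growth's dashboard and in the board deck.
METRICS = {
    "signup_rate": Metric("signup_rate", "growth",
                          "signups / unique_visitors, daily grain"),
    "activation_rate_7d": Metric("activation_rate_7d", "growth",
                                 "users activated within 7 days / signups"),
    "csat": Metric("csat", "support",
                   "avg post-ticket survey score, weekly grain"),
}

def lookup(name: str) -> Metric:
    """Fail loudly on non-canonical names instead of silently forking metrics."""
    if name not in METRICS:
        raise KeyError(f"unknown metric {name!r}; canonical names: {sorted(METRICS)}")
    return METRICS[name]

print(lookup("signup_rate"))
```

The same idea scales up cleanly: the registry entries become dbt models or semantic-layer definitions, and the 'fail loudly' rule becomes a CI check.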