▶ Mixpanel vs Amplitude vs PostHog: which should I choose?
Mixpanel: best for retention + cohort analysis (mature UI, strong cohort workflows, pricey). Amplitude: best for product teams, built-in experimentation, real-time dashboards (good all-rounder, industry standard). PostHog: best for self-hosted/privacy-first orgs, includes feature flags + experiments built-in, lower cost, smaller feature set. Heap: best for autocapture (no event instrumentation needed), easier for non-technical teams but less flexible. PostHog excels if you're shipping features fast (flags built-in). Mixpanel wins if you need robust cohort workflows + retention curves and an enterprise budget justifies the cost. Amplitude if you want the experimentation-heavy all-rounder.
▶ How do I design an event tracking plan?
Start with 5-7 core events: signup, activation (first key action), feature adoption, engagement (daily active use), retention (day 7/30), churn indicators, monetization. Define event properties (user_id, timestamp, platform, feature_used, result). Use a consistent semantic naming pattern, object + past-tense action (e.g., 'signup_button_clicked', 'report_generated'). Avoid tracking every micro-interaction; focus on business outcomes. Create a shared tracking spec doc (Google Sheet: event name + properties + trigger condition), version control it, and validate incoming events against it before acting on the data.
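A tracking spec is only useful if events are checked against it. A minimal sketch of that validation step; the event names and required properties below are illustrative, not a real product's spec:

```python
# Hypothetical tracking plan: event name -> required property keys.
REQUIRED_PROPS = {"user_id", "timestamp", "platform"}

TRACKING_PLAN = {
    "signup_button_clicked": REQUIRED_PROPS | {"referrer"},
    "report_generated": REQUIRED_PROPS | {"feature_used", "result"},
}

def validate_event(name: str, props: dict) -> list[str]:
    """Return a list of problems; an empty list means the event conforms."""
    if name not in TRACKING_PLAN:
        return [f"unknown event: {name}"]
    missing = TRACKING_PLAN[name] - props.keys()
    if missing:
        return [f"{name} missing properties: {sorted(missing)}"]
    return []

# Reject events that drift from the spec before they reach the warehouse.
print(validate_event("report_generated",
                     {"user_id": "u1", "timestamp": 1700000000,
                      "platform": "web", "feature_used": "report",
                      "result": "ok"}))  # []
```

Running a check like this in CI (or at ingestion) catches renamed or under-instrumented events early, before dashboards silently go stale.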
▶ What's the difference between funnels and cohort analysis?
Funnels (point-in-time): show drop-off between steps in a single flow (signup → email-verified → first-login → purchase). Answer: where do users abandon? Cohorts (behavioral groups): track retention/churn/engagement of users grouped by acquisition date, feature adoption, or behavior (e.g., 'users who tried feature X in week 1 have 40% better retention'). Funnels = drop-off across steps in a flow; cohorts = behavior over time. Use funnels to debug conversion, cohorts to understand long-term impact and predict churn.
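The funnel logic above can be sketched directly over a flat event log. The sample events are made up; a real log would come from your analytics export:

```python
# Count users who completed each funnel step in order (hypothetical data).
FUNNEL = ["signup", "email_verified", "first_login", "purchase"]

events = [  # (user_id, event_name, unix_ts) -- illustrative sample
    ("u1", "signup", 1), ("u1", "email_verified", 2),
    ("u1", "first_login", 3), ("u1", "purchase", 4),
    ("u2", "signup", 1), ("u2", "email_verified", 5),
    ("u3", "signup", 2),
]

def funnel_counts(events, steps):
    # Earliest timestamp of each (user, event) pair.
    firsts = {}
    for user, name, ts in events:
        key = (user, name)
        firsts[key] = min(ts, firsts.get(key, ts))
    users = {u for u, _, _ in events}
    counts = []
    for i in range(len(steps)):
        # A user reaches step i only if every prior step happened in order.
        reached = 0
        for u in users:
            times = [firsts.get((u, s)) for s in steps[: i + 1]]
            if all(t is not None for t in times) and times == sorted(times):
                reached += 1
        counts.append(reached)
    return counts

print(funnel_counts(events, FUNNEL))  # [3, 2, 1, 1]
```

Dividing adjacent counts gives per-step conversion, which is exactly the drop-off view a funnel chart shows.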
▶ How do I set up retention curves and interpret them?
Track Day 1, 7, 14, 30 retention for a cohort (% of users active N days after signup/activation). Plot cohorts over time to spot trends. Healthy SaaS: Day 7 ≥50-70%, Day 30 ≥30-50%, Day 90 ≥15-30%. If Day 7 drops below 30%, the engagement loop is broken. Compare retention by feature adoption: users who adopted feature X may show a +20% lift vs control. Use Kaplan-Meier curves (time-to-churn) for richer survival analysis. Retention curves feed LTV estimates: with a roughly constant monthly churn rate c, expected customer lifespan ≈ 1/c, so 10% monthly churn implies a ~10-month average lifespan (3% churn would imply ~33 months).
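Day-N retention by cohort is simple to compute once you have signup dates and activity dates per user. A pure-stdlib sketch over made-up data:

```python
# Day-N retention by signup cohort (illustrative data, not real users).
from datetime import date, timedelta

signups = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1), "u3": date(2024, 1, 8)}
activity = {  # user -> set of dates on which that user was active
    "u1": {date(2024, 1, 2), date(2024, 1, 8), date(2024, 1, 31)},
    "u2": {date(2024, 1, 2)},
    "u3": {date(2024, 1, 15)},
}

def day_n_retention(signups, activity, n):
    """Fraction of each signup cohort active exactly n days after signup."""
    by_cohort = {}
    for user, start in signups.items():
        total, retained = by_cohort.get(start, (0, 0))
        hit = start + timedelta(days=n) in activity.get(user, set())
        by_cohort[start] = (total + 1, retained + (1 if hit else 0))
    return {c: retained / total for c, (total, retained) in by_cohort.items()}

print(day_n_retention(signups, activity, 7))
# {datetime.date(2024, 1, 1): 0.5, datetime.date(2024, 1, 8): 1.0}
```

This uses the strict "active on exactly day N" definition; many tools also offer unbounded ("on or after day N") retention, which is more forgiving for low-frequency products.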
▶ What's a North Star metric and how do I choose one?
North Star = the one metric that best captures long-term business success. Examples: DAU (engagement), feature adoption rate (expansion), customer health score (retention), feature-enabled revenue (monetization). Avoid vanity metrics (pageviews, signups without activation). Choose one that: (1) correlates with revenue, (2) moves with product improvements, (3) is driven by leading input metrics you can act on (e.g., DAU tends to follow feature adoption with a ~2-week lag). Don't over-rotate: the North Star guides strategy, but input metrics (activation rate, first-day retention) are the tactical levers. Revisit annually. Spotify uses DAU but weights engagement depth; Figma uses ERR (engagement-weighted retention).
▶ How do I interpret A/B test results in product analytics?
Always check: (1) sample size + statistical power: for 95% confidence and 80% power, n per variant ≈ 16 · σ²/δ², where δ is the minimum detectable effect. (2) Duration: run ≥1 week to avoid day-of-week bias, ≥2 weeks for mobile/low-frequency actions. (3) p-value <0.05 is not a guarantee: across 100 true-null tests you still expect ~5 false positives, and naive early peeking inflates that rate further; if you want to stop early, use a proper sequential testing procedure with adjusted thresholds. Watch for multiple-comparison bias (running 10 tests at α = 0.05 gives a ~40% chance of at least one false positive). Combine primary + secondary metrics: lift in signups but churn increased? Dig deeper. Report effect size + confidence interval, not just the p-value. Run incrementality tests (hold-out group) for high-stakes changes if attribution is noisy.
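The power rule and the significance check above can be sketched with the stdlib alone. The conversion numbers in the usage lines are invented for illustration:

```python
# Sample-size rule of thumb and a two-proportion z-test, stdlib only.
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def sample_size_per_variant(p, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate n per variant for baseline rate p and absolute MDE,
    at 95% confidence / 80% power (the ~16 * sigma^2 / delta^2 rule)."""
    variance = p * (1 - p)
    return int(2 * (z_alpha + z_beta) ** 2 * variance / mde ** 2) + 1

def two_prop_ztest(x_a, n_a, x_b, n_b):
    """Return (absolute lift, z statistic, two-sided p-value)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm_cdf(abs(z)))
    return p_b - p_a, z, p_value

# 10% baseline, want to detect an absolute +2pp change.
print(sample_size_per_variant(0.10, 0.02))
# Hypothetical result: 1000/10000 control vs 1080/10000 treatment conversions.
lift, z, p = two_prop_ztest(1000, 10000, 1080, 10000)
print(round(lift, 3), round(z, 2), round(p, 4))
```

Note that the example result lands near p ≈ 0.06: "directionally promising but not significant at 0.05", which is exactly the regime where reporting the effect size and confidence interval matters more than the p-value alone.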