▶ Annual reviews vs continuous feedback – which is more effective for performance?
Annual reviews create a false sense of closure and fail to course-correct during the year. Continuous feedback (weekly 1-on-1s, real-time praise and coaching) accelerates learning and catches problems early. Best practice: continuous feedback, quarterly touchpoints, and an annual review that documents what was already discussed. The annual review should never contain surprises – if it does, continuous feedback failed. 1-on-1s should be at least 30 minutes weekly, focused on feedback and career development, not status updates (handle those async in Slack).
▶ How do I run an effective calibration session without bias?
Calibration = comparing performance across peers to ensure fairness and consistency in ratings. (1) Collect evidence first (360 feedback, peer input, quantitative metrics), (2) discuss each person's strengths/gaps, (3) stack-rank (not forced curve, but relative positioning), (4) adjust outliers for consistency, (5) document reasoning. Watch for: recency bias (last 3 months weighted too high), halo effect (one strength excuses underperformance), likeability bias (friends rated higher). Use blind scoring if possible. Multiple calibrators reduce individual bias.
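The blind-scoring and multiple-calibrator steps above can be sketched in code. This is a minimal illustration with hypothetical data: calibrator names, anonymized employee IDs, the 1-5 rating scale, and the disagreement threshold are all assumptions, not a prescribed tool.

```python
from statistics import mean, stdev

# Hypothetical first-pass scores: calibrator -> {anonymized ID -> rating, 1-5}.
# Names are replaced by IDs so the first pass is blind (mitigates likeability bias).
scores = {
    "calibrator_a": {"emp_01": 4, "emp_02": 2, "emp_03": 5},
    "calibrator_b": {"emp_01": 5, "emp_02": 3, "emp_03": 5},
    "calibrator_c": {"emp_01": 4, "emp_02": 2, "emp_03": 3},
}

def calibrate(scores, spread_threshold=1.0):
    """Average ratings across calibrators; flag high-disagreement cases for discussion."""
    by_employee = {}
    for ratings in scores.values():
        for emp, r in ratings.items():
            by_employee.setdefault(emp, []).append(r)
    results = {}
    for emp, ratings in by_employee.items():
        spread = stdev(ratings)  # disagreement among calibrators
        results[emp] = {
            "mean": round(mean(ratings), 2),
            "spread": round(spread, 2),
            # High spread -> this is an "outlier" to discuss and adjust (step 4).
            "discuss": spread > spread_threshold,
        }
    return results

print(calibrate(scores))
```

Averaging over several calibrators dilutes any one person's halo or recency bias; the spread flag tells you where the room disagrees and the evidence needs a second look.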
▶ When do I put someone on a PIP (Performance Improvement Plan)?
PIPs are a last-resort escalation, not a 'manage-out' vehicle (though sometimes that's the outcome). Use a PIP when: (1) the employee is underperforming against clear expectations, (2) you've given continuous feedback and coaching for 4-6 weeks with no improvement, and (3) the role itself is a good fit (if it's a fit problem, reassign first). PIP structure: 30-90 days, specific metrics/behaviors to improve, weekly check-ins, clear exit criteria (succeed = return to normal standing, fail = exit). Document everything. Roughly 70% of PIPs end in exit; a ratio in that range suggests you're using PIPs correctly (as a selection tool, not punishment). Never surprise someone with a PIP in the meeting – they should already know they're struggling.
▶ What's a 9-box grid and how do I use it for promotion decisions?
9-box = 3×3 matrix: Y-axis = Current Performance (low/medium/high), X-axis = Future Potential (low/medium/high). Employees in the top-right (high performance + high potential) are promotion candidates; top-middle are solid performers with an unclear growth trajectory; bottom-left are candidates for managing out. Use it for: (1) identifying the bench (who's ready for the next level?), (2) retention planning (high-potential people are flight risks if not developed), (3) succession planning (who replaces the VP when she leaves?), (4) talent spend (invest in top-right, develop top-middle, manage out bottom-left). Avoid: forcing people into the grid too early (you need 6+ months of data) and using it solely for rankings (combine with calibration).
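The grid is just a lookup from (performance, potential) to a talent action. A minimal sketch, assuming the level names above; the suggested actions for the unlisted boxes are illustrative defaults, not a prescribed taxonomy.

```python
# 9-box placement: map a (performance, potential) pair to a suggested action.
LEVELS = ("low", "medium", "high")

ACTIONS = {
    ("high", "high"): "promotion candidate / succession bench",
    ("high", "medium"): "solid performer: develop and retain",
    ("low", "low"): "manage out or reassign",
}

def place(performance: str, potential: str) -> dict:
    """Return the box coordinates and a suggested action for one employee."""
    if performance not in LEVELS or potential not in LEVELS:
        raise ValueError("levels must be one of low/medium/high")
    return {
        "box": (performance, potential),
        # Boxes not listed above fall back to a neutral default.
        "action": ACTIONS.get((performance, potential),
                              "develop in role; reassess next cycle"),
    }

print(place("high", "high")["action"])  # -> promotion candidate / succession bench
```

Keeping the action table explicit (rather than computed) mirrors how calibration rooms actually use the grid: the boxes are discussion prompts, not a formula.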
▶ OKRs vs KPIs in performance management – which should I use?
KPIs = what success looks like (e.g., 'reduce churn 5%', '95% uptime'). OKRs = ambitious goals tied to company strategy ('Increase NPS from 40 to 60' = Objective; 'Launch 3 features, track adoption' = Key Results). For performance: use KPIs for accountability (non-negotiables, hard expectations) and OKRs for aspiration (stretch goals, innovation). Weight reviews 70% KPI + 30% OKR. Rating someone on OKRs alone is demoralizing (a ~70% miss rate on stretch goals is expected). Use KPIs as pass/fail, OKRs to celebrate stretch and learning. Tie bonus to KPIs (80%) + OKR progress (20%).
▶ How do I handle fairness and bias in performance ratings?
Bias creeps in at every stage: goal-setting (some people get vague goals, others specific ones), feedback (some get detailed coaching, others 'just fine'), and rating (halo effect, recency bias, demographic bias). Mitigation: (1) standardize goals across the level/team (templates), (2) gather 360 feedback from multiple sources (not just the manager), (3) calibrate with peers across teams (consistency), (4) anonymize ratings in the first pass, (5) score quantitative metrics blind (numbers, not names), (6) document the reasoning for every promotion or big raise, (7) audit the rating distribution by demographics (is group X consistently rated lower?). Tools like Culture Amp can help surface bias. Take 30% longer on calibration – when you rush, bias wins.
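The demographic audit in step (7) reduces to comparing average ratings per group and flagging large gaps. A minimal sketch with made-up data; the group labels, 1-5 scale, and gap threshold are all illustrative assumptions.

```python
from statistics import mean

# Hypothetical cycle data: one record per employee rating.
ratings = [
    {"group": "A", "rating": 4}, {"group": "A", "rating": 5},
    {"group": "A", "rating": 4}, {"group": "B", "rating": 3},
    {"group": "B", "rating": 3}, {"group": "B", "rating": 4},
]

def audit_by_group(ratings, gap_threshold=0.5):
    """Mean rating per demographic group; flag when the spread warrants a closer look."""
    by_group = {}
    for r in ratings:
        by_group.setdefault(r["group"], []).append(r["rating"])
    means = {g: round(mean(v), 2) for g, v in by_group.items()}
    gap = max(means.values()) - min(means.values())
    # A large gap isn't proof of bias, but it's a signal to investigate.
    return {"means": means, "gap": round(gap, 2), "investigate": gap > gap_threshold}

print(audit_by_group(ratings))
```

A flagged gap is a prompt for investigation (are goals, coaching, and calibration actually consistent across groups?), not a verdict by itself.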
▶ How do I give difficult feedback without demoralizing the employee?
Use the Radical Candor framework: (1) Care Personally (show genuine interest in their growth, not just in hitting metrics), (2) Challenge Directly (name the specific behavior and its impact; don't sugarcoat). Example: 'I care about your growth here, which is why I'm bringing this up: when you miss deadlines, the team scrambles and morale drops. Here's what I need: commit to a 1-week buffer, or let's talk about capacity.' Avoid: the praise sandwich (good-bad-good = neutered feedback), vague complaining, and comparing to others. After difficult feedback: offer concrete support (coaching, training, a role shift), check in at the next 1-on-1, and celebrate when they improve. Most people can hear hard truths if they trust you care about their success.