Research Tool · Primary Sources Only

AI Hiring Bias Tracker

50+ verified statistics on algorithmic discrimination in hiring — race, gender, age, and disability bias across screening, ranking, interview, and legal outcomes. Filter, search, and cite with confidence.

Compiled by JobCannon Research from peer-reviewed studies, EEOC filings, and government reports.

50+ verified statistics
2004–2026 data span
6 evidence categories
Primary sources only (no secondary claims)

Why This Tracker Exists

AI tools now play a role in over 79% of hiring processes (SHRM, 2024), yet only 9% of companies using them have conducted an independent bias audit. The statistics below are not theoretical: they come from court filings, EEOC enforcement actions, government-commissioned audits, and peer-reviewed papers.

Each entry cites the primary source so you can verify independently. Where the original study is paywalled, the citation includes enough metadata to locate it through your institution or Google Scholar.

Statistics Database

Frequently Asked Questions

What is AI hiring bias?

AI hiring bias occurs when automated screening, ranking, or interview tools produce systematically different outcomes for protected groups — race, gender, age, or disability status — that are not justified by job-related criteria. Bias can enter through skewed training data, proxy variables (zip code, name, graduation year), or model architecture choices.

Is AI hiring bias illegal in the US?

Yes. Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) all apply to AI hiring tools. The EEOC's 2022 technical assistance document confirmed that AI systems producing disparate impact on protected groups can constitute unlawful discrimination, and the agency's case against iTutorGroup, its first involving AI-driven hiring discrimination, settled in 2023.

What is the 4/5ths rule in AI hiring?

The 4/5ths (or 80%) rule is a guideline from the EEOC's Uniform Guidelines on Employee Selection Procedures. If a protected group's selection rate is less than 80% of the highest-selected group's rate, that is considered evidence of adverse impact. Many AI hiring audits use this as the primary benchmark.
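The 4/5ths check described above is straightforward to compute. The sketch below uses hypothetical applicant counts (the group names and numbers are illustrative, not drawn from any study in this tracker) and flags any group whose selection rate falls below 80% of the highest-selected group's rate:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups: dict) -> dict:
    """Apply the 4/5ths (80%) rule across groups.

    groups maps a group name to a (selected, applicants) pair.
    Returns group -> (ratio, flagged), where ratio is the group's
    selection rate divided by the highest group's rate, and flagged
    is True when that ratio falls below the 0.80 guideline threshold.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: (r / top, r / top < 0.80) for g, r in rates.items()}

# Hypothetical audit numbers: (selected, applicants) per group.
groups = {
    "group_a": (48, 80),   # 60% selection rate
    "group_b": (30, 75),   # 40% selection rate -> ratio 0.40/0.60 < 0.80
}
print(adverse_impact_ratios(groups))
```

Note this is a screening heuristic, not a legal determination: the Uniform Guidelines treat the 80% threshold as evidence of adverse impact, and real audits also weigh sample size and statistical significance.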

Which AI hiring bias laws are in effect in 2026?

Key laws include: NYC Local Law 144 (2023) requiring annual bias audits; Illinois AIVIA (2020) requiring disclosure for video interview AI; Colorado SB 205 (2024) covering high-stakes algorithmic decisions; EU AI Act (2024) classifying hiring AI as high-risk. Federal coverage comes from EEOC enforcement of existing Title VII, ADEA, and ADA.

How can job seekers protect themselves from AI hiring bias?

Practical steps: request disclosure under applicable laws (NYC, Illinois); submit accommodation requests in writing if you need adjusted testing conditions; use RIASEC and Big Five self-assessments to understand your own profile independently; document rejection patterns if you suspect systematic bias. The EEOC accepts charges from individuals who believe they were discriminated against by AI tools.

Understand your own career profile

AI tools may misjudge you — but you don't have to. Take a science-backed personality and career assessment to understand your strengths independently of any algorithm.