Quick Answer: AI-assisted résumés produce a measurable hiring lift — +7.8% more hires and +8.4% higher wages in the only large-n randomised controlled trial on the question (NBER WP 30886, 480,948 jobseekers). The lift is real. The caveat is equally real: 49% of hiring managers say they automatically dismiss résumés they identify as AI-written (Resume.io, Jan 2025), and 62% reject AI résumés that lack personalisation (Resume Now, Mar 2025). The gap between these numbers tells you exactly what works and what doesn't. AI editing on top of human-written prose wins. AI authorship of a résumé from scratch loses. This article is the evidence-anchored playbook for staying on the right side of that line.
The Evidence Base: What the Research Actually Shows
Before any tactical steps, the NBER study is worth reading carefully, because most "AI résumé tips" content either ignores the evidence or cites it wrong.
NBER Working Paper 30886 (Wiles, Munyikwa & Horton, 2023) ran a field experiment on a major freelance labour platform. 480,948 jobseekers were randomly assigned to either AI-assisted writing tools or no help. The AI-assisted group saw a +7.8% increase in hires and a +8.4% increase in wages relative to control — not a negligible effect by any labour-market standard. Critically, the mechanism was editing: grammar, structure, and clarity improvements applied to candidates' own content. The AI was not writing the résumé. It was polishing prose that the candidate had already written, making it cleaner and more legible to both human readers and algorithmic ranking.
A second study from the economics literature supports the time-and-quality framing. Noy & Zhang (Science, 2023) ran a randomised experiment on professional writing tasks and found ChatGPT cut task time by 40% and raised quality ratings by 18%. This was not a résumé study, but the mechanism transfers: AI is fast at structure and flow, and humans retain the domain knowledge and specificity. The combination is stronger than either alone.
What does not appear anywhere in the peer-reviewed literature is evidence that a fully AI-generated résumé — one where a language model produces the entire document from a job title prompt — outperforms a hand-written one. The viral survey from ResumeBuilder (Sep 2024, n=948 business leaders) found 85% of jobseekers who used ChatGPT in their search negotiated higher pay. That is an association, not a mechanism — it likely reflects that more technically confident, higher-skill candidates are also more likely to use AI tools and to negotiate. It is not evidence that ChatGPT wrote a résumé and a salary went up.
The distinction matters because the tactics change completely depending on whether you are editing or authoring.
Why Most AI Résumés Get Rejected
If the NBER trial showed a hiring lift, why do nearly half of hiring managers say they reject AI résumés on sight? Because what hiring managers flag is not "AI editing" — it is AI authorship, and the tells are pattern-level rather than sentence-level.
Resume.io surveyed 3,000 US hiring managers in January 2025 and found 49% would automatically dismiss a résumé they identified as AI-written. The rejection triggers named most often: vague "impact" claims without metrics, bullet patterns that follow the exact same structure across every role, generic skill lists that match job descriptions too literally, and a missing sense of the person behind the résumé — what they actually worked on, what was hard, what they are proud of.
Resume Now (March 2025, n=925) found 62% of hiring managers reject AI résumés lacking personalisation. "Personalisation" here means specificity: project names, product contexts, real constraints faced, outcomes that couldn't have come from any company doing similar work. Generic AI prose replaces all of this with approximations that every résumé in a given function uses — "spearheaded cross-functional initiatives," "drove strategic alignment," "leveraged data-driven insights." Recruiters have seen these phrases ten thousand times. The pattern reads as AI even when it isn't.
Resume Genius (n=1,000 US HR professionals) found 74% of hiring managers have encountered AI-generated content in résumés — and a growing share treat it as a yellow flag on the application as a whole, not just the document. The concern is not "this candidate used a tool." The concern is "this candidate cannot differentiate themselves from every other candidate who used the same tool."
The irony is sharp: AI is most useful for the mechanics that humans are worst at — grammar, parallel structure, action verb selection, keyword alignment — and least useful for the specificity that makes a résumé worth reading. The correct workflow keeps AI in its lane.
Step 1: Know Your Baseline Before Any AI Prompt
The most common AI résumé mistake happens before any prompt is written: trying to generate a résumé without a clear picture of your own strengths, working style, and career direction. AI has no access to that information. It substitutes plausible-sounding generic prose for the candidate-specific signal the résumé needs to carry.
The practical fix is to do the self-knowledge step first, then bring AI in. For most jobseekers this means a 20-minute pass across three questions: What were my three biggest measurable accomplishments in the last two years? What types of work energise me versus drain me? What would I say if a recruiter asked "why this role?" in the first phone screen?
If the answers are fuzzy, structured personality and career-fit assessments provide a vocabulary that turns out to be directly useful in résumé writing — not the horoscope-level generalities but the specific working-style dimensions that distinguish your contributions from someone with an identical job title. Understanding whether you are a technical executor, a cross-functional coordinator, or a strategic problem-solver changes which bullets you foreground and how you frame them. A career personality assessment gives you a language for that. A Big Five personality profile gives you a framework for the soft-skills narrative that AI cannot reverse-engineer from a job title. These are inputs, not outputs — they feed the AI prompt rather than replacing it.
Step 2: Write the Bullet Yourself, Then Polish
The NBER trial's design is the template. The candidates who got the hiring lift wrote their own content first. The AI then edited that content for clarity and structure. The authorship remained human. The polish became AI.
In practice this means: before opening a chat interface, write every bullet in ugly, unpolished prose that is nonetheless true. "I ran the rebrand project for the product redesign in Q3 2024, managed the three-agency vendor stack, we came in two weeks early and $40k under budget, CEO highlighted it in the board pack." That is ugly. It is also specific, verifiable, and yours. No AI could invent it.
Then paste it into the model with a clear prompt: "Rewrite this bullet in active voice, leading with the result, using the vocabulary from this job description section: [paste JD]. Keep every fact and number. Do not add claims I didn't make." The model returns something cleaner. You check it against the original for accuracy. If it has introduced phrasing you didn't write — "leveraged cross-departmental synergies" where you said "coordinated with three agencies" — you revert. The model is a copyeditor, not a ghostwriter.
The constraint to add explicitly to every prompt: no invented metrics, no invented technologies, no invented scope. Models hallucinate plausible-sounding details at a rate that varies by model and prompt quality. A claim that you "increased revenue by 23%" that you did not write will sit in your résumé until a recruiter or hiring manager asks you to walk through it — at which point the interview ends. This risk is real and underweighted in most "AI résumé tips" content.
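That fact-check is mechanical enough to script. A minimal sketch of the idea: extract the numeric claims from your original bullet and from the AI rewrite, and flag any figure the model introduced. The bullets and the regex here are illustrative, not tied to any particular tool, and spelled-out numbers ("two weeks") will slip past a digit-only pattern — treat this as a first-pass guard, not a substitute for reading the rewrite.

```python
import re

def extract_figures(text: str) -> set[str]:
    """Pull numeric claims out of a bullet: percentages, dollar amounts, plain numbers."""
    return set(re.findall(r"[$€£]?\d[\d,.]*%?", text))

def invented_figures(original: str, rewrite: str) -> set[str]:
    """Figures that appear in the AI rewrite but never appeared in your original bullet."""
    return extract_figures(rewrite) - extract_figures(original)

# Your ugly-but-true bullet, and the model's polished version of it.
original = ("Ran the Q3 2024 rebrand, managed three agencies, "
            "delivered 2 weeks early and $40k under budget.")
rewrite = ("Spearheaded Q3 2024 rebrand across three agencies, "
           "delivering 2 weeks early, $40k under budget, and a 23% revenue lift.")

flagged = invented_figures(original, rewrite)
# flagged == {"23%"} — a metric you never claimed; revert it before sending.
```

Anything the check flags is a hallucinated specific: either you can source it from your own records, or it comes out of the résumé.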
Step 3: Mirror Job Description Vocabulary, Not Job Description Intent
The ATS ranking question has a precise answer. Modern applicant tracking systems — Workday, Greenhouse, Lever — function as search engines: the recruiter searches, the system ranks by keyword relevance. The mythologised "75% ATS auto-rejection" figure traces to a 2012 marketing claim by a startup called Preptel that shut down in 2013 with no methodology published (see our ATS rejection myth deep-dive). The Enhancv survey of 25 US recruiters found 92% confirm their ATS does not auto-reject. What happens instead is that your résumé is invisible in the recruiter's search if your phrasing does not match the terms they type.
AI's practical use here is keyword auditing, not keyword stuffing. Paste the job description. Ask: "List the ten most specific technical skills, tools, and methodologies the employer is looking for — just the nouns, no interpretation." Check which ones appear in your résumé verbatim. For every gap: either the skill belongs in your résumé because you genuinely have it (in which case, add it in context), or it doesn't and you shouldn't claim it. The model is useful for spotting vocabulary gaps you might have missed — "containerisation" vs "Docker deployment," "CRO" vs "conversion rate optimisation" — and aligning your phrasing to the JD's language without importing generic filler.
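The vocabulary-gap audit above is simple enough to script yourself. A minimal sketch, assuming you have the job description's term list (hand-curated or model-extracted; the terms below are illustrative) and your résumé as plain text. It checks verbatim, case-insensitive matches only — which is deliberately crude, because verbatim matching is exactly what a recruiter's keyword search performs:

```python
def keyword_gaps(jd_terms: list[str], resume_text: str) -> list[str]:
    """Return JD terms that never appear verbatim in the résumé text."""
    haystack = resume_text.lower()
    return [term for term in jd_terms if term.lower() not in haystack]

# Terms pulled from the job description (illustrative list).
jd_terms = ["Docker", "Kubernetes", "CI/CD", "containerisation", "Terraform"]

resume_text = """
Built and shipped Docker-based deployment pipelines with CI/CD on GitLab;
migrated legacy services to Kubernetes across three environments.
"""

missing = keyword_gaps(jd_terms, resume_text)
# missing == ["containerisation", "Terraform"]
```

For each gap the script surfaces, the decision rule from above applies: add the term in context if you genuinely have the skill, and leave it out if you don't.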
The failure mode is to use AI to produce keyword-dense bullets that are technically accurate but structurally impenetrable. A recruiter reading "Orchestrated synergistic cross-functional alignment leveraging data-driven insights to optimise conversion-rate-optimisation initiatives" has not been served by the ATS ranking that surfaced this résumé. They have been shown a résumé that passes algorithmic keyword screens and fails the six-second human scan. Both matter.
Step 4: Format for the Parser, Not the Designer
This step has nothing to do with AI per se, but AI résumé generation often produces the wrong format, so it belongs in this playbook. The ATS extracts structured data from the document file before any ranking or search happens. Multi-column layouts, skills inside coloured boxes, text overlaid on images, and contact details stored in header and footer sections are common in AI-generated résumé templates and are also the most reliable ways to corrupt the parsed output.
The parser reads left-to-right, top-to-bottom, in the document's underlying text layer. A résumé built in Canva that looks visually polished will parse as fragments if the two-column layout renders as interleaved text. A scanned PDF parses as zero text. Header and footer text fields are routinely dropped by common ATS parsers — putting your phone number or LinkedIn URL in the header is a frequent, preventable loss.
The correct format is functional, not beautiful: single column, real text (no images of text), contact details in the body, clear heading hierarchy, dates in a consistent format, PDF exported from a word processor not a graphic design tool. If an AI tool generates a template that violates any of these, the visual polish is irrelevant. Fix the structure first.
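You can approximate the parser's view yourself: select-all and copy the text out of your exported PDF, then run a sanity check on that string. A minimal sketch under that assumption (the regexes are rough illustrations, not a validated contact-detail parser):

```python
import re

def parse_sanity_check(text_layer: str) -> list[str]:
    """Flag common parsing losses in the text copied out of an exported PDF.

    The copied text approximates what an ATS parser extracts; headers,
    footers, and images of text typically do not survive into it.
    """
    problems = []
    if not text_layer.strip():
        problems.append("no text layer at all: likely a scanned or image-based PDF")
    if not re.search(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", text_layer):
        problems.append("no email found: check it sits in the body, not a header")
    if not re.search(r"\+?\d[\d\s().-]{7,}\d", text_layer):
        problems.append("no phone number found: check it sits in the body, not a header")
    return problems

ok_resume = "Jane Doe | jane.doe@example.com | +44 7700 900123\nExperience ..."
print(parse_sanity_check(ok_resume))  # []
print(parse_sanity_check(""))         # all three problems: a scanned PDF yields no text
```

If the copied text comes out as interleaved fragments from a two-column layout, no regex will save it; that is the cue to rebuild in a single column.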
Step 5: Put Back What AI Strips Out
AI prose tends toward completeness and polish, which systematically removes the most informative parts of a résumé: the constraint context, the difficulty signal, the unexpected choice. Recruiters who read several hundred résumés per week become excellent at noticing absence. The bullet that says "launched product" tells a different story than "launched product after two vendor contracts fell through and one engineering lead left, hitting the original timeline within three weeks." The second version is more informative and more credible. AI, optimising for clean parallelism, reliably edits toward the first version.
After the AI editing pass, read every bullet and restore two things: the specific context that makes it plausible, and the constraint that makes it impressive. You do not need long bullets. You need specific ones. "Rebuilt from scratch" conveys more than "developed," and "reduced from 14 steps to 3" conveys more than "improved." The specifics are what you know and the model does not. Putting them back is the step that separates AI-polished from AI-generic.
Step 6: Use AI in the Negotiation Phase Too
The ResumeBuilder finding — 85% of jobseekers who used ChatGPT during their search negotiated higher pay — is more plausibly explained by preparation than by résumé quality. AI is an unusually effective negotiation prep tool because it can simulate the other side of the conversation and is available at midnight the night before the offer call.
Practical use: once you have an offer, paste the role, the offer terms, and your compensation research into the model and ask it to roleplay the recruiter's counter-arguments against your target figure. Then practice your responses until they are automatic. Ask it to produce the phrasing for three common negotiation scenarios — counteroffer, competing offer as leverage, non-salary trade-offs — and edit the phrasing until it sounds like you, not like a business-writing template. The fluency gained in 30 minutes of this prep is real. The negotiation outcome depends on many factors beyond prep, but being able to say the number calmly and not immediately fill the silence is a learnable skill and AI practice makes it faster.
The Detector Question: Is Originality.ai Actually a Risk?
The short answer is no, and the longer answer matters because it shifts attention to the real risk.
Originality.ai claims 99% accuracy in detecting AI-generated text. Independent testing by Scribbr (August 2024) found GPTZero scored only 52% accurate and Originality.ai 76% on a realistic mix of human-edited-AI and fully-AI content. A 2023 Stanford HAI analysis of over 10,000 samples found false-positive rates above 20% on text written by non-native English speakers and on creative or stylistically varied prose. An AI detector trained on patterns associated with GPT-4 output is less a litmus test than a native-English fluency screen — which would create its own legal exposure for employers who deployed it as a hiring gate.
More practically: most recruiters are not running Originality.ai on résumés. Resume Genius found 74% of hiring managers have encountered AI content, but the same survey showed only a small fraction use detection software. The ones who flag AI résumés are doing it by eye — recognising the generic patterns, the identical bullet structure, the absence of specificity. Beating the detector is the wrong goal. Writing a résumé that doesn't trigger the recruiter's eye is the right goal, and the playbook above is how you do that.
The one population where this matters more: candidates whose first language is not English. Because detectors are calibrated on native-English AI patterns, non-native writers who also use AI assistance face a compounded false-positive risk. The mitigation is the same: specificity and context that no language model could generate from a job title alone. Our Originality.ai accuracy deep-dive documents the Stanford HAI data in full.
The Bias Layer: What AI Doesn't Fix
The NBER trial showed a +7.8% lift across the population — but population averages conceal structural inequalities that AI-assisted résumés do not address and may worsen. The University of Washington AIES 2024 study analysed more than 3 million résumé-to-job comparisons by three production LLMs and found white candidates preferred in 85% of comparisons versus Black candidates in only 9%. Male names were preferred in 52% of comparisons; female names in 11%. If your résumé is being screened by AI at any stage in the hiring pipeline — and the Workday 173-million-applications figure makes AI-assisted screening a near-certainty at scale — these biases operate on signals you did not put there and cannot edit away.
The implications for résumé strategy are real, even if uncomfortable. Research on résumé "whitening" (Kang, 2016, replicated 2024) found Black candidates received 25% more callbacks when they removed cultural markers from their résumés. The finding is a measurement of discrimination, not a recommendation — reporting it is not the same as endorsing it. What it tells you is that the signal environment your résumé enters is not neutral, and AI polish does not change that. Referrals, targeted applications to employers with documented pay-equity data, and legal channels for discrimination (see our Mobley v. Workday explainer) are the responses to a structural problem, not résumé tactics.
The AI hiring bias literature is evolving rapidly — the EEOC, ACLU, and class-action plaintiffs' bar are actively litigating it. The AI hiring bias studies deep-dive covers the full evidence base. The practical takeaway for résumé writers: AI helps most on the mechanics that are within your control, and the bias layer is not within your control. Optimise what you can; know the limits of the tool.
Red Lines: What Not to Do
Five practices that are common in "AI résumé tips" content and are actively harmful:
Do not paste a job description and ask AI to write your résumé from scratch. The output will be accurate to the JD and false to your experience. Every claim it makes is unverifiable because you didn't write it. When asked about any bullet in an interview, you will either have to reconstruct what the AI meant or concede that the résumé does not reflect your actual work.
Do not use AI to inflate scope or metrics. "Managed $2M budget" when you tracked a $200k project budget is the kind of specific claim that comes up in a background check or a structured interview. Invented numbers are not a résumé problem — they are a trust problem. Recruiters who discover them during the process do not move forward; hiring managers who discover them during employment have grounds for termination.
Do not submit the same AI-polished résumé to every application. The 62% rejection rate for non-personalised AI résumés (Resume Now) measures exactly this behaviour. The model can personalise — that is a prompt design problem, not a tool limitation. One résumé per role, with the JD vocabulary mirrored and the experience context adjusted.
Do not use AI-generated résumé templates with multi-column layouts or image-based design elements. The visual quality is irrelevant to a parser. The ATS sees what the document's text layer contains, and decorative layouts consistently corrupt that layer.
Do not rely on AI to catch factual inaccuracies in your own claims. Models do not have access to your employment history, your actual metrics, or your former colleagues' memory. They will not flag a claim that overstates scope. That quality check is yours, not the model's.
What the +7.8% Actually Means in Practice
Framing matters. The NBER +7.8% is an average treatment effect across a large, diverse population of freelance platform jobseekers. It is not a promise that your next application will convert at 7.8% higher odds. It is evidence that AI-assisted résumé editing is a real input — not a gimmick, not a red flag — and that its average effect is positive enough to be worth the effort.
The candidates who consistently outperform on résumé metrics are not the ones who found the cleverest AI prompt. They are the ones who treated the résumé as a 30-to-45-minute editing pass per role rather than a one-time document. They read the JD twice. They wrote bullets in their own voice first. They used AI for cleanup, not authorship. They added back the specificity the model removed. They sent it when it read like them, not when it read like every other application in the recruiter's queue.
That workflow is replicable. It does not require a particular AI tool — it works with ChatGPT, Claude, Gemini, or a human editor. What it requires is the discipline to keep the human as the author and the AI as the editor, which is exactly the arrangement the evidence supports.
Your next step is the one the résumé is advertising: getting a career-fit read that is yours, not AI-averaged. The Career Match assessment and the RIASEC Holland Code test give you the vocabulary — your work preferences, your interest profile, your skill clusters — that turns generic AI editing into personalised content. That is the foundation the AI prompt needs to produce something worth sending.