
Buyer’s guide · NGO grant reporting · career outcomes

Guide to NGO grant reporting on career outcomes for Lumina, Walton, Casey, JFF and ECMC.

The recurring outcome metrics across major workforce-and-education funders, attribution windows, equity disaggregation, and where assessment platforms fit in the evidence file.

In Brief

This guide explains the outcome-reporting expectations of the major US workforce-and-education funders — Lumina Foundation’s Stronger Nation framework, Walton Family Foundation’s K-12 and CTE portfolios, Annie E. Casey Foundation’s opportunity-youth measures, JFF’s Pathway-to-Credentials reporting, ECMC Foundation, Strada, Joyce, Bloomberg, and the Gates Foundation’s postsecondary work. It identifies the converged common-set outcome measures (enrollment / engagement, credential or milestone attainment, persistence at 6 / 12 months, employment at Q2 / Q4 post-exit, earnings at Q2 / Q4 post-exit, twelve-month wage trajectory) and explains how attribution windows shape which programs look successful.

It walks through the four-block program theory of change and where assessment-platform evidence legitimately fits — baseline characterization in block one, milestone-progress signal in block three, never as outcome evidence in block four. It covers cell-size suppression (typically n=10, following PTAC and DOL conventions), equity disaggregation across race, ethnicity, gender, age, disability and prior education, and the multi-year-aggregation, dimensional-reduction, and qualitative-supplementation responses to small-cohort suppression.

It surveys validity expectations — published peer-reviewed instruments, reliability evidence, fairness audits, transparency — and closes with a six-checkpoint grant-aligned implementation across the program lifecycle.

Chapters in this guide

A reading map for NGO program staff and grants officers.

Funder common-set metrics
Recurring measures across Lumina, Walton, Casey, JFF, ECMC, Strada and others. Engagement, milestones, persistence, employment, earnings.
Attribution windows
Q2 / Q4 employment, six- and twelve-month persistence, twelve-month wage trajectory. Common pitfalls in window selection.
Theory-of-change evidence
Four-block model and where assessment evidence fits in blocks one and three (baseline and milestone), not block four (outcome).
Equity disaggregation
PTAC n=10 suppression, multi-year aggregation, dimensional reduction, and qualitative supplementation for small cohorts.

Assessment battery commonly used in workforce-grant programs

Pre-program baseline and post-program exit evidence.

Baseline at intake: block-one evidence
Work-readiness traits: block-three milestone signal

Compared to other grant-aligned assessment platforms

For an NGO grantee program serving 3,000 participants per year

CareerScope by JIST: $60-150K/yr (per-seat licensing plus implementation)
Kuder Journey: $45-110K/yr (per-user licensing)
TruScore career assessments: $30-80K/yr (per-test pricing)
JobCannon: $0 (unlimited, forever)

What this guide covers

Funder common-set outcome measures across major workforce funders
Attribution windows and exit-definition discipline
Four-block theory of change and where evidence fits
Cell-size suppression rules from PTAC and DOL conventions
Equity disaggregation strategies for small cohorts
Validity expectations — peer-reviewed instruments, reliability, fairness
Six-checkpoint grant-aligned implementation lifecycle
Per-participant export reconcilable to administrative data

Related on JobCannon

This guide is one of twenty in the JobCannon for Business reading library; NGO program directors reading the funder-outcome framing here also read the refugee employment skill-mapping guide for ORR / Volag attribution detail and the multilingual localization-quality guide for how cross-language equivalence interacts with grant-level disaggregation.

For the operational landing of grant-aligned reporting, see our NGOs and nonprofits vertical, where the same primitives feed common-set workforce funder reports and per-participant case-management exports.

Pricing for NGO grantees

Participant-facing assessments and Career Guide stay free under an NGO partnership. Cohort reporting and grant-aligned exports run on the Business tier from $199/mo flat, or under a partnership scoped to your funder portfolio.

Starter

Try it with a micro-team

$0
  • 5 invites (one-time, not recurring)
  • All 50+ assessments
  • Basic individual reports
  • Share link via email or Slack
  • No credit card required
Request free access

Coach

For independent coaches and therapists

$29/mo
or $290/yr (save 17%)
  • 30 invites per month
  • All 50+ assessments
  • Detailed individual reports
  • Coach notes per client
  • PDF export (client-ready)
  • Session prep recommendations
Get Coach access
Most Popular

Team

For startups, teams and HR

$79/mo
or $790/yr (save 17%)
  • 100 invites per month
  • Everything in Coach
  • Team DNA dashboard
  • Compatibility matrix
  • Conflict-pattern detection
  • Compare 2-3 team members
Get Team access
Recommended

Business

For agencies, L&D and scale-ups

$199/mo
or $1990/yr (save 17%)
  • 500 invites per month
  • Everything in Team
  • White-label PDF reports (your logo)
  • API access (read-only results)
  • Custom assessment builder (beta)
  • Bulk CSV import/export
Get Business access

Enterprise

For 200+ person companies

From $5k/yr
  • Unlimited invites
  • Everything in Business
  • SSO (SAML, Google Workspace)
  • SLA (99.9% uptime)
  • Data residency options (EU/US)
  • Dedicated Customer Success
Talk to us

All plans currently activated manually via the contact form — we review each request within 24 hours and provision access the same day. Self-serve checkout coming once we've heard from the first wave of teams.

Talk to a grant-reporting specialist

Tell us your program model, your funder portfolio, and your reporting cycle. We respond within one business day.

We reply within 24 hours. No spam, no per-seat pitches.

FAQ

What outcome metrics do major workforce-and-education funders typically require in grant reports?

Major workforce-and-education funders — Lumina Foundation, Walton Family Foundation, Annie E. Casey Foundation, Joyce Foundation, JFF (Jobs for the Future), ECMC Foundation, Strada Education Foundation, Bloomberg Philanthropies, and the Bill & Melinda Gates Foundation — publish or share with grantees specific outcome-metric expectations that have converged toward a recognizable common set.

The Lumina Foundation’s Stronger Nation framework focuses on credential attainment by working-age adults, with disaggregation by race, ethnicity, and income; grantees in postsecondary completion programs typically report credential awards (degree, certificate, certificate-of-value), persistence at six and twelve months, and equity-gap closure measured as the difference between the highest- and lowest-attaining demographic groups. The Walton Family Foundation’s K-12 and CTE portfolios emphasize industry credential attainment, postsecondary enrollment within twelve months of high-school exit, and earnings outcomes when wage-record data is accessible. Annie E. Casey Foundation’s opportunity-youth and young-adult workforce portfolios focus on the Opportunity Index components: education attainment, employment, earnings progress, and connection to support systems. JFF’s reporting framework for its grantees and partners emphasizes the Common Performance Measures aligned to WIOA where applicable, and Pathway-to-Credentials metrics for non-WIOA grantees.

The recurring measures across funders are: enrollment and engagement (registered, started, completed), credential or milestone attainment (specific to the program model), persistence (typically six and twelve months post-program), employment (typically Q2 and Q4 post-exit), and earnings (typically Q2 and Q4 post-exit, and increasingly twelve-month wage trajectory). Funders increasingly expect grantees to report equity disaggregation — the same outcome cuts by race, ethnicity, gender, age, prior education, and disability status — with appropriate cell-size suppression for privacy.
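What the common set looks like as data is worth seeing once. Below is a minimal per-participant record sketched in Python; the field names are illustrative assumptions, not a funder-mandated schema, and the demographic fields anticipate the disaggregation and suppression discussion later in this guide.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommonSetOutcomeRecord:
    """One participant's common-set outcomes.
    Illustrative field names, not a funder-mandated schema."""
    participant_id: str               # program-supplied ID, not PII
    enrolled: bool                    # registered and started
    completed: bool                   # per the grant's exit definition
    credential_attained: bool         # program-model-specific milestone
    persisted_6mo: Optional[bool]     # None until the window closes
    persisted_12mo: Optional[bool]
    employed_q2: Optional[bool]       # second quarter post-exit
    employed_q4: Optional[bool]       # fourth quarter post-exit
    earnings_q2: Optional[float]      # from UI wage records, if matched
    earnings_q4: Optional[float]
    # Intake demographics, reported only with cell-size suppression.
    race_ethnicity: Optional[str]
    gender: Optional[str]
    age_band: Optional[str]
    disability_status: Optional[str]
    prior_education: Optional[str]
```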

What are the typical attribution windows and how should they be measured?

Attribution windows are the time periods in which an outcome is counted as caused by the program. They matter because the same person could be counted as a successful outcome at six months, twelve months, twenty-four months, or never, depending on where the window cuts. The recurring windows in workforce grants are:

  • Enrollment to program completion: varies by program model, typically thirty days to twenty-four months.
  • Completion to Q2 employment: the second full quarter after exit, approximately three to six months post-completion.
  • Q2 to Q4 employment: retention measured across the third and fourth quarters after exit, six to twelve months post-completion.
  • Twelve-month wage trajectory: median earnings compared from program entry to twelve months post-exit.

The choice of window affects what kinds of programs look successful. A short bootcamp model typically shines in Q2 employment and lags in Q4 retention; a degree-completion program lags in Q2 (because students are still enrolled) and pulls ahead in Q4 and beyond. A career-coaching program typically reports earlier engagement metrics and longer-term placement metrics, often using six-month windows because they allow for self-paced placement decisions.

Two operational pitfalls are common. First, cohort-window definitions that don’t match data-availability windows — measuring twelve-month outcomes when state UI wage records have a six-month lag means you’re measuring outcomes from eighteen months ago, not twelve. Second, exit-definition ambiguity — “exited the program” can mean completed, dropped, transferred, or paused, and outcomes look very different for completers versus all exiters. Funders typically specify exit definitions in the grant agreement; grantees should not invent their own.

JobCannon does not produce employment or earnings data — those come from state UI wage records, NSC enrollment data, employer-supplied data, or self-reported follow-up. The platform produces the upstream profile and skill-gap evidence that supports the grant theory of change.
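The quarter arithmetic is easy to get wrong when exits do not align with calendar quarters. A minimal sketch in Python, assuming the WIOA-style convention of counting full calendar quarters after the exit quarter; confirm the counting convention in your own grant agreement before adopting it.

```python
from datetime import date

def quarter_of(d: date) -> tuple[int, int]:
    """(year, quarter) for a calendar date."""
    return d.year, (d.month - 1) // 3 + 1

def quarter_after_exit(exit_date: date, n: int) -> tuple[int, int]:
    """The n-th full calendar quarter after the exit quarter
    (so Q2 post-exit is n=2)."""
    year, q = quarter_of(exit_date)
    total = year * 4 + (q - 1) + n
    return total // 4, total % 4 + 1

def wage_data_available(target: tuple[int, int], today: date,
                        lag_quarters: int = 2) -> bool:
    """State UI wage records typically lag about two quarters."""
    ty, tq = quarter_of(today)
    return target[0] * 4 + target[1] <= ty * 4 + tq - lag_quarters

exited = date(2024, 5, 15)                    # exit in 2024 Q2
print(quarter_after_exit(exited, 2))          # (2024, 4): Q2 post-exit
print(quarter_after_exit(exited, 4))          # (2025, 2): Q4 post-exit
print(wage_data_available((2025, 2), date(2025, 9, 1)))  # False: still lagged
```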

How should an NGO present assessment-platform output as evidence in a grant narrative?

The grant narrative is where the program theory of change is articulated. Most workforce-and-education theories of change have four blocks: (1) participants enter the program with a defined set of needs or barriers; (2) the program intervenes with services that address those needs; (3) participants progress through milestones that indicate the intervention is working; (4) participants achieve outcomes that justify the investment.

Assessment-platform output fits the theory of change in two specific places. First, in block one as baseline evidence — the platform produces a per-participant profile showing the starting point on career interest, skills, work-readiness traits, and aptitude. Aggregating across the cohort produces a baseline distribution that characterizes the population the program is serving. This matters for grant narratives because it answers the funder’s question “who are you serving and how does your population differ from the average?” in concrete language rather than demographic generalities. Second, in block three as a milestone-progress signal — a documented career-interest profile combined with a personalized plan and a self-rated skills baseline is a defensible mid-point milestone for many program models, particularly career-coaching, workforce-readiness, and pre-apprenticeship programs.

The platform output is not a final outcome — it does not satisfy block four — but it is durable evidence of blocks one and three. The narrative pattern that works: “Participants entered with [baseline distribution from platform]; the program addressed [specific gaps] through [specific services]; midpoint assessment showed [progress signal]; outcomes followed [outcome data].” Treat the platform as evidence in two specific blocks of the four-block theory; do not overclaim it as outcome evidence in block four.
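One way to keep the block discipline visible in the evidence file is an explicit map from blocks to sources. A minimal sketch in Python; the block labels follow the answer above, and the source names are illustrative assumptions, not a prescribed list.

```python
# Illustrative evidence map for the four-block theory of change.
# Assessment-platform output appears in blocks 1 and 3 only.
EVIDENCE_MAP = {
    1: ("baseline needs and barriers",
        ["assessment-platform intake profiles", "intake demographics"]),
    2: ("program services delivered",
        ["case-management service logs", "attendance records"]),
    3: ("milestone progress",
        ["midpoint reassessment (skills, work-readiness)",
         "personalized plan on file"]),
    4: ("outcomes",  # never the assessment platform
        ["state UI wage records", "NSC enrollment data",
         "consented follow-up survey"]),
}

for block, (label, sources) in EVIDENCE_MAP.items():
    print(f"Block {block} ({label}): {', '.join(sources)}")
```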

What should an NGO know about cell-size suppression and equity disaggregation in funder reports?

Equity disaggregation — reporting outcomes broken out by race, ethnicity, gender, age, disability status, and prior education — has become a standard funder expectation across the workforce-and-education portfolio. The methodological challenge is that disaggregating small program populations across multiple demographic dimensions produces small cell sizes that are statistically unreliable and potentially identifying. Federal and state agencies use cell-size suppression rules to protect privacy: the Department of Education’s Privacy Technical Assistance Center (PTAC) recommends suppression of cells with fewer than ten students, with complementary suppression to prevent inferring suppressed cells from totals. The U.S. Department of Labor’s WIOA reporting uses a similar threshold. Grantees serving populations under, say, five hundred participants per year often find that meaningful disaggregation is impossible after suppression because the cells empty out.

Three operational responses are common. First, multi-year aggregation — reporting outcomes pooled across multiple program years to build cell sizes large enough for disaggregation, with appropriate context about pooling. Second, dimensional reduction — reporting on race-ethnicity and gender independently rather than in combination, and grouping ages into bands rather than precise ranges. Third, qualitative supplementation — narrative case studies that describe equity-relevant patterns visible in the small cells without disclosing identifying detail.

Career-assessment platforms should support all three. JobCannon’s admin export provides per-participant records with district-supplied or program-supplied IDs; aggregation, suppression, and disaggregation logic happen in the program’s reporting layer, not in the platform. The platform exposes the demographic fields the program collected at intake (age band, self-identified race / ethnicity, gender, prior education) and lets the program apply its own suppression rules.
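Suppression itself is mechanical enough to sketch. A minimal example in Python, assuming a pandas Series of cell counts; it shows primary suppression plus the simplest form of complementary suppression, and real implementations get more involved once multiple marginals must reconcile.

```python
import pandas as pd

def suppress_small_cells(counts: pd.Series, threshold: int = 10) -> pd.Series:
    """Blank cells under the threshold (primary suppression).
    If exactly one cell is blanked, also blank the next-smallest
    so the suppressed value cannot be recovered from the total
    (simplest complementary suppression)."""
    small = counts < threshold
    out = counts.astype("object")
    out[small] = "suppressed"
    if small.sum() == 1:
        out[counts[~small].idxmin()] = "suppressed"
    return out

cohort = pd.Series({"Black": 42, "Hispanic": 37, "White": 55, "Asian": 7})
print(suppress_small_cells(cohort))
# Asian (7) is primary-suppressed; Hispanic (37) is complementary-suppressed.
```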

How do funders evaluate the validity of assessment-platform evidence?

Funders evaluating assessment-platform evidence typically look for four signals.

First, validity research — the platform should use psychometric instruments with published validity research in peer-reviewed literature, not proprietary opaque instruments. RIASEC (Holland 1959, 1985, 1997 and the substantial subsequent literature, including Nauta 2010 and Tracey 2018), Big Five (Costa and McCrae 1992; John et al. 2008), Howard Gardner’s multiple intelligences (Gardner 1983, 1999, 2011, with caveats from Visser et al. 2006), and the Maslach Burnout Inventory (Maslach and Jackson 1981, 1986; Maslach et al. 2018) are all defensible because the literature is public. Proprietary instruments that have not been peer-reviewed should be supported by the operator’s own technical manual and validity studies.

Second, reliability evidence — internal consistency (Cronbach’s alpha typically above 0.70 for production scales, above 0.80 preferred), test-retest reliability where applicable, and inter-rater reliability for instruments scored by humans.

Third, fairness research — the instrument should not produce systematically different outcomes across demographic groups in ways unrelated to the construct measured. Differential item functioning (DIF) studies and fairness audits are increasingly an expectation, particularly for AI-driven instruments.

Fourth, transparency — the platform should publish its scoring logic, item content (or a representative sample), and recommendation rules in an accessible form. A platform that produces a result without transparency about how the result was derived is hard to defend in a grant narrative.

JobCannon’s production posture: psychometric instruments use published frameworks; technical manuals and reliability evidence are available on request to qualified researchers; recommendation logic is documented in the result-page transparency layer; the knowledge graph (2,536 careers, 1,533 skills, 64,317 weighted edges) is published and inspectable rather than opaque.
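The internal-consistency figure funders most often see cited is Cronbach’s alpha. A minimal sketch in Python of the standard formula on a respondents-by-items score matrix, with simulated data standing in for real item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
    for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated scale: 6 items driven by one latent trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 6))
print(round(cronbach_alpha(scores), 2))  # ~0.9, above the 0.70 floor
```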

What does a grant-aligned implementation look like across the program lifecycle?

A grant-aligned implementation has six checkpoints across the program lifecycle.

First, intake — the platform is administered at participant intake, typically within the first week of program enrollment, capturing baseline interest, skills, traits, and aptitudes; the export feeds the participant’s file. Second, planning — the participant uses the platform output with a coach or counselor to develop a personalized plan; the plan is the program’s artifact, the platform output is its evidence base. Third, milestone checkpoints — at one or more midpoint assessments the participant retakes a subset of relevant instruments (skills audit, work-readiness traits) to capture growth; the cohort-level distribution shift is the milestone evidence.

Fourth, exit — a final assessment captures the post-program profile, supporting the program’s value-added narrative. Fifth, follow-up — participants who consent are re-engaged at six or twelve months for a brief follow-up survey including self-reported employment status, role, earnings band, and satisfaction; this is the operational source of self-reported outcome data when administrative data (UI wage records, NSC enrollment) is unavailable or lagged. Sixth, reporting — the program assembles aggregate cohort metrics from the platform exports, applies cell-size suppression, disaggregates appropriately, and integrates with administrative outcome data for the funder report.

JobCannon’s production support for this lifecycle: intake and exit assessments with the same instrument battery for direct pre-post comparison; per-participant export with district / program-supplied ID; admin dashboard with cohort-level distributions; consented follow-up email infrastructure for self-reported outcomes; aggregate report download in PDF or CSV. The platform handles the assessment side of the lifecycle; the program handles the administrative-data integration side.
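Programs often operationalize the six checkpoints as date offsets from enrollment. A minimal scheduling sketch in Python; the checkpoint names follow the answer above, and the day offsets are assumptions a program would set from its own model and grant agreement, not prescribed values.

```python
from datetime import date, timedelta

# (checkpoint, days after enrollment) -- offsets are illustrative
CHECKPOINTS = [
    ("intake assessment", 7),            # within the first week
    ("personalized plan", 21),
    ("midpoint reassessment", 90),
    ("exit assessment", 180),
    ("six-month follow-up", 180 + 183),  # consented, self-reported
    ("funder report assembly", 180 + 365),
]

def schedule(enrolled: date) -> list[tuple[str, date]]:
    """Due dates for each checkpoint, given an enrollment date."""
    return [(name, enrolled + timedelta(days=days))
            for name, days in CHECKPOINTS]

for name, due in schedule(date(2025, 1, 6)):
    print(f"{due}  {name}")
```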

Author

Peter Kolomiets

Founder & Lead Researcher, JobCannon

Peter is the founder of JobCannon and leads the assessment validation, knowledge graph, and B2B partnerships. He has 10+ years working with NGO and educational career programs globally.