Built for foundation outcomes reporting
Free for programme participants. The programme dashboard tracks completion, archetype distribution, and Career Match alignment by site and demographic. Honest data: completion rate stays visible alongside completed-cohort outcomes.
JobCannon for Nonprofit Grant Impact Reporting serves workforce-development nonprofits accountable to major funders (Lumina, Walton, Strada, Gates, JPMorganChase New Skills at Work, Annie E. Casey, MacArthur, Joyce, ECMC). It provides assessment-baseline-to-outcome evidence at the output and short-term-outcome layers of typical funder logic models. Programme participants take validated assessments (RIASEC, Big Five, Skills Audit, Career Match) at intake and at 3-, 6-, and 12-month follow-ups; the programme dashboard tracks change in archetype confidence, Career Match alignment, and skill self-rating. Multi-site grantee networks set up each site as a separate cohort under one master account; aggregate outcomes roll up to the funder report, while site-level data stays private to each site's programme manager. Cross-site comparison surfaces effective sites and sites needing technical assistance. Demographic disaggregation supports culturally-responsive evaluation expectations. Completion rates are reported honestly alongside completed-cohort outcomes: funders trust honest data and lose trust over inflated numbers. Long-term outcome data (placement, wage gain, retention) comes from your existing participant-follow-up systems; we contribute the assessment-evidence column. Free for participants; the programme dashboard runs on the Business tier or is scoped under a partnership for national grantee networks running multi-site programmes.
Logic-model-aligned, multi-site-friendly, honest-data-by-default.
Intake plus follow-up checkpoints at 3, 6, and 12 months.
For a national nonprofit serving 8,000 participants per year
Participant access stays free. The programme dashboard runs on the Business tier ($199/mo flat) or is scoped under a partnership for national grantee networks running multi-site programmes.
Try it with a micro-team
For independent coaches and therapists
For startups, teams and HR
For agencies, L&D and scale-ups
For 200+ person companies
All plans are currently activated manually via the contact form: we review each request within 24 hours and provision access the same day. Self-serve checkout is coming once we've heard from the first wave of teams.
Tell us your funder, your programme scope, and your reporting cycle. We respond with a partnership scope within three business days.
Most major workforce-development funders accept psychometric and assessment-completion data as one signal in a multi-evidence outcomes narrative. Lumina Foundation expects credential attainment plus a self-reported career-clarity signal; Walton Family Foundation expects pathway clarity for K-12 grantees; Strada Education Foundation funds career-pathway research and accepts assessment-baseline data; the Bill & Melinda Gates Foundation expects outcomes data on Postsecondary Success programmes; JPMorganChase New Skills at Work expects measurable workforce-readiness signals; the Annie E. Casey Foundation expects youth-outcome data including career-readiness markers. Each funder has its own outcomes framework. We do not satisfy any framework end-to-end, but we provide assessment-baseline-to-outcome data that fills the career-readiness piece of most of them.
Two patterns. (1) Programme-internal outcomes: the participant returns to the programme at 3, 6, and 12 months for a follow-up Career Guide update, and the programme tracks change in archetype confidence, Career Match alignment, and skill self-rating. (2) External-outcome match: the programme uses a participant ID shared across systems (case-management software, employment-placement records, post-programme surveys) to link the assessment baseline to placement, retention, wage, or further-education data. The platform exports baseline data; programme staff handle the external match in their preferred outcomes-tracking tool. We do not replicate Salesforce-based case management or external longitudinal data services.
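As a minimal sketch of pattern (2), the external match is a join on the shared participant ID. The column names here (`participant_id`, `career_match_score`, `placed`) are illustrative, not the platform's actual export schema:

```python
import csv

def link_baseline_to_placements(baseline_path, placements_path):
    """Join an assessment-baseline export to external placement records
    on a shared participant ID. Column names are illustrative only."""
    with open(baseline_path, newline="") as f:
        # Index the baseline export by participant ID for O(1) lookup.
        baseline = {row["participant_id"]: row for row in csv.DictReader(f)}
    linked = []
    with open(placements_path, newline="") as f:
        for row in csv.DictReader(f):
            base = baseline.get(row["participant_id"])
            if base:  # keep only participants present in both systems
                linked.append({**base, **row})
    return linked
```

Most programmes will do this inside their existing case-management or BI tool; the point is simply that the baseline export carries a stable participant ID to join on.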
Most workforce-development logic models distinguish inputs (participants served, dollars spent), outputs (assessments completed, services delivered), short-term outcomes (career clarity, pathway identification), intermediate outcomes (training enrolment, credential attainment), and long-term outcomes (placement, wage gain, retention). JobCannon contributes evidence at the output and short-term-outcome layers. We do not contribute long-term-outcome data directly; that comes from your participant follow-up. The platform's output fits into a logic-model evidence column without you rewriting your reporting structure.
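In code terms, the split of evidence columns might be mapped like this. The layer names follow the typical logic model above; the field names are illustrative, not a platform schema:

```python
# Which logic-model layers the platform fills (illustrative field names),
# versus layers that come from the programme's own follow-up systems.
EVIDENCE_MAP = {
    "inputs": [],                      # programme records (enrolment, budget)
    "outputs": ["assessments_completed"],
    "short_term_outcomes": ["archetype_confidence_change",
                            "career_match_alignment",
                            "skill_self_rating_change"],
    "intermediate_outcomes": [],       # your training/credential records
    "long_term_outcomes": [],          # your placement follow-up
}
```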
A national grantee running 12 local sites under one foundation-funded programme sets up each site as a separate cohort under one master account. Aggregate outcomes roll up to the funder report; site-level dashboards stay private to each site programme manager. Cross-site comparison surfaces which sites are running the programme effectively and which need technical assistance from the national office. This pattern fits Hire Heroes USA, Year Up, Per Scholas, NPower, Generation, and similar national workforce nonprofits.
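The cohort rollup in this pattern is a straightforward aggregation: per-site summaries stay separate, and only the totals feed the funder report. A sketch, with illustrative record fields (`site`, `completed`):

```python
from collections import defaultdict

def roll_up(participant_records):
    """Aggregate per-participant records into site-level summaries
    (private to each site) and a programme-level total (for the funder
    report). Record fields are illustrative only."""
    per_site = defaultdict(lambda: {"enrolled": 0, "completed": 0})
    for rec in participant_records:
        site = per_site[rec["site"]]
        site["enrolled"] += 1
        site["completed"] += rec["completed"]  # 1 if completed, else 0
    total = {
        "enrolled": sum(s["enrolled"] for s in per_site.values()),
        "completed": sum(s["completed"] for s in per_site.values()),
    }
    return dict(per_site), total
```

Keeping the two return values separate mirrors the access split: site programme managers see their own entry in `per_site`; the national office reports `total`.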
Outcomes reporting needs honest completion data, not inflated numbers. The platform tracks started-not-completed assessments separately from completed assessments, and outcomes reports include both completion rate (what percentage of enrolled participants engaged) and completed-cohort outcomes (what archetype distribution looks like for those who finished). We do not project missing participants forward or pad numbers. Funders trust honest data and lose trust on inflated numbers, so we keep both visible.
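A sketch of that reporting shape, with illustrative field names: completion rate and completed-cohort outcomes sit side by side, and nothing is projected forward for participants who did not finish:

```python
from collections import Counter

def outcomes_report(participants):
    """Build an honest outcomes summary. Each participant dict carries a
    'status' ('completed' or 'started') and, for completers, an
    'archetype'. Field names are illustrative only."""
    enrolled = len(participants)
    completers = [p for p in participants if p["status"] == "completed"]
    return {
        "enrolled": enrolled,
        "completed": len(completers),
        # Rate over everyone enrolled, not just finishers.
        "completion_rate": len(completers) / enrolled if enrolled else 0.0,
        # Outcomes computed over the completed cohort only.
        "archetype_distribution": Counter(p["archetype"] for p in completers),
    }
```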
Most workforce-development funders are increasingly explicit about culturally-responsive evaluation: programme outcomes should not be measured only against majority-population norms. The platform supports demographic disaggregation (set by the programme, not collected from participants) and reports outcomes by subgroup so the programme can tell a culturally-grounded story. The validated psychometric instruments (Big Five, RIASEC) have been studied across populations; we do not claim they are bias-free, but they have wider research validation than most career-assessment alternatives.
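Mechanically, disaggregation is a group-by over a programme-set demographic field. A sketch, with illustrative field names (`group`, `career_match_score`):

```python
from collections import defaultdict

def disaggregate(records, group_field):
    """Summarise an outcome metric per demographic subgroup. The group
    field is set by the programme; field names are illustrative only."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_field]].append(rec["career_match_score"])
    # Report subgroup size alongside the mean so small groups are
    # visible rather than hidden inside an overall average.
    return {
        g: {"n": len(vals), "mean_score": sum(vals) / len(vals)}
        for g, vals in groups.items()
    }
```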