Engineering programs are increasingly expected to articulate how accreditation-driven student outcomes relate to workforce capabilities valued by industry. In college–industry partnership contexts, these conversations often unfold informally, leaving differences in vocabulary and underlying competence logic implicit and difficult to revisit. Our work-in-progress paper introduces Psychometric AI Mapping, a human-in-the-loop methodological framework for interpretive comparison of text-based outcome frameworks.
Drawing on the unified view of construct validity, the method treats accreditation outcomes and workforce skill statements as observable indicators of underlying competencies, and operationalizes their comparison through a transparent workflow of thematic aggregation, interpretive alignment, and matrix construction. Large language models are positioned as structured analytic instruments within this workflow rather than as autonomous interpretive authorities. We demonstrate the method on the ABET EAC Student Outcomes (Criterion 3, outcomes 1–7) and workforce skills from the World Economic Forum’s Future of Jobs Report 2025. A researcher-generated alignment matrix surfaces patterns of conceptual concentration, overlap, and absence, revealing, for example, strong alignment around cognitive problem-solving alongside gaps in adaptability and operational capabilities. The contribution is methodological: a transparent, reproducible workflow that supports more systematic dialogue among faculty, assessment leaders, and industry partners.
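The matrix-construction step described above can be sketched in code. The following is a minimal illustration, not the paper's actual instrument: the outcome labels, skill themes, and pairwise alignment judgments are hypothetical placeholders, and the helper name `build_matrix` is an assumption for exposition.

```python
# Illustrative sketch of the matrix-construction step: a researcher-coded
# alignment matrix between accreditation outcomes and workforce skills.
# All labels and judgments below are hypothetical placeholders, not data
# from the paper.

# Shortened stand-ins for the seven ABET EAC Criterion 3 outcomes.
abet_outcomes = [f"ABET-{i}" for i in range(1, 8)]

# Hypothetical skill themes standing in for WEF Future of Jobs 2025 skills.
wef_skills = ["Analytical thinking", "Resilience & adaptability", "Leadership"]

# Researcher judgments from the interpretive-alignment step:
# 1 = conceptual alignment asserted, absent pairs default to 0.
judgments = {
    ("ABET-1", "Analytical thinking"): 1,
    ("ABET-2", "Analytical thinking"): 1,
    ("ABET-7", "Leadership"): 1,
}

def build_matrix(rows, cols, judgments):
    """Construct the outcomes-by-skills alignment matrix."""
    return [[judgments.get((r, c), 0) for c in cols] for r in rows]

matrix = build_matrix(abet_outcomes, wef_skills, judgments)

# Column sums surface concentration (high sums) and absence (zero sums):
# a skill column no outcome maps to is a candidate gap.
gaps = [skill for j, skill in enumerate(wef_skills)
        if sum(row[j] for row in matrix) == 0]
```

In this toy configuration, the zero-sum column flags "Resilience & adaptability" as a gap, mirroring the kind of adaptability gap the abstract reports; the human-in-the-loop framing means the `judgments` dictionary is authored and revisited by researchers, with any LLM output treated only as input to that judgment.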
http://orcid.org/0000-0002-6205-8510
Embry–Riddle Aeronautical University, Daytona Beach
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.