Artificial intelligence (AI) increasingly shapes learning, work, health, and civic life, intensifying expectations that engineers and computing professionals develop AI systems responsibly. Persistent inequities in the AI workforce pipeline also motivate pathway programs that expand early access to authentic AI learning experiences. This paper reports short-term outcomes from a single-site implementation of the Experiential Learning Artificial Intelligence (ExLAI) summer program, an intensive eight-week introductory AI experience for early-stage learners (high school and early undergraduate students) that integrates trustworthy AI practices into the modeling workflow. ExLAI combines interactive lectures, hands-on coding demonstrations, and coached team projects in which participants define a problem, implement and evaluate an AI approach, and document trustworthiness considerations (e.g., data quality and stewardship, fairness and harmful bias, privacy, transparency/explainability, and robustness) using tools such as model cards and datasheets.
Guided by self-efficacy theory and Social Cognitive Career Theory, the study uses a single-cohort pre–post design without a comparison group; findings are presented descriptively. Data sources included locally developed pre/post surveys with an embedded 11-item AI knowledge assessment; post-survey items on instructional implementation, program components, and pathway awareness and intentions; and program records. End-of-program focus group input provided contextual insight. Thirty-six students enrolled and 35 completed the program (97% retention). In the matched pre–post sample, AI knowledge increased from M = 6.43 to M = 8.87 (out of 11; n = 32). Participants also reported higher confidence across technical AI tasks and trustworthiness-relevant tasks (n = 33), including considering ethical dilemmas and reasoning about societal impacts. Post-survey ratings indicated high overall satisfaction and strong instructor communication, with more mixed ratings for pacing and course level. Participants also reported substantial awareness of AI-related educational and career pathways and strong interest in continued AI learning, while near-term opportunity-seeking behaviors were less common at the post-survey time point.
Overall, the paper provides descriptive evidence from an intensive introductory program that integrates trustworthy AI practices into early technical instruction. It discusses design implications for supporting heterogeneous learners (e.g., pacing and collaboration scaffolds) and outlines future work incorporating performance-based assessment of trustworthy-AI competencies and longer-term follow-up.
The full paper will be available to logged-in and registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.