2026 ASEE Annual Conference & Exposition

Teaching the Transparent Machine: Leveraging XAI to Foster Ethical Awareness in AI-Driven Programming Education

Presented at Computers in Education (CoED): AI in Education (1 of 9) -- M308A

The widespread use of AI code assistants is transforming programming practice, creating an urgent imperative for engineering education to teach students to evaluate the strengths, limitations, and risks of AI-generated code (Borenstein & Howard, 2021). Current curricula, rooted in human-centered coding paradigms, often overlook these emerging challenges and the opaque nature of AI-generated code, highlighting a critical need for specialized ethical instruction (Elnaffar et al., 2025; Akbar et al., 2025). AI ethics education has not yet been fully integrated into computing curricula or into the traditional topics of AI in education (AIED), and its ethical implications are often neglected in high-level policy discussions. This study addresses this pedagogical gap by exploring how explainable AI (XAI) can inform responsibility-focused teaching models that emphasize ethical awareness, transparency, and risk mitigation in AI-assisted programming.

The study employs a hybrid methodology. It begins with a quantitative analysis that uses machine learning models, including Random Forest, Support Vector Machines (SVM), and CodeBERT, to perform authorship classification between human-written code mined from open-source repositories and artifacts generated by Large Language Models (LLMs) such as ChatGPT. To demystify these "black box" decisions and identify the structural and logical patterns that distinguish AI-generated code, XAI methods, specifically SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are then applied to interpret the classifiers' predictions. These explanations are translated into student-facing instructional artifacts and integrated into classroom activities in which students interpret model rationales, critique AI-generated solutions for quality and maintainability, and propose revisions guided by structured reflection prompts that foreground responsible co-creation.
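The classify-then-explain pipeline described above can be sketched in miniature. Everything below is illustrative rather than the study's actual method: the study uses Random Forest, SVM, and CodeBERT with SHAP and LIME, whereas this sketch substitutes a nearest-centroid scorer over two toy lexical features (comment density and average line length) and computes exact Shapley values by brute-force coalition enumeration, which is tractable only because the feature set is tiny. The code snippets, feature choices, and labels are all hypothetical.

```python
from itertools import combinations
from math import factorial
from statistics import mean

# --- Toy lexical features (illustrative only; not the study's feature set) ---
def extract_features(code: str) -> list[float]:
    lines = [ln for ln in code.splitlines() if ln.strip()]
    comment_ratio = sum(ln.lstrip().startswith("#") for ln in lines) / len(lines)
    avg_line_len = mean(len(ln) for ln in lines)
    return [comment_ratio, avg_line_len]

# --- Nearest-centroid "authorship" scorer (stand-in for RF/SVM/CodeBERT) ---
class CentroidScorer:
    def fit(self, X, y):
        self.centroids = {
            label: [mean(x[j] for x, lab in zip(X, y) if lab == label)
                    for j in range(len(X[0]))]
            for label in (0, 1)  # 0 = human-written, 1 = LLM-generated
        }
        return self

    def score(self, x):
        # Positive score -> closer to the LLM centroid than the human one.
        d = {lab: sum((a - b) ** 2 for a, b in zip(x, c))
             for lab, c in self.centroids.items()}
        return d[0] - d[1]

# --- Exact Shapley attribution by enumerating feature coalitions ---
def shapley_values(score_fn, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (score_fn(with_i) - score_fn(without_i))
    return phi

# Hypothetical training snippets: commented human code vs. terse LLM output.
human = ["# load data\nx = 1\n# compute\ny = x + 1\n",
         "# init\nz = 2\n# done\nprint(z)\n"]
llm = ["def add(a, b):\n    return a + b\n",
       "def mul(a, b):\n    return a * b\n"]

X = [extract_features(c) for c in human + llm]
y = [0, 0, 1, 1]
model = CentroidScorer().fit(X, y)

query = extract_features("def sub(a, b):\n    return a - b\n")
baseline = [mean(col) for col in zip(*X)]  # dataset-mean reference point
phi = shapley_values(model.score, query, baseline)
# Efficiency property: attributions sum to score(query) - score(baseline).
```

On this toy data the query snippet scores positive (closer to the LLM centroid), and the two Shapley values sum exactly to the gap between the query's score and the baseline's, the efficiency property that SHAP guarantees. In the classroom setting, per-feature attributions like `phi` are the raw material for the student-facing explanations.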
Evaluation will focus on students' engagement with and sensemaking of XAI explanations through rubric-based analysis of code critiques and revision rationales, supported by thematic analysis of written reflections, interviews, and focus groups. This work underscores the importance of integrating technical explicability with responsibility-focused ethics education. By positioning XAI as a pedagogical tool, engineering education can develop transparent ethical guidelines and practical curricula that help students recognize ethical issues in AI and critically assess the risks of co-creating code with AI.

Authors
  1. Ms. Abiha Tahsin Chowdhury, Missouri State University
  2. Labiba Rahman, Purdue University – West Lafayette (College of Engineering)
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.