Since ChatGPT’s public launch in November 2022, generative artificial intelligence (GAI) has prompted considerable discussion and change in higher education. Active educational research on GAI has been conducted in areas including student learning, ethics, and assessment. Although many authors have raised concerns about the impact of GAI, particularly large language models (LLMs), on writing education, systematic studies of the ethical use of GAI remain limited. Grounded in the ethical adoption of GAI for grading and feedback in engineering lab writing, we focus on GAI’s capability to assist with engineering lab report assessment. Lab report grading is time-consuming for lab instructors and teaching assistants, and constructing impactful feedback can be challenging for many reasons. In this pilot study, we used Copilot and ChatGPT 4o to evaluate and provide feedback on student lab reports from past course offerings in which the instructors had not used generative AI technologies. The study scope was limited to two engineering laboratory courses at two institutions: strength of materials for mechanical and civil engineering students at a 4-year public polytechnic university, and engineering materials for mechanical engineering students at a 4-year R1 university. The GAI tools were asked to generate scores, overall reviews, suggestions, and improvement tips. For each student lab report, we compared the scores and feedback given by instructors or graduate teaching assistants with those produced by the GAI tools. The results of this comparative analysis will be discussed to address how well the GAI tools’ evaluations align with instructor/TA scores and feedback in terms of accuracy and clarity.
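The comparison of instructor/TA scores with GAI-generated scores described above could, for instance, be summarized with simple agreement statistics. The Python sketch below is purely illustrative: the score values are made up, and the statistics shown (mean absolute difference and Pearson correlation) are common choices for this kind of comparison, not necessarily the measures used in the study.

```python
# Illustrative sketch only: hypothetical rubric scores (0-100) for the same set of
# lab reports, one list graded by an instructor/TA and one produced by a GAI tool.
import numpy as np

instructor_scores = np.array([85, 72, 90, 64, 78, 88])  # hypothetical instructor/TA grades
gai_scores = np.array([82, 75, 93, 60, 80, 85])          # hypothetical GAI-generated grades

# Mean absolute difference: average gap between the two sets of scores.
mae = np.mean(np.abs(instructor_scores - gai_scores))

# Pearson correlation: whether the GAI ranks reports similarly to the human graders.
r = np.corrcoef(instructor_scores, gai_scores)[0, 1]

print(f"Mean absolute difference: {mae:.1f} points")
print(f"Pearson correlation:      {r:.2f}")
```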