Recent headlines have featured large language models (LLMs), such as ChatGPT, for their potential impacts throughout society, often focusing on educational impacts and policies. We posit that LLMs have the potential to improve instructional approaches in engineering education, and we argue that the engineering education community should aim to leverage LLMs to help resolve challenges in the field. This study takes up one aspect of instructional design: valid assessment of students' learning outcomes in engineering ethics. We present a method for engineering educators to apply natural language processing (NLP) to open-ended ethics assessments, here, written responses to an ethics case scenario. Grading such open-ended responses is challenging: it requires a non-trivial time commitment and careful attention to consistency. To mitigate these challenges, we developed an NLP approach based on open-source, transformer-based LLMs. We applied and evaluated this approach for coding students' responses to an open-ended ethics case scenario in a first-year engineering course. Our NLP approach labeled 380 of 472 sentences accurately, while only 8% (37 of 472 sentences) were labeled inaccurately. Overall, the approach is a step toward analyzing written responses to scenario-based assessments in a scalable manner. It is not without limitations, however: it currently requires a large upfront time investment to set up. Our future work aims to lower that barrier to entry and thereby make the approach accessible to a larger group of potential users.
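To make the general workflow concrete, the sketch below shows one way an instructor could label sentences from a student's written response with an open-source transformer model. This is a minimal illustration, not the authors' pipeline: the abstract does not specify the model or coding scheme, so the zero-shot model choice and the rubric codes in CANDIDATE_CODES are assumptions introduced here for demonstration.

```python
# Minimal sketch (not the authors' pipeline): labeling sentences from a
# student's written ethics response with an open-source transformer model.
# The model name and the candidate codes below are illustrative assumptions.
from transformers import pipeline

# Hypothetical rubric codes an instructor might look for in a response.
CANDIDATE_CODES = [
    "identifies the ethical dilemma",
    "considers stakeholders",
    "proposes a course of action",
    "cites a professional code of ethics",
]

# Any open-source natural-language-inference model can back this pipeline.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

def label_sentences(sentences):
    """Return the best-matching rubric code and its score for each sentence."""
    results = []
    for sentence in sentences:
        out = classifier(sentence, candidate_labels=CANDIDATE_CODES)
        # The pipeline returns labels sorted by score, highest first.
        results.append((sentence, out["labels"][0], out["scores"][0]))
    return results

if __name__ == "__main__":
    demo = [
        "The engineer should tell her supervisor about the flawed test data.",
        "Releasing the product could harm the public and the company's clients.",
    ]
    for sentence, code, score in label_sentences(demo):
        print(f"{score:.2f}  {code}  <-  {sentence}")
```

In practice, sentence-level labels like these would still be spot-checked against human coding, which is how accuracy figures such as those reported above can be obtained.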