This paper presents work in progress (WIP) toward using artificial intelligence (AI), specifically large language models (LLMs), to support rapid quality-feedback mechanisms in engineering education settings. It describes applying LLMs to improve feedback processes by providing information directly to students, graders, or instructors of courses focused on complex engineering problem solving. We detail how fine-tuning an LLM with a small dataset drawn from diverse problem scenarios achieves classification accuracies of approximately 80%, even on new problems not included in the fine-tuning process. Traditionally, open-source LLMs such as BERT have been fine-tuned on large datasets for domain-specific tasks; our results suggest that large datasets may be less critical to good performance than previously thought. Our findings demonstrate the potential of AI-supported personalized feedback delivered through high-level prompts that incentivize students to critically self-assess their problem-solving process and communication. However, this study also highlights the need for further research into how semantic diversity and synthetic data augmentation can optimize training datasets and affect model performance.
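To make the setup concrete, the sketch below shows one common way such a fine-tuning run can be implemented. It is an illustration under stated assumptions, not the authors' actual pipeline: it assumes the Hugging Face transformers and datasets libraries, bert-base-uncased as the base model, binary quality labels, and a tiny hypothetical dataset of student responses; the study's real data, labeling scheme, and hyperparameters are not given here.

```python
# Minimal sketch: fine-tuning BERT as a quality classifier on a small
# labeled dataset, using the Hugging Face Trainer API. All example texts,
# labels, and hyperparameters below are hypothetical.
import numpy as np
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical small dataset: student problem-solving excerpts labeled
# 1 (adequate) or 0 (needs revision).
examples = {
    "text": [
        "We defined the system boundary and listed all known forces.",
        "Answer is 42.",
        "We checked units at each step and compared against the estimate.",
        "Plugged numbers into the formula.",
    ],
    "label": [1, 0, 1, 0],
}
dataset = Dataset.from_dict(examples).train_test_split(test_size=0.5, seed=0)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate/pad each excerpt to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# BERT with a fresh 2-class classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def compute_accuracy(eval_pred):
    # Report classification accuracy on the held-out split.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-feedback-demo",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    compute_metrics=compute_accuracy,
)
trainer.train()
print(trainer.evaluate())
```

In practice, evaluating on excerpts from problems held out of fine-tuning, as the paper does, would mean building the test split from different problem scenarios rather than a random split of the same ones.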