2024 ASEE Annual Conference & Exposition

Evaluation of LLMs and Other Machine Learning Methods in the Analysis of Qualitative Survey Responses for Accessible Engineering Education Research

Presented at Educational Research and Methods Division (ERM) Technical Session 7

This research paper provides insights and guidance for selecting appropriate analytical tools in engineering education research. Educators and researchers currently face difficulty in efficiently extracting insights from free-response survey data. We evaluate the effectiveness and accuracy of Large Language Models (LLMs) alongside existing methods that employ topic modeling, document clustering coupled with Support Vector Machine (SVM) and Random Forest (RF) classifiers, and the unsupervised Latent Dirichlet Allocation (LDA) method. Free responses to open-ended questions from student surveys in multiple courses at the University of Illinois Urbana-Champaign were previously collected by engineering education accessibility researchers. The data (N=129, with seven free-response questions per student) had previously been analyzed to assess the effectiveness, satisfaction, and quality of adding accessible digital notes to multiple engineering courses, as well as students’ perceived belongingness and self-efficacy. Manual codes for the seven open-ended questions were generated for the qualitative tasks of sentiment analysis, topic modeling, and summarization, and were used in this study as a gold standard for evaluating automated text-analytic approaches. Raw text from the open-ended questions was converted into numerical vectors using text vectorization and word embeddings, and unsupervised document clustering and topic modeling were performed with LDA and BERT-based methods. In addition to conventional machine learning models, multiple pre-trained, open-source, locally run LLMs (BART and LLaMA) were evaluated for summarization. OpenAI’s remote, closed-model ChatGPT services (ChatGPT-3.5 and ChatGPT-4) were excluded due to subject data privacy concerns. By comparing accuracy, recall, and the depth of thematic insights derived, we evaluated how effectively each method categorized and summarized students’ responses across the educational research interests of effectiveness, satisfaction, and quality of educational materials. The paper presents these results and discusses the implications of our findings and conclusions.
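For readers unfamiliar with the techniques referenced above, the following minimal Python sketch illustrates one way such a pipeline could be assembled with scikit-learn and Hugging Face transformers: bag-of-words vectorization of free responses, unsupervised LDA topic modeling, abstractive summarization with a pre-trained BART checkpoint, and scoring of automated labels against gold-standard manual codes. This is an illustration only, not the authors' code; the toy responses, toy labels, and the facebook/bart-large-cnn checkpoint are assumptions for demonstration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics import accuracy_score, recall_score
from transformers import pipeline

# Toy free-response data (hypothetical, for illustration only)
responses = [
    "The digital notes helped me review lecture material at my own pace.",
    "Captioned videos made the course content more accessible to me.",
    "I felt more confident about the exams after using the notes.",
]

# Bag-of-words representation (LDA expects raw term counts, not TF-IDF)
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# Unsupervised topic modeling with LDA
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top terms per topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")

# Abstractive summarization with a pre-trained open-source BART checkpoint
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(" ".join(responses), max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

# Comparing automated labels against gold-standard manual codes (toy example)
gold = ["positive", "positive", "neutral"]
predicted = ["positive", "neutral", "neutral"]
print("Accuracy:", accuracy_score(gold, predicted))
print("Recall (macro):", recall_score(gold, predicted, average="macro", zero_division=0))
```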

Authors
  1. Xiuhao Ding, University of Illinois at Urbana-Champaign
  2. Meghana Gopannagari, University of Illinois at Urbana-Champaign
  3. Kang Sun, University of Illinois at Urbana-Champaign
  4. Alan Tao, University of Illinois at Urbana-Champaign
  5. Sujit Varadhan, University of Illinois at Urbana-Champaign
  6. Bobbi Lee Battleson Hardy, University of Illinois at Urbana-Champaign
  7. David Dalpiaz, University of Illinois at Urbana-Champaign
  8. Dr. Chrysafis Vogiatzis (ORCID: http://orcid.org/0000-0003-0787-9380), University of Illinois at Urbana-Champaign
  9. Prof. Lawrence Angrave (ORCID: http://orcid.org/0000-0001-9762-7181), University of Illinois at Urbana-Champaign
  10. Dr. Hongye Liu, University of Illinois at Urbana-Champaign