This paper introduces an application of conversational Large Language Models (LLMs), such as OpenAI's ChatGPT and Google's Bard, to the early prediction of student performance in STEM education, circumventing the need for extensive data collection or specialized model training. Leveraging the intrinsic capabilities of these pre-trained LLMs, we develop a cost-efficient, training-free strategy for forecasting end-of-semester outcomes from initial academic indicators. Our research investigates the efficacy of these LLMs in zero-shot scenarios, where predictions must be made from minimal input without any task-specific training examples. By incorporating diverse data elements, including students' background, cognitive, and non-cognitive factors, we aim to enhance the models' zero-shot forecasting accuracy. Our empirical studies on data from first-year college students in an introductory programming course reveal the potential of conversational LLMs to offer early warnings about at-risk students, thereby facilitating timely interventions. The findings suggest that while fine-tuning could further improve performance, our training-free approach presents a valuable tool for educators and institutions facing resource constraints. The inclusion of broader feature dimensions and the strategic design of cognitive assessments emerge as key factors in maximizing the zero-shot efficacy of LLMs for educational forecasting. Our work underscores the significant opportunities for leveraging conversational LLMs in educational settings and sets the stage for future advancements in personalized, data-driven student support.
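As a rough illustration of the training-free approach described above, the sketch below shows how early-semester indicators might be assembled into a single zero-shot prompt for a conversational LLM. The feature names, phrasing, and pass/fail framing are hypothetical, not drawn from the paper's dataset or prompts.

```python
# Hypothetical sketch of zero-shot outcome forecasting via prompt construction.
# All feature names and values below are illustrative assumptions, not the
# paper's actual data schema or prompt wording.

def build_zero_shot_prompt(student: dict) -> str:
    """Format background, cognitive, and non-cognitive indicators
    into a single zero-shot prediction prompt."""
    header = [
        "Based on the early-semester indicators below, predict whether this",
        "first-year student will PASS or FAIL an introductory programming",
        "course. Answer with one word: PASS or FAIL.",
        "",
    ]
    features = [f"- {name}: {value}" for name, value in student.items()]
    return "\n".join(header + features)

# Illustrative student record combining the three feature dimensions
# mentioned in the abstract (background, cognitive, non-cognitive).
example_student = {
    "prior programming experience": "none",          # background
    "first diagnostic quiz score": "6/10",           # cognitive
    "self-reported confidence": "low",               # non-cognitive
}

prompt = build_zero_shot_prompt(example_student)
# The resulting string would be sent to a conversational LLM via its API;
# the model's one-word reply serves as the zero-shot forecast.
```

No model is called here; the point is only that a training-free pipeline reduces to careful prompt design over whichever early indicators an institution already collects.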