Unlike many teaching pedagogies, such as evidence-based learning, personalized adaptive learning (PAL) monitors the progress of each individual student and tailors the learning path to that student's specific knowledge and needs. Rather than providing a one-size-fits-all experience, PAL customizes learning for each student. One essential technique for implementing PAL effectively is knowledge tracing, which models students' knowledge over time and enables predictions about their performance in future interactions. Based on these predictions, resources and learning paths can be recommended to students according to their individual requirements, and content anticipated to be too easy or too difficult can be skipped or delayed. In recent years, deep learning has been successfully applied to enhance knowledge tracing, an approach known as Deep Knowledge Tracing (DKT). This paper introduces a novel approach based on Large Language Models (LLMs) to further improve DKT. LLMs are deep learning models trained on extensive datasets using self-supervised and semi-supervised learning techniques. Prominent examples include BERT, GPT, GPT-4, LLaMA, and Claude, all of which have demonstrated remarkable performance across a wide spectrum of natural language processing (NLP) tasks. This paper aims to alleviate the data sparsity issues associated with one-hot encoding of student learning records by instead representing these records with LLMs. The representation process involves designing various prompts that encourage LLMs to establish correlations between different elements within the learning records. To validate the proposed method, extensive experiments will be conducted on multiple datasets, including ASSISTments (2015 and 2017), KDD Cup 2010, and NIPS 2020.
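To make the core idea concrete, the sketch below illustrates how a single learning-record entry might be verbalized with a prompt and embedded by an LLM, yielding a dense vector in place of a sparse one-hot code. This is an illustrative assumption, not the paper's implementation: the prompt wording, the model choice (bert-base-uncased), and the mean-pooling step are all placeholders for the paper's actual prompt designs.

```python
# A minimal sketch (not the authors' implementation) of replacing a sparse
# one-hot interaction encoding with a dense LLM-based representation.
# Prompt template, model, and pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def encode_interaction(skill: str, question: str, correct: bool) -> torch.Tensor:
    """Verbalize one learning-record entry and embed it with the LLM."""
    # Hypothetical prompt wording, meant to relate skill, item, and outcome.
    prompt = (
        f"The student attempted a question on '{skill}': {question} "
        f"The answer was {'correct' if correct else 'incorrect'}."
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    # Mean-pool token states into one dense vector, in contrast to a one-hot
    # vector of size 2 * num_questions that is almost entirely zeros.
    return hidden.mean(dim=1).squeeze(0)

vec = encode_interaction("fraction addition", "What is 1/2 + 1/3?", correct=False)
print(vec.shape)  # torch.Size([768])
```

Under this framing, semantically related interactions (e.g., two questions on the same skill) map to nearby vectors, which is precisely the kind of correlation between record elements that one-hot encoding cannot express.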