This research explores the use of pre-trained large language models (LLMs) to predict the weekly lecture-based engagement of college STEM students from longitudinal experiential data. We leverage non-cognitive attributes, such as emotional responses, together with socio-economic background information to forecast engagement patterns. To address data limitations, we employ a contextual data enrichment method. Experiments with BERT (encoder-only) and Llama (decoder-only) models demonstrate that BERT achieves higher accuracy, particularly with non-cognitive data, while both models improve when background information is integrated. These findings highlight the potential of LLMs to enable data-driven interventions in STEM education by predicting student engagement.
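The abstract does not specify the modeling setup, so the following is only a minimal sketch of one plausible encoder-only configuration it describes: fine-tuning-style classification of a weekly experiential response with BERT. It assumes the Hugging Face `transformers` library, a three-level engagement label, and contextual enrichment via simple text concatenation; the label set, input format, and enrichment scheme are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the paper's code): scoring a student's weekly
# reflection for engagement level with an encoder-only model.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical label set: 0 = low, 1 = medium, 2 = high engagement.
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=3)
model.eval()

# Hypothetical enriched input: socio-economic background context prepended
# to the weekly experiential response as plain text. The paper's
# "contextual data enrichment" method may differ.
text = (
    "Background: first-generation student, works part-time. "
    "Weekly reflection: the lecture on recursion felt rushed, "
    "but the worked examples helped me follow along."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Note: the classification head is randomly initialized here; in practice
# it would be fine-tuned on labeled weekly engagement data first.
predicted_level = logits.argmax(dim=-1).item()
print(predicted_level)
```

A decoder-only model such as Llama could be used for the same task by prompting it with the enriched text and parsing a generated label, which is one way the abstract's encoder-versus-decoder comparison could be operationalized.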