Predicting student performance early enough to intervene and provide help has been a longstanding topic of interest in the educational research community. Many studies have investigated making such predictions, and two main issues have been identified: their portability and their robustness. Learning Management Systems (LMS) and other tools used by courses today capture extensive information about student performance, information that is helpful not only for improving earlier attempts at these predictions but also for taking them further. This work-in-progress study looks at ways to accurately predict student performance from LMS data collected from 274 lower-division Computer Science courses taken by 2,656 students and taught by 37 instructors in various formats (face-to-face, online, virtual, hybrid) over a period of three years at a public four-year university. It uses a time-based approach that treats the data as a set of sequences, or time series, each representing the progress of a student both within a single course and across multiple courses. It tracks the progress of students over the three-year period as they move through the lower-division CS program and earn their associate degrees. Because courses differ from one another, we explore ways of “normalizing” the data so that the whole learning journey of students across courses can be considered. The study explores questions such as: How does the progress of struggling students differ from one course to another across various formats? How early can student performance within these courses be accurately predicted? Can the cumulative progress of students at the end of the program be predicted? Are student journeys through courses unique? Are there patterns that transcend students and courses? How robust and portable are these predictions?
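The abstract does not specify an implementation, but the time-series framing it describes could be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: the column names (student_id, course_id, week, lms_activity), the weekly granularity, the within-course z-score normalization, and the early-cutoff parameter are all assumptions made here for clarity.

```python
# Minimal sketch (assumed structure, not the study's pipeline): represent LMS
# data as per-student weekly time series and "normalize" within each course so
# that journeys across different courses become comparable.
import pandas as pd


def build_student_sequences(lms: pd.DataFrame, upto_week: int) -> pd.DataFrame:
    """Return one row per (student, course) with a normalized weekly activity sequence.

    Expected (hypothetical) columns: student_id, course_id, week, lms_activity.
    Only weeks <= upto_week are used, modeling an "early prediction" cutoff.
    """
    early = lms[lms["week"] <= upto_week].copy()

    # Normalize activity within each course and week (z-score), so a "high" or
    # "low" week means high or low relative to classmates in the same course.
    grouped = early.groupby(["course_id", "week"])["lms_activity"]
    early["activity_z"] = (early["lms_activity"] - grouped.transform("mean")) / (
        grouped.transform("std").replace(0, 1.0)
    )

    # Collapse into one ordered sequence (time series) per student per course.
    sequences = (
        early.sort_values("week")
        .groupby(["student_id", "course_id"])["activity_z"]
        .apply(list)
        .reset_index(name="weekly_activity_z")
    )
    return sequences


if __name__ == "__main__":
    # Tiny made-up example purely to show the shape of the output.
    demo = pd.DataFrame(
        {
            "student_id": [1, 1, 1, 2, 2, 2],
            "course_id": ["CS101"] * 6,
            "week": [1, 2, 3, 1, 2, 3],
            "lms_activity": [5, 7, 2, 9, 8, 10],
        }
    )
    print(build_student_sequences(demo, upto_week=2))
```

Sequences of this kind could then be fed to a sequence model for early prediction within a course, or concatenated per student to study cumulative progress across the program; the abstract leaves those modeling choices open.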