Research has shown that breaking complex concepts into smaller, manageable units is highly effective in introductory programming courses, and that scaffolding helps learners progressively develop programming skills. However, the appropriate size of each conceptual unit depends on factors such as learners' aptitude and experience.
In this paper, we present a data-driven approach to designing auto-graded activities in our online, interactive STEM textbooks, focusing on breaking complex concepts into smaller, more achievable steps for learners. We analyzed two types of activities: 1) activities on challenging topics, as reflected by high struggle rates, and 2) activities on introductory topics with lower struggle rates, but where feedback and incorrect submissions indicated that students still needed assistance as they began learning programming. For both types of activities, we examined metrics such as students' average completion rates and common errors.
Based on these insights, we refined the activities by dividing them into smaller components and measured the impact on student struggle rates. By comparing metrics before and after these changes, we identify key best practices for designing and improving auto-graded programming problems, with the aim of enhancing student learning outcomes in programming courses.
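As a rough illustration of the kind of before/after comparison described above (not the paper's actual analysis), the sketch below computes a per-activity struggle rate from submission logs and reports how it changed after an activity was split into smaller parts. The definition of "struggle" as exceeding a fixed number of attempts, the threshold, and the log format are all assumptions for illustration only.

```python
# Hypothetical sketch of a before/after struggle-rate comparison.
# Assumption: a student "struggled" on an activity if they needed more than
# MAX_OK_ATTEMPTS submissions; the paper's actual metric may differ.
from collections import defaultdict
from typing import Iterable, NamedTuple

MAX_OK_ATTEMPTS = 3  # assumed threshold, not taken from the paper


class Submission(NamedTuple):
    student_id: str
    activity_id: str
    correct: bool


def struggle_rate(submissions: Iterable[Submission]) -> dict[str, float]:
    """Fraction of students per activity who needed > MAX_OK_ATTEMPTS submissions."""
    attempts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for s in submissions:
        attempts[s.activity_id][s.student_id] += 1
    return {
        activity: sum(1 for n in per_student.values() if n > MAX_OK_ATTEMPTS) / len(per_student)
        for activity, per_student in attempts.items()
    }


def compare(before: dict[str, float], after: dict[str, float]) -> None:
    """Print the change in struggle rate for activities present in both terms."""
    for activity in sorted(before.keys() & after.keys()):
        delta = after[activity] - before[activity]
        print(f"{activity}: {before[activity]:.0%} -> {after[activity]:.0%} ({delta:+.0%})")
```

In practice, such a comparison would also need to account for differences in the student populations across terms; the sketch only shows the basic metric computation.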