Auto-graded online homework and interactive textbooks engage students and generate big data. This work addresses several new research questions about students' usage of and success on more than 700 auto-graded questions within an interactive textbook, the Material and Energy Balances zyBook. Auto-grading occurs in real time, so students, teaching assistants, and faculty can see progress without waiting for assignments to be graded. Previous research examined reading participation and auto-graded problems at the course level; findings included median reading participation over 93% for seven cohorts and median percent correct on auto-graded problems of 91% or higher for six cohorts. These auto-graded problems allowed unlimited attempts, so students received feedback and persisted until correct on randomized problems. Here, two recent cohorts' responses to hundreds of auto-graded questions were examined by problem type and location. From one perspective, formative, single-calculation problems with scaffolding appeared in most sections, while more summative, multi-concept problems appeared at the end of each chapter. From another perspective, many problems required numerical answers within a tolerance, while others were multiple choice. Our research questions examined these different types and locations of auto-graded problems. New findings showed that the median percent correct was high (above 80%) for all problem types. Attempts before correct provided a valuable metric to distinguish between problem types, with numeric problems taking more attempts than multiple-choice problems. Finally, a metric combining both correctness and attempts, called the deliberate practice score, provided another quantitative aggregate measure. Of note, end-of-chapter numeric-response problems had a much larger fraction of problems at higher deliberate practice scores than in-chapter numeric questions.