This Complete Evidence-Based Practice paper explores one tool for supporting competency-based assessment in a first-year engineering course. Competency-based assessment in a first-year engineering computation module offers a pathway to improved student engagement and learning outcomes. Shifting the focus from traditional single-attempt assessment to a more dynamic evaluation of core computational skills, such as algorithmic loops, plotting, and functions, can enable deeper, more personalized learning experiences. The central challenge is creating a responsive, interactive relationship with every student, regardless of their prior content knowledge.
Autograding systems can play a pivotal role in this relationship by providing instant, real-time feedback on students' efforts. One approach is to run the autograder during the assessment itself, making grading an iterative process. To be effective, these systems must not only evaluate final correctness but also analyze visual outputs such as graphs and assess the intermediate steps of a computation. This immediate, iterative feedback loop mirrors the way content experts attack challenging problems: it guides students to identify and correct mistakes as they learn, fostering deeper engagement with the material. By integrating real-time, feedback-driven evaluations, educators can create a more engaging learning environment that promotes essential computational reasoning skills. However, crafting automated feedback is time intensive and costly, especially on first deployment. Sharing problem sets and documenting observations and improvements can reduce these obstacles to broad implementation.
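As an illustration of the kinds of checks involved, the minimal sketch below uses plain MATLAB assertions rather than the exact grader code deployed in the course. It assumes a student script is expected to produce a vector y and a labeled line plot; it verifies an intermediate variable as well as the components of the visual output, and the assertion messages can double as the immediate feedback a student sees on each attempt.

    % Minimal sketch of an autograder-style check (illustrative assumptions:
    % the student script defines a vector y and draws a labeled line plot).

    % Check an intermediate computation, not just the final answer.
    expectedY = (0:0.1:10).^2;                          % reference solution
    assert(exist('y', 'var') == 1, 'Variable y was not created.');
    assert(numel(y) == numel(expectedY), 'y has the wrong number of elements.');
    assert(max(abs(y(:) - expectedY(:))) < 1e-6, ...
        'y does not match the expected values; check your loop or vector math.');

    % Check components of the visual output rather than pixel-matching the figure.
    ax = gca;
    lineObjs = findobj(ax, 'Type', 'line');
    assert(~isempty(lineObjs),         'No line plot was found on the current axes.');
    assert(~isempty(ax.XLabel.String), 'The x-axis is missing a label.');
    assert(~isempty(ax.YLabel.String), 'The y-axis is missing a label.');
    assert(~isempty(ax.Title.String),  'The figure is missing a title.');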
In this paper we document the process used to transition a first-year engineering MATLAB assessment into an autograded environment. We demonstrate techniques for evaluating the components of a properly constructed figure and ways to randomize a problem in the commercial MathWorks Grader environment. We compare student performance on the assessment, students' perceptions of the experience, and the effect on the uniqueness of submissions. Student performance is compared with a prior year's results on the standard assessment, and students' perceptions are compared with a common end-of-course survey. Uniqueness of submissions is evaluated with a tool that reports the percentage of similar lines of code between submissions. Because running an autograded environment exposes educators to every early submission, a metric identifying which assessment objectives are the most challenging is collected as well.
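MATLAB Grader's own hooks for randomization and figure assessment are documented by MathWorks; the sketch below only illustrates the general pattern in plain MATLAB, assuming hypothetical parameters (a sampling rate and a sample count) that are drawn per learner and then reused to compute the reference solution the grader compares against.

    % Illustrative pattern only: MATLAB Grader's own randomization hooks differ.
    % Hypothetical parameters fs and nPoints are drawn per student, shown in the
    % problem statement, and reused to build the reference solution.
    rng('shuffle');
    fs      = randi([50, 200]);      % sampling rate presented to the student
    nPoints = randi([20, 60]);       % number of samples presented to the student

    % Reference solution computed from the same randomized parameters.
    t      = (0:nPoints-1) / fs;
    refSig = sin(2*pi*5*t);          % 5 Hz reference signal
    refRMS = sqrt(mean(refSig.^2));  % quantity the student is asked to compute

The line-similarity metric could in principle be approximated as in the following rough sketch; the file names and the exact comparison rule are hypothetical, and the course relied on an existing comparison tool rather than this code.

    % Rough sketch of a line-level similarity percentage between two submissions
    % (hypothetical file names; an existing comparison tool was used in practice).
    a = strtrim(readlines('studentA.m'));    % requires R2020b or later
    b = strtrim(readlines('studentB.m'));
    a = a(a ~= "");                          % drop blank lines
    b = b(b ~= "");
    shared     = intersect(a, b);            % identical trimmed lines
    pctSimilar = 100 * numel(shared) / min(numel(a), numel(b));
    fprintf('Similar lines: %.1f%%\n', pctSimilar);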
Through the implementation of autograded assignments, our courses have observed a decrease in the time it takes students to engage with a challenging problem and ask questions. One core issue identified in deployment is the difficulty of creating multiple problem sets or banks and of writing sufficiently broad validation code. The survey and performance results will discuss observed student performance, student perceptions, and the amount of non-unique submissions. This approach supports individual learning needs and better prepares students for future computational engineering challenges by making assessment a more dynamic and impactful part of their educational experience.