This empirical research brief presents findings from a secondary data analysis of data collected in a multi-case study of fundamental engineering course (FEC) instructors’ beliefs about test usage. Literature on instructor beliefs about test usage is scarce but growing. Research on instructors’ beliefs is crucial because beliefs have been shown to shape teaching practices, even though a single instructor’s beliefs can conflict across contexts. FECs tend to depend heavily on tests as student assessments. Efforts to diversify assessment types and promote intentional test usage should therefore be grounded in research on how instructors use tests and why they rely on them so heavily. These arguments set the foundation for the original multi-case study, which explored FEC instructors’ beliefs about heavy test usage. To continue exploring the data as part of the effort to diversify assessment and promote intentionality, we conducted a secondary data analysis to understand how these instructors designed their tests in relation to test usage. Specifically, we answer the research question: “What are the beliefs that shape these FEC instructors’ test question design?”
Secondary data analysis has been a growing method in engineering education research, with calls from some in our community to embrace it to take advantage of the large amounts of data collected through numerous grants. Secondary data analysis can take many forms, such as new researchers analyzing existing data, existing data sets being combined, or the original researchers returning to the data with changed positionalities. Our approach aligns with the third form, as our positionalities had transformed when we returned to this data after two years. To operationalize our secondary data analysis, we conducted a cross-case analysis, informed by codes from the original analysis, comparing the individual cases to explore test question design among the participants. We used the same conceptual framework from the original study, grounded in Situated Expectancy-Value Theory (SEVT), to guide the analysis.
Our findings identified two groups of beliefs that shape how our seven participants (cases) design their test questions. Two participants fall into the first group, which relied on typical engineering problem-solving (workout) problems. They explained that this type of question expects students to solve problems by demonstrating concept application through step-by-step equations. The second group complemented workout problems with conceptual questions, with some explaining that workout and conceptual questions assess different types of knowledge (problem solving versus understanding of concepts). In addition, some expressed that conceptual questions could address pattern matching among students. In short, our findings contribute to the literature a more nuanced understanding of the beliefs underlying test question design, which is important for efforts to promote better test usage in FECs.