This full methods paper describes the development of a survey to measure engagement in engineering judgment skills through the solving of an open-ended modeling problem (OEMP). The engineering problem-solving process is a structured approach to problems that is used across most engineering disciplines. Professional engineers make countless judgments at each step of the process (regarding assumptions, material selection, etc.), based on past experience and theoretical understanding. Undergraduate engineering degrees are designed to provide new engineers with these experiences and theories so they can design systems. However, while undergraduate engineering students learn theories of engineering behavior throughout their core curriculum, textbook questions do not adequately prepare them to decide which theories should be used to create models, or how. To strengthen students' ability to make engineering judgments, open-ended, ill-defined problems have emerged as a tool for use throughout undergraduate courses.
Engineering judgment describes how professional engineers make decisions to solve ill-defined problems. Previous research developed a taxonomy of emerging engineering modeling judgment types through retrospective interviews with students [Source redacted for blind review]. The intention of this study is to use that taxonomy to develop a measure of student outcomes following project completion that supports large-sample analysis.
This paper discusses the methods used to develop a survey to be administered during the 2025-2026 academic year following completion of an OEMP. The survey underwent five iterations, with feedback from project researchers, engineering students, and instructors who use similar problems in their classrooms. The final survey employs primarily Likert-scale questions to capture a range of opinions, enable direct comparison across scale items, and reduce variation in how questions are interpreted. The survey will be administered to undergraduate students at eight universities during that academic year. This work addresses the following three research questions:
(RQ1) How do we design a method to collect large-N quantitative data on students' practice of engineering judgment?
(RQ2) How do we validate survey items to measure engineering judgment?
(RQ3) What are the implications of using retrospective student self-assessment to collect data?
Implications of this work include giving instructors a tool to understand their students' use of engineering judgment, offering students a self-reflection tool, and quantifying use of engineering judgment for further research. Instructors can use these results to guide future implementations of similar problems and projects in design courses that support development of engineering judgment skills, helping students graduate better prepared to analyze the real-world systems they will encounter in the workforce. For students concerned about applying their theory-based degree in the workforce, the tool can help them understand what they have and have not accomplished through projects. Overall, this tool will significantly reduce the workload of measuring engagement in engineering judgment and allow for computational validity testing. Closed-response survey data enables future analyses of response frequencies and correlations between survey items that are not currently possible.
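As an illustration only, the sketch below shows the kind of analysis closed-response Likert data could support, such as an inter-item correlation matrix and an internal-consistency estimate (Cronbach's alpha). The item names and data here are hypothetical and are not the study's instrument or analysis plan.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses for three invented judgment-related items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=100)
responses = pd.DataFrame({
    "assumption_judgment": np.clip(base + rng.integers(-1, 2, 100), 1, 5),
    "model_selection":     np.clip(base + rng.integers(-1, 2, 100), 1, 5),
    "result_evaluation":   np.clip(base + rng.integers(-1, 2, 100), 1, 5),
})

print(responses.corr())           # inter-item correlation matrix
print(cronbach_alpha(responses))  # internal-consistency estimate
```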