2026 ASEE Annual Conference & Exposition

A Plug and Play Guide to Efficient Oral Assessments in Mid to Large Sized Manufacturing Courses

Short authentic oral assessments are a strong alternative to traditional assessment. Unlike traditional written exams, which feature well-defined, single-solution problems that students must solve in a time- and resource-limited environment, authentic oral assessment is designed to mimic "real-life" engineering problems and help students practice the engineering competencies they will likely need throughout their careers. One concern instructors cite when contemplating oral assessment is the time commitment; instructors worry that oral assessments are too time-intensive to be a viable option for their courses. This is especially true for mid- to large-sized classes. While creating short (seven-minute) authentic oral assessments may help keep testing time to a minimum, factors like preparing and practicing for the assessments can seem daunting. In this paper, we describe two ways we have reduced the amount of instructor time needed to conduct authentic oral assessments. First, we outline a "Mad-Libs" or plug-and-play assessment format that allows instructors to quickly create authentic oral assessments that cover introductory manufacturing topics such as CRQFS (cost, rate, quality, flexibility, and sustainability) tradeoffs, design for manufacture, processes and parameters, and discipline-specific communication. This format can be used alongside pre-existing rubrics to scaffold question generation. Second, we discuss the use of generative AI to create data sets for a data interpretation-focused authentic oral assessment. Using AI to create large sets of realistic data saves instructor time while creating an authentic artifact. Different data sets can be used for different students, creating variability in discussion and reducing the risk of cheating. We will discuss the generation and application of both of these tools as they apply to a mid-sized upper-level manufacturing course.
We hope that insights from both of these tools are helpful for instructors looking to adapt similar assessment methods in their classes.
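For illustration only, a plug-and-play question format of the kind described in the abstract could be sketched as a simple string template whose slots are filled at random per student. The template wording and the example parts, processes, and CRQFS factors below are hypothetical placeholders, not the paper's actual question bank.

```python
import random

# Hypothetical sketch of a "Mad-Libs" style oral-assessment template.
# The parts, processes, and factors are placeholder examples only.
TEMPLATE = (
    "You are manufacturing a {part} using {process}. "
    "A customer asks you to improve {factor}. "
    "Which process parameters would you adjust, and what tradeoffs "
    "against the other CRQFS factors would you expect?"
)

PARTS = ["bracket", "housing", "gear"]
PROCESSES = ["injection molding", "CNC milling", "sheet-metal stamping"]
FACTORS = ["cost", "rate", "quality", "flexibility", "sustainability"]

def make_question(rng: random.Random) -> str:
    """Fill each template slot with a randomly chosen option."""
    return TEMPLATE.format(
        part=rng.choice(PARTS),
        process=rng.choice(PROCESSES),
        factor=rng.choice(FACTORS),
    )

if __name__ == "__main__":
    # Seeded generator so each student can get a distinct, reproducible prompt.
    rng = random.Random(0)
    print(make_question(rng))
```

Because each slot is drawn independently, a small bank of options yields many distinct prompts, which supports the variability-across-students goal the abstract describes.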

Authors
  1. Dr. Sandra Walter Huffman, Tufts University; Massachusetts Institute of Technology
  2. Kaitlyn Becker (ORCID: http://orcid.org/0000-0003-2650-295X), Massachusetts Institute of Technology
  3. Dr. John Liu (ORCID: http://orcid.org/0000-0002-6085-0926), Massachusetts Institute of Technology
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.