High-stakes final examinations are assessments that significantly affect student progression: they typically account for 50% or more of the overall course assessment weight, or explicitly require students to achieve a passing grade to complete the course (a ‘hurdle’ requirement). However, several critical issues arise from the use of these examinations, particularly regarding their validation, their focus, and their alignment with the course’s educational objectives.
One major concern is that high-stakes exams often lack rigorous validation processes to ensure they accurately measure student abilities. This absence of validation undermines the credibility of the results: the exams may not genuinely reflect a student’s knowledge or skills. Additionally, these examinations frequently concentrate on specific content areas that do not encompass the full range of skills and knowledge students are expected to acquire. This narrow focus can create a disconnect between what is taught and what is assessed, ultimately compromising the educational goals of the course.
Despite these drawbacks, high-stakes final examinations remain prevalent in universities because of their perceived efficiency in assessing large groups of students in a standardised manner. While universities mandate that assessment tasks align with intended learning outcomes, these exams are often constructed with a more holistic and less detailed alignment, leading to inconsistencies.
This paper investigates the alignment between course intended learning outcomes, indicative content, and final examination questions in an electrical circuits course with approximately 100 students. The course included a final ‘hurdle’ exam weighted at 60%, qualifying it as a high-stakes assessment. The examination questions were analysed for their coverage of the course intended learning outcomes and indicative content, and for how this coverage related to the distribution of marks across questions. Furthermore, questions were categorised using Bloom’s taxonomy to assess their cognitive levels relative to those implied by the course intended learning outcomes.
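The kind of mark-distribution analysis described above can be sketched in a few lines of code. The following is a minimal illustration only: the question-to-outcome mapping, mark values, and Bloom levels are hypothetical placeholders, not the data from the study.

```python
from collections import defaultdict

# Hypothetical exam blueprint: each question carries marks, maps to one or
# more course intended learning outcomes (ILOs), and sits at a Bloom level.
# These values are illustrative, not the course analysed in the paper.
questions = [
    {"q": "Q1", "marks": 20, "ilos": ["ILO1"], "bloom": "Apply"},
    {"q": "Q2", "marks": 25, "ilos": ["ILO1", "ILO2"], "bloom": "Analyse"},
    {"q": "Q3", "marks": 15, "ilos": ["ILO3"], "bloom": "Remember"},
]

def mark_share(questions, key):
    """Percentage of total exam marks attributable to each category.

    Marks for a question mapped to several ILOs are split evenly
    across those ILOs (one possible convention among several).
    """
    totals = defaultdict(float)
    for q in questions:
        cats = q[key] if isinstance(q[key], list) else [q[key]]
        for c in cats:
            totals[c] += q["marks"] / len(cats)
    grand = sum(q["marks"] for q in questions)
    return {c: round(100 * m / grand, 1) for c, m in totals.items()}

ilo_coverage = mark_share(questions, "ilos")      # marks per ILO
bloom_coverage = mark_share(questions, "bloom")   # marks per Bloom level
```

A table like `ilo_coverage` makes uneven coverage visible at a glance; with the sample data above, ILO1 attracts over half the available marks while ILO2 attracts about a fifth, which is exactly the kind of imbalance an overall exam score conceals.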
The analysis revealed that an overall examination score does not necessarily indicate a student’s knowledge or ability across the course intended learning outcomes or indicative content. The paper offers recommendations for engineering educators who use high-stakes final examinations, emphasising methods to ensure alignment with intended learning outcomes and to give students comprehensive feedback on their performance relative to these outcomes, beyond their final grades alone.
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.