2023 ASEE Annual Conference & Exposition

Board 371: Relationships Between Metacognitive Monitoring During Exams and Exam Performance in Engineering Statics

Presented at NSF Grantees Poster Session

Our NSF DUE-funded project studies whether training and practice in writing questions about their confusions helps students in an undergraduate engineering statics course improve their course performance and metacognitive awareness.

As part of this study, we investigate relationships between metacognitive monitoring during statics exams and actual exam performance. Metacognitive monitoring is the process of observing one’s understanding and approach while completing a learning task; in this study, the learning tasks are semester and final exams. One way to assess students’ metacognitive monitoring is to measure their ability to accurately predict their score on an assessment of their understanding. Specifically, on each problem on each exam throughout the semester, we asked students to predict their score out of a known total point value. To measure each student’s metacognitive skill, a bias index (Schraw, 2009) was calculated for each student on each exam. This index captures the difference between a student's confidence ratings and performance scores and indicates whether the student was "underconfident", i.e., performed better than expected, or "overconfident", i.e., performed worse than expected (Schraw, 2009). To investigate group differences in bias, the absolute value of each index was calculated so that the magnitudes of the bias indices could be compared and averaged across students.
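
As an illustration, the bias index in Schraw (2009) is the mean signed difference between a student's predicted and actual scores, with positive values indicating overconfidence and negative values underconfidence. The sketch below is a minimal illustration of that calculation, not the project's analysis code; the function name, the example point values, and the normalization of each problem by its total point value are assumptions made for this example.

```python
def bias_index(predicted, actual, totals):
    """Signed bias index (after Schraw, 2009): mean of (predicted - actual),
    with each problem normalized by its total point value (an assumption here).
    Positive -> overconfident, negative -> underconfident."""
    diffs = [(p - a) / t for p, a, t in zip(predicted, actual, totals)]
    return sum(diffs) / len(diffs)

# Hypothetical exam with three problems worth 20, 30, and 50 points.
predicted = [18, 25, 40]   # scores the student predicted for each problem
actual    = [15, 27, 30]   # scores the student actually earned
totals    = [20, 30, 50]   # known total point value of each problem

bias = bias_index(predicted, actual, totals)   # signed bias for this exam
abs_bias = abs(bias)                           # magnitude used for group comparisons
print(f"bias = {bias:+.3f}, |bias| = {abs_bias:.3f}")  # positive here -> overconfident
```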

Preliminary analysis indicates that students who earned a passing grade on an exam (above 60%) had significantly lower absolute bias scores. This suggests that students who earned a passing grade were more likely to accurately predict their exam performance, exhibiting more effective metacognitive monitoring of their own understanding.
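
The abstract does not state which statistical test was used; as one plausible way to compare absolute bias between passing and non-passing students, the sketch below applies Welch's independent-samples t-test from SciPy to invented data. The group values, sample sizes, and choice of test are assumptions for illustration only, not the authors' analysis.

```python
from scipy import stats

# Hypothetical absolute bias indices for each group (invented for illustration).
passing_abs_bias = [0.05, 0.08, 0.10, 0.04, 0.07, 0.09]
failing_abs_bias = [0.18, 0.22, 0.15, 0.25, 0.20, 0.17]

# Welch's t-test (does not assume equal variances) on |bias| between the two groups.
t_stat, p_value = stats.ttest_ind(passing_abs_bias, failing_abs_bias, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```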

We also found that students’ ability to accurately predict their scores (regardless of whether they passed or failed) increased significantly between the first and second exams, which we attribute to a better understanding of the learning task and how performance would be evaluated. No differences have yet been found in students’ ability to predict their scores between the second exam and the final exam.

Authors
  1. Dr. Chris Venters East Carolina University
  2. Dr. Saryn Goldberg Hofstra University
  3. Amy Masnick Hofstra University
  4. Kaelyn Marks Hofstra University
  5. Kareem Panton Hofstra University
