High-stakes assessments such as midterm and final tests are commonly used to evaluate retention and recall in biomedical engineering courses. In courses such as physiology, such assessments are professionally authentic for pre-health students planning to take the MCAT and licensure exams. In large-format courses, grading time can be reduced by using electronic platforms to automate assessment administration and scoring. For students, however, these assessments often induce test anxiety that negatively impacts their learning. Collaborative learning techniques, including collaborative testing, have been demonstrated to improve performance, reduce test anxiety, and improve motivation. To test the hypothesis that collaborative testing improves learning outcomes and student perceptions of their learning, a collaborative testing strategy was implemented in a second-year biomedical engineering physiology course.
Each unit test in the course consisted of two parts delivered through an online audience response system: an individual part and a team part. Teams were formed using a CATME survey completed at the beginning of the semester. The individual part had a 50-min time limit and consisted of 30 questions in the formats of multiple choice, check all that apply, put steps in order, or fill in the blank. During the team part, students had 15 min to discuss the same 30 questions in teams and to change any or all of their answers from the individual part. Each team member submitted individual answers, so they had agency to disagree with their team's consensus answer. The individual part comprised two-thirds and the team part one-third of the total test score. The control group for comparison was a previous course offering with only individual unit tests, in which students had the entire 50-min period to complete the 30-question test.
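The weighting described above (individual part two-thirds, team part one-third) can be sketched as a simple calculation; the function and example values below are illustrative, not taken from the paper:

```python
def combined_score(individual_pct: float, team_pct: float) -> float:
    """Weighted unit-test total: individual part counts 2/3, team part 1/3."""
    return (2 / 3) * individual_pct + (1 / 3) * team_pct

# Hypothetical example: 72% on the individual part, 90% after team discussion
print(round(combined_score(72.0, 90.0), 1))  # → 78.0
```

Under this weighting, team discussion can raise a student's total score, but the individual part still dominates the grade.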
In the course with only individual unit tests, the mean score was 77 ± 5% (mean ± SD, n = 4). In semesters with collaborative testing, the overall mean score was 76 ± 4% (n = 12). In the collaborative testing courses, the team part scores were on average 18% higher than the individual scores, and the average learning gain, a measure of the points gained relative to the points available to gain by making corrections, was 38%. These results suggest that overall test scores did not differ between collaborative and individual testing, but that team discussion helped students gain points on the collaborative tests. Importantly, students' perceptions of the test environment were significantly improved when they completed tests with a collaborative part. A majority (80%) of students reported that discussing the test with a team helped improve their score, and 74% felt more confident in their learning and less stressed about the test. Several students indicated that they viewed the collaborative test as a learning activity in the course rather than as an evaluation activity, and 83% said they preferred to do a team activity on every test. In addition to direct survey questions about collaborative learning, students responded to the Motivated Strategies for Learning Questionnaire and the Generalized Self-Efficacy Scale; these responses will be analyzed to assess whether changes in learning strategies and self-efficacy were associated with the inclusion of the collaborative testing activities. Overall, students' perceptions of testing as a learning activity were improved by the inclusion of a collaborative testing component.
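The learning-gain metric described above, points gained relative to the points still available to gain after the individual part, can be sketched as follows; the function name and example scores are illustrative assumptions, not values from the study:

```python
def learning_gain(individual_pct: float, team_pct: float, max_pct: float = 100.0) -> float:
    """Fraction of available points recovered by making corrections on the team part."""
    available = max_pct - individual_pct
    if available <= 0:
        return 0.0  # perfect individual score: nothing left to gain
    return (team_pct - individual_pct) / available

# Hypothetical example: 70% individually, 88% after the team part
print(round(learning_gain(70.0, 88.0), 2))  # → 0.6, i.e. 60% of available points gained
```

Normalizing by the available points, rather than the raw score increase, means a student who starts near the ceiling is not penalized for having less room to improve.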