Peer feedback for group activities gives instructors valuable insight into the inner workings of a group, and peer assessment can also provide useful feedback to the individual participants. In experiential learning, particularly in disciplines like undergraduate Aerospace Engineering, group work plays a valuable role in preparing students for a career in collaborative environments, and feedback on an individual’s performance can be a useful pedagogical tool. To enhance the peer review process in large-enrollment courses (200+ students), this study implements sentiment analysis, specifically using the roBERTa sentiment analysis model [1], to provide a quantitative assessment of the reviews received by individual students. Additionally, the work aims to evolve to include AI-based constructive criticism paraphrasing, allowing timely, individualized feedback in a large-enrollment setting.
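The roBERTa model referenced above outputs probabilities over three classes (negative, neutral, positive). One simple way to collapse those probabilities into a single scalar per review is sketched below; the function name and the particular mapping are our own illustration, not part of the model's API:

```python
def sentiment_score(p_negative, p_neutral, p_positive):
    """Collapse three class probabilities into one signed score in [-1, 1].

    -1 means fully negative, +1 fully positive; the neutral mass pulls
    the score toward 0. This mapping is one plausible choice, not the
    only one (e.g., one could instead use the argmax label).
    """
    total = p_negative + p_neutral + p_positive  # guard against rounding
    return (p_positive - p_negative) / total
```

For example, a review scored as 10% negative, 20% neutral, and 70% positive maps to a score of about 0.6, which can then be compared directly against the numeric peer-review score.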
The primary objective of this ongoing work is to examine how sentiment analysis scores correlate with the numerical scores assigned by students to their peers. By leveraging natural language processing and machine learning, we aim to quantify the qualitative aspect of sentiment in peer reviews. The underlying hypothesis is that sentiment analysis can provide a complementary perspective on the quality of peer reviews, shedding light on aspects such as positivity, constructiveness, and critical feedback.
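The correlation step described above can be sketched with a standard Pearson coefficient between the per-student sentiment scores and the numeric peer-review scores; the helper below is a minimal, dependency-free illustration (in practice one might use `scipy.stats.pearsonr`):

```python
from math import sqrt

def pearson_r(sentiment_scores, numeric_scores):
    """Pearson correlation between sentiment scores and numeric
    peer-review scores; +1/-1 indicate perfect linear (anti)alignment."""
    n = len(sentiment_scores)
    mx = sum(sentiment_scores) / n
    my = sum(numeric_scores) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(sentiment_scores, numeric_scores))
    sx = sqrt(sum((x - mx) ** 2 for x in sentiment_scores))
    sy = sqrt(sum((y - my) ** 2 for y in numeric_scores))
    return cov / (sx * sy)
```

A strong positive coefficient would support the hypothesis that review sentiment tracks the numeric scores students assign; a weak one would suggest the two capture different aspects of peer feedback.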
The study focuses on an Aerospace Engineering sophomore experiential learning course, where group work serves as a fundamental medium of student engagement. The course uses a basic peer review process in which students evaluate their peers' contributions, and these evaluations determine an individual’s grade for the group’s work. However, a simple numerical score may conceal subjectivity, bias, or other unaccounted-for factors that sentiment analysis can potentially reveal.
In our presentation, we will discuss the methodology used to implement sentiment analysis and the dataset involved. We will share preliminary results and insights from our ongoing investigation, analyzing the extent to which sentiment analysis scores align with the numeric peer review scores. This alignment can serve as a measure of the effectiveness of sentiment analysis in capturing the quality of peer reviews.
As a further development of this work, we plan to incorporate AI-based constructive criticism paraphrasing. This extension will involve the use of natural language generation models to offer suggestions for improving an individual’s performance in future groups. By providing students with constructive, actionable recommendations from their peers, this addition aims to enhance the usefulness of peer reviews and contribute to a more positive and collaborative learning environment.
Our study has the potential to contribute to the ongoing dialog on peer assessment in education and to inform educators on the benefits and challenges of implementing sentiment analysis and AI-based paraphrasing in the evaluation process. Ultimately, we aim to improve the fairness and objectivity of peer assessment, thereby enhancing the overall learning experience in the field of engineering.
This presentation will be of interest to educators, researchers, and practitioners in the Computers in Education Division at ASEE 2024, as it explores innovative approaches to improving the quality and effectiveness of peer review processes in STEM education. We look forward to sharing our insights and engaging in discussions with fellow attendees to advance the understanding of the role of sentiment analysis and AI-based paraphrasing in education.
References:
[1] cardiffnlp/twitter-roberta-base-sentiment. Accessed from: https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment