Academic institutions rely on collecting students’ feedback on their learning experience as one of the indirect measures of learning assessment. This feedback, collected through end-of-term and sometimes mid-term surveys, gives students an opportunity to share their viewpoints on various aspects of their learning, including teaching style, teaching effectiveness, course objectives, course material, and course assessment methods. This feedback is used not only by instructors to improve their teaching style and accommodate the needs of students, but also by institutions as an indicator of an instructor’s teaching effectiveness during annual review and tenure evaluation.
Course surveys typically contain both quantitative (rating) and qualitative (free-text) sections to evaluate the learning process, teaching effectiveness, and the instructor’s inclusivity. The use of course evaluations has been studied by many researchers from different perspectives, but most studies have focused either on statistical analysis of the quantitative responses across student groups or on identifying the questions that elicit shorter, more informative free-text responses from students. The qualitative sections are often used only to evaluate the instructor and are rarely analyzed in any detailed manner, given the complexity of the analysis that would be required. A tool that could statistically quantify student responses from the qualitative sections of course evaluations would therefore benefit both instructors and institutions.
Educational opinion mining is an approach developed in recent decades to encode students’ feedback using tools such as qualitative text analysis. The objective of this research is to use these tools to design a methodology for studying student comments and their polarity (positive/negative/neutral) and for determining whether the comments agree with students’ responses to the quantitative sections. For example, graduate and higher-level courses typically receive higher ratings on the quantitative sections of course evaluations than lower-level courses. A rubric was developed to categorize student comments as positive, negative, or neutral, and it was used to analyze course evaluations from different engineering student populations at the University of XXX. The results are compared with those reported in the literature for responses to the quantitative sections of course evaluations.
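To illustrate the kind of polarity tagging described above, the sketch below labels comments as positive, negative, or neutral using a simple word-list (lexicon) approach. This is only an assumed, minimal stand-in for the paper’s rubric: the word lists, function name, and sample comments are all hypothetical placeholders, and the actual rubric was applied by human raters, not by code.

```python
# Illustrative lexicon-based polarity tagger for student comments.
# The POSITIVE/NEGATIVE word lists and the sample comments are
# placeholders, not the rubric or data from the study.

POSITIVE = {"clear", "helpful", "engaging", "organized", "excellent"}
NEGATIVE = {"confusing", "boring", "unfair", "disorganized", "rushed"}

def polarity(comment: str) -> str:
    """Label a comment by counting lexicon hits: more positive words
    than negative -> 'positive', fewer -> 'negative', tie -> 'neutral'."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

comments = [
    "Lectures were clear and the examples were helpful.",
    "The grading felt unfair and the pace was rushed.",
    "The course covered thermodynamics.",
]
labels = [polarity(c) for c in comments]
print(labels)  # ['positive', 'negative', 'neutral']
```

Once comments are labeled this way, the share of each polarity class per course can be tabulated and set against the course’s quantitative ratings, which is the comparison the study performs with its human-coded rubric.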