Processing end-of-semester student course evaluations can be a challenging task for professors seeking not only to affirm their standing as qualified educators but also to provide meaningful learning experiences for the students they teach. While professors attempt to use course evaluations for self-improvement and course development, the emotional toll of reading student evaluations can overshadow the experience. Students are adept at recognizing quality classroom engagement, but they often lack training in providing quantitative feedback and experience in framing qualitative feedback constructively. The burden of parsing poorly written or personally directed comments can deter faculty members from engaging thoughtfully with the feedback. Faculty at various career stages also face time constraints that further disconnect the feedback from implemented change: early-career professors often must prioritize research responsibilities, whereas mid- and late-career professors often take on administrative roles that divide their attention away from incremental course improvements. Nevertheless, most faculty want to ensure that the classrooms they manage are positively received by students and serve the purpose of supporting the teaching and learning cycle. Generative artificial intelligence (AI) platforms may offer a way to isolate constructive feedback while filtering out comments that could be detrimental to a faculty member's growth mindset. This study explores a framework for applying AI platforms to the synthesis of student course evaluations in order to improve actionable response. A study was performed to process student course evaluations for four faculty members at different career stages. The team defined its own parameters for processing student feedback with AI. The paper discusses the process of developing this method and each participant's reflections. Closing remarks articulate the value found in using AI for this purpose, as well as the unexpected value of developing a peer cohort.