As generative AI tools such as ChatGPT and Microsoft Copilot have burst onto the education scene, questions have been raised about their use as teaching tools in the classroom. To further support the development of first-year engineering and computer science students’ technical writing skills, a team of faculty from Computer Science, English, and Engineering investigated the feedback provided by a range of generative AI tools and compared it to the feedback provided by English faculty and by Engineering faculty.
This study occurred at a small, private institution located in a rural area of the midwestern United States. We chose to focus on a memo written as part of the “One-Minute Engineer” assignment, which occurs during the first course in our first-year engineering (FYE) sequence. The One-Minute Engineer (OME) assignment is a long-standing assignment within our curriculum: students select a topic of interest related to engineering, write a one- to two-page memo on the topic, receive feedback from the campus writing center, revise the memo, and then give a one-minute presentation on the topic to their classmates. The FYE courses are taught by engineering and computer science faculty, and, even while partnering with the writing center, faculty often struggle to give meaningful feedback on the OME assignment due to grading volume and a lack of training in writing pedagogy.
To address this challenge, we sought to understand whether generative AI tools could provide feedback to help students improve their writing, during the drafting process, the grading process, or both. A rubric for assessing the OME memos was developed by the English faculty member, based on the existing OME rubric, that faculty member’s expertise in technical writing, and the stated goals of the assignment. After review by the research team, the rubric was applied to 40 OME memos drawn from previous student work by two teams: one consisting of two faculty members from the College of Engineering who regularly teach in the FYE sequence, and the other consisting of an English faculty member with expertise in technical writing and a student researcher with a background in political science and statistics. After all memos had been assessed by each team, the student researcher and a computer science faculty member used ChatGPT-4 to grade the student memos, seeking to mimic the human evaluators; the team also tested Microsoft Copilot and Google Gemini to compare their performance. Moving to an AI tool could potentially enhance grading efficiency, mitigate certain biases and grading inconsistencies, and reduce grading fatigue. (The researchers are aware that biases inherent in the data used to train AI systems could still influence the output.)
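For readers unfamiliar with rubric-based prompting, the sketch below illustrates, in broad strokes, how a large language model might be asked to apply a rubric to a memo. It is illustrative only: the rubric criteria, prompt wording, and model name are placeholders and are not the materials or prompts used in this study (it assumes the OpenAI Python client).

```python
# Illustrative sketch only: rubric criteria, prompt wording, and model name are
# placeholders, not the materials used in the study (assumes the OpenAI Python client).
from openai import OpenAI

RUBRIC = """Score each criterion from 1 (poor) to 5 (excellent):
1. Audience awareness and purpose
2. Organization and memo formatting
3. Technical accuracy and depth
4. Grammar, mechanics, and style"""

def grade_memo(memo_text: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat-capable model could be substituted
        messages=[
            {"role": "system",
             "content": "You are grading a first-year engineering memo. "
                        "Apply the rubric and justify each score briefly.\n" + RUBRIC},
            {"role": "user", "content": memo_text},
        ],
    )
    return response.choices[0].message.content
```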
From this data set, statistical comparisons among the evaluations provided by the writing team, the engineering team, and the AI tools were calculated. Additionally, the strengths and weaknesses of each AI tool were noted. These results were used to generate recommendations for how AI can be used to support student writing instruction. The paper will also briefly address whether the authors feel the AI tools can fully capture the nuance of some subjective grading criteria without specialized training or sufficient guidelines.
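Such comparisons could, for example, take the form of inter-rater agreement statistics. The sketch below computes quadratic-weighted Cohen’s kappa on hypothetical score vectors; it is one plausible choice of measure and is not necessarily the analysis reported in the full paper.

```python
# Illustrative sketch only: the score vectors are hypothetical and the agreement
# statistic (quadratic-weighted Cohen's kappa) is one plausible choice, not
# necessarily the analysis reported in the paper.
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric totals (1-5 scale) assigned to the same ten memos.
writing_team_scores     = [4, 3, 5, 2, 4, 3, 4, 5, 3, 2]
engineering_team_scores = [4, 2, 5, 3, 4, 3, 3, 5, 2, 2]
ai_tool_scores          = [5, 3, 4, 3, 4, 4, 4, 5, 3, 3]

print("Writing vs. Engineering:",
      cohen_kappa_score(writing_team_scores, engineering_team_scores, weights="quadratic"))
print("Writing vs. AI tool:   ",
      cohen_kappa_score(writing_team_scores, ai_tool_scores, weights="quadratic"))
```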