2026 ASEE Annual Conference & Exposition

Exploring Generative AI for Higher Education Accreditation: Opportunities and Challenges in ABET Reporting

Presented at DSAI-Session 8: Generative AI in Assessment, Grading, and Accreditation

Generative AI is a growing focus of research in higher education, yet there is limited academic literature on its use in higher education accreditation. Generative AI, with its benefits and challenges, has the potential to streamline administrative workflows, saving faculty time and energy that can be redirected toward priorities such as teaching and research. These tools, however, require a human-in-the-loop to verify the validity of synthesized data before submission. Thus, while Generative AI has the potential to save faculty time, we need a better understanding of its strengths and weaknesses before this potential can be realized. One area where we theorize that Generative AI can be particularly helpful is the preparation of accreditation materials.

This paper focuses on mapping course objectives to ABET learning objectives. Pursuing ABET accreditation is a time-consuming process that requires the creation of a Self-Study report, which draws on student data from multiple courses, instructor feedback and insights, and analysis of program changes. We explore the efficacy of four Generative AI platforms (ChatGPT 5.0, Copilot, Gemini 2.5, Perplexity Pro), including their ability to recreate tables, provide commentary, and map course objectives for ABET continuous improvement documentation. To test each platform, we uploaded sample rubrics, assignment descriptions, syllabi, and other course information into these systems and then prompted each Generative AI to produce analysis tables. We provided each AI with blank tables, including prescribed row and column headers taken from a human-generated report, along with syllabi and assessment data. We then compared the results generated by each AI platform against the human-generated report using a self-developed accuracy scale.
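The cell-by-cell comparison of an AI-generated table against a human-generated reference could be sketched roughly as follows. Note that the abstract does not specify the authors' self-developed accuracy scale; the three-point rubric, the `score_cell` helper, and the 60% similarity threshold below are illustrative assumptions, not the study's actual instrument.

```python
from difflib import SequenceMatcher

def score_cell(ai_text: str, human_text: str) -> int:
    """Score one table cell: 2 = exact match, 1 = partial match, 0 = mismatch.
    (Hypothetical 3-point rubric; the paper's scale is not given in the abstract.)"""
    a, h = ai_text.strip().lower(), human_text.strip().lower()
    if a == h:
        return 2
    # Treat >= 60% character-level similarity as a partial match (assumed cutoff).
    if SequenceMatcher(None, a, h).ratio() >= 0.6:
        return 1
    return 0

def table_accuracy(ai_table, human_table):
    """Average cell score over all aligned cells, normalized to the 0..1 range."""
    scores = [
        score_cell(a, h)
        for ai_row, human_row in zip(ai_table, human_table)
        for a, h in zip(ai_row, human_row)
    ]
    return sum(scores) / (2 * len(scores)) if scores else 0.0
```

A rubric like this could be applied per platform, with the normalized score summarizing how faithfully each AI reproduced the human-generated report's table contents.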

The authors conclude with a discussion of future work and the limitations of using generative AI for ABET report writing, including file-upload constraints and the prompting challenges encountered across platforms. In our study, most of the Generative AI models tested produced unreliable outputs; ChatGPT 5.0 was the most capable at creating a table from source data. Although generative AI can reduce the time needed to create an ABET report by formatting and populating tables, human review remains necessary because errors were common.

Authors
  1. Caleb Levi Head, Purdue University – West Lafayette (College of Engineering)
  2. Aadithan Anbuvanan, Purdue University – West Lafayette (College of Engineering)
  3. Diane Patterson, Purdue University – West Lafayette (College of Engineering)
  4. Dr. Justin L Hess (ORCID: http://orcid.org/0000-0002-1210-9535), Purdue University – West Lafayette (College of Engineering)
  5. Dr. Morgan M Hynes, Purdue University – West Lafayette (College of Engineering)
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.