2025 ASEE Annual Conference & Exposition

Results and Evaluation of an Early LLM Benchmarking of Our ECE Undergraduate Curricula

The rapid integration of Artificial Intelligence (AI) into engineering practice necessitates a
critical examination of our educational approaches. This paper presents an investigation into the
performance of Large Language Models (LLMs) within the context of our Electrical Engineering
(EE) and Computer Engineering (CpE) undergraduate curricula at [BLIND] University. Our
study addresses a fundamental question: How do current AI tools perform on typical course
assessments, and what implications does their performance have for curriculum design?
We introduce a systematic methodology for benchmarking LLM performance on our course
assessments, including exams, assignments, and projects. Using state-of-the-art LLMs, we
evaluate their capabilities across core courses in our EE and CpE programs. These include
Circuits I (ECE 205), Digital Design (ECE 287), Energy Systems (ECE 291), and Signals and
Systems (ECE 306). Our benchmarking results reveal the strengths and limitations of these AI
tools in engineering education tasks, providing insights for curriculum adaptation. We discuss
how these results might inform the evolution of engineering education, highlighting areas where
AI could enhance learning and where human skills should be reinforced. This work contributes to
the ongoing dialogue on AI integration in engineering education. It offers a first step toward a
replicable framework for continuously assessing AI capabilities in academic settings and toward
understanding how this activity can aid educators. As we navigate the transforming landscape of engineering practice
and education, such benchmarking efforts are essential for ensuring our curricula remain relevant
and effective in preparing the next generation of engineers for an AI-augmented profession.
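To make the kind of benchmarking loop described in the abstract concrete, the sketch below shows one minimal way such a harness might be structured. It is illustrative only and not the authors' actual framework: the OpenAI Python client, the model name, and the CSV/JSON file layout are all assumptions introduced here for the example.

# Minimal sketch of an LLM course-assessment benchmarking loop.
# Assumes the OpenAI Python client; the model name, input CSV columns,
# and output file are illustrative, not the authors' actual setup.
import csv
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_llm(question: str, model: str = "gpt-4o") -> str:
    """Send one assessment question to the model and return its answer text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a student answering an undergraduate ECE exam question."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # near-deterministic answers make grading comparisons easier
    )
    return response.choices[0].message.content

def run_benchmark(questions_csv: str, out_json: str) -> None:
    """Run every question in a CSV (columns: course, item_id, question) and save the answers."""
    results = []
    with open(questions_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = ask_llm(row["question"])
            results.append({
                "course": row["course"],     # e.g., "ECE 205"
                "item_id": row["item_id"],   # exam/assignment item identifier
                "question": row["question"],
                "llm_answer": answer,        # graded later against the course rubric
            })
    with open(out_json, "w", encoding="utf-8") as f:
        json.dump(results, f, indent=2)

if __name__ == "__main__":
    run_benchmark("ece_assessment_items.csv", "llm_responses.json")

Grading the saved answers against each course's rubric would remain a separate, instructor-driven step, which is where the strengths and limitations discussed above would be observed.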

Authors
  1. Dr. Peter Jamieson, Miami University
  2. Brian A Swanson, Miami University
  3. Dr. Bryan Van Scoy, Miami University