2025 ASEE Annual Conference & Exposition

LLM Prompting Methodology and Taxonomy to Benchmark our Engineering Curriculums

As Large Language Models (LLMs) continue to transform the educational landscape, we educators find ourselves challenged by how we should use these tools and how we should help future learners use them. Engineering education is fortunate in that it already includes learning outcomes at the highest level of Bloom's taxonomy (i.e., create), which LLMs cannot consistently perform. Still, we need to adapt to these tools' emergence and capabilities. How will we adjust our curriculums to this rapidly evolving technology? This paper proposes evaluating our curriculums by benchmarking our courses against LLM capabilities. By introducing Jamieson's LLM Prompt Taxonomy, a three-level classification system for LLM prompts, we create a framework to evaluate the performance of these AI tools within our existing educational structures.
Our methodology involves crafting prompts for course assessments, categorizing them using the proposed taxonomy, and comparing LLM performance to traditional student outcomes. We demonstrate this approach with an undergraduate course on Digital System Design, showing how the benchmarking process offers insights into the strengths and limitations of LLMs in an educational context.
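The benchmarking workflow described above can be sketched in code. This is a minimal, hypothetical illustration only: the abstract does not define the three taxonomy levels or a scoring scheme, so the level labels, field names, and 0-to-1 score scale below are assumptions, not the paper's actual method.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AssessmentItem:
    """One course assessment prompt with LLM and historical student results."""
    prompt: str
    taxonomy_level: str            # placeholder labels, e.g. "L1".."L3";
                                   # the real levels are defined in the full paper
    llm_score: float               # assumed normalized grade in [0, 1]
    student_scores: list = field(default_factory=list)  # historical grades in [0, 1]

def benchmark_by_level(items):
    """Compare mean LLM score to mean student score within each taxonomy level."""
    by_level = {}
    for item in items:
        by_level.setdefault(item.taxonomy_level, []).append(item)
    report = {}
    for level, group in sorted(by_level.items()):
        llm = mean(i.llm_score for i in group)
        student = mean(mean(i.student_scores) for i in group)
        # A positive gap suggests students still outperform the LLM at this level
        report[level] = {"llm": llm, "student": student, "gap": student - llm}
    return report
```

Under this sketch, a large positive gap concentrated at the highest taxonomy level would be consistent with the abstract's claim that LLMs cannot consistently perform "create"-level outcomes.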
This work serves as a call to action for educators across disciplines. By systematically
benchmarking our curriculums against LLM capabilities, we can better understand the evolving
relationship between AI and education. This understanding will allow us to refine our teaching
methods, emphasize uniquely human skills, and prepare our students for a future where
collaboration with AI is expected. As we move forward, it is crucial that we, as educators, take
charge of shaping how these powerful tools are integrated into our classrooms and beyond. This
work will illuminate the need for curriculum-based learning outcomes at the higher levels of Bloom's
taxonomy.

Authors
  1. Dr. Peter Jamieson Miami University
  2. Suman Bhunia Miami University
  3. Brian A Swanson Miami University
  4. Dr. Bryan Van Scoy Miami University
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.