Large language models (LLMs) have revolutionized content creation across many domains, yet their application in highly specialized tasks such as curriculum gap analysis remains challenging. This full paper, to be presented in the Computers in Education Division (COED) at the 2025 ASEE Annual Conference, investigates the efficacy of LLMs in developing and evaluating cybersecurity curriculum pathways.
Traditional curriculum plans, such as those at institutions designated by the National Centers of Academic Excellence in Cybersecurity (NCAE-C) in the cyber defense education (CAE-CD) and cyber operations (CAE-CO) tracks, rely on a manual process of identifying knowledge units for specific courses. Throughout the mapping process, gaps in curriculum plans are identified based on the knowledge of the subject matter expert(s) (SMEs). Performing this task for one framework is challenging enough; the complexity and risk of error grow when multiple frameworks are cross-referenced into the plan. Improvement opportunities therefore exist in the curriculum mapping and gap analysis process. This leads to the question of whether an LLM can speed up the curriculum mapping process compared to a manual process conducted by an SME, which will be evaluated through a set of “human-in-the-loop” experiments.
To evaluate this question, the paper details the results of the following experiments, in which a computer science/cybersecurity curriculum is mapped to the CAE-CD knowledge units (KUs):
1. A single SME creates a manual KU curriculum mapping.
2. An LLM is given details of the full curriculum, including catalog descriptions and syllabi, and creates a mapping for a single CAE-CD knowledge unit.
3. An LLM is given details of all CAE-CD knowledge units plus information for one course (catalog description and syllabus) and creates a knowledge unit mapping for that course.
4. An LLM is given details of all CAE-CD knowledge units, course descriptions, and syllabi, and creates a knowledge unit mapping for the full curriculum (a prompting sketch for these LLM-driven experiments follows this list).
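As an illustration of how these LLM experiments might be driven, the sketch below outlines experiment 3: one course's materials and the full KU list are assembled into a prompt and submitted to a model. This is a minimal, hypothetical sketch, not the study's exact protocol; the model name, the prompt wording, and the helper map_course_to_kus are assumptions, and the OpenAI Python client stands in for whatever LLM interface is actually used.

```python
# Hypothetical sketch of experiment 3: ask an LLM to propose a
# knowledge-unit mapping for a single course. Model choice, prompt
# wording, and output format are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def map_course_to_kus(ku_catalog: str, course_description: str, syllabus: str) -> str:
    """Return the model's proposed CAE-CD KU mapping for one course."""
    prompt = (
        "You are assisting with cybersecurity curriculum mapping.\n\n"
        f"CAE-CD knowledge units:\n{ku_catalog}\n\n"
        f"Course catalog description:\n{course_description}\n\n"
        f"Course syllabus:\n{syllabus}\n\n"
        "List each knowledge unit this course covers, one per line, "
        "with a one-sentence justification."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable LLM could be substituted
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # low randomness keeps output easier for the SME to review
    )
    return response.choices[0].message.content
```

The SME then reviews the proposed mapping rather than authoring it from scratch, which is precisely the workflow shift the study evaluates.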
We will evaluate the results using the following metrics:
1. Accuracy - precision, recall, F1 score, and error rate of the LLM mapping versus the SME baseline (a scoring sketch follows this list)
2. Efficiency - time to completion and reduction in human effort
3. Qualitative - expert review and comparison of output quality
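To make the accuracy metric concrete, the sketch below shows one plausible way to score an LLM mapping against the SME mapping, treating each (course, knowledge unit) pair as a binary decision with the SME mapping as ground truth. The set-based formulation and the toy identifiers are assumptions for illustration; the paper's exact scoring procedure may differ.

```python
# Hypothetical scoring sketch: compare the LLM's (course, KU) pairs
# against the SME mapping, treated here as ground truth.

def score_mapping(sme_pairs: set, llm_pairs: set, all_pairs: set) -> dict:
    """Precision, recall, F1, and error rate over (course, KU) pairs.

    all_pairs is every candidate pairing (each course crossed with each
    KU); it is needed as the denominator of the error rate.
    """
    tp = len(sme_pairs & llm_pairs)   # mappings both agree on
    fp = len(llm_pairs - sme_pairs)   # LLM-only mappings (false positives)
    fn = len(sme_pairs - llm_pairs)   # mappings the LLM missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    error_rate = (fp + fn) / len(all_pairs) if all_pairs else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "error_rate": error_rate}

# Toy usage with hypothetical course and KU names:
sme = {("CS101", "Cybersecurity Foundations"), ("CS310", "Network Defense")}
llm = {("CS101", "Cybersecurity Foundations"), ("CS310", "Cyber Threats")}
universe = {(c, k) for c in ("CS101", "CS310")
            for k in ("Cybersecurity Foundations", "Network Defense", "Cyber Threats")}
print(score_mapping(sme, llm, universe))  # precision 0.5, recall 0.5, f1 0.5, error rate 1/3
```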
Our analysis will show that the manual curriculum mapping and gap analysis process can be accelerated and improved by having the LLM suggest candidate mappings, allowing the SME to shift from content developer to the more effective role of critical reviewer. We will also show that this assistance comes with constraints.