Writing high-quality learning objectives is crucial to designing an effective curriculum. Learning objectives help instructors align course components (e.g., content, assessment, and pedagogy) to give students a coherent learning experience. However, most instructors have no formal training in education and therefore lack the experience and expertise to write quality learning objectives. Poorly written learning objectives can lead to misalignment among course components, a common source of student complaints that what is taught in class differs from what is assessed in exams.
In this Work-in-Progress paper, we argue that Generative AI, a recent advance in AI, has the potential to improve the quality of learning objectives by providing real-time scaffolding and feedback. The SMART framework (Specific, Measurable, Attainable, Relevant, and Time-bound), a widely recognized best practice for crafting clear and compelling learning objectives, can serve as the evaluation criteria. To this end, we collected 100 learning objectives from publicly available STEM course curricula. Using the SMART criteria, we evaluated each learning objective in two ways: 1) feedback generated by human experts, and 2) feedback generated by a Generative AI model (GPT, a generative pre-trained transformer). Specifically, we addressed the following research question: How well does GPT feedback match that of human experts when evaluating course learning objectives using the SMART framework? We used Cohen's Kappa to assess the level of agreement between GPT and human expert evaluations, and we qualitatively analyzed the learning objectives on which the two evaluations strongly disagreed.

Our findings show that GPT reaches reasonable agreement with human experts when evaluating the "Relevant" aspect of learning objectives, but is inconsistent on the remaining criteria. A likely cause is the model's limited contextual understanding, such as how assessments are applied, the broader course structure, and learner needs. Overall, the results suggest that while GPT can assess certain aspects of learning objectives effectively, it needs further refinement with more contextual information. After improving the current AI approach, we plan to build a scaffolding tool that gives instructors real-time feedback while they draft their learning objectives. This study contributes to the literature by exploring ways to use AI in education, helping teachers make informed decisions with minimal effort, and facilitating students' learning.
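As a concrete illustration of the agreement analysis, the sketch below shows how per-criterion ratings could be compared using Cohen's Kappa via scikit-learn's `cohen_kappa_score`. The prompt template, the binary rating encoding, and the sample data are illustrative assumptions, not the study's actual materials or results.

```python
# A minimal sketch (not the authors' actual pipeline): collect binary
# "meets criterion" judgments per SMART dimension from a human expert
# and from a GPT model, then measure agreement with Cohen's Kappa.
from sklearn.metrics import cohen_kappa_score

SMART = ["Specific", "Measurable", "Attainable", "Relevant", "Time-bound"]

# Hypothetical prompt template for eliciting a per-criterion judgment
# from the model; the actual study's prompt wording is not shown here.
PROMPT = (
    "Evaluate the following learning objective against the '{criterion}' "
    "criterion of the SMART framework. Answer 1 if it satisfies the "
    "criterion and 0 otherwise.\n\nLearning objective: {objective}"
)

# Hypothetical ratings: one 0/1 label per learning objective,
# for a single criterion (e.g., "Relevant").
human_ratings = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
gpt_ratings   = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# Kappa corrects raw percent agreement for agreement expected by chance:
# kappa = (p_o - p_e) / (1 - p_e).
kappa = cohen_kappa_score(human_ratings, gpt_ratings)
print(f"Cohen's Kappa (Relevant): {kappa:.2f}")
```

In practice, one such comparison would be run for each of the five SMART criteria, yielding a separate Kappa value per criterion, which is how criterion-level differences (e.g., stronger agreement on "Relevant") become visible.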