The use of generative AI was required in an engineering economics classroom over the course of a semester: students used it to validate their homework problems. This requirement was integrated into a third of the homework assigned for the course, and students reflected on how the AI's answers compared with their own solutions. A pretest and posttest consisting of nine Likert-scale questions and two open-ended questions, administered on the first and last days of class, showed a change in student attitudes regarding their confidence in using AI and their ability to integrate generative AI into their learning process. This change is supported by a moderate effect size (θ = 0.60) and a statistically significant shift (p = 0.02) on the confidence question, and by a moderate effect size (θ = 0.52) and statistical significance (p = 0.04) on the integration question. Thematic analysis of the open-ended questions illustrates initial student hesitation about AI ethics and concern that AI would replace their learning process. Posttest data show growth over the semester: students identify drawbacks of generative AI, especially its tendency to answer questions incorrectly, and their confidence in how and where to use AI for their own learning increases. This growth highlights the importance of teaching undergraduate students to use AI as a professional tool and offers a framework for doing so by asking students to verify their own solutions.
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.