The growing integration of artificial intelligence (AI) tools built on large language models (LLMs) into engineering education has proven to be both an asset to student learning and a potential limitation to student growth. With the recent wave of AI use in engineering education, the ways the technology is used and the characteristics of the students who rely on it have become the subject of much research. How students use AI in their education can have profound implications for their learning outcomes; however, no consensus has emerged on how to measure the relationship between AI use and learning.
This paper discusses the development of an AI adoption and motivation survey, a scale designed to measure the relationship between AI use and learning, and reports on its psychometric properties when used within an engineering student population. The survey was created as an adaptation of three existing instruments: the Motivated Strategies for Learning Questionnaire (MSLQ), the Reliance on Technology Questionnaire, and the Student AI Survey. The adaptation was guided by the intention to examine relationships among the specific constructs of AI use and learning that each instrument measures. Previous work has established the reliability of each parent instrument, but reliability specific to this population has not been reported. The purpose of this paper is to establish the reliability of the adapted instrument so that it can be used for future research among engineering student populations. This work is framed within the context of Novick's Classical Test Theory (CTT), which holds that the quality of an instrument can be assessed through psychometric properties, including internal consistency. This framework is particularly suited to this work because it establishes fundamental indicators of the instrument's quality before advancing to more complex measurement models, such as factor analysis and item response theory.
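For orientation, the CTT model underlying this reliability argument can be stated in one line; the notation below follows standard textbook treatments rather than anything specific to the paper. An observed score X is modeled as a true score T plus error E, and reliability is the share of observed-score variance attributable to the true score:

X = T + E, \qquad \sigma_X^{2} = \sigma_T^{2} + \sigma_E^{2}, \qquad \rho_{XX'} = \frac{\sigma_T^{2}}{\sigma_X^{2}}

Internal-consistency coefficients such as Cronbach's alpha and McDonald's omega estimate \rho_{XX'} from a single administration of the items.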
Data for this study were collected through survey responses from N = 131 students enrolled in the College of Engineering at a large R1 western university. Internal consistency of the instrument was assessed using Cronbach's alpha and McDonald's omega. Results of the analysis indicate that the survey has good reliability when used with an engineering student population. These results have implications for engineering education researchers and educators, as they introduce a psychometrically grounded instrument for assessing the relationship between AI use and learning. This work represents a foundational step in validating an AI use instrument for engineering students; future research will extend it by examining the instrument's construct validity through factor analysis and item response theory and by assessing its reliability across broader student populations.
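As an illustration only (a minimal sketch, not the authors' analysis code), the two internal-consistency estimates named above could be computed in Python as shown below; the simulated Likert data, item names, and the single-factor model behind the omega estimate are assumptions made for the example.

# Minimal sketch: Cronbach's alpha and a one-factor McDonald's omega for a
# block of survey items. The data frame and item names are hypothetical.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def mcdonald_omega(items: pd.DataFrame) -> float:
    # omega_total from a single-factor model fit to standardized items:
    # (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
    z = (items - items.mean()) / items.std(ddof=1)
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(z)
    loadings = fa.loadings_.ravel()
    uniquenesses = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

# Simulated 5-point Likert responses standing in for one construct (131 respondents, 6 items)
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(131, 6)),
                     columns=[f"q{i}" for i in range(1, 7)])
print(f"alpha = {cronbach_alpha(items):.2f}, omega = {mcdonald_omega(items):.2f}")

In practice, each construct's item block would be analyzed separately; coefficients of roughly 0.70 or above are a commonly cited benchmark for acceptable internal consistency.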