This RIEF project examines how engineering students use external resources, including generative AI, to supplement their studies. Students increasingly rely on online technologies far beyond what their instructors provide. These resources fall into three main categories: (1) problem-solving platforms (such as Chegg), (2) video content platforms (such as YouTube), and (3) generative AI tools (such as ChatGPT). AI technologies are expanding quickly and offer unique capabilities, yet they also have notable limitations that require students to develop new skills and methods for effective use. A further issue with these widely accessible tools is the lack of instructor curation, which places the onus on students to possess the metacognitive abilities needed to discern accurate and valuable information. This study explores three related research questions: “In what ways do engineering students use external resources to assist with problem-solving in their coursework?”; “What motivates engineering students to prefer certain external resources, such as ChatGPT, based on perceived ease of use and utility?”; and “What metacognitive strategies do students apply when collaboratively solving problems using external resources, particularly generative AI?”
In the past year, the team has developed a survey instrument through an iterative process and a series of cognitive interviews to understand students’ perceptions of each item. The instrument was administered to engineering students at a large Midwestern university (n = 149), capturing self-reported patterns of resource use, perceptions of usefulness and ease of use, views on plagiarism in the era of generative AI adoption (AI-garism), and metacognitive strategies employed when problem-solving.
Our descriptive analyses show that online video platforms were the most used technology: nearly 95% of students had tried them, and about 75% used them to supplement their studies. Conversely, only about 40% of students identified as users of AI tools or solution manuals for their studies. Interestingly, the number of AI adopters nearly equaled the number of rejectors, and only about 20% of students reported that they had not tried AI at all. Solution manuals showed a different pattern: fewer students who tried them continued using them, with about 40% rejecting them after trying. AI users focused predominantly on coding-related tasks, with concept clarification also popular. Students did not consider AI particularly useful for career advice or creating presentations. This paper reports these broad statistics along with a deeper dive into the interplay between student attributes and their use of various tools.
The next steps for this project are to identify participants for think-aloud interviews, deploy the survey at other institutions, improve the AI-garism instrument, and conduct cognitive interviews with AI adopters to further understand their metacognitive strategies.
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.