This Complete Research paper will present the findings of a survey conducted at the beginning of the Fall 2023 semester assessing first-year engineering students' attitudes toward ChatGPT.
Engineering students need to be able to find, evaluate, integrate, and ethically use information sources to solve problems during the design process [1], a set of skills known as information literacy [2]. While there have been many effective interventions for integrating information literacy instruction into engineering curricula [3], the information literacy framework has not yet been revised to address the use of generative artificial intelligence (GenAI). In recent years, GenAI technologies have made significant advances and are increasingly integrated into various products and services, either explicitly or behind the scenes [4]. Consequently, there is an urgent need to introduce GenAI literacy in first-year engineering programs so that students can engage with these technologies intelligently, both during their academic journey and in their professional careers [5]–[7]. This inclusion is important because students may not fully understand where the information presented by GenAI tools is sourced, how authoritative or accurate that information is, or how to appropriately attribute information obtained from GenAI.
Therefore, the first-semester engineering course at a large Midwestern university was updated in 2023 to include lessons and projects centered on ChatGPT, a prominent and accessible large language model (LLM) chatbot. Before any instruction took place, a survey was administered to examine students' experience and understanding. Questions asked about students':
• Prior experience with ChatGPT,
• General attitudes toward ChatGPT, using an adapted version of the General Internet Attitudes Scale (GIAS) [8], including subscales on affect, exhilaration, social benefit, and detriment,
• Trust in the correctness of ChatGPT responses to sample prompts, gauged by likelihood ratings, and
• Responses to ethical questions related to ChatGPT usage in academic and professional engineering scenarios.
A total of 453 students out of 487 (a 93% response rate) participated in the survey. The findings showed that only 31% of the first-year cohort had no prior experience with ChatGPT. Among the remaining students, 32% reported moderate experience or higher, described as having “used it several times with a purpose in mind.” The survey also revealed a wide variety of attitudes toward ChatGPT. For instance, when asked about the ethics of “using ChatGPT for writing programming code in an engineering workplace,” the distribution of responses was nearly flat (strongly platykurtic), indicating very high variance. The paper will describe the means and variances for each scale in detail, as well as the relationships between students' experience with ChatGPT and their attitudes.
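As an illustration of the descriptive statistics reported above, the short Python sketch below shows how the mean, sample variance, and excess kurtosis of a single Likert-style item could be computed. The response counts shown are hypothetical placeholders, not the survey's data; a negative excess kurtosis indicates a platykurtic (flatter-than-normal) distribution like the one described.

    import numpy as np
    from scipy.stats import kurtosis

    # Hypothetical counts of 1-5 ratings for one ethics item (not survey data);
    # a roughly uniform spread mimics the flat distribution described above.
    ratings = np.array([1, 2, 3, 4, 5])
    counts = np.array([90, 92, 88, 91, 92])
    responses = np.repeat(ratings, counts)

    mean = responses.mean()
    variance = responses.var(ddof=1)  # sample variance
    excess_kurt = kurtosis(responses, fisher=True, bias=False)  # < 0 => platykurtic

    print(f"mean = {mean:.2f}, variance = {variance:.2f}, excess kurtosis = {excess_kurt:.2f}")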
These results establish a valuable baseline for understanding the attitudes and ethical perspectives of first-year engineering students toward AI, underscoring the necessity of incorporating GenAI literacy into engineering curricula.
Keywords: first-year, generative AI, survey, ethics, information literacy
References
[1] M. Fosmire, “Making Informed Decisions: The Role of Information Literacy in Ethical and Effective Engineering Design,” Theory Into Practice, vol. 56, no. 4, pp. 308–317, Oct. 2017, doi: 10.1080/00405841.2017.1350495.
[2] Association of College and Research Libraries, “Framework for Information Literacy for Higher Education,” Association of College & Research Libraries (ACRL). Accessed: Oct. 30, 2023. [Online]. Available: https://www.ala.org/acrl/standards/ilframework
[3] M. Phillips, A. Van Epps, N. Johnson, and D. Zwicky, “Effective Engineering Information Literacy Instruction: A Systematic Literature Review,” The Journal of Academic Librarianship, vol. 44, no. 6, pp. 705–711, Nov. 2018, doi: 10.1016/j.acalib.2018.10.006.
[4] S. Lee, M. Lee, and S. Lee, “What If Artificial Intelligence Become Completely Ambient in Our Daily Lives? Exploring Future Human-AI Interaction through High Fidelity Illustrations,” International Journal of Human–Computer Interaction, vol. 39, no. 7, pp. 1371–1389, Apr. 2023, doi: 10.1080/10447318.2022.2080155.
[5] I. Celik, “Exploring the Determinants of Artificial Intelligence (AI) Literacy: Digital Divide, Computational Thinking, Cognitive Absorption,” Telematics and Informatics, vol. 83, p. 102026, Sep. 2023, doi: 10.1016/j.tele.2023.102026.
[6] D. T. K. Ng, J. K. L. Leung, S. K. W. Chu, and M. S. Qiao, “Conceptualizing AI literacy: An exploratory review,” Computers and Education: Artificial Intelligence, vol. 2, p. 100041, Jan. 2021, doi: 10.1016/j.caeai.2021.100041.
[7] Y. Yi, “Establishing the concept of AI literacy,” Jahr – European Journal of Bioethics, vol. 12, no. 2, Art. no. 2, 2021.
[8] M. Joyce and J. Kirakowski, “Measuring Attitudes Towards the Internet: The General Internet Attitude Scale,” International Journal of Human–Computer Interaction, vol. 31, no. 8, pp. 506–517, Aug. 2015, doi: 10.1080/10447318.2015.1064657.