2024 ASEE Annual Conference & Exposition

Design and Development of Survey Instrument to Measure Engineering Students’ Perspectives on the Use of ChatGPT

Presented at Educational Research and Methods Division (ERM) Technical Session 22

Chat Generative Pre-Trained Transformer (ChatGPT) is a language model created by researchers at OpenAI. It is an artificial intelligence system that generates human-like text responses to a wide range of prompts and questions. ChatGPT offers several advantages, including 24/7 availability, quick answers to questions, help finding research-related information, and assistance with writing code. Despite these advantages, ChatGPT has limited contextual understanding of a given topic, which can lead to incorrect or irrelevant responses. It can also be biased by the data used to train it, which can lead to unfair or inaccurate feedback. ChatGPT can additionally pose security risks, which may lead to data breaches and the leaking of students' sensitive information. With the rising popularity of ChatGPT, as with any other online resource, over-reliance on it could lead to a decline in independent problem-solving skills and critical thinking in academic settings.

This research project aims to understand students’ perspectives on the use of ChatGPT in engineering. The topic is relevant, timely, and important, as ChatGPT has created considerable stir in education. By exploring students’ experiences and perspectives, we aim to shed light on different aspects of ChatGPT usage and glean critical insights. The objective of this study was to design, develop, and validate a survey instrument that measures engineering students’ perceptions of the use of ChatGPT. To meet this objective, a survey instrument was designed with five dimensions: learning tool (10 items), trustworthiness (5 items), ethical considerations (5 items), ease of access (6 items), and concerns with ChatGPT (6 items). To collect evidence of content and face validity, the survey instrument was reviewed by three content experts and three potential participants, and it was revised using feedback from both groups. The data for this study were collected in summer and fall 2023, and 323 responses were included in the analysis. Exploratory factor analysis (EFA) revealed four factors: learning tool, trustworthiness, ease of access, and concerns with ChatGPT; the ‘ethical considerations’ dimension was recommended for removal after the EFA. Cronbach’s alpha values ranged from 0.62 to 0.82, suggesting acceptable internal consistency reliability among the items.
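The internal consistency reliability reported above can be estimated with Cronbach’s alpha, computed per factor from the item-response matrix. The paper does not publish its analysis code, so the function below is only an illustrative sketch of the standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores):

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of
    Likert-scale responses belonging to one survey dimension."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                              # number of items
    item_vars = x.var(axis=0, ddof=1).sum()     # sum of per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical example: three respondents answering two perfectly
# consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values near 0.7 or above are conventionally read as acceptable; the 0.62–0.82 range reported here spans the questionable-to-good region depending on the factor.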

Authors
  1. Mr. Mohammad Faraz Sajawal, University of Oklahoma
