2026 ASEE Annual Conference & Exposition

Engineering Students’ Perceptions of Plagiarism and "AI-giarism"

Presented at Computers in Education (CoED): AI in Education (1 of 9) -- M308A

Plagiarism, the act of presenting someone else’s work as one’s own without appropriate credit, has been a persistent issue in higher education. Although the concept has a straightforward definition, it takes many forms (e.g., direct, self, and mosaic plagiarism) and is consequently understood differently among students. One form discussed in the literature is “ghostwriting,” the practice of contracting someone else to complete a writing task on one’s behalf. Before the release of technology powered by large language models, students needed a specific service or peer network to engage in this kind of academic misconduct. The introduction of generative AI systems has complicated the issue further, effectively democratizing access to virtually free ghostwriting support for assignments. With calls for educators to incorporate different aspects of AI literacy and to integrate these tools into students’ workflows, confusion about appropriate use has understandably emerged, just as it has with plagiarism. To establish a common definition of inappropriate AI use in the context of plagiarism, Chan (2025) defines “AI-giarism” as the “unethical practice of using artificial intelligence technology, particularly generative language models, to generate content that is plagiarised either from original human-authored work or directly from AI-generated content, without appropriate acknowledgement of the original sources or AI’s contribution” (p. 8089). For this full paper, we posed the following research question: “How do engineering students define plagiarism in the context of generative AI?”

We addressed our research question using a quantitative research design, drawing on the results of a larger questionnaire-based study (n = 149) conducted at a large public Midwestern institution that focused on engineering students’ use of external resources for their studies, specifically solution manuals, online videos, and generative AI. Most relevant to this work, the questionnaire included Chan’s (2025) instrument probing what students consider to be forms of plagiarism and AI-giarism. For example, students were presented with an item such as, “The student employed AI technologies only to assist with the checking of grammar for their assignment” (Chan, 2025, p. 8097), and rated the extent to which they considered that act a form of plagiarism on a seven-point Likert scale. The first item in the instrument represents the proposed most egregious form of AI-giarism, the last item is not a form of AI-giarism at all, and the items in between decrease progressively in severity.

To analyze the responses, we relied primarily on descriptive statistics. Because the instrument is designed to be interpreted item by item rather than as a latent construct, we did not pursue factor analysis. We segmented the data by categories such as major and year in school. We also separated respondents into three groups: those who use generative AI (adopters), those who have tried these tools but have since stopped using them (rejectors), and those who report not having used generative AI for their coursework (candidates). We then calculated descriptive statistics for each subgroup.
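The item-level subgroup analysis described above can be illustrated with a minimal sketch (not the authors’ actual code): hypothetical Likert ratings for one instrument item are grouped by generative-AI usage status, and per-group descriptive statistics are computed. The column names and data are invented for illustration.

```python
import pandas as pd

# Hypothetical responses: 7-point Likert ratings (1 = not plagiarism,
# 7 = definitely plagiarism) for one AI-giarism item, tagged with each
# respondent's generative-AI usage group. Values are illustrative only.
responses = pd.DataFrame({
    "usage_group": ["adopter", "adopter", "rejector",
                    "rejector", "candidate", "candidate"],
    "item_1_rating": [6, 5, 5, 4, 7, 6],
})

# Item-by-item descriptive statistics per subgroup; no latent-construct
# modeling, mirroring the item-level treatment described in the text.
summary = (responses
           .groupby("usage_group")["item_1_rating"]
           .agg(["mean", "median", "std"]))
print(summary)
```

The same pattern extends to segmenting by major or year in school by grouping on those columns instead.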

Our findings are consistent with the limited literature in this area. Student perceptions of AI-giarism are mixed, with little agreement on what counts as academic misconduct, especially for more nuanced forms of AI-giarism. Adopters and rejectors drew similar boundaries around where plagiarism occurs, while candidates held a more conservative stance on what constitutes plagiarism. Interestingly, the data also showed that students in their first year of study had stricter boundaries than those in subsequent years, whose views on plagiarism were progressively more relaxed.
We expect this work on AI-giarism to help refine the instrument and support broader investigations of student perspectives on the appropriate uses of generative AI for academic tasks.

Authors
  1. Aarohi Shah University of Cincinnati
  2. Maria Malik University of Cincinnati
Note

The full paper will be available to logged-in and registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.