Considerable concern has emerged over students' potential use of AI tools to complete assignments in their classes. Reactions in academia have been mixed: some describe such use of AI tools as “cheating,” while others compare it to the use of calculators and see it as an impetus for deeper student learning. To examine these issues, the recently released AI tool ChatGPT was used to respond to actual Discussion Board questions in our online cybersecurity classes. ChatGPT was also asked to write a Python program implementing a backpropagation neural network for XOR. The results were excellent, both for answering the Discussion Board questions and for writing code. Four findings emerged from this effort: 1) ChatGPT does an exceptional job of answering questions and generating code; 2) it is not clear how student submissions generated with AI should be graded; 3) alongside the AI tools themselves, tools have been developed that can detect whether AI was used to generate a student submission, but with a high rate of false positives; and 4) despite the first three findings, students could and should be encouraged to collaborate with AI tools, much as they would collaborate with other students. These results led to four conclusions: 1) ethically, the use of tools such as ChatGPT without acknowledging that they have been used is cheating; 2) it will be impossible to stop students from using tools like ChatGPT, but unacknowledged use can be detected, albeit with a very high percentage of false positives; 3) use of AI tools should be encouraged rather than discouraged; and 4) higher education should focus on new methods and mechanisms for assessing student learning that take advantage of AI tools.
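The XOR task mentioned above is a classic minimal test of backpropagation, since XOR is not linearly separable and therefore requires a hidden layer. As an illustration of the kind of program ChatGPT was asked to produce, the sketch below trains a small sigmoid network on XOR using NumPy; it is a hedged reconstruction, not the code ChatGPT actually generated for this study, and the architecture (2-4-1), learning rate, and epoch count are assumptions chosen for the example.

```python
# Minimal backpropagation network for XOR (illustrative sketch only;
# not the program ChatGPT produced for the paper). Architecture and
# hyperparameters below are assumptions for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for an assumed 2-4-1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.5  # assumed learning rate
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: gradients of mean-squared error
    # through the sigmoid nonlinearities.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # outputs should approach [[0], [1], [1], [0]]
```

A working program of roughly this size is what makes the code-generation finding notable: the network, loss gradient, and training loop are exactly the kind of self-contained exercise commonly assigned in introductory machine-learning coursework.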