With the improvement in openly available generative artificial intelligence (AI) and its ability to craft human-like text, educators have a valid concern that students will use this technology to complete assignments without learning the subject matter. Also problematic is the limited ability to verify or prove a student's potentially unauthorized or unethical use of AI. While these concerns are legitimate, we believe the best way forward is to focus on educating students on how to use this powerful technology ethically and effectively. Though there is some work establishing best practices for using AI in writing scientific manuscripts [1], how best to utilize AI as an instructional aid for teaching scientific writing is less well understood. For biomedical engineers, technical writing is particularly important: they need to master both engineering and scientific approaches to written communication across multiple formats. We have previously developed evidence-based technical writing modules, tailored to biomedical students, and vertically integrated them throughout our core curriculum [2]. These modules were developed prior to the recent advent of publicly available AI. To develop guidelines on instructional AI use, we first need to understand 1) students' perceptions of the utility and ethics of AI, 2) students' prior and current use of AI, and 3) how proficient AI is at providing students with adequate feedback.
To assess student perceptions and use of AI, pre- and post-course surveys were administered to second- and third-year students in our department enrolled in writing-intensive lab courses (Biomedical Mechanics, Biomaterials, Human Physiology). The anonymous pre-course survey asked students to describe their experience level with AI, how they had previously used AI in college, and how ethical and useful they considered AI to be for an array of common technical writing assignments, rated on a 4-point Likert scale (strongly agree, agree, disagree, strongly disagree). Students were informed by their instructor of record that use of AI on their writing assignments was permitted without penalty provided they cited the name of the AI tool used and described how the tool was used. Data on student use of AI were collected both from the number of writing submissions that cited AI and from the post-course survey. The post-course survey asked students about their use of AI during the semester and whether they felt AI was an effective tool. The total number of students who cited the use of AI was compared with the number who anonymously attested to using AI. A two-sample nonparametric Wilcoxon-Mann-Whitney test will be used to make group comparisons between 1) student perceptions and use on the pre- vs. post-course surveys and 2) student in-semester reporting of AI use vs. anonymously reported AI use on the post-course survey. To determine AI's efficacy in providing appropriate feedback on student writing, we aim to use student submissions and instructor feedback to verify and test how closely AI-generated feedback matches instructor-provided feedback. Ultimately, we believe these insights will enable us to develop guidelines for using AI as an instructional tool that we can incorporate into existing technical writing modules.
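As an illustration of the planned analysis, the Likert-scale group comparisons could be run as in the minimal Python sketch below; the variable names and response values are placeholders rather than study data, and the sketch assumes responses are coded numerically from 1 (strongly disagree) to 4 (strongly agree) and compared with SciPy's mannwhitneyu function.

# Hypothetical sketch of a two-sample Wilcoxon-Mann-Whitney comparison of
# pre- vs. post-course Likert ratings; data shown are placeholders, not study results.
from scipy.stats import mannwhitneyu

pre_course = [4, 3, 3, 2, 4, 3, 2, 3]   # placeholder pre-course ratings (1-4)
post_course = [3, 2, 3, 2, 2, 3, 2, 1]  # placeholder post-course ratings (1-4)

u_stat, p_value = mannwhitneyu(pre_course, post_course, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")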
XXX University's Institutional Review Board has reviewed and approved the procedures of this study. To date, 100 students (32 second-year and 68 third-year students) have enrolled in the study.