The rapid integration of artificial intelligence (AI) across nearly every sector of society, from healthcare and education to manufacturing and government, has created both transformative opportunities and pressing ethical challenges. Engineers are often at the center of these sociotechnical shifts: they design, adopt, and adapt AI systems while simultaneously influencing how these technologies reshape professional practice and public life. As AI becomes deeply embedded in daily work across industries and disciplines, the next generation of engineers will enter a workforce where navigating AI tools is not optional but fundamental. Engineering educators face a dual responsibility: preparing students to leverage AI effectively while also helping them understand its broader social and professional implications. Biomedical engineering (BME) in particular sits at a critical intersection, where AI applications promise advances in diagnosis, treatment, and patient care but also raise complex questions of bias, accountability, and professional responsibility.
Many faculty remain uncertain about how to integrate AI into their teaching, and especially how to address its ethical dimensions. This uncertainty may be compounded in engineering education more broadly, where ethics instruction varies significantly across programs and is often treated as supplementary rather than integral to the field. While prior research has examined student use of AI and how it shapes ethical reflection, far less is known about faculty perceptions, particularly within disciplinary contexts. For faculty to lead instruction on AI ethics and responsibility effectively, we need to understand both student and faculty perspectives on these topics.

This work investigates two research questions. First, what ethical concerns and perceived responsibilities do biomedical engineering students and faculty associate with the use of AI in their academic and professional work? Second, how do student and faculty perspectives compare in their views on the ethical implications and responsibilities of AI use? To investigate these questions, we administered confidential surveys to sophomore-level BME students and to faculty. The surveys included open-ended questions about participants' views on AI use and ethical responsibilities. An inductive thematic approach was used to identify patterns in participants' responses, with several rounds of coding to surface central themes.

Students and faculty identified overlapping ethical concerns regarding AI use, centering on academic integrity, the reliability of AI outputs, and the impact of AI on learning. However, they differed in how they framed the stakes and responsibilities involved. Students emphasized learning-oriented responsibilities and individual academic concerns, while faculty perspectives reflected broader professional, ethical, and institutional responsibilities.
Understanding and comparing these perspectives will support the development of future instructional practices that help prepare engineering students for an AI-integrated healthcare landscape.
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.