The development of novel data analytics, machine learning (ML), and artificial intelligence (AI) tools has reshaped biomedical engineering. Applications of ML and AI in medical devices have prompted the FDA to release action plans for regulating AI/ML-based Software as a Medical Device (SaMD). Even in areas seemingly unrelated to ML, applications such as image analyzers and text summarizers have seeped into the daily routines of biomedical engineers. However, because ML algorithms are relatively opaque, ML systems can exhibit biased behaviors that go unnoticed; such biases have appeared in both medical devices and general-purpose ML products such as ChatGPT. In fact, a sizable portion of the FDA AI/ML SaMD white paper addresses the need to mitigate bias in medical devices. The next generation of biomedical engineers, who will both use and design ML-enabled systems, will need not only ML literacy but also the ability to address bias in ML systems. Unfortunately, machine learning curricula in biomedical engineering (BME) are already rare, and learning content on addressing bias in ML systems is rarer still.
The BME department at UC Davis recently established a course titled Machine Learning for Biomedical Engineers. To educate our biomedical engineers about bias in ML, we designed and implemented a one-week learning module on diversity, equity, and inclusion (DEI) problems in the data collection process for ML products. The module concluded with a hands-on exercise in which students trained a machine learning algorithm on a blind dataset, only to find that the trained algorithm associated career words with men and family words with women. Students were warned about the contents of the module before instruction, especially before the hands-on exercise. A short seven (7)-item survey was issued to the students after the module was completed. The survey contained five (5) Likert-scale questions (1: strongly disagree; 6: strongly agree) on several aspects of DEI issues in ML systems. The survey also contained two short-answer questions on the clearest and muddiest points covered in the module. The survey was conducted on paper, and no demographic information was collected. To ensure anonymity, the instructor delegated the transcription and processing of the data to research assistants who were not enrolled in the course. This work was designated as Non-Human Subject Research.
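For readers unfamiliar with how word associations can be surfaced from a trained model, the following is a minimal sketch of a WEAT-style association test (in the spirit of Caliskan et al.), which is one common way to reveal the career/family gender associations described above. The paper does not specify the algorithm or dataset used in the exercise, so the word lists and embedding vectors below are hypothetical placeholders; in practice the vectors would come from the model students trained on the blind dataset.

```python
# Illustrative sketch (not the course's actual exercise): a WEAT-style
# association test over word embeddings. A positive score means the word
# sits closer to the "male" attribute set than the "female" one.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Hypothetical 4-dimensional embeddings: career words happen to lie near the
# male attribute vectors and family words near the female ones, mimicking
# bias absorbed from the training data.
emb = {
    "executive": np.array([0.9, 0.1, 0.2, 0.0]),
    "salary":    np.array([0.8, 0.2, 0.1, 0.1]),
    "home":      np.array([0.1, 0.9, 0.0, 0.2]),
    "parents":   np.array([0.2, 0.8, 0.1, 0.1]),
    "he":        np.array([0.9, 0.0, 0.1, 0.1]),
    "man":       np.array([0.8, 0.1, 0.2, 0.0]),
    "she":       np.array([0.0, 0.9, 0.1, 0.1]),
    "woman":     np.array([0.1, 0.8, 0.0, 0.2]),
}
male   = [emb["he"], emb["man"]]
female = [emb["she"], emb["woman"]]

for word in ["executive", "salary", "home", "parents"]:
    score = association(emb[word], male, female)
    lean = "male" if score > 0 else "female"
    print(f"{word:>9}: association = {score:+.3f} (leans {lean})")
```

Running this prints positive associations for the career words and negative ones for the family words, which is the pattern students would discover in their own trained model.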
Nineteen (19) valid responses were collected from the 22 students enrolled in the class in Spring 2024. Students reported high confidence on all Likert-scale measures, with the highest average confidence reported on providing equal opportunities through ML (5.28/6) and the lowest on the ability to take actions to reduce bias in ML (4.74/6). Students reflected that tackling the "right" problems with ML and collecting data with equity in mind were the clearest points from the module; however, many students sought more clarity on how they could convince engineering teams to practice equitable ML, especially when the team lacks diversity or chooses to accept societal bias in the data. We intend to further refine the module and disseminate it through our platform to the broader BME education community.
http://orcid.org/0009-0003-2135-8220
University of California, Davis