2024 Collaborative Network for Engineering & Computing Diversity (CoNECD)

Technology Students' Recognition of Algorithmic Data Bias through Role-Play Case Studies

Presented at Track 6: Technical Session 1: Technology Students' Recognition of Algorithmic Data Bias through Role-Play Case Studies

As algorithms proliferate across domains, their development for analysis, prediction, and generation tasks raises questions about fairness, justice, and inclusion. One primary reason is algorithmic data bias, a common phenomenon across datasets and systems that reflects incomplete or misused data. With the incentive to build generalized systems that can do everything, everywhere, data bias reflects the makeup of the underlying data and how that makeup leads to systematically unfair decisions or outcomes. As future engineers, analysts, and scientists, technology students must be made aware early in their careers of how bias can, at a minimum, degrade the quality of an algorithmic decision and, at worst, harm people and communities. In this paper, we report on a three-year course implementation of interactive role-play case studies designed to raise student awareness of technology ethics and of how ethical principles can affect the recognition of data bias in decision-making processes. Students participated in a semester-long course featuring multiple case studies that addressed different aspects of the social implications of technology implementation. Three cases were designed specifically to discuss algorithmic data bias and its effects on diversity, equity, and inclusion: 1) using facial recognition on a college campus, 2) algorithmic profiling of demographic data for credit risk allocation, and 3) exploring trust between the community, farmers, and artificial intelligence developers in agricultural systems. We analyzed the transcripts from the role-play activities and responses to assignments through the lens of an AI bias framework and associated theories. Students were introduced to algorithmic data bias at multiple stages in the course to highlight how it affects ideation, development, and implementation across systems. Overall, we found that students initially focused on data acquisition and model testing as ways to overcome data bias but, through discussion, came to question and meta-reason about whether a highly complex, often black-box AI system was needed in the first place. Additionally, students showed progressively more nuanced discussions of how data bias affected and altered other ethical principles, including transparency, accountability, and trust.

Authors
  1. Mr. Ashish Hingle, George Mason University

For those interested in:

  • computer science
  • engineering
  • gender
  • information technology
  • professional
  • race/ethnicity
  • socio-economic status
  • undergraduate