As artificial intelligence (AI) becomes increasingly embedded in society, building AI literacy among university students is essential for informed engagement with both technological advances and the ethical challenges they raise across disciplines. This study explores how game theory, particularly the Prisoner's Dilemma, can serve as a pedagogical tool to deepen computer science students' understanding of trust, fairness, and ethical decision-making in AI systems. Through semi-structured interviews with 36 undergraduate computer science students at a research-intensive university in Southeast Asia, the research examines how students interact with AI models designed to either cooperate or defect, and how single versus iterated game trials influence their perceptions of AI ethics. Incorporating game theory into AI literacy education in this way encourages students to critically evaluate key ethical dimensions of AI, including fairness, transparency, and bias. The findings highlight the potential of game-theory-based models to stimulate deeper reflection on the moral and ethical responsibilities of AI developers, aligning with broader themes of public welfare and the common good in engineering education. Integrating game theory into AI literacy can thus cultivate a more ethically conscious and critically engaged generation of future engineers, contributing to ongoing efforts to promote moral responsibility and public welfare in the development of AI technologies.
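To make the experimental setup concrete, the single versus iterated trials described above can be sketched as a Prisoner's Dilemma against an AI agent fixed to always cooperate or always defect. This is a minimal illustrative sketch, not the paper's actual instrument: the payoff values use the canonical matrix, and the function names are assumptions introduced here for illustration.

```python
# Canonical Prisoner's Dilemma payoffs (assumed; the study's exact values
# are not given in the abstract): (student payoff, AI payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # student cooperates, AI defects
    ("D", "C"): (5, 0),  # student defects, AI cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def play(student_moves, ai_strategy):
    """Score a sequence of rounds against an AI that always plays
    ai_strategy ('C' to cooperate, 'D' to defect)."""
    student_total = ai_total = 0
    for move in student_moves:
        s, a = PAYOFFS[(move, ai_strategy)]
        student_total += s
        ai_total += a
    return student_total, ai_total

# Single trial: a trusting student is exploited by a defecting AI.
print(play(["C"], "D"))            # (0, 5)

# Iterated trial: the student can adapt after observing defection.
print(play(["C", "D", "D"], "D"))  # (2, 7)
```

The contrast between the two calls mirrors the study's design question: a one-shot game rewards exploitation, while repetition lets students observe the AI's behavior and revise their trust, which is what makes the iterated condition a useful prompt for discussing fairness and transparency.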
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.