Promoting diversity, equity, and inclusion (DEI) offers multiple benefits to the academic world. There are many approaches to advancing DEI, one of which is mindful use of language: thoughtful language fosters inclusivity and contributes to the broader goal of creating an inclusive and equitable academic environment. In particular, the American Psychological Association (APA) has published bias-free language guidelines that offer practical suggestions and highlight examples of biased language commonly found in academic writing. Against this backdrop, the engineering education community increasingly recognizes language use as an essential component of creating an inclusive and equitable learning environment. Although the influence of language on educational experiences has been the subject of several scholarly papers, no research has explicitly examined language use patterns in the field of engineering education or the potential negative effects of biased language.
In light of this, the present study integrates two conceptual frameworks: implicit bias theory and academic literacy theory. This approach allows for a detailed investigation into biased language use trends within engineering education research, as well as an understanding of how these trends diverge from the field’s goals of diversity and inclusion. Implicit bias theory examines unconscious attitudes and stereotypes that subtly but significantly influence language use in academic settings. Meanwhile, academic literacy theory sheds light on the conventions and practices of communication in academic writing.
To determine what constitutes biased language, we developed a keyword-based model in accordance with the latest APA 7th edition language guidelines. We identified 85 keywords from the guide and grouped them into seven categories. The study analyzed 5,237 conference proceedings published at the American Society for Engineering Education Annual Meetings from 2020 to 2022. We implemented the keyword-based model in R and applied it to detect instances of biased language within the proceedings. Our analysis revealed a slight decrease in biased language usage over time, with 359 unique instances detected in 2020, compared to 283 instances in 2022. The three most persistent categories of biased language over the three years were Gender, Racial and Ethnic Identity, and Socioeconomic Status. Specifically, the top five most frequently used biased terms were "Females/Males," "Caucasian," "Achievement Gap," "The Poor," and "The Elderly."
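The core of a keyword-based model of this kind is case-insensitive, whole-word matching of category-grouped terms against document text. The sketch below illustrates the general approach in Python (the study itself used R, and the full 85-keyword list is not reproduced here; the few terms and category names shown are drawn from the abstract for demonstration only).

```python
import re
from collections import Counter

# Illustrative subset only: the actual model uses 85 keywords in seven
# categories derived from the APA 7th edition guidelines.
BIAS_KEYWORDS = {
    "Gender": ["females", "males"],
    "Racial and Ethnic Identity": ["caucasian"],
    "Socioeconomic Status": ["the poor", "achievement gap"],
    "Age": ["the elderly"],
}

def count_biased_terms(text):
    """Count case-insensitive, whole-word keyword matches per category."""
    counts = Counter()
    lowered = text.lower()
    for category, terms in BIAS_KEYWORDS.items():
        for term in terms:
            # \b boundaries prevent partial-word hits (e.g. "males" in "females").
            pattern = r"\b" + re.escape(term) + r"\b"
            counts[category] += len(re.findall(pattern, lowered))
    return counts

sample = "The study compared females and males; the achievement gap persisted."
print(count_biased_terms(sample))
```

Applied across a corpus of proceedings, per-document counts like these can be aggregated by year and category to produce the kinds of trend comparisons reported above.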
The full paper will be available to logged-in and registered conference attendees once the conference starts on February 9, 2025, and to all visitors after the conference ends on February 11, 2025.