Engineering processes and products are value-laden, yet engineering science courses rarely create explicit space for students to articulate what they value about their learning, how they negotiate authority with AI tools, or how they assess their own knowledge production. This work-in-progress examines how structured reflection combined with ungrading might develop critical AI literacy and sociotechnical fluency in a mechanics of materials course.
Within a traditional mechanics of materials course, three Materials Futures Labs asked students to design speculative materials for Africanfuturist scenarios set in 2125: a peace monument for Himba-Meduse reconciliation, an orbital habitat for a pan-African space agency, and astrolabes for a Namib-based design collective. Drawing on Nnedi Okorafor's Binti tetralogy, we ground these modules in Africanfuturism, which centers African cultures, technologies, and futures. The labs positioned all students as legitimate designers working within non-Western imaginaries, disrupting typical engineering science assumptions about whose knowledge counts and what problems are worth solving. The labs also used ungrading: rather than receiving grades, students completed structured self-assessments addressing three questions: What are you most proud of? To what degree did AI influence your submission, with examples of AI's help versus your own thinking? What grade do you deserve, and why? These reflection questions functioned as a pedagogical intervention, teaching students to interrogate their relationship with intellectual authority and emerging tools. The following research question guides this WIP:
How might Africanfuturist scenarios, ungrading, and required reflection on AI use work together to develop critical engagement with authority, tools, and values in engineering education?
Data sources include self-assessment reflections (n=40+), student-created product sheets (n=40+), and peer review comments (n=120+). Analysis involves initial inductive coding of a subset of reflections to identify patterns in how students describe AI use, articulate pride/values, and justify self-assigned grades; discourse analysis of product sheets examining how students claim expertise and describe their materials; and pattern analysis of peer review quality.
Initial analysis suggests tensions that complicate simple narratives about AI literacy and ungrading:
Transparency paradox: Students reported AI use with surprising frankness, ranging from "not at all" to strategic validation, ideation partnership, and task automation. However, students who used AI "slightly" or "somewhat" demonstrated more metacognitive sophistication about the human-AI boundary than those at either extreme (heavy use or none), suggesting that ungrading creates permission for honesty but does not guarantee uniform critical engagement.
Values vs. optimization: Pride statements emphasized collaboration, cultural significance, and ethical imagination. Yet preliminary discourse analysis suggests that in their actual designs students still primarily optimized for technical metrics, with cultural and ethical considerations appearing more in futures-thinking sections than in material specifications.
This work suggests how reflection questions can make visible what typically remains hidden in engineering education: the entanglement of values, tools, cultural contexts, and technical decisions. By requiring AI transparency, the pedagogy may cultivate sociotechnical fluency, helping students recognize knowledge production as always mediated, never purely technical. This has democratic stakes: engineers who can interrogate authority rather than passively accept expertise may be better prepared for accountability to diverse communities. Preliminary findings suggest that structured reflection creates conditions for this critical stance.
https://orcid.org/0000-0002-1480-8209
Arizona State University
[biography]
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.