As AI-powered conversational agents begin to emerge in engineering education, there is growing interest in understanding how these tools support student learning and how interaction data can inform instructional improvement. This NSF-funded project (Award #2422510, EHR Core Research program) explores a systematic approach to analyzing chatbot conversation logs and translating them into concrete learning design decisions for an undergraduate engineering experimentation course at a mid-sized R1 university.
We deployed a purpose-built Socratic troubleshooting chatbot within a seven-week laboratory module of the course, which covers metrology, sensing, and data acquisition. The agent was configured to promote divergent thinking during troubleshooting by offering multiple hypotheses rather than single solutions, prompting students to test alternatives and reflect on outcomes. Over two academic terms, we captured 80 distinct student-chatbot sessions (884 coded questions) through secure turn-level logging with de-identification, and analyzed them using NVivo 14 qualitative data analysis software.
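As a point of reference, the sketch below shows one minimal way such a de-identified turn-level record could be written. The JSONL schema, field names, and salted-hash scheme are illustrative assumptions for this example, not the project's actual pipeline.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical per-term salt; in practice this would be kept out of the logs.
SALT = "term-specific-secret"

def deidentify(student_id: str) -> str:
    """Replace a raw student identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:12]

def log_turn(session_id: str, student_id: str, role: str, text: str) -> str:
    """Serialize one chat turn as a JSON line carrying no direct identifiers."""
    record = {
        "session": session_id,
        "student": deidentify(student_id),
        "role": role,  # "student" or "agent"
        "text": text,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```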
Our learning analytics framework centers on a codebook aligned with stages of engineering work: Troubleshooting (330 questions), Scientific and Engineering Concepts (256), Integration (86), Analysis and Simulation (119), Design (40), Optimization (13), and Process Documentation (40). We extended this base framework with three diagnostic indicators that characterize the quality of troubleshooting support: divergent options (agent replies proposing multiple pathways), option uptake (student selection and follow-through), and root-cause heuristics (structured diagnostic moves like isolating variables, checking configurations, and comparing to expected ranges). These indicators are straightforward to compute and pedagogically interpretable, bridging the persistent gap between analytics outputs and instructor action.
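To illustrate how straightforward that computation is, here is a minimal sketch of the three indicators as episode-level shares. The `CodedTurn` schema and its field names are assumptions introduced for this example, standing in for the project's coded exports.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedTurn:
    """One coded chat turn; a hypothetical stand-in for the coded export."""
    session: str
    role: str                        # "student" or "agent"
    divergent_options: bool = False  # agent reply proposes >= 2 pathways
    option_uptake: bool = False      # student follows through on an option
    heuristics: List[str] = field(default_factory=list)  # e.g. ["isolate_variable"]

def indicator_rates(turns: List[CodedTurn]) -> dict:
    """Share of sessions in which each indicator appears at least once."""
    sessions = {t.session for t in turns}

    def share(pred) -> float:
        return len({t.session for t in turns if pred(t)}) / len(sessions) if sessions else 0.0

    return {
        "divergent_options": share(lambda t: t.role == "agent" and t.divergent_options),
        "option_uptake": share(lambda t: t.role == "student" and t.option_uptake),
        "root_cause_heuristics": share(lambda t: bool(t.heuristics)),
    }
```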
Key findings reveal that troubleshooting dominated usage (37% of questions), with substantial demand for conceptual clarification (29%). Many sessions involved sustained diagnostic work: 36% included ten or more student turns, indicating extended back-and-forth rather than simple fact-lookup. A spot audit of twelve troubleshooting micro-episodes showed divergent options in 75% of cases and option uptake in 58%, with configuration/gain-setting heuristics appearing in 58% and expected-range reasoning in 50%.
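Under assumed input shapes (a flat list of category labels and a per-session count of student turns), the reported shares could be recomputed along these lines; `usage_summary` is a hypothetical helper, not part of the study's tooling.

```python
from collections import Counter
from typing import Dict, List, Tuple

def usage_summary(coded_questions: List[str],
                  student_turns_per_session: Dict[str, int],
                  min_turns: int = 10) -> Tuple[Dict[str, float], float]:
    """Category shares of coded questions and the share of sustained sessions.

    coded_questions: one category label per coded student question
    student_turns_per_session: session id -> number of student turns
    """
    counts = Counter(coded_questions)
    total = sum(counts.values())
    shares = {cat: n / total for cat, n in counts.most_common()}
    sustained = sum(1 for n in student_turns_per_session.values() if n >= min_turns)
    return shares, sustained / len(student_turns_per_session)

# For example, 330 Troubleshooting questions out of 884 coded questions
# yields a share of about 0.37, matching the reported 37%.
```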
Most critically, we demonstrate how these analytics can inform specific course improvements. Based on interaction patterns, we propose actionable changes including pre-lab preparation activities, visual scaffolding materials, and protocols for allocating teaching assistant support, though these interventions have not yet been implemented.
This work contributes a framework that enables instructors to use conversational traces as a design resource rather than merely a log of help-seeking behavior, directly addressing calls in the learning analytics literature for tools that close the gap between data and pedagogy.
http://orcid.org/0009-0000-3081-6367
Worcester Polytechnic Institute
[biography]
http://orcid.org/0000-0001-7905-421X
Worcester Polytechnic Institute
[biography]