Mon., June 23, 2025, 9:15 AM to 10:45 AM
001 - Exhibit Hall 220 C, Palais des congrès de Montréal
For those interested in Academia-Industry Connections, Advocacy and Policy, Broadening Participation in Engineering and Engineering Technology, New Members, and Pre-College
AI Chatbot for Enhancing Troubleshooting in Engineering Labs
This study investigates the effectiveness of artificial intelligence (AI) as a learning tool in an engineering lab setting. The authors have developed an AI chatbot for engineering experimentation classes at an R1 research-oriented institution in the northeastern United States. These classes emphasize developing engineering metrology skills through electronic instrumentation and hands-on laboratory work. In this context, students frequently rely on the instructional team, including teaching assistants (TAs), for guidance on lab procedures, equipment setup, and troubleshooting. This reliance can overwhelm TAs with repetitive questions, often resulting in inconsistent responses. These issues impact the overall learning experience and reduce the efficiency of instructional support. To address this, we designed an AI chatbot that provides students with immediate, on-demand support to enhance their learning experience while reducing the workload on TAs.
The chatbot is built using OpenAI's large language model (LLM) and is tailored to use problem-solving frameworks such as issue trees and first principles. The issue tree framework breaks down complex problems into smaller, more manageable components, offering multiple potential solutions for each part. This approach allows students to explore various solution pathways systematically. On the other hand, the first principles method guides students toward a deeper understanding of concepts by breaking problems into their fundamental elements. The chatbot also uses Socratic questioning to engage students, prompting them to think critically and build upon their existing knowledge. This method encourages students to find solutions independently rather than providing direct answers, fostering their problem-solving skills and enhancing their grasp of engineering concepts.
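As a rough illustration of this design (not the authors' actual implementation; the model name, prompt wording, and lab context below are assumptions), a Socratic tutoring behavior of this kind can be encoded in the system prompt of an LLM API call:

```python
# Minimal sketch of a Socratic lab-troubleshooting chatbot using the OpenAI API.
# The model choice, prompt wording, and lab excerpt are illustrative assumptions,
# not the authors' actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a teaching assistant for an engineering metrology lab. "
    "Never give the answer directly. Use Socratic questioning: break the "
    "student's problem into an issue tree of smaller sub-problems, ask one "
    "guiding question at a time, and push the student toward first principles."
)

def ask_chatbot(student_question: str, lab_context: str) -> str:
    """Answer a student's lab question, grounded in instructor-written procedures."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "system", "content": f"Lab procedure excerpt:\n{lab_context}"},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(ask_chatbot("My oscilloscope shows a flat line. What's wrong?",
                  "Lab 3: Measuring AC signals with the oscilloscope..."))
```

Grounding the second system message in instructor-created procedure text is one way such a chatbot can keep its answers tied to course material rather than generic knowledge.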
The AI chatbot supports activities including understanding lab procedures, setting up electronic instrumentation, and troubleshooting errors. Its responses are grounded in contextual data comprising the detailed procedures for the different laboratory assignments available to all students in the class, so information about the assignments comes directly from instructor-created sources.
To evaluate the chatbot's effectiveness, we adopt a mixed-methods approach, including qualitative student feedback, surveys, and an analysis of student-chatbot interactions. Students complete surveys at the end of the term, capturing data on their experiences, learning outcomes, and satisfaction with the chatbot. Analyzing student-chatbot dialogues provides insights into the chatbot's ability to parse course material, employ Socratic questioning, and encourage students to explore various problem-solving strategies.
Based on the chatbot interaction data and students' responses, this AI chatbot enhanced the educational experience by promoting critical thinking and independent learning while reducing the workload of TAs. The success of this AI-driven tool suggests that the chatbot concept can be extended to other engineering lab courses and student projects, offering consistent, high-quality, on-demand support across disciplines and reducing reliance on real-time TA intervention.
Authored by
Marshall Ismail (Worcester Polytechnic Institute), Devin Kachadoorian (Worcester Polytechnic Institute), Sahil Mirani (Worcester Polytechnic Institute), Dr. D. Matthew Boyer (Clemson University), Mr. Tim Ransom (Clemson University), and Prof. Ahmet Can Sabuncu (Worcester Polytechnic Institute)
As computers become ever more significant in today's society, everyone needs to be computer literate. Computer literacy is important for many reasons, such as operating computers and computer applications, keeping up with technological change, broadening avenues for employment, problem-solving, and enhancing communication. This research investigated whether and how gender, age, working hours per week, and attitudes toward computer usage impact student achievement in Computer Literacy courses. A non-experimental quantitative design was used to collect and statistically analyze data from Intro to Computer Technology and Micro Computer Applications in Business classes during the Spring 2023 and Summer 2023 terms. Statistical methods used in the data analysis included descriptive statistics, the Pearson chi-square test, and binary logistic regression. The results demonstrate a statistically significant relationship between students' attitudes toward computer usage and their achievement in Computer Literacy courses: a more positive attitude toward computer usage was correlated with higher passing grades. However, no statistically significant difference in achievement was found by age, gender, or working hours per week. The logistic regression further yielded an odds ratio of 1.034 for attitude, meaning the odds of passing a Computer Literacy course increased by a factor of 1.034 for each unit increase in attitude score. Computer literacy education appears to have a positive effect on students' success in college, future employment, and 21st-century living. The research provides suggestions and implications that add to the repository of knowledge in the domain of education management, and sharing its findings with educators can help students succeed in Computer Literacy courses. Recommendations for future research are addressed in this paper to provide a deeper understanding of the findings and to further guide how to assist students in mastering Computer Literacy courses.
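For readers unfamiliar with interpreting odds ratios, the following is a minimal sketch of how such a binary logistic regression might be fit in Python; the file and column names are hypothetical, not the study's data:

```python
# Hypothetical sketch of a binary logistic regression like the study's: pass/fail
# outcome predicted by attitude score, age, gender, and weekly working hours.
# The CSV file and column names are illustrative assumptions, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("literacy_survey.csv")  # hypothetical data file

model = smf.logit("passed ~ attitude + age + C(gender) + work_hours", data=df).fit()
print(model.summary())

# exp(coefficient) gives the odds ratio: an odds ratio of 1.034 for attitude
# means the odds of passing rise by about 3.4% per one-point attitude increase.
print(np.exp(model.params))
```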
Research in this study is partially supported by NSF Project #1915520: Enhancing Additive Manufacturing Education with Virtual Reality and Cybersecurity and NSF Project #2012087: Collaborative Research: LIGO SEC Partnership Strengthening Communities of Learners.
Authored by
Ratana Prinyawiwatkul (Affiliation unknown), Brian Warren (Southern University and Agricultural & Mechanical College), Dr. Shuju Bai (Clayton State University), and Dr. Albertha Hilton Lawson (Southern University and A&M College)
Bringing innovative technology solutions into the classroom plays a crucial role in enhancing the learning experience, applying theoretical knowledge, and providing students with significant hands-on practice. While the concept of Bring Your Own Device (BYOD) or Bring Your Own Technology (BYOT) has been widely implemented, it has predominantly focused on personal devices for work-related tasks. In contrast, cluster computing, a technology gaining momentum among developers, researchers, and data scientists, is often impractical to implement in classroom settings due to its resource-intensive nature. This paper introduces the pedagogical approach of Bring Your Own Cluster to the Classroom (BYOCC), which combines the portability and affordability of personal devices with the functionality of cluster computing, offering an innovative learning solution.
Specifically, this paper explores the application of BYOCC through the use of Raspberry Pi clusters, which enable students to gain practical experience in cloud computing, cybersecurity, and current IT trends. The study compares two distinct Raspberry Pi 5 cluster architectures, detailing their build processes, use cases, technologies, and classroom applications. The first architecture utilizes the Turing Pi 2 board, a compact four-node ARM cluster board that incorporates a built-in Ethernet switch, offering a secure and scalable solution for edge computing. The second architecture implements a four-node high-availability cluster using Docker Swarm and a single managed switch. The comparison of these two architectures highlights their benefits in delivering a comprehensive, hands-on learning experience for students, fostering deeper engagement with key concepts in computing and IT infrastructure.
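As a hedged illustration of the second architecture (node names and the service definition are our assumptions, not the paper's setup), the Docker SDK for Python can inspect the Swarm and deploy a small classroom exercise from the manager node:

```python
# Minimal sketch: inspecting a four-node Raspberry Pi Docker Swarm from the
# manager node with the Docker SDK for Python (pip install docker).
# The service definition is an illustrative classroom exercise, not the paper's.
import docker
from docker.types import ServiceMode

client = docker.from_env()

# List every node in the swarm with its role and status.
for node in client.nodes.list():
    attrs = node.attrs
    print(attrs["Description"]["Hostname"],
          attrs["Spec"]["Role"],
          attrs["Status"]["State"])

# Deploy a small replicated service across the cluster for students to probe.
service = client.services.create(
    image="nginx:alpine",
    name="classroom-web",
    mode=ServiceMode("replicated", replicas=4),  # one replica per Pi
)
print("Created service:", service.name)
```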
By exploring these cluster architectures, this paper demonstrates the potential of BYOCC in making cluster computing accessible and practical for educational environments, promoting technical learning while encouraging innovation in cloud technologies and IT practices.
Authored by
Dr. Chafic Bousaba (Guilford College)
Artificial intelligence (AI) literacy has become increasingly essential across academic disciplines as AI technologies continue to advance rapidly, yet there is little literature on the impact of different educational approaches on AI literacy among female undergraduate students. This study assessed AI literacy among 16 female undergraduate students from various academic backgrounds in Singapore and compared the effectiveness of two learning approaches: a technical skills course focused on programming and algorithms, and a concept-driven course emphasizing real-world applications and ethical considerations. Participants from the engineering, business, and humanities disciplines were recruited from multiple university departments and randomly assigned to one of the two courses. Using qualitative data from semi-structured interviews, the study investigated the unique strengths and challenges these students faced in AI-driven education. The results suggest that the concept-driven course supported non-technical students by enhancing their understanding and engagement, while the technical course was more effective for those with prior technical experience. These findings highlight the need for inclusive, tailored AI education strategies to support the diverse learning needs of female students in Singapore.
Authored by
Jinyi Jiang (Nanyang Technological University) and Dr. Ibrahim H. Yeter (Nanyang Technological University)
The major impact of COVID-19 on learning was the shift from in-person to digitized learning. This change played a significant role in university students' learning and in the environments in which they were learning. The learning environment, and how such environments affect learners, is a major factor in learning. A broad range of environmental factors may impact learners' education, including but not limited to personal devices such as cell phones and computers, as well as physical environmental factors such as work, family, sports, and friends. Research on environmental factors impacting cybersecurity students' learning has focused on applications of cybersecurity in different environments rather than on the environmental factors impacting students' learning. For instance, students are observed to be motivated to practice information security if they perceive high levels of severity, response efficacy, response costs, and self-efficacy in cyberspace [1]. Psychological factors affecting the pedagogy of cybersecurity education, and their impact on students' learning, are discussed in [2]. Exploration of influencing factors in cybersecurity major or career choices is limited, and most of the literature focuses on correlations among personality traits, academic performance in traditional STEM subjects such as math and science, and environmental factors such as parents, teachers, counselors, and socio-economic influences [3]. Students' little to no exposure to cybersecurity education within traditional middle and high school curricula and environments is pointed out as one of the environmental factors in K-12 education [4]. Understanding the environmental factors that impact university-level cybersecurity education, and investing in corrective actions for the issues that exist, may play a significant role not only in the future cybersecurity professional environment but also in students' choices of cybersecurity education.
In this research, the aim is to investigate environmental factors impacting cybersecurity students' learning of concepts related to the cybersecurity major. The research was conducted at a public university in the northeastern United States, and IRB approval was obtained. Qualitative and quantitative data were collected from cybersecurity students; the quantitative data consist of numerical responses from more than 150 students to the following two research questions:
1. What environmental factors impact (i.e. motivate or discourage) you to enjoy (i.e. like or dislike) an online course?
2. What factors in your life impact your learning in cybersecurity?
The qualitative data were collected from 20 students through voice-recorded interviews, with each student receiving an incentive to participate. The interviews aimed to elicit details of participants' survey responses through additional follow-up questions about their written answers. Statistical calculations form the quantitative results, while the qualitative results rely on the voice-recorded interviews. This research is currently in progress, and a summary of the results will be included in the abstract once it is completed.
References
1. Yoon, C., Hwang, J. W., & Kim, R. (2012). Exploring factors that influence students' behaviors in information security. Journal of Information Systems Education, 23(4), 407-416.
2. Taylor-Jackson, J., McAlaney, J., Foster, J. L., Bello, A., Maurushat, A., & Dale, J. (2020). Incorporating psychology into cyber security education: a pedagogical approach. In Financial Cryptography and Data Security: FC 2020 International Workshops, AsiaUSEC, CoDeFi, VOTING, and WTSC, Kota Kinabalu, Malaysia, February 14, 2020, Revised Selected Papers 24 (pp. 207-217). Springer International Publishing.
3. Emerick, G. J. (2020). Factors that influence students to choose cybersecurity careers: An exploratory study (Doctoral dissertation, University of Illinois at Urbana-Champaign).
4. Shein, E. (2019). The CS teacher shortage. Communications of the ACM, 62(10), 17-18.
Authored by
Dr. Emre Tokgoz (State University of New York - Farmingdale) and Alyssa Xiang (Affiliation unknown)
The National Science Foundation (NSF) proposed the Quantum Leap Big Idea in 2016 as one of its 10 Big Ideas for Future NSF Investments. The Quantum Leap initiative aims to advance the understanding and application of quantum phenomena, encouraging interdisciplinary research to achieve breakthroughs in quantum systems, materials, and communications.
Despite significant advancements in quantum technologies, the few previous studies in this area have focused on high school students or computer science majors; very few address undergraduate students from various major backgrounds. Some studies have shared instructors' experiences with teaching quantum computing in higher education, but efforts investigating students' learning and attitudes are lacking. This study presents a novel tool named Spin-Quantum Gate Lab that allows students to learn about quantum computing through simulations. Spin-Quantum Gate Lab was designed based on multimedia-based learning (MBL) and simulation-based learning (SBL) theories, and aims to enhance higher education on quantum computing through MBL materials, SBL tools, and hands-on programming content. To investigate the usefulness and effect of the tool on students' learning outcomes and attitudes, 19 undergraduate students from a public university in the southern United States participated in a quantum information science course using the tool for two weeks. Data were collected through pre- and post-surveys before and after the two-week intervention, including knowledge tests (five items), attitude questionnaires (seven items), and post-only engagement and usability questionnaires (seven items). Open-ended questions (three items) asked what students liked or disliked about the Spin-Quantum Gate Lab and how it helped them learn the concepts. Results show positive impacts on students' quantum computing knowledge (p<.001) and on engagement and perceived usability (M = 3.90, SD = 1.06). However, no significant attitude change was found. This study introduces a novel learning tool for undergraduate quantum computing education and provides empirical evidence for SBL materials supporting quantum computing learning.
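To give a concrete sense of the gate-level content such a tool simulates, here is a minimal NumPy sketch of single-qubit gate application; this is our illustration, not the Spin-Quantum Gate Lab's code:

```python
# Illustrative NumPy sketch of the kind of single-qubit gate simulation a
# quantum gate learning tool covers; not the Spin-Quantum Gate Lab's code.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # spin up |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # Pauli-X (NOT) gate

state = H @ ket0                       # equal superposition (|0> + |1>)/sqrt(2)
probabilities = np.abs(state) ** 2     # Born rule: measurement probabilities
print("P(0), P(1) after Hadamard:", probabilities)   # -> [0.5, 0.5]

state = X @ state                      # X swaps the amplitudes of |0> and |1>
print("State after X:", state)
```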
Authored by
Zifeng Liu (University of Florida), Yukyeong Song (University of Florida), Qimao Yang (University of Florida), Wanli Xing (University of Florida), and Jing Guo (University of Florida)
The Advancing Retention through Research Opportunities for Workforce Development in STEM (ARROWS) Project's mission is to increase the flow of minority women into scientific and technological careers. The ARROWS pipeline is an institutional project grant that exposes academically gifted high school and undergraduate female students to majors and careers in Science, Technology, Engineering, and Math. The project runs an annual summer research initiative developed to attract and cultivate the next generation of female scientists, mathematicians, and technologists. Coordinated with our partner programs, these projects assemble into a three-stage student success pipeline that guides participants from secondary education to college, then to the technical workforce or graduate study. We nurture students' enthusiasm for STEM through an immersive summer research experience, delivered in hybrid learning environments, that builds competency in four areas of national need: artificial intelligence (AI) and machine learning (ML), robotics/autonomy, cybersecurity, and data analytics. We aim to help overcome the challenge of supporting traditionally underserved and underrepresented student populations in STEM, such as women, minorities, and economically disadvantaged students. As the largest HBCU in the world, North Carolina A&T is well positioned academically, regionally, and culturally to support these prospective student demographic groups.
Authored by
Dr. Evelyn Sowells-Boone (North Carolina A&T State University) and Pal Dave (North Carolina A&T State University)
As K-12 supports improve inclusivity, more neurodivergent students enter undergraduate Computer Science programs. This field frequently attracts students with Autism Spectrum Disorder, Attention-Deficit Hyperactivity Disorder, dyslexia, and other neurological differences from the expected norm. Many of these students have an intense interest in Computer Science and other capabilities that are well-suited to the field. Unfortunately, their attrition rates are disproportionately high. This is because higher education was not designed for students with disabilities, and work must be done to improve inclusivity for all students.
This can be done, in part, by making teaching practices more inclusive. While the exact numbers are unknown, every Computer Science class is neurodiverse, meaning that the way instructors teach and provide support must account for all students. But neurodivergent student voices are underrepresented in education research, limiting instructor insight into the practices that would make teaching most inclusive. Typically, research on effective learning and instructional practices focuses on neurotypical students and is conducted by neurotypical researchers. Without neurodivergent perspectives in research, we fail to account for the challenges these students regularly face in undergraduate education. Additionally, the few university-provided accommodations available to neurodivergent students at most schools in the United States are not based on research into what would be effective for their needs.
In this exploratory research study, we sought to understand neurodivergent students' and instructors' experiences of inclusive practices. In the Spring Semester of 2023, the research team conducted a series of individual and group interviews with separate teams of two neurodivergent faculty and three students in an undergraduate Computer Science program at a large, urban, northeastern American university. Teams were tasked with sharing and reflecting on their perspectives about practices and interactions that shaped student learning. Individual student interviews were reserved for gaining additional insight into student experiences throughout the semester. Interviews were semi-structured to encourage students to share any experience that impacted their learning, whether in the classroom or on campus. Interviews and meetings were transcribed from audio files and analyzed through an interpretive phenomenological inquiry approach, an inductive approach that seeks to understand experiences as those who lived them perceived them.
Several themes emerged about the types of practices students and faculty collectively believe impacted the inclusivity of their classroom, and about their experiences of those practices. The many practices they described can be categorized into best practices, inclusive practices, and exclusive practices. That is to say, all students benefit from faculty and teaching assistants who use best practices, the basic teaching practices required to support learning: making classrooms feel safe, eliminating distractions where possible, and keeping instructions organized and clear. For all students to be able to learn effectively, including those with disabilities or those disadvantaged by classrooms that do not reflect their language, culture, or identity, teaching practices must also be inclusive. Universal Design for Learning is an excellent example of this type of approach, as it builds in adaptability for a diverse set of students; for example, the framework recommends including multiple options for engagement and communication. In addition to these practices, however, neurodivergent students and faculty also had perspectives on practices that might not benefit all students but could improve things for neurodivergent students.
In this work-in-progress paper, we share student and faculty perspectives on how these practices shape their experiences, along with implications for strategies instructors can implement to improve instruction for neurodivergent students in undergraduate Computer Science programs. Ultimately, we anticipate inclusive practices will help lower attrition rates for neurodivergent students and effectively structure an inclusive learning environment within undergraduate Computer Science programs as a whole.
Authored by
Ms. Valerie Elise Sullivan (University at Buffalo, The State University of New York) and Prof. Rachel N. Bonnette (University at Buffalo, The State University of New York)
This study explores the integration of Machine Learning (ML) concepts into the curriculum for 6th- to 12th-grade students, thus addressing the growing importance of computational skills in the STEM workforce. Our hypothesis is that connecting abstract computing principles to practical ML applications that solve real-world problems, through research-based exploratory learning, will attract students to ML-related courses and STEM fields at large. Hence, we utilized a real-world example of predicting Alzheimer's Disease severity based on data drawn from smart home sensors.
This project is the outcome of a Research Experience for Teachers summer program. The curriculum integration was implemented across Mathematics, Algebra, and Statistics subjects. Key components of the integration included exposure to computing fundamentals using Scratch, building basic ML pipelines with explainability in Orange, and conducting automated ML using Aliro. The study also introduced students to Python programming concepts using Google Colab. Each of these tools enabled an increasing level of ML exposure while utilizing a similar prediction-model framework. Orange is an open-source tool for ML and data analysis through Python scripting and visual programming, and is very user-friendly for non-programmers. Aliro is an open-source software package designed to automate ML analysis through a clean web interface; it includes a pre-trained ML recommendation system that helps students automate the selection of ML algorithms and their hyperparameters and provides visualizations of the evaluated model and data. The smart home data used to train and validate the prediction models are drawn from the CASAS Kyoto dataset.
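To illustrate the shared prediction-model framework, the sketch below shows a hypothetical scikit-learn pipeline classifying severity from sensor-derived features; the file and column names are our assumptions, not the CASAS Kyoto schema:

```python
# Hypothetical scikit-learn version of the shared prediction-model framework:
# classifying Alzheimer's Disease severity from smart-home sensor features.
# The CSV file and the feature/label column names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("casas_features.csv")          # hypothetical preprocessed data
X, y = df.drop(columns=["severity"]), df["severity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=42))
model.fit(X_train, y_train)

# Evaluating on held-out data mirrors the curriculum's emphasis on balancing
# model fit with generalizability.
print(classification_report(y_test, model.predict(X_test)))
```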
Despite facing challenges such as time constraints and technological limitations due to school district policies, the project successfully incorporated ML concepts into existing curricula. The integration emphasized the iterative nature of research, model performance evaluation, and the importance of balancing model fit with generalizability. Students were able to learn ML concepts by repetition and reinforcement using the three different software suites (Orange, Aliro, and Google Colab). This study contributes to the growing body of literature on STEM education by demonstrating practical approaches to introducing complex ML concepts to secondary school students. It highlights the potential for interdisciplinary learning and the development of critical thinking skills essential for future STEM professionals.
Authored by
Dr. Tayo Obafemi-Ajayi (Missouri State University), Dr. Naomi L Dille (Missouri State University), Dhanush Bavisetti (Missouri State University), and Mrs. Sherrie Ilene Zook (Affiliation unknown)
This paper explores the innovative use of Large Language Models (LLMs) to help create interactive online educational resources for digital system design and computer architecture courses, typically found in electrical and computer engineering undergraduate programs. We present a framework that uses LLMs and existing online tools such as Google Colab and Gradio to rapidly develop and deploy interactive simulations, exercises, and automated grading systems, helping learners work in this space with minimal need for up-front teacher development or the full-scale industrial tools usually associated with it. Our approach shows how to reduce the time and effort required for instructors to create engaging, personalized online learning experiences. We evaluate the effectiveness of this method through a series of case studies and provide guidelines for instructors to leverage these technologies in their courses.
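As a hedged illustration of the kind of artifact such a framework might produce (our sketch, not the paper's code), a short Gradio interface for practicing two-input logic gates can be deployed from a single Colab cell:

```python
# Illustrative sketch (not the paper's code) of an LLM-drafted interactive
# exercise: a two-input logic-gate simulator served with Gradio, which runs
# directly inside a Google Colab cell.
import gradio as gr

GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
}

def simulate(gate: str, a: bool, b: bool) -> str:
    """Evaluate the chosen gate on two boolean inputs."""
    result = GATES[gate](a, b)
    return f"{gate}({int(a)}, {int(b)}) = {int(result)}"

demo = gr.Interface(
    fn=simulate,
    inputs=[gr.Dropdown(list(GATES), label="Gate"),
            gr.Checkbox(label="Input A"),
            gr.Checkbox(label="Input B")],
    outputs=gr.Textbox(label="Output"),
    title="Logic Gate Simulator",
)
demo.launch()  # in Colab this renders the UI inline below the cell
```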
Authored by
Dr. Peter Jamieson (Miami University), Ricardo Ferreira (Universidade Federal de Viçosa), and José Nacif (Universidade Federal de Viçosa)
Researchers have long faced challenges in drawing insights from complex and detailed customer feedback. Often, they rely on lexicon-based sentiment tools like VADER (Hutto & Gilbert, 2014) and AFINN (Nielsen, 2011) to classify words as positive, neutral, or negative, and aggregate scores to derive insights. However, these models often miss the nuances in reviews, especially when sentiments are context-specific (Dalipi et al., 2021). This research explores an advanced sentiment analysis technique called BERT (Devlin et al., 2019) to extract insights from lengthy reviews.
Traditional sentiment analysis tools are easier to apply but often struggle with the full context of reviews, especially when emotions, sarcasm, or complex expressions are present. To understand whether an alternative approach can help us overcome these limitations, we apply BERT (Bidirectional Encoder Representations from Transformers), a transformer-based model that belongs to the family of large language models (LLMs). Building on studies by Dalipi et al. (2021) and Sung et al. (2019), our research extends BERT's use to analyzing Coursera reviews across courses and institutions, providing a novel contribution to EdTech. Our dataset, sourced from Kaggle, contains over 1.45 million Coursera reviews, capturing variables such as review text, rating, date, and course ID (Muhammad, 2023). These reviews, some up to 2,470 characters long, allow us to explore patterns across courses and institutions, offering insights beyond the scope of traditional numerical ratings.
We compared the performance of VADER, AFINN, and BERT in predicting course ratings. BERT, which processes entire sentences to understand their context, provided more accurate sentiment predictions: in our regression models, BERT (R-squared: 0.26) consistently outperformed the lexicon-based models (R-squared: 0.07). When combined with the traditional methods, the R-squared improved marginally to 0.27, reflecting a slight enhancement in prediction accuracy.
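As an illustration of this comparison (the specific libraries and model checkpoint are our assumptions; the study's exact setup may differ), a single review can be scored with both a lexicon tool and a BERT-family model:

```python
# Illustrative comparison of a lexicon-based score and a BERT-based score for
# one review. Library and model choices are our assumptions, not the study's
# exact setup. pip install vaderSentiment transformers
from transformers import pipeline
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

review = ("The lectures were brilliant, though honestly the 'beginner-friendly' "
          "assignments nearly broke me.")

# Lexicon-based score: word-level polarity aggregated into a compound score.
vader = SentimentIntensityAnalyzer()
print("VADER compound:", vader.polarity_scores(review)["compound"])

# BERT-family model: scores the whole sentence in context.
bert = pipeline("sentiment-analysis",
                model="nlptown/bert-base-multilingual-uncased-sentiment")
print("BERT:", bert(review))  # e.g., a 1-5 star label with a confidence score
```

The mixed, mildly sarcastic review above is exactly the kind of input where word-level lexicons and contextual models tend to diverge.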
We also conducted t-tests to assess how sentiment varied over time, comparing weekend and weekday reviews. Results indicated that weekend reviews were generally more positive (t-statistic = 2.33, p < 0.05). Additionally, ANOVA revealed that similar courses at different institutions produced varying sentiment scores, influenced by the institution's reputation, with top-tier institutions averaging 0.99 in sentiment compared to 0.63 for lower-tier institutions, on a normalized scale of 0-1.
From a managerial perspective, BERT's insights are valuable for identifying areas in need of improvement, such as course content or instructor effectiveness, both of which are often missed by traditional models. For example, consistently low sentiment regarding instructors (coefficient = 0.21, p < 0.05) may signal the need for training or course redesign. Our analysis also indicated that temporal trends in feedback can inform strategies for collecting reviews or launching new content. Furthermore, understanding differences across courses (e.g., STEM vs. non-STEM course clusters) and their associated sentiment scores can help institutions assign developmental priority to the most popular courses, better satisfying student preferences on platforms like Coursera.
In conclusion, this study underscores the potential of the BERT model to enhance sentiment analysis in the EdTech space. By providing more actionable and context-aware insights than traditional tools, BERT offers a more practical and accurate approach to improving educational platforms like Coursera. Future research could apply this approach to other educational settings, such as K-12 or corporate training programs, and explore how real-time feedback systems using BERT can enhance student outcomes. This research contributes to the broader discourse on how LLM tools can improve educational feedback mechanisms, offering both practical and academic insights into the future of EdTech and student satisfaction. Additionally, the approach outlined here can be implemented in other business areas where platforms or brands intend to offer better services based on customers' detailed feedback.
Authored by
Priyanshu Ghosh (Mission San Jose High School) and Dr. Mihai Boicu (George Mason University)
One of the largest issues facing college students today is addressing mental health concerns, especially given the inaccessibility of mental health resources. Many students struggle with mental health throughout their college experience due to the unique variety of stressors at university (academic pressure, social difficulties, new environments, etc.). Students often find it difficult to reach out for the services they need, so a university's mental health resources need to be very accessible. Similarly, staff need the University's resources, both for themselves and when they are concerned about students. Three computer science majors, overseen by a faculty member from the computer science department, set out to address this by designing and developing the "XXXXCare" mobile app. XXXXCare provides a user-friendly interface with relevant, detailed guidelines for how to properly access resources for mental health concerns. The app development involved platforms such as Unity and Figma for building and sketching out the UI and back end of the app. In addition, the University's Counseling and Psychological Services office served as an outside expert. The app provides guidelines for which resources are relevant, as well as links and phone numbers for each of them. The app has been designed with extensibility in mind, allowing it to be updated for new audiences as needed and to integrate with other potential off-shoot projects.
Authored by
Mr. Thomas Rossi (University of New Haven), Ekaterina Vasilyeva (University of New Haven), Mx. Ren Oberdorfer (University of New Haven), and Jhansi Sreya Jagarapu (University of New Haven)
Artificial intelligence (AI) is predicted to be one of the most disruptive technologies of the 21st century (Păvăloaia & Necula, 2023), and to prepare all young people to live and work in an AI-infused world, many are calling for teaching CS and AI across grade bands (Committee on STEM Education, 2018). However, this presents many challenges, particularly in supporting teachers to integrate concepts that they were not taught in their preservice education (Gatlin, 2023) or that do not have clear connections to K-12 core standards. Further, most AI educational interventions utilize LLMs or chatbots. Computer vision is an underexplored, underutilized, and accessible way to introduce young people to CS and AI.
To help support teachers in integrating CS and AI into their instruction, our interdisciplinary team of paleontologists, natural history museum educators, computer science engineers, and educational technology researchers designed and developed an innovative, flexible computer vision curriculum for science teachers. The curriculum, called [TITLE], is funded by [blinded for review; award number] and blends CS and paleontology as middle school students build and evaluate their own computer vision model used to classify fossil shark teeth. The curriculum consists of five flexible modules aligned with national and [STATE] science standards. These modules include an introduction to AI, classifying shark teeth and data, training and evaluating machine learning models, identifying bias in datasets, and building a machine learning model. Teachers choose whether to teach these modules in sequence or in isolation, and we encourage teachers to adapt the modules to their own teaching styles and students' learning needs.
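As a hedged sketch of the kind of model students build in the final module (directory layout, image size, and network architecture are our assumptions, not the curriculum's materials), a small convolutional network can be trained on labeled tooth images:

```python
# Illustrative sketch of the kind of computer vision model students build: a
# small CNN classifying labeled images of shark teeth. The directory layout,
# image size, and architecture are assumptions, not the curriculum's materials.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "shark_teeth/train", image_size=(128, 128), batch_size=32
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "shark_teeth/val", image_size=(128, 128), batch_size=32
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Validation accuracy makes the "clear images, many images" lesson concrete:
# poor or scarce training data shows up directly in these numbers.
model.fit(train_ds, validation_data=val_ds, epochs=5)
```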
[TITLE] is implemented as a three-year project and is currently in year three of implementation. To date, we have partnered with 57 teachers, 597 students, and 47 schools. In this paper, we showcase one 7th-grade science teacher and describe the implementation choices and changes she made for her own classroom and students' needs. The purpose of this paper is to explore how one teacher adapted this curriculum to her own classroom and to describe successes and challenges that can be applied by others wishing to design and develop similar curricula. The data we report on include teacher reflections, the changes she made to the modules, researcher observations of implementation, pre- and post-tests of student achievement, document analysis of student work, and students' science-related attitudes before and after the modules.
This teacher used the curriculum at the beginning of Fall 2024 to teach the Nature of Science standards. She taught one module per day, in sequence, and chose to emphasize data science, curation, and bias. As one example, this teacher was very intentional about exploring and comparing the quality of data curated by citizen scientists and scientific institutions, and helping students understand how data quality influenced their computer vision model. When creating their own computer vision learning models, students recognized that "clear images" and "many images" were needed for accurate models. Many students recognized that AI models were only as good as the data used to build the model, with one student saying AI may have "trouble generating answers for people. If you give limited answers or bad pictures of descriptions." Students' achievement and science-related attitudes are currently being assessed and analyzed, and more details will be shared during the full paper.
While curricular interventions using chatbots and LLMs have a growing literature base, curricula for young people on building and using computer vision models is sparse. A more robust description on our innovative computer vision curriculum, the curriculum modifications made by the teacher, data analysis, results and implications will be given in the full paper and the presentation.
Authored by
Dr. Christine Wusylko (University of Florida), Rachel Still (University of Florida), Dr. Pavlo Antonenko (Affiliation unknown), Brian Abramowitz (University of Florida), Dr. Jeremy A. Magruder Waisome (University of Florida), Victor Perez (Affiliation unknown), and Stephanie Killingsworth (University of Florida)
Immersive technologies like Virtual Reality (VR) and Augmented Reality (AR) have brought a promising shift to many areas, especially education. Additionally, the use of Pedagogical Agents (PAs) in education has improved students' learning in many fields, such as computer science, math, engineering, and English, and proved especially effective during COVID-19.
More recently, Generative AI (GenAI) has been introduced across many areas, providing quick and largely accurate results in a fraction of the usual time. In education particularly, this technique has been used in many fields (e.g., English and STEM), where studies have shown promising results. Studies testing the effects of AI techniques on K-12 students have shown significant impacts on students' performance, behavior, and understanding of their courses. At the same time, some studies have raised concerns about teachers' level of acceptance of using these techniques in their classes and the impact on their students.
The aim of this paper is to better understand the field of PAs and the advances that have followed the GenAI revolution. To explore these advances, we compare modern AI techniques with PAs that use immersive techniques, to understand what has changed and what kinds of improvements have been made. This paper thus paves the way for researchers interested in education and technology to explore potential directions among the latest technologies in education. It also helps researchers and practitioners learn about the current approaches and technologies that connect GenAI and PAs in K-12 education.
Authored by
Mrs. Rawan Adnan Alturkistani (Virginia Tech Department of Engineering Education) and Mohammed Seyam (Virginia Polytechnic Institute and State University)
This work-in-progress paper explores the perceptions of students and educators regarding the impact of Artificial Intelligence (AI) on education, specifically before and after the release of OpenAI’s ChatGPT. Using a mixed-methods survey distributed through online platforms, the study examines participants’ adoption patterns, perceived usefulness and ease of use, and ethical concerns. Quantitative data were analyzed using constructs from the Technology Acceptance Model (TAM) and Diffusion of Innovations Theory, while qualitative responses were thematically coded and interpreted through the lens of the Constructivist Learning Theory. Results show that while participants largely view AI tools as useful, accessible, and aligned with modern learning needs, significant concerns remain regarding over-reliance, critical thinking erosion, and academic integrity. The findings reveal the need for structured AI literacy, responsible integration policies, and the design of AI systems that prioritize transparency, personalization, and ethical safeguards. This study contributes evidence-based insights to guide educators, developers, and policymakers in ensuring the ethical and effective adoption of AI in education.
Keywords: Generative AI, ChatGPT, perception, TAM, adoption, education, ethics
Authored by
Mrs. Hannah Oluwatosin Abedoh (Morgan State University), Blessing Isoyiza Adeika (Morgan State University), Mr. Pelumi Olaitan Abiodun (Morgan State University), Dr. Oludare Adegbola Owolabi P.E. (Morgan State University), Abiola Olayinka Ajala (Morgan State University), and Oluwatoyosi Oyewande (Morgan State University)
The integration of generative AI, such as ChatGPT and Claude, into engineering classrooms is reshaping educational practices for both students and instructors. This study investigates how student use of generative AI impacts metacognition and problem-solving strategies in an undergraduate engineering context. We conducted a two-phase experiment in an "Intro to Engineering" course. First, students completed a MATLAB exam using only built-in documentation. Following the exam, we conducted a lab where students worked through the same exam problems using ChatGPT and Claude. Screen recordings were captured in both phases to analyze problem-solving patterns and approaches with and without AI support.
Authored by
Dr. Anthony Cortez (Point Loma Nazarene University) and Dr. Paul Schmelzenbach (Point Loma Nazarene University)
This paper documents a student-led Virtual Reality (VR) content creation proof of concept funded as a Research Experiences for Undergraduates (REU) supplement to an existing NSF-funded project. The original NSF project focused on faculty professional development, using a community-of-practice model to foster the integration of off-the-shelf VR content in introductory STEM courses with the aim of enhancing student engagement and improving STEM educational outcomes. A critical barrier identified during the project was the lack of pedagogically sound, learner-centered VR content for electrical and computer engineering (ECE). This REU project was initiated by two motivated students enrolled in the redesigned, VR-integrated introductory ECE course, ECE90 - Principles of Electrical Circuits. Disappointed by the third-party VR content they experienced, they taught themselves VR content creation using the Unity game engine and the C# programming language, developed a VR prototype (entitled MetavoltVR), and conducted user experience evaluation and learning assessment with peer students in ECE. As a proof of concept, this paper explores how student-led development of VR content and experiences might offer a solution to a common obstacle faced by many STEM educators interested in exploring VR: the lack of readily adoptable VR content. This study contributes to a better understanding of the role and impacts of the learner-as-creator/co-creator in engaging student learning in educational technology-integrated learning environments.
Authored by
Nicholas Cameron Amely (California State University, Fresno), Dr. Wei Wu (California State University, Fresno), and Jesus Leyva (California State University, Fresno)
Introduction
This full paper explores a 100-mile diet adaptation project introduced in a science education methods course for preservice teachers (PSTs) at a Canadian Initial Teacher Education (ITE) program. The project aimed to address climate anxiety by exploring connections between climate anxiety, place identity, and educational technology. The primary goal was to demonstrate how educational technology can enhance PSTs' engagement with their communities and with local-global climate challenges.
The 100-Mile Diet as a Pedagogical Framework
The 100-mile diet, introduced in 2005 by a Canadian couple, encourages sourcing food locally within a 100-mile radius. This concept was adapted in the course to help PSTs develop personalized or community-based solutions. PSTs explored how their adaptations could reduce carbon footprints, promote sustainability, and make climate change education more tangible.
Methodology
PSTs were tasked with adapting the 100-mile diet based on their personal lifestyles, practices, or family businesses. They worked individually or in small groups to reduce carbon footprints, engage with local resources, and build connections within their communities. The project integrated educational technology into both STEM and non-STEM subjects in early childhood and elementary education.
Over 12 weeks, PSTs documented, analyzed, and synthesized their adaptations using various formats, including PowerPoint presentations, digital books, diaries, and Instagram vlogs. The instructor collected 55 unique projects from 128 PSTs over three academic terms (2022-2024). Each project was evaluated using a 5-level rubric validated by colleagues from other Canadian ITE programs. The analysis focused on identifying success indicators, assessing student engagement, and determining the project's relevance for future STEM teaching. The analysis was further mapped to the Seven Essential Elements of experiential learning for consideration in future implementations of experiential learning in undergraduate engineering education.
Findings
1. Technology as a Pedagogical Tool
Educational technology was central to the project, allowing PSTs to document, analyze, and present their findings using digital tools. Some tracked food costs and mileage with Excel, while others documented local food production via Instagram. Educational technology fostered creativity, collaboration, and critical thinking. It bridged the gap between STEM and non-STEM PSTs, making complex concepts more accessible through user-friendly tools.
2. Student Engagement and Ownership
The project was grounded in experiential learning, encouraging PSTs to actively design, implement, and reflect on their adaptations. Students developed ownership of their projects, often incorporating personal or community-based learning. One group traced their family’s apple production from local cooperatives to retail stores, while another documented growing native berries. Experiential learning helped PSTs address climate anxiety by grounding their learning in real-world actions.
3. Community Engagement and Relevance
The project emphasized the importance of engaging with local communities to address climate change. By interacting with local farmers, markets, and agricultural research centers, PSTs gained insights into the environmental and economic impacts of local food systems, reinforcing interdisciplinary approaches to climate anxiety.
4. Diversity, Equity, and Inclusion (DEI)
A key objective was to bridge the gap between STEM and non-STEM PSTs, particularly those from underrepresented backgrounds. The inquiry-based approach promoted an inclusive environment where all students could participate meaningfully. Place-based learning allowed PSTs to draw on their personal and cultural practices. Educational technology helped PSTs present complex information in accessible formats, encouraging reflection and action.
Conclusion
The 100-mile diet adaptation project successfully combined educational technology, experiential learning, and DEI initiatives to address climate anxiety. By integrating personal and community experiences, the project fostered deep engagement with climate change. Future iterations could benefit from a greater focus on the emotional aspects of climate anxiety, while longitudinal studies could track the long-term impacts of these adaptations in classrooms.
Authored by
Dr. Gerald Tembrevilla (Mount Saint Vincent University) and Mohosina Jabin Toma (University of British Columbia, Vancouver)
The structure and timing of instructional material and courses have the potential for significant impacts on student outcomes. An example can be found in the practice of spaced learning, where learners encounter the same material multiple times over an extended period. The length and timing of the intervals between encounters also influence the retention of information over the long term.
While acting as instructors for an intermediate-level MATLAB course, we noticed a disparity in student performance despite the assumption of a uniform introduction to the language. The majority of students in the first-year engineering program take Fundamentals of Engineering I, which introduces students to the language over roughly a 10-week period. However, students may also enroll in the course after taking an honors version of Fundamentals of Engineering, which includes additional instruction in C/C++, or a transfer version of the course, which can take place over a compressed timeline.
Further investigation revealed that the interval between students' introduction to MATLAB in their first-year courses and their continued education in the intermediate-level course was also not uniform. Without specific requirements for when to take the intermediate course, the interval between students' first collegiate introduction to the language and the intermediate course could range from one month to more than three years. Moreover, depending on their major, students may not have utilized any programming languages in the interim or may have written code regularly in their courses. Some students may also have worked as teaching associates for the department, and so may have repeatedly used the language in an instructional capacity even if it was not required for their major courses.
This study investigates the potential correlation between the length of time since students' collegiate introduction to MATLAB, the circumstances of that introduction, their major, and potential teaching associate status versus their performance in the intermediate-level course. Information will be gathered via institutional records and self-reported data from the current class cohort and analyzed for correlations between the various factors and final course grades using a one-way ANOVA and Tukey's HSD.
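A minimal sketch of this planned analysis, with a hypothetical records file and column names, might look as follows:

```python
# Hypothetical sketch of the planned analysis: one-way ANOVA on final course
# grades across introduction pathways, followed by Tukey's HSD post-hoc test.
# The CSV file and column names are illustrative assumptions, not the study's data.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("matlab_cohort.csv")  # hypothetical columns: pathway, final_grade

# One-way ANOVA: do mean final grades differ across introduction pathways?
groups = [g["final_grade"].values for _, g in df.groupby("pathway")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# If the ANOVA is significant, Tukey's HSD identifies which pathway pairs differ.
print(pairwise_tukeyhsd(df["final_grade"], df["pathway"]))
```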
Ultimately, we wish to determine if the circumstances of introductory MATLAB courses and the interval between them and subsequent programming encounters impact student success in their second, intermediate MATLAB course. Ideally, these results would give students and program advisors guidance on the best timing of the intermediate course to maximize student success and retention of the course information.
Authored by
Dr. Jessica Thomas (The Ohio State University)
Technology plays a critical role in supporting problem-solving tasks for students in engineering education through computer-supported learning interventions, and there is growing interest in using computer-supported learning to enhance the development of creativity, critical thinking, and problem-solving abilities among learners. In this space, design and design thinking are a significant focus for researchers looking to engage students in the design process to develop their expertise. Design thinking is a set of activities, including empathizing, defining, and ideating, carried out by learners when designing a solution, product, or service. Many research approaches have been used to study how design thinking can be developed among learners through different pedagogical interventions, yet there is a continued need to investigate diverse pedagogical approaches to develop design thinking. This is even more important when computer-supported approaches are studied for enhancing design thinking abilities for open-ended problems that have multiple viable solutions. The literature has focused extensively on the design and implementation of intelligent tutoring systems for close-ended problems in STEM education; efforts to build systems for open-ended problems, however, remain scattered. This work has often been done in isolation, and little work has sought to synthesize it.
To draw this work together, our research employs a systematic review of this scholarship to identify approaches and strategies for conceptualizing an agent system to support learning in open-ended problems. Further, this work may also shed light on how an agent system should understand a student's existing knowledge and the nature of open-ended problems. Our initial review focuses on intelligent tutoring systems and Artificial Intelligence to support students' learning in STEM, covering the period 1992 to 2024 across several databases. We focus on STEM topical areas and open-ended problems because work that explicitly addresses agents supporting design thinking is limited; however, we believe valuable insights from analyzing the broader area will be transferable to supporting design thinking. An initial search procedure guided us to scholarly work on intelligent tutoring systems for STEM problem solving, Artificial Intelligence for design thinking, and open-ended problem solving with virtual agents. We restricted our scope by removing any scholarly work presenting an intelligent tutoring system for close-ended problems, and we considered articles with K-12 or undergraduate participants. In analyzing past work as part of this review, the investigation addresses the following areas of interest: i) identifying interactions between learners and the agent within the learning environment and how those interactions evolve during the learning process; ii) how agent systems facilitate a structured learning approach (rules-based learning) without nullifying the complexities and ambiguities of real-world learning; iii) how agent systems facilitate discovery learning over instructional learning; and iv) what strategies should be incorporated in an agent system to enhance collaborative learning in the open-ended problem-solving process. We will use inductive qualitative content analysis, starting with open coding by two independent coders who will meet, reconcile codes, and later develop categories to address the research prompts above. Our results will catalog the ways researchers have operationalized agents, student interactions, and the problem or knowledge space, along with the commonalities and differences between approaches. From these results, we will discuss new avenues for structuring agent systems for open-ended design thinking problems, reflecting on the innovations and limitations in existing scholarship and how interactive learning environments should be created to support this learning. The resulting outcomes provide future researchers and system developers with a comprehensive strategy to adopt in the design and development of Artificial Intelligence (agent systems) to support design thinking.
Authored by
Mr. Siddharthsinh B Jadeja (University at Buffalo, The State University of New York), Dr. Corey T Schimpf (University at Buffalo, The State University of New York), and A Lynn Stephens (Affiliation unknown)
Roughly 3% of children globally experience speech delays or difficulties (The Kids Health Organization, 2022). A "speech delay" is defined as a child's speech or language development falling significantly behind communicative milestones based on phases of language development, language acquisition, and reading comprehension skills (Mayo Clinic, 2011; Piaget, 1971). Although definitions of speech delay are commonly rooted in neuronormative presumptions (Cherney, 2019), speech delays can have long-term impacts on children (Lyons & Roulstone, 2018; Sunderajan & Kanhere, 2019): some children experience diminished well-being, lack social skills, and struggle to develop meaningful relationships (Lyons & Roulstone, 2018). Given the importance of supporting students in interactive and engaging ways, and the rise of expansive technology, AR AniMotion leverages advanced AR and artificial intelligence (AI) technologies to create an engaging, immersive learning environment for children. Augmented reality, or AR, is a technology that relates digital information to the real world through images, videos, and/or 3D models, making an environment more visible and easier to manipulate. This application therefore helps close the gap in implementing engaging technology to support positive long-term outcomes for children with speech difficulties. The application is rooted in literature highlighting how current speech therapy methods often involve repetitive exercises that may disengage young children over time. In addition, other adopters of AR in the education sector have attested to benefits such as enhancing learners' attentiveness, improving their performance, and facilitating interactivity and learning experiences (Billinghurst & Duenser, 2012; Bacca et al., 2014; Wu et al., 2013).
In developing AR AniMotion, the design was influenced by a comprehensive literature review, which underscored the importance of interactive learning tools for children with speech difficulties. Studies indicate that providing children with real-time feedback significantly improves their learning outcomes and motivates them to engage more actively in their education. With its ability to combine the real world with digital enhancements, augmented reality has emerged as a powerful tool in education, particularly in creating immersive learning experiences. AR AniMotion builds on these findings by providing auditory feedback and a strong visual stimulus, helping children connect the words they produce with the actions they see. In the initial testing phase, the research team thoroughly reviewed the application, conducting internal testing to evaluate its functionality, user experience, and educational effectiveness. Feedback was used to refine the user interface and improve the responsiveness of the speech-to-text engine, ensuring a seamless interaction between the child's speech and the application's output. The application is ready to be tested in real-world classrooms or therapy settings and is pending ethics approval. Once ethics approval is secured, the research team plans to conduct formal studies involving both teachers and students, focusing on assessing how well AR AniMotion supports speech development and how it can be integrated into existing speech therapy programs.
This application aims to close the gap in supporting children with speech difficulties in language development, instilling a passion for learning, growth, and communication through images, and supporting long-term benefits for self-esteem, social skills, and well-being. The application converts real-time speech input into animated actions of 3D animal models, allowing children to see a direct connection between their spoken words and the resulting animations, making the learning experience highly interactive and motivating. This work has the potential to transform how speech therapy is delivered by integrating cutting-edge AR technology with sound pedagogical principles.
Authored by
Mr. MALEK EL KOUZI (Queen's University), Haley Clark (Queen's University), and Richard Reeve (Queen's University)
Parse trees, or syntax trees, are fundamental concepts in computer science as they represent the structure of programming language expressions. Traditional teaching methods require students to manually draw syntax trees for given expressions, a process that can become tedious for students to practice and time-consuming for educators to grade. This project explores the potential of virtual reality (VR) to provide a more engaging and interactive learning experience for syntax tree education while also supporting auto-gradable exercises for scalable practice.
We have developed a web-based VR tool that enables students to construct syntax trees through drag-and-drop interactions in an immersive environment. To evaluate its effectiveness, we plan to conduct a comparative study with three groups of undergraduate computer science students: one receiving traditional instruction with static diagrams, one using a browser-based drag-and-drop tool, and one utilizing the VR tool. The evaluation will include both qualitative and quantitative measures. Qualitatively, we will assess student engagement and self-efficacy through Likert-scale surveys. Quantitatively, we will compare task completion times and scores to evaluate learning outcomes. By automating tree validation and grading, the tool not only enhances engagement but also improves teaching efficiency.
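The abstract does not include the tool's grading internals; as a rough illustration of automated tree validation, a structural comparison between a student-built tree and an instructor reference might look like the following sketch (the Node model and the example expression are assumptions, not the authors' code):

```python
# Minimal sketch (assumed data model): auto-grading by structural comparison
# of a student-built syntax tree against an instructor reference tree.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                      # operator or operand, e.g. "+", "x"
    children: list = field(default_factory=list)

def trees_equal(a: Node, b: Node) -> bool:
    """Recursively check that two trees have identical structure and labels."""
    if a.label != b.label or len(a.children) != len(b.children):
        return False
    return all(trees_equal(c, d) for c, d in zip(a.children, b.children))

# Reference tree for "x + y * z" (standard precedence: * binds tighter than +).
reference = Node("+", [Node("x"), Node("*", [Node("y"), Node("z")])])
# A common student error: grouping as (x + y) * z.
student = Node("*", [Node("+", [Node("x"), Node("y")]), Node("z")])

print(trees_equal(student, reference))  # False -> flag for feedback
```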
Authored by
Colin Jacob Soule (Bucknell University), Lea Wittie (Bucknell University), and Sing Chun Lee (Bucknell University)
Introducing algorithmic data structure concepts to middle school students poses a unique challenge due to their complexities and reliance on code-specific syntax. Aiming to simplify this process, we adapted a tangible Binary Search Tree (BST) activity from the CS Unplugged curriculum to promote an experiential understanding of the BST data structure. This paper presents our iterative approach to developing this activity, first through describing our preliminary qualitative findings and then by proposing a mixed-method study to formalize our inquiry with quantitative data.
We implemented two versions of the BST activities in week-long summer coding camps for middle school students during June and July 2023. Our qualitative observations and field notes from these implementations informed future design considerations. In our first iteration, students drew BSTs with chalk on pavement, and they struggled with the inflexibility of chalk diagrams. In our second iteration, students created BSTs with manipulable notecards and physical connectors made from string, and a spy-themed narrative was added to increase student engagement. In this second iteration, we observed that students often bypassed systematic tree traversal by visually scanning for targets, potentially undermining conceptual understanding of BST efficiency.
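For context, the systematic traversal the activity targets is ordinary BST search, where each comparison discards an entire subtree, which is exactly the efficiency insight students miss when they scan visually. A brief illustrative sketch (not part of the camp curriculum):

```python
# Illustrative BST search: each comparison discards half of the remaining
# tree, so a balanced 7-node tree needs at most 3 comparisons, not 7.

def bst_search(node, target, path=None):
    path = path or []
    if node is None:
        return path, False
    path.append(node["key"])
    if target == node["key"]:
        return path, True
    branch = "left" if target < node["key"] else "right"
    return bst_search(node.get(branch), target, path)

tree = {"key": 8,
        "left": {"key": 3, "left": {"key": 1}, "right": {"key": 6}},
        "right": {"key": 12, "left": {"key": 10}, "right": {"key": 14}}}

print(bst_search(tree, 10))  # ([8, 12, 10], True): 3 comparisons
```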
Building on our qualitative findings, we propose a mixed-method study of middle school students in informal, summer STEM learning environments. Pre- and post-surveys will examine student self-reported engagement and enjoyment. An activity quiz will assess if students can identify correctly-structured BSTs and perform simple operations on a BST. Interviews with implementing educators will provide information on student behavior, questions, and misconceptions during the activity.
Authored by
Duncan Johnson (Tufts Center for Engineering Education and Outreach), Dr. Ethan E Danahy (Tufts University), and Elliot Benjamin Roe (Georgia Institute of Technology)
Engineering educators face vexing technical barriers when leveraging Generative Artificial Intelligence (GenAI) for assessment creation, leading to limited adoption of this transformative tool. This Work-in-Progress paper addresses the research question: "How can automated tooling reduce the technical complexity of implementing GenAI-powered assessment generation in engineering education while maintaining assessment quality?" The current process [1] requires educators to perform six distinct technical steps across multiple platforms, consuming 30-45 minutes per assessment to generate a quiz bank. We hypothesize that consolidating these steps into a single automated workflow will significantly reduce implementation time while maintaining or improving assessment quality. The proposed system allows instructors to simply upload material, resulting in a new quiz that seamlessly appears in the list of quizzes in the course's LMS. This streamlined approach not only accelerates assessment creation but also enables educators to generate more comprehensive and varied assessment materials, ultimately enhancing student learning outcomes through increased opportunities for practice and feedback.
While development is ongoing, our preliminary technical architecture demonstrates the feasibility of reducing the quiz creation process to 3-5 minutes through automation of format handling and direct API integration. Our research design includes planned quantitative analysis of time required for quiz creation and deployment, success rates of Canvas LMS integration, accuracy of technical content in generated assessments, and coverage of specified learning objectives.
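The paper's tooling is not reproduced here, but the final step of such a pipeline, pushing generated questions into Canvas, could plausibly use the Canvas REST quiz endpoints along these lines (the base URL, token, course id, and payload shapes are assumptions for illustration):

```python
# Sketch of the deployment step, assuming the classic Canvas Quizzes REST API.
# BASE_URL, TOKEN, and COURSE_ID are placeholders for a real Canvas instance.

import requests

BASE_URL = "https://canvas.example.edu/api/v1"   # hypothetical instance
TOKEN = "instructor-api-token"                   # hypothetical token
COURSE_ID = 12345                                # hypothetical course id
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def push_quiz(title: str, questions: list[dict]) -> int:
    """Create a quiz in the course, then attach LLM-generated questions."""
    quiz = requests.post(f"{BASE_URL}/courses/{COURSE_ID}/quizzes",
                         headers=HEADERS,
                         json={"quiz": {"title": title}}).json()
    for q in questions:
        requests.post(f"{BASE_URL}/courses/{COURSE_ID}/quizzes/{quiz['id']}/questions",
                      headers=HEADERS, json={"question": q})
    return quiz["id"]
```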
The study's significance lies in its potential to democratize GenAI tool adoption in engineering education by removing technical barriers that limit widespread implementation. Next steps include completion of the integration tool, development of validation protocols for engineering content, and initiation of controlled testing with engineering educators. This research will contribute to understanding how automated tools can support the adoption of GenAI in engineering education while maintaining pedagogical quality.
Authored by
Dr. John William Hassell (OU Polytechnic Institute), Christopher Freeze (The University of Oklahoma), Mr. Ahmed Ashraf Butt (The University of Oklahoma), H. Glen McGowan III (Google), and William Ray Freeman (Affiliation unknown)
The COVID-19 pandemic has increased the need for robust methods to uphold academic integrity in online examinations, where issues like impersonation and cheating are prevalent. Most machine learning approaches to this problem rely on image or video data; few consider other indicators, such as post-score analysis. This study assesses how proficiently machine learning models such as Random Forest, Support Vector Machine, Logistic Regression, and Gradient Boosting identify suspicious behaviors based on response accuracy and timing in exam datasets. The Gradient Boosting model achieved the best performance, with an accuracy of 97.99% and an F1 score of 98.56%, highlighting the viability of post-score analysis for scalable and reliable academic integrity detection. These findings emphasize the potential of post-score analysis to safeguard the integrity of online education through effective and trustworthy detection techniques.
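The abstract does not include code; a minimal sketch of the post-score approach, with assumed feature and label column names, might look like:

```python
# Illustrative sketch of post-score analysis: the study's features are
# response accuracy and timing; the column names here are assumptions.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("exam_log.csv")   # hypothetical per-student exam records
X = df[["pct_correct", "mean_seconds_per_item", "time_variance"]]
y = df["flagged"]                  # 1 = suspicious, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(accuracy_score(y_te, pred), f1_score(y_te, pred))
```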
Authored by
Sumaya Binte Zilani Choya (George Mason University) and Dr. Mihai Boicu (George Mason University)
This work-in-progress research paper describes an instrument development effort to understand how engineering students use external resources to supplement their learning and the associated metacognitive strategies they employ with those resources. With the increasing prevalence of generative AI tools like ChatGPT, traditional problem-solving platforms (e.g., Chegg) and educational video resources (e.g., YouTube, Khan Academy), engineering students now have unprecedented access to external learning resources. Considering the pace of technological developments lowering barriers to entry for students to get on-demand assistance with their studies, there is a growing need to understand how students engage with these different resources.
Our instrument is being developed to explore two key research questions: (1) How do engineering students use external resources, including generative AI, to assist in problem-solving within their coursework? and (2) What factors drive their preference for certain tools over others? Later in our project, we will explore how students use metacognitive strategies with various external resources – which is a separate research question not directly relevant to the instrument itself. Therefore, we plan to include scales related to metacognitive strategies to enable purposive sampling for student interviews based on the results of the instrument distribution.
Our theoretical framework to ground the instrument development process is the technology acceptance model. The model's premise is that individuals choose to adopt new technologies based on the perceived usefulness and ease of use of that technology, which has been used to explore the adoption of e-learning tools, learning management systems, and video conferencing platforms among faculty and students.
To source items for our instrument, we are reviewing the literature on student adoption of external resources (i.e., "homework help" websites like Chegg, video platforms like YouTube, and generative AI systems like ChatGPT). We are using databases including Education Research Complete, ERIC, and arXiv to search for papers; this search is currently in progress. After extracting questions, we will align them with the technology acceptance model to construct a draft instrument. Later in Fall 2024, we will pilot the instrument with a sample of 10-30 undergraduate engineering students, members of the intended population, through cognitive interviewing to determine the relevance and comprehensibility of the questions. By the time of the draft submission, we will have findings to share from the cognitive interviews.
After our cognitive interviewing phase, the instrument will be administered to at least 200 undergraduate engineering students at a large Midwestern university in Spring 2025. We plan to analyze the instrument using exploratory factor analysis if items are primarily customized and confirmatory factor analysis if more minor changes are made to specific scales. By the time of the conference, we expect to present initial survey findings from the Spring data collection.
We anticipate this research will enable us to better understand how students leverage external resources in engineering education to provide better support for maximizing their utility. We invite suggestions for improving the instrument by the draft submission and seek feedback on refining the instrument for broader use at other institutions in later semesters.
Authored by
Christopher Allen Calhoun (University of Cincinnati), Dr. David Reeping (University of Cincinnati), Mr. Siqing Wei (University of Cincinnati), and Aarohi Shah (University of Cincinnati)
The advent of large language models (LLMs), such as OpenAI's ChatGPT, has compounded the challenge of assessing student understanding and maintaining academic integrity on homework assignments. In a course with a heavy focus on programming, it is common for a significant portion of the grade to be determined by such assignments. When an LLM is prompted with the instructions for a programming assignment, it can readily produce a solution that requires little to no thought from the student. This has made accurately assessing a student's programming skills through homework assignments significantly more challenging.
This work-in-progress paper investigates the student experience of the transition from solely at-home programming assignments to at-home programming assignments supplemented with three handwritten components in the form of in-class programming assessments. A key consideration in this transition was avoiding any increase in the professor's workload from the added assessments. To offset them, the at-home assignments were converted to automatically graded assignments using Gradescope. These changes were implemented in a pilot session in the 2024-2025 academic year. In the transition, the total programming grade remained at 20% of the course grade; however, three-fourths of this percentage is now determined by the in-class assessments, reducing the portion of the course grade that could potentially be completed with the aid of LLMs from 20% to 5%.
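The paper does not detail its autograder; for context, a Gradescope autograder is a container script that writes a JSON results file, roughly as in this sketch (the test cases and submission interface are hypothetical):

```python
# Sketch of the core of a Gradescope autograder. Gradescope runs
# /autograder/run_autograder in a container and reads the results file below.

import json
import subprocess

def run_case(args, expected):
    """Run the student's program on one case; return a Gradescope test entry."""
    out = subprocess.run(["python3", "submission.py", *args],
                         capture_output=True, text=True, timeout=10)
    passed = out.stdout.strip() == expected
    return {"name": f"input {args}", "score": 1.0 if passed else 0.0,
            "max_score": 1.0, "output": out.stdout[:500]}

# Hypothetical assignment: square the command-line argument.
tests = [run_case(["4"], "16"), run_case(["9"], "81")]

with open("/autograder/results/results.json", "w") as f:
    json.dump({"tests": tests}, f)
```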
To assess the effects of this change on student experiences, the students enrolled in the pilot course were surveyed on the clarity, difficulty, and effectiveness of the assignments, as well as the accuracy, fairness, and timeliness of the autograder. Instructors who taught the course were interviewed to assess their experience with the transition. The responses from the surveys and interviews, together with student performance data, provide a baseline for future adjustments to the three-course sequence to accurately assess students on their basic programming skills in a world where LLMs are becoming more prevalent.
Authored by
Mr. Joshua Coriell (Louisiana Tech University), Ankunda Kiremire (Louisiana Tech University), Dr. Krystal Corbett Cruse (Louisiana Tech University), and William C. Long (Louisiana Tech University)
The integration of artificial intelligence (AI) into higher education has accelerated significantly over the past decade, with AI increasingly being leveraged to personalize learning experiences, streamline administrative processes, and enhance data-driven decision-making. Despite this rapid expansion, there remain considerable challenges and gaps in knowledge regarding the effective and ethical implementation of AI technologies in educational settings. Many institutions continue to grapple with issues related to data privacy, algorithmic bias, and the broader implications of AI on both teaching and administrative practices. This work in progress seeks to explore the perspectives and experiences of key stakeholders, specifically faculty and academic management staff, concerning the adoption of AI in higher education. By examining their expectations, perceived challenges, enablers, and concerns, the research aims to provide a comprehensive understanding of the factors that shape AI integration in teaching and management contexts. Employing a mixed-method approach, the study combines quantitative survey data with qualitative insights gathered from focus groups. These focus groups, comprising faculty members and academic management staff from a private university in Chile, centered on performance expectations, effort expectations, facilitating conditions, perceived risks, behavioral intentions, and attitudes toward AI adoption. The discussions sought to capture participants' current experiences with AI as well as their future aspirations and concerns about its broader implementation. Preliminary findings show that faculty members and academic managers have high expectations for AI to enhance efficiency and personalize learning. They see potential in streamlining administrative tasks and adapting instruction to students' needs. However, concerns about data security, privacy, and algorithmic bias persist. Access to technology and institutional support are crucial for adoption, along with comprehensive training for educators and administrators. While AI offers transformative potential, ethical considerations such as data privacy and fairness must be addressed. This study provides a basis for future research and strategies for responsible AI implementation in higher education.
Authored by
Prof. Maria Elena Truyol (Universidad Andres Bello, Santiago, Chile), Dr. Monica Quezada-Espinoza (Universidad Andres Bello, Santiago, Chile), Prof. Genaro Zavala (Tecnologico de Monterrey, Monterrey, Mexico; Universidad Andres Bello, Santiago, Chile), and Claudia Bascur (Universidad Andres Bello, Santiago, Chile)
Project-based experiential learning has a proven track record of success in engineering education across disciplines. For the last several years, our lab has run such a course, Diagnostic Intelligent Assistance for Global Health (DIAG), which teaches undergraduate and first-year master’s students about biomedical engineering and bioinformatics through multi-year longitudinal projects inspired by global health issues. DIAG’s projects incorporate knowledge across fields such as the biomedical sciences, computer science, bioinformatics, and AI. As part of DIAG’s inclusionary and interdisciplinary approach, students from widely varying backgrounds in these fields are encouraged to join. While this diversity comes with countless benefits, it makes it challenging to provide each student with the specific learning resources and support they need to efficiently and confidently get started and progress on new projects.
The recent and rapid development of large language model (LLM) capabilities to comprehensively answer user queries by synthesizing vast quantities of information across varying domains and modalities has made them a highly promising option for assisting in engineering education. However, a major limitation to LLMs' utility as an educational tool is their propensity to "hallucinate" incorrect responses and the user's inability to readily detect such hallucinations or even verify the LLM's source of information. Retrieval-augmented generation (RAG) integrates the LLM with a corpus of domain-specific information that informs and guides the LLM's response and enables citation of the sources from which the response drew, thus addressing the greatest shortcomings of standard LLM applications as educational tools. Here we propose the DIAG student-led development and assessment of custom RAG-LLM-based applications for assisting students from diverse educational backgrounds to confidently and efficiently get started and progress on team projects.
For our first aim, DIAG students will develop a working suite of custom RAG-LLM-based applications. These applications will operate as chatbot assistants, answering users' questions as they "onboard" and then work on a project. This development phase includes compiling and organizing the open-source learning resource corpus for the RAG systems to draw upon. Applications covering three general categories will be developed to assess the value of application complexity and domain specificity. In the first category, the application will be built around a single general-purpose LLM agent, such as ChatGPT. In the second, applications will be built around a single domain-specific fine-tuned LLM, such as OpenBioLLM for biomedical knowledge assistance. In the third category, the application will be built around multiple expert LLM agents using tools such as LangChain or LangGraph.
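The project's applications are still being developed; a minimal sketch of the first category (a single general-purpose LLM with retrieval over an onboarding corpus) might look like the following, where the corpus snippets, model names, and prompt are assumptions:

```python
# Minimal RAG sketch: embed corpus chunks, retrieve by cosine similarity,
# and ground the LLM's answer in the retrieved context.

import numpy as np
from openai import OpenAI

client = OpenAI()
chunks = ["Onboarding: clone the team repository, then install conda ...",
          "BLAST aligns a query sequence against a reference database ..."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

index = embed(chunks)

def answer(question, k=2):
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    msg = [{"role": "system",
            "content": "Answer only from the context; cite the snippet you used."},
           {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=msg)
    return reply.choices[0].message.content
```

Restricting the model to retrieved, instructor-curated snippets is what enables the source citation and hallucination control described above.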
For our second aim, the utility of the custom RAG-LLM based applications will be assessed against the most popular existing standards such as ChatGPT and Claude in a double-blinded fashion by DIAG students starting on new projects. We plan to assess new students joining our course at the start of the winter semester in January 2025. We will assess the amount of time needed before starting a project; their attitude towards the “onboarding” process; and self-reported measures including their confidence and competency before and after extended use of the application.
Finally, all code and data will be shared freely and in such a way that enables others to re-implement our applications and validate our assessments. If resources are available, we intend to host the application online to enhance its usability by interested groups.
Authored by
Katie Vu (University of Michigan), Mr. Avery Mitchell Maddox (University of Michigan), Caleb William Tonon (University of Michigan), Guli Zhu (University of Michigan), Tyler Wang (Stony Brook University), Rafael Mendes Opperman (University of Michigan), Qiuyi Ding (University of Michigan), Zifei Bai (University of Michigan), zhanhao liu (University of Michigan), Ziyi Wang (University of Michigan), Arvind Rao (University of Michigan), and Daniel Yoon (University of Michigan)
This article examines the use of Augmented Reality (AR) technology in elementary schools to increase student engagement and facilitate teamwork. It focuses in particular on the development of the FrogAR_Connect app, AR software that we developed to teach the life cycle of frogs. The application rethinks the traditional textbook approach by turning static content into animated content: students can interact with 3D AR models, which supports a deeper understanding of scientific concepts. FrogAR_Connect is built on two main theoretical pillars: situated learning theory and Vygotsky's sociocultural theory. Situated learning theory argues that learning is most effective when it occurs in a relevant context, making the knowledge more meaningful; through FrogAR_Connect, students are fully immersed in a virtual frog life cycle, connecting theory with practical experience. Vygotsky's sociocultural theory underscores the significance of social interaction in the educational process; this is reflected in the varied peer interactions within the activities, which help students sharpen critical thinking, exchange ideas, and collaboratively tackle problems. A special highlight of our design is that the collaborative practices in FrogAR_Connect are tailored to support elementary students at different levels of understanding as they join the exploration.
Given the significance attributed to collaboration in educational literature, FrogAR_Connect was intentionally developed to enhance collaboration, with a specific focus on facilitating real-time collaboration among elementary students. Numerous augmented reality tools prioritize individual usage, restricting opportunities for group learning. FrogAR_Connect fills this gap by helping students work together on tasks concerning the stages of the frog life cycle. Moreover, the cooperative design of FrogAR_Connect is expected to increase involvement and nurture social skills such as teamwork, which are crucial for students' overall educational growth. FrogAR_Connect also offers benefits over traditional ways of learning: AR's immersive nature is more successful at capturing students' attention than static images or videos, and the ability to interact with 3D models provides a tangible experience that enhances understanding of complex biological processes.
FrogAR_Connect has undergone several rounds of refinement: the research team conducted internal usability tests and made the necessary modifications to ensure the tool is usable and ready to deploy. In terms of access, FrogAR_Connect has not yet been released to teachers and students, but it is fully ready to be implemented once ethics approval is received. The next step involves seeking that approval in order to undertake a formal study evaluating the use of FrogAR_Connect by teachers and students in classroom settings. The findings are expected to be most relevant for understanding the extent to which AR technologies can be used on educational platforms to achieve purposeful objectives and improve student learning.
Authored by
Mr. MALEK EL KOUZI (Queen's University), Dr. Omar I.M Bani-Taha (Carleton University), and Richard Reeve (Queen's University)
The rapid advancement and widespread availability of AI technologies have marked a pivotal moment in higher education, necessitating a swift and comprehensive transformation in teaching and learning approaches. Institutions are compelled to adapt to this paradigm shift to prepare students for an AI-driven workforce. However, integrating AI into higher education presents multifaceted challenges, including the lack of clear guidelines for incorporating AI into curricula, the necessity for governance oversight, ethical concerns surrounding AI, faculty preparedness, and the imperative to equip students with relevant skills.
To address these challenges, the Milwaukee School of Engineering (MSOE) launched the rAIder Strategy in Fall 2024, a comprehensive framework designed to integrate applied AI throughout its educational programs, campus-wide academic productivity, research, industry, and community engagements. Building on MSOE's long-standing tradition of applied education and industry engagement, and drawing on research and benchmark analysis of AI adoption from over 30 universities, the rAIder Strategy employs a structured, phased approach intended to balance immediate applied AI adoption with long-term strategic goals. While the strategy is in its initial stages, early implementation efforts are underway and already offering valuable insights.
To ensure all MSOE students get a baseline understanding of AI, the rAIder Strategy includes the integration of foundational AI concepts into the core curriculum across disciplines and opportunities to pursue AI education in academic degree programs, minors, and certificates. Furthermore, the rAIder Strategy involves the launch of short-term professional development programs through workshops and training sessions to enhance faculty preparedness. These efforts are aimed not only at increasing faculty proficiency with applied AI tools but also at empowering faculty to integrate AI into their teaching and improve their productivity. The increasing faculty participation in these early sessions suggests growing interest and heightened recognition of the importance of AI literacy.
Recognizing the necessity for governance oversight and ethical considerations, MSOE has established a new role of Applied AI Director and will be forming an institutional, multi-departmental steering committee to oversee the rAIder Strategy. MSOE has initiated the creation of comprehensive guidelines and policies for the ethical adaptation of AI within the educational process. There are plans to integrate AI ethics into coursework across curricula to engage students in the responsible use of AI technologies. While the rAIder Strategy is still in its nascent stages, early observations suggest positive outcomes. Preliminary results of the rAIder Strategy implementation also highlight the crucial need to maintain an ongoing dialogue between academia and industry to ensure AI applications align with changing professional standards and core job-readiness skills. Future objectives of the rAIder Strategy include expanding interdisciplinary applied AI academic programs, refining the governance structure, integrating AI literacy skills across curricula, increasing engagement with the educational community to share best practices and insights, and developing and implementing tools to assess the impact of applied AI.
Furthermore, these early findings suggest that the rAIder Strategy is laying down a sustainable framework for integrating applied AI within higher education. Through this innovative approach, MSOE is building a foundation for long-term AI literacy, responsible governance, robust research, industry partnerships, and community involvement, ensuring that both students and faculty are well prepared for future careers in AI-driven domains.
Authored by
Dr. Nadya Shalamova (Milwaukee School of Engineering), Dr. Olga Imas (Milwaukee School of Engineering), Mr. James Lembke (Affiliation unknown), Dr. Maria Pares-Toral (Milwaukee School of Engineering), Dr. Derek David Riley (Milwaukee School of Engineering), and Daniel Bergen (Milwaukee School of Engineering)
Researchers have found that motivation and sense of belonging play a role in course performance. In this work we focus on a Discrete Math course which is a required gateway course in the computing sequence. This course involves conceptual problem solving that requires study behaviors different from those used in programming courses and might affect student sense of belonging and motivation in different ways. Specifically, we focus on two aspects of motivation from Expectancy-Value Theory: students’ expectation for success and the value they place in the course. For sense of belonging, we focus on whether students see themselves as computer scientists, and whether they think their instructors, parents and friends see them as computer scientists.
We have identified the following research questions:
RQ1: To what extent do students’ initial expectancy of success, value for the course, and sense of belonging correlate with their course performance, as measured by final grades?
RQ2: To what extent do students with below-average measures at the start of the course change their attitudes in a way that leads to positive course outcomes?
During Fall 2023 we surveyed over 400 students in a Discrete Math course at a large state R1 University. We measured students’ expectation, value and belonging during the first week of the semester. In the middle of the semester, students were given an intervention that discussed the importance of these attributes and how they can lead to success in the course. Students were then given an additional survey to measure how much they thought these attributes lead to their success in the course so far, and whether each of these attributes increased since the beginning of the year.
We divided the students into groups by final grade (A, B, C, or D) to see whether these groups differed in our measures. To answer RQ1, we found that the A students had higher expectations of doing well when they first entered the course, perhaps because some students come in with prior background knowledge. However, there was no difference in value or in sense of belonging between the groups. In the middle of the semester, when we asked the students how important their mindset was, there was no difference between these groups, suggesting that students across all grade groups equally perceived mindset as affecting their outcomes. When we asked students which of these measures had increased since the beginning of the year, we found that the B students reported an increase in value compared to the C and D students. Both the A and B students reported an increase in interest compared to the D students; there was no difference between them and the C students. Only the A students increased in their sense of belonging compared to the B and C students.
To answer RQ2, we looked at students who started out low on the 'value' measure but indicated that their value of the course increased. Those students did better than students who started out with low value and did not increase it. Students who expected to do well at the beginning and increased their interest did the best of all. For students who started out with low expectations, increasing their interest did not compensate.
These data allow us to study interactions between groups of students based on their initial responses, mid-semester responses, and final grades. This approach allows instructors to design interventions specifically tailored to their courses. We hope that our methodology will be easy to implement and useful to instructors of other conceptual problem-solving classes.
Authored by
Dr. Juan Alvarez (University of Illinois at Urbana - Champaign), Max Fowler (University of Illinois at Urbana - Champaign), Cheryl Ann Cohen (Affiliation unknown), Jennifer Martinez (Affiliation unknown), Dr. Jennifer R Amos (University of Illinois at Urbana - Champaign), and Yael Gertner (University of Illinois at Urbana - Champaign)
This paper introduces a course project to simulate a RISC-V CPU using Python as the hardware description language (HDL). Using Python for hardware description is not new; for example, MyHDL is an open-source Python HDL. However, this and other solutions focus on synthesis to produce ASIC and FPGA designs, which imposes added complexity that is unnecessary for behavioral simulation of CPU designs in an undergraduate computer science course. Our approach, in contrast, is minimal, imposing only a few constraints to focus on learning hardware description concepts at the expense of synthesis abilities. By capitalizing on students' familiarity with Python from other courses, students are able to explore hardware design concepts while avoiding the steep learning curve associated with traditional HDLs. Furthermore, it is easier for students to understand and debug their code because they are already familiar with the syntax. The Python-based HDL is designed to be structurally similar to Verilog: core Verilog concepts like modules and wires map to Python classes and functions. This similarity facilitates a smoother transition for students who go on to use Verilog or another HDL in the future.
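The paper's library is not reproduced in this abstract; as a guess at the flavor of the approach, mapping wires and module instances onto plain Python for behavioral simulation could look like:

```python
# Illustrative sketch (not the paper's actual API): wires as objects holding
# the current cycle's value, modules as Python functions driving outputs.

class Wire:
    """A named signal, like a Verilog wire."""
    def __init__(self, value=0):
        self.value = value

def mux2(sel: Wire, a: Wire, b: Wire, out: Wire):
    """Combinational 2-to-1 mux, analogous to a Verilog module instance."""
    out.value = b.value if sel.value else a.value

def adder(a: Wire, b: Wire, out: Wire, width=32):
    """Behavioral adder; masking models the fixed bit width."""
    out.value = (a.value + b.value) & ((1 << width) - 1)

# One step of a toy datapath: next_pc = sel ? pc + 4 : branch_target
pc, target, sel = Wire(0x1000), Wire(0x2000), Wire(1)
pc_plus4, next_pc = Wire(), Wire()
adder(pc, Wire(4), pc_plus4)
mux2(sel, target, pc_plus4, next_pc)
print(hex(next_pc.value))  # 0x1004
```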
Authored by
Dr. Alan Marchiori (Bucknell University)
Automated feedback systems are becoming more important in programming education as class sizes grow and instructor resources remain limited. Recent advances in large language models (LLMs) offer a practical way for educators to provide structured feedback to students on various assignments. A pre-experiment involving four student researchers solving Project Euler problems showed an average improvement of 17.5 points on a 100-point scoring rubric after code revision using feedback generated by Claude 3.5 Sonnet. There were also notable gains in time complexity, efficiency, and edge-case handling, with percentage increases of 24.45%, 22.59%, and 22%, respectively. Building on these results, we designed a classroom-based experiment involving students across various programming courses. Students will be divided into control (human feedback) and treatment (LLM feedback) groups, with feedback graded against a 14-criterion rubric. Claude 3.7 Sonnet will be the LLM used in this study, as it is the latest model released by Anthropic. The study evaluates both quantitative score improvements and students' perceptions of feedback quality. The results aim to inform the integration of LLMs into educational assessment practices.
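The study's prompts and full rubric are not reproduced here; generating rubric-aligned feedback through Anthropic's Python SDK might look roughly like this sketch (the prompt wording, criteria subset, and model identifier are assumptions):

```python
# Sketch of rubric-aligned LLM feedback generation via the Anthropic SDK.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def feedback(code: str) -> str:
    prompt = ("Review this solution to a Project Euler problem. For each "
              "criterion (correctness, time complexity, efficiency, edge-case "
              "handling), give a 0-10 score and one concrete revision step.\n\n"
              f"```python\n{code}\n```")
    msg = client.messages.create(
        model="claude-3-7-sonnet-latest",   # assumed model id
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text
```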
Authored by
Mr. Joel Nirupam Raj (Affiliation unknown), Ashwath Muppa (Thomas Jefferson High School for Science and Technology), Rhea Nirmal (Affiliation unknown), Teo W. Kamath (Affiliation unknown), Dr. Mihai Boicu (George Mason University), Achyut Dipukumar (Affiliation unknown), and Aarush Laddha (Affiliation unknown)
Generative Artificial Intelligence (GAI) has emerged in recent years as an innovative tool with promising potential for enhancing student learning across a broad spectrum of academic disciplines. GAI not only offers students personalized and adaptive learning experiences, but it is also playing an increasingly important role in various industries. As technologies evolve and society adapts to the growing AI revolution, it becomes necessary to train students of all disciplines to become proficient in using GAI. This work builds on studies that have established the effectiveness of intelligent tutoring systems, adaptive learning environments, and the use of virtual reality in education.
This work-in-progress paper presents preliminary findings related to the relationship between university students' area of study and the frequency at which they utilize GAI to aid their learning. Data for this study were collected using a survey distributed to students from eight different colleges at a large Western university as part of a larger ongoing project geared towards gaining insight into student perceptions and use of GAI in higher education. The goal of the overall project is to establish a foundational understanding of how disruptive technologies, like GAI, can promote learner agency. By exploring why and how students choose to engage with these technologies, the project seeks to find proactive approaches to integrate GAI technology into education, ultimately enhancing teaching and learning practices across various disciplines. This work in progress specifically examines patterns of GAI use across the colleges in which students' programs of study are housed and seeks to answer the research question: How does the use of GAI among university students vary across different academic disciplines, and what factors contribute to these variations? Preliminary results based on responses from the first 977 students indicate that student use of GAI varies significantly between colleges, with students enrolled in the school of business reporting the highest use of GAI per week and students in the college of art reporting the lowest use. This variation in GAI use may be explained through the lens of the Technology Acceptance Model (TAM), which asserts that perceived usefulness and perceived ease of use are critical factors that influence the adoption of new technology. Students from various disciplines may receive different levels of exposure to technologies such as GAI, which may influence how comfortable they are with the technology or how they see it benefiting their field of study.
These findings highlight the varying degrees of GAI integration into different academic disciplines and suggest that programs such as business may be more aligned with potential applications of GAI. By examining applications of GAI use in disciplines with students reporting higher usage, other academic programs with lower GAI use may be able to mirror some of the benefits of GAI into their own courses and programs. Results of this analysis also point to potential gaps in student exposure to GAI, with many students reporting that they have never used GAI as part of their education. Plans for ongoing research include a mixed-methods approach to determining effective uses of GAI in various academic disciplines as well as identifying reasons students do or do not choose to utilize GAI within their specific area of study. Understanding these patterns not only aids in curriculum development but also prepares students for a future where AI proficiency is crucial across all disciplines.
Authored by
Daniel Kane (Utah State University), Dr. Wade H Goodridge (Utah State University), Linda Davis Ahlstrom (Utah State University), Dr. Oenardi Lawanto (Utah State University), Michaela Harper (Utah State University), and Dr. Cassandra McCall (Utah State University)
AI technologies are increasingly permeating everyday life and are set to significantly transform university teaching in the coming years. A growing range of applications and concrete use cases for these technologies are emerging in the context of higher education. In particular, recent advances in AI, especially natural language processing (NLP) tools like ChatGPT, have created opportunities to meaningfully integrate these technologies and enhance the educational experience in laboratory-based instruction.
This submission presents work conducted within the KICK 4.0 research project, which focuses on connecting NLP tools with online laboratories in engineering education. The project addresses new competency requirements for students that are becoming increasingly relevant with the growing use of NLP technologies. The purpose of this submission is to provide a structured overview of the use of AI systems in laboratory-based teaching and learning in engineering education. Specifically, it aims to summarize how such systems have been applied in laboratory-based instruction so far and the insights gained from those experiences. The focus is on the opportunities and limitations of NLP systems, their potential to enhance student learning outcomes, and their ability to generate user-oriented feedback.
Three research questions guide our systematic literature review: 1) How is AI currently used in laboratory-based engineering education? 2) How can AI systems provide accurate and high-quality feedback to students in this context? and 3) How can students be trained to competently use NLP systems while understanding both their limitations and opportunities?
For this review, we examined national and international databases using relevant keywords such as “engineering,” “education,” “laboratory,” “AI,” and “NLP,” in various combinations, along with adjacent search terms like “feedback,” “opportunities,” and “limitations.” The results were screened for relevance and will be further examined for this paper. Preliminary findings show that many educators are exploring the use of NLP technologies, but satisfaction levels largely depend on the quality of feedback and the perception of the results' accuracy. High-quality feedback is essential, and the user plays a significant role in shaping the outcomes by posing the right questions or priming the system effectively. However, many educators are still unclear on how to successfully integrate these tools into their teaching. Additionally, there are critical voices highlighting the challenges that may arise from further integration of NLP technologies in higher education, particularly regarding the assessment and evaluation of student work.
As a result of this work, we expect to present an overview of the current research landscape in engineering education regarding the use of AI, specifically NLP technologies, in laboratory-based instruction. Our overall aim is to identify which applications have been introduced, which have been technically implemented and tested for effectiveness, and how students and educators view these technologies.
Authored by
Mr. Johannes Kubasch (University of Wuppertal), Dr. Dominik May (University of Wuppertal), and Doha Meslem (Bergische Universität Wuppertal)
This research brief presents initial insights into how AI-powered tools are influencing STEM education and pedagogical research, drawn from surveys we developed and conducted in Fall 2024.
Artificial intelligence (AI) has rapidly emerged as a transformative force in engineering education, particularly in the field of pedagogical research. While numerous studies have explored AI's effects on learning outcomes in early education settings—such as kindergarten, primary, and secondary schools—evidence on its efficacy in higher education is still limited. This project seeks to evaluate the impact of an AI-driven survey tool on STEM learning outcomes, with a specific focus on enhancing collaborative skills that are crucial in today’s educational landscape.
Effective teamwork is a fundamental component of STEM learning, especially given the growing emphasis on collaboration and communication skills in higher education. Yet, many college students face challenges in addressing complex, real-world problems due to insufficient collaborative abilities. To tackle this issue, we have developed an innovative AI-driven open-ended reflective tool that dynamically generates personalized questions based on students' prior responses, instructional context, and educational theories surrounding effective teamwork.
The tool is deployed across diverse student populations at Cornell University, specifically targeting students enrolled in two 1-credit courses: an upper-level biomedical engineering course focusing on design and an introductory physics lab. Both courses feature intensive teamwork, with students actively engaged in collaborative projects. Teamwork skills are explicitly taught through structured activities, lectures, and the use of team contracts. Furthermore, students in these courses participate in weekly reflections on various topics. The AI-powered tool will be deployed six times in the biomedical engineering course and three times in the physics course, reaching 455 students across both courses and gathering over 1,500 individual responses.
Through this research brief, we share our initial findings measuring the impact of the reflection instrument on engineering and physics students' responses using a randomized controlled trial. We hypothesize that students will engage in deeper reflection when prompted with personalized questions. Data will be collected through an AI-integrated Qualtrics platform. We use qualitative methods to analyze the reflection responses, applying an a priori coding scheme based on socially-shared regulation of learning as well as a coding scheme, derived from work by Wong et al. (1995) and Rogers et al. (2019), that indicates reflective skills. We also employ quantitative methods, including keyword matching and response length, as metrics for analysis. Ultimately, our work seeks to contribute to the ongoing effort to integrate AI into education research, paving the way for more effective teaching and learning strategies in STEM disciplines.
Authored by
Kangxuan Rong (Cornell University), Dr. Campbell James McColley (Cornell University), Ted Karanja Mburu (University of Colorado Boulder), and Alexandra Werth (Cornell University)
Active recall and spaced repetition are study techniques that have been shown to improve comprehension and long-term knowledge retention. Active recall asks students to self-create review questions from learning material and gauge their ability to retrieve the answers from memory during review sessions. Spaced repetition schedules active recall review sessions systematically to maximize retention and understanding of learned material. Common software tools students use to implement these techniques include Anki and Quizlet, which provide flashcard creation and scheduling capabilities. While these tools are effective, it can be time-consuming for students to set up a thorough review set, and when too few questions are provided, users often end up memorizing the set of answers rather than learning the underlying concepts.
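For context, tools such as Anki schedule reviews with variants of the SuperMemo SM-2 algorithm; a condensed sketch of that scheduling logic (illustrative, not the authors' code):

```python
# Condensed SM-2-style scheduling, the family of algorithms behind
# Anki/SuperMemo-style review tools.

def next_interval(quality: int, reps: int, interval: float, ease: float):
    """quality: 0-5 self-rating of recall. Returns updated (reps, interval, ease)."""
    if quality < 3:                      # failed recall: restart the card soon
        return 0, 1.0, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1.0                   # first success: review tomorrow
    elif reps == 1:
        interval = 6.0                   # second success: review in 6 days
    else:
        interval = round(interval * ease)
    return reps + 1, interval, ease

state = (0, 0.0, 2.5)
for q in [5, 4, 5]:                      # three successful reviews
    state = next_interval(q, *state)
    print(state)                         # intervals grow: 1, 6, 16 days
```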
Large language models (LLMs) offer the ability to augment the active recall and spaced repetition learning process through the automatic generation and evaluation of related review questions. We introduce a tool that allows users to request additional LLM-generated review questions: slight variations, on the same topics, of the questions they have created themselves. Students can also receive LLM-generated evaluations of their responses, helping them assess their understanding of each topic. By asking subtly different questions on similar topics, we hypothesize that our tool will reinforce a deeper understanding of core concepts rather than encourage memorization of the same questions and answers. Additionally, by reducing the time spent creating review content, students can spend more time reviewing course material.
Upon completion of our tool, we plan to conduct a controlled study that compares students in a course who utilize our tool (test set) with students utilizing other flashcard applications or traditional study techniques within the same course (control set). Students will use our tool to create and review course questions over a defined time period. After this period, students will be prompted to review how much time was saved using our tool, their level of confidence with each course topic, and their satisfaction with LLM-generated questions and responses. Additionally, we can use in-app metrics such as user engagement and question review sessions to gauge student satisfaction. This methodology measures student confidence and engagement with course learning to observe the impact of our tool in streamlining the active recall and spaced repetition learning process.
Authored by
Mr. Muhammed Yakubu (University of Toronto), Mr. Jasnoor Guliani (University of Toronto), Mr. Nipun Shukla (University of Toronto), Dylan O'Toole (Affiliation unknown), and Dr. Hamid S Timorabadi P.Eng. (University of Toronto)
An enormous amount of data is generated daily by educational tools used in traditional (e.g., colleges) and non-traditional (e.g., open online course providers such as Coursera) learning environments. This data holds the potential to uncover valuable insights that can inform instructional decision-making. Despite the widespread adoption of data mining techniques in education, these methods often act as a "black box," limiting educators' ability to interpret their decision-making and outputs. To address this challenge, studies have explored the use of explainable artificial intelligence (xAI) to improve the transparency and interpretability of data mining algorithms in educational settings. This work-in-progress study employs several data mining algorithms to extract critical insights from extensive, publicly available educational datasets. These datasets contain information on students' perceived health status and its relation to their mental health, academic performance in college, and dropout rates. We then applied an xAI technique, SHAP (SHapley Additive exPlanations), to interpret and better understand the output of these algorithms. Preliminary findings reveal that the applied data mining algorithms, combined with xAI techniques, were able to identify key factors contributing to student academic performance, flag early warning signs of mental or physical health concerns, and assess which students are at risk of dropping out. This work contributes to the educational data mining literature by offering guidelines for integrating xAI methods with data mining algorithms to enhance their interpretability in educational contexts. Future directions include exploring other xAI techniques, applying them to diverse educational datasets, and evaluating the impact of this combined approach (data mining model + xAI) on decision-making in practical educational settings.
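The study's code is not included in the abstract; pairing a tree-based model with SHAP typically looks like the following sketch, where the dataset and column names are placeholders:

```python
# Sketch of the model + SHAP pairing described above (the dataset file and
# feature columns are placeholders, not the study's actual data).

import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("students.csv")                    # hypothetical dataset
X = df[["perceived_health", "gpa", "attendance"]]
y = df["dropout"]                                   # 1 = dropped out

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)               # fast exact SHAP for trees
sv = explainer.shap_values(X)
# Older SHAP returns a list per class; newer versions return one 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
shap.summary_plot(sv_pos, X)   # which factors drive dropout risk, and how
```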
Authored by
Eesha tur razia babar (Affiliation unknown) and Mr. Ahmed Ashraf Butt (University of Oklahoma)
Writing high-quality learning objectives is crucial in the design of an effective curriculum. These learning objectives help the instructor align the course components (e.g., content, assessment, and pedagogy) to provide students with a good learning experience. However, as most instructors do not have formal education training, they lack the experience and expertise to write quality learning objectives. The lack of quality learning objectives could lead to misalignment in course components, where students often complain of variation between what is taught in class and what is assessed in the exams.
In this Work-in-Progress paper, we argue that generative AI, the recent advancement in AI, has the potential to assist in improving the quality of learning objectives by providing real-time scaffolding and feedback. The SMART framework (Specific, Measurable, Attainable, Relevant, and Time-bound), a widely recognized best practice for crafting clear and effective learning objectives, can serve as the evaluation criteria. In this regard, we collected 100 learning objectives from various STEM course curricula that are publicly available online. Using the SMART criteria, we evaluated each learning objective with two approaches: 1) feedback generated by human experts, and 2) feedback generated by a generative AI model (GPT, a generative pre-trained transformer). More specifically, we addressed the following research question: How well does GPT feedback match that of human experts when evaluating course learning objectives using the SMART framework? We used Cohen's kappa to assess the level of agreement between GPT and human experts' evaluations and qualitatively analyzed the learning objectives with strong disagreement among evaluations. Our findings showed that GPT reaches reasonable agreement when evaluating the "Relevant" aspect of learning objectives, but is inconsistent with human evaluation on the other criteria. Potential causes include the AI's limited contextual understanding, such as how the assessment is applied, and its lack of access to the broader course structure and learner needs. Overall, the results suggest that while GPT can assess certain aspects of learning objectives effectively, further refinement with more contextual information is needed. After improving the current AI approaches, we plan to build scaffolding that provides instructors with real-time feedback while they work on their learning objectives. This study contributes to the literature by exploring ways to use AI in education, helping teachers make informed decisions with minimal effort, and facilitating student learning.
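The study's prompts are not reproduced here; a minimal sketch of rating one SMART criterion with GPT and checking agreement with Cohen's kappa might look like this (the prompt wording, model id, and toy examples are assumptions):

```python
# Sketch: GPT rates one SMART criterion per objective; Cohen's kappa then
# measures agreement with human expert ratings on the same objectives.

from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()

def gpt_rates_specific(objective: str) -> int:
    prompt = ("Does this learning objective satisfy the 'Specific' SMART "
              f"criterion? Answer only yes or no.\n\nObjective: {objective}")
    text = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content
    return int("yes" in text.lower())

objectives = ["Students will understand circuits.",   # hypothetical examples
              "By week 6, students will compute Thevenin equivalents "
              "for two-source circuits."]
human = [0, 1]                                         # expert yes/no ratings
gpt = [gpt_rates_specific(o) for o in objectives]
print(cohen_kappa_score(human, gpt))                   # agreement on 'Specific'
```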
Authored by
Mr. Ahmed Ashraf Butt (University of Oklahoma), Dr. Saira Anwar (Texas A&M University), and Dr. Asefeh Kardgar (Texas A&M University)
This work-in-progress study, "Uncovering AI Adoption Trends Among University Engineering Students for Learning and Career Preparedness," explores self-reported data on AI use by university engineering students. The purpose of this study is to investigate how students are utilizing AI technologies and to understand their views on the role of AI in their future. The primary research question is: How does the adoption of AI technologies for learning vary across demographic groups among university engineering students? Advances in technology and the emergence of AI tools have attracted attention from academia, research, and industry. The rapid growth of deep learning technologies has changed the landscape of the work environment, and universities may need to adapt to keep pace. Dynamic changes in the workplace have accelerated as these AI technologies are leveraged to complete tasks at high speed. Research indicates that the workforce increasingly demands higher skill levels, including specialized AI skills. Formal education in AI basics could be crucial for future career readiness.
Over 150 engineering students reported their demographics, including age, race, gender, year in school, and if they identify as having any form of disability. Currently, the survey remains open. The final study will incorporate more responses, and additional data will come from semi-structured interviews. This research explores the ways in which undergraduate and graduate students at a major R1 land-grant university in the western United States interact with AI tools.
Students reported on using AI technologies, like ChatGPT, to aid in their learning. Preliminary findings suggest that freshman students are less likely to have used AI technologies than those later in their college careers. Encouragingly, students closest to entering the workforce are the ones with the most exposure to these technologies. Interestingly, students who identify as having any form of a disability or condition that impacts their learning (e.g., learning disability, neurodiversity, physical disability, etc.) initially reported lower usage of AI technologies compared to their classmates. The lower use by freshmen and increasing exposure to generative AI throughout students’ university experience is noteworthy.
Students were also asked for their views on the formal integration of AI technologies into College of Engineering courses. It could be valuable for universities to explore adding formal training to help equip students for the workforce. We anticipate that this study will highlight how exposure to AI technologies may prove essential for engineering students preparing for a rapidly evolving workplace, as AI has the potential to enhance real-world problem-solving skills and help students meet workplace demands.
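A hypothetical sketch of the demographic cross-tabulation such a survey analysis might use, assuming responses in a CSV with columns "year_in_school" and "used_ai"; neither the file name nor the column names come from the study:

```python
# Sketch: AI adoption rate by cohort from self-reported survey data.
# Assumes "used_ai" is coded 0/1 per respondent (assumption, not the
# study's actual coding scheme).
import pandas as pd

responses = pd.read_csv("ai_adoption_survey.csv")  # hypothetical file

# Share of students in each cohort reporting any AI use for learning.
adoption_by_year = (
    responses.groupby("year_in_school")["used_ai"]
    .mean()
    .sort_index()
)
print(adoption_by_year)
```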
Authored by
Linda Davis Ahlstrom (Utah State University), Dr. Oenardi Lawanto (Utah State University), Dr. Cassandra McCall (Utah State University), Michaela Harper (Utah State University), Dr. Wade H Goodridge (Utah State University), and Daniel Kane (Utah State University)
This work-in-progress paper explores university students’ perspectives on Generative Artificial Intelligence (GAI) tools, such as ChatGPT, an increasingly prominent topic in the academic community. There is ongoing debate about whether faculty should teach students how to use GAI tools, restrict their usage to maintain academic integrity, or establish regulatory guidelines for sustained integration into higher education. Unfortunately, limited research exists beyond surface-level policies and educator opinions regarding GAI, and its full impact on student learning remains largely unknown. Therefore, understanding students' perceptions and how they use GAI is crucial to ensuring its effective and ethical integration into higher education. As GAI continues to disrupt traditional educational paradigms, this study seeks to explore how students perceive its influence on their learning and problem-solving.
As part of a larger mixed-methods study, this work-in-progress paper presents preliminary findings from the qualitative portion, which uses a phenomenological approach to answer the research question: How do university students perceive disruptive technologies like ChatGPT affecting their education and learning? By exploring the implications of Artificial Intelligence (AI) tools for student learning, academic integrity, individual beliefs, and community norms, this study contributes to the broader discourse on the role of emerging technologies in shaping the future of teaching and learning in education.
Authored by
Michaela Harper (Utah State University), Dr. Cassandra McCall (Utah State University), Daniel Kane (Utah State University), Dr. Wade H Goodridge (Utah State University), Linda Davis Ahlstrom (Utah State University), and Dr. Oenardi Lawanto (Utah State University)
The power of engineering simulation tools is well known in industry; simulation skills are listed as a key area for new graduates. Simulation can also help students visualize phenomena, particularly in "real-world" scenarios. But the tools themselves can be overwhelming for early-years students, who could benefit most, and using them effectively requires additional instruction time in already packed curricula. In this paper, we present work in progress that leverages recent developments making the Ansys simulation suite more accessible through APIs and Python libraries, enabling the development of teaching resources designed for the higher education classroom. This work aims to bring the benefits of simulation into the curriculum without additional student training requirements.
Two implementation approaches will be discussed here. The first uses a Jupyter Notebook (or equivalent) interface to engage with the software. Students and instructors can interact with either the code or the simulation tools as desired, providing the opportunity to expand depending on course needs. The second approach is a Python-based application with a front-end user interface. Students in this case interact with the desired visualizations via a simple "app", leaving the more complex simulation software unseen in the background.
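A minimal sketch of the notebook-based approach, assuming the PyMAPDL package (ansys-mapdl-core) and a local Ansys MAPDL installation; the cantilever beam model is purely illustrative, not one of the paper's actual teaching resources:

```python
# Sketch: driving Ansys MAPDL from a notebook via PyMAPDL so students can
# visualize bending in a simple cantilever beam. Geometry, material, and
# load values are illustrative assumptions.
from ansys.mapdl.core import launch_mapdl

mapdl = launch_mapdl()  # start a local MAPDL instance

mapdl.prep7()
mapdl.et(1, "BEAM188")            # 3-D beam element
mapdl.mp("EX", 1, 200e9)          # Young's modulus (steel, Pa)
mapdl.mp("PRXY", 1, 0.3)          # Poisson's ratio
mapdl.sectype(1, "BEAM", "RECT")
mapdl.secdata(0.01, 0.02)         # 10 mm x 20 mm cross-section

mapdl.k(1, 0, 0, 0)               # keypoints defining a 1 m beam axis
mapdl.k(2, 1, 0, 0)
mapdl.l(1, 2)
mapdl.lesize("ALL", ndiv=20)      # 20 elements along the length
mapdl.lmesh("ALL")

mapdl.nsel("S", "LOC", "X", 0)    # fix the left end
mapdl.d("ALL", "ALL")
mapdl.nsel("S", "LOC", "X", 1)    # load the free end
mapdl.f("ALL", "FY", -100)        # 100 N downward
mapdl.allsel()

mapdl.run("/SOLU")
mapdl.antype("STATIC")
mapdl.solve()

mapdl.post1()
mapdl.set(1)
# The deformed-shape plot is the visualization students interact with.
mapdl.post_processing.plot_nodal_displacement("Y")

mapdl.exit()
```

In the second, app-based approach, the same PyMAPDL calls would run behind a simple front end, so students see only the visualization and a few input controls rather than the code.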
Details of the teaching resource creation process, implementation challenges, and example curriculum integration opportunities will be shared, as well as preliminary feedback from academics and students using the tools presented. Our hope with this work is to lower the energy barrier for including simulation in the engineering curriculum, allowing students to take advantage of the visualization capabilities and familiarize themselves with the concepts of simulation tools early in their degree journey.
Authored by
Dr. Susannah Cooke (ANSYS, Inc.) and Dr. Kaitlin Tyler (ANSYS, Inc.)