2025 ASEE Annual Conference & Exposition

Exploring Student Self-Efficacy in AI Through Model Building Artifacts

Presented at Computers in Education Division (COED) Track 4.E

As the adoption of Artificial Intelligence (AI) and Machine Learning (ML) in schools rises, students need hands-on experience with these technologies. New technologies require that we ask new questions in new ways, so there is a need for research on AI and ML in current educational contexts (Savenye & Robinson, 2004). In this study, a group of middle and high school-aged Black students participated in a two-week summer program to learn about AI in science. Throughout the program, they focused on how AI uses computer vision to classify images for scientific purposes. Students also identified potential issues with AI, such as bias. Beyond learning about AI, students gained hands-on experience building models with Google Teachable Machine. For their project presentations, they created posters that identified community-relevant topics, the classification task to perform, the data they used, and their classification accuracy.

This study analyzed students’ project artifacts and self-efficacy through surveys administered before and after the two-week program. The surveys were based on the sources of self-efficacy identified in Bandura’s social cognitive theory (Bandura, 1997). The study was also informed by expectancy-value theory (Wigfield & Eccles, 2000), hypothesizing that higher self-efficacy beliefs are associated with better-designed and better-implemented projects because students are more engaged and open to learning.

In this study, student self-efficacy was compared to the posters and projects that students produced. The surveys used Likert scales to measure self-efficacy for AI for science and AI for technology. Student artifacts consisted of posters and Google Teachable Machine models. Students were divided into teams of three to four, and each team worked on a single community-relevant topic that they presented at the end of the program. Team posters were evaluated with a rubric containing four categories totaling 12 points; the Google Teachable Machine models were evaluated with a rubric containing five categories totaling 15 points. Artifact scores were then analyzed alongside self-efficacy scores to create efficacy quadrants, which visualize the relationship between self-efficacy and the quality of models and posters.
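The quadrant analysis described above can be sketched in code. This is a hedged illustration only: the median-split cutoffs, quadrant labels, variable names, and sample scores below are assumptions for demonstration, not the study's actual procedure or data.

```python
from statistics import median

# Hypothetical (artifact_score, self_efficacy) pairs -- illustrative only,
# not the study's data. Artifact scores combine poster (0-12) and model (0-15)
# rubric points; self-efficacy is an assumed Likert-scale mean.
students = [(20, 4.1), (25, 4.5), (12, 2.8), (18, 3.0), (26, 3.2), (14, 4.4)]

# Median splits define the quadrant boundaries (an assumed design choice).
art_cut = median(s for s, _ in students)
eff_cut = median(e for _, e in students)

def quadrant(score, efficacy):
    """Assign a student to one of four efficacy quadrants."""
    high_art = score >= art_cut
    high_eff = efficacy >= eff_cut
    if high_eff and high_art:
        return "high efficacy / high quality"
    if high_eff:
        return "high efficacy / low quality"
    if high_art:
        return "low efficacy / high quality"
    return "low efficacy / low quality"

for score, eff in students:
    print(score, eff, "->", quadrant(score, eff))
```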

There was a positive correlation between poster scores and model scores (r = .64, p < .001). Analyses were also performed to explore correlations between students' poster scores, model scores, and self-efficacy for AI for science, self-efficacy for AI for technology, and total self-efficacy scores on the pre- and post-surveys. Students' poster scores and model scores were not correlated with their self-efficacy scores. The data will be discussed in the full paper and presentation relative to the theoretical framework and our experiences implementing this summer program on computer vision for science.
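The reported association uses Pearson's product-moment correlation. As a minimal sketch of that computation (the rubric scores below are invented for illustration; only the method matches the abstract):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical poster (0-12) and model (0-15) rubric totals, for illustration.
poster = [8, 10, 6, 11, 9, 7]
model = [10, 13, 8, 14, 12, 9]
print(round(pearson_r(poster, model), 2))
```

In practice an analysis like this would typically use `scipy.stats.pearsonr`, which also returns the p-value.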

Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. [blinded for review].

Authors
  1. Miss Gabriella Marie Haire University of Florida College of Education [biography]
  2. Dr. Christine Wusylko University of Florida [biography]
  3. Stephanie Killingsworth University of Florida
  4. Brian Abramowitz University of Florida [biography]
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.