Artificial intelligence (AI) is predicted to be one of the most disruptive technologies of the 21st century (Păvăloaia & Necula, 2023), and to prepare all young people to live and work in an AI-infused world, many are calling for teaching CS and AI across grade bands (Committee on STEM Education, 2018). However, this presents many challenges, particularly in supporting teachers to integrate concepts that they were not taught in their preservice education (Gatlin, 2023) or that lack clear connections to K-12 core standards. Further, most AI educational interventions utilize LLMs or chatbots. Computer vision is an underexplored, underutilized, and accessible way to introduce young people to CS and AI.
To support teachers in integrating CS and AI into their instruction, our interdisciplinary team of paleontologists, natural history museum educators, computer science engineers, and educational technology researchers designed and developed an innovative, flexible computer vision curriculum for science teachers. The curriculum, called [TITLE], is funded by [blinded for review; award number] and blends CS and paleontology as middle school students build and evaluate their own computer vision models to classify fossil shark teeth. The curriculum consists of five flexible modules aligned with national and [STATE] science standards. These modules include an introduction to AI, classifying shark teeth and data, training and evaluating machine learning models, identifying bias in datasets, and building a machine learning model. Teachers choose whether to teach these modules in sequence or in isolation, and we encourage teachers to adapt these modules to their own teaching styles and students' learning needs.
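To make the "training and evaluating" module concrete for readers unfamiliar with the workflow students follow, the sketch below is a deliberately simplified, hypothetical stand-in: a nearest-centroid classifier over made-up tooth measurements (crown height and width), not the curriculum's actual tool, dataset, or species labels. It illustrates the same train-then-evaluate loop students experience when they hold out data and check model accuracy.

```python
# Toy illustration of the train/evaluate cycle in the curriculum's ML modules.
# All feature values and species labels here are invented for the example.
import random

random.seed(0)

def make_samples(label, center, n=30):
    # Synthetic (crown_height_mm, crown_width_mm) measurements for one species.
    return [((center[0] + random.gauss(0, 0.5),
              center[1] + random.gauss(0, 0.5)), label) for _ in range(n)]

data = make_samples("tiger_shark", (12.0, 14.0)) + make_samples("mako", (25.0, 10.0))
random.shuffle(data)
train, test = data[:40], data[40:]  # hold out 20 samples for evaluation

# "Training": compute the mean feature vector (centroid) for each class.
centroids = {}
for label in {"tiger_shark", "mako"}:
    pts = [x for x, y in train if y == label]
    centroids[label] = tuple(sum(dim) / len(pts) for dim in zip(*pts))

def predict(x):
    # Assign the class whose centroid is nearest (squared Euclidean distance).
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2 for a, b in zip(x, centroids[lb])))

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

As in the classroom activity, accuracy here depends directly on how clean and well-separated the training examples are, which is the same intuition students later apply to image quality and dataset bias.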
[TITLE] is implemented as a three-year project and is currently in year three of implementation. To date, we have partnered with 57 teachers, 597 students, and 47 schools. In this paper, we showcase one 7th grade science teacher and describe the implementation choices and changes she made for her own classroom and students' needs. The purpose of this paper is to explore how one teacher adapted this curriculum to her own classroom and to describe successes and challenges that can inform others wishing to design and develop similar curricula. The data we report on include teacher reflections, the changes she made to the modules, researcher observations of implementation, student achievement pre- and post-tests, document analysis of student work, and students' science-related attitudes before and after the modules.
This teacher used the curriculum at the beginning of Fall 2024 to teach the Nature of Science standards. She taught one module per day, in sequence, and chose to emphasize data science, curation, and bias. As one example, this teacher was very intentional about exploring and comparing the quality of data curated by citizen scientists and scientific institutions, and about helping students understand how data quality influenced their computer vision models. When creating their own computer vision models, students recognized that "clear images" and "many images" were needed for accurate models. Many students recognized that AI models were only as good as the data used to build them, with one student saying AI may have "trouble generating answers for people. If you give limited answers or bad pictures of descriptions." Students' achievement and science-related attitudes are currently being assessed and analyzed, and more details will be shared in the full paper.
While curricular interventions using chatbots and LLMs have a growing literature base, curricula that engage young people in building and using computer vision models remain sparse. A more robust description of our innovative computer vision curriculum, the curriculum modifications made by the teacher, the data analysis, results, and implications will be given in the full paper and the presentation.
The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.