The growing influence of generative artificial intelligence (AI) has redefined how students learn and engage with programming. To address this evolving landscape, an introductory programming course for computer science, engineering, and other STEM majors was comprehensively redesigned around multiple learning and support tools: a course-specific AI Agent, weekly mini-projects, paper-based coding assessments, a capstone project, the iota grade prediction tool, and Microsoft Teams as a central communication and collaboration hub. This study presents the redesign and its goal: helping students build computational thinking, problem-solving ability, and AI literacy through an ecosystem that blends technology, human interaction, and reflective learning.
The course introduces weekly mini-projects that build iteratively toward a larger semester-long goal: developing a functional application that grows with each new topic. This structure ties weekly learning objectives to tangible outcomes, enabling students to see how each concept contributes to a broader system. By iteratively extending prior work, students develop a sense of purpose and continuity while deepening their understanding of programming logic, data structures, and functions. This approach has been shown to improve engagement, retention, and students’ ability to integrate new knowledge effectively.
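To make the iterative structure concrete, the following is a minimal sketch, assuming Python as the course language (the abstract does not name one), of what a mid-semester increment of such a growing application might look like; the study-planner example and all identifiers are hypothetical:

    # Hypothetical week-3 increment of a semester-long "study planner" app.
    # Week 1 printed a fixed schedule; this week adds a list-based data
    # structure and functions, mirroring how each topic extends prior work.

    tasks = []  # each task is a (name, hours) tuple; a later week adds a class

    def add_task(name, hours):
        """Store a planned task for the week."""
        tasks.append((name, hours))

    def total_hours():
        """Sum planned hours across all stored tasks."""
        return sum(hours for _, hours in tasks)

    add_task("Lab 3 report", 2.5)
    add_task("Mini-project refactor", 4.0)
    print(f"Planned work: {total_hours()} hours across {len(tasks)} tasks")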
At the core of the redesign is a customized AI Agent developed specifically for this course. Unlike general-purpose AI tools, the agent guides students toward solutions using scaffolded hints and reflective prompts, rather than providing direct answers. It also serves as an information assistant, helping students access course details, lab schedules, grading criteria, and exam formats. In large-enrollment settings, this tool enhances accessibility, engagement, and personalized support. Thousands of recorded student–agent interactions highlight its role as a trusted, course-specific learning companion. Students are also taught to critically evaluate AI-generated outputs by understanding the limitations of generative tools and learning how to use AI responsibly to augment their own learning.
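The abstract does not describe how the agent is implemented; purely as an illustration, the sketch below shows one way a "hints, not answers" policy could be encoded as a system prompt for whatever chat-completion model backs such an agent. All wording and identifiers are hypothetical:

    # Illustrative only: the course agent's actual implementation is not
    # described in the source. This sketches a scaffolded-hint policy as data.

    SYSTEM_PROMPT = """You are the assistant for an intro programming course.
    Never write complete solutions. Instead:
    1. Ask what the student has tried so far.
    2. Point to the relevant lecture topic or documentation.
    3. Give one conceptual hint per reply, escalating only if still stuck.
    4. Answer logistics questions (labs, grading, exams) directly from the syllabus.
    """

    def build_request(student_message, history):
        """Assemble the message list for the chat model behind the agent."""
        return [{"role": "system", "content": SYSTEM_PROMPT},
                *history,
                {"role": "user", "content": student_message}]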
The course further incorporates paper-based programming assessments that evaluate students’ logical reasoning and ability to structure code without relying on compilers. These exams ensure that students can write, organize, and debug code independently. To reduce grading workload in large classes, CodeShuffler is used to streamline the evaluation process while maintaining rigor and fairness. This method strengthens students’ conceptual grasp of programming logic and helps detect excessive dependence on AI for digital submissions.
The iota grade prediction tool complements these efforts by enabling students to monitor their ongoing performance and receive predictive feedback throughout the semester. This fosters accountability and helps instructors identify and support at-risk students early.
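iota's actual prediction model is not described here; as a simple illustration of how a running grade projection can work in principle, the sketch below extrapolates a student's average on completed, weighted components to the ungraded remainder. The weights and scores are invented:

    # Illustrative only: not iota's actual model. Assumes the current average
    # on graded work holds for the remaining weight.

    def project_grade(components):
        """components: list of (weight, score) pairs; score is None if ungraded."""
        earned = sum(w * s for w, s in components if s is not None)
        done_weight = sum(w for w, s in components if s is not None)
        if done_weight == 0:
            return None  # nothing graded yet, no basis for a projection
        current_avg = earned / done_weight
        return earned + (1 - done_weight) * current_avg

    # 40% exams (82 so far), 30% mini-projects (91), 30% capstone (ungraded)
    print(project_grade([(0.40, 82), (0.30, 91), (0.30, None)]))  # about 85.9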
Finally, the semester concludes with a capstone project in which students integrate a graphical user interface into their existing codebase using generative AI tools. This experience challenges them to craft precise prompts, integrate AI-generated code, and reflect on human–AI collaboration in problem-solving.
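As a hypothetical flavor of such a capstone outcome, the sketch below shows the kind of AI-assisted Tkinter front end a student might wire onto existing course code; the prompt in the comment and all identifiers are illustrative, not taken from the course:

    # Illustrative capstone result. A student's prompt might read:
    # "Wrap my total_hours function in a small Tkinter window with a button."

    import tkinter as tk

    def total_hours(tasks):
        """Existing course logic being reused, e.g. from an earlier mini-project."""
        return sum(hours for _, hours in tasks)

    tasks = [("Lab report", 2.5), ("Capstone GUI", 6.0)]

    root = tk.Tk()
    root.title("Study Planner")
    label = tk.Label(root, text="Press to total your planned hours")
    label.pack(padx=10, pady=5)
    button = tk.Button(
        root,
        text="Total hours",
        command=lambda: label.config(text=f"{total_hours(tasks)} hours planned"),
    )
    button.pack(padx=10, pady=10)
    root.mainloop()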
Together, these pedagogical innovations, namely iterative mini-projects, guided AI tutoring, and reflective assessments, not only equip students with foundational coding skills but also instill a critical awareness of how to think, learn, and create with AI.