2025 ASEE Annual Conference & Exposition

An Evaluation of Prompt Engineering Strategies by College Students in Competitive Programming Tasks

Presented at Computers in Education Division (COED) Track 5.C

Generative AI, powered by large language models (LLMs), has the potential to automate aspects of software engineering. This study examined how computer science students used these tools during an AI-assisted competitive programming competition held across multiple campuses. Participants used generative AI tools such as ChatGPT, GitHub Copilot, and Claude, and submitted transcripts documenting their interactions for analysis. Drawing on the prompt engineering literature, the study mapped six key strategies to 14 areas of best practice for competitive programming. These practices included clarifying instructions, streamlining prompt context, employing chain-of-thought prompting, providing feedback to refine solutions, and leveraging LLM meta-capabilities for problem solving. The transcripts were analyzed to assess adherence to these practices. Findings revealed substantial variability, with an average compliance rate of 34.2% across practices. While the simplest practices achieved adherence rates as high as 98%, eight practices saw minimal or no use. These results suggest that students readily adopt basic prompt engineering techniques but struggle with more complex strategies, pointing to a need for structured prompt engineering instruction in computer science curricula to maximize the potential of generative AI tools.
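For illustration, a minimal sketch of one of the practices the abstract mentions, chain-of-thought prompting: asking the model to reason step by step before emitting code. The function name and prompt wording below are hypothetical examples, not the study's actual instrument.

```python
# Hypothetical sketch of chain-of-thought prompting for a competitive
# programming task. The prompt wording here is illustrative only.
def build_cot_prompt(problem_statement: str, constraints: str) -> str:
    """Assemble a prompt that asks the model to reason step by step
    (chain-of-thought) before writing any code."""
    return (
        "You are solving a competitive programming problem.\n"
        f"Problem: {problem_statement}\n"
        f"Constraints: {constraints}\n"
        "First, reason step by step: restate the problem, identify a "
        "suitable algorithm and its time complexity, and outline the "
        "solution.\n"
        "Only after the reasoning, provide the final code."
    )

prompt = build_cot_prompt(
    "Given an array of n integers, find the maximum subarray sum.",
    "1 <= n <= 10^5, values fit in 64-bit integers",
)
print(prompt.splitlines()[0])
# prints "You are solving a competitive programming problem."
```

In contrast, a low-adherence prompt of the kind the study found common would simply paste the problem statement and ask for code, skipping the explicit reasoning request.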

Authors
  1. Sita Vaibhavi Gunturi, Pennsylvania State University, Harrisburg, The Capital College
  2. Dr. Jeremy Joseph Blum, Pennsylvania State University, Harrisburg, The Capital College
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 22, 2025, and to all visitors after the conference ends on June 25, 2025.