2025 ASEE Annual Conference & Exposition

From Mathematical Theory to Engineering Application: An Undergraduate Student’s Research Journey

Presented at Computers in Education Division (COED) Track 5.C

With the rapid development of autonomous vehicles and advanced sensing technologies, the demand for expertise in computer vision has surged. However, many undergraduate students have limited or no exposure to this growing field. This paper documents an undergraduate student's journey in learning and implementing a Time-to-Contact (TTC) algorithm, which estimates the time remaining before a moving observer collides with an object and can therefore support collision avoidance in autonomous vehicles. By sharing this experience, the paper provides a roadmap for other instructors to guide their students in acquiring essential knowledge and practical experience in computer vision.
The student started with minimal knowledge of computer vision and no prior experience with optical flow algorithms. By studying scholarly papers on TTC and visual looming, and engaging with visual demonstrations, they developed a foundational understanding of the concepts. The next step was to create a vision-based 3D Python simulation to calculate TTC for a moving object relative to the camera, which yielded positive and verifiable results.
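To make that simulation concrete, the following is a minimal sketch (not the paper's actual code) of the underlying idea: an object of known width approaches a pinhole camera, and TTC is estimated from the relative expansion rate (looming) of its projected size, then compared with the ground-truth value Z / |dZ/dt|. The focal length, time step, velocities, and function names are illustrative assumptions.

    FOCAL = 1.0      # assumed focal length (arbitrary units)
    DT = 0.1         # assumed simulation time step, seconds

    def project_size(width, depth, focal=FOCAL):
        """Projected image size of an object of physical width `width` at depth `depth`."""
        return focal * width / depth

    def simulate_ttc(z0=50.0, vz=-5.0, width=2.0, steps=5):
        """Move an object along the optical axis and compare looming-based TTC
        with the ground-truth value Z / |dZ/dt|."""
        z = z0
        prev_size = project_size(width, z)
        for _ in range(steps):
            z += vz * DT                                  # object approaches the camera
            size = project_size(width, z)
            loom = (size - prev_size) / (prev_size * DT)  # relative expansion rate, 1/s
            ttc_est = 1.0 / loom                          # looming-based TTC: tau = 1 / loom
            ttc_true = z / abs(vz)                        # ground truth from the 3D state
            print(f"Z = {z:5.1f} m   TTC(est) = {ttc_est:5.2f} s   TTC(true) = {ttc_true:5.2f} s")
            prev_size = size

    simulate_ttc()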
Weekly team meetings, where professors and students reviewed progress, created a collaborative learning environment. Peer teaching methods were used, with the student explaining learned concepts and outlining future steps. This ensured clarity, reinforced understanding, and strengthened the student’s communication skills—an essential aspect of research success. Discussions with professors encouraged divergent thinking, enabling the student to explore multiple approaches. This process ultimately led to the development of a threat detection algorithm using YOLO3D and simulation code to estimate TTC in real-world vehicles.
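Without reproducing the YOLO3D pipeline itself, the core of such a threat detection step can be sketched as follows: assuming the detector reports a bounding box for the target vehicle in each frame, TTC can be estimated from the growth of the box height between frames (tau is approximately h divided by dh/dt). The frame rate, function name, and pixel values below are illustrative assumptions, not results from the paper.

    FPS = 30.0                                # assumed camera frame rate

    def ttc_from_heights(h_prev, h_curr, fps=FPS):
        """Estimate time-to-contact (seconds) from consecutive bounding-box
        heights in pixels. Returns None if the target is not expanding."""
        dh = (h_curr - h_prev) * fps          # height growth rate, pixels/s
        if dh <= 0:
            return None                       # receding or stationary target
        return h_curr / dh                    # tau = h / (dh/dt)

    # Example: a vehicle's box grows from 120 px to 124 px between frames.
    print(ttc_from_heights(120.0, 124.0))     # about 1.03 s to contact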
Instructors can replicate this learning experience by incorporating hands-on activities, such as modifying the object position matrix in the Python simulation. This allows students to observe changes in TTC as an object moves relative to the camera. With minimal guidance and clear code documentation, students can quickly grasp the core principles of optical flow and computer vision.
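One hypothetical form of this exercise, again with illustrative names and numbers rather than the paper's code, is to let students edit the rows of a small position matrix (X, Y, Z per frame) and watch how the printed TTC responds:

    import numpy as np

    DT = 0.1                                   # assumed time between frames, seconds
    positions = np.array([[0.0, 0.0, 20.0],    # students edit these rows:
                          [0.0, 0.0, 18.0],    # a faster approach shortens TTC,
                          [0.0, 0.0, 16.0],    # while receding or constant Z
                          [0.0, 0.0, 14.0]])   # yields no finite TTC

    for prev, curr in zip(positions[:-1], positions[1:]):
        vz = (curr[2] - prev[2]) / DT          # closing speed along the optical axis
        ttc = curr[2] / -vz if vz < 0 else float("inf")
        print(f"Z = {curr[2]:4.1f} m  ->  TTC = {ttc:4.1f} s")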
This scalable learning approach provides students with a strong foundation in applying mathematical concepts to real-world scenarios. As they progress, students can take on more advanced challenges, such as modifying object properties or computing optical flow with libraries such as OpenCV (a short sketch follows), further deepening their understanding of computer vision algorithms. By combining hands-on experience with effective teaching strategies, this approach accelerates learning and prepares students for higher-level opportunities in computer vision research.
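As one example of such a follow-on exercise, dense optical flow can be computed with OpenCV on a pair of synthetic frames; the frame sizes, the shifted texture, and the Farneback parameters below are illustrative choices, not the paper's setup.

    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    noise = (rng.random((120, 160)) * 255).astype(np.uint8)
    frame1 = cv2.GaussianBlur(noise, (9, 9), 3)        # smooth synthetic texture
    frame2 = np.roll(frame1, 6, axis=1)                # same texture shifted 6 px right

    # Farneback dense optical flow: one (dx, dy) vector per pixel
    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx = flow[20:100, 20:140, 0].mean()                # average horizontal flow, central region
    print(f"estimated horizontal shift: {dx:.1f} px (true shift is 6 px)")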
By sharing both technical insights and teaching methodologies, this paper empowers instructors to introduce undergraduates to computer vision, paving the way for impactful contributions to autonomous technologies.

Authors
  1. Tony Malayil, Florida Atlantic University
  2. Dr. Daniel Raviv, Florida Atlantic University
  3. Juan David Yepes, Florida Atlantic University