Teaching evaluations are a standard part of monitoring the effectiveness of university instructors and provide instructors with valuable feedback for improving their own performance and the experience of students. The instruments used to evaluate standard academic courses, however well designed and validated they may be for that task, typically do not serve well to evaluate how advisors of senior design (capstone) project teams perform their duties. Yet the same course evaluation instrument is sometimes applied to capstone project advisors by default, since capstone is typically listed and registered as an academic course.
The expectations of advisors vary significantly from program to program. Some faculty are responsible for advising many different teams; others may advise only one or two. Some programs assign advising roles only to faculty with design expertise or experience; in other programs, advising is expected of all engineering and computer science faculty. Some advisors are expected to spend multiple hours each week meeting with a project team and providing expert guidance; others may be expected only to meet occasionally with the team to monitor and evaluate progress.
There has been considerable research and publication on the administration of capstone courses, the management of team dynamics, and even effective team advising, but very little on appropriate methods or instruments for evaluating the advisor. Given the range of expectations for such advisors, this may not be surprising. But closing this feedback loop for advisors is an important part of ensuring that all capstone students have a high-quality experience.
This paper describes a project in which one engineering college set out to create this feedback loop for capstone advisors. The college runs a capstone program in which all college faculty are expected to advise one or two projects (some multidisciplinary) and to ensure that the design process is being followed, without necessarily being subject-matter experts in the design area themselves. The authors first examined the expectations of capstone advisors solicited from a variety of university programs and collected survey instruments where available. Using these resources, the authors developed a capstone advisor feedback tool tailored to the expectations of their program. Results of this feedback are analyzed and discussed.