This theory and methods paper presents a human factors and systems engineering evaluation framework to support holistic and comprehensive program and course evaluation in engineering education. Program evaluation often focuses on a single factor or a small set of factors to assess program impact, which may oversimplify the context in which the program operates and lead to limited impact or ambiguous evaluation results. A more robust, holistic framework for evaluating programs in engineering higher education is needed.
This paper articulates the integration of the fields of human factors and systems engineering into a program evaluation framework for engineering higher education. This framework draws from human factors and systems engineering [1] and from health care programs that use human factors and systems engineering approaches in their evaluation designs [2, 3]. The evaluation may be conducted at the course, department, division, or college level. Further, while the framework is informed by human factors and systems engineering, it is intended for evaluation across all engineering education disciplines.
The human factors and systems engineering evaluation framework for engineering programs in higher education (referred to as “the framework” in this paper) is specified in three unique ways. First, the student experience is central: the framework describes how individual factors such as motivation, cognitive load, and physiological states (including stress) influence student learning and programmatic outcomes. Individual experience is multifaceted, shaped by personal and social identities, and dependent upon situational context. Placing student experiences at the center of the evaluation analysis captures the influence of programmatic interventions on learning outcomes.
Second, the framework articulates how student experiences interact with system elements of the program as well as the broader engineering education context. The system elements are adapted from Carayon’s [1] conceptualization of a work system, which includes the individual (student) experience; learning and project tasks; course schedules and assignment timelines; organizational structures such as project teams, student study groups, or tutoring; technologies such as lab equipment or specific software; and the learning environment, such as classrooms, laboratories, or field work. These elements interact with one another to shape the individual experience of the student, which in turn influences learning and programmatic outcomes.
Third, the framework aligns with relevant programmatic outputs and outcomes associated with the program’s goals and objectives. That is, the framework does not exist in the abstract; rather, it is tightly integrated with the specific objectives of the program, course, or initiative. The framework guides the selection of programmatic outcomes and bounds the articulation of system elements to ensure the resulting program evaluation design captures the nuance, relevance, and interplay of the elements salient to the program's design.
This framework builds the capacity of faculty, course instructors, and program evaluators to incorporate human factors and systems engineering principles into their course and program evaluation design and planning. This paper consists of a review of the relevant research base, the resulting framework, figures and graphics illustrating the framework and its components, and a description of how to use the framework to plan and design a program evaluation.