This research paper presents validity evidence for a sophomore engineering experiences survey that provides an initial understanding of how sophomores experience their second year of engineering studies. Although the sophomore year is a pivotal transition for engineering students, existing research and practice have largely overlooked this crucial period, and little is known about how sophomore engineering students are thriving (or not) in their college experiences. There is therefore a need to assess these students and understand more about their college experiences so that interventions can be planned and implemented. The primary aim of this research is to establish validity evidence for the scales used in the Sophomore Engineering Experiences Survey (SEES). The survey was adapted from Schreiner’s Sophomore Experiences Survey and guided by Tinto’s framework of student departure to provide a multifaceted understanding of sophomore engineering students’ experiences. Specifically, we ask the following research questions: 1) What validity evidence is needed for each of the five scales in the SEES? 2) What is the validity evidence for each of the five scales in the SEES, and how is it interpreted? 3) To what extent do SEES scores vary among the demographic groups of gender and race/ethnicity for sophomore engineering students?

Surveys were administered each spring semester from 2013 to 2022 to sophomore engineering students (i.e., students at the end of their fourth semester) at a large predominantly White institution (PWI) in the Midwest, yielding a dataset of 1,766 cleaned responses. Based on prior theory and research, we determined whether there was sufficient existing validity evidence for adapting Schreiner’s survey and what additional validity evidence was needed for the sophomore engineering use case. Adopting Kane’s argument-based approach, we gathered evidence to support the validity of the interpretations of the five SEES scales, specifically for reliability and factor structure. Based on this understanding, we then employed, for each scale, either a) Confirmatory Factor Analysis (CFA) only or b) both Exploratory Factor Analysis (EFA) and CFA. For the scales that underwent both EFA and CFA, we performed a stratified random split based on the year the survey was conducted, allowing us to use separate samples for the two analyses. Internal consistency was calculated for all scales in the SEES.

Our findings provided supporting evidence for the reliability and factorial validity of the interpretations of each SEES scale. Specifically, all scales exhibited acceptable Cronbach’s alpha values, supporting the internal consistency of scores for their proposed uses, and we found support for the hypothesized factor structure of all five scales. We also performed group analyses by gender and race/ethnicity, and the observed differences aligned with prior theory and established research.

We conclude that the Sophomore Engineering Experiences Survey has sufficient validity evidence for assessing the experiences of sophomore engineering students and, therefore, can be used to 1) offer empirical insights into the current state of sophomore engineering experiences, 2) identify factors that contribute to positive or negative experiences, 3) further elucidate group differences, and 4) provide actionable guidance for students, advisors, and administrators.
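To make the sampling and reliability steps concrete, the sketch below illustrates, in Python, how a stratified random split by administration year (so EFA and CFA can use separate subsamples) and a Cronbach’s alpha computation for one scale might be carried out. This is a minimal illustration under stated assumptions, not the authors’ analysis code: the file name, the year column, and the item column names are hypothetical placeholders rather than the instrument’s actual items.

```python
# Minimal sketch: stratified split by survey year plus Cronbach's alpha.
# File name and column names ("year", "item_1".."item_5") are assumptions
# for illustration only; they are not the SEES's actual variables.
import pandas as pd
from sklearn.model_selection import train_test_split


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# Load cleaned responses (hypothetical file).
df = pd.read_csv("sees_responses.csv")

# Split the sample in half, stratifying on administration year so both
# halves reflect the same year-by-year composition; one half would feed
# the EFA and the other the CFA.
efa_half, cfa_half = train_test_split(
    df, test_size=0.5, stratify=df["year"], random_state=42
)

# Internal consistency for one (hypothetical) five-item scale.
scale_items = ["item_1", "item_2", "item_3", "item_4", "item_5"]
print(f"Cronbach's alpha (full sample): {cronbach_alpha(df[scale_items]):.2f}")
```

In practice, the factor analyses themselves would be run in dedicated software (e.g., an SEM package), with the stratified halves above supplying the separate EFA and CFA samples.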