2023 ASEE Annual Conference & Exposition

Validity evidence for measures of statistical reasoning and statistical self-efficacy with engineering students

Presented at Research Methodologies – Session 2

This research paper will contribute evidence that two instruments developed to measure statistical learning and statistical confidence retain construct validity with engineering students. This validation study is part of a larger project evaluating the effectiveness of a homework intervention intended to improve learning in an undergraduate engineering statistics course. The two instruments we used to measure effectiveness are not new, but they have not been well studied in engineering-specific courses or populations. In such cases, it is important to continually evaluate the instruments before using them as evidence for claims about changes in pedagogy, so that those claims have validity.

The two instruments we implement are the Statistical Self-Efficacy (SSE) instrument and the Statistical Reasoning Assessment (SRA). The SSE is a self-reported measure of confidence on introductory statistics topics. It contains 14 items to which students respond on a 6-point Likert-like confidence scale. We evaluate the performance of the SSE using confirmatory factor analysis (CFA) to test for the expected structure of a self-reported self-confidence measure and to test invariance across both demographic groups and the length of the course. The SRA is a scored test of introductory statistics topics. It contains 20 multiple-choice items, each with one correct answer. In addition to responses to the items, we also collected self-reported confidence to enable confidence-weighted analyses. We evaluate the performance of the SRA using a confidence-weighted Rasch modeling analysis. Similar in goal to the CFA, Rasch models assess instrument-level fit of responses against a preconceived model of good measurement. Again, we test to ensure that the SRA shows the expected internal structure, and we evaluate differential item functioning (DIF), an analogue of invariance for scored tests.
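To make the planned analyses concrete, the sketches below show, in Python and under stated assumptions, roughly how each evaluation could be carried out; neither is the paper's actual analysis code. The first is a minimal one-factor CFA of the 14 SSE items using the semopy package, with hypothetical column names sse1 through sse14 and a placeholder data file.

```python
# Minimal one-factor CFA sketch for the 14-item SSE, assuming the semopy
# package; column names sse1..sse14 and the file name are placeholders.
import pandas as pd
import semopy

# One row per student, one column per SSE item (hypothetical layout).
df = pd.read_csv("sse_responses.csv")

# A single latent self-efficacy factor loading on all 14 items,
# written in semopy's lavaan-style model syntax.
items = " + ".join(f"sse{i}" for i in range(1, 15))
model = semopy.Model(f"SSE =~ {items}")
model.fit(df)

# Global fit indices (CFI, TLI, RMSEA, etc.) summarize how well the
# one-factor structure reproduces the observed item covariances.
print(semopy.calc_stats(model).T)
```

The second sketch fits a basic, unweighted Rasch model to 0/1-scored responses by joint maximum likelihood using only NumPy; the confidence-weighted procedure described above would require additional machinery, and the response matrix here is simulated rather than real SRA data.

```python
# Simplified Rasch sketch: jointly estimate person ability (theta) and item
# difficulty (b) for binary-scored items via gradient ascent on the
# Rasch log-likelihood. Illustrative only; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(111, 20))  # placeholder: 111 students x 20 items

theta = np.zeros(X.shape[0])  # person abilities
b = np.zeros(X.shape[1])      # item difficulties

for _ in range(200):
    # P(correct) under the Rasch model: logistic in (theta - b).
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    resid = X - p                      # observed minus expected score
    theta += 0.05 * resid.sum(axis=1)  # raise theta for over-performing persons
    b -= 0.05 * resid.sum(axis=0)      # raise difficulty for under-answered items
    b -= b.mean()                      # anchor the scale (mean difficulty = 0)

print("Estimated item difficulties:", np.round(b, 2))
```

Item- and person-level misfit to these models, and differences in item difficulty when the model is refit by subgroup, would correspond to the internal-structure and DIF checks described above.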

Results from analyses of both instruments will be reported in the full paper. Analyses use data collected in Spring 2022 from two sections of an undergraduate course on Biomedical Statistics at a large research-focused engineering university in the United States (enrollment n = 188). Both sections were part of a homework intervention study that randomly assigned each student to one of two homework grading schemes. The two instruments evaluated in this study were given to students in the first and last weeks of the semester (a pre/post design). We received a total of 229 responses to each instrument: 118 from the ‘pre’ distribution and 111 from the ‘post’ distribution. In addition to the instruments, we collected basic demographic data at the end of the pre-test.

Authors
  1. David S. Ancalle, Department of Civil and Environmental Engineering, Kennesaw State University
