2026 ASEE Annual Conference & Exposition

Instrument Validation for Measuring Teachers’ Practice and Trust in AI-Based Educational Technology

Presented at DSAI-Session 5: Student Perceptions, Trust, and AI Use in Coursework

Objective. The aim of this study was to validate a unified survey combining the intelligent-Technological Pedagogical Content Knowledge (i-TPACK) scale and the Trust in AI-based Technology in Education (TRAIT-Ed) scale, and to develop a brief version of the combined 52-item instrument. The survey was administered to K–12 teachers (N = 121) before and after a two-day professional development workshop on artificial intelligence (AI) integration. The unified instrument is designed to capture both teachers’ instructional competence (i-TPACK) and their trust in adopting AI-based educational technologies (TRAIT-Ed), and it examines changes in teachers’ confidence, competence, and trust in using AI-based educational tools. This approach offers a more holistic view of AI adoption in education.

Method. Confirmatory factor analysis (CFA) was conducted to evaluate the internal structure of the unified instrument. The weighted least squares mean and variance adjusted (WLSMV) estimator was used given the ordinal (Likert-type) nature of the data. Model fit was assessed using common fit indices: chi-square (χ²), the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). Skewness and kurtosis values were used to assess univariate normality. Internal consistency was assessed using Cronbach’s alpha (α) and McDonald’s omega (ω) coefficients. Convergent validity was assessed by examining Pearson correlations between composite scores on the i-TPACK and Trust subscales.
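As an illustration of the internal-consistency step, Cronbach’s alpha for a k-item subscale is α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ), where σ²ᵢ is the variance of each item and σ²ₜ is the variance of respondents’ total scores. A minimal sketch in plain Python follows; the item scores are hypothetical and for illustration only, not data from this study:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score columns.

    items: list of k lists, each holding one item's Likert-type
    scores across the same n respondents.
    """
    k = len(items)
    n = len(items[0])
    # Population variance of each item across respondents
    item_vars = [pvariance(col) for col in items]
    # Variance of each respondent's total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = pvariance(totals)
    # alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 3-item subscale, 4 respondents (illustrative only)
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # -> 0.818
```

In practice, CFA with a WLSMV estimator and model-based reliability coefficients such as McDonald’s omega are computed with dedicated SEM software rather than by hand; the sketch above only makes the alpha formula concrete.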

Results. For the original combined instrument, the factor solution consisted of eight latent variables: five i-TPACK factors (Technological Pedagogical Content Knowledge [TPACK], Technological Knowledge [TK], Technological Content Knowledge [TCK], Technological Pedagogical Knowledge [TPK], and Ethics) and three Trust factors (working alongside AI to improve pedagogy [Pedagogy], perceived benefits of AI in an educational setting [Benefit], and reasons for not trusting AI diagnosis [Barrier]). Based on post-survey data, this model demonstrated appropriate fit on all indices except SRMR: χ² = 1930.160 (p < 0.001), df = 1,246, χ²/df = 1.55, RMSEA = 0.079 (90% CI [0.072, 0.086]), SRMR = 0.107, CFI = 0.975, and TLI = 0.974. All item factor loadings (λ) were significant (p < 0.001), supporting convergent validity. Twenty-one items were removed for reasons including low variability, cross-loadings, relatively weak loadings, and multicollinearity. The resulting reduced 31-item version of the unified scale demonstrated good fit: χ² = 518.056 (p < 0.001), df = 413, χ²/df = 1.25, RMSEA = 0.054 (90% CI [0.038, 0.068]), SRMR = 0.052, CFI = 0.993, and TLI = 0.992.
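The relative chi-square values above follow directly from the reported statistics, and a naive RMSEA point estimate can likewise be recovered from χ², df, and N. Note that the plain maximum-likelihood RMSEA formula sketched below does not incorporate the scaling corrections used by WLSMV-based robust estimates, so it will not reproduce the robust RMSEA values reported above; this is a reading aid, not the study’s computation:

```python
import math

def chi_sq_ratio(chi2, df):
    """Relative chi-square (chi2/df); values near 1-3 are often
    taken to suggest acceptable fit."""
    return chi2 / df

def rmsea_naive(chi2, df, n):
    """Naive RMSEA point estimate under the standard ML formula.

    WLSMV-based robust RMSEA applies scaling corrections, so this
    will not match the robust values reported for ordinal models.
    """
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# Values reported in the text (post-survey, N = 121)
print(round(chi_sq_ratio(1930.160, 1246), 2))  # full model -> 1.55
print(round(chi_sq_ratio(518.056, 413), 2))    # reduced model -> 1.25
print(round(rmsea_naive(518.056, 413, 121), 3))  # naive estimate only
```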

Significance of results. Overall, the findings support the validity and reliability of the refined 31-item unified instrument. The final version of the i-TPACK-Trust instrument offers a holistic approach to measuring teachers’ trust and competence for AI integration in K-12 education. Insights from data collected with the final instrument can guide the design of professional development programs for teachers and inform AI-based educational technology designers seeking to improve practice and trust in AI-based EdTech.

Keywords: professional development, K-12 teachers, AI-based EdTech, confirmatory factor analysis

Authors
  1. Margaret Wacera Gatongi, Virginia Commonwealth University
  2. Moe Debbagh Greene, Virginia Commonwealth University
  3. Radhika Barua, Virginia Commonwealth University
  4. John E. Fife, Virginia Commonwealth University
  5. Ms. Elizabeth A. P. Denson, Virginia Commonwealth University
Note

The full paper will be available to logged-in, registered conference attendees once the conference starts on June 21, 2026, and to all visitors after the conference ends on June 24, 2026.