Assessing undergraduate and graduate students' problem-solving is essential in engineering education but requires considerable human effort. Conventional automated assessment systems integrated into learning management systems (LMS; e.g., Canvas, Moodle, Blackboard Learn) predominantly emphasize the correctness of final answers, overlooking the underlying reasoning and solution processes; there remains a critical gap in artificial intelligence (AI) tools capable of analyzing the problem-solving process itself, which reflects the cognitive sophistication exercised during solution generation. This oversight can produce inaccurate evaluations: a correct answer may arise from flawed logic, while a well-structured approach may be undermined by minor computational errors.
To address this gap, this study proposes a hybrid human-machine dual assessment solution for LMS platforms and performs a comparative analysis of supervised machine learning algorithms (Logistic Regression, Naïve Bayes, K-Nearest Neighbors, Decision Tree, and Random Forest, among others) to predict student performance from their problem-solving processes in the context of electrical circuits.
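As a minimal sketch of such a comparison, assuming a scikit-learn implementation (not the study's actual pipeline), the candidate algorithms can be benchmarked side by side; the feature matrix X and labels y below are synthetic stand-ins for the rubric-scored solutions and performance categories described next, not the collected data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.integers(0, 6, size=(363, 5))   # illustrative: five rubric scores per solution
    y = rng.integers(0, 3, size=363)        # 0=Incorrect, 1=Partially Correct, 2=Correct

    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Naive Bayes": GaussianNB(),
        "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "Random Forest": RandomForestClassifier(random_state=0),
    }

    for name, model in models.items():
        # Mean 5-fold cross-validated macro-F1 for each candidate algorithm
        scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
        print(f"{name}: {scores.mean():.3f}")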
A dataset comprising 363 solution events was collected from the course Fundamental Electronics for Engineers at a mid-sized land-grant university in the Western United States. Because the dataset was imbalanced across problem types and performance categories, an over-sampling method (SMOTEN) was applied to balance it. Each solution was qualitatively evaluated by two independent researchers using five process-oriented constructs adapted from Docktor et al.'s (2016) rubric: Useful Description, Engineering Approach, Specific Application of Engineering Principles, Mathematical Procedures, and Logical Progression.
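A brief sketch of that balancing step, assuming the imbalanced-learn implementation of SMOTEN (designed for nominal/categorical features); the rubric features and skewed class proportions below are illustrative placeholders, not the study's data.

    from collections import Counter
    import numpy as np
    from imblearn.over_sampling import SMOTEN

    rng = np.random.default_rng(0)
    X = rng.integers(0, 6, size=(363, 5))   # illustrative categorical rubric ratings
    y = rng.choice([0, 1, 2], size=363, p=[0.6, 0.25, 0.15])  # imbalanced categories

    # SMOTEN synthesizes minority-class samples so all categories are equally represented
    X_res, y_res = SMOTEN(random_state=0).fit_resample(X, y)
    print("before:", Counter(y.tolist()), "after:", Counter(y_res.tolist()))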
Solutions were classified into three performance categories (Correct, Partially Correct, and Incorrect) based on expert comparisons. The machine learning models were trained and evaluated using standard performance metrics, including accuracy, precision, recall, F1-score, specificity, AUROC, and mean cross-validation scores. The findings provide evidence of AI's utility in assessing the cognitive processes associated with engineering problem-solving, with implications across STEM disciplines and for workforce management and development.
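For concreteness, the listed metrics can be computed with scikit-learn along the following lines; specificity, which scikit-learn does not expose directly, is derived from the confusion matrix. The predictions and probabilities here are fabricated placeholders for a three-class problem, not results from the study.

    import numpy as np
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score, confusion_matrix)

    # Placeholder model outputs (0=Incorrect, 1=Partially Correct, 2=Correct)
    y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
    y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])
    y_proba = np.full((len(y_pred), 3), 0.1)
    y_proba[np.arange(len(y_pred)), y_pred] = 0.8   # each row sums to 1

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred, average="macro"))
    print("recall   :", recall_score(y_true, y_pred, average="macro"))
    print("F1-score :", f1_score(y_true, y_pred, average="macro"))
    print("AUROC    :", roc_auc_score(y_true, y_proba, multi_class="ovr"))

    cm = confusion_matrix(y_true, y_pred)
    for k in range(cm.shape[0]):
        # Specificity for class k: true negatives / (true negatives + false positives)
        tn = cm.sum() - cm[k, :].sum() - cm[:, k].sum() + cm[k, k]
        fp = cm[:, k].sum() - cm[k, k]
        print(f"specificity (class {k}):", tn / (tn + fp))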
https://orcid.org/0000-0002-8060-7384
Ohio State University, Columbus, Ohio
[biography]