A primary goal of our DUE-funded project is to examine the quality of the questions that students enrolled in a statics course ask about course content. We have developed a classroom-based intervention that provides statics students with training in the utility of question-asking and with frequent opportunities to submit written questions about their current confusions in the course. One goal of our project is to evaluate whether and how the nature and quality of student questions change over the course of the semester. The question-quality taxonomy described here provides a means of evaluating these changes.
Our original taxonomy was based on one developed for use with physics students (Harper et al., 2003). That taxonomy was approximately hierarchical: higher-numbered categories roughly represented more metacognitively sophisticated questions. Previously, we shared our process for creating, and subsequently modifying, the taxonomy for use in categorizing the quality of questions students ask about statics (reference to author work removed for blind review). Although our modified taxonomy increased interrater reliability between faculty raters classifying student questions, a challenge remained for questions that could plausibly fall into more than one category. Consequently, we have considered the utility of a categorization system designed with the expectation that questions will fall into more than one category. This approach alleviates some of the challenges associated with strictly sorting questions by the type of knowledge required to answer them, which becomes difficult when answers require multiple or overlapping knowledge types. It also allows us to consider additional question features (e.g., closed- or open-ended, correct or incorrect use of statics vocabulary) that support a richer evaluation of question quality.
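For readers who find a concrete representation helpful, the sketch below contrasts a single-category code with a multi-dimensional code for the same question. It is a minimal illustration only: the dimension names, example values, and question identifier are hypothetical and are not drawn from our actual taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class SingleCategoryCode:
    """Original approach: each question receives exactly one category."""
    question_id: str
    category: int  # one hierarchical category label

@dataclass
class MultiDimensionalCode:
    """Revised approach: each question is coded on several dimensions,
    and a dimension may hold more than one value."""
    question_id: str
    knowledge_types: set = field(default_factory=set)  # overlapping knowledge types allowed
    open_ended: bool = False                            # closed- vs. open-ended
    vocabulary_correct: bool = True                     # statics terms used correctly

# Hypothetical example, for illustration only
old_style = SingleCategoryCode(question_id="Q17", category=4)
new_style = MultiDimensionalCode(
    question_id="Q17",
    knowledge_types={"procedural", "conceptual"},  # falls into more than one knowledge type
    open_ended=True,
    vocabulary_correct=False,
)
```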
In this paper, we share our progress on developing a revised taxonomy that captures multiple dimensions of question quality. Specifically, we describe our process of creating the multi-dimensional taxonomy, in which some dimensions are predefined based on our prior work on question categorization, while others are explored through an inductive coding approach that allows new themes to emerge from the student questions. We present the results of using the new taxonomy to categorize a set of student questions, and we compare these results with those obtained using our previous taxonomy to illustrate differences between the two approaches.