2023 ASEE Annual Conference & Exposition

A tag-based framework for collecting, processing, and visualizing student learning outcomes

Presented at Framework Studies

The Mechanical Engineering faculty at a public four-year, comprehensive university in the Northeast region is developing and piloting a tag-based framework to systematically identify, collect, process, and visualize large volumes of student learning outcomes data for course- and program-level outcomes assessments. Student learning outcome identifier tags are applied by course instructors to link the questions on assignments, quizzes, projects, and exams to their course outcomes and the overall program outcomes. The goal of this pilot effort is to inform instruction improvement, course design, and program delivery.
To support program-level outcomes assessment, the department has developed pilot rubrics aligned with ABET's Student Outcomes 1-7. Each rubric performance indicator has four performance levels, and each performance level is assigned an identifier tag. For example, ABET Student Outcome 1 has six performance indicators ([ABET1a] through [ABET1f]), and each performance indicator is divided into four performance levels that together form the associated tag set (e.g., [ABET1aL1], [ABET1aL2], [ABET1aL3], and [ABET1aL4], where L1 denotes the novice level and L4 the expert level). In total, there are over 50 program-level outcomes performance indicators with over 200 associated tag identifiers. The framework also allows instructors to introduce course-specific content and skills tags to identify course outcomes, and each course in the pilot has a defined tag syntax associated with a course content and skills mapping. In the initial pilot effort, Gradescope is used to apply program- and course-level tags as relevant. In the full paper, we will provide examples of the program-level outcomes tags as well as their implementation in the Gradescope tool.
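
As an illustration of the tag syntax described above, the short Python sketch below parses a program-level tag such as [ABET1aL3] into its outcome number, performance indicator letter, and performance level. This is a minimal sketch assuming the [ABET<outcome><indicator>L<level>] naming convention; the OutcomeTag structure and parse_tag function are hypothetical helpers for illustration and are not part of the framework's implementation.

import re
from typing import NamedTuple, Optional

# Hypothetical parsed form of a program-level outcome tag such as [ABET1aL3].
class OutcomeTag(NamedTuple):
    outcome: int    # ABET student outcome number (1-7)
    indicator: str  # performance indicator letter (a, b, c, ...)
    level: int      # performance level (1 = novice ... 4 = expert)

# Regex reflecting the tag syntax described above: [ABET<outcome><indicator>L<level>]
TAG_PATTERN = re.compile(r"\[ABET(?P<outcome>[1-7])(?P<indicator>[a-z])L(?P<level>[1-4])\]")

def parse_tag(text: str) -> Optional[OutcomeTag]:
    """Parse a single program-level outcome tag; return None if it does not match."""
    match = TAG_PATTERN.fullmatch(text.strip())
    if match is None:
        return None
    return OutcomeTag(
        outcome=int(match.group("outcome")),
        indicator=match.group("indicator"),
        level=int(match.group("level")),
    )

# Example: parse_tag("[ABET1aL3]") -> OutcomeTag(outcome=1, indicator='a', level=3)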

The tag data collected from grading a given assessment is de-identified, cleaned, and entered into a SQL server database. The data is then processed in a Python-based visualization platform (V-TAG), which uses Python plotting libraries to create course- and program-level interactive visualizations that inform instructors of students' formative and summative performance in specific skill areas. The V-TAG plots (e.g., heatmap, wind rose, sunburst, and polar bar charts) present data to facilitate data-driven decision-making. For example, instructors can interrogate the aggregate tag assessment data to tailor their teaching and to redesign class activities in future offerings to increase student learning. Similarly, the program-level student outcomes tags can be interrogated to understand formative and summative performance across the curriculum and to identify curriculum redesign opportunities. Ultimately, the ability to collect, process, and visualize larger volumes of student learning outcomes data enables both individual courses and the program to use higher-volume assessment data to drive continuous improvement decisions.
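
To illustrate the kind of aggregation that underlies a heatmap view like those described above, the sketch below uses pandas and matplotlib to compute the mean performance level per indicator per assessment from a flattened table of tag records and render it as a heatmap. The record layout, column names, and sample values are assumptions for illustration only and do not reflect the actual V-TAG schema or any real student data.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical flattened export of tag records: one row per tagged question result.
# Column names and values are illustrative assumptions, not the V-TAG schema.
records = pd.DataFrame({
    "indicator": ["ABET1a", "ABET1a", "ABET1b", "ABET1b", "ABET1c", "ABET1c"],
    "assessment": ["Exam 1", "Exam 2", "Exam 1", "Exam 2", "Exam 1", "Exam 2"],
    "level": [2, 3, 3, 4, 1, 2],  # performance level earned (1 = novice ... 4 = expert)
})

# Aggregate: mean performance level per indicator per assessment.
pivot = records.pivot_table(index="indicator", columns="assessment",
                            values="level", aggfunc="mean")

# Simple heatmap of the aggregated performance levels.
fig, ax = plt.subplots()
im = ax.imshow(pivot.values, cmap="viridis", vmin=1, vmax=4)
ax.set_xticks(range(len(pivot.columns)))
ax.set_xticklabels(pivot.columns)
ax.set_yticks(range(len(pivot.index)))
ax.set_yticklabels(pivot.index)
fig.colorbar(im, ax=ax, label="Mean performance level")
ax.set_title("Outcome performance by assessment (illustrative data)")
plt.show()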

Authors
  1. Tonghui Xu, University of Massachusetts Lowell
  2. Dr. Melissa Nemon, University of Massachusetts Lowell
  3. John Hunter Mack (ORCID: http://orcid.org/0000-0002-5455-8611), University of Massachusetts Lowell
  4. Dr. David J. Willis, University of Massachusetts Lowell
