State-of-the-art curriculum development is typically carried out by a committee of two to four faculty members, and is often undertaken by the assigned course instructor alone. This small number of participants can yield an out-of-date and insufficient curriculum for students entering the industry workforce (Thompson & Purdy, 2009; Nakayama, 2012; Gupta, 2016). Crowdsourcing has been used to gather broader input from domain experts, including faculty and industry professionals (Woodward et al., 2013; Satterfield et al., 2010; Nakayama, 2012). However, these efforts can produce large volumes of input from crowd workers, requiring additional time for the curriculum owner or committee to sort through all inputs, organize them into categories, identify similarities, and determine which topics to include in the curriculum based on the crowd's consensus [citation anonymized]. In a previous experiment we found that 1) crowdsourcing can be effective at gathering inputs that either confirm or reject existing topics and that suggest additional topics for inclusion; 2) continuing, dynamic expert crowdsourcing helps build consensus on newly suggested topics for the curriculum; and 3) manual aggregation of crowdsourced inputs is an inefficient process [citation anonymized].

This paper proposes integrating a consensus-building function into a crowdsourcing platform to dynamically and automatically validate and integrate both structured inputs (topics with confirmed representation) and semi-structured inputs (newly suggested curriculum topics) from the expert crowd when developing a course curriculum. Topics are organized in an ontology with a hierarchical "subtopic of" relationship, and each topic carries a consensus value representing the agreement among crowd participants about its inclusion (and its representation). When an expert crowd worker provides feedback on an existing topic, that topic's consensus value is updated, increasing or decreasing the agreement. To ensure confidence in the consensus value, a minimum number of inputs, named the threshold confidence, is required; it is established based on the number of available workers and the expected working time. A topic may be in one of three states: under review (after it is proposed and before consensus is reached), accepted (if the consensus value is above the acceptance threshold with the required confidence), or rejected (if the consensus value is below the rejection threshold with the required confidence). While the crowd process focuses on topics under review, consensus values continue to be updated, so both accepted and rejected topics can be further refined. The process continues until consensus is reached on all topics.

In this paper we describe an experiment using a dynamic crowdsourcing platform with the consensus-building function for a new cybersecurity curriculum and compare it with a previous experiment in which we used a typical crowdsourcing platform for another cybersecurity curriculum. The current curriculum focuses on managing network and data security, while the previous curriculum focused on managing information security with vendors and partners.
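To make the consensus mechanism concrete, the sketch below models a topic's lifecycle as described above: each topic sits in an ontology via a "subtopic of" link, accumulates crowd feedback, and transitions between the under-review, accepted, and rejected states once the threshold confidence is met. This is a minimal illustration under our own assumptions; the class and attribute names, the example threshold values, and the simple agreement-ratio consensus measure are chosen for clarity and do not represent the platform's actual implementation.

```python
from enum import Enum


class TopicState(Enum):
    UNDER_REVIEW = "under review"
    ACCEPTED = "accepted"
    REJECTED = "rejected"


class Topic:
    """A curriculum topic in the ontology, linked to its parent topic
    through the hierarchical "subtopic of" relationship."""

    def __init__(self, name, parent=None,
                 accept_threshold=0.7,       # illustrative acceptance threshold
                 reject_threshold=0.3,       # illustrative rejection threshold
                 threshold_confidence=10):   # minimum inputs before trusting consensus
        self.name = name
        self.parent = parent                 # "subtopic of" relationship
        self.agree = 0                       # inputs supporting inclusion
        self.total = 0                       # all inputs received so far
        self.accept_threshold = accept_threshold
        self.reject_threshold = reject_threshold
        self.threshold_confidence = threshold_confidence

    @property
    def consensus(self):
        """Consensus value: fraction of the crowd agreeing the topic
        belongs in the curriculum (an assumed measure for illustration)."""
        return self.agree / self.total if self.total else 0.0

    def add_feedback(self, agrees):
        """Record one expert worker's input. The consensus value is updated
        on every input, so accepted and rejected topics can still be
        refined by later feedback."""
        self.total += 1
        if agrees:
            self.agree += 1

    @property
    def state(self):
        if self.total < self.threshold_confidence:
            return TopicState.UNDER_REVIEW   # not enough inputs for confidence
        if self.consensus >= self.accept_threshold:
            return TopicState.ACCEPTED
        if self.consensus <= self.reject_threshold:
            return TopicState.REJECTED
        return TopicState.UNDER_REVIEW       # between thresholds: keep reviewing
```

Under these assumed values, for example, a topic remains under review until ten experts have weighed in, and only then moves to accepted or rejected according to its consensus value; the crowd process would iterate over all topics still under review until every topic has reached consensus.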