By Phil Hill
Update (3/10): Patterns and descriptions have been updated based on feedback in a new post.
As discussed in my last post, the focus on “completion rates” in MOOCs is somewhat misplaced, as open education is not simply an extension of traditional education. As several others have noted, not every student is attempting to complete a course, and in fact different students have different goals while participating in the same open course. This holds true for both cMOOCs and xMOOCs.
Does this mean that we should throw out the completion rate data? No. As Katy Jordan described quite well in the comments:
A lot of people have asked whether completion rates are the right way of framing the success of a MOOC; I agree that there is much more to the potential positive impacts of MOOCs for students than completion rate but, at the moment, completion rate is what the providers are measuring most consistently.
In my mind, we should augment the models we use to evaluate MOOCs rather than throw the baby out with the bathwater. The challenge, therefore, is to move beyond the simplistic view of one type of student with one type of goal (course completion), and find patterns of student behavior that will give additional insight into the different goals and therefore different measures we should have in evaluating whether MOOCs are effective.
Study Based on Change11
In 2011-2012, as part of the Change11 course (a connectivist course, or cMOOC, facilitated by George Siemens, Dave Cormier and Stephen Downes), the Scottish group Caledonian Academy was given access to conduct surveys and follow-up interviews as part of a research study to help understand the student population.
The first component of the study was to ask participants to complete an SRL profile instrument* we had developed for the study. The instrument was adapted from a number of pre-existing SRL self-report instruments (full details, and a copy of the instrument are here), most notably the Motivated Strategies for Learning Questionnaire (Pintrich et al 1991) and a more recent Self directed Learning Orientation scale developed by Raemdonck (Gijbels et al , 2010). [snip]
We saw different patterns of engagement. In addition to an expected cluster of lurkers who purposefully did not engage with other course participants, we identified two further groups: one group of passive participants, who expected ‘to be taught’, and viewed the course as a source of information, attempting to capture all the ideas being exchanged within the Change 11 community; and a final group, more active participants, who set their own goals, established connections with other learners and linked these connections with their existing personal learning network. [emphasis added]
Based on various first-hand descriptions of MOOCs over the past year, I would propose a fourth pattern – the Drop-In: students who direct most of their active participation to a particular topic within the course or a particular discussion thread.
The Four Student Archetypes
This leaves us with four student archetypes to consider (note that these are emerging patterns based on partial information, and these descriptions may need to change as we get more data):
- Lurkers – This is the majority of students within xMOOCs, where people enroll but just observe or sample a few items at the most. Many of these students do not even get beyond registering for the MOOC or maybe watching part of a video.
- Passive Participants – These are students who most closely align with traditional education students, viewing a course as content to consume. These students typically watch videos, perhaps take quizzes, but tend to not participate in activities or class discussions.
- Active Participants – These are the students who fully intend to participate in the MOOC, including consuming content, taking quizzes and exams, taking part in activities such as writing assignments and peer grading, and actively participating in discussions via discussion forums, blogs, Twitter, Google+, or other forms of social media.
- Drop-Ins – These are students who become partially or fully active participants for a select topic within the course, but do not attempt to complete the entire course.
These are not static patterns, in that students may move from one archetype to another. Lurkers may decide that they should spend more time in the course and become passive participants. Passive participants may become more engaged and become active participants over time. Of course, any of these students may also drop out and leave the course.
These student archetypes generally have different goals. Lurkers may not have specific goals beyond finding out what the course is about or doing a “drive-by” evaluation of whether the course merits more time and attention. Passive participants, as discovered in the Change11 MOOC, may desire to just experience the MOOC platform or course design.
One problem with our study which we hadn’t anticipated (but perhaps should have) was that individual participants might have quite different (conflicting?) reasons for signing up. While some participants signed up for the content of the course, others (the majority) were primarily or exclusively interested in experiencing the Change 11 MOOC as a learning environment, often because they wanted to implement some of the features of a MOOC in their own practice.
I should also note that while the student archetypes are somewhat based on the general goals for taking a course, there are also important, but largely unexplored, questions on why students leave a MOOC. As Laura Gibbs described in several Google+ discussions, leaving a course because you got what you wanted is very different than leaving due to abusive discussion forums.
Whither Completion Rates
How would our understanding change if we understood the different student archetypes and goals for enrolling in MOOCs? I believe we would end up with better feedback to improve the MOOC models, and a more realistic discussion about the impact of MOOCs. Katy’s data curation and visualization is based on the data available, which is invaluable, but I think her linkage to sources might give us the insight needed to build on the prevailing model and more closely understand student goal completion.
Completion rate should really be measured for active participants. For those students who planned to complete the course and participate in all or most activities, how many ended up achieving that goal and completing the course?
Let’s consider Internet History, Technology and Security taught by Charles Severance in 2012. By traditional measures (as captured by Katy) there were roughly 46k students enrolled with 4.6k students who received a certificate, leading to a completion rate of 10%.
But look a little closer at the data using Katy’s links:
There were 11.6k students who completed the first week of activities – a rough measure of active participants. Using the four student archetypes, the completion rate was closer to 40%. Likely the rate was even higher, as the 11.6k figure included Drop-Ins who did not intend to complete the full course. But for now, we don’t have the data to accurately separate out this group.
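The arithmetic behind these two rates can be sketched in a few lines of Python. The figures are the approximate, rounded values cited above, not exact enrollment counts:

```python
# Rough completion-rate comparison for Internet History, Technology and Security,
# using the approximate figures cited in the post.
enrolled = 46_000   # total students who registered
active = 11_600     # students who completed the first week of activities
certified = 4_600   # students who received a certificate

traditional_rate = certified / enrolled  # denominator: every registrant
active_rate = certified / active         # denominator: active participants only

print(f"Traditional completion rate:  {traditional_rate:.0%}")
print(f"Active-participant rate:      {active_rate:.0%}")
```

The only difference between the two measures is the denominator: dropping lurkers from the count moves the rate from roughly 10% to roughly 40%.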
To me, the measure of 11.6k students who actively participated, with 40% completing the course, is more meaningful than the 46k students enrolled with a 10% completion rate. Clearly the majority of the 46k never intended to participate in the whole course. In a traditional face-to-face course, would we include all students who checked out a course syllabus, or students auditing the course, in the completion rate measurements? No, we would only count students who indicate, by staying enrolled through the add/drop period, that they intend to fully take the course.
I’d appreciate feedback on these patterns – feel free to comment below or in the Google+ post.