Last week the Brookings Institution published an article in its Economic Studies: Evidence Speaks series titled “Promises and pitfalls in online learning,” by Eric Bettinger and Susanna Loeb of Stanford University. The article is a condensed summary of a lengthier draft research paper, “Changing Distributions: How Online College Classes Alter Student and Professor Performance,” that was supported by the Stanford Center for Education and Policy Analysis (CEPA) and includes two additional authors, Lindsay Fox (Stanford) and Eric Taylor (Harvard). The Brookings article was first reported by Inside Higher Ed earlier this week in a piece called “Is Online Ed Missing the Mark?”
I note the cascading levels of publication to draw attention to the occasional pitfalls of relying on reporting and journal summaries and the benefits of reading the original research. It reminds me a little bit of the time I spent working for a school board member in the L.A. Unified School District. An article would sometimes appear in a local paper, and what was reported would have little to do with the reality of what was actually being covered, or at least miss a lot of the complexity (that said, there was excellent reporting as well).
In this case, the necessarily brief IHE piece and the selective treatment in the Brookings summary do a disservice to the nuanced, qualified, even-handed, and collaborative approach of the authors in the full-length report.
For starters, the IHE and Brookings articles (the latter written by two of the full report's authors) create the impression that the study and its results apply to online education in general and across all higher ed sectors. The authors of the full-length report, by contrast, are careful to point out that the results are specific to the institution they studied (DeVry University) and probably extendable to other for-profit institutions with similar demographics, but no further. They are explicit in positioning and defining the limits of their findings:
“While we focus on one large university in the for-profit sector, our results are likely generalizable beyond DeVry University...DeVry has a similar profile to other for-profit colleges, though collectively for-profit colleges differ from traditional sectors, including two-year colleges, given their focus on non-traditional students and African-American students.”
That said, the authors of the full report do at times make some broad statements that, taken in isolation without the context of the larger report, can lead to exaggerated claims about the state of online education more generally.
Second, the nature of the relationship with DeVry University is not covered in either the original IHE article or the Brookings summary. One gets the sense that DeVry is being singled out for criticism while that is not the case at all. The full report makes clear that the research was a collaborative effort with DeVry University by thanking them in a footnote on the title page:
“We greatly appreciate the support of DeVry University, especially Aaron Rogers, Ryan Green, and Earl Frischkorn.”
IHE, to its credit, published some clarifying remarks from the authors on Wednesday that further underscore the nature of the collaboration with DeVry University:
“Additionally, our study was part of a continuing improvement process that DeVry employs. They are constantly monitoring the quality of their courses and ways to improve student engagement. The data that they shared is at least five years old, and DeVry took the results of our study to modify and to improve the quality of their course offerings.
They especially worked to develop strategies to help all students succeed in these courses. They continue to monitor the quality of their courses. We know of no higher educational institutions in the public or private sectors that have dedicated the same amount of attention to introspectively examining the quality of their offerings.
By contrast, we applaud DeVry for their use of science and evidence to improve and to design their course offerings.”
The inclusion of this additional information in the earlier IHE article and the Brookings summary would have gone a long way toward changing the tenor, if not the substance, of the findings.
Third, whereas the initial IHE and Brookings publications are overwhelmingly confident in their exposition of findings, the authors in the full report are a little less so. While fully standing behind their findings, they recognize that there are complicating factors and issues beyond their control that could raise important questions:
“While the results suggest that students taking a course online do not perform as well as they would have taking the same course in a conventional in-person class, our results have limitations. First, a full welfare analysis of online college courses is not possible. Notably, online offerings make college courses available to individuals who otherwise would not have access. Our estimates are based on students who could take a course in-person or online, and we cannot quantify the extent of this access expansion in our setting. Second, we study an approach to online courses that is common today, but online approaches are developing rapidly. Further development and innovation could alter the results.”
An additional note: the data used for this research cover the period from Spring 2009 to Fall 2013.
Overall, the authors of the study have done a great job working with the comprehensive DeVry University data set to identify the diversity of impacts online courses have on different slices of the student population when compared with similar students taking essentially the same courses in face-to-face settings. The basic finding is the following (which, I know, sounds like a generalizable conclusion but is specific to DeVry and perhaps other similar for-profits):
“Our analyses provide evidence that students in online courses perform substantially worse than students in traditional in-person courses, and these findings are robust across a number of specifications. We also find that the variance of student outcomes increases, driven at least in part, by differentially larger negative effects of online course taking for students with lower prior GPA.”
More specifically, the authors find that on average, at DeVry University, taking a course online reduces a student's grade by 0.44 points on a traditional four-point grading scale versus taking the identical class in person. Moreover, the effect is more pronounced for lower-performing students and has negative knock-on effects in future courses.
The obvious questions now are why this is the case and what can be done to improve outcomes for students taking courses online. Thanks to the collaboration with the Stanford researchers, DeVry University is in a better position to tackle these questions.
All institutions, and online education in general, can benefit from the rigorous analysis and transparency brought to bear in this recent work — especially if one takes the time to get past the headlines.
N.B. Just prior to publishing this post, Cali Morrison noted in a tweet the October 2015 date on the working draft of the study we've been discussing. I had missed that, having assumed we were looking at current research, and it seems relevant to mention here. Without speculating on reasons for the lag (is it a typo?), it does raise questions about the timing of publication. If any of the authors or Brookings can clarify or shed light, that would be helpful.