By Phil Hill
Recently I pointed out that the widely-quoted Babson survey on online learning estimates 7.1 million US higher ed students taking at least one online course, while the new IPEDS data puts the number at 5.5 million. After looking deeper at the data, it appears that the difference at the institutional level (whether or not an institution offers any online courses) is even greater than the difference in student counts. This institutional profile is important, as the Babson report (p. 13) noted that institutions offering no online courses gave very different answers than others, a theme that ran through much of the report: [emphasis added]
The results for 2013 represent a marked change from the pattern of responses observed in previous years. In the past, all institutions have consistently shown a similar pattern of change over time. Different groups of institutions typically reported the same direction of change – if one group noted an improvement on a particular index, all other groups would show a similar degree of improvement. The overall level of agreement with a particular statement might vary among different groups, but the pattern of change over time would be similar. This is not the case for 2013.
As noted above, there was a year-to-year change in the overall pattern of opinions on the strategic importance of online education, and on the relative learning outcomes of online instruction, as compared to face-to-face instruction. In both cases, the historic pattern of continued improvement took a step back for 2013, and all of the changes are accounted for in a single group of institutions: those that do not have any online offerings.
Institutions with no online offerings represent a small minority of higher education – how are they different?
Let’s look at the IPEDS data on institutions versus the Babson data, first by institutional control. I took the data on page 32 of the Babson report and recreated the graph, then ran the same analysis using IPEDS data. (NOTE: these interactive charts do not come through on RSS feeds, so you will probably have to click through to the post to see them.)
The Babson report also evaluates these institutions by basic Carnegie classification and institutional enrollment. I did not evaluate the former (too messy), but I did run the same analysis by enrollment.
- While I have been able to recreate the universe of 4,726 institutions referenced on page 29 of the report, I cannot match the total enrollment figure. The Babson data indicates 21.3 million students compared to 20.6 million in IPEDS. I don’t believe this 3% difference is particularly meaningful.
- While the Babson data refers to a universe of 4,726 institutions, the data provided is based on 4,332 and 4,269 institutions, primarily because it includes far fewer for-profit institutions. There is no explanation for these different numbers, but keep in mind that Babson’s data comes from a survey that extrapolates to estimate the universe.
- The big difference, which should be obvious, is that the Babson data shows less than half as many institutions with no online offerings as the IPEDS data does – 15% compared to 31%.
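As a quick sanity check, the arithmetic behind the two headline comparisons above can be sketched in a few lines of Python. All figures come straight from the text; nothing here is new data, and the percentages are the rounded shares reported in the two sources:

```python
# Recomputing the headline comparisons in this post from the quoted figures.

babson_enrollment = 21_300_000  # total enrollment per the Babson report
ipeds_enrollment = 20_600_000   # total enrollment per IPEDS

# Relative difference in total enrollment (the ~3% noted above)
enroll_diff = (babson_enrollment - ipeds_enrollment) / ipeds_enrollment
print(f"Enrollment difference: {enroll_diff:.1%}")

# Share of institutions with no online offerings in each source
babson_no_online_share = 0.15  # Babson: 15% of institutions
ipeds_no_online_share = 0.31   # IPEDS: 31% of institutions
ratio = ipeds_no_online_share / babson_no_online_share
print(f"IPEDS reports {ratio:.1f}x the share of no-online institutions")
```

The second ratio is the basis for the "twice as many institutions" claim below: 31% is slightly more than double 15%.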
Not only does IPEDS indicate that twice as many institutions have no online courses as previously reported, but I also question the finding that “institutions with no online offerings represent a small minority of higher education”. 31% is not a small minority.
I am not questioning the research methods of the Babson Survey Research Group nor the value of their annual survey. It is just that we now have a new source of data that must be accounted for. While I do not think the IPEDS data is flawless, it is better than the survey-based data used by Babson. Jeff Seaman, one of the two Babson researchers, said as much in this Chronicle article:
So which number is correct?
The lower one, probably. The Education Department data are more likely to be accurate, “given that they are working from the universe of all schools,” says Mr. Seaman by email. [snip]
The reporting requirements for the department “are such that I would always trust their numbers over ours,” he wrote. “However, I still believe that the trends we have reported for the past 11 years are very much real.”
I hope the analysis I’m doing based on IPEDS data doesn’t come off as nitpicking or attacking the Babson survey. The annual survey has been a very useful source of information, and the trend data as well as attitudinal data cannot be replicated by IPEDS. These Babson reports have enormous influence on the higher education community, being the most widely-quoted source on just how prevalent online education is in the US. It is very important to adjust our thinking based on new information and to be transparent with research data.