My latest Chronicle column is up. It analyzes the results of the SRI Education study of the Gates Foundation adaptive learning grantees, some of which we’ve covered in our e-Literate TV case studies. If you’re looking for evidence that adaptive learning is going to deliver on the promise of a robot tutor in the sky, you won’t find it there. But it’s easy to flatten that result into “adaptive learning doesn’t work.” I don’t believe that the SRI study shows any such thing.
First of all, what is our standard of proof? A good half of my column is devoted to the methodological challenges of doing big meta-studies like this one. It’s really hard to (ethically) control the variables across multiple classrooms well enough to get a clean result. SRI had to throw out most of the data they had for some measures.
But equally importantly, there’s just a lot that meta-studies can’t get at precisely because the goals of each implementation are different. For example, one of the goals for implementing OLI at UC Davis was getting students more prepared to engage in higher-level critical thinking in class discussion. Here are some Davis faculty talking about their course design goals:
I’m not sure how one would empirically measure such a result, nor can I see how to incorporate it into a meta-study that also includes, for example, Essex County College’s developmental math course.
While the Gates Foundation should get credit for bringing in a credible third-party evaluator to review the results of the grants, the design parameters for this particular study do not appear to be as useful as they might have been. That said, the larger point is that it’s really hard to do educational research well. Rather than using these studies as Rorschach tests, we should be taking the time to improve our educational research literacy and to better understand what each study can and cannot prove.