This is almost old news now, but we just haven't been able to dig into it yet. As part of its Adaptive Learning Market Acceleration Program (ALMAP), the Gates Foundation funded SRI to study the results of the grants after two years. I hope to finally clear some time to parse through the report this week, but two high-level points jump out at me at first glance (neither of which is a big surprise):
- There are no conclusive wins here. This is not a robot-tutor-in-the-sky moment. A few programs did well here and there, and a handful produced promising incremental gains. But this is not a report that screams, "Wow, adaptive courseware works!" The most you can say is that adaptive learning looks like it could be another arrow in the quiver that helps out in some situations—like developmental math courses at two-year colleges, for example.
- Large-scale educational research is incredibly hard and may actually be impossible to do rigorously for certain kinds of questions. I'll probably get into this more when I'm ready to really write about the report, but one reason the conclusions are murky is that there are so many variables that really matter in each class—not just in each course subject, not just in each course at one university, but even in each section of each course taught by one teacher—some of which are impossible to control and others of which would be unethical to control. It would be a mistake to overinterpret the study as showing that adaptive learning doesn't help much. I've seen studies with a narrower focus that have found clearer gains. This brings us back to the arrow-in-the-quiver theory. The wider the scope of the research focus, the more that pockets of real benefit will be obscured by noise and uncontrolled variables.
The report is here.