A while back, I mentioned that MindWires, the consulting company that Phil and I run, had been hired by Pearson in response to an earlier post of mine expressing concerns that the company might try to define "efficacy" in education for educators (or to them) rather than with them. The heart of the engagement was us facilitating conversations with different groups of educators about how they think about learning outcomes---how they define them, how they know whether students are achieving them, how the institution does or doesn't support achieving them, and so on. As a rule, we don't blog about our consulting work here on e-Literate. But since we think these conversations have broader implications for education, we asked for and received permission to blog about what we learn under the following conditions:
- The blogging is not part of the paid engagement. We are not obliged to blog about anything in particular or, for that matter, to blog at all.
- Pearson has no editorial input or prior review of anything we write.
- If we write about specific schools or academics who participated in the discussions, we will seek their permission before blogging about them.
I honestly wasn't sure what, if anything, would come out of these conversations that would be worth blogging about. But we got some interesting feedback. It seems to me that the aspect I'd like to cover in this post has implications not only for Pearson, and not only for ed tech vendors in general, but for open education and maybe for the future of education in general. It certainly is relevant to my recent post about why the LMS is the way it is and the follow-up post about fostering better campus conversations. It's about the role of research in educational product design. It's also about the relationship of faculty to the scholarship of teaching.
It turns out that one of the aspects of Pearson's efficacy work that really got the attention of the folks we talked with was their research program. Pearson has about 40 PhDs doing educational research of different kinds throughout the company. They've completed about 300 studies and have about another 100 currently in progress. Given that educational researchers were heavily represented in the groups of academics we talked to, it wasn't terribly surprising that the reactions of quite a few of them were variations of "Holy crap!" (That is a direct quote of one of the researchers.) And it turns out that the more our participants knew about learning outcomes research, the more they were interested in talking about how little we know about the topic. For example, even though we have had course design frameworks for a long time now, we don't know a whole lot about which course design features will increase the likelihood of achieving particular types of learning outcomes. Also, while we know that helping students develop a sense of community in their first year at school increases the likelihood that they will stay on in school and complete their degrees, we know very little about which sorts of intra-course activities are most likely to help students develop that sense of connectedness in ways that will measurably increase their odds of completion. And to the degree that research on topics like these exists, it's scattered throughout various disciplinary silos. There is very little in the way of a pool of common knowledge. So the idea of a well-funded organization conducting high volumes of basic research was exciting to a number of the folks that we talked to.
But how can we trust that research? Every vendor out there is touting solutions based on "brain science" and "big data." How can the number of PhDs a vendor employs or the number of "studies" it conducts yield more credible value than a bullet point in the marketing copy?
In part, the answer is surprisingly simple: Vendors can demonstrate the credibility and value of their research using the same mechanisms that any other researcher would. The first step is transparency. It turns out that Pearson already publishes a library of their studies on their "research and innovation network" site. Here is a sample of some of their more recent titles that will give you a sense of the range of topics:
- Measuring Academic Language Proficiency in School-age English Language Proficiency Assessments under New College and Career Readiness Standards in the United States
- Rich Classroom Discussion: One Way to Get Rich Learning
- Evaluating the Predictive Value of Growth Prediction Models
- The Future of Affirmative Action: New Paths to Higher Education Diversity after Fisher v. University of Texas Center
- New Methods in Online Assessment of Collaborative Problem Solving and Global Competency
- Category Fluency, Latent Semantic Analysis and Schizophrenia: A Candidate Gene Approach
- Evaluation of Pseudo-Scoring as an Extension of Rater Training
- Preparing Students for College and Careers: The Causal Role of Algebra II
- Teaching in a Digital Age: The Philosophies of Learning Behind Improving Access to Learning Resources
Pearson also has a MyLabs- and Mastering-specific site that is more marketing-oriented but still has some research-based reports in it.
How good is this research? I don't know. My guess is that, like any large body of research conducted by a reasonably large group of people, it probably varies in quality. Some of these studies have been published in academic journals or presented at academic conferences. Many have not. One thing we heard from a number of the folks we spoke to was that they'd like to see Pearson submit as much of their research as possible to blind peer-reviewed journals. Ultimately, how does an academic typically judge the quality of any research? The number of citations it gets is a good place to start. So the folks that we talked to wanted to see Pearson researchers participate as peers in the academic research community, including submitting their work to the same scrutiny that academic research undergoes.
This approach isn't perfect, of course. We've seen in industries like pharmaceuticals that deep-pocketed industry players can find various ways to warp the research process. But pharmaceuticals are particularly bad because (a) the research studies are incredibly expensive to conduct, and (b) they require access to the proprietary drugs being tested, which can be difficult in general and particularly so before the product is released to the market. Educational research is much less vulnerable to these problems, but it has one of its own. By and large, replicating experiments (and therefore confirming or disconfirming results) is highly difficult or even impossible in many educational situations, for both logistical and ethical reasons. So evaluating vendor-conducted or vendor-sponsored educational research would have its challenges, even with blind peer review. That said, the opinion of many of the folks we talked to, particularly those who are involved in conducting, reviewing, and publishing academic educational research, was that the challenges are manageable and the potential value generated could be considerable.
Even more interesting to me were the discussions about what to do with that research besides just publishing it. There was a lot of interest in getting faculty engaged with the scholarship of teaching, even in small ways. Take, for example, the case of an adjunct instructor, running from one school to the next to cobble together a living, spending many, many hours grading papers and exams. That person likely doesn't have time to do a lot of reading on educational research, never mind conducting some. But she might appreciate some curricular materials that say, "there are at least three different ways to teach this topic, but the way we're recommending is consistent with research on building a sense of class community that encourages students to feel like they belong at the school and reduces dropout rates." She might even find some precious time to follow the link and read that research if it's on a topic that's important enough to her.
This is pretty much the opposite of how most educational technology and curricular materials products are currently designed. The emphasis has historically been on making things easier for the instructors by having them cede more control to the product and vendor. "Don't worry. It's brain science! It's big data! You don't have to understand it. Just buy it and the product will do the work for you." Instead, these products could be educating and empowering faculty to try more sophisticated pedagogical approaches (without forcing them to do so). Even if most faculty pass up these opportunities most of the time, simply providing them with ready contextual access to relevant research could be transformative in the sense that it constantly affords them new opportunities to incorporate the scholarship of teaching into their daily professional lives. It also could encourage a fundamentally different relationship between the teachers and third-party curricular materials, whether they are vendor-provided or OER. Rather than being a solitary choice made behind closed doors, the choice of curricular materials could include, in part, the choice of a community of educational research and practice that the adopting faculty member wants to join. Personally, I think this is a much better selection criterion for curricular materials than the ones that are often employed by faculty today.
These ideas came out of conversations with just a couple of dozen people, but the themes were pretty strong and consistent. I'd be interested to hear what you all think.