In my recent post on Cengage Unlimited, I made a brief mention of the battle shaping up in the curricular materials world between “good enough” and “better enough.” I argued that Cengage is coming down on the “good enough” side by emphasizing all-you-can-eat pricing.
The distinction I’m trying to make between the two strategies is a little tricky. I’m not arguing that Cengage, for example, thinks its products aren’t great, or that all anybody needs is the cheapest PDF possible. On the other hand, “better enough” no longer means better editing or better production values, which is the way that textbook publishers used to position themselves against OER (and still do sometimes, although that reflex is beginning to fade). Rather, it’s about improving student outcomes.
To borrow a phrase from David Wiley, the fight boils down to standard deviations per dollar.1 That formulation reduces the battle to a fraction. In the numerator, we have impact. In the denominator, we have cost. David likes to say that it’s easier to change the denominator, i.e., reduce cost, than it is to change the numerator, i.e., improve student outcomes. One of the reasons this is true is that putting a different product in a class usually doesn’t have a big impact unless the instructor’s teaching practices also change to take better advantage of the product’s features. Or, if you prefer a formulation that emphasizes the teaching over the tools (which I do), digital courseware tends to have the most impact in classrooms where it supports the instructor’s chosen pedagogical approach.
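Wiley’s fraction can be written out explicitly. This is just a sketch of the framing; the notation is mine, not his:

```latex
\text{standard deviations per dollar}
  = \frac{\Delta\,\text{learning outcomes (measured in standard deviations)}}
         {\text{cost to the student (in dollars)}}
```

The arithmetic makes his point about levers concrete: cutting the denominator in half improves the ratio just as much as doubling the numerator does, and cost is usually the easier of the two to move.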
For the incumbents, neither the numerator nor the denominator is particularly easy to change. In my last two posts, I wrote about the major investments—and risks—that Cengage took on to deliver their products at a better price point and still make their business model work (they hope).
But that may be a cakewalk for the publishers compared with the challenge of changing the numerator. In my original post about Pearson’s efficacy strategy, I explored these challenges at length. I have chosen to quote a hefty excerpt here because none of these problems has gone away:
Let's think some more about the analogy to efficacy in health care. Suppose Pfizer declared that they were going to define the standards by which efficacy in medicine would be measured. They would conduct internal research, cross-reference it with external research, come up with a rating system for the research, and define what it means for medicines to be effective. They would then apply those standards to their own medicines. And, after all is said and done, they would share their system with physicians and university researchers in the hopes that the medical community might be reassured about the quality of Pfizer's products and maybe even contribute some ideas to the framework around the edges. How confident would we be that what Pfizer delivers would consistently be in the objective best interest of improving health?...
If Pearson were to say to faculty, "Here's what we think we know about the efficacy of this product, here's what we don't know yet, and here is how we are thinking about the question," they might get a number of responses. Maybe they would get, "Oh, well here's how I know that it's effective with my class." Or "The reason that you don't have a good answer on effectiveness yet is that your rubric doesn't provide a way to capture the educational value that your product delivers for my students." Or "I don't use this product because it has direct educational effectiveness. It frees me up from some grunt work so that I can conduct activities with the class that have educational impact." Most of all, if you're [Pearson CEO] John Fallon, you really want faculty to say to their sales reps, "Huh. I never thought about the product in quite those terms, and it makes me think a little differently about how I might use it going forward. What can you tell me about the effectiveness of this other product that I'm thinking about using, at least as Pearson sees it?" And you really want your sales reps to run back to the product teams, hair on fire, saying "Quick! Tell me everything you know about the effectiveness of this product!"
Pearson won't get that conversation by just publishing end results of their internal analysis when they have them, which means that they have a high risk of failing to align their products with the needs and desires of their market if they think about the relationship between their framework and their customers in that way....
There are a number of reasons why this part of the transformation will be at least as difficult as the part that Pearson is undertaking now. First, it is far from clear that the company has the trust of the academic community that would be necessary for them to take such a role. That would have to be built, in some cases from the ground (or even the basement) up. Pearson does have real strengths that are known within certain segments of the academic community---in data science, for example---but this does not transfer to a general reputation. Second (and relatedly), unlike the medical research community, the educational research community is still nascent and fragmented. Finding non-paternalistic but effective ways to bring that community together and facilitate useful conversations will be difficult to say the least. These two challenges are outside the company's sphere of control, which means that Pearson will have to develop new ways to think about how to build their relationships with the broader educational community.
Internally, changing the way they think about answering the questions that the framework asks them will entail as much subtle, difficult, and pervasive re-engineering of the corporate reflexes and business processes as the work being undertaken now.... [A]ll textbook companies that have been around for a while are wired for a particular relationship with faculty that is at the heart of how they design, produce, and sell their products. Their editors have gone through decades of tuning the way they think and work to this process, and so have their customers. When Pearson layers a discussion of efficacy onto these business processes, a tension is created between the old and new ways of doing things. Suddenly, authors and customers don't necessarily get what they want from their products just because they asked for them. There are potentially conflicting criteria. The framework itself provides nothing to help resolve this tension. At best, it potentially scaffolds a norming conversation. But a product management methodology that can combine knowledge about efficacy, user desires, and usability requires more tools than that. And that problem is even worse in some ways now that product teams have multiple specialized roles. The editor, author, adopting teacher, instructional designer, cognitive science researcher, psychometrician, data scientist, and UX engineer may all work together to develop a unified vision for a product, but more often than not they are like the blind men and the elephant. Agreeing in principle on what attributes an effective product might have is not at all the same as being able to design a product to be effective, where "effective" is a shared notion between the company and the customers.2
Publishers that want to improve the numerator will have to completely rewire the ways that they work, both internally and externally. They need to rethink their product design process from the ground up while simultaneously completely resetting their relationships with their customers.
That post was published on December 31st, 2013. As we enter 2018, we are beginning to see examples of what such efforts might look like. For today’s example, I’m going to draw on recent work by Macmillan.
Resetting the Conversation
Before I get into the details, a little more disclosure than usual is called for here. I am a paid member of Macmillan’s Learning Impact Research Advisory Council (IRAC). As such, I was paid to provide input on the paper I’m about to write about, as well as on the underlying research processes that the paper describes. I was not paid to write this post about the paper. Or rather, I was paid to write something about it, but I was asked to write one page—one page!—of private feedback on the paper. I asked if I could write my feedback as a public blog post of unspecified length. The folks at Macmillan agreed.
The paper is called Unpacking the Black Box of Efficacy: A framework for evaluating the effectiveness and researching the impact of digital learning tools. Registration is required.
First piece of feedback for Macmillan: If you really want to foster a new dialog with academics, don’t start it by requiring them to give you their email addresses just to read your paper.
But the approach outlined in the paper is another matter. Recall that in the Pearson post quoted above, I advised the company to approach customers with something like the following proposition:
Here's what we think we know about the efficacy of this product, here's what we don't know yet, and here is how we are thinking about the question.
That is essentially what Macmillan’s paper attempts to do. It starts with a plain-English inventory of some common educational research methods: how they work, and what their strengths and weaknesses are. The section on randomized controlled trials (RCTs) alone is worth the price of admission, given how often the RCT is simplistically held up as the “gold standard” in research. Any thoughtful educator reading the description of the process will immediately think, “Hey, that’s...problematic in education.”
Even better, Macmillan was able to accomplish that with one page of text and one picture. They will need to be this incisive on a consistent basis if they are going to reach their intended audience.
Next, the paper describes their product development lifecycle. Again, there is a good balance here of clarity and brevity. The first stage of that lifecycle is called “Co-design & Learning Research.” While publishers have pretty much always started their product design process with input from customers, I wouldn’t call the historic process “co-design.” Rather, it was typically an author/editor collaboration with some limited and focused customer input. More recently, publishers have developed all kinds of hybrid processes. But Macmillan at least claims to be starting with a clean sheet of paper. They are certainly not the only publisher to do this, but from a communication perspective, framing educational product design as a combination of co-design with customers and structured but comprehensible research is a good move.
Speaking of which, the third section maps the various research methods described in the first section to the product design process in the second. There’s even a development timeline. The net effect is that educators (and students) have a clear and concise document explaining how Macmillan products are developed, how their potential learning impact is tested, and just how much it’s fair to say that the company knows about that impact at any stage in the development lifecycle.
While I am by no means claiming credit, this paper reads as if it could have been written as a direct response to my critique of the first iteration of Pearson’s efficacy strategy.
So yeah. I like it.
Good Enough for What?
You didn’t think I’d let them off that easily, did you?
Remember waaay back, all the way at the beginning of the post, when I made the point that learning outcomes are hard to improve with curricular materials partly because their impact depends on what humans in the classroom do with them? That problem still looms, and Macmillan’s paper barely touches it.
When I talk to students at length about the curricular materials that their instructors assign, their top complaint isn’t price. Don’t get me wrong; they hate the prices. But what they really hate is being told to buy a $200 book that the instructor barely mentions, let alone integrates into the class on a programmatic basis.
This is what “better enough” is competing against. I always thought it was funny that textbook publishers refer to everything outside the book as “ancillaries,” because many instructors tend to see the categories as reversed. The book is ancillary. It’s not central to the learning that happens in the classroom. Before Macmillan, or any other publisher, can sell products based on the value proposition of “efficacy” or “learning impact” or “learning outcomes”, instructors must first come to believe that these three propositions are true:
- Instructors are responsible for learning how to improve their students’ learning outcomes by improving their teaching craft.
- Improving their teaching craft includes learning to employ research-validated practices.
- Macmillan’s products support and enable research-validated practices effectively enough that the company can make a credible case for their having more than “ancillary” value.
This paper makes a good start—as good a start as any short paper can make—on the third proposition. The first proposition isn’t fair to lay at the publishers’ feet; it’s more driven by the incentives and culture of academia. It’s a problem, but not one that Macmillan or its peers can do much about directly. The second proposition is where the vendors, including but not limited to Macmillan, need to figure out how to do more.
To be fair, the paper nibbles around the edges of this problem. Educators and product developers need to develop shared goals. That’s what a co-design process is for. They also need to develop a shared standard of proof that those goals are being met better by one method than another. A lot of the paper develops the basis for a conversation around this.
But left implicit is the argument that education should be empirical and that empiricism needs to be formalized at least some of the time. There should be theories of learning impact and rules for what counts as evidence that supports or disproves those theories. This needs to apply not just to curricular materials design but also to what happens in the classroom.
It’s probably too much to expect this paper, as focused as it is, to open up this Pandora’s Box. This paper is, in part, a trust-building exercise, and Macmillan needs to build trust before they can fully own up to the fact that incorporating curricular materials that meaningfully improve learning outcomes usually entails a course redesign. But that’s where both Macmillan and the industry need to get to if they want to be able to sell more heavily researched and designed products at a higher price point.
“Good enough” means “good enough for the way I use curricular materials in my classroom.” “Better enough” means “better enough that I’m convinced I should change the way I teach.” Macmillan has written a really good paper on the standards of proof they propose to live up to and how they propose to live up to them. But they also have to convince their customers to agree to live up to those same standards in their own teaching.
- “Standard deviation” is just statistics geek speak for a measure of difference—in this case, improvement—from the norm. [↩]
- Believe it or not, that is a short excerpt as measured as a percentage of the total word count of the post. [↩]