Before I move on to my next case study in academic institutions moving toward operational excellence at supporting student success, I want to revisit a section toward the end of my last post on the California Community Colleges Online Education Initiative (OEI). I was looking at the alignment that has to be achieved at various levels in the academic organization in order to encourage all the stakeholders to embrace this collective mission, with all the changes to their day-to-day work and even professional identities it would entail. Much of the piece is about the work that had been done so far to get alignment at various levels within the administration. But toward the end of the piece, I speculated a bit on potential opportunities for fostering faculty alignment through a course peer review process using a common rubric:
[T]o me, one of the most interesting vectors for culture-building is the course exchange course quality rubric. Every course on the exchange has to be evaluated against a rubric of evidence-backed effective online teaching practices. As the pace at which exchange courses are developed increases, OEI will not be able to keep up with demand to evaluate these courses using central staff. So they are creating a peer reviewer mechanism in which faculty on the campuses are trained on the rubric and presumably compensated to review courses that are candidates for the exchange.
This opportunity fascinates me. We know that faculty who go through an expert-supported course redesign process often experience intellectually deep and emotionally moving shifts in their teaching strategies. Is the same true when faculty are trained reviewers of their colleagues' redesigned courses? What effect will simply exposing faculty to more and different course designs have? How will their role as reviewers and critics shape or enhance that effect? Can a continuously improved and updated rubric become a vector for sharing new research-supported processes across the system on an ongoing basis? Will the impact be broad and deep enough to foster new kinds of intra- and inter-campus faculty dialogs about the scholarship of teaching and learning (SoTL)? Will these cultural changes help to foster alignment around continuous operational improvement for enabling student success? This is the last mile problem of higher education. Operational excellence at student success cannot be achieved unless it is infused in the daily operations of individual classrooms. That requires affirmative faculty buy-in, support, training, and embedding in a culture that invites them into the larger conversation.
Unpacking this a bit, what does it really mean to build a culture of operational excellence in supporting student success? What kind of change would be necessary at the individual level to achieve change at the organizational level?
There is a useful concept in organizational psychology called "double-loop learning." I'll give a simple example from a non-academic organization first to make the concept clear. Suppose your company manufactures smartphones. You want supply to match demand as exactly as possible. If you manufacture too many phones, then you will sink expense into building units that will sit on the shelves and fairly quickly become obsolete. But if you manufacture too few, then you won't have phones to sell at the moments that people need to buy them, thus encouraging them to buy a different (more available) phone instead. In a single-loop model, you have one lever to pull, which is how many phones you produce at a given time. It's like a thermostat: If the room is too cold, then turn on the furnace. If the room is warm enough, then turn off the furnace. If there are not enough phones on the shelves, then turn up production. If there are too many phones on the shelves, then turn down the production line.
The problem is that there's a significant lag between when the order is given to produce more phones and when they arrive on the shelves. During that time period, demand can change. Maybe the new phones the company ordered during a period of high demand actually land on the shelves at the beginning of a recession, or right after a competitor releases its hot new model. The single-loop, thermostat-like model doesn't work very well.
Of course, the people who run the company are smart enough to know this, so they come up with all sorts of work-arounds. They build warehouses to hold excess phones near where they are built, since holding onto the phones that way is cheaper than shipping them halfway across the world and negotiating with the retail stores that are selling them and may want to ship excess inventory back. They build sophisticated forecasting models that account for factors such as the economy and competitor behavior, so the chances of them being badly wrong are reduced. These are all work-arounds to a fundamental problem regarding the costliness of being wrong in your demand forecasts. And this is exactly the way manufacturers of all kinds of complex items, including smartphones, used to operate in the old days.
But then somebody somewhere questioned a fundamental premise that drove so much effort and activity: Does it have to take so long from the time the company orders new products to be manufactured until those products reach the retail shelves? Maybe there's some part that often holds up the whole product; if we could only use a different part, or make the part ourselves, then we could get rid of a lot of the delays. Maybe the places where those component parts come from are farther away from our factory than they need to be; if we could just get them to move closer, then we could cut down on the lags. Maybe we have extra steps in our manufacturing process, or use outdated equipment; if we could only make some updates, then we can shorten the lag. And if we do all of these things, as well as more generally finding and making changes anyplace where the process bogs down, then maybe we don't have to put products on shelves at all. Maybe we can get the delay between order and manufacture short enough that we could start manufacturing the device when the consumer orders it and get it assembled and shipped fast enough that the consumer would tolerate the delay.
This is double-loop learning. Organizations not only use processes that allow them to make adjustments but also regularly examine the assumptions behind those processes that may be unnecessarily getting in the way of achieving organizational goals. We assume that we have to develop processes to mitigate bad product demand forecasts because we assume that those costs will be high because, in turn, we assume that manufacturing the product will take a long time once we decide to do it. But what if we're wrong?
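The contrast between the two loops can be caricatured in a few lines of code. This is a toy sketch under invented assumptions, not anything from the post: the function names, thresholds, and the `lead_time` parameter are all hypothetical, chosen only to show that the single loop tunes a lever while the double loop interrogates the premises behind the lever.

```python
def single_loop_adjustment(shelf_stock, target_stock):
    """Thermostat-style control: only the production lever moves.

    If stock is below target, order the difference; if not, order
    nothing. The lag between ordering and arrival is taken as a given.
    """
    if shelf_stock < target_stock:
        return target_stock - shelf_stock  # turn production up
    return 0  # turn production down


def double_loop_review(lead_time_weeks, holding_cost, forecast_error):
    """Double-loop step: question the assumptions behind the process.

    Instead of tuning the order quantity, ask whether the lead time
    itself -- the premise that makes bad forecasts so costly -- and the
    workarounds built on top of it can be eliminated.
    """
    questions = []
    if lead_time_weeks > 1:
        questions.append("Which component or step drives the lead time?")
    if holding_cost > 0:
        questions.append("Is warehousing a workaround for slow manufacturing?")
    if forecast_error > 0.1:
        questions.append("Would build-to-order beat better forecasting?")
    return questions


# Single loop: react to the gap; the lag stays fixed.
print(single_loop_adjustment(shelf_stock=80, target_stock=100))  # 20

# Double loop: surface the assumptions that make the gap costly.
for q in double_loop_review(lead_time_weeks=6, holding_cost=2.0,
                            forecast_error=0.25):
    print(q)
```

The point of the sketch is structural: the first function can only ever move the one lever it was given, while the second generates new questions about why that lever is the only one available.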
Double-loop thinking is a reasonably simple concept to understand but very hard to execute well and consistently. In the smartphone manufacturer example, think about all the many kinds of assumptions in the way things had always been done that would have to be identified, questioned, and replaced with a better-designed alternative. Particularly in the early days, when there weren't models to copy or lessons learned elsewhere, no one person could see all the changes that would have to be made. There would be many people across the organization—in manufacturing, product design, contract negotiation, shipping, retail relations, and so on—who would each be able to spot an individual sub-optimization in her daily work experience. And then more people would have to be involved in designing a solution to each sub-optimization, including accounting for all the ripple effects across other aspects of the organization. It would be an all-hands-on-deck sort of affair. Everyone would be needed to find problems, identify potential solutions, check those solutions for side-effects, and then implement them well.
Double-loop learning in academia
Now think about a few of the many questions that are starting to be asked about the operating assumptions behind the education-related processes of colleges and universities:
- Why must students stay in a course for a set number of weeks, regardless of how quickly or slowly they are capable of learning the material?
- Why are students only able to register for and start a course at most a couple of set times in the year?
- Why are some very common teaching modalities based on the default assumption that all students learn roughly the same way and encounter roughly the same rough spots?
- Why do we define the minimum math literacy for a college degree as basic algebra rather than, say, statistics?
- Why do we believe that professorial training requires at least five years of deep disciplinary education and at most one course in pedagogical education?
- Why do faculty gain job security through research excellence far more than through teaching excellence?
- Why do we assume that students know and understand everything they need to do from the moment they receive their college acceptance to the moment they arrive on campus for the start of their first semester of class?
- Why do we assume we can know when individual students are in trouble and need help from the academic institutions when no employee of that institution sees the student for more than a few hours in the week—at most—and there is no good mechanism for sharing concerns and observations among the people who have contact with that student?
Think about the people who were in a position to spot each of these assumptions. Think about all the people required to design, troubleshoot, and implement alternatives that arise out of questioning the assumptions. If we want to reliably create student-ready colleges, then we need to be able to identify many unwarranted assumptions and design many alternative ways of doing things in ways that will deeply affect the ways in which academic institutions—and the people employed by them—work. To change everything, you need everyone. That specifically includes faculty.
A rubric as a vector for change
So if you're the CEO of a major textbook publisher and you want to unite the entire 45,000-employee company around a plan to transform the way the company does business, what do you do? Surprisingly, Pearson's CEO John Fallon's answer was, "I'll create a rubric."
I'm not going to analyze Pearson's rubric in detail here.... I'll say this much about it: It's nothing special. It's not bad, but it's not genius either. There are plenty of flaws and limitations you could find if you worked at it and applied it broadly enough. There is no magic in it.
But here's the thing: There is never any magic in a rubric. The magic, when there is any, happens in the norming conversations that the rubric engenders. It happens when one colleague says to another, "What do you mean by 'quality of evidence'?" Or "I scored that course a 2 on effectiveness. Why did you think it was a 4?" To the degree that the Effectiveness Framework proves to have any magic for Pearson, it will be in the norming conversations that it engenders across the company. Like our hypothetical Berkeley president, Fallon is working with diverse groups within an institution that has a culture of independence and Balkanization. Some of this is for good reason; conversations about effectiveness in chemistry education should look very different from conversations about effectiveness in fine arts education. Some of the fractiousness is about lack of a common culture and language necessary to discuss what otherwise are common challenges. And some of it is just human territoriality and self-interest. The first two challenges might be addressed by having a deep and wide ongoing norming conversation about a rubric that is general enough to cover a wide range of disciplines and products but focused enough to provoke important discussions. The goal is for that conversation to become the basis for a new culture. The third challenge might be addressed by reinforcing that culture through your HR and other business practices.
Since I wrote that post, Pearson has developed a set of rubrics for evaluating whether a given product supports research-backed learning design principles. They have rolled those rubrics out to every product team and trained their product teams on how to use them. They have released them under a Creative Commons license. (For more on both the resource itself and Pearson's interest in working with academics to make them more useful to academia, see the talk given by Pearson's Global Head of Efficacy and Reach at last year's Empirical Educator Project summit.) So Pearson continues to use what is essentially an academic strategy, not that different from the one being rolled out by California OEI, to build a double-loop culture around designing educational content and software functionality that are more effective at impacting student outcomes.
The rubric development, training, and norming processes are necessary but not sufficient. As I suggest in that last sentence of the Pearson post quote, other organizational processes need to be put in place as well in order to get the desired effect. It would be easy to get faculty thinking that the new practices are baked into the rubric, and as long as everybody is aligned with them, you're good. The organization goes through the double-loop, but only until the norming process is complete. This is, in fact, what happens in many colleges and universities that adopt course quality rubrics. The institution has to mindfully employ the rubric updating, retraining, and renorming processes as methods for collaborative innovation. The rubric needs to be designed at a high enough level that it invites discussion and thought rather than rote implementation. The processes around it need to be collaborative rather than broadcast-only. And many other processes—like compensation for time invested or rewards for innovation, to take a couple of obvious examples—need to be created or modified to support and dovetail with the rubric processes.
There are lots of organizations that implement course quality rubrics—enough that we should be able to start gathering stories and effective practices for using them to foster continuous organizational improvement. If anyone has a good example, please let me know.
- Pearson is a sponsor of e-Literate's Empirical Educator Project. [↩]