Following the IHE piece on Essex County College's struggles to get good outcomes from their personalized learning program in developmental math, and following my blog post on the topic, Phil and I had an interesting exchange about the topic in email with ECC's Vice President for Planning, Research, and Assessment Doug Walercz. With his permission, I'd like to share some of his observations with you. One of the big takeaways from the conversation, for me, is that our cultural notion of the pedagogical work that happens in a good lecture is pretty impoverished relative to the reality. We don't have a clear understanding of all the things that a good lecture accomplishes, and therefore we often lose valuable elements of student support when we try to replace it. This has pretty serious implications for MOOCs, flipped classrooms, personalized learning, and a wide array of pedagogical approaches that replace a traditional in-person lecture with something else.
Here is part of Doug's initial response to my blog post and the IHE article:
My understanding of the experience at Essex continues to evolve, and it is something like this:
- Good teachers deliver a multi-dimensional learning experience, and the experience usually revolves around the content. When I say multi-dimensional, I mean that while the teacher is delivering the content, she is also assessing prior knowledge, building a positive classroom climate, establishing (high) goals for student performance, developing metacognition, dividing complex knowledge into manageable pieces, providing motivation for learning, and helping organize knowledge around key features. And all of these threads are woven into the lecture and discussion that comprises the classroom experience.
- Adopting a system like ALEKS moves the content from the professor to the software, and effectively removes the primary vehicle that used to carry essential dimensions of the learning experience. The software is great at delivering content, assessing prior knowledge, and dividing complex knowledge into manageable pieces, but it is not good at classroom climate, goal setting, metacognition, motivation, and organizing knowledge around key features. And our instructors don't know how to deliver these threads outside of a content-driven lesson.
- If you have students who are "good" students then they already have behaviors to establish a positive climate, they have metacognitive skills, they are self-motivated, etc. So, if your students are "college ready" they will not suffer significantly due to the absence of these threads in an adaptive-software driven course. However, if your students do not have the beliefs and behaviors of successful students, if they lack metacognition, if their primary motivation is driven by teacher approval (or lack thereof), if they don't know how to set goals or organize knowledge, then the absence of these threads will have a critical impact on their performance.
- The potential for better learning is there. Adaptive software does a better job of delivering pure content than faculty, especially when the students have a wide array of prior knowledge. Adaptive software also gives faculty more time to devote to non-content-driven threads, so there is the potential for significant gains in learning, but it will only happen after faculty learn to deliver those threads outside of a content-driven lesson. So, I am trying to focus on how to conceptualize these threads and get faculty to understand that they can be taught even when they are not giving a content lecture.
His assertion in his last point that "[a]daptive software does a better job of delivering pure content than faculty, especially when the students have a wide array of prior knowledge" tends to generate some heated debate, particularly in math education circles, so I asked him to elaborate on it. Here's what he had to say:
When I made the comment about delivering pure content, I was talking about the process of breaking complicated concepts or knowledge domains into component pieces, learning the pieces separately, and then integrating them into a whole. Faculty (and all experts) often suffer from "expert blind spot" when they try to teach a novice a skill that the expert mastered a long time ago. Even if the expert is tuned into some of his blind spots, it is very difficult to deconstruct a task, teach every step, not forget anything, and do it reliably class after class. Adaptive software, on the other hand, is very well suited for this type of content delivery. A team of experts has done the deconstruction in painstaking detail, and data from thousands or millions of students have been analyzed to identify points where more (or less) explication is needed. Humans can't compete with that combination of comprehensiveness, detail, reliability, and adaptivity. I will say that when it comes to integrating component skills into large, integrative projects, software is usually not up to the task, because these integrative projects are almost always open-ended, and computers don't do "open-ended."
This is a defensible, research-backed position. I wrote about expert blindness a while back in my post on the Pittsburgh Science of Learning Center. I am aware of (and sympathetic to) what we might call the Dan Meyer school of thought regarding math education, which advocates for different goals and measures of success in math education. There is a reasonable debate to be had among thoughtful and competent math education professionals---one that I do not have any interest in rehashing, much less adjudicating, in this post. My point for the current purpose is simply that Doug is not coming from a knee-jerk "computers are awesome, teachers suck" perspective. Rather, he and his colleagues are engaged in an empirical examination of the various pedagogical functions that are necessary to help their students succeed.
For now, I'm more interested in his hypothesis that ECC instructors do know how to deliver "classroom climate, goal setting, metacognition, motivation, and organizing knowledge around key features," but only in the context of a "content-driven lesson." This is an interesting assertion. What does it mean?
We tend to think of a traditional "content-driven lesson" as a "lecture," and we tend to think of a "lecture" as a professor droning on for an hour and twenty minutes with no student interaction. But most lectures are not that, and no lectures are only that. Let's start with the simple fact that lectures are live and in person. My wife and I recently watched a very odd miniseries on PBS called Big Blue Live. A lot of emphasis was placed on the "live" part. The various hosts and experts kept going on excitedly about how the show was happening "live." But it didn't feel live. It didn't feel like we were actually there. Seeing a blue whale breach in person has to be a pretty dramatic experience. Watching it on TV did not feel anything like that. What it felt like was a prerecorded reality TV show. The hosts were excited because the whale was live for them, and their authentic in-the-moment reactions added to the viewing experience at home. But it was not at all like being there. And while I am not in any way suggesting that seeing a live lecture on, say, the role of prostitution in race relations in Reconstruction-era Memphis is anything like seeing a blue whale breach live, I would also say that it is inherently different from seeing the same lecture on videotape. It doesn't activate our attention mechanisms in the same way.
But the emotional impact of a live performance is really just a small part of the picture. Good teachers who lecture do a lot more than just deliver a canned speech and then walk out the door. There is often a lot going on during that talk. Some of the most basic pedagogical moves in a good lecture have been re-invented in video-based pedagogy, without any apparent awareness that they often happen in a physical classroom as well. For example, much is made of the fact that videos should be ten minutes long or less. First of all, there may be some difference in attention spans between a live and a recorded lecture. But more importantly, good lecturers have a rhythm to their presentations. They will signal a break in one way or another. They will pause. They will crack a joke. They will ask a question of the group. This brings us to the next "innovation" in video-based pedagogy. Asking a reinforcing question after a short lecture segment is not a new idea. It is true that it is easier to do with technology, because you can give every student an opportunity to answer. But lacking that, rhetorical questions and questions asked of the class as a whole both work. And they are common.
Beyond that, good teachers who lecture are always scanning the room, seeing who is paying attention, who is drifting off, who is in trouble, and so on. Students can raise their hands and ask questions. And a teacher can do something different in response to any of these cues. She can change the lecture, or drop it for a moment to engage in a discussion.
In the popular narrative about lectures, either these moves don't exist or they don't matter. But they do. The thing is, since most faculty have received no training at all in teaching, even the truly great ones aren't always fully conscious of what they are doing, and they don't always know how to separate out the pedagogical functions and apply them in different contexts. They learned to do what they do by watching other good teachers do the same thing. There is no particular reason to think that professors will spontaneously, without any training or modeling, be able to transfer those pedagogical functions into an environment in which they can't weave them into the main mode of teaching that they have seen throughout their academic lives.
When I think about the things that scare me the most about big technology-driven changes to the way we teach, this kind of thing is right up near the top. We don't fully understand what we are doing well now. Therefore, when we attempt to deconstruct it and then reconstruct it in a different environment, we don't really know what we will miss or how we will need to retrain our instructors so that we won't miss it. That's why it is so important to undertake these sorts of experiments thoughtfully, self-critically, and iteratively.