Blueprint for a Post-LMS, Part 5

In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software and from cMOOCs.

In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take for the peer grading system described in part 2 to be (potentially) accepted by students as at least one component of the grading system in a for-credit class.



Blueprint for a post-LMS, Part 4

In part 1 of this series, I talked about some design goals for a conversation-based learning platform, including lowering the barriers and raising the incentives for faculty to share course designs and experiment with pedagogies that are well suited for conversation-based courses. Part 2 described a use case of a multi-school faculty professional development course which would give faculty an opportunity to try out these affordances in a low-stakes environment. In part 3, I discussed some analytics capabilities that could be added to a discussion forum—I used the open source Discourse as the example—which would lead to richer and more organic assessments in conversation-based courses.

But we haven’t really gotten to the hard part yet. The hard part is encouraging experimentation and cross-fertilization among faculty. The problem is that faculty are mostly not trained, not compensated, and otherwise not rewarded for their teaching excellence. Becoming a better teacher requires time, effort, and thought, just as becoming a better scholar does. But even faculty at many so-called “teaching schools” are given precious little in the way of time or resources to practice their craft properly, never mind improve it.

The main solution to this problem that the market has offered so far is “courseware,” which you can think of as a kind of course-in-a-box. In other words, it’s an attempt to move as much of the “course” as possible into the “ware,” or the product. The learning design, the readings, the slides, and the assessments are all created by the product maker. Increasingly, the students are even graded by the product.


This approach as popularly implemented in the market has a number of significant and fairly obvious shortcomings, but the one I want to focus on in this post is that these packages are still going to be used by faculty whose main experience is the lecture/test paradigm.[1] That means that, whatever the courseware’s original learning design was, it will tend to get crammed into a lecture/test paradigm. In the worst case, we get neither the benefit of engaged, experienced faculty who feel ownership of the course nor the benefit of an advanced learning design, because the faculty member has not learned how to implement it.

One of the reasons that this works from a commercial perspective is that it relies on the secret shame that many faculty members feel. Professors were never taught to teach, nor are they generally given the time, money, and opportunities necessary to learn and improve, but somehow they have been made to feel that they should already know how. To admit otherwise is to admit one’s incompetence. Courseware enables faculty to keep their “shame” secret by letting the publishers do the driving. What happens in the classroom stays in the classroom. In a weird way, the other side of the shame coin is “ownership.” Most faculty are certainly smart enough to know that neither they nor anybody else is going to get rich off their lecture notes. Rather, the driver of “ownership” is fear of having the thing I know how to do in my classroom taken away from me as “mine” (and maybe exposing the fact that I’m not very good at this teaching thing in the process). So many instructors hold onto the privacy of their classrooms and the “ownership” of their course materials for dear life.

Obviously, if we really want to solve this problem at its root, we have to change faculty compensation and training. Failing that, the next best thing is to try to lower the barriers and increase the rewards for sharing. This is hard to do, but there are lessons we can learn from social media. In this post, I’m going to try to show how learning design and platform design in a faculty professional development course might come together toward this end.


  1. Of course, I recognize that some disciplines don’t do a lot of lecture/test (although they may do lecture/essay). These are precisely the disciplines in which courseware has been the least commercially successful.

Blueprint for a post-LMS, Part 3

In the first part of this series, I identified four design goals for a learning platform that supports conversation-based courses. In the second part, I brought up a use case of a kind of faculty professional development course that works as a distributed flip, based on our forthcoming e-Literate TV series on personalized learning. In the next two posts, I’m going to go into some aspects of the system design. But before I do that, I want to address a concern that some readers have raised. Pointing to my apparently infamous “Dammit, the LMS” post, they raise the question of whether I am guilty of a certain amount of techno-utopianism: whether I’m assuming that just building a new widget will solve a difficult social problem, and whether any system, even if it starts out relatively pure, will inevitably become just another LMS as the same social forces come into play.

[Image: chasm illustration]

I hope not. The core lesson of “Dammit, the LMS” is that platform innovations will not propagate unless the pedagogical changes that take advantage of those innovations also propagate, and pedagogical changes will not propagate without changes in the institutional culture in which they are embedded. Given that context, the use case I proposed in part 2 of this series is every bit as important as the design goals in part 1 because it provides a mechanism by which we may influence the culture. This actually aligns well with the “use scale appropriately” design goal from part 1, which included this bit:

Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.

When we get to part 4 of the series, I hope to show how the platform, pedagogy, and culture might co-evolve through a combination of curriculum design, learning design, and platform design, with faculty as participants in a low-stakes environment. But before we get there, I have to first put some building blocks in place related to fostering and assessing educational conversation. That’s what I’m going to try to do in this post.
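
To give a concrete taste of those building blocks: the series uses the open source Discourse forum as its example, and Discourse already exposes most of the raw material as JSON. The sketch below, in Python, pulls one topic’s post stream and tallies crude participation counts. The forum URL, API credentials, and topic ID are placeholders, and the counts are meant only to illustrate the kind of data that is available, not to stand in for the assessment measures discussed in this post.

    # Sketch: tally per-participant activity in one Discourse topic.
    # The forum URL, API key, and topic ID below are placeholders.
    from collections import Counter

    import requests

    BASE = "https://forum.example.edu"
    HEADERS = {"Api-Key": "YOUR_API_KEY", "Api-Username": "system"}

    def topic_posts(topic_id):
        """Fetch a topic's posts via Discourse's JSON API.

        Note: for long topics this returns only the first chunk of
        the post stream; a fuller script would page through the rest.
        """
        resp = requests.get(f"{BASE}/t/{topic_id}.json", headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["post_stream"]["posts"]

    def participation(topic_id):
        """Count posts written and replies received per participant --
        a crude proxy for who starts and sustains conversation."""
        posts = topic_posts(topic_id)
        by_number = {p["post_number"]: p for p in posts}
        authored = Counter()
        replies_received = Counter()
        for p in posts:
            authored[p["username"]] += 1
            parent = by_number.get(p.get("reply_to_post_number"))
            if parent is not None:
                replies_received[parent["username"]] += 1
        return authored, replies_received

    if __name__ == "__main__":
        authored, replies_received = participation(42)  # placeholder ID
        for user, count in authored.most_common():
            print(user, count, replies_received[user])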



Blueprint for a Post-LMS, Part 2

In the first post of this series, I identified four design goals for a learning platform that would be well suited for discussion-based courses:

  1. Kill the grade book in order to get faculty away from concocting arcane and artificial grading schemes and focused instead on direct measures of student progress.
  2. Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
  3. Assess authentically through authentic conversations in order to give credit for the higher order competencies that students display in authentic problem-solving conversations.
  4. Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks. (A toy sketch of what such a measure might look like follows this list.)
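
To make goal 4 a little less abstract, here is a deliberately toy Python sketch: treat the discussion as a directed reply graph and look at where each participant sits in it. The metrics (in-degree and betweenness centrality) are stand-ins I have chosen purely for illustration, not measures this series has settled on, and the reply data is fabricated for the example.

    # Toy sketch: model a discussion as a directed reply graph and score
    # participants with simple centrality measures. These are illustrative
    # stand-ins, not validated assessment measures.
    import networkx as nx

    # (replier, replied_to) pairs -- fabricated example data
    replies = [
        ("ana", "ben"), ("ben", "ana"), ("cho", "ana"),
        ("dia", "cho"), ("ana", "dia"), ("ben", "cho"),
    ]

    g = nx.DiGraph()
    g.add_edges_from(replies)

    # In-degree centrality: how often others build on your posts
    # (one rough signal of standing in a trust network).
    in_deg = nx.in_degree_centrality(g)
    # Betweenness centrality: how often you bridge otherwise
    # separate parts of the conversation.
    between = nx.betweenness_centrality(g)

    for user in sorted(g.nodes):
        print(f"{user}: in-degree={in_deg[user]:.2f}, "
              f"betweenness={between[user]:.2f}")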

I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.

This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken-and-egg problem.

On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but we only have a vague idea so far of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair” and that they measure something meaningful. This is not a problem we can afford to take lightly.

And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy—we see that with social platforms like Twitter and Facebook, for example—but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment in order to find out what it takes to help faculty become comfortable or even enthusiastic about sharing their course designs. Any one of these challenges could kill the platform if we fail to take them seriously.

When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.



Blueprint for a Post-LMS, Part 1

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that you see a much more rapid and coherent progression of learning platform designs when you start with a particular pedagogical approach in mind. CBE is loosely tied to a family of pedagogical methods, perhaps the most important of which at the moment is mastery learning. In contrast, questions about why general LMSs aren’t “better” beg the question, “Better for what?” Since conversations about LMS design are usually divorced from conversations about learning design, we end up pretending that the foundational design assumptions in an LMS are pedagogically neutral when they are actually assumptions based on traditional lecture/test pedagogy. I don’t know what a “better” LMS looks like, but I am starting to get a sense of what an LMS that is better for CBE looks like. In some ways, the relationship between platform and pedagogy is similar to the relationship former Apple luminary Alan Kay claimed between software and hardware: “People who are really serious about software should make their own hardware.” It’s hard to separate serious digital learning design from digital learning platform design (or, for that matter, from physical classroom design). The advances in CBE platforms are a case in point.

But CBE doesn’t work well for all content and all subjects. In a series of posts starting with this one, I’m going to conduct a thought experiment of designing a learning platform—I don’t really see it as an LMS, although I’m also not allergic to that term as some are—that would be useful for conversation-based courses or conversation-based elements of courses. Because I like thought experiments that lead to actual experiments, I’m going to propose a model that could realistically be built with named (and mostly open source) software and talk a bit about implementation details like use of interoperability standards. But all of the ideas here are separable from the suggested software implementations. The primary point of the series is to address the underlying design principles.
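On the interoperability point: the natural candidate for wiring a conversation platform into existing campus systems is IMS LTI, which lets an LMS launch an external tool with signed context about the course and the user. As a rough sketch of that integration surface, here is an LTI 1.1-style basic launch signed with OAuth 1.0 via the oauthlib library; the consumer key, shared secret, tool URL, and context values are all placeholders, and a real deployment would follow whatever profile of the standard the partners agree on.

    # Sketch: build a signed LTI 1.1 basic launch request with oauthlib.
    # The key, secret, tool URL, and context values are placeholders.
    from urllib.parse import urlencode

    from oauthlib.oauth1 import SIGNATURE_HMAC, SIGNATURE_TYPE_BODY, Client

    LAUNCH_URL = "https://tool.example.edu/lti/launch"  # placeholder

    params = {
        # Required by the LTI 1.1 spec:
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "discussion-unit-3",  # placeholder
        # Typical context the tool can use:
        "user_id": "student-123",
        "roles": "Learner",
        "context_id": "course-456",
    }

    client = Client("consumer-key", client_secret="shared-secret",
                    signature_method=SIGNATURE_HMAC,
                    signature_type=SIGNATURE_TYPE_BODY)

    # sign() adds the oauth_* fields to the form body; the browser
    # would POST this body to the tool's launch URL.
    uri, headers, body = client.sign(
        LAUNCH_URL,
        http_method="POST",
        body=urlencode(params),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

    print(body)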

In this first post, I’m going to try to articulate the design goals for the thought experiment.

