Anyone who has been awake in higher education in the last couple of years knows that there is a lot of attention on outcomes and assessment (although with distinctly different emphases in the U.S. and the E.U.). A natural consequence of this attention is that the various LMS platform developers are adding capabilities focused in this area. Blackboard has probably created the biggest splash with their highly promoted Outcomes product, but all the major platforms are doing work in this area, to different degrees and employing different strategies. I've been curious for some time about how the different approaches to this thorny problem space will shape up, which is why I am grateful that Ken Chapman, Desire2Learn's Lead Product Manager, was willing to sit down with me at EDUCAUSE and talk to me about what D2L is doing in this area.
Before we get into the details, though, we need to lay out the basics of the problem space. Fundamentally, outcomes assessment is about connecting a student's class experience with some larger goal. For example, take the case of a student reading Chaucer in a literature class. Did she learn how to better analyze a poem? Did she learn how to read Middle English? Did she learn about Chaucer's point of view and historical context? Did she learn skills and values that will make her more likely to pass other classes and graduate? Did she learn how to write a better essay?
Notice that these assessment points in my quick list are quite different from each other. This is the root of one of the most intractable problems in the outcomes debate: What should we be assessing? Which of the questions listed in the previous paragraph is the most important to answer? What is the most important possible outcome of an education? These are cultural, political, philosophical, practical, and ideological questions all tangled up into one big hairball. There isn't one universally best answer. Some of where you come down depends on why you're asking the question in the first place. Are you concerned with training the next generation of literary scholars? Are you looking to maximize students' likely economic benefit from their education, regardless of career path? Are you trying to create better citizens? Or do you care most about helping the student cultivate a rich and fulfilling life of the mind? The answers to these questions have a strong impact on whether it makes more sense to look at test scores or portfolios, whether assessment instruments should be the same across courses or even across states, and lots of other critical implementation questions. Without widespread agreement on goals and priorities, there will be no widespread agreement about what to assess or how to assess it. It is nearly impossible to get such widespread agreement in many cases. And yet, there is also a sense that if we give up on assessing outcomes altogether, we run the risk that the schools that students, parents, communities, and governments invest in will produce nothing of value for anyone.
This is the morass into which the LMS developers must journey. They can't dodge the challenge of outcomes assessment and they can't afford to oversimplify it either. At a minimum, they have to provide tools that will support at least a significant subset of the kinds of goals I listed above. In an ideal world, the tools would help different stakeholder groups make thoughtful and effective decisions about their goals and priorities by supporting a life-cycle process for the development and continuous re-evaluation of outcomes definitions and assessments. In this first post of the series, I'm going to look at how D2L defines the outcomes structure itself. In the second, I will describe some of the capabilities they offer for tying assessments to those outcomes, and in the third post, I will talk about how all this can link to a learning object economy and offer some final thoughts.
As you can gather from the discussion so far, any successful system for defining outcomes goals has to be reasonably agnostic and flexible. D2L's approach certainly succeeds in this regard. It has three fundamental building blocks: competencies, learning objectives, and assessments. This is a hierarchical structure with competencies at the top. In D2L's system, a competency can be defined flexibly enough to accommodate just about any of the questions about the student's experience in the literature class that I raised earlier in this post. Competencies can be high-level or low-level. They can be defined for a course, a department, a semester, a major, an entire institution, or any other organizational unit that is represented within D2L. You can also share competencies across sub-units within an organization. For example, you could create a competency on understanding confidence intervals in statistics for the School of Arts and Sciences. Obviously, not every department will need or want to share that competency. But the math, physics, psychology, and sociology departments (for example) may see value in sharing the same competency definition.
Why would they want to do that? Suppose that the psychology department has decided that all psych majors should acquire a certain set of competencies before they graduate, and that understanding confidence intervals is one of those competencies. If other departments share the competency definition, then a student who learns about confidence intervals in his sociology class can track that he has made progress toward learning what he needs to know for his major. On the other hand, it may turn out that the various departments feel their students have different needs with respect to understanding and properly using confidence intervals, so maybe they don't share competencies. Under D2L's system, they don't have to. Each department can define its competencies separately. The system supports cooperation but doesn't mandate it.
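To make the sharing model concrete, here is a minimal sketch of the opt-in arrangement described above. The class and attribute names are my own illustrative assumptions, not D2L's actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical model of org units sharing competency definitions.
# Names are illustrative assumptions, not D2L's actual schema.

@dataclass
class OrgUnit:
    name: str
    competencies: set = field(default_factory=set)

school = OrgUnit("School of Arts and Sciences")
math = OrgUnit("Mathematics")
psych = OrgUnit("Psychology")
soc = OrgUnit("Sociology")

# A competency defined once at the school level...
school.competencies.add("Understands confidence intervals")

# ...can be adopted by any sub-units that opt in:
for dept in (math, psych, soc):
    dept.competencies |= school.competencies

# A department with different needs can define its own instead:
soc.competencies.add("Applies confidence intervals to survey data")
```

The point of the sketch is simply that sharing is opt-in: adoption copies a reference to the common definition, while a department that wants something different adds its own entry without affecting anyone else.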
So far, I've listed three elements in the hierarchy but only talked about one of them. The learning objectives and assessments are really where the rubber meets the road. Sticking with our example of confidence intervals, how do we know if our students "understand" them? What would be the observable outcomes? Would they have to explain their meaning and how they affect particular experimental results? Would they need to demonstrate how to derive confidence intervals given a particular data set? These assessable questions are learning objectives. Every competency has at least one learning objective under it. In turn, every learning objective has at least one assessment, which is the actual instrument for checking to see if students have met the learning objective. Once again, though, the hallmark of the system is flexibility. You can have more than one assessment that tests a learning objective. For example, the psychology student can demonstrate that he knows how to explain the significance of a confidence interval in the context of an experiment described on a psychology test, or he could demonstrate that same ability to satisfy the learning objective in the context of another experiment described on a sociology test. The system also supports multiple nestings of elements. So granular learning objectives can roll up to higher-level learning objectives, and granular competencies can roll up to higher-level competencies. In other words, the system supports a very wide range of mappings.
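The hierarchy above can be sketched in a few lines of code. This is only an illustration of the structure as I've described it; the class names, fields, and the "any one passing assessment satisfies the objective" roll-up rule are my assumptions, not D2L's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the competency -> learning objective ->
# assessment hierarchy. Names and roll-up rules are assumptions.

@dataclass
class Assessment:
    description: str
    passed: bool = False

@dataclass
class LearningObjective:
    description: str
    assessments: List[Assessment] = field(default_factory=list)

    def met(self) -> bool:
        # Assumed rule: any one passing assessment satisfies
        # the objective (multiple assessments can map to it).
        return any(a.passed for a in self.assessments)

@dataclass
class Competency:
    name: str
    objectives: List[LearningObjective] = field(default_factory=list)
    sub_competencies: List["Competency"] = field(default_factory=list)

    def achieved(self) -> bool:
        # Granular elements roll up: a competency requires all of
        # its objectives and all of its nested sub-competencies.
        return (all(o.met() for o in self.objectives)
                and all(c.achieved() for c in self.sub_competencies))

ci = Competency("Understands confidence intervals")
explain = LearningObjective("Explain the meaning of a confidence interval")
explain.assessments = [
    Assessment("Psychology exam question"),  # either assessment
    Assessment("Sociology exam question"),   # can satisfy it
]
ci.objectives.append(explain)

explain.assessments[1].passed = True  # met in the sociology class
print(ci.achieved())  # → True
```

Passing the sociology version of the assessment is enough here, which captures the example from the text: the same objective can be satisfied in either department's course, and nested sub-competencies would roll up the same way.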
So that's the basics of D2L's competencies structure. In my next post, I'll look at the assessments and rubrics capabilities in more detail.