So what is the eMM anyway?

In my last post I (Stephen) promised to explain in more detail what the e-Learning Maturity Model (eMM) is and how it might be of use. The eMM is an example of a process maturity model and, like all such models, it's founded on the basic premise that success in any complex endeavour is a consequence of the processes used. When an organisation is unfamiliar with the task being undertaken, decisions are often ad hoc and made on the basis of immediate requirements. As experience with the task grows, successful organisations learn from that experience and use it to become more effective at similar tasks. This growth in experience is referred to as "maturity" and is commonly regarded as passing through five levels:

  1. Initial: The processes used to undertake the task are ad hoc, and occasionally even chaotic. Few processes are defined, and success depends mainly on individual effort and heroics.
  2. Repeatable: Basic project management processes are established. The necessary process discipline is in place to repeat earlier successes on similar tasks.
  3. Defined: Management and task activities are documented, standardized, and integrated into a family of standard processes for the organization.
  4. Managed: Detailed measures of the process and task quality are collected so that the process and task are understood and controlled.
  5. Optimizing: Continuous process improvement is facilitated by feedback from the process and from piloting innovative ideas and technologies.
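
For readers who find code easier to scan than prose, here is a minimal, purely illustrative sketch in Python of the five levels as an ordered enumeration; it is not part of any maturity model, just a restatement of the list above.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five CMM-style maturity levels, ordered from least to most mature."""
    INITIAL = 1      # ad hoc processes; success depends on individual effort
    REPEATABLE = 2   # basic project management discipline is in place
    DEFINED = 3      # processes documented, standardised and integrated
    MANAGED = 4      # detailed quality measures collected and used for control
    OPTIMIZING = 5   # continuous improvement driven by feedback and piloting

# Because the levels are ordered, comparisons read naturally:
assert MaturityLevel.DEFINED > MaturityLevel.REPEATABLE
```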

An organisation with a high maturity level is described as being more capable in the key processes of a task than an organisation with a lower maturity level. This summary, descriptive approach has value in itself in distinguishing organisations capable of quality work from those still developing the necessary systems and skills. In the software field, DARPA uses the CMM (and now the CMMI) to guide the funding of large projects, restricting funding to organisations that achieve a minimum maturity level.

The maturity levels can also be used to define and describe common challenges and transitions faced by organisations attempting to improve their capabilities. An example of this latter use is the descriptive model used by Blackboard to illustrate the common challenges facing universities when developing their e-learning infrastructure.

More helpful than these descriptive uses, however, is the ability of maturity models to improve the quality of organisational processes and outcomes. Information on process capability identifies areas of organisational strength and weakness, which is useful in determining where to focus resources. The description of the key processes provided in the model then provides a set of detailed indicators that can guide how those resources could be invested.

The processes measured by any maturity model obviously have to be those that are genuinely needed for successful and sustainable achievement of the task goals. The source of that information must be practitioners with experience in the task, combined with a robust evidence base of documented case studies. As I noted in my last post, we chose the Seven Principles and the Quality on the Line benchmarks as a defensible starting point. Since then, we've used the experience of applying the model, workshops held internationally in Australia and the UK, and an extensive literature review to expand and refine the key processes in the current version of the eMM. We know this is unlikely to be the complete set yet, but by providing a starting point we hope others will help us improve it further. One observation supporting the current process set is that comparisons between different e-learning benchmarking tools are showing very substantial overlap. It's almost inevitable that the processes will be refined continuously as technology changes and research into successful e-learning shows us more effective approaches.

To illustrate how the processes are used, consider one of the Learning processes, "L1: Learning objectives are apparent in the design and implementation of courses." This has extensive support from the research literature and also "face validity" – it's hard to argue that knowing what the goals of a course are is unimportant. The eMM takes this process and contends that, for an organisation to be capable in any process, it must demonstrate that capability from the perspective of five dimensions:

Dimensions of the eMM

These dimensions relate to the five maturity levels listed above, but unlike the levels, they are not hierarchical. For the process to be successful all five dimensions must be addressed, and failure to perform the aspects of the process for a given dimension diminishes the organisation’s capability in that process.

When assessing capability in the L1 process, each dimension is considered in turn and evidence of practices such as the following is assessed:

  * Dimension 1: "Learning objectives are provided explicitly in the formal descriptions of the course provided to students, including the summary versions provided prior to enrolment as well as within detailed course prospectuses or syllabi."
  * Dimension 3: "Institutional policies require that a formal statement of learning objectives is part of all course documentation provided to students."
  * Dimension 5: "Information on student achievement of learning outcomes is used to inform and support the current and future design and (re)development of courses, programmes and degrees."

The evidence of capability in all of the practices is then aggregated and summary assessments are made for the process on each dimension.
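
To make that aggregation step concrete, here is a hypothetical sketch in Python of how practice-level evidence for L1 might be rolled up into a summary rating per dimension. The practice descriptions, the ratings (borrowed from the four-point scale discussed further below) and the simple averaging rule are illustrative only, not the eMM's actual assessment method.

```python
# Hypothetical roll-up of practice-level evidence into per-dimension capability
# for a single process (here L1). Names, ratings and the averaging rule are
# illustrative stand-ins for the judgement an assessor would actually apply.

RATINGS = {"not adequate": 0, "partially adequate": 1,
           "largely adequate": 2, "fully adequate": 3}

def dimension_capability(practice_ratings):
    """Summarise a dimension by averaging the ratings of its practices."""
    scores = [RATINGS[r] for r in practice_ratings]
    return sum(scores) / len(scores)

# Assessed evidence for process L1, keyed by dimension number:
l1_evidence = {
    1: ["fully adequate", "largely adequate"],  # objectives stated in course documents
    3: ["partially adequate"],                  # policy requires objectives in documentation
    5: ["not adequate"],                        # outcomes data feeds course redesign
}

summary = {dim: dimension_capability(ratings) for dim, ratings in l1_evidence.items()}
print(summary)  # e.g. {1: 2.5, 3: 1.0, 5: 0.0}
```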

This multidimensional assessment of capability is perhaps the one feature that most clearly distinguishes the eMM from other, more descriptive, maturity models. It allows the eMM to acknowledge that building capability in a process is not a linear, deterministic outcome of predefined steps, but rather a complex mix of organisational strengths and weaknesses interacting to create a patchwork of capability. This patchwork, sometimes called a “carpet,” is how the eMM visualises capability:

NZ Sector eMM Assessments

Each set of five columns in this diagram shows one organisation's capability on the five dimensions, with one process per row. Black squares indicate full capability for the given process and dimension; lighter shades indicate reduced or absent capability. This visualisation makes comparisons within an institution and across many institutions easy: dark areas are relatively strong, light areas relatively weak.

For example, in the results above, University B is relatively strong in the processes in the Learning and Development area but weak in the Support and Evaluation areas, suggesting that these should be prioritised for future investment. All institutions are relatively strong in process O8 while processes L10 and D7 are consistently weak. Across the entire set of results there is little capability in the last two columns (dimensions 4 and 5) suggesting that much work yet remains to be done in the areas of measuring processes and their outcomes and then improving them. A number of similar observations can be made from these results.

The choice of colours, the four-point scale, and the size of the boxes in this visualisation are all quite deliberate. While the eMM is a form of benchmarking (more on that in a later post) it has been deliberately designed to assist organisational improvement rather than ranking. Using colours instead of numbers makes comparisons more relative in nature; if numbers were used it would be tempting to ascribe significance to small numerical differences. The reality is that the quality of the empirical research evidence used to inform the eMM, despite being the best available, is just not very good. Broad trends, rather than fine detail, remain the best that the evidence currently supports.

This is also the reason the four-point scale of fully adequate, largely adequate, partially adequate and not adequate is used. There is also good evidence that people cannot reliably discriminate more than these four points on a scale when undertaking assessments. By displaying the assessment results in this way, we're trying to use something that humans are very good at (pattern recognition) while also ensuring that the analysis discourages misuse of the results.
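
For readers who want to reproduce this style of display with their own data, the following is a rough sketch of how such a carpet could be drawn using Python and matplotlib. The process and dimension labels and the capability values are made up for illustration; only the greyscale-instead-of-numbers idea comes from the eMM itself.

```python
# Rough sketch of a capability "carpet": one row per process, one column per
# dimension, and a four-step greyscale in which darker squares mean greater
# capability. All data here is randomly generated for illustration.
import numpy as np
import matplotlib.pyplot as plt

processes = ["L1", "L2", "L10", "D7", "O8"]   # illustrative subset of processes
dimensions = ["D1", "D2", "D3", "D4", "D5"]
# Capability on the four-point scale: 0 = not adequate ... 3 = fully adequate
capability = np.random.randint(0, 4, size=(len(processes), len(dimensions)))

fig, ax = plt.subplots(figsize=(3, 3))
ax.imshow(capability, cmap="Greys", vmin=0, vmax=3)  # black = fully adequate
ax.set_xticks(range(len(dimensions)))
ax.set_xticklabels(dimensions)
ax.set_yticks(range(len(processes)))
ax.set_yticklabels(processes)
ax.set_title("Capability carpet (illustrative)")
plt.tight_layout()
plt.show()
```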

Finally, the ability to put a large amount of information on a single page also has advantages when dealing with the real audience for eMM results – senior managers. The eMM has always been intended as a tool for strategic and operational planning: a support for management decision making in a complex area. The results of an eMM analysis are not an end in themselves but a tool for encouraging and supporting organisational change, something I'll talk about in more detail in my next post.

Cheers
Stephen
