The problems of benchmarking

In 2004 the United Kingdom e-University failed and was put out of its misery. The analysis of this failure was, and is, extensive and ongoing. Historians will no doubt provide us with a nuanced assessment of the failure that draws on a variety of strands and contributing factors.

I’m not going to be that sophisticated in my assessment in this article. I think that the essential problem was a failure to appreciate just how complicated e-learning can be when it scales up from small, ad-hoc initiatives. Failure to appreciate this complexity means that the expenditure of 50 million pounds over the five-year lifetime of the UKeU was seen as an extravagance. My guess is that five to ten times this amount would have seen the UKeU succeed – roughly the amount of money earned annually by the University of Phoenix.

As an e-learning researcher based within a university I am periodically concerned by the reality that we just don’t know what it takes for institutions to succeed in e-learning. The UKeU failure is merely one example of many – the Open University failed in the US despite being regarded as one of the most successful open universities in the world, and many others, such as the California Virtual University, Fathom.com and Western Governors University, litter the field.

In the nineties this lack of knowledge led some to see threats to higher education from media companies and the emergence of global virtual universities. These may yet come to pass, but I’m not holding my breath. More recently, concerns have been raised in some countries that the General Agreement on Trade in Services (GATS) will see local (by implication and assumption, high-quality) providers driven out by larger multinational providers.

So what’s this got to do with benchmarking?

The University of Michigan undertook a study a while ago across a variety of industries. Researchers asked senior managers and CEOs to indicate the relative standing of their company:

  • 90% of the respondents thought their companies were above the average for the industry;
  • 50% put themselves in the top quartile;
  • 25% claimed to be among the top 10%.

Humans are very good at misleading themselves – cognitive priming and the fun of visual illusions are common examples, but more generally this blindness to the unexpected seems to be a limitation of the information-processing systems in our brains. Previously I’ve commented on the resistance to change prevalent in universities, but an inability to see the need for change or learning also seems to be a general human characteristic. Ken’s recent blog entry discussing transformative learning theory is a nice example of how being presented with jarring information can result in significant learning and change.

Benchmarking can potentially provide a means for an institution to experience a similar discontinuity of perception. The problem is that not all benchmarking activities will challenge the perceptions of an institution (or of its management). The term has grown to encompass a wide variety of potential outcomes, being defined variously as:

  • a tool to identify, establish, and achieve standards of excellence.
  • a structured process of continually searching for the best methods, practices, and processes and either adopting or adapting their good features and implementing them to become the “best of the best.”
  • the practice of measuring your performance against world-class organizations.
  • an ongoing investigation and learning experience ensuring that best practices are uncovered, adapted, and implemented.
  • a disciplined method of establishing performance goals and quality improvement projects based on industry best practices.
  • a searching out and emulating of the best practices of a process that can fuel the motivation of everyone involved, often producing breakthrough results.
  • a positive approach to the process of finding and adapting the best practices to improve organizational performance.
  • a continuous process of measuring products, services, and practices against the company’s toughest competitors or those companies renowned as industry leaders.
  • learning how leading companies achieve their performance levels and then adapting them to fit your organization.
  • a research project on a core business practice.
  • a partnership where both parties should expect to gain from the information sharing.

This covers a lot of ground, and no one benchmarking approach is going to do it all with any reasonable investment of resources. Experience with successful benchmarking projects in a number of contexts makes it clear that effective benchmarking requires a significant investment of resources, including the time of senior managers. Consequently there is the temptation to pre-select the areas that are focussed upon – to pick the areas that management “know” they need to consider. This has the benefit of reducing the costs of benchmarking, but at the risk of predetermining the outcomes and losing the chance that something new might be learnt. It was interesting hearing Gilly Salmon admit in the ALT-C benchmarking session that she had not wanted to benchmark particular aspects of her institution’s e-learning activities, but having done so, she learnt a number of unexpected things that changed her perception of its e-learning performance.

Another temptation is to use readily available metrics to benchmark the institution. The Australian VET benchmarking exercise illustrates this, with over 400 institutions having their e-learning measured. The problem is, in what way does knowing the percentage of courses using some form of e-learning help those involved in institutional leadership make better decisions? It’s not enough to have absolute numbers; they need a context that shows whether they are indicative of a problem or not. And, as I noted above, our lack of knowledge of causal relationships means that it’s impossible to say whether these metrics are a result of effective e-learning, a contributor to its success or imminent failure, or simply meaningless but easy to measure.

Humans are very good pattern-recognition machines, far better at seeing trends and relationships than any machine we create. Unfortunately, we are also good at seeing patterns that are artifacts of measurement. Stephen Jay Gould, the noted biologist, described this problem of reification in his book on IQ measurement, “The Mismeasure of Man.” The act of measuring something does not of itself produce meaning, but having measurements all too often results in an attempt to force meaning from them, particularly meaning that confirms our prejudices and preconceptions.

So where does this leave us? One of the key goals of the eMM, mentioned previously, is assisting senior managers in making strategic and operational decisions about their institution’s engagement with e-learning. This assistance must be of sufficient value to warrant the investment of time and resources needed, but it must also convey a clear sense of the limitations of the analysis. It must help those involved avoid the pitfalls of their own preconceptions while also assisting the prioritisation of resources and the compromises that are fundamental to management. I can’t say that we have achieved this yet with the eMM, but we’re trying, and focussing on these issues seems to have more value than simply reporting the same tired and irrelevant measures in league tables…

Cheers
Stephen
