Ed Tech Evaluation Plan: More problems than I initially thought

Late last week I described the new plan from the US Department of Education (ED) and its Office of Educational Technology (OET) to “call for better methods for evaluating educational apps”. Essentially, the ED is seeking proposals for new ed tech evaluation methods so that it can share the results with schools – helping them evaluate specific applications. My argument [updated DOE to be ED]:

Ed tech apps by themselves do not “work” in terms of improving academic performance. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if apps work is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”. [snip]

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup – asking over and over “does this app work?” – will prove fatal. You cannot put technology at the center of understanding academic performance.

Upon further thought, as well as prompting from comments and private notes, this ED plan has even more problems than I initially thought.

Advocate or Objective Evaluator

There is a real problem with this plan coming out of the Office of Educational Technology, given that office's mission.

The mission of the Office of Educational Technology (OET) is to provide leadership for transforming education through the power of technology. OET develops national educational technology policy and establishes the vision for how technology can be used to support learning.

The OET strongly advocates for the use of ed tech applications, which I think is a primary cause of its inside-out, technology-first view of the world. It is not an objective organization in terms of whether and when technology should be used, but rather an advocate that assumes technology should be used and asks only that it be made effective. Consider these two statements, the first from the National Education Technology Plan and the second from the paper “Learning Technology Effectiveness” [emphasis added]:

  • The plan calls for applying the advanced technologies used in our daily personal and professional lives to our entire education system to improve student learning, accelerate and scale up the adoption of effective practices, and use data and information for continuous improvement.
  • While this fundamental right to technology access for learning is nonnegotiable, it is also just the first step to equitable learning opportunities.

I have no problem with these goals, per se, but it would be far more useful to not have advocates in charge of evaluations.

A Better View of Evaluation

Richard Hershman from the National Association of College Stores (NACS) shared with me an article that contained a fascinating section on just this subject.

Why Keep Asking the Same Questions When They Are Not the Right Questions?

There are no definitive answers to questions about the effectiveness of technology in boosting student learning, student readiness for workforce skills, teacher productivity, and cost effectiveness. True, some examples of technology have shown strong and consistent positive results. But even powerful programs might show no effects due to myriad methodological flaws. It would be most unfortunate to reject these because standardized tests showed no significant differences. Instead, measures should evaluate individual technologies against specific learning, collaboration, and communication goals.

The source of this excellent perspective on evaluating ed tech? An article called “Plugging In: Choosing and Using Educational Technology” from the North Central Regional Educational Laboratory and commissioned by the US Department of Education in 1995.

As Richard Parent commented on my recent post:

You’re exactly right to reframe this question. It’s distressing when the public demands to know “what works” as if there are a set of practices or tools that simply “are” good education. It’s downright depressing when those who should be in the know do so, too.

Update: This does not fully rise to the level of a response, but Rolin Moe got Richard Culatta to respond to his tweet about the initial article.

Rolin Moe: Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta: it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence


About Phil Hill

Phil is a consultant and industry analyst covering the educational technology market primarily for higher education. He has written for e-Literate since Aug 2011. For a more complete biography, view his profile page.
This entry was posted in Big Picture, Ed Tech, Policy. Bookmark the permalink.