Coursera CEO Interview: Mike Caulfield nailed it two months ago

Two months ago Mike Caulfield lamented the inability of many people in online education, especially those behind massive online initiatives, to learn honestly from the past. In that post, Mike referred to the failed AllLearn initiative and the seminal post-mortem written up in University Business.

How does that relate? A paragraph from the 2006 post-mortem of AllLearn really stuck out for me:

Oxford, Yale, and Stanford have kept quiet about the collapse of their joint e-learning venture…[h]owever, AllLearn’s closure could offer an unprecedented opportunity to step back and discuss the strengths and weaknesses of the business model… Further research into the series of collapsed online ventures may shed some light on what makes a successful distance education program, and enable some of the surviving online providers to redefine their business models and marketing strategies accordingly

Of course they don’t delve into these things honestly, and as a result most people in these institutions are unaware of them. Like Leonard, the institutions alter the record of the past. They wake up the next day with amnesia, consult a set of dramatically altered notes, and wonder why no one has tried massive Ivy League courses yet. The PR push to cover one’s tracks ends up erasing the institutional knowledge that could build a better initiative.

Little did Mike realize that he was writing a script.

One month later, Coursera hired Richard Levin as its new CEO. As president of Yale, Levin was one of the key figures in the creation of AllLearn in 2000, and after its collapse in 2006 he was one of the figures directly responsible for the Open Yale Courses initiative.

Continue reading

Posted in Higher Education, Notable Posts, Openness | 6 Comments

Links to External Articles and Interviews

Last week I was off the grid (not just lack of Internet but also lack of electricity), but thanks to publishing cycles I managed to stay artificially productive: two blog posts and one interview for an article.

Last week brought news of a new study on textbooks for college students, this time from a research arm of the National Association of College Stores. The report, “Student Watch: Attitudes and Behaviors toward Course Materials, Fall 2013”, seems to throw some cold water on the idea of digital textbooks, based on the press release summary [snip]

While there is some useful information in this survey, I fear that the press release is missing some important context. Namely, how can students prefer something that is not really available?

March 28, 2014 may well go down as the turning point when Big Data lost its status as a silver bullet and came down to earth in a more productive manner. Triggered by a March 14 article in Science Magazine that identified “big data hubris” as one of the sources of the well-known failures of Google Flu Trends,[1] there were five significant articles in one day on the disillusionment with Big Data. [snip]

Does this mean Big Data is over and that education will move past this over-hyped concept? Perhaps Mike Caulfield of the Hapgood blog stated it best, adding the education perspective . . .

This is the fun one for me, as I finally have my youngest daughter’s interest (you made BuzzFeed!). BuzzFeed has added a new education beat focusing on the business of education.

The public debut last week of education technology company 2U, which partners with nonprofit and public universities to offer online degree programs, may have looked like a harbinger of IPO riches to come for companies that, like 2U, promise to disrupt the traditional education industry. At least that’s what the investors and founders of these companies want to believe. [snip]

“We live in a post-Facebook era where startups have this idea that they can design a good product and then just grow, grow, grow,” said Phil Hill, an education technology consultant and analyst. “That’s not how it actually works in education.”


Posted in Blogging, Higher Education, Notable Posts | Leave a comment

Head in the Oven, Feet in the Freezer

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.” Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

  • Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
  • Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was “best practice” until 2012, when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women have annual mammograms, not at age 50 as recommended for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos. These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop. Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion. Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.1 More on this point later.

Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full. My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education.

For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment. To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Continue reading

  1. But I’m not bitter.
Posted in Tools, Toys, and Technology (Oh my!) | 2 Comments

AAC&U GEMs: Exemplar Practice

A while back, I wrote about my early experiences as a member of the Digital Working Group for the AAC&U General Education Maps and Markers (GEMs) initiative and promised that I would do my homework for the group in public. Today I will make good on that promise. The homework is to write up an exemplar practice of how digital tools and practices can help support students in their journeys through GenEd.

As I said in my original post, I think this is an important initiative. I invite all of you to write up your own exemplars, either in the comments thread here or in your own blogs or other digital spaces.

Continue reading

Posted in Higher Education, Tools, Toys, and Technology (Oh my!) | Leave a comment

Efficacy, Adaptive Learning, and the Flipped Classroom, Part II

In my last post, I described positive but mixed results of an effort by MSU’s psychology department to flip and blend their classroom:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course….

It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline of enrollment by 126 students, which impacted their ability to deliver decreased costs to the institution.
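Since “84% improvement” can be read more than one way, here is a minimal sketch of the arithmetic, assuming “improvement” means the relative gain from the pre-test mean to the post-test mean, and using entirely hypothetical numbers rather than the actual MSU/NCAT data. The same sketch shows one common way a DFW (D, F, or withdrawal) rate is computed, since that figure comes up below.

    # Illustrative only: all numbers are hypothetical, not the actual MSU/NCAT data.

    def percent_improvement(pre_mean: float, post_mean: float) -> float:
        """Relative gain from the pre-test mean to the post-test mean, as a percentage."""
        return (post_mean - pre_mean) / pre_mean * 100

    def dfw_rate(d_count: int, f_count: int, w_count: int, enrolled: int) -> float:
        """Percentage of enrolled students who earned a D or F or withdrew."""
        return (d_count + f_count + w_count) / enrolled * 100

    # Hypothetical pre/post means on a 30-item exam, redesigned vs. traditional sections
    print(round(percent_improvement(pre_mean=12.0, post_mean=22.1)))  # 84 (% improvement)
    print(round(percent_improvement(pre_mean=13.0, post_mean=20.0)))  # 54 (% improvement)

    # Hypothetical grade distribution for one large section
    print(dfw_rate(d_count=40, f_count=35, w_count=25, enrolled=500))  # 20.0 (% DFW)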

Those numbers were based on the NCAT report that was written up after the first semester of the redesigned course. But that wasn’t the whole story. It turns out that, after several semesters of offering the course, MSU was able to improve their DFW numbers after all:

[Chart: MSU DFW rates]

That’s a fairly substantial reduction. In addition, their enrollment numbers have returned to roughly what they were pre-redesign (although they haven’t yet achieved the enrollment increases they originally hoped for).

When I asked Danae Hudson, one of the leads on the project, why she thought it took time to see these results, here’s what she had to say:

I do think there is a period of time (about a full year) where students (and other faculty) are getting used to a redesigned course. In that first year, there are a few things going on: 1) students and other faculty are hearing about “a fancy new course” – this makes some people skeptical, especially if that message is coming from administration; 2) students realize that there are now a much higher set of expectations and requirements, and have all of their friends saying “I didn’t have to do any of that!” — this makes them bitter; 3) during that first year, you are still working out some technological glitches and fine-tuning the course. We have always been very open with our students about the process of redesign and letting them know we value their feedback. There is a risk to that approach though, in that it gives students a license to really complain, with the assumption that the faculty team “doesn’t know what they are doing”. So, we dealt with that, and I would probably do it again, because I do really value the input from students.

I feel that we have now reached a point (2 years in) where most students at MSU don’t remember the course taught any other way and now the conversations are more about “what a cool course it is etc”.

Finally, one other thought regarding the slight drop in enrollment we had. While I certainly think a “new blended course” may have scared some students away that first year, the other thing that happened was there were some scheduling issues that I didn’t initially think about. For example, in the Fall of 2012 we had 5 sections and in an attempt to make them very consistent and minimize missed classes due to holidays, we scheduled all sections on either a Tuesday or a Wednesday. I didn’t think about how that lack of flexibility could impact enrollment (which I think it did). So now, we are careful to offer sections (Monday through Thursday) and in morning and afternoon.

To sum up, she thinks there were three main factors: (1) it took time to get the design right and the technology working optimally; (2) there was a shift in cultural expectations on campus that took several semesters; and (3) there was some noise in the data due to scheduling glitches.

There are a number of lessons one could draw from this story, but from the perspective of educational efficacy, I think it underlines how little the headlines (or advertisements) we get really tell us, particularly about components of a larger educational intervention. We could have read, “Pearson’s MyPsychLab Course Substantially Increased Students’ Knowledge, Study Shows.” That would have been true, but we have little idea how much improvement there would have been had the course not been fairly radically redesigned at the same time. We also could have read, “Pearson’s MyPsychLab Course Did Not Improve Pass and Completion Rates, Study Shows.” That would have been true, but it would have told us nothing about the substantial gains over the semesters following the study.

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

Posted in Higher Education, Instructional Design | 4 Comments