So the news broke today about the Empirical Educator Project's (EEP's) year two experimental design, which we're calling EDwhy. The "ED" stands for "Educational Design," so the full name means, basically, "Why is your educational design the way that it is?" It invites educators to interrogate their own designs and aspires to give them the tools to do so. Here is the press release.
We have some good coverage to start you off from Inside Higher Ed and EdSurge. At IHE, Lindsay McKenzie goes broad. She starts with some good shoe leather work at Carnegie Mellon with some interviews. Pay close attention to the interview with Ken Koedinger, as he talks about (but does not name) a research finding called the doer effect, which I'm going to use as an example later in this blog post. She also provides a good refresher on the open source versus proprietary question that universities often face with substantial software intellectual property that they develop, and then touches lightly on EEP's role with the EDwhy announcement at the end (although with a clutch statement from Duke's Matthew Rascoff, who always seems to say the right thing with a lot of intellectual and moral clarity in very few words). If you're looking for a compact way into this story from the beginning, Lindsay's story is one good route in.
Meanwhile, Jeff Young at EdSurge has dug a little deeper into the significance behind the EDwhy idea and mechanics. I think the question that is on everyone's minds is, "OK, $100 million, lots of software, cool learning science-y things, but really, how is this going to be made useful?" Jeff begins to explore that question, and I'm going to take a deeper dive in this post. He also has some commentary from me about why we chose the name we did. You'll have to go read it on EdSurge to get those details, but I'll say this much here: On e-Literate, where one of our major roles is to critique hype and protect against the dangers of bad actors, we have an ethical obligation to throw some sharp elbows. With EEP, where we are not watching from the sidelines but actually entering the fray, we are mindful that our obligation shifts as our role shifts. We take the e-Literate lessons to heart while also attempting to be humble both about the accomplishments of those before us and about how easy it is for us to fall into the same traps that very smart people before us have fallen victim to.
But I don't want to write about the naming decision too much here. Instead, I want to write about how we are going to attempt to live up to the humbling confidence that Carnegie Mellon expressed in us when they chose us as a partner in their grand project. Obviously, when they offered to make their enormous contribution through our fledgling organization, it both forced and empowered us to rethink how we would go about the project in Year 2. We had always planned to stop, evaluate, and iterate on the design after the first year, but this opportunity demanded a pretty dramatic rethink in approach which, to be honest, is still ongoing. We have an idea that I'm going to share with you now that I believe makes sense in concept but does not yet have a fine-grained implementation plan. We are working hard with our Carnegie Mellon friends to have a foundation in place by the time of the summit. We will also workshop the idea at the summit with the cohort to refine our approach. This is going to be a year-long project. So we expect to spend some time after the summit continuing to put pieces in place and fine-tuning as we go. At the end of the year, we will do a progress check, evaluate, and iterate.
I am always mindful about appropriating terms from Silicon Valley culture because I think it tends to be reflexively idealized. That said, there is a lot to like about the educational value of a hackathon. It is a social, time-bounded, self-organizing, problem-based learning exercise. A group of people will get together to solve a defined problem over a period of time. That group is often cross-functional. It might have software engineers, user experience designers, end users, and so on. Hackathons have a tangible goal and several intangible goals. The tangible goal in the canonical case is a piece of software, but we can think of it more broadly as an artifact that has been tested and demonstrated to solve the problem that was set out at the beginning of the exercise. The intangible goals often include learning how to work in a cross-functional team, learning how to solve difficult problems with unexpected wrinkles, and learning particular craft-related skills necessary to solve the problem (e.g., programming tricks or software testing techniques).
This is a good model for the kind of culture building that EEP has always aspired to achieve and that, I believe, inspired Carnegie Mellon to see us as a good fit for their own ambitions. While I want to be clear that I do not speak for them, my understanding of their goals from our conversations thus far is that it would be a mistake to interpret their primary goal to be broader adoption of their software and other tools. Sure, they want to see that happen. But my read is that they see that as a second-order effect, or maybe as a means to an end. What I hear from them in our conversations is that they really want to make their approach to improving education broadly accessible and meaningfully useful. They call that approach "learning engineering," which they seem comfortable with me characterizing as one flavor or methodology within a broader developing family that we call "empirical education." The hackathon works to support this goal because it creates an environment in which people habitually self-organize in cross-functional groups to improve educational design in ways that empower greater student success. It brings together the right people around the right kinds of goals and conversations. If we can then empower them with the right tools and methods, we are on our way to promoting learning engineering. If we can achieve that, we can unlock the real power of the big release, which is to help democratize the science of education.
While I said I didn't want to dwell on our name choice here, it's probably worth spending a little time on the word "design" in the way we are using it in EDwhy. A number of different overlapping but distinct stakeholder groups in academia tend to compete for mindshare around this word—Design Thinking practitioners, Instructional Designers, Learning Designers, User Experience Designers, and others. Making sense of how these all connect yet are distinct from each other is non-obvious even before we get to culturally local differences in usage. To give one example, Herb Simon, in addition to being the father of Learning Engineering, is considered by some to be the grandfather of Design Thinking. These are two compatible but distinct and non-interchangeable disciplines. In most places outside of Carnegie Mellon, their practitioners tend to be either completely ignorant of each other or find themselves cast as rivals in educational solution design.
"Design" in the EDwhy context is a holistic and colloquial term meaning, simply, the way you decided to put something together. A cross-functional EDwhy hackathon team might include people with knowledge of Design Thinking, Instructional Design, Learning Design, User Experience Design, and/or Learning Engineering. Who is at the table will depend on the specific nature of the challenge being tackled and the kinds of expertise needed to take it on.
At any rate, as we started thinking about how to help our network digest Carnegie Mellon's $100 million contribution—never mind the sum of all possible contributions from all current and future EEP participants—we started thinking about both the digestive process and coming up with a form that is digestible. Verbs and nouns.
The hackathon is the verb. Theoretically, the hackathon is flexible enough to allow for projects of different sizes and ambitions, whether inter- or intra-institutional. We still very much want to encourage inter-institutional collaboration, but one lesson we learned last year is that inter-institutional collaboration is incredibly hard, even with a lot of work done by third parties to lower barriers. We have to build a gentle slope toward that level of collaboration. The hackathon is a form that lets people start small and grow in ambition. At some point, they will outgrow the form and need to form something more like a traditional project with more formal management structures.
We aspire to reach the point where we have that problem. For now, we are focused on culture-building, and we hypothesize that the hackathon is a good ritual for accomplishing that while also delivering immediate educational utility.
The hackathon idea is simple enough to grasp in the abstract. The hard part is putting it together with the right packages that help people identify and solve new problems using the contributions from Carnegie Mellon or other participants. For this, we've developed the concept of an EDwhy "seed." This is one of the pieces I will want to workshop with the EEP cohort, but there's enough here conceptually that the general idea should be clear.
We start with a general area of interest where some research has been done but where there are more questions to be answered. For example (and as I mentioned earlier), Ken Koedinger and his CMU colleagues have done some research into something called "the doer effect." It means pretty much what it sounds like. The researchers were able to demonstrate, using solid quantitative methods, that learning by doing is about six times more effective than, for example, learning by watching a video.
(Side note for all you liberal arts folks out there who are suspicious of this data stuff: This study more or less just made the case for constructivism. Using numbers and computers and statistics and stuff.)
That's an interesting finding, if not a shocking one, but it also highlights a lot that we don't know. For example, is doing always better than watching a video (or reading) for learning? Should we throw out all books and videos? If not, then how much watching or reading is good? In what order? Does the subject matter make a difference? The expertise of the learner? Other characteristics of the learner? Other characteristics of the overall course design? Or course goals?
Let's make this more concrete. One of my favorite course designs is Habitable Worlds by ASU's Ariel Anbar. There is a lot of learning by doing in that problem-based course, but also liberal use of video. It would be interesting to do some testing and experimentation to find out how to make the most out of the doer effect and find the optimal balance of the course elements.
As it turns out, Carnegie Mellon's contributions include the software that was used to conduct the original doer effect research. (The IHE article mentions LearnSphere. Spend a little time exploring that site if you're curious.) That software includes a data repository with access to (appropriately anonymized) data that could be used to replicate the results (or try to run different analyses on the data), a visual workflow that makes the study easily repeatable with different data, and access to the underlying R packages (for those who can understand them) to make the research methods completely transparent. If you put together the original studies, the software, the workflows, the data to practice reproducing the results, and the transparency of the methods, and wrap in some documentation, some training, and a number of suggested starter questions for investigation, you have a seed. A self-organizing community could take up that seed and develop a hackathon project. If there were also a community forum where the hackathon group could ask questions of statisticians, cognitive psychologists, psychometricians, and technical support folks, as well as share lessons learned with each other, then you could really have something.
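To make "replicating the analysis" a little less abstract: the real doer effect studies live in LearnSphere's R workflows, but here is a purely illustrative sketch, in Python, of the general shape of such an analysis. Everything in it is invented for illustration — the generative model, the 6-point-per-practice and 1-point-per-video effects, and the student counts are made-up numbers, not CMU's data or methods. It simulates a cohort, then recovers the per-activity effects with an ordinary least squares regression solved by hand.

```python
import random

random.seed(0)

# Hypothetical generative model (NOT the actual CMU analysis): quiz score
# improves ~6 points per practice activity done and ~1 point per video
# watched, echoing the rough "six times more effective" doer-effect ratio.
def simulate_student():
    practice = random.randint(0, 20)   # practice activities completed
    videos = random.randint(0, 20)     # videos watched
    score = 40 + 6 * practice + 1 * videos + random.gauss(0, 2)
    return practice, videos, score

students = [simulate_student() for _ in range(500)]

def ols(rows):
    """Fit score ~ intercept + practice + videos by solving the 3x3
    normal equations (X^T X) b = X^T y with Gauss-Jordan elimination."""
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for p, v, y in rows:
        x = (1.0, p, v)                # predictors: [1, practice, videos]
        for i in range(3):
            xty[i] += x[i] * y
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
    for col in range(3):
        # Partial pivoting, then eliminate the column everywhere else.
        pivot = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(3):
            if r != col:
                f = xtx[r][col] / xtx[col][col]
                for c in range(3):
                    xtx[r][c] -= f * xtx[col][c]
                xty[r] -= f * xty[col]
    return [xty[i] / xtx[i][i] for i in range(3)]

intercept, doing_effect, watching_effect = ols(students)
print(f"per-practice gain: {doing_effect:.1f} points")
print(f"per-video gain:    {watching_effect:.1f} points")
print(f"estimated doer-effect ratio: {doing_effect / watching_effect:.1f}x")
```

A hackathon team working from the real seed would do the analogous thing against LearnSphere's anonymized datasets and published workflows, which is where the interesting follow-up questions (moderators, orderings, subject-matter differences) would come in.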
I'm guessing the net result might turn out to be what we would call an "intermediate" seed. Not every team would have the capability to self-organize around something this complex. We'd like to develop beginner, intermediate, and advanced level seeds, where beginner seeds are approachable by non-technical groups, intermediate seeds might require some technical skill and some knowledge of experimental design, and advanced seeds are really for folks who have some serious specialist expertise in their groups. I'll defer on the final difficulty ratings of each seed, including the one I just described, to the creators and the early adopters. One skill set we will be learning in the EDwhy experiment is how to package up a seed to make it accessible and useful to different sorts of audiences. Eventually, we may develop profiles of hackathon teams that are richer than just beginner/intermediate/advanced.
At any rate, our goal for the year is to prove out and refine the approach through some pilot seeds and hackathons. We don't imagine that we will be able to address the entire surface area of Carnegie Mellon's $100 million contribution in the one-year time frame, but we do aspire to prove out a novel and sustainable support and diffusion mechanism, not only for the software but for the methods and the culture. And during this time, we will also invite other EEP members to develop and contribute their own seeds, some of which will be less technical or will tackle entirely different types of educational problems than Carnegie Mellon's seeds will. This is a general mechanism we will be trying out. Interestingly, another arrow that CMU has in its quiver is the Open Learning Initiative (OLI) authoring and delivery platforms. So we may very well find that their contribution to seed development goes well beyond the open source software code, which I think is the way in which people are naturally tending to think about the contribution at this early stage in the process.
Both learning and science—or any path to enlightenment, really—start with a simple admission: "There is so much that I don't know, and so much that I would like to understand better." Big announcements like this generally run against the grain of that admission. We have an ingrained cultural notion that, after spending $100 million, you are supposed to know all the answers. After spending seven years in graduate school, you are supposed to know all the answers. After getting all the press and all the buzz, you are supposed to know all the answers.
Nope. Sorry. It doesn't work that way.
There is so much that we don't know, and so much that we would like to understand better. If you keep repeating that mantra to yourself every time you hear something new about Carnegie Mellon's contribution or about EEP or the EDwhy initiative, each new piece of information will make a lot more sense to you.