Why the Retention Early Warning Critics Are Wrong

One criticism I consistently hear when talking about retention early warning systems is that they may provide value for the university but mostly don't for the student. The university benefits by retaining the student because it gets more tuition. But, the argument goes, the student may have all kinds of valid reasons for dropping out of a course or a program. Furthermore, retention and learning have no necessary relationship, they argue. You can stay in school and still not get anything of value out of it. The (usually implicit) conclusion from these arguments is that retention systems are nothing more than Big Brother tools for squeezing more money out of hapless students.

There's something a little odd about this argument even on the face of it. No early warning system is forcing students to stay in school. The students must somehow be complicit in any impact on retention. Even so, I have never been one to dismiss the criticism out of hand. However, after spending a half day in an EDUCAUSE seminar by John Campbell and Kim Arnold about Purdue's early warning system, I can say with great confidence that the critics are missing the boat—at least with respect to Purdue's approach, and probably in general.

Let me start by acknowledging that "retention early warning system" may not be an entirely accurate name for the Purdue system. "Retention" usually means the ability to keep students from dropping out of school. It is an inter-course measure rather than an intra-course measure. And while the Purdue folks do have the ultimate aim of moving the needle on inter-course measures, their early warning system actually focuses on intra-course measures. The theory is that if they can keep students from failing their individual classes, then they are more likely to keep them from dropping out or failing out. That said, my personal sense is that most effective retention early warning systems are going to have to focus on intra-course measures as Purdue has.

The Purdue model breaks down student risk into three factors: preparation, effort, and help-seeking behavior. "Preparation" is really about how much at risk the student is when she walks in the door on the first day of class. Does she have a history of success in school? Does she have the prerequisites for the class? Academic analytics systems will look at things like past grades, test scores, class rank, course transcript, etc., to come up with some sort of preparation score, not unlike a credit score. This score becomes a modifier for the sensitivity of the early warning system. For example, a student who has failed the same class twice before should probably be monitored more vigilantly than one who has not. "Effort" is defined by things like the student's grades in the class, the frequency of logins to the LMS, participation in class discussion, history of turning in assignments and completing tests on time, etc. "Effort" is a bit of a misnomer, since grades, for example, aren't direct proxies for effort. What you really are asking is whether the student is completing the tasks necessary to pass the course. "Help-seeking behavior" is fairly obvious. Has the student gone to office hours? Has she gone to the tutoring center? Help-seeking behavior doesn't matter that much if the student is doing well in "effort"; a straight-A student doesn't need to go to the tutoring center. So these three variables interact with each other in relatively complex ways. What we do know is that a student who isn't doing well and isn't getting help isn't likely to pass. We also know that a student who has a history of not doing well is more likely to fall through the cracks than one who is generally successful in school.
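To make the interaction of the three factors concrete, here is a minimal sketch of how such a score might be combined into a stoplight rating. The factor names come from the description above, but the weights, thresholds, and 0-to-1 scoring scale are entirely my own invention for illustration—this is not Purdue's actual algorithm, which was not presented at that level of detail.

```python
# Hypothetical three-factor risk model. Factor names follow the Purdue
# description; all weights and thresholds are invented for illustration.

def risk_light(preparation, effort, help_seeking):
    """Combine three factor scores (each 0.0-1.0, where 1.0 = low risk)
    into a green/yellow/red stoplight rating."""
    # Help-seeking matters less when effort is strong: a student who is
    # completing the work doesn't need the tutoring center.
    help_weight = 0.3 * (1.0 - effort)
    effort_weight = 1.0 - help_weight
    in_course = effort_weight * effort + help_weight * help_seeking

    # Preparation acts as a sensitivity modifier: weaker preparation
    # makes the same in-course performance trip a warning sooner.
    sensitivity = 0.5 + 0.5 * preparation  # 0.5 (high risk) to 1.0 (low risk)
    score = in_course * sensitivity

    if score >= 0.6:
        return "green"
    if score >= 0.35:
        return "yellow"
    return "red"
```

For example, under these made-up numbers, a well-prepared student doing solid work rates green even with no help-seeking, while a poorly prepared student doing the same mediocre work rates worse than a well-prepared one—which captures the "failed the class twice before" intuition from the text.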

Once they have analyzed these factors, the Purdue folks do several things. First, they generate a stoplight rating for each student: green, yellow, or red. So each student has a visual indicator of whether she is at risk and to what degree. In their current system, the Purdue team updates the indicators on a weekly basis. Second, they send out messages to the student of increasing urgency, depending on whether the student is in the yellow or in the red and on how late in the semester it is. The messages come by email and text, and in situations that have been escalated because, say, a student is in the red and it's getting close to the point of no return, the student may get a phone call. The messages are carefully calibrated to encourage help-seeking behavior, and they are tested to see what language and which modality gets the best results. In extreme cases, students are actually encouraged to drop the course if they have no chance of passing and will lose their tuition money if they stay and fail. But this is an exception; the goal is to help the students succeed. Notice that this is very similar to what good teachers do with their students anyway. They let them know if they are in trouble, they escalate the seriousness of the warnings over time, and they encourage students to get help. The main difference is that this system does so very consistently and in a timely manner, which is really tough for a human to do if that human has even a moderately heavy teaching load.
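The escalation logic described above can be sketched roughly as follows. Again, this is my own hypothetical reconstruction: the specific channels, week numbers, and deadline thresholds are assumptions for illustration, not details of Purdue's actual rules.

```python
# Hypothetical escalation logic for the stoplight-driven interventions.
# Channels and timing thresholds are invented for illustration.

def choose_intervention(light, week, weeks_until_drop_deadline):
    """Pick a (channel, tone) pair based on the stoplight rating and
    how late in the semester it is; return None if no message is needed."""
    if light == "green":
        return None  # on track, no intervention
    if light == "red" and weeks_until_drop_deadline <= 2:
        # Near the point of no return: escalate to a personal phone call,
        # possibly advising a drop to preserve tuition and GPA.
        return ("phone", "urgent")
    if light == "red":
        return ("email+text", "urgent")
    # Yellow: a gentle nudge early in the term, a firmer one later.
    return ("email", "gentle" if week <= 6 else "firm")
```

The design point the sketch tries to capture is the one made in the text: the rules themselves are what a good teacher already does informally, but encoding them means every at-risk student gets the warning every week, regardless of how busy the instructor is.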

The results of the program are impressive. Students tend to get more B's and C's, with fewer D's and F's. (A's don't seem to be affected. More on that shortly.) There is also an increase in the number of drops and withdrawals, which you would expect as part of the reduction in F's. (Failing students drop out and preserve their tuition and GPA before they fail out.) A high number of yellow light students convert to green. There is some movement among the red light students as well, although less than among the yellows. There is also a very significant increase in help-seeking behavior. Most impressively, the effects seem to be lasting. Students who go from yellow to green tend to stay green. Help-seeking behavior continues among students after the warning messages stop. There is anecdotal evidence that the academic improvement carries over both into other classes without this early warning system during the same semester and into classes in subsequent semesters.

So what does this mean? It appears to mean that moderately at-risk students lack the skills to know when they are in trouble and seek appropriate help. When they are provided with increased feedback, they can learn these skills. Once they have done so, they continue to be more successful in school. This isn’t going to impact the very top students or the very bottom students, but it will have a big impact on the fat middle of the bell curve. These students want to succeed but don’t know how—not because they are not capable of doing the work but because they don’t have the school skills that they need. The Purdue retention early warning system is, in fact, a teaching tool. It teaches them how to monitor their progress and what to do when their progress is not adequate. This interpretation is further supported by the overwhelmingly positive feedback from the students on how much the system helped them.


About Michael Feldstein

Michael Feldstein is co-Publisher of e-Literate, co-Producer of e-Literate TV, and Partner in MindWires Consulting. For more information, see his profile page.
This entry was posted in Ed Tech. Bookmark the permalink.