Russ Poulin at WCET has a handy summary of the new College Scorecard produced by the Education Department (ED) and the White House. This is a "first read" given the scorecard's Friday release, but it is quite valuable since Russ participated on an ED Data Panel related to the now-abandoned Ratings System, the precursor to the Scorecard. Russ describes the good, the "not so good", and the "are you kidding me?" elements. One area in particular highlighted by Russ is the usage of the "dreaded first-time, full-time completion rates":
I knew this would be the case, but it really irks me. Under current data collected by the Department’s IPEDS surveys, they define the group on which they base their “Graduation Rate” as: “Data are collected on the number of students entering the institution as full-time, first-time, degree/certificate-seeking undergraduate students in a particular year (cohort), by race/ethnicity and gender; the number completing their program within 150 percent of normal time to completion; the number that transfer to other institutions if transfer is part of the institution’s mission.”
This rate has long been a massive disservice to institutions focused on serving adults and community colleges. Here are some example rates: Empire State: 28%, Western Governors University: 26%, University of Maryland University College: 4%, Charter Oak Colleges: no data, and Excelsior College: no data. The problem is that these numbers are based on incredibly small samples for these schools and do not reflect the progress of the bulk of the student body.
I won’t quote data for community colleges because they are all negatively impacted. They often serve a large number of students who are not “first-time” or define “success” in other ways.
I know that they are working on a fix to this problem in the future. Meanwhile, who atones for the damage this causes to these institutions’ reputations? This data display rewards colleges that shy away from non-traditional or disadvantaged students. Is this what we want?
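Russ's point about small samples is easy to see with a quick back-of-the-envelope calculation. The sketch below uses hypothetical enrollment figures (not actual IPEDS or UMUC data) to show how a graduation rate computed only on the first-time, full-time (FTFT) cohort can describe a tiny, unrepresentative slice of an adult-serving school:

```python
def ipeds_style_rate(ftft_cohort: int, ftft_completers: int) -> float:
    """Completion rate over the FTFT cohort only, as IPEDS reports it."""
    return ftft_completers / ftft_cohort

# Hypothetical adult-serving institution: 20,000 students,
# but only 400 of them (2%) enter as first-time, full-time.
total_students = 20_000
ftft_cohort = 400       # the only students the published rate counts
ftft_completers = 16    # FTFT students finishing within 150% of normal time

rate = ipeds_style_rate(ftft_cohort, ftft_completers)
share = ftft_cohort / total_students
print(f"Reported graduation rate: {rate:.0%}, "
      f"based on just {share:.0%} of the student body")
# → Reported graduation rate: 4%, based on just 2% of the student body
```

The headline number (4%) says nothing about the other 98% of students, which is exactly the distortion Russ describes for schools like UMUC.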
Russ is not the only one noting this problem. Consider this analysis from Friday [emphasis added]:
The most commonly referenced completion rates are those reported to IPEDS and are included on the College Scorecard (measuring completion within 150 percent, or six years, for predominantly four-year colleges; and within four years for predominantly two- or less-than-two-year schools). However, they rely on a school’s population of full-time students who are enrolled in college for the first-time. This is increasingly divergent from the profile of the typical college student, particularly at many two-year institutions and some four-year schools. For instance, Marylhurst University in Oregon, a four-year institution that has been recognized for serving adult students, reportedly had a 23 percent, six-year completion rate – namely because a very small subset of its students (just one percent) fall in the first-time, full-time cohort used to calculate completion rates. As with many schools that serve students who already have some college experience, this rate is, therefore, hardly representative of the school’s student body.
Who wrote this critical analysis, you ask? The Education Department itself, in its own Policy Paper on the College Scorecard (p 17). Further down the page:
The Department has previously announced plans to work with colleges and universities to improve the graduation rates measured by the IPEDS system. Beginning in 2016, colleges will begin reporting completion rates for the other subsets of their students: first-time, part-time students; non-first-time, full-time students; and non-first-time, part-time students. In the meantime, by using data on federal financial aid recipients that the Department maintains in the National Student Loan Data System (NSLDS) for the purposes of distributing federal grants and loans, we constructed completion rates of all students receiving Title IV aid at each institution. For many institutions, Title IV completion rates are likely more representative of the student body than IPEDS completion rates – about 70 percent of all graduating postsecondary students receive federal Pell Grants and/or federal loans.
Given concerns about the quality of historical data, these NSLDS completion rates are provided on the technical page, rather than on the College Scorecard itself.
In other words, ED is fully aware of the problems of using IPEDS first-time, full-time completion data, and they have plans to improve the data, yet they chose to make fundamentally flawed data a centerpiece of the College Scorecard.
Furthermore, the Policy Paper also addressed the need to understand transfer rates and not just graduation rates (p 18) [emphasis in original]:
The Administration also believes it is important that the College Scorecard address students who transfer to a higher degree program. Many students receive great value in attending a two-year institution first, and eventually transferring to a four-year college to obtain their bachelor’s degrees. In many cases, the transfer students do not formally complete the two-year program and so do not receive an associate degree prior to transferring. When done well, with articulation agreements that allow students to transfer their credits, this pathway can be an affordable and important way for students to receive four-year degrees. In particular, according to a recent report from the National Center for Education Statistics (NCES), students were best able to transfer credits when they moved from two-year to four-year institutions, compared with horizontal and reverse transfers.
To address this important issue, ED put the transfer data they have not on the consumer website but on the technical and data site (massive spreadsheets, data dictionaries, and crosswalks, all found here). Why did they not make this data easier to find? The answer is in a footnote:
We hope to be able to produce those figures for consumers after correcting for the same reporting limitations as exist for the completion rates.
To their credit, ED does address these limitations thoroughly in the Policy Paper and the Technical Paper, but very few people will read them. The end result is a consumer website that is quite misleading. Knowing all the problems with the data, this is what you see for UMUC.
Consider what prospective students will think on seeing this page: "UMUC sucks; I'm likely never to graduate."
UMUC points out in this document that less than 2% of their student body are first-time full-time, and that the real results paint a different picture.
Consider the harm done to prospective UMUC students by seeing the flawed, over-simplified ED College Scorecard data, and consider the harm done to UMUC as they have to play defense and explain why prospects should see a different situation. Given the estimate that non-traditional students - those who would not be covered at all in IPEDS graduation rates - comprise more than 70% of all students, you can see how UMUC is not alone. Community colleges face an even bigger problem with the lack of transfer rate reporting.
And this is how the ED is going to help consumers make informed choices?
Count me as in agreement with Russ in his conclusions:
The site is a good beginning at addressing the needs of the traditional student leaving high school and seeking a college. It leaves much to be desired for the non-traditional students who now comprise a very large portion of the college-seeking population.
I applaud the consumer-focused vision and hope that feedback continues to improve the site. I actually think this could be a fantastic service. I just worry that in the haste to get it out, we did not wait until we had the data to do it correctly.