What’s Wrong with College Rankings?

Education professionals often profess to loathe college rankings. In fact, a growing number of colleges now refuse to supply data to U.S. News. Sometimes, one suspects, it’s because their school didn’t rank as well as they thought it should. Yet some of the non-participants have fared very well in the rankings fray. There are, indeed, many legitimate reasons to question the value of college rankings. Here are a few of them:

Can Colleges be Ranked At All? A modern college or university is an incredibly diverse and multifaceted collection of people, activities, and goals. Basic instruction is taking place, but so are research, athletics, extracurriculars, community interaction, artistic performance, socializing, and much else. A college can be distinguished from its peers by many factors: faculty, financial strength, name recognition, the success and satisfaction of its graduates, the quality of its incoming students, its ability to succeed with marginal students, its physical facilities, the surrounding community, and so on. It can certainly be argued that it is simply impossible to reduce all of these factors to a single, one-dimensional ranking that says “College A is better than College B.”

Rank vs. Individual Fit. College admissions counselors universally agree that a school must “fit” the student in terms of academic environment, social environment, athletic and other extracurricular opportunities, urban or rural location, and so on. A good fit will result in a great college experience and, most importantly, maximum personal growth and achievement. Rankings become a negative influence when students or parents look more at how highly a school is ranked than at how well it will serve that particular student’s needs.

False Precision. Most ranking systems assign numeric weights to various measures and then combine a school’s performance in each area with the assigned weighting to produce a composite score. These composite scores can then be used to rank schools in an apparently unbiased and quantitative manner. There is more than one fly in this ointment, however. First, the selection of factors to be counted and the weighting applied to each factor must be determined, more or less arbitrarily, by the people doing the ranking. This is quite evident from the U.S. News rankings, in which schools have been suddenly elevated or downgraded by a change in factors or weights.
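
To make the mechanics concrete, here is a rough sketch (in Python) of how such a composite score is typically assembled. The factors, weights, and scores are invented purely for illustration and do not reflect any actual ranking formula.

    # Illustrative only: hypothetical factors, weights, and scores.
    # A school's composite is simply the weighted sum of its factor scores.
    weights = {
        "graduation_rate": 0.35,
        "peer_reputation": 0.25,
        "faculty_resources": 0.20,
        "selectivity": 0.20,
    }

    schools = {
        "College A": {"graduation_rate": 92, "peer_reputation": 88,
                      "faculty_resources": 85, "selectivity": 90},
        "College B": {"graduation_rate": 90, "peer_reputation": 91,
                      "faculty_resources": 87, "selectivity": 88},
    }

    def composite(scores):
        # Weighted sum; the choice of weights is entirely up to the ranker.
        return sum(weights[factor] * scores[factor] for factor in weights)

    for name in sorted(schools, key=lambda n: composite(schools[n]), reverse=True):
        print(f"{name}: {composite(schools[name]):.2f}")

With these made-up numbers the two schools end up a fraction of a point apart, and nudging any single weight by a few hundredths can reverse their order. That is the arbitrariness described above.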

Even agreeing on which factors should be used can be difficult. For years, U.S. News has used “retention rate”—the percentage of students who return after freshman year, and the percentage who graduate within a specified number of years—as a key measure. Certainly, one would like to attend a university where most of the students who start there actually graduate. However, a school with rigorous grading standards that does not allow professors to inflate the grades of poorly performing students might actually suffer in this statistic, even though it is arguably “better” than a less rigorous school. Similarly, a school with a policy of admitting marginal students would also be hurt by a retention-rate statistic. Even if such a school turned half of its marginal high schoolers into star performers, it would probably have far more dropouts than a school with a more pre-qualified student body. Thus, a statistic that looks like a great measure of school desirability at first glance turns out, on closer analysis, to be much more ambiguous. Rankings leave little room for ambiguity, footnotes, and detailed qualitative comparisons.

Moreover, the use of a numeric composite score implies a precision that simply doesn’t exist. For years, U.S. News would rank a school that scored “94.2” higher than one that scored “94.1.” In recent years, the trend has been toward more grouping, so that schools that score about the same are listed at the same rank. Even these slightly larger groupings, though, suggest too much precision—a “93” still outranks a “92.” Some experts have suggested very broad groupings of schools for ranking purposes. Unfortunately, this would take much of the competition out of the rankings and would no doubt sell far fewer magazines and books. Furthermore, there would still be problems with schools being relegated to a lower category because they just missed a quantitative cutoff for the next level up.

College rankings are a lot like sausage—the finished product can be appealing, but most people don’t want to see what goes into the manufacturing process.

Dubious Data. Much of the information that goes into college rankings is provided by the schools themselves. To their credit, ranking compilers usually go to great lengths to obtain data that is both accurate and comparable from school to school. Nevertheless, creative administrators have occasionally found ways to report their data so as to make their institution look better. SAT scores, for example, would appear to be the most unequivocal of all statistics. Even so, some schools have submitted data that excludes scores from “special admissions” (athletes, students identified as learning disabled, and so on). One school reportedly left out the verbal scores of international students but kept the math scores. When schools adjust their data without making their adjustments clear to the rankers, it makes the rankings questionable (since other schools did not make those same adjustments) and the overall data less useful.

Negative College Behavior. As noted in our discussion of the positive aspects of rankings, schools will sometimes attempt to improve situations that hurt their rankings. The dark side of this effect, though, is that colleges can find other ways to manipulate their numbers; these methods don’t improve the school in any real way and may actually harm some students or applicants. An example is the recent trend for schools to reject, or place on the waiting list, applicants who appear to be “overqualified.” Why would a school not accept an outstanding applicant who falls into the top 5% of its applicant pool? It’s simple—the admissions committee believes that applicant will almost certainly gain admission to a more prestigious school and will be unlikely to accept the school’s own offer. By not extending offers of admission to these low-probability students, the school sends out far fewer total acceptances while losing very few actual matriculants. This improves the “yield” (the percentage of admitted students who accept offers of admission). These statistical changes also make the school look more “selective” and, in turn, boost the school’s ranking.
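
The arithmetic behind this maneuver is simple. Here is a small worked example; the applicant counts and enrollment rates are invented solely for illustration.

    # Invented numbers, purely to illustrate the yield arithmetic.
    # yield = matriculants / acceptances; acceptance rate = acceptances / applicants
    applicants = 10_000

    # Scenario 1: admit the 2,000 strongest applicants. Assume only 10% of the
    # 400 "overqualified" admits enroll, while 35% of the other 1,600 do.
    acceptances_1 = 2_000
    matriculants_1 = 40 + 560    # 10% of 400, plus 35% of 1,600
    print("Admit everyone:  ",
          "yield =", matriculants_1 / acceptances_1,
          "| acceptance rate =", acceptances_1 / applicants)

    # Scenario 2: waitlist the 400 overqualified applicants instead. Only 1,600
    # acceptances go out, and almost no matriculants are lost.
    acceptances_2 = 1_600
    matriculants_2 = 560
    print("Waitlist the top:",
          "yield =", matriculants_2 / acceptances_2,
          "| acceptance rate =", acceptances_2 / applicants)

In this hypothetical, yield rises from 30% to 35% and the acceptance rate falls from 20% to 16%, even though the entering class is essentially unchanged.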

Another negative behavior that rankings can encourage is increased reliance on early decision (ED) candidates. Since ED candidates are committed to attending the school if accepted, their yield rate is close to one hundred percent. Fewer regular decision applicants, with their much lower yield rate, need to be accepted. Again, this makes the school look more selective and boosts rankings, even though most college counselors think early decision is not the best approach for many high school seniors, especially those requiring financial aid.
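
A quick back-of-the-envelope calculation (again with invented yield rates and class size) shows why the incentive exists.

    # Invented figures; illustrates why filling more of the class through early
    # decision (ED) flatters a school's overall yield.
    class_size = 1_000
    ed_yield = 0.95          # nearly all ED admits are bound to enroll
    rd_yield = 0.30          # regular decision admits often go elsewhere

    def overall_yield(ed_share):
        # ed_share = fraction of the entering class filled through ED
        ed_matriculants = ed_share * class_size
        rd_matriculants = class_size - ed_matriculants
        total_admits = ed_matriculants / ed_yield + rd_matriculants / rd_yield
        return class_size / total_admits

    print("20% of class via ED:", round(overall_yield(0.20), 2))
    print("50% of class via ED:", round(overall_yield(0.50), 2))

Under these assumptions, moving from 20% to 50% of the class admitted through ED lifts overall yield from roughly 35% to roughly 46%, with no change at all in the quality of the school or its entering class.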