
Gelfond ’23: Brown should lead in advocacy against college rankings

The morning of Jan. 1, I woke up in a cold sweat. I had finished the majority of my regular decision applications (19 of them by that point). But something still worried me: What if I, god forbid, ended up at a “safety” school? I pulled up US News and World Report’s rankings, scrolled once more through the familiar list and went on to apply to four more schools ranked in the top 20. I valued this list nearly to the point of worship. In my view, an “elite” education was a prerequisite for happiness, career fulfillment, financial stability and so much more.


I can anecdotally confirm that my view of the college application process is not at all unusual. So I was pleased to see Johnny Ren’s poignant opinion in The Herald about “the persistent problem of college rankings.” But I would go a step further: College rankings are not just a problem for prospective students. They also undermine many of the ideals of higher education, both inside and outside of Brown. It is therefore in the University’s interest to take a stance against rankings and to advocate for a reduction of their importance.


After spending an embarrassing portion of my high school career engrossed in College Confidential and Reddit’s r/ApplyingToCollege, I was sold on a “shotgun” approach by the time I applied: send applications to as many top-ranked schools as possible in hopes of being admitted to one. The strategy rests on the premise that elite schools’ applicant pools are full of practically indistinguishable qualified students, all with glowing teacher recommendations, impressive extracurriculars and brilliantly crafted essays.


This approach is not entirely misguided. According to the Washington Post, Harvard applicants are rated on a scale of one to four on their academic, extracurricular, athletic and personal qualities. For the class of 2015, the average rating in the academic category was a two, defined as having “Summa cum laude potential.” Harvard’s admit rate in 2014 was 5.9 percent, far below the 42 percent of applicants who received an academic rating of one or two that year. The latter three categories are arguably more subjective: different admissions officers or committees could easily read the same student’s application and reach quite disparate conclusions. While the numbers will differ at other schools of similar caliber, it seems reasonable to expect the same phenomenon elsewhere. This supports the logic behind the shotgun method: the only way to maximize one’s chances of attending a top 20 school is to apply to as many as possible.
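To put numbers on that logic, consider a back-of-the-envelope sketch. It treats every application as an independent draw at Harvard’s 5.9 percent admit rate, a simplifying assumption of mine rather than anything the admissions data claims (real decisions are correlated, since the same strong or weak file goes to every school), but it captures why applicants feel pushed to apply widely:

```python
# Back-of-the-envelope model of the "shotgun" approach.
# Assumption (mine, not the article's): each application is an independent
# 5.9 percent chance, Harvard's 2014 admit rate. Correlated decisions in
# real life would shrink these gains.
p_admit = 0.059

for n in (1, 5, 10, 20):
    # P(at least one admit) = 1 - P(rejected everywhere)
    p_any = 1 - (1 - p_admit) ** n
    print(f"{n:2d} applications -> {p_any:5.1%} chance of at least one admit")
```

Under that generous independence assumption, 20 applications turn a roughly 6 percent shot into a roughly 70 percent one, which is exactly the arithmetic the shotgun method banks on.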


This process creates a vicious cycle for universities and students alike. Universities are incentivized to pursue actions that artificially raise their ranking, often sacrificing legitimate scholarship in the process. Students are thus encouraged to apply to more schools and to value rankings as essential, possibly attending schools that are poor fits for them. As more and more people adjust their behavior to fit within the structure of college rankings, the rankings gain importance, power and influence, creating a self-perpetuating cycle whose only beneficiary is the rankings’ own prestige.


Many rankings use easily manipulated metrics to approximate qualities that cannot be quantified. According to the widely used Academic Ranking of World Universities, 40 percent of a school’s score is based entirely on its number of highly cited researchers and its number of staff members who have won Nobel Prizes and Fields Medals. This methodology obviously favors larger institutions with larger faculties. Moreover, it equates citations and prizes with quality of instruction, ignoring the deep differences between research and teaching. The same trend holds in other research-focused rankings: the QS World University Ranking weights citations per faculty member at 20 percent of the score, and the Wall Street Journal/Times Higher Education College Ranking weights research papers per faculty member at 8 percent.
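For illustration, a score of this kind is just a weighted sum of a handful of countable metrics. The sketch below keeps only the 40 percent research-staff block cited above; the split into two 20 percent pieces, the lumped remainder and every metric value are my invention. It makes the size bias concrete: doubling the faculty roughly doubles the raw counts, and the score follows.

```python
# Illustrative weighted-sum ranking, loosely patterned on the ARWU weights
# cited above. Only the 40 percent staff/citation block comes from the
# article; the split, the remaining bucket and every value are invented.
WEIGHTS = {
    "highly_cited_researchers": 0.20,
    "nobel_fields_staff": 0.20,
    "everything_else": 0.60,  # teaching, outcomes, etc., lumped together
}

def composite_score(metrics):
    """Weighted sum of metrics, each assumed pre-scaled to 0-100."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Two hypothetical schools with identical per-capita quality; the second
# simply has twice the headcount, so its absolute counts double.
small = {"highly_cited_researchers": 30, "nobel_fields_staff": 10, "everything_else": 90}
large = {"highly_cited_researchers": 60, "nobel_fields_staff": 20, "everything_else": 90}

print(composite_score(small))  # 62.0
print(composite_score(large))  # 70.0
```

Nothing about teaching changed between the two hypothetical schools, yet the larger one climbs eight points: the size bias in miniature.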


In addition, rankings use faculty salary as a proxy for faculty quality, which raises another set of problems. Such rankings skew toward institutions with older faculty (whose earnings tend to be higher), overvalue institutions with strengths in more lucrative fields (e.g., economics) and presume a false relationship between faculty salary and performance.


I could go on. Many rankings take into account graduates’ salaries (which are shaped by the most popular fields of study at each institution) and spending per student, which accounts neither for how that money is spent nor for class size.


But the most egregious of all of these factors are reputation surveys, which every prominent ranking system employs. Because rankings themselves shape a university’s reputation, a system that rewards reputation entrenches a certain set of schools at the top. If US News and World Report gives a university a high rank, those subsequently surveyed will, because of the ranking’s influence, likely report that university as more prestigious, ensuring another high ranking the next year regardless of qualitative changes at the school.


In pursuit of this self-rewarding cycle, colleges try to “game” rankings: giving lower scores to peer institutions on reputation surveys, diverting resources from areas like financial aid to raise faculty salaries, growing in size to boost absolute research output, favoring early decision applicants to increase yield and putting more emphasis on test scores and class rank in admissions. These actions are a natural consequence of colleges’ desire for high rankings, and they hurt students and universities alike.


Writing in the Huffington Post, author Peter Sacks takes this argument a step further, indicting these rankings for exacerbating class inequality: A “ranking amounts to little more than a pseudo-scientific and yet popularly legitimate tool for perpetuating inequality between educational haves and have nots,” he writes. As Sacks explains, rankings allow people to distinguish and separate “the rich families from the poor ones, and the well-endowed schools from the poorly endowed ones.”


College rankings fail to capture what I love about Brown. I love our accessible professors, resources and opportunities. My education is enriched because of how individualized my experience is through the Open Curriculum. The focus on balance that so many of my classmates have and the University’s impressive resources for mental health have made my experience infinitely more joyful. I’m motivated by my professors who evidently care about teaching undergraduates. None of these things are captured in rankings, the very rankings that almost sent me elsewhere.


Rankings thus stand in the way of attracting the best students and professors, and they are liable to shift the administration’s priorities away from running a richly intellectual institution and toward chasing arbitrary prestige. Rankings are not only inaccurate; they undermine our institution and higher education as a whole. Because Brown has so many characteristics that can never be quantified, it is uniquely suited to advocate against rankings. It can start, as Ren hinted in his piece, by no longer publicizing rankings as any indication of our merit as an institution. The University should serve as a leader and an organizer in diminishing the power of such rankings and in actively standing against their influence.


Lucas Gelfond ’23 can be reached at lucas_gelfond@brown.edu. Please send responses to this opinion to letters@browndailyherald.com and op-eds to opinions@browndailyherald.com.


