
Fang ’26: Brown should drop the U.S. News & World Report’s flawed ranking system

On Jan. 17, Harvard Medical School announced its withdrawal from the U.S. News & World Report medical school rankings, following Harvard Law School to become the university’s second graduate school to boycott the list. In a letter to the HMS community, Dean George Daley wrote that the “rankings cannot meaningfully reflect the high aspirations for educational excellence, graduate preparedness and compassionate and equitable patient care” that HMS strives for. HMS is far from the only school that has chosen to withdraw from the rankings — medical schools at Penn, Columbia and Stanford have done the same. And law schools at Yale, Stanford, the University of California at Berkeley, Columbia and Georgetown have all announced their departure from the rankings, citing reasons similar to those HMS gave for pulling out.

USNWR’s rankings can be a source of pride for top-ranked universities and an influential guide for prospective students. I was swayed by them myself while searching for good undergraduate public health programs. However, the USNWR rankings have fundamental methodological and philosophical flaws that undermine their usefulness. As a top-ranked school itself, Brown should follow its peer institutions’ lead at the graduate level — and lead among its undergraduate counterparts — by moving away from this imperfect ranking system.

Researchers have long criticized USNWR’s ranking methodology. In a 2019 article, Northwestern medical professor William McGaghie wrote that USNWR’s ranking of medical schools creates “distinctions without differences.” McGaghie finds that the main criteria for medical school rankings consist of National Institutes of Health research activity per faculty member, incoming medical students’ undergraduate GPA and MCAT scores, acceptance rates and a “quality assessment” based on surveys sent to the staff of peer medical schools. Oftentimes, these staff are handpicked by the medical school being rated.

USNWR ranks schools on easily measurable factors that yield little meaningful information about the quality of a school. In the case of medical school rankings, USNWR’s scoring system is a poor predictor of important outcomes like clinical skill acquisition in residency. Instead, its focus on historical prestige and exclusivity promotes the false notion that an attractive academic pedigree inevitably leads to career success. A study conducted by medical education researchers at the University of Michigan showed that students who attended higher-ranked medical schools did not necessarily possess better clinical skills in residency. Rankings based on acceptance rates and crude variables such as student GPA or SAT and MCAT scores also reveal little about the quality of education or the diversity of a school’s students.


McGaghie offers options for more meaningful measures of medical school quality in his article, such as focusing on the impact of research rather than the quantity of research funding, identifying how graduates perform on professional competency assessments such as the Accreditation Council for Graduate Medical Education milestones, following up with graduates later in their careers to measure professional satisfaction and using survey data from graduates on the education and career readiness their school provided. Though these variables would be more difficult to measure than average MCAT scores or acceptance rates, they offer much more meaningful information to students. McGaghie’s measures focus on a school’s fit for a student and its ability to create successful doctors, not the pedigree associated with a school’s name.

The USNWR ranking framework also squares poorly with the mission of higher education, especially for medical schools. For example, the mission of Brown’s Warren Alpert Medical School is “to support and promote the health of individuals and communities through innovative medical education programs, research initiatives and clinical excellence in service to society and to improve the health and wellness of all.” Ranking schools from “best” to “worst” is far too simplistic to capture the important nuances of medical education, and it advances the idea that all medical schools share the same priorities. In reality, schools, especially ones located in small communities, each have unique missions. One of the University of California, San Francisco Medical School’s main missions, for instance, is to advance industry-changing research, while the University of Texas Rio Grande Valley is more focused on researching and reducing local health disparities. Homogenizing these institutions through arbitrary metrics encourages schools to focus on boosting their rankings rather than pursuing specific, community-oriented goals.

The ranking system for undergraduate institutions is flawed in similar ways. Twenty percent of a college’s USNWR score is based on similar peer assessment surveys, in which university admissions officers and administrators rate other schools in what amounts to little more than a popularity contest; schools with long histories and strong reputations are more likely to receive high rankings than smaller, newer and less prominent schools that are not necessarily of any lower quality. The mission of many colleges to serve the community and the world by educating and preparing students for their future careers is undermined by a ranking system that largely defines the quality of a college by its prestige and its students’ GPAs, test scores and acceptance rates. But unlike their medical counterparts, elite undergraduate institutions have not left the rankings.

By leaving the rankings and focusing on more relevant metrics for evaluation, institutions would have incentives to improve the things that really matter, such as student and alumni satisfaction, community engagement and professional readiness. Prioritizing such measures of quality would help students choose the schools that best suit their needs. And even without rankings, quantitative measures can still help students compare schools. There is no reason that a university could not continue to publish data on its applicants and matriculants. Students and parents could still use this data to make thoughtful decisions about school fit without the distractions associated with USNWR rankings.

Brown could adopt a similar approach by withdrawing from the USNWR rankings — both the medical school rankings and the undergraduate rankings — while still providing raw admissions data about demographics, acceptance rates and state of legal residence of applicants and matriculants on its website. After all, the fit of any school for a student is simply too complex and individual to be reduced to a flawed ranking system.

Juliet Fang ’26 can be reached at juliet_fang@brown.edu. Please send responses to this opinion to letters@browndailyherald.com and other op-eds to opinions@browndailyherald.com.



Juliet Fang is a second year at Brown studying Ecology and Evolutionary Biology. In her free time, she enjoys running, cycling, and watching duck videos.


