There are more than 4,000 colleges and universities in the United
States, ranging from open-enrollment community colleges to highly
selective Ivy League institutions, from colleges with historical
missions to educate the underserved to universities exploring the
cutting edge of online education.
So—which college is the best in the nation?
Any attempt to rank colleges and universities faces a core tension:
acknowledging the full variety of colleges, each with its own role to
play and students to serve, while attempting to identify which schools
are most successful at leading their graduates to fulfilling lives and
careers. Yet despite this considerable hurdle, the number of outlets
publishing college rankings has grown in recent years and is likely
to keep growing as federal and state governments release more data on
postsecondary institutions. The rankings published by U.S. News & World Report
were the first, arriving on newsstands in 1983. But since then,
many other outlets have joined the fray, including Washington
Monthly, Money magazine, The Economist, The New York Times’
Upshot, Forbes, and several others.
Indeed, in 2013 President Obama announced that the U.S.
Department of Education would produce its own system for rating
higher education institutions, though notably this proposed
effort would only have given colleges a score, not ranked them as
the media lists do. But after nearly two years of discussions
with college administrators and researchers, the department
decided not to produce ratings, opting instead just to release
more data about these institutions, including the average salary
for students 10 years after they enrolled.
The department’s decision to abandon its ratings proposal, after
facing criticism from many parts of the higher education community,
demonstrates both how challenging it is to measure a college’s
performance and how consequential a particular rating or ranking can
be for an institution. The federal agency originally sought to use
the ratings as a tool for holding colleges more accountable, perhaps
even tying students’ eligibility for federal financial aid to how
well their school performed on the department’s measures. But if, for
example, a college’s graduation rate became a key part of the rating,
would two-year colleges or other institutions with high transfer
rates receive unfavorable ratings?
Even though colleges and universities typically do not face any
direct accountability for their ranking in the lists published by
media outlets, these lists are widely considered to have a
substantial impact on how some postsecondary institutions operate as
they pursue more favorable positions. From which types of students
they seek to enroll to how they allocate financial resources for
faculty and campus development, administrators and trustees often
must weigh how much a rise or fall on a list could affect their
institution’s reputation and revenue.
Journalists should approach writing about college rankings and
ratings with insight that goes beyond the rank or score a college
received—and whether it’s up or down this year—to offer analysis
on how and why the institution measured up that way in that
particular list. The best way to achieve these deeper
insights is to explore the data used to create the ranking. Most
college rankings start with the data available publicly through
the National Center for Education Statistics, the federal
agency that collects and analyzes these numbers. There is a
wealth of information available from this resource, but
journalists should note that for each institution, the reported
data are only for students who were first-time, full-time
enrollees in college. This means that for institutions that enroll
many adults returning to school (e.g., community colleges) or accept
many transfer students (e.g., regional public universities), the NCES
data might not accurately represent how well the college is serving
its students.
This note of caution is particularly relevant for the data
regarding graduation and retention rates.
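As an illustration of this kind of check, here is a minimal Python
sketch using pandas, assuming you have exported an institution-level
file from the NCES IPEDS data center as a CSV. The file name and
column names (institution_name, grad_rate_150,
pct_first_time_full_time) are hypothetical placeholders and should be
matched to the variables in your actual download.

```python
import pandas as pd

# Load an institution-level extract downloaded from the NCES IPEDS
# data center. The file name and column names below are placeholders;
# match them to the variables in your actual export.
df = pd.read_csv("ipeds_institutions.csv")

# Graduation rates in IPEDS cover only first-time, full-time students,
# so flag schools where that cohort is a small share of enrollment:
# their published rate may say little about most of their students.
flagged = df[df["pct_first_time_full_time"] < 50]

print(f"{len(flagged)} institutions where under half of entering "
      "students are first-time, full-time:")
print(flagged[["institution_name", "grad_rate_150",
               "pct_first_time_full_time"]]
      .sort_values("pct_first_time_full_time")
      .head(10)
      .to_string(index=False))
```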
College rankings increasingly use data that reflect the earning
potential of students at each institution. In the fall of 2015, the
federal government’s College Scorecard for the first time added data
on the average income of students at each college 10 years after they
enrolled. These data are a boon
for researchers, rankers, journalists, and—of course—students and
families. It should be noted, however, that these data are for
all students who enrolled that year, regardless of whether they
graduated. For colleges in which many students drop out or
transfer before earning a degree, the earnings number reported on
the Scorecard is likely lower than it might be for actual
graduates of that institution. These Scorecard data are also for the
institution overall, not program by program, so reporters should
examine whether the college produces many graduates in science,
engineering, or business disciplines, for instance, that might be
tipping the scales.
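For reporters who want to pull these figures themselves, the
Education Department exposes Scorecard data through a public API at
api.data.gov (a free API key is required). The sketch below is a
minimal illustration, not a definitive recipe: the dotted field
paths, such as latest.earnings.10_yrs_after_entry.median, should be
verified against the current Scorecard data dictionary before any
numbers are published.

```python
import requests

API_KEY = "YOUR_API_DATA_GOV_KEY"  # free key from api.data.gov
BASE = "https://api.data.gov/ed/collegescorecard/v1/schools"

# Field paths follow the Scorecard data dictionary's dotted notation;
# verify them against the current dictionary before publication.
params = {
    "api_key": API_KEY,
    "school.name": "Ohio State University",  # any search term works here
    "fields": ",".join([
        "school.name",
        "latest.earnings.10_yrs_after_entry.median",
    ]),
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()

# Results come back as flat dictionaries keyed by the requested fields.
for school in resp.json().get("results", []):
    print(school.get("school.name"),
          "| median earnings 10 years after entry:",
          school.get("latest.earnings.10_yrs_after_entry.median"))
```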
Several state education departments do gather earnings data at the
program level for colleges and universities within their borders.
Check with your state’s department to see whether it collects such
data, which you might use to cross-check what you see listed in the
College Scorecard.
When reporting on rankings, also look to see whether the publisher
gathers any proprietary data. For example, U.S. News & World Report
each year commissions a survey of the top administrators of every
college in an effort to measure each institution’s reputation. At the
opposite end, The Princeton Review surveys more than 100,000 college
students across the country each year to find the nation’s best party
schools and least happy students.