Experts: The White House Plan to Rate Colleges Has Major Issues
A new rating system backed by the White House aims to evaluate nearly all of the nation’s colleges and universities. Roughly 6,000 schools that educate around 22 million students are about to endure an unprecedented amount of federal scrutiny.
And though a version of the Postsecondary Institution Ratings System is scheduled to be unveiled in the fall, policy watchers are still unsure of what’s in store.
“If you get a ‘C’ from the U.S. Department of Education, based on data that is questionable, it is probably a problem for the institution,” said Terry Hartle, the top lobbyist for the American Council on Education. “That’s the reason so many institutions are watching this very uneasily.”
Though the Obama administration announced its plan to rate the nation’s higher education system one year ago, key questions about the ratings’ purpose remain unanswered. Will the ratings aid consumers or hold colleges accountable? Will the administration make a distinction between specialty schools and those with broader missions? And in a field already crowded with metrics gauging everything from a college’s ability to graduate students on time to the number of low-income learners it enrolls, is the public ready for one more measurement of how schools are faring?
These were just some of the ideas explored at the Education Writers Association’s seminar on higher education, hosted by Southern Methodist University in Dallas, Texas.
“Rankings and ratings are the talk of the fall,” said Michelle Asha Cooper, director of the Institute for Higher Education Policy. But despite a bevy of consumer tools meant to help students find affordable college options, the public still believes the cost of a degree is out of reach, Cooper said.
She cited a recent Gallup poll showing that of the 97 percent of Americans who agreed college is important, three-quarters said a degree is unaffordable.
However, there may be an appetite for a college ratings system if it captures an institution’s value. “There’s an erosion of the public confidence in higher education,” said Cooper. “There’s this agreement among the public that higher education is important, but there are concerns about quality.”
But communicating that value may pose a challenge to the rating’s architects because they’ve effectively received a dual mandate from the administration, Hartle said.
“When the president talks about it, he says it’s about both consumer information and accountability,” Hartle said. “These are different things.”
Consumer information is “clear, easy to interpret and widely used,” Hartle explained. The U.S. Department of Education already provides several consumer tools, such as the College Navigator and the College Scorecard, that surface key information about a college in a few clicks.
Accountability, on the other hand, Hartle said, is about “what we want institutions to be doing.”
Though the rating system due out in the fall is widely viewed as a preliminary effort, onlookers worry its accountability provisions lack the data to evaluate schools accurately.
Among the shortfalls Hartle cited are that:
- Federal retention and graduation data are inaccurate because, among other things, they don’t capture transfer students, who represent a large share of the college-going population;
- Government earnings data are incomplete, because the federal government can match earnings records only for students who received federal student aid, leaving out students who didn’t rely on Uncle Sam to fund their education;
- The U.S. Department of Education lacks a unique identifier that would link all of a student’s academic and workforce records, known as a student unit record. Federal law currently bars the government from creating such a tool.
To illustrate that data shortfall, the Institute for Higher Education Policy released a report identifying the kinds of information a reliable ratings system would need, and how readily available that information is. Fewer than half of the 30 measures IHEP identified are easily accessible. The rest require major modifications or aren’t available nationally.
Also compromising the ratings is time. Hartle said that when the White House proposed the ratings last year, the Education Department worried the task was too great to complete in 12 months. Invoking former Secretary of Defense Donald Rumsfeld, who famously quipped, “You go to war with the army you have, not the army you want,” Hartle said, “The Department of Education needs to build its rating system with the data they have, not the data they need.”
Beyond locating the right data, colleges are worried about the methodology the administration plans to use in its ratings system. Architects of the ratings effort intend to judge institutions by type, creating “peer groups” that cluster similar schools and compare them to each other. Hartle said colleges are anxious that those peer groups will fail to reflect institutions’ very different missions.
As an example, Hartle noted three Massachusetts colleges with similar enrollments, but completely different student bodies and missions: Berklee College of Music, Olin College of Engineering and Wellesley College. Hartle predicts many colleges will balk at the peer groups in which they’re placed, arguing, “Wait a minute, we shouldn’t have to be compared to that set of institutions.”
The Department of Education has a response to those worries.
“If we create a ratings system that pushes institutions to accept fewer low-income students, then we’ve failed,” said Under Secretary of Education Ted Mitchell to the room of journalists. “We’re quite concerned that the ratings system reflect and incorporate the very different missions of institutions and institutions doing different things.”
But Hartle has his doubts, because the ratings are bound to create incentives that place institutions in a bind. How do colleges post higher retention rates without diminishing access for certain students? How do they show higher graduation rates for Pell Grant recipients without turning away the low-income students who typically struggle most in college?
Kim Clark, a senior reporter for the magazine Money, argues colleges brought this scrutiny onto themselves. “Institutions have been fighting accountability for years,” Clark said. “A lot of this stuff we’re having to get outside of the institution because they refused to provide accurate information.”
Clark pointed to Bennington College, whose graduates earn less than graduates of similar schools, according to Money’s ranking of the value colleges provide. The magazine used the website Payscale to capture average pay. Clark admitted the data are flawed, but said they were the best the magazine could use.
“Frankly, if you’re borrowing a lot of money, you’ll want to know how much you’re making after graduation,” Clark said.
Still, despite the huge lift these ratings require, the administration risks missing its target audience: aspiring college students from low-income or minority households.
If it’s just another website, “that’s not the best way to reach students these days, and we have to remember that,” Cooper said.