The revelation by Claremont McKenna College that it reported artificially inflated SAT scores for six years of incoming classes is only the latest example of a college or university fudging the numbers in order to look better in the ubiquitous U.S. News and World Report college rankings. Other recent examples of colleges trying to game the rankings include:
- Iona College acknowledged in 2011 that it had exaggerated SAT scores, acceptance rates, student-faculty ratios, graduation rates, and alumni giving rates for a decade. These data were used by 14 external agencies, ranging from its accrediting body to Moody’s and the NCAA.
- The University of Illinois’s law school reported in 2011 that LSAT scores and undergraduate grades had been substantially inflated for six entering classes. The inflated figures came to light only after the law school spent a million dollars investigating its admissions dean.
- Villanova University’s law school also reported in 2011 that inaccurate (and likely inflated) LSAT scores and GPAs were reported for several years.
- A former institutional researcher at Clemson University described in 2009 how the university changed its class size and admissions policies to look better in the U.S. News rankings. For example, modestly prepared students were less likely to be admitted to the freshman class and were instead encouraged to enroll at a later date (when they would not count in the rankings).
- A 2009 investigation by Inside Higher Ed revealed the questionable nature of the reputational portion of the U.S. News rankings. Presidents and provosts were extremely likely to give their own institution the highest rating, and often gave competing institutions surprisingly low ratings.
All of these problems have led to calls to get rid of the U.S. News rankings. However, few people are discussing the true problem with the rankings: they’re measuring perceived institutional prestige instead of whether students actually benefit from attendance—and that is why there is such a strong incentive to cheat! The U.S. News rankings currently give weight to the following measures:
- Undergraduate academic reputation (ratings by college officials and high school guidance counselors): 22.5%-25%
- Retention and graduation rates (of first-time, full-time students): 20%-25%
- Faculty resources (class size, faculty degrees, salary, and full-time faculty): 20%
- Selectivity (ACT/SAT, high school class rank, and admit rate of first-time, full-time students): 15%
- Financial resources (per-student spending on educational expenses): 10%
- Graduation rate performance (the difference between the actual graduation rate and a predicted graduation rate based on student and institutional characteristics): 0%-7.5%
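To make the arithmetic concrete, here is a minimal sketch of how a weighted composite of this kind might be computed. The weights come from the list above (picking one point within each published range so they sum to 100); the 0-1 rescaling of each measure and the hypothetical college's values are assumptions for illustration, not U.S. News's actual normalization procedure.

```python
# A minimal sketch of a weighted composite score of the kind described above.
# Weights come from the list (one point chosen within each published range so
# they sum to 100); the 0-1 scaled values for the hypothetical college are
# invented purely for illustration.
WEIGHTS = {
    "reputation": 22.5,
    "retention_and_graduation": 25.0,
    "faculty_resources": 20.0,
    "selectivity": 15.0,
    "financial_resources": 10.0,
    "graduation_rate_performance": 7.5,
}

def composite_score(measures):
    """Weighted sum of measures that have already been rescaled to 0-1."""
    return sum(WEIGHTS[name] * value for name, value in measures.items())

# A hypothetical college: strong inputs (reputation, resources, selectivity)
# but middling graduation rate performance.
example_college = {
    "reputation": 0.90,
    "retention_and_graduation": 0.95,
    "faculty_resources": 0.85,
    "selectivity": 0.92,
    "financial_resources": 0.80,
    "graduation_rate_performance": 0.40,
}

print(round(composite_score(example_college), 1))  # 85.8 out of 100
```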
Most of the weight in the rankings goes to factors that are inputs to a student’s education: initial academic preparedness, peer quality, and money. In fact, the focus on some of these measures is detrimental to students and the general public. The strong pressure to keep admit rates low is partly a function of institutions encouraging applications from students with very little chance of admission, and it leaves colleges unwilling to open their doors to even a few more deserving students. Encouraging higher per-student spending does not necessarily produce better outcomes, especially when those resources could go toward serving more students. Indeed, raising tuition by $1,000 per year and burning it on the quad would improve a college’s ranking, as long as the pyromania is classified as an “instructional expense.”
Retention and graduation rates do capture a college’s effectiveness to some extent, but they are also strongly correlated with institutional resources and incoming student characteristics. The Ivy League colleges routinely graduate more than 90% of their students and are generally considered to be the best universities in the world, but this doesn’t mean that they are effectively (or efficiently) educating their students.
Yes, I did use the word “efficiently” in the last sentence. Although many in the education community shudder at the thought of efficiency or cost-effectiveness analyses, such analyses are essential if we are to know whether our resources are helping students succeed at a reasonable price. The U.S. News rankings say little about whether a given college actually improves its students’ outcomes, especially since students attending highly rated universities are extremely likely to graduate no matter what.
College rankings should recognize that colleges have different amounts of resources and enroll different types of students. (U.S. News currently does this, but in a less-than-desirable manner.) They should focus on estimating the gains students make by attending a particular college, both by taking student and institutional characteristics into account and by placing much more weight on the desired outcomes of college. It is also important to measure multiple outcomes, both to reduce colleges’ ability to game the system and to reflect the many purposes of a college education. Washington Monthly’s alternative college rankings are a good starting point, as they include national service and advanced degree receipt among their outcomes. Rankings should also treat cost as a negative factor; as the net cost of attendance rises, fewer students can expect to come out ahead on their investment of time and money.
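To illustrate the general idea, here is a minimal sketch of an input- and cost-adjusted ranking: predict each college’s graduation rate from its incoming-student and resource measures, treat the amount by which it beats that prediction as value added, and subtract a penalty for a high net price. All of the data, variable names, and the cost weight below are invented for illustration; this is a sketch of the general approach, not the methodology of any actual ranking.

```python
# A minimal sketch of an input- and cost-adjusted ("value-added") college
# ranking. All numbers below are invented for illustration.
import numpy as np

# Per-college inputs: average SAT score, per-student spending ($000s).
inputs = np.array([
    [1000, 12.0],
    [1100, 18.0],
    [1200, 25.0],
    [1300, 35.0],
    [1400, 45.0],
    [1500, 60.0],
])
grad_rate = np.array([0.55, 0.70, 0.72, 0.78, 0.88, 0.93])  # actual outcome
net_price = np.array([10.0, 14.0, 18.0, 26.0, 34.0, 45.0])  # $000s per year

# Predict each college's graduation rate from its inputs (OLS with intercept).
X = np.column_stack([np.ones(len(grad_rate)), inputs])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
predicted = X @ coef

# Value added = doing better than the inputs predict; subtract a penalty for
# a high net price (the 0.005 weight is an arbitrary illustration).
value_added = grad_rate - predicted
score = value_added - 0.005 * net_price

# Rank colleges from best to worst on the adjusted score.
ranking = np.argsort(-score)
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. college {idx}: value added {value_added[idx]:+.3f}, "
          f"score {score[idx]:+.3f}")
```

The key design choice is that a college is rewarded for beating the graduation rate its inputs predict, not for the inputs themselves, so there is no payoff for inflating reported test scores or turning away qualified applicants.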
Despite the wishes of many in academia, college rankings are not going away anytime soon; a sizable portion of the public uses the information, and publishers have found the business to be both profitable and influential. However, those of us in the higher ed community should push for rankings that attempt to estimate a college’s ability to help students meet their goals instead of measuring its ability to enroll students who will graduate anyway. As part of my dissertation (in joint work with Doug Harris), I am examining a potential new college ranking system that takes both student and college resources into account and adjusts for the cost of providing an education. I find that our rankings look much different from the traditional college rankings and reward colleges that appear to be outperforming given their resources.
This sort of ranking system would eliminate the incentive for colleges to submit inflated test scores or to become extremely selective in their admissions processes. If anything, holding colleges accountable for their resources would give them an incentive to fudge the numbers downward, the exact opposite of the incentive under the current rankings. Just as measuring multiple outcomes helps reduce gaming of the system, having multiple ranking systems reduces the incentive for colleges to cheat and report false numbers.