The rankings allow you to compare at a glance the relative quality of U.S. institutions based on such widely accepted indicators of excellence as first-year student retention and graduation rates and the strength of the faculty. And as you check out the data for colleges already on your shortlist, you may discover unfamiliar schools with similar metrics and thus broaden your options.
What factors did U.S. News use to decide that Princeton and Harvard are Nos. 1 and 2, as they have been for several years? And that Yale and the University of Chicago are tied for third, as they were in 2017, but that in the new 2018 ranking, the Massachusetts Institute of Technology moves from seventh into a three-way tie for fifth with Stanford and Columbia?
Let’s set aside the fact that the magazine is comparing schools with vastly different resources, missions and student populations, and let’s not dwell on the fact that colleges self-report most of the data, though the magazine says it checks and double-checks with other sources. Let’s look at some of the data used in the rankings.
The U.S. News formula gives weight to the opinions of those in a position to judge a school’s undergraduate academic excellence. The academic peer assessment survey allows top academics — presidents, provosts and deans of admissions — to account for intangibles at peer institutions, such as faculty dedication to teaching.
To get another set of important opinions on national universities and national liberal arts colleges, U.S. News also surveyed 2,200 counselors at public high schools, each of which was a gold, silver or bronze medal winner in the 2016 edition of the U.S. News rankings. The counselors surveyed represent every state and the District of Columbia.
That means one of the two most heavily weighted factors is based on the subjective views of, among others, presidents, provosts and deans of admissions at rival institutions.
Some presidents, provosts and admissions deans have told me over the years that they don’t fill out the forms themselves because they don’t really have a deep understanding of other schools’ programs. And they doubt that many of those who do complete the survey possess a deep understanding. How many college leaders have time to investigate and then rank their competitors fairly?
Counselors know about a lot of schools because they help students decide where to apply, but their job is to find the best student-college fit, not to figure out which school is better than another. Besides, the 2018 rankings include data on more than 1,800 colleges and universities, including nearly 1,400 that were ranked. Counselors generally know a particular group of schools well and can't reasonably be expected to rank the quality of so many others.
This measure — student selectivity — has three components. U.S. News factors in the admissions test scores for all enrollees who took the SAT critical reading and math portions and the composite ACT (65 percent of the selectivity score).
U.S. News also considers the proportion of enrolled first-year students at national universities and national liberal arts colleges who graduated in the top 10 percent of their high school classes or the proportion of enrolled first-year students at regional universities and regional colleges who graduated in the top quarter of their classes (25 percent).
The notion that SAT and ACT scores can be seriously useful in determining a student's abilities is, well, slightly scary. Scores from these exams don't do a good job of predicting how well students will do in college, and the SAT has been so troubled that it has reinvented itself several times in recent years in an attempt at modernization. There is also test anxiety, which can cause very smart and ambitious students to do poorly on high-stakes exams, and, of course, some kids do poorly because of external factors that say nothing about their actual abilities.
Those scores account for 65 percent of the student selectivity factor. Another 25 percent of the factor is the proportion of enrolled first-year students who graduated at or near the top of their high school classes.
The tests are coachable, and, of course, people with more money can hire test prep tutors who offer one-on-one services, giving them an advantage (though the College Board does offer a free online SAT prep class through the Khan Academy).
And a 2014 study by the National Association for College Admission Counseling found that among colleges that don't require ACT or SAT scores, only "trivial" differences existed in the college graduation rates or cumulative grade-point averages of students who do and don't send in their results. Those least likely to submit scores when they weren't required to do so were minorities, women, first-generation-to-college enrollees, Pell Grant recipients and students with learning differences.
Ten percent of the student selectivity factor is the freshman acceptance rate. Anybody paying attention to college admissions knows what a dubious data point this is. Today students apply to many more schools than they used to for a variety of reasons, and the admissions world has become far less predictable than it once was. Students apply to schools that they really have no chance of being admitted to. In 2016, UCLA became the first school to get more than 100,000 applications, for an entering class of 6,500. For the new freshman class, Stanford accepted only 4.8 percent of applicants. One counselor described college admissions as akin to "The Hunger Games."
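To see how those three components combine, here is a minimal sketch in Python of a weighted selectivity score using the stated weights (65 percent test scores, 25 percent class rank, 10 percent acceptance rate). The actual U.S. News formula and its normalization method are not published in this detail, so the 0-to-1 scaling and the inversion of the acceptance rate below are assumptions for illustration only.

```python
def selectivity_score(test_percentile, top_decile_share, acceptance_rate):
    """Combine three inputs, each assumed to be normalized to 0.0-1.0,
    using the weights the article describes.

    A lower acceptance rate is treated as more selective, so it is
    inverted before weighting. This is an illustrative assumption,
    not U.S. News's actual method.
    """
    return (0.65 * test_percentile      # admissions test scores
            + 0.25 * top_decile_share   # share of class in top 10 percent
            + 0.10 * (1.0 - acceptance_rate))  # inverted acceptance rate

# Hypothetical school: strong test scores, most freshmen from the top
# tenth of their classes, and a very low acceptance rate.
score = selectivity_score(test_percentile=0.95,
                          top_decile_share=0.90,
                          acceptance_rate=0.05)
print(round(score, 4))  # prints 0.9375
```

The point of the sketch is that the acceptance-rate term contributes at most 10 percent of the score, yet — as the paragraph above notes — it is the component most easily inflated by application volume rather than by anything about the school itself.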
Faculty resources, which account for 20 percent of the total, are largely a function of a school's wealth. As for alumni giving, which counts for 5 percent, well, that measure favors schools that enroll fewer poor students to start with, and schools that graduate students in majors that draw bigger salaries. A school that graduates a lot of engineers is going to have more alums with more money to give than a school that graduates a lot of teachers, or poets, or social workers. Does that mean one school is better than another?
This year for the first time, U.S. News is including salary data for alumni on its website, though it didn't include that information in its rankings — this year. Other rankers, such as The Wall Street Journal, have used salary data as a ranking measure. See the paragraph above for the problem with incorporating such data into a formula meant to measure academic quality.
The success of a data-based ranking depends, obviously, on the quality of the data. Put junk in, and you get junk out.