This Blog is Ranked #1

Once again, college ranking season is upon us.  Timed for when high school juniors and their parents are starting to focus on college admissions for the following year, a whole host of magazines, newspapers, and websites publish their annual “rankings” of their “top” institutions of higher learning.

What are you – parent or student – supposed to do with all these opinions?  How do you know which college is the best, or, more important, the best for your student?

When buying a washing machine, many consumers start with Consumer Reports magazine.  Is there a “Consumer Reports” rating authority for colleges?

Alas, no.  Instead of one possibly authoritative source, there are well over a dozen contenders.

Start with traditional news media organizations that publish college rankings, including:

U.S. News and World Report (the 800-lb. gorilla of college ratings)

The Wall Street Journal/Times Higher Education of London

The Economist magazine

Forbes magazine

Money magazine

Washington Monthly magazine


You can also consult websites that publish their own rankings, such as:

The Alumni Factor

College Factual

Let us not forget guidebooks.  Some are specialized, such as “Colleges That Change Lives,” which profiles 40 schools.  Here, however, I am referring to the largest publishers, those whose guidebooks profile 300 or 400 of the “top” colleges in the nation.  Although these guidebooks, such as Fiske and the Princeton Review, do not rank colleges, a college’s inclusion in their pages is an endorsement of sorts, or at least a culling from the herd of over 2,500 four-year institutions of higher learning.  These guidebooks also include specialized lists, such as private and public university “best buys”.

Why are there so many lists?  For the most part, money is the motivator.  The magazines gain subscribers and, for those readers who find them on the Internet, advertising dollars.  The guidebook companies sell more books.  Which book would you buy: “382 Colleges” or “The Best 382 Colleges”?  Princeton Review chose the latter title.

Another reason for the plethora of lists is that rating any institution, including a college, depends upon what is deemed most important to the reader.  Consider the large number of criteria available:

  1. School resources, including size of the endowment, availability of research equipment and funding for undergraduates.
  2. Selectivity – how difficult it is to win admission.
  3. Affordability – usually a combination of price (tuition, room and board, fees) and availability and generosity of financial aid.
  4. Academic record of accepted students.
  5. Graduation rates, at 4 and 6 years.
  6. Student retention rates after freshman year.
  7. Reputation of the faculty.
  8. Faculty’s teaching ability.
  9. Student-professor ratio.
  10. Alumni contributions, both in percentage giving and amounts donated.
  11. Student “engagement” with professors.
  12. Student satisfaction with the college.
  13. Projected earnings for graduates; sometimes expressed as a ratio with tuition to arrive at an ROI (return on investment) for each college.
  14. Recruitment of disadvantaged students (e.g., race, income).
  15. College emphasis on service by its students (e.g., community service requirements).
  16. Student loan repayment rate.

Thus, determining “the #1 college” depends upon what information is deemed important.

The key to using these lists is to understand:

  1. What criteria are used, and how those criteria are weighted, in arriving at the ranking.
  2. Whether the data used is sufficient to support the conclusion drawn from it.
  3. Whether the ranking relies on data which is subject to misinterpretation or even fraud.
  4. Whether the criteria are relevant to your student’s interests.


A few examples of “best practices” and, well, “less than best practices,” follow.


What criteria are used, and how are they weighted?

The most serious problem arises when the ranking is done in a “black box”, where only the ranking service knows what it considers important.

Consider the explanation offered by Niche (formerly College Prowler) concerning how it creates college rankings:

The Niche 2018 College Rankings are based on rigorous analysis of academic, admissions, financial, and student life data from the U.S. Department of Education along with millions of reviews from students and alumni. Because we have the most comprehensive data in the industry, we’re able to provide a more comprehensive suite of rankings across all school types.

Very impressive, if a bit vague concerning exactly what that data is, and how much of it is relevant to ranking colleges.  However, we can break it down.  The Department of Education collects a trove of data from colleges – the complete dataset is over 200 megabytes – and makes it public.  Most ranking services use this data, and some sites compile and present it in usable form.

Niche’s claim to fame appears to be that it combines some of that data with its proprietary student survey data.  Confusion results when it explains how it uses that data:

With clean and comparable data, we then assigned weights for each factor. The goal of the weighting process was to ensure that no one factor could have a dramatic positive or negative impact on a particular school’s final score and that each school’s final score was a fair representation of the school’s performance. Weights were carefully determined by analyzing:

How different weights impacted the distribution of ranked schools;

Niche student user preferences and industry research;

After assigning weights, an overall score was calculated for each college by applying the assigned weights to each college’s individual factor scores. This overall score was then assigned a new standardized score (again a z-score, as described in step 3). This was the final score for each ranking.

Yes, but what factors – criteria – were used, and how were they weighted?  Niche does not say.  For example, if Niche weighted “reputation of faculty” at 90%, then the rankings would be skewed heavily in favor of prestigious schools.
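To see why the unpublished weights matter so much, here is a minimal sketch of the mechanics Niche describes – standardize each factor to a z-score, then take a weighted sum.  The school names, factor names, scores, and weights below are all invented for illustration; Niche does not publish the ones it actually uses.

```python
from statistics import mean, stdev

# Hypothetical factor scores for three schools (0-100 scale).
# All names, scores, and weights are invented for illustration.
schools = {
    "College A": {"reputation": 95, "affordability": 40},
    "College B": {"reputation": 70, "affordability": 90},
    "College C": {"reputation": 60, "affordability": 85},
}

def z_scores(values):
    """Standardize a list of values to z-scores."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def rank(weights):
    """Weighted sum of per-factor z-scores; returns schools, best first."""
    names = list(schools)
    factors = weights.keys()
    standardized = {f: z_scores([schools[n][f] for n in names]) for f in factors}
    totals = {
        n: sum(weights[f] * standardized[f][i] for f in factors)
        for i, n in enumerate(names)
    }
    return sorted(totals, key=totals.get, reverse=True)

# The same data produces a different "#1" under different weights:
print(rank({"reputation": 0.9, "affordability": 0.1}))  # College A first
print(rank({"reputation": 0.1, "affordability": 0.9}))  # College B first
```

The point of the sketch: nothing about the data changes between the two calls, yet the top school flips – which is why a ranking that hides its weights tells you very little.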

In contrast, many ranking services are transparent about how they use such data in arriving at their rankings.  U.S. News and World Report, for example, lists its factors and the weight assigned to each, and The Economist publishes the factors and weights used in its ratings of British universities.  If we cannot discern what information underpins the rankings, then the rankings are not very helpful.

Further, Niche makes explicit what I suspect other rankings purveyors do quietly – it takes a “second look” at the data to make sure that it looks “right” before publishing its rankings.  Consider the same passage again:

With clean and comparable data, we then assigned weights for each factor. The goal of the weighting process was to ensure that no one factor could have a dramatic positive or negative impact on a particular school’s final score and that each school’s final score was a fair representation of the school’s performance. Weights were carefully determined by analyzing:

How different weights impacted the distribution of ranked schools;

Niche student user preferences and industry research;

That certainly looks like Niche “tried out” different weightings for each of the unnamed criteria, and then changed those weights if the resulting rankings did not look “right”.  The last sentence even suggests that it adjusts its ranking to conform to what other ranking services report (Niche analyzes student preferences and “industry research”, i.e., how other college insiders rank the schools).  That level of subjectivity is understandable – sort of applying a “smell test” to the results – but it does not add confidence that the weighting is completely objective.  It also limits the possibility of “uncovering hidden gems”.  What is the point of rankings if a school cannot score at the top unless it “looks the part”?

The lesson here is that before relying upon a list, understand exactly what criteria are being used, and how they are being weighted.


Is the data sufficient to support the measurement? 

When you step on a scale, the datum that stares you in the face, no matter how unpleasant, is almost certainly sufficient to determine how much you weigh.  However, if three people step on the scale – one at a time – the average of their weights will be insufficient to determine the average weight of the population of the United States.

Some ranking criteria require complete datasets to be relevant, and those datasets are often very difficult to obtain.  For example, many surveys measure “student outcomes,” usually through a proxy such as average earnings after 1 year, 5 years, etc.  Unfortunately, when schools survey graduates about their employment, often only the graduates with “good news” to report answer.  Would you be eager to tell your alma mater that you are unemployed?  And even those with good news to report may simply ignore the surveys – perhaps out of fear that a letter from the development department asking for funds will follow.  (On a personal note, for the last 38 years, UCLA has been able to track me to a new address within six months of my arrival – very impressive.)

For example, only 19% of University of Kansas graduates responded to an outcome survey.  The university then took the unusual step of looking up LinkedIn profiles to supplement the responses.

Take any exemplary “placement rate” with a large grain of salt.  One item to look for when evaluating individual colleges’ placement rates is whether they use the NACE standard for survey responses.  Even then, be careful with any statistics about “full-time employment” – many colleges count Starbucks baristas as fully employed.

The college rankings that rely on graduates’ salaries also suffer from incomplete datasets.  See “Want to Search Earnings for English Majors?  You Can’t,” New York Times, 12/1/17.

In addition, graduates tend to find work close to their colleges.  Unless adjusted for cost of living, student earnings surveys will skew in favor of West and East Coast schools.

Other criteria are easy to compute, but relatively meaningless.  The percentage of students graduating after four or six years is susceptible to sample bias.  It is normal for many engineering students to take more than four years to graduate.  Unless you know the sample (MIT, small liberal arts college), low four-year graduation rates coupled with high six-year rates may be “normal”.  Obviously, any school where both rates are low will drop out of any ranking without ceremony.  A better measure is freshman retention – if students are not returning after freshman year, the odds are higher that your student will not, either.


Some data is subject to “gaming,” or even fraud

Selectivity is the percentage of applicants who are admitted.  In 2016, Stanford admitted only 1 in 20 applicants, leaving it with a selectivity of 5%.  That means Stanford must be the best, because only the best students get in, correct?

Well, a lousy school will not attract enough students unless it relaxes admission standards.  But be careful with putting too much weight on that correlation, because many colleges “game” selectivity.  Methods include:

  1. Using “VIP applications”. Colleges send out these one-page, no-fee applications to thousands of students.  Because they do not require essays or application fees, many students fill them out and return them.  I commented on this in my “You’ve Got Mail” post (6/22/15).


  2. Adopting the Common Application. The Common Application is used by over 500 colleges.  Students need only fill out the application once to apply to any member college.  Most colleges require students using the Common Application to respond to one or more essay prompts unique to the school.  Stanford has 11 such prompts, although many of them are short.  Colleges that wish to encourage applications do not require any supplemental essays.


  3. Going test-optional. A growing “fair test” movement in college admissions eschews standardized tests (SAT, ACT) as measures that merely reflect affluence or penalize students who do not test well.  These schools allow students to apply without them.  Although this may be laudable, it can also increase applications.


Colleges using these tools can lower their selectivity number, which, remember, is the percentage of applicants admitted, simply by inducing more students to apply while not increasing the number of students accepted.  Why would they do this?  U.S. News and World Report includes a college’s selectivity number in its ranking system, and guidebooks list colleges’ selectivity ratings prominently.  Students use selectivity as a proxy for “desirability,” which results in even more applications; a vicious cycle for students ensues.
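The arithmetic behind this gaming is simple: hold admits constant while the applicant pool grows, and the admission rate falls.  A quick sketch, with invented numbers:

```python
def selectivity(admitted, applicants):
    """Admission rate: percentage of applicants admitted."""
    return 100 * admitted / applicants

# Same 2,000 admits; VIP applications and the like double the pool.
# (Numbers are invented for illustration.)
print(selectivity(2000, 40000))  # 5.0  -> "highly selective"
print(selectivity(2000, 80000))  # 2.5  -> looks twice as selective
```

Note that nothing about the entering class has changed – the school admits the same 2,000 students either way – yet the headline number is cut in half.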

The U.S. News rankings also factor in the SAT and ACT scores of enrollees.  You might ask how colleges could possibly “game” such numbers.  Well, since 2012, five colleges have been caught reporting higher scores than students actually earned – we call that fraud.  And there may be more who have not been caught.


The criteria are not useful to students

U.S. News and World Report assigns 22.5% of its college ranking to “undergraduate academic reputation”.  What does that phrase mean?  Here is the explanation from U.S. News:

For the Best Colleges 2013 rankings in the National Universities category, 842 top college officials were surveyed in the spring of 2012 and 53 percent of those surveyed responded. Each individual was asked to rate peer schools’ undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished).

Those individuals who did not know enough about a school to evaluate it fairly were asked to mark “don’t know.” A school’s score is the average score of all the respondents who rated it. Responses of “don’t know” counted neither for nor against a school. 

The problem is that “reputation” is intangible – the reader must ask: “reputation according to whom?”  At least U.S. News makes the “who” clear: college officials and high school counselors.  But should students care how college administrators regard their colleagues (some of whom will be rivals)?  Unless a school is nearby (in which case it may well be a rival), most college administrators are unable to make fine distinctions between colleges.  And high school counselors rarely follow up with their charges to determine their experiences after enrollment.

Indeed, shouldn’t students care a lot more about what employers think of the schools?  The Wall Street Journal published just such a list in 2010.

And some ranking services reject traditional criteria as merely a proxy for wealth.  For example, The Washington Monthly bills its list as “our answer to U.S. News & World Report, which relies on crude and easily manipulated measures of wealth, exclusivity, and prestige to evaluate schools.”

Alas, not many career-oriented students (and parents) will be interested in its alternative:

We rate schools based on their contribution to the public good in three broad categories: Social Mobility (recruiting and graduating low-income students), Research (producing cutting-edge scholarship and PhDs), and Service (encouraging students to give something back to their country).

Kudos to those who are.

The largest failing of college rankings is that they are usually so general as to be meaningless.  Students in STEM fields may not value small class sizes as much as a school’s facilities and labs.  Liberal arts students are just the opposite.  Students who are planning professional careers may not care as much about “outcome measures”, as opposed to acceptance rates at professional schools (which the ranking industry tends not to measure).  Wealthy families may not worry about financial aid – for less wealthy parents, their inquiry may begin – and end – with that criterion.


College rankings can be useful

With the caveats expressed above, college rankings do have their uses.  Here is how to use them:

  1. Understand what is being measured, and the weights assigned to each criterion.
  2. Understand what criteria are omitted, and determine if the omission is important to your student’s needs.
  3. Choose the most specific ranking possible. U.S. News and World Report, along with other services, publishes “sub-lists”, e.g., best undergraduate engineering programs.  Prospective engineers should start with that list.
  4. Consult more than one ranking. Schools that are consistently ranked highly may be more likely to deserve their rankings.
  5. Prepare to dig deeper. Sometimes variations in formulas make little difference in the rankings they produce.  Eight of the top 10 universities on the U.S. News list were also in the Journal/THE top 10.  The other two were right behind.  You will have to do your own research to tease out the differences among high-ranked schools, or make decisions based on other factors (e.g., geography, cost).

How do I use rankings?  First, I use them to ascertain which schools my clients will value highly.  I may have to overcome preconceived notions with research.  Second, I use them like a string around a finger – when I am composing a list of candidate schools, I check the rankings to make sure that I have evaluated all of the commonly considered schools.

I also use college matching services, such as BigFuture, to create lists of schools, but that is grist for another article.
