Colleges Announce Plans to Audit ECs

“EC” refers to extra-curricular activities.  Students concerned that they fall short of their peers in their achievements outside the classroom may be tempted to slightly exaggerate their deeds in sport, academic competitions, or community service.  The temptation to do so may depend on the assumption that colleges deluged with applicants typically do not verify ECs.

Indeed, the first reports after the “Varsity Blues” scandal indicated that colleges were still not going to scrutinize applications for fraud or misstatements (other than the University of California, which regularly audits applications).

Per a Wall Street Journal article on the subject:

But with a mandate to review applications quickly—some elite schools spend just a few minutes on an application due to the high volume of material—they say they may not notice if four people all say they were MVP of a regional team, or overstate their placement in a debate tournament. Schools also tend not to confirm the race or ethnicity someone claims on paper.  “Our process is as good as the information that we do receive,” says Stefanie Niles, vice president for enrollment and communications at Ohio Wesleyan University and president of the National Association for College Admission Counseling. “There’s been a lot of trust.”

“Even After the Admissions Scandal, Colleges Won’t Check Most Applications,” Wall Street Journal, August 2, 2019 (paywall).

This may be about to change, as colleges are beginning to verify ECs.

Hat-tip to one of my IECA colleagues for sharing a Bloomberg News article about colleges redoubling their efforts to prevent fraud in college admissions.  See “U.S. Colleges Step Up Admissions Spot Checks After Scandal” (the article counts toward the “free” articles non-subscribers are allowed to view).

Bloomberg reports that some colleges are planning to double-check the applications of student-athletes to make sure that the students’ sporting credentials are legitimate.

Further, some schools are planning to go beyond athletics to ECs generally.  For example, here is an excerpt from a letter from Yale posted on its website:

Yale’s Admissions Committee evaluates each applicant to Yale College using a thoughtful whole-person review process. When selecting applicants, the committee values a wide range of strengths, talents, and qualities that enrich the undergraduate educational environment and contribute to its remarkable diversity. Beyond athletics, we will be implementing measures to reduce the risk of fraud in all applications, such as verifying certain extracurricular accomplishments and awards and auditing a sample of applications at the end of each admissions cycle. (emphasis added in bold).

Yale is not alone.  From the Bloomberg article:

Admissions officers say that while they want to spot evidence of such wrongdoing in the future, they also want to quash more mundane embellishments.

“There always has been the pressure to push it a little bit further,” said Whitney Soule, dean of admissions at Bowdoin, a liberal arts school in Maine. “We want to relieve that pressure.”

Bowdoin’s application website now states that the school may verify information provided on applications or supplemental materials, and that inaccurate or fabricated information may lead to offers being withdrawn.

Accuracy in describing ECs has always been important.  Now the stakes are higher – some colleges are likely to perform spot-checks and may even rescind offers for what used to be called resume “puffing.” 

Bowdoin’s Soule, who has worked in admissions since 1991, said the changes this year are meant to reinforce that students should be honest, even with seemingly small details.

If an applicant is a co-captain of a team, the student shouldn’t feel pressure to say he or she is the single leader, for example.

“Don’t be afraid to show us that you are sharing the responsibility,” Soule said.

The takeaway here is simple and sobering:  be careful not to exaggerate.  While the likelihood of an audit appears remote for students not claiming athletic achievements, the results of even a bit of fudging could be catastrophic.

If you are a parent or student living in Tucson, please note that I am one of a handful of independent consultants who live and work here.  Please see The Tucson Advantage for why that matters. 

No BASIS for Exclusion

The exclusion of BASIS from the latest list of “Best High Schools” published by U.S. News and World Report is a local story with national implications.

Tucson readers will immediately understand part of the title. BASIS is a charter school founded in Tucson in 1998. It gradually expanded within Arizona to Oro Valley, Phoenix (multiple locations, most notably its Scottsdale school), and Flagstaff. It has also opened schools in California (Silicon Valley), Louisiana, and Washington. 

Some BASIS schools start in elementary grades; many begin with grades 4 or 5 and end at grade 12. BASIS is free and open to all.  If necessary, a lottery is used to select enrollees. 

[Full disclosure:  I have worked with students attending BASIS schools, but it has been a small part of my practice.  I do not receive referrals from any school, including BASIS.  In addition, this article is not about the efficacy or social utility of charter schools.]

Charter schools are common in Arizona.  However, BASIS stands out because it claims to operate some of the finest schools in the country.  From the BASIS website:

Our schools are among the nation’s best schools by any measure: national rankings, OECD Test for Schools (based on PISA) exams, Advanced Placement® results, National Merit Scholarship Program® honors, earned college merit aid, and college admissions, among many other highly respected standards and honors. We hire bright, passionate people to teach the acclaimed BASIS Curriculum at all 27 BASIS Charter Schools, and provide nearly 17,000 students with an excellent education.

Another interesting feature is that BASIS students finish their studies – and all state-required curricula – in 11th grade.  Their senior year consists of “capstone courses” that venture into college curricula and often involve internships.

Of course, many high schools claim to accelerate learning and place their elite graduates in fine universities.  And many high schools in Tucson do exactly that, including – but very much not limited to – BASIS. 

But part of what made BASIS unique was its recognition by U.S. News and World Report.  You may associate USNWR with college rankings, but it also ranks high schools. 

There are other high school rankings in addition to USNWR’s list, but they are not directed toward college admissions officers.

Here was USNWR’s list of the top ten high schools for 2018.

  1. BASIS Scottsdale, Ariz.
  2. BASIS Chandler, Ariz.
  3. BASIS Oro Valley, Ariz.
  4. BASIS Tucson North, Ariz.
  5. BASIS Flagstaff, Ariz.
  6. Meridian School, Round Rock, Tex.
  7. International Academy of Macomb, Clinton Township, Mich.
  8. BASIS Peoria, Ariz.
  9. Baccalaureate School for Global Education, New York
  10. Thomas Jefferson High School for Science and Technology, Fairfax County, Va.

Yes, BASIS schools, including two in Tucson, occupied the first five spots out of the roughly 5,948 schools ranked that year by USNWR!  BASIS schools also dominated the 2017 and 2016 rankings. 

But no more, and therein lies a tale, provided courtesy of the Washington Post.

Here are the top ten schools listed by USNWR for 2019:

  1. Academic Magnet High School (SC)
  2. Maine School of Science and Mathematics
  3. BASIS Scottsdale (AZ)
  4. Thomas Jefferson High School for Science and Technology (VA)
  5. Central Magnet School (TN)
  6. Gwinnett School of Mathematics, Science, and Technology (GA)
  7. Haas Hall Academy (AR)
  8. International Academy of Macomb (MI)
  9. Payton College Preparatory High School (IL)
  10. Signature School (IN)

BASIS no longer dominates the list.  Without coming across as too much of a Tucson “homer,” I was surprised.  But the article in the Washington Post, and a review of USNWR’s website, proved revelatory.   

With malice toward none, I have concluded that USNWR’s Best High School List is no longer a useful indicator for college admissions. 

The “old” list – “you have ONE job”

Before 2019, USNWR used a single criterion for ranking high schools:  its College Readiness Index, which in turn was based solely on “performance on and participation in Advanced Placement and International Baccalaureate exams.”

USNWR’s use of this one criterion explains BASIS’s prominence on its list.  BASIS requires its students to take AP exams – students’ test results factor into their grades for those AP courses.  The average BASIS student takes 11.9 exams, with an average score of 3.8.  BASIS notes that “many BASIS Curriculum School graduates choose to take as many as 20 AP Exams.”

How did USNWR obtain the AP participation and performance data to report those scores?  From the College Board, of course, after obtaining permission from every state other than South Dakota – did you really think that your scores were private?

For years, BASIS schools had a lock on the rankings simply by virtue of that policy and the hard-working, talented students who stuck to a grueling regimen to graduate.  It takes a certain type of bright student to achieve under these circumstances, and high schools that can turn out those students are certainly among the best of breed.  (Again, whether the BASIS model is the best method for educating students is outside the scope of this article.)

The USNWR criterion was intelligible and – in its way – useful to colleges looking for the hardest-working, highest-achieving students in the country. 

However, BASIS schools are quite small.  At the high school level, they do not provide the same opportunities as much larger high schools for extracurricular activities, such as science and math competitions, varsity sports, or other activities which require teams of students supported by advisers.  Students who are interested in the performing arts will find better opportunities elsewhere because BASIS schools do not have the critical mass of students necessary to support orchestras, theater productions, and the like.  BASIS misses out on plenty of outstanding students as a result.  The schools also lose a substantial number of students to attrition.

Rating BASIS high schools the best in the country provided an incomplete picture of what makes a high school outstanding, and its graduates competitive for college admissions.  Nonetheless, colleges could understand what the rankings meant, and give them whatever weight they chose. 

The new list – you had ONE job, and you did what?!

This year, USNWR expanded its project to rate more than 17,000 public high schools.  It also replaced the single-criterion test with a weighting of six criteria: 

The 2019 Best High Schools rankings take a holistic approach to evaluating schools, looking at six factors: college readiness, reading and math proficiency, reading and math performance, underserved student performance, college curriculum breadth and graduation rates. Specifically, college readiness measures participation and performance on AP and IB exams.

Here are the six criteria, and the contribution of each to the final score used for ranking:

  1. The school’s absolute performance in math and reading (i.e., performance index or PI) on state assessments (20%).
  2. The school’s relative math and reading performance, defined as the difference between the school’s PI and its expected PI given its population of historically underserved students (20%).
  3. The school’s equity gap, or the degree to which the performance of a school’s historically underserved groups differs from the performance, on average, of non-underserved students in the state (a difference sometimes referred to as an “external performance gap”) (10%).
  4. The school’s graduation rate (10%).
  5. The school’s college readiness index, based on Advanced Placement and/or International Baccalaureate participation and performance (30%).
  6. The school’s college curriculum breadth index, based on the breadth of AP and/or IB participation (10%).
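Under the new scheme, a school’s final score is simply a weighted sum of its six factor scores.  Here is a minimal Python sketch using the weights listed above; the factor names and the example inputs are hypothetical placeholders, not USNWR data, and USNWR’s actual scoring of each factor is more involved:

```python
# The six 2019 USNWR weights (from the list above); they sum to 100%.
WEIGHTS = {
    "college_readiness": 0.30,           # AP/IB participation and performance
    "math_reading_performance": 0.20,    # absolute performance index (PI)
    "math_reading_relative": 0.20,       # PI vs. expected PI
    "underserved_performance": 0.10,     # "equity gap"
    "graduation_rate": 0.10,
    "college_curriculum_breadth": 0.10,  # breadth of AP/IB participation
}

def composite_score(factor_scores: dict) -> float:
    """Weighted sum of the six factor scores (each assumed to be 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[name] * factor_scores[name] for name in WEIGHTS)

# A hypothetical school that dominates AP/IB but has a middling graduation rate:
example = {
    "college_readiness": 98,
    "math_reading_performance": 90,
    "math_reading_relative": 85,
    "underserved_performance": 70,
    "graduation_rate": 80,
    "college_curriculum_breadth": 95,
}
print(round(composite_score(example), 1))  # → 88.9
```

Note that college readiness now carries only 30% of the weight, so a school can no longer top the list on AP/IB results alone – which is exactly the change that pushed BASIS down the rankings.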

And here is the explanation by USNWR’s consultant about why it created a new – and more complex – methodology:

Factoring state assessments and graduation in rank order provides a more balanced result than relying only on the rank order of college-level exam data. Using multiple indicators to contribute to a single score ensures that rank order is less affected by the idiosyncrasies of measuring heterogeneous student cohorts on the single metric of college-readiness exams. In the revamped methodology, a school’s ranking incorporates data from multiple measures of academic quality, producing a more thorough and non-idiosyncratic assessment of its relative performance.

This explanation is interesting, but irrelevant.  To start, schools are not ranked separately on each criterion on the one list that is displayed, although that data is included for each high school if you “click through” its name on the ranking list.  Separate lists ranking high schools by graduation rate, or by the performance of underserved children, would be relevant in determining the relative performance of an underserved or “at risk” student.  But the ranking of the school in the list itself is based on the “composite” score alone.

There are also additional issues associated with four of the six criteria. One is obviously flawed, and three others are not useful to colleges.

The flawed measure is the fourth criterion, which relies on the “graduation rate” of each school. Given that students move with their parents, trying to decipher excellence by merely stating the percentage of 9th graders who ultimately graduated from a particular school is a fool’s errand.  Yet that is the methodology used.  Speaking as a local, I am pretty sure that University High in Tucson does not have a graduation rate that ranks a mere 1,232nd in the United States. 

Moving from the flawed to the merely unhelpful, the first criterion relies on the results of state assessment tests. Yet assessment tests vary in content and difficulty from state to state. Further, most of the top students applying to college are studying material that is far more difficult than – and in some cases simply different from – the material tested on state assessments. (A similar problem arises with the SAT/ACT, as the math tested on those exams does not include calculus.)

Finally, the second and third criteria add adjustments to the achievement scores of the students to reflect how well the school should have performed based on USNWR’s assessment of: 1) the percentage of underserved students in the student body; and 2) the performance of those underserved students as compared to that of underserved students elsewhere in the state. 

This means that a high school which does a better job of educating its underserved students than its peers will receive a higher rating.  Yet most college applicants are not among the underserved at most schools. Here is a clue that the new ratings are not about helping colleges assess the strength of college applicants at different high schools.  Rather, they are about recognizing how well schools are fulfilling certain educational goals. 

Because a substantial percentage of the USNWR ratings for these schools do not directly measure the academic abilities of students, those ratings cannot be used to compare college applicants from different schools.  This is potentially confusing to parents, students, and even college admissions officers because the USNWR college rankings are designed, and advertised, as a measure of the academic quality of the colleges ranked.

Of course, there is nothing wrong with an organization choosing to rate “the best high schools” to identify outstanding efforts by a school’s administration and faculty. School districts and policymakers will no doubt use the USNWR list as a measure of how well high schools are serving their communities, along with areas for improvement. 

What should this list not be used for? College admissions.

College consultants may wish to explain this conclusion to their clients.  One hopes that colleges already understand it.

Oh, and what about those BASIS schools left off the Top Ten list?  You can still find them in a quiet corner of the USNWR web empire, under “Best Charter Schools.”  In any event, I doubt that many colleges will be looking there.


You May Be the True Victim of the College Scandal

Why do I mention that I am an Associate Member of IECA, the Independent Educational Consultants Association?

IECA is the gold standard among college consulting organizations. Members are expected to adhere to the highest ethical standards; those who do not meet them are denied membership.

This brings us to the college admissions scandal currently in the news, a tale of avarice and cunning that will shake up some corners of the college admissions world. I think that there are a few points worth noting.

First, the consultant at the center of the scheme was not a member of IECA. This is not surprising – IECA does not look kindly on consultants charging outrageous fees for college admissions. Frankly, the amount of work involved for a reputable consultant (in my view, 100-200 hours) does not justify tens of thousands of dollars in fees. (And the much smaller fee I charge is not enough to bribe a school mascot, never mind a college coach.  For information about my fees, see Why Hire Me?)

IECA also demands that college consultants not “guarantee” admissions, simply because there is no honest way to do that. In addition, college consulting is about finding the right school for each client. It is hard to accomplish this goal when your clients are fixated on doing whatever it takes to get into a “prestige school.”

Second, there has been no evidence presented that college admissions staff were bribed. I suspect that this is because admissions officers generally make decisions in groups – bribery would be cumbersome, more expensive, and risky (it would take only one admissions officer talking to end the scheme). White-collar criminals generally find the “soft spot” in the victim organization; in this case, altering the records on which the admissions officers rely was simpler and cheaper. (Ask me how I know – see About Me.)  Whatever the reason, I find the absence of admissions officers as defendants reassuring.

Third, and most important, the victims in this scandal are not really the universities, however they spin it. Yes, their employees accepted bribes, and some undeserving students were admitted. Some colleges – most notably USC – may suffer undeserved reputational damage.

But the number of students involved is a drop in the bucket compared to the thousands of students enrolled in those colleges legitimately. More important, there remain other, legal, ways for those with money to oil the gears of the admissions process. Legacy admits (students whose parents attended the college) have a huge advantage in Ivy League admissions. And then there are contributions – for the reported $6.5 million one parent invested in bribery, that parent probably could have achieved the same result by simply donating to the university.

College counselors will be largely unaffected.  Perhaps those who are charging outrageous fees might garner suspicion as to exactly what services they are offering to justify same, but consultants belonging to reputable associations (e.g., IECA, HECA, NACAC) may even benefit from all of the discussion about what reputable counselors do.

The victims are students. To be precise, they are high school students with learning disabilities. If your student falls into this category, then you and that student are potential victims. This is because one of the tentacles of this scandal may be particularly far-reaching:  the college consultant claimed that he was able to bribe therapists to provide false documentation of a disability to be used to obtain accommodations on the ACT or SAT.

From the New York Times, March 13, 2019:

The conspiracy relied on the parents getting medical documentation that would entitle their children to extra time on the test, an accommodation normally made for students with disabilities. Students who need extra time generally take the test alone, supervised only by a proctor — providing the opportunity for the bribed proctor to rig the outcome. Mr. Singer advised parents on how to get the medical documentation needed to qualify.

According to court filings, in a conversation with one of the parents, Gordon Caplan, Mr. Singer explained that for $4,000 or $5,000, a psychologist he worked with would write a report saying Mr. Caplan’s daughter had disabilities and required special accommodations. He assured Mr. Caplan that many parents did this for their children.

“What happened is, all the wealthy families that figured out that if I get my kid tested and they get extended time, they can do better on the test,” Mr. Singer said in the conversation. “So most of these kids don’t even have issues, but they’re getting time. The playing field is not fair.”

This is a potential disaster for many of you. It can be tough for students with learning disabilities to obtain accommodations. Now we can expect the ACT and the College Board (the SAT) to make the process even more difficult. If psychologists (possibly a neuropsychologist, but the story does not say) can be bought, then how will the testing companies identify students who really do need these accommodations?

The initial reaction from the College Board was encouraging and helpful:

The College Board considers all reasonable requests for accommodations — such as large print, Braille, or extended time — needed by students with documented disabilities.

The board asks for documentation in some cases, Mr. Goldberg said, but in the “vast majority” of cases, the modifications are granted through the schools that students attend, where they are evaluated and given an individualized education program.

It appears that the best defense is to have a history of accommodations. Parents should be prepared to show current and past IEPs and 504 Plans. A long history of documentation is likely to allay suspicions. However, for those students whose diagnosis is too recent to have obtained an IEP or 504 Plan, or whose need for minor accommodations does not justify the hassle and expense of obtaining same, this scandal may prove troublesome.

This is another reason for parents of students with learning disabilities to consider paying for an assessment by a neuropsychologist and, if appropriate, obtaining an IEP or 504 Plan.

What Juniors Should Be Doing This Spring

For some intensely motivated families, the college hunt started a long time ago.  Other high school juniors have not wanted – and still do not want – to think about college yet.  But there are steps juniors should be taking right now to prepare for the college application season later this year.


Summer plans

A few very selective universities (e.g., Stanford and Princeton) ask students how they spent the summer after their junior year.  (Stanford asks about the last two summers.)  Many more ask about extra-curricular activities or work experiences – summer is prime time for both.  For the record, the summer after my junior year, I slept 12 hours a night and “hung out” with friends doing nothing important whatsoever.  Alas, this is no longer a viable college strategy for juniors.

The “rules” about what constitutes a worthy summer experience are in flux.  The accepted wisdom has been that travel, internships, and summer courses on college campuses are the best ways for juniors to show that they have used their time to learn more about the world and themselves.

However, stung by charges of elitism, college admissions officers are proclaiming that local volunteer work or even “merely” having a job will suffice.  And as I have written about previously in Mission Accomplished?  Maybe Not Anymore, overseas service trips are losing their luster because they are perceived as available only to the wealthy.

Each student’s circumstances will determine their best “summer strategy.”  Exploring a potential medical career by “shadowing” in a hospital, or a science career by working in a lab as a research assistant, or any career by working in a related internship, is a common activity.  Students leaning more toward humanities and the social sciences often participate in writing workshops, fine arts experiences, and travel.  Students may also study college-level material in a formal academic program.

Of course, plenty of students needing to support their families or save for tuition have only one choice:  work all summer.  As noted above, colleges are slowly accepting this reality, and many fine essays come out of the most mundane of work experiences.  Families unwilling or unable to pay for summer experiences will favor this option, along with encouraging their student to read for enrichment; Wake Forest currently asks for the student’s five favorite books.

The only rule that matters for now is that families should start working on summer plans while it is still cold out.



Recommendations

Most colleges will accept one or two teacher recommendations; some colleges will also accept a recommendation from a non-teacher.

Stay tuned for a post about how to secure the best recommendations.  For now, note that most recommendations come from teachers who are currently teaching your student.  Teachers in your student’s senior year will have too little time in which to learn about the student before applications are due; sophomore teachers may not remember much about the student when it comes time to write recommendations two years later.  Much the same is true about non-teachers, such as employers, coaches, and the like.

Juniors who have impressed teachers should redouble efforts in those classes.  They should keep notes of their achievements (best papers, projects, and tests) for later use when asking for a recommendation.  Note:  doing so may be essential because a few colleges now require students to submit a graded paper with their application – this may well become a trend.

Juniors should also increase their participation in class, review their teacher’s comments on papers and projects with them after class, and generally gain their teacher’s regard.  Some may call this “sucking up.”  Why yes, that is exactly right.  Welcome to the world of college admissions – and life.

Finally, all colleges require a letter from students’ “guidance counselor.”  This can be the most important recommendation of all.  We will take up the reasoning supporting this assertion in a subsequent post; suffice to say at this juncture that juniors should get to know their guidance counselors.  In large schools, the guidance counselors serving as college advisers have huge caseloads – even identifying them and introducing yourself can be a challenge. Nonetheless, juniors should begin planning to make an appointment with their guidance counselors to discuss college plans.



The Invasion of the Reps!

’Tis the season when college admissions representatives (“reps”) descend on high schools. Prepare to take advantage of this opportunity.

Not all colleges send reps to high schools to recruit students. Some elite and/or small universities combine forces to present sessions off-campus in the evening. Other colleges offer information sessions in various cities – if you are not in a large city, you probably will not see them.

However, a significant number of colleges send reps directly to high schools. These colleges generally assign reps to assess applications from different regions and states; part of their job is to visit schools in their assigned area. This means that reps who visit your school are likely to be part of the team evaluating your application.

The typical rep (not rap) session is devoted to an informal presentation on the merits of the college, followed by questions. Students are usually allowed to skip class to attend. The entire session usually lasts 30-60 minutes.

Most reps confine their visits to the “strongest high schools” in your area. If your school does not typically host college reps, then you should consider examining the website of the strongest schools in your area and choosing one or two information sessions with colleges to which you intend to apply. If you can swing the logistics (traveling from one school to the other, losing class time, etc.) seek permission from the host school (and your own) to attend the information session at the other school. Of course, you will identify yourself to the rep and explain the situation. Your initiative will seize the rep’s attention, which may bolster your admissions chances.

You have several objectives when attending a college information session:

  1. Learn information about the school that will help you decide whether to apply.
  2. Learn information about admissions that is not available elsewhere.
  3. Register your interest in the college and, if possible, impress the rep.


Here are some tips to accomplish these objectives.

  1. Determine which colleges are sending reps to your area.

Apart from checking with your high school about which colleges are sending reps to the school, most college websites – under “Admissions” – will announce when and where reps will be visiting your area. This may take some digging, but it’s there. If all else fails, call the college’s Admissions office.


  2. Choose your sessions carefully.

Attending sessions often requires skipping classes or extra-curricular activities. Unless you are seriously interested in the college, skip the session. As one rep notes:

I always felt one of the great ironies of the high school visit was that I was there exhorting students to take the toughest classes, do as well as possible…and then skip them when I came to school. That never made sense to me.


  3. Research the college.

This is another reason to choose your sessions carefully:  you should do some homework on the college before the session.

Determine whether the college cares about “demonstrated interest.”  College sessions generally include a “guestbook” where students sign in to register attendance. Reps also hand out their cards, giving the student the opportunity to send “follow-up questions”.  Most colleges keep track of students who attended the session as an indicator that the student “demonstrated interest” in the school. But a few do not.

Fortunately, there is data out there from which you can determine which colleges care about demonstrated interest. Begin by checking the college’s website. A few colleges, such as Carnegie Mellon, make it abundantly clear that they do not track this information.

Next, check a resource that compiles the data colleges submit to the U.S. Department of Education. Find your college, and then click on the “Admissions” tab. About halfway down the page, you will find “Selection of Students.”

Here is the data for one elite university.


The table lists each factor the college reports weighing, marked in one of four columns: Very Important, Important, Considered, or Not Considered. The factors are: Rigor of Secondary School Record; Academic GPA; Standardized Tests; Class Rank; Recommendations; Essay; Interview; Level of Applicant’s Interest; Extracurricular Activities; Volunteer Work; Character/Personal Qualities; First Generation to Attend College; State Residency; Geographic Residence; Relation with Alumnus; Religious Affiliation/Commitment; Ethnicity; and Work Experience.


As you can see above, the ratings are: “Very Important”; “Important”; “Considered”; and “Not Considered.”  One of the attributes is “Level of Applicant’s Interest.”  Most colleges – including the one above – state that such interest is “Considered.”  A few label it as “Important.”  That is often a code word for “you’d better go visit that college if you want to win admission.”  (Yes, Rice University, I’m looking at you.)

Colleges that state that the Level of Interest is “Not Considered” should be taken at their word. These are often elite colleges that do not want anxious students flooding onto campus simply because they believe they must. Feel free to attend their high school presentation sessions, but do not assume that you are bolstering your admissions chances by doing so.

Note:  the “Interview” attribute refers to interviews offered by the college after receiving your application. Most of the colleges which request interviews will offer to have an alumnus interview you close to your location; there is usually no need to visit the school for that purpose. For purposes of deciding whether to attend a rep session, you can ignore this attribute.

We’re just getting started with research here. Know the basics:  the college’s location, size (and perhaps gender/racial composition) of its student body, and courses of study available. A quick review of the college’s web site should yield this information.

Then dig a bit deeper:  determine the GPA and SAT/ACT scores necessary to be competitive, the schools, departments, and majors for which the school is best known, and some of the colleges with which it typically competes for students. There are several resources where you can find this information, but for a “quick look”, I consult a single site:  type in the school name, and choose the “overview”, “admissions”, and “student life” tabs.

Then run the college’s name through a search engine and browse the links which appear, from Wikipedia to various ranking sites (e.g., Niche, Princeton Review). Hard-core readers with plenty of time might look up the school in “College Confidential”, as well.

Finally, you should examine the Common Application (or the Coalition Application or, in a few cases, the college’s own application) and determine what supplemental essays and short-answer questions are contained in the application for the college visiting your high school. Be patient – there is a very good reason for doing so.

Now you are ready to evaluate the rep’s presentation and ask an intelligent question.

Here is a step-by-step guide to getting the most out of the session itself.

  1. Arrive 5 minutes early. That is enough time to snag a front row seat – you want to be seen – and not so early that you are stuck talking to the rep with nothing to say. Talk to the rep after the session, not before.
  2. Sign the guestbook.
  3. Turn off your cell phone and put it away.
  4. Listen carefully to the presentation; take notes.

The mere act of taking notes marks you as a serious applicant. And you may use the information later.

What should you be listening for?

What does the college consider its strengths?  Colleges compete for your attendance. Let them sell the benefits their school offers. Take notes, because at some point in your application to that school you will want to express interest in those benefits. In other words, you will want to tell them – at least briefly – what they want to hear. This is where you connect the information and sales pitch with your answer to their “Why Our College” question.

What information is the rep revealing about admissions policies and/or priorities?  Although most colleges simply repeat the information on their websites, sometimes they will deliver a nugget of information you can use. Be alert for suggestions that you apply early, or that the college is looking for certain extracurricular activities. (Although you cannot invent an activity, this information helps you decide which activities to emphasize.)  One year a rep from a top college responsible for my student’s region emphasized that he likes to read humorous essays. My client obliged – she was admitted. (Of course, her academic record might have had something to do with it.)

  5. Do not ask more than one question during the session.

Asking more than one question may give the impression that you are trying to dominate the session. Of course, if you have a follow-up question, do not be afraid to ask it.

Avoid questions whose answers are apparent on the website. For example, asking about admissions requirements that are set forth on the website makes you look lazy. This is one reason to do the research I suggested above.

Ask more general questions that may pertain to your interests, such as:

Question: “I am interested in exploring both sciences and the liberal arts. How difficult is that to do at ________?”  “Is it common for students to double-major?”

Question: “How popular is undergraduate research [presuming your interest is in STEM] at ______?”  “What are the requirements for students who are interested in performing research?”

Question: “Is there much interaction between the students on campus and the surrounding community?”  “What are the internship (or co-op – where you work full-time) opportunities in the area?”  [This question can be tweaked depending on whether you want to know about social, business, or academic opportunities – if you are a budding social worker, you may care more about the demographics of the area than other students do.]

I list two questions for each topic because another attendee might ask the first one.

After the session ends:

  1. If you haven’t done so already, sign the guestbook.
  2. Engage the rep.  Start by asking for a business card.

There will probably be a line of students waiting to talk to the rep. If you have plenty to say, then you might want to linger in the back of the line in hopes of being the last student who talks to the rep. You want to be memorable, in a good way.

When you get home after school, if you are still interested in the college, write a quick paragraph (based on your notes) and put it into a file for that college (electronic or physical) for use when drafting your application. Remember to save (or scan) that business card.

And write a thank-you note and e-mail it to the rep.  Something simple will do:  “Thank you for your information session today at [“X high school”].  I found it very informative and useful.”  Noting the high school is important, because reps often visit more than one high school each day.


A Test That Is Too Easy Results in Hard Feelings

A not so funny thing happened on the June 2018 SAT test – many students were surprised by their low marks. In fact, the entire testing community appears to have been taken aback, and not in a good way.

Here is an article from Compass that explains all of this in detail. Other test-prep companies have also discussed this issue.

The takeaway for students and parents is that the June 2018 SAT test was too easy. As a result, the College Board psychometricians (a vocabulary word that will probably never appear on the SAT – it refers to experts on testing) used a very steep curve to avoid giving the same scores to everyone. This curve punished mistakes harshly. As the Compass article linked above points out:

Compare this to how the June SAT 2018 Math fits in among its fellow new SATs. A 650 could be achieved with 50 correct answers. That’s the lowest scaled score the new SAT has ever produced for 50 correct answers. The highest score it has produced for 50 correct answers on an actual, released exam is 740 points — a 90-point swing! So in its first two years, the new SAT has approximately doubled the extremes seen on the old SAT over 10 years and 4 times as many exams.

What will happen to those scores?  The College Board remains committed to the results of the test, going so far as to insist that the results were not “curved,” but “equated.”  Technically this may be correct, but the impact for many is that their scores on that test do not reflect their ability to score well on the “typical” SAT.
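The effect of a steep conversion table can be sketched in a few lines of code. The conversion numbers below are invented purely for illustration – they are not actual College Board tables – but they show how, on an “easy” test, each raw point lost costs far more scaled points than usual.

```python
# Hypothetical raw-to-scaled conversion tables illustrating why an "easy"
# test produces a steep curve.  These numbers are invented for illustration;
# they are NOT actual College Board conversion tables.

def scaled_score(raw_correct, table):
    """Look up the scaled score for a raw number of correct answers."""
    return table[raw_correct]

# On a typical test (58 math questions), missing a few questions
# costs relatively little...
typical = {58: 800, 57: 790, 56: 770, 55: 750, 54: 740}

# ...but on an easy test, the same raw scores map far lower,
# so each mistake is punished harshly.
steep = {58: 800, 57: 770, 56: 740, 55: 710, 54: 690}

for raw in (57, 55):
    print(raw, scaled_score(raw, typical), scaled_score(raw, steep))
```

Note that a perfect raw score earns an 800 on both tables; the punishment falls entirely on students who miss even a handful of questions, which is exactly the pattern reported for the June 2018 exam.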

However, the furor surrounding this exam changes the usual calculation concerning when students should re-take a standardized test. Normally, students with two or more SAT (or ACT) results should not retake the exam except under two circumstances: 1) students are very confident that they will be able to improve their scores because they have now received extra time for a learning disability, were ill previously, or have put in more time in studying for the test (perhaps with help from a test-prep company); or 2) students absolutely need a higher score to stand a chance of being admitted to their “reach” schools.

The risk of taking the exam repeatedly is that some colleges require students to send all test results. For those colleges, there are several risks. First, the student may get unlucky and receive lower scores on the retake. Second, some of those colleges (e.g., Ivy League) frown on students taking the SAT or ACT multiple times. Finally, colleges tend to discount later scores as being due to the “practice effect”, i.e., students who are more familiar with the test typically post higher scores.

Some of these risks may be reduced for students who scored below their expectations on the June 2018 exam. Although colleges will probably not discard the June 2018 scores, the furor over the test means that they will view those scores with an asterisk – and are more likely to view later scores favorably.

If you are dissatisfied with your score on the June 2018 SAT, you should seriously consider retaking the test.

SAT Score? No, But Check Out How I Did on the Gaokao!

A couple of news items grabbed my attention in June.  First, the universe of test-optional schools expanded with the addition of a top-ten institution, the University of Chicago.


Just say no to standardized tests?

Test-optional schools are what the label implies:  students are not required to submit SAT/ACT scores when applying.  The “test-optional” practice has become increasingly popular, with 100 colleges adopting the practice during the last five years.  Why is this becoming popular?  From my vantage point I see good – and perhaps not so good – reasons.

Some students just do not test well.  Students with learning disabilities who need extra time sometimes do not receive accommodations, or do not benefit from them.  Standardized tests stand accused of cultural bias – asking questions which presuppose knowledge that students not raised in “mainstream” America lack.  And the “test-prep” industry – which often only the wealthy can afford – can boost some students’ scores at the expense of their less financially endowed competitors.  In other words, in some cases the tests simply do not measure students’ academic potential.

However, some colleges adopting this practice are being strategic rather than altruistic.  Colleges are businesses, and once you get beyond the top 100 colleges, demand for places starts to ebb.  Many of those “test-optional” institutions are small private colleges.

“Going test-optional” enlarges the pool of interested students.  In a few cases, those extra students can be essential for the financial survival of the institution.

Schools that go this route can obtain another benefit – selective reporting.  Students with low standardized test scores who stand to be admitted for other reasons (e.g., legacy admissions, athletic scholarships) are the most likely not to submit those scores.  When the school reports its admission statistics, its average standardized test scores for admitted students will rise commensurately.  This makes the college appear more selective, which can affect its rating in reference lists such as U.S. News and World Report.  These colleges get the best of all possible worlds:  more applicants, higher reported test scores, and a few rungs up the “prestige” ladder for schools (which in turn prompts more students to apply).

A few “test-optional” colleges have “refined” this concept by requiring submission of standardized test scores to obtain financial aid.  This appears to be an obvious “pay to play” plan, where students with poor test scores are not required to report them if all they seek is admission.  If those students are admitted, they subsidize their better testing peers by paying full tuition.

Many – but not all – top colleges have resisted adopting a “test-optional” policy because of the stigma attached.  The SAT and ACT are marks of quality (however imperfect); announcing that they are no longer required for admission suggests a lack of rigor.  This is increasingly the case because widespread grade inflation is making GPAs less reliable for distinguishing among students.

This is what makes the decision by the University of Chicago so surprising.  It is an elite institution with perhaps the most “cerebral” reputation of them all.  This was where the Manhattan Project helped win WWII, and the Chicago School of Economics changed economic policy around the world.  This is the university famous for its admissions essay questions – a few years back applicants were invited to explain “What is odd about odd numbers?”  The university is known as the place “where fun goes to die,” because everyone is so busy studying.  These students run an intellectual gauntlet second to none.

Perhaps most important, the University does not need more prestige – it already ranks #3 on the all-important U.S. News and World Report list of top colleges.  Thus, when the University of Chicago goes “test-optional”, the world of higher education pays attention.

So why did the University do it?  It cites some of the “good” reasons stated above; it is coupling the move with an increase in financial aid for families earning under $125,000.  If there were also strategic motives behind the plan, they were not announced.  (No surprise there.)

Two questions remain.  First, will this change the demographic of admitted students at the University?  Will more minority students and students with learning disabilities apply, and will they be accepted?  Second, will other schools follow the University’s lead?

We will have to wait a few years to find the answers to both questions.  In the meantime, the “test-optional” movement just gained a significant boost.

Students considering test-optional schools should carefully evaluate the testing policies of individual colleges – they can differ in important respects.  College counselors can add value here.


The rise of the gaokao

Meanwhile, there was another standardized test in the news this week – the gaokao.  This one dwarfs the SAT and ACT in almost every respect.  About 3 million U.S. students take the SAT or ACT, while approximately 9 million Chinese students sit for the gaokao.  As in some European countries, this single exam is the most important, if not the only, data point used by Chinese colleges for admission.

The gaokao is a nine-hour exam given over two days.  Compulsory subjects include Chinese, mathematics, and, usually, English (students can substitute other languages); students also sit for an additional subject depending upon whether they are pursuing STEM or other careers.

It is hard to overestimate the importance of the gaokao to Chinese students.  The results determine which colleges students can attend and the majors they may pursue there once admitted.

Many students compress their high school careers so that they can graduate early and spend the next year cramming for the exam.

The hype around the exam makes stories about test anxiety in the United States seem tame.  Per the South China Morning Post:

Hengshui Middle School in Hebei province, where more than 100 students earned admission to the prestigious Peking and Tsinghua universities, students have been given IV drips as they study, believing that it will help them with concentration and focus. Girls are given contraceptive pills to delay their periods until after the exam.

Similar stories abound.  See, in particular, an article in the Atlantic (referred to below as “the Atlantic article”).

The Atlantic article suggests that the growing number of Chinese students seeking to attend college overseas is driven by worry about – and disdain for – the gaokao.  It notes that Chinese students taking the SAT and ACT are under similar pressure, and links that pressure with the recent wave of cheating scandals on all three exams in Asia.  (The penalties are a bit stiffer for cheating on the gaokao – cheaters are banned from retaking the test for years, and those caught facilitating cheating face prison sentences of up to seven years.)

The gaokao and the SAT/ACT share one justification – identifying talent.  The archetypical example in China is the student in the rural provinces who would otherwise have been consigned to a life of farming but did well enough on the gaokao to change her life.  It is ironic that the SAT/ACT are threatened in the U.S. while the gaokao remains dominant in China.

One reason for the dominance of the gaokao is that for many Chinese students, the test is the only guaranteed authentic mark of achievement and talent.

From the Atlantic article:

Guessing the percentage of fraudulent transcripts in applications from China is a popular parlor game among educators over here. Unscientific estimates abound: One prominent agent who works with students at some of the best high schools in China recently estimated to me that at least half of the transcripts in China are doctored to look like the students have done well in a robust high school curriculum, when the reality is one of almost constant memorization and practice tests. Unfortunately, no one in the college prep industry in China would be surprised if the actual percentage was significantly higher.

The Chinese system poses a challenge for U.S. colleges that tout their “holistic college admissions” processes.  How can they distinguish among foreign students who spend all of their time studying for one exam, and whose transcripts, even if produced, may be fraudulent?

The obvious answer is to consider the results of the exam.  After all, colleges rely on the TOEFL exam to assess competency in English.

And so it begins.  Newspapers this week trumpeted the decision by the University of New Hampshire to consider the gaokao.  But UNH is only the first public university to do so – the University of San Francisco (“USF”) started accepting the gaokao in 2015.  Dozens of universities in Australia, Canada, and Europe accept it.

This article from Inside Higher Ed reporting on USF’s program is skeptical that many U.S. universities will follow, mostly because the timing of the gaokao conflicts with the admissions cycle.

We shall see.  When 9 million test-takers sit for an exam, the number of “underperformers” is in the seven-digit range.  It is therefore no surprise that U.S. universities are after some of those test-takers, preferably those who will pay full tuition in the United States.

Per the USF administrator in charge of Chinese admissions:

He anticipates that USF will set gaokao cutoff scores equivalent to the marks needed to get into a first-tier Chinese university in each province, plus or minus a few points. Students who are admitted based on their gaokao scores will pay their own way, though Nel said they could be eligible for merit scholarships of up to $20,000 per year.

Brave words, but I doubt that the high standard will be maintained.  After all, the goal of almost every student who takes the gaokao is to snag a spot in a first-tier Chinese university.  Very few will trade that for a spot at most U.S. colleges.  Expect lower, less publicized, standards as this practice grows.

And it will grow.  With 337,000 Chinese students currently enrolled in U.S. colleges, and financial pressures on those colleges increasing as public funding continues to lag, do not be surprised when this practice spreads throughout our American system of higher education.

Ken Rosenblatt — Tucson College Counselor

TEN-HUT! Applying to Our Nation’s Military Service Academies

The summer before senior year is usually quiet for high school students.  Not so for those seeking to attend our Nation’s military service academies.  This is because of the importance of seeking – and obtaining – a nomination for an appointment.

An appointment is the equivalent of a college acceptance.  A nomination is a request by a designated person or institution that an applicant receive an appointment.

If this looks complicated, it is.  However, rest assured that the service academies will walk you through the process (the Coast Guard Academy does not require nominations, but is listed below for reference):

Army (West Point, NY):

Navy (Annapolis, MD):

Air Force (Colorado Springs, CO):

U.S. Coast Guard (New London, CT):

U.S. Merchant Marine (Kings Point, NY):

Most students seeking nominations apply to their local Representatives and Senators.  The Vice-President and President also nominate candidates.

Applicants are encouraged to ask each of their local Representatives and Senators for nominations because each of these office-holders can make only a limited number of nominations.  As the Air Force puts it:

We recommend attaining a nomination from as many sources as you are eligible for, President, Vice-president, Senators and Representatives. This will improve your chances for success if one source does not nominate you.

Certain military officials, JROTC units (discussed below), and even some private military academies can also nominate applicants.  Consult service academy websites for more information on those options.

The nomination is only part of the application, but it is the one with which most students are least familiar.  Applicants who have military connections (e.g., JROTC, relatives in the services) should reach out to each and every one of them for help.


Planning further ahead

For those with the luxury of time – students about to enter their sophomore or junior years – now is the time to lay the groundwork for your application and request for nomination.  This is because the service academies have some unique requirements for admission that require sustained effort.

First and foremost is fitness for duty.  Before you get your hopes up, determine as best you can whether you have any medical problems which could pose a barrier to admission.  A medical examination, reviewed by the Department of Defense Medical Examination Review Board (DoDMERB), is required for admission.

The Air Force is particularly stringent, but all of the academies have medical criteria.  Waivers may be obtained for certain conditions.

The academies also require evidence of physical fitness.  The Army includes a guide for aspiring cadets.

It also requires students to complete a fitness test before applying:

You will schedule your CFA with your physical education (PE) teacher. The Candidate Kit has the test form and instructions you can forward in PDF format. Your performance in six events will be judged:

  • Basketball throw (from a kneeling position)
  • Cadence pull-ups or flexed-arm hang (women’s option)
  • 40-yard shuttle run (for time)
  • Abdominal crunches (number completed in 2 minutes)
  • Push-ups (number completed in 2 minutes)
  • 1-mile run (for time)



Colleges proclaim that they seek students who have demonstrated leadership.  The service academies “walk the walk.”

Consider this advice from the Air Force:

So how do you prepare for a future at the Academy?

  • Study hard. Get the best grades you can in all subjects — especially English, math and science.
  • Join a sports team. If your school does not have an after-school sports program, you can usually find one at your local community park or recreation center.
  • Become a leader. Join a scouting program like Girl Scouts, Boy Scouts or Civil Air Patrol. Or join another school or local club and go for a leadership position like club president or secretary.
  • Demonstrate character. Consider activities that help others. Get involved with church groups or other organizations that may be helping members of your community.


Commitment to military life

Service academies are looking for leaders who will take well to military discipline.  One good way to demonstrate that quality is to join a local JROTC unit and, if possible, attend a summer program at a service academy.

Wikipedia offers a general (if lengthy) overview of the JROTC and a similar program, the NDCC.

These programs are intensive preparation for military life; evaluations and recommendations (and even nominations for appointment) from commanders are key.  The downside is that the commitment required to join such a unit may extend to attending very early morning sessions at a different location from the student’s high school.  Some parents even home-school their students as a way of accommodating the conflicting demands.

Another way of demonstrating a commitment to the academies is to attend one of their summer programs:



Air Force:

Coast Guard: (prospective merchant mariners also attend this program – the U.S. Merchant Marine Academy does not offer its own).

These camps are a pathway to an appointment.  As the Navy puts it: “Summer Seminar gives you a taste of life at the Academy and kick-starts your application journey for an appointment to the Academy.”

Finally, the Academies have the same way to demonstrate interest as do many colleges – sign up for their mailing lists.  For example, prospective applicants to the Air Force Academy should sign up to be “Future Falcons”.



Just because the services look for fitness and leadership potential, do not assume that academics are at the bottom of their list of priorities.  Academic performance makes up 50% of the assessment for admission to the Air Force Academy.  The Naval Academy emphasizes STEM coursework, with many midshipmen graduating from the Academy with engineering degrees.  Standardized testing is also important.  For example, the published mean SAT score for the Air Force Academy is in the mid-1200s, with a mean ACT of 30.


Work on Plan B

Our nation’s service academies are competitive; admission is the exception, not the rule.  Therefore, students also need to apply to colleges in case they do not win an appointment.  The good news is that academic and leadership ability, along with extracurricular activities, make for a winning application to both service academies and colleges.  As you might expect, I can assist students with both endeavors.

This Blog is Ranked #1

Once again, college ranking season is upon us.  Timed for when high school juniors and their parents are starting to focus on college admissions for the following year, a whole host of magazines, newspapers, and websites publish their annual “rankings” of their “top” institutions of higher learning.

What are you – parent or student – supposed to do with all these opinions?  How do you know which college is the best, or, more important, the best for your student?

When buying a washing machine, many consumers start with Consumer Reports magazine.  Is there a “Consumer Reports” rating authority for colleges?

Alas, no.  Instead of one possibly authoritative source, there are well over a dozen contenders.

Start with traditional news media organizations that publish college rankings, including:

U.S. News and World Report: (the 800 lb. gorilla of college ratings)

The Wall Street Journal/Times Higher Education of London:

The Economist magazine:

Forbes magazine:

Money magazine:


Washington Monthly magazine:


You can also consult web sites which publish their own rankings, such as:


The Alumni Factor:

College Factual:




Let us not forget guidebooks.  Some are specialized, such as “Colleges That Change Lives,” which profiles 40 schools.  Here, however, I am referring to the largest publishers, those whose guidebooks profile 300 or 400 of the “top” colleges in the nation.  Although these guidebooks, such as Fiske and the Princeton Review, do not rank colleges, their inclusion of each college in their books is an endorsement of sorts for that institution, or at least a culling from the herd of over 2,500 four-year institutions of higher learning.  These guidebooks also include specialized lists, such as private and public university “best buys”.

Why are there so many lists?  For the most part, money is the motivator.  The magazines gain subscribers and, for those readers who find them on the Internet, advertising dollars.  The guidebook companies sell more books.  Which book would you buy:  “382 Colleges” or “The Best 382 Colleges”?  Princeton Review chose the latter title for its book.

Another reason for the plethora of lists is that rating any institution, including a college, depends upon what is deemed most important to the reader.  Consider the large number of criteria available:

  1. School resources, including size of the endowment, availability of research equipment and funding for undergraduates.
  2. Selectivity – how difficult it is to win admission.
  3. Affordability – usually a combination of price (tuition, room and board, fees) and availability and generosity of financial aid.
  4. Academic record of accepted students.
  5. Graduation rates, at 4 and 6 years.
  6. Student retention rates after freshman year.
  7. Reputation of the faculty.
  8. Faculty’s teaching ability.
  9. Student-professor ratio.
  10. Alumni contributions, both in percentage giving and amounts donated.
  11. Student “engagement” with professors.
  12. Student satisfaction with the college.
  13. Projected earnings for graduates; sometimes expressed as a ratio with tuition to arrive at an ROI (return on investment) for each college.
  14. Recruitment of disadvantaged students (e.g., race, income).
  15. College emphasis on service by its students (e.g., community service requirements).
  16. Student loan repayment rate.

Thus, determining “the #1 college” depends upon what information is deemed important.

The key to using these lists is to understand:

  1. What criteria are used, and how those criteria are weighted, in arriving at the ranking.
  2. Whether the data used is sufficient to support the conclusion drawn from it.
  3. Whether the ranking relies on data which is subject to misinterpretation or even fraud.
  4. Whether the criteria are relevant to your student’s interests.


A few examples of “best practices” and, well, “less than best practices,” follow.


What criteria are used, and how are they weighed?

The most serious problem arises when the ranking is done in a “black box”, where only the ranking service knows what it considers important.

Consider the explanation offered by Niche (formerly College Prowler) concerning how it creates college rankings:

The Niche 2018 College Rankings are based on rigorous analysis of academic, admissions, financial, and student life data from the U.S. Department of Education along with millions of reviews from students and alumni. Because we have the most comprehensive data in the industry, we’re able to provide a more comprehensive suite of rankings across all school types.

Very impressive, if a bit vague concerning exactly what that data is, and how much of it is relevant to ranking colleges.  However, we can break it down.  The Department of Education collects a trove of data from colleges – the complete dataset is over 200 megabytes – and makes it public.  Most ranking services use this data, and several sites compile and present it in usable form.

Niche’s claim to fame appears to be that it combines some of that data with its proprietary student survey data.  Confusion results when it explains how it uses that data (emphasis added in bold).

With clean and comparable data, we then assigned weights for each factor. The goal of the weighting process was to ensure that no one factor could have a dramatic positive or negative impact on a particular school’s final score and that each school’s final score was a fair representation of the school’s performance. Weights were carefully determined by analyzing:

How different weights impacted the distribution of ranked schools;

Niche student user preferences and industry research;

After assigning weights, an overall score was calculated for each college by applying the assigned weights to each college’s individual factor scores. This overall score was then assigned a new standardized score (again a z-score, as described in step 3). This was the final score for each ranking.

Yes, but what factors – criteria – were used, and how were they weighted?  Niche does not say.  For example, if Niche weighted “reputation of faculty” at 90%, then the rankings would be skewed heavily in favor of prestigious schools.
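To see why the undisclosed weights matter so much, here is a minimal sketch of the weighted-score-plus-z-score mechanism Niche describes.  The school names, factor names, and weights below are entirely hypothetical – Niche does not disclose its actual criteria – but the arithmetic shows how the choice of weights, not the underlying data, can decide the final order.

```python
import statistics

# Hypothetical factor scores (0-100) for three schools.  The factors
# and weights are invented for illustration; Niche's are undisclosed.
schools = {
    "College A": {"academics": 90, "value": 60, "student_life": 70},
    "College B": {"academics": 70, "value": 85, "student_life": 80},
    "College C": {"academics": 80, "value": 75, "student_life": 65},
}
weights = {"academics": 0.5, "value": 0.3, "student_life": 0.2}

# Weighted overall score for each school.
overall = {
    name: sum(weights[f] * score for f, score in factors.items())
    for name, factors in schools.items()
}

# Standardize the overall scores to z-scores, as the methodology
# quote describes, then rank from highest to lowest.
mean = statistics.mean(overall.values())
stdev = statistics.stdev(overall.values())
z = {name: (s - mean) / stdev for name, s in overall.items()}
ranking = sorted(z, key=z.get, reverse=True)
print(ranking)
```

With these weights, "College A" wins on academics; shift most of the weight to "value" and "College B" comes out on top instead.  Whoever picks the weights picks the winner – which is why a ranking that hides them tells you very little.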

In contrast, many ranking sites are transparent about how they use such data in arriving at their rankings.  See, e.g., U.S. News and World Report (listing factors and the weight assigned to each) and the Economist magazine (factors and weights used in its ratings for British universities).  If we cannot discern what information underpins rankings, then rankings are not very helpful.

Further, Niche makes explicit what I suspect other purveyors of rankings do quietly – it takes a “second look” at the data to make sure that it looks “right” before publishing its rankings.  Here is the quote from above, with different emphasis added in bold:

With clean and comparable data, we then assigned weights for each factor. The goal of the weighting process was to ensure that no one factor could have a dramatic positive or negative impact on a particular school’s final score and that each school’s final score was a fair representation of the school’s performance. Weights were carefully determined by analyzing:

How different weights impacted the distribution of ranked schools;

Niche student user preferences and industry research;

That certainly looks like Niche “tried out” different weightings for each of the unnamed criteria, and then changed those weights if the resulting rankings did not look “right”.  The last sentence even suggests that it adjusts its ranking to conform to what other ranking services report (Niche analyzes student preferences and “industry research”, i.e., how other college insiders rank the schools).  That level of subjectivity is understandable – sort of applying a “smell test” to the results – but it does not add confidence that the weighting is completely objective.  It also limits the possibility of “uncovering hidden gems”.  What is the point of rankings if a school cannot score at the top unless it “looks the part”?

The lesson here is that before relying upon a list, understand exactly what criteria are being used, and how they are being weighted.


Is the data sufficient to support the measurement? 

When you step on a scale, the datum that stares you in the face, no matter how unpleasant, is almost certainly sufficient to determine how much you weigh.  However, if three people step on the scale – one at a time – the average of their weights will be insufficient to determine the average weight of the population of the United States.

Some ranking criteria require complete datasets to be relevant, and those datasets are often very difficult to obtain.  For example, many surveys measure “student outcomes,” usually through a proxy such as average earnings after 1 year, 5 years, etc.  Unfortunately, when schools survey graduates about their employment, often only the graduates with “good news” to report answer.  Would you be eager to tell your alma mater that you are unemployed?  And even some of those with good news to report simply ignore the surveys – perhaps out of fear that a letter from the development department asking for funds will follow.  (On a personal note, for the last 38 years, UCLA has been able to track me to a new address within six months of my arrival – very impressive.)
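The effect of that nonresponse is easy to quantify.  The salaries below are invented purely to illustrate the bias, but the pattern is the one described above: if only graduates with good news answer the survey, the reported average can be more than double the true one.

```python
# Hypothetical graduating class of ten: salaries in dollars,
# with 0 representing an unemployed graduate.  Invented numbers.
salaries = [85_000, 72_000, 65_000, 58_000, 40_000, 0, 0, 0, 0, 0]

true_average = sum(salaries) / len(salaries)

# Suppose only graduates with "good news" (a job paying over $50k)
# bother to respond to the outcome survey.
respondents = [s for s in salaries if s > 50_000]
reported_average = sum(respondents) / len(respondents)
response_rate = len(respondents) / len(salaries)

print(f"true average:     ${true_average:,.0f}")      # $32,000
print(f"reported average: ${reported_average:,.0f}")  # $70,000
print(f"response rate:    {response_rate:.0%}")       # 40%
```

A 40% response rate here – double the University of Kansas figure mentioned below – still more than doubles the apparent average salary.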

For example, only 19% of University of Kansas graduates responded to an outcome survey.  The university then took the unusual step of looking up LinkedIn profiles to supplement the responses.

Take any exemplary “placement rate” with a large grain of salt.  One item to look for when evaluating an individual college’s placement rate is whether it uses the NACE standard for survey responses.  Even then, be careful with any statistic about “full-time employment” – many colleges count Starbucks baristas as fully employed.

The college rankings that rely on graduates’ salaries also suffer from incomplete datasets.  See “Want to Search Earnings for English Majors?  You Can’t.”  New York Times, 12/1/17.

In addition, graduates tend to find work close to their colleges.  Unless adjusted for cost of living, student earning surveys will skew in favor of West and East Coast schools.

Other criteria are easy to compute, but relatively meaningless.  The percentage of students graduating after four or six years is susceptible to sample bias.  It is normal for many engineering students to take more than four years to graduate.  Unless you know the sample (MIT versus a small liberal arts college), a low four-year graduation rate coupled with a high six-year rate may be “normal”.  Obviously, any school where both rates are low will drop out of any ranking without ceremony.  A better measure is freshman retention – if students are not returning after freshman year, the odds are higher that your student will not, either.


Some data is subject to “gaming,” or even fraud

Selectivity is the ratio of students admitted to applicants.  In 2016, Stanford admitted only 1 in 20 applicants, leaving it with a selectivity of 5%.  That means Stanford must be the best, because only the best students get in, correct?

Well, a lousy school will not attract enough students unless it relaxes admission standards.  But be careful about putting too much weight on that correlation, because many colleges “game” selectivity.  Methods include:

  1. Using “VIP applications”. Colleges send these one-page, no-fee applications to thousands of students.  Because they do not require essays or application fees, many students fill them out and return them.  I commented on this in my “You’ve Got Mail” post (6/22/15).


  2. Adopting the Common Application. The Common Application is used by over 500 colleges.  Students need only fill out the application once to apply to all member colleges.  Most colleges require students using the Common Application to respond to one or more essay prompts unique to the school.  Stanford has 11 such prompts, although many of them are short.  Colleges that wish to encourage applications do not require any supplemental essays.


  3. Going test-optional. A growing “fair test” movement in college admissions eschews standardized tests (SAT, ACT), arguing that they merely reflect affluence and penalize students who do not test well.  These schools allow students to apply without them.  Although this may be laudable, it can also increase applications.


Colleges using these tools can lower their selectivity number, which, remember, is the percentage of applicants admitted, simply by inducing more students to apply while not increasing the number of students accepted.  Why would they do this?  U.S. News and World Report includes a college’s selectivity number in its ranking system, and guidebooks list colleges’ selectivity ratings prominently.  Students use selectivity as a proxy for “desirability,” which results in even more applications; a vicious cycle for students ensues.
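The arithmetic behind this is worth making explicit.  The numbers below are hypothetical, but they show how a college can nearly halve its selectivity figure through marketing alone, without admitting a single student more or fewer.

```python
def selectivity(admitted: int, applicants: int) -> float:
    """Admission rate: the fraction of applicants a college admits."""
    return admitted / applicants

# Hypothetical college with 2,000 admission slots either way.
baseline = selectivity(admitted=2_000, applicants=20_000)

# Mail out no-fee "VIP applications" and drop the supplemental
# essays; applications swell, but admits stay exactly the same.
after_marketing = selectivity(admitted=2_000, applicants=35_000)

print(f"{baseline:.1%} -> {after_marketing:.1%}")  # 10.0% -> 5.7%
```

Nothing about the entering class changed – only the denominator did.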

The U.S. News rankings also factor in the SAT and ACT scores of enrollees.  You might ask how colleges could possibly “game” such numbers.  Well, five colleges since 2012 have been caught reporting higher scores than students actually earned – we call that fraud.  And there may be more who have not been caught.


The criteria are not useful to students

U.S. News and World Report assigns 22.5% of its college ranking to “undergraduate academic reputation”.  What does that phrase mean?  Here is the explanation from U.S. News:

For the Best Colleges 2013 rankings in the National Universities category, 842 top college officials were surveyed in the spring of 2012 and 53 percent of those surveyed responded. Each individual was asked to rate peer schools’ undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished).

Those individuals who did not know enough about a school to evaluate it fairly were asked to mark “don’t know.” A school’s score is the average score of all the respondents who rated it. Responses of “don’t know” counted neither for nor against a school. 

The problem is that “reputation” is intangible – the reader must ask: “reputation according to whom?”  At least U.S. News makes the “who” clear:  college officials and high school counselors.  But should students care how college administrators regard their colleagues (some of whom will be rivals)?  Unless the school is nearby (in which case it may well be a rival), most college administrators are unable to make fine distinctions between colleges.  And high school counselors rarely follow up with their charges to determine their experiences after enrollment.

Indeed, shouldn’t students care a lot more about what employers think about the schools?  The Wall Street Journal published just such a list – in 2010.

And some ranking services reject traditional criteria as merely a proxy for wealth.  For example, The Washington Monthly publishes its list as “our answer to U.S. News & World Report, which relies on crude and easily manipulated measures of wealth, exclusivity, and prestige to evaluate schools.”

Alas, not many career-oriented students (and parents) will be interested in its alternative:

We rate schools based on their contribution to the public good in three broad categories: Social Mobility (recruiting and graduating low-income students), Research (producing cutting-edge scholarship and PhDs), and Service (encouraging students to give something back to their country).

Kudos to those who are.

The largest failing of college rankings is that they are usually so general as to be meaningless.  Students in STEM fields may not value small class sizes as much as a school’s facilities and labs.  Liberal arts students are just the opposite.  Students who are planning professional careers may care less about general “outcome measures” than about acceptance rates at professional schools (which the ranking industry tends not to measure).  Wealthy families may not worry about financial aid – for less wealthy parents, their inquiry may begin – and end – with that criterion.


College rankings can be useful

With the caveats expressed above, college rankings do have their uses.  Here is how to use them:

  1. Understand what is being measured, and the weights assigned to each criterion.
  2. Understand what criteria are omitted, and determine if the omission is important to your student’s needs.
  3. Choose the most specific ranking possible. U.S. News and World Report, along with other services, publishes “sub-lists”, e.g., best undergraduate engineering programs.  Prospective engineers should start with that list.
  4. Consult more than one ranking. Schools that are consistently ranked highly may be more likely to deserve their rankings.
  5. Prepare to dig deeper. Sometimes variations in formulas make little difference in the rankings they produce.  Eight of the top 10 universities on the U.S. News list were also in the Journal/THE top 10.  The other two were right behind.  You will have to do your own research to tease out the differences among high-ranked schools, or make decisions based on other factors (e.g., geography, cost).

How do I use rankings?  First, I use them to ascertain which schools my clients will value highly.  I may have to overcome preconceived notions with research.  Second, I use them like a string around a finger – when I am composing a list of candidate schools, I check the rankings to make sure that I have evaluated all of the commonly considered schools.

I also use college matching services, such as BigFuture, to create lists of schools, but that is grist for another article.