Friday, October 27, 2006

Aha! It IS possible to have statistically significant rankings!

I've been traveling a lot during my sabbatical, mostly talking about Enron and related issues, and I picked up a copy of Business Week in the airport yesterday. The latest issue ranked the top B-schools. Business Week's rankings are calculated very differently from those published by U.S. News & World Report.


What does Business Week do differently? Well, for one thing, it surveys people with direct knowledge of the various schools. Check out this quote: "Through all of this, the ranking has centered on one thing: customer satisfaction. We measure this by surveying not just thousands of students but the corporate recruiters who hire them." And how does Business Week conduct its survey?

45% of total score: student satisfaction surveys.
  • It sends out 50-question questionnaires to "16,565 Class of 2006 MBA graduates at 100 schools in North America, Europe, and Asia"--and gets an over-50% response rate. The schools help Business Week contact the grads (except for Harvard and Wharton; even so, Business Week was able to "reach 39% of the Class of 2006 at those two schools"). (This methodology reminds me of the Law School Survey of Student Engagement's methodology.)
  • It uses a web-based survey instrument for ease of response.
  • Its questionnaire covers such issues as teaching quality and career services.
  • The responses (excluding those from low-response-rate schools) make up 50% of the "student satisfaction" score.
  • The remaining half of the student satisfaction score comes from surveys of the class of 2002 (25% of student satisfaction score) and the class of 2004 (ditto).
  • All of the responses from the students are then reviewed by psychometricians to ensure that the data aren't skewed.

45%: recruiter polls.

  • Also a web-based survey.
  • Over 50% response rate.
  • Each company was allowed to fill out one survey, to reduce possible distortion of results.
  • "Recruiters were asked to rate their top 20 schools according to the quality of a B-school's grads and their company's experience with MBAs past and present. Companies could only rate schools at which they have actively recruited--on campus or off--in recent years." (Emphasis added.)
  • Each school's recruiter-poll score was divided by the number of companies recruiting at that school.
  • Business Week followed the same weighting of 50% for input by recruiters about the class of 2006 recruits, 25% for input about the class of 2004 recruits, and 25% for input about the class of 2002 recruits.
  • Once again, Business Week culled out the low-response-rate schools.

10%: intellectual capital.

"[W]e calculated each school's intellectual-capital rating by tallying faculty members' academic journal entries in 20 publications, from the Journal of Accounting Research to the Harvard Business Review. We also searched The New York Times, The Wall Street Journal, and BusinessWeek, adding points if a professor's book was reviewed there. The scores were then adjusted for faculty size. The final intellectual-capital score accounts for 10% of a school's final grade."

Now, compare Business Week's methodology to the methodology used by USNWR for its law school rankings:

Quality Assessment (weighted by .40)

Peer Assessment Score (.25) In the fall of 2005, law school deans, deans of academic affairs, the chair of faculty appointments, and the most recently tenured faculty members were asked to rate programs on a scale from "marginal" (1) to "outstanding" (5). Those individuals who did not know enough about a school to evaluate it fairly were asked to mark "don't know." A school's score is the average of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. About 67 percent of those surveyed responded.

Assessment Score by Lawyers/Judges (.15) In the fall of 2005, legal professionals, including the hiring partners of law firms, state attorneys general, and selected federal and state judges, were asked to rate programs on a scale from "marginal" (1) to "outstanding" (5). Those individuals who did not know enough about a school to evaluate it fairly were asked to mark "don't know." A school's score is the average of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. About 26 percent of those surveyed responded.

Selectivity (weighted by .25)

Median LSAT Scores (.125) The median of the scores on the Law School Admission Test of the 2005 entering class of the full-time J.D. program.

Median Undergrad GPA (.10) The median of the undergraduate grade point averages of the 2005 entering class of the full-time J.D. program.

Acceptance Rate (.025) The proportion of applicants to the full-time program who were accepted for entry into the 2005 entering class.

Placement Success (weighted by .20)

Employment Rates for Graduates
The employment rates for the 2004 graduating class. Graduates who are working or pursuing graduate degrees are considered employed. Those graduates not seeking jobs are excluded.

Employment rates are measured at graduation (.04) and nine months after graduation (.14). For the nine-month employment rate, 25 percent of those whose status is unknown are counted as employed.

Bar Passage Rate (.02) The ratio of the school's bar passage rate of the 2004 graduating class to that jurisdiction's overall state bar passage rate for first-time test takers in summer 2004 and winter 2005. The jurisdiction listed is the state where the largest number of 2004 graduates took the state bar exam.

Faculty Resources (weighted by .15)

Expenditures Per Student The average expenditures per student for the 2004 and 2005 fiscal years. Average spending on instruction, library, and supporting services (.0975) is measured separately from spending on all other items, including financial aid (.015).

Student/Faculty Ratio (.03) The ratio of students to faculty members for the fall 2005 class, using the American Bar Association definition.

Library Resources (.0075) The total number of volumes and titles in the school's law library at the end of the 2005 fiscal year.
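
Laid out as arithmetic, the USNWR formula is just a weighted sum of twelve components, and the weights listed above add up to exactly 1.0. The sketch below is only my reconstruction from that list; the key names are my own labels, and USNWR's standardization of each measure and rescaling of the top score to 100 are left out.

```python
# My reconstruction of the USNWR weighted sum, using the weights listed above.
# Key names are mine; standardizing each measure and rescaling the top school
# to 100 are assumed to happen elsewhere.
USNWR_WEIGHTS = {
    "peer_assessment":         0.25,
    "lawyer_judge_assessment": 0.15,
    "median_lsat":             0.125,
    "median_ugpa":             0.10,
    "acceptance_rate":         0.025,   # lower is better; sign flipped upstream
    "employment_at_grad":      0.04,
    "employment_9_months":     0.14,
    "bar_passage_ratio":       0.02,
    "instruction_spending":    0.0975,
    "other_spending":          0.015,
    "student_faculty_ratio":   0.03,    # lower is better; sign flipped upstream
    "library_volumes":         0.0075,
}
assert abs(sum(USNWR_WEIGHTS.values()) - 1.0) < 1e-9  # the twelve weights sum to 1.0

def usnwr_score(standardized):
    """Weighted sum of one school's standardized indicators; `standardized`
    maps each key above to an already-standardized value (my assumption)."""
    return sum(weight * standardized[key] for key, weight in USNWR_WEIGHTS.items())

def nine_month_employment_rate(employed, status_unknown, job_seeking_grads):
    """Per the description above, 25% of graduates whose status is unknown count
    as employed at nine months. Using graduates seeking jobs as the denominator
    is my reading of 'graduates not seeking jobs are excluded.'"""
    return (employed + 0.25 * status_unknown) / job_seeking_grads
```

The assert is there only to confirm that the twelve published weights really do sum to 1.0.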

(By the way, to get the detailed methodology, I had to subscribe to USNWR's premium edition. The free version only describes the methodology in general terms.)

I'm on record in a number of different places regarding the false precision and dangerous use of the USNWR rankings. See my SSRN page for some of my work in this area: especially Ratings, Not Rankings and Having Our Cake and Eating It, Too.

What are some of the differences between Business Week's methodology for ranking B-schools and USNWR's methodology for ranking law schools? Among other things:

  • Business Week measures what happens during the degree program (student satisfaction) and how employers feel about the students graduating from various programs (employer satisfaction). USNWR measures inputs (LSATs and GPAs) that don't describe the student experience at the law schools. It does, however, provide two scores that have meaning to potential students -- placement rate and bar passage rate.
  • Business Week's survey has a bigger n of respondents when it comes to student satisfaction, and it takes only one response from each company in the recruiter score.
  • Business Week uses more objective measures for the intellectual capital of the faculty of the B-schools. The closest ranking system of intellectual capital of law school faculties is Brian Leiter's system.
  • Business Week ranks only the top schools, not all B-schools.

Business Week's rankings aren't perfect -- like all rankings, they exaggerate the differences among tightly compressed schools. Anyone who sees a real, meaningful difference in the quality of education among the very top schools needs to be buying bridges in Brooklyn and London. But at least Business Week is trying to get things right.

The hope, expressed in Ted Seto's draft, that USNWR's rankings methodology can be improved strikes me as optimistic, although I applaud Ted for trying. If we're not going to get rid of rankings that confuse more than they edify, at least let's have a variety of different rankings, so that potential students have different ways of looking at the data. And for goodness' sake, let's not confuse the rankings of institutions with the quality of individuals from those institutions.

4 Comments:

Blogger Alfred Brophy said...

Nancy,

Thanks for this. Good to know about the Business Week rankings.

I'm fearful of any survey of satisfaction in which the people being surveyed have a substantial interest in a positive (or negative) response. Once graduates realize that their program is being ranked based on how much they liked it (and that they can, therefore, influence the value of their degree), they have a tremendous incentive to shade their responses. (The US News peer rankings have similar, though not quite as great, problems, of course.)

10/27/2006 6:50 PM  
Blogger Eric Goldman said...

Nancy, I've always thought it was odd that USNWR doesn't include any feedback from actual students in its rankings. However, Al is totally right that such feedback is gameable, given students' self-interest in maximizing the value of their degrees. I suspect many B-schools have an implicit norm that dirty laundry should be kept within the family, not aired through the BW surveys.

10/29/2006 4:18 PM  
Blogger Unknown said...

Al & Eric, you're both right. The alumni surveys are biased, and there's no real way to circumvent that w/o eliminating the info from those surveys. Even if they're biased, though, at least they give potential students SOME info about the educational experience at the schools, which is something that USNWR ignores. Maybe, over time, Business Week will place more emphasis on the employer ratings, which probably suffer from less automatic bias than the alumni surveys do. (I'm looking forward to seeing what Ted Seto comes up with regarding improving the USNWR rankings--maybe he'll have some way of measuring the experience of students during their degree programs. Go, Ted!)
N.

10/30/2006 4:39 PM  
Anonymous Anonymous said...

I've always been puzzled by the dramatic variability of B-School rankings--from those in the Wall Street Journal to those in Business Week. Student satisfaction and recruiter assessment might be more variable than the criteria adopted by US News.

But this just shows that each of these rating services should do more to telegraph to readers what exactly the rating is based on. Yes, the sources always give their ratings methodology in the article, but it could be made more transparent still. Perhaps there should even be a critique of the methodology--followed by a defense. Such a dialogue would help explicate the value of any particular metric.

11/13/2006 3:21 AM  
