Tuesday, June 12, 2012

Accuracy of Model of the 2013 USN&WR Law School Rankings

As in every year since 2005, I’ve again built a model of the U.S. News & World Report ("USN&WR") law school rankings. This latest effort generated a record-high r-squared coefficient: .998673. More about what that means—and more about the one law school that doesn’t fit—below. First, here’s a snapshot comparison of the scores of the most recent (USN&WR calls them “2013”) law school rankings and the model:

[Chart comparing USN&WR's published 2013 scores with the scores generated by the model, school by school]

As that graphical comparison indicates, the model replicated USN&WR's scores very closely. Indeed, the chart arguably overstates the differences between the two sets of scores, because it plots the model's precise scores against USN&WR's scores rounded to the nearest whole point.

As I mentioned above, comparing the two data sets generates an r-squared coefficient of .998673. That comes very close to an r-squared of 1, which would show perfect correlation between the two sets of scores. Plainly, the model tracks the USN&WR law school rankings very closely.
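
For anyone who wants to check that calculation, here is a minimal sketch, in Python, of what I mean by the r-squared coefficient: the square of the Pearson correlation between the two sets of scores. The figures below are made-up placeholders, not actual rankings data.

    import numpy as np

    # Made-up placeholder scores, not actual rankings data.
    published = np.array([100, 93, 89, 87, 74, 61, 48, 34], dtype=float)  # USN&WR's rounded scores
    modeled = np.array([99.8, 92.7, 89.3, 88.0, 74.4, 60.6, 48.2, 38.5])  # the model's calculated scores

    # r-squared taken as the square of the Pearson correlation coefficient.
    r = np.corrcoef(published, modeled)[0, 1]
    print("r-squared = %.6f" % (r ** 2))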

In most cases, rounding to the nearest whole point, the model generated the same scores as those published by USN&WR. In four cases, the scores differed by 1 point. That's not enough of a difference to fuss over, given that small variations inevitably arise from comparing the model's precise scores with the published, rounded ones. Consider, for instance, that USN&WR might have generated a score of 87.444 for the University of Virginia School of Law and published it as "87." The model calculates Virginia's score in the 2013 rankings as 88.009. The rounded and calculated scores differ by 1.009. But if we could compare the original, unrounded USN&WR score with the model's score, we would get a difference of only .565 points. I won't worry over so small a difference.
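
To put that Virginia example in concrete numbers (keeping in mind that 87.444 is a hypothetical pre-rounding score, not a figure USN&WR has published):

    # 87.444 is hypothetical; 88.009 is the model's calculated score for Virginia.
    usnwr_raw = 87.444                   # hypothetical score before USN&WR rounds it
    usnwr_published = round(usnwr_raw)   # 87, the score as published
    model_score = 88.009                 # the model's calculated score

    print("apparent gap:   %.3f" % (model_score - usnwr_published))  # 1.009
    print("underlying gap: %.3f" % (model_score - usnwr_raw))        # 0.565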

You know what does worry me, though? Look at the far right side of the chart above. That red "V" marks the 4.48-point difference between the 34 points USN&WR gave to the University of Idaho School of Law and the score that the model generated. Idaho showed a similar anomaly in last year's model, though then it was not alone. This year, only Idaho does much better in the published rankings than in the model.
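
Spotting that sort of outlier amounts to finding the school whose published score most exceeds the score the model generates. A sketch, again with placeholder figures; only "School C" mimics Idaho's 4.48-point gap:

    # Placeholder data: (published score, model's calculated score) for each school.
    scores = {
        "School A": (87, 88.01),
        "School B": (61, 60.60),
        "School C": (34, 29.52),  # published well above the model's score, as with Idaho
    }

    # Residual: published score minus the model's calculated score.
    residuals = {name: pub - mod for name, (pub, mod) in scores.items()}
    outlier = max(residuals, key=residuals.get)
    print("%s: %.2f" % (outlier, residuals[outlier]))  # School C: 4.48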

[Crossposted at Agoraphilia and MoneyLaw.]


5 Comments:

Anonymous Anonymous said...

Is your model meant to predict how many points a school so-ranked will receive (e.g., #43 will receive 60 points, #108 will receive 40 points, etc.)?

Are the inputs for your model the same as those for the USNWR scorer?

6/13/2012 1:43 PM  
Blogger Tom W. Bell said...

Not quite, Anon. The model aims to replicate the score that USN&WR *did* assign each law school in the most recent rankings (called the "2013" rankings even though they were issued in 2012). That done, the model affords opportunities for prediction along the lines of "How would School X have done in the past rankings if . . . ." And, not surprisingly, that same analysis predicts how School X *will do* in the next rankings.

6/13/2012 2:34 PM  
Blogger Tom W. Bell said...

As for your second question, Anon: Yes, insofar as possible. Most of the data come from USN&WR, which does not, however, make public all of the data that go into its rankings. Some of the remaining data, all concerning expenditures, I get from the ABA. USN&WR also applies a cost-of-living factor to some of that data, which I have replicated to the best of my ability.
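
To give a rough sense of what that adjustment involves, the idea is something like deflating per-student expenditures by a cost-of-living index. The figures and index values below are invented, and I won't vouch that the direction or form of the adjustment matches USN&WR's actual formula:

    # Invented figures; a rough sketch of a cost-of-living adjustment,
    # not USN&WR's actual formula.
    expenditures_per_student = {"School A": 55000, "School B": 48000}  # per-student spending (invented)
    cost_of_living_index = {"School A": 1.25, "School B": 0.95}        # 1.0 = national average (invented)

    adjusted = {
        school: spend / cost_of_living_index[school]
        for school, spend in expenditures_per_student.items()
    }
    print(adjusted)  # {'School A': 44000.0, 'School B': 50526.3...}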

6/13/2012 2:36 PM  
Blogger Jeffrey Harrison said...

Care to speculate about the Idaho outcome?

6/13/2012 4:55 PM  
Anonymous Anonymous said...

Thanks so much for your thoughtful responses. (This stats newbie appreciates them.)

One last question:
Is there a way to determine the likelihood that a school's past rankings will determine its future rankings?

I have a hunch that the so-called 'quality assessment' is informed substantially by prior years' rankings.

6/27/2012 11:27 PM  
