This year found me especially eager to find out how well my model would track USN&WR's law school rankings. After many sleepless hours massaging the data, I boarded a flight, squeezed in between two beefy road warriors, downed some coffee, and began muttering over my laptop. It doubtless startled my neighbors when, some time later, I thumped the return key, leaned back in my seat, raised my fists, and exclaimed, "YES!"
The above chart, which compares USN&WR's scores to those generated by my model, explains my triumphant glee. As you can see by comparing similar charts from 2005 and 2006, this year's model proved the most accurate yet. That alone sufficed to put me in geek heaven.
That close fit between the published rankings and the model also heralds good news more generally. It indicates that USN&WR ranked the top two tiers of law schools using data not grossly different from the data collected by the American Bar Association, which is the data my model uses. The congruence between the two sets of scores thus gives us less reason to worry this year than in years past that a law school gamed the rankings by telling USN&WR something different from what it told the ABA, or that USN&WR somehow mishandled the data.
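For readers curious how one might quantify the "close fit" described above, here is a minimal sketch: computing the Pearson correlation between a set of published scores and a set of model-generated scores. The score values are entirely made up for illustration; they are not the actual USN&WR or model numbers.

```python
# A sketch of the comparison described above: measuring how closely
# model-generated scores track published scores via Pearson correlation.
# All score values below are hypothetical, for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical published scores and model scores for a handful of schools
usnwr_scores = [100, 92, 89, 78, 65]
model_scores = [100, 91, 90, 76, 66]

r = pearson(usnwr_scores, model_scores)
print(round(r, 3))
```

A correlation near 1.0 would correspond to the kind of tight fit the chart shows; a markedly lower value would suggest the two score sets rest on different underlying data.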
If you like the USN&WR rankings, that should make you happy. And even if you don't much like them, you surely want the rankings to stick to the facts. (Or perhaps I should say, given doubts about many of the measurements that USN&WR uses, you surely don't want its rankings to reflect non-systematic errors.)
[Crossposted to Agoraphilia.]