Prof. Seto appears to have done a fine job of reverse engineering the USN&WR law school rankings. It would perhaps prove useful, however, were he to more clearly document how closely his results match those of the actual rankings. He might, for instance, simply offer something like the sort of graph I worked up comparing USN&WR's reported scores to those of my model.
Prof. Seto has made significant progress towards writing up a scholarly analysis of his modeling exercise. As readers here know, I published a long series of blog posts on the same topic (which Prof. Seto was kind enough to cite). I, too, have been putting together a paper on that topic. I'm happy to have Prof. Seto take the lead, though, as his draft paper ably lays out a great many crucial details about how the USN&WR law school rankings work. Citing his paper might let my own get more quickly to the interesting stuff: analyzing how the rankings don't work.
Prof. Seto's draft paper offers a fair amount of analysis along those lines, too. At pp. 15-20, he offers a very telling critique of the reliability of the USN&WR rankings. He observes, in a nutshell, that "overall scores tell us something about direction, but very little about magnitude." P. 19.
Prof. Seto uses his model to test for bias in schools' reputation scores, pp. 24-26. In so doing, he generates results that appear to contradict Prof. Brian Leiter's assertion that the reputation measure unfairly benefits schools on the East and West coasts. Seto argues, to the contrary, that the USN&WR rankings exhibit a bias in favor of schools in the Central time zone, as well as schools that tout "U. of [State]" titles. Given that Leiter's own University of Texas School of Law falls squarely within those parameters, Seto can expect some tough questioning on this front. (I'll note, for one thing, that Seto's treatment of the University of Florida School of Law does not appear to reflect my discovery of errors in that school's most recent USN&WR ranking.)
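To give a flavor of how such a bias test works: if a model built from a school's other inputs systematically under-predicts reputation scores for schools in one time zone, the mean residual (actual reputation score minus predicted) for that zone will sit above the others. The sketch below uses entirely made-up residuals purely for illustration; Seto, of course, works from real USN&WR data, and these function and variable names are mine, not his.

```python
import statistics

# Hypothetical residuals: each school's actual reputation score minus the score
# a model would predict from its other inputs, grouped by time zone.
# These numbers are invented for illustration only.
residuals = {
    "Eastern":  [0.1, -0.2, 0.0, 0.3],
    "Central":  [0.4, 0.5, 0.2, 0.6],
    "Mountain": [-0.1, 0.0],
    "Pacific":  [0.2, -0.3, 0.1],
}

def mean_residual_by_group(groups):
    """Average residual per group; a group whose mean sits well above
    zero is one the reputation measure appears to favor."""
    return {name: statistics.mean(rs) for name, rs in groups.items()}

for zone, mr in mean_residual_by_group(residuals).items():
    print(f"{zone}: mean residual {mr:+.2f}")
```

With these invented numbers, the Central zone's mean residual comes out highest, which is the shape of the pattern Seto reports; a real test would also ask whether the gap is statistically significant.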
Prof. Seto's reverse engineering process differed from my own in some particulars. Whereas his model forced each factor's mean and standard deviation to approximate those of the schools for which USN&WR reported overall scores, for instance, my model used z-scores. P. 13. Perhaps he was right to take that tack; I'll have to think a bit more about the relative merits of Seto's approach. I note, however, that USN&WR has expressly stated that it uses z-scores, and that I borrowed that method in hopes of duplicating its rankings as closely as possible.
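For what it's worth, the two methods differ only by a linear transformation, so either should preserve the ordering of schools; what changes is the scale on which the numbers are reported. A minimal sketch of that point (the function names here are my own, not Seto's or USN&WR's):

```python
import statistics

def z_scores(values):
    """Standardize a factor to mean 0, s.d. 1 (the method USN&WR
    says it uses, and the one my model borrowed)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def force_mean_sd(values, target_mean, target_sd):
    """Rescale a factor to a chosen mean and s.d. (loosely akin to
    Seto's forced mean and standard deviation). Because this is just
    a linear map of the z-scores, the ordering is unchanged."""
    return [z * target_sd + target_mean for z in z_scores(values)]
```

Since each rescaled value is `z * target_sd + target_mean` with `target_sd` positive, sorting by either series yields the same ranking; the choice matters only for how closely the reported magnitudes track USN&WR's.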
I admire Prof. Seto's zeal in trying to replicate USN&WR's proprietary cost-of-living indices. I (or rather my research assistant) worked up proxy numbers using Monster.com's online engine for comparing salaries across geographic regions. Seto, in contrast, put down good money for ACCRA figures, which he then tinkered with in an attempt to match USN&WR's overall scores. He found this an exercise in frustration. Pp. 10-11. Like me, he found himself tripped up by errors in the ABA's printed take-offs. P. 11. I wonder, though, whether he realized that a polite request to the right person at the ABA might have netted him corrected electronic data? That wouldn't have solved all of the problems he noted plague the take-offs, pp. 8-9, but it might have helped.
Part III of Prof. Seto's paper, which for now remains largely unwritten, will doubtless excite the attention of law school administrators. It bears the title, "Managing Your School's Rankings." Personally, I'm more interested in the yet-to-be-written Part IV, titled "Improving the U.S. News Ranking System." Prof. Seto's draft paper already offers us a wonderful addition to the burgeoning literature on assessing law school performance. The final version of his paper cannot fail to prove more useful still.
[Crossposted to Agoraphilia.]