But what if you're trying to measure the relative impact schools have on the legal world? Now size becomes relevant. China's economy is important not because its GDP/capita is high but because it's large. Singapore, with a higher GDP/capita, rarely makes the front page of the Wall Street Journal. The same is true in law. More folks download articles by Harvard professors than download articles by any other law faculty. This may be because Harvard professors are more productive. Or it may be because Harvard's faculty is bigger. But the fact remains: More folks download articles by Harvard professors than download articles by any other faculty.
A thought experiment may be useful. Assume two schools. School A has 40 professors, each averaging 200 downloads/year. School B has 60 professors; 40 of them average 210 downloads/year; the remaining 20 average 150 downloads/year. Which is "better"? Without the 20 less-downloaded professors, School B would clearly be "better," at least as measured by downloads, and deserve to be higher ranked. Our question may therefore be reframed as follows: How should the addition of 20 scholars who average only 150 downloads/year affect our assessment of School B? Reasonable folks can disagree, but it strikes me that adding professors who contribute to the scholarship a school produces should never, by itself, lead us to downrank a school.
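The arithmetic behind the thought experiment can be made explicit. A quick sketch, using only the hypothetical numbers given above:

```python
# Hypothetical faculties from the thought experiment above.
school_a = [200] * 40               # 40 professors averaging 200 downloads/year
school_b = [210] * 40 + [150] * 20  # 40 at 210/year plus 20 at 150/year

total_a, total_b = sum(school_a), sum(school_b)
per_capita_a = total_a / len(school_a)
per_capita_b = total_b / len(school_b)

print(total_a, per_capita_a)   # 8000 total, 200.0 per professor
print(total_b, per_capita_b)   # 11400 total, 190.0 per professor
```

School B produces substantially more scholarship in total (11,400 downloads to School A's 8,000), yet on a per capita measure it falls below School A (190 to 200), because the 20 additional scholars pull the average down even as they raise the total.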
The issue is complicated by a measurement problem. Unfortunately, there is no appropriate standard measure of the size of a law school faculty. The ABA's measure is the only standard measure of which I am aware, but it includes adjuncts, clinical faculty, and emeriti, at least on a fractional basis. In addition, few seem to know how to follow the ABA's counting rules. As I have noted elsewhere (see Understanding the U.S. News Law School Rankings at 13), a majority of schools compute and report to U.S. News student/faculty ratios inconsistent with those computed by the ABA. For U.S. News purposes, law schools have an interest in overstating faculty size; some undoubtedly do so. When it comes time to compute scholarly output per faculty member, however, these same law schools argue: No! Our faculty is really much smaller. Per capita rankings are very sensitive to faculty counts. Since faculty counts seem unreliable (in the statistical sense), per capita rankings are likely to be unreliable as well.
Per capita SSRN downloads pose a particular problem, since a school's download count as computed by SSRN includes articles published by adjuncts, clinical faculty, emeriti, faculty on visits elsewhere, and even students. Yale student David Pozen, for example, now claims 7th place in recent downloads among Yale stakeholders, with 1116 recent downloads. To adjust SSRN download counts for faculty size, we would necessarily have to limit the SSRN downloads we take into account to those attributable to the faculty members we're counting. If someone else wants to undertake the enormous amount of work involved, please do.
The choice of whom to count, however, is not merely a measurement problem; it relates as well to what it means for a school to be good. Returning to my hypothetical: Assume that the 20 lower-producing scholars at School B are active emeriti and clinical faculty. Both schools have 40 tenured or tenure-track faculty members. School B's 40 tenured or tenure-track faculty average higher download rates than School A's, but in addition School B boasts 20 emeriti and clinical faculty who also produce scholarship, albeit at lower rates. Should we count them? My answer is yes; their scholarship clearly adds to the impact School B has on the legal world. If we do, however, and if we insist on per capita ranking, School B will be ranked lower than School A.
Finally, and perhaps less obviously, per capita measures bias rankings in favor of the status quo. Because of tenure, one of the most common ways to improve the quality of a faculty is to increase the size of the school. Returning again to my hypothetical: Assume that the 20 lower-performing professors at School B are left over from the good old days. For quite some time now, however, School B has done better than School A in its recruiting. As a result, its newer 40 professors outperform, on average, School A's 40 professors. Is School B better or worse than School A? If we insist on per capita measures, School B is worse -- and will be for some time to come. Per capita measures, in effect, penalize schools on the way up and benefit schools on the way down. They rank schools in significant part on what they did 30 years ago. As a member of the faculty of a school on the way up, I prefer ranking measures that focus on what schools have accomplished more recently. But I also think that such measures are objectively superior.
In his comment on my prior posting, Al Brody concludes: "Seems to me that the smaller schools are at a huge disadvantage in this kind of ranking scheme." Undoubtedly. But our choice of ranking measures should not turn on whose ox is gored. To reiterate the premise with which I began this posting: It should depend on what we're trying to measure. My goal is to measure overall impact and current accomplishment. If that is the goal, per capita measures won't do.