Per Capita Rankings
Should scholarly faculty rankings adjust for the size of each school's faculty? A fair question. My bottom-line answer is: It depends on what you're trying to measure. Obviously, if what you want to measure is average productivity, you have to divide whatever productivity measure you're using by the number of bodies. There's no getting around it.
But what if you're trying to measure the relative impact schools have on the legal world? Now size becomes relevant. China's economy is important not because its GDP/capita is high but because it's large. Singapore, with a higher GDP/capita, rarely makes the front page of the Wall Street Journal. The same is true in law. More folks download articles by Harvard professors than download articles by any other law faculty. This may be because Harvard professors are more productive. Or it may be because Harvard's faculty is bigger. But the fact remains: More folks download articles by Harvard professors than download articles by any other faculty.
A thought experiment may be useful. Assume two schools. School A has 40 professors, each averaging 200 downloads/year. School B has 60 professors; 40 of them average 210 downloads/year; the remaining 20 average 150 downloads/year. Which is "better"? Without the 20 less-downloaded professors, School B would clearly be "better," at least as measured by downloads, and deserve to be higher ranked. Our question may therefore be reframed as follows: How should the addition of 20 scholars who average only 150 downloads/year affect our assessment of School B? Reasonable folks can disagree, but it strikes me that adding professors who add to the scholarship a school produces should never, by itself, lead us to downrank a school.
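To make the arithmetic concrete, here is a minimal sketch, purely for illustration, using the hypothetical's own numbers: School B produces substantially more total scholarship, yet scores lower per capita.

```python
# A minimal sketch of the thought experiment's arithmetic; the numbers
# come straight from the hypothetical above.
school_a = [200] * 40               # 40 professors averaging 200 downloads/year
school_b = [210] * 40 + [150] * 20  # 40 averaging 210, plus 20 averaging 150

total_a, total_b = sum(school_a), sum(school_b)

print(f"School A: total {total_a}, per capita {total_a / len(school_a):.0f}")
print(f"School B: total {total_b}, per capita {total_b / len(school_b):.0f}")
# School A: total 8000, per capita 200
# School B: total 11400, per capita 190
```

By total downloads, School B wins 11,400 to 8,000; by per capita downloads, it loses 190 to 200, solely because it added 20 scholars who produce at a lower (but still positive) rate.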
The issue is complicated by a measurement problem. Unfortunately, there is no appropriate standard measure of the size of a law school faculty. The ABA's measure is the only standard measure of which I am aware, but it includes adjuncts, clinical faculty, and emeriti, at least on a fractional basis. In addition, few seem to know how to follow the ABA's counting rules. As I have noted elsewhere (see Understanding the U.S. News Law School Rankings at 13), a majority of schools compute and report to U.S. News student/faculty ratios inconsistent with those computed by the ABA. For U.S. News purposes, law schools have an interest in overstating faculty size; some undoubtedly do so. When it comes time to compute scholarly output per faculty member, however, these same law schools argue: No! Our faculty is really much smaller. Per capita rankings are very sensitive to faculty counts. Since faculty counts seem unreliable (in the statistical sense), per capita rankings are likely to be unreliable as well.
Per capita SSRN downloads pose a particular problem, since a school's download count as computed by SSRN includes articles published by adjuncts, clinical faculty, emeriti, faculty on visits elsewhere, and even students. Yale student David Pozen, for example, now claims 7th place in recent downloads among Yale stakeholders, with 1116 recent downloads. To adjust SSRN download counts for faculty size, we would necessarily have to limit the SSRN downloads we take into account to those attributable to the faculty members we're counting. If someone else wants to undertake the enormous amount of work involved, please do.
The choice of who to count, however, is not merely a measurement problem; it relates as well to what it means for a school to be good. Returning to my hypothetical: Assume that the 20 lower-producing scholars at School B are active emeriti and clinical faculty. Both schools have 40 tenured or tenure-track faculty members. School B's 40 tenured or tenure-track faculty average higher download rates than School A's, but in addition School B boasts 20 emeriti and clinical faculty who also produce scholarship, albeit at lower rates. Should we count them? My answer is yes; their scholarship clearly adds to the impact School B has on the legal world. If we do, however, and if we insist on per capita ranking, School B will be ranked lower than School A.
Finally, and perhaps less obviously, per capita measures bias rankings in favor of the status quo. Because tenure makes it difficult to replace existing faculty, one of the most common ways to improve the quality of a faculty is to add to it, that is, to increase the size of the school. Returning again to my hypothetical: Assume that the 20 lower-performing professors at School B are left over from the good old days. For quite some time now, however, School B has done better than School A in its recruiting. As a result, its newer 40 professors outperform, on average, School A's 40 professors. Is School B better or worse than School A? If we insist on per capita measures, School B is worse -- and will be for some time to come. Per capita measures, in effect, penalize schools on the way up and benefit schools on the way down. They rank schools in significant part on what they did 30 years ago. As a member of the faculty of a school on the way up, I prefer ranking measures that focus on what schools have accomplished more recently. But I also think that such measures are objectively superior.
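To see how long the penalty lingers, here is another purely illustrative sketch built on the same hypothetical, under the added assumption (mine, not part of the original numbers) that School B gradually replaces its 20 legacy professors with hires at its newer 210-download average:

```python
# Illustrative only: School B's per-capita downloads as its 20 legacy
# professors (150 downloads/year) are replaced by hires at the school's
# newer 210 downloads/year average. Faculty size stays fixed at 60.
for replaced in range(0, 21, 5):
    total = 40 * 210 + replaced * 210 + (20 - replaced) * 150
    print(f"{replaced:2d} legacy slots turned over: per capita = {total / 60:.1f}")
# 0 -> 190.0, 5 -> 195.0, 10 -> 200.0 (parity with School A), 20 -> 210.0
```

On these assumptions, School B does not even reach per-capita parity with School A until half its legacy cohort has turned over, despite out-recruiting School A the entire time.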
In his comment on my prior posting, Al Brody concludes: "Seems to me that the smaller schools are at a huge disadvantage in this kind of ranking scheme." Undoubtedly. But our choice of ranking measures should not turn on whose ox is gored. To reiterate the premise with which I began this posting: It should depend on what we're trying to measure. My goal is to measure overall impact and current accomplishment. If that is the goal, per capita measures won't do.
2 Comments:
"My goal is to measure overall impact and current accomplishment."
Ted makes many good points. In particular, it is true that a large law school, all other factors being equal, will have a greater impact. Plus, like so many others, I am a bit of a rankings junkie.
Still, it seems to me that "impact" is more likely to be related to the quality of a school's scholarship. Quality tells us more about the intellectual environment of a school and whether people "listen" to what is written. Of course, my assumption here is that "impact" means that someone -- a court, another scholar -- actually relies on the article. Otherwise I am not sure what impact means.
Another thing that seems questionable to me is the emphasis on "recent articles" and the idea that more recent downloads mean a school is up and coming. "Up and coming" as far as SSRN downloads go is about as far as I would go. The problem is that a seminal article written in 1995 is washed out by two short casenote articles written in 2007. Yet the person with the intellect to write the highly regarded piece may still be on the faculty, teaching, and the article may have a continuing impact, sticking in the minds of others far beyond anything the "up and comers" produce.
To be sure, citation levels cut against schools populated by younger scholars, but that is in part because their scholarship has not actually had an impact yet.
SSRN is particularly suspect, to me at least, for one other reason. No one, as far as I know, has assessed how many faculty upload their articles to SSRN. Some schools are totally on the SSRN bandwagon and others are not. Without this data, the rankings seem to have little value.
I guess we could discuss and debate this forever. I have a proposal that should satisfy everyone. My proposal, aside from citation rankings, is that each of us make up a ranking that puts us first individually and institutionally. And then, once the measure is established, work on believing it. I'm still working on what mine would be.
Ted,
If you are trying to measure relative impact on the legal world, as you suggest, you must demonstrate that SSRN downloads are a useful and accurate indicator of such impact. But we know that this is false, not only because (as Jeff says) not all faculties or professors have joined the SSRN universe, but also because academic fields use SSRN at different rates.
If we were to measure by SSRN downloads, economic analysis and corporate law would be, by a wide margin, the most important and influential issues in the "legal world." Or they could just be the subjects that interest people who happen to use SSRN a lot.
I think we all know which is correct, which is why SSRN downloads are more misleading than informative as a measure of impact on the legal world.
Brian