Faculty Citation Rankings
Following up my TaxProf Blog posts on The Most-Cited Tax Faculty and Tax as Vermont Avenue in Monopoly:
- Balkinization: Skepticism About Leiter's Citation Rankings, by Brian Tamanaha (St. John's)
- Legal History Blog: The Limits of Leiter's New Citation Study, by Mary L. Dudziak (USC)
Brian Leiter responds to each of these criticisms in:
- Once More Into the Citation Rankings Fray...
- Mary Dudziak Isn't Happy with the New Citation Rankings
See also:
- Concurring Opinions: Leiter Study Data: Concentration by School, by Jack Chin (Arizona)
- Essentially Contested America: What's Wrong with Ranking Legal Scholars?, by Robert Justin Lipkin (Widener)
- PrawfsBlawg: The Potential Pathologies of "Leiter-scores," by Ethan Leib (UC-Hastings)
Bernie Black and I discuss the pros and cons of citation rankings in Ranking Law Schools: Using SSRN to Measure Scholarly Performance, 81 Ind. L.J. 83, 92-95 (2006). We note:
[T]he literature suggests that citation counts are a respectable proxy for article quality, and correlate reasonably well with other measures. As with the other measures, however, citation counts have limitations. Some of these will average out at the school level, but not all or not fully. These include:
- Limited range of schools.
- Timing.
- Dynamism.
- Survey article and treatise bias.
- Field bias.
- Interdisciplinary and international work.
- The "industrious drudge" bias.
- "Academic surfers."
- The "classic mistake."
- Gender patterns.
- Odd results.
Our modest conclusion is that citation counts, like reputation surveys, productivity counts, and SSRN download counts, are imperfect measures that, taken together, can provide useful information in faculty rankings. For more, see TaxProf Blog.