Saturday, March 10, 2007

Data for MoneyBall in Academia

My first post covered rankings in political science. Using MoneyBall hiring in academia has the implicit purpose of rising in some set of rankings. Unlike baseball, where the number of wins provides a good ranking (but does it really - isn't baseball a business, with the ultimate purpose of making money?), academic rankings are not quite as straightforward: different methodologies (and criteria) tend to produce different results. Recent posts on this blog about law schools, prompted by Bill Henderson, take up similar ideas regarding legal academia.

In this post, I discuss the first part of the equation: what kind of data can be used to implement MoneyBall hiring strategies? On the basis of what information can we pick "underplaced" stars? PhD students and young academics don't have batting averages (there are plenty of scouting reports, though). I have been collecting information on all assistant professors in all PhD-granting political science departments (here's my website). Below I list the types of information I have collected that could help with MoneyPolitics, with some comments:

1. Publication information. The working hypothesis is that more is better. In political science, this measure turns out to be quite tricky. First, what counts as a "publication"? I, for example, counted articles in journals covered by the Web of Science, a database from Thomson Scientific. My first (and most primitive) ranking is thus simply the number of "hits" a person has in that database as an author. Is this a good measure? Certainly not. The database has no information on articles published in, for example, edited volumes, nor does it cover some relatively well-respected journals. And even more seriously - it does not cover books. In at least some fields of political science, book publishing is still considered the norm. Now, I do have book information in my database, but it is very tricky to construct a single "number of publications" measure that takes both articles and books into account (a rough sketch of one possible weighting appears below).

For law professors, getting a reliable count of publications seems fairly easy: Westlaw/Lexis are fairly comprehensive, and junior faculty do not seem to publish many books. However, is a number-of-publications measure valid at all?
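
To make the article/book problem concrete, here is a minimal sketch of how one might fold books into a single publication count. This is not the measure I use; the record format and, especially, the weight given to a book are assumptions made purely for illustration.

```python
# Hypothetical publication records: each item is tagged as an "article" or a "book".
# The book weight is an assumption for this sketch only; reasonable people will
# disagree about how many articles a book should "count as".
BOOK_WEIGHT = 3.0  # assumed: one book counts as three articles

def publication_score(pubs):
    """Combine articles and books into one weighted publication count."""
    score = 0.0
    for pub in pubs:
        if pub["type"] == "book":
            score += BOOK_WEIGHT
        else:  # journal article
            score += 1.0
    return score

# Example with made-up data: two articles and one book
pubs = [
    {"type": "article", "title": "A"},
    {"type": "article", "title": "B"},
    {"type": "book", "title": "C"},
]
print(publication_score(pubs))  # 5.0 under the assumed weight
```

Any such weighting is of course arbitrary, which is exactly why combining articles and books is so tricky.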

2. Quality of publications. Not all articles are equal. Having 10 publications in the Podunk University Journal of Unimportant Problems is not a sign of star power. There are various journal rankings, but predicting the quality of an individual piece based on the ranking of the journal where it's published is very problematic. In my database, I have two measures that take quality into account. First, the Web of Science database provides some quality control (i.e., PUJUP is not a journal usually covered by the database). Second, I constructed a separate measure for the number of articles published in the three most "prestigious" political science journals (this is not uncontroversial, but a recent survey of political scientists places the three journals at the top).

In political science, this measure might show at least something - the journals use peer review, and the reviewers for the top three journals are usually quite demanding. For law professors, constructing a measure based on placement in highly ranked journals seems much less helpful, given the lack of peer review.
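
For what it's worth, the "top three" count is just a filter over the same kind of hypothetical records used above. The journal names below are only an assumed top-tier set for the example; the post does not name the three journals the measure actually uses.

```python
# Assumed top-three set, for illustration only.
TOP_JOURNALS = {
    "American Political Science Review",
    "American Journal of Political Science",
    "Journal of Politics",
}

def top_journal_count(pubs):
    """Count articles appearing in the assumed top-tier journal set."""
    return sum(
        1 for pub in pubs
        if pub.get("type") == "article" and pub.get("journal") in TOP_JOURNALS
    )

# Example with made-up records
pubs = [
    {"type": "article", "journal": "American Journal of Political Science"},
    {"type": "article", "journal": "Podunk University Journal of Unimportant Problems"},
    {"type": "book", "journal": None},
]
print(top_journal_count(pubs))  # 1
```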

3. Citations. Another possibility for measuring quality (or perhaps impact) is citations. Last year, I collected citations to every assistant professor in the Web of Science database (all cites to a person's work, including to books, book chapters, etc., are included, but the citing piece has to be a journal article covered by the database). This year, I just constructed a "top 40" ranking. The major problem with a citation measure in political science is that the highest-scoring people in such a ranking tend to get their citations from co-authored work (often with the senior author being a graduate school mentor). In law, a citation measure seems much more helpful: there are few co-authored pieces, and the citations come fairly quickly (unlike in political science, where it takes several years for even very prominent work to become highly cited).
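
The "top 40" list itself is nothing more than a sort over per-person citation totals. A toy version, with invented names and numbers standing in for the real tallies, might look like this.

```python
# Invented citation totals; in practice these would come from tallying
# Web of Science cites to each person's work.
citations = {
    "Smith": 120,
    "Jones": 85,
    "Lee": 240,
    # one entry per assistant professor
}

TOP_N = 40  # size of the published ranking

ranking = sorted(citations.items(), key=lambda item: item[1], reverse=True)[:TOP_N]
for rank, (name, cites) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {cites} citations")
```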

4. Co-authorship in publications. For the same reason that co-authorship skews the citation measure, it may skew the "number of publications" (as well as the "number of quality publications") measure. One has to take that into account (for example, I constructed separate measures for the number of publications and for co-authorship).
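
One common way to adjust for co-authorship is fractional counting, crediting each author with 1/n of a paper written by n authors. My own approach (separate measures for publications and co-authorship) is different, so treat the sketch below only as an illustration of the general idea; the record format is again an assumption.

```python
def fractional_count(pubs):
    """Credit each publication as 1/n, where n is the number of authors."""
    return sum(1.0 / max(pub["n_authors"], 1) for pub in pubs)

def raw_and_fractional(pubs):
    """Return both the raw count and the co-authorship-adjusted count."""
    return len(pubs), fractional_count(pubs)

# Example: one solo paper and two co-authored papers
pubs = [
    {"n_authors": 1},
    {"n_authors": 2},
    {"n_authors": 4},
]
print(raw_and_fractional(pubs))  # (3, 1.75)
```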

5. Time to get to those publications. Finally, any measure probably needs to look at the pace of getting publications out. I constructed measures for the number of publications (and top publications) per year since obtaining the PhD. Such a measure is far from perfect. For example, some people stay in graduate school much longer than others (just as some people have time to work on articles after the JD but before entering legal academia, e.g. by getting a prestigious fellowship). Also, simply looking at time will most probably unduly disadvantage women.
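
The pace measure reduces to a simple rate: publications divided by years since the PhD. A sketch follows; the default reference year and the handling of brand-new PhDs are assumptions made for illustration.

```python
def publications_per_year(n_pubs, phd_year, current_year=2007):
    """Publications divided by years since the PhD.

    current_year defaults to 2007 (roughly when the data were collected), and
    brand-new PhDs get a denominator of 1 to avoid division by zero; both
    choices are assumptions for this sketch.
    """
    years = max(current_year - phd_year, 1)
    return n_pubs / years

print(publications_per_year(6, 2003))  # 1.5 publications per year
```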

There are probably a few other "objective" criteria for playing MoneyPolitics (or MoneyLaw). For example, teaching evaluations provide some data on teaching quality. One could look at SSRN downloads (a very imperfect measure, to be sure). However, I am afraid that scouting reports (e.g. letters of recommendation) and the subjective evaluations of those who hire (based on impressionistic readings of published work, and, in the case of law schools, on visiting positions - the practice of visits turning into permanent positions is rare in political science) will remain the most important criteria for hiring in academia, legal or political. I don't think my data are worthless: they could be the basis for some interesting generalizations (more on those at some other time), and they can draw attention to people who would otherwise go unnoticed. In any case, in order to play MoneyPolitics/MoneyLaw, one needs a starting point, however imperfect.

1 Comment:

Blogger emfink said...

"PhD students and young academics don't have batting averages"

Back when I was in graduate school, a friend and I came up with the idea of "faculty baseball cards", with photos on the front and stats (e.g. "dissertations batted in") on the back. Being fairly lazy and un-entrepreneurial, we never followed through on the idea. But I'm sure they would have been a big hit.

3/10/2007 11:15 AM  
