I have the feeling that were I competing with these folks as a 20-something Ph.D. candidate, as I was back in 1993, I wouldn't be hired at Alabama. (Oh, yeah, I wasn't hired at Alabama as a rookie--so I guess I can be pretty sure: I wouldn't get an entry-level job here today!) Each year, it seems to me, the market gets tougher, though that's rank speculation on an issue where Bill Henderson and Paul Caron could probably give us some hard data.
Our faculty hiring committee is faced, as they all are, with trying to figure out whom to interview in DC. What strikes me about this process is how much it channels decisions. Those who get interviews will pretty much define the pool who get callbacks (we could and sometimes do go outside the people we interviewed in DC but that's rare), which will define the pool of people who get offers. So, MoneyLaw folks, this is the draft. This is where metrics matter. What should we be looking at? Of course, this question sent me back to the bible--Paul Caron's and Rafael Gely's instant classic, Moneyball.
A yes/no on publications can't quite do it anymore. Virtually everyone has published an article; in fact, many have published a lot of articles. I think placements are an even worse indicator of quality for entry-level people than they are for laterals, because entry-level people often face enormous hurdles in placing articles (though in some cases they still know people at reviews and so are able to use connections to place articles above where their quality deserves). So my guess--though I acknowledge this hypothesis may be wrong--is that placements are a poorer indicator of quality for entry-level candidates than for laterals.
What I find interesting--though not surprising--is how much we're all making decisions based on factors that may not have great predictive ability. In going through hundreds of these for initial cuts, I am forced to fall back on what's on the one-page form (with occasional glances at the full resume). The form causes us to look to such factors as the J.D.-granting institution, law review, clerkship, big firm or prestigious government practice, as well as publications. In a surprising number of instances I'm familiar with the person's scholarship (but those are almost always in the area of legal history--and the needs of our law school being what they are, expertise in legal history is among the last subjects we're looking for. I will, however, later this semester have a few thoughts on the growth of legal history as a field and its importance as a "method.")
I think citations to a candidate's work aren't great measures of quality; most of the people in the FAR are too young to have many citations. And though I'm a fan of citations as a measure of overall quality of a law review, there is notorious field bias. Want citations? Write in areas like professional responsibility, intellectual property, criminal law, and constitutional law, not legal history and wills. I'll have some limited data on this one later in the semester.
We're also looking for laterals, and here it's easier: it's largely a question of finding people who've actually produced good work. Here we have a track record. I'm not at all convinced that on average laterals get better after they are hired (though we can hope that faculty will continue to mature as scholars--they may learn more, see more connections, be able to bring together ideas from various fields). Law, like history (the other field I know something about), is a cumulative discipline.
At some point I hope we'll talk about how to evaluate lateral candidates--how, for instance, do we measure the quality of scholarship?
Back with you after APSA.
Alfred L. Brophy