If we did not have the rankings tail wagging the dog of law school operations, and the focus were strictly on law school performance, it seems like teaching quality would be a big factor. The problem is: how do you evaluate teaching even if you wanted to? I mean really evaluate it.
Sure, we have numerical rankings by the students, which I think are the product of a 1960s–1970s effort to make students feel more relevant, and I do not discount what they tell us if someone is consistently ranked very low. (Very high is a completely different matter.) The problem with those evaluations is that, at least in law school, no one knows what they measure. I hear teachers frankly admit that they have higher evaluations when they are funnier and do not "push" as hard. In one yet-to-be-published study, a professor found an inverse relationship between student scores on the final exam and how highly each had ranked him as a teacher.
I wonder if there will be a day (or perhaps it has come) when someone is denied tenure, promotion, or a pay raise based, in part, on teaching evaluations. It seems to me that a challenge to that decision could rest on the fact that the reliability of the measuring instrument has never been tested.
This is another area in which the Moneyball/Moneylaw analogy gets thin. In baseball the statistics can be iffy in some ways. A poor win-loss record may reflect little about pitching effectiveness, and a batting average alone may not say much about hitting in the clutch. But in baseball, there is something riding on having more detailed back-up numbers that get closer to the truth. In law, the inattention to evaluating teaching not only indicates how difficult it is but, perhaps, how little it matters. Has any school hired away a professor from another based on teaching excellence?