Could this issue be addressed? I have not kept up with the literature, but I recall efforts in undergraduate math courses to determine the relationship between student evaluations and teaching effectiveness. The teachers all used the same book and gave the same exam. In one study, after holding as many variables constant as possible, there was a negative correlation between exam scores and the students’ evaluations of the teachers. When I taught economics we did the same thing in a principles course: we taught the same chapters and gave a common exam. In our case there was no correlation, although there was a great range in how the students ranked the teachers.
Suppose a school had four contracts sections. The professors could agree on the same book and the same coverage, and devise a common exam that they also all agreed to grade. (Each student’s grade for the experiment could be the average of the grades from the four professors.) With a total of 400 students, you would have 400 evaluations of the teachers and 400 final exam scores. LSAT, GPA, etc., could all be factored out, so the focus would be on how much the students “learned” and what the students thought of the professor.
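For the statistically inclined, the analysis being proposed here is essentially a partial correlation: regress out LSAT and GPA from both the exam scores and the evaluations, then correlate what is left. A minimal sketch, using simulated stand-in numbers (the data, column choices, and effect sizes are all hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # 400 students across the four sections

# Simulated stand-in data -- purely illustrative, not real students.
lsat = rng.normal(155, 8, n)
gpa = rng.normal(3.3, 0.4, n)
exam = 0.5 * lsat + 10 * gpa + rng.normal(0, 5, n)   # common-exam score
evals = rng.normal(4.0, 0.6, n)                      # 1-5 teaching evaluation

def residuals(y, controls):
    """Regress y on the controls (plus an intercept) and return residuals."""
    X = np.column_stack([np.ones(len(y))] + list(controls))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# "Factor out" LSAT and GPA from both variables, then correlate the remainders.
exam_resid = residuals(exam, [lsat, gpa])
eval_resid = residuals(evals, [lsat, gpa])
partial_r = np.corrcoef(exam_resid, eval_resid)[0, 1]
print(f"partial correlation: {partial_r:.3f}")
```

A near-zero partial correlation would echo the economics result above; a negative one would echo the math study. The simulation is only a scaffold for the calculation one would run on real section data.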
This would be terribly time consuming, and there may be some statistical issues to iron out. Plus, I am not convinced that the exam – as opposed to what happens five years from now – is a great indicator of teaching effectiveness. Still, isn’t it about time that someone in our profession took a close look at student evaluations to determine whether they tell us anything useful and, perhaps, whether they are actually a disincentive to teaching effectively?