Teaching Evaluations Again
I have written before about teaching evaluations but have not done an in-depth study of what they tell us. As an economist many years ago, all of us teaching the basic course used a common exam; we compared student evaluations with how the students did and found no correlation. Some studies, however, do show some correlation between evaluations and learning. Recently, an article by Deborah Merritt on the topic came close to a great book in terms of being a page-turner: "Bias, the Brain, and Student Evaluations of Teaching," 82 St. John's L. Rev. 235. It is dated 2008, but I think it has been out for only a few months. Here is an excerpt.
"Yet, the research on student evaluations is troubling. It confirms not some connection between a professor's style and student evaluations, but an overwhelming link between those two factors. Nonverbal behaviors appear to matter much more than anything else in student ratings. Enthusiastic gestures and vocal tones can mask gobbledygook, smiles count more than sample exam questions, and impressions formed in thirty seconds accurately foretell end-of-semester evaluations. The strong connection between mere nonverbal behaviors and student evaluations creates a very narrow definition of good teaching. By relying on the current student evaluation system, law schools implicitly endorse an inflexible, largely stylistic, and homogeneous description of good teaching. Rather than encouraging faculty to use nonverbal behaviors to complement excellent classroom content, organization, and explanations, the present evaluation system largely eliminates the "dog" of substance, leaving only the "tail" of style to designate good teaching. Neither law students nor faculty benefit from such a narrow definition of good teaching." (notes deleted)

This article is chock-full of summaries of experiments relating to teaching evaluations. My favorites are those that show students a few seconds of a teacher on tape with the sound off. A group of students is then asked to evaluate the teacher on a number of measures. These evaluations -- again, based on a few seconds with the sound off -- turn out to be remarkably close to the evaluations the same teachers receive at the end of a semester from their regular classes. In short, looks, movement, expressions, and so on may trump everything else. Later in the article, Professor Merritt reports on a study that seems to indicate that whatever the students are responding to has virtually nothing to do with objective measures of learning.
This leads to two questions. How would a Moneylaw school evaluate teaching? And if tenure, promotions, and salaries are based on student evaluations, would it be fair to view the process as arbitrary and capricious?
3 Comments:
A Moneylaw school might not give up on semester-end evaluations, but it would supplement that information by asking for first-week evaluations (first-impressions surveys), mid-semester evaluations, and two-years-out evaluations. I didn't know how much I learned in some of my college classes until I got out of college. I'm sure the same will be true of law school.
Of course, the school will also be smart enough to evaluate the two-years-out surveys carefully, because the return rate won't be 100%, and it will need to understand which kinds of people are more likely to return the evaluation forms.
The Moneylaw school will also be more dedicated to actual evaluation of teaching based on classroom observation by other teachers, not just once per semester (at best!), but using a few different people a few different times each throughout the semester. (The Moneylaw school will not tolerate grousing from Prof. X about how s/he just can't come in on Friday morning to evaluate Prof. Y. The Moneylaw dean will have long ago traded such malcontents to Kansas City for a minor league prospect.)
1. In what are MoneyLaw schools competing? If it is student satisfaction, they will encourage behaviors that affect student evaluations. If it is long-term student satisfaction, it may well be the very same thing. If it is quality of education, I think the whole MoneyLaw premise of assessment, and competition, may be out the window.
2. Your summary didn't focus on Merritt's central concern -- that evaluations were likely to be systematically biased on the basis of race. That, like gender, can't be gamed by an individual instructor.
3. Assessments by graduates are important and potentially insightful. They may be more probative as course evaluations than as instructor evaluations; I imagine some courses (say, evidence, tax, or trial practice) have much higher salience than others.
4. If educational quality is the target, evaluations are at the wrong end of the production process. To continue to abuse the sports metaphor: the law school equivalent would be a baseball team that ignored high school and college performance (because teaching performance plays no role in hiring for the legal academy), abandoned its farm system and spring training (because law schools offer no training in teaching), and put blinders on every player (because there is no real attempt to establish and share best practices, or even to encourage mutual observation).
5. If we had to shoehorn teaching evaluations into the metaphor, it'd be like All-Star balloting, but worse: asking the fans to assess performance on the field (sort of), but only those fans sitting in the stands nearest the player in question, and with a very pronounced risk -- coloring the judgment of many voters -- that the more athletic players will (just after fan balloting, at the end of the game) nail them with a ball or a piece of bat . . . and will, in any event, be voting on them with much more significant consequences, such as the loss of season tickets.
Ani: You are hard to please.