Monday, August 31, 2009

How Top-Ranked Law Schools Got That Way, Pt. 3

Part one and part two of this series focused on the top law schools in U.S. News and World Report's 2010 rankings, offering graphs and analysis to explain why those schools did so well. This part rounds out the series by way of contrast. Here, we focus on the law schools that ranked 41-51 in the most recent USN&WR rankings, those that ranked 94-100, and the eight schools that filled out the bottom of the rankings.

Weighted & Itemized Z-Scores, 2010 Model, Schools Ranked 41-51

The above chart shows the weighted and itemized z-scores of law schools about a third of the way from the top of the 2010 USN&WR rankings. Note the sharp downward jog at Over$/Stu—a residual effect, perhaps, of the stupendously large Over$/Stu numbers we earlier saw among the very top schools. Note, too, that three schools here—GMU, BYU, and American U.—buck the prevailing trend by earning lower scores under PeerRep than under BarRep (GMU's line hides behind BYU's). As you work down from the top of the rankings, GMU offers the first instance of that sort of inversion; all of the more highly ranked schools have larger itemized z-scores for PeerRep than for BarRep. It raises an interesting question: Why did lawyers and judges rank those schools so much more highly than fellow academics did?


Weighted & Itemized Z-Scores, 2010 Model, Schools Ranked 94-100

The above chart shows the weighted, itemized z-scores of the law schools ranked 94-100 in the 2010 USN&WR rankings—about the middle of all of the 182 schools in the rankings. As we might have expected, the lines bounce around more wildly on the left, where they trace the impact of the more heavily weighted z-scores, than on the right, where z-scores matter relatively little, pro or con. Beyond that, however, no one pattern characterizes schools in this range.


Weighted & Itemized Z-Scores, 2010 Model, Bottom-Ranked Schools

The above chart shows the weighted and itemized z-scores of law schools that probably did the worst in the 2010 USN&WR rankings. I say, "probably," because USN&WR does not reveal the scores of schools in the bottom two tiers of its rankings; these eight schools did the worst in my model of the rankings. Given that uncertainty, as well as for reasons explained elsewhere, I decline to name these schools.

Here, as with the schools at the very top of the rankings, we see a relatively uniform set of lines. All of the lines trend upward, of course. These schools did badly in the rankings exactly because they earned strongly negative z-scores in the most heavily weighted categories, displayed to the left. Several of these schools did very badly on the Emp9 measure, and one had a materially poor BarPass score. Another of them did surprisingly well on Over$/Stu, perhaps demonstrating that, while the very top schools boasted very high Over$/Stu scores, no amount of expenditures-per-student can salvage otherwise dismal z-scores.

[Crossposted at Agoraphilia, MoneyLaw.]


Thursday, August 27, 2009

Best Value Law Schools

The National Jurist has released its ranking of the Best Value Law Schools in the September 2009 issue:

The National Jurist identified 65 law schools that carry a low price tag and are able to prepare their students incredibly well for today's competitive job market. In determining what makes a law school a "best value," we first looked at tuition, considering only public schools with an in-state tuition less than $25,000, and private schools with an annual tuition that comes in under $30,000. We then narrowed the playing field again by including only schools that had an employment rate of at least 85% and a school bar passage rate that was higher than their state average. We then ranked schools, giving greatest weight to tuition, followed closely by employment statistics.
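
For the programmatically inclined, here is a rough sketch of that filter-and-rank procedure. The school records and the exact weighting are hypothetical; the magazine says only that tuition gets the greatest weight, followed closely by employment.

# Sketch of the National Jurist filter-and-rank procedure described above.
# The school records and the 60/40 weighting below are made up; the article
# states only that tuition weighs most, followed by employment.
schools = [
    {"name": "State U.", "public": True, "tuition": 18_000, "employment": 0.91,
     "bar_pass": 0.84, "state_bar_avg": 0.79},
    {"name": "Private C.", "public": False, "tuition": 28_500, "employment": 0.88,
     "bar_pass": 0.81, "state_bar_avg": 0.82},  # fails the bar-passage screen
]

def qualifies(s):
    cheap = s["tuition"] < (25_000 if s["public"] else 30_000)
    return cheap and s["employment"] >= 0.85 and s["bar_pass"] > s["state_bar_avg"]

candidates = [s for s in schools if qualifies(s)]

# Rank the survivors: lower tuition is better, higher employment is better.
candidates.sort(key=lambda s: 0.6 * (s["tuition"] / 30_000) - 0.4 * s["employment"])
print([s["name"] for s in candidates])  # ['State U.']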

For a chart of the Top 25 Value Law Schools, and a chart of the Top 65 Value Law Schools with their corresponding U.S. News rank, see here.

Wednesday, August 26, 2009

"Annual" Multiple Choice Testing Post

It's been nearly two years since my "annual" post opposing multiple choice examinations for law students. The last one generated some good comments and can be found here. I still find the question intriguing. Before going on a bit, some basics. First, I am writing about machine graded exams, not multiple choice or true/false with explanation questions, which are actually short essays that focus the students on specific topics. Second, I am not really writing about the mixed exam in which some "objective" (what a crazy thing to call them) questions are included with the essays. Third, I sincerely want the multiple choice machine graded (MCMG) supporters to be right. I hate grading more than anything else associated with my job. Finally, I think the whole matter presents a wonderful opportunity to examine self-governance. More specifically, has anyone actually studied the effects of MCMG exams as opposed to essay exams, or is the trend toward MCMG exams strictly a matter of convenience?

Here is what I would like to know: If you use MCMG exams, aren't you teaching a different course than if you use essay exams? I am not saying the teacher is doing anything differently, but aren't the students "hearing" and making note of different things? Which course should be taught?

Do teachers at the fancy schools use MCMG exams? If so, does that mean that today's law schools are hiring people who are good at MCMG exams? If so, is that reflected in their teaching, testing, and, ultimately, their evaluation of today's students?

What does it mean when someone defends MCMG by saying it produced a "great curve" or a "normal distribution"? Does that mean the students were tested on the right things, whatever they are? I suppose you would get a normal distribution if you used a softball-throwing contest.

What does it mean when someone defends MCMG by saying the same students do well on both types of tests? What is the connection between that and what the students are learning, or the effectiveness of the teaching?

Has anyone using MCMG exams actually studied how to write "good" multiple choice questions?
In a comment on my last post on this, Nancy Rappaport offered some interesting views.

If you use MCMG exams, how do you perform the diagnostic element of teaching and testing? By that I mean the process of identifying individual and group weaknesses in reasoning and expression so you can adjust your teaching the next time around.

Having said all this and revealed my distrust of MCMG exams, I realize that some of the same questions could be asked about essay exams. What is the connection between good essay exam writing and a student's potential as an attorney, judge or law professor? I think I have a better chance of spotting the ones with great potential when they are forced to reveal themselves in an essay. But that, too, has not been tested. In effect, our testing needs to be tested.

At one level what worries me the most is the thought that if essay exams could be graded even faster than MCMG exams, a fair number of law professors would switch back and then defend the new position as consistent with good teaching and evaluation.

Sunday, August 23, 2009

How Top-Ranked Law Schools Got That Way, Pt. 2

In the first post in this series, I discussed the mysterious distribution of maximum z-scores in the top two tiers of law schools in U.S. News & World Report's 2010 rankings, and focused on the top-12 schools to solve that mystery. In brief, among the very top schools, "employment nine months after graduation" ("Emp9") varies too little to make much of a difference in the schools' overall scores, whereas overhead expenditures/student ("Over$/Stu") varies so greatly as to almost swamp the impact of the other factors that USN&WR uses in its rankings. Here, in part two, I focus on the top 22 law schools in USN&WR's 2010 rankings. In addition to the Emp9 and Over$/Stu effects observed earlier, this wider study uncovers some other interesting patterns.

Weighted & Itemized Z-Scores, 2010 Model, Top-22 Schools

The above graph, "Weighted & Itemized Z-Scores, 2010 Model, Top-22 Schools," offers a snapshot comparison of how a wide swath of the top schools performed in the most recent USN&WR rankings. It reveals that the same effects we observed earlier, among just the top-12 schools, reach at least another ten schools down in the rankings. With the exception of Emory and Georgetown, Emp9 scores (indicated by the dark blue band) barely change from one top-22 school to another. Over$/Stu scores, in contrast (indicated by the middle green hue), vary widely; compare Yale's extraordinary performance on that measure with, for instance, Boston University's.

This graph also reveals some other interesting effects. Like the Emp9 measure, the Emp0 measure (for "Employment at Graduation," indicated in yellow-green) varies little from school to school. Indeed, it varies even less than the Emp9 measure does. Why so? Because all of these top schools reported such high employment rates. All but Minnesota reported Emp0 rates above 90%, and all but Georgetown, USC, and Washington U. reported rates above 95%.

These top 22 schools also reported very similar LSATs. Their weighted z-scores for that measure, indicated here in light blue, range from only .20 to .15. The weighted z-scores for GPA, in contrast, marked in dark green, range from .24 to .06.

As the graph indicates, the measures worth 3% or less of a school's overall score—student/faculty ratio, acceptance rate, Bar exam pass rate, financial aid expenditures/student, and library volumes and equivalents—in general make very little difference in the ranking of these schools. One exception to that rule pops up in the BarPass scores (in dark orange) of the California schools, which benefit from a quirk in the way that USN&WR measures Bar Pass rates. Another interesting exception appears in Harvard's Lib score (in white)—only thanks to its vastly larger law library does Harvard edge out Stanford in this ranking.

To best understand how a few law schools made it to the top of USN&WR's rankings, we should contrast their performances with those of the many schools that did not do as well. I'll thus sample the statistics of the law schools that ranked 41-51 in the most recent USN&WR rankings, those that ranked 94-100, and the eight schools that filled out the bottom of the rankings. Please look for that in the next post.

[Crossposted at Agoraphilia, MoneyLaw.]


Thursday, August 20, 2009

How Top-Ranked Law Schools Got That Way, Pt. 1

How do law schools make it to the top of the U.S. News & World Report rankings? USN&WR ranks law schools based on 12 factors, each of which counts for a certain percentage of a school's total score. Peer Reputation counts for 25% of each law school's overall score, for instance, whereas Bar Passage Rate counts for only 2%. More precisely, USN&WR calculates z-scores (dimensionless statistical measures of relative performance) for each of the 12 factors for each school, multiplies those z-scores by various percentages, and sums each school's weighted, itemized z-scores to generate an overall score for the school. USN&WR then rescales the scores to run from 100 to zero and ranks law schools accordingly.
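
To make that arithmetic concrete, here is a minimal sketch of the scoring pipeline. The factor names, weights, and raw numbers are placeholders rather than USN&WR's actual inputs, and the rescaling shown is just one simple possibility, since the magazine does not disclose its exact method.

# Minimal sketch of the scoring arithmetic described above. The factor names,
# weights, and raw numbers are placeholders, not USN&WR's actual inputs.
import statistics

weights = {"PeerRep": 0.25, "LSAT": 0.125, "Over$/Stu": 0.0975, "BarPass": 0.02}

raw = {  # factor -> {school: reported value}
    "PeerRep":   {"A": 4.8, "B": 4.4, "C": 3.9},
    "LSAT":      {"A": 173, "B": 171, "C": 166},
    "Over$/Stu": {"A": 110_000, "B": 65_000, "C": 48_000},
    "BarPass":   {"A": 1.10, "B": 1.05, "C": 1.02},
}

def z_scores(values):
    # Standardize a {school: value} mapping to mean 0, standard deviation 1.
    mean = statistics.mean(values.values())
    sd = statistics.pstdev(values.values())
    return {school: (v - mean) / sd for school, v in values.items()}

# Weighted, itemized z-scores: one number per school per factor.
itemized = {factor: {s: weights[factor] * z for s, z in z_scores(vals).items()}
            for factor, vals in raw.items()}

# Sum each school's weighted z-scores, then rescale so scores run from 100 down to 0.
totals = {s: sum(itemized[f][s] for f in weights) for s in raw["PeerRep"]}
top, bottom = max(totals.values()), min(totals.values())
overall = {s: round(100 * (t - bottom) / (top - bottom)) for s, t in totals.items()}
print(overall)  # e.g. {'A': 100, 'B': ..., 'C': 0}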

In earlier posts I described my model of the most recent U.S. News & World Report law school rankings (the "2010 Rankings"), quantified its accuracy, and published itemized z-scores for the top two tiers of schools. (Separately, I also suggested some reforms that might improve the rankings.) Studying those z-scores reveals a great deal about how the top-ranked law schools got that way. The lessons hardly jump out from the table of numbers, though, so allow me to here offer some illustrative graphs.

Weighted & Itemized Z-Scores of Top 100 Law Schools in Model of 2010 USN&WR Rankings

The above graph, "Weighted & Itemized Z-Scores of Top 100 Law Schools in Model of 2010 USN&WR Rankings," reveals an interesting phenomenon. The items on the left of the graph count for more of each school's overall score, whereas the items on right count for less. We would thus expect the line tracing the maximum weighted z-scores for each item to drop from a high, at PeerRep (a measure of a school's reputation, worth 25% of its overall score), to a low, at Lib (a measure of library volumes and equivalents, worth only .75%). Instead, however, the maximum line droops at Emp9 (employment nine months after graduation) and soars at Over$/Stu (overhead expenditures per student). The next graph helps to explain that mystery.

Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools

The above graph, "Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools," reveals two notable phenomena. First, the Emp9 z-scores, despite potentially counting for 14% of each school's overall score, lie so close together that they do little to distinguish one school from another. In practice, then, the Emp9 factor does not really affect 14% of these law schools' overall scores in the USN&WR rankings. (Much the same holds true of top schools outside of these 12, too.)

Second, the Over$/Stu z-scores range quite widely, with Yale having more than double the score of all but two schools, Harvard and Stanford, which themselves manage less than two-thirds of Yale's Over$/Stu score. That wide spread gives the Over$/Stu score an especially powerful influence on Yale's overall score, making it almost as important as Yale's PeerRep score and much more important than any of the school's remaining 10 z-scores. In effect, Yale's extraordinary expenditures per student buy it a tenured slot at number one. (I observed a similar effect in last year's rankings.)
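
A toy calculation (with made-up z-scores rather than the actual figures) illustrates both points: a factor's practical pull on the gap between two schools is its weight times the gap in their z-scores, so a tightly clustered factor moves little even under a large nominal weight, while a widely spread factor can dominate.

# Toy numbers, not the real data. A factor's effect on the gap between two
# schools is its weight times the difference in their z-scores.
emp9_weight = 0.14        # Emp9's nominal weight, per the post
overstu_weight = 0.0975   # an assumed weight for Over$/Stu, for illustration only

emp9_z = {"School X": 1.02, "School Y": 0.95}      # clustered (hypothetical)
overstu_z = {"School X": 3.10, "School Y": 0.90}   # widely spread (hypothetical)

emp9_gap = emp9_weight * (emp9_z["School X"] - emp9_z["School Y"])
overstu_gap = overstu_weight * (overstu_z["School X"] - overstu_z["School Y"])

print(round(emp9_gap, 3), round(overstu_gap, 3))   # roughly 0.01 vs. 0.21

On these made-up numbers, the heavily weighted but tightly bunched factor barely separates the two schools; the moderately weighted but widely spread one does nearly all of the work.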

Other interesting patterns appear in "Weighted & Itemized Z-Scores, 2010 Model, Top-12 Schools." Note, for instance, that Virginia manages to remain in the top-12 despite an unusually low Over$/Stu score. The school's strong performance in other areas makes up the difference. Though it is not easy to discern from the graph, Virginia's reputation and GPA scores fall in the middle of these top-12 schools' scores. Northwestern offers something of a mirror image on that count, as it remains close to the bottom of the top-12 despite a disproportionately strong Over$/Stu score. The school's comparatively low PeerRep and BarRep scores (the lowest in the top-12) and its GPA score (nearly tied for the lowest) pull it down; Northwestern's Over$/Stu score saves it.

[Since I find I'm running on a bit, I'll offer some other graphs and commentary in a later post or posts.]

[Crossposted at Agoraphilia, MoneyLaw.]


Monday, August 17, 2009

Transfer Fees

I have not read Why England Lose: and other Curious Football Phenomena Explained by Simon Kuper and Stefan Szymanski but this excerpt of a review of the book in the August 13th issue of the Economist caught my eye:

"A third myth is that clubs cannot buy success. They can, so long as they spend on players’ wages rather than on transfers. Almost 90% of the variation in the positions of leading English teams is explained by wage bills. Transfer fees contribute little. New managers hoping to make their mark often waste money. Stars of recent World Cups or European championships are overrated. So are older players. So, curiously, are Brazilians and blonds."

I guess the best example of this in baseball is the Red Sox and Dice-K. But I wondered whether there are transfer fees in law teaching and whether the same phenomenon could be at work. I could only think of one transfer and one that is like a transfer fee.

At my School, if you take a sabbatical you must come back for at least a year. If not, as I understand it, either the person leaving or, more likely, the destination school must provide compensation. To me that is very similar to a transfer fee but certainly not of the magnitude of those you read about in soccer.

Another practice that has the same effect is the treatment of a trailing spouse. The trailing spouse matter usually involves privileged people who have come to believe that, unlike the lower classes, they should not be put to life's hard choices. At my University for a time (and maybe even now) there was a plan. If one department wanted to hire a person who had a trailing spouse, that department would pitch in 1/3 of the trailer's salary. The department hiring the trailer would pay 1/3 and the central administration would pay 1/3.

So, suppose a department found a good candidate and offered $100,000. The trailing spouse matter is then raised and the plan is put into action. The trailer's salary will be $90,000. Listing it as the trailer's salary is a nice way to let the trailer save face, but in reality the new faculty member is being paid at least $130,000, not $100,000.
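
To spell out the arithmetic behind that last figure, using only the numbers above:

# The hiring department's true outlay: the offer plus its one-third share of
# the trailing spouse's salary.
candidate_offer = 100_000
trailer_salary = 90_000
department_share = trailer_salary / 3   # the 1/3 the hiring department pitches in

print(candidate_offer + department_share)   # 130000.0 -- "at least $130,000"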

Is this a transfer fee? Obviously it is not, because ultimately it becomes, indirectly, part of the wage of the new hire. On the other hand, the first department had a budget to spend on the "player" of $100,000. If it had known that it really had a budget of $130,000, it could have shopped at a different and more productive level. Put differently, if the school had considered what it was actually paying for its new hire, it could have hired someone better. As with the transfer fee, for the total amount paid, a better decision could be made.

Are there other academic hiring transfer fees? Not sure.

Tuesday, August 04, 2009

Reforms Suggested by Modeling the Law School Rankings

As I recently observed, the close fit between law schools' scores in U.S. News & World Report's rankings and the scores of those same schools in my model of the rankings "suggests that law schools did not try to game the rankings by telling USN&WR one thing and the ABA . . . another." Since both Robert Morse, Director of Data Research for USN&WR, and the ABA Journal saw fit to comment on that observation, perhaps I should clarify a few points.

First, I have no way of knowing whether or not law schools misstated the facts, by accident or otherwise, to both the ABA and USN&WR. The fit between USN&WR's scores and my model's scores indicates only that law schools reported, or misreported, the same facts to each party.
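
For the curious, the consistency check amounts to something like the following sketch, with placeholder score lists standing in for the published scores and my model's output.

# Sketch of the consistency check: compare published overall scores with the
# model's scores. These two lists are placeholders, not the actual 2010 figures.
from statistics import correlation   # available in Python 3.10+

published = [100, 93, 92, 88, 85, 80]   # hypothetical USN&WR overall scores
modeled   = [100, 94, 91, 88, 84, 81]   # hypothetical scores from the model

print(round(correlation(published, modeled), 3))
# A value near 1 says the schools fed both pipelines the same underlying data;
# it says nothing about whether that data was accurate.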

Second, this sort of consistency test speaks only to those measures USN&WR uses in its rankings, that it does not publish with its rankings, and that the ABA collects from law schools: median LSAT, median GPA, overhead expenditures/student, financial aid/student, and library size. Measures that USN&WR uses and publishes—reputation among peers and at the Bar, employment nine months after graduation, employment at graduation, student/faculty ratio, acceptance rate, and Bar exam performance—go straight into my model, so I do not have occasion to test their consistency against ABA data. In some cases (the reputation scores and the employment at graduation measure), the ABA does not collect the data at all. This proves especially troubling with regard to the latter. We have little assurance that USN&WR double-checks what schools report under the heading of "Employment at Graduation," and no easy way to double-check that data ourselves.

Third, and consequently, USN&WR could improve the reliability of its rankings by implementing some simple reforms. I suggested three such reforms some time ago. USN&WR has largely implemented two of them by making its questionnaire more closely mirror the ABA's and by publishing corrections and explanations when it discovers errors in its rankings. (I claim no credit for that development, however; I assume that USN&WR acted of its own volition and in its own interest.)

Another of my suggested reforms remains as yet unrealized, however, so allow me to repeat it, here: USN&WR should publish all of the data that it uses in ranking law schools. It could easily make that data available on its website, if not in the print edition of its rankings. Doing so would both provide law students with useful information and allow others to help USN&WR double-check its figures.

To that, I now add this proposed reform: USN&WR should either convince the ABA to collect data on law school graduates' employment rates at graduation or discontinue using that data in its law school rankings. That data largely duplicates the more trustworthy (but still notoriously suspect) "Employment at Nine Months" data collected by the ABA and used by USN&WR in its rankings. And, unlike that data, the "Employment at Graduation" numbers are not reported under the threat of ABA sanctions. We cannot trust the employment at graduation figures, and USN&WR does not need them.

Among the reforms I suggested some two years ago I also included one directed at the ABA, calling on it to publish online, in an easily accessible format, all of the data that it collects from law schools and that USN&WR uses in its rankings. I fear that, in contrast to USN&WR, the ABA moved retrograde on that front. I leave that cause for another day, however; here I wanted to focus on what my model can tell us about USN&WR's rankings.

[Crossposted at Agoraphilia, MoneyLaw.]
