Monday, March 31, 2008

Cognitive dissonance and the USNWR rankings

As Brian Leiter correctly points out (here), the schools listed on Paul Caron's post (here) are trying to eat their cake and have it, too: objecting to USNWR's rankings, on the one hand, as being misleading, arbitrary, etc., and publicizing the heck out of their own results, on the other hand. This type of cognitive dissonance probably stems from the overwhelming pressure that deans face from students, alumni, and university administrators to move ever-higher in the rankings (akin to inventing a perpetual motion machine). Unless the press releases themselves debunk the rankings, though, or somehow distinguish between the overall rankings and particular components (a rise in bar passage rates, for example, or a mention in the specialty rankings), those deans who have signed onto the "we hate the rankings" letter but are also touting their own schools' overall rankings have some explaining to do.

Thursday, March 27, 2008

Putting my money where my mouth is....

In keeping with my comments in the recent ABA Journal article on the rankings (here), it would be helpful to talk about how to improve USNWR's rankings to make them, well, useful. Over at Concurring Opinions, Daniel Solove has started just such a project (here). Here are his suggestions:
1. The reputation surveys are not sent out broadly enough. They go out only to deans and to newly-tenured professors. A broader cross-section of law school faculties should be used in the poll.

2. A more granular reputation scoring system should be used. The current 1-5 score isn't granular enough. For starters, how do top schools like Yale and Harvard have average scores less than 5? Who's giving them a score of 4? Seems fishy to me. Suppose a dean thinks Yale is the best and that Chicago is excellent -- not quite as good as Yale, but very close. Yale therefore gets a 5. Does that mean Chicago gets a 4? That's a big drop. Giving Chicago a 5 says it is equal, which may not be the dean's view. There's a problem here -- the scale isn't granular enough.

3. The number of library volumes shouldn't be a part of the scoring system. This strikes me as a silly factor in ranking law schools.

Here's mine:

1. (Fantasyland) Get rid of ordinal rankings, and group schools into meaningful clusters. There's no way that USNWR will make this change. Ordinal rankings sell, even though the distinctions between 1 and 5 (or, for that matter, between 1 and 15, or 15 and 30, etc.) are negligible.

2. Make the reputational scores a much smaller part of the rankings. These scores are based on surveys, folks, and surveys sent to a very small number of people (see Solove's point, above). There are all sorts of problems with halo effects lingering around schools and with people rating schools without knowing much--or anything--about them.

3. If USNWR's rankings are supposed to be a consumer guide, focus on things important to the consumers (potential law students): for one thing, parse out the job placement stats, and force law schools to report how many graduates are working as lawyers, in what types of practices, in which markets, etc., so that the old trick of schools hiring their own grads for a few months is no longer advantageous. Toss out the LSAT and UGPA stuff altogether, leaving it for the ABA-LSAC Official Guide to ABA-Approved Law Schools. Who applies to a particular law school doesn't tell the consumer much about what the law school itself is like--it just perpetuates halo effects and distorts the admissions process. Keep in bar passage rates. Try to figure out if there's a way to track the learning experience for the students. (I doubt it. How do you track how good the professors are at teaching or whether they're available for office hours?) Add in measurable facilities or curricular issues, perhaps, if there's a way not to game them.

The whole problem with the rankings is that they're being gamed by some schools, which further distorts an already distorted process. Figure out ways to measure what matters to law students, and (just as important) come up with measures that are hard to game. Ultimately, it's going to be hard to justify a purely quantitative model that maintains honesty and that is useful to the consumer.

But you weigh in: what should USNWR do differently in future rankings?

Monday, March 17, 2008

Some thoughts on Jeff Skilling's appeal and his allegations about the suppression of exculpatory evidence

The more I see groups of people making obviously bad decisions with clearly foreseeable repercussions, the more I realize that law has very little power to regulate human behavior unless the law takes into account the fact that humans tend toward certain types of cognitive errors.  For how my frustration with legal "fixes" for problems plays out with Jeff Skilling's allegations of prosecutorial misconduct, see my post (here) over at the Legal Profession Blog.

Friday, March 14, 2008

Some fun reads

Paul Secunda's Tales of a Law Professor Lateral Nothing (here) and J. Robert Brown, Jr.'s Law Blogging, Law Scholarship, and Law School Rankings (here).  Thanks go to Jeff Lipshaw and the Legal Profession Blog for turning me on to Secunda's piece, which turned me on to Brown's piece!

Thursday, March 13, 2008

Bravo to Brian Leiter for his suggestion about blogging on the upcoming USNWR rankings!

Brian has posted a very sensible suggestion (here) about blogging about the upcoming USNWR rankings, which come out in a couple of weeks. As he suggests, blogging about the overall rankings is mostly a "garbage in, garbage out" exercise: the overall rankings have so many problems that specific posts on them are like shooting fish in a barrel. Posts on the components of the overall rankings (or about other observations regarding the rankings) may well be more useful.

Thanks, Brian!

And some thoughts before the rankings come out: as always, the improvements made by many schools won't be recognized by the rankings, because most schools are improving but the rankings are relative; aside from the truly exceptional law schools, the little (and I mean tiny!) differences in some components are going to cause some truly out-of-whack results for some schools; the whole idea that a school is "one better" than the school immediately below it is preposterous, just as the idea in This Is Spinal Tap that the amp was "one louder" because it "went to eleven" was laughable; and my sympathies go out to the various law deans across the country who are stocking up on antacids right now.

Tuesday, March 04, 2008

More on the Deadwood Report

Bob Morse of USNWR has blogged about it (see here), and he suggests that deans and faculty members may not be so sanguine about having the claims in their publications challenged by the Green Bag's Deadwood Report.  I think that all of us are looking forward to seeing how the methodology works:  every school's website and publications are different, and verifying claims will be no easy task.  Cheerleading for a school's programs is one thing, and I was happy to cheerlead for both schools for which I had the privilege of serving as dean.  But making statements that bear no relationship to reality is another thing altogether, which is why I welcome Green Bag's efforts.  Does anyone think that, at some point, the consumers of these various ranking systems will give up and go straight to the facts, such as the ones published in the joint ABA/LSAC Official Guide to Law Schools?  How much spin do consumers really need?