1. The reputation surveys are not sent out broadly enough. They go out only to deans and to newly tenured professors. A broader cross-section of law school faculties should be used in the poll.
2. A more granular reputation scoring system should be used. The current 1-5 score isn't granular enough. For starters, how do top schools like Yale and Harvard have average scores less than 5? Who's giving them a 4? Seems fishy to me. Suppose a dean thinks Yale is the best and that Chicago is excellent -- not quite as good as Yale, but very close. Yale gets a 5. Does Chicago then get a 4? That's a big drop. But giving Chicago a 5 says it is Yale's equal, which may not be the dean's view. Either way, the scale isn't granular enough.
3. The number of library volumes shouldn't be a part of the scoring system. This strikes me as a silly factor in ranking law schools.
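The granularity point above can be sketched numerically. This is a hypothetical illustration, not USNWR's actual method: the fine-grained scores and the rounding rule are invented for the example, showing how distinct opinions collapse onto a coarse 1-5 scale.

```python
# Hypothetical illustration of the granularity problem: a dean's
# fine-grained opinions (invented numbers on a 0-10 scale) collapse
# when mapped onto a coarse integer 1-5 survey scale.
fine_grained = {"Yale": 9.8, "Chicago": 9.4, "Columbia": 8.8}

def to_survey_scale(score, hi=10.0, levels=5):
    """Round a 0-to-hi score onto an integer 1..levels scale."""
    return max(1, min(levels, round(score / hi * levels)))

survey = {school: to_survey_scale(s) for school, s in fine_grained.items()}
print(survey)  # → {'Yale': 5, 'Chicago': 5, 'Columbia': 4}
```

On the coarse scale, Yale and Chicago become indistinguishable even though the dean ranks them differently, while Columbia takes a full-point drop for a much smaller gap in the underlying opinion.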
1. (Fantasyland) Get rid of ordinal rankings, and group schools into meaningful clusters. There's no way that USNWR will make this change. Ordinal rankings sell, even though the distinctions between 1 and 5 (or, for that matter, between 1 and 15, or 15 and 30, etc.) are negligible.
2. Make the reputational scores a much smaller part of the rankings. These scores are based on surveys, folks, and surveys sent to a very small number of people (see Solove's point, above). There are all sorts of problems with halo effects lingering around schools and with people rating schools without knowing much--or anything--about them.
3. If USNWR's rankings are supposed to be a consumer guide, focus on things important to the consumers (potential law students): for one thing, parse out the job placement stats, and force law schools to report how many graduates are working as lawyers, in what types of practices, in which markets, etc., so that the old trick of hiring grads for a few months is no longer advantageous. Toss out the LSAT and UGPA stuff altogether, leaving it to the ABA-LSAC Official Guide to ABA-Approved Law Schools. Who applies to a particular law school doesn't tell the consumer much about what the law school itself is like--it just perpetuates halo effects and distorts the admissions process. Keep bar passage rates in. Try to figure out if there's a way to track the learning experience for the students. (I doubt it. How do you track how good the professors are at teaching, or whether they're available for office hours?) Add in measurable facilities or curricular factors, perhaps, if there's a way to keep them from being gamed.
The whole problem with the rankings is that they're being gamed by some schools, which further distorts an already distorted process. Figure out ways to measure what matters to law students, and (just as important) come up with measures that are hard to game. Ultimately, it's going to be hard to justify a purely quantitative model that both keeps schools honest and remains useful to the consumer.
But you weigh in: what should USNWR do differently in future rankings?