Brian Leiter's Law School Reports

Brian Leiter
University of Chicago Law School

A Member of the Law Professor Blogs Network

Monday, May 31, 2010

The Best Methods for Measuring the Scholarly Quality of a Faculty

So with over 250 votes cast, our earlier poll is now complete; herewith the results:

1. There is no reliable method  (Condorcet winner: wins contests with all other choices)
2. Impact/citation studies  loses to There is no reliable method by 125–115
3. Reputational surveys  loses to There is no reliable method by 130–110, loses to Impact/citation studies by 118–107
4. SSRN Downloads  loses to There is no reliable method by 163–65, loses to Reputational surveys by 173–51
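
For readers unfamiliar with how a Condorcet winner is determined, here is a minimal sketch in Python of how the winner falls out of pairwise tallies like those above. The Impact-vs-SSRN contest was not reported in the results, so its margin below is a placeholder assumption, not actual poll data.

    # A minimal sketch of identifying the Condorcet winner from the pairwise
    # tallies reported above. The Impact-vs-SSRN contest was not reported, so
    # its margin here is a placeholder assumption.
    pairwise = {
        ("No reliable method", "Impact/citation studies"):   (125, 115),
        ("No reliable method", "Reputational surveys"):      (130, 110),
        ("No reliable method", "SSRN downloads"):            (163, 65),
        ("Impact/citation studies", "Reputational surveys"): (118, 107),
        ("Reputational surveys", "SSRN downloads"):          (173, 51),
        ("Impact/citation studies", "SSRN downloads"):       (1, 0),  # assumed
    }
    options = {"No reliable method", "Impact/citation studies",
               "Reputational surveys", "SSRN downloads"}

    def beats(a, b):
        """True if option a beats option b head-to-head."""
        if (a, b) in pairwise:
            w, l = pairwise[(a, b)]
            return w > l
        l, w = pairwise[(b, a)]  # tally stored from b's perspective
        return w > l

    # The Condorcet winner wins its head-to-head contest with every rival.
    winner = [a for a in options if all(beats(a, b) for b in options if b != a)]
    print(winner)  # ['No reliable method']
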

I'm a bit puzzled by the victory of "there is no reliable method," though at least some readers told me they chose it as a proxy for "none of the above."  That would make more sense, since I assume all those who voted for "no reliable method" are in the habit of adjudging some faculties better than others, so they must actually believe there is some rational basis for those judgments.  Alternatively, perhaps some readers took "reliable" to mean wholly accurate or infallible, in which case, of course, one would have to agree.

I personally ranked reputational surveys first--not the kind U.S. News conducts, of course, but well-designed surveys of scholarly experts who are given real information seem to me the best gauge.  Certainly, that is how good schools make appointments: based on evaluations by experts, either within the school or from outside.  But, interestingly, impact/citation studies slightly beat out reputational surveys.  If there was any real consensus here, it was that SSRN downloads are not a very good measure, which certainly seems right.

Thoughts from readers who might care to explain their own votes or comment on the results?  Signed comments only.

http://leiterlawschool.typepad.com/leiter/2010/05/the-best-methods-for-measuring-the-scholarly-quality-of-a-faculty.html



Comments

Interesting poll. I didn't vote, but I would be in the "no reliable method" crowd - at least beyond the top 10 or maybe 15 schools. Here's my reasoning:

1. Citation studies are interesting, but they don't always reflect full scholarly quality. People who write in more popular areas will be cited more. They also don't really take the breadth of a faculty into account. Thus, a few super highly cited IP and Con Law folks can drown out really great but largely uncited labor folks. I also worry about the Matthew effect - later scholars at better-known schools may get cited more even though someone lesser known may have broken the scholarly ground. Thus, for the superstars, citation studies work nicely, but making them work across the top 100 schools is a bit harder.

2. Reputational studies solve many of the problems of citation studies IF there is sufficient information. The big problem is the IF. Most scholars, I suspect, don't know the work of most other scholars at other schools, other than (again) the bigger (and not so big) names at the better-known schools. I looked at the list of U of C alumni in teaching recently and was genuinely surprised at where some of my classmates and cohorts are teaching - including at some top schools. If I don't even know where my classmates are teaching, how can I effectively and objectively judge _any_ complete faculty?

I suspect that you could do a reputational study, but it would have to be in two parts. The first would be a "general sense" survey - how prominent are a school's scholars at conferences, in books, in general writing, etc. The second would be a subject-matter survey - what is the quality of the work by certain scholars in my field? I would expect people to be more familiar with that. If you did that kind of survey, you might be able to come up with some rankings based on overall and specialized writing.

Citation studies might then be used to corroborate or enhance the findings.
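
As one illustration, here is a hypothetical way such a two-part survey and citation data could be blended into a single score; the function name, scales, and weights are invented for this sketch, not proposals from the thread.

    # Hypothetical blend of a "general sense" survey score, a subject-matter
    # survey score, and a (field-normalized) citation score. The weights and
    # 0-10 scales are arbitrary placeholders for illustration only.
    def faculty_score(general, specialist, citations,
                      weights=(0.4, 0.4, 0.2)):
        w_gen, w_spec, w_cite = weights
        return w_gen * general + w_spec * specialist + w_cite * citations

    # Example: a faculty rated 7.2 overall, 8.1 by specialists, 6.5 on citations.
    print(round(faculty_score(7.2, 8.1, 6.5), 2))  # 7.42
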

Posted by: Michael Risch | May 31, 2010 12:23:53 PM

Professor Risch's comments suppose that to be useful each respondent to a survey has to have complete knowledge, but that supposition seems to me quite mistaken. One of the points of a survey is to aggregate lots of partial information into a better informed picture than any individual could produce. To serve this function, the survey has to have a proper balance of partial perspectives, but that is far less difficult than finding respondents each of whom has complete knowledge!
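
A toy sketch of that aggregation in Python - the schools and scores are invented, and a real survey would also need to verify that the partial perspectives are properly balanced, which this sketch does not attempt:

    # No single respondent rates every faculty, but averaging each school's
    # ratings across the respondents who do know it yields an overall picture
    # none of them could produce alone. All names and scores are hypothetical.
    from collections import defaultdict

    ballots = [
        {"School A": 9, "School B": 6},   # knows A and B only
        {"School B": 7, "School C": 4},   # knows B and C only
        {"School A": 8, "School C": 5},   # knows A and C only
    ]

    totals, counts = defaultdict(float), defaultdict(int)
    for ballot in ballots:
        for school, score in ballot.items():
            totals[school] += score
            counts[school] += 1

    # Mean rating per school, computed only over respondents who rated it.
    for school in sorted(totals, key=lambda s: -totals[s] / counts[s]):
        print(f"{school}: {totals[school] / counts[school]:.1f}")
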

Posted by: Brian | May 31, 2010 1:06:57 PM

I think we are talking about the same thing. Phase 1 of my proposed survey would be partial information outside subject area. Phase 2 would be partial information based on subject area. Between the two, you could aggregate partial information. I just suggest splitting it because not doing so may lead to smaller sample sizes than desired for particular people. Also, splitting allows you to infer conclusions that you can't from a generalized survey.

Posted by: Michael Risch | May 31, 2010 3:18:17 PM

I did take the poll and, surprisingly, the outcome channelled my vote. The "no reliable method" choice has to be first if any real meaning is given to the word "reliable." All the other answers pale before that one.

The halo effect of our strongly elite structure is just so powerful that it overpowers the other methods.

Posted by: Mike Zimmer | May 31, 2010 6:29:40 PM

I find it odd that people discredit things for imperfection. Nothing is perfect. As you point out, throwing your hands up in the air and saying "no reliable method" is pretty foolish.

All methods do suffer from a halo effect. Profs at higher-ranked schools will get higher reputations and citation counts. But citation counts and reputational studies, though biased, are probably the best way for an unknown to break through. Without them, there's nothing but halo effect.

Posted by: Frank Cross | Jun 1, 2010 7:47:50 AM

Unless and until there is some account of (and/or consensus on) the proper normative relationship between scholarly QUANTITY and QUALITY, I think "no reliable measure" is perhaps the best answer. Consider, as markers of relative extremes, the scholarly work of Elena Kagan and Cass Sunstein. Before they became part of the Obama Administration, Kagan was a high-quality, very low-quantity scholar; Sunstein was a high-quality, very high-quantity scholar.

On a data-driven cites/impact and/or download ranking, Sunstein would perform many, many, many magnitudes better than Kagan, and so too would a good number of much lesser-quality (but higher-quantity) scholars. Reputation surveys might be a bit better, but the scores here would likely reflect a lot of the voters' views on the quality/quantity debate --- e.g., those who favor a few great efforts (perhaps because they work slowly) would rank Kagan more highly than those who appreciate lots of output even if speedy production prevents each work from being perfected.

You rightly note, Brian, that "good schools make appointments" based on reputation surveys, though I see lots of express (and hidden) debate over this quality/quantity issue in this context. These days, in part because of data-driven cites/impact and/or download rankings, it seems quantity often tends to win over quality.

Put all this together, and I think it is fair to assert there is no reliable metric of merely a faculty's scholarly QUALITY. If you mean to assess seriously productive scholars, or scholarly influence or importance, then I think a viable metric is possible. But if we are seeking an assessment of "quality," I think the poll voters got it about right.

Posted by: Doug Berman | Jun 1, 2010 12:58:04 PM

I was thinking about the quality v. quantity issue as well - citation studies seem to handle that, though, since the number of citations grows with both quality and quantity. I don't know that it's the right mix, though, and you would have to normalize by total citations in the field overall, I would think.
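
A rough sketch of that normalization idea - all names and numbers below are made up, and a real study would need actual field-wide citation baselines:

    # Divide each scholar's citation count by a field-wide baseline so that
    # heavily cited fields (e.g., Con Law) don't swamp quieter ones (e.g.,
    # labor law). The baselines and counts here are invented for illustration.
    faculty = [
        {"name": "Scholar 1", "field": "Con Law", "cites": 900},
        {"name": "Scholar 2", "field": "Labor",   "cites": 120},
        {"name": "Scholar 3", "field": "Con Law", "cites": 300},
    ]
    field_mean_cites = {"Con Law": 450, "Labor": 60}  # hypothetical baselines

    for s in faculty:
        normalized = s["cites"] / field_mean_cites[s["field"]]
        print(f'{s["name"]} ({s["field"]}): {normalized:.1f}x the field mean')
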

In response to other comments, I wasn't suggesting we shouldn't try to rank, or that such efforts have no value. I wouldn't read posts like these and certainly wouldn't comment if I thought them valueless. I'm all for someone listing me highly on some survey (nod, wink).

Even though they have some value and benefit, they are still unreliable for what they purport to measure. That doesn't mean the effort shouldn't be made, just that it is worth recognizing the limits of such studies.

Posted by: Michael Risch | Jun 1, 2010 5:11:21 PM

I wonder if you could do a similar poll on teaching quality and whether there are reliable methods of measuring it? I realize that the ABA Section of Legal Education is working on this with regard to outcome measures (http://www.abanet.org/legaled/committees/OutcomeMeasures.doc). Furthermore, is there a way to measure scholarship/teaching correlations (if any)?

Posted by: John Mayer | Jun 2, 2010 10:11:43 AM
