Brian Leiter's Law School Reports

Brian Leiter
University of Chicago Law School

A Member of the Law Professor Blogs Network

Tuesday, May 27, 2008

Student-Edited Law Reviews, Once Again

Professor Tamanaha (St. John's) raises the issue here, noting that in order to overcome the non-merit-based nature of editorial review by student-edited law reviews, we need "to come to a collective recognition that the placement of an article is not itself a measure of its quality. Law professors often say this, but deep down they don't really believe it because elite journals have magical names."

I am very interested to hear to what extent readers think this is still accurate. I certainly have colleagues--colleagues whom I highly regard, I should add--who will sometimes say of a job candidate, "Well, she had an article in the Michigan Law Review," as though that meant something (other than that the article will be more widely read than if it were in the Indiana Law Journal), and I am always astounded, and often point out how absurd this is. Some of this is generational: older colleagues are far more likely than younger colleagues to cite law review placement in discussing an article. But I am curious about the experience/impressions of others in the legal academy.

As long as your e-mail address and ISP confirm that you are a legal academic, I'll permit anonymous postings here. Post only once; comments may take a while to appear.


Comments

It is astonishing: the disconnect I have seen in colleagues (at various schools) who really should know better, not least from their own experience! They know, from their own articles, from those of friends they have read in manuscript, and from the published articles they have read in the journals, that there is little correlation between journal placement and article quality. (What correlation there is is largely due to the law journals' positive response to better-known names and more prestigious letterheads; to the extent that these in turn reflect authorial quality, the connection is, at best, imperfect. In any event, those factors will be absent when comparing the placements of entry-level candidates, and largely absent for junior-level lateral hiring.) And yet, though my colleagues know, with one part of their brain, that placement indicates little beyond some special skill in choosing fashionable topics or in getting law review editors' attention, at the same time they will emphasize, in relation to hiring decisions (as Brian Tamanaha reported), the candidates' article placements. I have argued against this consistently for years, and some faculty will admit the point in private, but still seem swayed when decisions have to be made. It may be, as Brian T. suggests, a matter of the magic of the high-prestige journal names. It may also be (a related point) that these colleagues have tried so hard themselves (with or without success) to get their pieces into these "better" journals that they cannot help but believe that the objective is worth chasing, and that obtaining it indicates something.

Posted by: Brian Bix | May 27, 2008 7:43:54 PM

It seems to be an issue. If you look here, for example, http://www.concurringopinions.com/archives/2008/05/advice_for_law.html, the advice is to have "well placed" articles. Later discussion says that this means a top-100 journal, and certainly not a specialty journal. This, despite the fact that third-tier schools can produce outstanding volumes: http://prawfsblawg.blogs.com/prawfsblawg/2008/02/superb-law-and.html

On a side note, how true is it today that the Michigan Law Review will be that much more widely read than Indiana? If one is doing research, at least, will any competent professor disregard an on-point article in any top-100 law review? Or in any law review, if it is written by a law professor in the appropriate field?

Do people still subscribe to law reviews that they regularly read in paper, without doing a Lexis or Westlaw query (or getting a reprint in the mail)?

Posted by: Michael Risch | May 28, 2008 3:52:40 AM

Giving the law reviews some unwarranted import seems a fair price to pay for the many, many hours of labour the students put in checking citations.

In other words, the true cost of fixing the system lies not so much in convincing professors to give due disregard to law reviews (a difficult row to hoe, given the prevalence of professors from 'top tier' law review backgrounds), but rather in convincing professors to accept the labour burden of doing peer-review work.

Posted by: Craig Agule | May 28, 2008 6:13:52 AM

I second Brian B.'s post: it is really remarkable how smart people still infer quality -- and sometimes even its opposite -- based solely on placement. It seems to be the irrepressible power of proxies. Heuristics get us into this mess (student law review editors use CVs to pick pieces) and then they exacerbate it. And in response to Michael's query, I share the sense that electronic search is the vastly more important mode of selection/delivery today. But I still think that the mailed offprint of an article in Michigan gets one a lot more "oomph" and attention than an article in Indiana (and that's not to knock Indiana!!!).

Posted by: Jamison Colburn | May 28, 2008 6:17:22 AM

To be contrarian, I think that placement is relevant information, and people are viewing the world in too binary a fashion. Perhaps it once was that people put too much weight on placement, but that's not a reason to put zero weight on it. I think articles published in the highly ranked journals are on average better. Now, you can debate the size of the correlation coefficient, and there are lots of exceptions.

Using only my own articles, which should avoid bias, I think there is definitely a correlation with placement -- but of course not one so high that it should be relied upon too heavily.

This may be of some benefit -- there are fields with peer-reviewed journals where promotion and tenure decisions are made exclusively on placement. But the correlation with quality is exaggerated for peer-reviewed journals too, and at least lawprofs recognize the need to read articles and independently assess them, rather than simply relying on placement.

Posted by: frank cross | May 28, 2008 8:53:25 AM

I agree that placement is not very highly correlated with quality (although, as Frank Cross says, the correlation is probably not zero). But articles in the "better" journals do get read more, and their authors get positive publicity. To answer Michael Risch's question, for example, I have the library circulate to me each new issue of the top 5 or 10 law reviews; I glance at the covers and read any article that looks interesting, whether or not I'm currently working in the area. I sometimes copy the first page and put it in a file for something I might work on later. Maybe that's old-fashioned, but I suspect I'm not the only one who still has hard-copy issues cross her desk (I know many of my colleagues at Vanderbilt do). When I actually research a topic, of course, I look specifically for articles on point and read them no matter where they're published. Bottom line: If a school is looking to hire quality faculty, placement should be given little or no weight. But if it's looking to improve its visibility regardless of quality . . . .

Posted by: Suzanna Sherry | May 28, 2008 9:25:43 AM

My sense is also that placement has some psychological impact on how faculty treat an article. The impact is certainly far too strong, and in an attempt to counter it, our new tenure guidelines at my school specifically state that placement should not be considered.

Still, might there be something to Frank Cross's claim that there is some correlation between placement and quality? Perhaps. As long as there is even a very small correlation between actual quality and the evaluation of student editors, the submission process may improve the signal value somewhat. A paper that places very low has probably been very widely rejected--that conveys information so long as the rejections aren't completely random. Similarly, a paper that places highly has often moved up the food chain through expedited review, and thus been accepted by a number of journals, which again conveys some information if acceptances aren't random.
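To make the intuition concrete, here is a minimal Bayesian sketch in Python; every number in it (the prior and the per-journal acceptance rates) is an illustrative assumption, not an estimate of how journals actually behave:

```python
# Illustrative only: how acceptance/rejection counts could shift beliefs
# about an article's quality, assuming editors are better than random.
from math import comb

P_GOOD = 0.5        # assumed prior probability the article is good
P_ACC_GOOD = 0.30   # assumed chance a journal accepts a good article
P_ACC_BAD = 0.10    # assumed chance a journal accepts a bad article

def posterior_good(n_journals: int, n_accepts: int) -> float:
    """P(good | n_accepts acceptances out of n_journals independent reads)."""
    like_good = (comb(n_journals, n_accepts)
                 * P_ACC_GOOD**n_accepts
                 * (1 - P_ACC_GOOD)**(n_journals - n_accepts))
    like_bad = (comb(n_journals, n_accepts)
                * P_ACC_BAD**n_accepts
                * (1 - P_ACC_BAD)**(n_journals - n_accepts))
    return like_good * P_GOOD / (like_good * P_GOOD + like_bad * (1 - P_GOOD))

print(posterior_good(20, 0))  # ~0.007: wide rejection carries real information
print(posterior_good(20, 5))  # ~0.85: several acceptances shift the odds up
```

The exact figures are beside the point; so long as acceptance rates for good and bad papers differ and decisions aren't completely random, the counts are informative.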

My colleague Brian is of course right that many things like letterhead affect the process in ways they shouldn't. That should of course affect how one tries to extract any sort of information from placement. A Harvard professor placing well doesn't convey much information, but maybe a Harvard professor placing unexpectedly poorly does convey something. Conversely, an unknown practitioner placing poorly conveys little, but an unknown practitioner placing well may be a genuine positive signal.

Posted by: Brett McDonnell | May 28, 2008 11:03:31 AM

Missing from this discussion is a comparative element. Using the identity of a law review as a proxy for the quality of the articles it contains may be foolish, but surely peer-reviewed journals have some tendencies that are similar to law reviews. They are hardly immune from poor editing or article selection based on fads (or ideological predispositions). Furthermore, it has been my experience that anonymity in article selection can be difficult to maintain, particularly within small sub-fields. Do people believe that a placement in a peer-reviewed journal advances one's career more than a placement in a "top" law journal? It seems to me that it is seen as a stronger proxy for quality than law review placements, if only because there are so many fewer slots in peer-reviewed journals.

Posted by: Reuel Schiller | May 28, 2008 11:58:57 AM

I can't speak about other fields, but peer review in philosophy works fairly well. There is work in the Yale Law Journal and Harvard Law Review that is at the intellectual level of an overly ambitious undergraduate; this simply doesn't happen in any of the good peer-reviewed philosophy journals, ever: all the work is of good, professional quality. Indeed, in the best of the journals--like Philosophical Review and Ethics and Nous, among others--I'd be confident in saying that appearing in that journal is a quite reliable proxy for good quality work. (Of the prominent journals known to law professors, the exception to the above generalization would be Philosophy & Public Affairs, which had fairly corrupt editorial practices for a while, though it seems to have reformed admirably under Charles Beitz.)

Posted by: Brian Leiter | May 28, 2008 12:12:30 PM

Let's say there are two job candidates, Xavier and Yannick, each with one article, X and Y, respectively. X is in a top 5 journal (Fancy Ivy L. Rev.). Y ends up in a journal in the 30-40 range (Mid State L. Rev.). We are trying to decide which of the two to hire. Should we put any weight on the placement of their articles?

First, we have to imagine a case in which it would make sense to put any weight on journal placement generally--which is essentially to defer to the judgment of journal editors over our own. This will make sense only if (a) the editors have more expertise in the area than we do, or (b) we don't have time to read the article ourselves. Given the student-edited nature of law reviews, (a) will not typically obtain. So we have to assume that (b) does. Let's say that we are in a hurry (or there are too many candidates or whatever) and don't have time to read the article ourselves.

So, should we prefer Xavier to Yannick because he placed in Fancy Ivy L. Rev.? Here is an argument that we should.

Leave aside those fortunate few who are likely to have their articles selected by top journals solely on the basis of their CV. Given that the concern, at least in the initial post, is to determine the extent to which article placement should be relevant to the evaluation of job candidates, this doesn't seem overly unreasonable. Also leave aside 'in house' article placements, since those are often arrived at through different, less competitive means.

One thing that people often complain about in the student-edited world of multiple simultaneous submissions is the 'expedite' game, in which authors place an article at a lower-ranked journal and then call journal editors higher up the food chain to see if they are interested in the piece (under the guise of needing a decision before a newly imposed deadline).

This is an annoying feature of the process for all involved, certainly. But one thing that it means is that many of the articles eventually placed in the 'top' journals (particularly by people without the name recognition or 'courage' to submit only to the top 10 journals) will have been accepted by many journals below these top journals (and withdrawn from many others).

Given that they are both written by job market candidates, it is plausible to believe that both articles, X and Y, were submitted to the same 75 to 100 journals (or more). X ends up being accepted by a top 5 journal (Fancy Ivy L. Rev.). Y ends up in a journal in the 30-40 range (Mid State L. Rev.).

So there are two possible situations for X. X might have been submitted to those 75-100 journals and *only* accepted at Fancy Ivy L. Rev. Or X might have been accepted at some or many journals below Fancy Ivy L. Rev. (It is not uncommon for an article placed in a top 5 journal to have been accepted at 5 or more journals 'below' the top 5 journal and expedited up.)

So, too, with Y. Y might have only been accepted at Mid State L. Rev., or it might have been accepted at some of the journals below Mid State L. Rev. as well.

The key is that it is reasonable to believe that X both (a) was accepted at journals below its final placement and (b) was accepted at more journals in total than Y. It would be odd for Xavier to somehow 'hit a home run' without any singles along the way (though this might happen), and there are simply more journals 'below' Fancy Ivy L. Rev. at which X might have been accepted without ultimately being placed in those journals.

If it is reasonable to believe (a) and (b), then it is more likely that X is good/great than that Y is good/great. We can get to this conclusion simply by knowing the journals they each end up in, if these assumptions are true:

(i) That the Condorcet jury theorem is correct.

(ii) That each article faces a number of accept/reject choices from many different student editors, and that these choices are made based upon the assessment of whether the article is good/great or not.

(iii) That student editors are better than random at determining whether an article is good/great or not.

(iv) That student editors from different journals make these determinations in the statistically independent sense required for the jury theorem to apply.

Given these assumptions, if it is reasonable to believe that X was accepted at more journals than Y (as argued above), and if it is reasonable to believe that X was accepted at several or many journals (each employing many individual readers, and often requiring all of those readers to come down on the side of acceptance/goodness/greatness), then, given just this universe of information, it is more reasonable to believe that X is good/great than it is to believe that Y is good/great. (More carefully, it is more certain that X is good/great than it is that Y is good/great.)
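For concreteness, here is a minimal Python rendering of the theorem being invoked; the 0.55 accuracy figure is an arbitrary stand-in for 'better than random,' not a claim about actual editors:

```python
# Condorcet jury theorem, computed directly (illustrative numbers only):
# probability that a strict majority of n independent readers, each correct
# with probability p > 0.5, reaches the right verdict on an article.
from math import comb

def majority_correct(n: int, p: float) -> float:
    """P(strict majority of n independent voters is correct)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Even barely-better-than-random readers become reliable in the aggregate:
for n in (5, 15, 45, 135):
    print(n, round(majority_correct(n, 0.55), 3))
# prints roughly: 5 0.593, 15 0.648, 45 0.751, 135 0.879
```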

Another way of seeing the point: it is likely that X has been deemed good/great by a number of student editors that significantly exceeds the number who deemed Y good/great. This is similar to a situation in which we are trying to assess whether a coin is unfair, and for coin A we know that it has landed heads 4 times and tails once, while for coin B we know that it has landed heads 80 times and tails 20 times. It would be unreasonable to believe that A is more likely to be unfair than B, and quite reasonable to believe that B is more likely to be unfair than A.
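The coin comparison can be checked directly. In this sketch, 'unfair' is modeled, purely for illustration, as an assumed 0.8 heads bias compared against a fair 0.5 coin, with equal priors:

```python
# Illustrative Bayesian check of the coin analogy (all parameters assumed).
def p_unfair(heads: int, tails: int, bias: float = 0.8) -> float:
    """P(coin is biased | flips), comparing a 0.8-bias model to a fair coin."""
    like_unfair = bias**heads * (1 - bias)**tails
    like_fair = 0.5**(heads + tails)
    return like_unfair / (like_unfair + like_fair)

print(p_unfair(4, 1))    # coin A: ~0.72 -- suggestive, but far from certain
print(p_unfair(80, 20))  # coin B: ~1.0  -- overwhelming evidence of bias
```

Eighty heads out of a hundred is vastly stronger evidence than four out of five, even though the observed proportions are identical.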

Of course, it would be nice to know how many total journals X and Y were submitted to, how many they were withdrawn from, were rejected from, and so on. But we are assuming that we don't have this information.

Note that this result follows even assuming that student editors are all of equal ability, as long as they are all better than random. According to the Condorcet jury theorem, the conclusion becomes more certain more quickly if the quality of student editors improves as one goes up the journal ranks (since X has been accepted higher up whereas Y has not).

It is hard to see which of (i)-(iv) would be rejected. (iii) is probably the most tempting, but better than random is pretty weak.

Posted by: Alex Guerrero | May 28, 2008 1:02:52 PM

A colleague and I wrote an op-ed about this issue last year for Findlaw. There's truth in humor. Here's the link:
http://writ.news.findlaw.com/commentary/20070511_hawley.html

Posted by: Scott Gerber | May 28, 2008 3:25:56 PM

With regard to Alex Guerrero's post: I think his analysis is mistaken in a number of ways. Since the ratcheting process supposes that journals do not look at an article until someone else has already made an offer, one can't suppose that (iv) applies--it is just not true that the decisions are independent. Also, the Condorcet jury theorem presupposes an objectively "correct" answer. One can easily believe that students (as a group) have a conception of what a "great" article is that may well be (and probably is) different from what a law professor thinks a "great" article is. This puts (iii) into question. Given the emphasis on CVs, I doubt (ii) holds as a general rule. I will grant him (i): the Condorcet jury theorem is certainly correct. It is the application of it that leads to error.

Posted by: Paul Edelman | May 28, 2008 5:50:19 PM

Reuel Schiller writes: "Do people believe that a placement in a peer-reviewed journal advances one's career more than a placement in a "top" law journal? It seems to me that it is seen as a stronger proxy for quality than law review placements, if only because there are so many fewer slots in peer-reviewed journals."

While I agree with Reuel that there are problems with the peer-review process, it's also the case that the very top history journals -- the ones hardest to place in (e.g., the Journal of American History and the American Historical Review) -- are hands down better at quality control than the top law reviews. But since law faculty don't tend to know about journals in other fields, publishing in these journals does more to enhance one's reputation among historians than in the law world. I still think it's important to publish in these journals, both because the peer-review feedback is so valuable, and because seriously engaging the broader field of historians helps one's scholarship over the long term -- even if the value of the publication doesn't really register with one's law colleagues.

Posted by: Mary Dudziak | May 28, 2008 8:16:32 PM

In my fields (law and economics, law and finance, and empirical legal studies), law reviews provide zero quality control, absolutely none. Controlling for the place of an author’s employment (which editors get from a CV as a proxy for quality), I’ve observed zero correlation between the quality of the article and its placement.

Often, things are even worse – top law reviews provide a *negative* quality control for law-econ and empirical papers, selecting the *worst* stuff. That’s because student editors like papers on hot topics with wildly strong conclusions and sweeping “policy implications” – those papers are most often methodologically incompetent political proclamations. Good empirical and law-econ work is subtle, narrow, technical, full of numbers, caveats, and reservations. Just not sexy enough for students. I’d say a good 90% of “empirical” articles published by top-50 law reviews would get a failing grade if submitted as course papers to an econometrics seminar. And most of what passes as “law and economics” at top-10 law reviews has no chance of a peer-reviewed publication of any sort, ever.

My view is that not only should we pay no attention to the name of a law review on an author's CV, but that we should treat a law review publication as equivalent to an SSRN posting for tenure and promotion purposes.

Posted by: Kate Litvak | May 29, 2008 6:36:05 AM

Though I wholeheartedly agree that for anything even remotely 'specialized' (i.e. anything law and ___ or anything not covered in the first two years of law school) peer-review is the only way to ensure actual quality control, I want to respond to a few of Paul Edelman's points about the jury theorem point I was suggesting with respect to less-specialized articles.

(And I should highlight that the point is a very weak one, just that if X is in a significantly better law review than Y, then it is somewhat more likely that X is good/great than it is that Y is good/great--if that is all the information one has about X and Y. There is no reason to think that article placement should be relevant to those making a tenure decision who have read the work carefully, interacted with the person extensively, and so on.)

First, regarding 'independence.' Just as a matter of journal mechanics, it is not true that the 'expedite' or 'ratcheting' process supposes that journals do not look at an article until someone else has already made an offer. Often, many articles are already cut by the time a call comes in to 'expedite' the piece. They are not re-reviewed. Also, learning that the article has been accepted at a lower journal just changes *when* the higher journal looks at it; it doesn't affect *whether* the staff looks at it, nor does it influence the decision of the higher journal editors. (There are just far too many articles that are expedited for there to be any real influence in this way.) So I think it is plausible that (iv) applies. The probability of Columbia L. Rev. editors getting the 'right answer' (that an article is good or bad) is equal to the probability of them getting the 'right answer' given that Oregon L. Rev. decided that the article is good or bad. Or so it seems to me.

(The weak psychological bump the article might get is almost surely unimportant or non-existent, since most journals work by not giving most of the reviewing staff information about whether the article currently before them has lower offers, where they might be from, or how many there might be.)

Second, regarding there being an 'objectively' correct answer. All that is required is that what students think makes an article great and what professors think makes an article great is the same, or roughly the same. Surely things aren't quite as bad as suggested... Both student editors and professors value clarity, argumentative rigor, ingenuity, concise expression, originality, 'significance' of the contribution, mastery of the relevant literature, and so on. Experts in an area will certainly be better at assessing whether an article possesses these properties or not, but presumably even non-specialist faculty members would feel competent to evaluate whether an article possesses many of these properties. It seems that diligent students, reading hundreds of articles, will come to have a decent ability to assess some, if not many, of these.

I suppose the worry is that the truly important properties--the ones that separate a competent article from a great one--are things such as originality and 'significance', properties that students are not better than random at identifying. (And for which it is relatively easy to 'mislead' students about using certain kinds of introductory hyperbole.) But this still seems like a strong view (better than random is pretty weak), and given the extent to which students perform diligent 'preemption' checks, get consultations from area experts at their law school, and have taken classes that survey the relevant area in some detail, making the case that students are not better than random would seem to require evidence (and more than just notable instances of good articles being rejected and bad ones accepted). This evidence might be forthcoming in the more specialized subfields (law and econ, law and phil, etc.), but it's not clear that it is generally available.

I do think that Kate Litvak's point about the possibility of systematic negative error with regard to certain topics is an important one. This is almost certainly present when it comes to the evaluation of articles on hot topics or 'accessible' topics of various sorts; there is reason to worry that students are going to be systematically biased in favor of such articles. These errors might be magnified when the article is otherwise difficult to evaluate, as in the case of law and econ pieces.

Finally, on the CV point. I think this point can be overstated. First, the author's CV is often not available to the reviewing staff. They might know the person's name, but that often means little to the largely ignorant reviewers. And while some of the staff will prefer articles by known folks, at least as many will prefer articles by unknown folks (enjoying finding the diamond, or giving good but undervalued work its due, etc.). A law review in the top ten will see over 2000 articles a year, looking to accept maybe (at most) a total of 30-40 of these to end up with 20-25 actual articles. The vast majority of these articles are from professors at decent to good law schools, most of whom the staff won't know anything about. There are simply far too many articles submitted by people with 'standout' CVs for that to serve as a substantial discriminating factor. I do agree, however, that there is no reason not to move to a blind-reviewing system.

The real crime, to my mind, is the preference given to in-house pieces, and the pressure to accept those pieces that some faculties place on their students. (This is a widespread problem, not particular to my experience.) This is bad for scholarship and bad for the relationship between students and faculty, and I can't see the arguments in support of it.

Posted by: Alex Guerrero | May 29, 2008 8:26:42 AM

I would add another comparative point. It is absolutely true that law review placement is generally a poor proxy for article quality. There is no substitute for reading the article yourself. However, the question is how it compares as a proxy to other criteria used in the hiring process.

For example, the law school a candidate attended is given a tremendous amount of weight; clerkships are also given a great deal of weight. And yet the law school a person attended is often contingent on things like the candidate's LSAT score and whether they needed the money and so took a scholarship, not the person's relative intelligence.

Similarly, clerkships are often awarded for weird reasons -- connections, politics, affinity for a particular school, etc. -- that have little to do with merit. Strong recommendations can be the byproduct of years of brown-nosing, not true ability. And yet schools tend to value these criteria nonetheless.

It seems to me that law review placement is a fair consideration along with all the other considerations, with the caveat that we understand that like the other proxies, it too is a proxy of uncertain value.

Posted by: Orin Kerr | May 29, 2008 10:48:07 AM

Just to clarify my comment above, I should add that I was considering the question from the standpoint of assessing entry-level candidates, which it seems to me is the context in which it arises in the most important way.

Posted by: Orin Kerr | May 29, 2008 10:54:10 AM

Adding to Orin Kerr's point and addressing Alex Guerrero's assumptions: while it may be true that a resume won't get you accepted at a law review, the assumption that fails is that a resume won't get you rejected. For example, entry-level candidates in practice have a much harder time getting published than those in VAPs or fellowships -- I know from experience, having done one of each.

Also, commenting on the "more read" point and the practice of reviewing full volumes when they come out, I wonder whether SSRN has helped level the playing field. I usually review abstracts and interesting articles long before I know what law review they will appear in.

Posted by: Michael Risch | May 30, 2008 5:37:37 AM

Orin,

I'd guess the power of good placements is much more important on the lateral market than it is on the entry-level market. Very few entry-level candidates have great placements anyway; undoubtedly, a good placement is used in that context too -- but the fixation on placement biases the lateral market much more dramatically, I'd guess. In short, top placements are often career-makers for many junior people looking to move. They lead to visiting offers and lateral offers to schools where it gets even easier to get good placements. The rich get richer and so on.

At the lateral level, I'd guess that many of the other bad proxies drop away. Does any top school looking at laterals really care about JD institution and clerkship? I'd be very surprised if that were true. By contrast, placements play a huge role in getting a lateral committee's attention.

None of this is to say that placement is meaningless. Often the better journals will give you better comments and better edits, which can lead to better articles. And there is no doubt that placement has an effect on whether an article will be read and cited. With schools getting more Leiter-sensitive (how often will our faculty be cited?), it is rational for them to pursue those with good placements.

But I still think we are all better off reading and making our own judgments; sub-contracting this work to our students is ultimately senseless. We should read and select articles, and the students can still do the bluebooking and editing.

Posted by: Ethan Leib | May 30, 2008 11:35:18 AM
