
November 29, 2007

Ranking of Law Schools by Placement in Federal Appellate Court Clerkships

UPDATE:  Please note that the data below are not entirely accurate, as the comments make clear; some corrections have already been recorded at the Clerkship Blog--follow the links below.

======================================

The Federal Appellate Clerkship Blog is reporting clerkship placement for the 2008-09 term.  I don't know whether the statistics are reliable, though some of the stats I know about look correct.

Here are the top 25 law schools by the total number of graduates securing circuit court clerkships:

1.  Harvard University (55)
2.  Yale University (50)
3.  Stanford University (26)
4.  University of Chicago (25)
5.  Columbia University (22)
6.  University of Michigan, Ann Arbor (20)
7.  University of Texas, Austin (19)
8.  Georgetown University (18)
8.  New York University (18)
8.  University of California, Los Angeles (18)
11. Northwestern University (16)
12. University of Virginia (14)
13. Duke University (12)
13. University of Pennsylvania (12)
15. University of California, Berkeley (10)
16. Vanderbilt University (9)
17. University of Minnesota (7)
17. University of Notre Dame (7)
19. Brooklyn Law School (6)
20. University of Iowa (5)
20. Washington University, St. Louis (5)
22. Catholic University (4)
22. George Washington University (4)
22. University of California, Hastings (4)
22. University of North Carolina, Chapel Hill (4)

Here are the top 25 law schools by the percentage of clerks for 2008-09 relative to the total 2008 graduating class (i.e., not the total number actually seeking clerkships); a short sketch of the arithmetic appears after the list:

1.  Yale University (26.5%)
2.  Stanford University (15.2%)
3.  University of Chicago (13.0%)
4.  Harvard University (9.9%)
5.  Northwestern University (6.9%)
6.  Duke University (5.9%)
7.  Columbia University (5.7%)
8.  University of California, Los Angeles (5.4%)
8.  University of Michigan, Ann Arbor (5.4%)
10. University of Pennsylvania (4.8%)
11. Vanderbilt University (4.7%)
12. University of Texas, Austin (4.3%)
13. New York University (4.0%)
14. University of Notre Dame (3.9%)
15. University of California, Berkeley (3.8%)
16. University of Virginia (3.7%)
17. Georgetown University (3.1%)
18. University of Minnesota (2.7%)
19. University of Iowa (2.4%)
20. Washington University, St. Louis (2.1%)
21. University of Richmond (1.9%)
22. University of North Carolina, Chapel Hill (1.7%)
23. Cornell University (1.6%)
23. University of Illinois (1.6%)
23. Washington & Lee University (1.6%)
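(For anyone who wants to check or extend these figures, each percentage is just a school's clerk count divided by its graduating class size.  A minimal Python sketch follows; the class sizes used here are rough back-calculations from the percentages reported above, not official figures.)

    # Illustrative only: clerk counts are from the list above; class sizes
    # are back-calculated from the reported percentages, not official data.
    clerks = {"Yale": 50, "Stanford": 26, "Chicago": 25, "Harvard": 55}
    class_size = {"Yale": 189, "Stanford": 171, "Chicago": 192, "Harvard": 555}

    # Placement rate = clerks as a percentage of the graduating class.
    rates = {s: 100 * clerks[s] / class_size[s] for s in clerks}

    # Print schools sorted by placement rate, highest first.
    for school, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{school:10s} {clerks[school]:3d} clerks  {rate:5.1f}%")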

Posted by Brian Leiter on November 29, 2007 in Rankings | Permalink | Comments (4) | TrackBack

November 28, 2007

A Third Public Law School for New York?

The State University of New York at Binghamton is now making plans for a new law school.

Posted by Brian Leiter on November 28, 2007 in Of Academic Interest | Permalink | TrackBack

November 27, 2007

Once more into the citation rankings fray...

Brian Tamanaha (St. John's) raises some issues that deserve comment.  He writes:

My objection to use of citations as a proxy for "impact" is not the claim that articles and books may have an influence without being cited in law review articles, although this is clearly the case. [I have read and learned from Isaiah Berlin, for example, but have never cited him].

This would, indeed, be a quite feeble objection, for reasons that are no doubt obvious to Brian and everyone else.  (I can think of no example of someone with a substantial impact on legal scholarship in his or her field who is not cited at least sometimes!)

Rather, the problem has to do with the bizarre citation practices that have developed in U.S. law reviews. Law reviews typically require that almost every assertion be backed up by a reference; articles often have in excess of 400 footnotes, nearly one for every sentence [invited and symposium pieces escape these constraints].

As a result, law professors are required to produce reams of citations, even for commonplace assertions, a task they sometimes push off on research assistants. Over time, stock or standard citations develop, which are cited again and again. An easy way to come up with a citation is to plumb (or loot) the footnotes of earlier articles on the subject. A lot of parasitic opportunism of this kind takes place because it is an efficient way to come up with the required footnote....

Owing to this practice (common?), the fact that a book or article is cited does not necessarily indicate that it was read by the law professor who cited it. Even if the professor actually reads it, moreover, the citation does not mean the article or book cited had any impact on the professor, particularly when the citation is produced after the passage was written. Again, many sources are cited solely because a citation is required by law reviews.

I have no quarrel with the facts described here by Brian; the real issue is their import.  Any one citation might, indeed, have the flaws noted (it might have been added by an RA, be standard boilerplate, etc.--I've noted this issue myself).  We would need real evidence, however, that large numbers of citations to a scholar did not, in fact, indicate scholarly impact:  and there is no evidence (literally none) that, for scholars whose work is cited 500 or 1,000 times in a seven-year period, this can all be "explained away" by the citation practices Brian describes accurately enough.

A more refined measure of impact or influence would count only the times when a source is actually discussed in the article in some fashion, even minimally....

I agree this would be a better measure; it would also be logistically impossible to carry out for several hundred scholars with tens of thousands of citations.

Even if this problem is corrected, there are other serious problems with Leiter's citation study as a measure of impact on legal scholarship.

Consider, for example, Leiter's ranking of Critical Theorists. Roberto Unger is ranked 20th, with 480 citations. Setting aside what one might think of the merits of critical theory, it is absurd to suggest that the "true measure" of Unger's impact in this field places him behind all the others cited. His Knowledge and Politics and Law in Modern Society influenced a generation of critical theorists (and others), although these works might not be cited very often today. This example alone demonstrates that the citation study is deeply flawed as a measure of impact.

It actually shows nothing of the kind, partly because Brian has misstated what the result means.  It means that during the recent period studied, 2000-2007, Unger's impact on legal scholarship was not as great as, say, Catharine MacKinnon's or Richard Delgado's, which strikes me as quite plausible.  Law in Modern Society never had much impact, and Knowledge and Politics faded with the demise of Critical Legal Studies more than a decade ago.  Will Unger have a longer-term impact than some of those whose work is being cited more often in recent years?  Quite possibly.  But this was, quite explicitly, not a study of "all-time" scholarly impact and importance; I don't think such a study could be meaningfully done with respect to our contemporaries.

Brian also misinterprets the meaning of the ordinal listing.  As I stated at the beginning of the study:  "The particular ordinal rank within the top ten or twenty means very little, but the lists do tend to be fairly representative of the major scholars in the field...."   In other words, I explicitly caution against reading the data as meaning that #15 has more impact than #20 (though at the extremes [e.g., #5 vs. #20] that is probably a safe conclusion to draw in many cases).

Take a look at the "Law & Philosophy" ranking. A case can be made that Duncan Kennedy (1290 citations) and Roberto Unger, both relegated (or banished?) by Leiter to the Critical Theorists list, should also have been included on this list (both placing in the top ten, with Kennedy second). Leiter will no doubt assert that they do not engage in "legal philosophy" proper, which is a plausible claim, though by no means uncontroversial (Nussbaum and Waldron, on the list, also do much work that does not fit within a narrow definition of "legal philosophy"). Even conceding this, one might ask why such a narrowly defined category was utilized that excludes such important contemporary legal theorists.

Brian, who did his graduate work at Harvard Law School, is admirably loyal to his former teachers!  But he makes a number of misleading claims in this short paragraph.  No one, anywhere, is listed in more than one "top ten" or "top twenty" specialty listing; to the extent reasonable, scholars are placed in the broad area where most of their work falls.  It is obviously uncontroversial to put Kennedy and Unger in "Critical Theories."  The only question that might be raised is whether they should also appear in the unranked list of "highly cited scholars" who don't work exclusively in a particular field, in this case "Law and Philosophy."  The objection to including them here is not, contrary to Brian, that they "do not engage in 'legal philosophy' proper" (since, as Brian notices, there are others on the list who don't work in legal philosophy proper); it is that there is nothing "philosophical" about their work, on any conception of philosophical work.  (The category is "law and philosophy," not legal philosophy, which would be quite narrow:  it is meant to capture a rich array of philosophically informed work about law, from general jurisprudence to criminal law theory and much else.)  That is obvious in the case of Kennedy, whose treatment of philosophical matters is superficial; it might be more arguable in the case of Unger, since the philosophical content of Knowledge and Politics so clearly tracks the Left Hegelian style of argument in Lukacs's History and Class Consciousness, but as Brian probably knows, Unger's scholarly impact outside the legal academy is largely with social theorists on some Sociology and Politics faculties, and not with philosophers.  (Let me add, to preempt a standard refrain from those who aren't very philosophically competent, that this has nothing to do with Anglophone versus Continental traditions in philosophy; Unger is not a meaningful contributor to the latter traditions, as I am in a reasonably good position to know.  There is a lot of sophomoric work of a purportedly philosophical nature that purports to insulate itself from criticism by claiming to be "Continental" not "analytic."  But this is sheer nonsense:  "Continental" does not mean philosophically incompetent and superficial, and it insults the brilliant figures in the post-Kantian traditions to invoke their important work as justifying the silliness and incompetence of some law professors [here I am not thinking of Unger, just to be clear].)

Another general problem with the ranking is that many people are cited for work in other fields: Raz for moral theory; Waldron for political theory; Leiter for his rankings; and so forth. This is true for many professors, not just those in legal philosophy. Leiter does not correct for this, which undermines the accuracy of the rankings (relative position and who makes the cut).

Work in moral and political theory, just like my work on the epistemology of evidence law, clearly falls within the scope of the category "Law and Philosophy."  But Brian is correct that for almost everyone on the list there are "noise" citations:  e.g., to my ranking site (which accounts for about 1-2% of my citations), or to completely unphilosophical work, or a mere acknowledgment.  Faculty were put on the lists when about 75% of their citations were to work in the specialty.  If we were able to correct for all the "noise," this might affect relative positions, but as noted already, relative positions are not very meaningful. 

Our culture suffers from a ferocious ranking fetish. Leiter's citation study feeds the beast, when we should instead be starving it.

This seems a pleasantly high-minded sentiment, but in fact I think it is both silly and pernicious.  First, there happens to be this news magazine, U.S. News & World Report, that produces famously unreliable rankings of law schools based on a host of largely non-academic criteria or unreliable data.  We can put our heads in the sand, pretending it doesn't exist and that students don't have a reasonable interest in comparative metrics of university quality, or we can do something.  I prefer doing something, like producing defensible comparative metrics that pertain to actual aspects of academic and professional excellence.  Second, rankings, when done well, provide useful information; Richard Posner puts the point aptly:

There is a tradeoff in communications between information content and what I'll call absorption cost. Ranking does very well on the latter score--a ranking conveys an evaluation with great economy to the recipient; it gives the recipient an evaluation of multiple alternatives (in this case, alternative schools) at a glance.

As Posner goes on to note:

But a ranking's information content often is small, because a ranking does not reveal the size of the value differences between the ranks....  The quality difference between number 1 and number 2, or between the top 10 and the bottom 10, may be very great, but the quality difference between number 100 and number 200 may be small, at least relative to the appearance created by such a large rank-order difference.

The information content of college rankings, as in the case of U.S. News & World Report's rankings, is particularly low because these are composite rankings. That is, different attributes are ranked, and the ranks then combined (often with weighting) to produce a final ranking. Ordinarily the weighting (even if every subordinate ranking is given the same weight) is arbitrary, which makes the final rank arbitrary. U.S. News & World Report ranks 15 separate indicators of quality to create its composite ranking of colleges.

The rankings I have produced, including this one, avoid these defects.
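(To make Posner's point concrete, here is a toy sketch of my own--not Posner's, and not U.S. News's actual method.  Two hypothetical schools, with the same underlying scores, swap final ranks merely because the weights on the component indicators change.)

    # Toy composite ranking: the final ordering depends entirely on the
    # (arbitrary) weights assigned to the component indicators.
    scores = {
        "School A": {"reputation": 90, "selectivity": 60},
        "School B": {"reputation": 70, "selectivity": 95},
    }

    def composite(weights):
        # Weighted sum of each school's component scores.
        return {school: sum(weights[k] * v for k, v in parts.items())
                for school, parts in scores.items()}

    # Weighting reputation heavily puts School A first...
    print(composite({"reputation": 0.7, "selectivity": 0.3}))  # A: 81.0, B: 77.5
    # ...while weighting selectivity heavily puts School B first.
    print(composite({"reputation": 0.3, "selectivity": 0.7}))  # A: 69.0, B: 87.5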

Let me conclude by quoting a commenter from another blog, who well expressed my view about most of the criticisms of the ranking data I compile:

The sorts of criticisms noted...can be taken in two ways--one reasonable and the other stupid. The reasonable way is as either noting things that might be improved on in the future or else as noting things that should lead smart consumers of the rankings to ask more questions or otherwise serve as caveats on using the rankings. I'm sure that Leiter has no objections to such remarks. The stupid way to mean them would be to believe that there could be a perfect ranking system, one that combines all desirable elements and has no undesirable ones. It's hard to tell here which way the remarks are meant so I'll assume it's the good way. Many critics of rankings, however, seem pretty clearly to mean the stupid thing....

Finally, the idea that rankings of schools in general is bad seems silly to me. Unless one thinks there is no significant difference between the schools (an unlikely proposition) then rankings can be useful to students. They are not perfect but of course anyone who uses them in a stupid way probably ought not go to law school....

Posted by Brian Leiter on November 27, 2007 in Rankings | Permalink | TrackBack

November 26, 2007

Rutgers's Patterson Takes Up Part-Time Post at Swansea

Legal philosopher and commercial law scholar Dennis Patterson, Distinguished Professor of Law at Rutgers University, Camden, has taken up a part-time post as Professor of Jurisprudence and International Trade at Swansea University School of Law in the U.K., where he will teach for eight weeks each year as part of Swansea's new LLM program in Global Legal Studies.

Posted by Brian Leiter on November 26, 2007 in Faculty News | Permalink | TrackBack

November 24, 2007

South Carolina's Bar Exam Scandal Gets Worse

Jim Chen (Louisville) has the latest details.

Posted by Brian Leiter on November 24, 2007 in Legal Profession | Permalink | TrackBack

November 23, 2007

Mary Dudziak isn't happy with the new citation rankings

Her comments on the "Legal History" listing are here, and she felt the need to post a link to her comments here as well after I linked to that post.  Professor Dudziak was actually one of the runners-up in "Legal History," so this isn't just sour grapes on her part.  But let's see what she has to say and whether it has any merit:

Brian Leiter's rankings are not a true measure of "scholarly impact," especially in a field like legal history. The study is confined to the Westlaw JLR database, which only includes legal publications.

The study is a "true measure" of what it purports to measure, namely, scholarly impact in legal scholarship.  It is, as I explicitly note, an imperfect measure (of influence, of quality, of importance, etc.), but that's a different matter.  Whether or not a different database would produce significantly different results is an empirical question; Professor Dudziak appears to assume an answer, but I don't know of any actual evidence supporting her assumption.  Contrary to the impression Professor Dudziak gives, the Westlaw JLR database includes a large number of interdisciplinary journals (including, e.g., American Journal of Legal History and Law and History Review), as well as many foreign legal periodicals (the majority, not surprisingly, from Anglophone countries).

What does this miss? Leading scholars will have an impact that ranges beyond their fields and beyond their nations. But the Westlaw database cannot measure impact beyond the legal academy, and the important global reach of many American legal scholars is not measured. All but a very few journals in the database are U.S.-based.

Some "leading scholars will have an impact that ranges beyond their fields and beyond their nations" and some won't, so it's silly to generalize.  It will depend on what we mean by "leading" and, more importantly, on the sub-field we are discussing.  Many specialties within legal scholarship are nation-specific, which, quite reasonably, means their influence "beyond their nations" tends to be slight.

It is true that the Westlaw JLR database is a lousy database if one is trying to measure influence "beyond the legal academy."  Should anyone have thought that's what this exercise was about, I hereby reiterate that it is not.

The impact of interdisciplinary scholars, in particular, will be under-counted. For serious interdisciplinary scholars, especially J.D./Ph.D.s, the true measure of scholarly success is to be seen as a leading figure both within the legal academy and within the Ph.D. field. To further one's scholarship within the Ph.D. field, an interdisciplinary scholar will publish in the field's leading peer-reviewed journals. If in the humanities and perhaps social sciences, they will publish books.

I am puzzled, again, by the confidence with which Professor Dudziak issues pronouncements about what interdisciplinary scholars aspire to achieve.  No doubt she speaks for some (maybe even the majority), but not for others.  Surely Professor Dudziak knows that there are some non-law disciplines in which the study of law and legal phenomena is not held to be very important or held in high esteem; that is one reason some interdisciplinary scholars might prefer to be in law schools and to write for academic lawyers.

This leads to two under-counting problems. First, the Westlaw JLR database will miss citations to the scholar's work in journals other than law reviews -- this includes journals in the Ph.D. field.

This is indisputably true, but what does it mean?  On the evidence I've seen in books and non-law journals, I'm probably the most-cited Nietzsche scholar in the English-language secondary literature in recent years, and all of that, alas, counts for naught in my own study!  How sad.  But is it significant?  Not in a study measuring impact in legal scholarship.

For American legal historians, this would include citations in the Journal of American History, American Historical Review, and other history journals. Second, legal scholars often confine their research to the same Westlaw database, and so they don't find and cite to relevant books and articles.

Fair point:  the case of legal historians may be quite different.  Perhaps if citations in these journals were counted, the top ten list would change a bit (maybe quite a bit, though I'm skeptical about that).  I hope Professor Dudziak will do the study, since this would illuminate the empirical issue she raises.

The limitations of this sort of study are not ameliorated by separating out a field like legal history. Using the Westlaw database will undercount those scholars who have a stronger impact across scholarly journals (beyond those in the legal database), and who do more publishing in books and peer reviewed history articles.  Even a more comprehensive citation study will skew in favor of scholars in larger sub-fields (e.g. American history as compared to medieval studies).

Again, these are empirical claims that may be true, or may not be.  The one I'm confident is true is that the Westlaw JLR database will be skewed, as Professor Dudziak notes, towards American history, which explains the under-counting of extremely eminent and influential legal historians like R.H. Helmholz, who work on earlier and non-U.S. periods.

It is also important to point out that Leiter does not count legal historians with appointments outside of law schools. A number of leaders in the field have such appointments.

I'm not sure why it's "important" to point out what should be obvious, given that the study was explicitly confined to law professors, i.e., those holding tenure-stream positions in law schools.  It was so confined because my "law school ranking" site (that's its name) is a source of information for prospective law students, not prospective PhD students in history.

UPDATE:  More thoughts from Professor Dudziak here.

Posted by Brian Leiter on November 23, 2007 in Rankings | Permalink | TrackBack

November 21, 2007

Citation Rankings as a Monopoly Board

This is pretty funny (as well as revealing).

Posted by Brian Leiter on November 21, 2007 in Rankings | Permalink | TrackBack

How to Pass the South Carolina Bar Exam

Be related to someone politically powerful, it seems:

The South Carolina Bar has called on the state's high court to explain why it changed the grades of 20 people -- including the children of a judge and legislator -- from "fail" to "pass" on the state bar exam taken in July....

The Bar encouraged the state Supreme Court to "further explain what happened and take steps to avoid a recurrence of these events." The Bar also said it regretted that the controversy had generated criticism of the state's legal profession....

Would-be lawyers must pass the bar exam to practice law in South Carolina, and the Supreme Court has the final say on the grades.

In an earlier statement, the high court said it decided Nov. 1 -- a week after grades were posted -- to throw out the test section on wills, trusts and estates, increasing the number who passed to 448, after its clerk learned of a "scoring error" by the section's grader. The Supreme Court has issued no further explanation.

The additional 20 included Catherine Harrison, daughter of House Judiciary Chairman Jim Harrison, R-Columbia; and Kendall Burch, daughter of Circuit Court Judge Paul Burch, raising questions on blogs and media reports about good ol' boy politics.

Both Rep. Harrison and Judge Burch have acknowledged contacting court officials but say it was not to lobby on their daughters' behalf.

Harrison said Friday he called George Hearn, chairman of the Board of Law Examiners, after his daughter told him "almost everybody she talked to had failed that one section." He said he asked only whether the failure rate for the section was abnormally high.

With Hearn out of town, Harrison called the Supreme Court's clerk to ask the same question. Harrison said he had no other contact with court officials and knew nothing about the court's reasoning.

Posted by Brian Leiter on November 21, 2007 in Legal Profession | Permalink | TrackBack

November 20, 2007

Most Cited Law Professors by Specialty--A Few More Corrections...

...are now on-line, including two scholars who were wrongly omitted from the prior top ten lists:  Paul Finkelman (Albany Law School) in Legal History, and Steven Lubet (Northwestern) in Legal Ethics.  An updated listing of the top 15 schools based on representation on these lists is being compiled.  I've also added "Other highly cited scholars" who don't work exclusively in one of the ranked areas but who had more than 1,000 citations, a change that was important, in particular, for NYU and, to a lesser extent, Stanford and Michigan.  (The top five will now be 1.  Yale, 2.  Stanford, 3.  Chicago, 4.  Harvard, 5.  NYU.)

Posted by Brian Leiter on November 20, 2007 in Rankings | Permalink | TrackBack

The *Real* Reasons Some Canadian Law Schools Are Switching to the JD

My friend Leslie Green--a longtime faculty member at Osgoode Hall Law School of York University, Toronto, who is now Professor of the Philosophy of Law at Oxford University--writes:

Ms McNish shows a surprising lack of interest in the real reasons why a couple of Canadian law deans were eager to adopt the US label for their undergraduate law degrees. She uncritically reports: "Unlike in Canada, British law graduates are not required to earn an undergraduate degree as a prerequisite... As a result, global law firms typically pay law grads with JDs substantially more than Canadians packing LLBs."

There is no evidence whatever for this claim.   A lower salary is not "typically" offered to LLB graduates from Canada's best law schools.

Nor are global law firms in London paying Oxford and Cambridge graduates less than those from Harvard or Yale--and here the first degree in law is called a BA. The claim is simply false.

Moreover, everyone in the Canadian legal academy knows the real origins of this change, which include (in no order): (a) an attempt at brand-differentiation by a couple of law schools keen to ration access by price; (b) a cheap sop to law students who are being charged much higher fees for the same, and sometimes, worse legal educations than students got in those very schools not so long ago; and (c) continentalist sucking-up.

I taught for many years in elite law schools in both Canada and the US.

Here, at Oxford, where law is an unabashed first degree, our students are no weaker than those who start it as a second undergraduate degree in the US or Canada. There is no evidence that they make worse, or worse-paid lawyers than their equivalent cohorts in the same or comparable firms. Do people really think that market-savvy global law firms do not know that, other things being equal, BA=LLB=JD? (Which is not to deny, of course, that they are also savvy about the quality of the education the students received under the various labels.)

No anonymous comments; post only once; comments may take a while to appear.

Posted by Brian Leiter on November 20, 2007 | Permalink | Comments (8) | TrackBack