October 26, 2016
Conventional wisdom that I've endorsed before is that job seekers get most of their callback interviews within the two weeks after the FRC. "Most" here means that after those two weeks, a candidate who got four or five callbacks in the first two weeks might pick up one more, and might even pick it up rather late in the process. This is based on anecdotal evidence involving Chicago candidates over the last eight years, so I'd be curious to hear from others who have noticed other patterns. Submit your comment only once; it may take a while to appear. Thanks.
October 20, 2016
According to Professor Lawsky, there were 86 law schools at the FRC this past weekend in Washington, DC, compared to 89 in 2015. This doesn't account for the number of slots schools are looking to fill, but my guess is that, like last year, we will see at least 80 new tenure-track academic faculty hired, perhaps a few more.
The 94 in 2013 is misleading, since that was a year in which many schools went to the FRC but did no hiring, due to budgetary stresses. The real contrast, of course, is with the last reasonably good year on the market, 2012-13, when 142 schools participated in the FRC.
September 13, 2016
MOVING TO FRONT (ORIGINALLY POSTED 2011)--STILL RELEVANT (I did not update the link to the thread)
PrawfsBlawg hosts many informative threads related to the job market, to which we often link, but this one still seems to me counterproductive, and I continue to urge our candidates to ignore it. The problem is not the misinformation (though there is always some, whether malicious or inadvertent), but that the "information" posted is always woefully incomplete, and so tends to increase the anxiety or blood pressure of other candidates for no good reason. Imagine, you are a job seeker working in IP, and you see that some anonymous soul posts on this thread that the University of My Dreams (UMD), which is hiring in IP, has called to schedule an interview, and yet you have heard nothing! Panic sets in. Of course, anonymous soul usually doesn't volunteer that s/he has a significant other on the UMD faculty, or that s/he is a diversity candidate in a year when UMD is desperate to increase the diversity of its faculty, or that s/he went to school with a key member of the hiring committee, and so on. Most schools schedule interviews over a period of several weeks, and the vast majority of interviews won't be scheduled until later in September. Bear that in mind should the temptation to look at this incomplete information prove irresistible, and also bear in mind that behind each anonymous posting there is often more of a story than simply, "I got an interview with UMD."
September 05, 2016
As noted previously, this was the smallest FAR--382 applicants--in decades. Two other striking data points: more than 100 of those 382 applicants have a PhD, and only three are former Supreme Court clerks (two of those three are our candidates!). How might those data points be connected? Here's a hypothesis: the now-astronomical big firm signing bonuses for SCOTUS clerks--$300,000 in some cases--are keeping them in practice in greater numbers; by contrast, JD/PhDs are training for academia, and so are making up a growing share of the candidates.
September 01, 2016
This is based on the first FAR, and includes SJDs and LLMs, as well as JDs:
Harvard University (35)
Georgetown University (31)
Yale University (26)
New York University (25)
University of Michigan (18)
Columbia University (16)
Northwestern University (14)
Stanford University (12)
University of California, Berkeley (12)
University of Pennsylvania (9)
George Washington University (8)
Cornell University (6)
University of Texas, Austin (5)
University of Virginia (5)
Duke University (4)
University of Wisconsin, Madison (4)
Emory University (3)
University of California, Los Angeles (3)
University of Chicago (3)
University of Minnesota, Twin Cities (3)
As I noted, this is an unusually small contingent for Chicago this year (we usually have 6-10 candidates), but we do work closely with the vast majority of our alums to time their entry onto the teaching market so that they can put their best foot forward. Based on past success rates, I fear some schools may have too many graduates on the market.
August 18, 2016
...which is down fifty or more from last year (I can't find the exact number; if someone has it, please shoot me an e-mail). That's good news for the job seekers: early indications are that, like last year, we will see at least 80 new tenure-track academic hires (up from roughly 65 in each of 2014-15 and 2013-14).
UPDATE: Thanks to Roger Ford (New Hampshire) for flagging this useful chart courtesy of Sarah Lawsky (Northwestern), which shows the drop-off from 2015-16 is not as great as I remembered (I was probably confusing it with 2014-15).
ANOTHER: 58% of the candidates took their law degree from one of the sixteen law schools that produce the most law teachers (i.e., Yale, Harvard, Chicago, Stanford, Columbia, Michigan, NYU, Berkeley, Virginia, Penn, Northwestern, Cornell, Georgetown, Duke, Texas, UCLA); almost 20% earned a degree from the first four (Yale, Harvard, Chicago, Stanford).
August 11, 2016
I am struck by how many schools are interested in some aspect of criminal law/procedure, and also in evidence. Health law is also in demand this year. I'm encouraged to see a number of schools that had been out of the market for a while now back looking for tenure-track faculty. More next week.
August 05, 2016
The latest data from LSAC here. For 2015-16, LSATs taken were up a bit more than 4% from the prior year, while applications were up about 1%. So what does this latest data on June test-takers mean? Probably that this year will be like last in terms of volume of applications. Stability in the applicant pool is, of course, enough for schools to plan their budgets into the future and do faculty hiring.
August 02, 2016
The other day I remarked on what should have been obvious, namely, that Google Scholar rankings of law reviews by impact are nonsense, providing prospective authors with no meaningful information about the relative impact of publishing an article in comparable law reviews. (Did you know that it's better to publish in the Fordham Law Review for impact than in the Duke Law Journal?) The reason is simple: the Google Scholar rankings do not adjust for volume of output, so law reviews that turn out more issues and articles each year will rank higher than otherwise comparable law reviews (with comparable actual per-article impact) simply because they publish more.
When Google Scholar rankings of philosophy journals first came out, a journal called Synthese came in at #1. Synthese is a good journal, but it was obviously nonsense that the average impact of an article there was greater than in any of the actual top journals in philosophy. The key fact about Synthese is that it publishes five to ten times as many articles per year as the top philosophy journals. When another philosopher adjusted the Google Scholar results for volume of publication, Synthese dropped from #1 to #24.
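I don't know exactly what adjustment that philosopher applied, but the basic move is easy to illustrate. Here is a minimal sketch in Python, with figures invented purely for illustration: rank two journals by raw citation totals and the high-volume journal comes out on top; normalize by the number of articles published and the order flips.

journals = {
    # name: (total citations in the window, articles published in the window)
    # All numbers below are made up for illustration.
    "HighVolume": (900, 300),
    "Selective": (400, 40),
}

by_raw_citations = sorted(journals, key=lambda j: journals[j][0], reverse=True)
by_per_article = sorted(journals, key=lambda j: journals[j][0] / journals[j][1], reverse=True)

print(by_raw_citations)  # ['HighVolume', 'Selective']
print(by_per_article)    # ['Selective', 'HighVolume'] -- 10 vs. 3 cites per article

Nothing about the high-volume journal's articles is better; it simply has more of them accumulating citations.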
Alas, various law professors have dug in their heels, trying to explain that this nonsense Google Scholar ranking of law reviews is not, in fact, affected by volume of output. I was initially astonished, but now see that many naïve enthusiasts apparently do not understand the metrics and do not realize how sloppy Google Scholar is in terms of what it picks up.
Let's start with the metrics Google Scholar uses in its journal rankings:
The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.
The h-core of a publication is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.
The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the articles in the h-core.
Finally, the h5-index, h5-core, and h5-median of a publication are, respectively, the h-index, h-core, and h-median of only those of its articles that were published in the last five complete calendar years.
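To make those definitions concrete, here is a short Python sketch (my own illustration, not anything Google publishes) that computes the h-index, h-core, and h-median from a list of per-article citation counts; run on the five-article example above, it reproduces the numbers in the definitions.

import statistics

def h_metrics(citations):
    # Returns (h-index, h-core, h-median) for a list of citation counts.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i        # the i-th most-cited article has at least i cites
        else:
            break
    h_core = ranked[:h]                   # the h most-cited articles
    h_median = statistics.median(h_core)  # median citations within the h-core
    return h, h_core, h_median

print(h_metrics([17, 9, 6, 3, 2]))  # (3, [17, 9, 6], 9)

The h5 variants are the same computation restricted to articles published in the last five complete calendar years.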
Obviously, any journal that publishes more articles per year has more chances of publishing highly cited articles, which affects both the h-core and the h-median. But that is only part of the problem, real and obvious though it is. The much more serious problem is that Google Scholar picks up a lot of "noise," i.e., citations that aren't really citations. So, for example, Google Scholar records as a citation any reference to the contents of the law review in an index of legal periodicals, and any journal that publishes more issues will obviously appear more often in such indices. Google Scholar picks up self-references in a journal to the articles it has published in a given year. Google Scholar even picks up SSRN "working paper series" postings in which all other articles by someone on a faculty are listed at the end as from that school. (Google Scholar gradually purges some of these fake cites, but it takes a long time.) Volume of publication inflates a journal's "impact" ranking because Google Scholar is not as discerning as some law professors think.
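To isolate the pure volume effect (setting the noise problem aside), here is a toy simulation, entirely my own construction with made-up parameters: every article in both journals draws its citation count from the same distribution, so per-article impact is identical by design, but one journal publishes five times as many articles and reliably ends up with the higher h-index.

import random

def h_index(citations):
    # Count the ranks i (1-based, citations sorted descending) at which
    # the i-th most-cited article still has at least i citations.
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

random.seed(0)

def simulate_journal(n_articles):
    # Every article, in either journal, draws its citation count from the
    # same skewed (Pareto) distribution -- identical per-article impact.
    return [int(random.paretovariate(1.5)) for _ in range(n_articles)]

selective = simulate_journal(50)     # 50 articles over five years
high_volume = simulate_journal(250)  # 250 articles over the same window

print(h_index(selective), h_index(high_volume))
# Expect the high-volume journal's h-index to come out noticeably higher,
# even though no article in it is any better than the other journal's.

More articles means more draws, so more of them clear any given citation threshold; that alone pushes a journal up an h-index-based ranking.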