January 17, 2017
We just updated our charts of law journal submission information, expedite practices, and rankings from different sources for the Spring 2017 submission season, covering the main journals of 203 law schools.
A couple of the highlights from this round of revisions are:
First, the chart again includes as much information as possible about which law reviews are not accepting submissions right now and when they say they will resume accepting them. Most journals do not give specific dates; they tend to post only imprecise statements that the journal is not currently accepting submissions but will start doing so at some point in the spring.
Second, while 72 law reviews still prefer or require submission through ExpressO, the movement toward Scholastica continues: 27 schools now require Scholastica as the exclusive avenue for submissions, 25 more prefer or strongly prefer it, and 25 accept articles submitted through either ExpressO or Scholastica.
The first chart contains information about each journal’s preferences about methods for submitting articles (e.g., e-mail, ExpressO, Scholastica, or regular mail), as well as special formatting requirements and how to request an expedited review. The second chart contains rankings information from U.S. News and World Report as well as data from Washington & Lee’s law review website.
Information for Submitting Articles to Law Reviews and Journals: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1019029
The Washington & Lee data on citations to law reviews is not very useful, since it does not correct for volume of publication. As a rule of thumb, a law review's status tracks the hosting law school's status, though the further down the hierarchy one goes, the less meaningful the distinctions become. Second-tier specialty journals at some top schools can be a better bet than the main law review at other schools--ask colleagues in your specialty to find out.
October 07, 2016
The distinguished criminal law scholar Susan Bandes (DePaul) invited me to share a story she recently shared via a listserv:
In September I posted an article on SSRN (What Executioners Can--and Cannot--Teach Us About the Death Penalty http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2835145). I chose the allotted 10 subject matter classifications. I learned yesterday that three of these classifications were rejected: criminal law e-journal, criminal procedure e-journal and criminology e-journal. I was surprised by all of these rejections, and especially criminal law, since the article is focused on the purposes of punishment, a core criminal law concern. I called SSRN this morning, and they explained to me that SSRN sometimes rejects classifications, even when they are substantively appropriate, if they view them as overlapping with other classifications. In this case, they accepted my "corrections and sentencing" classification, and apparently viewed the criminal law, criminal procedure, and criminology e-journals as overlapping with corrections and sentencing and therefore rejected all three of those broader classifications. In short, the only criminal-law related e-journal in which my article will be listed is corrections and sentencing. I asked SSRN to review this decision, which they are now doing.
To my mind, there are a few problems with this way of doing things:
First of all, I haven't checked the subscription numbers, but it's hard to believe that the corrections and sentencing journal reaches nearly the same audience as the journals with broader classifications, such as criminal law and criminal procedure. As both an author and a reader, I expect relevant articles to be included in the broader topic areas. What is the interest in refusing to include an article in an e-journal squarely within its substantive reach? I suppose the goal is to avoid inundating e-journal readers. Is this an adequate justification? (it might be; that's a genuine question).
Second, SSRN authors are permitted 10 classification choices at the outset. My article will now be distributed in only 7 of the 10 journals I chose. Until now I assumed such rejections were based on substance. To the extent they aren't, shouldn't SSRN give us the allotted 10 journals to disseminate our work?
And finally, for those of us who care about such things (and I count myself among that group), CrimProf Blog has a nice feature: it lists the top ten downloads in the Criminal Law e-journal and the Criminal Procedure e-journal. That's a very reasonable choice of e-journals, since one would think they cover the broadest substantive areas. But for those who like to read--and for those who hope sometimes to be included on--the CrimProf blog list, SSRN's practice of rejecting relevant articles from those classifications (for reasons that cannot be predicted) is all the more problematic.
Professor Bandes tells me that "on appeal," the article was included in the criminal procedure journal! Why the criminal law e-journal excluded a piece on the death penalty by a leading criminal law scholar--who knows? Interestingly, the problem is somewhat the opposite for the "Jurisprudence & Legal Philosophy" e-journal, which (though better than in the past) often contains articles that are neither jurisprudence nor legal philosophy. (Please, if your work isn't jurisprudence or legal philosophy, don't put it there!) Here are some examples of recent articles that appeared in, but do not belong in, the "Jurisprudence & Legal Philosophy" e-journal:
Law and Macroeconomics: The Law and Economics of Recessions
New Wine in Old Wineskins: Metaphor and Legal Research
The Impact of Biological Psychiatry on the Law: Evidence, Blame and Social Solidarity
No doubt these are useful and interesting articles, but those of us subscribing to that e-journal aren't expecting these pieces!
August 02, 2016
The other day I remarked on what should have been obvious, namely, that Google Scholar rankings of law reviews by impact are nonsense, providing prospective authors with no meaningful information about the relative impact of publishing an article in comparable law reviews. (Did you know that it's better to publish in the Fordham Law Review for impact than in the Duke Law Journal?) The reason is simple: the Google Scholar rankings do not adjust for volume of output--law reviews that turn out more issues and articles each year will rank higher than otherwise comparable law reviews (with comparable actual impact) simply because of their volume of output.
When Google Scholar rankings of philosophy journals first came out, a journal called Synthese came out #1. Synthese is a good journal, but it was obviously nonsense that the average impact of an article there was greater than at any of the actual top journals in philosophy. The key fact about Synthese is that it publishes five to ten times as many articles per year as the top philosophy journals. When another philosopher adjusted the Google Scholar results for volume of publication, Synthese dropped from #1 to #24.
Alas, various law professors have dug in their heels trying to explain that this nonsense Google Scholar ranking of law reviews is not, in fact, affected by volume of output. I was initially astonished, but now see that many naïve enthusiasts apparently do not understand the metrics and do not realize how sloppy Google Scholar is in terms of what it picks up.
Let's start with the formula Google Scholar uses in its journal rankings:
The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.
The h-core of a publication is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.
The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the articles in the h-core.
Finally, the h5-index, h5-core, and h5-median of a publication are, respectively, the h-index, h-core, and h-median of only those of its articles that were published in the last five complete calendar years.
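These definitions can be checked mechanically. Below is a minimal Python sketch (the function name `h_metrics` is my own, not Google Scholar's) that computes the h-index, h-core, and h-median for a list of per-article citation counts, reproducing the worked example from the definitions above:

```python
from statistics import median

def h_metrics(citations):
    """Return (h-index, h-core, h-median) for a list of per-article citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    # h is the largest i such that the i-th most-cited article has >= i citations
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    h_core = cites[:h]                      # the h most-cited articles
    h_median = median(h_core) if h_core else 0
    return h, h_core, h_median

# The example from the definitions: citations of 17, 9, 6, 3, and 2
print(h_metrics([17, 9, 6, 3, 2]))  # → (3, [17, 9, 6], 9)
```

As the definitions state, the publication with citation counts 17, 9, 6, 3, and 2 has an h-index of 3, an h-core of the three articles cited 17, 9, and 6 times, and an h-median of 9.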
Obviously, any journal that publishes more articles per year has more chances of publishing highly cited articles, which then affects both the h-core and the h-median. But that's only part of the problem, though that problem is real and obvious enough. The much more serious problem is that Google Scholar picks up a lot of "noise," i.e., citations that aren't really citations. So, for example, Google Scholar records as a citation any reference to the contents of the law review in an index of legal periodicals; any journal that publishes more issues will obviously appear more often in such indices. Google Scholar picks up self-references in a journal to the articles it has published in a given year. Google Scholar even picks up SSRN "working paper series" postings in which all other articles by someone on a faculty are also listed at the end as from that school. (Google Scholar gradually purges some of these fake cites, but it takes a long time.) Volume of publication inflates a journal's "impact" ranking because Google Scholar is not as discerning as some law professors think.
July 20, 2016
Prof. Jeff Sovern (St. John's) writes:
I have been wondering about the extent of law professors’ ethical obligations to disclose when their research has been supported by a grant from a group with a stake in the findings, and because you are the de facto moderator of the law professor village square, I wondered if you would consider posting the item below to your blog and seeking comment. I apologize for its length.
A grant that results in the publication of a law review article or similar publication should be acknowledged in the article, but what about later work in the same general area that espouses a policy position consistent with what the grantor would have wanted? That issue is germane to a 2013 article in The Nation, The Scholars Who Shill for Wall Street, which criticized academics (notably, George Mason's Todd Zywicki) for failing to disclose compensated work for the financial industry in papers, congressional testimony, speeches, op-eds, etc. The AALS has been rather vague on this subject, but here's what it said in its Statement of Good Practices by Law Professors in the Discharge of Their Ethical and Professional Responsibilities: “Sponsored or remunerated research should always be acknowledged with full disclosure of the interests of the parties. If views expressed in an article were also espoused in the course of representation of a client or in consulting, this should be acknowledged.” It's not at all clear to me that the conduct described in The Nation article violated that policy.
My own concern is more personal. My law school (St. John’s) accepted a grant from an organization with ties to a particular industry. My co-authors and I conducted a survey financed by this grant (we had to purchase a software license, compensate those who completed the survey, and so on) and published a law review article about our findings. We had complete control over the survey and what we wrote about our findings, and the grantor did not comment on them; in all respects, its behavior was exemplary. We acknowledged the funder in the article. Later, I wrote some op-eds about our work, and acknowledged the grantor again. Still later, I wrote op-eds about the broader subject, giving no more than a sentence to our research, or not mentioning it at all. Do I have an obligation in the later op-eds to mention the grantor? Would readers want to know that my law school accepted money from the grantor which supported my research? If your answer is no, do you see anything wrong with the conduct described in The Nation article? If your answer is yes, would it be different if the funder were not associated with a particular industry or point of view?
Perhaps the AALS would consider updating and elaborating on its statement. It might be a good project for professors specializing in professional responsibility. When the AALS re-evaluates a school for membership every seven years, does it inquire into compliance with this aspect of its Statement of Good Practices? Should it?
Good questions; I've opened the post for comments. (Submit your comment only once; comments are moderated and may take a while to appear.)
July 18, 2016
July 12, 2016
June 30, 2016
June 18, 2016
New York Times reporter Noam Scheiber was kind enough to respond to my open letter and ask if I could point to anything specifically factually wrong with his story. My response is below.
Thanks so much for responding. Yes, there are at least 6 factual errors in the article, and several misleading statements.
I’ll start with my interview with Acosta from earlier today, and then we can discuss empirics. Here’s what Acosta said:
"There’s no way I could pay back my student loans under a 10-year standard payment plan. With my current income, I can support myself and my family, but I need to keep my loan payments low for now. I’ve been practicing law since May, and I’m on track to make $40,000 this year. I think my income will go up over time, but I don’t know if it will be enough for me to pay back my loans without debt forgiveness after 20 years. What happens is up in the air. I’m optimistic that I can make this work and pay my student loans. I view the glass now as half full.
Valparaiso did not mislead me about employment prospects. I had done my research. I knew the job market was competitive going in. I knew what debt I was walking into. I think very few Americans don’t have debt, but for me it was an investment. I saw the debt as an investment in my career, my future, and my family.
Valparaiso gave a guy like me, a non-traditional student a shot at becoming a lawyer. Most law schools say they take a holistic approach, but they don’t really do it. I had to work hard to overcome adversity, and they gave me a shot to go to law school and to succeed. They gave me a shot at something that I wanted to do where most law schools wouldn’t.
My situation might be different from other law students who start law school right out of college. I was older and I have a family to support."
On to empirics.
The story states that:
“While demand for other white-collar jobs has rebounded since the recession, law firms and corporations are finding that they can make do with far fewer full-time lawyers than before.”
This is incorrect.
First, the number of jobs for lawyers has increased beyond pre-recession levels (2007 or earlier), both in absolute terms and relative to growth in overall employment. (error #1)
Focusing only on lawyers working full-time in law firms or for businesses (I’m not sure why you exclude those working in government), there are more full-time corporate and law firm lawyers in 2014 according to the U.S. Census Bureau’s Current Population Survey (CPS)—870,000—than in 2007—786,000. There have been more full-time corporate and law firm lawyers in every year from 2009 on than there were in 2007 and earlier.
You were looking at NALP or ABA data, which is measured at a single point in time—9 or 10 months after graduation—and is therefore much less representative of outcomes for law graduates—even recent law graduates—than Census data. Indeed, many law graduates who will eventually gain admission to a state bar will not have done so as of the date when NALP collects data. NALP and the ABA also use different definitions from the Census, so you cannot readily use their data to compare law graduates to others.
The trend of growth in lawyer jobs holds true for other cuts of the data (all lawyers; all full time lawyers) using other data sources—U.S. Census or Department of Labor (BLS OES) data.[i]
This is in spite of large declines in law school enrollments, which would be expected to reduce the number of working lawyers.
Second, employment has not rebounded to pre-recession (2007 or earlier) levels outside of law. (error #2)
June 17, 2016
An Open Letter to New York Times Journalist Noam Scheiber: Journalists Should Consult Peer-Reviewed Research, Not Bloggers (Michael Simkovic)
Dear Mr. Scheiber:
Have you seen this line of peer-reviewed research, which estimates the boost to earnings from a law degree, including for the substantial proportion of law graduates who do not practice law?
- Michael Simkovic & Frank McIntyre, The Economic Value of a Law Degree, 43 J. Legal Stud. 249 (2014)
- Michael Simkovic & Frank McIntyre, The Economic Value of a Law Degree (2013)
- The Economic Value of a Law Degree PowerPoint Presentation
High quality nationally representative data from the U.S. Census Bureau, analyzed using standard and widely accepted econometric techniques, shows that even toward the bottom of the distribution, the value of a law degree (relative to a terminal bachelor’s degree) is much greater than the costs.
All of the data suggests that this has not changed since the financial crisis. The economy is worse and young people are facing more challenges in the job market, but law graduates continue to have the same relative advantage over bachelor's degree holders as they have had in the past.
These findings have been covered in the New York Times before:
- Michael Simkovic, Overall Stagnation in Legal Jobs Hides Underlying Shifts, The New York Times Dealbook, April 1, 2016
- Steven Davidoff Solomon, Law Schools and Industry Show Signs of Life, Despite Forecasts of Doom, The New York Times Dealbook, March 31, 2015
- Steven Davidoff Solomon, Debating, Yet Again, the Worth of Law School, The New York Times Dealbook, June 18, 2013
Data from the U.S. Census and the Department of Labor Bureau of Labor Statistics shows that the number of lawyers has grown since the financial crisis, both in absolute terms and relative to overall employment.
Data from the Department of Education shows that law school graduates, even from very low-ranked law schools, have exceptionally low student loan default rates.
I have a number of concerns about factual inaccuracies in your recent story, “An Expensive Law Degree, and No Place to Use It” and your reliance on “experts” such as Paul Campos who lack any technical expertise or even basic financial or statistical literacy.
Your readers would receive more reliable information if you concentrated less on sources like Paul Campos and internet “scamblogs” and focused instead on peer-reviewed research by professional economists using high quality data and well-established methods of statistical analysis.
June 18, 2016: Noam Scheiber replies and I respond by re-interviewing Acosta and pointing out specific factual errors in Scheiber's story.
June 20, 2016: I explain different data sources that are useful for counting lawyers.
June 21, 2016: Steven Davidoff Solomon weighs in at N.Y. Times Dealbook, citing my research and supporting my points.
June 21, 2016, 10:05pm EST: Noam Scheiber sent a lengthy response by email and posted his response to his Facebook page. Scheiber informs me that his response was reviewed by his editors at the New York Times.
June 24: I respond to Scheiber and explain Why The New York Times Should Correct The Remaining Factual Errors in Its Law School Coverage. In response, the New York Times posted a correction to the most minor of the 5 remaining errors.
June 09, 2016
Journalism researcher: To correct misinformation, essential to monitor and respond immediately (Michael Simkovic)
Scholars Strategy Network's No Jargon: 13: The Misinformation Age
Professor Brian Southwell explains why people tend to believe false information and discusses strategies for correcting the public perception of misinformation. Southwell is a professor of mass communication at the University of North Carolina at Chapel Hill.