Can It Really Be True That Half of Academic Papers Are Never Read?
By Arthur G. Jago
A recent Chronicle opinion essay arguing that the tenure process can be quite unfair included this line: “At least one study found that the average academic article is read by about 10 people, and half of these articles are never read at all.” In a commentary that I was otherwise in complete agreement with, I found that particular statement quite unbelievable. First, the magnitude of the assertions was simply astonishing. Second, I was perplexed by how someone could design a study to empirically determine that some published articles were never read. Such a study was beyond my imagination; the pseudo-logical fallacy of “proving the negative” came to mind.
I contacted the author, who provided her source: an article in Smithsonian, the magazine. This article actually qualified (somewhat) the implausible claim by asserting that 50 percent of papers are never read by anyone “other than their authors, referees and journal editors.” I guess it is some consolation to know that humans do indeed write, review, and select most manuscripts for publication, although we do know that computer-written gibberish occasionally makes it into print, into citation indices, and into researchers’ h-indices.
A link in the Smithsonian article points to Indiana University as its source for the statistics, but this proved inaccurate. The Smithsonian author redirected me to the actual source, a 2007 article by Lokman Meho in Physics World, the magazine of the London-based Institute of Physics. When I asked Meho for his source of the cited statistics, he told me that “this statement was added to my paper by the editor of the journal at the time and I unfortunately did not ask from where he got this information before the paper was published.” The Meho article has been formally cited over 300 times.
In turn, I contacted the editor of Physics World from 2007. He told me that “it was indeed” something that he had inserted during editing, from material provided to him in a communications course taken at Imperial College London in 2001. I contacted the instructor of that course, now retired, who told me he could not provide me with a specific reference to what is now “ancient history” but that “everything in those notes had a source, but whether I cross-checked them all before banging the notes out, I doubt.”
The Physics World editor suggested that the Imperial College course material may have been based on a 1991 article in Science. However, I discovered that that article was not about unread research but about uncited research, and the two are not interchangeable: not being cited is neither a necessary nor a sufficient condition for not being read, so a count of uncited papers says nothing about how many papers go unread. As a striking illustration of the difference, Nature recently identified a 2010 online paper that has never been cited yet has been viewed 1,500 times and downloaded 500 times. (The paradox, of course, is that this uncited paper is no longer uncited, by virtue of being cited for its uncitedness.)
Frustrated, I ended my search for the bibliographic equivalent of “patient zero.” The original source of the fantastical claim that the average academic article has “about 10 readers” may never be known for sure.
In the bigger picture, it is certainly true that much of published research has limited readership. As a young scholar — and well before electronic journal access — I was quite amazed to learn that one of the five most prestigious academic journals in my field (business management) had a worldwide circulation, including all libraries, of a mere 800 copies. Indeed, our audiences are often quite small, and some large percentage of articles undoubtedly have very little impact.
However, the fact that an assertion is intuitively appealing or reinforces existing beliefs does not justify misstatements of fact or the distortion or embellishment of what can be documented. In their communications with me, all of the participants in this tale — good people, to be sure — recognized an absence of sound justification in their actions.
Even when a primary source is accurate, a reference to it may still be quite problematic when an author relies upon a flawed secondary source but cites, instead, the primary source. Using statistical modeling of recurring identical misprints in bibliographic entries, two UCLA engineers estimate that “only about 20 percent of citers read the original” article that they claim as a source in their own reference lists. Stated otherwise, 80 percent of citers are not readers, and whatever flaws their secondary sources contain get propagated into their own articles.
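The reasoning behind that estimate can be sketched with a toy simulation. The snippet below is my own illustration, not the engineers’ actual model, and every parameter in it (the assumed reader share, the typo rate, the number of citers) is invented for the sake of the example. The idea is simply that a citer who copies a reference from another bibliography, rather than reading the original, inherits its misprints, so identical misprints that recur many times are a fingerprint of copying.

```python
# Toy sketch (my own illustration, not the UCLA engineers' actual model).
# Each new citer either reads the original (probability READ_PROB) and types the
# reference afresh, or copies it verbatim from a randomly chosen earlier citer,
# inheriting whatever misprint that entry carries. Fresh typing occasionally
# introduces a brand-new, distinct misprint.
import random

READ_PROB = 0.2    # assumed share of citers who actually read the original
TYPO_PROB = 0.05   # assumed chance that fresh typing introduces a misprint
N_CITERS = 5000    # assumed number of citing papers

next_typo_id = 0
references = []    # None = correct reference; an integer identifies a misprint

for _ in range(N_CITERS):
    if not references or random.random() < READ_PROB:
        # Reader: types the reference from the original article.
        if random.random() < TYPO_PROB:
            references.append(next_typo_id)  # a new, distinct misprint
            next_typo_id += 1
        else:
            references.append(None)
    else:
        # Copier: lifts the reference, typo and all, from an earlier citer.
        references.append(random.choice(references))

misprinted = [r for r in references if r is not None]
print(f"misprinted citations: {len(misprinted)}, distinct misprints: {len(set(misprinted))}")
```

When READ_PROB is low, a handful of misprints each recur dozens of times; when it is high, misprints stay mostly one-off. Working backward from how often identical misprints repeat is, in spirit, how one can estimate the share of citers who never saw the original.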
This object lesson in the perils of relying on secondary sources reminds us all that our readers place a trust in us each time we put words to paper. We have a duty, on behalf of all authors, to do our best to fulfill that trust when we produce those words. A single mistake — a bibliographic patient zero — may be quite small and entirely unintentional. However, it can infect the literature like a self-duplicating virus and become amplified with time.
In a 2009 essay, the Pulitzer Prize winner John McPhee noted that “any error is everlasting” and quoted Sara Lippincott, a New Yorker fact-checker, as saying that once an error gets into print it “will live on and on in libraries carefully catalogued, scrupulously indexed … silicon-chipped, deceiving researcher after researcher down through the ages, all of whom will make new errors on the strength of the original errors, and so on and on into an exponential explosion of errata.” Lesson learned.
Arthur G. Jago is a professor emeritus of management at the University of Missouri at Columbia. He has published articles in, among other journals, the very prestigious but not widely read Organizational Behavior and Human Decision Processes.
The Journal That Couldn’t Stop Citing Itself
In a four-paragraph editorial published in 2014, the Journal of Criminal Justice made 47 citations, all to pieces that had appeared in the same publication.
By Tom Bartlett, September 23, 2015
The Journal of Criminal Justice has been on a roll. Once considered a somewhat middling publication — not in the same league as top journals like Criminology and Justice Quarterly — it is now ranked No. 1 in the field according to its impact factor, which measures the average number of citations a journal’s recent articles receive and is meant to indicate which titles are generating the most buzz.
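For readers unfamiliar with the metric, the standard two-year impact factor is a simple ratio, and the toy arithmetic below (with invented round numbers, not the journal’s actual figures) shows why citations aimed at the two preceding years move the score directly.

```python
# Toy impact-factor arithmetic with invented numbers, not the journal's real figures.
# The two-year impact factor for year Y is the number of citations received in year Y
# to items the journal published in years Y-1 and Y-2, divided by the number of
# citable items it published in those two years.
citable_items_2012_2013 = 120   # hypothetical count of citable articles
outside_citations_2014 = 180    # hypothetical citations from other journals
self_citations_2014 = 90        # hypothetical citations from the journal itself

with_self = (outside_citations_2014 + self_citations_2014) / citable_items_2012_2013
without_self = outside_citations_2014 / citable_items_2012_2013

print(f"impact factor with self-citations:    {with_self:.2f}")    # 2.25
print(f"impact factor without self-citations: {without_self:.2f}")  # 1.50
# Only citations that land inside the two-year window count, which is why an
# editorial citing recent articles from the same journal lifts the journal's own score.
```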
Rocketing to No. 1 is even more impressive when you find out that in 2012 the Journal of Criminal Justice was way back in 22nd place. That’s quite a leap!
Predictably, that sharp uptick made some researchers in a field devoted to misdeeds a tad, shall we say, suspicious. Among them was Thomas Baker, an assistant professor of criminal justice at the University of Central Florida. So Mr. Baker did what good researchers in all fields do: He took a hard look at the data. Then, after emailing it to a few friends, he decided to publish what he found in the field’s widely read newsletter, The Criminologist.
What he found was this: Much of the rise in the journal’s impact factor was due to citations in articles published in the Journal of Criminal Justice itself.
That impact factors can be gamed is news to no one who has paid any attention to academic publishing in the last, oh, couple of decades. For instance, in 2012 Thomson Reuters, which publishes the rankings, dropped 51 journals from its list for trying to artificially inflate their statuses. In one instance, several medical journals were banned after apparently colluding in a kind of you-cite-my-articles, I’ll-cite-yours arrangement.
Looking back over the last two years of articles published in the Journal of Criminal Justice, Mr. Baker noticed that many of the citations had appeared in editorials and articles written or co-written by Matt DeLisi, the editor in chief, who is a professor of criminal justice at Iowa State University. Of the 328 citations made to the journal in 2012 and 2013, 157 appeared in the pages of the journal itself, and 90 of those 157 appeared in papers with Mr. DeLisi’s name at the top.
In the most eyebrow-raising instance, one four-paragraph editorial, published in 2014, didn’t take up even a single page yet managed to have 47 citations, all to the Journal of Criminal Justice. Notably, all but three of the citations were from 2012 and 2013, the years used to calculate the most recent impact factor. (All three of the 2011 citations were to articles on which Mr. DeLisi was an author.)
Without those self-citations, the Journal of Criminal Justice would still have improved significantly under Mr. DeLisi’s stewardship, moving from 22nd place to 10th, according to Mr. Baker’s calculations. But it wouldn’t be leading the pack.
‘Questionable Editorial Decisions’
So the journal is clearly, brazenly gaming the system, right?
No, says Mr. DeLisi, who took over as editor in 2010. In an interview, he explains that he was citing research that was relevant to the articles, and nothing else. “If someone writes an editorial, of course there are going to be citations,” Mr. DeLisi says. His critics “seem to think that the only reason one does editorials is for citations,” he says.
As for citing so many of the papers in his own journal, Mr. DeLisi says this was a way of further recognizing the authors, not pumping up the stats.
Mr. DeLisi notes that Eric Baumer, the editor of the Criminologist newsletter, is also a co-editor of the journal Criminology, which was knocked down a peg by the rise of the Journal of Criminal Justice. “There are all kinds of interesting conflicts of interest,” Mr. DeLisi says. He is now working on a written response to Mr. Baker’s piece.
What this will mean for the journal’s reputation is unclear. If Thomson Reuters decided the journal was, in fact, gaming its impact factor, it could be banned from the rankings. (A call to Thomson Reuters went unreturned.)
One well-known scholar, Bob Bursik, a professor of criminology and criminal justice at the University of Missouri at St. Louis, thinks the damage has already been done. “Intentional or not, in my mind, there is no question that the meteoric rise in the ranking of JCJ was due mostly to highly questionable editorial decisions,” he writes in an email. Those decisions, he says, render the impact-factor ranking “misleading and meaningless.”
Skirmishes like this are not uncommon in the curious little world of academic research. What’s a bit unusual here is that Mr. Baker actually made the effort to crunch the numbers and figure out what was going on. At first he wasn’t so sure he wanted to disseminate what he had uncovered. He is, as he says, “very untenured,” and getting a reputation as a rabble-rouser might not be the best move. Some senior professors advised him against going forward. “I guess I’m strong-headed that way,” he says.
There’s a philosophical angle here, too. As editor, Mr. DeLisi has shifted the focus of the Journal of Criminal Justice from a more traditional approach to one that emphasizes biosocial factors — that is, looking at the environmental and biological influences that affect criminal behavior. There’s a divide in the field between those for and against a biosocial view.
But Mr. Baker is all for biosocial theory. He praises the journal’s change in focus and its tendency to publish pieces that would have trouble finding a home elsewhere. Several years ago, under Mr. DeLisi’s editorship, Mr. Baker even published an article in the Journal of Criminal Justice. “That’s the article of mine with the most citations,” he says. “Of course, some of those cites are from DeLisi.”
Tom Bartlett is a senior writer who covers science and other things. Follow him on Twitter @tebartl.