Among the critiques of academic journals is that they tend to publish only articles finding positive results. That is to say, if a researcher sets out to prove A but in fact finds no evidence of A, that paper is less likely to be published than a similar paper in which the author found that A is true. Here is one example. But how do we know these studies of publication bias aren’t themselves getting published because of the bias toward publishing positive results? Two authors set out to test the theory. But as Andrew Gelman quipped:
They conclude, “We found no evidence of publication bias in reports on publication bias.” But of course that’s the sort of finding regarding publication bias of findings on publication bias that you’d expect would get published. What we really need is a careful meta-analysis to estimate the level of publication bias in studies of publication bias of studies of publication bias.
Indeed, getting articles published in major journals is of enormous importance to academics, as this article by Steven Landsburg demonstrates. In it, researchers survey economists and find that they would sacrifice several months of life for a publication in a major economics journal.
They may not have to go to such extremes, however, since several papers randomly generated by computers have recently received positive feedback from reviewers. These included one mathematics paper, which…
…was created using Mathgen, an online random maths paper generator. Mathgen has a set of rules that define how papers are arranged in sections and what kinds of sentence make up a section and how those sentences are made up from different categories of technical and non-technical words. It creates beautifully formatted papers with the conventional structure, complete with equations and citations but, alas, totally devoid of meaning.
You can read the article here, which also includes cases of “fake” papers being published in computer science and social criticism journals.
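The grammar-driven approach the excerpt describes can be sketched in a few lines. The rules below are a toy illustration of the general technique (recursively expanding symbols from a random context-free grammar), not Mathgen's actual rule set:

```python
import random

# Toy grammar in the spirit of Mathgen: nonterminals map to lists of
# possible productions. These rules are illustrative assumptions only.
GRAMMAR = {
    "SENTENCE": [
        ["It is", "ADJ", "that every", "NOUN", "is", "ADJ", "."],
        ["By a", "ADJ", "argument, the", "NOUN", "admits a", "NOUN", "."],
    ],
    "ADJ": [["trivially compact"], ["almost surely degenerate"], ["quasi-affine"]],
    "NOUN": [["functor"], ["subgroup"], ["manifold"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal: a literal word or phrase
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(expand(part, rng))
    return words

def random_sentence(seed=None):
    """Produce one grammatically structured but meaningless sentence."""
    rng = random.Random(seed)
    return " ".join(expand("SENTENCE", rng))
```

Each call to `random_sentence` yields text that is syntactically well-formed by construction yet, as the excerpt puts it, totally devoid of meaning; a full generator like Mathgen adds many more rules plus templates for sections, equations, and citations.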
Several organizations are seeking to reduce the barriers to publication, including the Public Library of Science and PeerJ, which offers authors lifetime publishing rights for a one-time flat fee.
Replicability is also a problem:
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.