By Adam Rogers
Every week science journalists get a bunch of emails from various Respectable Scientific Journals telling us, in advance, what articles those journals are going to publish. When I started in this game, these tables of contents came by fax; today, in the future, they’re downloadable PDFs. The quo for all this quid is that we agree not to publish anything until a set time and day.
It's called an embargo, and it is in some senses the anticlimax of a long story: the story of a scientific discovery. Sure, journalists might focus on the eureka moment or the fascinating details of the methods some scientist used. Massive gravity interferometers! Drilling into Earth's crust! Robot spaceship studies a comet! But often, implicit in these kinds of stories is a less pulse-pounding headline: Article Published.
That doesn't mean it's not news, or not important, or wrong. No! Quite the opposite. These are the atoms from which we humans assemble molecules of understanding. A peer-reviewed journal article is the way scientists say we found out a thing, and perhaps more critically here's our data and our methods so you can see why we think it's true. "Peer review" means that experts have read that article, commented on it, and assented to its publication.
But that said, the rigamarole around scientific publishing (from submitting to a journal, to having relevant scientists review and approve the work, to publishing on a set day) is a social construction. This is the plodding, collaborative-but-combative dynamic that turns the labor of science into, well, Science. And Cell, Nature, the New England Journal of Medicine, and thousands of other journals.
I bring all this up because earlier this week I got advance word about an article describing, ironically, how this entire system is crumbling at the edges. It was embargoed for Wednesday morning, which means I missed it. It made the whooshing sound that Douglas Adams onomatopoetically ascribed to deadlines.
If you believe this new paper, though, that's totally OK. In 1991 physicists began sharing drafts of their articles before publication and peer review; as the internet expanded, so too did this server for "preprints," called the ArXiv. (That's not an X. It's the Greek letter chi, pronounced "kai." Get it?) Today the ArXiv hosts more than 1.3 million papers in physics, math, astronomy, and other hard sciences. In 2013 the life sciences got preprinty too, when Cold Spring Harbor Lab started hosting the BioRxiv. (Say "bio-archive"; not my fault.) Since then, prepublication sharing of articles has taken off like a jet racing for altitude over a storm.
But not for everyone. Anecdotally, researchers have understood for years that scientists in some fields were more likely to share their results, prepublication (at conferences, socially, and via preprint servers) than others. No one really knew why, or who.
The paper I got an email about on Sunday (but am only allowed to tell you about as of today) describes the results of a survey of more than 7,000 working research scientists from nine different major fields. According to that survey, three core features of a given scientific discipline determine whether its adherents are likely to post all their data on a slide at a conference, or post it on a preprint server: norms within the field (that is, the traditions passed on by colleagues and teachers), the overall level of competitiveness in the field, and the potential for commercialization of new results.
The stakes of sharing are complicated. On the plus side, you get potential collaborators, and people who can extend your work. On the minus side, they might scoop you, solving the problem you've brought up before you can, and thereby grabbing all the kudos, grants, Nobel prizes, and so on. "One can't clearly say whether prepublication disclosure is good or bad," says Jerry Thursby, an economist at Georgia Tech and one of the authors of the study. "If you reduce the size of the prize, people don't work as hard, but you want people disclosing early so others can build upon that."
Most likely to share early were mathematicians and social scientists. Basic researchers and people working in medical schools were the most tight-fisted.
Now, you're thinking that the important question here is why. And that's a good question. Nobody knows. "If you talk to mathematicians, you get the feeling that it's because math is so formulaic you can define the boundaries of what you're doing. In biological sciences, that's much more difficult," Thursby says. In math, in other words, you get an answer. That's tough to scoop. Hotter fields, like, say, biotech, where the stakes are patents and venture capital, reward a more parsimonious approach. The same goes for fields with limited resources. "Why are the norms different? Why is competition different?" Thursby says. "Is it a function of the scientific process in a field, or an outcome of the way science is done in that field? Or is it something else?"
But the better question is what difference those differences make. "What would happen if mathematicians were more competitive?" Thursby asks. "Would you get more mathematics or less?" Like, could you optimize the norms and incentives of a field to make it more productive? To learn more about the world?
The trick to thinking like that is realizing that the whole system (peer review, journal publication, embargoes, and even articles in the general press like this one) is, in fact, a little arbitrary. Just as the peer review system of journal publication is itself an ever-evolving construction, so too are the unspoken rules that govern which scientists share what. You know how some people say that science is socially constructed, and then some other smart ass invites them to step out a fifth-floor window to see just how socially constructed gravity really is? Good point, funny person! Except when the gravity-ologists decide to write up the equations governing how fast those postmodernists fall and the size of the splats when they hit the ground, their decisions about where and how to publish them are as socially constructed as tax policy.
For every mythology that says peer review started in the mid-1600s with the advent of the Royal Societies of science in Europe, there's a counter-history from someone like Ivan Oransky, co-founder of the invaluable science watchdog Retraction Watch, who points out that the modern idea of peer review is just a few decades old at best. "The 1970s is when Nature started rigorously peer-reviewing everything," Oransky says. Like, James Watson and Francis Crick's paper describing the structure of DNA? Wasn't peer reviewed. (Neither was Rosalind Franklin's critical contribution to the discovery, published in the same issue of Nature.) "I'm pretty sure it's still right," Oransky says.
Because that's the hollow core at the center of this story. Peer review is a gateway to legitimacy, but only because scientists (and their funding organizations) make it so. "Long before the digital publishing revolution, some people would say, 'Oh, it's been peer reviewed and must be correct,' and others would say, 'It's been peer reviewed and therefore it's gone through a filter.' And that filter is sometimes worse than not having a filter," says Jonathan Eisen, a microbiologist at UC Davis and open-science advocate. (He's on the board of BioRxiv.) "There's reasonable evidence that trying to get something published in the snooty, high-impact-factor journals may correlate with making something more wrong than if you hadn't."
So it'll take new models and new mythologies to get people in any field to shift to a new way to share knowledge. The research still has to be right; having other scientists look at it is still a good idea. Oransky points at F1000Research, a preprint server that also lets peers review what's published, and then shifts the status of articles after review so that they'll show up on respected academic search engines like PubMed and Google Scholar, and so granting agencies can see researchers doing the work they've promised. "If people got credit for this, they'd all do it," Eisen says. "It's not that complicated. Most people want to share info sooner rather than later." The story of scientific publishing is a long one, but it isn't over.