Move along, nothing to see here

Most academic articles tell a success story. Few take the risk of documenting their failures. This needs to stop.

Imagine a giant hospital with no signs on any of the doors. You want to visit your aunt, who is somewhere inside. Since you don’t know where you are going, you walk down the hallways opening one door after another. You get lost and end up searching the same hallway twice. You are probably not the only visitor in the hospital, and some of the others may already have found the people they are looking for.

Of course, if someone who had already found the way left notes saying which patient was behind which door, that would be a big help and would save you a lot of time. It would save even more time if someone who had already seen that there were no patients behind the doors to the bathrooms, operating rooms, storage rooms, and nurses’ lounges had left a note to that effect for those who came after.

In science, leaving these kinds of notes for others is rare. Most published studies say something along the lines of “My patient is here” and describe positive research results: studies designed to demonstrate a statistically significant effect, such as “Medication A results in significantly higher blood pressure levels than a placebo”. Such studies confirm the researcher’s initial hypothesis. According to the journal Forschung & Lehre (Research & Teaching), this type of work makes up 90 percent of all published studies.

Studies that say something like “No patients here” are much rarer. Such a study might report, for instance, that “Medication B has no influence on blood pressure levels”. When only positive outcomes reach print, chance findings are overrepresented among published effects, and this is one of the reasons so many effects published in psychological journals cannot be confirmed by subsequent studies. It also means researchers keep opening the same door over and over again, only to discover it was a waste of time and money. In the worst-case scenario, they may even waste lab animals or human subjects.

This issue also skews meta-studies: analyses that pool the data sets published by other researchers and evaluate them jointly. If only the studies that appear to prove an effect are published, while those that fail to show it stay in the drawer, then the evidence base of the meta-study is fundamentally distorted.
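To see how strong this distortion can be, here is a minimal simulation sketch in Python (all numbers are hypothetical and chosen purely for illustration): many small trials of the same drug are run, only the statistically significant results are “published”, and naively pooling the published effects then badly overstates the true one.

```python
# Hypothetical simulation of publication bias distorting a meta-analysis.
# Assumed scenario: 1,000 small two-arm trials of a drug with a tiny true
# effect; only trials reaching p < 0.05 are "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n_trials = 1000      # independent studies of the same question
n_per_arm = 30       # participants per group in each study
true_effect = 0.2    # small real effect, in standard deviations

all_effects, published_effects = [], []
for _ in range(n_trials):
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    effect = drug.mean() - placebo.mean()
    _, p = stats.ttest_ind(drug, placebo)
    all_effects.append(effect)
    if p < 0.05:  # the file-drawer filter: only "significant" results get out
        published_effects.append(effect)

print(f"true effect:                 {true_effect:+.2f}")
print(f"mean effect, all studies:    {np.mean(all_effects):+.2f}")
print(f"mean effect, published only: {np.mean(published_effects):+.2f}")
print(f"share of studies published:  {len(published_effects) / n_trials:.0%}")
```

In a typical run, only a small minority of the trials clears the significance filter, and the average of the published effects comes out roughly two to three times larger than the true effect: exactly the skew that a meta-analysis built on the published record inherits.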

Why is the system so out of balance?

Are scientists egotists, uninterested in leaving notes for others? Are they embarrassed they accidentally walked into the operating room? Or are the journals to blame for never giving space to negative results?

Academic journals tend to seek out articles that will generate attention and citations, and negative results rarely do either. The community has been aware of this problem for many years, and multiple attempts have been made to establish journals dedicated to negative results, such as “New Negatives in Plant Science”, distributed by Elsevier, which went online in 2015 but was shut down again in 2017, apparently due to a lack of interest from researchers.

The few articles that did appear showed that such a journal can serve a real need. French researchers, for instance, reported that despite multiple experiments they were unable to verify the existence of certain molecules in plant cell chloroplasts: extremely important information for other researchers in the field, yet a result that would be almost impossible to publish in a classic journal. The generally low level of interest among researchers shows, however, that simply providing a publication venue will not solve the problem.

Why and when do researchers publish their data?

Producing an academic article is a significant undertaking. Even experienced academics may need weeks to prepare all of the figures, text, and references, not to mention the long peer-review process. In return, scientists hope to add a prestigious entry to their publication list. Since most publishing scientists work on temporary contracts, and since their publication list is the backbone of their next application, they simply cannot afford the luxury of an altruistic publication like those in “New Negatives”. Instead, they do nothing with the data, which remains inaccessible, locked in a file on a local computer.

In the age of digital networking, this almost seems absurd. After all, it would be easy to make one’s data accessible to other researchers. Open repositories such as FigShare, Zenodo, and GitHub are well suited to this kind of non-curated exchange, letting researchers share images, raw data, scripts, and metadata.

This is a good and useful step, but one key component is usually missing: the scientific context, the hypothesis, the interpretation; in short, everything that makes an academic article useful. Moreover, the absence of peer review makes the information less trustworthy and unlikely to be cited in other works. Small as the effort is, such sharing still takes altruism, as well as a scientific community willing to use the platform in its research. Unsurprisingly, most entries on Zenodo are references to existing publications, not unpublished data sets. Publication following peer review therefore remains the gold standard for publishing data, as it should.

So what conclusions can we draw?

Yes, scientists are egotists when it comes to publishing their data. They have to be. Yes, failure is embarrassing, and it can hurt a career. Nevertheless, it is the professional journals themselves that may offer a way out of this dilemma.

Academic articles are the best way to communicate data; they are the international currency of an academic career. Repositories are a valuable addition to articles, but they cannot replace them. At the same time, experience has shown that successful journals do not want to associate their names with failure. To resolve this, every professional journal should set up a “New Negatives” article category, analogous to a “Short Communication” for interesting interim results or a “Review” summarising the relevant literature.

The peer-review process for such articles would need slightly different standards from those applied to positive research papers. Reviewers would have to demand a thorough justification of the study’s hypothesis and design, and the work would have to be plausible enough to rule out failure due to a lack of experimental skill, poor handling, or insufficient statistics. Unlike a repository, this would effectively keep bad research out of print while letting failed research in; the two are not the same thing.

In this kind of model, a “major failure” involving a large amount of work and time would be published in a more frequently cited journal than the “minor failure” of a cheaper or quicker experiment. Having the journal’s name in one’s publication list would not be seen as a blemish, and all of the researcher’s hard work would still pay off. It would also encourage scientists to invest more effort in designing their studies from the start, so that they remain publishable even if they fail.

Of course, any attempt to implement such a model would meet with resistance from publishers, and that resistance could be as fierce as the resistance once shown towards the open access movement. It would not, however, threaten the publishers’ underlying business model, and could even complement it.

If it were implemented, a visitor to the giant hospital of science might even have a chance of finding their aunt before she is discharged.