Where is the boundary?

Where does good science stop and academic misconduct start? The answer is surprisingly complex. Searching for clues in the grey zone.

Consider this case: A molecular biologist is editing scanned gene images on a computer. He increases the contrast to make his evidence stand out more clearly. Was he wrong to do so?

A physicist publishes a study. She withholds some of her data, planning to use it for future publications. Is she doing anything wrong?

Or consider a chemist researching the effectiveness of a certain substance. He decides not to publish his findings when he discovers the substance is not effective—not the result the company financing him hoped he would find. Is this misconduct?

Let us imagine white represents ideal academic practice. To stay with the metaphor, let’s say the opposite, black, represents plagiarism, fraud, and manipulation. And between the two? The natural sciences, in particular, are grappling with a whole range of practices that border on misconduct. They occupy the grey area between human error and sleight of hand. Data may not be falsified outright, but it may be cleaned up or left out. Results may be exaggerated and conclusions overstated. Welcome to the academic grey zone.

The grey zone and these kinds of borderline cases are a problem for academia. They are difficult to detect, and ultimately harm the reputation of research itself. But where does misconduct actually begin? And how can we differentiate intentional fraud from accidental errors, or deception from ignorance?

The grey zone

Sarah Schießl-Weidenweber understands the grey zone from experience. She tries to illuminate it as she assesses articles during the peer review process. Schießl-Weidenweber researches plant genetics at the University of Gießen. When reviewing studies, she has noticed how scientists sometimes distort their research results to match their working hypothesis.

The “genome-wide association studies” frequently used in genetics are a good example. Researchers use this method to identify genes that could influence a certain characteristic. The results can be made to support very different interpretations depending on which details are omitted, says Schießl-Weidenweber. “There are a large number of genes in a genome, and the likelihood that a certain gene will show up randomly in a certain area is not zero. So, researchers want to know what other genes are in the area.” Without this information, there is no way to tell whether an author is drawing a meaningful conclusion. Schießl-Weidenweber has often returned manuscripts for revision or further detail because of this problem.
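
To see why reviewers ask what else is in the region, a rough back-of-the-envelope calculation helps. The sketch below is our illustration, not Schießl-Weidenweber’s, and every number in it is an assumption picked only to make the point:

```python
import math

# Back-of-the-envelope: how likely is it that SOME gene sits near a
# GWAS association peak purely by chance? All numbers below are
# illustrative assumptions, not values from the article.
n_genes = 30_000        # assumed number of genes in the genome
genome_bp = 2.5e9       # assumed genome size in base pairs
window_bp = 100_000     # assumed window inspected around the peak

# Expected number of genes inside the window if genes were spread
# uniformly across the genome.
expected = n_genes * (window_bp / genome_bp)

# Probability of at least one gene in the window, modelling gene
# positions as a Poisson process.
p_at_least_one = 1 - math.exp(-expected)

print(f"expected genes near the peak: {expected:.2f}")        # ~1.20
print(f"P(>=1 gene nearby by chance): {p_at_least_one:.2f}")  # ~0.70
```

Even with these made-up numbers, the odds that some gene lies near a given peak by pure chance come out around 70 per cent, which is why a reported hit means little without knowing the peak’s neighbours.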

Studies that cannot be reproduced also fall into this grey zone. Their authors may have withheld data, or the testing conditions may have been so specific that no one else can recreate them. Statistical errors are also common, says Schießl-Weidenweber, “because almost no one can get it right.”

The quality of the data published is another frequent target of criticism. Samples may be too small, or data sets may be polished to look better than they really are. The latter kind of doctoring is known within the academy as “beautification”.

The damage

It is difficult or impossible to know how widespread such misconduct truly is. The available statistics are little more than an approximation. Last year, Bavarian Broadcasting initiated a nationwide survey of ombudsmen at German universities and research institutions. The survey showed that 1,124 suspected cases of fraud were reported between 2012 and 2016. Of these, 246 were investigated by in-house commissions.

Notably, the number of scientific papers withdrawn due to academic misconduct has skyrocketed since the early 2000s. According to statistics from the journal Nature, the figure grew tenfold between 2001 and 2011, even though the number of publications grew by just 44 per cent in the same period. Relative to output, then, the retraction rate rose roughly sevenfold (a factor of 10 against growth of 1.44, or about 6.9). Experts are divided over whether research practices are actually growing more corrupt, or whether more errors and instances of cheating are simply being discovered.

What is clear is that misconduct harms the academy. For Stefan Hornbostel from the German Centre for Higher Education Research and Science Studies, the academy itself is an object of research. He knows what it means when a fraudulent or manipulated study gets published. Empirical knowledge itself is not in danger, he says. “Most cases are discovered and sorted out somehow.” However, the damage is always much greater if the alleged results have had a strong influence on the general public’s behaviour. The vaccination debate is a good example. Problematic, in some cases inaccurate, studies have made many people fear vaccine injuries more than the danger of not getting vaccinated. The consequences of this misunderstanding are still unfolding.
Hornbostel believes there is also ethical damage to worry about: “If academics see these kinds of strategies being effective, then a kind of moral erosion takes place within science itself.”

The causes

The Deutsche Forschungsgemeinschaft (DFG, the German Research Foundation) reprimanded a Cologne physician in 2014 and excluded him from applying for research funding for four years. According to the charges, the young scientist intentionally manipulated data from cardiac studies. The researcher himself said that he was only on a temporary contract, worried about his career, and felt under pressure and unable to meet the expectations set by his research group leader.

The case illustrates one of Stefan Hornbostel’s hypotheses: Increasing pressure within the academy fosters increasing misconduct. “Especially in Germany, the pressure is intensified because researchers rely on external funding. This pressure has grown exponentially over the last 25 years,” he says. This creates stress. Projects must be completed within a limited time frame, contracts are temporary, and expectations are high. Young scientists, in particular, need to produce results. If these results are not quite ready for publication, they might be tempted to “cook the books” a little. Their entire career might depend on it.

Vanity and a desire to succeed also play a role when researchers cross boundaries, says Hornbostel. There are also honest errors, pesky yet avoidable mistakes caused by poor methodology, not intentional fraud.

The border

Most research institutions and universities in Germany define misconduct as part of their statutes or guidelines on good academic practice. Regulations from the University of Regensburg name “falsifying” and “inventing” data, as well as “removing primary data,” for instance.

However, black is easier to define than grey. The academy is not governed by laws or constraints, but by academic freedom. The question, then, is how to define the border where white crosses over to grey. How can we differentiate mistakes from manipulation?

“It really is very difficult,” says Joachim Heberle. “How can you test whether someone is being honest?” Heberle is a professor of physics at the Free University of Berlin. He is also on the board of the “German Research Ombudsmen” established by the Senate of the DFG. As part of his position, Heberle reviews possible violations of good academic practice alongside his colleagues—around 100 cases a year. He says that almost all of them come from this grey zone.

Faced with such cases, it is Heberle’s job to mediate. He says that the academy is susceptible to misconduct because science is founded on honesty. There will always be a grey zone. There must also, however, always be a presumption of innocence. Mistakes do happen. Recurring patterns, however, such as articles containing passages lifted from other works, point to intentional misconduct.

The ombudsman is in favour of looking at each case individually and considering every shade of grey. The fictitious examples we cited at the start of this post are a good case in point:

  • Post-editing gene scans on the computer? He says this is not misconduct per se, but a common practice. Accentuating what you want to say is allowed, for instance by slightly increasing the contrast so that findings come through in the printed image. Of course, the researcher must describe what was edited, how, and with which software. Cutting, distorting, or combining different images, however, is clear misconduct (see the sketch after this list).
  • What about a “divide and conquer” strategy where data is purposefully withheld for future publications? Definitely objectionable, says Heberle. The more dividing you do, the worse it is. You should simply say everything you know. Researchers are only justified in creating two publications when the data set is particularly large.
  • And not publishing failed studies in applied research? Heberle thinks the practice is not a good one, but that this isn’t necessarily misconduct. Ultimately, researchers have obligations to their clients, and labour law and patent law factors also play a role. This can nevertheless be harmful to the academy as a whole, since “knowledge is lost.”
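
To make the first of Heberle’s distinctions concrete, here is a minimal sketch using Python and the Pillow imaging library. The sketch is ours, not Heberle’s, and the file names and contrast factor are hypothetical:

```python
from PIL import Image, ImageEnhance

# Hypothetical gene-scan image; the file name is invented.
scan = Image.open("gel_scan.tif")

# Defensible (if documented): a modest contrast increase applied
# uniformly to the WHOLE image, reported in the methods section along
# with the software and the factor used.
enhanced = ImageEnhance.Contrast(scan).enhance(1.2)
enhanced.save("gel_scan_contrast_1.2.tif")

# Clear misconduct, by contrast: cropping away inconvenient bands,
# distorting regions, or splicing several images together. Anything
# selective or undocumented crosses the line.
```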

The reaction

The academic system has already reacted in many different ways. Hornbostel, who studies the academy itself, names the “pre-registration” of clinical studies as one example. In this practice, the parameters of a study, such as sample sizes or hypotheses, are defined and documented before it ever begins. With these parameters set in stone, there is less room for manipulation.
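
As a minimal sketch of the idea (ours, not Hornbostel’s; the field names and values are invented, and real registries have their own formats), pre-registration amounts to freezing the study parameters and fingerprinting them so that silent changes would be detectable later:

```python
import hashlib
import json

# Invented example of a pre-registration record, frozen before any
# data are collected.
preregistration = {
    "hypothesis": "Treatment X lowers marker Y versus placebo",
    "primary_outcome": "marker Y after 12 weeks",
    "sample_size": 200,
    "analysis_plan": "two-sided t-test, alpha = 0.05",
}

# Canonical serialisation, then a hash published with the registration.
# Any later change to the hypothesis or sample size changes the hash.
record = json.dumps(preregistration, sort_keys=True)
print(hashlib.sha256(record.encode()).hexdigest())
```

Real registries such as ClinicalTrials.gov store the full record rather than just a fingerprint, but the effect is the same: the parameters are on the record before the data can tempt anyone.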

The open-science movement is also working to illuminate the grey zone. Researchers can report their concerns about publications on platforms like PubPeer, and the page Retraction Watch lists withdrawn publications. However, open-access platforms have also created new problems, such as “predatory journals” that publish almost any manuscript submitted for a fee, without appropriate quality control.

In addition, universities are offering more events designed to teach good academic practices. Hornbostel believes the processes used to train and educate young academics need improvement. He says that the limits of what is allowed must be clearly outlined.

The problem of misconduct, however, runs even deeper, putting the reward and reputation mechanisms the academy runs on under the microscope. Hornbostel says there should be more of an effort to scrutinise the academy itself. Too few replication studies are funded, because there is little incentive for scientists to repeat others’ results. “You are not going to win many awards with that kind of study,” he says. Instead, practices like withholding data are rewarded, as researchers gain more prestige from adding publications, which in turn improves their chances of a long-term academic career. Hornbostel says that “organisations must make one thing clear: Yes, we do demand performance, but our focus is on quality and not quantity.”

Joachim Heberle and his colleagues on the ombudsman board want to reduce misconduct through education. They publicise the DFG guidelines “Sicherung guter wissenschaftlicher Praxis” (Safeguarding good academic practice), which contain basic rules and are updated continuously, as a kind of charter for academia. Their work is clearly needed: a 2014 survey showed that fewer than half of all departments in Germany were familiar with the DFG guidelines.

Ultimately, however, grey zones will continue to exist. They are actually essential in a way, as the ombudsmen and academic experts agree. Too much control, too much transparency, and too many regulations could endanger good research, says Stefan Hornbostel: “Science does not automatically produce new information. Instead, it may strike out on the wrong track or contain false assumptions or even erroneous ‘knowledge’.” This is why bold approaches can often result in research breakthroughs. “If we expect science to be innovative and creative, and to deliver ground-breaking insights, then it will always involve a grey zone where common methodological standards may be broken.” This innovative potential should never be stifled.

As Hornbostel notes, mistakes are all a part of science.