Roughly 70 percent of medical studies do not adequately report the number of animals used in experiments, according to a PLOS Biology report. The finding raises serious questions about the validity of those experiments.
The team behind the PLOS Biology paper followed the “flow of animals,” such as rats and mice, used to test novel drugs against cancer and strokes in medical studies.
“In two thirds of the cases, irrespective of whether it was stroke research or cancer research, we couldn’t even say what happened to the animals, because it was not reported,” co-author Ulrich Dirnagl told DW.
Animal attrition – the loss of animals during medical experimentation – becomes a problem when researchers drop certain animals from a group to achieve a desired result. Unlike in human trials, researchers were not "blinded" to an animal's group assignment in many of the 100 studies the team randomly selected from the fields of cancer and stroke research.
“You may find reasons post-hoc, after you have done the study, to exclude one or two animals. Then the [bias] is there,” Dirnagl said.
It’s a neurotoxin
In one paper Dirnagl was asked to review a few years ago, for example, ten animals were mentioned at the beginning of the study. At the end, however, the paper listed only seven animals as having successfully undergone treatment. Dirnagl was confused as to what had happened to the three missing animals.
“So I wrote back to the editor and said, ‘I can’t look at this paper before I know what happened to the animals.’ ”
A year passed. The paper, he says, bore a title roughly like "Substance X is a neuroprotectant in strokes" – meaning it helps – and still hadn't been published.
Then he received a re-routed email from the original authors (kept "blind" to protect their and his identities): "It said, 'Thank you, we took your comment very seriously. We looked at our study, and we now found that this substance we were studying is not a neuroprotectant. In fact it's a neurotoxin.' "
The authors concluded that the three animals they had excluded from the study had in fact been killed by the drug in question.
The paper was given a new title, to the effect “Substance X is a neurotoxin,” Dirnagl says.
“This is maybe an extreme example, but I think it’s not uncommon. We found a substantial number of animals that didn’t match.”
There are good reasons to exclude study subjects from clinical and preclinical studies: The subjects might die, their physiological parameters might ultimately exclude them from a study, or – in the case of humans – they might relocate.
And though there are criteria for whether an animal should or should not be excluded from a medical study, journals publishing the results of those studies often rely on self-reporting.
"Those guidelines exist, but the problem is that they are not enforced," Dirnagl says. "Usually authors click a button when they submit a paper and say 'I'm in compliance with these guidelines.' "
They were not in compliance, however, in 70 percent of the cases, as the team of German and US researchers at the Charité Universitätsmedizin Berlin found.
The team also ran theoretical simulations checking the effects of excluding one, two, three and even more animals from some of the studies.
“It was even astonishing to us how dramatic the effects can be. We saw that even excluding one or two animals in such a study can produce totally different results.”
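The effect Dirnagl describes can be illustrated with a minimal sketch (not the authors' actual simulation code): two groups of ten animals with no true treatment effect, where dropping the two worst responders from the treated group makes the drug look protective. All numbers and group sizes here are hypothetical.

```python
import random
import statistics

random.seed(42)

# Hypothetical outcome: infarct volume, identical true distribution in
# both groups (i.e., the drug has no real effect).
control = [random.gauss(100, 15) for _ in range(10)]
treated = [random.gauss(100, 15) for _ in range(10)]

def apparent_effect(ctrl, trt):
    """Mean difference (control - treated); positive looks 'protective'."""
    return statistics.mean(ctrl) - statistics.mean(trt)

before = apparent_effect(control, treated)

# Post-hoc exclusion: silently drop the two treated animals with the
# worst (largest) infarct volumes, as might happen without blinding.
treated_excluded = sorted(treated)[:-2]
after = apparent_effect(control, treated_excluded)

print(f"apparent effect, all 10 treated animals: {before:+.1f}")
print(f"apparent effect, after excluding 2:      {after:+.1f}")
```

Removing the worst outcomes from one group can only push its mean in the favorable direction, which is why even one or two exclusions can flip a study's conclusion.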
Scientists are drawn toward “spectacular results,” Dirnagl says, since they lead to professorships and funding.
The objectivity of medical research has been under scrutiny since the journal "Science" published a paper in August 2015 showing that a majority of psychological studies could not be reproduced.