How Scientists Fail To Impact Controversies in Epidemiology
“…the scientific community is not engaged in a collaborative effort to
arrive at a data-informed consensus on the matter...” This strong
indictment of how the scientific community proceeds, or fails to
proceed, in helping society resolve scientific controversies such as
the one surrounding dietary salt is the subject of a recent
paper in the International Journal of Epidemiology. The title of the
paper by co-authors Ludovic Trinquart, David Johns
and Sandro Galea from the Mailman School of Public Health at
Columbia and the Boston University School of Public Health is “Why do
we think we know what we know? A metaknowledge analysis of the salt
controversy”.
What Is The Scientific Community Engaged In?
For decades, a growing scientific controversy within the public health
community has surrounded the contribution of a high-salt diet to
cardiovascular disease. While organizations such as the WHO and the
CDC recommend reducing salt intake for most people, those within the
scientific community continue to argue both sides of the debate. The
authors systematically reviewed 269 reports published from 1978 to
2014, including primary studies, systematic reviews, guidelines,
comments, letters and reviews, and found a remarkably strong
polarization of scientific reports pertaining to salt intake and
cardiovascular health outcomes or mortality. As they state, “we found
that the published literature bears little imprint of an ongoing
controversy, but rather contains two almost distinct and disparate
lines of scholarship, one supporting and one contradicting the
hypothesis that salt reduction in populations will improve clinical
outcomes.”
Exploration of Bias
To examine citation bias (the citation or non-citation of studies based
on the result), the authors first classified reports as supportive
(54%), contradictory (33%), or inconclusive (13%) of the hypothesis
that salt reduction leads to health benefits. They next mapped a
network of the citations within these reports, applying an analytical
modeling technique that allowed them to quantify the probability of
a citation link between studies. This analysis revealed significant
citation bias: authors were 50% more likely to cite studies that came
to a conclusion similar to their own.
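To make the flavor of this analysis concrete, here is a minimal Python sketch, built on an invented toy network rather than the authors' data or their actual statistical model, that labels each report with its conclusion and compares same-conclusion with cross-conclusion citation rates:

```python
# Toy sketch (hypothetical data, not the authors' model): label reports
# by their conclusion and compare same- vs. cross-conclusion citations.
import networkx as nx

# Hypothetical reports, each classified as "supportive", "contradictory"
# or "inconclusive" toward the salt-reduction hypothesis.
stances = {"A": "supportive", "B": "supportive", "C": "contradictory",
           "D": "contradictory", "E": "inconclusive"}

G = nx.DiGraph()
G.add_nodes_from(stances)
# Hypothetical citation links (citing report -> cited report).
G.add_edges_from([("A", "B"), ("A", "E"), ("B", "A"),
                  ("C", "D"), ("D", "C"), ("C", "B")])

same = sum(1 for u, v in G.edges if stances[u] == stances[v])
cross = G.number_of_edges() - same
print(f"same-conclusion citations: {same}, cross-conclusion: {cross}")
# A same/cross ratio well above what chance would predict is the kind
# of signal the paper formalizes with a network citation model.
```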
Remapping the citation network by report authorship revealed
clustering into networks of scientists, with only 25% and 28% of
authors responsible for 75% of contradictory and supportive reports,
respectively. This finding suggests that a disproportionately
small number of prolific authors dominate the field on both sides of
the controversy, perpetuating division. Furthermore, they found few
collaborations between those holding opposing viewpoints on the
controversy.
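As a rough illustration of this kind of concentration statistic, assuming a made-up distribution of reports per author, one can compute the smallest share of authors needed to cover 75% of the reports on one side:

```python
# Toy sketch (invented counts): smallest fraction of authors accounting
# for a given share of reports on one side of the controversy.
reports_per_author = [12, 9, 7, 5, 3, 2, 2, 1, 1, 1]  # hypothetical

def author_share(coverage: float, counts: list[int]) -> float:
    counts = sorted(counts, reverse=True)  # most prolific first
    target = coverage * sum(counts)
    cumulative = 0
    for n_authors, c in enumerate(counts, start=1):
        cumulative += c
        if cumulative >= target:
            return n_authors / len(counts)
    return 1.0

print(f"{author_share(0.75, reports_per_author):.0%} of authors "
      f"account for 75% of reports")
```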
Bias In Systematic Reviews
Finally, the authors examined the consistency of citations across
systematic review articles, finding surprisingly high variation in
which primary studies were included. Across the 10 systematic reviews,
which together cited only 48 different primary studies, the estimated
probability that a study cited by one review was also cited by another
was just 27%. In addition, the probability that a primary study was
cited in a particular review was even lower (22%) if the study's
conclusion contradicted, rather than supported, that of the review.
This finding is particularly striking: as the authors argue, it
reflects more than differences in selection criteria, pointing to a
fundamental disagreement in the field about what counts as good
evidence.
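For intuition about that 27% figure, here is a toy version of the overlap calculation, using three hypothetical reviews with invented study sets; the paper's estimate comes from a formal analysis, not this simple average:

```python
# Toy sketch (hypothetical data): average probability that a primary
# study cited by one systematic review is also cited by another.
from itertools import permutations

reviews = {  # review -> set of primary studies it cites (invented)
    "R1": {"s1", "s2", "s3", "s4"},
    "R2": {"s1", "s5", "s6"},
    "R3": {"s2", "s3", "s7"},
}

# For each ordered pair of reviews (a, b): share of a's cited studies
# that also appear in b.
probs = [len(a & b) / len(a) for a, b in permutations(reviews.values(), 2)]
print(f"mean citation-overlap probability: {sum(probs) / len(probs):.0%}")
```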
Good Evidence Contested
They point to concerns about the methodological quality of the
existing reports of randomized trials relating sodium intake to
cardiovascular outcomes as one potential source of this disagreement.
They argue, however, that authors of systematic reviews must remain
objective regardless, and their analysis shows that the inclusion or
exclusion of specific primary studies directly influences the
conclusions of these reviews, reinforcing uncertainty and perpetuating
the divide within the field. These findings lead the authors to the
harsh conclusion quoted at the outset of this article and to suggest
that truly collaborative argumentation may be needed to address
particularly difficult scientific questions.
While previous studies have addressed citation bias, Trinquart et al.
argue that their analytical approach is novel in that it allows
empirical quantification of these factors, and that it could be useful
for analyzing both other unresolved scientific controversies and areas
where there is a high degree of consensus. ■