Epi Wit & Wisdom Articles
Taubes Interview Stimulates
Additional Readers to Respond (5 of 6)
Wide Variety of Views Expressed
[Editor’s Note: We continue to receive letters about our interview with Gary Taubes, and some of our readers have expressed the hope that we will continue publishing them to maintain the ongoing discussion. Here are four additional letters, including reactions from Gary Taubes.]
Provocative Letter From Canadian
Colleague
Dear Editor,
I personally believe that epi is
sliding into a crisis and is in denial. Steve Milloy, for example, is
having a field day undermining confidence in the discipline at a time
when media attention has never been stronger but the funding available
to do careful, meticulous studies is decreasing. One huge part of the
problem is that epidemiologists have marginalized themselves by
becoming phenomenologists, uninterested in mechanism. This is partly a
function of who has become an epidemiologist in recent years.
In the past, physicians with
clinical training and typically some biomedical science in their
background would learn the basics of epidemiology for a specific
application. They knew that their interest was in a particular disease
or for public health applications or that their career would lie in
testing hypotheses about diseases they were familiar with using
epidemiological methods. Their command of the methodology may or may
not have been strong; a handful, of course, were superb but many were
relatively mediocre taken strictly as methodologists. Generally
speaking, however, they knew exactly what they were studying and had a
mental picture of the pathophysiology to guide them.
The entrance of large numbers of
PhD epidemiologists made for a vastly more sophisticated discipline
and great advances in methodology. Interaction with biostatisticians
was also easier because of the common commitment to research training
and the similarity of graduate study. (Medical school is a
qualitatively different experience.) The new generation of epidemiologists, in which PhDs greatly outnumber MDs, is far more capable
of tackling difficult problems with the sophisticated tools available.
The problem is that they do not necessarily understand what they are
studying.
Recently, I participated in a
major international meeting on air quality and health effects. In the
middle of one concurrent session on respiratory disease and PM10, the
moderator, who is highly respected, paused and declared that he did not really know what “chronic obstructive pulmonary disease” was or how it differed from asthma, except that the ICD codes were
different. The fact is that he is dealing at a level of abstraction
that does not force him to understand what the data mean and how the
underlying problem is structured biologically. Sooner or later, this
lacuna of ignorance will result in a serious mistake in interpretation
and analysis.
Epidemiology is a method, as reflected by Miettinen’s concept of it as “occurrence research.” It is not content. One either learns the content or partners with a collaborator who does. Practicing epidemiology in isolation is dangerous and will sooner or later lead to error. Cumulatively, these errors lead to the discrediting of the field as “junk science,” in Milloy’s term, or “bad science,” in Gary Taubes’.
The growing crisis of
credibility in epidemiology will not be resolved by resorting to
increasingly sophisticated methods of analysis. An example of this, in
my opinion, is the disappointing experience with meta-analysis, which
has led to misleading results even in clinical trials where the method
should be most applicable. It may be addressed by going back to
fundamentals and thinking through epidemiology as, at bottom, a descriptive science quite capable of testing hypotheses but not truly experimental, much like astronomy and particle physics.
I am pleased to see that the Epi
Monitor is sensitive to these issues and has sent a powerful wake up
call to a scientific community increasingly at risk.
Dr. Tee L. Guidotti
Taubes’ Response: Dr. Guidotti
raises an interesting point, and one that would, pardon the
expression, bear further research. While it’s probably a reliable
statement that a researcher who knows little of his actual subject is
teetering on the abyss, I’d still hesitate to promote the idea that
MDs, with the rare exception, are a good bet to do good science.
••••••••••••••••••••••
Taubes is “Refreshing”
Dear Editor,
I found Mr. Taubes’ comments
refreshing. They gave voice to similar apprehensions that I have been
harboring for quite some time. I came to environmental epidemiology
from another discipline, clinical veterinary medicine. It didn’t take
me long to figure out that epidemiology is by definition a relatively
crude science, since the investigators cannot control the experiment.
In fact, one could argue that epidemiology is not a science at all,
but is a series of observational research tools that are used by a
variety of disciplines (medicine, demography, public health,
psychology, etc).
Since epidemiology is a crude research tool, it seems reasonable to expect it to be appropriate for measuring gross effects. In fact, epidemiology has been very good at
identifying gross effects, such as the 10X increase in lung cancer
among smokers, the enormous risk of angiosarcoma and leukemia posed by
uncontrolled vinyl chloride and benzene exposures, and the beneficial
effects of diets rich in fruits and vegetables. The problem is,
epidemiologists started to think that if they could refine the method
a little and increase the numbers to make it more precise, they could
use this crude tool to detect small effects due to small exposures (or
in some cases large effects due to small exposures). Basically, they
could use a hammer to turn a screw, if the hammer were fancy enough.
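Bukowski’s hammer-and-screw point is essentially a statistical one: the number of subjects needed to detect a risk ratio grows explosively as the effect shrinks. A rough two-proportion sample-size sketch makes this concrete (the baseline risk, the relative risks, and the helper function below are illustrative assumptions, not figures from the letter):

```python
# Rough illustration (illustrative numbers, not from the letter): how many
# subjects per group a simple two-proportion comparison needs to detect a
# given relative risk at 5% significance and 80% power.
from math import ceil
from statistics import NormalDist

def n_per_group(p0, rr, alpha=0.05, power=0.80):
    """Approximate subjects per group to distinguish risk p0*rr from p0."""
    p1 = p0 * rr
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)            # power term
    variance = p0 * (1 - p0) + p1 * (1 - p1)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p0) ** 2)

baseline = 0.005  # hypothetical baseline disease risk
for rr in (10, 2, 1.2):
    print(f"RR = {rr:>4}: ~{n_per_group(baseline, rr):,} subjects per group")
# Roughly a few hundred subjects suffice for RR = 10, but tens of thousands
# are needed for RR = 1.2 -- the "fancy hammer" problem in numbers.
```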
I’ve noticed that some have
become so wedded to this idea that they believe things that violate
common sense. For example, Bruce Ames and others have been preaching for years that low-level man-made chemical exposures are unlikely to be important causes of cancer. But Ames et al. have been routinely hooted down by laboratory researchers and epidemiologists alike.
Furthermore, the arguments used to refute their statements violate
common sense.
One reason given to explain away
Ames’ treason is evolution: man has evolved defense mechanisms against natural carcinogens that don’t exist for man-made ones. However, many vegetables (tomatoes, broccoli, etc.) have been consumed by people for only a relatively few generations, and there is no selection pressure (cancer doesn’t usually strike until your reproductive life is over). Also, didn’t we just say that a large intake of fruits and vegetables is protective against cancer? How could that be if pesticides in food (man-made or natural) were major risk factors?
Taubes implies that epidemiology
is a very inefficient discipline, and I would have to agree. Often,
millions to hundreds of millions of dollars have been spent on
numerous epidemiologic studies of a particular chemical exposure. The
results are almost invariably inconclusive. Yet the call then goes out that “more research is needed.” This suggests that we should spend increasingly limited societal resources to try to pin down what must
be (at best) weak effects (epidemiology is good at detecting large
effects) that result in rare diseases among a small subset of
maximally exposed individuals. A perfect example of this with which I
am familiar is the purported association between 2,4-D and cancer.
Recent expert committee reviews
of the 2,4-D/cancer association, such as the 1989 Graham Committee
(Environmental Health Perspectives, 1991, 96:213-222) and the 1994 EPA
SAP/SAB review, examined numerous case-control, cohort and laboratory
studies to evaluate this issue. The Graham committee examined at least
a dozen well-designed case-control studies alone. Both reports found
only weak and/or inconclusive evidence of an association between 2,4-D
and lymphoma. Yet, both reports called for more research on this
issue. If I understand this correctly, that means spending millions more dollars to try to see whether a chemical causes a slight increase in
a rare cancer among the maximally exposed population (pesticide
applicators). All this, even though we know that protective practices
limit exposure. Rather than needing more research, I would say we have
already spent too much.
Of course, just as Mr. Taubes
indicates, the cry usually goes out that “People are dying! If 2,4-D
could possibly cause lymphoma then we could save hundreds of lives.”
Well, anything is possible and it’s impossible to prove a negative.
But unless we all want to live like the Unabomber, back in the good old days of no chemicals (except for maybe the uncontrolled occupational exposures in the early dye houses, coal furnaces and mines), we have to direct our resources where they will get the best
return. Right now, as it is currently practiced, traditional
environmental epidemiology is not it.
John Bukowski
Taubes’ Response: Dr. Bukowski’s
comments are refreshing, as well. A classic selection bias of
letters-to-the-editor is that only people who disagree with you take
the time to write. It worries me that here those who agree are mostly
those doing the writing, and I wonder if either the medium or the
message is being ignored.
••••••••••••••••••••••
Reader Remembers Earlier Taubes
Article
Dear Editor,
I enjoyed reading the interview
with Gary Taubes, in the Epi Monitor, as much as I enjoyed reading his
“Epidemiology Faces Its Limits” in Science last year, and as much as I
disliked reading his stirring defense of “violence epidemiologists” in
Science in 1992. Since the violence epidemiologists’ work falls rather below the standards of those Taubes criticizes in ’95
and ’96, does that mean that he, like epidemiology, bases his view of
“science” on how committed he is to a particular conclusion? Or is he
reconsidering his defense of the junk science he defended four years
ago?
Paul H. Blackman, PhD
Taubes’ Response: Dr. Blackman
is director of Research for the National Rifle Association. His memory
is long, as well as almost accurate. It would not be correct to call
the 1992 piece a “stirring defense.” It was a piece of reportage. On
the other hand, Blackman was right; I allowed my opinions on the
subject to color my reporting. I wasn’t as skeptical as I otherwise
might have been and as I would be today. On the third hand, compared
to much of the research purporting to give evidence for a salubrious effect of handgun ownership, the violence epidemiology work looks
Nobel-caliber.
Published August/September 1996