Epi Wit & Wisdom Articles
Readers React to Taubes Interview (4 of 6)
Respondents Agree and Disagree at the Same Time
Thoughtful Reaction from Michigan State University
Dear Sir,
Why is it that I find myself in
total agreement with every scientific principle articulated by Taubes,
but completely unable to recognize the epidemiologic landscape he
describes? At every interdisciplinary conference or committee in which
I have participated over the past twenty years, no matter whether the
other participants were laboratory scientists, clinicians or
policy-makers, it is the epidemiologists who have counseled caution in
data interpretation, who have exercised, in other words, just the kind
of self-critical stance that Taubes advocates.
Perhaps Taubes has trouble
recognizing who is and who is not an epidemiologist; not everyone who
produces a field study merits the appellation. Taubes points out that
the electromagnetic field (EMF) dispute is driven by epidemiologic
results. Partly true, but those epidemiologic results originated in
the work of non-epidemiologist Wertheimer and non-epidemiologist
(physicist) Leeper, and were continued by others without epidemiologic
credentials as well as by some mainstream epidemiologists. And having
participated in a national panel on EMF results, and having reviewed
data on behavioral effects of EMF, I can assure Taubes that there is
no lack of laboratory scientists who tout their experimental results
as evidence of EMF harm (1).
I recently had the opportunity
to talk to Diane Dumanoski, one of the authors of Our Stolen Future, a
book with a foreword by Vice-President Gore, now in its sixth printing
since March, which argues that chemicals with estrogen-like properties
are threatening the future of mankind (2). I pointed out to
her that many of the “epidemiologic” studies she cites as supportive
of her thesis are methodologically very weak, and were performed by
non-epidemiologists (e.g., Carlsen on sperm counts, Jacobsen on PCBs
and development, Blair on immunological effects of DES). By contrast,
negative research by well-trained epidemiologists (e.g., Bertazzi on
cancer in Seveso, Krieger on breast cancer and pesticides) is
relegated to footnotes. Her response was in some ways like Taubes’:
“All epidemiologic studies are problematic and criticizable.” Thus
does Taubes, by condemning the field in general, support just those
alarmists whose mission he condemns.
Most conspicuously lacking from
Taubes’ formulation is the historical and public health perspective
necessary to contextualize epidemiologic science. There have been many
instances where the first report of an epidemiologic association has
been most unimpressive scientifically, indeed nothing more than
“signals on the borderline of noise.” I would ask Taubes to go back to
the first report of the association of aspirin and Reye’s syndrome
(3); the first (non-randomized) trial of folate to prevent neural tube
defects (4); the first suggestion that congenital cataracts could be
caused by the rubella virus (5); the first report of an association
between thalidomide and phocomelia (6). He may be surprised by the
weaknesses of those studies of associations subsequently established
as causal.
And I may remind him that the
association of DES with vaginal adenocarcinoma in the offspring was
based on just seven exposed cases and one unexposed (7). And to go
back further, Jenner’s first demonstration that vaccination could
prevent smallpox was most disputable, and Lind satisfied himself that
limes prevented scurvy on the basis of just two cured patients.
Because public health triumphs are often initiated by “small signals,”
we continue not to label new epidemiologic findings as “pathological
science” too quickly, in spite of our widespread reputation for
skepticism.
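To see how much weight a “small signal” can carry, here is a worked
sketch using hypothetical counts patterned on the seven-versus-one
description above (the full 2x2 table is not given in the letter):
Fisher’s exact test on a handful of cases can still be overwhelming
when exposure is rare among the controls.

```python
# Hypothetical counts, assumed for illustration only: 7 of 8 cases
# exposed versus 0 of 32 controls. Even with so few subjects, the
# probability of so lopsided a table arising by chance is tiny.
from math import comb

def fisher_one_sided(a, b, c, d):
    """Upper-tail hypergeometric probability for the 2x2 table
    [[a, b], [c, d]]: P(exposed cases >= a) under independence."""
    row1, col1, n = a + b, a + c, a + b + c + d
    return sum(comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
               for x in range(a, min(row1, col1) + 1))

# cases: 7 exposed, 1 unexposed; controls: 0 exposed, 32 unexposed
print(f"one-sided p = {fisher_one_sided(7, 1, 0, 32):.1e}")
```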
So, I advise younger members of
the epidemiologic profession to listen carefully to what Taubes says,
and to strive to emulate the self-critical attitude that he so
eloquently describes. And I would go even further and suggest that our
profession be more critical than we are now of studies that do not
live up to our own self-imposed standards, particularly when multiple
tests of the data are performed without pre-specified hypotheses.
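To make the multiple-testing worry concrete, here is a minimal
simulation (all rates assumed for illustration): scan 100 separate
exposure-disease comparisons in which no true effect exists anywhere,
and about five will cross p < 0.05 by chance alone.

```python
# A minimal simulation (numbers assumed for illustration): 100 separate
# exposure-disease comparisons, each on 400 subjects, with NO true effect
# anywhere. At alpha = 0.05, roughly five "significant" associations
# still appear, purely by chance, when no hypothesis is set in advance.
import random

random.seed(1)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip([a, b, c, d], expected))

CRIT = 3.841          # chi-square cutoff for p < 0.05 with 1 df
false_positives = 0
for _ in range(100):  # 100 unplanned, "data-dredged" comparisons
    a = b = c = d = 0
    for _ in range(400):
        exposed = random.random() < 0.5   # exposure is unrelated...
        sick = random.random() < 0.2      # ...to disease, by construction
        if exposed and sick:   a += 1
        elif exposed:          b += 1
        elif sick:             c += 1
        else:                  d += 1
    if min(a, b, c, d) > 0 and chi2_2x2(a, b, c, d) > CRIT:
        false_positives += 1

print(false_positives, "of 100 null associations reached p < 0.05")
```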
But at the same time, I would
tell them not to be too disheartened by external criticism, since our
profession (in all modesty) has done more to benefit humankind in the
last fifty years than any other scientific discipline, physics
included. I will be glad to compare our accomplishments--which include
sorting out the major risk factors for coronary disease (which now
kills half as many in the US as it did thirty years ago), and teaching
us that smoking causes the number one cancer in the West and that
hepatitis B virus causes the number one cancer in the East--with those
of any other discipline. And how many millions of lives have been
saved by the eradication of smallpox since 1978?
If Gary Taubes would like to
publicly debate the proposition that epidemiology is pathological
science, I gladly offer to take the con position.
Nigel Paneth, MD, MPH
Taubes’ Response: Curiously
enough, I also find myself in agreement with much of what Dr. Paneth
says, but I will discuss only the disagreements, beginning with EMF.
What he says about Leeper and Wertheimer not being epidemiologists is
true, but it holds less for others involved in the issue, such as
David Savitz, Anders Ahlbom, and London et al. And yes,
there is some horrendous bench science dredged up to argue the EMF
case by some biologists of dubious distinction. But they work together
with the epidemiologists in a perversely fascinating symbiotic
relationship. The epidemiologists will agree when pressed that their
findings are near meaningless, but they will defend them by claiming
they confirm the biological lab work. The laboratory scientists will
agree, when pressed, that their work is near meaningless, and then
defend it with the epidemiology. It’s a hell of a way to build a house
of cards.
Regarding Our Stolen Future and
the environmental estrogens story, I couldn’t agree with him more. It
strikes me as weak science all around, with some of the weakest being
the epidemiology. I don’t find it particularly meaningful that
Dumanoski allegedly defends bad science with the same phrases I use to
criticize it.
Finally, regarding the
statement: “Because public health triumphs are often initiated by
‘small signals’, we continue not to label new epidemiologic findings
as ‘pathological science’ too quickly, in spite of our widespread
reputation for skepticism.” The point is not to label new findings as
pathological, but to question at what point they might become
pathological, and whether the field has enough of the traditional
scientific defense mechanisms in place to recognize it when they do. I
suggest that epidemiologists spend more time critically (and even
publicly) appraising and improving their own research, and less time
trumpeting their past accomplishments and offering to verbally have at
anyone who should express curiosity about how much that research might
be in need of improvement.
••••••••••••••••••••••
Confusing Experimental and Observational Science
Dear Sir,
Mr. Taubes’ frequent comparisons
between high energy physics and epidemiology demonstrate his
fundamental failure to understand the differences between experimental
and observational science. Physicists who have doubts about the
accuracy of their results can check their equipment, alter the
experimental conditions and repeat the experiment. An epidemiologist,
on the other hand, does not have this luxury. This is a science that
proceeds based on repeated studies in different populations. No
credible epidemiologist will come to a conclusion about disease
causality based on a single observational study. The problem is not
with epidemiologists or epidemiology, as he suggests, but with the
failure of his colleagues in the press to understand the science. To
suggest that most or even many published epidemiological studies
represent the product of data dredging to seek funding is the height
of arrogance. I submit that someone who once refused a position at CNN
because he couldn’t smoke on the job should be more prepared to accept
the validity of epidemiological research.
Bob Morris
Taubes’ Response: The point is
not whether I understand the fundamental differences between
experimental and observational science, but if experimental scientists
are so easily misled, isn’t it safe to assume that observational
scientists—without all the benefits of a lab at hand—can be even more
easily misled, and thus should be even more skeptical of their own
results?
As for the problem being with
the press, this is the same line I’ve heard over and over again. This
is what Angell and Kassirer wrote in the New England Journal. Lord
knows, it is certainly true, but it strikes me as similar to the
argument that “guns don’t kill people, people do.” It’s a fact of life
that studies will be misinterpreted by the media. So instead of
blaming the media, how about not pulling the trigger? Instead of
reporting your findings in such a way that they’re newsworthy, which
they’re almost assuredly not, how about reporting them with the
caveat-laden, jargon-filled, scientifically skeptical dullness the
findings deserve? If your study is at odds with multiple previous
studies on the subject, why not tell the reporter that those studies
are probably right, and yours is probably wrong, which is likely to be
the case. The reporter will probably respond, “then why should I write
a story about your findings?” Instead of the knee-jerk response, which
is to give him an angle (“Well, I mean, I could be right, after all,
it’s a brilliantly conceived study.”), why not say, “There is no reason.
It’s a nonstory.” The same holds for the finding of a possible
causative effect down at the noise level. Why not say truthfully, “We did
what we could, we exhausted our funding, we published our findings,
the results were ambiguous, but if you squint real hard you can see
the hint of a whisper of something that is probably not there.” After
all, it’s probably not. If the reporter asks for betting odds, be
realistic: ten to one against, 100 to one against. If history is any
indication, those are very reasonable odds, and they are probably high
enough to send the reporters looking for a better story with which to
open the nightly news. (It’s hard to justify a headline that says
“Researchers report finding that secondhand cigarette smoke is a long
shot to cause breast cancer.”) The worst that might happen is the
reporter will get huffy, believing that somehow you wasted his or her
time. The next time a reporter comes calling I suggest you (the royal
“you”) ask yourself: “What do I hope to gain...” or “What can I
possibly gain by letting the results of my study make it to the
press?”
I no longer smoke.
••••••••••••••••••••••
Epidemiology Seen More Broadly
Dear Sir,
You did a great thing by
publishing your interview with Gary Taubes.
As an example of an initial
effect that further, more careful study showed to be very much
stronger, I offer the effect of condoms in preventing AIDS.
The initial studies were
negative or showed condoms to be a risk factor. Of course, there was
strong biological reason to believe in the protective effect of
condoms, and likewise the sources of negative confounding that hid the
effect were equally clear. As for a positive finding that was later
eliminated, breast implants and connective tissue diseases are a good
example. Of course, his argument that the damage was not undone is
quite correct.
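For readers who want the mechanism spelled out, here is a minimal
sketch of negative confounding with made-up numbers (not the actual
condom data): an exposure that truly halves risk within every stratum
looks harmful in the crude analysis, because those most at risk are
also the most likely to be exposed.

```python
# A sketch with hypothetical numbers: the exposure halves risk in every
# stratum (true RR = 0.5), but high-risk people are far more likely to
# be exposed, so the crude relative risk comes out well above 1.
# Stratifying on the confounder recovers the protective effect.
import random

random.seed(2)

crude = {True: [0, 0], False: [0, 0]}     # exposed? -> [cases, subjects]
strata = {(h, e): [0, 0] for h in (False, True) for e in (False, True)}

for _ in range(200_000):
    high_risk = random.random() < 0.3
    exposed = random.random() < (0.8 if high_risk else 0.1)    # confounding
    base = 0.20 if high_risk else 0.01
    sick = random.random() < base * (0.5 if exposed else 1.0)  # true RR = 0.5
    crude[exposed][0] += sick
    crude[exposed][1] += 1
    strata[(high_risk, exposed)][0] += sick
    strata[(high_risk, exposed)][1] += 1

def risk(cell):
    return cell[0] / cell[1]

print(f"crude RR = {risk(crude[True]) / risk(crude[False]):.2f}  (looks harmful)")
for h in (False, True):
    rr = risk(strata[(h, True)]) / risk(strata[(h, False)])
    print(f"stratum high_risk={h}: RR = {rr:.2f}  (truly protective)")
```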
I think you should have put
epidemiology more in the context of professional decision making. Our
job is not always to be right, but to gather evidence that assists in
making public health decisions and then to make those decisions. The
decision to wait and criticize further may be a decision that costs
thousands of lives. We will make wrong decisions. Our task is to
improve the quality of the decisions.
My feeling is that epidemiology
puts very strong and artifactual limits on itself by adopting the
general purpose of finding risk factors that lead to differential
rates of disease in exposed and unexposed individuals. To me, that can
be a productive purpose. It is almost never, however, the most
productive purpose that epidemiologists should be pursuing in their
quest to find new ways to prevent and control disease. What
epidemiology should pursue is ever more elaborate and testable theory
regarding how patterns of potential causes in populations generate
patterns of disease in those populations. When we confine our
theorizing and pattern evaluation to associations between exposure and
disease in individuals, when we confine the parameters we estimate to
the parameters of data models instead of the patterns of causal
models, we paint ourselves into a corner where Gary Taubes’ criticisms
often ring true. My commentary in the May 15 issue of AJPH elaborates
on this more extensively.
Gary Taubes actually paints an
inappropriate picture of how science works. He treats science as a
process of making yes or no decisions about individual causal
hypotheses. I am afraid that a lot of epidemiologists do the same
thing so that makes epidemiologists a pretty easy target for the
Taubes frame of mind. In fact, no science works that way. There is no
refutationist criterion that can successfully reject any proposed
theory. And yet all theory, all models of how the real world really
behaves, are wrong. That is to say, all models make refutable
assumptions. But some nevertheless lead to effective control actions.
Jim Koopman, MD, MPH
Taubes’ Response: Rather than
responding myself to Dr. Koopman’s comment about how science works,
and there being no refutationist criteria that can successfully reject
any causal hypothesis, I will let Richard Feynman stand in for me. The
following is from “The Character of Physical Law”: “In general we look
for a new law by the following process. First we guess it. Then we
compute the consequences of the guess to see what would be implied if
this law that we guessed is right. Then we compare the result of the
computation to nature, with experiment or experience, compare it
directly with observation, to see if it works. If it disagrees with
the experiment, it is wrong. In that simple statement is the key to
science. It does not make any difference how beautiful your guess is.
It does not make any difference how smart you are, who made the guess,
or what his name is—if it disagrees with experiment, it is wrong. That
is all there is to it. It is true that one has to check a little to
make sure that it is wrong because whoever did the experiment may have
reported incorrectly or there may have been some feature in the
experiment that was not noticed, some dirt or something, or the man
who computed the consequences, even though it may have been the one
who made the guesses, could have made some mistake in the analysis.
These are obvious remarks, so when I say--if it disagrees with
experiment it is wrong--I mean after the experiment has been checked,
the calculations have been checked, and the thing has been rubbed back
and forth a few times to make sure that the consequences are logical
consequences from the guess, and that in fact it disagrees with a very
carefully checked experiment...
Another thing I must point out
is that you cannot prove a vague theory wrong. If the guess that you
make is poorly expressed and rather vague, and the method that you use
for figuring out the consequences is a little vague—you are not sure,
and you say, “I think everything’s right because it’s all due to so
and so, and such and such do this and that more or less, and I can
sort of explain how this works...” then you see that this theory is
good, because it cannot be proved wrong! Also if the process of
computing the consequences is indefinite, then with a little skill any
experimental results can be made to look like the expected
consequences. You are probably familiar with that in other fields. ‘A’
hates his mother. The reason is, of course, because she did not caress
him or love him enough when he was a child. But if you investigate you
find out that as a matter of fact she did love him very much, and
everything was all right. Well then, it was because she was
overindulgent when he was a child! By having a vague theory it is
possible to get either result. The cure for this one is the following:
if it were possible to state exactly, ahead of time, how much love is
not enough, and how much love is over-indulgent, then there would be a
perfectly legitimate theory against which you could make tests. It is
usually said when this is pointed out--when you are dealing with
psychological matters things can’t be defined so precisely. Yes, but
then you cannot claim to know anything about it.”
••••••••••••••••••••••
Warning for Taubes and Others
Dear Sir,
Gary Taubes would be ill-advised
to accept the concept (propounded by a number of epidemiologists) that
mismeasurement of exposure can only work to “make the effect smaller
than it really is.” For any particular epidemiological study that
investigates a causal risk factor, and in which each study subject has
the same probability of being misclassified with respect to exposure,
it is incorrect to infer that the measure of effect obtained from
the study--for example, the rate ratio or relative risk--can only be
increased if more reliable information were obtained such that
all misclassification could be removed. The concept Gary Taubes was
being asked to accept refers to a study of infinite size. The
confusion of the infinite with the particular has led to
over-confidence.
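A small simulation, with numbers assumed purely for illustration,
makes the infinite-versus-particular distinction concrete:
non-differential misclassification pulls the relative risk toward the
null on average, yet a sizable minority of individual finite studies
still overshoot the true value.

```python
# A sketch under assumed, hypothetical numbers: true RR = 2, half the
# cohort exposed, 5% baseline risk, and a 20% chance that each subject's
# exposure is recorded wrongly (the same for cases and non-cases, i.e.
# non-differential). The average observed RR is attenuated toward 1,
# but a single finite study can still come out above the true value.
import random

random.seed(3)

def one_study(n=1000, true_rr=2.0, base=0.05, flip=0.2):
    """Run one finite cohort study and return its observed relative risk."""
    a = b = c = d = 0                      # observed 2x2 cells
    for _ in range(n):
        exposed = random.random() < 0.5
        sick = random.random() < (base * true_rr if exposed else base)
        # Misclassify the recorded exposure with probability `flip`,
        # independently of disease status.
        recorded = exposed if random.random() > flip else not exposed
        if recorded and sick:   a += 1
        elif recorded:          b += 1
        elif sick:              c += 1
        else:                   d += 1
    return (a / (a + b)) / (c / (c + d))

rrs = [one_study() for _ in range(2000)]
print(f"true RR = 2.00, mean observed RR = {sum(rrs) / len(rrs):.2f}")
print(f"single studies with observed RR above 2.00: {sum(r > 2 for r in rrs)} of 2000")
```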
Tom Sorahan
Published July 1996