Toward a general theory of pathological science

Nicholas J. Turro

One of the most interesting challenges a practicing scientist faces is explaining to a non-scientist how science works. Though science is one of many "ways of knowing," and not a perfect one, it seems to be the best that the human mind has been able to develop. Scientists in general understand the tentative nature of the scientific process yet seem able to proceed in their everyday activities with remarkable self-assurance that they are dealing with a set of "truths" that allow them to generate a seemingly unending series of stunning predictions and verifications, followed rapidly by important technological applications. This makes the explanatory challenge harder. How can one be so sure about what one "knows" and at the same time avoid the arrogance of a know-it-all?

In this context, another challenge is to explain how science handles extraordinary claims. How can a scientist tell whether a remarkable idea may lead to the Nobel Prize, which is awarded for science that changes the way scientists think and know--or to the Ig Nobel Prize, awarded to work that exemplifies the scientific process apparently gone amok?

Throughout the history of science, the distinction between revolutionary science and what we can call pathological science has not always been so clear. Both extremes of the spectrum are characterized by a common trait: the ability of a scientist (or a community of scientists) to "think outside the box." But what is the box that they must be outside of? I would identify it as the paradigm, the concept popularized by Thomas Kuhn in The Structure of Scientific Revolutions.1 Science makes quantum jumps when a paradigm shifts, but an intricate process is required to confirm that an extraordinary claim is revolutionary--the stuff of true paradigm shifts--and separate it from those that are eventually shown to be pathological, destined for the dustbin of scientific history.

For example, during the past century Max Planck published some mathematical computations intended to describe an anomaly in the classical theory of light. The anomaly was termed the "ultraviolet catastrophe," which will give some idea of how severely it disturbed the physics community. Planck made the sensational suggestion that if light were "quantized" and consisted of bits of energy rather than a continuum of energy, which was the dogma of the classical theory of light, the anomaly disappeared. At the time (and now, to some) this was a preposterous suggestion, contrary to all known experience. Yet a few years after the paper, Einstein connected Planck's suggestion to another anomaly involving the way light causes electrons to be kicked out of a metal (the basis of the "electric eyes" that operate security doors everywhere). For the next several decades the physics community endured a battle royale, with the ideas of quantum mechanics emerging triumphant, if still resiliently resistant to explanation in terms of ordinary experience.

The emergence of quantum mechanics warns us that no matter how bizarre a scientific claim may be, or how remote from ordinary experience, it can still eventually be accepted and embraced by the relevant community. But what of the many remarkable claims that have been proposed, debated, and then eventually dismissed as pathological? How was each decision made, and how do we know the decision was correct? Are there any rules scientists can follow to minimize falling into the trap of pathologies?

Mechanisms of delusion

Troubled science takes many forms, from pseudoscience (irrational or mystical systems of thought dressed up in ostensibly scientific jargon, often complex but never rigorous) to junk science (methodologically sloppy research usually conducted to advance some extrascientific agenda or to prevail in litigation) to outright fraud. I am concerned here not with dishonest practices, which are rarely intellectually interesting, but with serious investigations leading down pathways that ultimately prove erroneous. As Nobel-winning chemist Irving Langmuir said in his famous General Electric lecture on the topic, "These are cases where there is no dishonesty involved, but where people are tricked into false results by the lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking, or threshold interactions."2

The road to greater scientific truth is not just littered with history's errors; it is built through a process of constant error correction. If we accept Kuhn's description of scientific progress as a succession of revolutions, or paradigm shifts, resulting from the constant effort to reconcile new results with dominant paradigms, then a scientific field's moments of crisis--when different factions contend over whether an idea will turn out to be revolutionary or absurd--tell us a great deal about how knowledge is constructed, tested, and defended. For this reason, understanding pathological science can help a researcher better understand, and perform, reliable science.

Kuhn posits that in the conduct of normal, everyday science, researchers sometimes obtain anomalous results; the scrupulous scientist investigates these oddities through experiments intended to disprove the anomalies and reinforce the current reigning paradigm. (We can call this pattern of paradigm-guided scrutiny the First Law of Parodynamics.) If the anomalies persist, this process often gives rise to a period of intense debate and experimental work, with one community impeaching the correctness of the paradigm and another defending it. A key result may suddenly emerge, supporting the paradigm and revealing the challenging anomaly as pathological; on rare and treasured occasions, a key result convincingly supports a significant revision of the paradigm. (Nobel Prizes often follow.)

Whatever paradigm may govern a scientific specialty at any given time, it helps frame and organize a researcher's thinking, acting as a kind of Baedeker for inquiries within the specialty and as a safeguard against pathological work. When a science is in a potentially revolutionary phase, a dominant paradigm can be a prison, preventing researchers from following promising new leads, but more often it is a form of conceptual prophylaxis, something not to be abandoned without peril. Science often turns pathological when investigators venture outside their familiar paradigm without becoming sufficiently versed in another one. An informative example is the "cold fusion" fiasco of 1989, in which electrochemists Stanley Pons and Martin Fleischmann could probably have saved themselves quite a bit of opprobrium if they had sought objective advice from high-energy physicists about the significance of detecting a flux of neutrons or gamma rays in their apparatus.

Since only a few anomalies ever lead to revolutionary change--while most working scientists dream of making exactly that type of discovery, the stuff of which prestige is made--there is an understandable temptation to interpret anomalies as meaningful. Indeed, information theorists have a term that quantifies an idea's newness, its intellectual potential (in a sense akin to "potential energy," measuring the idea's distance from what an existing paradigm would predict): surprisal. Surprisal indicates the possibility of a revolutionary payoff--and also the chance that the idea will turn out to be delusional. Because high-surprisal investigations are both high-gain and high-risk, the need for the counterpoise of skepticism among investigators studying such topics is acute.
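To make the analogy concrete (the formal definition comes from standard information theory and is offered here as an aside, not as part of the original argument), the surprisal of an outcome $x$ to which a paradigm assigns probability $P(x)$ is

$$I(x) = -\log_2 P(x) \quad \text{bits},$$

so a result rated a 50/50 proposition carries one bit of surprisal, while one rated a million-to-one shot carries about twenty bits ($\log_2 10^6 \approx 19.9$). The rarer a paradigm says an observation should be, the more information--and the more risk--its apparent occurrence carries.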

The history of scientific misfires indicates certain patterns, as described by Figure 1, which I have adapted from a previous publication.3 Anomalous results, unexplainable within a field's governing paradigm and important enough to cause true intellectual alarm within the community, can throw the field into crisis--an unpredictable state analogous to a catastrophe point in topological mathematics. Resolution can take one of three forms. The idea's revolutionary potential may be revealed as more apparent than real, and this "pseudocritical" state resolves to reinforce the original paradigm; the idea may foster a true Kuhnian revolution, and a new paradigm arises; or the idea may prove pathological, in which case only zealots continue to pursue it, and the paradigm structure is untouched. When a field is in the critical prerevolutionary/prepathological phase, a principle I will call the Second Law of Parodynamics comes into effect: The more drastic the departure from an established paradigm, the greater the chance of either a revolutionary or a pathological outcome--and the more urgent the need for awareness of the mechanisms whereby cognitive errors tend to arise.

One way confusion can enter a scientist's thinking involves disruption of the natural conceptual progression through four categories of ideas, ranked in decreasing order of surprisal. For convenience, we can refer to these as the paradigm's four Ps:

  • The possible comprises all ideas that do not violate the most basic and global principles of science (e.g., the second law of thermodynamics; fundamental conservation laws).

  • The plausible describes ideas that are clearly possible and would be tenable if we could envision circumstances under which they could be tested. (In the case of "polywater" or polymerized H2O molecules, discussed in more detail below, there was no a priori reason why a chemical reaction yielding such a substance could not occur and move downhill in free energy, as any spontaneous reaction must; the idea was implausible but not impossible.)

  • The probable describes "normal science" as Kuhn used the term: incremental explorations that apply a paradigm and may extend its scope but do not threaten to overturn it. Science regularly makes orderly incursions into the realm of the unknown, expanding what is known without raising an eyebrow over the probability of the results.

  • The proven applies to unsurprising exercises in puzzle-solving, the routine application of known principles, working firmly within a stable paradigm. Much of scientific education takes place here, though student work is fully capable of venturing into the other areas.

The borders separating these ideas--particularly the line between the first two--are not as clear in practice as in theory, especially when a result is of interest to two or more distinct specialties. A high-surprisal hypothesis may appear impossible from one vantage point, while a different field's paradigm makes it clear that the hypothesis is well within the realm of the possible and merely stretches the limits of plausibility or probability. But only after an idea has run the scientific community's gauntlet--surviving rigorous experimental and interpretive efforts to falsify it--can it be said to move from questions of possibility to a probable or proven status. Pathological science occurs when an investigator cuts this process short, prematurely trading in scrutiny for advocacy.

Sometimes there's no there there

Langmuir's classic symptoms of pathological science [see sidebar] attribute many errors to various forms of subjective judgment. Uncertainty is part of all science, and subjective judgments are inescapable in most fields, but statistically marginal phenomena on the threshold of human perception, with a low signal-to-noise ratio, are easy to misinterpret. (Langmuir himself detected this phenomenon in the Columbia laboratory of Bergen Davis and Arthur Barnes in 1930; these physicists believed they were detecting a phenomenon called "electron capture" by alpha particles in a magnetic field, but Langmuir found that in their six-hour marathon sessions counting scintillations on a screen in a darkened room, they also counted visual hallucinations, which are common in such circumstances, and dismissed observations that conflicted with their interpretation.)
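A toy simulation makes the mechanism concrete. The sketch below is illustrative only (its parameters are invented, not taken from the Davis-Barnes episode), but it shows how an observer who discards a fraction of "inconvenient" threshold-level counts will extract an apparent effect from pure noise:

    import random

    random.seed(42)

    # 1,000 observation intervals of pure noise: no real effect exists.
    # Each interval yields a threshold-level count of 0 or 1.
    counts = [1 if random.random() < 0.5 else 0 for _ in range(1000)]

    def reported_rate(data, expect_signal, drop_fraction=0.3):
        """Mean count per interval, as reported by an observer who may
        discard some zero counts as 'equipment glitches.'"""
        if expect_signal:
            zeros = [x for x in data if x == 0]
            ones = [x for x in data if x == 1]
            data = ones + zeros[int(len(zeros) * drop_fraction):]
        return sum(data) / len(data)

    print("rate reported while expecting a signal:", reported_rate(counts, True))
    print("rate reported while expecting nothing: ", reported_rate(counts, False))
    # Identical raw data yield two different "measurements": selective
    # discarding near the threshold manufactures an effect from noise.

The larger the discarded fraction, the more dramatic the phantom effect; no dishonesty is required, only a conviction about which observations "went wrong."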

Observers commonly select and discard some of their scattered data points because of suspected confounding conditions or experimental error; in some contexts this borders on cheating, but more often it is simply a reasonable selection process. In all fields, the requirement of reproducibility within statistical limits guards against this kind of observer error. The observation by Martin Gardner of Scientific American that bad science is often the work of "hermit scientists"4 such as Immanuel Velikovsky or L. Ron Hubbard, who have little or no professional interaction with peers, reflects a failure of that mechanism of independent reproducibility or falsification.

The red flag of pathology should thus appear any time a researcher offers resistance to the challenge of reproducibility, claiming that only a certain special system (or even certain investigators) can generate the anomalous result. A notorious case is found in Jacques Benveniste's "infinite dilution" studies, which held that antibody solutions remained biologically effective even when diluted so thoroughly that no molecules of the solute were detectable in the fluid, implying that water somehow retains a memory of molecules that have been dissolved in it. If confirmed, this hypothesis--not impossible, but highly implausible according to chemical paradigms--would have overthrown some of chemistry's fundamental beliefs about the properties of liquid water, but independent investigators found that the Benveniste lab did little to control for observer bias or sample contamination, excluded conflicting measurements, massaged the statistics, and neglected to investigate reasons for failures of reproducibility.5 Rather than acknowledge that his group had been pursuing a pathological inquiry, Benveniste has clung to his theory, accused his critics of sour grapes in the face of a new idea--and acquired not one but two Ig Nobel Prizes for improbable research (the second for a more recent claim that a solution's biological activity can be digitized and transferred to a different water sample via e-mail). Seldom has an investigation matched Langmuir's symptomatology so smoothly.

Interpreting something into existence

Not all scientific pathologies, however, are covered by Langmuir's list. Another instance of pathological science, the bandwagon over polywater following Nikolai Fedyakin and B.V. Deryagin's work in the 1960s, illustrates a common type of cognitive shortcoming: a failure to seriously consider alternative hypotheses to explain an unusual result. The dense liquid called polywater that Deryagin and other researchers were able to produce through condensation in tiny capillaries--reproducibly, it should be noted, and with exhaustive attention to controlling physicochemical variables and answering the critiques of colleagues--ultimately turned out to be an artifact caused by impurities in ordinary water. Deryagin and a worldwide network of adherents to his theory pursued the polywater concept to extraordinary lengths, in part because of plausible theories about the behavior of water molecules in ultrafine capillaries (and in part because of heavy funding from the U.S. Navy, which took an interest in possible military applications6). However, when purification tests using more sophisticated equipment convinced Deryagin to reconsider an obvious hypothesis he had previously rejected--that his polywater was contaminated ordinary water--he readily and honestly conceded that his original experiments were flawed, invalidating any interpretations based on these results.

The scientific process healed itself in this case, though not as quickly as it would have if Deryagin and others had kept Occam's razor in mind and given more weight to the simplest available explanation. A favored hypothesis can develop its own momentum, especially when a researcher invests his or her prestige or professional self-identification in one idea to the exclusion of competing (and often more parsimonious) explanations for the results. Having formed a pet conclusion, the scientist often defends it using its own terms, models, and assumptions; assuming one's conclusion rather than challenging it introduces logical circularity into the interpretation of results.

A sensational result also can be inextricably confused with a sensational interpretation. Pathological science often involves a relatively sensational interpretation of an unexceptional observation; the cold fusion episode offers an instructive example. The observation of an atomic explosion dwarfs any interpretation of the causal relations and scientific principles leading to it, and on some level Pons and Fleischmann may have been comparing the energy-generating potential of deuterium/palladium cold fusion to that of the bomb (which used a fission reaction). Lacking a spectacular observation, they generated all the media "bang" they needed through their interpretation.

As Nature associate editor Philip Ball points out in H2O: A Biography (NY: Farrar & Straus, forthcoming), the chemistry of water has attracted more than its share of pathological investigations; the stories of cold fusion, polywater, and infinite dilution, all involving properties of water, provide shining (or perhaps glaring) examples of how implausible ideas can run amok. Perhaps because water is essential to life, has numerous properties that are indeed anomalous (or at least ill-understood), and is rich in metaphoric connotations, it seems to bring out the unskeptical enthusiast in some researchers. And perhaps because the dream of converting water into a cheap and plentiful fuel held particular promise in the years following the OPEC-induced energy crisis in the West, considerations of wealth and fame inevitably intruded into the Utah laboratories where palladium electrodes allegedly electrolyzed a heavy-water solution.

Beyond these common shortcuts around established scientific method lie the human flaws that imperil any kind of enterprise. Extrascientific considerations such as media attention, professional standing, promises of monetary gain, ideological predilections, hubris Nobelicus, and pressures from interested parties outside the scientific community all can contribute to self-delusion. The exigencies of funding tempt even the most scrupulous basic researcher to overstate practical benefits when describing new work to potential supporters. Today's academic environment--which can appear more like a media fishbowl than an ivory tower--also presents the scientist with ample channels to speak to the general public, with a considerable risk of misrepresenting the content, purpose, and potential of a scientific discovery, either in an effort to simplify professional jargon (the ever-present problem of "dumbing it down") or in the highly contagious enthusiasm over an untested idea. These are not paradigm questions, of course, just questions of objectivity.

Advice for the working revolutionist

Clearly, scientific progress would be impossible if researchers always played it safe within a dominant paradigm, discarding disturbing results or shying away from daring hypotheses. Some of today's most robust discoveries and most promising research subjects--manned space flight, wave-particle duality, C60 (buckminsterfullerene or "buckyball") molecules, high-temperature superconductivity, ad infinitum--once struck mainstream scientific opinion as completely implausible. Working researchers have practical steps they can take to lower the chances that today's "eureka!" will be tomorrow's Ig Nobel:

  • Always generate and test several plausible hypotheses to explain a result.

  • Use imaginative experimental design to increase objectivity and decrease the chances that the initial observation contains artifacts.

  • Let the best available paradigm be your guide, until you're certain that your results require revision of the paradigm.

  • Be conservative about the concepts of statistical significance and margin of error, especially when analyzing phenomena on the threshold between signal and noise.

  • Reproduce, reproduce, reproduce.

  • Discuss surprising findings openly with peers (through both formal and informal channels, inside and outside one's own specialty), and make constructive use of the critiques that arise.

  • When discussing research with non-scientists--especially those holding microphones, cameras, notebooks, or checkbooks--avoid the temptations to overinterpret results, oversimplify your explanations, or promise the moon in practical applications.

  • If further studies falsify your hypothesis, acknowledge it with grace and learn from the experience. Blind leads are nothing to be ashamed of; they are inseparable from the progress of science. Any number of pathological investigations give way eventually to one like quantum mechanics--which necessitated a few adjustments to classical conservation laws but ultimately withstood criticism, explained results that Newtonian theory couldn't explain, and revolutionized physics. The same communal corrective processes that falsified one theory verified the other; that's how science operates and why it almost always works.

  • Do the unthinkable: Try your very best to find faults in your experiment or to falsify your interpretation. If this is done fairly, objectively, and passionately, even if you turn out to be wrong, you will be true to your science, and you will be admired by the community for your intellectual courage and dedication to the scientific ethos.


1. Kuhn, Thomas. The Structure of Scientific Revolutions. 2nd ed. Chicago: University of Chicago Press, 1970.

2. Langmuir, Irving (transcribed and ed., Robert N. Hall). Pathological science. Physics Today 42 (Oct. 1989): 36-48.

3. Turro, Nicholas J. Geometric and topological thinking in organic chemistry. Angew Chem Int Ed Engl 25 (1986): 882-901.

4. Gardner, Martin. Science Good, Bad, and Bogus. Buffalo: Prometheus Books, 1981.

5. Maddox, John, James Randi, and Walter W. Stewart. "High-dilution" experiments a delusion. Nature 334 (1988): 287-290. Reply by Jacques Benveniste, p. 291.

6. Brian Pethica, adjunct senior research scientist at Columbia's Krumb School of Mines and one of the principal Western investigators of polywater while heading the Unilever Research Laboratory near Liverpool, predicted at the time that chemists "would see polywater papers until the Navy's $4 million runs out."


Related links...

  • Nicholas J. Turro, "Geometric and Topological Thinking in Organic Chemistry"

  • Skeptics Society

  • Committee for the Scientific Investigation of Claims of the Paranormal, publishers of Skeptical Inquirer

  • New York Area Skeptics

  • The Skeptic, U.K.

  • "The Flight from Science and Reason," 1995 conference, New York Academy of Sciences

  • Junk Science quiz, Union of Concerned Scientists

  • Donald Simanek's page: extensive collection of resources on pseudoscience, urban legends, hoaxes, etc.

  • Peter W. Huber, author of Galileo's Revenge: Junk Science in the Courtroom (NY: Basic Books, 1991) and other writings on fallible scientific testimony

  • Steven J. Milloy's "Junk Science" page

  • David M. Sander's "Weird Science" page, Tulane Dept. of Microbiology and Immunology (highly critical of Milloy's perspective and non-research background)

  • Alistair B. Fraser's "Bad Science" page, Penn State Dept. of Meteorology

  • Science Frontiers, physicist William Corliss's newsletter on scientific anomalies

  • Roahn H. Wynar's Clearinghouse of Pseudoscience and Quackery, U. of Texas at Austin, Dept. of Physics

  • Guru's Lair collection of pseudoscience links

  • Resources on scientific ethics, Brian Tissue, Virginia Polytechnic Institute Dept. of Chemistry

  • James Randi Educational Foundation, dedicated to rational debunking of pseudoscience; gives Flying Pig ("Pigasus") Trophies for outrageous claims

  • Junk science and pseudoscience glossary, Robert Todd Carroll's "Skeptic's Dictionary," Sacramento City College Philosophy Dept.

  • "Distinguishing Science and Pseudoscience," American Family Foundation's Cult Information Service

  • "Technopolitics" series, PBS (covers public scientific issues; devotes attention to debunking pseudoscience)

  • David Goodstein, "Whatever Happened to Cold Fusion?"

  • Cold Fusion Times

  • Alan Lightman, "A Cataclysm of Thought," Atlantic Monthly, January 1999 (review of John Stachel, ed., Einstein's Miraculous Year: Five Papers that Changed the Face of Physics [Princeton: Princeton UP, 1998])

  • The Anomalist (journal dedicated to the proposition that "there is more mystery than knowledge in the world," claiming a perspective neither credulous nor skeptical toward scientific anomalies)

  • Fortean Times, dedicated to unexplained, paranormal, and generally Mulderian/Scullyesque phenomena


  • NICHOLAS J. TURRO, Ph.D., is William P. Schweitzer Professor of Chemistry at Columbia, author of Modern Molecular Photochemistry (Menlo Park, Calif.: Benjamin/Cummings, 1978) and of more than 600 scientific papers, and 1998 recipient of the Frontiers in Biological Chemistry award, given by the Max Planck Institute for Radiation Chemistry in Mülheim, Germany. This essay presents some ideas that he delivered in a series of addresses at the Max Planck Institute in October 1998.

