Modeling Trumps Data

DDP Newsletter September 2011, Volume XXIX, No. 5.

The official faith of America is in Science. This idol is an erroneous construct of what the scientific quest for truth really is. When officials say that “the Science has spoken,” these days they generally mean that white-coated priests are interpreting the oracular conclusions derived from a mathematical model or a computer program.

Mathematics is a beautiful, powerful, indispensable tool. It is, however, an abstraction. Science requires actual measurements and observations of Nature. If the data do not agree with the predictions of the model, the model must be questioned. Data that are validated and replicated cannot simply be ignored or discarded to save the reputation of a model, however beautiful or politically expedient.

The whole basis of the U.S. regulatory regime rests on models that have become powerful ideologies: linear risk models, and climate models. The models are wrong.

“To claim that the scientific discipline of medicine got the dose-response wrong and with this error damaged our health, environment, and economy sounds wrong, irresponsible, and unfair to such a dignified and life-serving profession,” writes Edward Calabrese (“Toxicology rewrites its history and rethinks its future: giving equal focus to both harmful and beneficial effects,” Environ Toxicol Chem 2011;30:2658-2673. doi: 10.1002/etc.687). Calabrese defends this claim in a historical detective adventure.

The fundamental principle of toxicology and pharmacology—the dose-response relationship—arose from a bitter political dispute, a power struggle between homeopathy and traditional medicine. The latter arose from the “heroic” medicine of the 18th and 19th centuries, which often, though with the best of intentions, tortured patients before sending them to an early grave. The impact of Samuel Hahnemann’s establishment of homeopathy was compared to that of Martin Luther’s posting the 95 Theses. While it may never have cured anybody, and any apparent benefit might have been a placebo effect, homeopathy was unlikely to harm or kill patients, and was gaining market share.

Homeopaths claimed that the dose-response is biphasic. The paradoxical stimulatory effect of low doses was called hormesis. Physician/pharmacologist Hugo Schulz proposed it in 1884 to explain the effects of varying doses of disinfectants on yeast metabolism. He also suggested it as an explanation for a striking series of clinical observations by Bloedau, in which a homeopathic preparation (veratrine) was successfully used to treat gastroenteritis. Schulz did not subscribe to the homeopathic belief in extreme dilutions (which result in zero concentration of the agent), but he was unfairly linked to it, derided as the Greifswald Homeopath, and ostracized for the rest of his 50-year academic career.

The threshold dose-response model was accepted by traditional medicine, largely through the work of pharmacology professor Alfred J. Clark. Though an outstanding scholar, researcher, and mathematician, Clark failed to present or discuss Schulz’s dose-response model, or even to try to refute the substantial body of widely published research that supported it. He relied on the monotonic probit dose-response model, which mathematically constrains responses to approach the control response asymptotically and never drop below it. This biostatistical model denies the existence of a biphasic response. Measurements that didn’t fit were disregarded (censored).
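To make the constraint concrete (a minimal sketch in our own notation, not Clark’s exact formulation): a probit model with a background term can be written

$$ P(d) = P_0 + (1 - P_0)\,\Phi(\alpha + \beta \log d), \qquad \beta > 0, $$

where $d$ is the dose, $P_0$ is the control (background) response, and $\Phi$ is the standard normal cumulative distribution function. Because $\Phi$ is strictly increasing and tends to 0 as $d \to 0$, the predicted response declines monotonically toward $P_0$ at low doses but can never fall below it. A hormetic dip below control is thus ruled out by the mathematics before any data are seen.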

Although never experimentally proven, the threshold dose-response model became the gold standard for regulating exposure to chemical carcinogens, accepted by the scientific elite, “leaving no room for confusion, debate, or compromise.”

Calabrese and associates found that over a 70-year period, no attempt to assess the capacity of the threshold dose-response model to make predictions in the low-dose zone had ever been published. Hence, they undertook their own review of published data. If the threshold model is correct, below-threshold responses should differ from the control value only by random error, so the ratio of responses above the control value to those below it should be very close to 1; in fact, it exceeded this value by 250%. Responses displayed a consistent pattern, closely paralleling the hormetic model.
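In symbols (our notation, not the paper’s): if sub-threshold responses $R_i$ scatter randomly about the control value $C$, then

$$ \Pr(R_i > C) \approx \Pr(R_i < C) \quad\Longrightarrow\quad \frac{\#\{i : R_i > C\}}{\#\{i : R_i < C\}} \approx 1. $$

A ratio well above 1 means low-dose responses are systematically displaced to one side of the control, which is the signature the hormetic model predicts.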

In the second half of the 20th century, a group of detractors arose who supported the linear no-threshold model (see July issue), which became dogma despite the evidence. For example, the largest rodent study ever conducted, involving some 24,000 mice, provided strong evidence that low doses of a chemical carcinogen are beneficial.

The hormesis concept is becoming central in the fields of anti-aging medicine, performance enhancement, and biogerontology. It is likely, however, in Calabrese’s view, that government research funding will continue to ignore hormesis, lest it challenge environmental exposure standards.

Calabrese notes that hormetic effects are not necessarily beneficial. Harmful ones might include endocrine disruption (e.g. early puberty) or the capacity of numerous anti-tumor drugs to stimulate proliferation of tumor cells.

Substituting ideologically appealing models for data can cause harm either from costly regulatory overkill, or from missing unexpected effects.

CLIMATE MODELS CAN’T BE VALIDATED

In defending its “Endangerment Finding” that carbon dioxide and other greenhouse gases are a threat to public health and welfare, the U.S. Environmental Protection Agency (EPA) declares that “climate models have been properly validated.” IPCC expert reviewer Vincent Gray, however, states that the Intergovernmental Panel on Climate Change cannot call the results of its models predictions, because the models have never been validated. Hence, results are referred to as “projections” (The Week That Was 10/8/11).

Current climate models cannot be validated at all, writes Fred Singer, because they are nonlinear chaotic models that produce different results each time they are run. In efforts to show consistency between results and observations, error bars have been extended so far as to be almost meaningless. While at least 10 runs are needed to establish a mean, the model with the greatest number of published runs is one from Japan, which had five, showing widely varying results.
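To illustrate the point (a minimal sketch in Python using the textbook Lorenz ’63 system, not an actual climate model): in a nonlinear chaotic system, perturbing the initial state by one part in a billion yields a run that eventually bears no resemblance to the original, so no single run can be checked against observations.

```python
# Illustrative sketch only: the Lorenz '63 system, the classic textbook
# example of chaos, stands in here for a climate model. Parameters are
# the standard textbook values (sigma=10, rho=28, beta=8/3).

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(x0, steps=50_000):
    """Integrate for 50 model time units from the state (x0, 1.0, 1.05)."""
    state = (x0, 1.0, 1.05)
    for _ in range(steps):
        state = lorenz_step(state)
    return state

print(run(1.0))          # baseline run
print(run(1.0 + 1e-9))   # same model, initial state perturbed by 1e-9;
                         # the final states differ completely
```

This run-to-run scatter is why a mean over many runs, not any single run, is the quantity one could even attempt to validate.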

Table 2.11 in an appendix of the IPCC Fourth Assessment Report assesses the level of understanding of 16 Forcing Factors found in the models: in 11, the level is “low” or “very low.” The Pacific Decadal Oscillation (PDO), among other possibly important factors, is not considered at all. For the EPA to claim models are validated is the equivalent of a “government agency certifying that a passenger aircraft is reliable after the engineers state that the reliability of 69% of the components is low to very low, and some important components may be missing” (ibid., available at www.sepp.org).

Another missing component, now coming to attention, is the effect of cosmic rays on clouds. According to scientists at the European Organization for Nuclear Research (CERN), cosmic rays may enhance the formation of pre-cloud seeds as much as ten-fold. The results might have been available 10 years ago had the research been funded. Based on satellite data since 1979, Danish physicist Henrik Svensmark found a correlation between solar activity, cloud cover, and cosmic rays. But when he reported possible climatic implications at a 1996 space conference, the sitting IPCC chair called him “scientifically…irresponsible”; it was a “really bad career move” (TWTW 9/10/11).

Climate models were supported by $32 billion in U.S. government spending alone between 1989 and 2009, writes Paul Driessen—without finding a single piece of evidence that human-caused CO2 emissions significantly affect climate (http://tinyurl.com/kqx4pe).
