Monday, May 12, 2014

Experimental evidence vs. quantitative theory

I wrote previously about the myth of the scientific method as it pertains to the realities of scientific endeavors. The perception of science and scientific research by the non-scientific community is that of a sterile Vulcan-like endeavor completely divorced from emotion and humanity.
Indeed, thinking of science as using the scientific method portrays science as an activity that is hugely unnatural: human beings are not by nature objective, judicious, disinterested, skeptical; rather, human beings jump to conclusions on flimsy evidence and then defend their beliefs irrationally. The widely held myth of the scientific method is one reason that scientists are often stereotyped as cold, even inhuman.1
Some branches of science, like chemistry, are data-rich, while others, like physics, are more theory-driven. Researchers must adapt their methods and implement different approaches to achieve scientific progress. This diversification may result in drastically differing points of view as to what constitutes good scientific practice. A geologist generally bases his or her theories on observable evidence. Continental drift was recognized and eventually accepted as fact based on the shapes of the continents, seismographic data from undersea earthquakes, and the similarities of fossil records and mountain ranges across fitting continents. There was no hypothesis based on mathematical principles that presupposed the existence of Pangaea. Physics, by contrast, is heavily oriented towards theory. Modern physics is taught through mathematically formulated laws. Observations are meant to confirm the theory, and deviations discovered through testing are usually explained away as experimental errors.
Naturally, then, physicists tend to regard quantitative theory as the epitome of science and of scientificity; and, secretly or not so secretly, they see geology and geologists as somewhat less than highly scientific. So, too, physicists have learned that it is possible to find distinct, single causes for the variety of phenomena with which they deal, the phenomena themselves being identifiably and distinctly discrete. And for those reasons, and also because they can control all the relevant factors, physicists know that they can perform “crucial experiments” that compel nature to deliver definitive answers. Geologists, on the other hand, learn that their phenomena overlap one another, that diverse “causes” conjointly produce any given geological circumstance, and that the most scientific approach is not that of seeking crucial tests but that of “multiple working hypotheses,” for in geology one must, over long periods of time, be willing to countenance the possibility that any one of several competing explanations may ultimately turn out the best.2
We should be careful not to crown one methodology as the one true path to enlightenment. Different tools are required for different jobs. Scientists have to work with what’s available to them. Not every problem can be solved by mathematics or experimentation alone.
Some scientists thus do a lot of speculating, whereas others do virtually none, and there is no warrant to call the one approach scientific and the other not. It is just the case that different aspects of nature yield to investigation at different rates and in different ways, and so scientists come to differ in all manner of things. Whenever a generalization is made about science or about scientists, disregarding thereby the fact that there are so many distinct sorts of science, misconception is promulgated. What is true or fruitful for a field that is mature, data rich, and relatively quantitative (thermodynamics, say) is scientific for that specialty even though it may be entirely inappropriate and therefore unscientific for a field that is young, descriptive, and data poor (some bits of planetary science, say).3
When I examine the history of nutrition science, I find it hard to fit it into either of these molds. It’s not that I believe modern nutrition science has mastered a perfect balance of experimental vs. theoretical evidence. Quite the opposite. Nutritional advice prior to the early 20th century was based heavily on experimental human evidence. Many low carbohydrate diet devotees are familiar with William Banting and his booklet called Letter on Corpulence, Addressed to the Public. The chain of events that led Banting to his new diet traces back to early diabetes research.
Eventually, in August of 1862, Banting consulted a noted Fellow of the Royal College of Surgeons, an ear, nose and throat specialist, Dr. William Harvey. It was an historic meeting. Dr. Harvey had recently returned from a symposium in Paris where he had heard Dr. Claude Bernard, a renowned physiologist, talk of a new theory about the part the liver played in the disease of diabetes. Bernard believed that the liver, as well as secreting bile, also secreted a sugar-like substance that it made from elements of the blood passing through it. This started Harvey’s thinking about the roles of the various food elements in diabetes and he began a major course of research into the whole question of the way in which fats, sugars and starches affected the body.4
We now know in hindsight that Dr. Bernard’s model of glucose metabolism and diabetes isn’t entirely accurate. His hypotheses were speculation based on observations and deduction. However, William Harvey’s prescription to Banting was closer to being right than the hypothesis it was based on. This is because the evidence of the diet’s effectiveness preceded the ideas created to explain why it worked. The results of carbohydrate restriction have been known throughout history and studied repeatedly under controlled conditions. Even today, there is disagreement over why low carbohydrate diets fare better than low fat diets. Some argue it’s purely a result of satiety, which allows for better calorie restriction. Most point to the hormonal effects of insulin driven by blood sugar. Regardless, the results begat the hypotheses, not vice-versa. Things began to change for nutrition in the early 20th century with the advent of the lipid hypothesis.
The evidence initially cited in support of the hypothesis came almost exclusively from animal research—particularly in rabbits. In 1913, the Russian pathologist Nikolaj Anitschkow reported that he could induce atherosclerotic-type lesions in rabbits by feeding them olive oil and cholesterol. Rabbits, though, are herbivores and would never consume such high-cholesterol diets naturally. And though the rabbits did develop cholesterol-filled lesions in their arteries, they developed them in their tendons and connective tissues, too, suggesting that theirs was a kind of storage disease; they had no way to metabolize the cholesterol they were force-fed. “The condition produced in the animal was referred to, often contemptuously, as the ‘cholesterol disease of rabbits,’” wrote the Harvard clinician Timothy Leary in 1935.5
What made this hypothesis so attractive was not that it conclusively mirrored experimental results in humans. It was that serum cholesterol could be measured in human patients. There was no consistent and demonstrable evidence that dietary cholesterol, and the later implicated dietary fats, were responsible for atherosclerosis in humans. Yet the pressing need to solve the newly uncovered scourge of heart disease led the lipid hypothesis and diet-heart hypothesis to be considered facts by which all subsequent research would be judged. The field of nutrition science seemed to flip from experimental evidence to theory driven in a short period of time. The problem is the “theory” was never truly a theory. There was no mathematical proof that predicted atherosclerosis based on serum cholesterol levels in humans. Medical science couldn’t provide us with something so concrete, predictive, and falsifiable as a formula. However, it gave us something else: observational studies. Data could be collected and manipulated to provide the evidence that confirmed the hypotheses. One of the most infamous examples was Ancel Keys’s Seven Countries Study.
Despite the legendary status of the Seven Countries Study, it was fatally flawed, like its predecessor, the six-country analysis Keys published in 1953 using only national diet and death statistics to support his points. For one thing, Keys chose seven countries he knew in advance would support his hypothesis. Had Keys chosen at random, or, say, chosen France and Switzerland rather than Japan and Finland, he would likely have seen no effect from saturated fat, and there might be no such thing today as the French paradox—a nation that consumes copious saturated fat but has comparatively little heart disease.6
The bias from the lipid hypothesis and diet-heart hypothesis has resulted in possible suppression (and maybe even fraud) when research results are tabulated. Chris Masterjohn wrote about this based on the results of a meta-analysis in the American Journal of Clinical Nutrition.

I don’t wish to demonize scientists, in general, for believing they may have discovered something truly revolutionary based on relatively scant evidence. History is replete with examples of seemingly underdog scientists who happened to be on the right side of an argument while railing against the entrenched establishment.
And all the sciences offer instances where major new discoveries have not been accepted for quite a while because they ran counter to existing beliefs—consider the discoveries of Hermann Helmholtz and Max Planck, of Joseph Lister in medicine, of Oliver Heaviside in mathematical physics, Thomas Young’s wave theory of light, and the cases of Louis Pasteur and Gregor Mendel and Svante Arrhenius and on and on and on.7
xkcd comic: Revolutionary
Munroe, Randall. "Revolutionary." xkcd.
Most people have probably never heard of Dr. Claude Bernard and his ultimately inaccurate model of diabetes. In fact, just about every incorrect scientific idea, with a few notable exceptions, is left to history’s dustbin.
The corpus of science at any stage always includes only what has, up until then, stood the test of time. We see nothing in it of the trial and error, backing and filling, dismantling and rearranging that actually took place in the past, be that centuries ago or just a few years ago. Only when we read the actual accounts written by early students of nature do we begin to realize how many errors and false starts there were that left no traces in modern scientific texts. One can give excellent, objective, rational grounds now for the science in the textbooks, but that does not mean that it was actually assembled in an impartial, rational, steady manner.8
So, how can we keep ourselves honest? What of the scientific method? If it doesn’t reflect the reality of scientific research, then what use is it?
That the scientific method is a myth, that it does not explain the success of science and that scientists in practice do not follow the method, does not mean that the method itself should now be ignored or disparaged. Rather, it should be seen as an ideal—an admittedly unattainable ideal—not as a description of actual practice.9
We cannot cease all research that falls short of our lofty ideals. Science is not as neat and orderly as many of us seem to believe. Its history is littered with reputable scientists taking shortcuts, exploiting hacks, and using fudge factors to make their hypotheses work. Nevertheless, the scientific method serves to convey an idealized experience and to establish behavioral models of scientific conduct. Observational data serves a noble purpose: it helps us find the cracks where we can dig for the treasure we’re looking for. However, we cannot be so careless as to simply point at a fissure and proclaim authoritatively, “There be gold in thar dem hills!” That requires digging, and that’s the part that’s woefully missing in modern nutrition science.


  1. Bauer, Henry H. “The So-called Scientific Method.” Scientific Literacy and the Myth of the Scientific Method. Urbana: University of Illinois Press, 1992. 32. Print.
  2. p. 27.
  3. p. 31.
  4. Groves, Barry. “William Banting: Father of the Low-Carbohydrate Diet.” Weston A. Price Foundation. N.p., 30 Apr. 2003. Web. 6 May 2014.
  5. Taubes, Gary. “The Eisenhower Paradox.” Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease. New York: Knopf, 2007. 36. Print.
  6. Taubes, Gary. “The Inadequacy of Lesser Evidence.” 54.
  7. Bauer, Henry H. 23.
  8. p. 36.
  9. p. 39.
