In my previous blog, I ended by poking fun at the people who superficially cling to the word “science” as the be-all and end-all of reasoning. What I did not mention is the other end of the spectrum: people who see science as a never-ending string of contradictions and thus find no value in it. These people often dismiss the importance of science, even though they drive a car to work, fly to Cabo, write emails, take aspirin, receive vaccinations against deadly diseases, and watch movies on a phone (which is supposedly smart) that fits in their pocket. Clearly, science is just spinning its wheels. Joking aside, these criticisms draw on the self-correcting nature of science, which is actually a virtue once you fully understand how scientific theories are arrived at.
Theory vs Hypothesis
If you could x-ray science, you would see that its bones are made of theories and hypotheses. It is important to note that the colloquial use of the term “theory” is vastly different from its use in science. Theories, much like hypotheses, make predictions; a theory, however, makes predictions that are rarely (if ever) unsupported, because empirical tests have corroborated them numerous times.
For example, when Darwin proposed natural selection as the mechanism for evolution, he made numerous predictions that could not be readily tested in his time. However, with the incredible advances in molecular biology, we have been able to examine variations in the sequences of nucleic acids between organisms (i.e. DNA analysis). Among many other things, Darwin predicted that humans and other primates evolved from a common ape-like ancestor. Supposing that Darwin was correct, we would expect to see fewer differences between the DNA sequences of humans and other primates than between those of humans and cats, since according to the theory, humans and other primates share a common mom and dad much more recently than humans and cats do. Thus, the finding that humans share about 95% of their DNA with chimpanzees is just one of many findings that corroborates the theory of evolution.1
Other theories you may be familiar with are atomic theory, the theory of relativity, germ theory, cell theory, and so on. Often, people will try to impugn a theory with statements to the effect of “evolution is just a theory.” However, once someone understands how theories are established, this statement is as ridiculous as saying “germs and atoms are just theories.” What such people really mean to say (although incorrectly) is that these are hypotheses, as if to suggest that science isn't really sure one way or the other.
Hypotheses come in many varieties: they can be based on lots of evidence, some evidence, or sometimes no evidence at all, and the longer they bounce around without being refuted, the stronger the supporting evidence becomes. Atomic theory, for example, started off as a hypothesis with the pre-Socratics around 400 BCE. Naturally, scientific exploration has fine-tuned it like a small European sports car, yet the initial hypothesis has kept its central idea: that the universe is composed of atoms. Evidence supporting atomic theory can be seen in particle accelerators, when you dissolve salt in water, heat oil in a pan, or even when you watch ice melt into water.
Hypotheses become theories when they are extensively supported with experimental and observational data.
The philosopher Karl Popper championed the importance of falsifiability in science, and while many of Popper’s ideas were tremendously controversial, the importance of falsifiability is one that everyone seems to happily agree on.
I found that those of my friends who were admirers of Marx, Freud and Adler were impressed by a number of points common to these theories and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: The world was full of verifications of the theory. Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth, who refused to see it, either because it was against their class interest or because of their repression which were still “unanalysed” and crying aloud for treatment. (Popper, 1963)
Einstein provides a very nice example of how falsifiability plays a role in science even when a theory seems extremely intangible. In 1905, Einstein published the Special Theory of Relativity, which gave us E = mc². In 1916, he published the General Theory of Relativity, which predicted that light bends with the distortions of space and time (i.e. it is affected by gravity). The claim was so radical, so audacious, that the astrophysicist and philosopher Arthur Eddington made an expedition to the island of Príncipe, off the western coast of Africa, to test it. If Einstein was correct, then during a solar eclipse the stars near the Sun would appear in a different position than normally anticipated, because the distortion of space around the Sun would bend their light; conversely, if the stars near the Sun did not change position, then the theory could be rejected. This kind of testability is exactly what determines whether a hypothesis or theory is falsifiable. If I had asked you beforehand to devise a test that could determine whether or not light bends, you would probably have scratched your head for a while (as would I). As you might have guessed, Eddington’s photographs of the stars near the Sun during the eclipse confirmed Einstein’s prediction.
It is a great exercise to think about the testability of a claim by thinking about what evidence would be needed in order to falsify it. For example, Freud claimed that conflicts between an individual’s conscious and unconscious mind resulted in neurotic behavior; problematically, since by definition the unconscious mind is “not available to introspection,” it is not testable. If it is not testable, it is not falsifiable. If it is not falsifiable, it isn’t science.
Assumptions: the pitfall of a good hypothesis
Naturally, the results of experiments do not always support the hypothesis; however, this does not mean that the hypothesis is wrong. The magic of good critical thinking is finding where a hypothesis might be carrying hidden baggage, or more specifically, assumptions. Remember Semmelweis and the use of chlorinated lime solution? Semmelweis hypothesized that if the cause of childbed fever was in fact contamination by putrid matter from the morgue, then introducing something that would kill the putrid matter (i.e. an antiseptic) would stop the contamination. While he guessed well, this hypothesis carried the assumption that a chlorinated lime solution works as an antiseptic; since this predates germ theory, however, nobody even knew what an antiseptic was (let alone an effective one). An even stronger example of the difficulty with assumptions played out historically in what is called stellar parallax.
A great debate that began in ancient Greece and did not end until Copernicus in the 16th century was whether the universe is geocentric (Earth at the center) or heliocentric (Sun at the center). Being clever at geometry, the Greeks thought of a way to test the two competing hypotheses: if the universe were heliocentric, then there would be a calculable difference in the positions of the stars at one time of year versus another; if it were geocentric, there would be no change. The Greeks measured, and sure enough, they concluded that the stars did not seem to shift, ergo the universe must be geocentric.
You can replicate this test yourself. Close your right eye and extend your thumb out in front of you, then pick an object you can cover with the width of your thumb (ideally about 3-10 ft away). Now open your right eye and close your left (i.e. switch the closed eye). Alternating back and forth, you should see the object behind your thumb appear to jump from side to side. Looking at the diagram below, “July” would be your left eye’s perspective and “January” would be your right eye’s perspective.
The first problem for the Greeks was that they never imagined the stars were over 40 trillion kilometers away (about 4.4 light-years to the nearest star system), so they didn’t realize how small the shifts they were looking for really were. For perspective, the Earth is 40,000 km in circumference, so you would have to circle the Earth a billion times to cover the distance between us and our nearest stellar neighbor, Alpha Centauri. The second problem for the ancients was that they didn’t know which star to pick out as a reference point. It was a hypothesis carrying assumptions that were nearly impossible to know at the time.
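To get a feel for just how small the shift is, here is a rough back-of-the-envelope sketch in Python. It uses the small-angle approximation and round figures for the Earth-Sun distance and the distance to Alpha Centauri (the exact values don't change the punchline):

```python
# Back-of-the-envelope stellar parallax estimate (small-angle approximation).
# Rounded assumptions: 1 AU ~ 1.496e8 km; Alpha Centauri ~ 4.13e13 km (~4.4 ly).
AU_KM = 1.496e8          # Earth-Sun distance: the baseline of the "two eyes"
ALPHA_CEN_KM = 4.13e13   # distance to our nearest stellar neighbor

# For tiny angles, angle (in radians) ~ baseline / distance
angle_rad = AU_KM / ALPHA_CEN_KM

# Convert radians to arcseconds (1 radian ~ 206,265 arcseconds)
angle_arcsec = angle_rad * 206265

print(f"Parallax of Alpha Centauri: {angle_arcsec:.2f} arcseconds")
# Roughly 0.75 arcseconds, i.e. thousands of times smaller than
# the apparent width of the full Moon (~1,800 arcseconds)
```

A shift of well under one arcsecond is hopelessly beyond naked-eye astronomy, which is why stellar parallax was not actually measured until the 1830s, long after telescopes had matured.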
The moral of the story is that when a hypothesis fails a test, we cannot simply throw it away, because the negative result might be the outcome of an uncontrolled confound in the experimental design. This means that if the results of an experiment do not support a hypothesis, we go back and ask: What were my controls? Do I have any assumptions in my methods? Are there confounds I haven’t controlled for?
The Scientific Method
Putting all these pieces together, you will probably recall from one of your high school science classes a flow chart that looks somewhat like this:
I call this Version 1.0 because this is the scientific method in a vacuum. Science historian Steven Shapin, a professor at Harvard University, makes a compelling argument that one of the most important elements shaping the Scientific Revolution was the way scientists began working together to coordinate research.3 This highlights an aspect of the scientific method that Version 1.0 completely lacks: its unique social structure.
Science cannot evolve if the knowledge of the past is too unreliable to build on, nor can it evolve without the influence of society and the benefits it offers the human condition. Peer review is required for publication at the highest level, and it formalizes how scientists maintain correspondence and quality control; thus, while its quality can vary, peer review and editorial standards are among the most significant elements of good science. They are the measure by which all science is scrutinized for miscalculations, methodological flaws, invalid conclusions, and sometimes, deception.
Scientific Method 1.0 – (http://courses.washington.edu/esrm430/sm.jpg)
Scientific Method 2.0 – (http://arstechnica.com/science/2009/03/building-a-better-way-of-understanding-science/)
(1) Britten, R. J. (2002). Divergence between samples of chimpanzee and human DNA sequences is 5%, counting indels. Proceedings of the National Academy of Sciences, 99, 13633–13635.
(2) Lewin, K. (1952). Field theory in social science: Selected theoretical papers by Kurt Lewin. London: Tavistock.
(3) Shapin, S. (1996). The Scientific Revolution. Chicago: University of Chicago Press.