Dissecting Behavior

Dogs, Science, and the Biology of Behavior

Month: September, 2013

Blogging Down Our Brains

“By giving us the opinions of the uneducated, journalism keeps us in touch with the ignorance of the community.” – Oscar Wilde

Anyone interested in furthering his or her knowledge of a subject is faced with a depressing enigma: bad information vastly outnumbers good information.  As a person who lives life trying to acquire a deeper understanding of the world, I find that bad information really pisses me off.

Recently, a blog post was published titled “5 Incredible Ways Dogs Can Read Your Mind” (Emery, 2013).  In it, the author presents several claims to support the premise that dogs can read our minds.  Formally, this is referred to as theory of mind, which Emery defines as the “understanding that other beings have different perceptions, and that those perceptions can be valuable” (Coren, 2011a).  This is wrong.  To be fair, even scientists do not always define it accurately; however, choosing to reference a blogger who actually took the time to read the conclusions and discussions of the studies they were reviewing would have been a good start.

The correct definition:

Theory of mind is the ability to make accurate inferences to understand the behavior of other animals because of abstract (theory-like) representations of the causal relationship between unobservable mental states and observable behavior (Premack & Woodruff, 1978, as cited in Penn & Povinelli, 2007—emphasis my own).

More concisely, theory of mind requires the ability to know that the behavior of another animal is a product of its cognitive state—this is distinctly different from responding to environmental factors, including another animal’s behavior (Udell & Wynne, 2011).  While many authors have described theory of mind as dated and potentially no longer useful (e.g. Horowitz, 2011), the goal of this blog isn’t to weigh in on whether or not theory of mind is a valid concept in non-human animals such as dogs—best to let that war rage on in the academic community.  Instead, the goal here is two-fold: 1) to correct the litany of fallacies that went viral with Emery’s blog and 2) to use it as a model for understanding how important the source of information is.  With that as our base, let’s start at the top.  If you haven’t read the blog I am referring to yet, here is the link.

Note: the titles of each section below correspond to the paragraph titles of Emery’s blog and are here as signposts for which statements I am pulling apart: they are not claims I am making.

#5: Dogs are Capable of Empathy

“Yawning is a phenomenon directly connected to empathy, and as such has only been found to occur in species capable of empathizing (i.e. humans, and other primates), and only then within a single species.” (Emery, 2013)

First, yawning behavior is widespread and believed to be common to ALL vertebrates, including mammals, birds, fish, amphibians, and reptiles (Baenninger, 1997).  Second, there are EIGHT different hypotheses regarding the function of yawning—if it is indeed even functional—so the communication (empathy) hypothesis is just one of several ‘stabs’ at why animals yawn (Guggisberg et al., 2010).  Problematically, the communication hypothesis is not unilaterally supported, and the studies that do support it lack the controls needed to rule out competing hypotheses that would directly confound the results.  Even if we accepted that contagious yawning as a function of empathy was viable and true, researchers employing more stringent methods have been unable to conclude that dogs show signs of contagious yawning behavior (Harr et al., 2009).

“So obviously dogs have an uncanny ability to read our emotions … but how? Well, it’s because all humans, whether right- or left-handed, display our emotions predominantly on the right side of our faces.” (Emery, 2013)

You’d think with a 50/50 chance she might have gotten this one right merely by dumb luck—but no, humans display their emotions predominantly on their LEFT side, not their right (Borod et al., 1997).  Lefts and rights aside, associative learning is an alternate explanation for any gaze bias observed in dogs.  Perhaps ironically, it could even be argued that if dogs could read our minds, they wouldn’t need to check in with the more emotional side of our face to know whether we are just slightly angry about the Christmas ham getting eaten, or really angry.  Empathy is just as hot a topic as theory of mind, and the definitions have historically tended to overlap—often stressing the importance of “cognitive perspective taking” (Davis, 1983).  Despite the sticky separation of these concepts, empathy refers to the ability of one individual to infer and share the emotional states of another (Gallese, 2003; Völlm et al., 2006).  Thus, gaze research simply cannot answer the question of whether or not non-human animals have empathy.

#4: Dogs Understand That Your Visual Perspective Is Different from Their Own

Yes, they do, but does this constitute evidence that they possess theory of mind?  Opponents of the “perspective taking” argument for theory of mind make a compelling counterpoint: simply that all animals learn.  If a moose walks into a tree, it does not turn around and walk the other way, because at some point in its life it learned it could walk around it.  A group of gazelles foraging and scanning the environment have learned to scan for predators because being eaten by a cheetah sucks.  A prey animal thus wouldn’t survive very long without some knowledge of potential threats in the environment.  Knowing this, if one gazelle stops eating, freezes, and looks across the field, the fact that all the gazelles are likely to stop foraging and check for danger does not prove the presence of theory of mind; alternatively, they could be responding based on empirical knowledge that it is in their interest to keep a lookout for hungry kittens.

If you’re walking along the street and you see someone looking up, you are likely to look up as well.  The novelty of seeing someone looking up is a pretty strong stimulus for social facilitation (looking up too).  Just like the gazelles with their knowledge that the environment contains dangers to be aware of, we understand that a piano or a stockbroker falling on our head is likely to put a dent in our afternoon.

Many researchers have demonstrated how dogs and wolves have varying abilities to search around visual barriers (e.g. Bräuer et al., 2004; Range et al., 2011; Virányi et al., 2009); however, ultimately here is what you have to decide for yourself:

  1. The dog is reading your mind and knows that you are looking at an object around the corner
  2. The dog notices that your eyes are looking somewhere to their right or left (an observable behavior) and is curious to investigate – oh there is a barrier?  Hmmm, let me walk around that

I think this research is interesting, but even so, it does not constitute evidence for theory of mind because it does not rule out competing hypotheses: a dog could simply be taking information from its visual environment—not attending to cognitive states to understand the causal relationship between unobserved mental states and observed behavior.  In my opinion, explanation 2 seems much more practical, and it is further supported by our understanding that dogs are extremely sensitive to gaze, since it is one of the most common signals they use in agonistic (conflict) behavior.

More egregiously though, Emery continues and states that dogs will abandon all morality and go for a piece of food the second you close your eyes, turn your back, or place a barrier between you and the food.  This is a complete misinterpretation of the research done on this phenomenon, and thus absolute gibberish.  Leaving a food item alone is trainable.  Browse around YouTube and you will find plenty of videos where a dog is told to wait before his or her dinner bowl is set down, and the owner walks out of the room, or even the house, before returning to release the dog to eat.  Honestly, a solid ‘leave it’ is one of the easiest behaviors to train, so this kind of research has to be interpreted very carefully regarding what it actually means, if anything, for the lives of our dogs.

#3 Dogs Assume That You Know Something They Don’t

As if I wasn’t already pounding my head against the desk, the author then uses the observation that dogs want to eat what we are eating as support for doggie mind-reading abilities.  Unfortunately, this is not a trait unique to dogs (or humans, for that matter).  Many social mammals select food preferences based on their group’s behavior.  For example, rats learn from group members what to eat and will learn to avoid the smell of poisoned food—a neophobic response, and one of the reasons why rat poison doesn’t eliminate rat populations (Galef & Clark, 1971).  It is a fascinating behavior, but it does not require mind reading—merely rudimentary social facilitation.

#2 Dogs Understand Pointing

“…but the fact of the matter is that dogs and humans are the only two species currently clinging to our big blue spaceball who understand the point of pointing.”  (Emery, 2013)

Other than wolves (Udell et al., 2008), cats (Miklósi et al., 2005), parrots (Giret et al., 2008), bats (Hall et al., 2011), jackdaws (von Bayern & Emery, 2009), goats (Kaminski et al., 2005), dolphins (Pack & Herman, 2004), fur seals (Scheumann & Call, 2004), and ravens (Schloegl et al., 2008; for a review, see Udell et al., 2012)… hmm, only two species, you say?  Monty Python jokes about the Spanish Inquisition aside, the ability of an animal to learn that it can walk around a tree is no different from the ability to learn that a finger might be directing it towards food.  Animals who learn this distinction are socialized to people—period.  Nobody has snatched a dog that has never seen a human, tossed it in a room, pointed at a cup with food inside, and seen the dog dive in and say “thank you, master!”  No, it would be shaking in the corner, terrified for its life.  Animals who have been socialized to humans respond to pointing and other human communicative gestures (e.g. gazing and pointing with a foot): pick your species.  Variance in this skill can be just as easily explained by the failure of many animals to follow directions (just ask any school teacher how many times they have to remind students to write their name at the top of a test).


#1 Dogs Know When You Like Someone Else More

Finally, Ms. Emery claims that oxytocin is a “love- and jealousy-related hormone” (Coren, 2011b).  This claim comes from a single study involving humans playing a computer game (Shamay-Tsoory et al., 2009); however, the conclusions the authors make can be reinterpreted to fit the standard functional understanding of oxytocin (Tops, 2010).  Oxytocin is a mammalian hormone that triggers milk letdown in nursing females and is involved in a wide variety of social behaviors: increasing pleasure during orgasm, increasing time of social contact, facilitating memory of sexual partners, protecting fetal neurons from injury during delivery, improving navigational strategies, and working with vasopressin receptors to aid pair-bonding (Breedlove et al., 2010).

If you don’t see references: assume the author is an idiot

It should be clear by now that cracked.com might be just about the worst source for dog behavior science, and if you have been following some of the citation trails, Psychology Today might appear to be a questionable source as well.  There are more authors writing books and blogging about dogs than there are dogs in family homes, and they range from people with high school diplomas to PhDs.  Citations and a reference list are an excellent way to begin to decipher the quality of information, though they are not everything either.

While bad information frustrates the daylights out of me, ultimately the burden falls on the consumer to examine the evidence.  This is one of many reasons why a list of references is so important, and why it is best to assume the author is likely an idiot if they don’t bother to acknowledge the sources of their information in a clear, concise reference section at the end.  If there is no reference list, then be sure to ask yourself whether you are reading an opinion piece, or an opinion piece veiled as accurate science.


Baenninger, R. (1997). On yawning and its functions. Psychonomic bulletin & review, 4(2), 198–207.

Bräuer, J., Call, J., & Tomasello, M. (2004). Visual perspective taking in dogs (Canis familiaris) in the presence of barriers. Applied Animal Behaviour Science, 88(3–4), 299–317. doi:10.1016/j.applanim.2004.03.004

Breedlove, S. M., Watson, N. V., & Rosenzweig, M. R. (2010). Biological Psychology: An Introduction to Behavioral, Cognitive, and Clinical Neuroscience, Sixth Edition (6th ed.). Sinauer Associates, Inc.

Borod, J. C., Haywood, C. S., & Koff, E. (1997). Neuropsychological aspects of facial asymmetry during emotional expression: A review of the normal adult literature. Neuropsychology Review, 7(1), 41–60.

Coren, S. (2011a). Can Your Dog Read Your Mind? Retrieved September 23, 2013, from http://www.psychologytoday.com/blog/canine-corner/201106/can-your-dog-read-your-mind

Coren, S. (2011b). Do Dogs Feel Jealousy and Envy? Retrieved September 23, 2013, from http://www.psychologytoday.com/blog/canine-corner/201111/do-dogs-feel-jealousy-and-envy

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113–126. doi:10.1037/0022-3514.44.1.113

Emery, L. (2013). 5 Incredible Ways Dogs Can Read Your Mind.  Retrieved September 23, 2013, from http://www.cracked.com/article_20572_5-incredible-ways-dogs-can-read-your-mind.html

Galef, B. G., & Clark, M. M. (1971). Social factors in the poison avoidance and feeding behavior of wild and domesticated rat pups. Journal of Comparative and Physiological Psychology, 75(3), 341–357. doi:10.1037/h0030937

Gallese, V. (2003). The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity. Psychopathology, 36(4), 171–180. doi:72786

Giret, N., Miklósi, Á., Kreutzer, M., & Bovet, D. (2008). Use of experimenter-given cues by African gray parrots (Psittacus erithacus). Animal Cognition, 12(1), 1–10. doi:10.1007/s10071-008-0163-2

Guggisberg, A. G., Mathis, J., Schnider, A., & Hess, C. W. (2010). Why do we yawn? Neuroscience & Biobehavioral Reviews, 34(8), 1267–1276. doi:10.1016/j.neubiorev.2010.03.008

Hall, N. J., Udell, M. A. R., Dorey, N. R., Walsh, A. L., & Wynne, C. D. L. (2011). Megachiropteran bats (Pteropus) utilize human referential stimuli to locate hidden food. Journal of comparative psychology (Washington, D.C.: 1983), 125(3), 341–346. doi:10.1037/a0023680

Harr, A. L., Gilbert, V. R., & Phillips, K. A. (2009). Do dogs (Canis familiaris) show contagious yawning? Animal Cognition, 12(6), 833–837. doi:10.1007/s10071-009-0233-0

Horowitz, A. (2011). Theory of mind in dogs? Examining method and concept. Learning & Behavior, 39(4), 314–317. doi:10.3758/s13420-011-0041-7

Kaminski, J., Riedel, J., Call, J., & Tomasello, M. (2005). Domestic goats, Capra hircus, follow gaze direction and use social cues in an object choice task. Animal Behaviour, 69(1), 11–18. doi:10.1016/j.anbehav.2004.05.008

Miklósi, Á., Pongrácz, P., Lakatos, G., Topál, J., & Csányi, V. (2005). A Comparative Study of the Use of Visual Communicative Signals in Interactions Between Dogs (Canis familiaris) and Humans and Cats (Felis catus) and Humans. Journal of Comparative Psychology, 119(2), 179–186. doi:10.1037/0735-7036.119.2.179

Pack, A. A., & Herman, L. M. (2004). Bottlenosed dolphins (Tursiops truncatus) comprehend the referent of both static and dynamic human gazing and pointing in an object-choice task. Journal of comparative psychology (Washington, D.C.: 1983), 118(2), 160–171. doi:10.1037/0735-7036.118.2.160

Penn, D. C., & Povinelli, D. J. (2007). On the lack of evidence that non-human animals possess anything remotely resembling a “theory of mind.” Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 731–744. doi:10.1098/rstb.2006.2023

Range, F., & Virányi, Z. (2011). Development of Gaze Following Abilities in Wolves (Canis Lupus). PLoS ONE, 6(2), e16888. doi:10.1371/journal.pone.0016888

Scheumann, M., & Call, J. (2004). The use of experimenter-given cues by South African fur seals (Arctocephalus pusillus). Animal cognition, 7(4), 224–230. doi:10.1007/s10071-004-0216-0

Schloegl, C., Kotrschal, K., & Bugnyar, T. (2008). Do common ravens (Corvus corax) rely on human or conspecific gaze cues to detect hidden food? Animal cognition, 11(2), 231–241. doi:10.1007/s10071-007-0105-4

Shamay-Tsoory, S. G., Fischer, M., Dvash, J., Harari, H., Perach-Bloom, N., & Levkovitz, Y. (2009). Intranasal Administration of Oxytocin Increases Envy and Schadenfreude (Gloating). Biological Psychiatry, 66(9), 864–870. doi:10.1016/j.biopsych.2009.06.009

Tops, M. (2010). Oxytocin: Envy or Engagement in Others? The Striatum, Psychopathy, and Molecular Mechanisms of Addiction, 67(1), e5–e6. doi:10.1016/j.biopsych.2009.08.032

Udell, M. A. R., Dorey, N. R., & Wynne, C. D. L. (2008). Wolves outperform dogs in following human social cues. Animal Behaviour, 76(6), 1767–1773. doi:10.1016/j.anbehav.2008.07.028

Udell, M., & Wynne, C. (2011). Reevaluating canine perspective-taking behavior. Learning & Behavior, 39(4), 318–323. doi:10.3758/s13420-011-0043-5

Udell, M. A., Spencer, J. M., Dorey, N. R., & Wynne, C. D. (2012). Human-socialized wolves follow diverse human gestures and they may not be alone. Int. J. Comp. Psychol, 25, 97–117.

Virányi, Z., Gácsi, M., Kubinyi, E., Topál, J., Belényi, B., Ujfalussy, D., & Miklósi, Á. (2008). Comprehension of human pointing gestures in young human-reared wolves (Canis lupus) and dogs (Canis familiaris). Animal Cognition, 11(3), 373–387. doi:10.1007/s10071-007-0127-y

Völlm, B. A., Taylor, A. N. W., Richardson, P., Corcoran, R., Stirling, J., McKie, S., … Elliott, R. (2006). Neuronal correlates of theory of mind and empathy: A functional magnetic resonance imaging study in a nonverbal task. NeuroImage, 29(1), 90–98. doi:10.1016/j.neuroimage.2005.07.022

Von Bayern, A. M. P., & Emery, N. J. (2009). Jackdaws Respond to Human Attentional States and Communicative Cues in Different Contexts. Current Biology, 19(7), 602–606. doi:10.1016/j.cub.2009.02.062

Myths, Legends, and Science: Part 2

[Part 1]

In my previous blog, I ended by poking fun at people who superficially cling to the word “science” as the be-all and end-all of reasoning.  What I did not mention is the other end of the spectrum: people who see science as a never-ending string of contradictions and thus find no value in it.  These people often dismiss the importance of science, even though they drive a car to work, fly to Cabo, write emails, take aspirin, receive vaccinations for deadly diseases, and watch movies on a phone (which is supposedly smart) that fits in their pocket.  Clearly science is just spinning its wheels.  Joking aside, these criticisms draw on the self-correcting nature of science; however, self-correction is actually a virtue once you fully understand how scientific theories are arrived at.


Theory vs Hypothesis

If you could x-ray science, you would see that its bones are made of theories and hypotheses.  It is important to distinguish here that the colloquial use of the term “theory” is vastly different from its use in science.  Theories, much like hypotheses, make predictions; however, a theory involves predictions that are rarely (if ever) unsupported, because empirical tests have corroborated them numerous times.

For example, when Darwin proposed natural selection as the mechanism for evolution, he created numerous predictions that could not be readily tested in his time.  However, with the incredible advances in molecular biology, we have been able to examine variations in the sequences of nucleic acids between organisms (i.e. DNA analysis).  Among many other things, Darwin predicted that humans and other primates evolved from a common ape-like ancestor.  Supposing that Darwin was correct, we would expect to see fewer differences in the DNA sequences between humans and chimpanzees than between humans and cats—since, according to the theory, humans and other primates share a common mom and dad much more recently than humans and cats do.  Thus, the finding that humans share about 95% of their DNA with chimpanzees is just one of many findings that corroborates the theory of evolution.1
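The logic of this prediction can be sketched numerically.  Below is a toy illustration (the DNA fragments are invented for the example; real analyses use long genome alignments and also account for insertions and deletions) showing how a simple substitution count turns “fewer differences” into evidence for “more recent common ancestor”:

```python
# Toy illustration of sequence divergence: count substitutions between
# equal-length, pre-aligned DNA fragments. The sequences are made up
# purely for demonstration.

def divergence(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions that differ (Hamming distance / length)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

human = "ATGGCCCTGTGGATGCGCCT"
chimp = "ATGGCCCTGTGGATGCGCCT"  # hypothetical: identical fragment
cat   = "ATGACCTTGTGCATGAGCGT"  # hypothetical: several substitutions

print(divergence(human, chimp))  # 0.0  -> very recent common ancestor
print(divergence(human, cat))    # 0.25 -> more distant common ancestor
```

Fewer mutations separating two sequences means fewer generations since the lineages split, which is exactly the pattern the theory predicts.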


Credit: M.F. Bonnan

Other theories you may be familiar with are atomic theory, the theory of relativity, germ theory, cell theory, and so on.  Often, people will try to impugn a theory with statements to the effect of, “evolution is just a theory.”  However, once someone understands how theories are established, this statement is as ridiculous as saying “germs and atoms are just theories.”  What they really mean to say—although falsely—is that these are hypotheses, as if to suggest that science wasn’t really sure one way or the other.

Hypotheses come in many varieties: they can be based on lots of evidence, some evidence, or sometimes no evidence at all, and the longer they bounce around without being refuted, the stronger the evidence becomes.  Atomic theory, for example, started off as a hypothesis with the pre-Socratics around 400ish BCE.  Naturally, scientific exploration has fine-tuned it like a small European sports car, yet the initial hypothesis has managed to maintain its central idea: that the universe is composed of atoms.  Evidence supporting atomic theory can be seen in particle accelerators, when you toss salt in water, cook oil in a pan, or even when you watch ice melt into water.

Hypotheses become theories when they are extensively supported with experimental and observational data

Kurt Lewin (1952)2


The philosopher Karl Popper championed the importance of falsifiability in science, and while many of Popper’s ideas were tremendously controversial, the importance of falsifiability is one that everyone seems to happily agree on.

I found that those of my friends who were admirers of Marx, Freud and Adler were impressed by a number of points common to these theories and especially by their apparent explanatory power.  These theories appeared to be able to explain practically everything that happened within the fields to which they referred.  The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated.  Once your eyes were thus opened you saw confirming instances everywhere: The world was full of verifications of the theory.  Whatever happened always confirmed it.  Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth, who refused to see it, either because it was against their class interest or because of their repression which were still “unanalysed” and crying aloud for treatment.  (Popper, 1963)


Einstein provides a very nice example of how falsifiability plays a role in science even when a theory seems extremely intangible.  In 1905, Einstein declared in his publication of the special theory of relativity that E=mc².  In 1916, Einstein published the general theory of relativity, which predicted that light bends with the distortions of space and time (i.e. is affected by gravity).  The claim was so radical—so audacious—that the astrophysicist Arthur Eddington made an expedition to an island off the western coast of Africa to test it.  If Einstein was correct, then the stars near the Sun would appear in a different position than normally anticipated, because the distortion of space around the Sun would have caused their light to bend—and vice versa: if the stars near the Sun did not change position, the theory could be rejected.  This kind of testability is implicit in determining whether a hypothesis or theory is falsifiable.  As you might have guessed, Eddington’s photographs of the stars near the Sun during the 1919 solar eclipse confirmed Einstein’s predictions.  If I had initially asked you to think of a test to determine whether or not light bends, you would probably scratch your head for a while (as would I).


Photograph from the island of Principe

It is a great exercise to think about the testability of a claim by thinking about what evidence would be needed in order to falsify it.  For example, Freud claimed that conflicts between an individual’s conscious and unconscious mind resulted in neurotic behavior; problematically, since by definition the unconscious mind is “not available to introspection,” it is not testable.  If it is not testable, it is not falsifiable.  If it is not falsifiable, it isn’t science.

Assumptions: the pitfall of a good hypothesis

Naturally, the results of experiments do not always support the hypothesis; however, this does not mean that the hypothesis is wrong.  The magic of good critical thinking is finding where a hypothesis might be carrying hidden baggage—more specifically: assumptions.  Remember Semmelweis and the use of chlorinated lime solution?  Semmelweis hypothesized that if the cause of childbed fever was in fact contamination by putrid matter from the morgue, then introducing something that would kill the putrid matter (i.e. an antiseptic) would stop the contamination.  While he guessed well, this hypothesis carried the assumption that a chlorinated lime solution works as an antiseptic.  However, since this predates germ theory, nobody even knew what an antiseptic was (let alone an effective one).  An even stronger example of the difficulty with assumptions played out historically in what is called stellar parallax.

A great debate that began in ancient Greece and did not end until Copernicus in the 16th century was whether the universe is geocentric (Earth at the center) or heliocentric (Sun at the center).  Being clever at geometry, the Greeks thought of a way to test the two competing hypotheses.  If the universe was heliocentric, then there would be a calculable difference in the positions of the stars in the sky at one time of year versus another—however, if the universe was geocentric, there would be no change.  The Greeks measured, and sure enough, they concluded that the stars did not seem to change; ergo, the universe must be geocentric.

You can replicate this test yourself: close your left eye and extend your thumb out in front of you, then pick an object you can cover with the width of your thumb (ideally about 3–10 ft away).  Now open your left eye and close your right (i.e. switch the closed eye)—alternating back and forth, you should see the object behind your thumb bounce back and forth.  Looking at the diagram below, “July” would be your left eye’s perspective and “January” would be your right eye’s perspective.


Stellar Parallax

The first problem for the Greeks was that they never imagined that the stars were over 40 trillion kilometers away (about 4 light years), so they didn’t realize how small the changes they were looking for really were—for perspective, the Earth is 40,000 km in circumference, so you would have to circle the Earth about a billion times to cover the distance between us and our nearest neighboring star system, Alpha Centauri.  The second problem for the ancients was that they didn’t know which star to pick out as a reference point.  It was a hypothesis carrying assumptions that were nearly impossible to know at the time.
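These figures can be sanity-checked with a few lines of arithmetic.  The sketch below uses rough modern values (kilometers per light year, Alpha Centauri at ~4.37 light years, a 1 AU baseline), which are my own assumptions rather than numbers from the original post:

```python
import math

# Back-of-the-envelope check of the distances involved in stellar parallax.
KM_PER_LY = 9.46e12            # kilometres per light year (approx.)
EARTH_CIRCUMFERENCE_KM = 4.0e4 # ~40,000 km around the equator
AU_KM = 1.496e8                # Earth-Sun distance (approx.)

dist_km = 4.37 * KM_PER_LY     # Alpha Centauri, ~4.37 light years
trips = dist_km / EARTH_CIRCUMFERENCE_KM
print(f"{trips:.2e} circumnavigations")    # on the order of a billion

# Parallax: the angular shift of a nearby star seen across a 1 AU baseline.
parallax_rad = math.atan(AU_KM / dist_km)
parallax_arcsec = math.degrees(parallax_rad) * 3600
print(f"{parallax_arcsec:.2f} arcseconds") # well under 1 arcsecond
```

A shift of under one arcsecond is far below what the naked eye can resolve (roughly a full arcminute), which is why the Greeks’ measurement came up empty; stellar parallax was not successfully measured until Bessel did it with telescopes in 1838.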

The moral of the story is that just because a test fails to support a hypothesis, we cannot simply throw the hypothesis away—the negative result might simply be the outcome of an uncontrolled confound in the experimental design.  This means that if the results of an experiment do not support a hypothesis, we go back and ask: “What were my controls?”  “Do I have any assumptions in my methods?”  “Are there confounds I haven’t controlled for?”

The Scientific Method

Putting all these pieces together, you will probably recall from one of your high school science classes a flow chart that looks somewhat like this:


Scientific Method – Version 1.0

I call this Version 1.0 because this is the scientific method in a vacuum.  Science historian Steven Shapin, a professor at Harvard University, makes a compelling argument that one of the most important elements shaping the Scientific Revolution was the way scientists began working together to coordinate research.3  This highlights an aspect of the scientific method that version 1.0 is completely lacking: its unique social structure.


Scientific Method – Version 2.0

Science cannot evolve if the knowledge of the past is too unreliable to build on, nor can it evolve without the influence of society and the benefits it has on the human condition.  Peer review is required at the highest levels of publication and formalizes how scientists maintain correspondence and quality control; thus, while its quality can vary, peer review and editorial standards are among the most significant elements of good science.  They are the measure by which all science is scrutinized for miscalculations, methodological flaws, invalid conclusions, and sometimes, deception.

Image Credits:

Scientific Method 1.0 – (http://courses.washington.edu/esrm430/sm.jpg)

Scientific Method 2.0 – (http://arstechnica.com/science/2009/03/building-a-better-way-of-understanding-science/)


(1) Britten, R.J. 2002. ‘Divergence between samples of chimpanzee and human DNA sequences is 5% counting indels.’ Proceedings National Academy Science 99:13633-13635

(2) Lewin, K. (1952). Field theory in social science: Selected theoretical papers by Kurt Lewin. London: Tavistock.

(3) Shapin, S. (1996).  The Scientific Revolution.  Chicago: University of Chicago Press.

Myths, Legends, and Science: Part 1

One of the most common words in today’s dog industry is “science”: science-based, scientifically proven, backed by science.  Problematically, the word is often used in an attempt to assert that an idea, product, method, or concept is simply fact, and that anyone who disagrees with it is ignorant, uneducated, or just plain wrong.  Sure, arguments and debates are essential to science, and for those to happen we have to have strong opinions.  However, a growing number of people have started resorting to the word “science” without knowing the methods or conclusions that constitute the evidence behind their claims, and with the extra assumption that science has only a single opinion, beyond reproach.  The problem is that this isn’t really what science is about, nor is it how we got to where we are today.


What is Science?

Most people would agree that physics, chemistry, and biology are science—often referred to as the “natural” sciences, as they passionately try to unravel the mysteries of the universe.  But what about other subjects, such as mathematics?  Music?  Astronomy?  Philosophy?  Psychology?  Economics?  Metaphysics?  Logic?  What makes one subject a science in our minds and another a pseudo-science?  My goal is not to simply list which subjects I personally believe are and are not science—by the end of this series I hope you can make that determination for yourself.  It is interesting, though, that as far as history is concerned, ‘science’ is actually a relatively new word.  The roots of what is now modern (Western) science began in ancient Greece with the advancements of philosophy and the famous Aristotle (384 to 322 BCE).

As a philosopher, Aristotle’s celebrity was immense.  Described in Dante’s Inferno as “the master of those who know,” Aristotle wrote about the world and the heavens in ways that still permeate modern science.  His greatness as a philosopher set the stage so strongly that for about 2,000 years, the people we would now call scientists referred to themselves as “natural philosophers.”  This can be seen as late as Isaac Newton’s publication of Philosophiae naturalis principia mathematica (Mathematical Principles of Natural Philosophy) in 1687.


Aristotle and Zeno’s Paradoxes

Zeno of Elea was a philosopher from the 5th century BCE (although what a great name for an evil space villain).  Zeno was an extreme rationalist who argued that it was reason, and reason alone, that could give us the gateway into an understanding of the way things are.  He believed that the senses (i.e. seeing, hearing, smelling, touching) were tainted tools for building knowledge, and he used several paradoxes as evidence that even observations of motion (as in an object moving) were just figments of the senses.

Zeno’s Dichotomy Paradox (dichotomy literally means “cutting in two”): imagine a dog running towards a stationary object.  The object is at a finite distance D and the run happens in a finite time T.  Zeno claimed that in order to travel D, the dog must first travel half of D, then half of the distance that remains, then half of the distance that remains of that, and so on (i.e. the dog would travel half of D, then a quarter of D, then an eighth of D, then a sixteenth, then 1/32nd, 1/64th…) ad infinitum.  Following this logic, the dog would have to travel an infinite number of distances in a finite amount of time.  To Zeno, this was a contradiction; since it is illogical for an object to complete an infinite number of distances within a finite amount of time, the assumption that an object moves because we see it move must be false.


Aristotle, however, came along with a resolution to Zeno’s Dichotomy Paradox.  Instead of arguing with the conclusion (which is clearly absurd), Aristotle attacked the assumptions within the argument.  By focusing on the paradox’s construction, Aristotle demonstrated two of the most important elements that science is built on, reasoning and logic, and it is this, not his primitive hypotheses about the heavens (which is often what is taught in history class), that made him so famous.  While Aristotle’s resolutions are apt, the paradox wasn’t fully laid to rest until modern mathematics produced a proof that the infinite series of distances sums to a finite total.1

[Image: Modern notation for solving Zeno’s Paradox]
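The modern resolution can also be sketched numerically: Zeno’s halved distances form a geometric series whose partial sums approach D without ever overshooting it.  A minimal Python illustration (the distance value of 100 is arbitrary):

```python
def zeno_partial_sum(D, n_terms):
    """Sum the first n_terms of Zeno's halved distances:
    D/2 + D/4 + D/8 + ... (a geometric series converging to D)."""
    return sum(D / 2**k for k in range(1, n_terms + 1))

D = 100.0  # total distance; the particular value is arbitrary
for n in (1, 4, 16, 64):
    # Each partial sum covers all but D/2**n of the distance,
    # so the totals climb rapidly toward 100.0
    print(n, zeno_partial_sum(D, n))
```

Infinitely many sub-distances can still add up to a finite total, which is why the dog reaches the object in a finite time after all.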

Deductive vs Inductive Reasoning 

There are two distinct forms of reasoning that can be used to make a claim: deductive and inductive.  Deductive reasoning starts from a general premise and narrows it down to a specific conclusion.

  1. All dogs have noses.
  2. Muffy is a dog.
  3. Therefore, Muffy has a nose.

The power of deductive reasoning is that when the premises are true and the argument’s construction is valid, the conclusion is undeniably true; I don’t know many people who would argue that Muffy doesn’t have a nose.  At the same time, however, deductive reasoning can be tricky because it may be built on a false premise.  Here is another deductive argument:

  1. All atoms have one or more protons.
  2. Carbon is an atom.
  3. Therefore, carbon has one or more protons.

We can only say that this is undeniably true (i.e. “sound”) if we have examined every atom in the universe.  Yet despite not having done so, we still accept that all atoms have one or more protons, because of inductive reasoning.  Inductive reasoning looks like this:

  1. Every atom we have found so far has one or more protons.
  2. Therefore the next atom we find will have one or more protons.

If, based on this inductive reasoning, we accept that the next atom we find will have one or more protons, then it is sound to conclude that carbon has one or more protons.  However, inductive reasoning cannot prove anything; because it generalizes from a finite sample, it can only suggest that a hypothesis is probably true.  Like a car with no warranty: it makes no guarantees.
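The asymmetry between the two forms can be made concrete in a toy Python sketch (the proton counts below are invented for illustration): induction generalizes from the cases observed so far, and a single new counterexample is enough to overturn the generalization, which is why induction yields probability rather than proof.

```python
# Proton counts of the "atoms" examined so far (invented sample data).
observed = [1, 6, 8, 26, 79]

# Induction: every atom examined so far has one or more protons,
# so we generalize that all atoms do.
generalization = all(count >= 1 for count in observed)
print(generalization)  # the rule holds for every case seen so far

# Deduction can now apply that general premise to new cases, but the
# premise is only as good as the sample behind it: one hypothetical
# counterexample falsifies the generalization outright.
observed.append(0)
print(all(count >= 1 for count in observed))
```

No matter how many confirming cases pile up, the generalization stays provisional; a deductive conclusion inherits that same uncertainty whenever its premise was reached inductively.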

Semmelweis and ‘Childbed Fever’

Ignaz Semmelweis was a Hungarian physician whose story illustrates the importance of inductive reasoning in science.  In the mid-19th century, Semmelweis worked at a hospital in Vienna that had two maternity divisions.  Problematically, about 12% to 17% of the women who entered the First Division to give birth subsequently died of what was called childbed fever (a horrific death with symptoms including organ failure and edema), while only about 2-3% of the women who entered the Second Division suffered the same fate.  Systematically, Semmelweis formed several hypotheses to try to discover the cause of the First Division’s mortality rate.

The first hypothesis was that the deaths were due to Atmospheric Influences.  Before germ theory, people believed epidemics were spread through atmospheric events.  To Semmelweis this seemed impossible: it did not explain why women who gave birth on the street en route to the hospital had a higher survival rate than the women in the First Division, nor why two wings of the same hospital would consistently have different mortality rates.  So this hypothesis was thrown out.  Other hypotheses included: overcrowding; giving birth on the back instead of the side (it was common for women to give birth on their sides at this time); diet; rough handling by doctors; and death by the terrifying and debilitating presence of priests (my personal favorite, even though it was unsupported)2.  While many of these were thrown out for lack of logic or probability, Semmelweis also ran experiments in which he had the priests take different routes through the hospital and had all the mothers in the First Division give birth on their sides instead of on their backs.  No luck.

After almost four years of trying to solve the problem, a colleague of Semmelweis’ received a puncture wound from a student’s scalpel in the morgue and died of the exact same symptoms as the women of the First Division.  It suddenly occurred to Semmelweis that the medical students (who, not coincidentally, had begun additional training performing autopsies on cadavers about four years earlier) often traveled straight from the morgue to the delivery room, often still smelling of rotting flesh (believe it or not, medicine really has come a long way).  Semmelweis then instituted a protocol requiring medical students to wash their hands in a chlorine and lime solution before heading to the delivery room in the First Division.  Because this predates germ theory, Semmelweis chose a chlorinated lime solution only because it was effective at removing the smell accumulated from working on cadavers.  Regardless, within no time at all, the mortality rate in the First Division dropped 90%.  Sadly, however, mandatory hand washing caused a huge uproar, and Semmelweis was politically ruined (despite having evidential vindication: women stopped dropping like flies) for even suggesting that invisible putrid matter derived from dead and living organisms might be the cause of the mortality rates.  Despite his discovery, Semmelweis was dismissed from the Vienna hospital, driven back to Budapest by harassment from the Vienna medical community, and eventually committed to a mental institution.  Apparently doctors really didn’t want to have to wash their hands…


The first vertical line represents the beginning of autopsy investigation in Vienna (Wien)
The second vertical line represents the introduction of handwashing procedures

Important to the question of science, however, is that even though Semmelweis solved the problem, his hypothesis was still somewhat incorrect.  As it turns out, the women dying of childbed fever were actually dying of genital tract sepsis caused by bacterial infection, typically with Streptococcus, not “putrid matter” derived from living and cadaverous organisms.  One could argue, “well, what’s the difference?”  If I suggested (as Einstein did, a point emphasized by my incredible high school physics teacher) that gravity is not a force pulling on objects but rather bends and distortions in space and time, you would agree those are two significantly different hypotheses, regardless of the observational outcome.  Inductive reasoning is a powerful tool, but conceptually and historically we know that it yields probability, not fact.

Science is…

Dictionary: Science is the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.

Broken down, the definition has two parts.  The first is the intellectual and practical nature of science.  The history of science is filled with Galileans (those who, like Galileo Galilei, believe in science for the sake of science: the intellectual nature) and Baconians (those who, like Francis Bacon, believe science has to have a purpose: the practical nature).  More broadly, we can summarize these as the two primary types of scientific investigation: theoretical and applied.  The theory of relativity is exemplary of theoretical science; research in medicine is exemplary of applied science.

The second part of the definition very generally describes what is known as the “scientific method,” which I will go into in more detail in my next blog.  For now, suffice it to say that thanks to great thinkers like Aristotle, science is built on experience (i.e. it is empirical).  The method of science utilizes techniques designed to solve conceptual problems of our experience in the real world.  Why is the sky blue?  Why do dogs like to hump certain people’s legs?  Why does coffee wake us up?

While this definition is a great start, science is also much more.  For instance, science is falsifiable; it is exploratory; it is beholden to concise and logical arguments; it is damaged by bias; and most importantly, it is ever changing.  Science does not ascertain facts, nor does it establish truths.  Science is about examining the current evidence, asking new questions, and modifying our preexisting conclusions based on new explorations.

So the next time someone uses the word “science” as definitive proof in an argument, remember that a true scientist is both cautious and careful when making claims and would never stoop so low as to insult someone’s intelligence by defaulting to the word “science” to win an argument, especially without establishing whose experiences they are referring to.

Blog Continued in Part 2



(1) An award-winning and readable overview of Greek science and philosophy can be found in G.E.R. Lloyd, Greek Science After Aristotle (New York, NY: W.W. Norton & Company, Inc., 1973)

(2) For more information about Semmelweis and his life, including detailed accounts and translations of his writing, check out W.J. Sinclair, Semmelweis: His Life and His Doctrine (Manchester, England: Manchester University Press, 1909)

Image sources

Science proves you’re wrong: Zazzle.com
Newton’s Principae Naturalis Mathematica: NPR.org
Zeno’s Dichotomy Paradox: http://berto-meister.blogspot.com/2013/04/what-is-zenos-dichotomy-paradox.html
Notation for sum of an infinite series: Wikipedia
Table of mortality data from Vienna hospital: Wikipedia
Warning Science in Progress: Zazzle.com
“Science” image: http://www.gdfalksen.com/post/52184550214