Dissecting Behavior

Dogs, Science, and the Biology of Behavior

Tag: science

Bad Science: Quadrants of Operant Conditioning

People become dog trainers for various reasons. Often, these individuals will talk about a dog’s “performance,” yet this word invites a variety of interpretations. After all, what is performance? Is it speed? Strength? Accuracy? Reliability? Chat up a few trainers involved in any professional sport (canine or human) and you will find numerous beliefs about which methods produce the best results for the desired performance, and why. Should our toes be pointing straight ahead or at an angle when doing a squat? Should we stretch before or after an activity? With dogs, though, the question is even more convoluted, because here the concerns are not just about performance: they are also about welfare.

Animal welfare is a vast topic and one that cannot be covered from A to Z in a single sitting. Many philosophers and scientists devote their entire lives to traversing the quagmires of non-human animal welfare, so I am not going to put all of my roulette chips down on 28 black and defend my choice in the never-ending spin of the animal welfare debate wheel. In a perfect world, conversations are always productive. In the actual world, that is a rare occurrence. But to the point: perhaps the least productive conversation in dog welfare is the assertion that techniques which use positive reinforcement and negative punishment are ethical, while techniques which use positive punishment and negative reinforcement are unethical. This issue is so emotionally charged and so emblazoned in the industry that the supporting evidence for a claim about the ethical nature of a technique often revolves solely around the interpretation of which quadrant of operant conditioning the technique relies on.

Common illustration of basic learning concepts in a 2×2 grid, often called “the Quadrants.”
(Note: though often attributed to B.F. Skinner, this is a false attribution.)

For example, many trainers claim that a technique called Behavior Adjustment Training (BAT) is unethical because they see it as negative reinforcement. For those unfamiliar, BAT removes a stimulus (one the dog finds threatening) at a distance great enough for the dog to remain calm and not show signs of being overly agonistic (such as growling, snarling, or barking) [note: typically it is the dog that is moving, not the scary stimulus]. Because you repeatedly remove something (in this case, the thing the dog doesn’t “like”) to reinforce calmer behavior, many trainers label this type of training as negative reinforcement, and because negative reinforcement is claimed to be unethical, BAT must therefore be unethical. Problematically, BAT was designed to steer owners away from harsh punishments, and the method itself produces no signs of undue harm in the dog; so if the interpretation of the quadrants of operant conditioning causes trainers to conclude that BAT is unethical, then there is a serious problem with the convention, because calling BAT unethical is like calling Mr. Snuffleupagus from Sesame Street a serial rapist.

Since 1975, various scientists have pointed out that learning events can rarely, if ever, be labeled solely as positive or negative (e.g. Michael, 1975; Baron & Galizio, 2005; Baron & Galizio, 2006; Tonneau, 2007). For example, imagine a rat in a black box kept at a freezing temperature. The box has a lever that activates a heater for a short period of time. As the rat stays in the box, an increase (reinforcement) in lever-pressing is noticed over time. Here is the paradox: does lever-pressing increase because of the addition (positive) of heat or the cessation (negative) of cold? The answer is yes.

[This example is paraphrased from an actual experiment conducted by Weiss & Laties published in Science in 1961]

In the physical universe, the addition of one stimulus is always met with the removal of another stimulus. Whatever form of matter or energy the stimulus takes, energy cannot be created or destroyed, and so within any closed system you have to remove something to add something and you have to add something to remove something. This is a fundamental property of the universe and is analogous to the idea that two opposing baseball teams cannot win the same game: in order for one team to win, the other has to simultaneously lose. This prompts two questions: 1) are the quadrants of operant conditioning mutually exclusive? And 2) if they are not, can we stipulate that they are not occurring at the same time during a learning event?

Most examples of what dog trainers consider positive reinforcement rely significantly on negative reinforcement elements (e.g. the removal of hunger). Food is great, but as a motivator it works by removing hunger (negative reinforcement); it is also positive reinforcement for the obvious reason that we are adding food. This might seem unimportant for the lives of most dogs, who are fed to the point of obesity. In behavior research, however, most animals are deprived of food before reinforcement begins in learning paradigms: the food offered as positive reinforcement is typically given to an animal deprived of enough food before testing to cause a 15% decrease in body mass. For perspective, imagine a 180-pound man losing 27 pounds before being handed a cheeseburger as reinforcement, and you might appreciate how the removal of deprivation is not only perhaps a better description of the actual science of reinforcement but also a significant motivator for a rat to start pressing levers in its black box.

A classic example in psychology textbooks is aspirin as a negative reinforcer. The idea is that the removal of a headache might increase future aspirin-taking behavior; the removal of the aversive headache could thus be said to increase the frequency of the behavior, or more concisely, the aspirin is negatively reinforcing aspirin-taking behavior. However, we are adding aspirin to the system, so what do we say about the addition of a stimulus that causes the removal of another stimulus, which overall produces a consequence that increases or decreases the frequency of the antecedent behavior? By “aspirin logic,” the addition of food that removes the feeling of hunger would have to be negative reinforcement as well. By now it should be clear that there is no mutual exclusivity in the reasoning behind the popular interpretations of the quadrants of operant conditioning, and therefore any conclusion that relies on such a demarcation is neither logical nor scientific. Simply put, analyzing behavior with a system that relies on the Tweedledee-Tweedledum characterization of reinforcement and punishment (Marr, 2006), in a universe beholden to the conservation of energy, is a massive oversimplification.

It should be appreciated that the difficulty of negotiating positive versus negative effects within a system is common throughout science. For example, biologists who enjoy old-fashioned terminology describe the movement of an organism in relation to a stimulus as a “taxis”: positive taxis is movement toward a stimulus, while negative taxis is movement away from a stimulus. If the reference point for the behavior is the change to the environment (e.g. the appearance of a prey animal), then naturally we would describe the motion of a predator toward the prey as positive taxis. However, let us instead change the animal to an herbivore like an elk. Imagine a large group of elk munching away on some delicious, savory grass. Over time, the elk deplete the grass in the area where they are feeding and then move toward another area where more grass is present. Are the elk moving toward an area with more food to forage on or away from an area with less (i.e. is it positive taxis toward new grass or negative taxis away from no grass)?

It is important to remember that much of what we use to categorize nature is simply convention, and sometimes its creation is no more sophisticated than what one person decided while reading the latest issue of Science while sitting on the can. One of my favorite illustrations of the sometimes arbitrary nature of conventions is the way physicists describe torque. In physics, a torque that generates counterclockwise rotation is given a positive sign and a torque that generates clockwise rotation is given a negative sign. Why? Because if you trace the motion of an object moving counterclockwise with the fingers of your right hand, your thumb points up, and if you trace the motion of an object moving clockwise with the fingers of your right hand, your thumb points down. For this reason it is called the right-hand rule.
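To see how the convention falls out of the math, here is a minimal sketch (using NumPy, with toy vectors of my own choosing): the sign of the z-component of the cross product r × F is what encodes the direction of rotation.

```python
import numpy as np

# Toy illustration of the sign convention: torque is the cross product r x F,
# and the sign of its z-component indicates the rotation direction.
# Positive z = counterclockwise (right-hand thumb pointing up).

r = np.array([1.0, 0.0, 0.0])        # lever arm pointing along +x
F_ccw = np.array([0.0, 1.0, 0.0])    # force along +y spins the lever counterclockwise
F_cw = np.array([0.0, -1.0, 0.0])    # force along -y spins it clockwise

print(np.cross(r, F_ccw))  # [0. 0. 1.]  -> positive torque, counterclockwise
print(np.cross(r, F_cw))   # [0. 0. -1.] -> negative torque, clockwise
```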

Despite these problems, research papers and essays are still frequently published describing events as “positive reinforcement” or “negative reinforcement,” so this is by no means just a dog-industry issue. Furthermore, responses to these criticisms fail to address the issue head on, are unable to provide sound counterarguments, and/or fall back on the pragmatic argument: “well, we don’t have anything better, so it is better than nothing.” There are a couple of problems with the pragmatic argument. First, what counts as “better”? Quadrants create a paradigm view that cannot be supported without the existence of quadrants, so if “better” requires a convention that maintains the theory-laden beliefs of operant conditioning, then I would say the pragmatists are correct, just as creationism cannot exist without a God who created the universe as its central hypothesis. If “better” requires only the ability to describe learning events, then the pragmatists are definitively wrong, because the concepts of reinforcement and punishment are descriptive enough in and of themselves; the positive/negative distinction always has to be clarified further with methodological explanation anyway.

But all of this sidesteps the heart of the issue: harsh punishment creates the negative and deleterious results we are familiar with because of the threat it presents to the organism. The ethics here are measured through actual harm, not through the way an animal learned something. Indeed, many dogs might not learn anything objectively quantifiable in an operant classification after being swung around on a choke chain in a helicopter swing or kicked in the ribs; we couldn’t even say these events belong to any quadrant, because we would first have to establish the learned behavior that is operating on the environment.

Ethics does not have a quadrant. It is a complex web of issues that are rarely cut and dried, and conversations about dog training framed through positive and negative quadrant distinctions only obfuscate the discussion at hand. Kicking a dog is unethical because it is harmful and cruel, not because it is “positive punishment.” Dangling a dog in the air as it suffocates is unethical because it, too, is harmful, cruel, and abusive. You cannot design an experiment to show that “the Yankees won” is true but “the Red Sox lost” is false, just as it is impossible to falsify whether it is the addition of a treat or the removal of hunger acting during a learning event. Pragmatists will say, “oh, whatever, it’s not a big deal because I know the difference.” Problematically, that stance is not only unhelpful to the conversation but also unscientific. Science is falsifiable; if a claim is not, it is not science.

References:

Baron, A., & Galizio, M. (2005). Positive and negative reinforcement: Should the distinction be preserved? The Behavior Analyst, 28(2), 85–98.

Baron, A., & Galizio, M. (2006). The distinction between positive and negative reinforcement: Use with care. The Behavior Analyst, 29(1), 141.

Marr, M. J. (2006). Through the Looking Glass: Symmetry in Behavioral Principles? The Behavior Analyst, 29(1), 125.

Michael, J. (1975). Positive and Negative Reinforcement, a Distinction That Is No Longer Necessary; Or a Better Way to Talk about Bad Things. Behaviorism, 3(1), 33–44.

Tonneau, F. (2007). Behaviorism and Chisholm’s Challenge. Behavior and Philosophy, 35, 139–148.

Weiss, B., & Laties, V. G. (1961). Behavioral Thermoregulation. Science, 133(3464), 1588–1588. doi:10.1126/science.133.3464.1588

Myths, Legends, and Science: Part 2

[Part 1]

In my previous blog, I ended by poking fun at the people who superficially cling to the word “science” as the be-all and end-all of reasoning. What I did not mention is the other end of the spectrum: people who see science as a never-ending string of contradictions and thus find no value in it.  These people often dismiss the importance of science, even though they drive a car to work, fly to Cabo, write emails, take aspirin, receive vaccinations for deadly diseases, and watch movies on a phone (which is supposedly smart) that fits in their pocket.  Clearly science is just spinning its wheels.  Joking aside, these criticisms draw on the self-correcting nature of science; yet that self-correction is actually a virtue, once the way in which scientific theories are arrived at is fully understood.


Theory vs Hypothesis

If you could x-ray science, you could say its bones are made up of theories and hypotheses.  It is important to note here that the colloquial use of the term “theory” is vastly different from its use in science.  Theories, much like hypotheses, make predictions; a theory, however, involves predictions that are rarely (if ever) unsupported, because empirical tests have corroborated them numerous times.

For example, when Darwin proposed natural selection as the mechanism for evolution, he created numerous predictions that could not be readily tested in his time.  However, with the incredible advances in molecular biology, we have been able to examine variations in the sequencing of nucleic acids between organisms (i.e. DNA analysis).  Among many other things, Darwin predicted that humans and other primates evolved from a common ape-like ancestor.  Supposing that Darwin was correct, we would expect to see fewer differences in the DNA sequences between humans and other primates than between humans and cats, since, according to the theory, humans and other primates share a common mom and dad much more recently than humans and cats do.  Thus, the finding that humans share about 95% of their DNA with chimpanzees is just one of many findings that corroborate the theory of evolution.1

Image

Credit: M.F. Bonnan
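As a toy illustration of that logic (the sequences below are invented for the example, not real genomic data), the basic move is simply counting mismatches between aligned sequences:

```python
# Toy sketch: estimate divergence between aligned DNA snippets by counting
# mismatched bases. The sequences are made up purely for illustration.

def percent_divergence(seq_a: str, seq_b: str) -> float:
    """Percentage of positions that differ between two equal-length, aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to the same length"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100 * mismatches / len(seq_a)

human_like = "ATGGCATTAGCCGATTACAG"
chimp_like = "ATGGCATTAGCTGATTACAG"   # one substitution
cat_like   = "ATGACGTTAGCTGACTGCAG"   # five substitutions

print(percent_divergence(human_like, chimp_like))  # 5.0  -> closer relative
print(percent_divergence(human_like, cat_like))    # 25.0 -> more distant relative
```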

Other theories you may be familiar with are atomic theory, the theory of relativity, germ theory, cell theory, and so on.  Often, people will try to impugn a theory with statements to the effect of, “evolution is just a theory.”  However, once someone understands how theories are established, this statement is as ridiculous as saying “germs and atoms are just theories.”  What they really mean to say, however mistakenly, is that these are hypotheses, as if to suggest that science wasn’t really sure one way or the other.

Hypotheses come in many varieties: they can be based on lots of evidence, some evidence, or sometimes no evidence at all, and the longer they bounce around without being refuted, the stronger they become.  Atomic theory, for example, started off as a hypothesis with the pre-Socratics around 400ish BCE.  Naturally, scientific exploration has fine-tuned it like a small European sports car, yet the initial hypothesis has maintained its central idea: that the universe is composed of atoms.  Evidence supporting atomic theory can be seen in particle accelerators, when you toss salt in water, when you cook oil in a pan, or even when you watch ice melt into water.

Hypotheses become theories when they are extensively supported with experimental and observational data

Kurt Lewin (1952)2

Falsifiable

The philosopher Karl Popper championed the importance of falsifiability in science, and while many of Popper’s ideas were tremendously controversial, the necessity of falsifiability is one point that everyone seems happy to agree on.

I found that those of my friends who were admirers of Marx, Freud and Adler were impressed by a number of points common to these theories and especially by their apparent explanatory power.  These theories appeared to be able to explain practically everything that happened within the fields to which they referred.  The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated.  Once your eyes were thus opened you saw confirming instances everywhere: The world was full of verifications of the theory.  Whatever happened always confirmed it.  Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth, who refused to see it, either because it was against their class interest or because of their repressions which were still “unanalysed” and crying aloud for treatment.  (Popper, 1963)


Einstein provides a very nice example of how falsifiability plays a role in science even when a theory seems extremely intangible.  In 1905, Einstein declared in his publication of the special theory of relativity that E = mc².  In 1916, Einstein published the general theory of relativity, which predicted that light bends with the distortions of space and time (i.e. is affected by gravity).  The claim was so radical, so audacious, that the astrophysicist and philosopher Arthur Eddington made a 1919 expedition to the island of Príncipe, off the western coast of Africa, to test Einstein’s claim.  If Einstein was correct, then the stars near the Sun would appear in different positions than normally anticipated, because the distortion of space around the Sun would have bent their light; conversely, if the stars near the Sun did not change position, the theory could be rejected.  This kind of testability is what determines whether a hypothesis or theory is falsifiable.  As you might have guessed, Eddington’s photographs of the stars near the Sun during the solar eclipse confirmed Einstein’s prediction.  Had I asked you beforehand to think of a test that could determine whether or not light bends, you would probably have scratched your head for a while (as would I).

Photograph from the island of Príncipe

It is a great exercise to probe the testability of a claim by asking what evidence would be needed in order to falsify it.  For example, Freud claimed that conflicts between an individual’s conscious and unconscious mind resulted in neurotic behavior; problematically, since by definition the unconscious mind is “not available to introspection,” the claim is not testable.  If it is not testable, it is not falsifiable.  If it is not falsifiable, it isn’t science.

Assumptions: the pitfall of a good hypothesis

Naturally, the results of experiments do not always support the hypothesis; however, this does not mean that the hypothesis is wrong.  The magic of good critical thinking is finding where a hypothesis might be carrying hidden baggage, or more specifically, assumptions.  Remember Semmelweis and the use of chlorinated lime solution?  Semmelweis hypothesized that if the cause of childbed fever was in fact contamination by putrid matter from the morgue, then introducing something that would kill the putrid matter (i.e. an antiseptic) would stop the contamination.  While he guessed well, this hypothesis carried an assumption: that a chlorinated lime solution works as an antiseptic.  Since this predates germ theory, nobody even knew what an antiseptic was (let alone an effective one).  An even stronger example of the difficulty with assumptions played out historically in what is called stellar parallax.

A great debate that began in ancient Greece and did not end until Copernicus in the 16th century was whether the universe is geocentric (Earth at the center) or heliocentric (Sun at the center).  Being clever at geometry, the Greeks thought of a way to test the two competing hypotheses.  If the universe was heliocentric, then there would be a calculable difference in the apparent positions of the stars at one time of the year versus another; if the universe was geocentric, then there would be no change.  The Greeks measured, and sure enough, they concluded that no, the stars did not seem to change; ergo, the universe must be geocentric.

You can replicate this test yourself: close your left eye and extend your thumb out in front of you, pick an object you can cover with the width of your thumb (ideally about 3–10 feet away), then open your left eye and close your right (i.e. switch the closed eye).  Alternating back and forth, you should see the object behind your thumb appear to jump from side to side.  In the diagram below, “July” would be your left eye’s perspective and “January” would be your right eye’s perspective.

Stellar Parallax

The first problem for the Greeks was that they never imagined that the stars were over 40 trillion kilometers away (about 4 light years), and so they didn’t realize how small the changes they were looking for really were. For perspective, the Earth is about 40,000 km in circumference, so you would have to circle the Earth roughly a billion times to cover the distance between us and our next closest star system, Alpha Centauri. The second problem for the ancients was that they didn’t know which star to pick out as a reference point, so the test rested on assumptions that were nearly impossible to check at the time.
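To get a rough sense of just how small that shift is, here is a back-of-the-envelope sketch (my own rounded figures, not from the original post) of the expected annual parallax for a star about 4 light years away:

```python
import math

# Rough sketch: the apparent shift (parallax) of a nearby star as the Earth moves
# one Earth-Sun distance sideways. Even for the nearest stars the angle is under
# one arcsecond, far below what the naked eye can resolve (~60 arcseconds).

AU_KM = 1.496e8                   # Earth-Sun distance, km
LIGHT_YEAR_KM = 9.461e12          # one light year, km

distance_km = 4 * LIGHT_YEAR_KM   # roughly the distance to Alpha Centauri
parallax_rad = math.atan(AU_KM / distance_km)
parallax_arcsec = math.degrees(parallax_rad) * 3600

print(f"{parallax_arcsec:.2f} arcseconds")  # ~0.82 arcseconds
```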

The moral of the story is that when a hypothesis fails a test, we cannot simply throw it away, because the negative result might be the outcome of an uncontrolled confound in the experimental design.  This means that if the results of an experiment do not support a hypothesis, we go back and ask: what were my controls? Do I have hidden assumptions in my methods? Are there confounds I haven’t controlled for?

The Scientific Method

Putting all these pieces together, you will probably recall from one of your high school science classes a flow chart that looks somewhat like this:


Scientific Method – Version 1.0

I call this Version 1.0 because it is the scientific method in a vacuum.  Science historian Steven Shapin, a professor at Harvard University, makes a compelling argument that one of the most important elements shaping the Scientific Revolution was the way in which scientists began working together to coordinate research.3  This points to an aspect of the scientific method that Version 1.0 completely lacks: its unique social structure.


Scientific Method – Version 2.0

Science cannot evolve if the knowledge of the past is too unreliable to build on, nor can it evolve without the influence of society and the benefits it confers on the human condition.  Peer review is required for the highest levels of publication and formalizes how scientists maintain correspondence and quality control; thus, while its quality can vary, peer review and editorial standards are among the most significant elements of good science.  They are the measure by which all science is scrutinized for miscalculations, methodological flaws, invalid conclusions, and, sometimes, deception.

Image Credits:

Scientific Method 1.0 – (http://courses.washington.edu/esrm430/sm.jpg)

Scientific Method 2.0 – (http://arstechnica.com/science/2009/03/building-a-better-way-of-understanding-science/)

References:

(1) Britten, R. J. (2002). Divergence between samples of chimpanzee and human DNA sequences is 5%, counting indels. Proceedings of the National Academy of Sciences, 99, 13633–13635.

(2) Lewin, K. (1952). Field theory in social science: Selected theoretical papers by Kurt Lewin. London: Tavistock.

(3) Shapin, S. (1996).  The Scientific Revolution.  Chicago: University of Chicago Press.

Myths, Legends, and Science: Part 1

One of the most common words in today’s dog industry is the word “science,” as in science-based, scientifically proven, or backed by science.  Problematically, the word is often used in an attempt to assert that an idea, product, method, or concept is simply fact, and that anyone who disagrees with it is likely ignorant, uneducated, or just plain wrong.  Sure, arguments and debates are essential to science, and in order for those to happen we have to have strong opinions.  However, a growing number of people have started resorting to the word “science” without knowing the methods or conclusions that constitute the evidence behind their claims, and with the added assumption that science has only a single opinion that is beyond reproach.  The problem is that this isn’t really what science is about, nor is it how we got to where we are today.

Image

What is Science?

Most people would agree that physics, chemistry, and biology are science, often referred to as the “natural” sciences as they passionately try to unravel the mysteries of the universe.  But what about other subjects, such as mathematics?  Music?  Astronomy?  Philosophy?  Psychology?  Economics?  Metaphysics?  Logic?  What makes one subject a science in our minds and another a pseudo-science?  My goal is not simply to list which subjects I personally believe are and are not science (by the end of this series I hope you can make that determination for yourself), but it is interesting that, as far as history is concerned, “science” is actually a relatively new word.  The roots of what is now modern (western) science began in ancient Greece with the advancement of philosophy and the famous Aristotle (384 to 322 BCE).

As a philosopher, Aristotle enjoyed a celebrity that was incalculably immense.  Described in Dante’s Inferno as “the master of those who know,” Aristotle wrote about the world and the heavens in ways that still permeate modern science.  His greatness as a philosopher set the stage so strongly that for about 2,000 years, those we would now call scientists actually referred to themselves as “natural philosophers.”  This can be seen as late as Isaac Newton’s publication of Philosophiae naturalis principia mathematica (Mathematical Principles of Natural Philosophy) in 1687.

Image

Aristotle and Zeno’s Paradoxes

Zeno of Elea was a philosopher from the 5th century (BCE)—although what a great name for an evil space villain.  Zeno was an extreme rationalist who argued that it was reason, and reason alone, that could give us the gateway into an understanding of the way things are.  He believed that the senses (i.e. seeing, hearing, smelling, touching) were tainted as a tool for building knowledge and used several paradoxes as evidence that even observations about motion (as in an object moving) were actually just figments of the senses.

Zeno’s Dichotomy Paradox (dichotomy literally means “cutting in two”): imagine a dog running toward a stationary object.  The object is at a finite distance D and the run takes a finite time T.  Zeno claimed that in order to travel D, the dog must first travel half of D, then half of the distance that remains, followed by half the distance that remains of that, and so on (i.e. the dog would travel half of D, then a quarter of D, then an eighth of D, then a sixteenth of D, then 1/32nd of D, 1/64th of D…) ad infinitum.  Following this logic, the dog would have to travel an infinite number of distances in a finite amount of time.  To Zeno, this was a contradiction; therefore, assuming that an object moves because we see it move is a false assumption, since it is simply illogical for an object to complete an infinite number of distances within a finite amount of time.


Aristotle, however, came along with a resolution to Zeno’s Dichotomy Paradox.  Instead of arguing with the conclusion, even though the conclusion is clearly absurd, Aristotle resolved the paradox by focusing on the assumptions within the argument.  By focusing on the paradox’s construction, Aristotle demonstrated two of the most important elements that science is built on, reasoning and logic, which is what made him so famous (not his primitive hypotheses about the heavens, which are often what is taught in history class).  While Aristotle’s resolutions are apt, the paradox wasn’t laid to rest until modern mathematics came up with a proof to rationally explain Zeno’s Dichotomy.1

Modern notation for solving Zeno’s Paradox
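The notation in question is the sum of a geometric series: assuming the dog covers half the remaining distance at each step, the infinitely many pieces add up to the finite distance D (and, by the same reasoning, the times add up to the finite T). A minimal sketch of the idea:

```latex
% Standard modern resolution: a geometric series with ratio 1/2 converges.
\[
  \sum_{n=1}^{\infty} \frac{D}{2^{n}}
  = D\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)
  = D
\]
```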

Deductive vs Inductive Reasoning 

There are two distinct forms of reasoning that can be used to make a claim: deductive and inductive.  Deductive reasoning moves from a general premise to a narrower conclusion.

  1. All dogs have noses.
  2. Muffy is a dog.
  3. Therefore, Muffy has a nose.

The power of deductive reasoning is that when the premises are true, and the argument construction is valid, the conclusion is undeniably true—I don’t know many people who would argue that Muffy doesn’t have a nose.  However, at the same time, deductive reasoning can be tricky because it could be built on a false premise.  Here is another deductive argument:

  1. All atoms have one or more protons.
  2. Carbon is an atom.
  3. Therefore, carbon has one or more protons.

We can only say that this is undeniably true (i.e. “sound”) if we have examined every atom in the universe.  However, despite not having examined every atom in the universe, we still accept that all atoms have one or more protons because of Inductive Reasoning.  Inductive reasoning would look like this:

  1. Every atom we have found so far has one or more protons.
  2. Therefore the next atom we find will have one or more protons.

If we accept, based on this inductive reasoning, that the next atom we find will have one or more protons, then it is sound to conclude that carbon has one or more protons.  However, inductive reasoning cannot prove anything, because it generalizes from a finite sample; it can only suggest that a hypothesis is probably true.  Like a car with no warranty: it makes no guarantees.

Semmelweis and ‘Childbed Fever’

Ignaz Semmelweis was a Hungarian physician whose story illustrates the importance of inductive reasoning in science.  In the mid-19th century, Semmelweis worked at a hospital in Vienna that had two maternity divisions.  Problematically, about 12% to 17% of the women who entered the First Division to give birth subsequently died of what was called childbed fever (a horrific death with symptoms including organ failure and edema), while only about 2–3% of the women who entered the Second Division suffered the same fate.  Systematically, Semmelweis formed several hypotheses to try to discover the cause of the mortality rate in the First Division.

The first hypothesis was that the deaths were due to atmospheric influences.  Before germ theory, people believed epidemics were spread through atmospheric events; to Semmelweis, however, this seemed impossible, because it did not explain why women who gave birth on the street en route to the hospital had a higher survival rate than the women in the First Division, nor why two different wings of the same hospital would consistently have different mortality rates.  So this hypothesis was thrown out.  Other hypotheses included: overcrowding; giving birth on the back instead of the side (it was common for women to give birth on their sides at this time); diet; rough handling by doctors; and death by the terrifying and debilitating presence of priests (my personal favorite, even though it was unsupported)2.  While many of these were also thrown out due to a lack of logic or probability, Semmelweis ran experiments in which he had the priests take different routes through the hospital and had all the mothers in the First Division give birth on their sides instead of on their backs.  No luck.

After almost four years of trying to solve the problem, a colleague of Semmelweis’ received a puncture wound from a student’s scalpel in the morgue and died with the exact same symptoms as the women of the First Division.  It suddenly occurred to him that the medical students, who not coincidentally had begun additional training by performing autopsies on cadavers about four years prior, were often traveling straight from the morgue to the delivery room and often still smelled of rotting flesh (believe it or not, medicine really has come a long way).  Semmelweis then instituted a protocol requiring medical students to wash their hands in a chlorinated lime solution before heading to the delivery room in the First Division.  Because this predates germ theory, Semmelweis had decided to try a chlorinated lime solution only because it was effective at removing the smell accumulated from working on cadavers.  Regardless, within no time at all, the mortality rate in the First Division dropped by 90%.  Sadly, however, mandatory hand washing caused a huge uproar, and Semmelweis was politically ruined (despite having evidential vindication; i.e. women stopped dropping like flies) for even suggesting that invisible putrid matter derived from dead and living organisms might be the cause of the mortality rates.  Eventually, despite his discovery, Semmelweis was dismissed from the Vienna hospital, forced to move back to Budapest due to harassment from the Vienna medical community, and eventually committed to a mental institution.  Apparently doctors really didn’t want to have to wash their hands…

Image

The first vertical line represents the beginning of autopsy investigation in Vienna (Wien)
The second vertical line represents the introduction of handwashing procedures

Important to the question of science, however, is that even though Semmelweis solved the problem, it turned out that his hypothesis was still somewhat incorrect.  As it turns out, the women dying from childbed fever were dying from a genital tract sepsis most often caused by bacterial infection with Streptococcus (strep infections), not from putrid matter derived from living and cadaverous organisms.  One could argue, “well, what’s the difference?”  If I suggested (as Einstein did, and as my incredible high school physics teacher emphasized) that gravity did not involve gravitons but rather bends and distortions in space and time, you would agree they are two significantly different hypotheses, regardless of the observational outcome.  Inductive reasoning is a powerful tool; however, conceptually and historically we know that it creates an understanding about probability, not fact.

Science is…

Dictionary: Science is the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment.

Broken down, we get two parts.  The first part starts with the idea of the intellectual and practical nature of science.  The history of science is filled with the Galileans (those who, like Galileo Galilei, believe in science for the sake of science—the intellectual nature) and the Baconians (referring to those who, like Francis Bacon, believe science has to have a purpose—the practical nature).  More broadly and simply we can summarize this as the two primary types of scientific investigation: theoretical and applied.  The theory of relativity would be exemplary of theoretical science and research in medicine would be exemplary of applied science.

The second part of the definition very generally describes what is known as the “scientific method,” which I will go into in more detail in my next blog.  For now it suffices to understand that, thanks to great thinkers like Aristotle, science is built on experience (i.e. it is empirical).  The method of science utilizes techniques designed to solve conceptual problems of our experience in the real world.  Why is the sky blue?  Why do dogs like to hump certain people’s legs?  Why does coffee wake us up?

While this definition is a great start, science is also much more.  For instance, science is falsifiable; it is exploratory; it is beholden to concise and logical arguments; it is damaged by bias; and most importantly, it is ever changing.  Science does not ascertain facts, nor does it establish truths.  Science is about examining the current evidence, asking new questions, and modifying our preexisting conclusions based on new explorations.

So the next time someone uses the word “science” as definitive proof for an argument, remember that a true scientist is both cautious and careful when making claims and would never stoop so low as to insult someone’s intelligence by defaulting to the word “science” to win an argument against them, especially without establishing whose experiences they are referring to.

Blog Continued in Part 2

Image

References

(1) An awarded and readable overview of Greek science and philosophy can be found in G.E.R Lloyd, Greek Science After Aristotle (New York, NY: W.W. Norton & Company, Inc., 1973)

(2) For more information about Semmelweis and his life, including detailed accounts and translations of his writing, check out W.J. Sinclair, Semmelweis: His Life and His Doctrine (Manchester, England: Manchester University Press, 1909)

Image sources

Science proves you’re wrong: Zazzle.com
Newton’s Philosophiae Naturalis Principia Mathematica: NPR.org
Zeno’s Dichotomy Paradox: http://berto-meister.blogspot.com/2013/04/what-is-zenos-dichotomy-paradox.html
Notation for sum of an infinite series: Wikipedia
Table of mortality data from Vienna hospital: Wikipedia
Warning Science in Progress: Zazzle.com
“Science” image: http://www.gdfalksen.com/post/52184550214

Pseudo-behavioral science

Recently I saw a study from the Journal of the Experimental Analysis of Behavior shared via the American Veterinary Society of Animal Behavior (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1397788/#!po=50.0000).  In it, the authors claim that a higher rate of reinforcement for a behavior creates a stronger resistance to extinction of that behavior when reinforcement is removed: a very broad claim given the niche experiment.  Reading the abstract, most anyone would be happy to accept their claim, especially professionals who are always on the prowl for more evidence to support their particular belief system.  However, this is a great example of why we have to be careful about which sources we decide constitute science.

There are several problems in the JEAB study linked above:

First, two experiments with 3 to 4 starving rats (of one species) in strict confinement cannot be expected to explain the behavior of other healthy animals such as dogs:

“The subjects were 4 male Long Evans hooded rats, about a year old at the start of the experiment. Obtained as juveniles (about 150 g), they were gradually (over several months) brought to a weight of 335 g (± 15 g) and maintained at that level by free access to food blocks in their home cages for 1 to 1.5 hrs after each session. (Ator, 1991, provides a rationale for this method of food deprivation for rats)” [emphasis my own].

It should be highlighted that one of the rats died after condition 6 and a second rat did not follow one of the extinction conditions because it appeared ill.  Yet the deprivation, which resulted in illness or death in 50% of their animals, is rationalized and considered necessary.

Second, it is unclear whether they actually found anything.  In addition to the small population and the absence of statistically significant findings, this study is a general discussion of mathematical principles, not behavioral observations.  Both experiments reported in the study required manipulation of their data in order for it to fit their hypothesis.  Let me repeat: the authors willingly admitted to throwing out data they ‘didn’t like’.  The authors justify this as removing an outlier, but some pause has to be taken, because it is not scientific to willfully remove data in order to prove a hypothesis or theory.  To their credit, the authors do include this appropriate caveat:

“Basing conclusions solely on adjusted data, however, can be risky. For any set of data, some adjustment can be found to generate whatever new relation one might wish for. If the adjustment is selected arbitrarily, the relation that emerges will be arbitrary as well and thus misleading about the relevant behavioral processes. The question, then, is whether a particular adjustment can be justified on grounds beyond its ability to produce a particular outcome.”

It is beyond the realm of my understanding that radical behaviorists believe this formula accurately depicts the process of behavioral extinction regardless of species, function, and ecology: log(Bx/B0) = −x(c + dr)/r^a.  It is especially incredible to me that such hypotheses are being generated from results produced by fraudulent statistical p-hacking, whereby the statistics are recalculated over and over and the populations and data adjusted until the authors find the results they are looking for (which in this study still failed to reach statistical significance).  A real scientist would never throw out a chunk of their data so that a mathematical formula would fit a complex biological behavior, nor would they treat the death and illness of half their animals as something needing mention only in a footnote of the appendix.
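For what it’s worth, here is a small sketch of what that formula actually claims (the values for c, d, and a below are invented for illustration; in the paper they are fitted to data, and the log is taken base 10 here): a higher baseline reinforcement rate r produces a shallower proportional decline during extinction.

```python
# Sketch of the behavioral-momentum extinction equation the paper fits:
#   log10(B_x / B_0) = -x * (c + d*r) / r**a
# B_x/B_0 is the proportion of baseline responding after x extinction sessions,
# r is the baseline reinforcement rate; c, d, a are free parameters (values below are made up).

def proportional_response(x: float, r: float, c: float = 1.0, d: float = 0.01, a: float = 0.5) -> float:
    """Predicted B_x / B_0 after x sessions of extinction at baseline rate r."""
    return 10 ** (-x * (c + d * r) / r ** a)

for r in (30, 120):  # hypothetical reinforcers per hour
    curve = [round(proportional_response(x, r), 3) for x in range(5)]
    print(f"r = {r}: {curve}")
# Higher r gives a shallower decline, i.e., more "resistance to extinction."
```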

This is not science; this is torture and mathematical perversion.