Recently I saw a study from the Journal of the Experimental Analysis of Behavior shared via the American Veterinary Society of Animal Behavior (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1397788/#!po=50.0000). In it, the authors claim that a higher rate of reinforcement for a behavior creates stronger resistance to extinction of that behavior once reinforcement is removed: a very broad claim given the niche experiment. Reading the abstract, almost anyone would be happy to accept their claim, especially professionals who are always on the prowl for more evidence to support their particular belief system. However, this is a great example of why we have to be careful about what sources we decide constitute science.
There are several problems in the JEAB study linked above:
First, two experiments with 3 to 4 starving rats (of one species) in strict confinement cannot be expected to explain the behavior of other healthy animals such as dogs:
“The subjects were 4 male Long Evans hooded rats, about a year old at the start of the experiment. Obtained as juveniles (about 150 g), they were gradually (over several months) brought to a weight of 335 g (± 15 g) and maintained at that level by free access to food blocks in their home cages for 1 to 1.5 hrs after each session. (Ator, 1991, provides a rationale for this method of food deprivation for rats)” [emphasis my own].
It should be highlighted that one of the rats died after condition 6, and a second rat was pulled from one of the extinction conditions because it appeared ill. Yet the deprivation, which resulted in illness or death in 50% of their animals, is rationalized and considered necessary.
Second, it is unclear whether they actually found anything. Beyond the tiny sample and the absence of statistically significant findings, the study is largely a discussion of mathematical principles rather than of behavioral observations. Both experiments reported in the study required their data to be manipulated before it would fit the hypothesis. Let me repeat: the authors willingly admitted to throwing out data they ‘didn’t like’. The authors justify this as removing an outlier, but it should give us pause, because willfully removing data in order to prove a hypothesis or theory is not scientific. To their credit, the authors do include this caveat:
“Basing conclusions solely on adjusted data, however, can be risky. For any set of data, some adjustment can be found to generate whatever new relation one might wish for. If the adjustment is selected arbitrarily, the relation that emerges will be arbitrary as well and thus misleading about the relevant behavioral processes. The question, then, is whether a particular adjustment can be justified on grounds beyond its ability to produce a particular outcome.”
It is beyond my understanding that radical behaviorists believe this formula accurately depicts the process of behavioral extinction, regardless of species, function, and ecology: log(B_x / B_0) = -x(c + dr) / r^a. It is especially incredible to me that such hypotheses are generated from results subjected to what amounts to p-hacking: statistics calculated over and over, with populations and data adjusted, until the authors find the results they are looking for (which in this study still failed to reach statistical significance). A real scientist would never throw out a chunk of their data so that a mathematical formula would fit a complex biological behavior, nor would they treat the illness and death of half their animals as something only needing mention in a footnote of the appendix.
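For readers curious what that formula actually computes, here is a minimal sketch. B_x is the response rate at extinction session x, B_0 the baseline rate, r the baseline reinforcement rate, and c, d, and a are free parameters; the parameter values and reinforcement rates below are my own illustrative assumptions, not the authors' fitted values.

```python
def proportion_of_baseline(x, r, c=1.0, d=0.001, a=0.5):
    """Predicted proportion of baseline responding (B_x / B_0) at
    extinction session x, given baseline reinforcement rate r, per
    log(B_x / B_0) = -x(c + d*r) / r**a  (log base 10).
    c, d, and a here are illustrative assumptions, not fitted values."""
    return 10 ** (-x * (c + d * r) / r ** a)

# The formula's central claim: a richer baseline reinforcement rate
# (larger r) predicts a slower decline in responding during extinction.
for r in (30, 120):  # hypothetical reinforcers per hour
    print(r, [round(proportion_of_baseline(x, r), 3) for x in range(1, 4)])
```

Note that with any positive parameter values, a larger r shrinks the exponent's magnitude, so the predicted decline is always slower at richer reinforcement rates; the equation builds the conclusion into its form, which is exactly why fitting it to adjusted data proves so little.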
This is not science; this is torture and mathematical perversion.