“What?” Said the Chinchilla

The Daily WTF is a site that collects programmers’ horror stories. I think the following horror story [it’s the second story on the page] provides a good example of why it’s important to double-check the code of scientific software.

Long ago, I worked as a programmer at a university’s hearing research lab. They were awarded a large government grant to study the effects of different kinds of noise on hearing. For the really loud and really faint noises, the researchers used animal subjects with ears that are similar to human ears. Specifically, chinchillas.
 
The chinchillas would be put into a special chamber for several hours at a time to have their hearing tested. Since the little rodents don’t respond so well to questions like “Which sound is louder?”, a good amount of time had to be spent training them to jump over a little bar in their chamber whenever they heard a beep.
 
Because a large part of the research project was to study the long-term effects of noise on hearing, the tests would have to be run twenty-four hours a day, seven days a week, for several years. Obviously, it was pretty important that the chinchilla testing be automated. But not very important, apparently: if it had been, they would have had someone other than a grad student write it.
 
I joined the team about a year into the project and was tasked with rewriting the beep-jump-reward program. It was a ridiculous mess of spaghetti code that seemed to have more GOTO statements than actual code. There were no comments anywhere, nor any documentation of the algorithm the program used to control the beeps and rewards.
 
After a little while, I was able to figure out the algorithm and rewrite the application. A month or two later, the rewrite was put into production. I documented my work, said my goodbyes, and moved on to my next contract.
 
A year or so later, the researchers compiled the data and noticed some very surprising results: the chinchillas were a lot more hearing-impaired than they should have been. While this may not seem like a big deal, the findings would have had some serious ramifications: occupational noise-exposure laws would be changed, lawsuits would be filed, and billions would be spent correcting the issue.
 
Before publishing the results, another team of researchers went over the data and study with a fine-toothed comb to ensure that the results were correct. And whammo, they found a bug in my code. Under certain conditions, one part of the application did not correctly check that the chinchilla had jumped at the right time. This meant that the program would deny the chinchilla a food pellet, giving it negative feedback when it had in fact done the right thing. This led to some rather confused chinchillas which had no idea when they were actually supposed to jump.
 
In the end, over a year’s worth of data was thrown out, a few man-years of work were wasted, and there were a whole lot of cute little rodents that were rather confused and hard of hearing. I still feel bad for deafening those poor chinchillas…
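
The story doesn’t show the offending code, but the failure mode is easy to picture. Below is a minimal hypothetical sketch in Python; the names, the three-second window, and the specific boundary bug are all my own invention for illustration, since the actual lab code was never published.

```python
# Hypothetical reconstruction of a beep-jump-reward check.
# Nothing here is the actual lab code; the names, the 3-second window,
# and the boundary bug are all invented to illustrate the failure mode.

REWARD_WINDOW = 3.0  # seconds after the beep in which a jump counts


def should_reward_buggy(jump_time: float, beep_time: float) -> bool:
    """Buggy check: strict bounds reject a jump exactly at the beep or at
    the edge of the window, denying a pellet for a correct response."""
    elapsed = jump_time - beep_time
    return 0.0 < elapsed < REWARD_WINDOW


def should_reward_fixed(jump_time: float, beep_time: float) -> bool:
    """Fixed check: the whole closed window counts as a correct response."""
    elapsed = jump_time - beep_time
    return 0.0 <= elapsed <= REWARD_WINDOW


if __name__ == "__main__":
    beep = 100.0
    # Jumps at the boundaries are rewarded by one check but not the other.
    for jump in (100.0, 101.5, 103.0, 104.5):
        print(jump, should_reward_buggy(jump, beep),
              should_reward_fixed(jump, beep))
```

For most jumps the two checks agree, which is exactly why a bug like this survives casual testing: it only misfires “under certain conditions,” as the story puts it, and the only visible symptom is a confused chinchilla.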

This story highlights the secondary importance that many scientists still accord to software in their research. In this case, the writing of a critical piece of software was assigned to a naive grad student who worked without oversight. The researchers apparently did not stop to think that one minor error in that software could invalidate the results of the entire experiment.

Note that the error was only caught by a second, completely independent team of researchers who went over the first team’s code “with a fine-toothed comb”. Note also that they didn’t just review the data the software generated but the actual software itself, something that was never done with the CRU software until a whistleblower exposed it to the world.

This is how software used in finance, the military, and (hopefully) most regulatory agencies is double- and triple-checked. Contrast this with the superficial and amateurish way that the CRU (and presumably all other) climatologists created, maintained, and tested their software.

The extraordinary thing about the entire political-scientific “climate change” complex is the vast disconnect between the unprecedented magnitude of the public policy we will base on this data and the quality of the software code that generates the data. It is like finding out that the software used to decide whether or not to launch nuclear weapons was written by an overly bright teenager in his high school computer lab.

Given that hundreds of millions of lives depend on us getting the science on “climate change” just right, we need to hold the software that climatologists use to the same standards we demand for financial, medical and military software.

If we don’t get it right, deaf chinchillas will be the least of our worries.

6 thoughts on ““What?” Said the Chinchilla”

  1. When I worked at the Douglas Aircraft Company wind tunnel in El Segundo, back in the dark ages, we had a very practical demonstration of the power of a single plus/minus sign. We would get projects to test; we had had nothing to do with their development and depended on the engineers who designed each device to tell us how it would behave. One day, when I was fortunately away, the guys got a new design for the cowl of a jet engine nacelle. It had a bulb-shaped central cone, sort of like a spinner, but it was not attached to the compressor shaft. It was mounted in the intake on rather thin struts. The model was put in the four-foot wind tunnel and mounted in the plenum chamber. There was a quartz window for the camera to take Schlieren photos during the run.

    Well, the big butterfly valve was opened and, as the wind velocity approached Mach 1, the wind tunnel staff who were watching were horrified to see the bulb device break loose from its struts and go UP THE TUNNEL AGAINST THE FLOW! Everybody yelled a warning and guys grabbed onto the steel columns that held up the roof. About ten seconds later, the bulb came back down the tunnel at Mach 1, hit the quartz window and broke it, and the tunnel decompressed into the building.

    Fortunately, this had been anticipated and the roof was mounted on tracks that allowed it to lift about a foot. Needless to say, they had about 400 miles an hour of wind in the building for a couple of seconds there. Major damage was limited, and nobody got blown away as the wind quickly dissipated, but it was exciting. I later heard that when the engineers went over the calculations, they found a plus sign swapped for a minus.

  2. While the quality of the code and the software development process at CRU is horrendous, the problems with the climate models do not stop there. Even if the code were perfect, the climate scientists input assumptions that bias their models towards a preordained conclusion (warming). Their assumptions about the feedback effect of increased CO2 levels are now being shown to be in error by R. Lindzen of MIT.

    And the climate modelers seem unaware that models of complex phenomena must be tested extensively on out-of-sample data (i.e., data not used in constructing the models themselves) before they have any validity as inputs into decision making. When they are tested on “new” data, they have failed miserably to correctly predict temperature levels.

    This whole climate science area is beginning to remind me of Lysenkoism, phrenology, Piltdown Man, and the like.

  3. Class Of 71 Alum,

    And the climate modelers seem unaware that models of complex phenomena must be tested extensively on out-of-sample data (i.e., data not used in constructing the models themselves) before they have any validity as inputs into decision making.

    Yes, using the data used to create a model to test that model’s predictions runs the risk of creating a “fitted” model. In a fitted model, mathematical relationships creep in that do not represent any natural process but are just mathematical means of reproducing the data used to create the model (see the sketch after these comments).

  4. One of my previous jobs involved developing a portable process control system to demonstrate the feasibility of computer control for chemical processes. The first job we tried it on was a fiber spinning plant. We collected the data and gave it to a chemical engineer with computer experience who was to model the chemical process. After a period of time, that person was ready to display the results of their model. I was greatly dismayed when that engineer blithely displayed a graph of the real data we had gathered alongside the results of the model. At the first major excursion, the model went in the opposite direction from reality. None of the chemical engineers questioned this. That was one of my signals to look for other employment.

    Reality trumps computer models.

  5. One horror story that went around the aerospace programming community in the 80s was that a fly-by-wire fighter unexpectedly flipped upside-down upon crossing the Equator for the first time, presumably because the latitude went negative.

    This illustrates that it is important to spend as much time and funds on testing as on the object program itself. Then, of course, the test software needs to be well written and tested, which can lead to a never-ending process and, thank Darwin, employ a lot of us programmers.
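
Comments 2 and 3 make a statistical point that is easy to demonstrate. The sketch below is my addition, using synthetic data rather than any climate dataset: it fits two models to the same training points, then scores both on held-out, out-of-sample points.

```python
# Minimal sketch of overfitting versus out-of-sample testing.
# Synthetic data only: this illustrates the statistical point from the
# comments above, not the behavior of any actual climate model.
import numpy as np

rng = np.random.default_rng(0)

# True underlying process: a gentle linear trend plus noise.
x = np.linspace(0.0, 10.0, 20)
y = 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

# Hold out the last five points; the models never see them while fitting.
x_train, y_train = x[:15], y[:15]
x_test, y_test = x[15:], y[15:]

# A high-degree polynomial can reproduce the training data almost exactly,
# while a simple line matches the true process.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=12)
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)


def rmse(model, xs, ys):
    """Root-mean-square error of a model's predictions against actuals."""
    return float(np.sqrt(np.mean((model(xs) - ys) ** 2)))


print("degree 12: train RMSE", rmse(overfit, x_train, y_train),
      "test RMSE", rmse(overfit, x_test, y_test))
print("degree 1:  train RMSE", rmse(simple, x_train, y_train),
      "test RMSE", rmse(simple, x_test, y_test))
```

The high-degree fit scores nearly perfectly on the data used to build it and then diverges badly on the points it has never seen: exactly the “mathematical means of reproducing the data” described above, and exactly what out-of-sample testing is designed to catch.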
