The really shocking revelation in the Climategate incident isn’t the emails that show influential scientists possibly disrupting the scientific process and possibly even committing fraud in the legal sense. Those emails might be explained away.
No, the real shocking revelation lies in the computer code and data that were dumped along with the emails. Arguably, these are the most important computer programs in the world. These programs generate the data that is used to create the climate models which purport to show an inevitable catastrophic warming caused by human activity. It is on the basis of these programs that we are supposed to massively reengineer the entire planetary economy and technology base.
The dumped files revealed that those critical programs are complete and utter train wrecks.
It’s hard to explain to non-programmers just how bad the code is, but I will try. Suppose the code were a motorcycle. Based on the repeated statements that Catastrophic Anthropogenic Global Warming was “settled science,” you would expect the computer code that helped settle the science to look like this…
…when in reality it looks like this:
Yes, it’s that bad.
Programmers all over the world have begun wading through the code and they have been stunned by how bad it is. It’s quite clearly amateurish and nothing but an accumulation of seat-of-the-pants hacks and patches.
How did this happen?
I don’t think it resulted from any conscious fraud or deception on the part of the scientists. Instead, I think the problem arose from the simple fact that scientists do not know how to manage a large, long-term software project.
Scientists are not engineers. More importantly, they are not engineering managers. It would be stupid to put a scientist in charge of building a bridge. Yes, the scientist might understand all the basic principles and math required to build a bridge but he would have no experience in all the real-world minutiae of actually cobbling steel and concrete together to span a body of water. The scientist wouldn’t even understand the basic terminology of the profession. Few if any scientists have any experience in managing long-term, large-scale projects of any kind. They don’t understand the management principles and practices that experience has taught make a project successful.
(As an aside, this cuts both ways. Engineers are not scientists and when they try to act like scientists the results are often ugly. When you see some pseudo-scientific nonsense being peddled, an engineer is often behind it.)
The design, production and maintenance of large pieces of software require project management skills greater than those required for large material construction projects. Computer programs are the most complicated pieces of technology ever created. By several orders of magnitude they have more “parts” and more interactions between those parts than any other technology.
Software engineers and software project managers have created procedures for managing that complexity. It begins with seemingly trivial things like style guides that regulate what names programmers can give to attributes of software and the associated datafiles. Then you have version control in which every change to the software is recorded in a database. Programmers have to document absolutely everything they do. Before they write code, there is extensive planning by many people. After the code is written comes the dreaded code review in which other programmers and managers go over the code line by line and look for faults. After the code reaches its semi-complete form, it is handed over to Quality Assurance which is staffed by drooling, befanged, malicious sociopaths who live for nothing more than to take a programmer’s greatest, most elegant code and rip it apart and possibly sexually violate it. (Yes, I’m still bitter.)
Institutions pay for all this oversight and double-checking and programmers tolerate it because it is impossible to create a large, reliable and accurate piece of software without such procedures firmly in place. Software is just too complex to wing it.
Clearly, nothing like these established procedures was used at CRU. Indeed, the code seems to have been written overwhelmingly by just two people (one at a time) over the past 30 years. Neither of these individuals was a formally trained programmer and there appears to have been no project planning or even formal documentation. Indeed, the comments of the second programmer, the hapless “Harry”, as he struggled to understand the work of his predecessor are now being read as a kind of programmer’s Icelandic saga describing a death march through an inexplicable maze of ineptitude and boobytraps.
CRU isn’t that unusual. Few scientific teams use any kind of formal software-project management. Why? Well, most people doing scientific programming are not educated as programmers. They’re usually just scientists who taught themselves programming. Moreover, most custom-written scientific software, no matter how large, doesn’t begin as a big, planned project. Instead the software evolves from some small, simple programs written to handle relatively trivial tasks. After a small program proves useful, the scientist finds another related processing task so instead of rewriting everything from scratch, he bolts that new task onto an existing program. Then he does it again and again and again…
Most people who use spreadsheets a lot have seen this process firsthand. You start with a simple one sheet spreadsheet with a few dozen cells. It’s small, quick and useful but then you discover another calculation that requires the data in the sheet so you add that new calculation and any necessary data to the initial sheet. Then you add another and another. Over the years, you end up with a gargantuan monster. It’s not uncommon for systems analysts brought in to overhaul a company’s information technology to find that some critical node in the system is a gigantic, byzantine spreadsheet that only one person knows how to use and which started life as a now long-dead manager’s to-do list. (Yes, I’m still bitter.)
This is clearly the ad hoc process by which the CRU software evolved. It began as some simple Fortran programs back in the early ’80s which were progressively grafted onto until they became an incoherent rat’s nest of interlocking improvisations. The process encapsulates a dangerous mindset which NASA termed “the normalization of deviance”. This happens when you do something statistically risky several times without any ill effect. After a time, people forget that the risky act was even dangerous. This is how two space shuttles came to be destroyed.
A lot of the CRU code is clearly composed of hacks. Hacks are informal, off-the-cuff solutions that programmers think up on the spur of the moment to fix some little problem. Sometimes they are so elegant as to be awe inspiring and they enter programming lore. More often, however, they are crude, sloppy and dangerously unreliable. Programmers usually use hacks as a temporary quick solution to a bottleneck problem. The intention is always to come back later and replace the hack with a more well-thought-out and reliable solution, but with no formal project management and time constraints it’s easy to forget to do so. After a time, more code evolves that depends on the existence of the hack, so replacing it becomes a much bigger task than just replacing the initial hack would have been.
(One hack in the CRU software will no doubt become famous. The programmer needed to calculate the distance and overlapping effect between weather monitoring stations. The non-hack way to do so would be to break out the trigonometry and write a planned piece of code to calculate the spatial relationships. Instead, the CRU programmer noticed that the visualization software that displayed the program’s results already plotted the stations’ locations, so he sampled individual pixels on the screen and used the color of the pixels between the stations to determine their location and overlap! This is a fragile hack because if the visualization changes the colors it uses, the components that depend on the hack will fail silently.)
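For readers curious what the non-hack approach looks like, here is a minimal sketch in Python (not the CRU code, and not Fortran): compute the great-circle distance directly from station coordinates with the haversine formula. The function names and the overlap radius are purely illustrative assumptions, not anything taken from the CRU files.

```python
import math

def station_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two weather stations, in kilometres (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def stations_overlap(s1, s2, radius_km=1200.0):
    """True if two stations' influence radii overlap.

    s1 and s2 are (lat, lon) pairs; the 1200 km radius is an illustrative
    threshold, not a value from the CRU code.
    """
    return station_distance_km(s1[0], s1[1], s2[0], s2[1]) < 2 * radius_km
```

A dozen lines of planned code, and it keeps working no matter what colors the plotting package happens to use.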
Regardless of how smart the scientists who wrote the CRU code were or how accomplished they were in their scientific area of specialization, they clearly were/are amateur programmers and utterly clueless software project managers. They let their software get away from them.
Of course the obvious question is why no one external to CRU realized they had a problem with their software. After all, isn’t this the type of problem that scientific peer review is supposed to catch? Yes, it is, but contemporary science has a dirty little secret:
There is no peer review of scientific software!
As far as I can tell, none of the software on which the entire concept of Catastrophic Anthropogenic Global Warming (CAGW) is based has been examined, reviewed or tested by anyone save the people who wrote the code in the first place. This is a staggering omission of scientific oversight and correction. Nothing like it has happened in the history of science.
For now, we can safely say all the data produced by this CRU code is highly suspect. By the ancient and proven rule in computing of “Garbage in, Garbage Out” this means that all the climate simulations by other teams that make predictions using this dubious data are likewise corrupted. Given that literally hundreds of millions of lives over the next century will depend on getting the climate models correct, we have to start all our climate modeling over from scratch.
I’ve read that when celebration dinners were held for the first American electronic computer (ENIAC), the attendees were the electrical engineers who designed it and the mathematicians who defined the problems it was to solve. The people who actually programmed it, mostly women, were not invited. (Programming ENIAC was done by plugging in patch cords, not in any form of language, but this doesn’t change the point.)
This attitude toward the programmers may have had something to do with sex bias, but also reflected the fact that programming was not yet viewed as a complex and intellectually-demanding task, but rather as a function barely one step above clerical work.
I’m guessing–and this is just a hypothesis–that this attitude has tended to survive more in the scientific world than in other software application areas.
OK, a couple of points.
To my understanding, the East Anglia U computer codes are not for a computer model. When I think of computer model, I am thinking of GCM’s or Global Circulation Models, massive fluid-mechanics equation solvers representing world climate. The computer codes in question are a kind of glorified Census Bureau, organizing temperature, tree ring, and other readings from around the globe and over time to produce the Hockey Stick chart.
In a way, I don’t even care what these codes look like and if they were subject to the same quality control as the Space Shuttle flight control system. What I care about is the raw data. Does anyone have it?
If someone can produce the raw data, any number of people could program a script in any of a number of software packages — S-Plus, R, Octave, Matlab — and reconstruct the temperature charts. To the extent that Fortran is a “wrong computer language” for this kind of work, the “right computer language” probably isn’t even structured C or object-oriented C++; the right computer language is most likely one of the packages I have mentioned for the statistical reduction of large data sets.
In fact, what people from outside this tightly-knit circle have wanted from day one is the raw data, unfiltered by someone else’s procedures, whether in carefully crafted C, hacked-together Fortran, or perhaps a Matlab or R script that could perform the required statistical analyses with a few lines of code.
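To make the point concrete, here is a rough sketch of that kind of reconstruction script, written in Python rather than R or Matlab, under the assumption that the raw data existed as a simple flat file; the file name and the station_id/year/anomaly columns are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical raw data dump: one row per station reading.
raw = pd.read_csv("raw_station_anomalies.csv")

# Average each station's anomalies within a year, then average across stations,
# so that densely instrumented regions do not dominate the global mean.
station_yearly = raw.groupby(["station_id", "year"])["anomaly"].mean()
global_yearly = station_yearly.groupby(level="year").mean()

global_yearly.plot(title="Mean temperature anomaly by year (unweighted sketch)")
plt.xlabel("Year")
plt.ylabel("Anomaly (deg C)")
plt.savefig("reconstruction_sketch.png")
```

The whole exercise is a few lines once the raw data is in hand, which is exactly why the inability to get the raw data is the sticking point.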
Paul Milenkovic,
What I care about is the raw data. Does anyone have it?
According to a story I just read, the original data was lost.
The solution to all of this is to simply reproduce the original work, including resampling all the trees for growth ring data. If this were any other scientific field we would have numerous teams out there all reproducing each other’s work down to the most trivial piece of data.
Yes, it’s that bad.
No, it’s worse.
The computer codes in question are a kind of glorified Census Bureau
More like least-squares curve fits with a bit of meta-analysis thrown in to combine different proxy series. Also a lot of ad hoc elimination of naughty points and attitude adjustment of recalcitrant data. Treatment of the data itself might be analogous to the census, but I sure hope the Census Bureau does a better job of archiving and indexing. After all, they hired Hollerith, who built the machines to tabulate the 1890 census; Hollerith founded the company that became part of the conglomerate that was early IBM; IBM gave up on shoes and concentrated on the tabulating machine business, and later computers; and the rest is history.
Re female programmers, Lincoln Labs drafted secretaries to help program the SAGE system. My dad also says the basic design of the system was worked out by Irving Reed during a lunch break
The problem is the data: tree rings, met stations moved or near heat islands, met station protocols. This is a problem looking for a socialistic solution.
Anyone who has studied the hockey stick mistake also understands that scientists are often cavalier in their application of multivariate tools, even though they lack suitable expertise.
You have to wonder: if these folks were really intent on modeling the future, which is what this is all about, they would have done so quietly, what with the unknown billions available to them in predicting the crop harvest. No, something else is going on.
Chuck…Census, for all its illustrious history as a leader in application of new technology, seems to have slipped a bit lately…
http://photoncourier.blogspot.com/2008_04_01_photoncourier_archive.html#1445822385101318979
Sorry, formatting messed up..try this for the Census link.
That widely reported e-mail about “deleting the data before I would ever release it to Steve McIntyre” raises all sorts of questions about the “loss” of the original data.
Newrouter makes an important point. If the models predict well there should be tremendous money to be made by exploiting them commercially. Yet they are being used mainly as political marketing tools to extract rents from governments. Which is more likely: that the models are unaccountably not being exploited by the world’s greedy businessmen or that the models don’t predict well?
Before you ascribe too much smarts to software engineers, check out the story of SAGE at http://ed-thelen.org/sage-1.html. Don’t have a mouthful of unswallowed coffee when you read the part about the BOMARC missile.
SAGE was, roughly speaking, an ’80s-vintage PC implemented in the late ’50s using vacuum tubes and filling up a huge windowless cement building.
Secretaries programming SAGE? I mean no offense to secretaries or to the women hired as secretaries, but we are talking 500,000 lines of assembly-language programming for a real-time system? There must have been an intense training program to take clerical people and assign them to coding.
A read of this site suggests that it’s not just scientists who have problems with dodgy code.
Oh balls. The code is nothing like as good as that second motorbike.
It’s interesting that even people who know something about programming, if they are sufficiently committed to AGW, will deny that the code is a problem. I’m thinking of Charles Johnson, of Little Green Footballs, who became famous for demonstrating the fact that the fake Bush memo that CBS relied on in its TANG story had been written on a word processor using Times New Roman font. Recently, he has become obsessed about creationism and environmentalism. He is adamantly denying the significance of the CRU story and has no doubts about AGW. When one of his commenters, who is obviously some sort of IT professional, admonished him about the code issues being discussed here, Johnson came back in the thread accusing him of being a “denier” and refusing to discuss the code issues.
There is a large group of true believers who will not be convinced by anything. “Having faith” they call it.
People still take Charles Johnson seriously?
The only thing worse than a global warming denier is a code denier!
People still take Charles Johnson seriously?
He still has hundreds of commenters, but it is interesting to see how they have changed.
RE: the lost data. As far as I can tell, what Phil Jones threatened to delete, and which CRU later said was lost in the 1980s, is not the original data but records of what “corrections” were performed on that data by CRU. The original data should still exist at the various national meteorological organizations who collected them.
What this means is that not even CRU can reproduce CRU’s results. Which makes CRU’s results anecdotal.
It all seems very obvious to anyone who’s ever had to write up an experiment. The standard contents are:
1. The object of the experiment (what the experiment is supposed to test), which the various hockey sticks have.
2. The method used. They have some method given, but it’s hardly explicit. However, that would be allowable, since the method is basically the procedure for getting the results (what is currently described as data), and that could be included in the following section.
3. The results themselves (i.e. raw data). This has never been presented in any of the various hockey stick papers. Of course it would be perfectly acceptable to reference a source for the data, which should also detail how it was obtained.
4. The calculations. This section should include all the data correction calculations (those made to individual items of raw data to correct for biases) where applicable, making clear why the particular corrections were made, and any processing of the corrected data. Of course these days the calculations are done on a computer, so to publish the calculations means to publish the computer code. None of this has been published by CRU, though Mr. Mann’s code was published, reluctantly, after years of pressure and the involvement of the US Senate, and was found to generate a hockey stick from random data.
5. The conclusions need to follow from the above. Since 3 and 4 have been missing, they are not conclusions but assertions.
I take the same view of evidence, in essence, as the tax authorities and the courts. It is insufficient to assert entitlement to a tax refund without data and calculations; it will not suffice to sue someone, produce no evidence, and demand recompense. Their absence invalidates the papers. The fact that we know that Mann’s code was faulty doesn’t help their case. If you guys above are right (I’m no programmer), CRU’s is worse, and we still don’t have the calculations they used to correct the raw data.
Peer review is a blind. It was a system set up to save learned journals from publishing embarrassing papers, not to verify the absolute veracity of each and every paper. It’s a bit like asking your lawyer if you have a case before instituting proceedings, or getting advice from an accountant before submitting your tax return: it cuts out a lot of grief if it’s done properly, but doesn’t guarantee success.
Interesting post, but, IMO, you are trying to have it both ways.
1. “The design, production and maintenance of large pieces of software require project management skills greater than those required for large material construction projects. Computer programs are the most complicated pieces of technology ever created. By several orders of magnitude they have more “parts” and more interactions between those parts than any other technology.”
2. “Indeed, the code seems to have been written overwhelmingly by just two people (one at a time) over the past 30 years.”
It seems that #2 is the case – this is a simple, albeit convoluted data “smoothing” script designed to give the “correct” answer. In terms of your amusing pics of motorbikes – if you ain’t actually going anywhere, then either bike is equally functional.
Neil,
Interesting post, but, IMO, you are trying to have it both ways.
No, I’m not. It’s quite easy to create a very large and complex software system by simply gradually bolting more and more functions onto the existing structure over time. In fact, a big chunk of all the data systems in major institutions grow exactly like this.
However, these systems are not particularly reliable and their behavior is unpredictable. I recently heard of a case in which a minor change to an accounting database, made to handle a minor tax issue in one state, propagated unexpectedly throughout the entire accounting system of a major corporation and crippled it for several weeks.
Software that evolves by accretion has biological complexity and inconsistency. These big systems in institutions only function because their day-to-day output is continuously checked against reality and the outputs of many external systems (such as the tax man’s).
Scientific software doesn’t have those constant checks. After all, defining reality is the purpose of the software in the first place. CRU has few if any means of double checking their calculations. If something goes wrong subtly they will never know. It’s not like the IRS is going to raid their servers.
If you want large, complex, reliable, well understood and maintainable software, you have to have serious project management start to finish.
Old programmer’s tag: “Once a line of code has been entered into a program, it’s almost impossible to get rid of it.”
From The Sunday Times on November 29, 2009 “Climate change data dumped” by Jonathan Leake, Environment Editor
SCIENTISTS at the University of East Anglia (UEA) have admitted throwing away much of the raw temperature data on which their predictions of global warming are based.
It means that other academics are not able to check basic calculations said to show a long-term rise in temperature over the past 150 years.
The UEA’s Climatic Research Unit (CRU) was forced to reveal the loss following requests for the data under Freedom of Information legislation.
The data were gathered from weather stations around the world and then adjusted to take account of variables in the way they were collected. The revised figures were kept, but the originals, stored on paper and magnetic tape, were dumped to save space when the CRU moved to a new building.
* * *
In a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
* * *
Roger Pielke*, professor of environmental studies at Colorado University, discovered data had been lost when he asked for original records. “The CRU is basically saying, ‘Trust us’. So much for settling questions and resolving debates with science,” he said.
==================
*Jr. son of Roger A. Pielke Sr., Senior Research Scientist, Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado in Boulder, and Professor Emeritus of the Department of Atmospheric Science, Colorado State University, Fort Collins.
Sr.’s professional web page
Sr.’s blog: Go there and read it. Sr. is a real climate scientist, but not an AGW hysteric. He shows that there are possible positions on GW other than all in and all out.
Jr.’s professional web page. He is a poli sci Ph.D. who studies public policy and science, politicization of science, and environment-society interactions.
Jr.’s Blog
W. Edwards Deming was once brought in to a textile factory to discover why production had suddenly dropped by 50% and discovered that
‘a bean counter had found bobbins of thread that were $.01 cheaper per bobbin.’ So, said Deming, for a saving of a penny, production dropped by 50%.
“Yes, I’m still bitter”
I look forward to this chapter in your autobiography.
Having been a scientific programmer/data mangler, let me tell you that you’re mostly spot on. I can’t speak for the rest of my peers, but I got to the point where I was tired of rewriting code, so I took the time to write and test the code properly, because I knew it would be used and reused at some point in the future.
I came to that conclusion when I was approached to code up a small program to read in some text data file and put it in a well-behaved format suitable for an IDL program. OK, it was a one-off, so I slapped some Perl together and got it done. Six months later, the same folks come along and say “hey, we’ve got a lot more of that data, and we want your program to handle it”. OK, so it needs to be streamlined, modularized, and made to request filenames in a directory and process each one in order.
As I’m working on it, I’m thinking “what a festering, steaming pile of poo. What moron wrote this? Oh, yeah, I did.” I ended up throwing out the first iteration, and doing it right this time.
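For what it’s worth, the “done right this time” second pass tends to look something like this minimal Python sketch (the original was Perl; the file names, formats, and the IDL hand-off here are hypothetical): one small function per step, applied to every file in a directory in order instead of a hard-coded one-off.

```python
import csv
from pathlib import Path

def convert_file(src: Path, dst_dir: Path) -> Path:
    """Read one whitespace-delimited text file and write a clean CSV for the next tool."""
    dst = dst_dir / (src.stem + ".csv")
    with src.open() as fin, dst.open("w", newline="") as fout:
        writer = csv.writer(fout)
        for line in fin:
            fields = line.split()
            if fields:                      # skip blank lines
                writer.writerow(fields)
    return dst

def convert_all(src_dir: str, dst_dir: str) -> None:
    """Convert every .txt file found in src_dir, in sorted order."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.txt")):
        convert_file(path, out)

if __name__ == "__main__":
    convert_all("incoming_data", "converted")   # hypothetical directory names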
I’m no longer bitter. But then again, I’m now a BOFH.
When I first started with computing, I too heard “garbage in, garbage out.” Only later did I learn that the truth was closer to “garbage in, gospel out.” (Yes, I’m still bitter).
I’m 51 y.o. I was in software for 20 years — mostly general accounting such as A/R, A/P, payroll, and inventory. When I got a look at Harry’s notes, I sent the following e-mail to some of my former colleagues:
———-
Let’s say somebody is a computer programmer. Let’s say he’s trying to figure out what’s going on with source code and data files. Let’s say he works at one of the most prestigious institutions in the world. Let’s say that institution plays a key role in climatology. Let’s say the reports that institution publishes are used to push national and international policies that would determine the expenditure of hundreds of billions of dollars.
And said programmer writes stuff like this in his notes:
“OH F*** THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done I’m hitting yet another problem that’s based on the hopeless state of our databases. There is no uniform data integrity, it’s just a catalogue of issues that continues to grow as they’re found.”
———-
One of my friends wrote back: “And this surprises you …?”
And I replied, “What surprises me is how much it sounds like every place I ever worked as a programmer.”
[Deleted as off topic — Shannon]
The recent events of “Climategate” have stirred new life into a number of thoughts that have been lying dormant in my head during the past few years. Let me say that I have been programming computers since 1970 and spent a long time writing and dealing with computer modeling of the sort that the IPCC is hanging its hat on. Here are some of these thoughts.
1. One of the first things a modeler has to consider is the precision and accuracy of the numerical calculations that a particular model of computer produces. It is a combination of the way the computer is constructed and the code that generates the math. Any floating point calculation is subject to errors that occur because of the nature of those calculations (see the short sketch after this list). Both IMSL and NAG, two of the companies that provide mathematical software libraries for technical computing, have repeatedly stressed over the years that the biggest problems they have had have been in the actual architecture of the computers performing the math operations that the programs tell them to do. NIST used to be able to certify specific machines, but I do not know of their current practices. If the machine you are running on cannot do precise math, all bets are off.
2. Models have to have some certainty. If you cannot demonstrate that a known set (or sets) of data can produce an expected result, anything coming out of a model is useless.
3. Models should be able not only to predict actions into the future but to “predict” past effects by using the appropriate data sets. That is, using some of the older data and cutting off the newer data, you should be able to match reality with the model’s prediction. Has this exercise ever been done with climate data? If I were sitting in a research center and a modeler showed me a model that went in the opposite direction of reality, I would stop the model, not reality.
4. I was at a talk in the early ’80s by Dr. David Pensak, a renowned computational chemist, when he was asked if DuPont (his employer) would computerize their lab notebooks. His answer went to the heart of the problem. He said (I paraphrase): If I had a theory and talked to you about it in the hallway, you may or may not believe it depending on what you thought of me and my research. If I published that theory in a peer-reviewed paper, you may or may not believe it depending on the regard you held that journal in. If I gave you the theory printed on computer paper, you would treat it as gospel. The psychological power of a computer printout far exceeds its actual credibility.
5. Destroying raw data is the original sin of a scientist. If you do it and have published based on it, you have tacitly admitted to being so cynical and unethical that you cannot stand valid reexamination of your hypothesis. You have ceased being a scientist and become a religious fanatic.
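Regarding point 1 above, a tiny Python sketch shows the kind of floating-point behaviour being described. The numbers are arbitrary, chosen only to make the effect visible; the point is that the order of operations changes the answer.

```python
import math

# Floating-point addition is not associative: both lines sum the same three values.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a, b, a == b)         # 0.6000000000000001 0.6 False

# The same effect accumulates: ten steps of 0.1 do not sum to exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1
print(total, total == 1.0)  # 0.9999999999999999 False

# math.fsum tracks the rounding error and recovers the correctly rounded sum,
# which is the kind of care numerical libraries like IMSL and NAG take for you.
print(math.fsum([0.1] * 10))  # 1.0
```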
Re:
These are decidedly old-school approaches; more professional standards and techniques were developed some time ago. Yes, the majority of shops don’t use them (which makes them no better, in the end, than poor Harry) but those that do are world-class.
The most important thing that can be done to this code is to build a battery of automated unit tests to support analysis. Unit tests are small, discrete programs that are designed to exercise and test larger programs for correctness, expected behaviours and error conditions. They serve several purposes:
a) They show how the code is intended to function;
b) They provide “working documentation”;
c) They provide a test “scaffold” that sends up a red flag whenever changes are made or new code is added that “breaks” the program
This isn’t beyond the ability of scientists to understand – although it takes a while to master, it’s easily taught and serves to produce code that is of much higher durability than one that is “cowboy coded” and only proven to work on the researcher’s machine.
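As an illustration only, a unit test for this kind of code might look like the following Python sketch using the standard-library unittest module. The anomaly function and its behaviour are hypothetical, chosen to show points (a) through (c) above rather than to reproduce anything in the CRU code.

```python
import unittest

def temperature_anomaly(readings, baseline):
    """Return each reading minus the mean of the baseline period (hypothetical example)."""
    if not baseline:
        raise ValueError("baseline period must contain at least one reading")
    base = sum(baseline) / len(baseline)
    return [r - base for r in readings]

class TemperatureAnomalyTests(unittest.TestCase):
    def test_intended_behaviour(self):
        # (a)/(b): shows, in working code, how the function is meant to be used
        self.assertEqual(temperature_anomaly([11.0, 9.0], [10.0, 10.0]), [1.0, -1.0])

    def test_error_condition(self):
        # (c): a red flag goes up if someone later changes the empty-baseline behaviour silently
        with self.assertRaises(ValueError):
            temperature_anomaly([11.0], [])

if __name__ == "__main__":
    unittest.main()
```

Run the suite after every change; if a test goes red, you have found out immediately instead of years later in a “Harry read me” file.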
The REAL scientists that I have known were experts at both. In fact, most software engineers are incapable of understanding – never mind coding – the math required to define the models involved in many serious researchers’ work. For an example, go do some research on SEADYN. These guys are posers. Peer review, schmeer review. Show me some real results. Show me a model that accurately predicts the past and the present and then I might give you the benefit of the doubt on its conjecture about the future.
Excellent, Shannon.
As a 20+ year programmer, your points hit the nail right on the head to me.
This is indeed even bigger than their fake peer review and their running from FOIA requests.
Thanks for posting this,
Rob
I believe your visual metaphor is flawed, as the second vehicle, though ugly, appears to be capable of moving from Point A to Point B.
The Climate code is not merely hideously ugly, it is inoperable.
A different opinion:
I have over 30 years of programming experience. And I have an advanced degree in applied mathematics. For 10 years I was manager of a research group developing software for earth modeling. The end users were geologists and geophysicists working for petroleum companies. The algorithms we wrote often required understanding complex mathematical and physical processes. I preferred to hire geoscientists, mathematicians, physicists, and mechanical engineers, instead of university trained computer programmers. I found that the scientists I hired learned good programming practices, while getting up to speed on the unique math and/or physics involved, faster than the already trained programmers could learn the science. Even though most university trained programmers do study mathematics as part of their degree requirements. Heck, if I was still working, I still might hire a climatologist instead of a programmer depending on grades and what I learned during the interview process.
Mark Webster,
In fact, most software engineers are incapable of understanding – nevermind coding – the math required to define the models involved in many serious researchers’ work.
This is true, but it is also true that programmers don’t understand accounting principles when they write banking software, they don’t understand the tax code when they write tax software, they don’t understand art when they write graphics software, and they don’t understand business methods when they write business logic.
Programmers don’t have to be experts on the subject being modeled in the program, they just have to understand the logical relationships that the experts want.
Professional programmers have a lot of skills and habits that amateurs, even very intelligent amateurs, simply do not have. No one who has never worked in software development understands how to manage such projects.
Looks like I’ll be the first scientist to step up to the plate here.
1. Crappy code is not good, but it doesn’t make the results automatically invalid.
2. Sometimes there are good reasons to throw out data – the collection could have been done wrong, making even a good model fail.
3. Scientists aren’t policymakers, either. The policymakers know their word isn’t gospel, and they aren’t taking it entirely uncritically. A scientist’s word doesn’t equal law, so don’t act like the data integrity is crucially important because policymakers can’t and won’t think for themselves and get other opinions.
4. This isn’t the only data supporting the AGW hypothesis. Arctic ice, polar bear habitats, and a variety of other data support the AGW hypothesis.
5. Work done 20 years ago isn’t going to be 100% compliant with today’s best practices. Pretty much every scientific result ever obtained has some degree of sloppiness associated with it. It’s not appropriate to assume malicious intent because they didn’t correct for something they couldn’t have known about.
6. The relative validity of a theory doesn’t hinge on one result – everything needs to be replicated, as this will be.
7. It’s not an either/or situation. Science is about assigning probabilities to the various hypotheses out there. At no point does any hypothesis get anointed as the one true and eternal truth. It looks, to this observer, like continued work will continue to strengthen the AGW hypothesis, even if the probability contributed to AGW by this work was greater than it should have been.
8. Even if we move from a 99% chance global warming is human-caused to a 60% chance global warming is human-caused, it’s still actionable information.
9. Finally, since it appears certain that global warming is a bad thing, shouldn’t we take steps to curtail it, even if it’s not “our fault”?
As far as the code quality issues go – you can’t decide who wants to do global warming research and who doesn’t. Interest dictates what is studied, so unless you can find a way to make only “good” coders interested in studying climate change, there’s going to be some crappy code that continues to be written, just as there are some crappy experiments done in every other scientific endeavor. That’s why peer review exists and why experiments are repeated. Even if you did, those people would be criticized 20 years from now for things they did. It’s just the way science works. If you’ve got a better idea, do tell.
There have probably been some bad actors in climate research, even after giving them the benefit of the doubt. There is incompetence, malice, and bad luck in every field, but that doesn’t mean the whole field is invalid, even in finance. Collateralized debt securities weren’t a universally bad idea. It was assuming you knew more about the risks than you did, and overleveraging, that was the bad idea.
This was a very small field so contamination is much more serious than it might be in others with long track records and a large published record that has been validated.
One thing that impresses me about the discussion is how much this unsystematic code construction resembles biological systems. We share 90% + of our genes with yeast and the law of unintended consequences rules everything. Natural selection chooses what works. That is not a good system when you expect the results to be consistent.
Finally, since it appears certain that global warming is a bad thing, shouldn’t we take steps to curtail it, even if it’s not “our fault”?
It may or may not be “bad”. Some areas may be adversely affected, some may benefit. The fact of the matter is that climate changes. It always has and always will. It’s folly to try to stop it. A better solution is to focus policy and technology on adapting to a changing climate.
Mr Gunn – Once you start dealing in cost-benefit analyses, you cease to be a scientist and become an activist. You say 60% is actionable – why not 30%? 15%? .001%? Go float this Pascalian crap on a religious forum. I won’t take you seriously.
Especially since you admit that the code was worthless – “crappy” implies “crap” implies “something to be flushed away”. Since Code Is Law (h/t Lessig), a worthless piece of code is EQUIVALENT to a worthless process for achieving results. If the process does not work, and cannot work, then that alone should invalidate its results. “Automatically”.
You are a poor scientist.
On the plus side, you do earn points with me for not pulling rank with a “Dr” Gunn. Maybe some more schooling would do you good.
Sometimes there are good reasons to throw out data
A lack of storage space isn’t one of them. If the data is bad, the data is bad. Yet somehow this data seeped into the next iteration of the data set, so it couldn’t have been that flawed.
I almost wish I were taking a course from some of these folks, so that when they marked on my test “show all your work” I could come back with “why? You don’t.” They should be able, given just the raw data and their programs, to recreate the final dataset.
P.S. Polar bear habitats cannot suggest AGW. They can only suggest Arctic warming. Polar bears tell us nothing about anthropogenicity or even “global” warming.
Shannon,
Thanks for posting this. You hit each point right on the head. I got a really good laugh out of it, especially the part about QA sexually violating code – that’s pretty close to my first-hand experience in QA.
Mr. Gunn,
1. Crappy code is not good, but it doesn’t make the results automatically invalid
Well, bad math doesn’t automatically make results invalid, but that’s the way to bet. More to the point, we’ve been told this is high-quality work that has played a major role in making the science “settled” to the point that we are justified in reengineering the entire planetary economy and tech base. I don’t think that “might not be invalid” is a high enough standard to base such momentous decisions on.
2. Sometimes there are good reasons to throw out data – the collection could have been done wrong, making even a good model fail.
In this case, the data was used to create the models. CRU was trying to establish the history of climate so that the history could be used in creating simulations of future climate. They had no valid reason to toss out any data just because it didn’t give them the answer that they wanted.
3. Scientists aren’t policymakers, either. The policymakers know their word isn’t gospel, and they aren’t taking it entirely uncritically.
Quite clearly, you don’t follow the news. All the policy makers are rather explicitly taking the climatologists’ word as gospel and are planning on making extraordinary sacrifices in order to forestall the cataclysm that gospel says is coming. People who don’t believe that the scientists’ word is gospel are likened to holocaust deniers. That says rather a lot about the trust politicians place in the climatologists.
4. This isn’t the only data supporting the AGW hypothesis.
So, now it’s a hypothesis? I thought it was “settled” and an established fact. Although the CRU data is not the only data, it is important data that was used to create all the climate simulations upon which we will now, and I can’t repeat this enough, reengineer the entire planetary economy and tech base. It is the only major source of data about centuries-long climatic trends. All the models that use CRU data will have to be scrapped.
5. Work done 20 years ago isn’t going to be 100% compliant with today’s best practices. Pretty much every scientific result even obtained has some degree of sloppiness associated with it. It’s not appropriate to assume malicious intent because they didn’t correct for something they couldn’t have known about.
I didn’t assume malicious intent. I assumed they were amateur software developers who committed a very common mistake of letting their code evolve by accretion into something they could not manage.
6. The relative validity of a theory doesn’t hinge on one result – everything needs to be replicated, as this will be
Here’s an idea: Let’s reproduce it and then try to falsify it BEFORE we use it to reengineer the entire planetary economy and tech base.
7. It’s not an either/or situation.
When you’re talking about passing laws and fining, imprisoning or even killing people who break those laws, then yes there is an either/or threshold. Either the scientific work has enough validity to justify taking political action or it does not.
8. Even if we move from a 99& chance global warming is human caused to a 60% chance global warming is human-caused, it’s still actionable information.
What action? Reengineer the entire planetary economy and tech base action or put a little extra money into research sort of action? Are we talking about action like going to the beach over the weekend or action like invading Normandy?
Judging the economic consequences decades down the road is not and will never be science. Every attempt by scientists and others to make long-range predictions about any human phenomena has failed completely and has often produced horrific outcomes (see eugenics).
9. Finally, since it appears certain that global warming is a bad thing, shouldn’t we take steps to curtail it, even if it’s not “our fault”?
It is far from certain that global warming is a bad thing. Bjorn Lomborg took all the global warming models as true, took their combined most likely predicted climate changes, and then attempted to calculate the probable impact. In his estimation the benefits of the most likely degree of global warming will offset the negatives for most people.
If global warming is largely natural, then we should take much different actions to adapt to it.
As far as the code quality issues go – you can’t decide who wants to do global warming research and who doesn’t.
I never said I did. I just simply want them to use professional tools and standards and to produce high quality and verifiable software. I want the same standards applied to software that we apply to every other scientific tool.
If you’ve got a better idea, do tell.
My better idea is to wait to make policy that reengineers the entire planetary economy and tech base until all the science has been hashed over and reproduced.
There’s incompetence, malice, and bad luck in every field, but that doesn’t mean the whole field is invalid
No, what makes a field invalid is its lack of predictive power. It doesn’t matter how nice people in a field are if they can’t predict outcomes. We’re supposed to reengineer the entire planetary economy and tech base based on simulations whose predictions we cannot test.
These revelations at CRU have simply revealed that climatologists and their allied politicians have vastly oversold the predictive power of the field.
Ha! Very much liked the comparative analogy in your two photos. One aspect that I never saw coming was the construction of their programming and its veracity. I simply assumed they had either mastery over their computational methods or a team on hand that did. This is far worse than the derogatory nature of some of the emails (which I think have been carried to the extreme by some of the louder pundits). I’m reminded of astronomer Percival Lowell, who in the late 19th century observed and recorded a system of canals on the surface of Venus. He certainly had his skeptics, and for good reason. The surface of Venus is, as we know now, shrouded in clouds, so he certainly couldn’t have been observing canals on the planet’s surface. A century later it was suspected that Lowell’s use of his telescope’s aperture had actually led him to see the canal-like shadows of his own retinal blood vessels.
Munro Ferguson,
Percival Lowell thought he saw canals on Mars.
Even if we move from a 99% chance global warming is human caused to a 60% chance global warming is human-caused, it’s still actionable information.
Based on what criteria? Your personal opinion? Your gut feelings? What about 50%, or 20%, or 2%? What if it’s 100% certain that 3% of global warming is caused by humans – is that actionable?
And if it is, how much money is warranted to spend on it? How much action would you get against that 3% for a billion dollars? A trillion dollars? One hundred trillion dollars? What amount is justified?
Finally, since it appears certain that global warming is a bad thing, shouldn’t we take steps to curtail it, even if it’s not “our fault”?
This is the point re AGW that failed my “smell test” from the very beginning. Somehow I was supposed to believe that a change in climate would have wholly negative, extreme and unambiguous consequences for everyone everywhere to the end of time. More floods! More droughts! More hurricanes! More and more and more of everything that’s bad – everywhere. There was no mention of anything good or better or simply different. That bears no relation to my observations of the real world. When one area goes through a drought another might get perfect rain. And while one area has the highest snowfall on record another might have their mildest weather in years. Yet global warming was somehow destined to bring unmitigated disaster to every corner of the globe simultaneously. Tell that to all the people in frozen Siberia and other places. Basically, proponents were making the absurd claim that, miraculously (just as climate science was discovered), we are at the point of optimal climate in the whole entire history of mankind. No change in either direction would benefit anyone. I see no evidence of that claim being a fact. It’s just an assertion. Who really says, on balance, a warmer earth is worse in all respects? History suggests otherwise. Only someone who is preaching, instead of observing how natural systems actually function, could make an absolutist claim like that.
One more point about this: since it appears certain that global warming is a bad thing,
Interesting, your choice of the word “appears”. After the revelations from CRU, appearance may be all there is. And “certain” is not exactly certain, at this point. And “bad thing” is an essentially meaningless term. I’ve got a little mold in my shower. That’s a bad thing. Should I see if I can get a UN panel to tax the entire world to pay to clean it up. Without a scale (that’s accurate and reliable) “a bad thing” is not a sufficiently precise measure to justify the kinds of actions these people are talking about.
Oh, and one more thing, not even the IPCC is lame enough to say “Even if we move from a 99% chance global warming is human caused…
They’ve never claimed 99%. Basically, I think their claims of whatever percentage they use are just so much hand-waving (it seems to be based on gut feelings and hunches masquerading behind some pseudo-scientific math for cover), but to use as your premise that we’re at 99% and will move from there is already stretching things about the “certainty” of global warming.
Shannon Love,
Mars as well as Venus.
This is in response to Mr. Gunn’s #2 point about tossing data.
I agree that excluding bad data from further analysis is an accepted and common practice. I do not agree that locking down or losing the raw data is standard and accepted.
When you have observations that you believe are not “right” for some reason you can exclude the data but you had better explain your reasoning behind assigning that data point “flier status”.
In any event, Shannon’s point in the article is that the IT practices observed by the CRU crew are deplorable. Code like the CRU code would have caused a swarm of SEC auditors to descend upon my previous client.
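A small sketch of the “flier status” practice described above, in Python; the column names and the 4-sigma cutoff are hypothetical. The point is only that the raw value stays in the record and the exclusion is documented, so anyone auditing the analysis later can see exactly what was excluded and why.

```python
import pandas as pd

def flag_fliers(df: pd.DataFrame, column: str = "value", n_sigma: float = 4.0) -> pd.DataFrame:
    """Mark suspect observations instead of deleting them.

    Adds a boolean 'flier' column and a human-readable 'flier_reason' column;
    the raw values in `column` are left untouched.
    """
    mean, std = df[column].mean(), df[column].std()
    out = df.copy()
    out["flier"] = (out[column] - mean).abs() > n_sigma * std
    out["flier_reason"] = out["flier"].map(
        lambda is_flier: f"more than {n_sigma} sigma from mean" if is_flier else ""
    )
    return out

# Downstream analysis can then filter on the flag, e.g. flagged[~flagged["flier"]],
# while the full raw record remains available for review.
```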
You say that the scientific programs are not peer reviewed. I’ve no personal experience with peer review, but Steve McIntyre at Climate Audit has been trying to get the raw data for years to review the ‘normalization’ performed by CRU. The data has been withheld, we now know why. This review would have effectively been ‘peer review’. I suspect that the crew at CRU do not consider Mr. McIntyre a peer, and based on their emails, that would be true. They are no scientists and he is.
Programming is like sex. One mistake and you have to support it forever.