Recent revelations that the peer review system in climatology might have been compromised by the biases of corrupt reviewers miss a much bigger problem.
Most climatology papers submitted for peer review rely on large, complex, custom-written computer programs to produce their findings. The code for these programs is never provided to peer reviewers, and even if it were, the climatologists doing the reviewing lack the time, resources and expertise to verify that the software works as its creators claim.
Even if the peer reviewers in climatology are as honest and objective as humanly possible, they cannot honestly say that they have actually performed a peer review to the standards of other fields like chemistry or physics which use well-understood scientific hardware. (Other fields that rely heavily on custom-written software have the same problem.)
Too often these days when people want to use a scientific study to bolster a political position, they utter the phrase, “It was peer reviewed” like a magical spell to shut off any criticism of a paper’s findings.
Worse, the concept of “peer review” is increasingly being treated in the popular discourse as synonymous with “the findings were reproduced and proven beyond a shadow of a doubt.”
This is never what peer review was intended to accomplish. Peer review functions largely to catch trivial mistakes and to filter out the loons. It does not confirm or refute a paper’s findings. Indeed, many scientific frauds have passed easily through peer review because the scammers knew what information the reviewers needed to see.
Peer review is the process by which scientists, knowledgeable in the field a paper is published in, look over the paper and some of the supporting data and information to make sure that no obvious errors have been made by the experimenters. Papers most commonly fail peer review when the reviewer believes that the experimenters did not properly configure their instrumentation, did not follow the proper procedures, or did not sufficiently document that they did so.
Effective peer review requires that the reviewers have a deep familiarity with the instruments, protocols and procedures used by the experimenters. A chemist familiar with the use of a gas-chromatograph can tell from a description whether the instrument was properly calibrated, configured and used in a particular circumstance. On the other hand, a particle physicist who never uses gas-chromatographs could not verify it was used properly.
Today, each instance of custom-written scientific software is like an unknown, novel piece of scientific hardware. Each piece of software might as well be an “amazing wozzlescope” for all that anyone has experience with its accuracy and precision. No one can even tell if it has subtly malfunctioned. As a result, the peer review of scientific software does not indicate even a whisper of the same level of external objective scrutiny that the peer review of scientific hardware indicates.
How did we let this problem develop? I think it was simply a matter of creeping normalcy. The importance of scientific software grew so slowly that we never developed the habit of questioning it.
Thirty years ago, most scientific software was no more complicated than your average household spreadsheet is today. Software was mostly just a numerical summation tool that merely accelerated the processing of data. If a scientist had a computer, great, but if not, it didn’t change the actual conclusions of their experiments. As a result of software’s relatively trivial nature, peer reviewers, journal editorial boards and other scientists paid little attention to the software that experimenters used to produce their results.
Unfortunately, that attitude has persisted even as software has grown from a minor accessory into the tool that actually performs the experiment. Today many papers are nothing but reports of what a unique piece of software spit out after processing this or that great glob of data. These software programs are so huge, so complex and so unique that no one who wasn’t directly involved in their creation could hope to understand them without months of study.
Just about everyone in business has had the experience of having to puzzle out a spreadsheet created by someone else who left the company unexpectedly. Although we seldom think of them this way, each individual spreadsheet is in reality a custom piece of computer software written in the language of the spreadsheet program. (Technically, all spreadsheets are scripts.) Everybody in business knows that you can’t trust the output of a spreadsheet just because the person who made it tells you, “It’s done in Excel.” To trust the output, you either have to compare it against known good data or you have to examine the individual spreadsheet itself to find any places it might go wrong. People create very complex spreadsheets and then leave, and some poor schmuck gets stuck trying to figure out what the %$#@! the creator of the spreadsheet was trying to do.
Custom-written scientific programs are much, much larger and much, much more complex than any spreadsheet. It would take a huge amount of time for a peer reviewer to go through the code line by line to see if the software had any faults. Normally, peer reviewers work for only a token payment and they work in isolation. They don’t have the time or resources to actually check out a complex piece of software. Further, there is no guarantee that a peer reviewer in a particular field is competent to judge software. That is like assuming that a biologist who understands everything about the Humboldt squid can also rebuild any automotive transmission.
The practical inability of peer reviewers to verify scientific software doesn’t mean much in reality, because scientific institutions never even developed the standard that experimenters had to make the code for their software available to reviewers in the first place!
This raises a troubling question: When scientists tell the public that a scientific study that used a large, custom-written piece of software has been “peer reviewed” does that mean the study faced the same level of peer scrutiny as did a study that used more traditional hardware instruments and procedures?
Clearly not.
Scientists have let a massive flaw slowly creep into the scientific review system as they have ignored the gradually increasing significance and complexity of computer software. Standards created to deal with relatively simple and standardized scientific hardware no longer work to double-check much more complex and nonstandard scientific software.
Eric S. Raymond, the famous computer scientist and writer, has called for open source science. I think this is the way we should go. In the past, it cost too much to print out all of a study’s data and records on paper and ship that paper all over the world. With the internet, we have no such limitations. All scientific studies should, upon publication, put online all of their raw data, all of their protocols, all of their procedures, all of their records and the code for all of their custom-written software. There is no practical reason anymore why only a summary of a scientist’s work should be made public.
Scientific software has grown too large and complex to be maintained and verified by a handful of individuals. Only by marshaling a scientific “Army of Davids” can we hope to verify the accuracy and precision of the software we are increasingly using to make major public decisions.
In the short term, we need to aggressively challenge those who assert that studies that use complex custom software have been “peer reviewed” in any meaningful way. In the long term, we have a lot of scientific work to do over again.
See these two directly related posts as well:
Scientists Are Not Software Engineers — In which we learn that the critical software upon which we will reengineer the world is written and maintained by amateurs.
Scientific Peer Review is Lightweight — In which we learn not only that scientific peer review isn’t the inquisitorial process most lay people think it is, but that it isn’t even about science.
When the Republicans regain control of the House, whenever that may be, they should routinely subpoena the data and software underlying any scientific claim made in testimony. That would be a good start. And they should also require all the witnesses to swear under oath as to its veracity and completeness.
Then they could pass a bill to require such certifications for all Federally-funded research. They could call it the Mann Act.
Excellent post, Shannon.
Many scientists would probably respond that if the peer-reviewed publication defines the algorithms that the software is implementing, there is no need for a review of the software per se. This line of thought misses the complexities and the often-subtle failure modes that can exist in software.
Even when the task being performed is *much* simpler than global climate modeling, bad things can happen. For example, in the Gulf War, a Patriot missile battery lost tracking on a Scud missile because of an accumulation of very small roundoff errors. The roundoff errors had been there all along, but had never previously caused the battery to lose tracking on its target–because the battery control computer had never previously been run for such a long uninterrupted period without being rebooted. Quite a few American soldiers were killed because of this failure.
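The arithmetic is easy to reproduce in miniature. The sketch below is not the Patriot code, only a toy reconstruction of the commonly published account of the bug (a tenth-of-a-second tick stored in a truncated fixed-point register); the register width and tick rate here are illustrative assumptions:

```python
# Not the Patriot code: a toy reconstruction of the reported arithmetic.
# The system counted time in tenths of a second, but 1/10 has no exact
# binary representation; the truncated fixed-point value stored in the
# register was slightly too small, and the error grew with every tick.
STORED_TENTH = int(0.1 * 2**23) / 2**23    # ~0.0999999 s (23 fractional bits)
PER_TICK_ERROR = 0.1 - STORED_TENTH        # roughly 9.5e-8 s per tick

ticks = 100 * 60 * 60 * 10                 # ticks in 100 hours of uptime
drift = ticks * PER_TICK_ERROR
print(f"clock drift after 100 hours of uptime: {drift:.2f} s")
```

A drift of roughly a third of a second is enough, at Scud closing speeds, to move the tracking gate hundreds of meters off target, which is why frequent reboots masked the bug for so long.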
I think that Mr. Raymond’s “open source science” idea does have some merit. But there would be problems at present. Consider the way that science is assessed in UK universities: the process is heavily biased towards numbers of papers published, conference presentations and so on. While this sort of assessment is in place (and it is due to get worse), there is pressure to be guarded with one’s data, as one runs the risk of being scooped. If other groups publish papers without giving proper credit (it has happened to me once), then that’s an opportunity for an entry in the next RAE lost, which could mean reduced funding.
Knirirr,
Yes, there is the equivalent of scientific intellectual property that scientists need to protect in order to get proper credit and to have careers.
However, that mostly applies to things that need to be kept secret before publication. As long as a scientist is not making a public assertion that they have proven a hypothesis, the general public has no interest in having any details about the work. Once a scientist asserts that he has proven a hypothesis, then all his work should enter the public domain.
As a practical matter, science works by independent replication of results. You can’t reproduce someone else’s results unless you know everything they went through to get those results. Now that scientific software has become a critical tool, you can’t reproduce someone’s results without understanding their software either.
Of course, we do have the problem that scientists have a property interest in software that they’ve spent years developing, but for scientists who work on the public dime and who provide data for policy makers, the public interest overrides that implied property interest. Besides, we paid for it.
Wouldn’t a simple solution be that all those who develop datasets have their names appended to the publication? This would incentivize competing datasets and minimize the ability of anybody to fool around with a foundational set and become the tail that wags an entire discipline.
Once a scientist asserts that he has proven a hypothesis, then all his work should enter the public domain.
That is a bit difficult to do, though. What we can say is that we’ve so far failed to disprove it, and if we fail to disprove it for long enough we could call it a “theory.” But that may take considerable time.
One other thing to consider is that one might often get several papers out of the method or dataset, and releasing it all after the first paper would reduce the chance of managing to re-use it.
One accepted method to get hold of the code/data &c. is to ask the scientists directly, although as we have seen this has its flaws. As an aside, the raw data for the project I currently work on is available to download. But, to get hold of the scientists’ personal code they used to do the statistical analyses on these data one would have to ask them directly.
Wouldn’t a simple solution be that all those who develop datasets have their names appended to the publication?
AFAIK that sort of thing is often done. The order of names on research papers is often (in my experience):
A, B, C, D, E and F.
…where A did the work assisted by B, C was the technician and the work was based on some previous results obtained by D and E. F is the head of the research group that undertook the work.
Reproducibility, and therefore open source data and methodology, has been the hallmark of science for a long time. Judah Folkman was ridiculed for years about his theory of angiogenesis inhibition as a mechanism to treat cancer. The fact that he was a surgeon and not a molecular biologist, and therefore not a member of the club (like Steve McIntyre), was a significant factor. The problem was a combination of NIH syndrome and difficulty with reproducing his results. It is now a basic method of cancer therapy.
The opposite example is Robert Koch who, in 1890, announced he had found a cure for tuberculosis. Since he had discovered the organism and was the father of bacteriology, his announcement was taken at face value. When it later became obvious it didn’t work, there was a huge scandal. His motive may well have been financial as he had been divorced and remarried.
Scientists are subject to similar forces as the rest of us. Maybe more so since so few people make an attempt to become literate in science and math.
That link didn’t work. Here is another.
Every time I read something like this, I think of Robin Warren and Barry Marshall.
In the ’80s they were viewed as crackpots and lunatics; in 2005 they were awarded the Nobel Prize in Medicine (yes, Virginia, every so often the Nobel folks get something right).
…where A did the work assisted by B, C was the technician and the work was based on some previous results obtained by D and E. F is the head of the research group that undertook the work.
Close, but not quite. The first three names on a paper are the most critical. That is valuable turf and pitched battles are often waged over the third slot. A mere technician would never get into that space.
Often it is the second name on the paper that did most of the work or generated most of the thought. The first name is commonly the most senior, well recognized or established name of the group, even though that individual may have actually contributed little to the effort.
“No One Peer-Reviews Scientific Software” – not true in all fields. For example, check http://calgo.acm.org/, which collects software from papers in ACM Transactions on Mathematical Software. They are peer reviewed. When we use automated tools to check the software, reviewing it is not very difficult.
I’m sorry but I’m not so sure the software can’t be tested during peer review …
I have worked with and tested custom software for financial trading for over 17 years and have never looked at one line of code while testing dozens of highly complex applications (I have looked at plenty of code during the development phase, so it’s not that I’m opposed; it’s just easier to test other ways). If I am told what the inputs are and what the outputs are expected to be, then I can “peer review” the software. In this case it should be quite simple to take one temp. or tree ring record, manually adjust it based on what the scientist claims his software does, and then compare my manual output to the software’s output. As long as the scientist lays out all of the adjustments and exceptions programmed into the software, I should be able to test a subset of the data and have a good picture of his software. Of course, if I run a subset of the data through and my results differ from his for a particular record, then I would assume either a bug or a hidden exception, which means the software fails. “I forgot to tell you about it” doesn’t cut it.
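A sketch of this kind of black-box check. The adjustment rule, record format and function names below are all invented for illustration; in a real review, software_under_test would be the scientist’s actual program and documented_adjustment would come from the paper’s stated methodology:

```python
def documented_adjustment(raw_temp, station_offset):
    """The adjustment the paper claims the software applies (invented here)."""
    return raw_temp + station_offset

def software_under_test(records):
    """Stand-in for the scientist's program; a real check would run the
    actual software on the same records."""
    return [temp + offset for temp, offset in records]

# A small sample of (raw temperature, documented station offset) pairs.
sample = [(14.2, -0.3), (15.1, 0.1), (13.8, -0.3)]
expected = [documented_adjustment(t, o) for t, o in sample]
actual = software_under_test(sample)

for e, a in zip(expected, actual):
    # Any mismatch means a bug or an undocumented exception.
    assert abs(e - a) < 1e-9, f"mismatch: expected {e}, got {a}"
print("subset matches the documented adjustments")
```

The point of the design is that the reviewer never needs to read the code: only the documented inputs, adjustments and outputs.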
Great article Shannon
As an old software engineer (in both senses), I completely agree with the notion of open source science. Not only should the mathematics and logic of research be open, but the key datasets and code must be as well. Certainly for any scientific work that will be used to inform and drive policy, everything must be completely open. Not only open and “peer reviewed”, but validated.
However, there are problems. As Knirirr points out, intellectual property rights (which Jones et al hid behind for years) do limit how much can be released initially. Also, the complexity of any non-trivial code will defeat casual reviewers (my rule of thumb is that fully understanding a piece of code generally takes nearly as long as creating it). Look how long poor Harry spent trying to understand the CRU code before giving up in defeat (3 years).
There is one possible way to help address the software and dataset problem: unit tests. Unit tests are automatic tests with artificial data used to validate that the software is working properly. Quality software, including most high-quality open source software, is built using these unit tests.
If there were a way to review the unit tests used to validate the scientific code used in a paper, or even to provide independent unit tests, then a little sanity might be returned to the scientific process. Creating an artificial dataset is much simpler than understanding all the ins and outs of a complex piece of code. Reviewing both the unit tests and their results is well within the capabilities of most scientists (while reviewing large software is definitely not), so both peer review and independent verification could be done in a timely manner. If the code and dataset remained private (for a reasonable period of time) but independent reviewers could sanity-check the unit tests and results, then the intellectual property rights could be reasonably protected.
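For instance, a unit test for a trend-calculating routine might feed it artificial series with known answers. The linear_trend function below is a made-up stand-in for whatever the paper’s software actually computes; the tests are the part a reviewer would examine:

```python
def linear_trend(series):
    """Least-squares slope per time step (the routine under test)."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Unit test 1: a constant series must show zero trend.
assert abs(linear_trend([10.0] * 50)) < 1e-12

# Unit test 2: a series rising exactly 0.5 per step must report 0.5.
assert abs(linear_trend([0.5 * i for i in range(50)]) - 0.5) < 1e-9

print("unit tests passed")
```

A reviewer who trusts these artificial cases learns something real about the code without ever reading its internals.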
Just a modest proposal and my $0.002 worth (after inflation).
Mike T
Excellent point.
Large numerical codes (like these) are typically written by the people who develop the models, not by professional software developers. In most cases, they grow over time, rather than having been designed, and can easily reach the point, particularly with staff turnover, that no one really understands the whole thing. Testing, error checking, and documentation are typically poor.
It’s quite common to see significant defect rates in such code; 5 defects per thousand lines of code would not be unusual for this type of development. There are fairly well-known procedures which can be used to reduce the number of defects injected into the code, and to remove the defects after they are created. These include detailed design reviews, code audits, and several layers of testing (from components to the entire system), although with systems which predict behavior that can’t be tested, there are obvious limits to final testing. Altogether, these procedures can reduce the defect density to under 1 per 1,000 lines of code. Still, for a software system with 500,000 lines of code … there are still likely to be errors.
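Worked through, those figures imply the following (the rates are the rules of thumb above, not measurements of any particular codebase):

```python
# The arithmetic behind the defect-rate estimates above.
lines_of_code = 500_000
typical_defects = lines_of_code * 5 // 1000       # 5 defects per KLOC
disciplined_defects = lines_of_code * 1 // 1000   # 1 defect per KLOC

print(f"typical process:     ~{typical_defects} latent defects")
print(f"disciplined process: ~{disciplined_defects} latent defects")
```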
In life critical systems (aircraft control systems, for example) one may write 3 different programs, in different computer languages, with different programming teams, and then compare the results under extensive testing.
All of this takes time and costs money, and the modelers hate it. It slows things down. It reduces their authority. It doesn’t get any more papers published. But it’s absolutely critical if you really care about the results.
There are standards (google CMM) for the development of large software systems. It’s fairly ridiculous that any software system that is the basis for economic decisions of this size is not developed according to the highest standards available.
You hit the nail squarely on the head this time. I wrote complex simulation and signal processing/world modelling software early in my career and can absolutely verify what you write here.
I spent a couple of hours over the break going through the comments left behind by the unfortunate programmer who had to deal with the CRU’s climate modelling software. Many of the comments struck a real chord of recognition. However, what *really* struck me is how horribly organized and undisciplined these people are. No one was betting trillions of dollars on the software that I was writing (although millions *were* involved) but I guarantee you that we had a far clearer path from our signal data to the models that ultimately emerged. Because we were working in three geographically-dispersed teams and with government funding, we always maintained the ability to package the entire thing and send it, complete and runnable, to the funding agency or another team member. This was simply a matter of reasonable practice – not an exotic outlier in any way – and it is a standard by which the CRU team fails miserably.
There are really three propositions that we are supposed to believe in the global warming debate: that the earth is warming, that it is warming disastrously, and that human CO2 production is the cause of the warming. I think #1 remains on pretty firm footing. #2 is now up in the air given the predilection that these people have shown for shading the truth. #3 has no firm foundation whatsoever given that it depends entirely on computer models such as the horrible *mess* that appears to have been created at CRU. If the models are cleaned up, released to the public domain for auditing and *still* show both an agreement with past climate data and an ongoing contribution from CO2, *then* we can talk about anthropogenic causes. Until then, we’re no better than a Cargo Cult pretending that we know what we’re doing as we wave totems about.
Excellent post. However, an even bigger problem is the attempt to use simulation modeling to support a scientific theory. (Simulation models are often used, and in many cases are the only means available, to conduct climate research.) Unless the output of a simulation can be proven to be statistically significant as related to the real world, the simulation is meaningless.
Not only are scientists not software engineers, they often are not statisticians.
“Simulation modeling is the modern day alchemy”.
CRU should change its name to the Global Information-Gathering Organization, or GIGO.
I’ve done peer review on many papers that I thought were useless or even wrong (depending too much on a calculation where there was a known discontinuity in the standard model), but which went to publication anyway since the errors weren’t fatal in the sense that the derivations were correct.
As a former physicist and now engineer, I’m used to open sourcing all the data and algorithms. It’s just what’s done in hard sciences. I’ve always been leery of most of the softer sciences for two reasons: (1) their participants generally don’t have the mathematical ability and training to handle data as rigorously as should be done, especially if custom software is required, and (2) knowing the previous, if the data and analysis aren’t available to everyone, the odds of the analysis being correct are much lower.
I’m glad to see CRU open-sourcing their data and programs. It just would have been better if they had done it rather than having it leaked.
Reminding us that, aside from base data and data-collection methodology, IT codes and programming crucially define research results, supplies a vital missing link to Climate Cultists’ decades-long publication of egregious, prima-facie frauds. Minuscule samples, inconsistent and truncated data, blacklists of dissenting colleagues are par for Ministries of Truth. But behind all this lies, as “Chicago” says, the high-tech means used to attain nefarious and phony ends.
Absolutely, every single jot-and-tittle of relevant computer code must go on record in accessible and intelligible form. Not just “debugging” but overall evaluations are prerequisites for validating any hypothesis (“theory” is not synonymous). As Climategate has proved, not only are scientists themselves incapable of composing elaborate IT scenarios, but self-interest by definition prevents them from doing so objectively.
Per Steve McIntyre, Anthony Watts et al., highly politicized hypotheses such as AGW demand expert “peer review” not merely of facts and method but of meta-techniques, programming protocols and standard statistical procedures, which alone may validate an inherently contentious exercise. This means that “peers” need not be scientific specialists themselves– quite the contrary, objective third-party evaluation makes this undesirable. “Keep ’em honest” is peer reviewers’ mantra, always… corrupting this process does not impugn Science but vivifies the necessity for conducting all aspects of any discipline with scrupulous transparency, integrity. Open-source Internet contributors are ideally suited to this purpose.
AGW Cultists succeeded only by concealing every aspect of their agenda-driven, extreme-left ideological aggrandizement of grant monies and academic power to themselves. Absent stringent necessary and sufficient controls, they or others of their ilk will coil like reptiles to poison scientific inquiry again. Only institutional integrity, including not only academia but journals, review boards, panels of disinterested IT experts will preclude recurrence of this massive fraud.
Partisans accustomed to smearing and ruining opponents will protest furiously, arguing from authority by stipulation: Good. The more prejudices they put on record, the easier it becomes to refute their sociopathic fables root-and-branch. To Briffa, Hansen, Jones, Mann, Trenberth, and so on and on: “Climategate” is a decline that you can’t hide or otherwise subvert. Given simple procedural reforms, long known and even longer overdue, you never will be able to pull such vicious stunts again.
Nice article. Here’s my experience with computer models.
Sorry. This is the link
I’m a computer programmer by profession with experience in academia. One of my jobs was in bioinformatics, a term one doesn’t encounter regularly but which refers to the growing intersection of traditional biology with the elaborate computational methods normally associated with other fields. One of my first tasks was to look over a very large program that created alignments of genetic code – essentially computing best matches of strings according to certain rules. The program was written by someone with training as a biologist rather than a computer scientist, and as a result the program was intensely disorganized, arcane, and poorly structured.
To give any fellow computer programmers some indication of the problem: rather than following the general pattern of gathering inputs, performing calculations, and producing an output based on the data, the original writer had structured the program to perform calculations on incremental output based on incremental data, then feed the incremental output back into the calculation loop to be modified repeatedly (adding new pieces and modifying old ones that had been invalidated by the new incremental calculation) until the final output was produced. Additionally, the original writer had no clear understanding of what a function was for, so while the program at first glance appeared to be structured, close examination showed that the functions were simply arbitrary separation points in the code that did not encapsulate a single logical task. Worse yet, the same function was often called from different places in the code to perform entirely different tasks, resulting in something that looked like structured programming but in effect allowed arbitrary entry and exit points into functions, every bit as confusing as if the writer had just called goto to jump into the middle of a loop.
I worked on and off debugging that program for probably six months as new subtle errors were uncovered. Eventually, I despaired of the program ever being made error-free and scrapped it in favor of rewriting the software from scratch – something I probably should have done from the beginning, except that at the beginning I really had very little idea what the program was actually doing (which was far from clear from looking at the code).
The point is that a lot of scientific papers are produced with code that is no more sound than the software I was debugging, which itself had been involved in the production of several papers.
The code that the climate modelers were using, if anything, appears to have been worse. The code I was debugging principally produced subtle errors when the input to the program contained invalid data (which wasn’t that infrequent, as some of the data had been hand-entered). The climate modeling software appears to have pervasive calculation problems that affected the operation of the program even when the data was properly formatted.
In any event, it is my considered opinion as a computer scientist that any scientific paper whose experiment consists of producing a computer model is very much in danger of being nothing more than pseudoscience – not because large programs are prone to error, but because programmers tend to massage the program until it produces something that looks like the expected results. However, you can easily write a model that produces something that looks like the expected results while the guts of the model have little or no bearing on how things really work. Game programmers do this all the time, creating a model for the game world that appears to the consumer of the output (the game player) to produce ‘realistic results’, but which on the inside is often filled with all sorts of simplifications that work only within the narrowly defined game world. Scientific models are often little better. For climate data in particular, there is simply no way to verify the model, because the real-world data set is so marked by random variation (weather) that no model will ever match the observed results. This turns ‘science’ into little more than a philosophical thought experiment where you imagine what the results ought to be.
Excellent article.
We need to keep in mind that even with source code released to reviewers, “peer-reviewed” does not mean “validated.” No reviewer has the time to do a line-by-line study of the code, but might be able to pick up obvious bugs. The real value of freeing the code would be that sufficiently motivated people such as Steve McIntyre would be able to take the time after publication to study the code, observe what the code does to different types of input data, re-implement the algorithms in other languages (such as R), and so on. By this process, they could determine both that the code does what the authors claim it does and that what it does is reasonable from a statistical and physical perspective.
The boundary between theory/model and fact cannot be scientifically blurred just because computers and specially designed software are used in modeling theories. Computer models are by definition theory – that is all that models can ever be. Here, where data now contradicts the predictions of the model and cannot be explained by the model, it means that the model/theory is disproved. It is now the position of the CC establishment that disproved models might/could be/hopefully-will-be reformulated so that AGW theory can be proved correct. The fact that Ptolemy had better data conformance with the theory of epicycles isn’t stopping the IPCC. I’m an attorney, and I strongly believe that AGW supporters, Ph.D’d “experts” as well as the likes of Al Gore and the completely ignorant Leonardo DiCaprio (I hear this actor has an AGW foundation but never graduated from high school), would be humiliated if subjected to elementary cross-examination about the limits and uncertainties of the modeling assumptions and, post-hockey-stick fraud, the proved inability to predict facts. What experimental evidence there is disproves the theories heretofore advanced. Never mind; there’s another end-of-the-world model coming, as soon as they can figure out how to program it, which as of this moment they apparently can’t.
In Aviation, we use DO-178B, which specifies a formal process including requirements, traceability, verification and validation. It’s a two-way check, where the requirements (high and low level) need to match up against the results, including low level testing. It seems to work pretty well.
NASA also has an excellent standard for Software Quality Assurance. Unfortunately, GISS has chosen not to even generate the top level “Software Assurance Classification Report” document.
The tools are out there. It doesn’t even require publication to use them. They don’t work if they’re not used, though.
Iconoclast, I agree that unit tests are a valid approach; however, my 25 years in software tell me that unit tests don’t usually check boundaries, they tend to use very small data sets (as in the Patriot failure above) that don’t account for cumulative errors, and they don’t account for integration failures.
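The cumulative-error point is easy to demonstrate. Here is a small Python sketch of the same mechanism that bit the Patriot system: 0.1 has no exact binary representation, and a long naive summation accumulates rounding error that a short unit-test-sized run would never expose.

```python
# Minimal sketch: naive accumulation of 0.1 drifts; compensated
# summation (math.fsum) does not accumulate per-addition rounding error.
import math

step = 0.1
n = 1_000_000

naive = 0.0
for _ in range(n):
    naive += step          # each addition rounds; the errors accumulate

compensated = math.fsum([step] * n)   # error-compensated summation

# A unit test over a handful of additions would pass within tolerance;
# only a large n exposes the drift.
drift = abs(naive - compensated)
```

A test using ten or a hundred additions sees no problem; the drift only becomes visible at the scale of real workloads, which is exactly the boundary that small-data unit tests miss.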
It’s somewhat misleading to refer to it as “customized software.” In the domain of social science there are a number of well-documented, standardized software packages for analyzing statistical data: SAS, SPSS, S, R, MATLAB, Maple, Mathematica, and subroutine libraries like NAG and RATS. In general these packages contain procedures for handling / processing / transforming data files (in SAS, the DATA step) and procedures for modeling and statistical analysis (e.g., SAS’s PROC REG for linear regression or PROC FACTOR for principal components analysis). These are pretty bulletproof, and if you do the same set of data transforms and modeling in language A and language B you will get the same result. The process in a nutshell looks like this:
Raw Data ==> (transforms) ==> processed data ==> (model) ==> significance tests
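That claim (the same transforms and model in two independent implementations give the same result) can be demonstrated mechanically. Here is a toy sketch with made-up data, using two separate implementations of ordinary least squares standing in for “language A and language B”:

```python
# Two independent OLS implementations should agree to machine precision
# on the same data. (Toy data; nothing here is from any climate code.)
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)

# Implementation A: library least-squares solver.
A = np.column_stack([x, np.ones_like(x)])
slope_a, intercept_a = np.linalg.lstsq(A, y, rcond=None)[0]

# Implementation B: textbook normal-equations formulas.
slope_b = float(np.cov(x, y, bias=True)[0, 1] / np.var(x))
intercept_b = float(y.mean() - slope_b * x.mean())

max_diff = max(abs(slope_a - slope_b), abs(intercept_a - intercept_b))
```

If two implementations of the same documented transform disagree by more than rounding noise, one of them is wrong; that is precisely the check that becomes impossible when the transforms are undocumented.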
In the case of the AGW data base,
Raw Temp data ==> (East Anglia CRU) ==> processed temp data
Dozens, if not hundreds, of models and statistical studies have been based on East Anglia’s processed temperature data base.
The controversy regarding the East Anglia IPCC data is really about the transforms applied to raw data. As I understand the infamous “Harry_read_me.txt” documentation, files containing raw ground station temperature data, tree ring data and ice core samples were transformed using Fortran and R statements into consolidated time series temperature data. The process may have included Fast Fourier low-pass filters, but also apparently included bizarre “fudge factors” that scaled old temperatures downward and recent temps up. In any case, it appears now that the raw data no longer exists, only the resultant processed data. It also appears the researchers cannot even document the transforms they applied to the raw data to get the processed data. One reading of the “Harry_read_me.txt” file is that Harry was desperately trying to reconstruct the raw data from “untransforming” the processed data.
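To make concrete why undocumented transforms matter, here is a purely hypothetical sketch. The fudge factors and magnitudes below are invented for illustration; the actual CRU transforms are, again, undocumented, which is the whole point.

```python
# Hypothetical illustration: an unlogged adjustment that scales old
# values down and recent values up turns a perfectly flat series into
# an apparent warming trend.
import numpy as np

years = np.arange(1900, 2000)
raw = np.full(years.size, 14.0)              # flat "raw temperatures"

fudge = np.linspace(-0.5, 0.5, years.size)   # the undocumented step
processed = raw + fudge

def trend_per_century(t, v):
    """Least-squares slope, expressed per 100 years."""
    return float(np.polyfit(t, v, 1)[0] * 100.0)

raw_trend = trend_per_century(years, raw)            # essentially zero
processed_trend = trend_per_century(years, processed)
```

Without the raw series and the transform, a reviewer sees only the processed trend and has no way to tell it apart from a real one.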
This morning’s announcement that the East Anglia CRU is now willing to share the “data” is really a dodge, because it is an offer to share the processed data.
I just posted these comments to another entry on ChicagoBoyz, but I think they are appropriate here. In addition, I’ll add the following.
• Test the computer model combination for accuracy and precision. NAG and IMSL used to sell software to do this. Result: any model not validated for accuracy and precision should not be considered.
• Test the computer model against previous predictions. If inputting good, multiply-sourced, reviewed data from the past can reproduce today’s weather, then this model is a candidate for continuing to live.
• Open all the efforts up to public scrutiny. Anyone hiding something under their coat is excluded.
The recent events of “Climategate” have stirred new life to a number of thoughts that have been lying dormant in my head during the past few years. Let me say that I have been programming computers since 1970 and spent a long time writing and dealing with computer modeling of the sort that the IPCC is hanging their hat on. Here are some of these thoughts.
1. One of the first things a modeler has to consider is the precision and accuracy of the numerical calculations that a particular model of computer produces. It is a combination of the way the computer is constructed and the code that generates the math. Any floating point calculation is subject to errors that occur because of the nature of those calculations. Both IMSL and NAG, two of the companies that provide mathematical software libraries for technical computing, have repeatedly stressed over the years that the biggest problem they have had is with the actual architecture of the computers performing the math operations that the programs tell them to do. NIST used to be able to certify specific machines, but I do not know their current practices. If the machine you are running on cannot do precise math, all bets are off.
2. Models have to have some certainty. If you cannot demonstrate that a known set of data produces an expected result, anything coming out of a model is useless.
3. Models should be able not only to predict actions into the future but also to “predict” past effects by using the appropriate data sets. That is, using some of the older data and cutting off the newer data, you should be able to match reality with the model’s prediction. Has this exercise ever been done with climate data? If I were sitting in a research center and a modeler showed me a model that went in the opposite direction of reality, I would stop the model, not reality.
4. I was at a talk in the early 80’s by Dr. David Pensak, a renowned computational chemist, when he was asked if DuPont (his employer) would computerize their lab notebooks. His answer went to the heart of the problem. He said (I paraphrase): If I had a theory and talked to you about it in the hallway, you might or might not believe it depending on what you thought of me and my research. If I published that theory in a peer-reviewed paper, you might or might not believe it depending on the regard in which you held that journal. If I gave you the theory printed on computer paper, you would treat it as gospel. The psychological power of a computer printout far exceeds its actual credibility.
5. Destroying raw data is the original sin of a scientist. If you do it and have published based on it, you have tacitly admitted to being so cynical and unethical that you cannot stand valid reexamination of your hypothesis. You have ceased being a scientist and become a religious fanatic.
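The hindcast exercise in point 3 can be sketched as a simple holdout test. This is toy data, and a fitted linear trend stands in for whatever model is being validated: calibrate only on the older half of the record, “predict” the withheld recent half, and score the prediction against what actually happened.

```python
# Holdout hindcast: fit on pre-1950 data only, predict 1950-1999,
# then measure out-of-sample error. (Synthetic series, my choices.)
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
observed = 0.01 * (years - 1900) + rng.normal(0.0, 0.05, size=years.size)

train = years < 1950                 # pretend 1950-1999 hasn't happened yet

slope, intercept = np.polyfit(years[train], observed[train], 1)
hindcast = slope * years[~train] + intercept

# Out-of-sample error: if this is large, stop the model, not reality.
rmse = float(np.sqrt(np.mean((hindcast - observed[~train]) ** 2)))
```

A model that cannot pass this test on data it has never seen has no claim on the future.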
Almost all of the climate research in the US is ultimately funded by the federal government, either at federal agencies (NASA, NOAA) or through grants to universities and research institutions. Regardless of whether or not that research becomes part of some policy discussion, we are paying for the research. I think it is entirely appropriate that we make open source data and software a condition of federal funding.
Iowahawk,
It’s somewhat misleading to refer to it as “customized software.”
I don’t think so. Even if a researcher uses one of the many standardized data processing tools, the configuration of the tool is different in each case.
(For the benefit of non-programmers) It is just like with spreadsheets. Each spreadsheet is unique even though the application that created it is highly standardized. Saying that you did a spreadsheet in “Excel version x.x” doesn’t tell anyone else anything about the validity of the spreadsheet.
The real problem here is that peer review is just a superficial process to protect the interests of the publishers of scientific journals. The publishers do not have the resources to pay someone to go over a submitter’s code with a fine-toothed comb. People just trust that the original researchers got the software correct.
When software was a small and simple tool, that was fine. Now it is not. Even most of the packages you mention are themselves full fledged programming environments in their own right.
“Testing” the “model” is not really an option, because there’s nothing to test it against, and there’s not really a model.
To recap:
let Y = [Y1….Yt] = the observed raw time series of temperatures.
let Y’ = [Y’1 … Y’t] = the transformed time series of temperatures.
let C = [C1 … Ct] = the observed time series of CO2 measures.
let C’ = [C’1 … C’t] = the transformed time series of CO2 measures.
What we do know:
C’ and Y’ correlate highly. This is the basis of the IPCC doomsday report.
What we don’t know:
How the hell you get from Y to Y’.
How the hell you get from C to C’.
East Anglia says, “trust us, Y’ is a valid temperature time series.” Even though they cannot produce the Y, or any of the transforms that supposedly transform Y into Y’. Neither can they produce the transform from C to C’.
Nerdbert “I’m glad to see CRU open-sourcing their data and programs. It just would have been better if they had done it rather than having it leaked.”
Actually they are not open-sourcing their data. As Iowahawk notes “This morning’s announcement that the East Anglia CRU is now willing to share the “data” is really a dodge, because it is an offer to share the processed data”
The raw data is gone, down the memory hole.
Close, but not quite. The first three names on a paper are the most critical. That is valuable turf and pitched battles are often waged over the third slot. A mere technician would never get into that space.
That depends entirely upon the research group, number of authors &c. Research groups differ in their opinion of how the credit should be divided. As for ‘mere’ technicians, their skill and knowledge can vary considerably. In some cases they may have more knowledge than the researchers in specific techniques being used.
Shannon
The temperature data, decade to decade and century to century, seems to be sinusoidal/periodic. The current received wisdom, i.e., forcing a line through the last 20 data points and then asking for worldwide carbon trading based on the “precautionary principle”, seems a little much to me. I worry about the factory worker in Ohio.
The precautionary principle is not a scientific principle but a risk management rule [I worked in a division of Monsanto and know a little about how this “principle” was applied to genetically modified foods. The Europeans love it. That’s why they never took the boat over here in 1890. Too much risk. ]
We studied infinite series at RPI (1958) and I recall that the Fourier Series has strong convergence properties. I am a retired chemical engineer and we did not use trigonometric series in our modeling.
EEs do. And would argue that the 300 year shape of temperature data looks more like a radio wave than a straight line.
Do you know if anyone has tried the Fourier series approach? I have a vague recollection from all the statistics books I have read that regressing N data points gives a series with (N-2)/2 terms. The divisor is 2 because each term in the series has both an amplitude and a wavelength. So 200 years of data would be quite powerful.
Peter B. Ch. E., M. Ch. E., P. E.
Lake Barrington, IL
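For what it’s worth, the Fourier-regression idea above can be sketched as an ordinary least-squares fit of sine/cosine pairs at candidate periods. The data here is synthetic; the 60-year cycle and its 0.4 amplitude are invented for illustration, not measured values.

```python
# Fit sine/cosine pairs at candidate periods by OLS and read off the
# amplitude of each cycle from its coefficient pair.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200, dtype=float)                  # "200 years" of data
y = 0.4 * np.sin(2 * np.pi * t / 60.0) + rng.normal(0.0, 0.05, size=t.size)

periods = [30.0, 60.0, 120.0]                    # candidate wavelengths
cols = [np.ones_like(t)]                         # intercept
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Amplitude of each candidate cycle = hypot of its sin/cos coefficients.
amplitude = {p: float(np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]))
             for i, p in enumerate(periods)}
dominant = max(amplitude, key=amplitude.get)
```

Each period contributes two fitted coefficients (amplitude and phase, recoverable from the sine/cosine pair), which is exactly the (N-2)/2-terms counting argument above.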
I worked in a National Lab in which an affiliated group was trying to model a complex dynamic system. The group, filled with very smart people with Ph.D.s, spent a year validating the model, trying to figure out why their results were not matching the experiment. Eventually, someone was able to pinpoint the forgotten factor, and the model began to produce acceptable results, i.e., began to predict well the behavior of the system given certain dynamical inputs.
Here’s the thing: the model was a simple linear, albeit large scale, system. The validation included hundreds of well controlled experiments with very precise measurements. Climate models are not linear and cannot be subjected to independent, well controlled experiments with precise measurements, for which reason I am naturally skeptical about their predictive ability.
Here’s another thing. The mathematics of dynamic systems, particularly the stochastic element, is surprisingly complicated, and most climate researchers are largely ignorant of it. I read a paper by James Hansen in which he used a variety of data sources to show a positive derivative (sensitivity) of global temperature with respect to increased CO2. The implication, he seemed to assert, was that the asymptotic temperature would therefore be higher. This, from a dynamical-systems point of view, is demonstrably wrong. Even a type of relatively simple linear system (the so-called non-minimum phase system) can show an instantaneous positive derivative in response to an input, but have an overall decrease in that variable. Unfortunately, no one knows enough about the dynamic nature of the earth’s climate system to be able to definitively assert what the asymptotics will be. In fact, my impression is that most of the models are static energy-balance types of models, yet another factor in my skepticism.
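The non-minimum-phase point can be made concrete with a small simulation. The transfer function below is my own illustrative choice, not anything from Hansen’s paper: G(s) = (2s - 1)/(s^2 + 2s + 1), whose unit-step response y(t) = -1 + (1 + 3t)e^(-t) starts with slope +2 but settles at -1.

```python
# Step response of a non-minimum-phase system: positive initial
# derivative, negative asymptote. Simulated by forward Euler on the
# controllable canonical state-space form of G(s).
import numpy as np

dt, T = 1e-3, 20.0
n = int(T / dt)
x1 = x2 = 0.0                     # state variables, initially at rest
y = np.empty(n)
for i in range(n):
    dx1 = x2
    dx2 = -x1 - 2.0 * x2 + 1.0    # unit step input applied at t = 0
    x1 += dt * dx1
    x2 += dt * dx2
    y[i] = -x1 + 2.0 * x2         # output rises at first...

early = float(y[int(0.5 / dt)])   # positive shortly after the step
final = float(y[-1])              # ...but the asymptote is -1
```

An observer who watched this system for the first half second would conclude the input drives the output up; the asymptote says the opposite. That is exactly the inference gap being criticized.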
Finally, there is an operational issue. Good code must be well designed, continually reviewed, and well documented. I’ve rarely experienced solid, working code that looks like spaghetti. From what I understand, the code produced by the CRU was amateurish. Did I mention that I was skeptical?
A story:
My boss years ago was making a presentation of simulation results, which he referred to in passing as “data.” An irascible old Swiss physicist objected, saying “Zets not data! Zets compooter generated garbage!”
Our Swissy friend did have a point. Simulation based projections lack credibility if they can’t point to specific observations predicted without prior knowledge of the outcome. Still waiting for this from the warmer crowd. That doesn’t mean they’re wrong, but it does mean it’s not prudent to spend trillions assuming that they’re right.
This is not a new problem. Thirty-four years ago, in 1975, Edsger Dijkstra (noted computer scientist) said, “In the good old days physicists repeated each other’s experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other’s programs, bugs included.” The only real difference is that CRU didn’t publish their program.
Ha. I worked at NCAR in the mid 1980’s with top climate scientists building the Acid Rain Deposition model. I _know_ for a fact that the scientific software is subjected to immense amounts of peer review. Further, my wife worked for the environmental consulting firm that created the code used throughout the country for measuring environmental impact of emissions regulations. Her particular job was processing inputs and the statistical massaging of input data sets. She’s laughing her ass off at the level of ignorance shown in this post.
Software is hard, and I’ve been in the software industry for over two decades now. I’d be willing to bet my bottom dollar that well over 99% of the programmers in what is known as the software industry have never done any scientific programming, much less understand even the basics of what goes into numerical programming and the issues involved. All of the projects I’ve seen in real live climate modeling have used professional programmers to augment and support the scientists who also code. It’s not as if they don’t understand programming and the importance of process and review.
Oh, and I got to say that anyone who writes web sites or IT software who casts aspersions on the software development practices and processes in the scientific community is probably the richest joke of all. Pull the other finger, dude. That’s a good one.
Yes, an army of Davids. Because millions of flies can’t be wrong.
Your article regarding scientific software is grossly inaccurate. You’ve based your entire argument on the fraud committed by CRU and applied it to all of science. Most if not all molecular modeling software is thoroughly reviewed and tested.
A statistical study was done some decades ago that showed an exponentially increasing probability of error in software programs once the total number of lines of code exceeded something like ten (10). Within the software engineering community, this was not open to debate. All of the techniques, processes – and even some languages – developed over the past 40 years or so have been attempts to deal with this issue. A program consisting of 500,000 lines of code will without question contain errors, probably hundreds if the code is unspecified, undesigned, unstructured, improperly tested and developed piecemeal over a long period of time without formal oversight and process. It’s not so much a program per se, implying significant logical internal structure and consistency, as the software equivalent of a giant ball of yarn fragments.
Here’s the deal. Given a data set, a model concept and a predetermined desired outcome, I can always achieve that outcome. Always. (All I have to do is abandon my integrity.) I can do this either by altering or combining the data in various helpful ways, or by “tweaking” the model, or both. If I absolutely control both elements of the “study”, you’ll never know how I did it. If the model is sufficiently large and apparently complex, verification and validation become all the more difficult even if I give it to you. If I “lose” the raw data, V & V becomes literally impossible. This is the current situation in a key corner of climatology – that we know about so far. The situation won’t improve because it can’t, but it may further deteriorate.
I think Iconoclast’s proposal of unit tests is the best way to go. It avoids the problems of software that is proprietary or that needs to be secure for whatever reason.
For years I worked in the aviation industry and did a lot of modeling using Mathcad. Early on I realized the importance of testing my model with known artificial data to make sure the model was legit.
It should be possible to develop a series of standardized data sets (or algorithms) that include positive and negative slopes, single and multiple inflection points, single and multiple asymptotes, sinusoidal variations, log and exponential variations, purely random data etc.
This is how the ‘hockey stick’ error (or hoax if you prefer) was discovered, right? Random data was run through the model and it still generated the infamous ‘hockey stick’ curve.
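A minimal harness along these lines might look like the following. The estimator here is a plain least-squares slope standing in for whatever routine is actually under test; a hockey-stick-prone procedure would fail the pure-noise case.

```python
# Run series with known ground truth (flat, rising, falling, sinusoidal,
# pure noise) through a trend estimator and check that it reports what
# was actually put in.
import numpy as np

def estimated_slope(series):
    t = np.arange(len(series))
    return float(np.polyfit(t, series, 1)[0])

rng = np.random.default_rng(7)
t = np.arange(500)
cases = {                            # name: (series, true slope)
    "flat":     (np.zeros(500),                 0.0),
    "rising":   (0.02 * t,                      0.02),
    "falling":  (-0.02 * t,                    -0.02),
    "sinusoid": (np.sin(2 * np.pi * t / 50.0),  0.0),
    "noise":    (rng.normal(0.0, 1.0, 500),     0.0),
}

results = {name: abs(estimated_slope(s) - truth) < 0.01
           for name, (s, truth) in cases.items()}
all_pass = all(results.values())
```

The virtue of such a suite is that it needs no access to proprietary data: any lab can run the standardized inputs through its own code and publish whether the known answers come back out.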
“Ha. I worked at NCAR in the mid 1980’s with top climate scientists building the Acid Rain Deposition model. I _know_ for a fact that the scientific software is subjected to immense amounts of peer review.”
Cool. Can I have a link to the current version of the code base for that model? I did some googling, but it wasn’t obvious where I could find the source code. Do I have to subscribe to a particular journal?
“Further, my wife worked for the environmental consulting firm that created the code used throughout the country for measuring environmental impact of emissions regulations.”
Excellent. Can I have the source files?
And just for the record, in case there might be some confusion, in this context ‘peer review’ does not mean ‘code review’. I’ve left academia and am now in industry. The code I write is subject to review by my peers (the other people who work here), but that in no way shape or form makes it ‘peer reviewed’.
The statistical software package ‘R’ has been subject to immense amounts of actual peer review (because it is open-sourced, not because a formal peer review process exists for software), and ‘R’ is frequently used in scientific research. The problem is that I can also use R as a full-featured programming language, and quite happily get papers published based on my results while keeping my software methods totally secret, without having to document them at all. All I in theory have to do is describe the algorithmic techniques I used so that someone else could recreate the software. The thing is, in CRU’s case, they can’t even do that.
Jdkchem,
Your article regarding scientific software is grossly inaccurate. You’ve based your entire argument on the fraud committed by CRU and applied it to all of science.
Nope, it’s based on my experience and observations of the social factors related to how science works in the real world. For example, I know how much money is spent publishing the average scientific paper. That cost does not include enough money to cover professional software review.
Most if not all molecular modeling software is thoroughly reviewed and tested
The standardized modeling environment? Yes. Each individual model? No.
It’s just like with spreadsheets. Excel as a program is rather thoroughly tested. The individual spreadsheets created by its tens of millions of users are not. The vast majority of errors related to spreadsheets occur in the individual spreadsheets, not in Excel itself.
Most of you have already done the translation of Hal’s risible post. But just in case you missed it: “flies” = “Hitlers”.
Godwin is smiling at the incredible subtlety.
Heh.
Anyway, why yes, all of us master engineers who work for international scientific instrument companies are entirely unfamiliar with measurement, calibration, simulation modeling and software quality issues. Since that’s true, Hal must be right, and let’s be sure to feed trillions of dollars to O Duce and all the other modern-day Idi Amin wannabes and their slaveholders rather than actually, say, do anything that would actually help people who need it.
Speaking of actual results, let’s take a look at the track record of where the warmer “environmentalist” shibboleths lead. Maybe you haven’t heard the true story of DDT, for example:
http://www.youtube.com/watch?v=aSYla0y9Wcs
And also of relevance here is Michael Crichton on environmentalism as a religion:
http://www.youtube.com/watch?v=Vv9OSxTy1aU
A Chesterton quote that has enormous explanatory power for today’s world is the following: “When a Man stops believing in God he doesn’t then believe in nothing, he believes anything.” (Some dispute about whether this is an exact quote but it’s the pithiest form I have seen. And no, don’t pigeonhole me as a religious loon, one of my other favorite quotes is Fitzgerald’s “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”)
Just for reference in how it has been done in the nuclear industry, where public health and safety is very much a concern —
The research institute funded by U.S. utilities developed a code to model thermal hydraulics in the reactor core, VIPRE, and a code, RETRAN, for the balance of plant.
– The Nuclear Regulatory Commission reviewed and accepted the codes.
– They are maintained under a quality assurance, QA, program.
– Five volumes of documentation accompany each code.
– Users groups get together to regularly review experience with the code and any new code developments.
My own experience was in using a very early RETRAN three decades ago to develop a model for the power plant on which I worked.
We can safely say that the CRU code has
– No independent review. And trust me, CRU couldn’t have a much more adversarial relationship with skeptics than I had with the NRC back in the day. My bad, I admit.
– No QA program
– Documentation? Let us laugh together.
– Users review? Well, yes, yes they did. We can piece their review together from the emails detailing which tricks to use to get the right answer, which data to leave out because it doesn’t conform to pet theories, and which fudge factors to use.
Just saying.
I will add here a response I made elsewhere to this blog.
Regards,
Doug Williams
> I think this article points to what is an even more serious problem (because more pervasive):
> https://chicagoboyz.net/archives/10436.html
Well, this is a serious argument for a change, and I think a good one, though overstated. Most research involving statistical modeling is written in an open source programming language called R. This language is taught as part of postgraduate programs in the sciences where statistics are heavily used. For example:
http://socserv.mcmaster.ca/jfox/Courses/UCLA/index.html
So this part of the argument is actually incorrect:
> Today, each instance of custom-written scientific software is like an unknown, novel piece of scientific hardware. Each piece of software might as well be an “amazing wozzlescope” for all that anyone has experience with its accuracy and precision.
This mystifies R into a singular thing, unknown and unknowable. That is incorrect.
However, while scientists who evaluate the results produced by software written in R are not necessarily incapable of evaluating the accuracy of the modeling software, it is quite another thing to say that they are given access to the modeling program. I am sure in some cases that they are, but I doubt that such access is necessarily part of the standard peer review publication process. And if it were, then the following argument is also probably valid:
> The software that is used in science is too complex for peer review as currently practiced. ESR has suggested Open Source Science, and it is perhaps the right way forward.
But this is a little too simple, and conflates peer review with the ongoing testing of results that are part of the scientific method.
So far as the tool is concerned, R *is* open source. SAS isn’t, but even there, there’s actually an argument to be made that if everyone is using the same closed source modeling system, then you’d have less likelihood of writing in subtle biases that existed as an artifact of one statistical modeling program.
> This problem has been known for a while; it was several years ago that the famous “Four Color” problem in mathematics was proved by computer. The mathematicians were uncomfortable for a while, but then decided that they had looked the thing over well enough, and the proof was good.
I’m not quite sure how that is relevant to the problem. What is the argument? That the scientists looked at the proof, rather than the software used to generate the proof? If so, then that speaks to the lack of familiarity of scientists in *one* field with statistical modeling, not all fields. And as you can see for yourself, there are courses taught in this.
>
…
But I agree entirely with this argument of Eric S. Raymond:
> open-source it all. Publish the primary data sets, publish the programs used to interpret them and create graphs like the well-known global-temperature “hockey stick”, publish everything. Let the code and the data speak for itself; let the facts trump speculation and interpretation.
> http://esr.ibiblio.org/?p=1436
That is in fact exactly what I would expect to see in survey data, and in other areas where statistics are used. That seems like a good idea to implement–though on an online version, as journals are long enough as it is.
> Note again, that I am not an AGW-skeptic, but I might become one, if these revelations keep continuing.
But this is still, so far as I’ve seen, a series of trivial objections.
If someone skewed results in their proprietary program, then that is only going to stand up under the normal scrutiny of science if there is a massive conspiracy of the scientists who rely on that result for another study, or who are checking that result with their own research, to cover up the inaccurate result. If the result does not correlate with the results other teams of scientists produce with their modeling software, then that is an anomaly that would have to be addressed as part of these other studies, regardless of whether or not the modeling software code was open to scrutiny. And if they check their data and find that the anomaly is not on their end, then instead of a dull paper, they have a much more exciting paper conflicting with the other paper; and the exciting quality of their paper increases in proportion to the importance of the paper whose results contradict what would have been expected from the data.
Doing this sort of meta-analysis is hard work, and it is harder if all of the data is not out there in the original study. But as I’ve noted to you before, within a year of this “hockey stick” study, even without an “open source” data set to draw from, the Mann and Jones 2003 paper with the highest projections was shown to be an inaccurate model, whose alarming results in the later years were likely an error produced by a specific trend-smoothing methodology that Mann and Jones described using:
> However, we failed to reproduce the long-term
> (>40 years) Northern Hemispheric surface thermometer
> temperature trends shown in Figure 2.21 of IPCC TAR (as
> Figure 1a here) and in Figure 2a of Mann and Jones [2003]
> (as Figure 1d here). We conclude that published results
> suggesting that the Northern Hemisphere surface air temperature
> has increased by the extremely rapid rate of about 1
> to 2.5C per decade during the last one year (2002–2003)
> (see Figure 3) are most likely artefacts of methodology and
> procedure of trend smoothing. Accurate communication of
> methods and avoidance of data-padding procedures for
> smoothing and/or filtering of climatic time series should
> be incorporated in reporting data trends.
http://www.cfa.harvard.edu/~wsoon/myownPapers-d/SLB-GRL04-NHtempTrend.pdf
So the claim the Chicago Boys blog wants to make that no one understands this, and that scientists cannot check each other’s work, ONLY works at the level of a peer review publication, which is not the same thing as saying that a published error is never corrected. It *is* corrected, because the scientific method ensures that other scientists will check the data, or uncover indirect evidence that it is wrong by relying on it, and then discovering results they can’t account for until they examine the premises of their own work.
Again, this is a distraction fallacy: It is a trivial objection. In this case, the hypothesis they make (no one can check this) is refuted by the history of the very study they cite as one that scientists couldn’t address–which was refuted within one year as faulty, because of a statistical modeling error.
Moreover, to treat this as generally undercutting global warming science is the fallacy of hasty generalization. One or two errors (and again, an error caught within one year) does not invalidate the entire field of climatology.
> BTW Doug, you may well be wrong about the uniqueness of the data. I suspect that there was a program for weather stations to ship their old data to England. Some stations may have kept backups, but given that much of this was on tape (!!!) one tends to doubt it. This is truly a catastrophe.
I very much doubt that any single source of data would have simply been destroyed. Above all in the sciences, destroying the only source of data would be unthinkable. Destroying your *own* copy wouldn’t be, even if it was culled over time from dozens of scattered sources around the world, so long as those primary sources remained intact, though in the context it is of course suspect behavior. It leaves others with the inconvenience of gathering up the raw data themselves and testing it again, which is precisely what Soon, et al., did within 10 months of the original paper’s publication.
And apart from the fact that Soon, et al, managed to do what you imply they could no longer do, the University of East Anglia is hardly the only place in the world where scientists would have been interested in having climate records, nor the only place where scientists use such data to generate statistical predictive models of the climate. I would think that not just Mann, Jones, Soon, and his associates have the raw data, but also thousands of other scientists working in the field.
Regarding Doug Williams’ comment and a few thoughts:
GISTEMP is not written in R, nor is the code that came out of the UEA CRU ‘unofficial FOI response’.
They are the kind of Fortran code that grows out of the work of a non-professional programmer plinking around trying to do something; the shortcuts and hacks that are acceptable in a non-mission-critical piece of experimental code gradually become the core of something that grows little by little without much thought to structure.
You don’t get a ‘peer review’ of something like this from ‘climate scientists’, whatever that term means; you need a body of skilled programmers and software architects – who will NOT be ‘climate scientists’ – to do a code walkthrough and review the thing for both design and functionality. You need a formal design spec that documents what the program does, and a QA process comparable to what you’d find for any piece of commercial software to check that expected inputs produce expected outputs based on that design spec, and that anomalous inputs produce traceable error conditions and do not propagate errors into downstream processes.
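The requirement that expected inputs produce expected outputs and anomalous inputs produce traceable errors can be sketched in a few lines. This is purely illustrative: the function name, the station-record format and the plausibility bounds below are all invented for the example, not taken from GISTEMP, CRU or any real climate codebase.

```python
# Hypothetical sketch of a spec-driven QA check: validate inputs up front
# so that bad data fails loudly instead of propagating downstream.

def monthly_mean(temps):
    """Mean of a month's daily temperature readings (deg C).

    Anomalous inputs raise a traceable error rather than silently
    feeding garbage into later processing stages.
    """
    if not temps:
        raise ValueError("empty station record")
    for t in temps:
        if not (-90.0 <= t <= 60.0):  # invented plausibility bounds
            raise ValueError(f"implausible reading: {t}")
    return sum(temps) / len(temps)

# Spec-style check: an expected input produces the expected output...
assert monthly_mean([10.0, 12.0, 14.0]) == 12.0

# ...and an anomalous input produces a traceable error condition.
try:
    monthly_mean([10.0, 999.9])
except ValueError as e:
    print("caught:", e)
```

Trivial as it looks, this is the kind of check that, multiplied across every routine and tied to a design spec, constitutes the QA process described above; its absence is exactly what a code walkthrough would flag.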
Given the money – our money – that our politicians have riding on GISTEMP and HADCRUn output, you’d expect the kind of software QC effort that goes into medical or weapons-control software. The ‘peer review’ process as it exists accomplishes none of this.
With the Mann ‘hockey stick’, the hard-fought release of GISTEMP code, and now this, what we’ve seen is that the response of any software professional who bothers to look is “Is that all there is?” A piece of software that tracks a urine sample from collection to testing has infinitely more complexity than these things, far more effort put into its design, far more quality control applied to its production.
I think this is only a half truth….
Iowahawk – I’m astonished to learn there is an AGW regression. Where is it, and what’s the R²? :)
For 8 years Republicans waged a war against science by repressing environmental research at NASA and the EPA, blocking stem cell research, supporting creationism and fighting the teaching of evolution.
Now, because one university in the UK leaked a few comments out of context and concealed data and methodology, they are “oh-so-shocked!” I wonder what the conservative word for hypocrisy is.
Diego Moita,
All politicians dislike some facet of science because real science tells them something they do not want to hear. The left is just as bad about science and in many cases worse, because they claim to base so much of their policy on it. I have done several posts on the subject.
In any case, just because “conservatives” are hypocritical on some attributes of science in no way excuses or mitigates the damage caused by the incompetence and possible fraud at CRU. Conservatives’ disrespect for science cannot possibly do as much damage and kill as many people as would altering the entire planetary economy over the course of the next century to head off a wildly exaggerated threat.
I hate to break it to you, but it is usually the case that both/all sides in political struggles are equally contemptuous of the facts, scientific or otherwise. Politics is about power, not the search for truth. The best you can do is to pick the side in each particular instance that is most respectful of the truth right at that moment.
Right now, in the context of global warming, the scientific integrity prize definitely belongs on the right side of the political spectrum. However, pick another issue and it may not.
Shannon, I found this recent article of yours after reading a reply you made to a post on Pajamas Media about Ozone Hole depletion. I was impressed by the non-hysterical balanced and logical reply you made there and noticed you are in Austin — me too — and wondered if you had written anything else. So I found this, and thought I’d like to throw my 2 cents in.
I am a climate change believer, but before all that have always been a science believer, which means that I am open-minded and willing to listen to POVs that challenge mine. Unlike Greengrove, who appears to be convinced that there is no GW and appears to be unwilling to listen otherwise. I really like the comments of Christy, Eric Raymond, and Anonymous [who posted Nov 30 at 6:57pm] primarily because they appear balanced.
My main concern with this whole debacle with the CRU is that they have further damaged the credibility of the GW argument. For example, what will happen if in one or two years the scientific community accepts Shannon’s argument for good open source modeling and code, applies that to the CRU data, and arrives at the same conclusions the CRU did? Will any of you anti-GW fanatics believe the science then?
Probably not is my guess. Shannon is also right that people who get too tied emotionally to a debate tend to accept only the research that validates them.
Billb166,
For example, what will happen if in one or two years the scientific community accepts Shannon’s argument for good open source modeling and code, applies that to the CRU data, and arrives at the same conclusions the CRU did?
Even if we recreate CRU’s data by starting from scratch, that is only one minor step in predicting our climate a century out to the extent that we can justify reengineering the entire planetary economy and tech base.
Remember, to restrict our energy use in order to prevent the warming based on the IPCC projections we are going to have to trap half the world’s population at a subsistence level of energy consumption. That in turn will condemn hundreds of millions of people over the next century to great suffering and death from simple material poverty. The level of certainty we need before killing so many people is very, very high.
Reproducing the CRU data won’t help us a lot. The data under discussion actually has nothing to do with predicting future climate. It is instead concerned with creating a climate record of the last 1500 years based on tree ring data and other suspected climate proxies. The problem is that such data is completely untestable.
In order to test a scientific hypothesis, you have to be able to falsify it. That means the hypothesis must make a prediction of some phenomena that CANNOT be observed if the hypothesis is correct. There is no conceivable observation that we could make that would conclusively demonstrate that the tree ring data did NOT accurately reflect the climate of the distant past. This means we have no means to evaluate the bounds of the tree ring data’s power for predicting past climate. It might be fairly accurate, it might be way off. We can’t tell.
The simulations of future climate depend on this historical data to create the patterns they extrapolate. The complex and chaotic nature of the simulations means that even minor inaccuracies in the inputs can produce wildly inaccurate outputs. Worse, we have no means of testing here and now the predictive power of simulations a century ahead.
So, in the end, it is very hard to get a true, predictive scientific model of the long term climate. It is unlikely we will ever have models so accurate and proven that we can justify using them to kill people.
Will any of you anti GW fanatics believe the science then?
I often ask proponents of Catastrophic Anthropogenic Global Warming if there is any evidence that could conceivably convince them they are incorrect. They usually reply that there isn’t, because it’s all so proven that no possible evidence could exist proving it wrong.
Of course, that is not science. Something that cannot be falsified, even in theory, is religion, not science.
So, I put the question to you. What observations could we make here and now that you would accept as conclusive evidence that we cannot accurately predict the climate 100 years ahead? If you can’t answer that question off the top of your head, then you don’t have a scientific understanding of global warming. If you think it can’t be answered, then you’re not talking about science at all.
Wow, thanks for reasoning as opposed to blasting. These days a conversation like this seems kind of quaint.
I will not pretend to be as scientifically in tune as yourself Shannon, but I still think there are some arguments. Your statement “to restrict our energy use in order to prevent the warming based on the IPCC projections we are going to have to trap half the world’s population at a subsistence level of energy consumption” is not the end path that I foresee if suddenly everyone agreed that global warming was human caused. Humans avoid pain. Our nature is to postpone pain as long as possible, even if we understand that postponing that pain will in the long run be better for us. Look at our history with addictions to nicotine, drugs, …even energy(?). For some people it is not possible to curtail an addiction. So I am not sure that we immediately pass laws prohibiting driving, grounding jets, mandating everyone grow their own food… not sure exactly how the scenario you fear could be accomplished. I think your argument is taking an unreasonable turn.
If your statement implies that developed nations will somehow keep on using energy but that they will force the third and fourth worlds to subsist…well, I think that is unlikely too.
IMO, the purpose of trying to get everyone on the same page re the causes of GW is so that we can plan the next steps toward reducing our energy footprint with as little pain as possible. If you are saying that we cannot reduce our energy footprint, or that the only steps we can take cause civilizational collapse, then I would have to ask you to start proving that hypothesis. I would argue that we can start taking some steps that reduce our energy usage without tipping us into global economic decline.
For example, suppose governments passed laws mandating that all cars get 30 mpg. Or 40 mpg. Aside from howls about “restricting our rights,” the most serious argument against this is that achieving such a standard forces us to use lighter materials, which makes cars less safe and more injurious in accidents.
But as you have similarly argued elsewhere, if 1000 more people lose their lives because of these “unsafe” cars, yet the atmosphere, oceans, etc are less polluted, is not this an OK tradeoff? [I forget the details of your comment, but somewhere on another blog you pointed out that accepting a little pain in one area is OK if it helps in another area]
So for me, the real argument is how far can we push the economic tipping point? I.e., what steps can we take that will help slow down GW yet not damage our economy.
I realize that many still debate GW, but for me the real debate has moved on to what can we do that is realistic. I completely agree with you that we should not take steps that ruin us. But I want to know what we can do.
You also argue that: 1) CRU simulations may be faulty and cannot be tested [true in the strict sense, but sims can be improved], 2) CRU tree ring hypotheses cannot be tested [true] and finally 3) very hard to get a valid predictive model [I guess we will not know for another 50 years, 100 years, 1000 years?] — you are correct here too.
You then ask if there is any data that I would accept showing that AGW (Anthropogenic Global Warming) is incorrect. Well, yes: if so many of the sciences that currently show GW changes taking place were to backtrack and say no, the changes in their field are reversing, then I would happily be willing to believe that humans did not cause GW. You are not one of those people who believe that all of us global warmers are conspirators, are you? My guess is no — that you believe we just do not understand real science.
I am enjoying this debate Shannon, hope you are too.
Just noticed that my sentence in paragraph 2 of my post should read “Our nature is to postpone pain as long as possible, even if we understand that postponing that pain will in the long run be WORSE for us.”
Forgive my ignorance, but I had thought the concern was for the “global temperature” on an annualized basis. Why is there so much adjusting and nudging and poking and prodding of the data? It has been tortured enough that it will confess to anything.
A reasonable person would not attempt to measure the GLOBAL temperature, but that of a specific area, probably equatorial, that had the least impacts from outside influence. That would minimize the noise, and make any real temperature drift much more easily observed.
With such a simplistic measurement, perhaps a real scientific hypothesis could be generated. Such as,
“For every GT of CO2 released into the global atmosphere, we recorded an X.XXXXX °C change in temperature, AND the recorded temperatures at the multiple equatorial sites around the world responded similarly to the change in CO2 ppm.”
Likely? Not. We don’t even have one of the X’s in the °C change in temperature, as we are arguing about the trees in Yamal and the Urban Heat Island effect on the stations in Birmingham, AL, and Fresno, CA, where the airport came to the station, but the records are “adjusted”… by some unknown factor possibly pulled out of thin air. Interpolated, adjusted, creamed, whipped, and it’s a Floor Polish Too!
The unsupervised cronies in the UK wasted their time arguing about corrective factors to generate a global number that would mean nothing.
Someone, somewhere must be chortling mightily at this attempt to measure the number of angels that can dance on the head of a pin.
tom
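As an aside on the hypothetical measurement in the comment above: mechanically, estimating an “X °C per GT of CO2” figure is just fitting a slope to paired observations. The sketch below uses an ordinary least-squares fit with entirely made-up numbers; they are not climate data and imply nothing about the real value of the elided X.XXXXX.

```python
# Illustrative only: a least-squares slope of temperature anomaly against
# cumulative CO2 emissions, using invented numbers to show the *shape* of
# the hypothesis described above, not any real result.

def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical series: cumulative CO2 (GT) vs. temperature anomaly (deg C)
co2 = [0, 10, 20, 30, 40]
anomaly = [0.00, 0.02, 0.05, 0.06, 0.09]

print(f"estimated sensitivity: {ols_slope(co2, anomaly):.5f} C per GT")
```

Of course, the whole dispute in this thread is about whether the input series themselves can be trusted; a clean slope estimate over adjusted data settles nothing.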
Shannon,
Most of your comments are very perceptive, but I would not necessarily support the open source approach as the only way to ensure good modeling results. There are several approaches I have mentioned before:
• Test the computer model combination for accuracy and precision. NAG and IMSL used to sell software to do this. Result: any model not validated for accuracy and precision should not be considered. If the current code uses home-brewed functions, they also must be validated. I first encountered this problem in the mid-’70s when trying to calculate spring rates and damping coefficients for rubber components. The computer was wrong. My pocket calculator was right.
• Test the computer model against previous predictions. If the input of good, multiple-source-reviewed data from the past can reproduce today’s weather, then this model is a candidate for continuing to live. Does anyone know if the IPCC or related institutions ever did this to validate the model?
• Open all the efforts up to public scrutiny (this includes the data filters). Anyone hiding something under their coat is excluded. Sounds like open source, but it doesn’t have to be – just publication of the data and the results. Combined with the first two points, this should provide enough confidence in the performance of the models. Note that it provides no confidence intrinsically for the inputs to the model; some other testing and analysis must be done for that.
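The first two points amount to validation tests that could in principle be automated. A minimal sketch, in which the model, the reference values and the tolerance are all invented stand-ins for illustration only:

```python
# Toy stand-in for a real climate simulation; the numbers are invented.
def toy_model(year):
    """Pretend hindcast: global mean temperature (deg C) for a given year."""
    return 14.0 + 0.01 * (year - 1950)

# 1) Accuracy/precision: compare the code's output against independently
#    verified reference values (the NAG/IMSL idea), within a tolerance.
reference = {1950: 14.00, 1980: 14.30, 2000: 14.50}  # made-up references
for year, expected in reference.items():
    assert abs(toy_model(year) - expected) < 0.05, (year, toy_model(year))

# 2) Hindcasting: feed the model past inputs and check that it reproduces
#    the observed record before trusting its forward projections.
observed_2000 = 14.48  # made-up observation for illustration
print("hindcast error:", round(abs(toy_model(2000) - observed_2000), 3))
```

The point is not the toy arithmetic but the discipline: every release of the model would have to pass such checks before its projections were used for anything.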
On a more expansive note, I have been doing stuff like this over the last 30+ years and I have done and seen the results of using both good and bad computer models. I strongly prefer that any model be fully tested and validated and that any data that goes in be of the most reliable quality. I am familiar with the quality of environmental data due to my 6+ years working under contract to the US EPA. I would never base a US law on it, let alone a global mandate like the one being proposed.
Left off a name on the last comment. It’s Ludd, Ned Ludd
Billb166,
Look at our history with addictions to nicotine, drugs, …even energy(?)
I believe this comment encapsulates why you and others who think like you are so willing to bet that Catastrophic Anthropogenic Global Warming (CAGW) is real. You simply have no intuitive understanding of the central role that Carbon Emitting Energy Sources (CEES) play in our civilization. You believe that CEES are something akin to a luxury product which we can easily do without with just a little belt-tightening.
Your comment about cars reveals this. Like most naive people, you think of energy consumption in terms of household consumption. In fact, household/end-user consumption of CEES in all forms is minor compared to the consumption of CEES in manufacturing, information management and distribution. While some of us could do with smaller cars, we can’t really get by with smaller ships, planes, trains, trucks, etc. The same applies to factories, offices, hospitals, etc.
Modern technology is not a luxury. It keeps people alive. It is why no child of my generation died, but every one of my grandparents lost a sibling in childhood. It is why malaria has disappeared from America (mosquitoes blocked by metal screens smelted with coal, and air conditioning powered by coal and gas generators) whereas energy-poor Africa suffers horribly.
The changes that the IPCC requires based on current technology will be economically devastating worldwide. When you get right down to it, an economy is just a system for using energy to turn dirt into useful things. Less energy = fewer useful things = harder lives and more death.
Besides my technical background, the biggest difference between you and me is that I understand just what a colossal undertaking preventing the global warming of the IPCC predictions will be, and you do not. You imagine it is something analogous to giving up booze for Lent. I understand it means throwing millions of Americans back to the lives of backbreaking, energy-poor work that our grandparents lived in their youths. I understand it means the collapse of Social Security and other entitlements. I understand it means denying the poorest of humanity any hope for a better future.
You are careless in your evaluation of the CAGW data because you believe that even if it is in error, the consequences will be minor. I am rigorous in my evaluation because I understand that if it is in error and we act on it, the consequences will make all the wars and democides of the 20th century look trivial.
Well, yes, if so many of the sciences that currently show GW changes taking place were to backtrack and say no, the changes in their field are reversing, then I would happily be willing to believe that humans did not cause GW.
That is not what I asked. I asked how you would know if GW was not a problem. In other words, I was asking what specific physical observation you could make, at least in principle, that would demonstrate it was wrong. I ask this to try to demonstrate to you how dodgy the science of CAGW is.
In solid science, the models make firm predictions of phenomena that cannot be observed if the model is accurate. Scientists then attempt to make those observations in order to destroy the model. This is the opposite of how most people in most fields decide what is true. Instead, they seek confirmation of the model and try to explain away any facts that undermine it.
The “science” of CAGW fits the latter pattern. The CRU data quite clearly shows scientists attempting to confirm the model instead of scientists seeking to destroy it.
That you cannot think of a specific physical observation reveals that you don’t believe in CAGW because of the science, or lack thereof.
You are not one of those people who believe that all of us global warmers are conspirators are you?
No, I believe this is part of a cultural pattern in which people who are not involved in the process of material production attempt to create a rationale for why the uninvolved have a right to use the force of the state to control those who are. I do not believe this is a conspiracy but rather a self-emergent phenomenon in which large numbers of people angry and resentful towards the productive individual gravitate towards political doctrines that justify giving them dominance and control over the productive. The origins of racism would be a good example of this type of phenomenon. Religious and scientific rationales for racism evolved only after it had become widely established in the population. Politicians enacted racist laws because enough people in the population wanted a legal advantage over others.
Key to controlling the productive is undermining the valid perception that the productive make life peaceful and prosperous for everyone. The easiest way to do that is to create a theory that says that although things look good now, the productive are in fact driving us all over a cliff. Then the theory is declared to be absolutely valid and “proven” to the extent that we can make sweeping changes to society.
The great granddaddy of these theories is Marxism, which held that (1) the productive did not actually produce the goods in society and that (2) it was “scientifically” proven that a society dominated by the productive was doomed to collapse.
Less dramatically, we have since the ’60s seen an ongoing series of “crises” which fit the same pattern. The productive are out of control and are going to get us all killed regardless of how good things look now. The “population bomb,” the energy crisis and general resource depletion were all sold as evidence that just letting people, especially the productive, live their lives without intensive political micromanagement would kill us all. (If you want to see how ugly CAGW could get, read up on Paul Ehrlich’s arguments that it was “obvious” that we had to “write off” India and let hundreds of millions starve.)
Those crises proved to be political illusions. Population growth was already slowing when the panic began, and the energy crisis was wholly political. Every critical resource they had “proven” we were running out of in the ’70s is more abundant and cheaper today.
CAGW is just another excuse for the non-productive to control the productive. It is an excuse to control and dominate people in general. Tellingly, the people who are most passionate about CAGW are likewise the most resistant to using nuclear power which would solve the problem. I don’t think they want the problem solved in a way that leaves people free. At the very least, I can say that their behavior is consistent with someone who did believe these things.
I don’t think that you have ever sat down and thought about things this way. I think you have just “felt” that controlling the productive and others “feels” more right than leaving people free. Take tens of millions of people who feel that way and they all drift into a massive groupthink that justifies that feeling.
I have a lot of confidence that we will discover that anthropogenic global warming is real but a relatively minor concern which the natural progress of technology toward more dense forms of energy will solve without any need for major political action.
I also have a lot of confidence that not ten years after CAGW fades we will have another crisis to replace it. Certainly, CAGW showed up almost exactly ten years after the energy crisis suddenly ended.
Whoa Shannon, I had hopes that you were rational enough to avoid inferring the emotional feelings or philosophical bent behind my arguments. But your statement “large numbers of people angry and resentful towards the productive individual gravitate towards political doctrines that justify giving them dominance and control over the productive” dispels that notion. But more on that in a moment, let me take on some of your reasoning.
The first five or so paragraphs of your post revolve around this concept: “You simply have no intuitive understanding of the central role that Carbon Emitting Energy Sources (CEES) play in our civilization.” You say that I and others who think like me are “willing to bet that Catastrophic Anthropogenic Global Warming (CAGW) is real.”
You may be right about the “others,” but can I please just stick to me? I lived almost 3 years in a mud hut on a mountain in Central America with the Mam and Quiche Indians. I crapped in a hole in the ground, warmed myself with a wood fire in the 30-degree nights, slept on a lice-infested straw mattress, read at night by two candles placed about a foot from my head.
Some of my Indian friends died of infected boils that could have been prevented if they had simple cleaning alcohol. Half of their babies died before they were two; one died in my arms. They were old at 40 from the manual labor: women cleaned clothes by pounding them on rocks in a polluted creek, men carried wood by tump-line packs, bent over for hours.
I could go on with more interesting stories but the gist of this is that no I do not want to live like that, and please do not try to tell me I have little understanding of how good we have it.
You say “modern technology is not a luxury” and I hope you now believe that I agree with you. In fact I am extremely happy that you understand this, because my experience here in the U.S. is that a whole lot of people do not grasp this. I would be interested in hearing how you have come to know this idea.
So your summary [in bold type] is this: “You are careless in your evaluation of the CAGW data because you believe that even if it is in error, the consequences will be minor.”
My response is this:
1) If I am ‘careless’ in my evaluation of the data, I am a) talking to you and listening to you to understand your points, b) always willing to hear another side, and c) always reading about new data and learning more.
2) If my points are shallow (“Your comment about cars reveals this. Like most naive people,…”) … well, I would respond that yes, I did use a personal example: more energy-efficient cars. Your response goes to the heart of CEES consumption: industrial use, distribution, etc. (Was the fact that my example was a personal one how you decided I was naive?) My idea is that some changes can be made in all of the heavy CEES use areas that will help us. Am I interpreting you correctly when you say that basically we will not be able to do enough in these areas to affect CEES usage, or put another way, that the cost of making changes in these areas will be so high that our way of life is threatened?
In a previous post when I said “what steps can we take that will help slow down GW yet not damage our economy?” I hoped that I made it clear I was not willing to destroy us. It seems to me that at least our goal is the same. We do not want to ruin our civilization. But you think I am naive for… for what – exploring how we might be able to lower industrial CEES consumption, or for even wondering if that is possible? Do you think that this is just NOT POSSIBLE? If this is your point, then OK, the corollary is that you believe it is a waste of time to even think about how to do that, much less implement any trial programs. Let me know your answer to this paragraph, Shannon, because if so we finally may be able to conclude this discussion.
You have three paragraphs starting with “In solid science, the models make firm predictions of phenomena that cannot be observed if the model is accurate. …” and state that since I cannot think of a specific physical observation, I don’t believe in CAGW because of the science, or lack thereof.
Well, this is where I may be lacking, because I thought that a scientific model would make predictions and scientists would then try to both confirm it and deny it through observations. Sort of like Einstein’s General Theory of Relativity… when astronomers confirmed that massive bodies bent light, this was one observation in favor of it. Yet there also are some observations that have not yet been made that still might deny it.
Regarding CAGW, it seems like the model is not nearly as rigorous as a physicist’s hypothesis would be. Maybe because the system it is trying to model is so complex. And yes, I would agree that CRU data confirms the model, but also agree that there is still data that denies the model. To my mind it is still up in the air. Yet even so, I think that it is worth developing trials to cut energy usage, just from the point of saving money.
Finally, in your last 8 paragraphs you swerve off into trying to understand my (“our” because you lump me in with every other CAGW supporter) rationale or beliefs and emotions.
To my mind, the greatest obstacle to human progress is our overemotional communication: if we can discuss an issue without imputing another’s emotions or rationale or belief system, if we can just stick to “the facts, ma’am, just the facts,” as Sergeant Friday of Dragnet used to say, then our discussions will yield more fruit.
I cannot defend other CAGW proponents from your belief that they are just “people angry and resentful towards the productive” people of our society. But I do not put myself in that class.
Do I think that your idea may have some merit? Yeah, there are people who think that way. I’m a child of the ’60s, when Marcuse and Mao held sway, so sure, I have seen plenty of people who sneered at us working proletariat types. But IMO you are reaching when you say that “CAGW is just another excuse for the non-productive to control the productive”… I am not sure how you reach that broad conclusion.
I understand your desire to understand why people believe in CAGW. I have some theories myself. But please don’t throw them at someone whom you are having a discussion with, unless you are tired and just want to turn them away. Accusations do not foster understanding, just anger.
BTW, I want nuclear power to come back too.
“I have a lot of confidence that we will discover that anthropogenic global warming is real but a relatively minor concern which the natural progress of technology toward more dense forms of energy will solve without any need for major political action.”
I find this to be a great dissertation. I believe capitalism is an engine of continuous technological innovation for mankind, and it is the right answer, the best answer we have so far, to any present or future world environmental crisis. Our cars today pollute a fraction of what they did 25 years ago, and new cars will pollute a fraction of what today’s cars pollute thanks to what Shannon calls “the natural progress of technology.” Many manufacturing processes have seen, are seeing and will continue to see huge savings in energy consumption thanks to efficiency gains from incorporating software and computerized automated production and maintenance systems, and from breakthroughs in all fields of science and disciplines that impact production processes and businesses.
New technologies are ever moving in, driven by companies seeking recognition, profits and expanded market pools. They are ever more energy-efficient, ever cleaner and more beneficial to us, ever more revolutionary, challenging every set of technological assumptions, concepts and paradigms of past and present times. We are not to thank and revere the “unselfish concern for the welfare of others” of our politicians and scientists turned celebrities who want to impose upon us their grand plans to save the world; we must thank market-driven forces instead.
If you’d like to see the code base used in a growing body of climate modeling work, you can download the CCSM (Version 3.0) here:
http://www.ccsm.ucar.edu/models/download.html
The CCSM (Community Climate System Model) project is described here:
http://www.ucar.edu/communications/CCSM/history.html
“CCSM belongs to an elite category of computer-based simulations known as general-circulation models….”
I trust that, going forward, discussion throughout the blogosphere will take care to remind readers that current-generation modeling code is open to public scrutiny, and that it is, presumably, in general agreement with less-available code.
Recognition of this will doubtless reduce anxiety about misleading science, which (should, I think) please everyone.
Anonymous,
The CCSM is definitely a step in the right direction but is limited by two major factors.
(1) The software requires a supercomputer with hundreds of processors in order to run. Unless all the subcomponents of the software can be run on much smaller computers, we’re not going to see a lot of double checking of how the code actually runs.
(2) The data that CCSM uses to run its simulations were not collected or processed in a transparent and accountable method. That means that even if CCSM were perfect software, the doctrine of GIGO (Garbage In, Garbage Out) means that any of its results are fatally corrupted.
We basically need to start over from scratch in collecting and processing our climatology data. We need to create formal transparency, formal oversight, formal procedures and formal accountability BEFORE we start recollecting the data.
Only then can we start to focus on making the modeling software as accurate as possible.
Billb166,
You say “modern technology is not a luxury” and I hope you now believe that I agree with you.
You say that, but in your previous post you analogized energy use to an addiction to recreational drugs. If you really understood the central importance of energy to our lives and to the betterment of all human lives, such an analogy would have been so obviously wrong it wouldn’t even have occurred to you.
The classical hierarchy of material needs runs: oxygen, water, food, shelter, etc. In the modern world it really should run: oxygen, energy, water, food, shelter, etc., because without energy most of us cannot get water, food or shelter. Just to start, without energy we can’t harvest, purify and distribute clean drinking water.
From my perspective, when you wrote “Look at our history with addictions to nicotine, drugs, …even energy(?)” you might as well have written “Look at our history with addictions to nicotine, drugs, …even clean drinking water(?)” Obviously, anyone who believed that clean water was an “addiction” would be considered shockingly naive. Yet you made essentially that same analogy.
Am I interpreting you correctly when you say that basically we will not be able to do enough in these areas to affect CEES usage, or put another way, that the cost of making changes in these areas will be so high that our way of life is threatened?
I am saying that the concentration on trivial consumer consumption first and foremost in the debate reveals that far too many people don’t understand the real challenge. The fact that the first example that popped into your head was this common trivial improvement strongly suggested to me that you didn’t actually understand it was trivial. This is especially true when taken in context with your “addiction” comment.
But you think I am naive for… for what? Exploring how we might be able to lower industrial CEES consumption, or even wondering if that is possible?
Any idea that begins with the idea that we need to use less energy has immediately headed off in the wrong direction. We don’t need to use less energy. The laws of physics dictate that the only way to use less energy is to reduce our standard of living. We need to start all discussion with how to create more energy. It is naive to start any such discussion with conservation.
Indeed, thanks to a little economic effect called Jevons’ paradox, increasing the efficiency of a technology like cars causes us to use more energy in cars, not less. Again, naive.
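The rebound logic behind Jevons’ paradox can be sketched with hypothetical numbers (the elasticity value below is assumed purely for demonstration, not an empirical estimate): when demand for driving is sufficiently price-elastic, a fuel-efficiency gain lowers the cost per mile enough that total fuel consumption actually rises.

```python
# Toy illustration of Jevons' paradox (hypothetical numbers).
# When demand responds strongly to the per-mile cost of fuel,
# a more efficient car can mean MORE total fuel burned.

def fuel_used(mpg, base_mpg=25, base_miles=10_000, elasticity=1.5):
    """Gallons consumed when miles driven respond to per-mile fuel
    cost with a constant price elasticity of demand."""
    cost_ratio = base_mpg / mpg                      # per-mile cost falls as mpg rises
    miles = base_miles * cost_ratio ** (-elasticity) # demand rises as cost falls
    return miles / mpg

before = fuel_used(mpg=25)   # 10,000 miles / 25 mpg = 400 gallons
after = fuel_used(mpg=50)    # efficiency doubled, yet consumption rises
print(before, after)
```

With an elasticity below 1 the rebound is only partial and total consumption still falls; the full paradox bites only when demand is highly elastic, which is the caveat worth keeping in mind.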
Do you think that this is just NOT POSSIBLE?
It is not possible in the short term to reduce carbon emissions without some reduction in the standard of living. Right now we are getting the atmospheric carbon sink for free. We don’t include it in our accounting any more than we account for the cost of ambient oxygen. Abandoning otherwise functional CEES technology and replacing it with non-CEES technology means diverting resources from increasing/maintaining standards of living to building the new technology. It’s inescapable.
However, it is technologically possible to completely forestall the possibility of CAGW by building large numbers of nuclear power plants. We might still get some warming but nukes will almost certainly head off any catastrophe while at the same time having enough energy to raise everyone’s standard of living to first world levels.
Revealingly, the same people who are the most hysterical about CAGW are also those most hysterically opposed to nuclear power. It’s almost like they don’t want to prevent CAGW but just want to have a state of perpetual crisis.
The only real discussion about adapting to the possibility of CAGW is how fast to build what kind of nuclear power plants. All other discussions, without exception, are either complete wastes of time or actively and lethally counterproductive.
Well, this is where I may be lacking, because I thought that a scientific model would make predictions and scientists would then try to either confirm or deny it through observations.
This is a common misperception. Scientists do not attempt to find evidence to confirm a hypothesis, because there is too much confirmatory information and most of it will confirm more than one hypothesis. Instead, they look to destroy the hypothesis.
Karl Popper illustrated the idea by saying, “If you want to test the hypothesis that all swans are white, you don’t go out counting white swans, you go out and look for one black one.” You could spend your entire life counting white swans without actually confirming the hypothesis. However, if you spent your time trying to track down a black swan but never could, that would lend more credence to the hypothesis than just tallying up white swan after white swan.
In the case of CAGW, you just can’t go around tallying up evidence of warming; you have to rigorously try to destroy the idea that CO2 buildup will lead to catastrophic warming a century ahead. First you must look for some pattern in nature that could exist in principle but CANNOT exist if CO2 is driving significant warming. Otherwise, you just end up counting white swan after white swan, never knowing if the black swan is just on the other side of the reeds.
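The swan point can be put in a toy sketch (my own illustration, not anything from the climate literature): every white swan is consistent with more than one hypothesis, so piling up confirmations cannot choose between them, while a single black swan falsifies exactly one of them.

```python
# Toy illustration: confirmations are cheap, falsifications decisive.
# Every white swan "confirms" both hypotheses below equally well.

observations = ["white"] * 10_000 + ["black"]

# H1: "all swans are white" -- one black swan falsifies it.
h1 = all(s == "white" for s in observations)

# H2: "all swans are white or black" -- survives the same data.
h2 = all(s in ("white", "black") for s in observations)

print(h1, h2)  # the 10,000 white swans could never have separated H1 from H2
```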
Finally, in your last 8 paragraphs you swerve off into trying to understand my (“our,” because you lump me in with every other CAGW supporter) rationale or beliefs and emotions.
I was answering your question of “You are not one of those people who believe that all of us global warmers are conspirators are you?” You lumped yourself together.
In any case, I merely answered with what I saw as the social and intellectual factors that caused people to embrace the idea that CAGW was a dead certainty when (1) the science was far from settled and (2) they didn’t understand the science anyway.
Given that (1) we have seen this exact same pattern in unrelated issues before and (2) the same people who were dead certain and wrong back then about other issues are dead certain now, it is reasonable to ask what mechanism other than scientific understanding is driving their beliefs.
To my mind, the greatest obstacle to human progress is our over emotional communication: if we can discuss an issue without imputing another’s emotions or rationale or belief system, if we can just stick to “the facts ma’am, just the facts” as Sergeant Friday of Dragnet used to say, then our discussions will yield more fruit.
I agree. I suggest you start by getting your fellow warmers to (1) not label as a flake or sellout every scientist, no matter of what standing or accomplishments, who questions CAGW, and (2) possibly acknowledge that people who have spent their lives in the energy industry and business in general might not in fact be shallow creatures willing to sacrifice the whole of humanity on the altar of short-term profit, but might instead have real, solid experience of the scale of the changes we contemplate and be trying to honestly warn us.
It is impossible to read anything about global warming policy without encountering a tirade by CAGW proponents about how anyone who disagrees with them is stupid, greedy and evil, while only they have the intellect and morality to guide the fate of humanity.
But IMO you are reaching when you say that “CAGW is just another excuse for the non-productive to control the productive” … I am not sure how you reach that broad conclusion.
Because (1) the propensity to believe in CAGW rises in direct proportion to an individual’s hostility to the productive classes, and (2) as noted above, the vast majority of CAGW proponents do not endorse solutions that we know would solve the problem but instead endorse solutions that call for an expansion of government power and a reduction of individual freedom, especially the freedom of the productive to create.
Imagine if we faced the threat of a viral outbreak and we had a vaccine that could prevent the disease, but instead some people wanted to send health inspectors door to door to find the infected and ship them off to quarantine camps. You would reasonably conclude that they viewed the virus as a pretext to exert dominance.
Basically, the people who have always been hostile to the productive and have gone through a long series of justifications for why they should dominate the productive, have (we are supposed to believe by sheer luck) discovered a permanent emergency that will let them do just that.
I would point out that the existence of social or ideological motives to believe in CAGW has nothing to do with whether it is or is not occurring. It merely puts the political debate, and especially the debate over solutions, in a broader social context. It explains why otherwise intelligent people won’t be as skeptical of CAGW as they should be. It explains why we have college professors, movie stars, politicians etc. all creating this enormous groupthink pressure. On some level, they want it to be true. On some level, they think the world would actually be better off long term if it was.
Again, doesn’t affect the actual outcome of the science.
Shannon,
Thanks for returning to form, good post.
I won’t try to convince you that considerations of energy reduction are not trivial or naive, if anyone else wants to reply to you in this vein then I’ll leave that up to them. There might be someone out there with the science to support the idea.
And I won’t bother trying to convince you that I am not one of the naive ones — in your eyes I damned myself too many times, so be it.
And yes, Popper probably would have rolled his eyes at the CAGW theory, but he was a guy who felt most theories were at best conjectures that had never been falsified. On the other hand, even Popper eventually warmed up to natural selection. So as I said earlier, as falsifications are shot down, CAGW acceptance will be strengthened.
Or not. As CAGW “facts” are disproved, CAGW will lose its attraction.
I am not sure, however, if you would support my previous sentence. Because if you believe that the majority of CAGW believers are people whose belief
“rises in direct proportion to [their] hostility to the productive classes and (2) as noted above, the vast majority of CAGW proponents do not endorse solution that we know would solve the problem but instead endorse solution that call for expansion of government power and a reduction of individual freedom, especially the freedom of the productive to create”
then I am not sure you have much faith in the rationality of the many people who are CAGW proponents. You doubt their ability to change their position.
You may be correct Shannon — there may be a link between believing in CAGW and being anti-capitalistic/pro-governmental power. But I know a lot of conservatives, even libertarians, who believe in CAGW, so to me this is the weakest part of your argument.
My point here is that going off into imputations of social philosophy does not help us have a rational discussion.
For me the best part of this conversation is the clarity you have brought to some of the key issues:
1) that the discussion needs to focus on open development of models with formal oversight, procedures, and accountability
2) that we need to move the discussion away from “trivial” [your term] consumption to industrial consumption
3) that we need to focus on how to produce large quantities of clean power, absolutely including nuclear power.
My addition would be that we need to refrain from imputing motives or ideologies as this just gets us all worked up and keeps us from understanding each other.
I sincerely appreciate your participation in this discussion, hopefully it has allowed people from both sides to understand some of the key issues of the debate.
Just… out of sheer curiosity (OK, really to play the Devil’s Advocate here and pull out info, since I’m neither a climatologist nor a computer expert), what is your response to THIS, from Sharon Begley:
Seems her argument is that the emails in the first place contained nothing damning other than the very real and difficult/tedious work of REAL climate scientists, who got quite understandably frustrated with the dumbbunny denialist crowd, and some email correspondence detailing said frustration.
Many of us would sooner not have some emails revealed and aired out in broad daylight, no?
Much the same line of thinking, it seems, showed up on Little Green Footballs (though they no longer allow new registrations for comments), which in turn references an article in New Scientist.
Some excerpts from NS:
Well, good points. NO?
And then we have, from the same article:
Also:
As to the other condemnations of faux outrage (this time, morals of the methodology and “suppressing” dissenting papers):
Said another, in response to the glib notion that some Denialists posted that at least we’ll have lots of CO2 “plant food” in reserve to help with crops, etc:
So it goes, then.
See also Elizabeth May‘s piece on all this; to her undying credit, she has apparently done what the vast majority of the Denialist Crowd has not: read ALL of the damned emails.
Any input, Boyz?
I’m a piker and was able to pull some good retorts in the space of, oh, about 20 minutes of casual Net wandering. Not looking good for the Denialist crowd even with just a simple search.
Said one commenter, at the bottom of the article, this whole allegation of “ClimateGate” is really little more than a Right Wing tabloid “feeding frenzy” for dummies, the scientifically illiterate rednecks of Jesusland, and the moronic.
Regarding the faux moral outrage over the FOI requests, from New Scientist we also have the claim that per the letter of the law, things were followed to a T in regards to the abusive requests (50 in one week) for information.
Wakefield Tolbert
I understand why the emails have drawn so much attention. They represent the human element in the story that most people can understand better than the science. However, the emails are not the scientific story here. The emails can be explained away if a person is willing to give the authors the benefit of the doubt.
What cannot be explained away is the utter train wreck of the computer code, as well as the blatant manipulation of the data revealed by that computer code.
All attempts to make this about the emails are attempts to distract people from the real story. The real story is the computer code and the data files!
Nothing in the several hundred words you wrote addressed any of the real concerns about the code. Nothing you wrote addressed the fact that (1) the software was written by amateur programmers (2) there were no professional standards of design, documentation, oversight and quality testing of the software (3) there was no independent outside review of the software.
Remember, this is the software upon whose output we are supposed to decide to let hundreds of millions of people over the coming decades die from energy starvation. I think we can all agree that the quality of such software, and the degree of oversight of that quality, should be equivalent to that used for banking and military software.
The question that warming alarmists need to ask/answer is why any of this data was ever such a closely guarded secret in the first place. Why did anyone have to file a freedom of information request just to look at it? (Besides the fact that, as CRU admitted, they had actually lost the data.) The files were already on computers. It took the whistleblower just a few minutes to upload all the files. The CRU scientists could have done the same years ago and wouldn’t have had to worry about it since.
Why the secrecy? Why the hysteria over sharing the data about the single most important scientific question of our era? What public good is served by anointing a priest-caste who will read the secret entrails and then tell all the rest of us how to run our lives? Why appoint the
If you really believe the science on CAGW is sound (despite your self-admitted inability to judge that soundness), then you should support making the scientific process around CAGW perfectly transparent. Only people with something to hide require secrecy, especially in science.
(My responses in italics)
Shannon:
“I understand why the emails have drawn so much attention. They represent the human element in the story that most people can understand better than the science. However, the emails are not the scientific story here. The emails can be explained away if a person is willing to give the authors the benefit of the doubt.”
Indeed. But along those lines we also see there is far less to the conspiracy factor than is made out to be. It’s the equivalent of saying the bloat of a grape is on par with a Macy’s balloon. The emails might not be the FULL story, but they DO indicate the human desire to make sure certain standards of behavior were adhered to, along with some small failings that TRULY, as you said, have no real bearing on the science of all this. We merely learned the shocking fact that scientists are human and feel emotion and are not automatons following some preordained script, as the Deniers often argue in conspiratorial mode.
“What cannot be explained away is the utter train wreck of the computer code, as well as the blatant manipulation of the data revealed by that computer code.”
You need to provide some good examples of that, and of why simple addition and correction of temps turned out to be such a train wreck. Apparently, this has NOT come to the attention of RealClimate or Sharon Begley and most others. The emails DID in fact point out WHY you often need to do self-correction when anomalies turn up. I was under the impression that was rather clear. And at that, there’ll need to be some real hum-dingers.
“All attempts to make this about the emails are attempts to distract people from the real story. The real story is the computer code and the data files!
Nothing in the several hundred words you wrote addressed any of the real concerns about the code. Nothing you wrote addressed the fact that (1) the software was written by amateur programmers (2) there were no professional standards of design, documentation, oversight and quality testing of the software (3) there was no independent outside review of the software.”
That’s a rather heavy piece of artillery you’re pointing at someone who is actually a friendly witness to your site (a capitalist money-monger businessman and Ayn Rand fan) and most of your core libertarian/conservative beliefs. So please don’t shoot the messenger who merely did a few clicks of legwork, even if that yielded someone ELSE’s hundreds of words THEY felt important. It merely seems yours truly posted the words. Most of them, however, are from New Scientist. THEY apparently felt the need to explain the full context of the emails, which, while you claim they are not the focus nor the real problem, ARE the main issue for MOST of the conservative blogosphere. It’s the emails and the resultant commentary that are causing all the gnashing of teeth and claims and counterclaims about conspiracy and “cover-ups” and false charges of deletions and data dumps. Not the encoding of the Fortran. One search of Hot Air or Mark Steyn or Michelle Malkin and dozens of others indicates no results of Fortran encoding or any such beast. It’s the emails that drove this, and so it’s the emails the LGF and NS responded to.
“Remember, this is the software upon whose output we are supposed to decide to let hundreds of millions of people over the coming decades die from energy starvation. I think we can all agree that the quality of such software, and the degree of oversight of that quality, should be equivalent to that used for banking and military software.”
If such is so damning, then YES I’d agree with your concern. But yours is one of the few sites on either side to make much noise about coding one way or another. Better have some hum-dinger examples of that, if so much is at stake as you say.
“The question that warming alarmists need to ask/answer is why any of this data was ever such a closely guarded secret in the first place. Why did anyone have to file a freedom of information request just to look at it? (Besides the fact that, as CRU admitted, they had actually lost the data.) The files were already on computers. It took the whistleblower just a few minutes to upload all the files. The CRU scientists could have done the same years ago and wouldn’t have had to worry about it since.
Why the secrecy? Why the hysteria over sharing the data about the single most important scientific question of our era? What public good is served by anointing a priest-caste who will read the secret entrails and then tell all the rest of us how to run our lives? Why appoint the (unfinished?)
If you really believe the science on CAGW is sound (despite your self-admitted inability to judge that soundness), then you should support making the scientific process around CAGW perfectly transparent. Only people with something to hide require secrecy, especially in science.”
You’re right. I’m not a scientist, and don’t play one on TV or the Net. Glenn Beck’s antics with blackboards and ACORN/SEIU frets/conspiracies gets to fill in for that gig.
But my instincts serve me well, and a simple observation of just the seasonal shifts where I live, plus the affirmation of much of the common-sense understanding that NS posted, seems logical. Observation is science also. It can be 100% CODE FREE, if necessary. And remember from above: the CodeGate that is not even the main focus of the Far Right is not even necessary (however flawed you claim it to be) to observe the maps of shrinking glaciers easily available on the Net, along with bird migration shifts, cold-sensitive hummingbirds sticking around all year or migrating back earlier every spring, the dearth of cold weather for years on end where I live (people don’t believe to this day that zero-degree and below temps actually used to occur in Columbia SC in January decades ago, but they DID), or for that matter the handy maps on hurricane intensity of the last several years, also available from NOAA. The poles are getting warmer than the equatorial regions, and this also fits most AGW modeling for the physical principle that cold areas heat faster than areas already warm, etc. I could go on and on. Unless you have your own ski resort equipment, snowmen in the Atlanta area are all but extinct. It used to be something the kiddies looked forward to merely two decades ago.
Thanks for your interesting response and input. I appreciate that more than you know, because the general response of Devil’s Advocate postings is just to proclaim the guests of the Savage Nation and the cockamamie opines of dumbbunny factories like Sarah Palin to be gospel, and to some that’s darn good enough. But the issue of cover-ups and deletions and missing data was answered by LGF and NS quite well, as the data in question can all be found over at NOAA. IS this NOT the case then?
There was no real “secrecy” in the common sense of that word if by that you mean is the data generally available? Also pointed out, 50 requests under the guise of FOI is indeed a form of harassment, no matter the urgency some might feel for revelation in the space of, say, 96 hours or so. Secrecy? No, it’s all there, even if certain select men working on dissertations or publishing at some institutions kept some things close to the chest for a few months or years, etc. As to CRU themselves, it seems only about 5% of the raw data is gone from THEIR location, and they dumped only the parts considered crap. If the crap is that valuable as some claim, my understanding is that NOAA and Tim Lambert over at Deltoid on Science Blogs are more than happy to track it down.
Also, factcheck.org certainly seems to feel the issue is settled and this is all vastly blown out of proportion.
http://littlegreenfootballs.com/weblog/
If such is so damning, then YES I’d agree with your concern. But yours is one of the few sites on either side to make much noise about coding one way or another. Better have some hum-dinger examples of that, if so much is at stake as you say.
See my next post in this series: Scientists Are Not Software Engineers. It contains an explanation as well as links to detailed reviews of the software.
Even more damning, I think, is the fact that no one I can find has come forward to defend this software. Nobody has said it is acceptable at all.
Observation is science also.
No, it is not. Science is the testing of hypotheses by demonstrating that a hypothesis can predict the outcomes of experiments. We had observation long before we had science. It is the scientific method that tells us what we are actually seeing when we observe.
I appreciate that more than you know, because the general response of Devil’s Advocate postings is just to proclaim the guests of the Savage Nation and the cockamamie opines of dumbbunny factories like Sarah Palin to be gospel, and to some that’s darn good enough.
Without endorsing any particular personality, I would point out that the last time we had a similar situation, during the “energy crisis” of the period ’73-’84, the “rightwing extremists” were proved correct. For 11 years it was considered completely and utterly obvious, scientifically proven in fact (they had lots of studies and computer models), that the earth was out of oil and, by extension, all other material resources as well. The public dialog now over global warming is an exact duplicate of the dialog back then. Just as now, leftists (in many cases the same actual individuals as now) explained how rapacious, greedy and short-sighted capitalism had gobbled up the fixed and finite natural resources and how the only possible solution was an expansion of government power to ration the ever-dwindling resources. Just as now, anyone who questioned the concept was stigmatized as either a moron or a sellout. Probably 80% or more of the population was convinced that oil would be in permanently short and ever-shrinking supply. We based all our economic and foreign policy on the idea. The only people who questioned it were a small handful of scientists, a handful of free-market economists, a handful of people in the energy industry, and a whole lot of “rightwing extremists” who basically believed that the entire issue was fabricated by the left as a power grab.
In the end, the energy crisis turned out to be a massive, worldwide shared delusion/hysteria. In 1984, minor changes in US and UK law caused the price of oil to utterly collapse, and it has been in plentiful and increasing supply ever since. The leftists and the conventional wisdom proved dead wrong, and the extremists were proved correct, albeit for the wrong reasons. There never was a conspiracy, just a diffuse cultural susceptibility to any explanation that created a justification for more power for leftists.
So, I would say that based solely on their track record, the “rightwing extremists” are more likely to be correct today than the leftists who so consistently mock them while being very careful not to bring up their own history of buying into a fake crisis.
There was no real “secrecy” in the common sense of that word if by that you mean is the data generally available?
CRU has admitted they lost their original data. Most of the train wreck in the code is a desperate attempt to reverse-calculate the original data from the published data. All the major warmist data is secret in the sense that no one has independent access to it. Given that there is no logistical reason in the internet age not to provide the data, this alone raises questions. I mean, this is the most important data in history. Why would we trust just a few dozen people to review it in detail?
But the issue of cover-ups and deletions and missing data was answered by LGF and NS quite well, as the data in question can all be found over at NOAA.
I have not seen their articles. I was basing my arguments on my own direct examination of the code and that of others. In any case, given the corruption at GISS I don’t know why I should take any of these people’s word for anything. After all, this is the most important data in history. I think I can ask for a much higher standard of transparency than normal.
Also pointed out, 50 requests under the guise of FOI is indeed a form of harassment, no matter the urgency some might feel for revelation in the space of, say, 96 hours or so.
You keep saying that, but you never answer how many of the 50 requests were answered honestly and fully. As near as I can tell, the number is zero, not even partial responses. Complaining that people are burying you in FOI requests after you refused to answer any of them screams dishonesty. Plus, we have the entire issue of why it would take an FOI request to get the information in the first place. What, they couldn’t take 15 minutes and upload the files to one of the many sites that provide file hosting? What was so damn hard about fulfilling the requests?
And again, given that this is the most important data in human history, what is the rationale for refusing to divulge it in full to any random person who asks for it? What is the rationale for not making it generally public?
Tim Lambert over at Deltoid on Science Blogs
I wouldn’t put much stock in Tim Lambert. We’ve locked horns before over the Lancet Iraqi Mortality survey and I was proven correct eventually, ironically by the people who did the original study. Lambert defended their obviously silly conclusions and their dishonest presentation all the way to the ground. I can only assume he is doing the same again.
Lambert is actually a textbook case of someone who will instantly and reflexively subvert his scientific understanding to politics.
I would ask you to answer these questions. Keep in mind that if we overreact to global warming, we could kill hundreds of millions of people over the next few decades by depriving them of the energy they need to survive. Remember, we would be killing people now, long before the predicted disasters materialize.
(1) Given the horrific decisions we have to make based on this data, why shouldn’t the process be utterly transparent? Why shouldn’t we require the highest standards of software project management?
(2) What would convince you the data and the models that use them are wrong? Note, I’m not asking about a social process, I’m asking about physical scientific evidence.
(3) How many people would you conceptually be willing to condemn to death based on this data? Even conceptually, if you had to choose between killing people today to prevent much larger numbers of deaths decades down the road, how many people would you kill based on your assessment of the quality of this research?
Thanks for the link. I’ll keep this one shorter this time and just rest on the “observation” bit (it’s not the complete picture, but it’s a good start) about science, and say it’s reasonable to assume by now that after decades of migration patterns, disease spread when no other vectors are available, flowers blooming earlier every year, and cold-sensitive hummingbirds hanging around, we can make a darn good guess that the climate is warming.
That’s just for starters. The full monty is found over on the NS article I’ve already linked.
Thanks.
Ooops.
Didn’t mean to make that anonymous.
That post was in fact mine, Wakefield Tolbert
Question (1) was answered by the Factcheck link I provided. The context from there will demonstrate why some of the crap data (about 5%) was destroyed. This is also available from NOAA. In sum, nothing is gone. There’s your vaunted transparency. Also provided were the context of the modeling and the analogy of using thermometers in various climes, with the obvious need to correct the data, etc.
Question (2) was answered by Factcheck and NS. The issue of the coding was already answered. The corrections were explained as necessary and rather common and mundane. We all do this kind of correcting and compensating, and as explained by NS, not doing so is the worst thing you can do. If it is discovered that measuring devices were too close to heat islands, or to areas of draft or moisture that pulled temperatures down due to evaporation, for example, you’d have to “rightsize” the data.
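The kind of correction being described might look like this toy sketch (my own illustration, NOT the actual CRU or NOAA homogenization procedure, and with made-up numbers): a station whose readings jump after a known disturbance, say the sensor ends up near a heat source, is re-aligned so its offset from the mean of neighboring reference stations matches its pre-disturbance offset.

```python
# Toy station-adjustment sketch (hypothetical data and method).
# A step change after a known disturbance is removed by comparing
# the station's offset from its neighbors before and after the break.

def adjust_for_break(station, neighbors_mean, break_index):
    """Remove the step change in `station` readings after `break_index`,
    using `neighbors_mean` as the reference series."""
    pre = [s - n for s, n in zip(station[:break_index], neighbors_mean[:break_index])]
    post = [s - n for s, n in zip(station[break_index:], neighbors_mean[break_index:])]
    bias = sum(post) / len(post) - sum(pre) / len(pre)   # size of the spurious jump
    return station[:break_index] + [t - bias for t in station[break_index:]]

neighbors = [10.0, 10.2, 10.1, 10.3, 10.2, 10.4]  # mean of nearby stations, deg C
raw = [10.1, 10.3, 10.2, 12.3, 12.2, 12.4]        # +2 deg C jump at index 3
adjusted = adjust_for_break(raw, neighbors, break_index=3)
```

Of course, the same arithmetic done in the wrong direction, or with a cherry-picked reference series, is exactly how such corrections can go wrong, which is why the argument for publishing the code cuts both ways.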
I’ll be convinced I’m in the wrong–and indeed all the AGW community who piddles at a higher level than I can with all this–when leading scientists not financed by Exxon or under the tutelage of CEI affirm in mass numbers that manipulation was nefarious.
Question (3) is a theoretical that assumes horrific, dystopian Orwellian nightmares of a police state GONE ABSOLUTELY APE CRAP with utterly no accountability in an advanced industrial or post industrial society where the end consumer is constantly nitpicking the politicians of all stripes. The system is not perfect but, borrowing from Winston Churchill, it is the least bad of all the bad options in a Murphy’s Law reality.
This bureaucratic overload stuff from the Far Right has been debunked by Steve Kangas among others, who pointed out, as have many since his death, that it is the socialistic, bureaucratic, supposed nanny-state Europeans who, while imperfect and given to acting like spoiled teens, have surpassed the US in GDP percentage increase even in frowsy economic times, and provide more for the average citizenry than we do or can. Their stats under “socialism” and nanny-statism whip ours (we rank low on the totem, brother) in almost all areas. Less crime, better working conditions, better retirement plans, better management of resources, less major scandal at all government levels generally, fewer premature births, lower infant mortality, higher potential from educational opportunity. Better health care, better stats on well-being and longevity. All this with higher taxes and a supposed crushing and interfering overlord bureaucracy. Hmmmmm.
Ever convert dollars to Euros? It’s, well… a very humbling experience for an American.
Seeing that CapnTrade and these other horrid draconian measures to put us under heel are an extrapolation of how they’ve done things in Europe for some time now (in some ways, the nanny-state governance has gotten well into personal issues!), they seem somewhat less frightful than at first glance.
But for now, I think it’s safe to assume that a warmer climate with gutloads of carbon is, pace the glibertarians, damaging to the ecosystem, and certainly more horrid than any of the draconian measures that even right wingers agree, to some degree, are requisite to at LAST get us AWAY from the continued financing of the House of Saud.
That last part in and of itself is something to cheer about and do fistpumps in the air.
The old refrain that we’ve already mucked things up by introducing invasive animal species and even pets is not solid enough, as has already been explained (this is an issue of scale, as the NS commentator mentioned), as has the notion of extra CO2 being beneficial, etc.
The Glibertarians in fact might be technically correct about sustaining things we should not be doing NOW, such as population increases that will take us disastrously to about 9 billion in two decades or so, at which point we’d welcome forced culling (speaking of horrors far beyond anything proposed under the gentle push of CapnTrade or other regulations and supposed socialist albeit moderate measures, as SciAm pointed out recently) before being forced to eat our daily algae slurry for breakfast. ONLY.
So I could turn around and ask which horror is the most horrible.
Wakefield Tolbert,
Okay, to start with, I did read both the Newsweek and the New Scientist articles and neither addresses any of the technical issues I and others have raised. In particular, they don’t answer my specific objections about the slipshod and amateurish way that the most important computer software in the world was created and maintained. Neither article is actually a technical analysis at all, just repeated arguments from authority that say “trust us.”
They answer absolutely nothing.
Question (1) answered by the Factcheck link I provided…There’s your vaunted transparency.
Well, no, not unless I can go online and download both the raw data and the code, as well as a log of all corrections applied to the data. I repeat: given the seriousness of the decisions we are being asked to make, why shouldn’t we make the process completely transparent?
You haven’t answered the question.
(2) was answered by Factcheck and NS. The issue of the coding was already answered. The corrections were explained as necessary and rather common and mundane.
As noted above, they merely say that the corrections are mundane, but they don’t print the code and show why it is mundane. I have personally looked at one section of code that applies different corrections based solely on the date the observation was taken. It “corrects” temperatures in the past to be cooler and more recent temperatures to be warmer. As a result, even if you ran the same fixed temperature for every year through the code, it would produce a warming trend. Nothing in the documentation justifies this “correction” other than as an attempt to force the data to agree with another data set.
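To illustrate why that pattern is suspect, here is a minimal, purely hypothetical sketch (not the actual station-adjustment code; the function and parameter names are my own invention) of a correction keyed only to the observation date. Feed the same constant reading through it for every year and you still get a warming trend:

```python
# Hypothetical illustration: an adjustment that depends only on the
# observation date manufactures a trend even from perfectly flat data.

def date_based_correction(year, pivot=1970, rate=0.01):
    # Cool the past, warm the present: +/- 0.01 deg C per year from a pivot.
    return (year - pivot) * rate

def adjust(raw_temp, year):
    return raw_temp + date_based_correction(year)

# The same constant 15.0 deg C reading for every year, 1950-2000:
series = [adjust(15.0, y) for y in range(1950, 2001)]

# The "adjusted" record now shows 0.5 deg C of warming from constant input.
trend = series[-1] - series[0]
```

The point is not that any given adjustment of this shape is fraudulent, only that a date-only correction cannot be distinguished from a manufactured trend without documentation justifying it.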
I see the skeptics making detailed technical arguments and the alarmists making hand-waving, “trust us, this is all normal” counterarguments. Given the seriousness of the issue, I think it reasonable to insist on the highest standard.
I’ll be convinced I’m in the wrong–and indeed all the AGW community who piddles at a higher level than I can with all this–when leading scientists not financed by Exxon or under the tutelage of CEI affirm in mass numbers that manipulation was nefarious
So, you’re saying that you believe in CAGW because of who says it is happening? That is not science. Science works regardless of the personal biases of scientists. Free-world and Communist physicists all arrived at the same scientific conclusions.
This is the exact dynamic at play circa 1980 during the energy crisis. Back then I would have been arguing that we had plenty of oil worldwide and that the crisis was purely political. I would have pointed out some scientists and experts in the petroleum industry who would explain things. You would respond that all the real scientists understood we were out of oil everywhere and that anyone who said otherwise was just a pawn of the oil companies.
I believe we are undergoing the exact same dynamic now as then. I think that people like you with no technical understanding of the issues choose to believe in the crisis du jour because authority figures that you ego-identify with tell you the crisis is real.
Anyone who says you should not trust technical work because the person who did it belongs to some despised out-group is trying to manipulate you.
More importantly, do you really want to start a discussion about motives? Why would money from the oil industry distort science any less than money from the government? Can we trust climatologists when tens of millions in grants, their personal reputations and careers, and the status of the entire field rest on CAGW occurring?
So either you did not answer the question or you can’t think of any scientific evidence that would convince you otherwise. Instead, you will decide it is wrong only when authority figures that share your prejudices tell you it is wrong.
Question (3) is a theoretical that assumes horrific, dystopian Orwellian nightmares of a police state GONE ABSOLUTELY APE CRAP with utterly no accountability in an advanced industrial or post industrial society where the end consumer is constantly nitpicking the politicians of all stripes.
Actually, no, my question has nothing to do with politics of any kind. If we respond aggressively now to head off global warming, we will kill hundreds of millions of people over the coming decade by starving them of life-saving energy. It doesn’t matter what social, economic or political mechanism we use, we will still kill them. Even if we waved a magic wand and converted every country for all time into a liberal democracy with a free market.
Energy is life. The industrial revolution was an energy revolution, and all the benefits we take for granted come from using more and more energy. People who are poor lack energy, not money. To raise their standard of living, we have to make more energy available to them. That is physics, not politics.
To head off CAGW decades down the road, we have to restrict our use of energy with the technology we have here and now. That means doing things like producing less cement (very energy-intensive and done exclusively with fossil fuels). That, however, will mean less concrete to build water, sewage and flood control systems in places that desperately need them. That means people dying from contaminated water, sewage-borne illnesses and floods. And that is just one example; every single life-giving technology we use requires energy to be created and used.
Now, if the CAGW models are correct, then we must make such sacrifices for the sake of our long-term survival. The question I put to you is simply: are you confident enough in this data to bet people’s lives on it? Remember, you have to kill people today regardless of whether it turns out to be right or wrong in the distant future. If CAGW proves correct, then you have saved lives net in a grim lifeboat-dilemma calculus. However, if CAGW proves incorrect, or some future technology easily corrects the problem, then you have killed people for nothing.
So, please answer the question: how many lives are you willing to sacrifice based on this data?
So I could turn around and ask which horror is the most horrible.
Did you ever see that movie where a ship gets torpedoed and the survivors crowd into one lifeboat? The officer believes that owing to the war they won’t be rescued, so they must row to the coast. He calculates how much food and water they have and comes to the grim conclusion that they can’t all make it. To have a hope of saving some of the passengers, he has to push others overboard.
That’s pretty much the position you find yourself in. How confident are you in your calculations and your assessment of the technological evolution over the next 100 years?
In the movie, a rescue ship shows up right after he drowns the last surplus passenger.
A needless horror is the most horrible. Killing people now to prevent an illusory horror down the road is the worst thing I can imagine: killing a few to save the many, and then finding out you didn’t have to kill anyone at all.
I would remind you that people have been predicting that we can’t “sustain” ourselves since at least Malthus, and every single one of the doomsayers without exception has been proven wrong. Why should we even suspect they are suddenly correct now? After all, the same social/political segment was wrong about the energy crisis, resource depletion, the population bomb, etc., so why should they suddenly be correct now?
You should reflect that back in the 60s-70s you personally would have believed just as passionately in the utter certainty of all those predictions, just as you have perfect faith in CAGW today. Doesn’t that suggest to you that something might be wrong with your intellectual process in these matters?
OK. I know I’m late on the draw here, but still.
A few points, in the order that my extreme tiredness can recall while juggling 100 other things this week:
Okay, to start with, I did read both the Newsweek and the New Scientist articles and neither addresses any of the technical issues I and others have raised. In particular, they don’t answer my specific objections about the slipshod and amateurish way that the most important computer software in the world was created and maintained. Neither article is actually a technical analysis at all, just repeated arguments from authority that say “trust us.”
They answer absolutely nothing.
Question (1) answered by the Factcheck link I provided…There’s your vaunted transparency.
Well, no, not unless I can go online and download both the raw data and the code, as well as a log of all corrections applied to the data. I repeat: given the seriousness of the decisions we are being asked to make, why shouldn’t we make the process completely transparent?
You haven’t answered the question.
I do believe the articles answer the Far Right Wing’s absurdist claim of these guys serving as Soros’s sockpuppets, if nothing else. The data is available online.
I realize that the link from FactCheck, Begley over at Newsweek, and the others have not answered the issue of CODING per se, which seems to be your focus now, unlike most other Doubting Thomas type sites out there at the moment. What was also detailed in those stories is that the emails revealed only nasty habits, not conspiracy. They also gave the lie to the notion that large reams of data were flushed down the Great Water Closet and are unavailable. Also untrue. So on that note alone 98% of the Right Wing blogosphere is hunting down a phantom. Yours is the only site that deals with this code issue; the climate scientists ARE also reviewing these materials, including the computer programs, and while some minor blips have shown up, the consensus among the climatologists (as the articles also pointed out) is that nothing is out of order.

As to the issue of science in general being one of testing and not observation, you’re technically correct of course on the methodology. But observation must come first, and even if we had flushed all the data of decades down the commode, we would still have the observation of polar ice caps melting, glaciers retreating, hummingbird migration pattern shifts, crop failures even in erstwhile favorable zones, and dozens of other anomalies that have hit the earth like a bolt out of the blue. Coding or not, that needs to be answered by the Skeptics as well. It has not been so far. Coding faux pas or not, none of that ongoing fret is necessary to see animal migration shifts and polar caps melting. Any input on THAT?
(2) was answered by Factcheck and NS. The issue of the coding was already answered. The corrections were explained as necessary and rather common and mundane.
As noted above, they merely say that the corrections are mundane, but they don’t print the code and show why it is mundane. I have personally looked at one section of code that applies different corrections based solely on the date the observation was taken. It “corrects” temperatures in the past to be cooler and more recent temperatures to be warmer. As a result, even if you ran the same fixed temperature for every year through the code, it would produce a warming trend. Nothing in the documentation justifies this “correction” other than as an attempt to force the data to agree with another data set.
Fair enough. If that’s truly the case, then you’d raise some eyebrows over at Newsweek and New Scientist. They don’t seem aware, and they need to hear from YOU, not guys like me.
But what else am I supposed to do generally but go to the climate scientists? You ask me to piddle with the code, albeit we both freely admit I’m not into this kind of gig, and yet this would be the equivalent of handcuffing my arms and legs and asking me to do cartwheels and expert gym flips on the sawhorse. Instead, I move to go with the consensus among the scientists who actually do the encoding, their testimony, mind you, and YET this is STILL not good enough due to some Crichton-sounding conspiratorial mindset that says they must be compromised due to working primarily for government agencies of one description or another and on the public dime. But what else am I to do but defer? Would I trust mine or (no offense) your judgment on the expert preparation of Hollandaise sauce if we’re not professional chefs?

And are the scientists in this field really all that more compromised than Richard Lindzen and Patrick Michaels and even some non-scientists like “Lord” Monckton when chiming in on this issue? Are the capabilities of others who work on the public dime, or with partial or mostly public funding, part of some mass, humanity-killing spree with CapnTrade notions dancing in their heads due to socialist overlords like George Soros? How then are they SUPPOSED to be funded? Donations only? This is all so surrealistic and conspiracy-sounding. I don’t ask the climate scientists, however much I might like to keep more tax money, about hair styling methodology. And by the same token I think my wife would leave the issue of climate to the experts and not her friend Darlene down at SuperCuts.

Michael Crichton liked to use handy aphorisms like “consensus is not science, and science is not consensus.” True as far as the wording goes, but like most aphorisms, such as “location, location, location,” this too leaves out vast swaths of context. The AGW climate workers across the PLANET have for the most part all arrived at the same consensus.
That’s when consensus is important and DOES very well mean something. Whom then shall I ask otherwise?
I see the skeptics making detailed technical arguments and the alarmists making hand-waving, “trust us, this is all normal” counterarguments. Given the seriousness of the issue, I think it reasonable to insist on the highest standard.
I’ll be convinced I’m in the wrong–and indeed all the AGW community who piddles at a higher level than I can with all this–when leading scientists not financed by Exxon or under the tutelage of CEI affirm in mass numbers that manipulation was nefarious
So, you’re saying that you believe in CAGW because of who says it is happening? That is not science. Science works regardless of the personal biases of scientists. Free-world and Communist physicists all arrived at the same scientific conclusions.
That’s right. Now we can get around to figuring out why you think the public dime makes about 99.9 percent of all climate scientists compromised. See above again. The WHO of this CAN be important. Skeptics always make detailed arguments, on ANY issue at hand. That’s not a problem in and of itself, of course; it gets some stones turned in time and is healthy for the overall situation. But once we’re past the point of treating the mundane corrections as some kind of conspiracy, we’re getting into annoyance. See Steve Hoofnagle’s site on this one over at Denialism.com. I’m looking not for potshots from Skeptic Zoners and non-scientists, but for vast throngs of climate scientists suddenly saying WHOA HORSEY!
But they’ve not done that.
This would stand in firm contrast to the (usual) addition of the opinions of non-scientists from the Skeptics, which is far more common. Thus for example we had that laughable, notorious, mythical “statement” (absurdly signable online at that) of a claimed 17,000 “scientists” (!??) who opposed the notion of AGW. It turned out that most of these people were NOT climate researchers with any applicable degree or expertise, and the term “scientist” included everyone from homemakers to actors to phony names like Mickey Mouse, and at best some doctors of veterinary medicine. Well-meaning, I’m sure, and no doubt some of them could punch holes in the Fortran code, but this is not the same as canvassing the honest opinions of real climate researchers who work in the industry day in, day out. So it goes with most AGW denialists and detractors and professional Skeptics.
Also, one last thing on the coding issue. Who put the code in, and who developed it? I would agree they need to be looked at. But if we find that the issue here is (as is my understanding) not that complicated compared to other kinds of encoding, then the argument is finished.
This is the exact dynamic at play circa 1980 during the energy crisis. Back then I would have been arguing that we had plenty of oil worldwide and that the crisis was purely political. I would have pointed out some scientists and experts in the petroleum industry who would explain things. You would respond that all the real scientists understood we were out of oil everywhere and that anyone who said otherwise was just a pawn of the oil companies.
I believe we are undergoing the exact same dynamic now as then. I think that people like you with no technical understanding of the issues choose to believe in the crisis du jour because authority figures that you ego-identify with tell you the crisis is real.
Anyone who says you should not trust technical work because the person who did it belongs to some despised out-group is trying to manipulate you.
More importantly, do you really want to start a discussion about motives? Why would money from the oil industry distort science any less than money from the government? Can we trust climatologists when tens of millions in grants, their personal reputations and careers, and the status of the entire field rest on CAGW occurring?
See above. Why is private industry any more or less sacrosanct than government work? Can or should I choose on that basis, or should I have no choice to but to delegate that issue to those who actually work in the field? How and why are you making the distinction? The oil barons and coal burners have motives as well, and that’s why it was the force of law that gave us Cap1, regarding sulfur emissions, though most people forget.
So either you did not answer the question or you can’t think of any scientific evidence that would convince you otherwise. Instead, you will decide it is wrong only when authority figures that share your prejudices tell you it is wrong.
Well, again, that depends on just what authority is saying what. Where else am I to go but to the majority of climate researchers, who you seem to think are under the funding gun and so can’t feel “free” to be honest about their findings? You’ve handcuffed me and wish to see flips. See much of the above again regarding consensus and government work. If I need a public defender, I don’t generally have a valid (certainly not provable) reason to doubt the integrity of his or her work due to the fact that the taxpayer undergirds his or her salary. I don’t detect any nefarious underworld interest or even raw self-interest. Funding would continue regardless of their findings; it would simply be chunked into a new direction, toward something else or something more particular.
Actually, no, my question has nothing to do with politics of any kind. If we respond aggressively now to head off global warming, we will kill hundreds of millions of people over the coming decade by starving them of life-saving energy. It doesn’t matter what social, economic or political mechanism we use, we will still kill them. Even if we waved a magic wand and converted every country for all time into a liberal democracy with a free market.
Most unlikely.
Energy is life. The industrial revolution was an energy revolution, and all the benefits we take for granted come from using more and more energy. People who are poor lack energy, not money. To raise their standard of living, we have to make more energy available to them. That is physics, not politics.
To head off CAGW decades down the road, we have to restrict our use of energy with the technology we have here and now. That means doing things like producing less cement (very energy-intensive and done exclusively with fossil fuels). That, however, will mean less concrete to build water, sewage and flood control systems in places that desperately need them. That means people dying from contaminated water, sewage-borne illnesses and floods. And that is just one example; every single life-giving technology we use requires energy to be created and used.
The current proposals on the table, even at worst, have nothing to do with restricting industry or energy or anything of that kind. Just perhaps the methodology in the LONG haul.
Now, if the CAGW models are correct, then we must make such sacrifices for the sake of our long-term survival. The question I put to you is simply: are you confident enough in this data to bet people’s lives on it? Remember, you have to kill people today regardless of whether it turns out to be right or wrong in the distant future. If CAGW proves correct, then you have saved lives net in a grim lifeboat-dilemma calculus. However, if CAGW proves incorrect, or some future technology easily corrects the problem, then you have killed people for nothing.
So, please answer the question: how many lives are you willing to sacrifice based on this data?
I’m confident of the data as currently presented by experts in this field; thousands of scientists who work in this area are in general agreement. As to killing millions and whom to sacrifice, as with the great faux moral outrage over the faux DDT Caper of the early 1970s, this too is hyperbole at most. It’s unanswerable. It’s like P.J. O’Rourke’s old sardonic quip: “Would you kill your grandmother to pave I-95?”
So I could turn around and ask which horror is the most horrible.
Did you ever see that movie where a ship gets torpedoed and the survivors crowd into one lifeboat? The officer believes that owing to the war they won’t be rescued, so they must row to the coast. He calculates how much food and water they have and comes to the grim conclusion that they can’t all make it. To have a hope of saving some of the passengers, he has to push others overboard.
I’ve seen something similar with a similar theme. But mankind does not generally live in such a metaphysical nightmare, and we can’t generally figure things on such a scale in the manner you posed it. But, by contrast, we DO know we need to take unpleasant actions that can limit and curtail freedoms. Fortunately this unpleasantry will not involve killing half the Third World through energy starvation. That is hyperbole. Europe has already performed most of these rather modest Cap-n-Trade styled actions, based on previous case law and previously tested, morally adequate notions, and she still exists without all the theoretical fallout of civilizational destruction, even in the interim. They just made things a little pricier at most, and yet in Germany I can have heart surgery without being in hock to the doctor for the rest of my life.
That’s pretty much the position you find yourself in. How confident are you in your calculations and your assessment of the technological evolution over the next 100 years?
In the movie, a rescue ship shows up right after he drowns the last surplus passenger.
A needless horror is the most horrible. Killing people now to prevent an illusory horror down the road is the worst thing I can imagine: killing a few to save the many, and then finding out you didn’t have to kill anyone at all.
I would remind you that people have been predicting that we can’t “sustain” ourselves since at least Malthus, and every single one of the doomsayers without exception has been proven wrong. Why should we even suspect they are suddenly correct now? After all, the same social/political segment was wrong about the energy crisis, resource depletion, the population bomb, etc., so why should they suddenly be correct now?
You should reflect that back in the 60s-70s you personally would have believed just as passionately in the utter certainty of all those predictions, just as you have perfect faith in CAGW today. Doesn’t that suggest to you that something might be wrong with your intellectual process in these matters?
Whew. OK. I’m not sure how to even begin here, because this assumes more than I can attend to at the moment, and quite a bit of hyperbole regarding the notion that governments around the world are going to halt industrial-scale production and stop people from building things like roads, infrastructure, water treatment plants, and new improvements in agriculture and medicine. Let me give this a shot, since I’m old enough to remember those glossy Weekly Readers in school that, along with Newsweek and a grand total of 5 other outlets, supposedly bespoke horrid predictions of doom: the banning of DDT, coming ice ages, and so on. More on that momentarily.
But first, you’re offering for the viewing audience here the allegation of a scale of destruction that would outpace that of Stalin and Mao, and at this point the claim has no more credibility than the far leftists who get confused with the mainstream scientists, who merely are offering a very modest set of rules like Cap-n-Trade to create incentives to make improvements in usage and infrastructure. I’ve seen the claims of both extreme sides: those who want us to live like Laura Ingalls to save Gaia, the Earth Goddess, and then those who think we can continue to belch carbon into the air unabated and suffer no ill effects. Not even Dixy Lee Ray thought this; in fact, regarding the mainstream Greens who warned about some things sans hyperbole, she wrote that she felt it was indeed time to curb the excesses of a “throwaway” society in her anti-green book called “Trashing the Planet.” Similarly, even Reason.com’s Ronald Bailey now understands the AGW issue is in need of addressing. Too, we have the words of Thomas DiLorenzo, a free-market heralder and trumpeter if ever one walked the earth, who told us we don’t need to be “Pollyannaish” about all this or “continue this grand chemistry lab experiment” with earth’s atmosphere and CO2, and that we certainly need to pay attention to what the science is saying and make changes where necessary.

And this goes to the crux of the matter where some libertarians fall down on the job. All our rights are actually exquisitely dependent on the full context of things. We have the right to own guns and dispose of garbage, for example, with actually some wide girth in freedoms, at least as Americans. But we can’t fire off a .30-06 in the backyard in most residential zoning districts, and most modern neighborhoods “limit” the personal freedoms of their residents with CC&Rs (restrictive covenants and various ordinances) like fence rules and home decor standards to ensure property value and address pragmatic issues like safety.
Steve Kangas makes this point very clearly in a series of essays about what true “freedom” in both personal lives and business realms requires of us. Along with freedoms come responsibilities. That’s all Cap-n-Trade really is, and we’ve got some good, NON-Draconian case law and case studies on the books to indicate that not only is this rather modest body of rules not overweening state power, it follows established rules that go back to the English Kings’ rules on private property, and it works as advertised.

Unbeknownst to most people is that in those horrid days of the 1970s, when Greenies were supposedly running amuck with all manner of horror on industry, the air was getting cleaned up. HOW? Via the wonderful magic of free enterprise? No. Government power. Sulfur was “capped and traded” much the way carbon is in Europe at the moment. Cap-n-Trade 1 (sulfur) worked well and limited sulfur emissions to the point where incentives were found either to switch to other coal varieties or to introduce scrubber technologies. Today the air in America is some of the best on the planet. The market did not do this. Force did. Force from the radical wacko enviro-nuts armed with studies and computer modeling of acid rain’s dissolving effects on concrete and even foliage in the Alps and the Rockies. Imperfect data, I might add, and some that the snipe artists on the Right had fun poking at, though nevertheless the consensus is still that this move was requisite to spare human health. And the costs of a few bucks a person for the temporary pollution allowances, which some cynics on Right and Left thought was whoredom, created the incentives. Did the costs get passed on? Indeed they did. A few bucks a month, which is the upper-limit estimate of the most horrid predictions for Cap-n-Trade (carbon, this time), was certainly cheaper than the social health costs of asthma and the infrastructure repair of bummed-out monuments and concrete and road erosion.
Unknown to most is that Europe has had great success under Kyoto (think of it as Copenhagen I), long before all the faux moral outrage and death stories and bank-busting costs claimed for Copenhagen II. Europe is still there, the taxpayers of her nations are not busted, the social programs and health benefits are all still intact even if some horror stories on MRI wait times seem to be half-truths, and compared to the frowsy performance of US industry, hers are going gangbusters. The streets are clean, the people are healthy, and there is far less carbon belch over there than here. We got the sulfur; now it is on to carbon.
As to who was right from the Right in the 1970s, I’m not sure what you’re referring to. The Right as well as the Left had its share of loonies with the accompanying loony predictions. Thus, for example, we DID have that apocryphal Newsweek article, which did not share the opinions of any real climatologist (just as right-wing outlets do not today in AGW denialism), purporting to show a new ice age. Yes, the NEA had some odd priorities in those days and did hand out, as Reason puts it, a lot of garbage Green stuff. The NEA truly became the National Everything Association and went beyond science into Leftie propaganda. True, and regrettable. But, as was also pointed out in THOSE days (I remember the asterisk at the bottom of the page on this one in the Weekly Reader), it was affirmed that, THEN AS NOW, MOST climatologists thought the world was getting warmer despite data suggesting a temporary downturn, or despite people confusing weather with climate, another common error, as when someone laughs about snow at global warming conferences in cold-weather climes like Norway.
(See also Steve Hoofnagle’s Denialism.com site. He has the background on who really said what and who should be taken with a grain of salt or two.)
Paul Ehrlich was around with his dumb antics in those days, as were the people complaining that one pound of beef required enough water and feed for the cow to float a destroyer or feed 10,000 people, etc. Cranks come and go, but the settled science of AGW was around then as now. Then, as now, liberal mainstream scientists would have been happy to tell you Ehrlich was a crank and a hysterical buffoon who’d never heard of Norman Borlaug. But overpopulation, per Borlaug, while not leading to one billion deaths within 10 years of the publishing of The Population Bomb, WAS serious, DID have deleterious effects, and will again. As will Ehrlich’s emphasis on raw materials and oil, and so forth. The truth was somewhere in between, and was generally acknowledged as such at the time, from what I can remember and backtrack to in articles from that era. You had some who went overboard. You always will. Like Ehrlich, Rachel Carson, Margaret Sanger, and some others who were refuted even if some of their crap unfortunately did get into the minds of kiddies in the public schools.
It is also a myth that professionals for the most part told us oil would run out quickly. I remember no such grave warnings, other than from moron politicians like Jimmy Carter and some in his cabinet making this into a political issue by hawking windmills and chicken poop, and yes, we have those types today as well. But so what? Oil executives and geologists and people with a vested interest in all this made the discovery even in those days, one that soured many Green faces, that oil would in all probability just get more expensive before it runs out, and that the real problem is that there’d be too damn much of it for political comfort and that the House of Saud would continue to be the primary source once the embargo got worked out.
But the Right had its foibles as well. One of the most famous mythologies, which ties into your claim that millions have died or might or will die due to radical nutcase Greenie legislation of a Draconian nature, was that due to Silent Spring and cranks like Ruckelshaus, DDT was “banned” worldwide. It was not. It was banned for MOST applications in the US and most of Europe, but basically given free rein in the Third World. And it is used today in some applications, though with more caution, as advised by Carson. (She did NOT advocate its complete ban, contrary to mythmaking on the Right.) So millions have not died from lack of DDT and the resurgence of malaria. What WAS done, however, was to prevent resistant strains of mosquitoes from getting the better of that chemical, which is a common problem with pesticides and was with DDT as well. An American icon of a bird was spared and is making a slow comeback, and tens of thousands of Americans were likewise spared tumors. So even if used on a larger scale, DDT’s effectiveness would be in question by now anyhow due to mosquito resistance, though Michael Fumento does claim it still repels mosquitoes if you coat yourself and your surroundings, etc.
Then of course, as with carbon capping now, we were told in lugubrious tomes that sulfur trading was a cynical Green ploy to destroy business, as would be the banning of CFCs (which also worked well and did not bring us to a halt), and that we’d all suffer from lack of freedom and consumer choice in aerosols and from problems finding refrigerants for cars, and that your average power bill would be 1000 bucks a month even if the power company stayed in business, and that little ma-and-pa shops would knuckle under due to the tax burden. All right-wing scares that proved false.
What was the Right right about back then? Not much, other than the predictable fact that the Greens had among them ideological and quasi-religious intonations, just as the Right Wingers did at the Holy Altar of Ayn Rand, who felt it a damnation of your freedom if you can’t burn tires in your backyard at whim.
Poor people do indeed need energy, and industry as the outgrowth of that availability. But Copenhagen does not nix that possibility, just as sulfur trading deprived us of nothing either. It made things cost a wee bit more in the interim transition period, and likewise carbon trading will make us pay somewhat higher fees and, yes, save some freakish and perhaps unneeded birds’ migration routes, but it will also force us to find alternative paths to energy. The Third World is truly not going to stand by and allow your dystopian mode of thinking even IF the Copenhageners, under the lordship and tutelage of George Soros stenographers, did have this in mind. No way. What WILL happen, unlike the lifeboat scenario, is that costs will shift to those of us more able to afford this transition (is that unfair???) and provide alternatives to China and India and some others bringing on line nothing but coal plants at the rate (last I checked) of about 10 per month. Even for those doubters that linger, 10 new power plants a month belching carbon and soot is not acceptable. What else do you propose then???? Nuclear is a fine, carbon-free notion but is almost prohibitively expensive without the right incentives, both private and public. Cap-n-Trade provides this. That’s modest, not Draconian and boot-stomping.
Also, as to those predictions from time immemorial? True. But so what? Most of those were prescientific and came in an age when religious dogma was taken as literal holy writ, based on emotion and scare tactics more than any real evidence, long before the age of discovery unlocked the secrets of things like modern mining, metallurgy, and modern agriculture. It was a time when you were held basically ransom to the whims of gods and natural rhythms. And keep in mind Malthus was not disproved; it’s just that modern technology gave us a temporary call from the governor, if you will. A reprieve of sorts, in other words. His charts merely applied to human society at large what was and is still known about the life cycle of all biological organisms and systems, though admittedly his scale was shorter-term for the final curtain call.
Here’s another dandy:
A review of the emails and data HAS taken place. It’s as I suspected.
http://www.msnbc.msn.com/id/34392959/ns/us_news-environment/
Wakefield Tolbert,
I do believe the articles answer the Far Right Wing’s absurdist claim of these guys serving as Soros’s sockpuppets, if nothing else. The data is available online.
(1) None of the links you provided lead to the data in question. Indeed, you didn’t link to the FactCheck.org article itself. If you meant this article, I would point out it likewise contains no links to the data in question but merely points to emails by alarmist scientists saying that they have published the data, and FactCheck does nothing to confirm that. Indeed, FactCheck simply accepts the word of the alarmists as true and makes no attempt to independently confirm it.
For example, FactCheck points out that processing raw data, i.e. “correcting” it, is common, without explaining that such corrections are forbidden unless they follow strict, predetermined rules, which many of the alarmists clearly did not do. Neither do they point out that such processing is the single biggest source of scientific error. Neither do they point out that all the evidence for the warming comes from the corrections and that the raw data does not show any obvious warming trend at all. They make no attempt to demonstrate that the corrections did not in fact introduce a great deal of error.
However, I have seen quotes where CRU and GISS members tell other researchers that in order to get the raw weather station data, the other researchers must contact each individual country in the world and request the records from the source. Obviously, that is a multi-year job. CRU has admitted they lost some or all of some important datasets. Even if they lost just 5%, that could be critical depending on what that data describes. It only takes one wrong bit to crash a computer.
If you have links to the actual complete data, e.g. weather station data, tree ring data, ice core data, satellite data, you should post them.
In passing, I would point out that yet again, you offer as proof an article which simply parses the language of the emails. You still haven’t provided links to any serious independent examination of either the data, the methodology or the software. You haven’t provided links that even hint at the corrections logs or the software.
You also seem to forget that I was educated as a biologist, that I am a programmer and that I have personally reviewed a lot of the CRU code and found it to be garbage. I don’t have to understand climatology to see the glaring software mistakes, the clumsy hacks and patches and the lack of proper software development procedure. Since I know, personally, that the CRU software will produce accurate outputs only by mystical intervention, I know personally that the CRU data is highly suspect.
That makes everyone who is defending the output of the software highly suspect as well. This makes me wonder (1) whether their software is just as bad and (2) how much oversight and double-checking they’ve actually done on their peers. Clearly, nothing you’ve offered has done anything to dispel these doubts. Once I’ve seen with my own eyes very strong evidence that calls into question the entire field’s competence and integrity, why do you think I will accept their verbal assurances that nothing is wrong?
This is like me looking at my suddenly empty bank account and accepting my accountant’s word that he’s done everything right. The rhetoric and the reality do not line up.
No, I’m telling you how many people we will have to kill if we take global warming seriously.
Here’s the thing: the “modest” improvements won’t do squat to prevent CAGW based on the IPCC models. If you believe that the IPCC models are valid, then you must be willing to undertake drastic reductions in CO2 output. There are only two metrics that describe how effective any CO2 reduction will be: (1) the peak temperature that will be reached and (2) the date in the future at which that peak temperature is reached. All these “modest” changes such as Kyoto only shift the peak temperature a few months ahead and/or reduce it by a few tenths of a degree.
At Copenhagen this week the developed nations pretended to commit to a 20% reduction in CO2 emissions by 2020. The US gets 87% of its total energy from Carbon Emitting Energy Sources (CEES). To hit that goal we will have to restrict our energy consumption by at least 15%. Reduced energy consumption means reduced economic output. People are going to lose jobs and their futures. If we extend that to the developing world, people will die.
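A quick back-of-envelope check of the 15% figure. This is my simplification, not a model from the original argument: it assumes CO2 emissions scale linearly with CEES consumption and that NEES output stays fixed.

```python
# Back-of-envelope check (my assumption: CO2 scales linearly with
# CEES energy use, and NEES output is held constant).

cees_share = 0.87        # fraction of US energy from carbon-emitting sources
nees_share = 1.0 - cees_share
emissions_cut = 0.20     # pledged CO2 reduction by 2020

# A 20% emissions cut then means a 20% cut in CEES energy:
new_total = cees_share * (1.0 - emissions_cut) + nees_share
energy_cut = 1.0 - new_total

print(f"required cut in total energy use: {energy_cut:.1%}")
```

Under these assumptions the required cut works out to roughly 17% of total energy use, consistent with the “at least 15%” claim above.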
And no, we can’t just swap in Non-carbon Emitting Energy Sources (NEES). Why? Because building that new technology will require energy from CEES. Every new NEES comes prepackaged with a carbon debt incurred in its creation. Using CEES to create NEES actually raises CO2 output short-term. So, not only do we have to reduce our direct use of CEES, but we have to reduce it even further to compensate for the increased CO2 generated by building the NEES.
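A toy model of that carbon-debt argument, with made-up numbers purely for illustration: cumulative emissions run above business-as-usual during the build-out and only drop below it some years after the NEES capacity comes on-line.

```python
# Toy carbon-debt model (all numbers are invented for illustration):
# building NEES burns extra CEES energy up front, so cumulative CO2
# runs ABOVE business-as-usual until the new capacity displaces CEES.

BASELINE = 100.0    # CO2 per year from existing CEES use
BUILD_COST = 20.0   # extra CO2 per year while constructing NEES
OFFSET = 30.0       # CO2 per year displaced once NEES is on-line
BUILD_YEARS = 5

def cumulative_co2(years):
    total, path = 0.0, []
    for y in range(years):
        if y < BUILD_YEARS:
            total += BASELINE + BUILD_COST   # debt accrues during construction
        else:
            total += BASELINE - OFFSET       # payoff once NEES displaces CEES
        path.append(total)
    return path

path = cumulative_co2(12)
business_as_usual = [BASELINE * (y + 1) for y in range(12)]

# Short-term: cumulative CO2 is higher than doing nothing at all...
print(path[4] > business_as_usual[4])
# ...and only falls below business-as-usual years after the build-out.
print(path[11] < business_as_usual[11])
```

The crossover point depends entirely on the assumed build cost and offset; the sketch only shows the shape of the argument, not real magnitudes.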
And that is assuming that (1) we have NEES technology that can step in for CEES (which with exception of nuclear we do not) and (2) that we have the industrial capacity to crank out that much NEES infrastructure and, guess what, we don’t (even for nukes.)
Oh, and did I mention that since we haven’t brought a new nuclear plant on-line in the last 30 years, starting in 2020 we will begin to lose all our nuclear plants to old age? Nukes provide 20% of our total electricity and 80% of our NEES electricity. So that means not only do we have to plan to replace the CEES electricity sources but most of our NEES sources as well.
This is all without considering how we’re going to get energy to the people of the 3rd world to raise their standard of living so they don’t die from causes we have the technology to prevent.
I would point out that the only time in the last two hundred years in which CO2 production leveled off was the period of 1973-1983, during the energy crisis. This is also the only period in American history, including the Great Depression, in which standards of living in America actually declined. In order to meet the goals of Copenhagen or to fend off CAGW based on the IPCC predictions, we will have to trigger an even larger economic reversal.
The left has long established a pattern in which it claims to have proven beyond doubt that some massive catastrophe is looming on the horizon, while the actual changes we have to make to prevent the catastrophe are not only minimal but actually beneficial.
It’s as if the left were a doctor who says, “Oh dear, my tests show conclusively that you have a cancer that will kill you horribly a few years down the road, but I can cure you if you come to my clinic three times a day to eat this special Belgian chocolate truffle I’ve created! Not only is it tasty but it has no side effects!” Horrible disease, trivial cure. If you believe the cure will work, why wouldn’t you accept the diagnosis? Even if it’s wrong, you still get to eat chocolate.
Warming skeptics are like a doctor who says, “Well, I’m not sure you do have cancer, but if you do, treating it will require a grueling series of chemotherapy, surgery and radiation whose side effects might kill you and, even if they don’t, will make you wish you were dead.” In this circumstance, you would look very closely at the diagnosis before submitting yourself to such draconian treatment.
Warming skeptics are people who understand just how hard it’s going to be to actually prevent CAGW and what sacrifices in resources and lives we will have to make. As a consequence, we look at the science of global warming much more critically than someone who thinks preventing global warming is a trivial exercise.
(1) None of these things required us to reduce our energy consumption. Quite the opposite: all of them required us to increase our energy use. We could only do those things by increasing our energy use overall. It takes energy to remove pollutants from coal. It takes more energy to produce new refrigerants. Taking those steps caused us to produce more CO2 than we would have without them. Likewise, switching to NEES will cause us to raise our CO2 production.
(2) Restrictions on coal burning only worked because we relocated much of our industry out of the rust belt, either to other parts of the US or overseas. Predictions back in the ’70s warning that reducing emissions would cause serious economic problems were based on the idea that the rust belt would see the same economic growth it had in the post-WWII era. Of course, it did not. The area was economically devastated and now limps along with only a few more power plants than it had back then. (3) The requirements didn’t prevent any environmental damage; they just shifted it to another part of the planet.
This is a textbook case of what it takes to drastically lower energy consumption.
problems finding refrigerants for cars, and your average power bill would be 1000 bucks a month even if the power company stayed in business, and little ma-and-pa shops would knuckle under due to the tax burden.
So, are refrigerants and cars proportionally cheaper today? Do we have the same or a higher percentage of mom-and-pop businesses today as back then? Just because we’re not dead doesn’t mean we didn’t pay a price. Americans in 1950 were better off than Americans in 1940; that doesn’t translate to “WWII was no big deal.” People did suffer from those changes; they just weren’t upper-class urbanites, so they’re invisible to people like you.
Unlike the numerous left-wing scares of “we’re all gonna die, it’s been scientifically proven” that have come true? The left has been overwhelmingly wrong every time it has launched a nightmare scenario. The fact that we’ve adapted relatively easily to the pollution scares of the past is better evidence that the scares were exaggerated than it is evidence that the problems were easily fixed. If you look at the original claims about what reductions in pollutant levels would be required versus what we have now, we’re not even close.
We’ve adapted to pollution because (1) pollution represents inefficiency, so the natural progression of technology always reduces pollution. (The coal plants that were regarded as so awful in the ’70s were literally thousands of times cleaner than those built even three decades before, and that was done with virtually zero regulation. It is part of an established pattern in which leftists claim credit for long-standing trends.) (2) We haven’t had to reach the levels of reduction the hysterics wanted in the first place. (3) We’ve just shifted a lot of the pollution geographically.
Then, as now, liberal mainstream scientists would have been happy to tell you Ehrlich was a crank and a hysterical buffoon who’d never heard of Norman Borlaug.
Yet Ehrlich was a big player in the media, politics and academia. It is a rewrite of history to say that he was a marginal figure. He was actually firmly mainstream left and remains so today. Governments did base actual policy on his recommendations, although fortunately not his most serious ones.
Really? Is that what happened during the energy crisis? Did people in the developed world pay more for oil so that people in the 3rd world could buy cheap petroleum products that made a huge impact on their standard of living? I hate to break it to you, but they did not. Energy costs shot up in the 3rd world and people died. Millions died. Places like Ethiopia and the Sudan are still suffering the effects.
It would be nice if we could micromanage things to the point where we took all the hit and the most vulnerable saw only benefits, but that has never happened in the whole of human history, and we should not bet people’s lives on the premise that we can pull it off now.
Nuclear is a fine, carbon-free notion but is almost prohibitively expensive without the right incentives, both private and public. Cap-n-Trade provides this. That’s modest, not Draconian and boot-stomping.
Agreed, and I would enthusiastically support a plan that put a modest tax on CEES and then funneled that money into building nuclear power plants and researching a new generation of nuclear power plants. I would particularly support a plan which focused solely on increasing energy production and consumption for everyone.
Unfortunately, nobody has produced even the ghost of such a plan. Instead, all the focus is on radically reducing our overall energy consumption and on planning to replace CEES with unproven NEES technologies that people have pleasant emotional associations with.
Hell, the very people who today scream the loudest about CAGW are the very same people who stopped nuclear power in the first place. If we’d kept building nukes at the same rate as we were in 1975, we would automatically be Kyoto-compliant twice over. Had we embarked on a nuke boom like France did, we would have prevented the possibility of CAGW entirely (albeit at the cost of a reduced standard of living). What makes you think those numbnuts who contributed to the problem more than anyone else will suddenly see the light and go nuke? I mean, these are the people who claim to be convinced the climate is changing but who advocate the almost exclusive use of solar and wind power sources that depend on a steady climate to function! WTF?
The reason why so many people are convinced that the left is using CAGW as a political trojan horse is that their behavior is indistinguishable from someone who was doing just that. Instead of choosing the least invasive and least political solutions, they choose the most invasive. Every one of their recommendations, without exception, increases government power and reduces individual freedom. Most damningly, instead of embracing the technology we have on hand that would fix the problem, they opt for the solutions that will draw the crisis out indefinitely.
My great nightmare is that CAGW is real but that the left is far more interested in sticking it to capitalism than in preventing catastrophe. They will cause us to wreck our economy and tech base such that we are less able to adapt to CAGW’s negative consequences. We will end up with the nightmare climate but without the tools to survive in it.
In summation, I think it’s clear that you are willing to excuse the science on global warming because you think the negative consequences of acting on the science will be trivial. You simply don’t care if the CRU software is garbage because you see no serious negative consequence if it is. (I have no doubt that if you thought the consequences were more severe, you would be more critical of the science.)
I do not have that moral luxury. I have spent a big chunk of my life studying the relationship between technology/energy use and standards of living and understand in my bones what the price many must pay to address a problem of the predicted scale in a realistic fashion. Therefore, I will not accept any but the most professional, open, accountable and confirmed by prediction science on the issue. I refuse to make decisions based in whole or in part on this utterly crappy software.
Nix the one above. Lousy formatting. This is somewhat improved visually.
OK. I know I’m late on the draw here, but still.
A few points, in the order that my extreme tiredness can recall while juggling 100 other things this week:
Okay, to start with, I did read both the Newsweek and the New Scientist articles, and neither addresses any of the technical issues I and others have raised. In particular, they don’t answer my specific objections about the slipshod and amateurish way that the most important computer software in the world was created and maintained. Neither article is actually a technical analysis at all, just repeated arguments from authority that say “trust us.”
They answer absolutely nothing.
Question (1) answered by the FactCheck link I provided… There’s your vaunted transparency.
Well, no, not unless I can go online and download both the raw data and the code, as well as a log of all corrections applied to the data. I repeat: given the seriousness of the decisions we are being asked to make, why shouldn’t we make the process completely transparent?
You haven’t answered the question.
I do believe the articles answer the Far Right Wing’s absurdist claim of these guys serving as Soros’s sockpuppets, if nothing else. The data is available online.
I realize that the link from FactCheck, Begley over at Newsweek, and the others have not answered the issue of CODING per se, which seems to be your focus now, unlike most other Doubting Thomas-type sites out there at the moment. What was also detailed in those stories is that the emails revealed only nasty habits, not conspiracy. It also gave the lie to the notion that large reams of data were flushed down the Great Water Closet and are unavailable. Also untrue. So on that note alone, 98% of the Right Wing blogosphere is hunting down a phantom. Yours is the only site that deals with this code issue; the climate scientists ARE also reviewing these materials, including the computer programs, and while some minor blips have shown up, the consensus among the climatologists (as the articles also pointed out) is that nothing is out of order.

As to the issue of science in general being one of testing and not observation, you’re technically correct, of course, on the methodology. But observation must come first, and even if we had flushed all the data of decades down the commode, we would still have the observation of polar ice caps melting, glaciers retreating, hummingbird migration pattern shifts, crop failures even in erstwhile favorable zones, and dozens of other anomalies that have hit the earth like a bolt out of the blue. Coding or not, that needs to be answered by the Skeptics as well. It has not been so far. Coding faux pas or not, no code is necessary to see animal migration shifts and polar caps melting. Any input on THAT?
(2) was answered by Factcheck and NS. The issue of the coding was already answered. The corrections were explained as necessary and rather common and mundane.
As noted above, they merely say that the corrections are mundane, but they don’t print the code and show why they are mundane. I have personally looked at one section of code that applies a different correction based solely on the date the observation was taken. It “corrects” temperatures in the past to be cooler and more recent temperatures to be warmer. As a result, even if you ran the same fixed temperature for every year through the code, it would produce a warming trend. Nothing in the documentation justifies this “correction” other than as an attempt to force the data to agree with another data set.
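A toy sketch of the behavior described above. This is NOT the actual CRU code; the pivot year and rate are invented purely to show the effect of a correction keyed only to the observation date.

```python
# Hypothetical sketch, not the actual CRU code: a "correction" keyed
# only to the observation year cools old readings and warms recent
# ones, so even a perfectly constant input comes out as a warming trend.

def date_based_correction(temp_c, year, pivot=1975, rate=0.01):
    """Adjust a reading by `rate` degrees C per year relative to a pivot year."""
    return temp_c + rate * (year - pivot)

# Feed the SAME temperature through for every year 1950-2000:
raw = [(year, 15.0) for year in range(1950, 2001)]
corrected = [date_based_correction(t, y) for y, t in raw]

trend = corrected[-1] - corrected[0]
print(f"raw trend: 0.00 C, 'corrected' trend: {trend:+.2f} C")
```

With a flat 15.0 C input, the output climbs half a degree over the series, a trend manufactured entirely by the date-dependent adjustment.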
Fair enough. If that’s truly the case, then you’d raise some eyebrows over at Newsweek and New Scientist. They don’t seem aware, and… they need to hear from YOU, not guys like me.
But what else am I supposed to do generally but go to the climate scientists? You ask me to piddle with the code, although we both freely admit I’m not into this kind of gig, and yet this would be the equivalent of handcuffing my arms and legs and asking me to do cartwheels and expert gym flips on the sawhorse. Instead, I go to the consensus among the scientists who actually do the coding, their testimony, mind you, and YET this is STILL not good enough due to some Crichton-sounding conspiratorial mindset that says they must be compromised because they work primarily for government agencies of one description or another and on the public dime.

But what else am I to do but defer? Would I trust my or (no offense) your judgment on the expert preparation of Hollandaise sauce if we’re not professional chefs? And are the scientists in this field really any more compromised than Richard Lindzen and Patrick Michaels, or even some non-scientists like “Lord” Monckton when chiming in on this issue? Are the capabilities of others who work on the public dime, or with partial or mostly public funding, part of some mass, humanity-killing spree with Cap-n-Trade notions dancing in their heads due to socialist overlords like George Soros? How then are they SUPPOSED to be funded????? Donations only? This is all so surrealistic and conspiracy-sounding. I don’t ask the climate scientists, however much I might like to keep more tax money, about hair styling methodology. And by the same token, I think my wife would leave the issue of climate to the experts and not to her friend Darlene down at SuperCuts.

Michael Crichton liked to use handy aphorisms like “consensus is not science, and science is not consensus.” True as far as the wording goes, but like most aphorisms, such as “location, location, location,” this too leaves out vast swaths of context. The AGW climate workers across the PLANET have for the most part all arrived at the same consensus.
That’s when consensus is important and DOES very well mean something. Whom then shall I ask otherwise?
I see the skeptics making detailed technical arguments and the alarmists making hand-waving, “trust us, this is all normal” counterarguments. Given the seriousness of the issue, I think it reasonable to insist on the highest standard.
I’ll be convinced I’m in the wrong (and indeed all the AGW community, who piddle at a higher level with all this than I can) when leading scientists not financed by Exxon or under the tutelage of CEI affirm in mass numbers that the manipulation was nefarious.
So, you’re saying that you believe in CAGW because of who says it is happening? That is not science. Science works regardless of the personal biases of scientists. Free-world and Communist physicists all arrived at the same scientific conclusions.
That’s right. Now we can get around to figuring out why you think the public dime makes about 99.9 percent of all climate scientists compromised. See above again. The WHO of this CAN be important. Skeptics always make detailed arguments, on ANY issue at hand. That’s not a problem in and of itself, of course; it gets some stones turned in time and is healthy for the overall situation. But we’re at the point now where, once the issue of the mundane corrections being some kind of conspiracy is settled, we’re getting into annoyance. See Steve Hoofnagle’s site on this one over at Denialism.com. I’m looking not for potshots from Skeptic Zoners and non-scientists, but for vast throngs of climate scientists suddenly saying WHOA, HORSEY!
But they’ve not done that.
This would stand in firm contrast to the (usual) addition of the opinions of non-scientists among the Skeptics, which is far more common. Thus, for example, we had that laughable, notorious, mythical “statement” (absurdly signable online, at that) of a claimed 17,000 “scientists” (!??) who opposed the notion of AGW. It turned out that most of these people were NOT climate researchers with any applicable degree or expertise, and the term “scientist” included everyone from homemakers to actors to phony names like Mickey Mouse, and at best some doctors of veterinary medicine. Well-meaning, I’m sure, and no doubt some of them could punch holes in the Fortran code, but this is not the same as canvassing the honest opinions of real climate researchers who work in the industry day in, day out. So it goes with most AGW denialists and detractors and professional Skeptics.
Also, one last thing on the coding issue. Who put the code in, and who developed it? I would agree they need to be looked at. But if we find that the issue here is (as I understand it) not that complicated compared to other kinds of coding, then the argument is finished.
This is the exact dynamic at play circa 1980 during the energy crisis. Back then I would have been arguing that we had plenty of oil worldwide and that the crisis was purely political. I would have pointed to some scientists and experts in the petroleum industry who would explain things. You would respond that all the real scientists understood we were out of oil everywhere and that anyone who said otherwise was just a pawn of the oil companies.
I believe we are undergoing the exact same dynamic now as then. I think that people like you with no technical understanding of the issues choose to believe in the crisis du jour because authority figures that you ego identify with tell you the crisis is real.
Anyone who says you should not trust technical work because the person who did it belongs to some despised out group is trying to manipulate you.
More importantly, do you really want to start a discussion about motives? Why would money from the government distort science any less than money from the oil industry? Can we trust climatologists when tens of millions in grants, their personal reputations and careers, and the status of the entire field rest on CAGW occurring?
See above. Why is private industry any more or less sacrosanct than government work? Can or should I choose on that basis, or should I have no choice but to delegate that issue to those who actually work in the field? How and why are you making the distinction? The oil barons and coal burners have motives as well, and that’s why it was the force of law that gave us Cap-n-Trade I, regarding sulfur emissions, though most people forget.
So either you did not answer the question, or you can’t think of any scientific evidence that would convince you otherwise. Instead, you will decide it is wrong only when authority figures who share your prejudices tell you it is wrong.
Well, again, that depends on just which authority is saying what. Where else am I to go but the majority of climate researchers, whom you seem to think are under the funding gun and so can’t feel “free” to be honest about their findings? You’ve handcuffed me and then ask to see backflips. See much of the above again regarding consensus and government work. If I need a public defender, I don’t generally have a valid (certainly not a provable) reason to doubt the integrity of his or her work just because the taxpayer undergirds his or her salary. I don’t detect any nefarious underworld interest or even raw self-interest. Funding would continue regardless of their findings; it would simply be channeled in a new direction, toward something else or something more particular.
Actually, no, my question has nothing to do with politics of any kind. If we respond aggressively now to head off global warming, we will kill hundreds of millions of people over the coming decades by starving them of life-saving energy. It doesn’t matter what social, economic or political mechanism we use; we will still kill them. Even if we waved a magic wand and converted every country for all time into a liberal democracy with a free market.
Most unlikely.
Energy is life. The industrial revolution was an energy revolution, and all the benefits we take for granted come from using more and more energy. People who are poor lack energy, not money. To raise their standard of living, we have to make more energy available to them. That is physics, not politics.
To head off CAGW decades down the road, we have to restrict our use of energy with the technology we have here and now. That means doing things like producing less cement (very energy-intensive, and done exclusively with fossil fuels). That, however, will mean less concrete for water, sewage and flood-control systems in places that desperately need them. That means people dying from contaminated water, sewage-borne illnesses and floods. And that is just one example; every single life-giving technology we use requires energy to be created and used.
The current proposals on the table, at worst, have nothing to do with restricting industry or energy or anything of that kind. Just, perhaps, the methodology over the LONG haul.
Now, if the CAGW models are correct, then we must make such sacrifices for the sake of our long-term survival. The question I put to you is simply this: are you confident enough in this data to bet people’s lives on it? Remember, you have to kill people today regardless of whether it turns out to be right or wrong in the distant future. If CAGW proves correct, then you have saved lives on net, in a grim lifeboat-dilemma calculus. However, if CAGW proves incorrect, or some future technology easily corrects the problem, then you have killed people for nothing.
So, please answer the question: how many lives are you willing to sacrifice based on this data?
I’m confident in the data as currently presented by experts in this field; thousands of scientists who work in this area are in general agreement. As to killing millions, and whom to sacrifice: as with the great faux moral outrage over the faux DDT Caper of the early 1970s, this too is hyperbole at most. It’s unanswerable.
It’s like P.J. O’Rourke’s old sardonic quip: “Would you kill your grandmother to pave I-95?”
So I could turn around and ask which horror is the most horrible.
Did you ever see that movie where a ship gets torpedoed and the survivors crowd into one lifeboat? The officer believes that, owing to the war, they won’t be rescued, so they must row to the coast. He calculates how much food and water they have and comes to the grim conclusion that they can’t all make it. To have a hope of saving some of the passengers, he has to push others overboard.
I’ve seen something with a similar theme. But mankind does not generally live in such a metaphysical nightmare, and we can’t generally figure things on such a scale in the manner you’ve framed it. By contrast, we DO know we need to take unpleasant actions that can limit and curtail freedoms. Fortunately, this unpleasantry will not involve killing half the Third World through energy starvation. That is hyperbole. Europe has already performed most of these rather modest Cap-n-Trade-style actions, based on previous case law and previously tested, morally adequate notions, and it still exists without any of the theoretical fallout of civilizational destruction, even in the interim. These measures just made things a little pricier at most, and yet in Germany I can have heart surgery without being in hock to the doctor for the rest of my life.
That’s pretty much the position you find yourself in. How confident are you in your calculations and your assessment of the technological evolution over the next 100 years?
In the movie, a rescue ship shows up right after he drowns the last surplus passenger.
A needless horror is the most horrible. Killing people now to prevent an illusory horror down the road is the worst thing I can imagine: killing a few to save the many, and then finding out you didn’t have to kill anyone at all.
I would remind you that people have been predicting that we can’t “sustain” ourselves since at least Malthus, and every single one of the doomsayers, without exception, has been proven wrong. Why should we even suspect they are suddenly correct now? After all, the same social/political segment was wrong about the energy crisis, resource depletion, the population bomb, etc., so why should they suddenly be correct now?
You should reflect that back in the ’60s and ’70s you personally would have believed just as passionately in the utter certainty of all those predictions, just as you have perfect faith in CAGW today. Doesn’t that suggest to you that something might be wrong with your intellectual process in these matters?
Whew. OK. I’m not sure how even to begin here, because this assumes more than I can attend to at the moment, along with quite a bit of hyperbole regarding the notion that governments around the world are going to halt industrial-scale production and stop people from building things like roads, infrastructure, water-treatment plants, and new improvements in agriculture and medicine. Let me give this a shot, since I’m old enough to remember those glossy Weekly Readers in school that, along with Newsweek and a grand total of five other outlets, supposedly bespoke horrid predictions of doom: the banning of DDT, coming ice ages, etc. More on that momentarily.
But first, you’re offering for the viewing audience here the allegation of a scale of destruction that would outpace Stalin and Mao, and at that point the issue has no more credibility than the far leftists who get confused with the mainstream scientists, who are merely offering a very modest set of rules like Cap-n-Trade to create incentives for improvements in usage and infrastructure. I’ve seen the claims of both extreme sides: those who want us to live like Laura Ingalls to save Gaia, the Earth Goddess, and those who think we can continue to belch carbon into the air unabated and suffer no ill effects. Not even Dixy Lee Ray thought this; in fact, regarding the mainstream Greens who warned about some things sans hyperbole, she wrote in her anti-green book “Trashing the Planet” that she felt it was indeed time to curb the excesses of a “throwaway” society. Similarly, even Reason.com’s Ronald Bailey now understands the AGW issue is in need of addressing. Too, we have the words of Thomas DiLorenzo, a free-market heralder and trumpeter if ever one walked the earth, who told us we don’t need to be “Pollyannaish” about all this or “continue this grand chemistry lab experiment with earth’s atmosphere and CO2,” and that we certainly need to pay attention to what the science is saying and make changes where necessary.

And this goes to the crux of the matter, where some libertarians fall down on the job. All our rights are actually exquisitely dependent on the full context of things. We have the right to own guns and dispose of garbage, for example, with some wide girth in freedoms, at least as Americans. But we can’t fire off a .30-06 in the backyard in most residential zoning districts, and most modern neighborhoods “limit” the personal freedoms of their residents with CC&Rs (restrictive covenants) and various ordinances, like fence rules and home-decor requirements, to ensure property values and address pragmatic issues like safety.
Steve Kangas makes this point very clearly in a series of essays about what true “freedom” in both personal life and the business realm requires of us. Along with freedoms come responsibilities. That’s all Cap-n-Trade really is, and we’ve got some good, NON-Draconian case law and case studies on the books to indicate that not only is this rather modest body of rules not an overreach of state power, it follows established rules that go back to the English kings’ rules on private property, and it works as advertised.

Unbeknownst to most people, in those horrid days of the 1970s when Greenies were supposedly running amok wreaking all manner of horror on industry, the air was getting cleaned up. HOW? Via the wonderful magic of free enterprise? No. Government power. Sulfur was “capped and traded” much the way carbon is in Europe at the moment. Cap-n-Trade I (sulfur) worked well, and limited sulfur emissions to the point where incentives were found either to switch to other coal varieties or to introduce scrubber technologies. Today the air in America is some of the best on the planet. The market did not do this. Force did. Force from the radical enviro-nuts armed with studies and computer modeling of acid rain’s dissolving effects on concrete and even foliage in the Alps and the Rockies. Imperfect data, I might add, and data the snipe artists on the Right had fun poking at, though the consensus is nevertheless that this move was requisite to spare human health. And the cost of a few bucks a person for the temporary pollution allowances, which some cynics on Right and Left thought was whoredom, created the incentives. Did the costs get passed on? Indeed they did. A few bucks a month, which is also the upper-limit estimate of the most horrid predictions for Cap-n-Trade (carbon, this time), was certainly cheaper than the social health costs of asthma and the infrastructure repair needed for corroded monuments, concrete and eroded roads.
Unknown to most is that Europe has had great success under Kyoto (think of it as Copenhagen I), long before all the faux moral outrage and death stories and bank-busting costs claimed for Copenhagen II. Europe is still there, the taxpayers of her nations are not busted, the social programs and health benefits are all still intact (even if some horror stories about MRI wait times seem to be half-truths), and compared to the frowsy performance of US industry, hers is going gangbusters. The streets are clean, the people are healthy, and there is far less carbon belched over there than here. We got the sulfur; now it is on to carbon.
As to who on the Right was right in the 1970s, I’m not sure what you’re referring to. The Right as well as the Left had its share of loonies with the accompanying loony predictions. Thus, for example, we DID have that apocryphal Newsweek article purporting to show a new ice age, which did not share the opinions of any real climatologists (just as right-wing outlets ignore them today in AGW denialism). Yes, the NEA had some odd priorities in those days and did hand out, as Reason puts it, a lot of garbage Green stuff. The NEA truly became the National Everything Association and went beyond science and into Leftie propaganda. True, and regrettable. But, as was also pointed out in THOSE days (I remember the asterisk at the bottom of the page on this one in the Weekly Reader), it was affirmed that, THEN AS NOW, MOST climatologists thought the world was getting warmer, despite the data suggesting a temporary downturn, and despite people confusing weather with climate, another common error, as when someone laughs about snow at a global-warming conference in a cold-weather clime like Norway.
(See also Mark Hoofnagle’s Denialism.com site. He has the background on who really said what and who should be taken with a grain of salt or two.)
Paul Ehrlich was around with his dumb antics in those days, as were the people complaining that one pound of beef required enough water and feed for the cow to float a destroyer or feed 10,000 people, etc. Cranks come and go, but the settled science of AGW was around then as now. Then, as now, liberal mainstream scientists would have been happy to tell you Ehrlich was a crank and a hysterical buffoon who’d never heard of Norman Borlaug. But overpopulation, per Borlaug, while not leading to one billion deaths within 10 years of the publication of The Population Bomb, WAS serious, DID have deleterious effects, and will again. As will Ehrlich’s emphasis on raw materials and oil, and so forth. The truth was somewhere in between, and was generally acknowledged as such at the time, from what I can remember and from the articles of that era I can backtrack to. You had some who went overboard. You always will. Like Ehrlich, Rachel Carson, Margaret Sanger, and some others who were refuted, even if some of their crap unfortunately did get into the minds of kiddies in the public schools.
It is also a myth that professionals for the most part told us oil would run out quickly. I remember no such grave warnings, other than from moron politicians like Jimmy Carter and some in his cabinet making this into a political issue while hawking windmills and chicken poop; yes, we have those types today as well. But so what? Oil executives and geologists and people with a vested interest in all this made the discovery, even in those days (souring many Green faces), that oil would in all probability just get more expensive before it ran out, that the real problem was there’d be too damn much of it for political comfort, and that the House of Saud would continue to be the primary source once the embargo got worked out.
But the Right had its foibles as well. One of the most famous mythologies, which ties into your claim that millions have died or might or will die due to radical nutcase Greenie legislation of a Draconian nature, was that due to Silent Spring and cranks like Ruckelshaus, DDT was “banned” worldwide. It was not. It was banned for MOST applications in the US and most of Europe, but basically given free rein in the Third World. And it is used today in some applications, though with more caution, as advised by Carson. (She did NOT advocate its complete ban, contrary to mythmaking on the Right.) So millions have not died from lack of DDT and a resurgence of malaria. What WAS done, however, was to prevent resistant strains of mosquitoes from getting the better of that chemical, a common problem with pesticides, and one seen with DDT as well. An American icon of a bird was spared and is making a slow comeback, and tens of thousands of Americans were spared tumors too. So even if DDT were used on a larger scale, its effectiveness would be in question by now anyhow due to mosquito resistance, though Michael Fumento does claim it still repels mosquitoes if you coat yourself and your surroundings with it.
Then of course, as with carbon capping now, we were told in lugubrious tones that sulfur trading was a cynical Green ploy to destroy business, as would be the banning of CFCs (which also worked well and did not bring us to a halt); that we’d all suffer from a lack of freedom and consumer choice in aerosols and from problems finding refrigerants for cars; that your average power bill would be a thousand bucks a month even if the power company stayed in business; and that little ma-and-pa shops would knuckle under due to the tax burden. All of these right-wing scares proved false.
What was the Right right about back then? Not much, other than the predictable fact that the Greens had among them ideological and quasi-religious intonations, just as the Right-Wingers did at the Holy Altar of Ayn Rand, who felt it a damnation of your freedom if you couldn’t burn tires in your backyard at whim.
Poor people do indeed need energy, and industry as the outgrowth of that availability. But Copenhagen does not nix that possibility, just as the sulfur trade deprived us of nothing either. It made things cost a wee bit more in the interim transition period, and likewise carbon trade will make us pay somewhat higher fees and, yes, save some freakish and perhaps unneeded birds’ migration routes, but it will also force us to find alternative paths to energy. The Third World is truly not going to stand by and allow your dystopian mode of thinking even IF the Copenhageners, under the lordship and tutelage of George Soros’s stenographers, did have this in mind. No way. What WILL happen, unlike in the lifeboat scenario, is that costs will shift to those of us more able to afford this transition (is that unfair?) and provide alternatives to China and India and some others that are bringing online nothing but coal plants at the rate (last I checked) of about 10 per month. Even for the lingering doubters, 10 new power plants a month belching carbon and soot is not acceptable. What else do you propose, then? Nuclear is a fine, carbon-free notion but is almost prohibitively expensive without the right incentives, both private and public. Cap-n-Trade provides this. That’s modest, not Draconian and boot-stomping.
Also, as to those predictions from time immemorial? True. But so what? Most of those were prescientific and came in an age when religious dogma stood as literal holy writ, based on emotion and scare tactics more than any real evidence, long before the age of discovery unlocked the secrets of things like modern mining, metallurgy, and modern agriculture. It was a time when you were held basically ransom to the whims of gods and natural rhythms. And keep in mind Malthus was not disproved; it’s just that modern technology gave us a temporary call from the governor, if you will. A reprieve of sorts, in other words. His charts merely applied to human society at large what was, and still is, known about the life cycle of all biological organisms and systems, though admittedly his timescale for the final curtain call was too short.
OK. Let’s forget about the ideological angles of all this, the conspiracy theories, and the scare stories about having to nix more people than Stalin did in order to rightsize what we’ve done to the climate, and assume this will be just some Laodicean hellfire at most, with little more than lost ski slopes, drowned polar bears, and species lost elsewhere (all of the above already happening).
What about visual confirmation? Does that count for nothing? We have polar ice caps melting, and the fact that the poles are warming faster than the temperate regions is EXACTLY what AGW theory set forth: the well-known polar amplification effect, driven largely by feedbacks such as melting ice exposing darker, more heat-absorbing surfaces. This fits what we can OBSERVE, embarrassing emails and dodgy code or not, about Arctic ice melting, and it explains why people remark that little has changed in the climate of the regions where most of us live, the temperate zones.
In fact, at this point the “theory” angle of this is purely academic. (Thus, for example, I’m quite sure the residents of Hiroshima would laugh at the notion that theories about mass-to-energy conversion are not fully tested, even if the term STILL technically applies. Yep, it’s a “theory.”)
A friend of mine had the following précis on all the foregoing, and it might be worth a gander:
CLAIM: Scientists had private doubts about whether the world really is heating up.
TRUTH: Combing through over a decade of personal correspondence taken out of context can seem to prove just about anything. Skeptics have been pointing to one email from Kevin Trenberth, in which he said, “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” However, this is a clear example of cherry-picking quotes. Trenberth was referring to an “incomplete explanation” of the short-term variability of temperatures, but he concludes that “global warming is unequivocally happening.”
___________________________
CLAIM: These scientists worked to suppress evidence and deleted emails.
TRUTH: Thousands of emails from over 13 years were stolen, edited, and taken out of context by those with a political agenda. As blogger Jeff Masters writes,
“Even if every bit of mud slung at these scientists were true, the body of scientific work supporting the theory of human-caused climate change, which spans hundreds of thousands of scientific papers written by tens of thousands of scientists in dozens of different scientific disciplines, is too vast to be budged by the flaws in the works of the three or four scientists being subjected to the fiercest attacks.”
As climate czar Carol Browner says, “I’m sticking with the 2,500 scientists [of the Intergovernmental Panel on Climate Change]. These people have been studying this issue for a very long time and agree this problem is real.”
___________________________
CLAIM: Scientists have been working to remove skeptical peers from the climate discussion.
TRUTH: Organizational politics, disagreement and strife are hardly foreign ideas in university, research and scientific communities. As RealClimate, the blog run by climate scientists, writes, “Since emails are normally intended to be private, people writing them are, shall we say, somewhat freer in expressing themselves than they would in a public statement.” Again, this does not remotely prove any sort of cover-up, and the critiques of these papers were made and debated by scientists PUBLICLY, but perhaps less bluntly than they were stated in the emails. (Here’s what the “infamous” line about keeping people out and peer review was ACTUALLY about.)
___________________________
CLAIM: These emails are the final nail in the coffin for the idea that humans cause global warming.
TRUTH: If the deniers’ wildest claims bantered around the Internet were true, wouldn’t nearly 15 years of emails ACTUALLY SHOW some of these insipid rumors to be true?
More from Real Climate: “More interesting is what is not contained in the emails. There is no evidence of any worldwide conspiracy, no mention of George Soros nefariously funding climate research, no grand plan to ‘get rid of the MWP’, no admission that global warming is a hoax, no evidence of the falsifying of data, and no ‘marching orders’ from our socialist/communist/vegetarian overlords. The truly paranoid will put this down to the hackers also being in on the plot though.”
___________________________
CLAIM: This reignites the debate about if global warming is real.
TRUTH: There is a strong consensus in the scientific community that global warming is real and is caused by humans. The top scientists in the world have just released a new report on the realities of global warming. Kevin Grandia summarizes some of the key points about emissions, melting ice sheets, and rising sea levels. The emails don’t change any of this.
___________________________
CLAIM: Scientists have manipulated data.
TRUTH: Skeptics have been pointing to an email from scientist Phil Jones where he said he used a “trick” with his data. As climate expert Bob Ward writes, “Scientists say ‘trick’ not just to mean deception. They mean it as a clever way of doing something — a short cut can be a trick.” RealClimate also explained that “the ‘trick’ is just to plot the instrumental records along with the reconstruction so that the context of the recent warming is clear. Scientists often use the term ‘trick’ to refer to … ‘a good way to deal with a problem’, rather than something that is ‘secret’, and so there is nothing problematic in this at all.”
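For readers curious what “plotting the instrumental records along with the reconstruction” actually looks like in practice, here is a minimal sketch. All the numbers, date ranges and trends below are made up for illustration; they are NOT the actual CRU or proxy data, just stand-ins to show the overlay idea:

```python
import numpy as np

# Hypothetical stand-ins (NOT real data): a proxy reconstruction
# (e.g. from tree rings) covering 1400-1980, and the instrumental
# thermometer record covering 1850-2009.
proxy_years = np.arange(1400, 1981)
proxy_temps = 0.0005 * (proxy_years - 1400) - 0.2   # slow synthetic drift
instr_years = np.arange(1850, 2010)
instr_temps = 0.005 * (instr_years - 1850) - 0.1    # faster synthetic rise

# The "trick" is simply to draw both series on the same axes so the
# recent instrumental warming appears in its long-term context, e.g.:
#   plt.plot(proxy_years, proxy_temps, label="reconstruction")
#   plt.plot(instr_years, instr_temps, label="instrumental")
# Nothing is hidden: both series remain intact and clearly labeled.

# Years where the two series overlap and can be compared directly:
overlap = np.intersect1d(proxy_years, instr_years)
print(overlap[0], overlap[-1], len(overlap))  # 1850 1980 131
```

The point the RealClimate quote makes is visible even in this toy version: overlaying the two labeled series is an ordinary presentation choice, not a manipulation of either dataset.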