GE is advertising to build political support for Obama’s plan to purchase billions of dollars of GE tech in order to make the power grid “smart”. After all, who would want a “dumb” anything when they could have a “smart” something?
The reason we should keep things dumb is that in engineering the word "dumb" has a different connotation: it means simple and reliable.
Increasing complexity in any networked system increases the possible points of failure. Worse, the more interconnected the system, i.e., the more any single component affects any other randomly selected component in the system, the faster point failures spread to the entire system. Power grids are massively interconnected. Every blackout starts with a seemingly trivial problem that, like a pebble falling on a mountainside, triggers an avalanche of failure.
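To make the avalanche metaphor concrete, here is a toy Python simulation. The load model, thresholds, and network sizes are all illustrative assumptions, not a model of any real grid; the point is only that, in this simple model, the same single failure spreads further as the number of interconnections grows.

```python
import random

def build_grid(n, k, rng):
    """Random network: add links until every node has at least k neighbors."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        while len(nbrs[i]) < k:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def cascade_size(nbrs, rng, capacity=1.5, bump=0.3):
    """Fail node 0; each failure dumps `bump` extra load on every live
    neighbor, and any node pushed past `capacity` fails in turn."""
    load = {i: rng.uniform(1.0, 1.4) for i in nbrs}
    failed, frontier = set(), [0]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        for nb in nbrs[node]:
            if nb not in failed:
                load[nb] += bump
                if load[nb] > capacity:
                    frontier.append(nb)
    return len(failed)

def avg_cascade(k, n=200, trials=10):
    """Average cascade size over several random grids with ~k links per node."""
    total = 0
    for seed in range(trials):
        rng = random.Random(seed)
        total += cascade_size(build_grid(n, k, rng), rng)
    return total / trials

for k in (2, 4, 8):
    print(f"~{k} links/node -> average cascade of {avg_cascade(k):.1f} out of 200 nodes")
```

In this toy model, the more densely linked grids lose far more nodes to the same initial failure, which is the sense in which interconnection turns a pebble into an avalanche.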
Here, let me take a page from GM's book and explain it in song. (To the tune of "If I Only Had a Brain")
You can while away the hours
While waiting for your power
But it’s really, really lame.
(whistling)
Programmers are busy patchin’
Since the switches are a crashin’
Cause your Grid has a Brain
In the dark you can fiddle
cause your power’s just a piddle
Cause your Grid has a Brain
When your Grid it starts to thinking
More it will be breaking
Cause your Grid has a brain
Oh I, will tell you why
Smart systems are such a chore
Cause complex things break more
They break in ways you never thought before
Management will be a toughin’
If with smarts we get to stuffin’
It will bring the system pain
We won’t dance and be merry
Our problems will be more hairy
Cause our grid has a brain!
For many people, it seems intuitively obvious that a top-down approach to network design makes it more reliable, but real-world experience proves the opposite. Networked systems exhibit unpredictable emergent behavior: very small inputs can produce unexpectedly large outputs, and inputs that look trivial in the design phase turn out to bring down the entire system in practice. This makes the top-down design of reliable, complex networks nearly impossible.
Instead, large networks should evolve from the bottom up. You start by improving small, localized networks and getting them to work on their own. Only then do you begin linking them together to create progressively larger networks. In this way, each new stage rests on a proven foundation. Emergent problems are smaller and easier to localize. In extremity, you can just revert the system to functioning local networks.
The Internet evolved in such a fashion. The Internet began with small, localized intra-nets which were gradually linked together from the bottom up to produce one, giant, planet spanning network. Had we tried to engineer the Internet from the top down, we would never have succeeded.
When the federal government pays for and directs the upgrading of the power grid, we inevitably get a top-down design process. Instead of grand visions we should start modestly from the bottom up, by making end-user power management "smart" — by installing computerized meters and other technology that lets end users control their power consumption. When that works well, we can start adding more intelligence to local switching and then move up the system from there in stages. We should keep the major core systems as "dumb" as possible for as long as possible.
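As a sketch of what "smart at the edge" might look like, here is a hypothetical end-user controller. The class, its price-signal interface, and the numbers are all invented for illustration; no real metering API is assumed. The key property is that every decision is made locally at the meter, so the grid core can stay dumb.

```python
class SmartMeter:
    """Toy end-user controller: sheds deferrable loads locally when the
    per-kWh price crosses a user-set ceiling. All decisions are local;
    the grid core never needs to know the household's policy."""

    def __init__(self, price_ceiling, deferrable_loads):
        self.price_ceiling = price_ceiling        # $/kWh the user will pay
        self.deferrable = dict(deferrable_loads)  # name -> kW that can wait
        self.running = set(self.deferrable)       # loads currently drawing power

    def on_price_signal(self, price):
        """React to a broadcast price; return the change in kW drawn."""
        if price > self.price_ceiling:
            shed = sum(self.deferrable[n] for n in self.running)
            self.running.clear()
            return -shed          # negative: load shed
        restored = sum(kw for n, kw in self.deferrable.items()
                       if n not in self.running)
        self.running = set(self.deferrable)
        return restored           # positive: load restored

# Illustrative household with two deferrable loads (kW values are made up).
meter = SmartMeter(0.15, {"water_heater": 4.5, "ev_charger": 7.2})
print(meter.on_price_signal(0.30))  # peak price: sheds the deferrable load
print(meter.on_price_signal(0.10))  # price falls: restores it
```

The design point is that thousands of such meters responding independently to a simple broadcast signal require no central controller at all, which is the bottom-up starting point the post argues for.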
[Addendum: I would also add that at no point in the history of any technology has improving the efficiency of a technology led to people using less of it. If we improve the efficiency of the power grid, people will consume more electricity. I’m all for increasing the efficiency of the grid just because that is how we progress. However, pushing improvements in hope of conserving energy is counterproductive.]
[Update (2009-4-8-08:46): When I wrote this post, I was thinking about failures that arise from the unexpected interactions between the components of the grid itself. However, as anyone who gets computer viruses knows, computerizing a system makes it more prone to attack. Instapundit links to an article describing how foreign black-hat hackers have penetrated our existing computer controls on the power grid. As we increase the level of computerization in the grid, we will increase our vulnerability to such remote attacks. Again, the best defense is a more modular system wherein different modules use different technology. Your network is safer if you have Windows, Macs and Linux boxes in it than it is if it's pure Windows.]
The real question is not whether the grid will evolve intelligence, but whether the greens will allow any improvements to the grid at all.
It will be "interesting" (in the Chinese sense) when plug-in electric cars start to come online in a serious way.
One of the most marvelously interconnected and complex systems anywhere is the human brain. It would be nice if Shannon Love used hers. But maybe she is making her own case. The post above is full of baseless and erroneous opinions presented as fact. Take for example her concept that the internet "began with small, localized intra-nets which were gradually linked together from the bottom up to produce one, giant, planet spanning network." The internet actually began with a top-down approach in the development of a redundant system that would be able to withstand the system crash of a nuclear war. (Look up DARPANET.) While in some sense the United States as a whole may be "small" and "localized", it really was designed with a top-down approach. Many of the problems that exist on the internet today come from trying to integrate disparate systems.
I have no idea how old Shannon Love is, but I would direct the poster to review his or her history about the development of safeguards in systems, particularly with regard to overloading the power grid. Particularly asking him or herself why the large area blackouts of the 70s and 80s have still (albeit infrequently) happened. And why the system was not able to simply disconnect itself from each other as she suggests. The short answer is that they were built on "dumb" technology. If they had the capability to sense and evaluate the loading as a system with a brain would have been able to do, they would have been able to do exactly what she proposed, disconnect themselves and remain operational, and not had the large area failures that occurred.
Shannon Love is proposing the same philosophy that says raw meat is good enough, why make this fire thing? It is not safe. It is not reliable. Don't do it. Welcome to the 19th century.
Geoff…the assertion that the ARPANET was developed to withstand a nuclear war is an urban myth…see this history, written by people who were heavily involved in the development of the network. ARPANET was in actuality designed as a research network to allow the linking of different computer systems, developed by different manufacturers and running different operating system software. The protocols for linking these systems were deliberately kept as simple as possible.
Some of us still like POTS with rotary. Even if the power is out to my house, my phone still works unless the line is physically down. KISS.
Geoff Bickford,
The internet actually began with a top down approach in the development of a redundant system that would be able to withstand the system crash of a nuclear war.
As David Foster has pointed out above, that is a myth. Without going into gory technical detail, I would refute your assertion by simply pointing out that we call the internet the "inter"-"net" because it is composed of connections between smaller networks. More importantly for my purposes, the internet was not a big top-down political project. No committee sat down and decided where all the routers, servers and clients would be. Even the Domain Name System (DNS), the one central authority of the internet, only names the largest domains. The vast majority of internet addresses in the world are assigned by local administrators.
Particularly asking him or herself why the large area blackouts of the 70s and 80s have still (albeit infrequently) happened
The famous northeast blackouts of the late '70s and '80s are textbook examples of emergent cascade failures in a top-down designed system. Starting in the late '50s, the American state and federal governments, in cooperation with Canada, began a program of "rationalizing" the power grid in the northeast to make it more efficient. The modular nature of the grid prior to that was largely dissolved so that power could flow easily all across the region. Unfortunately, this also meant that overloads and failures could propagate just as easily. Interestingly enough, New York City's Con Ed power grid was still largely modular and would have stayed online had its operators not tried to use its power to stabilize the rest of the grid.
If they had the capability to sense and evaluate the loading as a system with a brain would have been able to do, they would have been able to do exactly what she proposed, disconnect themselves and remain operational and not had the large area failures that occurred.
Yes, modern technology could have prevented the failures of 40 years ago but it won’t prevent its own failures in the future. We’re talking about an untested, computer-controlled grid that spans the entirety of North America routing power from solar farms in Arizona to New York City.
Shannon Love is proposing the same philosophy that says raw meat is good enough, why make this fire thing? It is not safe. It is not reliable. Don't do it. Welcome to the 19th century.
Well, if you'd bothered to read to the end of the parent post, you would have noticed that I have no problem with new technology. My entire adult life has been dedicated to creating and maintaining new technology. I am not arguing against new technology but rather against a specific method of implementation. I am arguing for a bottom-up, evolutionary implementation instead of a top-down, dictated implementation. I am arguing for an implementation conducted by engineers starting small and experimenting their way up to larger systems, instead of an implementation conducted by politicians who will start big at the top of the system and try to change everything at once.
In technology, we face trade-offs between reliability and efficiency. Reliability requires surplus and redundancy. Efficiency means eliminating surplus and redundancy. When you have a political culture that says that increasing efficiency will "save the earth!", it's easy to see how reliability will become a secondary consideration.
See today’s WSJ.
Computer control to avert widespread blackouts is impossible. It requires that the detection of an anomalous event be transmitted to other parts of the grid to stop it from spreading. But the anomaly spreads at the speed of light in the medium, just as fast as the signals. It doesn't help much if someone shouts "earthquake" when the roof is halfway to the floor.
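The commenter's timing argument can be checked with back-of-envelope arithmetic. The distance and the propagation speeds below (as fractions of c) are rough illustrative assumptions, not measured values for any particular grid.

```python
# Rough numbers: disturbances on transmission lines travel near the speed
# of light, and so do any warning signals sent over a communications link.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
distance_km = 3_000    # illustrative cross-regional transmission path

fault_ms = distance_km / (0.9 * C_KM_S) * 1000   # disturbance, ~0.9c on lines
signal_ms = distance_km / (0.7 * C_KM_S) * 1000  # warning, ~0.7c in fiber

print(f"disturbance arrives in ~{fault_ms:.1f} ms")
print(f"warning arrives in    ~{signal_ms:.1f} ms")
```

With these rough speeds, a warning sent over fiber actually arrives a few milliseconds *after* the disturbance itself, so any protective action has to be armed locally in advance rather than commanded remotely after detection.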
One issue we should be very concerned about is the dependency of the water system on the electrical grid. In the U.S., at least, the vast majority of water pumping facilities run off electricity, and my impression is that backup generators are pretty rare. (Also pretty expensive, since major water pumping stations use a fair amount of power)
I’ve read that in Britain, at least some of the water pumping is done with directly-connected steam or gas turbines, which sounds like a really, really good idea.
“It would be interesting to find out if your friend Shannon Love does have a brain (as in the song title)”
“Go and tell him he doesn’t have a brain on one of his own posts, and say why you think so, so we can watch him mop the floor with you.” http://zenpundit.com/?p=3074#comment-10816
I see someone else tried to be cute with your song title, but it did seem to me that you were taking a bit of a “straw-man” (you know, no brain) approach to this subject. Even if a bottom-up strategy was used, the system would need to become “Smart” to take advantage of the way we orient ourselves in the world today. So it wasn’t entirely clear, to me, if you were just picking apart “Obama” or had some clear point to make, and I still am not sure.
"Your network is safer if you have Windows, Macs and Linux boxes in it than it is if it's pure Windows."
That statement simply makes my mind explode, a feeble one to begin with, to be sure. I am not sure I would use any of them (or GE for that matter), but mixing them all together seems to be pure crazy. Redundancy is a much better way to go, IMHO. You could use any or all, but they would all have to comply with the same logic, and I think that is the point you are missing.
Logic is what moves you from the past to the future and the Grid has two distinct ways to go, the logic of the wave or the logic of the pulse. The wave is great for top down control, but the pulse is better for bottom up command. To lower the conversation to the level of maximizing profits (logic of G.E.) does little to help in the conversation.
"by making end-user power management 'smart' — by installing computerized meters and other technology that lets end users control their power consumption."
The end user has no control, only command. Of course the consumer has the ultimate control over everything, but that control is exercised through commands. If everyone had an electric car and unplugged from the system at 7:00 AM every morning, this movement would have to be controlled by the power system and would start before you unplugged.
Have you seen the movie "The Matrix"? In a way, the consumer acts as a storage battery, in that our commands are not random but can be speculated on. This speculation is in the form of stored power: it is what we are going to do. Your computerized meters would simply record your commands and tell the system what you are doing, or about to do. If the system were DC, I suspect it would have to be modular, redundant, and smart.
Mob away.