Archive for the 'Tech' Category
Bruce Webster writes about the parallels (and differences) between the design of legislation and the design of software systems.
(via a thread at Bookworm)
…said Richard Nixon, famously. Comes now Joe Biden, with “I am not a geek.” Specifically, in responding to questions about the problems with the Obamacare website and its supporting systems, Biden said:
“Neither (the president) or I are technology geeks and we assumed that it was up and ready to run.”
I don’t think the main problems with this implementation have to do with a lack of geekitude–most likely, there are many quite competent software developers working on this project–but rather with a lack of effective management. (And if there is a shortage of competent developers on the project, well, that’s a management issue, too, isn’t it.)
Real managers, real executives, don’t assume that important things will be ready when they’re supposed to be ready, and they aren’t satisfied with superficial answers to superficial questions, either. Effective leaders develop incisive questioning skills so they can find out what is really going on. They establish open, non-fear-based organizational cultures so that people with concerns feel able to bring them forward. As I noted in my post about Benghazi (excusing failure by pleading incompetence), it is the responsibility of an executive to establish an information and decision-flow architecture…including clear assignment of responsibilities…to ensure that the right things are seen and acted upon by the right people at the right time. Failure to do this…and to maintain and tune the system over time…will predictably result in catastrophes.
Later in the interview with Biden, the Vice President also said he didn’t know the specifics of why the website isn’t working, but that he was told the platform “is fine, but they have to change an awful lot of the inputs.”
“Look, all I know is they talk about 50,000 lines of this and this, I don’t know the technical reasons,” Biden said.
“So I don’t know, I wish I could tell you, that’s why I became a lawyer.”
A pretty flippant response to a serious situation. Slow Joe might not be able to understand the technical reasons for the failure, but he should be able–if he were competent at his job–to investigate and understand the management reasons for the failure.
Some of the questions that come to mind about this debacle are: How were the contractors selected? Why was it decided to have the government (CMS) act as prime contractor, rather than choosing an external company for that role? What do the contracts with the outside contractors actually specify, in terms of deliverables? What remedies are provided in the contracts for failures in delivery? If these remedies are inadequate, why did the government not require that they be more stringent? What coordination vehicles were there between the government group writing and interpreting the Obamacare regulations and the separate group that was attempting to act as prime contractor? Was there a single individual in charge? What project scheduling and tracking methods were employed throughout this effort?
These are not issues that are specific to software technology–the above questions are ones that any good executive, whether his background is in construction or in theater or in wholesale distribution, would understand that he should ask.
A United States President is not elected as a philosopher king; he is elected to run the executive departments of government and to faithfully execute the laws passed by Congress. The members of the present administration have repeatedly demonstrated their utter incompetence to perform these tasks.
An administration that seeks endless expansion of government’s role–but is at the same time completely incompetent at carrying out basic executive tasks–will drive expanding circles of chaos throughout ever-broader reaches of American society and the American economy.
My profession is much in the news at the moment, so I thought I would pass along such insights as I have from my career, mostly from a multibillion-dollar debacle which I and several thousand others worked on for a few years around the turn of the millennium. I will not name my employer, not that anyone with a passing familiarity with me doesn’t know who it is; nor will I name the project, although knowing the employer and the general timeframe will give you that pretty quickly too.
We spent, I believe, $4 billion, and garnered a total of 4,000 customers over the lifetime of the product, which was not aimed at large organizations which would be likely to spend millions on it, but at consumers and small businesses which would spend thousands on it, and that amount spread out over a period of several years. From an economic transparency standpoint, therefore, it would have been better to select 4,000 people at random around the country and cut them checks for $1 million apiece. Also much faster. But that wouldn’t have kept me and lots of others employed, learning whatever it is we learn from a colossally failed project.
So, a few things to keep in mind about a certain spectacularly problematic and topical IT effort:
- Large numbers of reasonably bright and very hard-working people, who have up until that point been creating significant wealth, can unite in a complete flop. Past performance is no guarantee, and all that. Because even reasonably bright, hard-working people can suffer from failures of imagination, tendencies to wishful thinking, and cultural failure in general.
- Morale has got to be rock-bottom for anybody with any degree of self-awareness working on this thing. My relevant moment was around the end of ’99 when it was announced, with great fanfare, at a large (200+ in attendance) meeting to review progress and next steps, that we had gotten a single order through the system. It had taken various people eight hours to finish the order. As of that date, we were projecting that we would be doing 1,600 orders a day in eight months. To get an idea of our actual peak rate, note the abovementioned cumulative figure of 4,000 over the multi-year lifespan of the project.
- Root cause analysis is all very well, but there are probably at least three or four fundamental problems, any one of which would have crippled the effort. As you may infer from the previous bullet point, back-office systems was one of them on that project. Others which were equally problematic included exposure to the software upgrade schedule of an irreplaceable vendor who was not at all beholden to us to produce anything by any particular date, and physical access to certain of our competitors’ facilities, which they were legally required to allow us into exactly two (2) days per year. See also “cultural failure,” above; most of us were residing and working in what is one of the most livable cities in the world in many ways, but Silicon Valley it ain’t.
- Not to overlook the obvious, there is a significant danger that the well-advertised difficulties of the website in question will become a smokescreen for the fundamental contradictions of the legislation itself. The overall program cannot work unless large numbers of people act in a counter-incentived (possibly not a word, but I’m groping for something analogous to “counterintuitive”) fashion which might politely be termed “selfless” – and do so in the near future. What we seem likely to hear, however, is that it would have worked if only certain IT architectural decisions had been better made.
This thing would be a case study for the next couple of decades if it weren’t going to be overshadowed by physically calamitous events, which I frankly expect. In another decade, Gen-X managers and Millennial line workers, inspired by Boomers, all of them much better at things than they are now, “will be in a position to guide the nation, and perhaps the world, across several painful thresholds,” to quote a relevant passage from Strauss and Howe. But getting there is going to be a matter of selection pressures, with plenty of casualties. The day will come when we long for a challenge as easy as reorganizing health care with a deadline a few weeks away.
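Some arithmetic on the figures above helps convey the scale of the gap between projection and reality. The projected order rate and the lifetime customer total come from the text; the multi-year lifespan is my assumption (the post says only "a few years"):

```python
# Projection vs. reality for the project described above. The projected
# rate (1,600 orders/day) and the lifetime total (4,000 customers) come
# from the text; the three-year lifespan is an assumed round number.
projected_per_day = 1600
seconds_per_day = 24 * 3600
print(f"projected pace: one order every "
      f"{seconds_per_day / projected_per_day:.0f} seconds, around the clock")

lifetime_orders = 4000
assumed_years = 3
print(f"actual average: about "
      f"{lifetime_orders / (assumed_years * 365):.1f} orders per day")
```

In other words, the projection called for an order roughly every minute, and the achieved average was a handful per day.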
Posted by Michael Kennedy on 14th October 2013 (All posts by Michael Kennedy)
I hadn’t thought of this situation, only because I didn’t have enough imagination to see that politics trumps all with Obama.
A growing consensus of IT experts, outside and inside the government, have figured out a principal reason why the website for Obamacare’s federally-sponsored insurance exchange is crashing. Healthcare.gov forces you to create an account and enter detailed personal information before you can start shopping. This, in turn, creates a massive traffic bottleneck, as the government verifies your information and decides whether or not you’re eligible for subsidies. HHS bureaucrats knew this would make the website run more slowly. But they were more afraid that letting people see the underlying cost of Obamacare’s insurance plans would scare people away.
This just didn’t occur to me. It should have. After all, what was Benghazi about ?
This political objective—masking the true underlying cost of Obamacare’s insurance plans—far outweighed the operational objective of making the federal website work properly. Think about it the other way around. If the “Affordable Care Act” truly did make health insurance more affordable, there would be no need to hide these prices from the public.
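The bottleneck mechanism the quoted passage describes can be illustrated with some toy queueing arithmetic. Every number below (service time, concurrency, traffic, enrollment rate) is invented for the sketch; nothing here reflects the actual healthcare.gov architecture.

```python
# Toy model: a mandatory up-front verification step caps site throughput.
# All numbers are hypothetical, chosen only to illustrate the shape of
# the problem.

def max_throughput(concurrent_slots: int, service_time_s: float) -> float:
    """Peak users per hour a synchronous verification stage can admit."""
    return concurrent_slots * 3600 / service_time_s

# Suppose eligibility verification takes 90 seconds per applicant and the
# back end can run 200 verifications at once.
capacity = max_throughput(concurrent_slots=200, service_time_s=90)

arrivals_per_hour = 50_000     # hypothetical peak interest in the site
enrolling_fraction = 0.05      # hypothetical share who actually enroll

print(f"verification capacity: {capacity:,.0f} users/hour")
print(f"demand if everyone must verify before browsing: "
      f"{arrivals_per_hour:,}/hour")
print(f"demand if only enrollees verify (browse-first design): "
      f"{arrivals_per_hour * enrolling_fraction:,.0f}/hour")
```

Under these made-up numbers the gatekeeper admits 8,000 users an hour against 50,000 arrivals; deferring verification to checkout would cut the load on the slow stage by a factor of twenty.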
It is just amazing that the politicians know so little about technology (this was the guy with the Blackberry who made fun of McCain) that they did not understand that saying something doesn’t make it happen.
…they do not always achieve mutual understanding. And when misunderstandings do occur, the consequences can range from irritating to expensive to tragic.
On July 6, 2013, Asiana Airlines Flight 214 crashed on final approach to San Francisco International Airport, resulting in over 180 injuries, 3 fatalities, and the loss of the aircraft. While the NTSB report on this accident is not yet out, there are several things that seem to be pretty clear:
–The flight crew believed that airspeed was being controlled by the autothrottle system, a device somewhat analogous to the cruise control of an automobile
–In actuality, the airspeed was not being controlled by the autothrottles
–The airspeed fell below the appropriate value, and the airplane dipped below the proper glidepath and mushed into the seawall
It is not yet totally clear why the autothrottle system was not controlling the airspeed when the captain and first officer believed that it was doing so. It is possible that the autothrottle mechanism failed, even that it failed in such a way that its failure was not annunciated. It is possible that an autothrottle disconnect button (one on each power lever) was inadvertently pressed and the disconnection not noticed. But what seems likely in the opinion of several knowledgeable observers is that the captain and FO selected a combination of control settings that they believed would cause the autothrottle to take control–but that this setting was in fact not one that would cause autothrottle activation…in other words, that the model of aircraft systems in the minds of the flight crew was different from the actual design model of the autothrottle and its related systems.
Whatever happened in the case of Asiana Flight 214…and all opinions about what happened with the autothrottles must be regarded as only speculative at this point…there have been numerous cases–in aviation, in medical equipment, and in the maritime industry–in which an automated control system and its human users interacted in a way that either did or could have led to very malign results. In his book Taming HAL, Asaf Degani describes several such cases, and searches for general patterns and for approaches to minimize such occurrences in the future.
Degani discusses human interface problems that he has observed in common consumer devices such as clocks, TV remote controls, and VCRs, and goes into depth on several incidents involving safety-critical interface failures. Some of these were:
The airplane that broke the speed limit. This was another autothrottle-related incident, albeit one in which the consequences were much less severe than Asiana 214. The airplane was climbing to its initial assigned altitude of 11,000 feet, under an autopilot mode (Vertical Navigation) in which speed was calculated by the flight management system for optimum efficiency–in this case, 300 knots. Air traffic control then directed that the flight slow to 240 knots for separation from traffic ahead. The copilot dialed this number into the flight control panel, overriding the FMS-calculated number. At 11,000 feet, the autopilot leveled the plane, switched itself into ALTITUDE HOLD mode, and maintained the 240-knot speed setting. Everything was fine.
The controller then directed a further climb to 14,000 feet. The copilot re-engaged VERTICAL NAVIGATION mode and put in the new altitude setting. The engines increased power, the nose pitched up, and the airplane began to climb. But just a little later, the captain observed that the airplane wasn’t only climbing–it was also speeding up, and had reached almost 300 knots, thereby violating an ATC speed restriction.
What happened here? Degani refers to events of this sort as “automation surprises.” The copilot was apparently thinking that the speed he had dialed in to override the flight management system would continue to be in force when he re-enabled the vertical navigation climb mode. But that wasn’t the way the system was actually designed. Selecting Vertical Navigation mode re-initialized the source of the airspeed command to be the FMS, which was still calling for a 300-knot Best Efficiency speed.
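The mode interaction behind this automation surprise can be reduced to a few lines of state-machine logic. This is a deliberately simplified sketch (the class and attribute names are invented, and real autoflight logic is vastly more complex), but it reproduces the surprise:

```python
# A toy model of the "automation surprise" described above. Names and
# structure are invented for illustration only.

class Autopilot:
    def __init__(self, fms_speed: int):
        self.fms_speed = fms_speed      # FMS best-efficiency speed
        self.mode = "VNAV"
        self.speed_target = fms_speed   # speed source follows the FMS

    def pilot_selects_speed(self, knots: int):
        # Pilot dials a speed into the flight control panel,
        # overriding the FMS-calculated value.
        self.speed_target = knots

    def engage_altitude_hold(self):
        self.mode = "ALT HOLD"          # keeps the currently selected speed

    def engage_vnav(self):
        # Re-engaging VNAV re-initializes the speed source to the FMS
        # value: this is the step that surprised the crew.
        self.mode = "VNAV"
        self.speed_target = self.fms_speed

ap = Autopilot(fms_speed=300)
ap.pilot_selects_speed(240)      # ATC: slow to 240 knots
ap.engage_altitude_hold()        # level at 11,000 ft; still 240 knots
ap.engage_vnav()                 # cleared to climb to 14,000 ft...
print(ap.speed_target)           # 300, not the 240 the copilot expected
```

The trap sits in `engage_vnav()`: it silently discards the pilot’s earlier override, and nothing in the interface calls attention to the change.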
Degani says that the pilots were well trained and understood how the speed reference value actually worked…but that the unintuitive nature of the interface caused this knowledge to be effectively forgotten at the moment when the additional climb was requested. He draws an analogy with the user of a cordless phone, who picks up the ringing phone and pushes the TALK button…a seemingly logical action that actually turns off the phone and disconnects whoever is calling.
The blood-pressure monitor that didn’t monitor. A surgery patient was under anesthesia; as is standard practice, his blood pressure was being monitored by an electronic device. The patient’s blood pressure showed a high reading, and the surgeon noted profuse bleeding. The anesthesiologists set the blood-pressure monitor to measure more frequently. Periodically, they glanced back at the monitor’s display, noting that it still showed an elevated blood pressure, and actively treated the hypertension–as they believed it to be–with drugs that dilated blood vessels.
But actually, the patient’s blood pressure was very low. The alarmingly-high blood pressure values shown in the display were actually constant…the machine was displaying the exact same value every time they looked at it, because after the measurement-interval reset, it had never made another measurement.
What happened here? The blood-pressure monitor has three modes: MANUAL (in which the pressure is measured immediately when the START button is pressed), AUTOMATIC (in which pressure is measured repeatedly at the selected interval), and IDLE. When the interval is changed by the anesthesiologist, the mode is set to IDLE, even if the monitor was already running in AUTOMATIC. To actually resume automatic monitoring, it is necessary to push START. In this case, the pushing of the START button was omitted, and the machine’s display did not provide adequate cues for the anesthesiologists to notice their mistake.
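A minimal sketch of that mode logic (the class and method names are invented, and real monitors are more elaborate) shows how easily the trap springs:

```python
# Toy model of the blood-pressure monitor's mode logic as described by
# Degani. Names are invented for illustration.

class BPMonitor:
    def __init__(self):
        self.mode = "IDLE"
        self.interval_min = 5
        self.last_reading = None

    def start(self):
        self.mode = "AUTOMATIC"

    def set_interval(self, minutes: int):
        # Changing the interval silently drops the machine back to IDLE,
        # even if it was measuring automatically: the trap in this design.
        self.interval_min = minutes
        self.mode = "IDLE"

    def tick(self, true_pressure: int):
        # Called once per interval; a new measurement happens only in
        # AUTOMATIC mode. The display always shows the last stored value.
        if self.mode == "AUTOMATIC":
            self.last_reading = true_pressure
        return self.last_reading

m = BPMonitor()
m.start()
m.tick(true_pressure=180)        # measured; display shows 180
m.set_interval(2)                # interval shortened... mode is now IDLE
print(m.tick(true_pressure=60))  # 180: a stale reading; patient is at 60
```

Every glance at the display shows a plausible (and alarming) number, so nothing prompts the users to suspect that measurement has stopped.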
Critiquing the machine’s design, Degani notes that “The kind of change they sought is not very different from changing the temperature setting in your toaster oven…On almost every oven, you simply grab the temperature knob and rotate it from 300 Fahrenheit to 450, and that’s it. You are not expected to tell the system that you want it to stay in OVEN mode–you know that it will.”
Bruce Springsteen, 1983:
They’re closing down the textile mill across the railroad tracks
Foreman says these jobs are going boys and they ain’t coming back to your hometown
In 1712, Thomas Newcomen erected a steam engine of his own design near Dudley, in the West Midlands of England, thereby kicking off the age of steam. (Yes, this would have made a better post last year, to mark a round 300-year anniversary, but better late than never…)
We were told in the 5th grade that the steam engine had been invented by James Watt after noticing the way that the steam pressure in a teapot could cause the lid to lift a little. A nice story, but (a) James Watt did not invent the steam engine, and (b) early steam engines did not work the way that the teapot story would suggest.
In ancient Greece there were some experiments with the use of steam power to create mechanical motion; thereafter nothing significant happened in this field until the late 1600s, when Thomas Savery invented a device for raising water by steam: it was intended to address the growing problem of removing water from mines. Savery’s invention was conceptually elegant, with no moving parts other than the valves: unfortunately, it could not handle a water lift of more than about 30 feet, which was far from sufficient for the very deep mines that were then becoming increasingly common.
Newcomen’s engine filled a cylinder with low-pressure steam, which was then abruptly cooled by the injection of a water jet. This created a partial vacuum, which pulled the piston down with great force–these were called “atmospheric” engines, because the direct motive force came from air pressure, with the role of the steam being simply to create the vacuum when condensed. After the piston reached the bottom of the cylinder, it would be pulled upwards by a counterweight, and the cycle would repeat. (See animation here.) Conceptually simple, but modern reconstructors have found it quite difficult to get all the details right and build an engine that will actually work.
These engines were extremely inefficient, real coal hogs, requiring about 25 pounds of coal per horsepower per hour. They were employed primarily for water removal at coal mines, where coal was by definition readily available and was relatively cheap. But as the cotton milling industry grew, and good water-power sites to power the machinery became increasingly scarce, Newcomen engines were also employed for that service. For example, in 1783 a cotton mill–complete with a 30-foot waterwheel–was constructed at Shudehill, near Manchester…which seemed odd given that there was no large stream or river there to drive it. The mill entrepreneurs built two storage ponds at different levels, with the waterwheel in between them, and installed a Newcomen engine to recycle the water continuously. The engine was very large–with a cylinder 64 inches in diameter and a stroke of more than 7 feet–and consumed five tons of coal per day.
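To get a feel for why such a machine was worth its appetite, the working force on the piston of that 64-inch engine can be estimated. The 7 psi effective pressure differential below is my assumption (Newcomen condensers produced only a partial vacuum, well short of the full 14.7 psi of the atmosphere), so treat this as an order-of-magnitude sketch:

```python
import math

# Rough force on the piston of the 64-inch Manchester mill engine
# described above. The 7 psi effective pressure differential is an
# assumed figure for an imperfect condensate vacuum.
diameter_in = 64.0
effective_dp_psi = 7.0

area_sq_in = math.pi * (diameter_in / 2) ** 2
force_lbf = area_sq_in * effective_dp_psi
print(f"piston area: {area_sq_in:,.0f} sq in")
print(f"downward force per stroke: {force_lbf:,.0f} lbf "
      f"(~{force_lbf / 2000:.0f} tons)")
```

Even with a mediocre vacuum, each stroke delivered on the order of eleven tons of force.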
Despite their tremendous coal consumption and their high first cost, a considerable number of these engines were installed, enough that someone in 1789 referred to the Newcomen and Savery engines in the Manchester area as “common old smoaking engines.” The alternative to the Newcomen engine described above would have been the use of actual horses–probably at least 100 of them, if my guesstimate of 40 horsepower for this engine is correct. These early engines resembled the mainframe computers of the early 1950s, in that they were bulky, expensive, resource-intensive, and limited in their fields of practical applicability…but, within those fields, absolutely invaluable.
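That 40-horsepower guesstimate can be sanity-checked against the figures already given: 25 pounds of coal per horsepower-hour and five tons of coal a day. Whether those were long or short tons, and how many hours a day the engine actually ran, are my assumptions:

```python
# Cross-checking the text's numbers. Assumptions: five long tons of coal
# per day (2,240 lb each) and the stated 25 lb per horsepower-hour.
coal_per_day_lb = 5 * 2240
coal_rate_lb_per_hp_hr = 25.0

for hours_run in (12, 24):
    hp = coal_per_day_lb / (coal_rate_lb_per_hp_hr * hours_run)
    print(f"running {hours_run} h/day implies about {hp:.0f} hp")
```

A roughly 12-hour working day lands within a few horsepower of the 40-hp figure, so the guesstimate is at least internally consistent with the coal consumption.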
In April 1945 the US Army’s 27th Infantry Division launched an attack against the Kakazu Ridge position held by the Imperial Japanese Army on Okinawa with the 193rd Tank Battalion’s thirty tanks and self-propelled assault guns, plus attached armored flamethrowers from the 713th Flame Tank Battalion. When the battle was over, 22 of the 30 armored fighting vehicles had been destroyed in a coordinated ambush by Japanese anti-tank guns, artillery, mortars and suicide close-assault teams. Among the dead was the battalion commander of the 193rd, on whom blame was laid for attacking without American infantry in close support. This battle is referenced in almost every narrative account of Okinawa as proof of the tougher defenses American soldiers and marines would face in an invasion of Japan.
It turns out that while this particular narrative has a great deal of truth in it, it isn’t the whole truth, and it hides the most important one. In a photographic-negative image of British Prime Minister Winston Churchill’s comment that in war, the truth must have a bodyguard of lies, this narrative has a huge lie buried in a bodyguard of truth.
The most important truth of this battle was that American troops suffered a technological surprise. The Japanese were listening to the SCR-300, SCR-500 and SCR-600 series frequency modulated (FM) radios of American infantry, tanks and artillery forward observers at Kakazu Ridge (and in other battles throughout the Pacific in 1945) with the Japanese Type 94 (1934) Mark 6 walkie-talkie radio that was issued to every Japanese infantry battalion.
Saw this comic book cover displayed at the Computer History Museum last summer:
…had to search around the Internet to find the story. It seems that a friend, knowing that Lois will never get anywhere with Superman, tricks her into appearing on a TV program in which the UNIVAC computer is used to find ideal matches for people. When she is called on stage, Lois agrees only because she thinks it might make a good story for the newspaper.
How does it turn out? You can read the whole story here.
The Second World War demonstrated the devastation that could be caused by even conventional bombing…and was capped by the nuclear destruction of Hiroshima and Nagasaki. With the intensification of the Cold War and the first Soviet atomic bomb test…and the Communist aggressiveness demonstrated by the outbreak of war in Korea…air defense of the United States became an issue of very high priority.
During World War II, the British had been successful with their innovative network of radar stations linked to command centers at which the positions of friendly and enemy aircraft were plotted continuously and orders issued to fighter squadrons and antiaircraft gun sites. In the postwar era, though, the increased speeds of combat aircraft, combined with the utter devastation that could result from a single failed intercept–one plane, one bomb, one city–drove the view that something better than manual plotting would be required.
Although digital computers were still very much in their infancy in 1953, the solution to the air defense problem chosen in that year was a computer-based system to be known as SAGE…the Semi-Automatic Ground Environment. Real-time information from multiple radar sites flowed in digital form to the computers at the SAGE Direction Centers. The computers tracked the targets, friendly, unknown, and enemy, and displayed them on dozens of video displays at each Center. Battle-management personnel at these displays made the determination of which enemy targets should be engaged with what priority, and what friendly aircraft should engage them, and the computers then calculated the optimum intercept courses. For certain fighter aircraft types, the interception commands could be relayed directly via datalink, obviating the necessity for voice communication. SAGE Direction Centers also had control over high-speed BOMARC antiaircraft missiles…these carried small nuclear weapons intended to ensure that a near miss would not allow enemy bombers to escape.
At the heart of each Direction Center was a pair of computers, AN/FSQ-7, duplexed for reliability. Each pair contained fifty thousand vacuum tubes, covered almost an acre of floor space, and consumed about 3 megawatts of power. (Some sources cite the 50,000-tube number as being for each computer of the pair–either way, it’s a LOT of vacuum tubes.) Here’s a fairly well-done recent article about the SAGE project. Note, however, the author’s comment about “thousands of people all over North America constantly scanning their radar screens for Soviet attacks, all hankering for an opportunity to launch a radio-controlled nuke.” I wonder: does this guy really believe that the airmen at the SAGE scopes were really looking forward to a nuclear war, or did he just think that’s the sort of thing that would play well with his editors and his audience?
Developing the hardware required for SAGE was a challenge; developing the software even more so. IBM’s Tom Watson Jr explained the issue: “In those days computing was typically done in what was called batch mode. This meant that you would collect your data first, feed it into the machine second, then sit back for a little while until the answer came out. You could think of the batch processor as a high diver at a circus–each performance involves a lengthy drum roll in preparation, a very fast dive, and then a splash. But the SAGE system was supposed to keep track of a large air defense picture that was changing every instant. That meant it had to take a constant stream of new radar information and digest it continually in what is called “real time.” So a SAGE computer was more like a juggler who has to keep a half dozen balls in the air, constantly throwing aside old balls as his assistant tosses him new ones from every direction.”
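Watson’s diver-versus-juggler contrast maps directly onto two programming styles. Here is a toy sketch (the data shapes and names are invented) of batch processing versus a real-time loop that must keep an air picture current while reports keep arriving:

```python
# Illustrative contrast between batch and real-time processing, in the
# spirit of Watson's analogy. Everything here is invented for the sketch.
from collections import deque

def batch_process(reports):
    # "Drum roll, dive, splash": run once over a complete, collected
    # dataset and produce an answer at the end.
    return {r["track"]: r["position"] for r in reports}

def realtime_loop(incoming: deque, picture: dict, budget: int):
    # Consume at most `budget` reports per cycle; the air picture must
    # stay current even while new reports keep arriving.
    for _ in range(min(budget, len(incoming))):
        r = incoming.popleft()
        picture[r["track"]] = r["position"]
    return picture

# A stream of radar reports: three tracks, each updated twice.
stream = deque({"track": i % 3, "position": (i, i)} for i in range(6))
picture = {}
while stream:
    picture = realtime_loop(stream, picture, budget=2)
print(picture)   # latest known position for each track
```

The batch function is fine when the world holds still; SAGE needed the second style, where the answer is never "done", only as fresh as the last cycle.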
Last month, I mentioned GE’s 3-D printing contests. The company says it has already received hundreds of submissions for one of these contests, the Jet Engine Bracket Challenge, and has posted some of them as a slideshow. Presumably, there is some sort of structural logic (at least in the opinions of the submitters) behind the weird appearance of some of these designs.
The top 10 submissions will be fabricated and load-tested. The objective is to create a bracket that is at least 30% lighter than the one currently in use.
More broadly, GE seems to be attempting to establish a network of useful contributors among the “maker” community of hobbyists and small-scale enterprises.
There is a lot of discussion going on in the press about the decline of the PC. Per this article by Gartner:
Worldwide PC shipments dropped to 76 million units in the second quarter of 2013, a 10.9 percent decrease from the same period last year, according to preliminary results by Gartner, Inc. This marks the fifth consecutive quarter of declining shipments, which is the longest duration of decline in the PC market’s history.
From my own experience, my PC (which I use for work and applications that I’ve traditionally run on Windows) just gets more and more annoying by the day.
My house has a MacBook Pro, two PCs, an iPhone, an iPad, and a BlackBerry (still). The problem is that the PC seems to take forever to set up and run, with myriad antivirus updates, system upgrades, and the like, and a generally long and painful startup. The perception of the problem is made even worse in that if you don’t run it every day, the updates pile up and it takes even LONGER to get started on the machine.
Meanwhile you just walk up to the MacBook and turn it on, and it’s up. My MacBook is great when you are on wifi, with smooth typing and a great experience. It also connects to my Samsung TV through Thunderbolt for watching the web up there, which is another benefit.
In 2008, Michael and Xochi Birch sold Bebo, which is some kind of social networking company, to AOL—for 850 million dollars.
Things didn’t go too well, and in 2010, AOL sold Bebo to a private equity firm for 10 million dollars.
Things continued to not go so well, and Michael Birch has bought the company back–for 1 million dollars. He doesn’t know exactly what he’s going to do with Bebo now, but plans to have fun trying to reinvent it.
I think what often happens in such situations is this: if a company is so clueless about its market that it fails to either develop internally the product for which there is a critical emerging need…or to acquire the product externally before the prices go out of sight…then it winds up paying an exorbitant price. The price will be one that makes sense economically only if the acquiring company is able to obtain truly stellar results on its new property…but typically, the same cluelessness that led to the product shortfall in the first place will also lead to an inability to successfully integrate or even effectively manage the acquisition.
iTunes was always crap. I run the Windows version. It has inconsistent menus, disappearing menus, a different user interface on each page, a sync button here, an important checkbox there — overall an outstanding example of poor UI design.
My iTunes got corrupted and for years the text labels on most of the buttons and menu items were invisible. Some kind of font issue, I guess. I tried uninstalling, reinstalling, fiddling with Windows fonts, nothing helped. Fortunately, I remembered where the sync button was. That was all I needed, most of the time.
Then the computer that I had iTunes installed on conked out. I fixed the computer and installed a new hard drive and reinstalled Win 7 and iTunes. Works great but now it turns out that syncing doesn’t really mean syncing. I’m not sure what it means. All I know is that after I do it the file libraries on my iPod and iTunes don’t match. You can get them to match but only at the cost of deleting all of the files on your device. You cannot download files from your device to iTunes and add them to any new files you’ve acquired. It’s obvious why this is the case: Apple wants to keep people from busting the DRM on purchased files by downloading them to unauthorized computers. But Apple’s system makes life difficult for anyone who has a significant file library and replaces or upgrades his computer. There are workarounds but they are mostly a PITA for the user, and particularly for the non-tech-savvy user who replaces his hard drive or computer. This is a case where the customer doesn’t come first (though, to be fair, Apple is far from the only company that does things in this way).
John Barnes asks: Are we as a society putting too much emphasis on abstract categorization rather than practical application? The so-called Flynn Effect says that average IQs worldwide rise by about 3 points per decade, but:
Stuart Brown has described younger engineers at advanced research facilities who are “good at filling in bubbles” but don’t seem to be able to make a machine work. Senior engineers lament that the next generation overvalues its high test scores and undervalues the things that get the job done. Fine arts teachers tailor assignments to students who want to express simpler ideas with easier tools rather than acquire more open-ended and sophisticated skills.
A smug and depressing post on “innovation” by a French bureaucrat. Reminded me of my old post Leaving a Trillion on the Table (although “trillion” probably considerably understated the real amount of potential wealth left on the table in this matter.)
Speaking of 3-D printing, GE is running a couple of interesting contests. First, there is the GE jet engine bracket challenge–participants submit a design taking advantage of additive manufacturing capabilities to meet all performance criteria while minimizing mass. Submitted designs will be evaluated by simulation: the top ten will then be fabricated and subjected to actual loads. There is also the 3-D printing production quest: high precision and advanced materials. This one is focused on making parts requiring extreme precision with complex geometries, especially for healthcare applications–entrants are going to need production as well as design capabilities, and in addition to the $50K prizes there may be an opportunity to become a GE supplier or otherwise “collaborate” with the company.
Unlikely animal friendships (photos)
John Hawkins and friends select the 20 hottest conservative women in the new media. (photos, obviously)
I’m currently reading 1913: In Search of the World Before the Great War, by Charles Emmerson. The book describes the social and political climates then existing not only in the major European countries, but also in other places around the world, ranging from Australia to Canada to China.
In his description of Jerusalem–then under control of the Ottoman Empire but with a population including residents and pilgrims from many countries–the author says:
Different countries even had their own postal services, circumventing the Ottoman telegraph service, which was widely thought to be a nest of spies reporting communications back to Constantinople.
Fast forward 100 years….In the wake of the reports concerning NSA surveillance programs, there is widespread concern…among non-Americans as well as among citizens of this country…that the American telecommunications and information-processing services may be “a nest of spies” reporting communications back to Washington…and from there, possibly, to other shadowy recipients. These concerns may have serious economic ramifications.
See, for example, Forbes–NSA Surveillance Threatens US Competitiveness:
Non-US customers of any US business will immediately evaluate their exposure to these new risks and look for alternatives. European, Canadian, and Australian tech companies will profit from this. Competitors in those regions will offer alternatives that will also draw US customers away from the compromised US services.
Washington Post–European Leaders Raise Concerns on US Surveillance
“The German business community is on high alert,” said Volker Perthes, director of the German Institute for International and Security Affairs. “It’s not just about listening in on some bearded guy from Ulm who bought a ticket to Afghanistan and makes conversation with his friends in Waziristan. . . . The suspicion in large parts of the business sector is that Americans would also be interested in our patent applications.”
Popular Mechanics–Why the NSA Prism Program Could Kill US Tech Companies:
Think for a second about just how the U.S. economy has changed in the last 40 years. While a large percentage of our economy is still based in manufacturing, some of the most ascendant U.S. companies since the 1970s have been in the information technology sector…
Let’s say you ran a business in (Japan, India, Australia, Mexico, or Brazil) that relied upon information services from a U.S. company. Don’t these revelations make using such a service a business liability?
See also Business Insider–Did Obama Just Destroy the US Internet Industry?
I don’t think these revelations, even if they are fully validated, will really “kill” US tech companies or “destroy” the US Internet industry…the headlines are a bit over the top, as headlines often are. I do believe, however, that the American information technology industries will be significantly harmed, with implications for the entire US economy…something that we really cannot afford at this particular point in time.
I think it is obvious that the US government needs to conduct anti-terrorist surveillance programs, which must encompass telecommunications networks…the idea that NSA should be abolished, as some have suggested in recent days, is to my mind very unwise. But non-Americans as well as Americans have every right to be concerned about the scope of what has apparently been going on, and the apparent lack of proper controls, and furthermore, to raise questions about how the information gathered is actually being used.
The government of Sweden didn’t do a very good job of protecting its citizens and their property from the rampant rioting that took place in late May.
Government agents did, however, fulfill their duty of issuing parking tickets…to burned-out cars.
I’m reminded of an old SF story, “Dumb Waiter,” written by Walter Miller, who is best known for his novel A Canticle for Leibowitz. This story, which dates from 1952, lacks the philosophical depth of Canticle, but seems quite relevant to the events in Sweden.
In the story, cities have become fully automated—municipal services are provided by robots linked to a central computer system. But war erupted–featuring radiological attacks–killing part of the population and driving the rest out of the cities. In the city that is the focus of the story, there are no people left, but “Central” and its subunits are working fine, doing what they were programmed to do many years earlier.
The radiation levels have died down, and the city is now habitable, from a radiological standpoint–but the behavior of the automated systems, although designed with benign intent, makes entry to the city very dangerous.
Mitch, the protagonist, resolves to go into the city, somehow get control of Central, and reprogram it so that it will be an asset rather than a hazard for future human occupants of the city. The first thing he sees is a robot cop, giving a ticket to a robot car with no human occupants. Shortly thereafter, he himself is stopped for jaywalking by another robot cop, and given a summons to appear in traffic court. He also observes a municipal robot mailing out batches of delinquent utility-bill notices to customers who no longer exist.
Eventually Mitch establishes contact with Central and warns it that a group of men are planning to blow it up in order to have unhindered access to the city for looting…that the war is over, and Central needs to revise its behavior to compensate for the changed situation. The response is that he himself is taken away for interrogation. He hears a woman crying in an adjacent cell—she has been arrested by a robot cop for some reason or other, and her baby was separated from her and is being held in the city nursery.
General Electric posted a cool video of jet engine fuel nozzles being fabricated–in one piece–with a 3-D printing process. Extensive data collection during the process is done for quality control purposes (they use the term “big data,” of which I am not overly fond.)
Welders have monitored weld pools for centuries with shaded glasses, listening to the “bacon sizzle” of the molten metal, and later using infrared sensors, cameras, and pyrometers. GE is collecting all this data, as well as information from sensors checking the mechanical stability of the 3-D printing machines and the laser beams, and feeding it into algorithms that reduce terabytes of raw data to megabytes of useful information.
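The general shape of that data reduction–collapsing a torrent of raw sensor samples into a compact stream of summary statistics–can be sketched in a few lines. This is a minimal illustration, not GE’s actual pipeline; the function name, window scheme, and simulated readings are all my own assumptions:

```python
# Hypothetical sketch of sensor-stream reduction: collapse raw melt-pool
# readings into per-window summary statistics, turning "terabytes of raw
# data" into a much smaller set of numbers an algorithm can act on.
import statistics

def summarize_windows(samples, window_size=1000):
    """Reduce raw readings to (mean, stdev, min, max) per fixed-size window."""
    summaries = []
    for i in range(0, len(samples), window_size):
        window = samples[i:i + window_size]
        if len(window) < 2:
            continue  # stdev needs at least two points
        summaries.append({
            "mean": statistics.fmean(window),
            "stdev": statistics.stdev(window),
            "min": min(window),
            "max": max(window),
        })
    return summaries

# Example: 10,000 simulated pyrometer readings shrink to 10 summary rows.
readings = [1500.0 + (i % 7) * 0.5 for i in range(10_000)]
print(len(summarize_windows(readings)))  # 10 windows
```

A real system would of course run this incrementally over a live stream and feed the summaries to quality-control logic, but the compression principle is the same.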
It seems that certain skills, such as understanding what is happening to molten metal via direct sensory perception, are becoming less important in this manufacturing process…other skills, surely, are becoming more important. It would be both interesting and worthwhile for someone to perform a multi-decade analysis of the actual skill mix required to produce a particular product. For example, how does the set of skills that built the J-47 jet engine in the early 1950s compare with the set of skills for building the engines being produced today? Millions of words and trillions of pixels have been devoted…by academics, journalists, consultants, educators, and even the occasional practitioner…to talking about “jobs of the future,” but a high proportion of this writing and talking is of the hand-waving variety. It would be nice to see some serious historical (and quantitative) comparative research.
More on 3-D printing in today’s WSJ. Note that the Ford and Mattel examples are for 3-D printing of prototypes, not of actual customer products.
Bookworm discovered and embedded a video by Professor Anthony Esolen, in which he challenges the common belief that the Middle Ages were a dark and dreary era with few redeeming attributes. Book adds thoughts of her own, and there is a good comment thread on the post.
Pseudodionysius posted the same video at Ricochet, resulting in an extensive discussion thread…192 comments so far…which includes significant pushback against the Esolen thesis. The thread became pretty contentious…unpleasantly so, at points, but it includes some worthwhile discussion and useful links, especially on the comparison of Medieval with Classical technologies.
Here’s some color footage of London in the 1920s
More old color film: New York City in 1939
Following a scary mammogram experience, a GE researcher is working on the development of high-resolution MRI technology
When Best Buy first opened, I used to spend hours looking at computers, electronics, stereos, and gadgets. I haven’t been in a Best Buy for years, except to briefly pick something up, since much of the “sizzle” seems to have gone out of that business. While in London, however, I stopped in the enormous Selfridges store, which has an incredible electronics boutique in the basement, and I had a great time looking through all they had to offer.
This television is an LG 84″ model with 4K resolution–roughly 4,000 horizontal pixels (3840×2160), versus the 1920×1080 you probably have on your TV. Wikipedia has an article about 4K here, and from a bit of research it appears that most movies are already shot in 4K, and ESPN and many other television shows are as well. I was a bit suspicious about programming, because for the demo they seemed to have filmed their own (gorgeous) content with attractive women, flowers, and other items that looked fantastic close up. It was about $25,000.
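The “4K versus 1080” comparison is easier to see in raw pixel counts. Assuming the common consumer UHD resolution of 3840×2160 (marketing “4K”) against 1920×1080 Full HD:

```python
# Quick arithmetic on the "4K vs. 1080" comparison. Consumer "4K" TVs are
# typically 3840x2160 (UHD); "1080" refers to 1920x1080 (Full HD), where
# the number counts vertical lines rather than horizontal pixels.
uhd = 3840 * 2160        # 8,294,400 pixels
full_hd = 1920 * 1080    # 2,073,600 pixels
print(uhd / full_hd)     # 4.0 -- four times the total pixels
```

So the jump is fourfold in pixel count, even though the headline number only roughly doubles.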
This television was another Samsung (no surprise), and it was amazingly thin–about as thick as your thumb. Apparently the electronics (connectors, etc…) are in the base of the TV or located remotely.
Yesterday, Glenn Reynolds linked some comments by Senator Dick Durbin, who said he favors a “media shield law”…but isn’t sure if such a law should protect people who are bloggers and/or tweeters, rather than being employees of Associated Press, Fox News, etc.
“Are these people journalists and entitled to constitutional protection?” asked Durbin. “We need to ask 21st-century questions about a provision that was written over 200 years ago.”
As it happened, last night I was reading Alexis de Tocqueville, who (as usual) has some relevant things to say:
In France the press combines a twofold centralization; almost all its power is centered in the same spot and, so to speak, in the same hands, for its organs are far from numerous. The influence upon a skeptical nation of a public press thus constituted must be almost unbounded. It is an enemy with whom a government may sign an occasional truce, but which it is difficult to resist for any length of time.
Neither of these kinds of centralization exists in America. The United States has no metropolis; the intelligence and the power of the people are disseminated through all the parts of this vast country, and instead of radiating from a common point they cross each other in every direction; the Americans have nowhere established any central direction of opinion, any more than of the conduct of affairs. This difference arises from local circumstances and not from human power; but it is owing to the laws of the Union that there are no licenses to be granted to printers, no securities demanded from editors, as in France, and no stamp duty, as in France and England. The consequence is that nothing is easier than to set up a newspaper, as a small number of subscribers suffices to defray the expenses.
Hence the number of periodical and semi-periodical publications in the United States is almost incredibly large. The most enlightened Americans attribute the little influence of the press to this excessive dissemination of its power; and it is an axiom of political science in that country that the only way to neutralize the effect of the public journals is to multiply their number…The governments of Europe seem to treat the press with the courtesy which the knights of old showed to their opponents; having found from their own experience that centralization is a powerful weapon, they have furnished their enemies with it in order doubtless to have more glory for overcoming them.
In America there is scarcely a hamlet that has not its newspaper. It may readily be imagined that neither discipline nor unity of action can be established among so many combatants, and each one consequently fights under his own standard. All the political journals of the United States are, indeed, arrayed on the side of the administration or against it; but they attack and defend it in a thousand different ways.
Durbin referred to the First Amendment as “a provision that was written over 200 years ago,” apparently implying that the passage of time makes it less relevant today. If he were better-educated and more intelligent, he would understand that the press environment of the Revolutionary era and the first half of the 1800s, marked by decentralization and low start-up costs, is more similar to today’s Internet-driven media environment–marked by the same factors–than either is to the era that was marked by a few huge quasi-monopolistic media organizations.
When the Founders referred to “freedom of the press,” what exactly did they mean? I think there is a very strong case to be made (see detailed legal analysis by Eugene Volokh) that they meant freedom of the printing press (and, implicitly, of its technological successors) rather than offering a grant of special privilege to entities within a particular industry. Indeed, what would a grant of special protection to a “press” industry have even meant in an age when any citizen could buy a simple printing press and immediately begin publishing pamphlets or newspapers, without any need for huge capital investments, AP wire feeds, dozens of employees, etc?
I agree with Glenn Reynolds that “We need protections for journalism, not journalists.” The idea of special civil-liberties protections only for a particular industry, with membership in that industry inevitably to be certified by the powers-that-be, is highly dangerous, and takes us back to an environment of licenses to be granted to printers, securities demanded from editors, as in France, and stamp duty, as in France and England.
I notice that the people who want to use “technology” as an excuse for the erosion of constitutional protections are generally people whose ignorance of technology is exceeded only by their ignorance of history.
Lord, Thou hast made this world below the shadow of a dream,
An’, taught by time, I tak’ it so – exceptin’ always Steam.
From coupler-flange to spindle-guide I see Thy Hand, O God -
Predestination in the stride o’ yon connectin’-rod.
John Calvin might ha’ forged the same – enorrmous, certain, slow -
Ay, wrought it in the furnace-flame – my “Institutio.”
I cannot get my sleep to-night; old bones are hard to please;
I’ll stand the middle watch up here – alone wi’ God an’ these
My engines, after ninety days o’ race an’ rack an’ strain
Through all the seas of all Thy world, slam-bangin’ home again.
Slam-bang too much – they knock a wee – the crosshead-gibs are loose;
But thirty thousand mile o’ sea has gied them fair excuse….
Fine, clear an’ dark – a full-draught breeze, wi’ Ushant out o’ sight,
An’ Ferguson relievin’ Hay. Old girl, ye’ll walk to-night!
His wife’s at Plymouth…. Seventy-One-Two-Three since he began -
Three turns for Mistress Ferguson…. an’ who’s to blame the man?
There’s none at any port for me, by drivin’ fast or slow,
Since Elsie Campbell went to Thee, Lord, thirty years ago.
(The year the ‘Sarah Sands’ was burned. Oh roads we used to tread,
Fra’ Maryhill to Pollokshaws – fra’ Govan to Parkhead!)
Not but they’re ceevil on the Board. Ye’ll hear Sir Kenneth say:
“Good morrn, McAndrew! Back again? An’ how’s your bilge to-day?”
Miscallin’ technicalities but handin’ me my chair
To drink Madeira wi’ three Earls – the auld Fleet Engineer,
That started as a boiler-whelp – when steam and he were low.
I mind the time we used to serve a broken pipe wi’ tow.
The whole poem is here.
Back in 2004, one of the Ben & Jerry’s cofounders put up an animation using stacks of cookies to demonstrate that the US spends way too little on education relative to its spending on defense. The page showed $35 billion worth of cookies for K-12 education as opposed to $400 billion for defense.
Actually, the US in that year was spending almost $500 billion in government money on K-12 education. The $35 billion looks about right–for Federal government spending only. Most educational funding in the US occurs, of course, at the county, state, and municipal levels. The phrase “Federal budget” does occur somewhere in the presentation. But the manner in which the numbers are presented–in the form of a single bar graph–implied that the $35B for education was directly comparable to the $400B for defense. The casual or not-very-knowledgeable reader would be likely to look at this page and draw very incorrect conclusions about the relative levels of defense and educational spending in the United States.
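The arithmetic makes the distortion plain. Using the round figures above (in billions of dollars):

```python
# How the chart's framing misleads, using the round figures from the post
# (in $ billions, for the year in question).
federal_k12 = 35      # Federal K-12 spending only -- what the chart showed
total_k12 = 500       # approx. total government K-12 spending, all levels
defense = 400

# The ratio the cookie chart implies:
print(defense / federal_k12)   # ~11.4 -- defense appears to dwarf education
# The apples-to-apples ratio:
print(defense / total_k12)     # 0.8 -- total education spending exceeds defense
```

An 11-to-1 gap versus education actually exceeding defense: same underlying data, opposite impression, depending entirely on whether you count all levels of government.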
I was reminded of this misleading presentation of data by another bad infographic, this one appearing in the United Airlines in-flight magazine. The piece, titled “Geek Tragedy,” showed the US ranking 27th among developed nations in the proportion of STEM (science, technology, engineering, and math) bachelor’s degrees, asserted that the US economy would benefit by $75 trillion (over the next 80 years) if we could match Canada’s math proficiency level…and went on to compare “Annual US Federal Investment in STEM Education Programs” ($3 billion) with “Amount Americans Spent on Beer in 2011” ($96 billion).