…they do not always achieve mutual understanding. And when misunderstandings do occur, the consequences can range from irritating to expensive to tragic.
On July 6, 2013, Asiana Airlines Flight 214 crashed on final approach to San Francisco International Airport, resulting in over 180 injuries, 3 fatalities, and the loss of the aircraft. While the NTSB report on this accident is not yet out, several things seem to be pretty clear:
–The flight crew believed that airspeed was being controlled by the autothrottle system, a device somewhat analogous to the cruise control of an automobile
–In actuality, the airspeed was not being controlled by the autothrottles
–The airspeed fell below the appropriate value, and the airplane dipped below the proper glidepath and mushed into the seawall
It is not yet totally clear why the autothrottle system was not controlling the airspeed when the captain and first officer believed that it was doing so. It is possible that the autothrottle mechanism failed, even that it failed in such a way that its failure was not annunciated. It is possible that an autothrottle disconnect button (one on each power lever) was inadvertently pressed and the disconnection not noticed. But what seems likely, in the opinion of several knowledgeable observers, is that the captain and FO selected a combination of control settings that they believed would cause the autothrottle to take control–but that this combination was in fact not one that would cause autothrottle activation…in other words, that the model of aircraft systems in the minds of the flight crew was different from the actual design model of the autothrottle and its related systems.
Whatever happened in the case of Asiana Flight 214…and all opinions about what happened with the autothrottles must be regarded as only speculative at this point…there have been numerous cases–in aviation, in medical equipment, and in the maritime industry–in which an automated control system and its human users interacted in a way that either did or could have led to very malign results. In his book Taming HAL, Asaf Degani describes several such cases, and searches for general patterns and for approaches to minimize such occurrences in the future.
Degani discusses human interface problems that he has observed in common consumer devices such as clocks, TV remote controls, and VCRs, and goes into depth on several incidents involving safety-critical interface failures. Some of these were:
The airplane that broke the speed limit. This was another autothrottle-related incident, albeit one in which the consequences were much less severe than Asiana 214. The airplane was climbing to its initial assigned altitude of 11,000 feet, under an autopilot mode (Vertical Navigation) in which speed was calculated by the flight management system for optimum efficiency–in this case, 300 knots. Air traffic control then directed that the flight slow to 240 knots for separation from traffic ahead. The copilot dialed this number into the flight control panel, overriding the FMS-calculated number. At 11,000 feet, the autopilot leveled the plane, switched itself into ALTITUDE HOLD mode, and maintained the 240-knot speed setting. Everything was fine.
The controller then directed a further climb to 14,000 feet. The copilot re-engaged VERTICAL NAVIGATION mode and put in the new altitude setting. The engines increased power, the nose pitched up, and the airplane began to climb. But just a little bit later, the captain observed that the airplane wasn’t only climbing–it was also speeding up, and had reached almost 300 knots, thereby violating an ATC speed restriction.
What happened here? Degani refers to events of this sort as “automation surprises.” The copilot was apparently thinking that the speed he had dialed in to override the flight management system would continue to be in force when he re-enabled the vertical navigation climb mode. But that wasn’t the way the system was actually designed. Selecting Vertical Navigation mode re-initialized the source of the airspeed command to be the FMS, which was still calling for a 300-knot Best Efficiency speed.
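To make the mode trap concrete, here is a minimal sketch in Python of the kind of speed-target logic Degani describes (the class and attribute names are invented for illustration; this is not the actual avionics software):

```python
class Autopilot:
    """Toy model of the speed-target behavior described above (names are hypothetical)."""

    def __init__(self, fms_speed):
        self.fms_speed = fms_speed          # FMS "best efficiency" speed, e.g. 300 kt
        self.speed_target = fms_speed       # speed currently being held
        self.mode = "VNAV"

    def pilot_speed_override(self, knots):
        # Pilot dials a speed into the flight control panel.
        self.speed_target = knots

    def select_altitude_hold(self):
        self.mode = "ALT HOLD"              # keeps whatever speed_target is current

    def select_vnav(self):
        # The surprise: re-engaging VNAV re-initializes the speed source to the FMS,
        # quietly discarding the pilot's earlier override.
        self.mode = "VNAV"
        self.speed_target = self.fms_speed


ap = Autopilot(fms_speed=300)
ap.pilot_speed_override(240)    # ATC: "slow to 240 knots"
ap.select_altitude_hold()       # level at 11,000 ft, still 240 kt -- fine
ap.select_vnav()                # cleared to 14,000 ft, climb resumed
print(ap.speed_target)          # 300 -- the airplane accelerates past the restriction
```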
Degani says that the pilots were well trained and understood how the speed reference value actually worked…but that the unintuitive nature of the interface caused this knowledge to be effectively forgotten at the moment when the additional climb was requested. He draws an analogy with the user of a cordless phone, who picks up the ringing phone and pushes the TALK button…a seemingly logical action that actually turns off the phone and disconnects whoever is calling.
The blood-pressure monitor that didn’t monitor. A surgery patient was under anesthesia; as is standard practice, his blood pressure was being monitored by an electronic device. The patient’s blood pressure showed a high reading, and the surgeon noted profuse bleeding. The anesthesiologists set the blood-pressure monitor to measure more frequently. Periodically, they glanced back at the monitor’s display, noting that it still showed an elevated blood pressure, and actively treating the hypertension–as they believed it was–with drugs that dilated blood vessels.
But actually, the patient’s blood pressure was very low. The alarmingly-high blood pressure values shown in the display were actually constant…the machine was displaying the exact same value every time they looked at it, because after the measurement-interval reset, it had never made another measurement.
What happened here? The blood-pressure monitor has three modes: MANUAL (in which the pressure is measured immediately when the START button is pressed), AUTOMATIC (in which pressure is measured repeatedly at the selected interval), and IDLE. When the interval is changed by the anesthesiologist, the mode is set to IDLE, even if the monitor was already running in AUTOMATIC. To actually cause the automatic monitoring to occur, it is necessary to push START. In this case, the pushing of the START button was omitted, and the machine’s display did not provide adequate cues for the anesthesiologists to notice their mistake.
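A rough sketch of that mode logic (again with invented names, not the real device’s firmware) shows how easily the trap springs: changing the interval silently drops the monitor into IDLE, and nothing is measured again until START is pressed.

```python
class BPMonitor:
    """Toy model of the monitor's mode behavior as described above (names hypothetical)."""

    def __init__(self):
        self.mode = "IDLE"
        self.interval_min = 5
        self.last_reading = None

    def press_start(self):
        self.mode = "AUTOMATIC"

    def set_interval(self, minutes):
        # The trap: adjusting the interval silently drops the device back to IDLE,
        # even if it was already measuring automatically.
        self.interval_min = minutes
        self.mode = "IDLE"

    def tick(self, true_pressure):
        # Called once per measurement interval; only measures when in AUTOMATIC mode.
        if self.mode == "AUTOMATIC":
            self.last_reading = true_pressure
        return self.last_reading        # the display keeps showing the last value regardless


m = BPMonitor()
m.press_start()
print(m.tick(180))   # 180 -- a genuinely high reading
m.set_interval(1)    # anesthesiologist asks for more frequent readings...
print(m.tick(60))    # 180 -- display unchanged; no new measurement was ever taken
print(m.tick(55))    # 180 -- still the stale value
```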
Critiquing the machine’s design, Degani notes that “The kind of change they sought is not very different from changing the temperature setting in your toaster oven…On almost every oven, you simply grab the temperature knob and rotate it from 300 Fahrenheit to 450, and that’s it. You are not expected to tell the system that you want it to stay in OVEN mode–you know that it will.”
The ship that got confused. How does a ship run aground when it is equipped with GPS and LORAN and radar and a depth sounder…and operating in well-marked and charted coastal waters? This actually happened to the cruise ship Royal Majesty.
The first event in the chain leading to this grounding was that the antenna cable for the GPS came loose. The GPS, now with no source of true position from the satellites, defaulted as designed into Dead Reckoning mode, in which changes in position are calculated based on compass heading and ship’s speed. A DR position will initially be quite accurate, but the accuracy will degrade over time since there is no way to compensate for the effects of current, wind, and waves.
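For readers who haven’t used it, the arithmetic of a dead-reckoning update is simple. The sketch below uses a flat-earth approximation, good enough over short distances; the point is that nothing in it knows about current, wind, or waves, so any set and drift accumulates as error hour after hour.

```python
import math

def dr_update(lat_deg, lon_deg, heading_deg, speed_kt, hours):
    """Advance an estimated position by heading and speed (flat-earth approximation)."""
    dist_nm = speed_kt * hours
    dlat = dist_nm * math.cos(math.radians(heading_deg)) / 60.0            # 1 deg of latitude ~ 60 nm
    dlon = dist_nm * math.sin(math.radians(heading_deg)) / 60.0 / math.cos(math.radians(lat_deg))
    return lat_deg + dlat, lon_deg + dlon

# The estimate is exact at the moment of the last true fix; after that, uncompensated
# current and wind simply accumulate as error, hour after hour.
pos = (41.0, -69.5)                       # hypothetical starting fix
for _ in range(28):                       # 28 hours, roughly the interval in this case
    pos = dr_update(*pos, heading_deg=270, speed_kt=14, hours=1)
print(pos)
```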
Dead reckoning is a perfectly respectable means of navigation, and many successful voyages have been completed with its assistance. But the officers of those ships knew they were using DR, and hence could make allowances for the inherent inaccuracy in their estimated positions. The officers of the Royal Majesty had no such knowledge.
When the ship’s GPS unit lost its signal and went into DR mode, it chirped an alarm…which nobody heard…and displayed the messages dr and sol (for solution) on its panel, along with the calculated and changing latitude and longitude, which were in much larger type. The crew continued manually plotting the displayed latitude and longitude, and no one noticed the unobtrusive system status messages.
In addition to the manual plotting, the position from the GPS drove a moving-map display, which was overlaid on the radar scope. Position updates from the GPS to the map unit were sent periodically, and when the GPS went into Dead Reckoning mode, it flagged those messages accordingly…instead of sending a message like “GPS, 12:10:34, latitude, longitude, valid”, it sent “GPS, 1:11:46, latitude, longitude, invalid.” The assumption was that whatever device was receiving these messages would be programmed to look at the valid/invalid flag, and to notify the operator to use caution in the latter case…because the accuracy of the position would be continuously degrading. The radar and map system on this ship, however, was not programmed to look at the relevant part of the message. Hence, the people doing the manual plotting did not notice the tiny messages warning that DR mode had been entered, and the people looking at the radar/map unit were given no indication whatsoever that anything was abnormal.
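The check the map unit failed to make would have been only a few lines of code. Here is a minimal sketch, using a made-up message format modeled on the example Degani quotes:

```python
def parse_position(message):
    """Parse a 'source, time, lat, lon, validity' string (format modeled on the example above)."""
    source, time, lat, lon, validity = [field.strip() for field in message.split(",")]
    return {
        "source": source,
        "time": time,
        "lat": float(lat),
        "lon": float(lon),
        "valid": validity.lower() == "valid",
    }

def update_map(message, alert):
    fix = parse_position(message)
    if not fix["valid"]:
        # This is the check the radar/map unit evidently never made: a dead-reckoned
        # position should be plotted with a warning, not silently treated as a true fix.
        alert("Position is dead-reckoned -- accuracy degrading, use caution")
    return fix

update_map("GPS, 12:10:34, 41.65, -69.62, valid", alert=print)
update_map("GPS, 13:11:46, 41.63, -69.91, invalid", alert=print)   # triggers the warning
```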
Twenty-eight hours after the loss of the GPS signal, the Royal Majesty was 14 miles southwest of her intended route. At 6:45, the chief officer saw a blip on the radar, 7 nautical miles away from the ship. The blip was co-located on the map with the BA buoy, which marked the entrance to the Boston traffic lanes. All seemed well. But actually, the buoy that was displayed on the radar–and was visually sighted though not properly identified–was not the BA buoy, but rather a totally different marker known as the Asia Rip buoy.
Degani summarizes:
As the gray sky turned black veil, the phosphorus-lit radar map with its neat lines and digital indication seemed clearer and more inviting than the dark world outside. As part of a sophisticated integrated bridge system, the radar map had everything–from a crisp radar picture, to ship position, buoy renderings, and up to the last bit of data anyone could want–until it seemed that the entire world lived and moved transparently, inside that little green screen. Using this compelling display, the second officer was piloting a phantom ship on an electronic lie, and nobody called the bluff.
The bluff was finally called by reality itself, at 10 PM, when the ship jerked to the left with a grinding noise. The captain ran to the radar map, extended the radar range setting to 24 miles, and saw an unmistakable crescent-shaped mass of land: Nantucket Island. “Hard right,” he ordered.
But it was too late. The ship hit the Rose and Crown Shoal, hard, and could not be backed off. “He (the captain) couldn’t fathom how the ocean had betrayed him”…he ran to the chart table and plotted the position from the LORAN-C receiver, which was entirely independent of the GPS. The true position of the ship now differed by 17 miles from the GPS-derived position that the officers had been assuming was correct.
No one was injured, but repairs to the hull damage were estimated at $2 million.
Diverging lines never intersect. In 1983, Korean Air Lines Flight 007…far off course…entered Soviet airspace and was shot down, resulting in the loss of all 269 passengers and crew aboard. Many theories have been promulgated about why this plane was so far away from where it was supposed to be; Degani places the blame firmly on the human-automation interface.
While still over Alaska, the flight crew put the autopilot in HEADING mode, with a selected compass heading of 245 degrees. Their intent was to intercept the international airway R-20, with the autopilot then following this track under the direction of the aircraft’s inertial navigation system. Degani believes that the crew, after becoming established on the 245 degree heading, probably switched the autopilot to INS (Inertial Navigation System) mode.
But the way this autopilot actually worked was that selecting INS would only arm the inertial mode: the airplane would not actually transition from compass heading to inertial control until it was within a specified distance (7.5 nautical miles) of the desired inertial course. This makes sense: you may want to fly a particular heading to reach the inertial course, and not actually turn onto that course until you intercept it, or at least get close to it.
But a heading of 245 degrees, flown from the position the airplane was evidently at when it was selected, will never intercept the R-20 track. According to Degani, the inertial mode remained ARMED…but never engaged…throughout the entire remainder of the flight. The inertial navigation system itself did have an indicator that would show the pilot the active mode: the letters INS appeared in amber when the system was armed, green when it was engaged. But the autopilot itself had no indicator showing that the rotary switch setting to INS had not yet taken effect–and on this particular flight, would never take effect.
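A toy version of that arm-versus-engage logic (invented function and variable names, not the actual autopilot software) makes the trap clear: if the selected heading never brings the airplane within the capture distance of the inertial track, the mode stays armed forever and the autopilot just keeps flying the compass heading.

```python
def autopilot_lateral_mode(cross_track_error_nm, ins_selected, capture_nm=7.5):
    """Return the active lateral mode: INS engages only within the capture distance."""
    if ins_selected and cross_track_error_nm <= capture_nm:
        return "INS ENGAGED"        # steer along the inertial route
    if ins_selected:
        return "INS ARMED"          # still flying the selected compass heading
    return "HEADING"

# On a heading that diverges from the desired track, the cross-track error only grows,
# so the mode never leaves INS ARMED -- and nothing on the autopilot panel itself says so.
for error_nm in (12, 20, 60, 150, 365):
    print(error_nm, autopilot_lateral_mode(error_nm, ins_selected=True))
```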
Other factors contributed to the tragedy: the presence of a military RC-135 plane in the vicinity, which increased Soviet suspicion; the darkness, which made visual identification impossible; and the fact that the Soviet aircraft which fired warning shots had no tracer ammunition, so that those shots were invisible to the KAL 007 crew. But the primary cause was that the airplane was where it was not supposed to be.
In this particular case, I find it hard to accept Degani’s explanation: it seems incredible that during a long flight with low workload the crew did not attempt to verify their position by plotting the INS position manually, or by taking radio cross-bearings…which would not have been very accurate at the ranges involved, but would still have most likely shown that something was wrong. Still, in the appendix Degani gives other examples of airplanes that wound up way off course as a result of similar autopilot confusion.
One common thread in these incidents is that they could have been prevented, or at least limited in the seriousness of their outcomes, by cross-checking. If the Asiana pilots had been keeping an eye on the airspeed indicator, they would surely have realized the autothrottle problem in time to recover. The captain of the flight that broke the speed limit did observe an excessively high airspeed and take corrective action. If the anesthesiologists in the medical example had noticed that the blood pressure readings were absolutely constant, they would have caught the problem much earlier. If the officers of the Royal Majesty had cross-checked their GPS position with the LORAN position, OR periodically set the radar to a longer range setting, they could have avoided the grounding.
But when people use an automated system on a regular basis, and find it reliable, it is easy for them to assume that it can always be counted on, and for the cross-checking to be omitted. Indeed, in some instances cross-checking may not be operationally feasible–is it really reasonable to expect that an anesthesiologist will notice that a blood pressure reading is NOT changing, if he has other duties that draw his attention away from the monitor?
To maximize the safety of human-machine integrated systems, it is essential (a) that the creators of the system consciously design the interface to avoid “automation surprises,” (b) that the humans who will operate the system understand its functioning in depth, including the interaction between separate components (like the GPS system and the radar/map plotter) and the behavior of those modes which are likely to be used only on rare occasions, and (c) that cross-checking, wherever possible, be conducted religiously.
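One way to make cross-checking less dependent on human discipline is to have the machine do the comparison itself and complain loudly when independent sources disagree. A sketch, assuming two independent position sources such as GPS and LORAN (illustrative only, not a real bridge system):

```python
import math

def positions_disagree(fix_a, fix_b, threshold_nm=2.0):
    """Flag when two independent position sources diverge by more than a threshold."""
    lat_a, lon_a = fix_a
    lat_b, lon_b = fix_b
    mean_lat = math.radians((lat_a + lat_b) / 2)
    dlat_nm = (lat_a - lat_b) * 60.0
    dlon_nm = (lon_a - lon_b) * 60.0 * math.cos(mean_lat)
    return math.hypot(dlat_nm, dlon_nm) > threshold_nm

# Had the bridge system compared the GPS and LORAN positions continuously, a growing
# discrepancy would have been flagged long before it reached 17 miles.
print(positions_disagree((41.20, -70.00), (41.18, -69.98)))   # False -- the sources agree
print(positions_disagree((41.20, -70.00), (41.05, -70.30)))   # True  -- time to investigate
```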
In a future post, I’ll talk about a fairly recent accident that involved not only the human-automation interface, but also the nature of decision-making in large organizations.
A similar situation occurred with the Air France crash over the Atlantic a few years ago. The plane was equipped with unlinked “joysticks” instead of linked control columns.
http://www.youtube.com/watch?v=kERSSRJant0&sns=em
A corollary to cross checking in operation is that designers should test complex systems using as many different test protocols and human testers as practicable. Different individuals often do things in different ways. Some people, like the Spanish guy who found an iOS 7 security flaw, excel at finding systemic vulnerabilities. It’s good to keep Murphy’s Law in mind.
Grurray…interesting video…thanks!
I think the absence of linking the controls in the Airbus was a questionable decision. On the other hand, in the Air France case, the pilot not flying (the guy in the left seat, in this case) could have determined what was going on with the stick-back situation by observing his attitude indicator, which was apparently working fine. Still, the physical linkage allows the use of sensory cues as well as visual cues, which is surely a benefit.
Some of the cases cited in Degani’s book would probably not have occurred had the systems been designed with an old-style analog approach. With the blood pressure monitor, for example, if there were an “Interval” knob that functioned by adjusting a mechanical or electronic-analog timer, it seems very unlikely that the turning of this knob would have been arranged to put the system in an idle mode…it would function similarly to the toaster oven he mentions in the analogy.
One of course could achieve the same thing with a digital computer-based system, but the temptation to do otherwise seems to be stronger.
One time about 30 years ago, early in the laparoscopy trend in surgery, I noticed that the CO2 insufflation pump was not stopping when the intraabdominal pressure was higher than planned. We quickly disconnected and the anesthetist had noticed difficulty in expanding the patient’s lungs at almost the same moment. After fiddling with the pump, we eventually resumed the procedure and finished safely. I still don’t know what happened. A few years later I was asked to review a malpractice case with a similar story but they did not notice anything until the patient died of a CO2 embolus to her lungs. We never, as far as I can recall, figured out the flaw in the pump but the malpractice was not noticing a rock hard abdomen and inability to inflate the lungs.
A friend of mine was an anesthesiologist who did the first case in a brand new hospital tower with a brand new operating suite. The patient was a niece of a famous movie actress. The nitrous oxide and oxygen lines had been switched in the ceiling and nobody had tested them. The child died and everybody was devastated. Technology will kill you if given even a slight chance.
The nitrous oxide and oxygen lines had been switched in the ceiling and nobody had tested them.
Same thing happened in the office of the guy who removed my wisdom teeth.
I once had a work debacle because of a bug in a program I had written. It was a simple BASIC program that ran on a DOS PC. I did not know enough to include a routine to zero variables after calculating output. There was never a problem when I ran it, because I always closed and restarted the program before running it again. Then someone else ran the program and didn’t close it first, and the output was way off. It didn’t kill anyone but it cost some money.
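The failure class is easy to reproduce in any language; here is a toy sketch (not the original BASIC program) in which an accumulator is never cleared between runs:

```python
# Toy reproduction of the bug class described above: a module-level accumulator
# that is never reset between runs of the calculation.
totals = []

def run_report(values):
    totals.extend(values)          # bug: old data is still here on the second call
    return sum(totals)

print(run_report([10, 20, 30]))    # 60  -- correct on a fresh start
print(run_report([10, 20, 30]))    # 120 -- wrong: the program was not restarted first
```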
I once read that Eddie Rickenbacker personally checked every machine gun cartridge carried in his plane, as this was the only way to prevent jams. There is something to be said for being meticulous and obsessive in certain areas.
MK…re the blood-pressure monitor example in the book…is it likely that an anesthesiologist would be able to detect the complete absence of change in the readings, or does this seem unrealistic?
David – a friend of mine is a retired 777 captain and he was saying that the ILS – the Instrument Landing System – was inoperative that day on that runway – and (if I remember this correctly) the autothrottle system was linked to the ILS.
Of course they should have known by the airspeed that they were dangerously slow – I believe an entire crop of airline pilots are used to computers doing most of their work and are in trouble when they have to actually fly the plane.
As Grurray already mentioned, the Airbus joysticks (they don’t have a control yoke) are unlinked – the captain can be pulling the stick back and the First Officer pushing all the way forward, and the computer will average the 2 inputs.
That is a critical design flaw IMO – they can’t both be right.
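A toy illustration of why summed-and-limited (roughly averaged) inputs from unlinked sticks can hide a disagreement – this is a simplification, not the real Airbus control law:

```python
def blended_pitch_command(captain_stick, fo_stick, limit=1.0):
    """Sum the two sidestick inputs and clip to the allowed range (a simplification
    of the dual-input behavior described above; the real control law is more complex)."""
    return max(-limit, min(limit, captain_stick + fo_stick))

# Captain pushes full forward (-1.0), First Officer pulls full back (+1.0):
# the blended command is zero, and with unlinked sticks neither pilot can feel
# that the other is commanding the opposite.
print(blended_pitch_command(-1.0, +1.0))   # 0.0
```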
In the Air France FO’s defense, he had no outside reference and no airspeed indication – still, hindsight (and armchair quarterbacking) would suggest to first do nothing and try to analyze the situation – if you were gaining speed terribly (in a dive) you would have outside stimuli such as hearing the wind rushing, I would think.
In his case he did the worst thing he could have done – pull up.
Bill…airline pilot Karlene Petitt believes that aviation safety would benefit if pilots spent more time flying gliders, and that indeed as much as half of the required 1500 hours of experience (for captains and first officers) should be allowed and encouraged to be obtained by flying gliders.
http://karlenepetitt.blogspot.com/2013/07/the-future-of-aviation.html
In the case of the Airbus, linked control columns would have provided a simple redundancy safeguard to make up for inevitable mistakes.
Another technique might be to periodically simulate small failures or game-like challenges as a type of drill just to keep the user (here the pilot) alert and on his toes. The complexity of these systems is removing meaningful human interaction, and as a result attention and reactions suffer.
Grurray…”Another technique might be to periodically simulate small failures or game-like challenges as a type of drill just to keep the user (here the pilot) alert and on his toes”
Failure simulation is a regular thing with recurrent simulator training. However, it seems that the simulator training for airline pilots has emphasized stall *prevention* but has not included stall *recovery*, at least as far as high-altitude stalls go. One reason for this is that the altitude loss involved in a stall recovery can be dangerous if there is someone already there at the lower altitude…(but not nearly as dangerous as failing to recover from the stall at all!) Another is the belief that there is really no reason why stalls should be allowed to occur, given stick-pushers, envelope-protection systems, etc. And finally, getting the simulators to accurately model post-stall behavior would have required flight-test data that has not been available.
http://www.aviationweek.com/Article.aspx?id=/article-xml/AW_02_25_2013_p39-548386.xml
Note that the FAA’s current solution track for this problem involves aerodynamic simulation to predict the airplane’s post-stall behavior, rather than actual flight test data:
“Bihrle last year won a contract from the FAA to create a “representative” simulation model for a large transport aircraft to be used for stall training, but one that would not require flight test data in the stall and post-stall regime. That information can be too costly to obtain from airframers, assuming that it exists and they would be willing to provide it.”
I bet the airframe people would be “willing to provide it” if it were required for certification! Whether this would be a reasonable requirement given the costs involved, the low incidence of such accidents, and the possibility of a “good-enough” solution based purely on modeling, I’m not sure.
David, I agree that the models and simulations will never completely describe real-world conditions. Even if they’re based on real-life data, there’s always some anomaly that could happen that is unforeseen and experientially off the charts. Similar to the two-hundred-year flood that hits when we only have 150 years of data to measure from.
What I mean is small simulations in flight, not for practice, but to keep the pilot’s head in the game.
As your previous link about gliding states, pilots are losing their skills due to automation. They aren’t necessarily losing flying skills. They’re losing attention, reaction, and problem-solving skills.
@David – interesting thought and no doubt applicable – knowing the true basics of flight. Ironically it was his knowledge of glider flying that enabled the captain of the Air Canada flight known as the Gimli Glider to make a safe landing without any power.
http://www.smithsonianchannel.com/sc/web/series/802/air-disasters/138554/gimli-glider
These days I think there are too many people going into the airlines who have forgotten the basics of flying – that fellow who had the iced-up wings in Buffalo pulled up – again, the worst thing you could do – but my friend the retired 777 captain said that they never should have certificated that aircraft for this country in the first place.
Best training is the military – but that source is drying up.
Here’s an interesting story: Apple Maps was directing automobile drivers to **cross a runway** to get to the terminal building at the Fairbanks (Alaska) airport.
The Map error isn’t good, but much worse is the apparent absence of a reasonable physical barrier to keep people from intentionally or inadvertently making a wrong turn and finding themselves face-to-face with an airplane. And what is really disturbing is the willingness of people to follow the directions of an automated system even when it should be obvious that those directions are not making sense.
http://www.evolvingexcellence.com/blog/2013/09/of-maps-erp-and-basic-thinking.html
@David – I have a story almost as funny. I am trying to find an address in an old railroad town – Newcastle – and just as my car is on top of the main rail line – ready to cross – the VOICE says “YOU HAVE ARRIVED AT YOUR DESTINATION” – right on top of the tracks.
“MK…re the blood-pressure monitor example in the book…is it likely that an anesthesiologist would be able to detect the complete absence of change in the readings, or does this seem unrealistic?”
Anybody who relies on automated systems is asking for trouble. Anesthesiologists have gotten into lots of trouble over this, but so have others.
Anesthesia can be pretty boring. I once had an interesting experience. We had a very cavalier anesthetist at a small hospital. I was doing a procedure on an elderly nursing home patient with a cold leg. These are often emboli from either cardiac arrhythmias or arterial disease. I had scheduled the case to be done under local anesthesia because of the patient’s poor condition. After the assistant surgeon and I had scrubbed and entered the operating room, I found that the patient was asleep and there were NO monitor leads in place. I angrily told the anesthetist that he had directly contradicted my orders. He said, “Oh, it’s no big deal. Just do the case.”
I looked at the blank monitor and said, “Is the patient alive?” I checked his carotid pulse and there was none. The anesthetist didn’t even realize the patient had arrested. The patient died and I stopped going to that hospital for years. Later the Medical Board came around asking about that anesthetist. He had been seen peeing in a cup and injecting it into the patient. Crazy. I don’t know if they ever found him.
@Mike K
I did some anaesthetic training with an old school anaesthetist who taught me to look at the patient and not the monitor. This philosophy saved my arse on more than one occasion. I was once involved in transferring an intubated patient to another hospital. He was hooked up to this damn ECG monitor which wouldn’t register that the monitoring leads were off; rather, it displayed a pattern rather similar to ventricular fibrillation. The patient was quite sweaty and the leads would be falling off all the time. Several hours were spent waiting for transfer and we got habituated into equating the monitor pattern with disconnected leads. Anyway, I notice the familiar pattern, no big deal. Then I look at the patient. He is blue. A quick zap and all is OK, but it scared the shit out of me. We got rid of the monitor after that.
I think a fair amount of the blame for the loss of Air France 447 can be apportioned to the design of the Airbus cockpit. This transcript, of the last few seconds of the flight, shows that the pilot was not even aware that he was pulling back on the joystick! But most diabolical of all, just as the stressed and confused pilots work out that they should be pushing the nose down, the plane tells them to pull up.
Here is an interesting discussion on the topic.
“I did some anaesthetic training with an old school anaesthetist who taught me to look at the patient and not the monitor. This philosophy saved my arse on more than one occasion. ”
On the other hand, this idiot put an elderly sick patient to sleep against orders with no monitor at all. Looking at the patient didn’t do him, or the patient, much good.
The American Society of Anesthesiologists started requiring CO2 monitors about 25 years ago, when they became available. The result was a huge drop, not only in intubation errors, but in anesthesia malpractice rates.
I’m an “old school” surgeon but I didn’t stop learning. Today we had a nice demonstration of SimMan, a complex tool for teaching heart and lung diagnosis plus cardiac life support classes. I’ve already signed up my students for a session with it.