Chicago Boyz
Automated Systems Need to be Supervised by Humans

    Posted by David Foster on July 17th, 2016

    …and not just any humans.

    Listen to this very-well-done podcast about one of those times when thermonuclear war did not happen: Flirting with the end of the world.

    Automated systems need to be supervised by humans, and not just any humans, as Stanislav Petrov’s story makes clear.  Individuals and bureaucracies that themselves behave in a totally robotic fashion cannot be adequate supervisors of the automation.  See also my post Blood on the tracks for an additional example.

     

    9 Responses to “Automated Systems Need to be Supervised by Humans”

    1. David Foster Says:

      Meant to mention: Petrov podcast was done by a Ricochet member, the Ricochet Member Feed being where I found the link.

    2. David Foster Says:

      The *type* of human supervision that is feasible and appropriate varies from system to system. An automatic elevator is “supervised” by its passengers to the limited extent that if a passenger thinks it is acting weirdly, he can always push the STOP button–and, certainly, the manufacturers and maintainers of these systems collect and analyze safety-related failure data very closely.
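The elevator's STOP button is the simplest possible supervision mechanism: a human override that takes effect no matter what the automation is doing. A minimal sketch of that pattern (all names are illustrative, not from any real elevator controller):

```python
# Hypothetical sketch: a passenger-facing STOP button as the simplest
# form of human supervision over an automated system.

class Elevator:
    def __init__(self):
        self.moving = False
        self.stop_requested = False

    def start(self):
        self.moving = True

    def press_stop(self):
        # The human override: recorded immediately, regardless of what
        # the automation is currently doing.
        self.stop_requested = True

    def step(self):
        # Called every control cycle; the override is checked before
        # any automated motion logic runs.
        if self.stop_requested:
            self.moving = False

elevator = Elevator()
elevator.start()
elevator.press_stop()
elevator.step()
assert not elevator.moving
```

The key design point is that the override is checked first in every control cycle, so the automation cannot "outvote" the human.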

      An advanced airplane autopilot can conduct a landing in conditions of very low ceiling and visibility. There will not really be time for the human pilot to override it during the last few seconds of the landing; however, it can be monitored up to that point. These are extremely expensive and highly-redundant systems, and the manufacturers as well as the FAA and its international counterparts certainly keep close track of problem reports.

      A railroad signal system will usually have to be trusted by the train engineer; he will not typically have sufficient visibility ahead to stop the train if there is another train on the tracks ahead that was not reported by the signaling. For this reason, it is extremely important that the humans in charge of the system act conscientiously and aggressively to maintain the system properly and deal with any problems in it. This was not done in a Washington Metrorail crash that happened more recently than the one I wrote about in the link: in this case, it was apparently well-known that the track circuits that report occupancy by a train ahead were failing intermittently, but the problem was allowed to persist until it killed people.
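The classical answer to unreliable track circuits is fail-safe design: a block whose circuit reading is missing or faulted must be treated as occupied, so the signal fails to red rather than green. A minimal sketch of that principle (the function and readings are illustrative, not from any real signaling system):

```python
# Hypothetical sketch of fail-safe block signaling: an unreadable
# track circuit is treated as an occupied block.

from enum import Enum

class Signal(Enum):
    RED = "red"
    GREEN = "green"

def signal_for_block(circuit_reading):
    """circuit_reading: True = occupied, False = clear, None = circuit fault."""
    if circuit_reading is None:
        # Fail safe: a faulted circuit must be assumed to hide a train.
        return Signal.RED
    return Signal.RED if circuit_reading else Signal.GREEN

assert signal_for_block(True) is Signal.RED
assert signal_for_block(False) is Signal.GREEN
assert signal_for_block(None) is Signal.RED
```

The Metrorail failure mode described above was the opposite of this: intermittently failing circuits were allowed to show clear.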

    3. dearieme Says:

      “humans in charge of the system act conscientiously and aggressively”: as is often the case with this American usage, I have no idea what you mean by “aggressively”. You obviously don’t want the signalmen to attack each other: what do you mean? “Energetically” perhaps? “Enterprisingly”? “Responsibly”? That they should exercise initiative?

    4. David Foster Says:

      ‘Aggressively’ in my usage here means that if you are, say, a Washington Metrorail supervisor and you hear rumors that blocks are showing Green when they ought to be showing Red, you raise hell until there is enough investigation to determine whether the rumors are true or not. And if you can’t get any attention from your management, you contact the NTSB and also whatever agency regulates municipal rail systems (if there is any such–I believe there has been some ambiguity about this).

    5. David Foster Says:

      Washington Metrorail’s problems and ‘learning disability’ continued to the point that they have finally been subjected to safety oversight by the Federal Transit Administration:

      http://greatergreaterwashington.org/post/28397/federal-takeover-of-wmata-safety-is-an-unprecedented-move/

      Seems to me that the Federal Railroad Administration would have been a better choice; the FTA’s main mission looks to be handing out grant money. And a local transit system that runs on rails at high speed is still a railroad and needs railroad-grade operational disciplines.

    6. jaed Says:

      David Foster’s mention of elevators reminds me of a story I heard after 9/11: the elevators in the towers, by design, automatically locked up when the crashes damaged the shafts, so that passengers couldn’t get out of them. The idea being that if there’s a mechanical problem with an elevator, the passengers should be kept caged until building maintenance arrives, because if they try to leave the elevator they may hurt themselves. Problems with this approach in the context of a passenger jet hitting the building the elevator is in are left as an exercise. I’m sure it was well-intentioned.

      But I do remember hearing about the slogan of an elevator company, at a convention that had been held shortly before this, touting their “safety” feature: “They’re Not Getting Out!”. And shuddering.

      Failing to trust people’s common sense, and overriding their decisions with automation, can have terrible consequences.

    7. David Foster Says:

      Jaed….When the Soviets launched Yuri Gagarin into space, there was concern that low gravity might lead to mental problems….so it was set up so that for him to switch from automatic to manual control required him to enter a key into a cypher lock. “We believed that if he was able to get the envelope out of the instruction folder, open it, read the code, and punch the code in, then he was in his right mind and could be trusted to perform manual control.” (Two members of the development team later confessed that they had secretly and against orders informed Gagarin of the code, which was “125.”)

      Source: Boris Chertok
      https://chicagoboyz.net/archives/49961.html
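The cipher lock in that account is a lucidity test doubling as an access gate: entering the right code demonstrates the cosmonaut can read, recall, and act, and so can be trusted with manual control. A minimal sketch (only the code “125” comes from the quoted account; everything else is illustrative):

```python
# Hypothetical sketch of the cipher-lock gate: manual control is
# granted only after the correct code is entered, which doubles as
# evidence that the operator is in his right mind.

MANUAL_CONTROL_CODE = "125"  # the code quoted in the linked account

def grant_manual_control(entered_code):
    # Passing the check demonstrates the operator could retrieve the
    # envelope, read the code, and punch it in correctly.
    return entered_code == MANUAL_CONTROL_CODE

assert grant_manual_control("125") is True
assert grant_manual_control("521") is False
```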

    8. Grurray Says:

      The recent crashes involving Tesla Autopilot show the difficulty of people working with automated systems.

      http://www.consumerreports.org/tesla/tesla-autopilot-too-much-autonomy-too-soon/

      Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer’s control. At 65 mph, that’s between 100 feet and a quarter-mile traveled by a vehicle effectively under no one’s control.
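The distances here follow from distance = speed × time. A quick check of the quoted figures (note that at 65 mph, 17 seconds does work out to roughly a quarter-mile, while 100 feet corresponds to about one second rather than three):

```python
# Distance traveled during a takeover delay: distance = speed * time.

def takeover_distance_ft(speed_mph, reaction_s):
    feet_per_second = speed_mph * 5280 / 3600  # 65 mph ~= 95.3 ft/s
    return feet_per_second * reaction_s

short_d = takeover_distance_ft(65, 3)    # ~286 ft
long_d = takeover_distance_ft(65, 17)    # ~1621 ft, about 0.31 mile

assert 280 < short_d < 290
assert 0.30 < long_d / 5280 < 0.32
```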

      This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in a 2015 monthly report (PDF). “People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?”

      That Yuri Gagarin example may provide some answers. There could be tasks that the user could perform periodically to keep them engaged. Maybe you could style it like a game to keep their interest. The guy who crashed in Florida was supposedly watching a Harry Potter DVD, so he didn’t notice the truck he was speeding into. Maybe you could customize the autopilot to interact with the movie or game or podcast you prefer to play and have it somehow tie into your route.
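The periodic-engagement idea above can be sketched as a simple escalation policy: the autopilot drives normally while the driver's last acknowledgment is recent, prompts when it goes stale, and executes a safe stop if the prompt goes unanswered. All names and thresholds here are invented for illustration, not any real vendor's behavior:

```python
# Hypothetical driver-engagement policy: drive, prompt, or safe-stop
# based on how long it has been since the driver last acknowledged.

def next_action(seconds_since_ack, challenge_interval_s=60, grace_s=10):
    if seconds_since_ack < challenge_interval_s:
        return "drive"
    if seconds_since_ack < challenge_interval_s + grace_s:
        return "prompt_driver"   # e.g. chime, wheel vibration, on-screen task
    return "safe_stop"           # escalate: slow down and pull over

assert next_action(30) == "drive"
assert next_action(65) == "prompt_driver"
assert next_action(75) == "safe_stop"
```

Tying the prompt into whatever media the driver is consuming, as suggested above, would change only what "prompt_driver" does, not the escalation structure.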

    9. David Foster Says:

      When humans and robots communicate