Chicago Boyz

Artificial Intelligence and Human Behavior

    Posted by David Foster on February 28th, 2018

    The WSJ has a story on the use of artificial intelligence to predict potential suicides. The article cites a study indicating that “the traditional approach of predicting suicide,” which includes doctors’ assessments, was only slightly better than random guessing.

    In contrast, when 16,000 patients in Tennessee were analyzed by an AI system, the system was able to predict with 80-90% accuracy whether someone would attempt suicide in the next two years, using indicators such as a history of using antidepressants and injuries with firearms. (It would have been nice to see numbers cited for false negatives vs false positives…also, I wonder whether nearly the same results could have been achieved by use of traditional statistical methods, without appeal to AI technology.)
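
    To make “traditional statistical methods” concrete: the obvious baseline would be a plain logistic regression on the same kinds of indicators. Here is a minimal sketch in Python; the feature names, effect sizes, and data are all invented for illustration and come from nothing in the actual Tennessee study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        n = 16_000
        # Hypothetical binary indicators: [antidepressant_history, firearm_injury]
        X = rng.integers(0, 2, size=(n, 2))
        logits = -4.0 + 1.5 * X[:, 0] + 2.0 * X[:, 1]   # assumed effect sizes
        y = rng.random(n) < 1 / (1 + np.exp(-logits))   # simulated attempt labels

        model = LogisticRegression().fit(X, y)
        pred = model.predict(X)
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        print(f"false positives: {fp}, false negatives: {fn}")

    The point of the sketch is only that such a model reports false positives and false negatives directly, which is exactly the breakdown the article omits.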

    Another approach, which has been tested in Cincinnati schools and clinics, uses an app which records sessions between therapists and patients, and then analyzes linguistic and vocal factors to predict suicide risk.

    Both Apple (Siri) and Facebook have been working to identify potential suicides.  The latter company says that in a single month in fall 2017, its AI system alerted first responders in 100 cases of potential self-harm.  (It doesn’t sound like the system contacts the first responders directly; rather, it prioritizes suspected cases for the human moderators, who then perform a second-level review.)  In one case, a woman in northern Argentina posted “This can’t go on.  From here I say goodbye.”  Within 3 hours, a medical team notified by Facebook reached the woman, and, according to the WSJ article, “saved her life.”

    It seems very likely to me that, in the current climate, attempts will be made to extend this approach from the prediction of self-harm to the prediction of harm to others, in particular mass killings in schools. I’m not sure how successful this will be…the sample size of mass killings is, fortunately, pretty small…but if it is successful, then what might the consequences be, and what principles of privacy, personal autonomy, and institutional responsibility should guide the deployment (if any) of such systems?

    For example, what if an extension of the linguistic and vocal factors analysis mentioned above allowed discussions between students and school guidance counselors to be analyzed to predict the likelihood of major in-school violence?  What should the school, what should law enforcement be allowed to do with this data?

    Does the answer depend on the accuracy of the prediction? What if people identified by the system go on, in 80% of cases, to commit an act of substantial violence (as defined by some criterion), but at the same time there are 10% false positives (people the system says are likely perpetrators, but who actually are not)? What if the numbers are 90% accuracy and only 3% false positives?
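
    One way to make these numbers concrete, reading “80%” as the detection rate and “10%” as the false-positive rate among the innocent: everything turns on the base rate of perpetrators, which is assumed below (one in a thousand) purely for illustration.

        def ppv(sensitivity, false_positive_rate, prevalence):
            """Probability that a flagged person is a true future perpetrator (Bayes' rule)."""
            true_flags = sensitivity * prevalence
            false_flags = false_positive_rate * (1 - prevalence)
            return true_flags / (true_flags + false_flags)

        for sens, fpr in [(0.80, 0.10), (0.90, 0.03)]:
            print(f"sens={sens:.0%}, FPR={fpr:.0%} -> PPV={ppv(sens, fpr, 0.001):.2%}")
        # sens=80%, FPR=10% -> PPV=0.79%
        # sens=90%, FPR=3% -> PPV=2.92%

    At that assumed base rate, even the more accurate system flags about 33 innocent people for every real future perpetrator, which is why the false-positive question matters more than the headline accuracy.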



    13 Responses to “Artificial Intelligence and Human Behavior”

    1. Roy Says:

      A friend had called, telling me to turn on the TV. When I watched in real time as the 2nd airliner hit the World Trade Center tower, I immediately groaned as I recognized I was watching the deaths of probably hundreds of people. My first words to my wife, however, did not focus on that horror. I said, “We will lose freedom.”

      That describes my thoughts about any AI system used to predict behavior.

    2. Mike K Says:

      The military is focusing more and more on self-mutilation as an indicator of unsuitable recruits.

      Years ago, when I was still enthusiastic about electronic medical records, there was an AI program tested in Minnesota that helped Physician Assistants in rural communities do a better job of predicting acute MIs.

      Some of these were just “if-then” logic trees.
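
      A minimal sketch of what such an “if-then” tree might look like in code; the criteria and thresholds here are invented for illustration and are not from any real clinical decision-support system.

          def refer_for_intervention(chest_pain: bool, st_elevation_mm: float,
                                     troponin_ng_ml: float) -> bool:
              """Crude rule tree: should a suspected acute MI go to the city for intervention?"""
              if st_elevation_mm >= 1.0:     # ECG ST-segment elevation
                  return True
              if troponin_ng_ml > 0.04:      # elevated cardiac enzyme
                  return True
              # Borderline labs: refer only if the patient is symptomatic
              return chest_pain and troponin_ng_ml > 0.01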

      AI seems to be a form of magic that can do anything, especially for those who don’t understand it.

      “Any sufficiently advanced technology is indistinguishable from magic.”

      Clarke’s Law #3.

    3. David Foster Says:

      Mike K…What are MIs?

    4. David Foster Says:

      Of course, cops have always used heuristics…conscious or just intuitive…to identify likely suspects.

    5. Mike K Says:

      MI = Myocardial Infarction.

      Otherwise known as a heart attack.

      The program helped PAs and rural practitioners decide which cases needed to go to the city for interventions like TPA or stents.

      TPA = tissue plasminogen activator. It dissolves clots.

    6. Assistant Village Idiot Says:

      Self-harm is distinct from suicide, though they have overlap. False positives are a big issue. Do you want to lock people up against their will when they haven’t done anything dangerous but they have six out of seven signs that they will? You are going to have to build many hospitals.

    7. Brian Says:

      “It would have been nice to see numbers cited for false negatives vs false positives…also, I wonder whether nearly the same results could have been achieved by use of traditional statistical methods, without appeal to AI technology.”
      Yes to both of these. Suicide is pretty rare, even if they picked a pretty at-risk population. So a predictor that says no one will commit suicide will be accurate way more than 80-90% of the time. And something that predicts most suicide attempters will almost certainly have a very large number of false positives. Unfortunately, even among scientists, statistical proficiency is very low, and among the media it’s almost non-existent.
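
      To put numbers on it, assuming (purely for illustration) a 2% attempt rate in the study population:

          base_rate = 0.02                   # assumed prevalence of suicide attempts
          accuracy_of_never = 1 - base_rate  # "no one will attempt" gets every negative right
          print(f"{accuracy_of_never:.0%}")  # 98% accurate, while catching no one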

    8. Grurray Says:

      I’m skeptical that these indicators are as accurate as claimed at predicting something as complex as human behavior. I need to see some evidence.

      It reminds me of the Macdonald triad to predict violent behavior. It seemed like it should work but didn’t hold up to scrutiny.

    9. David Foster Says:

      Even where the assessment/prediction is done entirely by human “experts”, without machine aid, there are big issues. If a panel of 3 psychiatrists thinks a particular student is dangerous…but he has not committed any crimes, or only minor ones…then what follows? How far can you go in restricting someone’s rights based on expert opinion, outside of a legal process in a court of law?

      It’s interesting: in recent years, great restrictions have been placed on a school’s ability to expel someone based on that person’s actual *behavior*, but it seems that we are now likely to allow…even require…action to be taken against students based not on actions but on a diagnosis.

    10. Mike K Says:

      Psychiatrists have been found by the military to be pretty much useless in assessing recruits’ stability.

      The Army still asks for psych consults when they disqualify recruits for self harm but it is pretty much a CYA. None of the other services use psych consults.

      We had a psychotic recruit one day a few months ago, but that is rare, as the recruiters know better.

    11. Grurray Says:

      Regarding predicting crime: Peter Thiel’s Palantir has been working on it. They are secretive about it, probably because it’s so controversial – the targets inevitably turn out to be minorities – and you can never be sure how much the police reports may be skewed by local politics.

      Chicago has been using similar big data programs, but it’s difficult to determine how well they’ve worked. Violent crime has been down lately, but I attribute it to the Trump effect.

    12. Bill Brandt Says:


      GIGO: an old saying from 1960s data processing, meaning “garbage in, garbage out.”

      The most surprising man I ever knew who committed suicide used to service our printer at work. He always seemed cheerful, and then one day when we called for service, his office was in a hush. Apparently he had been playing tennis, took some pain meds and a drink, and then shot himself.

      I think there is a tendency, or more accurately a wish, to believe that “science” can somehow predict human behavior.

      And as has been discussed, let’s say that it can, but the predicted behavior has not yet occurred. Are you supposed to lock someone up for something they have not done?

      In the case of the Argentinian woman whose “life they saved,” what was to prevent her from simply killing herself at a later time?

    13. Bill Brandt Says:

      I might add to this (since we can’t edit): I think meds have a huge, unpredictable side effect in suicides. In the case of the man mentioned above, I believe to this day that it was the alcohol mixed with his pain meds (from tennis) that pushed him over.

      How is an AI program supposed to predict this? I think the human mind is still far more complicated than a programmer can decipher.

      I have heard of more than one person who, upon taking “anti-depressants,” had occasional thoughts of committing suicide. Add to that a mixture of prescribed meds and, to use a thought from combat pilots flying damaged planes, “everyone becomes a test pilot” (flying a plane in a configuration for which no tests were made).
