Artificial Intelligence and Human Behavior

WSJ has a story on the use of artificial intelligence to predict potential suicides.  The article cites a study indicating that “the traditional approach of predicting suicide,” which includes doctors’ assessments, was only slightly better than random guessing.

In contrast, when 16,000 patients in Tennessee were analyzed by an AI system, the system was able to predict with 80-90% accuracy whether someone would attempt suicide in the next two years, using indicators such as a history of antidepressant use and firearm injuries.  (It would have been nice to see numbers cited for false negatives vs false positives…also, I wonder whether nearly the same results could have been achieved by traditional statistical methods, without appeal to AI technology.)
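For what it’s worth, the kind of “traditional statistical methods” baseline I have in mind would be something like a plain logistic regression on a handful of binary indicators. Here is a minimal sketch, using entirely synthetic data and invented feature names (nothing below comes from the Tennessee study):

    # Hypothetical sketch of a plain logistic-regression baseline on binary
    # risk indicators of the kind the article describes. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 16_000

    # Invented binary indicators: antidepressant history, firearm injury, prior ER visit.
    X = rng.integers(0, 2, size=(n, 3))

    # Invented rare outcome, loosely tied to the indicators (not real data).
    logit = -5.0 + 1.2 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression().fit(X, y)
    scores = model.predict_proba(X)[:, 1]

    print("AUC on the synthetic data:", round(roc_auc_score(y, scores), 3))

A real comparison would of course score held-out data and report false-positive and false-negative rates explicitly, which is exactly the detail missing from the article.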

Another approach, which has been tested in Cincinnati schools and clinics, uses an app which records sessions between therapists and patients, and then analyzes linguistic and vocal factors to predict suicide risk.

Both Apple (Siri) and Facebook have been working to identify potential suicides.  The latter company says that in a single month in fall 2017, its AI system alerted first responders in 100 cases of potential self-harm.  (It doesn’t sound like the system contacts the first responders directly; rather, it prioritizes suspected cases for the human moderators, who then perform a second-level review.)  In one case, a woman in northern Argentina posted “This can’t go on.  From here I say goodbye.”  Within 3 hours, a medical team notified by Facebook reached the woman, and, according to the WSJ article, “saved her life.”

It seems very likely to me that, in the current climate, attempts will be made to extend this approach from the prediction of self-harm to the prediction of harm to others, in particular, mass killings in schools.  I’m not sure how successful this will be…the sample size of mass killings is, fortunately, pretty small…but if it is successful, then what might the consequences be, and what principles of privacy, personal autonomy, and institutional responsibility should guide the deployment (if any) of such systems?

For example, what if an extension of the linguistic and vocal analysis mentioned above allowed discussions between students and school guidance counselors to be analyzed to predict the likelihood of major in-school violence?  What should the school, and what should law enforcement, be allowed to do with this data?

Does the answer depend on the accuracy of the prediction?  What if people identified by the system will in 80% of cases commit an act of substantial violence (as defined by some criterion),  but at the same time there are 10% false positives (people the system says are likely perpetrators, but who are actually not)?  What if the numbers are 90% accuracy and only 3% false positives?
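To make the arithmetic in that question concrete: if the outcome itself is rare, even those numbers imply that the overwhelming majority of people flagged would never actually have acted. A minimal worked sketch (the 1-in-10,000 base rate is my own assumption, purely for illustration):

    # Worked example of the question above. The 80%/10% and 90%/3% figures are
    # from the post; the base rate is an assumption for illustration only.
    def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
        """Fraction of people flagged by the system who would actually go on to act."""
        true_pos = sensitivity * base_rate
        false_pos = false_positive_rate * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    base_rate = 1 / 10_000   # assume 1 in 10,000 students would commit such an act

    for sens, fpr in [(0.80, 0.10), (0.90, 0.03)]:
        ppv = positive_predictive_value(sens, fpr, base_rate)
        print(f"sensitivity {sens:.0%}, false positives {fpr:.0%}: "
              f"{ppv:.2%} of flagged students are true positives")

Under that assumed base rate, the first scenario flags roughly 1,250 innocent students for every true positive, and the second still flags several hundred.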

Discuss.

13 thoughts on “Artificial Intelligence and Human Behavior”

  1. A friend had called, telling me to turn on the TV. When I watched, in real time, the 2nd airliner hit the World Trade Center tower, I immediately groaned, as I recognized I was watching the deaths of probably hundreds of people. My first words to my wife, however, did not focus on that horror. I said, “We will lose freedom.”

    That describes my thoughts about any AI system used to predict behavior.

  2. The military is focusing more and more on self-mutilation as an indicator of unsuitable recruits.

    Years ago, when I was still enthusiastic about electronic medical records, there was an AI program tested in Minnesota that helped Physician Assistants in rural communities do a better job of predicting acute MIs.

    Some of these were just “if-then” logic trees.

    AI seems to be a form of magic that can do anything, especially for those who don’t understand it.

    “Any sufficiently advanced technology is indistinguishable from magic.”

    Clarke’s Law #3.

  3. MI = Myocardial Infarction.

    Otherwise known as a heart attack.

    The program helped PAs and rural practitioners decide which cases needed to go to the city for intervention like TPA or stents.

    TPA = tissue plasminogen activator. It dissolves clots.

  4. Self-harm is distinct from suicide, though they overlap. False positives are a big issue. Do you want to lock people up against their will when they haven’t done anything dangerous, but they have six out of seven signs that they will? You are going to have to build many hospitals.

  5. “It would have been nice to see numbers cited for false negatives vs false positives…also, I wonder how nearly the same results could have been achieved by use of traditional statistical methods, without appeal to AI technology.”
    Yes to both of these. Suicide is pretty rare, even if they picked a pretty at-risk population. So a predictor that says no one will commit suicide will be accurate way more than 80-90% of the time. And something that predicts most suicide attempters will almost certainly have a very large number of false positives. Unfortunately, even among scientists, statistical proficiency is very low, and among the media it’s almost non-existent.

  6. I’m skeptical that these indicators are as accurate as claimed at predicting something as complex as human behavior. I need to see some evidence.

    It reminds me of the Macdonald triad to predict violent behavior. It seemed like it should work but didn’t hold up to scrutiny.

  7. Even where the assessment/prediction is done entirely by human “experts”, without machine aid, there are big issues. If a panel of 3 psychiatrists thinks a particular student is dangerous…but he has not committed any crimes, or only minor ones…then what follows? How far can you go in restricting someone’s rights based on expert opinion, outside of a legal process in a court of law?

    It’s interesting: in recent years there have been great restrictions placed on a school’s ability to expel someone based on that person’s actual *behavior*, but it seems that we are now likely to allow…even require…action to be taken against students based not on actions but on a diagnosis.

  8. Psychiatrists have been found by the military to be pretty much useless in assessing recruits’ stability.

    The Army still asks for psych consults when they disqualify recruits for self harm but it is pretty much a CYA. None of the other services use psych consults.

    We had a psychotic recruit one day a few months ago but that is rare as the recruiters know better.

  9. Regarding predicting crime, Peter Thiel’s Palantir has been working on it. They are secretive about it probably because it’s so controversial – the targets inevitably turn out to be minorities – and you never can be sure how much the police reports may be skewed by local politics.

    Chicago has been using similar big data programs, but it’s difficult to determine how well they’ve worked. Violent crime has been down lately, but I attribute it to the Trump effect.

  10. GIGO

    An old saying from 1960s data processing, meaning “garbage in, garbage out”.

    The most surprising man I ever knew who committed suicide used to service our printer at work. He always seemed cheerful, and then one day we called for service and his office was in a hush. Apparently he had been playing tennis, took some pain meds and a drink, and then shot himself.

    I think there is a tendency, or more accurately a wish, that “science” can somehow predict human behavior.

    And as has been discussed, let’s say that it can, but the predicted behavior has not yet occurred. Are you supposed to lock someone up for something they have not done?

    In the case of the Argentinian woman whose “life they saved”, what was to prevent her from simply killing herself at a later time?

  11. I might add to this (since we can’t edit). I think meds have huge, unpredictable side effects in suicides. In the case of the above man, I believe to this day that it was the alcohol mixed with his pain medication (from tennis) that pushed him over.

    How is an AI program supposed to predict this? I think the human mind is still far more complicated than a programmer can decipher.

    I have heard of more than one person who, upon taking “antidepressants”, had occasional thoughts of suicide. Add to that a mixture of prescribed meds and, to borrow a phrase from combat pilots flying damaged planes, “everyone becomes a test pilot” (flying a plane in a configuration for which no tests were made).
