Artificial Intelligence and Human Behavior

The WSJ has a story on the use of artificial intelligence to predict potential suicides.  The article cites a study indicating that “the traditional approach of predicting suicide,” which includes doctors’ assessments, was only slightly better than random guessing.

In contrast, when 16,000 patients in Tennessee were analyzed by an AI system, the system was able to predict with 80-90% accuracy whether someone would attempt suicide in the next two years, using indicators such as a history of antidepressant use and firearm injuries.  (It would have been nice to see numbers cited for false negatives vs false positives…also, I wonder whether nearly the same results could have been achieved by traditional statistical methods, without appeal to AI technology.)
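To make that last question concrete: the usual “traditional” baseline would be something like a logistic regression on the same indicators.  Here is a minimal sketch of such a comparison; the feature names and data below are invented for illustration (using scikit-learn), and this is not the Tennessee study’s actual code or data:

```python
# Hypothetical sketch of a "traditional statistics" baseline.
# Feature names and data are invented; this is not the Tennessee study's code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 16_000
# Invented binary indicators: antidepressant history, firearm injury, etc.
X = rng.integers(0, 2, size=(n, 5))
# Invented outcome: suicide attempt within two years.  (Random here, so the
# expected AUC is ~0.5; the real patient data would be needed for a real test.)
y = rng.integers(0, 2, size=n)

# "80-90% accuracy" in studies like this usually refers to ROC AUC.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"baseline AUC: {scores.mean():.2f}")
```

If a plain model like this achieved a similar AUC on the real data, the “AI” label would be doing more marketing work than predictive work.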

Another approach, which has been tested in Cincinnati schools and clinics, uses an app that records sessions between therapists and patients and then analyzes linguistic and vocal factors to predict suicide risk.

Both Apple (Siri) and Facebook have been working to identify potential suicides.  The latter company says that in a single month in fall 2017, its AI system alerted first responders in 100 cases of potential self-harm.  (It doesn’t sound like the system contacts the first responders directly; rather, it prioritizes suspected cases for the human moderators, who then perform a second-level review.)  In one case, a woman in northern Argentina posted “This can’t go on.  From here I say goodbye.”  Within 3 hours, a medical team notified by Facebook reached the woman, and, according to the WSJ article, “saved her life.”
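Based only on that description (Facebook’s actual pipeline is not public, and every name below is invented), the architecture sounds like a standard score-and-triage queue: a classifier scores posts, and the riskiest cases are surfaced first for the human moderators.  A minimal sketch:

```python
# Minimal sketch of the score-and-triage pattern described above.
# Facebook's actual pipeline is not public; all names here are invented.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    priority: float                      # negated risk score, so the
    post_id: str = field(compare=False)  # highest-risk post pops first

def triage(posts, risk_model, threshold=0.5):
    """Score each post; queue those above threshold for human review."""
    queue = []
    for post_id, text in posts:
        score = risk_model(text)
        if score >= threshold:
            heapq.heappush(queue, FlaggedPost(-score, post_id))
    return queue  # moderators pop the highest-priority cases first

# Toy stand-in for a trained text classifier.
def toy_model(text):
    return 0.9 if "goodbye" in text.lower() else 0.1

q = triage([("p1", "Nice day out"), ("p2", "From here I say goodbye.")],
           toy_model)
print(heapq.heappop(q).post_id)  # -> p2, reviewed first
```

The design point is that the model never acts on its own: it only reorders the human reviewers’ queue, which matches the “second-level review” the article describes.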

It seems very likely to me that, in the current climate, attempts will be made to extend this approach from the prediction of self-harm into the prediction of harm to others, in particular, mass killings in schools.  I’m not sure how successful this will be…the sample size of mass killings is, fortunately, pretty small…but if it is successful, then what might the consequences be, and what principles of privacy, personal autonomy, and institutional responsibility should guide the deployment (if any) of such systems?

For example, what if an extension of the linguistic and vocal analysis mentioned above allowed discussions between students and school guidance counselors to be analyzed to predict the likelihood of major in-school violence?  What should the school, and what should law enforcement, be allowed to do with this data?

Does the answer depend on the accuracy of the prediction?  What if 80% of the people identified by the system will commit an act of substantial violence (as defined by some criterion), but at the same time there are 10% false positives (people the system says are likely perpetrators, but who are actually not)?  What if the numbers are 90% accuracy and only 3% false positives?
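One reason the answer is not obvious: for events as rare as school mass killings, even those numbers imply that almost everyone flagged would be innocent.  A back-of-the-envelope calculation via Bayes’ rule (the base rate below is purely an illustrative assumption, not a cited figure):

```python
# Hypothetical back-of-the-envelope calculation (not from the WSJ article):
# how often is a flagged person an actual perpetrator, given the base rate?

def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
    """P(actual perpetrator | flagged), via Bayes' rule."""
    true_alarms = sensitivity * base_rate
    false_alarms = false_positive_rate * (1 - base_rate)
    return true_alarms / (true_alarms + false_alarms)

# Assumed base rate, purely for illustration: 1 student in 100,000.
base_rate = 1e-5

for sensitivity, fpr in [(0.80, 0.10), (0.90, 0.03)]:
    ppv = positive_predictive_value(sensitivity, fpr, base_rate)
    print(f"{sensitivity:.0%} accuracy, {fpr:.0%} false positives: "
          f"P(perpetrator | flagged) = {ppv:.4%}")
```

Under that assumed base rate, even the 90%/3% system flags more than 3,000 innocent students for every actual perpetrator, which puts the privacy and institutional-responsibility questions above in sharper relief.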

Discuss.

Just sayin’

In all the righteous indignation about the story that four cops failed to enter the school after the shooting, I’ve yet to see a source cited other than CNN.

We on the right have spent a lot of time and energy yelling that CNN is untrustworthy.

Why, then, do we uncritically accept this story from CNN?

Boycott the NRA Boycotters

Start with the Enterprise, Alamo, and National car rental companies. Add other companies to the list as they join the PC #BoycottNRA bandwagon.

Do these people remember the Smith & Wesson boycott? Perhaps not. And the anti-RKBA boycotters in this case aren’t gun companies and therefore don’t stand to lose as much from a conservative/pro-RKBA boycott as S&W did. The management of National et al. no doubt figure their political opportunism won’t cost them much. They may be mistaken. Late-night TV hosts can get away with antagonizing half of their potential audience if doing so gets them increased viewership from the other half. However, sellers of ordinary goods and services are unwise to expect any such political partisanship to be good for their businesses.

We are in uncharted territory.

On October 18, 2016, Barack Obama ridiculed anyone who thought the election could be rigged.

OBAMA: I have never seen in my lifetime or in modern political history any presidential candidate trying to discredit the elections and the election process before votes have even taken place. It’s unprecedented. It happens to be based on no facts. … [T]here is no serious person out there who would suggest somehow that you could even rig America’s elections, in part, because they are so decentralized and the numbers of votes involved. There is no evidence that that has happened in the past or that there are instances in which that will happen this time. And so I’d invite Mr. Trump to stop whinin’ and go try to make his case to get votes.

Then Hillary lost.

In December 2016, Democrats were still trying to figure out what happened.

This process, which is a form of what’s called confirmation bias, can help explain why Trump supporters remain supportive no matter what evidence one puts to them—and why Trump’s opponents are unlikely to be convinced of his worth even if he ends up doing something actually positive. The two groups simply process information differently. “The confirmation bias is not specific to Donald Trump. It’s something we are all susceptible to,” the Columbia University psychologist Daniel Ames, one of several scholars to nominate this paper, said. “But Trump appears to be an especially public and risky illustration of it in many domains.” (Ames and his colleague Alice Lee recently showed a similar effect with beliefs about torture.)

That one was a good observation. But what about the “Russia Collusion” story?
