Strange Comparison, Dangerous Conclusion

About a week ago, the WSJ ran an article titled “Mark Zuckerberg is No James Madison.”  The article argues that a constitution is similar to a block of computer code—a valid point, although I would argue it is also true of legislation and contracts in general…both the code and the constitution/law/contract must be sufficiently clear and unambiguous to be executable without reference to their originators.

Then the article goes on to say that ‘the Constitution understands human nature.  Facebook, dangerously at times, does not.  In designing the Constitution, Madison managed to appeal to people’s better angels while at the same time calculating man’s capacity to harm and behave badly. Facebook’s designers, on the other hand, appear to have assumed the best about people. They apparently expected users to connect with friends only in benign ways. While the site features plenty of baby and puppy photos, it has also become a place where ISIS brags about beheadings and Russians peddling misinformation seek to undermine the institutions of a free society.’

The attempt to create a parallel between Zuckerberg and Madison is a strange one, IMO, given the completely different nature of the work the two men were doing. Madison was attempting to create a new model for a self-governing country; Zuckerberg was attempting to make money for himself and his investors, and maybe to provide a little fun and value for his users along the way.

What I find especially problematic is the ‘therefore’ that the author draws:

Facebook insists it is not a media company. Maybe so. But unless it takes on the responsibilities of an editor and publisher by verifying the identities of users, filtering content that runs on its platform, and addressing the incentives to post specious or inflammatory “facts,” Facebook should expect to be policed externally.

But is Facebook really a publisher, or is it more of a printer?  If someone…Ben Franklin in the mid-1700s or some corporation today…is running a printing shop, accepting printing jobs from all who will pay, should he or it be held accountable for validating the truth of the material printed and verifying the identities of the customers?


Artificial Intelligence and Human Behavior

The WSJ has a story on the use of artificial intelligence to predict potential suicides.  The article cites a study indicating that “the traditional approach of predicting suicide,” which includes doctors’ assessments, was only slightly better than random guessing.

In contrast, when 16,000 patients in Tennessee were analyzed by an AI system, the system was able to predict with 80-90% accuracy whether someone would attempt suicide in the next two years, using indicators such as history of using antidepressants and injuries with firearms.  (It would have been nice to see the numbers for false negatives vs false positives…also, I wonder whether nearly the same results could have been achieved by use of traditional statistical methods, without appeal to AI technology.)
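To make the “traditional statistical methods” point concrete, here is a minimal sketch of one of the oldest such methods: a plain actuarial cross-tabulation, where the observed outcome rate in each indicator cell becomes the predicted risk for that cell.  Every number in it is invented for illustration; nothing here is taken from the Tennessee study, and the two binary indicators are only loosely inspired by the ones the article mentions.

```python
import random
from collections import defaultdict

random.seed(42)

# Synthetic illustration only: two binary indicators loosely inspired by those
# the article mentions (antidepressant history, firearm injury).  The base
# rates and risk levels below are invented for this sketch.
def make_patient():
    antidep = 1 if random.random() < 0.30 else 0
    firearm = 1 if random.random() < 0.10 else 0
    true_risk = {  # assumed attempt probability for each indicator pattern
        (0, 0): 0.05, (1, 0): 0.18,
        (0, 1): 0.25, (1, 1): 0.60,
    }[(antidep, firearm)]
    attempted = 1 if random.random() < true_risk else 0
    return antidep, firearm, attempted

patients = [make_patient() for _ in range(16000)]

# "Traditional" actuarial approach: cross-tabulate outcomes by indicator
# pattern and use each cell's observed rate as the predicted risk.
counts = defaultdict(lambda: [0, 0])  # pattern -> [attempts, total]
for a, f, y in patients:
    counts[(a, f)][0] += y
    counts[(a, f)][1] += 1

risk_table = {k: attempts / total for k, (attempts, total) in counts.items()}
for pattern in sorted(risk_table):
    print(pattern, f"{risk_table[pattern]:.2f}")
```

With enough data, the observed cell rates recover the underlying risks closely — no machine learning required.  Whether such a simple table could actually match the AI system’s reported accuracy is exactly the question the study’s write-up leaves open.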

Another approach, which has been tested in Cincinnati schools and clinics, uses an app which records sessions between therapists and patients, and then analyzes linguistic and vocal factors to predict suicide risk.

Both Apple (Siri) and Facebook have been working to identify potential suicides.  The latter company says that in a single month in fall 2017, its AI system alerted first responders in 100 cases of potential self-harm.  (It doesn’t sound like the system contacts the first responders directly; rather, it prioritizes suspected cases for the human moderators, who then perform a second-level review.)  In one case, a woman in northern Argentina posted “This can’t go on.  From here I say goodbye.”  Within 3 hours, a medical team notified by Facebook reached the woman, and, according to the WSJ article, “saved her life.”

It seems very likely to me that, in the current climate, attempts will be made to extend this approach from the prediction of self-harm to the prediction of harm to others, in particular, mass killings in schools.  I’m not sure how successful this will be…the sample size of mass killings is, fortunately, pretty small…but if it is successful, then what might the consequences be, and what principles of privacy, personal autonomy, and institutional responsibility should guide the deployment (if any) of such systems?

For example, what if an extension of the linguistic and vocal factors analysis mentioned above allowed discussions between students and school guidance counselors to be analyzed to predict the likelihood of major in-school violence?  What should the school, what should law enforcement be allowed to do with this data?

Does the answer depend on the accuracy of the prediction?  What if people identified by the system will in 80% of cases commit an act of substantial violence (as defined by some criterion), but at the same time there are 10% false positives (people the system says are likely perpetrators, but who are actually not)?  What if the numbers are 90% accuracy and only 3% false positives?
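One reason the false-positive question matters so much: when the predicted event is rare, even an apparently accurate detector generates mostly false alarms.  Here is a back-of-the-envelope Bayes calculation — all numbers are assumed for illustration, and “false positives” is read here as the share of non-perpetrators wrongly flagged, which is one of several possible readings of the figures above.

```python
def flagged_breakdown(base_rate, sensitivity, false_positive_rate, population):
    """Given an assumed base rate of perpetrators in the population, a
    detector's sensitivity (share of true perpetrators it flags), and its
    false-positive rate (share of non-perpetrators it flags), return how
    many flagged people are true alarms, how many are false alarms, and
    what fraction of all flags are real (the positive predictive value)."""
    perpetrators = population * base_rate
    innocents = population - perpetrators
    true_flags = perpetrators * sensitivity
    false_flags = innocents * false_positive_rate
    ppv = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, ppv

# Illustrative numbers only: suppose 1 student in 100,000 would commit an act
# of major violence, and the system catches 90% of them with a 3%
# false-positive rate -- the more optimistic pair of numbers above.
true_flags, false_flags, ppv = flagged_breakdown(1e-5, 0.90, 0.03, 1_000_000)
print(f"true flags: {true_flags:.0f}, false flags: {false_flags:.0f}, "
      f"share of flags that are real: {ppv:.4%}")
```

Under those assumptions, the handful of correctly flagged students is swamped by tens of thousands of false alarms, so the overwhelming majority of people the system points at would be innocent — which is why the base rate of the event, not just the headline accuracy, has to drive any policy answer.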

Discuss.

Attack of the Job-Killing Robots, Part 3

The final months of World War II included the first-ever battle of robots:  on one side, the German V-1 missile and on the other, an Allied antiaircraft system that automatically tracked the enemy missiles, performed the necessary fire-control computations, and directed the guns accordingly. This and other wartime projects greatly contributed to the understanding of the feedback concept and the development of automatic control technology.  Also developed during the war were the first general-purpose programmable digital computers: the Navy/Harvard/IBM Mark I and the Army/University of Pennsylvania ENIAC…machines that, although incredibly limited by our present-day standards, were at the time viewed with awe and often referred to as ‘thinking machines.’

These wartime innovations in feedback control and digital computation would soon have enormous impact on the civilian world.

This is one in a continuing series of posts in which I attempt to provide some historical context for today’s discussions of automation and its impact on jobs and society…a context of which people writing about this topic often seem to have little understanding.


A 60-Year-Old Fighter Design – Still Operational

In 2009, Neptunus Lex paid tribute to the MIG-21, which he referred to as “a noble adversary.”  At the time, it appeared that the airplane was about to be phased out of service by those countries still operating it.  Didn’t happen that way, though…the airplane is still in use by several countries, most notably India, which still operates more than 200 of them.

Design studies for the MIG-21 began in 1953, with first flight in 1958 and production shipments beginning in 1959.  As an analogy for the design’s longevity, imagine the Red Baron’s Fokker triplane from 1918 still being employed in a military role in the post-Vietnam era of 1977!

An article asks: is the MIG-21 the fighter jet that could fly for 100 years?  Probably not, I imagine, at least in any kind of operational role…but it’s already done pretty well in longevity terms for a combat airplane.

There are some web pages on the MIG-21 by a former East German fighter pilot.

Also, there’s a pretty decent movie, based on real events, about the 1966 Israeli operation to steal a MIG-21 from Iraq.  The moviemakers were evidently unable to get their hands on a real MIG-21 (in 1988), so a MIG-15 was used for the flying scenes instead.

More MIG-21 information here.

The Details of Work and the Realities of Automation

An interesting piece on the automation of trucking, with an extensive comment thread.  Many of the commenters have practical experience in the trucking industry and in automation work in other industries such as sawmills.