An artificial intelligence program was assigned the task of turning satellite images into street maps. It was graded by comparing reconstructed images (reconstructed from the map) and comparing them with the original; also, by the clarity of the street map. The grades were used by the program to continually improved its performance.
But what the program sneakily learned to do was to encode details of the original image into the street map, in a manner invisible to humans, thereby optimizing its grade on the reconstructed image…independently of how well the street map…which was the actual desired product…actually reflected the original image.
Humans, also, often respond to incentives in ways very different from those expected by the designers of those incentives…as many creators of sales commission plans and manufacturing bonus plans have discovered. Bureaucracies, especially, tend to respond to the measurements placed on them in ways that are not consistent with the interests of the larger organization or society that they are supposed to be serving. See Stupidity, Communist-Style and Capitalist-Style and The Reductio ad Absurdum of Bureaucratic Liberalism.
I call it a Principal Agent Problem,
This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard.
Even by algorithms. I wonder about “smart” thermostats that mighty connect with the public utility and figure out how to increase their own revenue.