Jim Miller, discussing customer-satisfaction surveys, highlights a common error of inference:
Consumer Reports does not seem to understand that all its surveys, not just those on cars, have a systematic problem; the respondents are self selected, which often biases the results, as any good survey researcher can tell you.
So (following Jim’s example) if the Consumer Reports survey shows the Camry as more reliable than the Corvette, is this because the Camry is really more reliable, or is it because people who buy Corvettes tend to drive them hard? The Consumer Reports reliability data alone do not contain enough information to answer this question.
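The point can be illustrated with a small simulation. In this hypothetical sketch (all numbers are invented), both car models have the same underlying defect rate, but the owners of one model are much more likely to drive hard, which adds to the failure probability. A naive reading of the survey would attribute the gap to the cars rather than the drivers.

```python
import random

random.seed(0)

# Invented numbers for illustration only.
BASE_FAILURE = 0.05          # identical underlying defect rate for both models
HARD_DRIVING_PENALTY = 0.10  # extra failure probability from hard driving

def surveyed_failure_rate(n_owners, p_hard_driver):
    """Fraction of surveyed owners who report a failure."""
    failures = 0
    for _ in range(n_owners):
        hard = random.random() < p_hard_driver
        p = BASE_FAILURE + (HARD_DRIVING_PENALTY if hard else 0.0)
        failures += random.random() < p
    return failures / n_owners

# Same cars; different owner populations.
camry_like = surveyed_failure_rate(100_000, p_hard_driver=0.1)
corvette_like = surveyed_failure_rate(100_000, p_hard_driver=0.7)
print(f"Camry-like survey failure rate:    {camry_like:.3f}")
print(f"Corvette-like survey failure rate: {corvette_like:.3f}")
```

The survey reports a large reliability gap even though the two models are mechanically identical by construction; the difference comes entirely from who buys each car.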
Similar cases of flawed inference abound and undoubtedly contribute to policy errors. For example, Congressmen often say that their constituents’ letters and phone calls indicate support for their position on this or that issue. But if few people contact a Congressman to object to his position on an issue, does that really mean that few constituents object? It might mean that most of the constituents who object assume, perhaps for good reason, that the Congressman will ignore their opinions, and therefore do not bother to contact him. (A friend of mine once telephoned the office of the late Sidney Yates, D-IL, to complain about Yates’s position on some issue. The staff person who took my friend’s call asked him if he was one of those people who listened to Rush Limbaugh, then hung up on him. I am guessing that my friend never called back and that Yates received few critical calls overall.)
There are many other examples of this phenomenon. In my experience, people who work in medicine often overestimate the risks of motorcycling and other risky activities. It’s easy to understand why: they see many people who have suffered terrible injuries while engaging in such activities. However, the population samples that they see are not representative, and therefore their conclusions are unreliable. (I assume that motorcyclists, skydivers, and other such enthusiasts who have been involved in their activities for some time generally have a good idea of how much risk they face, and have decided that on balance the risk is worth taking. Problems come from outsiders who overestimate risks and/or underestimate benefits, and who think it’s their prerogative to calculate other people’s tradeoffs.)
Public education about basic statistics may be the only way to reduce the negative consequences of such widespread errors of inference. There are already plenty of activists who seek to “educate” us and “raise our awareness” about various issues, many of them frivolous. It’s too bad that we do not have statistics activists as well.
UPDATE: See also David Foster’s post, How Not to Do Market Research.