Why Most of Us No Longer Read The Economist

I just received a press release promoting The Economist’s new survey of academic economists about McCain’s and Obama’s respective economic programs. Here are the results:

What’s going on here?

This is a junk survey. Look at the data. Now look at the article.

Here’s The Economist’s explanation of how they generated a survey sample:

Our survey is not, by any means, a scientific poll of all economists. We e-mailed a questionnaire to 683 research associates, all we could track down, of the National Bureau of Economic Research, America’s premier association of applied academic economists, though the NBER itself played no role in the survey. A total of 142 responded, of whom 46% identified themselves as Democrats, 10% as Republicans and 44% as neither. This skewed party breakdown may reflect academia’s Democratic tilt, or possibly Democrats’ greater propensity to respond. Still, even if we exclude respondents with a party identification, Mr Obama retains a strong edge—though the McCain campaign should be buoyed by the fact that 530 economists have signed a statement endorsing his plans.

The stuff about 683 research associates and the NBER is meaningless. What matters is that this was an Internet poll arbitrarily restricted to academic economists and with a self-selected sample. This is a problem because:

-Academic economists are likely to be more leftist than economists as a whole.

-Only 14 out of the 142 respondents identified themselves as Republicans.

-There is no way to know why only 10% of respondents identified as Republicans, but several possibilities implying gross sampling error are obvious. In other words, either most academic economists lean as far to the Left as other academics do, which seems unlikely, or the sample is unrepresentative. Either way, the survey results are impeached.

-The labels “Democratic economist”, “Republican economist” and “unaffiliated economist” are self-selected and may be inaccurate. My guess is that most of the unaffiliateds usually vote for Democrats even if they are not registered Democrats. In this regard I am reminded of media people who claim to be independent even though everyone knows they vote overwhelmingly for Democrats.
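The second and third points can be made concrete with a toy calculation. The sketch below (every number is invented for illustration, not taken from the survey) shows how unequal response propensities alone can produce a respondent pool that looks nothing like the underlying population:

```python
# Hypothetical illustration of self-selection bias: if Democrats are more
# likely to answer the questionnaire, the respondent pool skews Democratic
# even when the underlying population does not. All numbers are invented.

population = {"Democrat": 0.35, "Republican": 0.25, "Neither": 0.40}
response_rate = {"Democrat": 0.30, "Republican": 0.10, "Neither": 0.20}

# Expected composition of the self-selected respondent pool: each group's
# population share weighted by its propensity to respond, then normalized.
weights = {p: population[p] * response_rate[p] for p in population}
total = sum(weights.values())
observed = {p: weights[p] / total for p in weights}

for party, share in observed.items():
    print(f"{party}: {share:.0%}")
```

With these made-up inputs, a population that is 35% Democratic yields a respondent pool that is 50% Democratic and only about 12% Republican, a skew of roughly the shape the survey reports. That the numbers can be reproduced this easily is exactly why the reported breakdown tells us little about economists in general.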

So this is a worthless survey for research purposes. It is not, however, worthless for business purposes, as I am sure it will generate a lot of discussion and outraged debunking by bloggers, and therefore a lot of traffic for The Economist’s Web site. It may also help to get Obama elected, and perhaps that is part of the plan.

Where have we seen this kind of politically driven statistical analysis before?

UPDATE: The vagueness of the self-reported categorizations “Republican”, “Democrat” and “independent” is obvious. One wonders why the survey did not also, or instead, ask respondents whom they voted for in recent elections.

Number Gut Part II

Way back in 2004 I wrote about how the lack of an intuitive sense of scale prevented many people from viewing the Lancet Iraqi Mortality survey with skepticism. The same lack of sense of scale shows up in other areas such as in this article (via Megan McArdle) about ending subsidies to the oil industry instead of levying a windfall-profits tax.

Read more

Problems With Self-Selected Survey Data

Jim Miller, discussing customer-satisfaction surveys, highlights a common error of inference:

Consumer Reports does not seem to understand that all its surveys, not just those on cars, have a systematic problem; the respondents are self selected, which often biases the results, as any good survey researcher can tell you.

So (following Jim’s example) if the Consumer Reports survey shows the Camry as more reliable than the Corvette, is this because the Camry is really more reliable or is it because people who buy Corvettes tend to drive them hard? The reliability data provided by Consumer Reports do not provide enough information to answer this question.
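The confound can be sketched numerically. In the toy model below (every number is invented), both cars are given identical per-mile mechanical reliability; only the driving behavior of their owners differs, yet the survey-style failure counts diverge:

```python
# Hypothetical sketch of the confounding Jim Miller describes: two cars with
# identical inherent reliability show different observed failure rates when
# their owners drive them differently. All numbers are invented.

base_failure_per_mile = 1e-5        # same mechanical reliability for both cars
hard_driving_multiplier = 3.0       # assumed extra wear from aggressive driving

annual_miles = {"Camry": 12_000, "Corvette": 8_000}
hard_driver_share = {"Camry": 0.05, "Corvette": 0.60}

def expected_failures(car):
    # Blend gentle and hard drivers; the car's own reliability is identical.
    gentle = (1 - hard_driver_share[car]) * annual_miles[car] * base_failure_per_mile
    hard = (hard_driver_share[car] * annual_miles[car]
            * base_failure_per_mile * hard_driving_multiplier)
    return gentle + hard

for car in annual_miles:
    print(f"{car}: {expected_failures(car):.3f} expected failures/year")
```

Under these assumptions the Corvette reports more failures per year than the Camry despite being, by construction, equally reliable per mile. Owner-reported reliability data alone cannot distinguish the car from its drivers.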

Read more