Speaking of Surveys

An extensive collection of political polls, current and historical. Use the ‘poll type’ and ‘state’ scroll bars, or the free-form search field. Remarkably fast search.

7 thoughts on “Speaking of Surveys”

  1. What isn’t clear here is that any poll comes with a margin of error. If the end results are within the margin of error, it is a “push”.

    How do you reliably aggregate different polls with different margins for error? I’m not that deep into stats, but that one seems to be a problem with major hair on it… Perhaps there is a methodology in stats for dealing with that, but I’m unaware of it. And an awful lot of statistics seems to be “how do we make the numbers say what we want them to say?”

    The only reason I don’t generally find stats improbable on the whole is the way they used them to estimate the actual German production rates in WWII by examining the serial numbers on captured and destroyed equipment. Post-war analysis of their actual manufacturing showed that they were within 10% on everything except artillery, which they’d underestimated by a considerable amount. Modern serial numbers are “not serial”, to prevent that from being done today by an enemy.
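
    [The serial-number trick is the classic “German tank problem,” and the standard estimator is simple enough to sketch. A minimal Python illustration; the simulation numbers below are invented for the example:]

        import random

        def estimate_total(observed_serials):
            # Minimum-variance unbiased estimator for the German tank
            # problem: serials are drawn without replacement from 1..N;
            # estimate N as m + m/k - 1, with m the largest serial seen
            # and k the sample size.
            k = len(observed_serials)
            m = max(observed_serials)
            return m + m / k - 1

        # Simulated check: a true production run of 1000 units, 20 captured.
        random.seed(1)
        captured = random.sample(range(1, 1001), 20)
        print(round(estimate_total(captured)))  # typically lands near 1000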

  2. “How do you reliably aggregate different polls with different margins for error?”
    This is not a challenging statistical problem. It is very well established how to average numbers with different uncertainties. The problem is that while polls report margin-of-error / statistical uncertainties, they’re also hideously biased and you can’t just average away biases, which are systematic errors rather than statistical errors.
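
    [For what it’s worth, the textbook recipe here is inverse-variance weighting. A minimal sketch with made-up poll numbers, which also shows why a shared bias survives the averaging:]

        def combine(estimates, sigmas):
            # Inverse-variance weighted average: weight each estimate by
            # 1/sigma^2; the combined uncertainty is 1/sqrt(sum of weights).
            # Valid only when the errors are independent and statistical.
            weights = [1 / s**2 for s in sigmas]
            total = sum(weights)
            mean = sum(w * x for w, x in zip(weights, estimates)) / total
            return mean, (1 / total) ** 0.5

        # Three hypothetical polls (percent for one candidate) with their
        # reported uncertainties, treated here as one-sigma for simplicity.
        polls = [52.0, 49.0, 51.0]
        sigmas = [3.0, 2.0, 4.0]
        print(combine(polls, sigmas))  # roughly (50.1, 1.5)

        # If every poll carries the same house-effect bias (say each reads
        # two points high), the combined mean is exactly two points high
        # too: averaging shrinks statistical error, never the shared bias.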

  3. OBloodyHell wrote

    “How do you reliably aggregate different polls with different margins for error?”

    Brian responded with

    “This is not a challenging statistical problem. It is very well established how to average numbers with different uncertainties. The problem is […] bias […]”

    In a problem where the imperfections in each data source can be usefully described by a single number (e.g., “margins for error”, which might be an informal way of saying something like “standard deviations”), it may not be a challenging problem. But in the real world, even in the physical sciences, the relationship between multiple imperfect data sources tends to be less trivial than that, and it is often (I am tempted to say “normally”) a challenging problem.

    In some fields, serious highly qualified people analyze multiple sources of tangible physical data, call it “multi-sensor fusion”, and write books about it. In AI, serious highly qualified people have various methods for combining results from multiple machine learning approaches with somewhat complementary strengths and weaknesses, though I don’t know of a summary term for the entire family of techniques. And in climate science, well-funded people combine different kinds of data in innovative but incorrect ways, without much connection to ordinary statistical techniques, put the results on the cover of a high-profile official report, hammer it in their press releases, and circle the wagons around it for at least half a generation.
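
    [For the machine-learning case, “ensemble methods” is one common umbrella term. Its simplest member, majority voting, shows both the gain and the catch in one line of arithmetic; the 70% accuracy figure below is invented for illustration:]

        # Majority vote over three independent classifiers, each 70% accurate:
        # the ensemble is right when at least two of the three are right.
        p = 0.7
        ensemble = p**3 + 3 * p**2 * (1 - p)
        print(ensemble)  # 0.784: better than any single classifier, but only
                         # because the errors were assumed independent; shared
                         # (systematic) errors cancel nothing.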

    Also, if the data are sufficiently bad, trying to make sense of the aftermath may be quite hopeless. Once upon a time my experimentalist coworkers were very irritated at me when I said that arbitrarily small signals can be extracted from enough noisy measurements. I was thinking of cleverly designed setups like the GPS coding scheme, where investing ingenuity in the signal can let you extract it from the noise later. They were thinking of postmortems of less ingenious inconclusive experiments in the (physical chemistry) lab. For the situation they had in mind, they were basically correct. And the usual poorly conceived, poorly documented, poorly executed shoddiness of push polls, of polls for manufacturing talking points, and of polls for journotainment is to the ordinary inconclusiveness of a chemistry experiment as the darkness of the moon before it rises is to the darkness of sunshine at noon. (And Brian is not completely off base in picking on “bias”, but in practice the distortion can be much wilder than just bias.)
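
    [The GPS-style point, that a known pseudorandom code lets you pull a signal from far below the noise floor, fits in a few lines. A toy sketch with invented numbers:]

        import numpy as np

        rng = np.random.default_rng(0)

        # A known pseudorandom +/-1 spreading code, in the spirit of GPS
        # C/A codes: the receiver knows the code in advance.
        n_chips = 100_000
        code = rng.choice([-1.0, 1.0], size=n_chips)

        bit = -1.0        # the one bit we want to recover
        amplitude = 0.05  # signal buried 20x below the unit noise floor
        received = amplitude * bit * code + rng.normal(0.0, 1.0, size=n_chips)

        # Correlating against the known code adds the signal terms coherently
        # while the noise averages toward zero; SNR grows with code length.
        correlation = np.dot(received, code) / n_chips
        print(correlation)           # close to amplitude * bit = -0.05
        print(np.sign(correlation))  # recovers the bit: -1.0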

    OBloodyHell also wrote “The only reason I don’t generally find stats improbable on the whole is the way they used them to estimate the actual German production rates in WWII by examining the serial numbers on captured and destroyed equipment.”

    The only reason? That seems too harsh to me. When a phone system recognizes spoken input reasonably reliably, it seems to me that you should consider that to be a pretty forceful reminder that statisticians (defined broadly enough to include people designing AI learning systems, and possibly defined broadly enough to include an occasional blog commenter who did a Ph.D. on Monte Carlo simulation, and definitely defined narrowly enough to exclude fellow travellers of the Union of Superficial Science Reenactors whose skillset is punching down by torturing data while sucking up by supporting an agenda) really do know a useful thing or two. Not all purported statistics are necessarily actual statistics any more than purported republics are necessarily actual republics, and not all obscure studies are uselessness wrapped in obscurantism; like 20th century physics in general and quantum mechanics in particular, actual statistics has enough successes to provide a pretty good excuse for complexity.

  4. Q: “[…] How do you reliably aggregate different polls with different margins for error? I’m not that deep into stats, but that one seems to be a problem with major hair on it… Perhaps there is a methodology in stats for dealing with that, but I’m unaware of it. […]”

    A: “This is not a challenging statistical problem. […]”

    me: [The “seems to be a problem with major hair on it” intuition was basically correct. The dismissive answer is fairly dysfunctional, and grating too. Wall of text! Though I did edit down the original “of sunshine at noon as seen by a protected species flying through the focus of a renewable energy facility” to four words; go me.]

    “The only reason I don’t generally find stats improbable […]”

    me: [Hey! I resemble that remark! And I have a deep well of walls of text in me.]

    “Sigh. The question was about averaging poll numbers, man.”

    You, not the questioner, introduced “average” into “the” (new) question, passing over the question in favor of “the” question that you preferred to answer. (And your preference is a valid feeling, and that’s the important thing, amirite, man? Sigh; indeed, Rockefeller weeps.) And you answered your preferred question not very correctly, and you chose a dismissive style. If you are weary of people pushing back against derailing involving personalized targets like that, perhaps you should focus on less personalized approaches, such as undirected spacy strawmanning of mainstream-on-this-blog ideas? (E.g., chicagoboyz.net/archives/68270.html “it needs to be” and “nothing internal or external”; do tell.)

  5. As with all things media, what they call the margin of error is nothing of the kind. What they call the “margin” is actually the sampling error: the maximum difference between what was measured in a sample and what would be measured if the entire population were examined.

    This comes with two big “ifs”. The first is that the variation in the population is randomly distributed. The other is that the sample is drawn in a way that truly represents the total population. Both of these conditions can be challenging to meet even when you are dealing with inanimate widgets sitting on a table or coming off a line.
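
    [For reference, the number pollsters report usually comes from the simple-random-sample formula. A minimal sketch, assuming simple random sampling; the poll numbers are invented:]

        import math

        def margin_of_error(p_hat, n, z=1.96):
            # 95% sampling-error bound for a simple random sample:
            # z * sqrt(p(1-p)/n). It says nothing about non-random samples,
            # non-response, or question wording, which are the big "ifs".
            return z * math.sqrt(p_hat * (1 - p_hat) / n)

        # A hypothetical 1,000-person poll reading 52%:
        print(round(100 * margin_of_error(0.52, 1000), 1))  # about 3.1 points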

    The wonder is that it seemed possible to successfully apply those techniques to people, each possessing his own agency, for this long. There’s an old Jimmy Stewart film called “Magic Town” where the people who make up the “sample” for a polling operation become aware that their opinions are being mined and start to supply the “right” answers. It seems that most of us who even respond to these polls anymore have taken to supplying whatever answer suits our political leaning or seems likely to throw off the results in a way we would wish.

    Of course, the polling organizations are all keenly aware that their bottom line depends on supplying the answer desired by whoever is paying them. The day after the election comes only once a cycle, and the politicians have to spend their money somewhere. Talk about Lucy holding the football: politicians and their consultants are a perfect prey population for every con known to man, even considering that most are prime practitioners themselves.
