Chicago Boyz

Archive for the 'Statistics' Category

    The ghost of database past

    Posted by L. C. Rees on 18th July 2013

    Section 2, Amendment XIV:

    Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.

    Article I, Section 2, U.S. Constitution:

    Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct.

    In working on the paternal genealogy of Howard Ira Milligan, my mother’s father, I have found U.S. Census records to be an important primary source.

    Read the rest of this entry »

    Posted in Statistics, USA | 1 Comment »

    “Studies Show” – Widespread Errors in Medical Research

    Posted by Jonathan on 17th June 2013

    Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?

    The arguments presented in this article seem like a good if somewhat long presentation of the general problem, and could be applied in many fields besides medicine. (Note that the comments on the article rapidly become an argument about global warming.) The same problems are also seen in the work of bloggers, journalists and “experts” who specialize in popular health, finance, relationship and other topics and have created entire advice industries out of appeals to the authority of often poorly designed studies. The world would be a better place if students of medicine, law and journalism were forced to study basic statistics and experimental design. Anecdote is not necessarily invalid; study results are not necessarily correct and are often wrong or misleading.

    None of this is news, and good researchers understand the problems. However, not all researchers are competent, a few are dishonest and the research funding system and academic careerism unintentionally create incentives that make the problem worse.
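
    One mechanism behind the bad results is easy to demonstrate. Here is a minimal sketch in Python (simulated data; the |t| > 2 cutoff is an assumption standing in for p < 0.05) of how testing many true-null hypotheses manufactures a steady supply of spurious “significant” findings:

        import random
        import statistics

        def welch_t(a, b):
            """Welch's t statistic for two independent samples."""
            va, vb = statistics.variance(a), statistics.variance(b)
            return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

        random.seed(42)
        n_studies, n_per_group, false_positives = 1000, 30, 0
        for _ in range(n_studies):
            # Treatment and control come from the same distribution: every
            # true effect is zero, so any "discovery" below is pure noise.
            treatment = [random.gauss(0, 1) for _ in range(n_per_group)]
            control = [random.gauss(0, 1) for _ in range(n_per_group)]
            if abs(welch_t(treatment, control)) > 2.0:  # roughly p < 0.05
                false_positives += 1

        print(f"{false_positives} of {n_studies} null studies look 'significant'")
        # Expect roughly 50, i.e. about 5% -- before any selective reporting,
        # which journals' preference for positive results then amplifies.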

    (Thanks to Madhu Dahiya for her thoughtful comments.)

    Posted in Academia, Medicine, Science, Statistics, Systems Analysis, Video | 13 Comments »

    Five Thought-Provoking Statistics Problems

    Posted by David Foster on 28th November 2012

    here

    Posted in Statistics | 15 Comments »

    Why is the election so close?

    Posted by Michael Kennedy on 9th September 2012

    I have been watching the trends in the election campaign thus far. I actually watched much more of both conventions than I expected to. My present question is: why is this election so close? Powerline blog asks the same question and reaches a rather gloomy conclusion.

    But it now appears that the election will be very close after all, and that Obama might even win it. It will require a few more days to assess the effects (if any) of the parties’ two conventions, but for now it looks as though the Democrats emerged with at least a draw, despite a convention that was in some ways a fiasco. In today’s Rasmussen survey, Obama has regained a two point lead over Romney, 46%-44%. Scott Rasmussen writes:

    The president is enjoying a convention bounce that has been evident in the last two nights of tracking data. He led by two just before the Republican convention, so he has already erased the modest bounce Romney received from his party’s celebration in Tampa. Perhaps more significantly, Democratic interest in the campaign has soared. For the first time, those in the president’s party are following the campaign as closely as GOP voters.

    John Hinderaker comes to the following conclusion, at least tentatively.

    On paper, given Obama’s record, this election should be a cakewalk for the Republicans. Why isn’t it? I am afraid the answer may be that the country is closer to the point of no return than most of us believed. With over 100 million Americans receiving federal welfare benefits, millions more going on Social Security disability, and many millions on top of that living on entitlement programs–not to mention enormous numbers of public employees–we may have gotten to the point where the government economy is more important, in the short term, than the real economy. My father, the least cynical of men, used to quote a political philosopher to the effect that democracy will work until people figure out they can vote themselves money. I fear that time may have come.

    I have several other theories that are more optimistic. The polls may be wrong for several reasons. Citizens have been deluged with accusations of racism by frantic Democrats, and those who plan to vote for Romney may simply be misleading pollsters. Something similar happened in California about 30 years ago: Tom Bradley, the black mayor of Los Angeles, was ahead in the polls going into the 1982 governor’s race, yet he lost despite how things looked on election day. Absentee ballots were credited with turning the result into a win for George Deukmejian, his GOP rival. The racial effect is still disputed.

    Two theories of the racial effect are in competition. One holds that white voters are less likely to vote for a black candidate. The fact that a number of black officeholders, including retired lieutenant colonel Allen West, have been elected by majority-white districts casts doubt on that theory. The other is that white voters are reluctant to disclose their voting preferences to pollsters, which might expose them to charges of racism. Voting against Obama is widely attributed to racism by Democrats and, especially, the progressive left.

    It is not clear if either of these theories has validity. It would be very depressing to think the theory of dependency on government is valid.

    Read the rest of this entry »

    Posted in Conservatism, Economics & Finance, Elections, History, Leftism, Obama, Statistics | 48 Comments »

    John Derbyshire

    Posted by Michael Kennedy on 7th April 2012

    A favorite writer, usually seen at National Review but widely published, has created a firestorm of political correctness with an article he wrote for another magazine. John Derbyshire is a mathematician and a curmudgeon of the satiric variety. I think I have read all of his books, several of which are not an easy read. His We Are Doomed had me laughing so hard I cried. My review is here.

    His current outrage is to have written: “There is a talk that nonblack Americans have with their kids, too. My own kids, now 19 and 16, have had it in bits and pieces as subtopics have arisen. If I were to assemble it into a single talk, it would look something like the following.”

    * * * * * * * * * * * * *

    (1) Among your fellow citizens are forty million who identify as black, and whom I shall refer to as black. The cumbersome (and MLK-noncompliant) term “African-American” seems to be in decline, thank goodness. “Colored” and “Negro” are archaisms. What you must call “the ‘N’ word” is used freely among blacks but is taboo to nonblacks.

    (2) American blacks are descended from West African populations, with some white and aboriginal-American admixture. The overall average of non-African admixture is 20-25 percent. The admixture distribution is nonlinear, though: “It seems that around 10 percent of the African American population is more than half European in ancestry.” (Same link.)

    (3) Your own ancestry is mixed north-European and northeast-Asian, but blacks will take you to be white.

    Derbyshire’s wife is Chinese and his kids are mixed-race Chinese-Caucasian.

    (4) The default principle in everyday personal encounters is, that as a fellow citizen, with the same rights and obligations as yourself, any individual black is entitled to the same courtesies you would extend to a nonblack citizen. That is basic good manners and good citizenship. In some unusual circumstances, however—e.g., paragraph (10h) below—this default principle should be overridden by considerations of personal safety.

    (5) As with any population of such a size, there is great variation among blacks in every human trait (except, obviously, the trait of identifying oneself as black). They come fat, thin, tall, short, dumb, smart, introverted, extroverted, honest, crooked, athletic, sedentary, fastidious, sloppy, amiable, and obnoxious. There are black geniuses and black morons. There are black saints and black psychopaths. In a population of forty million, you will find almost any human type. Only at the far, far extremes of certain traits are there absences. There are, for example, no black Fields Medal winners. While this is civilizationally consequential, it will not likely ever be important to you personally. Most people live and die without ever meeting (or wishing to meet) a Fields Medal winner.

    So far, despite the outrage, this seems pretty benign to me. (Probably evidence of my own racism)

    Here comes trouble:

    (7) Of most importance to your personal safety are the very different means for antisocial behavior, which you will see reflected in, for instance, school disciplinary measures, political corruption, and criminal convictions.

    He is writing about means, but few readers made that distinction, and many may have no idea what a “mean” is.
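
    For readers unfamiliar with the term: a mean is a group average, and group means say little about individuals. A minimal, abstract sketch (invented numbers, generic groups) of how two populations can differ in their means while individuals from each overlap heavily:

        import random

        random.seed(1)
        # Two hypothetical groups whose means differ by half a standard deviation.
        group_a = [random.gauss(0.0, 1.0) for _ in range(100_000)]
        group_b = [random.gauss(0.5, 1.0) for _ in range(100_000)]

        mean_a = sum(group_a) / len(group_a)
        mean_b = sum(group_b) / len(group_b)
        print(f"mean of A: {mean_a:.2f}, mean of B: {mean_b:.2f}")

        # Despite the gap in means, a random member of A still exceeds a
        # random member of B about 36% of the time: the mean describes the
        # group, not any individual drawn from it.
        wins = sum(a > b for a, b in zip(group_a, group_b))
        print(f"P(A > B) = {wins / len(group_a):.2f}")
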
    Read the rest of this entry »

    Posted in Blogging, Civil Society, Crime and Punishment, Human Behavior, Statistics, Urban Issues | 52 Comments »

    Estimating Odds

    Posted by Jonathan on 22nd March 2012

    From a comment by “Eggplant” at Belmont Club:

    Supposedly the US has war gamed this thing and the prospects look poor. A war game is only as good as the assumptions programmed into it. Can the war game be programmed to consider the possibility that a single Iranian leader has access to an ex-Soviet nuke and is crazed enough to use it?
     
    Of course the answer is “No Way”.
     
    A valid war game would be a Monte Carlo simulation that considered a range of possible scenarios. However the tails of that Gaussian distribution would offer extremely frightening scenarios. The Israelis are in the situation where truly catastrophic scenarios have tiny probability but the expectation value [consequence times probability] is still horrific. However “fortune favors the brave”. Also being the driver of events is almost always better than passively waiting and hoping for a miracle. That last argument means the Israelis will launch an attack and probably before the American election.

    These are important points. The outcomes of simulations, including the results of focus groups used in business and political marketing, may be path-dependent. If they are, the results of any one simulation may be misleading, and it may be tempting to game the starting assumptions in order to nudge the output in the direction you want. It is much better if you can run many simulations using a wide range of inputs. Then you can say something like: We ran 100 simulations using the parameter ranges specified below and found that the results converged on X in 83 percent of the cases. Or: We ran 100 simulations and found no clear pattern in the results as long as Parameter Y was in the range 20-80. And by the way, here are the data. We don’t know the structure of the leaked US simulation of an Israeli attack on Iran and its aftermath.
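
    A minimal sketch of the approach, with an entirely invented toy model (the scenario function, parameter range, and cost units are assumptions for illustration, not anything from an actual war game):

        import random

        def simulate(p_escalation, rng):
            """Toy path-dependent scenario; returns the total 'cost' of one run."""
            cost, escalated = 0.0, False
            for _ in range(50):
                if not escalated and rng.random() < p_escalation:
                    escalated = True  # one rare event changes the whole path
                cost += rng.expovariate(1.0) * (10.0 if escalated else 1.0)
            return cost

        rng = random.Random(0)
        runs = 100
        # Sample the uncertain input over a range instead of fixing one value.
        outcomes = sorted(simulate(rng.uniform(0.001, 0.05), rng) for _ in range(runs))

        median = outcomes[runs // 2]
        near_median = sum(abs(x - median) / median < 0.5 for x in outcomes)
        print(f"{near_median}/{runs} runs within 50% of the median outcome")
        print(f"worst 5% of runs cost at least {outcomes[int(runs * 0.95)]:.0f}")
        # The tail matters even when it is rare: a catastrophic outlier can
        # dominate the expectation (consequence times probability).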

    It’s also true, as Eggplant points out, that the Israelis have to consider outlier possibilities that may be highly unlikely but would be catastrophic if they came to pass. These are possibilities that might show up only a few times or not at all in the output of a hypothetical 100-run Monte Carlo simulation. But such possibilities must still be taken into account because 1) they are theoretically possible and sufficiently bad that they cannot be allowed to happen under any circumstances and 2) the simulation-based probabilities may be inaccurate due to errors in assumptions.

    Posted in Human Behavior, Iran, Israel, National Security, Predictions, Quotations, Statistics, Systems Analysis, War and Peace | 16 Comments »

    October Jobs and Unemployment Numbers for Wisconsin

    Posted by Dan from Madison on 17th November 2011

    This is a terribly boring post I am writing, so I will put the rest under the fold for those of you who are interested in our job market here in Wisconsin. So, just how is Wisconsin doing under Governor Walker?
    Read the rest of this entry »

    Posted in Business, Statistics | 5 Comments »

    Does this sound familiar?

    Posted by Michael Kennedy on 10th September 2011

    The science community is now closing in on an example of scientific fraud at Duke University. The story sounds awfully familiar.

    ANIL POTTI, Joseph Nevins and their colleagues at Duke University in Durham, North Carolina, garnered widespread attention in 2006. They reported in the New England Journal of Medicine that they could predict the course of a patient’s lung cancer using devices called expression arrays, which log the activity patterns of thousands of genes in a sample of tissue as a colourful picture. A few months later, they wrote in Nature Medicine that they had developed a similar technique which used gene expression in laboratory cultures of cancer cells, known as cell lines, to predict which chemotherapy would be most effective for an individual patient suffering from lung, breast or ovarian cancer.
     
    At the time, this work looked like a tremendous advance for personalised medicine—the idea that understanding the molecular specifics of an individual’s illness will lead to a tailored treatment.

    This would be an incredible step forward; predicting sensitivity to anti-tumor drugs is the holy grail of chemotherapy.

    Unbeknown to most people in the field, however, within a few weeks of the publication of the Nature Medicine paper a group of biostatisticians at the MD Anderson Cancer Centre in Houston, led by Keith Baggerly and Kevin Coombes, had begun to find serious flaws in the work.
     
    Dr Baggerly and Dr Coombes had been trying to reproduce Dr Potti’s results at the request of clinical researchers at the Anderson centre who wished to use the new technique. When they first encountered problems, they followed normal procedures by asking Dr Potti, who had been in charge of the day-to-day research, and Dr Nevins, who was Dr Potti’s supervisor, for the raw data on which the published analysis was based—and also for further details about the team’s methods, so that they could try to replicate the original findings.

    Any analysis of another’s work must begin with the raw data.

    Dr Potti and Dr Nevins answered the queries and publicly corrected several errors, but Dr Baggerly and Dr Coombes still found the methods’ predictions were little better than chance. Furthermore, the list of problems they uncovered continued to grow. For example, they saw that in one of their papers Dr Potti and his colleagues had mislabelled the cell lines they used to derive their chemotherapy prediction model, describing those that were sensitive as resistant, and vice versa. This meant that even if the predictive method the team at Duke were describing did work, which Dr Baggerly and Dr Coombes now seriously doubted, patients whose doctors relied on this paper would end up being given a drug they were less likely to benefit from instead of more likely.

    In other words, the raw data was a mess. The results had to be random.

    Read the rest of this entry »

    Posted in Academia, Bioethics, Environment, Health Care, Science, Statistics | 17 Comments »

    I thought I recognized that name!

    Posted by Michael Kennedy on 3rd September 2011

    Obama has announced his new appointment for economic adviser: a Princeton economist named Alan Krueger. I am not an economist or an expert on economists, but that name rang a faint bell. Then I saw that someone else had remembered him, too.

    In a 1994 paper published in the American Economic Review, economists David Card and Alan Krueger (appointed today to chair Obama’s Council of Economic Advisers) made an amazing economic discovery: Demand curves for unskilled workers actually slope upward! Here’s a summary of their findings (emphasis added):
     
    “On April 1, 1992 New Jersey’s minimum wage increased from $4.25 to $5.05 per hour. To evaluate the impact of the law we surveyed 410 fast food restaurants in New Jersey and Pennsylvania before and after the rise in the minimum. Comparisons of the changes in wages, employment, and prices at stores in New Jersey relative to stores in Pennsylvania (where the minimum wage remained fixed at $4.25 per hour) yield simple estimates of the effect of the higher minimum wage. Our empirical findings challenge the prediction that a rise in the minimum reduces employment. Relative to stores in Pennsylvania, fast food restaurants in New Jersey increased employment by 13 percent.”
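
    The comparison described in that abstract is a difference-in-differences design. A minimal sketch of the arithmetic, with invented employment numbers (not Card and Krueger’s data):

        # Hypothetical average employees per store before and after the NJ
        # minimum-wage increase (made-up numbers for illustration only).
        nj_before, nj_after = 20.0, 21.0   # treated state
        pa_before, pa_after = 23.0, 21.5   # control state (wage unchanged)

        # PA's change estimates what would have happened in NJ without the law;
        # subtracting it from NJ's change isolates the law's apparent effect.
        did = (nj_after - nj_before) - (pa_after - pa_before)
        print(f"difference-in-differences estimate: {did:+.1f} employees per store")
        # The estimate is only as good as the measurements feeding it, which is
        # why the later critiques focused on the survey data itself.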

    This was tremendous news, especially for Democrats. Raising the minimum wage did not increase unemployment, contrary to what classical economics had said since the issue first arose.

    Unfortunately, their study was soon ripped apart by other economists who used more objective methodology.

    It was only a short time before the fantastic Card-Krueger findings were challenged and debunked by several subsequent studies:
     
    1. In 1995 (and updated in 1996) The Employment Policies Institute released “The Crippling Flaws in the New Jersey Fast Food Study”and concluded that “The database used in the New Jersey fast food study is so bad that no credible conclusions can be drawn from the report.”
     
    2. Also in 1995, economists David Neumark and William Wascher used actual payroll records (instead of survey data used by Card and Krueger) and published their results in an NBER paper with an amazing finding: Demand curves for unskilled labor really do slope downward, confirming 200 years of economic theory and mountains of empirical evidence (emphasis below added):

    I would suggest reading the entire post, which demolishes the Krueger and Card study. This is the new Chairman of the Council of Economic Advisers: another academic with no real-world experience, and this one is incompetent even as an academic. Spengler has a few words on the matter as well.

    Posted in Economics & Finance, Obama, Politics, Statistics | 2 Comments »

    Poverty and Statistics

    Posted by Michael Kennedy on 18th July 2011

    I am repairing a gap in my education by reading Thomas Sowell’s classic, The Vision of the Anointed, which was published in 1995 but is still, unfortunately, as valid a critique of leftist thought as it was then. As an example of his methods, he constructs an experiment in statistics. This concerns poverty and inequality and, in particular, the poverty of leftist thinking.

    He imagines an artificial population that has absolute equality in income. Each individual begins his (or her) working career at age 20 with an income of $10,000 per year. For simplicity’s sake, we imagine that each of these workers remains equal in income and, at age 30, receives a $10,000 raise. They remain exactly equal through the subsequent decades, each receiving a further $10,000 raise at the start of each decade, until age 70, when each retires and income returns to zero.

    All these individuals have identical savings patterns. They each spend $5,000 per year on subsistence needs and save 10% of earnings above subsistence. The rest they use to improve their current standard of living. What statistical measures of income and wealth would emerge from such a perfectly equal pattern of income, savings and wealth?
     

    Age   Annual Income   Subsistence   Annual Savings   Lifetime Savings
    20    $10,000         $5,000        $500             $0
    30    $20,000         $5,000        $1,500           $5,000
    40    $30,000         $5,000        $2,500           $20,000
    50    $40,000         $5,000        $3,500           $45,000
    60    $50,000         $5,000        $4,500           $80,000
    70    $0              $5,000        $0               $125,000

     


    Now, let us look at the inequities created by this perfectly equal income distribution. The top 17% of income earners have five times the income of the bottom 17%, and the top 17% of savers have 25 times the savings of the bottom 17% (ignoring those with zero in each category). If the data were aggregated and considered in “class” terms, we would find that 17% of the people hold 45% of all the accumulated savings for the whole society. Taxes are, of course, ignored.
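
    Sowell’s arithmetic is easy to verify; a few lines of Python reproduce the table above and the quoted ratios:

        SUBSISTENCE, SAVE_RATE = 5_000, 0.10
        incomes = {20: 10_000, 30: 20_000, 40: 30_000, 50: 40_000, 60: 50_000, 70: 0}

        # Annual savings: 10% of earnings above subsistence.
        annual = {age: SAVE_RATE * max(inc - SUBSISTENCE, 0) for age, inc in incomes.items()}

        # Lifetime savings at the moment each age is reached.
        lifetime, total = {}, 0.0
        for age in sorted(incomes):
            lifetime[age] = total
            total += annual[age] * 10  # accumulate over the coming decade

        earners = [v for v in incomes.values() if v > 0]
        savers = [v for v in lifetime.values() if v > 0]
        print(f"top/bottom income ratio:  {max(earners) / min(earners):.0f}x")   # 5x
        print(f"top/bottom savings ratio: {max(savers) / min(savers):.0f}x")     # 25x
        share = max(lifetime.values()) / sum(lifetime.values())
        print(f"share of all savings held by the top sixth: {share:.0%}")        # ~45%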

    What about a real-world example? Stanford, California, in the 1990 census, had one of the highest poverty rates in the Bay Area, the largely wealthy region surrounding San Francisco Bay. Stanford, as a community, had a higher poverty rate than East Palo Alto, a low-income minority community nearby. Why? While undergraduate students living in dormitories are not counted as residents in census data, graduate students living in campus housing are counted. During the time I was a medical student, and even during part of my internship and residency training, my family was eligible for food stamps. The census data describing the Stanford area do not include all the amenities provided for students and their families, making the comparison even less accurate. This quintile of low-income students will move to a high quintile, if not the highest, within a few years of completing graduate school. A few, like the Google founders, will acquire great wealth rather quickly. None of this is evident in the statistics.

    Statistics on poverty and income equality are fraught with anomalies like those described by Professor Sowell. That does not prevent their use in furthering the ambitions of the “anointed.”

    Posted in Civil Society, Conservatism, Economics & Finance, Education, Human Behavior, Leftism, Personal Finance, Politics, Statistics | 8 Comments »

    Happy Birthday, Emlyn, and Applause, xkcd

    Posted by Charles Cameron on 20th March 2011

    [ by Charles Cameron -- cross-posted from Zenpundit ]

    *

    My son, Emlyn, turns sixteen today.

    He’s not terribly fond of computers to be honest — but he does follow xkcd with appreciation, as do I from time to time: indeed, I am led to believe I receive some credit for that fact.

    So… this is a birthday greeting to Emlyn, among other things. And a round of applause for Randall Munroe, creator of xkcd. And a post comparing more reliable and less reliable statistics, because that’s a singularly important issue — the more reliable ones in this case coming from a single individual with an expert friend, the less reliable ones coming from a huge corporation celebrated for its intelligence and creativity… and with a hat-tip to Cheryl Rofer of the Phronesisaical blog.

    The DoubleQuote:

    [image: quoxkcd-01.jpg]

    Radiation exposure:

    Today, xkcd surpassed itself / his Randallself / ourselves, with a graphic showing different levels of radiation exposure from sleeping next to someone (0.05 µSv, represented by one tiny blue square top left) or eating a banana (twice as dangerous, but only a tenth as nice) up through the levels (all the blue squares combined equal three of the tiny green ones, all the green squares combined equal 7.5 of the little brown ones, and the largest patch of brown (8 Sv) is the level where immediate treatment doesn’t stand a chance of saving your life)…

    The unit is the sievert (Sv): 1,000 µSv = 1 mSv, and 1,000 mSv = 1 Sv. Sleeping next to someone is an acceptable risk at 0.05 µSv; a mammogram (3 mSv) delivers 60,000 times that dose and saves countless lives; 250 mSv is the dose limit for emergency workers in life-saving ops — oh, and cell phone use is risk-free, zero µSv, radiation-wise, although dangerous when driving.
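
    A quick sketch of that unit arithmetic (the 3 mSv mammogram figure is the post’s; real doses vary):

        # Sievert unit ladder: 1 Sv = 1,000 mSv = 1,000,000 uSv.
        USV_PER_MSV, MSV_PER_SV = 1_000, 1_000

        sleeping_next_to_someone = 0.05                # uSv
        mammogram = 3 * USV_PER_MSV                    # 3 mSv, per the post
        emergency_worker_limit = 250 * USV_PER_MSV     # 250 mSv

        print(f"mammogram vs. sharing a bed: {mammogram / sleeping_next_to_someone:,.0f}x")
        print(f"8 Sv (treatment futile) = {8 * MSV_PER_SV * USV_PER_MSV:,} uSv")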

    The xkcd diagram comes with this disclaimer:

    There’s a lot of discussion of radiation from the Fukushima plants, along with comparisons to Three Mile Island and Chernobyl. Radiation levels are often described as “ times the normal level” or “% over the legal limit,” which can be pretty confusing.
     
    Ellen, a friend of mine who’s a student at Reed and Senior Reactor Operator at the Reed Research Reactor, has been spending the last few days answering questions about radiation dosage virtually nonstop (I’ve actually seen her interrupt them with “brb, reactor”). She suggested a chart might help put different amounts of radiation into perspective, and so with her help, I put one together. She also made one of her own; it has fewer colors, but contains more information about what radiation exposure consists of and how it affects the body.
     
    I’m not an expert in radiation and I’m sure I’ve got a lot of mistakes in here, but there’s so much wild misinformation out there that I figured a broad comparison of different types of dosages might be good anyway. I don’t include too much about the Fukushima reactor because the situation seems to be changing by the hour, but I hope the chart provides some helpful context.

    Blog-friend Cheryl Rofer, whose work has included remediation of uranium tailings at the Sillamäe site in Estonia (she co-edited the book on it, Turning a Problem Into a Resource: Remediation and Waste Management at the Sillamäe Site, Estonia) links to xkcd’s effort at the top of her post The Latest on Fukushima and Some Great Web Resources and tells us it “seems both accurate and capable of giving some sense of the relative exposures that are relevant to understanding the issues at Fukushima” — contrast her comments on a recent New York Times graphic:

    In other radiation news, the New York Times may have maxed out on the potential for causing radiation hysteria. They’ve got a graphic that shows everybody dead within a mile from the Fukushima plant. As I noted yesterday, you need dose rate and time to calculate an exposure. The Times didn’t bother with that second little detail.

    In any case, many thanks, Cheryl — WTF, NYT? — and WTG, xkcd!

    Google:

    Once again, xkcd nails it.

    I’ve run into this problem myself, trying to use Google to gauge the relative frequencies of words or phrases that interest me — things like moshiach + soon vs “second coming” + soon vs mahdi + soon, the kinds of things I’m curious about. I forget the specific examples where it finally dawned on me how utterly useless Google’s “About XYZ,000 results (0.21 seconds)” counts really are — but the word needs to get out.

    Feh!

    Paging Edward Tufte.

    Sixteen today:

    Happy Birthday, Emlyn!

    Posted in Announcements, Arts & Letters, Blogging, Diversions, Internet, Japan, Science, Statistics, The Press | 4 Comments »

    Hah, Hah, I Was Right, Thrppppt!

    Posted by Shannon Love on 24th October 2010

    One shouldn’t gloat but…

    I was right about the bogus Lancet Iraq Mortality Survey.

    There were actually two studies done by the same Soros-funded group of “researchers”. I did fourteen posts on the first study back in 2004-2005, and I demolished its conclusions using simple methodological arguments that did not require a degree in statistics to understand. The study was so bad and so transparently wrong that you didn’t need to understand anything about statistics or epidemiological methodology; you just needed to know a little history and have a basic concept of scale.

    In my very first post on the subject I predicted that:

    Needless to say, this study will become an article of faith in certain circles but the study is obviously bogus on its face.[emp. added]

    That prediction proved true. Leftists all over the world not only accepted the 600%-inflated figure without hesitation but actively defended the study and its methodology. I confidently made that prediction almost exactly six years ago because I was even then beginning to understand a factor in leftists’ behavior: they are nearly completely controlled by delusional narrative.

    Read the rest of this entry »

    Posted in Iraq, Leftism, Science, Statistics | 23 Comments »

    Industry Leanings In Things Political

    Posted by Joseph Fouche on 23rd March 2010

    Data analysis guru and fellow Pythonista Drew Conway of Zero Intelligence Agents linked to Ideological Cartography, a blog whose author, Adam Bonica, posts interesting visualizations of political data. This post (Ideologically aligned and ideologically divided industries) charts the left-right ideological leanings of people in various industries, as revealed by their campaign contributions (all data is from 2008):

    Read the rest of this entry »

    Posted in Politics, Statistics | 8 Comments »

    The Public-Health Fallacy

    Posted by Jonathan on 22nd November 2009

    The discussion at this otherwise-good Instapundit post is typical.

    The discussion is misframed. The question isn’t whether a specific medical procedure is a good idea. The question is who gets to make the decisions.

    This is a comment that I left on a recent Neo-Neocon post:

    It’s the public-health fallacy, the confusion (perhaps willful, on the part of socialized-medicine proponents) between population outcomes and individual outcomes. Do you know how expensive that mammogram would be if every woman had one? The implication is that individuals should make decisions based on averages, the greatest good for the greatest number.
     
    The better question is, who gets to decide. The more free the system, the more that individuals can weigh their own costs and benefits and make their own decisions. The more centralized the system, the more that one size must fit all — someone else makes your decisions for you according to his criteria rather than yours.
     
    In a free system you can have fewer mammograms and save money or you can have more mammograms and reduce your risk. Choice. In a government system, someone like Kathleen Sebelius will make your decision for you, and probably not with your individual welfare as her main consideration.

    Even in utilitarian terms — the greatest good for the greatest number — governmental monopolies only maximize economic welfare if the alternative system is unavoidably burdened with free-rider issues. This is why national defense is probably best handled as a governmental monopoly: on an individual basis people benefit as much if they don’t pay their share for the system as if they do. But medicine is not so burdened, because despite economic externalities under the current system (if I don’t pay for my treatment its cost will be shifted to paying customers) there is no reason why the market for insurance and medical services can’t work like any other market, since medical customers have strong individual incentive to get the best treatment and (in a well-designed pricing system) value for their money. The problems of the current medical system are mostly artifacts of third-party payment and over-regulation, and would diminish if we changed the system to put control over spending decisions back into the hands of patients. The current Democratic proposal is a move in the opposite direction.

    Posted in Economics & Finance, Medicine, Rhetoric, Science, Statistics | 7 Comments »

    Anniversary Comparison

    Posted by Jay Manifold on 9th November 2009

    Amazon search on “revolution 1848“: 17,292 results

    Amazon search on “revolution 1989“: 7,972 results

    Posted in Academia, Anti-Americanism, Europe, History, Leftism, Political Philosophy, Statistics | 2 Comments »

    Compare and Contrast

    Posted by James R. Rummel on 2nd September 2009

    Murdoc very kindly gave us a heads-up to this fascinating photo blog. Pictures taken in Normandy during the 1944 invasion are compared side by side with images taken from the very same spot today. Looks like they cleaned up the place a bit since then.

    Uncle points us to this photo array. The weekly food intake of families from various parts of the world is shown in graphic fashion, and the money spent is tabulated. Makes me proud to be an American.

    Well worth your time.

    Posted in Economics & Finance, History, Photos, Society, Statistics, War and Peace | 4 Comments »

    WHO’s Shotgun Statistics

    Posted by Shannon Love on 13th May 2009

    Instapundit links to a story on a WHO report on swine flu. This bit caught my eye:

    Ferguson and his collaborators, part of the World Health Organization’s (WHO) Rapid Pandemic Potential Assessment Collaboration, determined that 6,000–32,000 individuals had been infected in Mexico by late April. 

    Translating from media-speak, 6,000-32,000 actually means a 95% confidence interval of 19,000 plus or minus 13,000! That’s not a statistic, it’s a shotgun blast of mathematical pellets. At long range.
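
    The translation is simple arithmetic; a quick sketch:

        low, high = 6_000, 32_000     # WHO's reported infection range for Mexico
        midpoint = (low + high) / 2   # 19,000
        spread = (high - low) / 2     # +/- 13,000
        print(f"{midpoint:,.0f} +/- {spread:,.0f} "
              f"({spread / midpoint:.0%} relative uncertainty)")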

    All the rest of the calculations seem to descend from this dubious guess. Why do they even bother? As I’ve written before, bad data are worse than no data at all. 

    [Note to grammar nazis: Technically, data is a plural. Datum is the singular.]

    Posted in Science, Statistics | 3 Comments »

    Observational Bias in Mass-Shooting Stories

    Posted by Shannon Love on 6th May 2009

    Why do we spend so much money on fireproofing buildings when we seem to have so few major fires?

    Via Instapundit comes this news story of an armed college student preventing a mass killing. I think the most interesting facet of the story is where it was reported. This story of a lawful citizen killing a home invader and preventing a mass killing didn’t appear in the New York Times, just on the website of a local TV station.

    On the other hand, had the criminals carried out their apparent plan to murder the 10 victims in the apartment, does anyone doubt that such a horrible crime would have made nationwide news in every form of media? Does anyone doubt that a blizzard of opinion pieces would claim the murders as evidence of the need to disarm the citizenry?  

    Read the rest of this entry »

    Posted in Media, RKBA, Statistics | 3 Comments »

    Flu and Mortality

    Posted by Carl from Chicago on 1st May 2009

    I am far from an expert on medicine, but I was interested in the difference in mortality between Mexico and the United States in this latest outbreak of swine flu. After reading many of the accounts, I noted that many of the individuals in Mexico did not have access to health care and/or delayed going to the doctor, using home remedies or self-medicating until their situation was very bad.

    The victims seem to be dying of what is basically pneumonia. Pneumonia is a serious condition, and if left untreated (or not treated until far into its course) it can be deadly, even here in the US. I know several individuals here in Chicago whose families have had some form of pneumonia in recent months, and while they missed work and obviously had high concern for any youngsters with the symptoms, they were all treated and came back fine after being ill or out of work for a while.

    What you are likely seeing in the difference in mortality is the difference between a broadly based, functioning health care system in a rich society and a semi-functioning one in a poorer society. Mexico is a fairly developed country; if this sort of flu broke out in Africa it probably wouldn’t even be noticed among the endemic diseases and preventable fatalities that, sadly, happen every day. As I note in a recent post about Angola, one of the richer African countries (they have oil revenues), a significant portion of its total health care budget goes to sending the leader’s richest friends and family overseas to foreign doctors, which shows where their priorities lie.

    The media won’t come out and say it directly, because it may be perceived as offensive to Mexican sensibilities, but the difference in mortality rates seems almost solely due to differences in the effectiveness of the two countries’ overall health care systems.

    Posted in Americas, Statistics | 12 Comments »

    Delayed Vindication

    Posted by James R. Rummel on 1st May 2009

    Shannon Love was taken to task by the anti-war left back in 2004. He drew their ire because he dared to question the wisdom of a suspicious study that appeared in the Lancet. The study claimed that about 100,000 civilians died in Iraq during the first year after US forces invaded.

    Why was this suspicious? Mainly because the authors of the study laid the blame for the deaths at the feet of the Coalition, the number of deaths was ten times higher than any other credible estimate, and the study was released just in time to affect the 2004 US elections.

    (If you are interested in the back and forth, this post is a roundup of all essays discussing the study.)

    Strategypage reports that the Iraqi government has just released the findings of a study of its own.

    “The government has released data showing that 110,000 Iraqis have died, mostly from sectarian and terrorist violence, since 2003.”

    So the 100K figure is finally correct, only five years after it was first reported. And Coalition forces didn’t cause the majority of the deaths; terrorists, criminals, and blood feuds are to blame.

    Does this matter now, five years after the fact?

    Read the rest of this entry »

    Posted in Iraq, Leftism, Statistics, War and Peace | 7 Comments »

    This Debate Would Be Over If the Other Side was Rational

    Posted by James R. Rummel on 12th April 2009

    One of the tactics used by those who advocate banning privately owned firearms is to point out that Great Britain enjoys a lower homicide rate than the United States. The idea is that we could have lower murder rates, if only guns were banned.

    Part of their argument is true. The US has a homicide rate about 2.5 times that of the UK.

    Kevin of The Smallest Minority points out some painful truths about this assertion. He notes that the US homicide rate used to be much higher but has fallen even as more states have passed laws allowing private citizens to carry concealed firearms. At the same time, rates of violent crime, and of crime in general, have been climbing in the UK even as ever more laws restricting legal self-defense are passed.

    Seems simple enough. They restrict weapons in the UK, and crime goes up. We allow more people to carry firearms here in the US, and crime goes down. Even if there are other reasons which affected this outcome (and there are), the very idea that banning guns will lead to less crime has been completely discredited. Right?

    I wish!

    Posted in Crime and Punishment, Law, RKBA, Statistics | 12 Comments »

    Money (Basket) Ball

    Posted by Carl from Chicago on 14th March 2009

    Michael Lewis is a great journalist and the author of several books that Dan and I highly recommend. “Moneyball” tells the story of the Oakland A’s and how they used statistics and a novel view of baseball to win a lot of games on a small budget, as well as the story of Billy Beane, who went from can’t-miss, four-tool prospect to MLB bust, and then on to revolutionize baseball as general manager of the A’s. “The Blind Side” explained the evolution of the left tackle in the NFL from an also-ran to one of the most important positions on the field, along with a lucid and excellent description of the evolution of the passing offense, which sadly enough has apparently never been read by our beloved Chicago Bears. The book also featured Michael Oher, who was plucked from total obscurity to start at Ole Miss, the only team that knocked off eventual NCAA champion Florida last season. Outside of sports, Michael Lewis also wrote the famous book “Liar’s Poker,” which explained the rise of bond trading at Salomon Brothers and is a Wall Street classic.

    Recently Michael Lewis wrote an article on basketball, “The No-Stats All-Star,” that appeared in the NY Times Magazine; to find it, go to the NY Times site and search for the title.

    In this article, Michael Lewis takes on basketball the same way he took on baseball and football, above. He is attempting to do what the best journalists do – tie in the “human element” with an original analysis of a complex topic. The key to Michael Lewis’ writing is that his human element actually matters and isn’t just fluff to glue the story together.
    Read the rest of this entry »

    Posted in Sports, Statistics | 2 Comments »

    Quote of the Day

    Posted by Dan from Madison on 12th February 2009

    Anyway, if we weren’t supposed to eat animals, why are they made of meat?

    From comment 2 at this post at the Freakonomics Blog.

    Posted in Statistics, Vitamins | 7 Comments »

    Why Most of Us No Longer Read The Economist

    Posted by Jonathan on 2nd October 2008

    I just received a press release promoting The Economist‘s new survey of academic economists about McCain’s and Obama’s respective economic programs. Here are the results:


    What’s going on here?

    This is a junk survey. Look at the data. Now look at the article.

    Here’s The Economist‘s explanation of how they generated a survey sample:

    Our survey is not, by any means, a scientific poll of all economists. We e-mailed a questionnaire to 683 research associates, all we could track down, of the National Bureau of Economic Research, America’s premier association of applied academic economists, though the NBER itself played no role in the survey. A total of 142 responded, of whom 46% identified themselves as Democrats, 10% as Republicans and 44% as neither. This skewed party breakdown may reflect academia’s Democratic tilt, or possibly Democrats’ greater propensity to respond. Still, even if we exclude respondents with a party identification, Mr Obama retains a strong edge—though the McCain campaign should be buoyed by the fact that 530 economists have signed a statement endorsing his plans.

    The stuff about 683 research associates and the NBER is meaningless. What matters is that this was an Internet poll arbitrarily restricted to academic economists and with a self-selected sample. This is a problem because:

    -Academic economists are likely to be more leftist than economists as a whole.

    -Only 14 out of the 142 respondents identified themselves as Republicans.

    -There is no way to know why only 10% of respondents identified as Republicans, but several possibilities implying gross sampling error are obvious. In other words, either most academic economists lean as far to the Left as do other academics, which seems unlikely and would impeach the survey results, or the sample is unrepresentative and impeaches the survey results. (A toy simulation of this kind of sampling skew appears after this list.)

    -The labels “Democratic economist”, “Republican economist” and “unaffiliated economist” are self-selected and may be inaccurate. My guess is that most of the unaffiliateds usually vote for Democrats even if they are not registered Democrats. In this regard I am reminded of media people who claim to be independent even though everyone knows they vote overwhelmingly for Democrats.
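
    That toy simulation, with an invented party mix and invented response rates, shows how differential response alone can skew a self-selected sample:

        import random

        random.seed(7)
        # Invented pool of 683 economists with an assumed true party mix...
        pool = ["Democrat"] * 300 + ["Republican"] * 200 + ["Independent"] * 183
        # ...where Democrats are assumed three times as likely to reply as Republicans.
        response_rate = {"Democrat": 0.30, "Republican": 0.10, "Independent": 0.20}

        respondents = [p for p in pool if random.random() < response_rate[p]]
        for party in ("Democrat", "Republican", "Independent"):
            share = respondents.count(party) / len(respondents)
            print(f"{party}: {share:.0%} of {len(respondents)} respondents")
        # The sample's party split now reflects who answered the e-mail, not
        # what the underlying population of economists looks like.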

    So this is a worthless survey for research purposes. It is not, however, worthless for business purposes, as I am sure it will generate a lot of discussion and outraged debunking by bloggers, and therefore a lot of traffic for The Economist‘s Web site. It may also help to get Obama elected, and perhaps that is part of the plan.

    Where have we seen this kind of politically driven statistical analysis before?

    UPDATE: The vagueness of the self-reported categorizations (“Republican”, “Democrat”, “independent”) is obvious. One wonders why the survey did not also, or as an alternative, ask respondents to report for whom they voted in recent elections.

    Posted in Anti-Americanism, Leftism, Media, Politics, Polls, Statistics, The Press, USA | 20 Comments »

    Nullification, Diffusion, and Probability

    Posted by Jay Manifold on 16th August 2008

    Via the usual source, we are directed to a Randy Barnett post over on VC, which in turn discusses Juror Becomes Fly in the Ointment. The key passage, largely ignored in subsequent discussion, is (emphasis added):
    Read the rest of this entry »

    Posted in Crime and Punishment, Human Behavior, Law, Law Enforcement, Political Philosophy, Society, Statistics | 12 Comments »