This is an interesting and entertaining article, a bit long but worth reading. I’m not sure that Lewis completely understands some of the concepts here (or maybe I don’t understand them), and I think he overpersonalizes the discussion by framing it as a narrative mostly about one person, which I suppose comes with the territory in journalism. It’s still quite a good article, however.
The central problem with pricing disaster risk is that disasters are by definition infrequent events, and there aren’t enough data to calculate odds with a high degree of confidence. It’s like the global-warming issue. We don’t have 100,000 years of accurate data, so instead we (that is, the better risk-modelers) take the accurate data from the past few decades, fill in the rest with estimates, put rough worst-case costs on particular disasters (not that hard to do), and run Monte Carlo simulations to estimate the odds of high-cost outcomes. If you do this carefully you can generate useful estimates.
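To make that concrete, here is a minimal sketch of the kind of Monte Carlo exercise I have in mind. Every number in it (storm frequency, loss sizes, the $20B threshold) is an illustrative assumption of mine, not anything from Lewis’s article or a real catastrophe model:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N_YEARS = 100_000     # simulated years -- far more than we have real data for
EVENT_RATE = 1.5      # assumed mean number of major storms per year (Poisson)
LOSS_MEDIAN = 2.0     # assumed median insured loss per storm, in $ billions
LOSS_SIGMA = 1.2      # lognormal shape parameter; larger => fatter tail
THRESHOLD = 20.0      # "disaster" level: total annual losses above $20B

def simulate_year() -> float:
    """Draw one simulated year's total insured loss, in $ billions."""
    n_events = rng.poisson(EVENT_RATE)
    if n_events == 0:
        return 0.0
    # Lognormal losses: most storms are cheap, a few are catastrophic.
    losses = rng.lognormal(mean=np.log(LOSS_MEDIAN), sigma=LOSS_SIGMA, size=n_events)
    return float(losses.sum())

annual_losses = np.array([simulate_year() for _ in range(N_YEARS)])

p_disaster = np.mean(annual_losses > THRESHOLD)
print(f"Estimated P(annual losses > ${THRESHOLD:.0f}B): {p_disaster:.4f}")
print(f"Mean annual loss: ${annual_losses.mean():.2f}B, "
      f"99.9th percentile: ${np.percentile(annual_losses, 99.9):.1f}B")
```

The point of the exercise is less the headline probability than its sensitivity: nudge the shape parameter of the loss distribution and the tail probability moves a lot, and that shape parameter is exactly the thing a few decades of quiet weather cannot pin down.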
Apparently, these techniques, which are common in finance, were applied to disaster modeling only quite recently. This happened because insurers were underpricing policies based on a few decades of quiet weather (anchoring bias), got blasted with huge claims after Hurricanes Andrew in 1992 and Katrina in 2005, and were forced to update their methods. Lewis’s hero is a quant who has been highly successful in pricing risk in the (fat?) tails of disaster-probability distributions and appears to be more street-smart about trading than Lewis’s initial description of him suggests. (See the toy comparison below for why the fat-tail question matters so much.)
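On the fat-tail point: the practical issue is that the probability of an extreme year, and hence the fair premium for covering it, looks wildly different under thin-tailed and fat-tailed assumptions. A toy comparison, with distributions and numbers chosen by me purely for illustration:

```python
# Chance of a "10x the typical loss" year under a thin-tailed (normal) model
# versus a fat-tailed (Pareto) model calibrated to the same mean loss.
from scipy import stats

mean_loss = 1.0     # normalize the typical annual loss to 1
extreme = 10.0      # an "extreme" year: 10x the typical loss

# Thin tail: normal with mean 1 and standard deviation 1.
p_normal = stats.norm(loc=mean_loss, scale=1.0).sf(extreme)

# Fat tail: Pareto with shape alpha = 2, scaled so its mean is also 1
# (Pareto mean = alpha * scale / (alpha - 1)).
alpha = 2.0
scale = mean_loss * (alpha - 1) / alpha
p_pareto = stats.pareto(b=alpha, scale=scale).sf(extreme)

print(f"P(loss > 10x typical), normal model: {p_normal:.2e}")
print(f"P(loss > 10x typical), Pareto model: {p_pareto:.2e}")
```

Under the normal model the extreme year is essentially impossible; under the Pareto model it happens a few times per thousand years, which is the difference between a premium of pennies and a premium of real money.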
Check out Lewis’s piece if you are at all interested in finance or risk modeling.