Comet and Asteroid Impact Hazards on a Populated Earth
by John S. Lewis
A book review by Mark L. Olson
Academic Press, 1999, 200 pp, $49.95
(So this isn't SF. So sue me!) Comet and Asteroid Impact Hazards on a Populated Earth is somewhat dry reading but well worthwhile. Lewis has done a full-fledged modeling study of the hazards of asteroid and comet impacts.
The problem in evaluating the danger of impacts is that they are complicated events and not susceptible to simple extrapolation. There are two main classes of bodies, asteroids and comets, and the orbits they follow differ significantly. Among asteroids, the carbonaceous chondrites are by far the most common, but they are physically fragile and reach the ground less frequently than the stony-irons and the irons. Comets and chondritic meteorites are more likely to explode in the air, yielding an airburst (like Tunguska in 1908), which can cause even wider devastation than a ground strike.
A ground strike can produce craters on land and tsunamis in the ocean. An airburst can produce flash-heating fires and overpressure that destroys buildings or "merely" throws glass shrapnel over everything.
Lewis has produced a model in which the Earth is bombarded randomly by bodies of all sorts, the frequency of each sort of body being what has been measured (or sometimes extrapolated) in nature. Each body follows an orbit randomly selected from the natural range of orbits for that kind of body and has a mass and composition randomly selected from the naturally occurring population. In other words, his model bombards Earth with a rain of bodies which reproduces the natural flux in orbit, frequency, mass, and composition.
He then has each body fall on a model Earth with the proper probability of striking water or land and, if land, regions of various 20th-century population densities. Wherever it hits, he combines its orbit, its angle of entry (a shallow angle tends to produce more airbursts, a steep angle more cratering), its composition, and its mass with the population below to estimate casualties.
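The overall shape of such a Monte Carlo bombardment model can be sketched in a few lines. To be clear, every number below - the type frequencies, densities, size range, airburst threshold, and population bins - is an illustrative placeholder of my own, not Lewis's actual data; only the structure (random type, mass, entry angle, and impact site per event) follows the description above.

```python
import random

# Hypothetical illustrative parameters -- NOT Lewis's actual figures.
# Each body type: (relative frequency, density in kg/m^3, physically fragile?)
BODY_TYPES = {
    "carbonaceous chondrite": (0.75, 2000.0, True),
    "stony-iron":             (0.20, 5000.0, False),
    "iron":                   (0.05, 8000.0, False),
}
OCEAN_FRACTION = 0.71  # probability a random impact point is over water

def sample_impact(rng):
    """Draw one impactor and impact site: type, mass, airburst-vs-ground-strike,
    and the population density beneath it."""
    kinds = list(BODY_TYPES)
    kind = rng.choices(kinds, weights=[BODY_TYPES[k][0] for k in kinds])[0]
    _, density, fragile = BODY_TYPES[kind]
    diameter = 10 ** rng.uniform(0, 3)              # 1 m .. 1 km (toy size law)
    mass = density * (3.1416 / 6) * diameter ** 3   # sphere of that diameter
    angle = rng.uniform(0, 90)                      # entry angle from horizontal
    # Shallow entry or fragile composition favors an airburst over cratering.
    airburst = fragile or angle < 30
    over_ocean = rng.random() < OCEAN_FRACTION
    pop_density = 0.0 if over_ocean else rng.choice([0, 1, 50, 300])  # people/km^2
    return kind, mass, airburst, pop_density

rng = random.Random(42)
events = [sample_impact(rng) for _ in range(100_000)]
airburst_share = sum(e[2] for e in events) / len(events)
populated_share = sum(1 for e in events if e[3] > 0) / len(events)
print(f"airbursts: {airburst_share:.2f}, over populated land: {populated_share:.2f}")
```

A real run of the model would replace these placeholders with measured flux, orbit, and population data, then tally casualties per event; the point here is only how the random draws compose.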
The results are interesting. Over about 100,000 years of simulations, he finds that fatalities are consistently dominated by the largest events (the worst-case event in all of those runs - a large chondrite airbursting over a heavily-populated area like Japan - caused eleven million deaths). (I should note that his model was not designed to reproduce events on the scale of the one that killed the dinosaurs, and the chance of such an event appearing in only a few hundred thousand years of simulation is small.)
The overall death rate is a few hundred per year, but because of the dominance of a few large events, most years show none at all.
One of the most interesting chapters is an analysis of historical records looking for the events which must be present if his model is accurate. Somewhat to my surprise, he finds a much larger number of probable impacts than I expected - I'd always sorta remembered the statement that no one has ever been killed by a meteor. That happens not to be true at all: there are ample historical records of meteor-caused deaths, and when you get into the less-clear literature, even more. (E.g., a Chinese record of a village being destroyed by a "rain of stones from heaven". Was it meteors? You can't tell for sure, of course, but that is just the sort of record you'd expect to find, so it's not improbable.) Given the nature of historical records, you would not expect to have 100% coverage even in the 20th century. Earlier, the chance of an event being recorded rapidly decreases to near-zero.
It's also interesting to note that military satellites put up to monitor for missiles and nuclear explosions detect a large flux of 10-meter bodies, nearly all of which explode in the atmosphere with up to Hiroshima-sized bangs. There are many of these each year, few of which are noticed from the ground.
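The Hiroshima comparison is easy to check with back-of-envelope arithmetic (my numbers, not the book's): take a 10 m stony body at a typical atmospheric-entry speed and convert its kinetic energy to kilotons of TNT.

```python
import math

diameter_m = 10.0       # assumed body size
density = 3000.0        # kg/m^3, assumed stony composition
velocity = 17_000.0     # m/s, a representative asteroid entry speed
mass = density * math.pi / 6 * diameter_m ** 3     # spherical body, ~1.6e6 kg
energy_joules = 0.5 * mass * velocity ** 2
energy_kt = energy_joules / 4.184e12               # 1 kt TNT = 4.184e12 J
print(f"~{energy_kt:.0f} kt TNT")                  # a few tens of kilotons
```

That comes out to a few times the roughly 15 kt of the Hiroshima bomb; slower or slightly smaller bodies land right in the Hiroshima range, consistent with the satellite observations.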
Interestingly, his analysis also tells us a lot about mitigation. Because the death rate is dominated by a few large events, and because the large events are by far the easiest to deal with, it turns out that it makes sense to stand guard against the worst cases and to ignore the rest.
Current-day technology can detect the kilometer-sized bodies which do the most damage - we have probably already cataloged 30% of them - but it would take improvements of several orders of magnitude to go down to the 10-100 meter bodies, which are both much more common and much harder to spot.
Because the kilometer-sized bodies can be spotted so easily, we can expect to spot them 10-100 years (and many orbits of the sun) before they will hit the Earth. This gives us ample time to nudge them out of a collision orbit using even today's technology. 100-meter bodies do much less damage and are discovered much later, so there is less time to divert them. The combination of less damage and a higher cost to avert it means it's generally not worthwhile to do so.
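The value of early warning rests on simple arithmetic. As a crude illustration of my own (not the book's calculation - it ignores the way orbital mechanics amplifies along-track nudges over many orbits), divide one Earth radius, the miss distance needed to turn a hit into a miss, by the warning time:

```python
EARTH_RADIUS_M = 6.371e6      # one Earth radius: roughly the required miss distance
SECONDS_PER_YEAR = 3.156e7

for lead_years in (100, 10, 1):
    dv = EARTH_RADIUS_M / (lead_years * SECONDS_PER_YEAR)  # required velocity change
    print(f"{lead_years:>3} yr warning -> ~{dv * 1000:.0f} mm/s delta-v")
```

With a century of warning the required nudge is on the order of millimeters per second, which is why decades of lead time put deflection within reach of today's technology, while a last-year discovery demands a push a hundred times larger.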
(It's also interesting to note that using a nuclear bomb to fragment some bodies is useful only if done early enough. If done only hours before impact, it actually can make things worse by guaranteeing that all the body's energy is channeled into the airburst and that the airburst covers a larger area.)
This is a fascinating analysis of a very real problem.