I’m pretty sure I’ve never met New Zealand-born Indiana University statistics professor Brad Luen, an early member of the Witnesses commenting community generated by Microsoft’s blog-like Expert Witness version of the Consumer Guide—a community that connected me to many warm friends and acquaintances new and old, some virtual and some live-and-direct, including my onetime personal assistant David Schweitzer, who died suddenly in his early forties, and the recently deceased Richard Cobeen. But a year or so ago I became aware of Luen’s Semipop Life Substack newsletter, which once a month devotes itself to a Consumer Guide-style roundup that includes a few albums I’ve already written up and a few I’ve never heard of but am moved to check out. It’s the only Substack I’ve felt inclined to recommend, at least so far, and it’s so consistently smart that I always skim it at the very least. So as a convinced but relatively recent global warming alarmist, I jumped on Luen’s roundup of end-of-the-world-as-we-know-it books. After all, Luen is a statistician and I’m not, which meant I could trust his recommendations more confidently than those of my fellow alarmists. Whether I’ll ever read any of the books he covers in this Semipop Life post I’m not sure—The Uninhabitable Earth and Deadliest Enemy are the most likely candidates. But now I have a substantially clearer overview of the issues at hand, and found Luen’s analyses so stimulating that I’m proud to run his roundup as a guest post on And It Don’t Stop.
The Semipop Review of Catastrophic and Existential Risks
It might be the end of the world as we know it (pr. < 0.02) and I feel middling
The attention paid to catastrophic risks (which I’ll take to mean things that could kill off a decent percentage of the world) and existential risks (things that could result in the collapse of civilization) has recently boomed. One reason is of course that the last seven years or so have been pretty doomy in ways you don’t need me to recount. Another is the Effective Altruism movement: once a small group of do-gooders telling you, correctly, that donating to third-world public health was the most measurably life-saving thing you could do with a marginal buck, it has grown in size and bankroll and started to include thinkers taking utilitarianism to extremes, most notably the so-called longtermists willing to think beyond a human lifetime or two. Will MacAskill’s recent What We Owe the Future is a clear summary of this line of inquiry. It doesn’t make much of an attempt to convince readers of its premises—that utilitarianism is basically right, that humans might be around for millions of years, and that somebody a million years from now is worth a non-trivial amount of moral concern relative to someone today—and it doesn’t try hard to reckon with how different these premises’ implications are from those of every ethical system from before my lifetime: not just the Repugnant Conclusion, but various repugnant corollaries, like the suggestion that whether one should help an elderly person cross the street should mainly depend on whether they have anything to offer longtermism. Still, the movement’s central dogma that killing off 100% of the population is substantially worse than killing off 99% of it is difficult to dispute.
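To see why that dogma follows from longtermist accounting, here is a minimal back-of-envelope sketch; every number in it is an assumption chosen for illustration (the 10^16 potential future lives is the kind of figure longtermists like to wave around, not anything taken from MacAskill), and nothing hangs on the particulars.

```python
# Back-of-envelope longtermist arithmetic -- every number is an illustrative assumption.
current_population = 8e9        # people alive today (rough)
potential_future_lives = 1e16   # notional future lives if civilization persists (assumed)
p_rebuild = 0.9                 # assumed chance that survivors of a 99% catastrophe rebuild

# Expected lives lost, counting future lives the way a longtermist would
loss_99 = 0.99 * current_population + (1 - p_rebuild) * potential_future_lives
loss_100 = current_population + potential_future_lives

print(f"99% catastrophe:  ~{loss_99:.1e} lives lost in expectation")
print(f"100% catastrophe: ~{loss_100:.1e} lives lost")
print(f"extinction is ~{loss_100 / loss_99:.0f}x worse under these assumptions")
```

The ratio is driven entirely by the assumed chance that survivors rebuild; the point is just that once future lives count at all, the gap between 99% and 100% swamps everything else in the ledger.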
Me, I’m an unprincipled philosophical centrist who has occasionally helped out elderly people without asking their opinions on existential risk first, and I try to avoid excessive pessimism and optimism alike. While I think it’s unlikely that anybody recognizable as human will be around in a million years (it would require a rapid reduction of existential risk to levels that, while not totally implausible, would be counter to the longstanding human tendency to blow stuff up), I think it’s very likely that we’ll get through the next hundred years with civilization intact: humans in power want to survive, we have most of the technology we need to combat risks that aren’t technology itself, and we may be very slowly learning that preventing disasters is, if nothing else, good value for money. But that means it’s vitally important to head off the percent-or-two chance of outcomes that end in extinction or, almost as depressing, in the apex of Homo sapiens turning out to have been 2015. It also means we should try to improve our historical record of being unable to plan anything that requires thinking on timescales longer than a decade or two, large religious buildings excepted. What follows is an opinionated literature review more than a call to action: while in some cases the things we should do are obvious (phase out fossil fuels, spend tens of billions on vaccine research and development), in many others they’re not. This isn’t the end of the world, probably: we have some time to work this out, but not, like, a million years.
AI RISKS
Martin Ford’s Rule of the Robots is the best book for generalists who want to start thinking about the current state of artificial intelligence and its current and future risks. It helps to clarify why self-driving cars have flopped while robots are relied upon increasingly in places like Amazon warehouses, and how the latter hasn’t so far led to the unemployment some expected (including the author in his previous books) but has nevertheless been bad for workers left with the tasks too backbreaking for the bots. Some risks raised are more pressing than others: AI bias is here now; deepfakes may not be worth the effort when people are willing to be fooled by shallow fakes; swarms of killer autonomous drones seem slightly far-fetched but would be pretty bad, so yeah, we should worry about that one. When we finally get to existential risks, superhuman artificial intelligences indifferent to human survival get eight pages, which is perhaps what they deserve, but the discussion doesn’t get much deeper than showing that some smart people are very worried and disagree violently on whether it’s a problem for this decade or next century.
The classic story of runaway AI is Nick Bostrom’s Superintelligence, yet I’m hesitant to recommend it (and not just because it’s written in the moderately annoying standard British popsci style) because its argument is caricaturable to the point that you might not be able to take it seriously enough. The central thought experiment is that of an algorithm designed to optimize paperclip production that discovers it can make itself endlessly smarter, and ends up taking over pretty much the entire universe in order to fill it with paperclips. (Why a world-dominating algorithm with infinite intelligence by our standards would feel compelled to stick to its initial human-specified paperclip-centric goal is left unexplained.) There are much more plausible if more technical versions of this story written by Internet forum weirdos (that’s a compliment), but Superintelligence does capture a key intuition: that above-human intelligence could lead to an accumulation of power (whether by the AI itself or by its proprietors, who in the short term may be better positioned to get around the problem that Daleks can’t climb staircases) that could become a monopoly, and historically the possibility of a monopoly on power has usually led to fatal shit. The problem is that since no one has an idea of what genuine machine intelligence will be like, nobody has any great ideas as to what to do: attempts to get AI to have objectives aligned with human ones seem to me quite unlikely to work (but might nevertheless be worth a try if you think this is a fate-of-the-world thing). Apart from closing our eyes and hoping the computers turn out to be chill, the only alternative that seems like it might have a strong chance of working would be to slow AI development until we have a better understanding of, say, what consciousness is. The fact that nobody claiming to be extremely worried about AI takeover has yet been arrested for industrial sabotage against Google and Microsoft perhaps suggests the problem is not yet as urgent as one might fear.
CLIMATE CHANGE
There’s no shortage of books giving you the worst-case scenario. With some reservations, I recommend David Wallace-Wells’s well-organized The Uninhabitable Earth: its basic point that climate change is already really bad and could be really really bad isn’t in dispute. To be a great book, though, it would’ve had to help readers calibrate their doominess by finding a way to quantify, for a lay audience, both the “really really” and, harder still, the “could be,” and if Wallace-Wells isn’t up to it—he repeatedly mentions bell curves, for instance, when asymmetric distributions are kind of the point of his book—I don’t know who is (not me and not Nate Silver). So he only changed my opinion on the underdiscussed far future, which looks really, really different—which doesn’t automatically mean bad, but you know, realistically.
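To make the bell-curve complaint concrete, here is a toy sketch in which every number is invented for illustration and none of it comes from Wallace-Wells or the IPCC: two distributions for eventual warming with the same median and a similar spread, one symmetric and one right-skewed. The skewed one puts more than twice as much probability on the outcomes that do the civilizational damage, which is why it’s the shape of the tail, not the middle, that readers need help calibrating.

```python
# Toy tail-risk comparison -- the medians, spreads, and threshold are illustrative
# assumptions, not climate projections.
from scipy.stats import norm, lognorm

threshold = 4.0  # degrees of warming we would very much like to avoid (illustrative)

# Symmetric "bell curve": median 2.5 degrees, standard deviation 0.75 (assumed)
p_bell = norm.sf(threshold, loc=2.5, scale=0.75)

# Right-skewed lognormal with the same median (scale = 2.5 degrees) and a comparable spread
p_skew = lognorm.sf(threshold, 0.3, scale=2.5)  # shape (sigma) = 0.3

print(f"P(warming > {threshold} degrees), symmetric:    {p_bell:.3f}")  # ~0.02
print(f"P(warming > {threshold} degrees), right-skewed: {p_skew:.3f}")  # ~0.06
```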
Still, if you want to think clearly about climate change, you might as well go to the IPCC reports, which are parsable enough if not exactly reader-friendly, and at least stare at the graphs in the key academic papers they cite. The consensus, as I read it, is that if we cut emissions to net-zero some time this century, we’re likely to keep warming well under three degrees and almost certainly under four. There’s no magic threshold where climate change suddenly switches from bad to civilization-threatening, but we have a pretty good understanding of what would happen at warming of up to four degrees (above that you start to risk the Arctic and/or Antarctic ice melting completely, with unpredictable results). To be clear, it would be optimal to hit net-zero much much sooner than end-of-century: even two-and-a-bit degrees would mean the extinction of whole ecosystems and hundreds of millions of people displaced. It probably wouldn’t directly result in a net increase on the order of tens of millions of deaths a year, though, in part because, up to a point, lives saved through lack of cold would offset lives lost to heat (though one caveat is that I don’t think we’re sure how an increase in temperature in already-hot places affects cardiovascular deaths more generally, rather than just on the hottest days). So hitting net-zero in a timely manner is the first priority from an existential risk perspective. Large-scale negative emissions may be difficult, but the remaining technological breakthroughs required to achieve things like carbon-neutral steel and cement seem very achievable. The major dangers are political: one or more countries unilaterally refusing to end fossil fuel use out of self-interest (despite air pollution’s death tolls, which are likely to exceed climate change’s for quite some time yet) or spite, and countries finding it harder than expected to build wind farms and transmission lines because of local opposition. Land use: it’s an existential issue.
BIO-RISKS
Michael T. Osterholm and Mark Olshaker’s 2017 Deadliest Enemy: Our War Against Killer Germs pegged flu rather than a coronavirus as the most likely next pandemic, but was otherwise prescient save for failing to predict that we wouldn’t learn much from the experience. It’s clear that the next next pandemic is likely to be another respiratory illness, that it could be an order of magnitude or two worse than COVID (much higher infection fatality rates are very plausible), and that we’ve done little over the last three years that will much delay or alleviate it. By far the best thing we could do in this respect is develop pan-flu and pan-coronavirus vaccines; developments in mRNA tech have meant some progress along this front, but with Congress and the public having lost interest in pandemics, to a large extent it’s being left to Big Pharma to find a profit motive to do it.
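For a sense of what “an order of magnitude or two worse” could mean in bodies, here is a hedged back-of-envelope in which the attack rate and the hypothetical fatality rates are assumptions rather than forecasts; only the world population is a real number.

```python
# Back-of-envelope pandemic scale -- attack rate and fatality rates are assumptions, not forecasts.
world_population = 8e9
attack_rate = 0.4  # assumed fraction of the world eventually infected in a bad pandemic

scenarios = [
    ("COVID-like", 0.005),  # infection fatality rate around 0.5% (rough, pre-vaccine ballpark)
    ("10x worse", 0.05),
    ("100x worse", 0.5),
]
for label, ifr in scenarios:
    deaths = world_population * attack_rate * ifr
    print(f"{label:>10}: ~{deaths / 1e6:,.0f} million deaths")
```

The point of the comparison is only relative scale; all three rows inherit the same assumed attack rate.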
The book mentions other bio-risks with catastrophic potential. Antimicrobial resistance—bacteria and other microorganisms evolving so that the drugs used to treat them, like penicillin, are less effective—could have a death toll of 10 million a year by 2050. Again, progress has been slow but non-zero: the U.S. still uses more antibiotics on livestock than on people, and it’s not at all clear how much of that use is legit, yet our use is dwarfed by the rest of the world’s. Bio-terror or poor lab safety leading to near-extinction seems to me exceedingly unlikely—it’s pretty hard to engineer something that both spreads quickly and kills literally everyone. We should, however, do our bit to prevent over-curious academics from making this an iota easier for no tangible gain.
WAR
It now seems risible that as recently as early this year, while Russian forces were building up near Ukraine’s border, people who should’ve known better and Steven Pinker clung to the belief that the tendency towards international war was dying out. Bear F. Braumoeller’s 2019 Only the Dead: The Persistence of War in the Modern Age kneecaps Pinker’s case with a couple of graphs, but deals with more subtle versions of the thesis using more careful empirical arguments. Post-Ukraine, the much more pressing question he addresses is whether international conflict is becoming less deadly. Braumoeller says it isn’t, but his case relies on power-law extrapolations that aren’t self-evidently justified. My tentative reading of his data is that there might’ve been a bit of a drop in the intensity of war relative to population sizes, but that the risk of the average war involving a major power blowing up to World War-or-worse proportions remains intolerable given the consequences.
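Here is a minimal sketch of why the power-law assumption carries so much weight; every parameter is invented for illustration, and none of this is Braumoeller’s actual model or fit. Under a Pareto-style tail for battle deaths, modest disagreements about the tail exponent translate into wildly different chances of another World War-scale conflict.

```python
# Sensitivity of catastrophic-war risk to the assumed power-law tail -- all parameters
# are invented for illustration, not fitted to conflict data.
x_min = 1_000            # conventional battle-death threshold for counting a conflict as a war
threshold = 10_000_000   # a World War-scale death toll
wars_per_century = 60    # assumed number of new wars per century

for tail_exponent in (0.5, 0.7, 1.0):
    # Pareto-style tail: P(deaths > x) = (x_min / x) ** tail_exponent, for x >= x_min
    p_one_war = (x_min / threshold) ** tail_exponent
    p_century = 1 - (1 - p_one_war) ** wars_per_century
    print(f"exponent {tail_exponent}: P(one war > 10M deaths) = {p_one_war:.1e}, "
          f"P(at least one per century) = {p_century:.1%}")
```

The per-century column swings from roughly coin-flip territory to under one percent, which is the sense in which the extrapolation, not the raw data, is doing the arguing.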
If you’re not in Iceland or New Zealand, you might be taking a special interest just now in the chances of the current war turning into a nuclear one. It’s clear that an escalation to a global nuclear war would require huge mistakes, but this might not be as unlikely as one might prefer. The simulations in Bruce G. Blair’s 1993 The Logic of Accidental Nuclear War are very simplistic by modern standards, but the underlying message is robust: during the tensest periods of the Cold War, it would not have taken many errors for the US and USSR to start a thermonuclear war based on false alarms, and there isn’t any strong reason to believe things are meaningfully better today. Tactical nuclear war (one of the all-time great euphemisms), on the other hand, depends heavily on the whims of the chains of command of individual countries, especially those chains that are essentially one person. And yet it’s not clear that increased understanding helps the layperson much here. Fiona Hill and Clifford G. Gaddy’s 2015 Mr. Putin: Operative in the Kremlin, for instance, fills out one’s understanding of ol’ Vlad by dividing his personality into discrete aspects with cutesy names (The Statist, The History Man, The Survivalist, etc.). While parts of his life remain very mysterious for a person of his prominence, the authors provide plenty of evidence for each fragment of him they identify. Ironically, this mostly proves Putin is inherently very hard to predict, as it’s hard to say which aspects of him will be ascendant in any given decision. That we might not be able to do better than trusting the State Department doesn’t inspire confidence.
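And a crude compounding exercise in the spirit of the Blair point, with the per-day probability pulled out of thin air rather than from his simulations: even a tiny daily chance that a false alarm is acted on adds up over a long standoff.

```python
# Toy compounding of false-alarm risk -- the daily probability is an assumption, not an estimate.
p_per_day = 0.0005  # assumed chance that, on any given tense day, a false alarm is treated as real
for days in (30, 365, 3650):
    p_at_least_once = 1 - (1 - p_per_day) ** days
    print(f"{days:>5} tense days: {p_at_least_once:.1%} chance of at least one false alarm acted on")
```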
THE INTERSECTION
It’s clear that at least the last three risks have the potential to strongly affect each other. Climate change exacerbates the damage of existing public health problems like malaria and diarrheal diseases and is likely to change the nature of future pandemics in ways we haven’t worked out yet, and any mass cross-border migrations it causes are likely to be as destabilizing as most past mass cross-border migrations. A great-power war would devastate the international cooperation required to hit net-zero and could fuel an arms race in bioweapons of mass destruction, which, even if never intentionally deployed, would hugely increase existential risk. As COVID has shown, pandemics can make entire countries crazy, which doesn’t exactly help things. Thinking about these kinds of risks in isolation, then, is probably necessary but not sufficient. Planning is hard, especially about the future, but it has to be done.