Thinking About Compute and Armageddon
A hidden assumption underlies every optimistic AI forecast. It’s geopolitical, and it’s enormous.
A little bit of thinking aloud here this Saturday, turning to the physical currency of AI scaling and plausible U.S.-China conflicts.
For the past several years, the dominant framework for thinking about AI progress has been scaling laws: the empirical observation (theoretically still poorly understood, and increasingly contested) that model performance improves predictably as you add compute, data, and parameters. The idea has proven robust enough among the smart set of AI researchers and thinkers to become somewhat foundational: if you know the compute budget's trajectory, you can roughly forecast where the technology is going. The future becomes, if not legible, at least not entirely dark.
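For the unfamiliar, the canonical shape of these relationships (here in the rough form popularized by the Hoffmann et al. "Chinchilla" results, used purely as an illustration; the exact constants do no work in my argument) is a power law relating training loss to parameter count N and training tokens D:

L(N, D) ≈ E + A / N^α + B / D^β

Loss falls smoothly as N and D grow, and training compute scales roughly with their product, which is why knowing the compute trajectory feels so much like knowing the capability trajectory.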
What I want to briefly think through today intersects with my ongoing research on the effects of plausible near-term intense great power wars, including wars featuring nuclear use. Scaling laws describe a world of relative compute abundance: one in which the semiconductor supply chain continues to function, capital continues to flow into fab expansion, and the extraordinary geographic concentration of leading-edge (“too big to fail”) chip manufacturing doesn’t become a liability overnight.
I want to slightly (maybe more than slightly) provoke by thinking through the extent to which an intense war, and especially a U.S.-China war with even reasonably limited nuclear use,1 would represent a sharp discontinuity for current projections on scaling.
The basic premise to start from is another empirical observation: bleeding-edge semiconductor fabrication is among the most geographically concentrated2 critical industries in human history. If you think about nuclear targeting policies and potential damage in a war, that should pique your interest. This is not a system designed for resilience, nor should it have been, given the post-Cold War trajectory of the key businesses involved. The hyperconcentration has allowed a form of efficiency that's great … until it's not.
Let's just use TSMC for the sake of thinking through this. A Taiwan Strait conflict, even a purely conventional one, even one that never saw direct missile strikes on a fab building, would within weeks disrupt the continuous flow of ultra-pure chemicals, specialized gases, and components that TSMC's operations require. It would displace an irreplaceable workforce whose tacit knowledge represents decades of accumulated expertise. The fabs might survive physically while becoming operationally inert. Either way, our projections of the total compute coming out of TSMC's fabs would require sharp adjustment.
There's another dimension here that I think is worth pondering: China appears to be actively preparing for a compute-scarce world in ways the West is not, which dramatically changes the incentive structure around TSMC in a crisis.
Following successive rounds of U.S. export controls, China has invested heavily in domestic semiconductor capacity: SMIC, state-subsidized chip design, domestic HBM alternatives, large-scale chip stockpiling before restrictions tightened (and more on the way). None of this matches TSMC at the frontier. But it represents a deliberate strategic response to the broader geopolitical split between Beijing and the West, one that happens to have implications for the scenario I've alluded to above. If compute scarcity is thrust on the world, an important question becomes whether the United States or China is better able to weather the storm, especially in the immediate term.
I'm still thinking through the intra-crisis escalation implications of this (if it's right; there's uncertainty here on my part, so do chime in). Actors who perceive themselves as already operating from a position of loss are more willing to accept risk to recover parity than actors reasoning from a position of gain: losses loom larger than equivalent gains, the core insight of prospect theory. China's leadership, surveying a semiconductor landscape in which U.S. export controls have already imposed significant costs and in which the gap with the Western frontier may be widening, is plausibly operating in the loss frame. From that position, the calculus around TSMC looks quite different than it does to Western analysts.
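For those who want the toy formalization behind that claim (illustrative only; the specific numbers do no work in my argument), the standard Kahneman-Tversky value function weights losses more heavily than equivalent gains:

v(x) = x^α for gains (x ≥ 0), and v(x) = -λ · (-x)^α for losses (x < 0), with λ typically estimated at around 2

A decision-maker who already perceives itself to be on the loss side of that curve will accept gambles that look reckless to an observer reasoning from the gain side.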
If China assessed that a conflict was coming regardless, or that its window for action was closing, destroying or denying TSMC would not simply be an act of denial. It would be a strategic equalizer. A world without TSMC is a world in which China's constrained domestic semiconductor ecosystem is suddenly far more competitive relative to the West than it is today, particularly in a potential post-war reconstruction scenario, where the rates at which the two sides reconstitute compute availability might diverge considerably even though China lags in sophistication today. The asymmetry of that loss, from Beijing's perspective, may make TSMC a more attractive target than many of us acknowledge when we reason from a gain frame about the value of capturing the facility intact. Some of these thoughts surface in longstanding debates about the "deterrent" value of TSMC sitting on Taiwan itself (a gain framing if I've ever seen one).3
Limited nuclear exchanges, which appear plausible through a number of pathways in possible U.S.-China conflicts over Taiwan, would layer additional disruption on top of these conventional dynamics (putting it lightly). And then there would be the ripple effects across capital markets, insurance, and much more. The escalation pathway from a Taiwan conflict is where the nuclear and AI arguments intersect most uncomfortably.

In a lot of the limited nuclear war work I'm doing now, I'm trying to think creatively about the types of arguments that might be made in favor of escalation and de-escalation by a number of stakeholders in various states. It helps to imagine the consequences of an attack, think through plausible damage assessment, and then … get dark. Even in the conventional-only TSMC damage scenario over Taiwan, you might imagine the argument that failing to escalate (possibly with nuclear first use) cedes too much strategically to China. In wargaming settings, this is often a motivator for U.S. teams to resort to nuclear use, and the strategic case frequently has nothing to do with TSMC or AI at all. The scenario in which a conventional conflict produces compute scarcity and creates the conditions for nuclear use is not a low-probability compound event; the two risks appear positively correlated. This post would get too long if I went into other scenarios, but consider too that, increasingly, compute-heavy critical infrastructure targets in the United States might see reprisals intended as symmetric (non-nuclear or nuclear).4 Granted, I want to take this moment to remind readers that this post, unsettling as it may be, is written in the spirit of thinking about the unthinkable; I am not predicting that a U.S.-China war is inevitable or imminent.
A lot of this sort of risk appears to have been walled off from AI capability forecasting; when it does appear, it is acknowledged as a long-tail exogenous shock before the analysis returns to compute trajectories. The resulting implicit assumption, for accelerationists in particular but not exclusively, is that the physical substrate of AI development will continue to exist and expand.
This is not a neutral analytical choice. It systematically biases forecasts toward optimistic compute trajectories. And it isn't hard to understand why: the people most invested in scaling trajectories (lab leadership, venture capital, some researchers) have strong incentives to assume a favorable compute environment. Maybe, too, those of us war-gaming Taiwan conflicts are not asking enough about what happens to AI development trajectories. The upshot of this post isn't to reorient the debate toward pessimistic trajectories, but to underscore that there are a bunch of risks that deserve greater thought.5
The AI boosters most aggressively forecasting transformative capabilities in the near term are making an implicit bet that great power peace holds, that the Taiwan situation resolves without conflict, that the nuclear threshold remains uncrossed. That may prove correct. But it should be recognized as a bet — a large, largely unacknowledged geopolitical wager sitting underneath every optimistic capability forecast.
The bottom line for me: as persuaded as I am by the scaling way of thinking, and as genuine and important as the empirical relationships it describes may be, scaling laws ultimately describe a world. And that world is more fragile than almost anyone building on top of them is willing to say.
Editor’s Note: As I was wrapping this up and running the argument by a friend in the AI space, they pointed out that Dario Amodei briefly alludes, in this podcast, to the risk of Taiwan-specific fab damage scenarios, in the context of fundamental uncertainty about AI trajectories. I haven’t listened to the whole discussion, but you can find the relevant bit here:
On the basic hypothesis of, as you put it, within ten years we’ll get to what I call a “country of geniuses in a data center”, I’m at 90% on that. It’s hard to go much higher than 90% because the world is so unpredictable. Maybe the irreducible uncertainty puts us at 95%, where you get to things like multiple companies having internal turmoil, Taiwan gets invaded, all the fabs get blown up by missiles.
In a future newsletter, I do want to share more about what nuclear strategy (particularly non-U.S. countervalue strategies) looks like in an AI-proliferated world.
1. There's the age-old debate in our field about whether escalation can be controlled. I won't be resolving it in this Substack post, alas. Suspend disbelief, please.
2. TSMC's fabs in Taiwan still produce the overwhelming majority of the world's most advanced logic chips. The Netherlands' ASML similarly holds a monopoly on the EUV lithography machines without which those chips cannot be made, and the machines themselves depend on supply chains spanning dozens of countries. Samsung and SK Hynix in South Korea dominate advanced memory. A handful of Japanese firms control critical materials and equipment. Some of these matter more than others for this discussion (Taiwan!).
3. There's a natural relationship here to the debate a few years ago about whether the U.S. and Taiwan should, in fact, destroy TSMC to deny it to China.
4. Maybe this will deserve another post.
5. And, as I write, I think of more possibilities still: even modest wars with less-than-existential implications for TSMC could cause a fundamental reorientation of compute resources toward state actors and militaries (and even change the nature of the public-private relationships between governments and frontier labs).



