Begun, The AI Wars Have
Anthropic has been designated a "supply chain risk" and my cortisol is officially SPIKED.
The best lack all conviction, while the worst
Are full of passionate intensity.
— W.B. Yeats, “The Second Coming”
The big story over the weekend was that the US and Israel have launched a war against Iran and killed its Supreme Leader.
That’s very big news, but there might be an even bigger story unfolding underneath it – one that might come to be seen as the opening salvo in the AI Wars.
On Friday, Secretary of War Pete Hegseth formally designated Anthropic, the company behind the most capable AI model on the planet, Claude, a “supply chain risk to national security.”
That spiked a lot of people’s cortisol. Including mine.
This label has never been applied to an American company. Until now, it has been reserved for foreign companies with documented ties to adversary governments – like Huawei, which the FCC designated a national security threat in 2020 over its very close ties with the Chinese military.

I was at the airport when I read the news on X, on my way home from a new media conference in Austin (hosted by the legendary newsletter operator Matt McGarry). At the event, everybody was talking about how they’d switched from ChatGPT to Claude as their daily AI workhorse.
I’ve used Claude pretty much every day since late 2022, most days for several hours per day. If I haven’t logged 10,000 hours yet, I have launched at least 10,000 instances of the model, and watched it evolve from a capable thought-partner into its current form of Claude Code: an agentic harness that can handle complex end-to-end tasks, from software development to content creation to (apparently) operational military intelligence.
Suffice it to say, I can relate to @Benthamite:
But seriously…
I’ve been less active on Substack lately, but not for lack of things to say. It’s that the opportunity cost of a carefully written article has gone up, a lot.
Writing is one of the last tasks where AI cannot do the heaviest lifting for you. It can help with some tasks, but the thinking and the voice must remain yours, or else it reeks of slop. Meanwhile, there are dozens of other tasks where AI has 5-10x’d my productivity. I’ve trained Claude to be better than me at most aspects of my day job, meaning I’ve been relegated to managing a small army of subagents who specialize in various tasks – prompting and verifying the outputs. It’s a good problem to have!
But I’ve also realized that outside my filter bubble, most people are not aware of the magnitude of the developments in this space.
My X timeline is already filled with rapid-fire hot takes on the AI Wars. I want to slow down and dissect what happened last week, and share how I think it fits into the broader changes happening in the AI world.
The question at the center of all of it is trust.
Who do you trust with the most powerful technology ever built?
The answer is not a straightforward either/or. But if you forced me to pick sides today, I’m on team Anthropic.
My last post here was broadly supportive of Secretary Hegseth and his restoration of fitness standards in the military. So you can trust that I’m not writing from the anti-Trump reflex you see in the media.
But what unfolded this week has shaken me enough to break my Substack silence and invest more time here. Over Christmas, I started a parallel blog/website called Skill Stack dedicated to my AI writings, but it occurs to me that I already have an audience here on Substack, and you might be curious about my takes on the subject.
If you’re still reading this newsletter (and especially if you’d want more of this kind of analysis alongside some tactical AI tips) please do me a favor:
Reply to this email with “tell me more,” comment, or hit the like button, to let me know you’re out there. I want to know if this kind of content is of interest, or if I should stay in my lane and get back to translating obscure French fitness manuals.
Alright, let’s get into it.
Prologue
For the uninitiated, Claude’s parent company Anthropic was founded in 2021 by Dario Amodei and his sister Daniela, along with several senior researchers who left OpenAI over disagreements about safety. Dario quit as VP of Research at OpenAI because he thought the company was prioritizing speed to market over the kind of rigorous safety testing that frontier AI systems require. The departures gutted OpenAI’s safety team and set the stage for the philosophical divide that played out this week.
Anthropic’s founding thesis was that the most dangerous AI systems should be built by the people most worried about getting them wrong. Basically, if powerful AI is coming regardless, you want the cautious people at the frontier, not just the fast ones.
As Dario wrote in his recent long-form essay “The Adolescence of Technology”:
“The formula for building powerful AI systems is incredibly simple, so much so that it can almost be said to emerge spontaneously from the right combination of data and raw computation. If one company does not build it, others will do so nearly as fast. If all companies in democratic countries stopped or slowed development, by mutual agreement or regulatory decree, then authoritarian countries would simply keep going.”
That essay is worth reading in full, but the context above is the minimum required for what follows.
By the time of the ultimatum described below, Anthropic was already working with the Department of War under an existing contract that included two negotiated red lines: no mass surveillance of Americans, and no autonomous weapons without human oversight. Both the Biden and Trump administrations had accepted these terms; the contract was signed and operational under both. Last week, that changed.
What happened on Friday
Here’s the rough timeline of events:
Monday, February 24: The Pentagon, now officially rebranded as the Department of War, gives Anthropic a Friday deadline: Agree to “any lawful use” of Claude, including the two capabilities Anthropic has always excluded, or lose the contract.
Thursday night, February 27: Anthropic publishes a formal refusal by their CEO, Dario Amodei, that becomes the biggest post in the company’s history. In it, Amodei lays out the two capabilities Anthropic will not provide:
“We told the Department of War that we could not provide two specific capabilities: (1) AI systems designed for mass surveillance of American citizens, and (2) AI systems that autonomously select and engage targets without meaningful human control. …
These threats do not change our position: we cannot in good conscience accede to their request.”
Ilya Sutskever, a pioneer of deep learning and co-founder of Anthropic’s competitor OpenAI, breaks a long silence to say:
“It’s extremely good that Anthropic has not backed down.”
Friday morning, February 28: 220 engineers from OpenAI and Google DeepMind sign an open letter supporting Anthropic, the first cross-company AI governance action in history. Sam Altman signals that OpenAI holds the same red lines as Anthropic.
That same morning, according to Ross Andersen’s reporting in The Atlantic, Anthropic receives word that Hegseth’s team will make a major concession. Throughout the negotiations, the Pentagon had kept inserting escape hatches: pledging not to use Claude for mass surveillance, then qualifying those pledges with phrases like “as appropriate,” suggesting the terms were subject to change. Anthropic is relieved to hear that those qualifiers will be removed.
Then, Friday afternoon, the other shoe drops. The Pentagon still wants to use Claude to analyze bulk data collected from Americans: the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, your credit card transactions, all cross-referenced with other details about your life. Anthropic’s leadership tells Hegseth’s team that is a bridge too far, and the deal collapses.
Friday afternoon: Trump announces Anthropic is banned from the federal government, and calls them “leftwing nut jobs” in a classic ALL-CAPS Truth Social post. Hegseth designates them a “supply chain risk” and calls Anthropic’s refusal “a master class in arrogance and betrayal.” He then directs all military contractors, suppliers, and partners to stop doing business with Anthropic (a list that includes Amazon, which supplies much of Anthropic’s computing infrastructure).
Friday, 5:24 PM: Anthropic responds:
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”
Friday, 6:56 PM: Sam Altman announces that OpenAI has reached a deal with the Department of War, with what he describes as the same safety red lines intact. He posts the same announcement three times, leading some to speculate he was trying to dilute the engagement on any single post out of embarrassment.
Pete Hegseth reposts Sam’s announcement.
In summary, the same Secretary of War who branded Anthropic a supply chain risk for maintaining two red lines celebrated a deal with OpenAI that claims to maintain those same red lines. (Huh?)
Anthropic’s red lines
There are three misconceptions about Anthropic that need correcting before we go any further.
Misconception 1: Anthropic is refusing to work with the military.
Hegseth called Anthropic’s refusal “a master class in arrogance and betrayal,” implying they were turning their back on the military. The reality is the opposite. They were the first frontier AI company to deploy in classified government networks, the first at the National Laboratories, the first to provide custom models for national security customers. Claude is already used across the Department of War for intelligence analysis, operational planning, cyber operations, and more.
Misconception 2: Anthropic is a “woke” company.
Trump took to Truth Social and called Anthropic a “Woke company.” Hegseth echoed the framing. This might be a fair description of Google or even OpenAI, but it really is not true of Anthropic. As one researcher put it: “Ant is not that woke at all. Much closer to hawkish American exceptionalism than you’d think.”
I’ve been impressed by Dario’s recent interviews with people like Ross Douthat and Dwarkesh Patel. In his three-hour conversation with Dwarkesh, Dario is unambiguously hawkish on China and the AI race.
He supports export controls on chips to China. He doesn’t want data centers built there.
He told Dwarkesh that the crisis AI creates should force “a more emphatic realization of how important some of the things we take as individual rights are.” This is not the language of a woke tech executive.
On the question of American competitiveness, Trump and Amodei should be natural allies. Which makes the administration’s decision to treat him as an enemy all the more baffling.
But most importantly (and most conservatively), Dario doesn’t believe his own politics should be relevant to how AI is used, as long as it’s used lawfully, and as long as those laws are being written and upheld by a constitutional republic with functioning checks and balances.
Amodei made this case explicitly in a recent Wall Street Journal op-ed, arguing that America can maintain its AI advantage over China without sacrificing its constitutional commitments.
Misconception 3: Anthropic is trying to dictate military policy.
As Amodei wrote:
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
What Anthropic refused is exactly two things:
1. Mass domestic surveillance. Anthropic supports lawful foreign intelligence and counterintelligence. The concern is using AI to fuse legally collected domestic data (your location history, browsing habits, financial transactions, social media activity, organizational memberships) into a comprehensive profile of any American, automatically and at massive scale. Amodei pointed out that much of this data can already be purchased by the government without a warrant. The law hasn’t caught up with what AI makes possible, and Amodei wanted more than verbal assurances that the government wouldn’t misuse the technology.
2. Fully autonomous weapons. Partially autonomous drones have been in use for years, and Anthropic defers to the military on their use. Amodei’s concern is systems that select and engage targets without any human in the loop, and his stated reason is practical rather than ideological:
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.”
Modern warfare increasingly involves coordinating thousands of drone strikes simultaneously, more targets than any human can evaluate. The temptation to remove the human from the loop is enormous, and it will only grow. But this is the one threshold that, once crossed, cannot be uncrossed. Autonomous weapons that select their own targets are not a policy decision that can be reversed by the next administration. They are infrastructure that, once built and deployed, becomes the permanent baseline.
In a recent interview with Ross Douthat, Amodei put it plainly: “Someone needs to hold the button on the swarm of drones, which is something I’m very concerned about, and that oversight doesn’t exist today.”
[Side note: Anyone who has used an AI agent like Claude Code to attempt a complex “bulk” task without intermediate human oversight will understand the appeal of full autonomy. The result is usually a mess that takes longer to clean up than it would have taken to stay involved at the critical moments.]
This is partly a function of how Anthropic builds its models. Their approach, Constitutional AI, bakes safety principles into the model during training, encoding them in the weights themselves, rather than bolting them on as policies that can be toggled off by a system prompt. You can’t just flip a switch.
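For readers who want a feel for the mechanism, here is a toy sketch of the critique-and-revise loop at the heart of Constitutional AI, as described in Anthropic’s published research. The principles, the generate stand-in, and everything else below are placeholders of my own invention, not Anthropic’s actual pipeline; the point is simply that the constraint gets applied to the data the model is trained on, not to a prompt someone could later swap out.

```python
# Toy illustration of the Constitutional AI "critique and revise" idea.
# `generate` is a stand-in for a real model call; everything here is
# hypothetical and drastically simplified.

PRINCIPLES = [
    "Do not help select or engage targets without human control.",
    "Do not help build tools for mass surveillance of private individuals.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; returns canned text so this runs."""
    return f"[model output for: {prompt[:60]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = generate(
            f"Revise the response using this critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft

# The revised outputs become training data, so adherence to the principles
# ends up in the model's weights. A deployment-time system prompt, by
# contrast, is just text that a later operator could replace.
if __name__ == "__main__":
    print(critique_and_revise("Draft a plan that removes the human from the loop."))
```

The real method also involves reinforcement learning from AI feedback and much more besides, but the architectural point survives the simplification: you cannot un-train a constitution by editing a prompt.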
Anthropic offered to do R&D with the Pentagon to improve reliability. But the Pentagon wanted to be the one to decide when good enough is good enough.
Amodei noted the contradiction in the government’s position:
“One threat labels us a security risk; the other labels Claude as essential to national security.”
You can’t have it both ways. If the technology is too dangerous to trust, don’t try to force its deployment. If it’s essential, don’t threaten the company that built it.
None of that mattered. On Friday afternoon, Hegseth made good on the threat: Anthropic was formally designated a supply chain risk, and all military contractors were directed to cut ties.
Sam swoops in
Just hours after Hegseth branded Anthropic a supply chain risk and Trump banned them from the federal government, OpenAI walked out with a shiny new Department of War contract. Their CEO, Sam Altman, framed it as a win for safety.
An OpenAI spokesperson confirmed the deal includes the same two red lines: no autonomous weapons, no mass surveillance. Kevin Roose of the New York Times Hard Fork podcast made the obvious observation:
“The DOW got so mad at Anthropic for insisting on carve-outs for mass domestic surveillance and autonomous weapons that they declared it a supply chain risk and struck a deal with OpenAI that includes… the exact same carve-outs.”
So either the Pentagon punished Anthropic for holding a position it then accepted from a competitor — targeted enforcement, not principled policy — or (more likely) the terms aren’t actually identical.
The response on X has been swift. People are cancelling their ChatGPT subscriptions and switching to Claude in solidarity. The wave of support reflects two things at once: respect for the company that walked away, and disgust at the one that walked in.
In response to the criticism, OpenAI published the actual contract text. On the surface, it appears to have the same two red lines. But if you scrutinize the legalese (with the help of an AI model, appropriately enough) you can spot a few subtle differences with enormous implications.
On autonomous weapons, the contract says:
“The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.”
Not “no autonomous weapons,” but “no autonomous weapons where the law says so.”
As lawyer Lawrence Chan pointed out, the directive it cites, DoD Directive 3000.09 from 2023, was written before modern frontier AI existed. There is no current law requiring human control over AI-directed weapons in every scenario. The contract defers to a prohibition that doesn’t exist yet. If the Department of War updates its own directive tomorrow to allow fully autonomous targeting, the contract would permit it automatically.
OpenAI also claims that because their model runs in the cloud rather than on the “edge” (i.e., on a drone itself), it doesn’t count as an “autonomous weapon” by definition. But according to the Atlantic’s reporting, Anthropic examined this exact distinction during negotiations and rejected it. In modern military AI architectures, the line between cloud and edge barely exists. Drones on the battlefield are orchestrated through mesh networks that include cloud data centers. The Pentagon has been working to push computing resources closer to the fight; that’s the whole point of its Joint Warfighting Cloud Capability. As Anthropic’s team reasoned: the AI may be sitting on an Amazon Web Services server in Virginia, but if it’s making real-time battlefield decisions, that’s a distinction without much difference.
On surveillance, the contract says:
“For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978... The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.”
Again, not a prohibition on mass surveillance. It prohibits mass surveillance that violates existing law. But the whole problem, as Amodei argued, and as anyone who’s followed post-Snowden surveillance policy knows, is that existing law is full of holes.
The government can legally purchase your location data from a broker. It can legally scrape your social media. It can legally access DOGE databases. An LLM synthesizing all of those legally-obtained sources into a comprehensive profile of every American citizen is mass surveillance in every meaningful sense, but it may not violate a single statute.
Until recently, this kind of quasi-legal surveillance was bottlenecked by human bandwidth. The NSA could collect everything, but making sense of it required analysts — skilled, expensive, limited in number.
With a tool like Claude Code, any government employee with access to a classified AI model could use plain English to do what previously required a team and months of work.
They wouldn’t even need to build new software, just point it at the database and say:
“Cross-reference these financial records with these social media accounts and these location histories and flag anyone who matches this pattern.”
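To make concrete how little engineering sits between that sentence and a result, here is a purely hypothetical sketch of the kind of throwaway script an agent might write and run from such a prompt. The file names, columns, and “pattern” are all invented for illustration; the uncomfortable point is that the technical substance is a couple of table joins and a filter.

```python
# Hypothetical illustration only: invented CSV files and invented columns.
# Fusing already-collected records is trivial engineering.
import pandas as pd

financial = pd.read_csv("financial_records.csv")   # person_id, merchant, amount
social = pd.read_csv("social_accounts.csv")        # person_id, handle, groups
locations = pd.read_csv("location_history.csv")    # person_id, lat, lon, timestamp

# Join the separately (and legally) obtained datasets into one profile per person.
profile = (
    financial.merge(social, on="person_id")
             .merge(locations, on="person_id")
)

# "Flag anyone who matches this pattern" reduces to a boolean filter.
flagged = profile[
    profile["groups"].str.contains("some_organization", na=False)
    & (profile["amount"] > 1000)
]

print(flagged["person_id"].drop_duplicates())
```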
The models are getting better every month. The first applications will seem justified. But capabilities, once built, always expand beyond their original mandate. This has been true of every government program in American history. The economist Robert Higgs documented the pattern across a century of American governance and called it the ratchet effect: emergency powers expand during a crisis and never fully retract. Surveillance infrastructure follows the same logic.
I can’t help but wonder if the people cheering these expanded powers today will feel the same way when Trump leaves office.
Chan concludes that OpenAI’s contract is “exactly as Anthropic was claiming: legalese that would allow those safeguards to be disregarded at will.”
As of this writing, nearly 100 OpenAI employees have signed an open letter indicating that they support the same red lines as Anthropic. Some have quit and joined Anthropic.
And as Ross Andersen put it: if Altman finds himself face-to-face with them in the office today, “he may have to explain why this idea that Anthropic quickly dismissed out of hand proved so compelling to him.”
Pro-America ≠ Pro-Government
The tech right has historically been libertarian on questions of government power. It’s striking how comfortable some of its loudest voices have become with unilateral executive action when it’s their guy wielding it.
Keith Rabois, PayPal Mafia member and Silicon Valley provocateur, epitomized the standard pro-government position on X: “Imagine Apple sold computers or iPads to the DOD and tried to tell the Pentagon what missions could be planned on their computers.”
Palmer Luckey, founder of defense contractor Anduril, expanded this into a full constitutional theory: “Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?”
Luckey continued:
"Seemingly innocuous terms from the latter like 'You cannot target innocent civilians' are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a 'target' vs collateral damage? These questions have clear legal answers, but you can't have corporate PR departments adjudicating them."
He's right. You can't. But nobody is asking corporate PR to adjudicate them, and that's not what Anthropic did. Anthropic is making an engineering judgment: the tool isn't reliable enough for autonomous kill decisions yet. That's upstream of "who is a civilian." When Boeing grounds a fleet over a software fault, nobody accuses them of dictating airline policy.
Palmer’s broader argument – that in a democracy, the military answers to elected civilians, not corporate boards – has genuine weight. If Lockheed Martin remotely disabled its missiles because it disapproved of a bombing campaign, that would be a genuine crisis of democratic accountability.
But Anthropic isn’t remotely disabling anything. They drew two lines and walked away from a contract. A company declining to build something is not a corporation seizing control of the military.
And the selective enforcement destroys the principled case. If this were about democratic accountability, the same terms from OpenAI would have been equally unacceptable. Anthropic got a supply chain risk designation. OpenAI got a contract.
Even DeepSeek, a Chinese AI company with actual ties to an adversary government, has never received this designation.
Palmer’s democracy argument requires the government to be acting in good faith. The evidence says otherwise.
I will say that (unlike Altman or Rabois) Palmer seems to me to be a good-faith actor. Anduril is disrupting the bloated defense procurement system, and Americans should be grateful for that. But he has a direct financial interest in the principle that suppliers shouldn’t second-guess the military. Given that, he might want to sit this one out.
Rabois’s iPad analogy collapses on contact: Apple refused to build the FBI a backdoor into the iPhone. Tim Cook said no. When this was pointed out to Rabois, he replied: “that isn’t accurate.” (It is accurate.)
Rabois also says it’s “never been true of tech in American history that has dual uses” that a company could simply refuse to serve the military. It’s true that the government has historically asserted control over dual-use technologies: nuclear, cryptography, GPS, and so on.
But every one of those precedents involved legislation, congressional action, and regulatory frameworks with judicial review. None of them involved an executive unilaterally designating an American company a supply chain risk because it wouldn’t drop its terms of service.
The Fourth Amendment doesn’t care who won the election. It’s a list of things the government cannot do, period. A private company refusing to build a surveillance tool is exactly the kind of friction the Founders had in mind.
Every argument in this debate ultimately reduces to the same question. Who do you actually trust?
Who do you trust?
A final argument worth addressing came from an anonymous account known as @romanhelmetguy:
“Killer robots are coming. When they’re here, whoever writes the rules for those killer robots will BE the govt. De facto. The monopoly on violence. Anthropic’s founders want to be the ones who write the rules for the killer robots. They are making a bid to be the govt. No thank you.”
This sounds reasonable. And if I knew nothing of Anthropic or hadn’t listened to Dario’s interviews, I might be worried too.
But Anthropic emphatically does not want to be writing those rules.
Dario Amodei has said repeatedly, in interviews, in published essays, in his letter to the Department of War, that he should not be the person making these decisions. In “The Adolescence of Technology,” he wrote:
“It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users... I think the governance of AI companies deserves a lot of scrutiny.”
He went further:
“AI companies should be carefully watched, as should their connection to the government, which is necessary, but must have limits and boundaries. The sheer amount of capability embodied in powerful AI is such that ordinary corporate governance, which is designed to protect shareholders and prevent ordinary abuses such as fraud, is unlikely to be up to the task of governing AI companies.”
And on the specific question of surveillance, the issue that blew up this week, Dario had already written the warning:
“It would likely not be unconstitutional for the US government to conduct massively scaled recordings of all public conversations... previously it would have been difficult to sort through this volume of information, but with AI it could all be transcribed, interpreted, and triangulated to create a picture of the attitude and loyalties of many or most citizens. I would support civil liberties-focused legislation (or maybe even a constitutional amendment) that imposes stronger guardrails against AI-powered abuses.”
The institutions that should be making these decisions don’t exist yet. Congress hasn’t built them. The administration isn’t building them.
Ross Douthat, a conservative, puts it — well — conservatively:
“There is absolutely a case that the US government needs to exert more political control over AI... But the best case for that kind of political exertion is fundamentally about safety and caution and restraint. The administration is putting itself in a position where it’s perceived to be the incautious party, the one removing moral and technical guardrails.”
This is the paradox. I would trust the government more with these powers if they were approaching them more cautiously, deliberating, building oversight frameworks, showing restraint. Instead, they’re strong-arming, retaliating, and cutting deals the same afternoon. The behavior itself is the evidence against giving them what they want.
What Anthropic is doing is holding two boundaries while buying time for a broader democratic consensus to form. Their position amounts to: we don’t trust ourselves with this power either, and neither should you, and we’d like some actual governance around these capabilities before anyone deploys them at scale.
There’s something unusual about a company that says, in public, to the most powerful government on earth: we don’t trust our own technology enough to let it do this. Most institutions overestimate their competence. Anthropic’s defining feature may be that it doesn’t.
George Washington could have been king, but he refused. That refusal was the reason he had been asked to lead in the first place. Anthropic could have taken the contract and looked the other way, but they walked away.
The men who refuse power are the ones you want to have it. The men who reach for it, and punish anyone who won’t hand it over, are the ones the Constitution was designed to restrain.
I’m not saying we should put blind trust in Anthropic. What I trust is the distrust itself, a company that believes it could be wrong, that invites scrutiny, that acknowledges limits. The best lack all conviction, Yeats wrote. The responsibility to think through these questions doesn’t transfer to Anthropic. It distributes: to Congress, to the courts, and to us.
And finally there’s the China question, which Palmer’s argument implies even when he doesn’t say it outright: if we impose constraints and China doesn’t, we lose the next conflict. But this is the same argument that was used to justify every escalation of the nuclear arms race, and the answer is the same: you don’t win a long-term competition by abandoning the values that make your society worth defending. A surveillance state that “beats China” by becoming China has not actually won anything.
Coffee with Claude, anyone?
I should be transparent that my reaction to this story may be colored by a certain disordered affection for the product itself.
What would I do if they shut off Claude Code tomorrow? Besides taking more walks and spending more time writing newsletters the old-fashioned way, which, okay, would be good, I’d muddle through with the next-in-line model (Google’s Gemini is pretty good, I guess).
But putting aside selfish concern: in the time I’ve been absent from this newsletter, I’ve been deep in Claude, specifically the agentic harness called Claude Code, building custom skills, automating workflows, and figuring out what this technology can do in the hands of a curious individual who isn’t a software engineer. I’m on a 20x “max” plan that pays for itself many times over, and I feel called to share what I’ve learned.
The $200 million contract Anthropic walked away from is only a fraction of their annual revenue. It’s too early to tell whether their principles will cost them their advantage in the AI race, or whether the exodus from OpenAI and ChatGPT will tilt the balance in their favor.
Either way, I’m sticking with my favorite model, and inviting my readers to join me in voting with your dollars and supporting Anthropic. If you are currently a ChatGPT Pro subscriber, switch to Claude Pro for $20/month.
And if you’re interested in learning how to get even more value out of your Claude subscription, just reply or comment with a simple “tell me more” and I’ll start sharing my favorite workflows here on this newsletter.
And if you’re only here for my hot fitness takes, don’t worry — I’ll get back to some of those too.