Silicon Valley's Oppenheimer Moment
Palantir's viral statement crystallizes arguments from The Technological Republic and proclaims a broader shift in Silicon Valley's relationship with American power
Palantir Technologies posted a 22-point manifesto to X this week, excerpted from CEO Alex Karp’s book The Technological Republic. It wasn’t a product launch or an earnings call. It was a philosophical defense of building AI weapons systems, complete with arguments about cultural decline, elite intolerance of religion, and why “some cultures have produced vital advances; others remain dysfunctional and regressive.”
The document states flatly that Silicon Valley engineers have a “moral debt” to build military software. That free societies need “hard power” to survive, and “hard power in this century will be built on software.” That adversaries won’t pause for “theatrical debates” about AI ethics while racing to deploy autonomous weapons.
Five years ago, this would have been corporate suicide. Today, it’s a #1 New York Times bestseller.
The Google Walkout Is Ancient History
In 2018, thousands of Google employees staged walkouts protesting Project Maven, a Pentagon contract using AI to analyze drone footage. The backlash was so intense that Google didn’t renew the deal and published AI ethics principles committing not to develop weapons systems.
That same year, Palantir was essentially a pariah. The company worked with ICE, built surveillance tools for intelligence agencies, and cheerfully took the defense contracts other companies rejected. Former employees told NPR they signed non-disparagement agreements to avoid retaliation for criticizing the company’s work after leaving.
The cultural shift since then has been dramatic. In the past six months alone: OpenAI signed major Pentagon contracts (then faced backlash and added “more guardrails than any previous agreement”). Anthropic landed a $200 million defense deal, then terminated it after internal conflicts. Defense Secretary Hegseth publicly warned Anthropic to “let the military use [the] company’s AI tech as it sees fit.” Google, Microsoft, Amazon, and Oracle now share a $9 billion military cloud computing contract.
The debate isn’t whether tech companies should work with the military anymore. It’s which ones will, how much they’ll charge, and whether they’ll admit it publicly.
What Changed
Two things happened simultaneously: the technology became militarily decisive, and the geopolitical situation stopped feeling theoretical.
Ukraine demonstrated what happens when one side has better software. Drones coordinated through AI systems, real-time battlefield intelligence, encrypted communications: the country’s survival has depended partly on which tech companies were willing to provide tools and which weren’t. It’s hard to maintain that military contracts are morally suspect when you’ve watched them keep a democracy alive on live television for three years.
Meanwhile, the race with China went from abstract concern to concrete competition. When the Pentagon started warning that AI superiority would determine the next century’s balance of power, it became difficult for engineers to claim their work had nothing to do with national security. Palantir’s manifesto hammers this point relentlessly: “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.”
The framing is deliberate: it treats the question as already settled. Not should AI weapons be built, but who will build them and for whom. Democratic societies or authoritarian ones. That’s not an argument designed to convince skeptics; it’s one designed to make skepticism look naive.
The Manifesto’s Actual Claims
Beyond the headline-grabbing stuff about moral debts and cultural superiority, the document makes several specific arguments worth examining:
On deterrence: “One age of deterrence, the atomic age, is ending, and a new era of deterrence built on A.I. is set to begin.” This treats AI weapons as the logical successor to nuclear arsenals, tools that prevent wars by making them unthinkable.
On peace: “American power has made possible an extraordinarily long peace. Too many have forgotten or perhaps take for granted that nearly a century of some version of peace has prevailed in the world without a great power military conflict.” The claim is that three generations have avoided World War III because of American military dominance, and maintaining that requires staying ahead technologically.
On duty: “If a U.S. Marine asks for a better rifle, we should build it; and the same goes for software.” This reframes military contracts as supporting troops rather than enabling war, a subtle but significant shift in how the work is justified.
On Germany and Japan: “The postwar neutering of Germany and Japan must be undone. The defanging of Germany was an overcorrection for which Europe is now paying a heavy price.” This isn’t just about AI; it’s arguing that decades of enforced pacifism in former Axis powers have created security vacuums that adversaries are exploiting.
The document also contains some genuinely odd material: criticism of iPhone culture (“Is the iPhone our greatest creative if not crowning achievement as a civilization?”), complaints about public servants’ compensation, arguments for universal national service, and defenses of religious belief against elite intolerance. It reads less like a corporate mission statement and more like a comprehensive diagnosis of Western civilizational decline.
Who This Is Actually For
The manifesto isn’t aimed at activists who already think Palantir is evil. The replies calling it fascist, Straussian manipulation, or techno-authoritarianism were, in effect, written before the document posted. The audience is engineers who feel uncomfortable with Silicon Valley’s traditional pacifism but haven’t had sophisticated language for that discomfort.
By wrapping defense contracts in the language of duty, historical precedent, and democratic survival, it offers what amounts to a permission structure. It tells talented engineers that building military software isn’t a betrayal of progressive values but their fullest expression because American power underwrites the long peace that makes those values possible.
Whether that argument works depends entirely on what happens next geopolitically. If AI-driven military competition intensifies and democracies find themselves desperate for technical talent, Palantir’s manifesto will look prescient. If the conflicts don’t materialize, it’ll read like expensive justification for surveillance capitalism with better branding.
What’s undeniable is that the ground has shifted. When a defense contractor can publish a bestselling book arguing that elite culture is too secular, too pluralistic, and too pacifist, and the controversy is muted compared to what it would have been in 2018, something fundamental has changed about which arguments are sayable in public and which institutions get to say them.
The real question isn’t whether Palantir is right. It’s whether the speed of this shift suggests clarity finally emerging, or whether we’re moving too fast to notice what we’re leaving behind.