OpenAI has moved to revise its newly announced agreement with the U.S. Department of War after a wave of criticism from users, researchers, and civil liberties advocates. CEO Sam Altman acknowledged the rollout was rushed and poorly communicated, and the company says the amended text adds stricter language on domestic surveillance. But critics argue the changes leave major loopholes and say the deal remains vague on autonomous weapons.
What Changed in the Contract After Initial Rollout
According to OpenAI, the updated provisions explicitly prohibit using its systems to intentionally surveil U.S. persons and nationals, with the caveat that this is “consistent with applicable law.” The company characterized the Department’s stance as aligned with that boundary and emphasized that mass domestic monitoring is off-limits under the agreement.
The revisions appear designed to quell concerns sparked by the initial announcement, which critics read as allowing wide latitude so long as uses were technically lawful. While the new wording narrows the scope for domestic targeting, it still hinges on legality, a standard that can shift with policy changes or new authorizations. Notably, the contract language shared publicly does not directly address fully autonomous weapons, a gap repeatedly flagged by researchers and human rights groups in broader AI policy debates.
Legal Lines and Ethical Questions Raised by Deal
Altman has framed OpenAI’s posture as deference to “democratic processes,” indicating the company will follow government direction rather than impose expansive ethical restrictions of its own. In internal messages later shared on X, he conceded the announcement looked opportunistic and said the company should have taken more time to explain the trade-offs.
That stance unsettles many in the AI safety and civil liberties communities. For years, privacy advocates have warned that relying on what is lawful rather than what is technically possible or societally acceptable can open the door to “incidental” collection at scale. The Edward Snowden disclosures a decade ago, and ongoing debates over Section 702 surveillance reauthorization, illustrate how legal frameworks can permit very broad data access unless tightly constrained and independently audited.
OpenAI also told employees and users that intelligence agencies such as the NSA would require a contract amendment to use its tools. While that is a procedural check, skeptics note it offers little comfort if the company’s threshold is simply whether a request is lawful.
How the Backlash Landed Across Users and Experts
The blowback was immediate. Developers voiced fears that “intentional use” language could allow broad data sweeps that still capture Americans, while security researchers warned that autonomous or semi-autonomous systems can exhibit surveillance behaviors without explicit instruction. Political researcher Tyson Brody argued that emphasizing intent invites loopholes around incidental collection, and technologists echoed concerns that real-world deployment rarely maps cleanly onto neat policy boundaries.
There are early signs of business impact. App analytics firms observed a sharp spike in ChatGPT subscription cancellations in the days following the announcement, with some estimates putting the rise in uninstalls at 295%. At the same time, Anthropic's Claude briefly overtook ChatGPT in U.S. App Store free downloads, underscoring how fast consumer sentiment can swing in a trust-driven market.
The Competitive and Policy Context Surrounding AI
The timing amplified scrutiny. The deal followed reports that federal agencies were ordered to stop using Anthropic’s services after Anthropic declined to strip safeguards against mass domestic surveillance and autonomous weapons, according to CEO Dario Amodei’s account. OpenAI moved quickly to fill the vacuum, a decision Altman now concedes appeared opportunistic even if motivated by a desire to shape outcomes from the inside.
Major tech firms have grappled with similar lines for years. Google published AI Principles in 2018 restricting certain weapons work and surveillance capabilities. Microsoft faced internal dissent over defense contracts before establishing a responsible AI framework tailored to government use. The lesson: absent bright lines and verifiable oversight, trust erodes—especially when capabilities like multimodal perception, long-context reasoning, and large-scale retrieval make dragnet surveillance and targeting more feasible.
What to Watch Next for Transparency and Trust
Key signals will come from transparency and enforcement. Independent audits, red-team reports focused on surveillance misuse, and publication of detailed, testable prohibitions—covering both domestic monitoring and autonomous targeting—would give the public more than promises. Clear escalation and shutdown procedures for suspected misuse, backed by external oversight, are now table stakes.
On the policy side, watch for congressional inquiries into AI procurement, potential inspector general reviews, and whether the contract text around “applicable law” is tightened to include explicit statutory limits and reporting requirements. In the market, track whether churn stabilizes and whether enterprise customers demand opt-out clauses from defense-related training or use. OpenAI’s revisions may slow the backlash—but without firmer, verifiable guardrails, the trust gap is likely to remain.