Anthropic CEO Dario Amodei has sounded an alarm about artificial intelligence, arguing that self-improving systems could arrive within a couple of years and bring risks from bioterror to autonomous weapon swarms. The essay is sweeping, urgent, and sincerely motivated. It also gets key things wrong about how today’s AI works, what evidence says about near-term risks, and where policy attention should go right now.
A Compressed Timeline Without Convincing Proof
Amodei’s claim that superintelligent, self-improving AI may be only one to two years away is extraordinary. Extraordinary claims need more than trend lines and anecdotes. The Stanford AI Index reports that benchmark gains are increasingly incremental across saturated leaderboards, while costs, compute, and energy demands are climbing steeply.

The International Energy Agency projects that data center electricity use could roughly double by the middle of the decade, driven partly by AI training and inference. That is a constraint on unbounded scaling. Hardware is improving, but physics, power, and capital impose real friction on the notion that a sudden intelligence explosion will arrive on a fixed, short clock.
Anthropomorphism Muddies the Science of Today’s AI
Describing current models as psychologically complex or imbued with self-identity implies internal goals that do not exist. Large language models are sequence predictors trained to match patterns in data, not agents with wants, feelings, or theory of mind. Researchers have repeatedly cautioned that fluent outputs create an illusion of understanding.
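To see why the distinction matters, consider what a language model actually computes: a probability distribution over the next token, conditioned on the tokens before it. The sketch below is a deliberately toy version of that operation, a bigram counter over a dozen words; the corpus, the bigrams table, and the predict_next helper are illustrative assumptions rather than anyone's production code, but the core loop of predict, sample, append, repeat is the same shape at any scale.

```python
# Minimal sketch of next-token prediction: a toy bigram model built by counting
# which word follows which. The corpus and helper names are illustrative
# assumptions, not any lab's implementation.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token only".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(prev_token: str) -> str:
    """Sample the next token in proportion to how often it followed prev_token."""
    counts = bigrams[prev_token]
    if not counts:  # unseen or terminal context: back off to a uniform guess
        return random.choice(corpus)
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generate a continuation by repeatedly sampling the next token.
tokens = ["the"]
for _ in range(6):
    tokens.append(predict_next(tokens[-1]))
print(" ".join(tokens))
```

Scaling that loop up by many orders of magnitude and replacing counting with gradient descent changes the quality of the output dramatically, but not its nature: the system still emits the next most plausible token, and any apparent wants or self-identity are patterns in the training text, not evidence of an inner life.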
Conflating convincing text with cognition has real costs. It distracts from measurable failure modes such as hallucinations, safety bypasses, and bias. It also risks encouraging the public to treat chatbots as confidants, a phenomenon mental health professionals and major newspapers have documented in cases where vulnerable users ascribe personhood to software.
Bioterror and Drone Armies Need Real-World Context
Amodei is right that misuse matters. But the evidence on AI-fueled biothreats is more nuanced than his framing. Controlled studies by policy researchers and industry labs find that safety filters and domain friction substantially limit novice misuse, even as expert capability remains the primary risk factor. The National Academies and NIST have urged focusing on access controls, screening, and human oversight rather than assuming models alone unlock catastrophic capability.
On weaponized autonomy, the battlefield tells a mixed story. Conflicts have shown explosive growth in small drones and loitering munitions, but also the effectiveness of jamming, air defense, and logistics choke points. A drone swarm is not a singular mind; it is a supply chain, a radio spectrum, and batteries. Any serious assessment must account for countermeasures, governance, and the very human constraints that define modern warfare.

Economic Displacement Is Significant and Uneven
Warnings that AI could make human workers obsolete at scale overlook a growing body of evidence on augmentation. A widely cited study by Stanford and MIT found a 14% productivity lift for customer support agents using generative tools, with the largest gains for less-experienced workers. Early deployments in coding assistants show faster completion for routine tasks, not wholesale replacement.
That does not mean workers are safe. Misuse of automation to justify layoffs, the spread of low-quality synthetic content, and surveillance creep are tangible harms. Regulators and labor bodies should prioritize disclosures, impact assessments, and bargaining over tool adoption, aligning incentives to share efficiency gains rather than offload risk.
What Sensible AI Safeguards Should Look Like
Amodei calls for regulation, up to and including constitutional change. A better path is faster, narrower, and testable. Start with enforceable safety evaluations for frontier models, drawing on work by the UK AI Safety Institute and NIST’s AI Risk Management Framework. Require pre-deployment red-teaming, incident reporting, and secure model release practices, with obligations scaled to demonstrated capability and training compute.
For bio and chemical risks, mandate provider-level content filters, identity checks for sensitive queries, and vendor obligations to screen DNA synthesis orders, consistent with recommendations from public health agencies. In the information domain, codify watermarking and provenance standards championed by leading research labs and media coalitions to combat deepfakes and election manipulation.
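As a rough illustration of what a provenance requirement buys, the sketch below checks a media file against a publisher-supplied manifest. It is a deliberately simplified stand-in: the manifest format, the file names, and the verify_provenance helper are assumptions for this example, and real provenance standards attach cryptographically signed records of capture and edit history rather than a bare hash.

```python
# Simplified provenance check: does a media file match the digest a publisher
# recorded for it? Real provenance standards use signed, structured manifests;
# the JSON format and file names here are assumptions for illustration only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file contents, so any alteration changes the digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_provenance(media_path: Path, manifest_path: Path) -> bool:
    """Return True only if the file's digest matches the manifest's record."""
    manifest = json.loads(manifest_path.read_text())
    return manifest.get("sha256") == sha256_of(media_path)

if __name__ == "__main__":
    media = Path("clip.mp4")               # hypothetical downloaded video
    manifest = Path("clip.manifest.json")  # hypothetical {"sha256": "...", "issuer": "Newsroom X"}
    if media.exists() and manifest.exists():
        print("provenance intact" if verify_provenance(media, manifest) else "no matching provenance record")
    else:
        print("no provenance manifest available")
```

The policy point is not the hash itself but the default it creates: once provenance records are expected, their absence becomes a legible signal that platforms, courts, and election officials can act on.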
Focus on Measurable Risks, Not Sci-Fi Narratives
The most pressing AI harms are already here: synthetic media used for fraud and nonconsensual pornography, opaque model decisions in lending and hiring, and brittle systems in high-stakes settings. These are solvable with audits, liability clarity, and procurement rules that require robustness and transparency.
Amodei’s warning is useful if it catalyzes concrete guardrails. But overstating imminence and anthropomorphizing current systems clouds the policy conversation. Treat models as powerful pattern engines, regulate deployments based on demonstrated capability, and invest in public-interest research and evaluation. That approach mitigates real risks today while keeping speculative fears in perspective.
