I did what millions do after a splashy Super Bowl ad. I typed AI.com into my browser, created an account, and handed over a credit card. Within minutes, I wished I hadn’t. The service wasn’t ready to use, yet it had already captured payment details and personal identifiers—and its fine print reads like a liability boomerang aimed squarely back at the user.
AI.com is positioning itself as the on-ramp to autonomous “AI agents” that act on your behalf. The domain became a headline in its own right last year, reportedly purchased for around $70 million by Crypto.com’s CEO, a figure that would make it one of the largest domain deals on record. After Sunday’s ad blitz—30-second Super Bowl spots have averaged roughly $7 million in recent years, according to Kantar—the site buckled under traffic. What stood out more than the downtime, though, was what signing up actually entails.

What AI.com Promises With Its Autonomous AI Agents
The pitch is sweeping: spawn a personal agent that can trade stocks, manage your calendar, triage email, and even polish a dating profile. This “agentic AI” trend has exploded across the industry, with fast-moving projects like OpenClaw touting bots that coordinate tasks with other bots. Some of that hype has frayed at the edges—MIT Technology Review recently found that supposed bot-only forums included more human curation than initially advertised—but the direction is clear. Companies want you to trust software with real-world actions, not just text replies.
During onboarding, AI.com asked me to secure two “handles,” one for me and one for my future agent, then abruptly put me in a queue. No agent. No dashboard. No controls. Just a placeholder confirming my details were locked in and that access would arrive “later.” That mismatch—collecting sensitive data before delivering basic functionality—is where the story turns from slick marketing to serious risk.

The Fine Print Shifts All the Risk to You
AI.com’s terms effectively put users on the hook for anything their agents do, including actions they didn’t expressly initiate. The documents warn that outputs may be inaccurate, incomplete, or fabricated, and they instruct users to verify information before relying on it. They also emphasize user responsibility for “high-stakes” actions such as financial transactions, communications, or data changes.
That raises immediate operational questions. If an agent is truly autonomous, what does pre-approval or supervision look like in practice? Will there be granular, real-time prompts, or only after-the-fact logs? The National Institute of Standards and Technology’s AI Risk Management Framework stresses human oversight and well-defined escalation paths for safety-critical systems. Without clear controls, “you’re responsible” is a hollow disclaimer, an obligation that’s nearly impossible for users to meet at scale.
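To make that concrete, here’s a rough sketch of what a real-time approval gate could look like: the agent proposes an action, anything tagged high-stakes waits for an explicit yes from a human, and every decision lands in an audit log. The names and risk categories below are invented for illustration; nothing here reflects AI.com’s actual interface.

```python
# Hypothetical human-in-the-loop approval gate for an AI agent.
# All names are illustrative; none come from AI.com.
from dataclasses import dataclass
from datetime import datetime, timezone

HIGH_STAKES = {"payment", "trade", "send_email", "delete_data"}

@dataclass
class Action:
    kind: str          # e.g. "trade", "send_email"
    description: str   # human-readable summary shown to the user

audit_log = []  # in practice: append-only, tamper-evident storage

def execute(action: Action) -> None:
    """Run an agent action, but pause for human approval on high-stakes kinds."""
    if action.kind in HIGH_STAKES:
        answer = input(f"Agent wants to: {action.description}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            audit_log.append((datetime.now(timezone.utc), "REJECTED", action))
            return
    audit_log.append((datetime.now(timezone.utc), "EXECUTED", action))
    # ...the action would actually be performed here...

execute(Action(kind="trade", description="Buy 10 shares of ACME at market price"))
```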
The terms also offer no assurance that your use of the service is lawful or compliant, which matters when agents touch activities regulated by securities, privacy, or communications rules. In plain English: if something goes sideways, whether it’s copyright, data mishandling, or unauthorized access, you’re standing there alone.

Data Collection That Goes Way Beyond Chat
AI.com’s privacy materials contemplate aggressive data ingestion. The service may capture screen recordings, including everything visible and all system audio. It encourages connecting third-party accounts like email and calendars so agents can “ingest” content to complete tasks. The company says it doesn’t control what you choose to share, effectively shifting the burden of data minimization to the user.
Security pros have a term for this: expansive attack surface. The more systems an agent can touch, the more avenues an attacker—or a malfunctioning model—has to do damage. IBM’s long-running Cost of a Data Breach studies peg the global average breach cost in recent years at over $4 million, a reminder that “oops” moments are rarely small. Groups like the Electronic Frontier Foundation have also urged platforms to adopt strict minimization and clear limits on secondary use—guardrails that consumers should look for before handing over API keys and inboxes.
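What does strict minimization look like in practice? One concrete guardrail is granting an agent the narrowest account permissions that still let it work. The sketch below, which assumes Google’s google-auth-oauthlib package and a standard OAuth client-secrets file, requests read-only Gmail and Calendar scopes instead of full control; it illustrates the principle and has nothing to do with AI.com’s own integrations.

```python
# Minimal sketch: connect an agent to Google services with read-only scopes
# instead of full mailbox/calendar control. Assumes google-auth-oauthlib is
# installed and client_secret.json was downloaded from Google Cloud Console.
from google_auth_oauthlib.flow import InstalledAppFlow

READ_ONLY_SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar.readonly",
]

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json", scopes=READ_ONLY_SCOPES
)
credentials = flow.run_local_server(port=0)  # opens a browser consent screen
print("Granted scopes:", credentials.scopes)
```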

A Premature Launch With Pay-First Onboarding
The most frustrating part of the experience was the sequencing. The site pushed for credit card or Apple Pay details at the outset, then reserved my handles and placed me in a generation queue. Requiring payment information before providing a working product is not automatically a scam, but it is a dark-pattern-adjacent tactic that consumer advocates and the Federal Trade Commission have repeatedly flagged in other contexts. At minimum, it should come with unambiguous disclosures, limited scopes, and immediate access to privacy and permission settings.
Agent platforms have to earn trust the hard way: with transparent logs, reversible actions, human-in-the-loop approvals, strict data minimization, per-integration permissions, and clear liability boundaries. External security attestations (like SOC 2 or ISO/IEC 27001), independent audits, and red-teaming reports are fast becoming table stakes for services that want access to your email, money, and identity.
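Purely as an illustration of what per-integration permissions and transparent logging could look like from the user’s side, here is a hypothetical configuration sketch; every key and limit in it is invented for this article rather than drawn from any real platform.

```python
# Hypothetical per-integration permission manifest for an agent platform.
# Every field is illustrative; no such file is documented by AI.com.
agent_permissions = {
    "email": {
        "read": True,
        "send": False,            # drafts only; a human must hit send
        "max_messages_per_day": 50,
    },
    "calendar": {
        "read": True,
        "create_events": True,
        "delete_events": False,   # destructive actions stay manual
    },
    "brokerage": {
        "read_positions": True,
        "place_orders": False,    # no autonomous trades
        "max_order_usd": 0,
    },
    "logging": {
        "transparent_log": True,  # every action recorded and user-visible
        "retention_days": 365,
    },
}

def is_allowed(integration: str, capability: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return bool(agent_permissions.get(integration, {}).get(capability, False))

assert is_allowed("calendar", "create_events")
assert not is_allowed("brokerage", "place_orders")
```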

Bottom Line: Ambition Without Accountability Is Risky
AI.com’s promise is ambitious, and the Super Bowl spotlight worked. But ambition without accountability is a bad bargain. Until the company demonstrates granular oversight, constrained permissions, and a liability model that doesn’t leave customers holding the bag, the safest move is the simplest one: don’t plug an unfinished agent into the center of your digital life.
