Moxie Marlinspike, the cryptographer best known for co-founding Signal, has introduced Confer, a privacy-first alternative to mainstream chatbots. The service aims to deliver the convenience of ChatGPT-like assistants without the data collection that typically shadows them, promising that user conversations never become training material and never fuel targeted ads.
Confer arrives as AI platforms explore new revenue models, including advertising, and as consumers increasingly confide sensitive details to conversational systems. The pitch is straightforward: get modern AI utility without exposing your life story to a data broker’s playbook.

How Confer Locks Down Conversations and Data
Confer’s security architecture blends familiar standards with hardware-backed safeguards. On the client side, it uses WebAuthn passkeys to establish a cryptographically bound session, so the request that leaves your device is tied to a key you control. Rather than terminating in a typical server process, messages are decrypted only inside a Trusted Execution Environment (TEE), a hardware-isolated enclave designed to keep plaintext out of the host’s reach.
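The request-binding idea can be sketched in a few lines. WebAuthn actually uses hardware-backed public-key signatures, which Python's standard library cannot produce, so this toy substitutes HMAC with a device-held session key; the function names and message format are illustrative, not Confer's API. The point it demonstrates is the same: each request carries a tag only the key holder could have produced, and the enclave rejects anything else.

```python
# Illustrative sketch of binding a request to a device-held key, in the
# spirit of passkey-bound sessions. Real WebAuthn uses asymmetric
# signatures; HMAC with a shared session key stands in here so the
# example runs on the standard library alone.
import hashlib
import hmac

def sign_request(session_key: bytes, body: bytes) -> bytes:
    """Tag the outgoing message so the enclave can verify its origin."""
    return hmac.new(session_key, body, hashlib.sha256).digest()

def enclave_accepts(session_key: bytes, body: bytes, tag: bytes) -> bool:
    """Enclave-side check: reject any request not bound to the session key."""
    expected = hmac.new(session_key, body, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(expected, tag)

key = b"\x01" * 32                      # established at session setup
msg = b'{"prompt": "summarize my notes"}'
tag = sign_request(key, msg)

print(enclave_accepts(key, msg, tag))          # True: genuine request
print(enclave_accepts(key, b"tampered", tag))  # False: rejected
```

A request with a modified body or a missing tag fails verification, which is the property that makes the session "cryptographically bound" rather than merely cookie-authenticated.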
Remote attestation verifies that the enclave is running the expected code before any secret touches memory, a technique similar in spirit to Intel SGX, AMD SEV, or AWS Nitro Enclaves. That means operators can’t quietly swap binaries or siphon logs without detection. Inside the enclave, an array of open-weight foundation models handles queries, favoring transparency over opaque, closed-weight systems.
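The attestation step reduces to a simple rule: before releasing any secret, the client checks that the enclave's reported code measurement matches a published value it trusts. The sketch below shows only that comparison; the names (`verify_quote`, `EXPECTED_MEASUREMENT`) are hypothetical, and a real quote would also carry a hardware-rooted signature chain from SGX, SEV, or Nitro that must be validated.

```python
# Minimal sketch of an attestation check: accept the enclave only if its
# reported measurement (hash of the code it is running) matches the value
# the client expects. Signature-chain validation is omitted for brevity.
import hashlib
import hmac

# Measurement the client expects: hash of the enclave build it trusts.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-build-v1.2.0").hexdigest()

def verify_quote(quote: dict) -> bool:
    """Return True only if the enclave runs the expected code."""
    reported = quote.get("measurement", "")
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(reported, EXPECTED_MEASUREMENT)

good_quote = {"measurement": EXPECTED_MEASUREMENT}
bad_quote = {"measurement": hashlib.sha256(b"tampered-binary").hexdigest()}

print(verify_quote(good_quote))  # True: client proceeds to send secrets
print(verify_quote(bad_quote))   # False: client refuses to connect
```

Because the measurement changes whenever the binary changes, a quietly swapped server build produces a mismatched quote and the client walks away before any plaintext is sent.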
There are practical constraints. Passkeys work best on mobile and on recent macOS builds like Sequoia; Windows and Linux users may need a passkey-capable password manager to bridge the gap. And enclave inference adds complexity compared with a conventional AI stack. But the result aligns with the core promise: the host can’t read your chats, store them in a data lake, or reuse them for training.
Why Privacy Matters for Everyday AI Assistants
Chat interfaces invite highly personal disclosures—health anxieties, legal questions, work product, and family matters—precisely the kinds of data that advertising ecosystems value. Marlinspike argues that mixing that intimacy with ads is a step too far. Privacy researchers largely agree that confessional interfaces raise the stakes: misuse or breach risk is amplified when models accumulate long-lived, identity-linked transcripts.
Public sentiment backs the caution. Pew Research Center has repeatedly found that large majorities of Americans are uneasy with how companies use their data. On the enterprise side, Cisco’s 2024 Data Privacy Benchmark reported that 94% of organizations believe customers won’t buy from them if data isn’t properly protected, a reminder that privacy is now a market differentiator, not a niche concern.
The broader industry is inching toward private inference. Apple’s Private Cloud Compute relies on attested enclaves for server-side processing, and cloud providers have expanded confidential computing options. Confer pushes that trajectory to its logical extreme for a consumer-facing assistant: zero operator access to chat content by design.

Pricing and Positioning for a Privacy-First Assistant
Confer’s free tier allows 20 messages per day and five active chats—enough to test workflows but deliberately constrained to limit operating costs. A $35 monthly plan unlocks unlimited access, more capable models, and personalization. That price sits above popular plans from big-name chatbots, reflecting the overhead of enclave compute and the absence of ad subsidies.
Expect interest from compliance-heavy sectors—law, healthcare, finance—where data residency and confidentiality rules collide with the convenience of generative AI. While Confer isn’t marketed as a regulated-industry product, its architecture dovetails with guidance from frameworks like the NIST AI Risk Management Framework and common controls mapped in SOC 2 and ISO 27001.
Trade-Offs and Open Questions About Privacy-First AI
Privacy by design can limit features that depend on long-term server-side memory. Confer says its paid tier supports personalization, but doing so without persistent, operator-readable profiles requires careful engineering—think encrypted, user-controlled state tied to attested code paths. Enclaves also introduce performance overhead; low latency at scale will be an ongoing test.
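One way to square personalization with operator blindness is to keep the profile encrypted under a key that never leaves the user's device, so the server stores only ciphertext. The sketch below is a toy under that assumption: the SHA-256 counter-mode keystream is for illustration only, and a real deployment would use an authenticated cipher such as AES-GCM, decrypted only inside attested enclave code.

```python
# Toy sketch of operator-unreadable personalization state: the profile is
# encrypted client-side with a user-held key, so the server sees only
# {"nonce": ..., "ciphertext": ...}. The SHA-256 keystream cipher below
# is illustrative; real systems would use AES-GCM or similar.
import hashlib
import json
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_profile(key: bytes, profile: dict) -> dict:
    plaintext = json.dumps(profile).encode()
    nonce = secrets.token_bytes(16)  # fresh nonce per encryption
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}

def decrypt_profile(key: bytes, blob: dict) -> dict:
    nonce = bytes.fromhex(blob["nonce"])
    ct = bytes.fromhex(blob["ciphertext"])
    pt = bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
    return json.loads(pt)

user_key = secrets.token_bytes(32)  # held only on the user's device
stored = encrypt_profile(user_key, {"tone": "concise", "topics": ["law"]})
print(decrypt_profile(user_key, stored))  # round-trips on the client
```

The operator can replicate and back up `stored` without ever learning its contents; only a device holding `user_key`, or attested code the user has verified, can recover the profile.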
Model quality is another watchpoint. Open-weight systems have surged—top models now post competitive results on benchmarks like MMLU and GSM8K—but the best proprietary models still edge them out in reasoning and tool use. Confer’s bet is that for many tasks, “good enough” paired with genuine privacy beats “state of the art” coupled to data retention.
What This Means for the AI Market and Privacy
Confer resets expectations for what a private assistant can look like, transforming techniques long discussed in research circles into a consumer product. If adoption is strong, it will pressure incumbents to offer enclave-backed, attested modes that disable logging and training by default, not as buried opt-outs.
The choice in front of users is now sharper: convenience subsidized by attention models, or capability delivered inside cryptographic guardrails. With Confer, Marlinspike is betting that trust is the next feature users will pay for—and that true privacy, not just policy promises, is how you earn it.