
Moxie Marlinspike Unveils Private ChatGPT Rival

By Gregory Zuckerman
Last updated: January 19, 2026, 1:07 pm
Technology | 6 Min Read

Moxie Marlinspike, the cryptographer best known for co-founding Signal, has introduced Confer, a privacy-first alternative to mainstream chatbots. The service aims to deliver the convenience of ChatGPT-like assistants without the data collection that typically shadows them, promising that user conversations never become training material and never fuel targeted ads.

Confer arrives as AI platforms explore new revenue models, including advertising, and as consumers increasingly confide sensitive details to conversational systems. The pitch is straightforward: get modern AI utility without exposing your life story to a data broker’s playbook.

Table of Contents
  • How Confer Locks Down Conversations and Data
  • Why Privacy Matters for Everyday AI Assistants
  • Pricing and Positioning for a Privacy-First Assistant
  • Trade-Offs and Open Questions About Privacy-First AI
  • What This Means for the AI Market and Privacy

How Confer Locks Down Conversations and Data

Confer’s security architecture blends familiar standards with hardware-backed safeguards. On the client side, it uses WebAuthn passkeys to establish a cryptographically bound session, so the request that leaves your device is tied to a key you control. Instead of being handled by a typical server process, messages are decrypted only inside a Trusted Execution Environment (TEE), a hardware-isolated enclave designed to keep plaintext out of the host’s reach.
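
Confer’s client internals aren’t public, but the passkey-binding idea maps directly onto the standard WebAuthn browser API. The sketch below is illustrative only: the endpoint, relying-party ID, and payload fields are assumptions, and a production client would also encrypt the message to the enclave’s attested key before sending it.

```typescript
// Illustrative sketch: binds an outgoing request to a platform passkey using
// the standard WebAuthn API. Endpoint, rpId, and payload shape are hypothetical.
async function sendBoundMessage(message: string): Promise<Response> {
  // Hash the outgoing payload and use it as the WebAuthn challenge, so the
  // passkey signature covers this specific request (simplified; real flows
  // usually mix in a server-issued nonce as well).
  const payload = new TextEncoder().encode(message);
  const challenge = await crypto.subtle.digest("SHA-256", payload);

  // Ask the platform authenticator to sign the challenge with the user's passkey.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: new Uint8Array(challenge),
      rpId: "confer.example",         // hypothetical relying-party ID
      userVerification: "required",   // require biometric / PIN confirmation
    },
  })) as PublicKeyCredential;
  const auth = assertion.response as AuthenticatorAssertionResponse;

  // Send the message plus the signed assertion; the server-side verifier
  // checks the signature against the registered passkey before proceeding.
  return fetch("https://confer.example/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message, // in practice, ciphertext encrypted to the attested enclave key
      credentialId: toB64(assertion.rawId),
      signature: toB64(auth.signature),
      authenticatorData: toB64(auth.authenticatorData),
      clientDataJSON: toB64(auth.clientDataJSON),
    }),
  });
}

function toB64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...new Uint8Array(buf)));
}
```

The point of the binding is that a stolen session token alone isn’t enough to impersonate the user; the authenticator, and the key it protects, has to participate in every sensitive request.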

Remote attestation verifies that the enclave is running the expected code before any secret touches memory, a technique similar in spirit to Intel SGX, AMD SEV, or AWS Nitro Enclaves. That means operators can’t quietly swap binaries or siphon logs without detection. Inside the enclave, an array of open-weight foundation models handles queries, favoring transparency over opaque, closed-weight systems.
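
Marlinspike hasn’t published Confer’s attestation format, but the client-side check follows a common pattern across SGX, SEV, and Nitro: verify the vendor’s signature over an attestation document, compare the reported code measurement against a value pinned in the client, and only then encrypt secrets to the enclave’s key. A simplified sketch, with hypothetical field names and a placeholder measurement:

```typescript
// Sketch only: real attestation documents (SGX quotes, SEV-SNP reports,
// Nitro attestation documents) differ in encoding and certificate chains,
// but the verification logic has the same overall shape.
import { createVerify } from "node:crypto";

interface AttestationDoc {
  measurement: string;       // digest of the code image running in the enclave
  enclavePublicKey: string;  // key clients encrypt session secrets to
  signedPayload: string;     // base64 bytes covered by the signature
  signature: string;         // base64 signature from the hardware vendor's key
}

// Digest of the audited enclave build, pinned in the client release (placeholder).
const EXPECTED_MEASUREMENT = "PINNED_BUILD_DIGEST";

function verifyAttestation(doc: AttestationDoc, vendorKeyPem: string): string {
  // 1. Check the hardware vendor's signature over the attestation payload.
  const verifier = createVerify("sha256");
  verifier.update(Buffer.from(doc.signedPayload, "base64"));
  if (!verifier.verify(vendorKeyPem, Buffer.from(doc.signature, "base64"))) {
    throw new Error("attestation signature invalid");
  }

  // 2. Confirm the enclave is running exactly the code we expect.
  if (doc.measurement !== EXPECTED_MEASUREMENT) {
    throw new Error("enclave measurement does not match pinned build");
  }

  // 3. Only now is it safe to encrypt secrets to the enclave's public key.
  return doc.enclavePublicKey;
}
```

In practice there are intermediate certificates, freshness nonces, and revocation checks along the way, but the guarantee the client is after is the same: no match against the pinned measurement, no secrets.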

There are practical constraints. Passkeys work best on mobile and on recent macOS releases such as Sequoia; Windows and Linux users may need a password manager to bridge the compatibility gap. And enclave inference adds complexity compared with a conventional AI stack. But the result aligns with the core promise: the host can’t read your chats, store them in a data lake, or reuse them for training.

Why Privacy Matters for Everyday AI Assistants

Chat interfaces invite highly personal disclosures—health anxieties, legal questions, work product, and family matters—precisely the kinds of data that advertising ecosystems value. Marlinspike argues that mixing that intimacy with ads is a step too far. Privacy researchers largely agree that confessional interfaces raise the stakes: misuse or breach risk is amplified when models accumulate long-lived, identity-linked transcripts.

Public sentiment backs the caution. Pew Research Center has repeatedly found that large majorities of Americans are uneasy with how companies use their data. On the enterprise side, Cisco’s 2024 Data Privacy Benchmark reported that 94% of organizations believe customers won’t buy from them if data isn’t properly protected, a reminder that privacy is now a market differentiator, not a niche concern.

The broader industry is inching toward private inference. Apple’s Private Cloud Compute relies on attested enclaves for server-side processing, and cloud providers have expanded confidential computing options. Confer pushes that trajectory to its logical extreme for a consumer-facing assistant: zero operator access to chat content by design.


Pricing and Positioning for a Privacy-First Assistant

Confer’s free tier allows 20 messages per day and five active chats—enough to test workflows but deliberately constrained to limit operating costs. A $35 monthly plan unlocks unlimited access, more capable models, and personalization. That price sits above popular plans from big-name chatbots, reflecting the overhead of enclave compute and the absence of ad subsidies.

Expect interest from compliance-heavy sectors—law, healthcare, finance—where data residency and confidentiality rules collide with the convenience of generative AI. While Confer isn’t marketed as a regulated-industry product, its architecture dovetails with guidance from frameworks like the NIST AI Risk Management Framework and common controls mapped in SOC 2 and ISO 27001.

Trade-Offs and Open Questions About Privacy-First AI

Privacy by design can limit features that depend on long-term server-side memory. Confer says its paid tier supports personalization, but doing so without persistent, operator-readable profiles requires careful engineering—think encrypted, user-controlled state tied to attested code paths. Enclaves also introduce performance overhead; low latency at scale will be an ongoing test.
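
Confer hasn’t described how that personalization works; one plausible shape, offered here purely as an assumption, is state encrypted on the client with a key the user controls, so the operator stores only ciphertext and the profile becomes readable only inside attested code. A minimal WebCrypto sketch:

```typescript
// Hypothetical client-side encryption of a personalization profile.
// The userKey would be derived from a secret the user holds (for example,
// via HKDF from a passkey PRF output); the server never sees plaintext.
async function encryptProfile(
  profile: object,
  userKey: CryptoKey,
): Promise<{ iv: string; ciphertext: string }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per save
  const plaintext = new TextEncoder().encode(JSON.stringify(profile));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, userKey, plaintext);
  return { iv: toB64(iv.buffer), ciphertext: toB64(ciphertext) };
}

async function decryptProfile(
  blob: { iv: string; ciphertext: string },
  userKey: CryptoKey,
): Promise<object> {
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: fromB64(blob.iv) },
    userKey,
    fromB64(blob.ciphertext),
  );
  return JSON.parse(new TextDecoder().decode(plaintext));
}

function toB64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...new Uint8Array(buf)));
}
function fromB64(s: string): Uint8Array {
  return Uint8Array.from(atob(s), (c) => c.charCodeAt(0));
}
```

The trade-off is that losing the user-held key means losing the profile; that is the cost of keeping it out of the operator’s hands.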

Model quality is another watchpoint. Open-weight systems have surged—top models now post competitive results on benchmarks like MMLU and GSM8K—but the best proprietary models still edge them out in reasoning and tool use. Confer’s bet is that for many tasks, “good enough” paired with genuine privacy beats “state of the art” coupled to data retention.

What This Means for the AI Market and Privacy

Confer resets expectations for what a private assistant can look like, transforming techniques long discussed in research circles into a consumer product. If adoption is strong, it will pressure incumbents to offer enclave-backed, attested modes that disable logging and training by default, not as buried opt-outs.

The choice in front of users is now sharper: convenience subsidized by attention models, or capability delivered inside cryptographic guardrails. With Confer, Marlinspike is betting that trust is the next feature users will pay for—and that true privacy, not just policy promises, is how you earn it.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.