Dot, the personalized AI companion designed to listen to and connect with you rather than act like a generic chatbot, is shutting down.
The start-up behind the app, New Computer, told users it will keep the service running for a brief period so people can export their conversations and memories before the product disappears. The founders attributed the decision to divergent visions, with one of them describing a “split in our ‘North Star.’”
Dot set out to build a highly personal assistant that would learn a user’s preferences, moods and habits over time. Co-founders Sam Whitmore and designer Jason Yuan conceived the app as a reflective companion and a supportive mirror for everyday life rather than a productivity aid. The idea resonated with early adopters, but building a safe, sustainable and genuinely personal AI at scale proved an even taller order than the pitch suggested.
A rough niche within a growth category
Companion chatbots are one of consumer AI’s stickiest use cases. Character.AI, Replika and Pi are proof that millions of people will spend countless hours, and often money, chatting with artificial personalities. Industry trackers have estimated Character.AI’s traffic in the tens of millions of monthly visits, and Replika has claimed a sizable subscription base. But traction is uneven: Dot’s lifetime iOS downloads total roughly 24,500, according to estimates from Appfigures, underscoring the distance between curiosity and daily utility for upstart challengers.
Dot’s iOS-only approach, though design-forward, probably limited its addressable market. And while entertainment-first bots can lean on role-play and communities to drive virality, Dot aspired to introspection and emotional support, a higher bar for reliability, privacy and trust. That positioning can deepen loyalty, of course, but it also raises the stakes when the product changes or goes away.
Safety scrutiny sets the bar higher
As companion AI went mainstream, safety concerns rose with it. Clinicians and researchers have cautioned that overly agreeable chatbots can inadvertently validate delusions, a hazard sometimes described in case reports as AI-induced or AI-amplified psychosis. In one widely reported lawsuit, parents claimed that conversations with a general-purpose chatbot were a factor in a teenager’s death. State attorneys general have pressed major AI providers about guardrails, and civil-society groups have called for more disclosure when AI conversations turn to mental health.
Dot did not cite safety problems as the reason for shutting down. Operating in this category does, however, require robust crisis-handling protocols, clinician-reviewed responses to self-harm disclosures, age gating and continual red-teaming, all expensive, specialized work that is a heavy lift even for the largest labs. For a tiny startup, the combined weight of moral duty and regulatory expectation can be existential.
The economics of intimate AI
Personalization is expensive. Companion apps have to remember across long time horizons, recall the right context quickly and generate personable responses in real time. That translates to a steady budget for inference, vector databases and safety pipelines. Larger companies can spread those costs across massive scale or enterprise contracts; consumer-first startups cannot, living or dying on subscriptions in the $10-a-month range that seldom cover the underlying compute and storage bill. Even well-heeled players have pivoted toward business customers to protect margins.
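To make the squeeze concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (messages per day, tokens per message, price per million tokens, storage and safety overhead) is a hypothetical assumption chosen for illustration, not a number from Dot, New Computer or any specific model provider.

```python
# Back-of-envelope monthly serving cost for one heavy user of a companion app.
# All constants below are illustrative assumptions, not real pricing data.

MESSAGES_PER_DAY = 40        # assumed chat volume for a heavy user
TOKENS_PER_MESSAGE = 2_000   # assumed prompt + retrieved memories + reply
PRICE_PER_M_TOKENS = 5.00    # assumed blended $ per 1M tokens of inference
MEMORY_STORAGE_COST = 0.50   # assumed $/user/month for vector DB + embeddings
SAFETY_OVERHEAD = 0.25       # assumed extra inference spent on moderation passes


def monthly_cost_per_user() -> float:
    """Estimate the monthly cost of serving one heavy user under the assumptions above."""
    tokens_per_month = MESSAGES_PER_DAY * TOKENS_PER_MESSAGE * 30
    inference = tokens_per_month / 1_000_000 * PRICE_PER_M_TOKENS
    return inference * (1 + SAFETY_OVERHEAD) + MEMORY_STORAGE_COST


if __name__ == "__main__":
    cost = monthly_cost_per_user()
    subscription = 10.00  # the roughly $10-a-month price point discussed above
    print(f"Estimated cost per heavy user: ${cost:.2f}/month")
    print(f"Margin at ${subscription:.0f}/month: {(subscription - cost) / subscription:.0%}")
```

Under these assumed numbers the heaviest users cost more to serve than the subscription brings in, which is exactly the gap between pricing and the compute bill described above; the real figures vary widely by model and usage, but the shape of the problem is the same.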
Dot’s design ambitions were admirably high. But when founders’ visions stop aligning, particularly around model choice, safety posture and monetization, the way forward becomes blurry. Rather than dilute their product philosophy, New Computer chose to pull back. In a market where the imperative to “move fast” can collide with “do no harm,” that may be the most responsible outcome.
What users should do now
The company said in a statement that users can export their data from the app’s settings before the service ends. If you relied on Dot for journaling or memory, download an archive now and consider requesting deletion of your data afterward. Under privacy regimes such as GDPR and California’s CPRA, people have rights to data access and erasure; even outside those jurisdictions, many companies honor such requests.
Those evaluating alternatives should weigh three things: clear safety policies (especially around self-harm content), transparency about data handling and the ability to opt out of having their data used to train models. If feeling better is the goal, experts caution, “it’s going to take a combination of a chatbot and human resources,” whether that means friends, family or a mental health professional. AI can be good company, but it is not a clinician.
What Dot’s exit signals
Dot’s closure highlights a broader reset for consumer AI: novelty alone can no longer sustain products that make intimate promises. The next generation of companion apps will need stronger guardrails, clearer value beyond conversation, and economics that are not predicated on unsafe engagement. For founders, the lesson is to treat emotional use cases the way you would medical-adjacent products: design for harm minimization, measure outcomes, and budget for safety as a first-class feature.
New Computer exits with a clever concept that asked what a chatbot could be. Its departure is a cautionary signal that building trustworthy, personal AI is not just about great models and UX, but also about aligned leadership, rigorous safety and business models that can bear the weight of intimacy.