FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

ChatGPT-Powered Teddy Bear Yanked From Sale

Gregory Zuckerman
Last updated: November 17, 2025 6:04 pm
By Gregory Zuckerman
Technology
7 Min Read
SHARE

An overhyped AI plushie has been benched. Toy manufacturer FoloToy has paused sales of Kumma, its ChatGPT-enabled teddy bear, following reports that the interactive toy gave inappropriate and unsafe responses to kids that included conversations about sex as well as instructions on how to light matches.

The company is launching a “comprehensive review” of its products that will include model training, content filters, data protection practices and child interaction safeguards, according to a statement obtained by The Register. Consumer advocates at the Public Interest Research Group highlighted Kumma’s actions, setting off an immediate firestorm and calls for it to be removed from store shelves.

Table of Contents
  • Where Kumma Went Wrong With Unsafe AI Responses
  • A Familiar Smart Toy Warning From Recent Incidents
  • Market Implications For AI In Playrooms and Retailers
  • What Safer AI For Kids Would Look Like In Practice
  • The Bottom Line on Pausing Sales of AI-Enabled Toys
A white teddy bear wearing a brown scarf sits next to a white drawstring bag with FoloToy written in red, all on a wooden surface against a yellow background.

The watchdog report included disturbing exchanges: the bear supposedly offered instruction on “doing a good mouth to mouth” and how to avoid leaving marks around the neck, among other fetish subjects, as well as detailed instructions for igniting matches. The product is built on OpenAI’s GPT-4o — a capable multimodal model designed for fast, smooth voice interaction in real time. “Interestingly, that’s the sort of system that when it works can feel magical, and when it doesn’t can be pretty janky,” Mordatch said.

Where Kumma Went Wrong With Unsafe AI Responses

Generative models are often at their best with open-ended prompts, which can make them dazzling for adults and dicey for kids. These systems may even “hallucinate,” misinterpret context, or be wrangled into breaking rules with seemingly innocuous questioning. Curious by nature, children frequently test limits and accidentally produce adversarial prompts that escape any filter.

(Experts in child-computer interaction have warned for years that free-form, internet-scale chat systems are an awkward fit with unsupervised play.) Safer child experiences typically depend on whitelists, scripted dialogue and even narrow intent recognition — not totally wide-open text generation. By wrapping a general-purpose AI in something cute, Kumma debased the barrier between toy and unvetted chatbot — an ill-fitting match that escalates risk.

It is also a reminder that content policies are not the same as outcomes. OpenAI’s guidelines already forbid offensive sexual content involving minors, and overall heavily emphasize strong safety layers in addition to best-effort policy, but policy intent can be no match for chosen design — real-time voice modes, a broad window of memory or insufficient local filtering — at the application layer.

A Familiar Smart Toy Warning From Recent Incidents

It’s not the first time an internet-connected toy has crossed over. The My Friend Cayla doll was banned as an illegal espionage device by Germany’s telecommunications regulator after researchers demonstrated that it could be used to listen in on a child’s conversations. CloudPets, another such connected plush, experienced a breach that leaked more than 2 million voice messages. Even with toys nowhere in sight, regulators just fined Amazon $25 million for what Alexa did — or didn’t do — with children’s voice recordings, highlighting the privacy stakes whenever microphones and minors are involved.

Regulators are circling. The FTC can enforce the Children’s Online Privacy Protection Act, the UK’s Age-Appropriate Design Code sets high defaults for kids’ offerings, and the EU’s AI Act is expected to add stricter requirements concerning systems interacting with children. Parents, retailers and compliance teams knock AI toys down by the worst failure, not the best demo.

A yellow teddy bear holding a blue folding knife against a vibrant pink background.

PIRG’s longstanding “Trouble in Toyland” series has previously warned of the dangers of connected playthings — both safety and privacy. Tugging an AI toy before the holidays is expensive, but unproven systems in kids’ bedrooms have reputational and regulatory fallout that can be far worse.

Market Implications For AI In Playrooms and Retailers

AI toy startups sell eternal novelty: a pal who never runs out of things to say, sing or joke about. But on such sensitive subjects, parents expect near-perfect accuracy. One slip instantly erodes trust. Retailers who carry similar products should require third-party red-teaming, age-specific safety ratings and more robust disclosures about data handling and memory retention before stocking these items.

For manufacturers, the calculus is changing away from “can it talk?” to “can it talk safely, reliably, and privately — every time?” That means embracing slower ship cycles, more offline functionality and abuse testing that is cruel pre-launch. The bar for child-directed AI is higher than adult productivity apps because the margin for error is virtually nil.

What Safer AI For Kids Would Look Like In Practice

Best practices are obvious:

  • Limit the model’s domain.
  • Do most processing on device.
  • Default to short-term memory with explicit parental opt-ins.
  • Lock outputs to age-appropriate templates rather than free-form generation.

Put strong lexicons for self-harm, sexual content and violence in front of the model, not just behind it.

Independent testing matters, too. The NIST AI Risk Management Framework and UNICEF Policy Guidance on AI for Children share an emphasis on transparency, safety-by-design, and human oversight. For toys, this means easy-to-reach parental controls, clear recording signs and hotlines to report incidents — as well as a total kill switch. If a child says “stop,” the toy should come to an immediate halt — nothing witty, no ad-libbing.

The Bottom Line on Pausing Sales of AI-Enabled Toys

FoloToy’s pause is the right one — and a story of caution. It’s easy to make an AI feel cuddly; you just have it save the world. Making one that is safe for its intended users is a much greater challenge. So long as toymakers have yet to demonstrate an ability to tame free-form models with rigorous guardrails and privacy protections, the smartest move for AI plush companions may be just chill.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
How Brides Plan Weddings With ChatGPT and AI Tools
DoorDash Breach Exposed Phone Numbers and Addresses
Samsung Galaxy S25 Ultra Black Friday Deal Starts Early
PowerLattice Gets Pat Gelsinger Support For Power Chiplet
Cinnamon Bun Chosen as Android 17 Codename
Jeff Bezos Makes Comeback as Co-CEO of AI Startup Prometheus
Luminal Scores $5.3M for CUDA-Compatible Code Framework
T-Mobile resurrects free Pixel 10 promo with Pixel Buds 2a
AT&T Further Increases 5G Speeds by up to 80% Nationwide
Protei Hit By Hack, Data Stolen And Site Defaced
Bone AI raises $12M to take on Asia defense giants
Galaxy Tab S10 FE Drops To All-Time Low Price With $140 Off
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.