Some Google Home users say their cameras are naming people who don't exist, suggesting the system's new AI-driven summaries may be inventing identities. The reports describe ordinary occurrences, such as emptying trash bins or walking the dog, attributed to strangers with first and last names in homes occupied by a single person. The pattern looks less like a break-in and more like a familiar failure mode of generative AI: hallucination.
Reports show false names appearing in Google Home summaries
In recent posts on Reddit's r/googlehome, one person said an activity summary announced that "Michael" had taken out the garbage, despite no one by that name living there. Another said their daily recap reported that a "Sarah" had finished chores, even though the resident is a man who lives alone. A third received a vacuuming reminder, supposedly from a friend, that the friend never actually sent. In numerous cases, the source video reportedly showed the homeowner doing the chore themselves.

These anecdotes don't yet add up to a systemic failure, but they share commonalities: names appear out of nowhere, and specific details, such as the sex of someone's pet or how many pets they have, can be wrong. A Nest community moderator has been asking affected customers to share footage so the team can analyze it, which suggests Google is taking the reports seriously.
Why Google Home’s AI may invent people in activity recaps
Google has been incorporating Gemini into the Home experience so users can ask conversational questions about events and get richer summaries. Large language models, though, are probabilistic pattern-completers. Presented with a scene to describe, say a person wheeling a bin down to the curb, an LLM trained on vast amounts of text can "fill in" plausible but bogus details, including proper names, because it does not check facts. If "Michael" frequently co-occurs with household chores in the training data, the model may happily emit that name without any grounding in your home's data.
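The failure mode is easy to demonstrate in miniature. The toy sketch below is not Google's system; the corpus, the function name, and the data are all hypothetical. It simply shows how a pure pattern-completer picks the statistically likeliest subject for an action, with no check against who is actually on camera.

```python
# Toy illustration of pattern completion, not Google's pipeline.
# The "corpus" stands in for training text where chores co-occur with names.
from collections import Counter

corpus = [
    "Michael took out the garbage",
    "Michael took out the garbage last night",
    "Sarah finished the chores",
    "someone took out the garbage",
]

def most_likely_subject(action: str) -> str:
    """Return the subject that most often precedes this action in the corpus."""
    subjects = Counter(line.split()[0] for line in corpus if action in line)
    return subjects.most_common(1)[0][0]

# The camera only saw "a person", but the completer outputs a proper name
# because that is the most frequent continuation it has seen.
print(most_likely_subject("took out the garbage"))  # -> "Michael"
```

A real language model is vastly more sophisticated, but the core issue is the same: unless the pipeline constrains the output to grounded facts, plausible-sounding specifics get generated rather than observed.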
That's quite different from Nest's Familiar Face Detection, which must be explicitly opted in to and uses names the user provides for recognized faces. You should never see a name from Familiar Face Detection that you didn't supply; it is supposed to fall back to "Unknown person." The oddities described here look more like language-model embellishment layered on top of visual events than a face-recognition error or an account mix-up.
Researchers and standards bodies have cautioned that hallucination remains a persistent risk in generative systems, particularly when they summarize or fill in blanks for ambiguous inputs. NIST's AI Risk Management Framework, for example, recommends guardrails around uncertainty handling and traceability, attributes that consumer smart-home summaries generally do not surface today.

Trust and safety risks when summaries invent identities
Smart home alerts feed real decisions: whether to call a neighbor, contact authorities or check on a family member. A fabricated identity risks false alarms or, worse, desensitizing users to genuine emergencies. The mismatch also pushes consent boundaries: full names are personal, intimate data points, and a system that conjures them up risks misrepresenting household members or visitors.
Regulators have signaled growing interest in exaggerated AI claims and opaque automation in consumer products. If a camera summary presents AI-generated content as fact without clear labeling or confidence signals, it could draw the attention of consumer protection agencies. At minimum, users deserve to know when a detail wasn't reported by a sensor but "guessed" by a generative model.
Practical steps users can take to reduce false summaries
- Check the camera settings in the Home app to see whether Familiar Face Detection is turned on. Confirm that no names appear that you didn't add yourself, and clear the face library if labels look wrong or mixed up.
- If you are enrolled in any public preview or experimental features, disable them to return to more conservative notifications that don't include AI-generated blurbs.
- Check summaries against the actual event clips before acting on them. If a recap names someone, review the timeline and share the clip through the Nest Community channels for diagnosis.
- Stay current with firmware and app updates. Model and pipeline updates frequently ship fixes that reduce hallucinations and tighten thresholds on descriptive language.
What Google should clarify to rebuild trust and accuracy
A few simple guardrails could rebuild trust. First: never generate proper names unless they come from user-created labels or contacts, and make that rule explicit in the UI. Second: display confidence indicators and a clear badge whenever a summary is AI-generated. Third: offer a one-tap way to see the raw event description alongside the clip.
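For a rough sense of what the first two guardrails could look like, here is a minimal sketch. The Summary type and the enforce_name_allowlist helper are hypothetical, not part of any Google API, and a naive capitalized-word heuristic stands in for real name detection: any name not in the user's Familiar Faces labels is replaced, and the result stays badged as AI-generated with its confidence attached.

```python
# Minimal sketch of a name allowlist plus AI-generated badge; the types and
# helper below are hypothetical, not part of any Google API.
import re
from dataclasses import dataclass

@dataclass
class Summary:
    text: str
    confidence: float          # model confidence to surface in the UI
    ai_generated: bool = True  # drives a visible "AI-generated" badge

def enforce_name_allowlist(summary: Summary, familiar_faces: set[str]) -> Summary:
    """Replace capitalized, name-like tokens that are not user-provided labels."""
    def scrub(match: re.Match) -> str:
        word = match.group(0)
        return word if word in familiar_faces else "an unrecognized person"
    # Naive heuristic: treat standalone capitalized words as candidate names.
    scrubbed = re.sub(r"\b[A-Z][a-z]+\b", scrub, summary.text)
    return Summary(text=scrubbed, confidence=summary.confidence, ai_generated=True)

recap = Summary(text="Michael took out the garbage", confidence=0.41)
print(enforce_name_allowlist(recap, familiar_faces={"Dana"}).text)
# -> "an unrecognized person took out the garbage"
```

A production system would key off the face-recognition results rather than a capitalized-word regex, but the principle holds: names reach the user only if the user supplied them, and everything else arrives clearly labeled as a machine's guess.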
Transparency about where Gemini runs (we have learned it runs on-device in some cases, but the broader picture is unclear), how voice and video are processed, and how long summaries are retained would also help. Finally, Google should publish a postmortem for the affected reports and commit to defaulting to factual minimalism over narrative flourish in security contexts.
Generative AI may make it easier to talk to our smart homes, but it shouldn't invent people. Until the reports are confirmed and the cause is pinned down, your best bet is to treat any named identity as dubious and let the footage itself, rather than the flourish that accompanies it, drive your decisions.