A social media stunt is populating the internet with images of “homeless intruders” in family homes, but police say the joke isn’t a laughing matter.
The prank, which has spread across TikTok and other social media platforms, features creators texting relatives AI-generated images of a stranger on the front stoop, lounging on the couch, or raiding the refrigerator, then filming the panicked responses for views. The images come from generative tools that ingest photos of real people and produce plausible composites.
- How the AI Intruder Prank Fools Families and Friends
- Police: Online Risks Translate Into the Real World
- Dehumanization Wrapped as Content Harms the Unhoused
- Platform Policies and the AI Gray Zone for Synthetic Media
- What Families and Creators Can Do to Reduce Real Risks
- The Bottom Line on a Risky AI Prank and Its Fallout
How the AI Intruder Prank Fools Families and Friends
In a typical clip, a creator claims to have spotted a stranger who “looks homeless” breaking into their home. They send staged, photorealistic images, often produced with consumer AI tools, that appear to show the person moving from the door to the living room and into bedrooms. In one TikTok by creator Joe Mele, the images escalate into an increasingly heated back-and-forth with his father; the video has been seen easily more than 10 million times and inspired dozens of imitators using similar prompts and techniques.
Creators typically say they used popular AI image editors to modify a handful of base photos, inserting a fabricated intruder and adding realistic shadows or reflections. The results pass as phone snapshots, convincing family members that the break-in is real; some recipients have even called police before the creator revealed the twist.
Police: Online Risks Translate Into the Real World
Multiple agencies say they have already fielded calls tied to the prank. Dorset Police in England said emergency lines had been inundated by the trend and urged people to verify disturbing messages from family members before dialing 999. In Massachusetts, the Salem Police Department criticized the videos as “reckless,” noting that officers must treat such reports as in-progress burglaries, which raises the likelihood of dangerous confrontations.
Emergency services stress that seconds count in a real crisis. Prank-driven calls can distract responders, stretch already scarce resources, and delay help for people in genuine danger. The dynamic is not identical to swatting, but experts say it shares a key risk with false police reports: fabricated urgency that can turn harmful when officers arrive expecting a volatile situation.
Dehumanization Wrapped as Content Harms the Unhoused
Beyond the safety risks, advocates warn that the gag perpetuates harmful stereotypes about people experiencing homelessness. The punch line casts an “unhoused-looking” stranger as disgusting, invasive, or criminal, tropes that fuel stigma and fear. In the United States, roughly 653,000 people were homeless on a single night in 2023, according to a Department of Housing and Urban Development estimate, the highest figure since national counts began, with families and older adults among the fastest-growing groups. In England, the Department for Levelling Up, Housing and Communities counted 3,898 people sleeping rough in 2023, a significant year-over-year increase.
Organizations such as the National Alliance to End Homelessness in the US and Crisis in the UK emphasize that public narratives shape both policy and everyday interactions. By treating unhoused people as props or threats, this content normalizes their exclusion and can ripple into more hostile community responses, a particular worry at a time of rising homelessness, housing scarcity, and inflation.
Platform Policies and the AI Gray Zone for Synthetic Media
Platform rules touch this trend from several angles. TikTok bans dangerous behavior and requires that realistic synthetic media be labeled; it also prohibits AI depictions of private individuals made without consent. But enforcement lags behind viral mechanics, and creators can claim the images depict an imaginary person rather than a real target. And as AI image tools become more seamlessly integrated into apps, from smartphone photo editors to full AI suites, the threshold for producing believable “receipts” keeps dropping.
Researchers who study misinformation caution that intimate, closed-channel delivery, such as texting a convincing image to a parent or partner, is a powerful persuasion tool. Unlike a public post, a private message carries built-in trust. Combined with sufficiently realistic synthetic visuals, that trust can bypass skepticism long enough to trigger a 911 call or a confrontation at home.
What Families and Creators Can Do to Reduce Real Risks
Tips for families and partners
- Verify before escalating: call the sender, request live video, or check with a neighbor or smart doorbell if possible.
- If someone appears to be in immediate danger, contact the police; otherwise, treat bizarre intruder photos with skepticism until verified.
- Set a house rule: no “gotcha” pranks related to emergencies.
Advice for creators considering the trend
- Skip this prank entirely.
- Use AI tools in ways that don’t cause distress, deceive, or stigmatize people who are already vulnerable.
- If you do use synthetic imagery, label it clearly and prominently, avoid anything that could trigger real panic, and never encourage viewers to call the police.
The Bottom Line on a Risky AI Prank and Its Fallout
AI makes it trivially easy to fabricate convincing evidence of a break-in. That novelty, amplified by platform reward systems, is the engine behind a viral prank that wastes emergency resources, endangers responders and families, and dehumanizes people who are homeless. The views aren’t worth the harm, and the costs may ultimately land on the creators themselves.