Several high-profile YouTube creators whose channels were suddenly removed say an "increasingly automated" rights management system linked them to accounts they had nothing to do with. The takedowns are renewing questions about just how far YouTube's AI-driven moderation can go without human guardrails, and what redress creators really have when it gets things wrong.
Creators report sudden terminations across YouTube
Tech creator Enderman, who had about 350,000 subscribers, said he knew his days were numbered after YouTube shut down a smaller channel of his and posted a warning that similar accounts would be next. The twist came in the purported "relation": YouTube's notice cited an association with a Japanese-language channel that had already been removed for multiple copyright strikes, a channel Enderman says he has no affiliation with.

Others tell similar stories. Scratchit Gaming, which has over 400,000 subscribers, said its channel was deleted over an apparent connection to the same Japanese channel. Another creator, who goes by 4096 and has nearly half a million followers, said the same. In recent weeks, YouTube has removed more large accounts under a policy covering "spam, deceptive practices and scams," which has added to the confusion.
Unrelated channels linked by alleged AI signals
The creators suspect that the system infers associations from signals that can be noisy in practice: shared devices or IP addresses, recovery emails, AdSense or bank details, multi-channel network (MCN) relationships, contractor access, even content-fingerprint overlaps. None of these necessarily proves two channels share an owner, and any of them can generate false positives if a freelancer, agency or manager touches multiple accounts, or if one set of credentials is stolen.
Security researchers have long noted that compromised accounts can create hidden links across services when bad actors reuse sessions, tools or payment routes. To an automated system, that can look like ban evasion. Machines make mistakes, it turns out, and without a human double-checking these sorts of signals, even a legitimate creator can wind up flagged as "associated" with a channel they have never touched.
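To see how brittle that kind of inference can be, consider a minimal, purely hypothetical sketch of signal-overlap scoring between two channels. Every signal, weight and threshold below is invented for illustration; YouTube has never disclosed how its systems actually link accounts.

```python
# Hypothetical sketch: scoring "association" between two channels from
# overlapping signals. All signals, weights and the threshold are invented
# for illustration; this is not YouTube's actual system.

CHANNEL_A = {
    "device_ids": {"dev-19af"},           # shared laptop used by a contractor
    "recovery_email": "editor@example.com",
    "adsense_account": "pub-111",
    "login_ips": {"203.0.113.7"},
}

CHANNEL_B = {
    "device_ids": {"dev-19af"},           # same contractor, unrelated owner
    "recovery_email": "owner-b@example.com",
    "adsense_account": "pub-222",
    "login_ips": {"203.0.113.7"},         # same coworking-space IP
}

# Invented weights: how strongly each overlapping signal counts.
WEIGHTS = {"device_ids": 0.4, "recovery_email": 0.3,
           "adsense_account": 0.3, "login_ips": 0.2}
TERMINATION_THRESHOLD = 0.5  # invented cutoff for "same operator"

def association_score(a, b):
    score = 0.0
    for key, weight in WEIGHTS.items():
        va, vb = a[key], b[key]
        # Set-valued signals count if they intersect; scalar ones if they match.
        overlap = bool(va & vb) if isinstance(va, set) else va == vb
        if overlap:
            score += weight
    return score

score = association_score(CHANNEL_A, CHANNEL_B)
print(f"score={score:.2f}, flagged={score >= TERMINATION_THRESHOLD}")
# Two independently owned channels that merely share a freelancer and an IP
# already cross the invented threshold -- a false positive.
```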
Automation versus human review in YouTube moderation
YouTube has championed the use of machine learning for years to enforce its policies at scale. According to the company's transparency reports, nearly all initial flags come from automated systems, and YouTube has said that when models are very confident in their predictions, they can act without human oversight. In other instances, the AI routes cases to trained reviewers before removals happen.
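As a rough illustration of what confidence gating looks like in principle, and not a description of YouTube's actual pipeline, the thresholds and routing in such a system might be sketched like this:

```python
# Hypothetical sketch of confidence-gated moderation routing.
# The thresholds and actions are invented for illustration only.

AUTO_ACTION_THRESHOLD = 0.98   # act automatically above this confidence
HUMAN_REVIEW_THRESHOLD = 0.70  # queue for trained reviewers above this

def route_flag(model_confidence: float) -> str:
    """Decide what happens to a flagged channel based on model confidence."""
    if model_confidence >= AUTO_ACTION_THRESHOLD:
        return "automatic enforcement"
    if model_confidence >= HUMAN_REVIEW_THRESHOLD:
        return "queued for human review"
    return "no action"

for confidence in (0.99, 0.85, 0.40):
    print(confidence, "->", route_flag(confidence))
```

Even with a very accurate model, the share of flags that skip human review is where wrongful terminations would slip through.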
But creators say the appeals process is beginning to feel automated too: quick responses full of boilerplate language, with no clear way to provide additional context. Digital rights groups such as the Electronic Frontier Foundation and researchers at the Oxford Internet Institute have warned that at YouTube's scale even a small error rate would mean thousands of wrongful actions, especially when those decisions cascade across "associated" channels.

Policy context and what results in a ban
YouTube's ban-evasion policies are stringent: if one channel is terminated, any channel "owned or operated" by the same user can be terminated as well. Copyright is equally unforgiving: three strikes can kill a channel. The "spam, deceptive practices and scams" category includes:
- Clickbait metadata or faked content
- Fake subscription requests, polls and live countdowns
- Impersonation
- Phishing
- Engagement fraud schemes
None of that is new. What is new is the concern that channels could be pulled down through this route in bulk because an AI has decided they are related to a bad actor. If the claims are true, a single miscategorization of one small, peripheral account can ripple outward and take down unrelated creators who share nothing but a tenuous signal in a database.
What creators can do now to protect their channels
Audit access immediately.
- Remove inactive channel managers.
- Revoke OAuth tokens for unused third-party tools (see the sketch after this list).
- Rotate recovery emails and passwords.
- Enable 2-step verification with hardware keys.
- Create separate personal and brand accounts.
- Review AdSense and banking info for overlap with contractors or agencies.
- If you are in an MCN, make sure there are no unintended cross-links between your channel and others in the network.
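For creators who prefer to script the OAuth cleanup, here is a minimal sketch that inspects and then revokes a Google OAuth token held by a third-party tool. The token value is a placeholder, and the same result is usually easier to achieve from the Google Account permissions page at https://myaccount.google.com/permissions.

```python
# Sketch: inspecting and revoking a Google OAuth token that a third-party
# tool holds for your channel. Assumes you can export the token from the
# tool's configuration; most creators can simply revoke access at
# https://myaccount.google.com/permissions instead.
import requests

TOKEN = "ya29.example-access-token"  # placeholder, not a real token

# 1. See what the token is actually allowed to do (scopes, expiry).
info = requests.get(
    "https://oauth2.googleapis.com/tokeninfo",
    params={"access_token": TOKEN},
    timeout=10,
)
print(info.json())  # includes "scope" and "expires_in" for a valid token

# 2. Revoke it so the tool loses access until it is re-authorized.
revoked = requests.post(
    "https://oauth2.googleapis.com/revoke",
    data={"token": TOKEN},
    timeout=10,
)
print("revoked" if revoked.status_code == 200 else f"failed: {revoked.status_code}")
```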
For appeals, be clinical.
- Record why the alleged association is wrong.
- Document everyone who has had access to the channel in the past.
- Provide proof of independent ownership and logs (IP/device provenance where possible).
- Escalate through your Partner Manager if you have one, or contact Creator Support and (where applicable) your MCN.
- Public attention is one point of leverage, but actual evidence is more effective.
What YouTube says about automation and human review
YouTube has said its systems take automatic action when confidence is high, and that most content decisions involve human reviewers. The company stresses that it continues to invest in improving accuracy and reducing false positives in takedowns. Creators, for their part, want a higher bar for full channel terminations, especially when "associations" are used to justify the decision, and guaranteed human review before that nuclear option is used.
Until YouTube explains how its models draw connections between accounts, or how appeals actually reach a human, the fear remains that one opaque, automated inference is all it takes to destroy a livelihood. For a creator economy built on trust in the platform, that kind of uncertainty is the most destabilizing blow of all.