Google’s $1 million bet on Animaj, an AI-driven children’s media studio, has ignited a fresh outcry from child safety advocates who warn the move normalizes low-quality, algorithmically generated videos for very young viewers. The criticism lands as YouTube, owned by Google, continues grappling with a flood of AI-made kids content that experts say can mislead, overstimulate, and keep infants glued to screens.
Why the investment sparked backlash from advocates
Google recently selected Animaj for support through its AI Futures Fund accelerator, with reporting indicating the company will receive exclusive access to tools such as Veo and Imagen alongside the funding. Fairplay, a nonprofit focused on children’s digital well-being, argues the deal sends the wrong signal: rather than slowing the spread of AI “slop” for kids, it could supercharge it.
Rachel Franz, who leads Fairplay’s Young Children Thrive Offline program, says the platform is already awash in hypnotic, low-value videos aimed at babies and toddlers. Backing a studio whose channels target infants, she contends, effectively invests in content that can harm babies by displacing play, social interaction, and caregiver engagement.
The growing AI slop problem on YouTube for young kids
YouTube has acknowledged the challenge, pledging to demonetize “low-quality clutter.” Yet independent examinations suggest the problem persists. A New York Times analysis this year identified thousands of AI-generated videos aimed at children, including examples that appeared to violate YouTube’s child safety policies. The same report noted the platform does not require AI labels on animated videos, leaving families with limited visibility into what is machine-produced.
The stakes are high: YouTube remains the default TV for many households with young kids, and its recommendation engine can funnel children into endless loops of bright, repetitive content. Research from child media groups has repeatedly found that autoplay and algorithmic suggestions increase time spent watching, often beyond what parents intend.
What Animaj builds and why its expansion matters
Animaj positions itself as a next-generation kids media company that scales existing intellectual property with AI. Its portfolio includes familiar brands like Pocoyo, Maya the Bee, and Ubisoft’s Rabbids, as well as affiliations with channels for infants, such as Hey Kids. According to Bloomberg, Animaj-affiliated channels amassed more than 22 billion views in 2025.
The studio’s pitch is speed and scale: produce more of what children already love, delivered wherever they are, on demand. Critics counter that the very ability to crank out infinite, attention-grabbing clips—with nursery rhymes, looping animations, and sensory overload—prioritizes watch time over developmental value.
What pediatric guidance says about screens for kids
The American Academy of Pediatrics advises avoiding digital media for children younger than 18 to 24 months (except video chatting), and prioritizing co-viewing and interactive, slower-paced programming for older toddlers and preschoolers. The World Health Organization recommends no sedentary screen time for infants and limited screen time for ages 2 to 4.
Several studies reinforce these cautions. Research published in JAMA Pediatrics has linked greater screen exposure in early childhood to poorer performance on developmental measures, including language and attention. Educators emphasize that infants and toddlers need real-world back-and-forth interaction, physical exploration, and unstructured play—activities that are crowded out when “mesmerizing” video dominates their day.
It’s not just the content, it’s the container
Even high-quality videos can be undermined by the environment around them. Features common to video platforms—autoplay, endless scroll, and algorithmic feeds—are engineered to maximize engagement, not to match young children’s developmental limits. Advocates argue that default-off autoplay, robust age safeguards, and transparent AI labeling are baseline requirements if platforms want to be considered safe-by-design for kids.
Fairplay and other groups also call for enforcing existing child safety rules consistently, auditing recommendation systems for kids’ content, and using metrics beyond raw watch time to judge success—such as measures of learning, sleep-friendly pacing, and opportunities for caregiver interaction.
Google’s bet on kids’ AI meets a growing trust gap
Leaders tied to Google’s AI Futures Fund have framed the Animaj partnership as a blueprint for the future of responsible children’s media. But trust will hinge on outcomes families can see: clearer labels for AI-made videos, swift removal of exploitative or policy-violating content, age-appropriate defaults, and fewer algorithmic rabbit holes for the youngest viewers.
Until then, critics insist, pouring resources into AI production for kids risks deepening a status quo where scale outruns safety—and where the smallest children bear the biggest costs.