YouTube is testing an AI chatbot that aims to rein in the wild recommendations on the Home tab. Some users are seeing a “Your Custom Feed” chip alongside Home that lets them tell the company precisely what they want more of and, presumably, less of. The test effectively puts users in the driver’s seat, rather than leaving them to nudge the algorithm passively.
How The Chatbot Influences Your Home Feed
Early testers who encounter the “Your Custom Feed” chip can tap on it and enter prompts like “More long-form explainers about astrophysics” or “Fewer prank videos and celebrity gossip.” The chatbot interprets those instructions and instantly recalculates your Home feed around your stated interests, without you having to dig through settings or wonder how to “train” the system by liking and clicking.
Though YouTube hasn’t fully explained how it all works, the idea is that your natural-language instructions are interpreted and translated into interest vectors and negative signals that update your profile in real time, so the videos you’re shown match your interests more accurately. YouTube’s recommendation architecture, described in Google’s widely quoted deep neural network research for recommendations, is based on embeddings that model user interests and video attributes. A conversational layer on top can guide those embeddings more specifically than likes, watch time, or “Not interested” taps alone.
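To make the embedding idea concrete, here is a minimal toy sketch of how a parsed prompt could nudge a user profile vector toward “more” topics and away from “fewer” ones, then rerank candidate videos by similarity. Everything here is illustrative: the topic vectors, the `steer` and `rank` functions, and the scoring are assumptions for demonstration, not YouTube’s actual system.

```python
# Toy sketch: steering an interest-profile vector with "more X, fewer Y"
# signals, then reranking candidates by cosine similarity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    denom = (dot(a, a) ** 0.5) * (dot(b, b) ** 0.5)
    return dot(a, b) / denom if denom else 0.0

# Toy topic embeddings; a real system would use learned, high-dimensional ones.
TOPIC_VECTORS = {
    "astrophysics": [0.9, 0.1, 0.0],
    "prank videos": [0.0, 0.9, 0.1],
    "cooking":      [0.1, 0.0, 0.9],
}

def steer(profile, prompt_topics):
    """Shift the profile toward positive-weight topics, away from negative ones."""
    for topic, weight in prompt_topics:  # weight > 0 for "more", < 0 for "fewer"
        vec = TOPIC_VECTORS[topic]
        profile = [p + weight * v for p, v in zip(profile, vec)]
    return profile

def rank(profile, candidates):
    """Order candidate videos by similarity to the steered profile."""
    return sorted(candidates,
                  key=lambda c: cosine(profile, TOPIC_VECTORS[c]),
                  reverse=True)

profile = [0.3, 0.5, 0.2]  # prior interest profile (toy values)
# A prompt like "More astrophysics, fewer prank videos" becomes:
profile = steer(profile, [("astrophysics", 1.0), ("prank videos", -1.0)])
print(rank(profile, ["prank videos", "astrophysics", "cooking"]))
# → ['astrophysics', 'cooking', 'prank videos']
```

The point of the sketch is the shape of the mechanism: explicit language signals update the same vector space that passive signals (clicks, watch time) already feed, which is why a sentence can outweigh weeks of accumulated behavior.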
Importantly, the chatbot is additive rather than a complete reset. Think of it as the speediest retune possible: you can prod the system toward a new mix, say, “more cooking tutorials, fewer Shorts, no crypto,” and see it take effect right away. If that holds, it could rescue users from the classic rabbit hole where a single curiosity click dominates the feed for weeks.
Why YouTube Is Changing Its Recommendations
With over 2 billion logged-in users visiting every month, the Home feed is one of YouTube’s most influential surfaces. It’s also a pain point. One off-kilter click can lead to an avalanche of tangential content that seems impossible to shake. Pew Research Center has long tracked that YouTube reaches a large majority of American adults, meaning even minor tweaks to recommendation control could alter how hundreds of millions come across information, entertainment, and news.
User frustration isn’t new. The Mozilla Foundation’s examinations of “regret” reports have suggested that problematic or irrelevant content is often the result of automatic recommendation. YouTube has over time fought back with tools such as “Don’t recommend channel,” topic chips, and more transparent feedback controls. The chatbot experiment takes it a step further by allowing people to express intent in natural language — what is essentially an inclination becomes a direct command, something that the system can immediately act on.
The change is also part of a broader industry trend toward transparency and user control over recommender systems, driven in part by intense scrutiny from policymakers around the world. More visible controls, ones that actually function, let platforms demonstrate that personalization doesn’t have to mean opacity.

How It Compares to Other Platforms’ Recommendation Controls
Rivals have experimented with AI-led curation with mixed success. X rolled out an AI-optimized timeline built with its Grok model to show more of what users say they like. Spotify’s AI DJ announces picks while it rearranges your queue. Thumbs ratings and a “Double Thumbs Up” have helped Netflix refine its suggestions. YouTube’s edge is immediacy and ubiquity: the Home feed is where discovery happens for creators and viewers anyway, so a responsive, conversational editor at that on-ramp could be unusually powerful.
And compared to passive signals, chat-style directives are clearer. Instead of crossing your fingers that a barrage of “Not interested” taps lands the right way, you can say “show me more beginner-friendly Python tutorials” or “dial down political commentary” and see how the Home page adjusts on the spot.
What Testers Need to Look Out For During the Trial
Expect limits. The experiment appears in YouTube’s support documentation, and it seems limited in scope. You might not see the chip at all, and YouTube may change or abandon it depending on results. If you do have access, the most helpful prompts are specific: subjects, formats, creators, and even time horizons. For instance, “More 20–30-minute documentary breakdowns on climate tech this week; fewer Shorts and reaction videos.”
Keep in mind how that intersects with watch history. If your history is paused, or if you are on a shared device, the model might have trouble maintaining that steering. Conversely, overly broad prompts can swing the feed too far and induce whiplash. A handful of granular instructions, used alongside the existing feedback tools (“Not interested,” “Don’t recommend channel,” and curating subscriptions, for example), should produce cleaner results.
Privacy questions will follow. In theory, any conversational input that influences the suggestions can also be used to tweak your profile. YouTube’s long-standing data policies still apply, but clearer disclosure about whether prompts are retained and for how long would build confidence among users, particularly younger ones or those in vulnerable categories.
The Bottom Line on YouTube’s Conversational Home Feed Test
If this test succeeds and scales, YouTube’s Home tab could become less like a black box and more like a tool that you calibrate. For creators, that could mean an audience discovering them by intent rather than accident. For viewers, it’s an exit strategy from the recommendation spiral, one prompt at a time.