Google is quietly testing a new listening feature in Chrome for Android that goes beyond having the browser read your screen aloud: it can turn articles and pages into podcast-style conversations.
First spotted in testing by Android-focused observers, the feature introduces two natural-sounding hosts who riff on what’s on the page, ask each other clarifying questions, and walk you through the material in a way that feels far closer to a show than a monotonous screen reader.

It’s a natural evolution of Chrome’s existing “Listen to this page” feature, with generative smarts akin to the audio summaries already offered in Google’s NotebookLM. Think of it as a nimble, on-the-spot audio show built from whatever you’re reading, no RSS feeds or production teams required.
What you need to know about the podcast-style readings
Under the hood, Chrome pairs advanced text-to-speech with a large language model. Rather than regurgitating every single line, the AI reads through the page, picks out the salient points, and writes a host-to-host script that keeps the context but drops the filler. The result is a conversational rundown that leads with the main points, translates jargon, and even occasionally seems to anticipate questions readers might ask.
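Google hasn’t detailed how this pipeline is wired together, but the description above maps onto a familiar pattern: pull the page text, ask a language model to rewrite it as a two-host script, then hand each line to a text-to-speech voice. The sketch below is purely illustrative and is not Google’s implementation; call_llm and synthesize_speech are hypothetical placeholders for whatever model and TTS service an implementer might plug in, and the HOST/COHOST prompt format is an assumption.

```python
# Illustrative "page to podcast" pipeline, NOT Google's implementation.
# call_llm() and synthesize_speech() are hypothetical placeholders for a
# real language-model API and text-to-speech service.
from dataclasses import dataclass

@dataclass
class DialogueTurn:
    speaker: str  # "HOST" or "COHOST"
    text: str

SCRIPT_PROMPT = (
    "Rewrite the article below as a short conversation between two podcast hosts. "
    "HOST explains the key points; COHOST asks clarifying questions and recaps. "
    "Skip boilerplate, define jargon, and end with the main takeaways. "
    "Format every line as 'HOST:' or 'COHOST:' followed by one sentence.\n\n"
    "ARTICLE:\n{article}"
)

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model, return its scripted dialogue."""
    raise NotImplementedError("Wire up a real model here.")

def synthesize_speech(text: str, voice: str) -> bytes:
    """Placeholder: convert one line of dialogue to audio with the given voice."""
    raise NotImplementedError("Wire up a real TTS service here.")

def page_to_podcast(article_text: str) -> list[bytes]:
    """Turn raw article text into a list of audio clips, one per dialogue turn."""
    script = call_llm(SCRIPT_PROMPT.format(article=article_text))
    turns = []
    for line in script.splitlines():
        line = line.strip()
        if line.startswith("HOST:"):
            turns.append(DialogueTurn("HOST", line[len("HOST:"):].strip()))
        elif line.startswith("COHOST:"):
            turns.append(DialogueTurn("COHOST", line[len("COHOST:"):].strip()))
    voices = {"HOST": "voice_a", "COHOST": "voice_b"}  # two distinct personas
    return [synthesize_speech(t.text, voices[t.speaker]) for t in turns]
```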
You’ll hear two distinct voices playing roles, one guiding and one asking inquisitive questions, and the resulting back-and-forth is easier to digest than a single robotic narrator reading everything at you. Early demos show the pair pausing to define terms, previewing what’s next, and reiterating the key takeaways at the end, much as seasoned podcasters do.
This foray into dynamic audio fits Google’s broader Chrome ambitions. The company recently brought its Gemini assistant to the desktop browser for on-page summaries; this new audio mode is the ears-first complement, tailored to people who would rather listen while commuting, cooking, or working out.
How to try it in Chrome for Android right now
Open any article in Chrome on Android, tap the three-dot menu, and select “Listen to this page.” A player with standard media controls appears. If your device has the upgrade, you’ll also see a small toggle in the lower-left corner that switches between normal playback and the AI mode, with a label above the controls telling you which mode you’re in.
The rollout is gradual and server-side, so not all users will see it immediately. Desktop and iOS availability is not yet official, and Google’s help documentation for “Listen to this page” doesn’t yet mention the AI version, suggesting the company may still be fine-tuning the experience before a wider rollout.
Why it matters for attention, access and publishers
Audio is one answer to a growing attention problem: we’re drowning in long reads but short on time. According to Edison Research’s Infinite Dial, about one in three Americans listens to podcasts every week, a sign that ears-on consumption has gone mainstream. By turning any page into a podcast-like session, Chrome lowers the friction for people who would rather listen than stare at a screen.

Accessibility is another win. A conversational format can make dense topics, whether policy analyses or scientific explainers, friendlier to people with visual impairments, learning differences, or simply a preference for listening. Typical speaking rates fall in the 160–180 words-per-minute range, while the average adult reads at 200–250 words per minute, so a tight conversation that surfaces only the important information can make listening competitive with scanning.
For publishers, it’s a free, effortless audio channel. Newsrooms and bloggers that haven’t invested in audio storytelling of their own now get an audio presence without rebuilding their workflows. The Reuters Institute has tracked the growing consumption of news podcasts, especially among younger audiences; Chrome’s feature could widen that funnel by turning the open web into listenable inventory.
Limitations and privacy to be aware of today
It’s still AI. The voices sound less robotic than before but not quite human, and the banter can feel stiffly scripted. Parsing may choke on complex layouts, paywalled sections, embedded widgets, or comment threads, producing odd transitions or missing context. Expect rapid improvement as models get better at understanding the structure of the web.
On privacy, generating the audio almost certainly means sending page text to Google’s servers for processing. That’s par for the course with cloud TTS and summarization, but it’s something to consider before having sensitive documents read aloud. Chrome’s current data controls and Incognito mode pertain to browsing, and they don’t guarantee that page content won’t be processed in the cloud for features like this. Those who prefer purely local playback can switch back to standard narration or turn the feature off.
What comes next for listenable webpages in Chrome
Today’s two-host narration hints at where this is heading: smarter voice personas, chapter markers generated on the fly, and playlists that stitch multiple pages into one cohesive briefing.
It could also integrate with existing Chrome features, letting you hear topic-based summaries, have citations read aloud, and jump straight to the parts that matter to you.
For now, the pitch is simple and effective: press a button and Chrome turns a wall of text into something worth listening to. If you’re already accustomed to on-page summaries from Gemini, or a regular user of NotebookLM’s audio briefs, the concept is the same, redesigned for the open web, and it might just become your new default way to “read.”
