Google appears to be testing Nano Banana, its Gemini 2.5 Flash–powered image generator and editor, right inside the Chrome address bar on Android.
Code and assets recently discovered in Chrome Canary suggest users may one day type a prompt, tap an integrated “Create image” tool, and produce images on the fly without ever leaving the browser.
- What the Canary build shows about Chrome’s AI image tool
- Why it’s significant for Chrome and Android users
- Under the hood and the probable limits of Nano Banana
- Safety labels and policy guardrails for AI image creation
- How it might change day-to-day workflows in Chrome
- What to watch next as Chrome tests address bar image AI

What the Canary build shows about Chrome’s AI image tool
As reported by Windows Report and based on screenshots posted at Android Authority, a new plus button in Chrome’s address bar offers five shortcuts:
- Camera
- Gallery
- Files
- AI Mode
- Create image
Choosing “Create image” opens a prompt box and hands the request off to Nano Banana, which generates the image and places it inline in the browser, where you can download or share it.
The functionality mirrors earlier desktop tests in Canary builds, which also revealed Nano Banana hooks. Although still experimental, the Android integration fits into a wider push to make visual creation as simple as typing a URL, with no separate app, tab, or plug-in required.
Why it’s significant for Chrome and Android users
Chrome is the world’s most popular browser, with roughly 63 percent global desktop market share, according to StatCounter, and it comes preinstalled on billions of Android phones. Putting image creation in the omnibox drives friction close to zero, which traditionally leads to higher uptake of new features. If adopted widely, a single prompt bar could become a creative surface for memes, mood boards, works in progress, and quick on-the-go social posts.
Competitively, this brings Chrome closer to the all-in-one assistant model seen on other platforms. Microsoft has relied on Copilot in Edge, while Apple is weaving image tools into system experiences. For Google, tucking Nano Banana into the address bar directly complements its current AI themes for Chrome, alongside general text help for writing throughout the web, creating a more cohesive set of “create where you are” tools.
Under the hood and the probable limits of Nano Banana
Nano Banana is built on Gemini 2.5 Flash, a model optimized for fast, lower-latency work. That’s a plausible profile for an address-bar workflow, where users expect near-instant feedback. That said, generation will almost certainly happen in the cloud rather than on-device: model size and performance trade-offs put full image generation out of reach for most phones, so the experience will hinge on how well the feature holds up on midrange hardware and spotty cellular connections.

Expect Canary-style caveats: features may be behind flags, in limited testing, or not working at all for some users. If fully rolled out, expect staged cohorts and server-side switches to enable or disable features, with age and policy gates on content generation possible.
Safety labels and policy guardrails for AI image creation
Google has said it watermarks AI-created images through systems such as SynthID, but implementation details vary from product to product. If Nano Banana is rolled into Chrome as a core feature, consistent watermarking and metadata tagging will be essential to address provenance concerns, especially since creation is moving into the address bar, a place where users are accustomed to finding navigation and trust signals.
Content policies, such as limitations on violent, explicit, or dangerous imagery, would also need to reflect Google’s current AI safety framework across Search, Photos, and Workspace. As Search Memo pointed out, prompt filtering will need to be robust, and disclosures to users must be clear.
How it might change day-to-day workflows in Chrome
For casual users, the omnibox could enable one-tap generation of invitations, thumbnails, or visual notes that currently require detours to dedicated apps. For creators and students, it might speed ideation, say for quick storyboards, draft product shots, or concept art, all without breaking the flow of browsing. And because the output lands right in the browser, saving an image to your device or dropping it into a chat or document is at most a couple of taps.
Early testers will be watching latency, image quality, and prompt adherence. Gemini 2.5 Flash is optimized for speed; whether it can consistently hit the stylistic sweet spots users expect from dedicated image models, particularly on mobile networks, remains to be seen.
What to watch next as Chrome tests address bar image AI
References to Nano Banana on Android are worth watching for in Chrome Canary release notes, developer flags, and, as alluded to here, the Chromium Gerrit. If Google follows tradition, expect a Canary-to-Beta progression before any stable-channel release, with narrow availability cohorts at first.
Should the omnibox become a front door for image generation at Chrome scale, even modest use could alter user practices. The browser’s address bar would no longer be merely a launchpad for destinations — it would become the canvas itself.
