Google seems to be readying a serious visual search upgrade: the company is building new entry points around Nano Banana, its prompt-based image editor, which may soon join the recently redesigned Google Lens and Circle to Search. Early evidence in the latest builds of the Google app suggests Lens is gaining a Create flow that works much like what is currently being trialed in Search's AI Mode, pointing to a broader plan to put generative editing alongside everyday discovery.
Evidence from inside the Google app points to Lens edits
Strings and buried UI components found in version 16.40.18 of the Google app for Android also suggest that this feature is just around the corner.
The redesigned Lens already pairs a Live option with the existing Search and Translate tabs; the teardown adds a new Nano Banana-powered Create entry point to that lineup. Tapping the Nano Banana tile reportedly opens an animated intro inviting you to take, make, and share a picture, then drops you into a text box that reads "Type a description of your edits."
The flow closely tracks the Search AI Mode experiments of recent months: you give it a simple, plain-language prompt, say "remove the background and add soft studio lighting," and Nano Banana returns several candidate edits with preview states you can accept or tweak. The feature appears to be gated server-side, so the UI stays hidden even on the app version mentioned above.
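As a rough illustration of that kind of server-side gating, here is a minimal Kotlin sketch; the flag name, the FeatureFlags interface, and the LensTab enum are hypothetical stand-ins for the pattern, not anything pulled from the teardown.

```kotlin
// Hypothetical sketch: how a server-side flag might gate a Create entry point.
// All names here are illustrative assumptions, not values from the Google app.
interface FeatureFlags {
    fun isEnabled(flag: String): Boolean
}

enum class LensTab { SEARCH, TRANSLATE, LIVE, CREATE }

fun visibleTabs(flags: FeatureFlags): List<LensTab> {
    val tabs = mutableListOf(LensTab.SEARCH, LensTab.TRANSLATE, LensTab.LIVE)
    // The Create tile only shows up once the server flips the flag, which is
    // why shipping the assets in an app build doesn't make the UI visible.
    if (flags.isEnabled("lens_create_entry")) tabs += LensTab.CREATE
    return tabs
}
```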
What Nano Banana could mean for Google Lens users
For years, Lens has been identifying objects, translating text, and answering questions about what your camera sees. Nano Banana would move Lens from passive recognition to active creation. Picture snapping a product and instantly swapping its background before adding it to a listing, or aiming at a landmark and dropping in a dramatic sky before sharing.
In testing, Nano Banana has delivered remarkably fast, prompt-driven edits, although it can sometimes be flummoxed by non-standard aspect ratios and other edge cases.
If those kinks are worked out, embedding the tool directly in Lens could make on-the-fly editing feel as intuitive and effortless as scanning a QR code: snap something, describe the change, then export or share.
Practical examples include erasing a reflection from a menu photo, adding portrait-style bokeh to a pet shot, or generating tidy cutouts for notes and shopping lists. The key convenience win is being able to do all of this from the camera viewfinder rather than switching to a separate editor.
Circle to Search integration appears to be coming next
Nano Banana hooks also appear to be landing in tandem with Circle to Search, the cross-app gesture that lets you circle anything on screen to pull up more information. Teaser UI hints at a Create option, but it does not yet work end to end. If finished, this would let users select a portion of any on-screen image and request adjustments without leaving the current app, an unusually tight loop for visual creation on Android.
Since Circle to Search already has a growing footprint on recent Pixel and Galaxy devices, nesting Nano Banana inside it could put generative editing behind a long press across an array of browsers, social apps, and galleries.
How this fits with Google’s broader imaging offerings
Google already offers Magic Eraser and Magic Editor in Photos, while Gemini handles text and multimodal queries. Nano Banana is the prompt-first, camera-adjacent counterpart; it could run faster with on-device acceleration, potentially backed by Gemini Nano, and fall back to the cloud when necessary. On-device execution would cut latency and keep images more private, but might constrain the most computationally intensive effects on older hardware.
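To make that hybrid idea concrete, here is a minimal Kotlin sketch of an on-device-first editor with a cloud fallback; every type in it (HybridEditor, ImageEditor, EditRequest) is a hypothetical illustration of the pattern, not Google's actual implementation.

```kotlin
// Hypothetical sketch of an on-device-first editor with a cloud fallback.
// All types and names are illustrative assumptions, not Google's code.
data class EditRequest(val imageBytes: ByteArray, val prompt: String)
data class EditResult(val imageBytes: ByteArray, val ranOnDevice: Boolean)

interface ImageEditor {
    suspend fun edit(request: EditRequest): EditResult
}

class HybridEditor(
    // Null when the device lacks acceleration (e.g. no Gemini Nano-class runtime).
    private val onDevice: ImageEditor?,
    private val cloud: ImageEditor,
) : ImageEditor {
    override suspend fun edit(request: EditRequest): EditResult {
        // Prefer local execution for latency and privacy; fall back to the cloud
        // when the device cannot run the model or the local edit fails.
        val local = onDevice ?: return cloud.edit(request)
        return try {
            local.edit(request)
        } catch (e: Exception) {
            cloud.edit(request)
        }
    }
}
```

The trade-off the article describes maps directly onto this shape: the local path wins on speed and privacy, while the cloud path carries the heavier effects that older hardware cannot handle.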
In terms of competition, this brings Lens closer to Adobe's Firefly-powered Generative Fill and the one-tap edits in consumer apps, albeit with an edge that is music to any platform maker's ears: system-level reach. By joining creation and search in a single surface, Google could eliminate the mental tax of hopping between apps for quick edits.
Privacy, safety, and attribution for AI image edits
Broader use will raise familiar questions about responsible image generation. Google has previously leaned on content labeling and provenance, including SynthID-style watermarking for AI-generated media and policy filters for sensitive requests. A Lens-based editor would likely inherit those guardrails and indicate plainly when something has been altered by generative AI.
The fewer images that leave the phone, the less user data sits on servers in an era of tightening privacy regulations. For people who edit personal photos or documents captured through Lens, that could be a meaningful trust signal, particularly in regions with strong data protection norms.
Availability outlook and how a rollout could unfold
As ever with teardowns, the presence of assets and strings does not guarantee a quick release. The Nano Banana controls also look further along in Lens than in Circle to Search, though both are behind flags and could roll out incrementally depending on device capabilities, or launch under a different name entirely.
Still, the direction is clear. By building prompt-based editing into Lens and, eventually, Circle to Search, Google is creating a frictionless path from seeing something interesting to making something new. Get speed, reliability, and clear labeling right, and Nano Banana could give Lens real staying power as the default camera-adjacent editor for tens of millions of Android users.