Google’s Nano Banana image generator is outgrowing its home in the Gemini app. You don’t need Gemini to give it a shot anymore—Nano Banana is coming to Google Lens and the new AI Mode within Search, with wider integrations on the horizon. Here’s how to use it, what’s new and where it’s going next.
- How to use Nano Banana in Google Lens on Android and iOS
- AI Mode in Search for visual makeovers and shopping
- Styles for NotebookLM Video Overviews using Nano Banana
- Photos integration is on the way for Nano Banana
- Availability and practical tips for getting started
- Why this matters for Lens, Search, and Photos users

How to use Nano Banana in Google Lens on Android and iOS

Open either the Google app or the standalone Lens app, tap the Lens icon and take a photo or choose one from your gallery. In the extra controls at the bottom of Lens, find the Nano Banana feature. Add a brief prompt — “turn this into watercolor,” “make a retro poster” or “give it an anime look” — and the tool will create stylized versions of your source image.
In early testing, Nano Banana showed up right where users are used to finding quick actions in Lens, making it feel like a camera-native feature instead of an add-on. That matters: Lens handles billions of visual searches a month, Google says, so placing generative remix tools one tap away from “Search” and “Translate” removes real friction.
(Note: Although some users have seen Nano Banana experiments within Circle to Search, that pathway hasn’t yet appeared broadly.) If you don’t see the option in Lens right away, the rollout is gradual and may also require the latest Google app build.
AI Mode in Search for visual makeovers and shopping
AI Mode in Search offers another path. Begin a query in the Google app, switch to AI Mode and add an image — a product shot, art reference or sketch, say. You can ask the assistant to adjust the image (“make it minimalist matte black”), then search for visually similar results based on its updated appearance, or jump straight to shopping and how‑to results.
This combination of image-to-image generation with real-time search is the larger story. AI Mode can turn those jokey remixes into tangible results instead of stopping at sharing: you move on to finding comparable items, sourcing materials or discovering creators who work in the style you just applied. It’s a small but meaningful shift from novelty to utility.
Styles for NotebookLM Video Overviews using Nano Banana
Nano Banana is also being used to add visual interest to Video Overviews in NotebookLM. When NotebookLM generates a video summary from your sources, you’ll find style presets that include watercolor, retro and anime. You can’t feed in your own images here, but the presets help tailor the tone — handy for educators and teams who need polished explainers without much editing.
It’s part of a broader shift in productivity tools: generative visuals that bend to audience and context rather than pursuing photorealism. Internal demos have prioritized legibility and consistency over shock value, echoing advice from AI safety researchers who caution that hyper-real outputs could undermine trust in factual content.
Photos integration is on the way for Nano Banana
Google says Nano Banana features will also come to Photos. Details are still scarce, but expect capabilities like merging images, building collages and fusing parts of several shots into one composition. If executed as hinted, this would join existing tools like Magic Editor and generative fill, expanding Photos from a touch-up utility into a creative canvas.
Photos’ enormous worldwide footprint — industry estimates peg the user base well into the billions — would instantly deliver Nano Banana to a vast audience, but the launch will likely be phased. Server-side switches and account-level eligibility will probably gate early access.
Availability and practical tips for getting started
The Lens and AI Mode rollouts are kicking off in the US and India, with other regions to follow. If the feature hasn’t shown up for you, check for app updates, make sure you’re signed in with an eligible account and try again later—rollouts like this typically phase in over days or weeks, not hours.
As with other generators from big platforms, content policies apply and safety filters are in place. Images may be reviewed to prevent harmful or prohibited outputs, and interactions may be used by Google to improve models in aggregate. If you’re working with your own photos, it’s worth checking your backup and data-sharing options in account settings.
Why this matters for Lens, Search, and Photos users
Moving Nano Banana into Lens and Search eliminates the biggest hurdle to using it: switching apps. The average user is far more likely to tap a button in their camera flow than open a separate, model-specific app. By going to where users are already pointing their cameras, Google turns a viral image toy into a useful assistant for discovery, education and creative expression.
If they arrive as advertised, the Photos integration and Circle to Search support would make Nano Banana the most accessible image-to-image tool on mobile yet. For now, Lens is the easiest way to get started—snap a photo, type a prompt and see what you can coax out of the model without opening Gemini at all.