Google’s AI-powered image editor has officially rolled out, and it’s landing right where millions of people already spend their time: Search.
Powered by the company’s in-house image-generation model, named Nano Banana (also referred to as Gemini 2.5 Flash Image), the tool lets you change photos with natural-language prompts, no pro editing skills required. Here’s how to use it directly in Google Search, where else it’s appearing, and what to expect.
- How to use the image editor directly in Google Search
- What the tool does well for everyday editing tasks
- Other places to experience the editor beyond Search
- Safety, attribution, and limits for responsible use
- Tips for better results with Google’s editor
- Why editing in Search is important for everyday users
How to use the image editor directly in Google Search
- On your phone or computer, open Google Search and look for Create mode; on mobile, you’ll see an image-creation button (often a whimsical banana icon). Tap it to enter the editor.
- Upload a photo or take one on the spot. It might be a room you want to redesign, a portrait, or a product shot.
- Describe the changes you want in plain language, and be specific about what the model should keep and what it can remove or replace. For instance: “Keep the bed and windows, add bold patterns, brighten the lighting, and lean into a dopamine-decor aesthetic.”
- Review the options the model produces. You can iterate on the prompt itself: ask for a different color palette, swap backgrounds, soften shadows, or play with composition until the result looks and feels right.
- Export, save, or share. The editor typically offers a quick save for social media, downloads that preserve high-quality versions of your work, and side-by-side comparisons so you can track your edits.
In Google’s own demo, a plain bedroom photo turns into a maximalist reimagining within seconds of the user asking for something brighter and more patterned, showing how the model reads both practical instructions (keep this lighting) and stylistic cues (interior trends).
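Create mode itself is a point-and-click flow, but the same underlying model is also exposed to developers through the Gemini API. As a rough sketch of an equivalent edit in code (the google-genai SDK usage and the model ID below follow Google’s public documentation at the time of writing; treat both as assumptions that may change):

```python
# Sketch: a conversational photo edit via the Gemini API.
# Assumes the google-genai SDK (pip install google-genai pillow) and an
# API key; "gemini-2.5-flash-image-preview" was the published identifier
# for Nano Banana at the time of writing and may have changed since.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

source = Image.open("bedroom.jpg")  # the photo you would upload in Search
prompt = (
    "Keep the bed and windows, add bold patterns, brighten the lighting, "
    "and lean into a dopamine-decor aesthetic."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[prompt, source],
)

# The response interleaves text and image parts; save any returned images.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("bedroom_edited.png")
```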
What the tool does well for everyday editing tasks
Beyond simple filters, the editor can relight scenes, change color grading, and replace backgrounds. It can apply styles to subjects, say, changing the texture of an outfit or incorporating design motifs, and even combine elements from different photos. For product photography, it quickly strips away mess and clutter and produces clean white backdrops or themed styles. In practice, that means you can turn a so-so studio photo of an oriental carpet or mahogany desk into a crisp, catalog-ready image, or give a gadget a creative “steampunk” treatment, complete with metallic tones and ornate highlights.
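Combining elements from different photos appears to work the same way through the API: pass multiple images with a single instruction. A sketch under the same assumptions as the example above (SDK, model ID, and response shape taken from Google’s public docs and subject to change):

```python
# Sketch, same assumptions as the earlier example: compositing two photos
# into one product shot with a single instruction.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

product = Image.open("mahogany_desk.jpg")   # cluttered product shot
backdrop = Image.open("studio_white.jpg")   # clean reference backdrop

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[
        "Place the desk from the first photo on the backdrop from the "
        "second. Remove surrounding clutter; keep the desk's wood grain "
        "and proportions intact.",
        product,
        backdrop,
    ],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("catalog_ready.png")
```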
Google notes that since the model was integrated into the Gemini app earlier this year, users have collectively generated more than five billion images, a sign that conversational editing appeals to casual users in a way traditional tools like Photoshop no longer do. The draw is speed: a couple of quick prompts in Search can accomplish what once took many steps in traditional software.
Other places to experience the editor beyond Search
NotebookLM: The model now powers fresh visual styles in Video Overviews and Briefs, with options including watercolor and anime among six total. Students and researchers who use NotebookLM to generate material can give their summaries a distinct look without breaking their workflow.
Google Photos: The company adds that the editor will arrive in Photos next, bringing conversational edits to the app many people use to manage their libraries. Although the timing hasn’t been detailed, expect simple one-tap edits alongside deeper, prompt-based controls.
Safety, attribution, and limits for responsible use
Google highlights protections typical of its generative stack. Outputs are marked with content credentials or watermarks using technology developed by Google DeepMind, such as SynthID, which should help platforms and viewers identify AI-assisted imagery. The system restricts certain edits, such as those involving sensitive topics or realistic manipulation of identifiable people, and may refuse requests that violate usage policies.
Quality varies with input. Well-lit, high-resolution photos yield more reliable edits; busy scenes or extreme requests can introduce artifacts. As with most AI editors, prompts that are specific about what to keep and what to change produce better results. If pixel-perfect control is essential, you’ll still want traditional tools; if speed and plausible transformations matter most, this editor is built for you.
Tips for better results with Google’s editor
Anchor the prompt with constraints: “Preserve sharpness on the subject, keep original shadows, replace only the background with a coastal sunset.” Reference styles and materials where applicable, e.g., “mid-century wood tones,” “studio softbox lighting,” or “matte pastel palette.” If your first pass is close, iterate with small, targeted changes to the prompt rather than rewriting it each time.
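To make those constraints repeatable, a tiny helper can assemble keep/change/style prompts consistently. This is purely an illustrative pattern, not anything tied to Google’s tooling; the function and its fields are hypothetical:

```python
# Illustrative, hypothetical helper: build an edit prompt that states what
# to preserve, what to change, and which style cues to apply.
def edit_prompt(keep: str, change: str, style: str) -> str:
    return (
        f"Preserve {keep}. "
        f"Change only {change}. "
        f"Style cues: {style}."
    )

# Example: the background-swap prompt from the tip above.
print(edit_prompt(
    keep="sharpness on the subject and the original shadows",
    change="the background, replaced with a coastal sunset",
    style="studio softbox lighting, matte pastel palette",
))
```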
Straight-on shots of rooms or products with balanced lighting also help. For portraits in particular, avoid heavy occlusions such as large hats or hands over faces to minimize odd artifacts. And when compositing multiple photos, keep perspectives and lighting consistent for photorealistic results.
Why editing in Search is important for everyday users
Integrating AI editing into Search lowers the barrier to casual creation. Instead of opening a design suite, people can simply type out the change they want at the moment inspiration strikes. For small businesses and creators, that means faster product imagery, mood boards, and social visuals at a fraction of the time and cost. As analysts at firms like Gartner have noted, generative editors are moving quickly into everyday workflows; Search integration is a natural extension that could nudge them from niche tool to default habit.