Google is testing a new Gemini interface that looks strikingly close to the ChatGPT mobile app, signaling renewed attention to ease of use and a push toward parity with the category leader. In a recent Google app beta, we've found a new Quick Tools menu that consolidates utilities previously scattered around the app, voice input that keeps recording as you move between screens, deeper hooks into Google Maps, and an updated Labs entry point inside Gemini Live.
None of the additions are widely available just yet, but they look polished enough to suggest a staged rollout is on the way. The changes come as generative AI assistants compete for mainstream mobile habits, where small interface tweaks can nudge engagement in meaningful ways.
- What’s Changing With The Gemini Interface
- Long-form Voice Input Arrives for Extended Dictation
- Deeper Google Maps Integration for Richer Results
- Why the ChatGPT-like Quick Tools Menu Matters
- What to Watch Next as Google Tests New Gemini UI
- Availability and Rollout Timeline for Gemini App Changes
- The Bottom Line on Google’s Latest Gemini App Updates

What’s Changing With The Gemini Interface
On its way out: the floating tools icon. Instead, tapping the plus button opens a bottom sheet that gathers image creation, file actions, and other utilities in one place. The format will feel familiar to anyone who has used the ChatGPT app, which helped popularize that compact tool drawer on mobile.
The consolidation is about clarity and ease of navigation. Instead of hunting for features strewn across icons and menus, users get a single, predictable surface for actions. In mobile AI apps, where the conversation view occupies most of the screen, that bottom sheet has emerged as a de facto standard.
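For a sense of what that pattern looks like in practice, here is a minimal Android sketch in Kotlin using Material's BottomSheetDialog. The tool labels and routing are illustrative assumptions, not Gemini's actual menu.

```kotlin
import android.content.Context
import android.widget.Button
import android.widget.LinearLayout
import com.google.android.material.bottomsheet.BottomSheetDialog

// Sketch: a plus-button tap opening a bottom-sheet tool drawer.
// The tool names here are hypothetical stand-ins.
fun showQuickTools(context: Context) {
    val sheet = BottomSheetDialog(context)
    val column = LinearLayout(context).apply {
        orientation = LinearLayout.VERTICAL
        listOf("Create image", "Add files", "More tools").forEach { label ->
            addView(Button(context).apply {
                text = label
                setOnClickListener { sheet.dismiss() /* route to the tool */ }
            })
        }
    }
    sheet.setContentView(column)
    sheet.show()
}
```

The design choice is the point: one entry button, one sheet, every utility reachable without leaving the conversation view.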
Long-form Voice Input Arrives for Extended Dictation
Google is also experimenting with a press-and-hold microphone gesture that keeps recording until you tap stop, addressing a common shortcoming of voice input today: it times out too quickly. Short pauses currently tend to end dictation early; the new flow appears to drop strict voice activity detection (VAD), letting people speak a multi-part query naturally.
Notably, this longer input mode appears both in the Gemini app and in Gemini's floating overlay. That matters for on-the-go work: composing a complex prompt while switching between another app, a web page, or an email is far less irritating when the recording doesn't cut off mid-thought.
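To illustrate the difference, here is a minimal Kotlin sketch using Android's standard SpeechRecognizer: instead of letting silence detection end the session, it stretches the silence thresholds and finishes only when the user taps stop. This is a sketch of the general pattern under stated assumptions, not Gemini's implementation.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Sketch: press-and-hold dictation that ends only on an explicit stop tap.
// Assumes RECORD_AUDIO permission is already granted.
class LongFormDictation(context: Context) {
    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    private val transcript = StringBuilder()

    private val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
        // Stretch the silence windows so brief pauses don't end the session.
        putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, 15_000)
        putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, 15_000)
    }

    init {
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onPartialResults(partialResults: Bundle) {
                partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()
                    ?.let { transcript.replace(0, transcript.length, it) }
            }
            override fun onResults(results: Bundle) { /* deliver final text */ }
            override fun onError(error: Int) { /* retry or surface the error */ }
            // Remaining callbacks are not needed for this sketch.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    fun start() = recognizer.startListening(intent)

    // Wired to the stop button; only now does recognition finish.
    fun stop() = recognizer.stopListening()
}
```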
Deeper Google Maps Integration for Richer Results
Gemini's place recommendations are getting richer snippets from Google Maps, including photos, short videos, reviews, and key details. Rather than simply answering "best coffee near me," the assistant would return visual context and sentiment that help you decide quickly, then let you export a shortlist straight to Maps.
That pipeline turns inspiration into action across two products. With Google Maps used by more than a billion people each month, converting an AI-curated shortlist into saved places and turn-by-turn directions could be one of Gemini's most practical workflow wins, especially for travel planning and group outings.
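The hand-off itself maps onto a well-known Android pattern: deep-linking a place query into the Maps app. A minimal Kotlin sketch using the documented geo: URI scheme; the place name is hypothetical and this is not Gemini's actual export path.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Sketch: hand a shortlisted place off to Google Maps via a deep link.
fun openInMaps(context: Context, placeQuery: String) {
    val uri = Uri.parse("geo:0,0?q=" + Uri.encode(placeQuery))
    val intent = Intent(Intent.ACTION_VIEW, uri).apply {
        setPackage("com.google.android.apps.maps") // prefer the Maps app if installed
    }
    context.startActivity(intent)
}

// Usage: openInMaps(context, "Blue Bottle Coffee, Hayes Valley")
```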

Why the ChatGPT-like Quick Tools Menu Matters
Interface familiarity is strategic. In late 2023, OpenAI said ChatGPT had reached 100 million weekly active users, and many people now hop between assistants for different tasks. For the millions who already know ChatGPT's tool layout, Gemini adopting the same pattern lowers switching costs: they can try the app without learning a new interface.
The timing is notable. AI apps are racing for daily habits, and their interfaces are converging as teams test what users will tolerate on small screens. With Android holding roughly 70% of global mobile OS share, per StatCounter, even modest improvements to Gemini's mobile experience would play out across a very large user base.
What to Watch Next as Google Tests New Gemini UI
A new Labs icon is present in the Gemini Live experience, suggesting experimental features are on their way, possibly related to multimodal understanding or real-time interactions.
Live systems are trending toward faster conversations and richer context; expect tests that improve responsiveness without raising the risk of "hallucination" in spoken sessions.
Privacy and transparency are also part of the puzzle when the microphone stays open longer. User trust will hinge on clear visual recording cues, on-device audio processing where applicable, and fine-grained options for saving or deleting audio. Google has shipped offline speech features before, and porting that playbook to Gemini would be a welcome move.
Availability and Rollout Timeline for Gemini App Changes
The changes were spotted in a recent Google app beta build, which typically means a feature is nearing release if testing goes well. As usual, expect a gradual deployment gated by region and server-side flags. If past rollouts are any indication, the Maps and voice upgrades may ship first, with the Live experiments following.
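"Server-side flags" here means features can be switched on remotely, per region or percentage of users, without an app update. A tiny, hypothetical Kotlin sketch of the pattern; the flag name and interface are invented for illustration.

```kotlin
// Sketch: a remotely controlled flag gating a staged rollout.
interface RemoteFlags {
    fun isEnabled(flag: String): Boolean
}

fun showQuickToolsSheet(flags: RemoteFlags): Boolean =
    // Flipped on the server; the shipped app already contains the feature.
    flags.isEnabled("gemini_quick_tools_sheet")
```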
The Bottom Line on Google’s Latest Gemini App Updates
Gemini's ChatGPT-esque menu, longer voice input, and tighter Maps integration are less about copying than about refining the mobile AI workflow. By consolidating tools and tethering advice to real-world action, Google is nudging Gemini from novelty toward utility, precisely where the next stage of AI adoption will be won.
