Google appears to be preparing real-time speech translation for the Meet app on Android, a major step toward bringing desktop-grade meeting AI to phones. A hidden settings entry surfaced in a recent beta build, hinting that mobile users could soon hear spoken translations during calls instead of relying on captions or switching to a desktop.
Meet already offers live speech translation on computers for select languages, but access is tied to premium plans. Extending that capability to Android would put instant multilingual meetings in people’s pockets — a meaningful upgrade for global teams, frontline workers, educators, and frequent travelers.

Evidence of live speech translation found inside Meet app
The latest public beta of Meet for Android includes code that surfaces a new “speech translation” setting when triggered, although it doesn’t appear to function yet. That kind of dormant switch is a classic sign of a feature in active development: engineers wire up UI and permissions before flipping it on server-side for limited tests.
While Google hasn’t announced anything, the presence of dedicated controls points to deeper integration than a quick experiment. Translation settings typically need device prompts for microphone access, language pairing, and in-call routing — all signals this isn’t a throwaway test.
How mobile live translation in Meet could work on Android
Live speech translation on phones has two main paths: cloud processing for heavy models or on-device processing for privacy and low latency. Meet’s desktop implementation leans on Google’s cloud AI, but Android introduces complexities like variable network quality, battery constraints, and audio routing over Bluetooth or speakerphone.
Google has a head start. Recent Pixel devices support on-device Live Translate and Interpreter features, proving that near-real-time translation can run locally for some languages. A mobile Meet rollout could adopt a hybrid approach: on-device for supported pairs to minimize delay, cloud fallback for broader language coverage or higher accuracy, all stitched into the WebRTC pipeline that powers Meet audio.
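The routing decision in such a hybrid setup can be sketched in a few lines. This is purely illustrative and not Google's actual implementation: the language pairs, function names, and fallback order are assumptions based on how the article describes the trade-offs.

```python
# Illustrative sketch of hybrid translation routing (assumed logic, not
# Google's actual implementation). Prefer on-device models for supported
# language pairs, fall back to the cloud, and degrade gracefully offline.

# Hypothetical set of language pairs with local model support.
ON_DEVICE_PAIRS = {("en", "es"), ("en", "de"), ("en", "ja")}

def choose_translation_path(src: str, dst: str, network_ok: bool) -> str:
    """Pick a processing path for one translated audio stream."""
    if (src, dst) in ON_DEVICE_PAIRS:
        return "on-device"       # lowest latency, audio never leaves the phone
    if network_ok:
        return "cloud"           # broader language coverage, higher accuracy
    return "captions-only"       # no translation backend reachable
```

The key design choice is that the fallback chain degrades feature quality rather than failing the call outright, which matches how Meet already handles poor network conditions for video.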
Latency is the critical metric. Translation needs to feel conversational — ideally under a second or two. Expect options to choose between translated audio, translated captions, or both, plus controls to keep the original audio audible for context.
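A back-of-envelope budget shows why the "second or two" target is tight: a speech-to-speech pipeline chains several stages, each adding delay. The stage names and millisecond figures below are illustrative assumptions, not measured values from Meet.

```python
# Rough latency budget for a speech-to-speech translation pipeline.
# All numbers are illustrative assumptions for a cloud-backed path.
STAGES_MS = {
    "speech_recognition": 300,   # streaming ASR finalizing a phrase
    "translation": 150,          # machine translation of the text
    "speech_synthesis": 250,     # generating translated audio
    "network_round_trip": 250,   # phone <-> cloud transit
}

def total_latency_ms(stages: dict[str, int]) -> int:
    """Sum per-stage delays into an end-to-end figure."""
    return sum(stages.values())

def feels_conversational(stages: dict[str, int], budget_ms: int = 1500) -> bool:
    """Check the pipeline against a conversational-latency budget."""
    return total_latency_ms(stages) <= budget_ms
```

Under these assumed numbers the pipeline lands well inside a 1.5-second budget, but an on-device path would cut the network term entirely, which is the latency argument for the hybrid approach.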
Who gets it on Android and what the feature might cost
Today, Meet’s live translation on desktop sits behind business and paid AI plans. It’s reasonable to assume a similar paywall on mobile at launch, potentially tied to Gemini for Workspace or other premium tiers. The big unknown is whether any devices get early access or exclusive features, especially if on-device translation is involved.

Google could also mirror its captioning model: broad availability for basic features with advanced quality, expanded language packs, or voice replacement reserved for paid accounts. Admin controls will matter too — IT teams will want to manage languages, recording policies, and data handling for regulated environments.
Why mobile real-time translation in Meet on Android matters
Phones are where many meetings actually happen — on commutes, on-site with customers, or in classrooms. Bringing real-time translation to Android would remove one of the last desktop-only barriers for inclusive meetings. For global organizations, that could shrink scheduling overhead, reduce reliance on human interpreters for routine calls, and make ad-hoc collaboration feasible across time zones.
Competitively, it keeps pace with rivals. Microsoft Teams offers translated captions across dozens of languages, and Zoom has leaned into AI-assisted meeting tools including translation and summary features. The differentiator for Google could be deeper OS-level optimization and on-device privacy options, if those appear at launch.
There’s also a broader cultural impact. With over 7,000 living languages cataloged by linguists, friction-free conversation remains a massive technical challenge. Each improvement in speech recognition and translation accuracy has outsized effects in education, telehealth, and public services, especially where bilingual staff are scarce.
What to watch next as Google tests mobile speech translation
Keep an eye on Google’s release notes and the Workspace Updates feed for mention of “speech translation” on Android. Key details to watch include:
- supported languages at launch
- whether translated audio will be available alongside captions
- on-device versus cloud processing
- admin controls
- any tie-in to premium AI plans
For now, the signal is clear: Google is laying the groundwork to make multilingual meetings as routine on phones as on PCs. If testing goes smoothly, Android users could soon join a call and simply hear the other side in their own language — no extra app juggling required.