Apple is near a deal with Google worth about $1 billion annually to make a custom "Gemini" model the engine behind a revamped Siri, Bloomberg reported. Under the arrangement, Google's large generative model would sit behind new Siri functionality as Apple works to supercharge its AI, with a release targeted for next spring, though plans remain in flux.
The deal reflects a pragmatic move for Apple, which typically favors its own technology stack but is increasingly taking a hybrid approach: license a best-in-class external model while its in-house models, which have yet to match that level of capability, continue to mature. Apple was said to have considered systems from both OpenAI and Anthropic earlier this year before settling on Google after head-to-head tests.
Why Apple Would Partner With Google on AI
At the center of the talks is a bespoke version of the Gemini model with some 1.2 trillion parameters, far larger than any model publicly associated with Apple Intelligence. Industry reporting has pegged Apple's current cloud model at around 150 billion parameters, which would make Google's candidate roughly eight times bigger and better suited to heavier reasoning, longer contexts and more reliable tool use. Size isn't everything in AI, but model scale generally correlates with performance on multi-step and ambiguous requests.
Larger models also come with larger compute bills. Expect Apple to send only the most complicated queries to the cloud and keep simpler tasks on device. Apple has touted privacy-preserving infrastructure for Apple Intelligence, including methods that strip out personal data and a "private cloud" architecture reviewed by independent researchers. Baking Gemini into that architecture would aim to balance capability with Apple's privacy posture.
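As a rough illustration of that routing idea, the hypothetical Swift sketch below shows how a request might stay on device or escalate to a larger cloud model. The types, names and token threshold are assumptions for illustration only, not real Apple or Google APIs.

```swift
// Hypothetical sketch of on-device vs. cloud routing; none of these types are real APIs.

enum ModelTarget {
    case onDevice      // small local model, nothing leaves the phone
    case privateCloud  // larger hosted model behind a privacy-preserving cloud layer
}

struct AssistantRequest {
    let text: String
    let needsMultiStepReasoning: Bool
    let estimatedTokens: Int
}

func route(_ request: AssistantRequest) -> ModelTarget {
    // Keep short, simple asks local; escalate long or multi-step work to the cloud.
    if request.needsMultiStepReasoning || request.estimatedTokens > 2_000 {
        return .privateCloud
    }
    return .onDevice
}

// Example: a short timer request stays local; a cross-app summary escalates.
let simple = AssistantRequest(text: "Set a timer for 10 minutes",
                              needsMultiStepReasoning: false, estimatedTokens: 12)
let complex = AssistantRequest(text: "Summarize this email thread and draft a reply",
                               needsMultiStepReasoning: true, estimatedTokens: 4_000)
print(route(simple), route(complex)) // onDevice privateCloud
```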
The Economics and the Precedent for Apple and Google
In raw dollars, $1 billion per year is substantial but not unprecedented for the two companies' relationship. The search case brought by the U.S. Department of Justice revealed that Google pays Apple an estimated $18 billion to $20 billion annually to remain the default search option in Safari. By that measure a $1 billion AI services deal is small, yet it would still rank among the largest known recurring commitments tied to consumer-facing AI features.
Generative AI remains expensive to run at the scale of the iPhone installed base. Inference costs can range from under a cent to several cents per 1,000 tokens, depending on model size and the latency guarantees required. A bespoke Gemini with priority capacity and aggressive service-level agreements would also carry premiums for reliability and regional coverage. Apple can mitigate those costs with on-device fallbacks, usage caps on some features, distillation of large models into smaller ones and a gradual shift to its own models as they mature.
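To make the scale concrete, here is a back-of-envelope calculation in Swift using the per-1,000-token range cited above. The request volume and average token count are assumed figures for illustration, not disclosed numbers.

```swift
// All figures below are illustrative assumptions, not disclosed pricing or traffic data.
let tokensPerRequest = 1_500.0          // assumed average prompt + response size
let costPer1kTokensLow = 0.005          // "below a cent" per 1,000 tokens
let costPer1kTokensHigh = 0.03          // "several cents" per 1,000 tokens
let dailyCloudRequests = 100_000_000.0  // assumed share of Siri queries escalated to the cloud

let dailyCostLow = dailyCloudRequests * (tokensPerRequest / 1_000) * costPer1kTokensLow
let dailyCostHigh = dailyCloudRequests * (tokensPerRequest / 1_000) * costPer1kTokensHigh

print("Roughly $\(Int(dailyCostLow)) to $\(Int(dailyCostHigh)) per day under these assumptions")
```

Even small changes to how many queries escalate to the cloud swing that figure by hundreds of millions of dollars a year, which is why on-device fallbacks and usage caps matter.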
What a Gemini-Powered Siri Might Be Able to Do
A smarter Siri should be able to manage context, continuity and action more effectively. Consider: handling follow-ups across multiple apps, summarizing a long-running thread, drafting responses in your style or carrying out complex multi-step workflows such as "reschedule my dentist appointment, add it to my calendar and notify my spouse" with little guidance. Multimodal reasoning could let Siri blend voice, text and images seamlessly: understanding what is on your screen, acting on it, and handing heavier analysis to the cloud when required.
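A hypothetical sketch of how such a multi-step request might decompose into discrete assistant actions; the step types and values are invented for illustration and do not reflect any real Siri or App Intents interface.

```swift
// Hypothetical step types; not a real Siri or App Intents API.
enum AssistantStep {
    case rescheduleAppointment(with: String, to: String)
    case addCalendarEvent(title: String, date: String)
    case sendMessage(to: String, body: String)
}

// "Reschedule my dentist appointment, add it to my calendar and notify my spouse"
let plan: [AssistantStep] = [
    .rescheduleAppointment(with: "Dentist office", to: "2026-03-12 10:00"),
    .addCalendarEvent(title: "Dentist appointment", date: "2026-03-12 10:00"),
    .sendMessage(to: "Spouse", body: "Dentist appointment moved to March 12 at 10am.")
]

for step in plan {
    print(step) // each step would map to an app action the assistant can execute
}
```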
Apple will probably take a clear stance on data practices, telling you when something leaves your device and why. Look for controls to enable cloud-assisted tasks, visible indicators when cloud processing takes place, and clear documentation on what data is retained and for how long. Some workflows, especially those involving sensitive content, will necessarily stay on device even if that sacrifices sophistication.
Competitive and Regulatory Ripples From an Apple-Google Deal
Picking Google over OpenAI or Anthropic gives Apple instant scale and a partner it already works with, but it also invites more scrutiny. U.S. and E.U. regulators are already eyeing the companies' search distribution agreement, and another payments deal layered on top could raise questions about exclusivity, defaults and whether rivals can bid for showcase slots across Apple's platforms.
Apple, for its part, has signaled a multi-model philosophy: the prospect of using different providers for different tasks or regions. That flexibility matters for negotiating leverage, for redundancy if one partner falls short, and for the freedom to rotate vendors as models leapfrog one another. For OpenAI and Anthropic, missing out on the first Siri slot limits their mass-market reach, but they could still appear in niche features, developer tools or regional offerings.
Timeline and What to Watch as Apple Tests New Siri
The updated Siri is slated for next spring, but the schedule may change as testing expands. Look for signals on latency and uptime targets, whether Google branding appears in product surfaces, and how granular the user controls for cloud processing will be.
Also important: how quickly Apple can close the gap with its own offerings.
Should Apple's in-house systems approach Gemini's performance on most tasks, Cupertino could trim its dependence on Google's cloud model and tamp down spending over time. Until then, a $1 billion bridge to Google's AI horsepower may be the quickest route to making Siri feel truly new.