Apple is preparing a major evolution of Siri: an LLM-powered “World Knowledge” search capability designed to answer open-ended questions, summarize web content, and pull in context from photos, videos, and nearby places. The feature, described by people familiar with internal plans as an answer engine, aims to turn Siri from a command-based assistant into a conversational system that can reason across the web and a user’s device.
What ‘World Knowledge’ Means for Siri
Unlike today’s Siri responses, which frequently bounce users to a web page, the new experience will synthesize information into concise, source-aware summaries. Ask for the best ways to reduce jet lag, and Siri could produce a short, evidence-backed brief, surface a few reputable sources, and show relevant images or videos. Searching for a restaurant might combine menu analysis, recent reviews, and proximity data into one cohesive answer.

Apple’s internal name, World Knowledge Answers, signals a scope beyond simple lookups. The interface is expected to support follow-ups, comparisons, and clarifications, closer to the behavior found in tools like ChatGPT or Google’s AI Overviews, but with Apple’s emphasis on privacy and reliability.
Inside the LLM Architecture
According to reporting from Bloomberg, the overhauled Siri is built on a second-generation architecture centered on large language models. It combines three coordinated systems: a planner that interprets intent from voice or text, a search layer that retrieves from the web and on-device data, and a summarizer that crafts the final answer.
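None of these components has a public API, but the division of labor is easy to picture. The sketch below is purely illustrative: the Swift types (QueryPlan, Planner, Retriever, Summarizer, AnswerEngine) are hypothetical stand-ins for whatever Apple actually builds, not Apple APIs.

```swift
// Hypothetical sketch only: none of these types are Apple APIs.
// They illustrate how a planner, a retrieval layer, and a summarizer
// could be composed into a single answer pipeline.

enum QueryPlan {
    case webSearch(query: String)
    case onDeviceLookup(domain: String)
    case clarify(question: String)
}

protocol Planner {
    /// Interpret intent from a voice or text utterance.
    func plan(for utterance: String) async throws -> QueryPlan
}

protocol Retriever {
    /// Fetch raw snippets from the web or on-device data.
    func retrieve(_ plan: QueryPlan) async throws -> [String]
}

protocol Summarizer {
    /// Craft the final, source-aware answer.
    func summarize(_ snippets: [String], question: String) async throws -> String
}

struct AnswerEngine {
    let planner: Planner
    let retriever: Retriever
    let summarizer: Summarizer

    func answer(_ utterance: String) async throws -> String {
        let plan = try await planner.plan(for: utterance)
        // A planner that can ask for clarification is part of what
        // separates an answer engine from a one-shot search box.
        if case .clarify(let question) = plan {
            return question
        }
        let snippets = try await retriever.retrieve(plan)
        return try await summarizer.summarize(snippets, question: utterance)
    }
}
```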
Apple software engineering leadership has indicated the rebuilt stack is producing materially better outcomes than the prior approach, enabling a broader upgrade than originally scoped. The company’s Foundation Models are expected to handle any processing that touches personal data—emails, messages, calendars—so sensitive content remains within Apple’s privacy boundary. For online queries, Apple’s Private Cloud Compute architecture is designed to execute models on hardened servers with verifiable software images, limiting data exposure and retention.
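Apple does already expose a public on-device model to developers through the FoundationModels framework introduced at WWDC 2025. Whether Siri’s internal stack shares this API is not confirmed by the reporting, but as a minimal sketch (assuming iOS 26 or later) it illustrates the on-device path for personal data:

```swift
import FoundationModels

// Illustrative only. FoundationModels is the public framework for Apple's
// on-device model (iOS 26+); whether Siri's internal stack uses the same
// API is not confirmed by the reporting above.
func nextAppointmentAnswer() async throws -> String {
    let session = LanguageModelSession(
        instructions: "Answer briefly, using only the provided context."
    )
    // The prompt and any personal context it references are processed
    // on device rather than sent to a server.
    let response = try await session.respond(
        to: "When is my next dentist appointment?"
    )
    return response.content
}
```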
Who Powers the Model
Bloomberg reports Apple has a formal agreement to evaluate a custom Google Gemini model for parts of Siri’s summarization pipeline. At the same time, Apple continues testing Anthropic models and advancing its in-house systems, particularly for planning and device-resident tasks. This hybrid approach mirrors a broader industry pattern: partner where it accelerates quality, own the layers that differentiate experience and protect user data.
Expect Apple to keep a clear separation between personal context and third-party models. In practical terms, Siri might lean on external LLMs for general web synthesis while reserving Apple’s models for anything requiring your messages, files, or on-screen content.
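A minimal sketch of how that routing boundary might look follows; every name and type here is an assumption for illustration, not anything Apple has described.

```swift
// Hypothetical routing boundary: queries that touch personal context
// never reach a third-party model.

enum ModelTier {
    case onDevice       // Apple Foundation Models, local inference
    case privateCloud   // Apple's Private Cloud Compute servers
    case thirdParty     // e.g. a partner model for general web synthesis
}

struct QueryContext {
    let touchesPersonalData: Bool   // messages, mail, on-screen content
    let needsWebSynthesis: Bool
}

func route(_ context: QueryContext) -> ModelTier {
    if context.touchesPersonalData {
        // Personal data stays inside Apple's privacy boundary.
        return context.needsWebSynthesis ? .privateCloud : .onDevice
    }
    return context.needsWebSynthesis ? .thirdParty : .onDevice
}
```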
Personalization and App Actions
The LLM Siri push is not just about search. Apple has previewed deeper personalization capabilities—using your mail, notes, or messages to answer questions or locate information, with explicit consent controls. On-screen awareness will let Siri understand what you’re looking at and act within apps. Think: “Summarize this PDF and draft a reply,” or “Add these dates to Calendar and share the invite in Messages.”
For developers, the shift will likely build on App Intents and SiriKit, letting Siri chain actions across apps. The planner component is key here: it figures out when to call an app’s intent, when to search, and when to ask for clarification.
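For illustration, here is what a summarization intent might look like using Apple’s existing App Intents framework. The framework calls are real; the intent itself and its behavior are hypothetical.

```swift
import AppIntents

// The AppIntents framework calls here are real; the intent and its
// behavior are hypothetical, for illustration.
struct SummarizeDocumentIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Document"

    @Parameter(title: "Document Name")
    var documentName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would fetch the document and generate a summary here.
        let summary = "Here is a one-paragraph summary of \(documentName)."
        return .result(dialog: "\(summary)")
    }
}
```

Once an app declares intents like this, a planner can discover and chain them, which is presumably how “summarize this PDF and draft a reply” would decompose into calls across two apps.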
Rollout and What to Expect
World Knowledge Answers is slated to debut inside Siri, with expansion to Spotlight and Safari under consideration. Internally, the feature is aligned with a mid-cycle iOS release target, with some references pointing to an iOS 26.4 update window. Later in the cycle, Apple is also working on a visual refresh for Siri and a built-in health capability tied to a paid wellness service.
As with any LLM-powered system, quality will hinge on guardrails and evaluation. Benchmarks such as MMLU and evaluation suites such as HELM offer directional signals, but user trust will depend on real-world accuracy, transparent sourcing, and graceful handling of uncertainty. Expect Apple to favor conservative answers, citations, and clear handoffs to the web when confidence is low, as sketched below.
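One plausible shape for that handoff logic, with every type and the 0.7 threshold being assumptions for illustration:

```swift
import Foundation

// Sketch of a conservative-answer policy. The types and the threshold
// are assumptions for illustration, not anything Apple has described.

struct DraftAnswer {
    let text: String
    let sources: [URL]
    let confidence: Double  // 0.0...1.0, e.g. from an answer evaluator
}

enum SiriReply {
    case answer(String, sources: [URL])
    case webHandoff(query: String)
}

func finalize(_ draft: DraftAnswer, originalQuery: String) -> SiriReply {
    // Require both a confidence floor and at least one citable source;
    // otherwise hand the user off to a normal web search.
    guard draft.confidence >= 0.7, !draft.sources.isEmpty else {
        return .webHandoff(query: originalQuery)
    }
    return .answer(draft.text, sources: draft.sources)
}
```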
Why It Matters
Siri has long trailed rivals in general knowledge and conversation. A credible answer engine could reposition Apple’s assistant as a daily research tool, not just a voice remote for timers and texts. For publishers and businesses, that means optimizing content for summary-ready answers and structured data. For users, it promises faster decisions with less hopping between browser tabs and apps—delivered with Apple’s privacy-first posture.
The bottom line: a more capable, context-aware Siri is moving from promise to product. If Apple executes, “What can Siri do?” may soon have a much longer—and more useful—answer.