Apple is poised to launch a significant next phase for Siri: an LLM-powered “World Knowledge” search feature that will let Siri answer open-ended questions, summarize web pages, and draw on photo, video, and nearby-location context. The feature, described internally as an answer engine, is designed to turn Siri into a conversational information hub spanning everything from the web to a person’s device.
What ‘World Knowledge’ Means for Siri
Rather than the typical Siri responses that often bounce you to a web page for more information, the new Siri will consolidate information into short, source-aware summaries. Ask about the most effective treatments for jet lag, and Siri might return a succinct, evidence-based summary, display authoritative sources, and even surface related images or videos. A restaurant search could combine menu details, recent reviews, and proximity information in a single answer.
Apple’s internal name for it, World Knowledge Answers, hints at a much larger mission than simple lookups. The interface is expected to support follow-ups, comparisons, and clarifications, more in line with the way tools like ChatGPT or AI Overviews work, but filtered through Apple’s emphasis on privacy and reliability.
Inside the LLM Architecture
The revamped Siri is built on a second-generation architecture centered on large language models, according to a report from Bloomberg. It draws on three synchronized systems: a planner that interprets intent from voice or text; a search layer that scours web and on-device data; and a summarizer that composes the final answer.
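Apple has not published any of this as API, so as a rough mental model, here is a minimal Swift sketch of how three such components might hand off to one another. Every type and function name below is invented for illustration:

```swift
import Foundation

// Hypothetical sketch of the reported three-part pipeline; none of these
// types are real Apple APIs, and all names are invented for illustration.

enum QueryPlan {
    case webSearch(terms: String)       // broad “world knowledge” lookups
    case onDeviceSearch(scope: String)  // personal data stays local
    case clarify(question: String)      // ask the user before proceeding
}

struct Planner {
    // Interprets intent from a voice or text utterance.
    func plan(for utterance: String) -> QueryPlan {
        utterance.lowercased().contains("my ")
            ? .onDeviceSearch(scope: utterance)
            : .webSearch(terms: utterance)
    }
}

struct SearchLayer {
    // Scours web and on-device sources and returns raw snippets.
    func retrieve(_ plan: QueryPlan) async -> [String] {
        // Placeholder: a real layer would query web indexes or local stores.
        []
    }
}

struct Summarizer {
    // Composes the final, source-aware answer from retrieved snippets.
    func compose(from snippets: [String]) -> String {
        snippets.isEmpty
            ? "I couldn't find a confident answer."
            : snippets.joined(separator: " ")
    }
}

func answer(_ utterance: String) async -> String {
    let plan = Planner().plan(for: utterance)
    let snippets = await SearchLayer().retrieve(plan)
    return Summarizer().compose(from: snippets)
}
```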
Apple’s software engineering leadership has reportedly said that the rebuilt tech stack is delivering significantly better performance and is allowing the company to open the upgrade to more users than first anticipated. The company’s Foundation Models are tasked with any processing that involves personal data, such as emails, messages, and calendars, so sensitive material stays behind Apple’s privacy boundary. For online workloads, Apple’s Private Cloud Compute architecture is designed to run computations on hardened servers with attested software images, limiting data exposure and retention.
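That description implies a routing decision between on-device models and Private Cloud Compute. Here is a minimal sketch of that decision, assuming a simple “personal data stays local” rule; the enum and heuristic are illustrative, not Apple’s actual logic:

```swift
// Illustrative only: Apple has not published this routing logic, and the
// names below are invented. The rule mirrors the reporting: personal data
// is handled by on-device Foundation Models, while heavier, non-personal
// work can go to Private Cloud Compute's attested servers.
enum ExecutionTarget {
    case onDevice      // Apple Foundation Models; data never leaves the device
    case privateCloud  // hardened servers running attested software images
}

struct PrivacyRouter {
    func target(touchesPersonalData: Bool, fitsOnDevice: Bool) -> ExecutionTarget {
        if touchesPersonalData || fitsOnDevice {
            return .onDevice
        }
        return .privateCloud
    }
}
```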
Who Powers the Model
Apple has struck a deal with Google to test a custom Gemini model for parts of Siri’s summarization pipeline, according to Bloomberg. At the same time, Apple is still evaluating Anthropic models and advancing its in-house systems, particularly for planning and device-resident tasks. This hybrid approach reflects a larger industry trend: partner where it accelerates quality, and own the layers that distinguish the experience and safeguard user data.
Apple will most likely maintain a clear divide between personal context and third-party models. In practice, Siri could turn to external LLMs for broad web synthesis while reserving Apple’s own models for anything that involves your messages, your files, or what’s on your screen.
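If that split holds, the provider choice could look something like the toy function below; the model name and the boolean test are assumptions for illustration, not confirmed details:

```swift
// Toy provider selection mirroring the reported split; nothing here is
// a confirmed Apple design.
enum ModelProvider {
    case appleFoundationModel        // planning, personal context, on-screen content
    case externalLLM(name: String)   // e.g., a custom Gemini for web synthesis
}

func provider(touchesPersonalContext: Bool) -> ModelProvider {
    // Per the reporting, personal context never reaches third-party models.
    touchesPersonalContext
        ? .appleFoundationModel
        : .externalLLM(name: "custom-gemini") // illustrative name only
}
```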
Personalization and App Actions
The LLM Siri push is not limited to search. Apple has teased deeper personalization options, such as using your mail, notes, or messages to answer questions or find information, with more explicit consent controls. On-device intelligence will help Siri understand what you’re looking at and carry out actions within apps. For example: “Summarize this PDF and write a response,” or “Add the dates to Calendar and send the invite in Messages.”
From a developer perspective, the transition would presumably bring more powerful App Intents and extend SiriKit to chain actions across apps. The planner is crucial here: it decides when to invoke an app’s intent, when to search, and when to ask the user for clarification.
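Today’s App Intents framework already gives a flavor of what a chainable action looks like. Below is a minimal intent of the kind a planner could invoke; the framework calls are real (App Intents, iOS 16 and later), while the idea of Siri’s new planner chaining it with other intents is the speculative part:

```swift
import AppIntents

// A minimal App Intent of the kind a planner could invoke and chain.
struct SummarizePDFIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize PDF"

    @Parameter(title: "Document")
    var document: IntentFile

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would run its own summarization over `document` here.
        let summary = "(summary placeholder)"
        return .result(dialog: "Here's the summary: \(summary)")
    }
}
```

Exposing actions this way is what would let a planner compose them: summarize the PDF with one app’s intent, then pass the result to a Calendar or Messages intent in the same plan.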
Rollout and What to Expect
World Knowledge Answers will make its first appearance in Siri, and potentially in Spotlight and Safari. Internally, the feature is slated to debut in a mid-cycle iOS release, with some reports pointing to an iOS 26.4 update window. Further down the road, Apple is developing a visual refresh of the interface and an on-device health feature paired with a paid wellness service.
Like anything LLM-powered, quality will depend on guardrails and evaluation. Academic benchmarks like MMLU or HELM offer directional signals, but user confidence will hinge on real-world accuracy, transparent sourcing, and graceful handling of uncertainty. Expect Apple to emphasize conservative answers, citations, and clear handoffs to the web when confidence is low.
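As a concrete illustration of that conservative posture, here is a toy guardrail in Swift; the confidence field and the 0.7 threshold are invented for the sketch, not known Apple behavior:

```swift
import Foundation

// A toy guardrail for conservative answers; thresholds and fields are
// assumptions made for this sketch.
struct GuardedAnswer {
    let text: String
    let sources: [URL]
    let confidence: Double // assumed model self-estimate in 0.0...1.0
}

func present(_ answer: GuardedAnswer, threshold: Double = 0.7) -> String {
    guard answer.confidence >= threshold, !answer.sources.isEmpty else {
        // Low confidence or no citations: hand off to the web, don't guess.
        return "I'm not sure about that. Here are web results instead."
    }
    let citations = answer.sources.map(\.absoluteString).joined(separator: ", ")
    return "\(answer.text) (Sources: \(citations))"
}
```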
Why It Matters
Siri has long lagged competitors in both general knowledge and the art of conversation. A credible answer engine could reposition Apple’s assistant as an everyday research tool rather than just a voice remote for timers and texts. For publishers and businesses, that means optimizing both content and structured data for summary-worthy responses. For users, it promises quicker decisions and less clicking between browser tabs and apps, all wrapped in Apple’s privacy-first stance.
The upshot: a more capable, context-aware Siri is making the leap from promise to product. If Apple pulls it off, “What can Siri do?” may soon get a much longer — and more useful — response.