I turned Apple Intelligence on day one, and it has stayed enabled through every major update since. I wanted to find out whether Apple could keep pace with the breadth and speed of Google’s growing Gemini ecosystem. After a year of real-world use, from photo fixes to voice assistance to constant consultation when I need something done fast and right, I have my answer.
Image Tools and Generative Results Compared Today
Apple’s Image Playground promised whimsical, personal illustrations and avatars drawn from your Photos library. In practice, the results still veer into the uncanny valley. Mild facial artifacts, flattened expressions, and inconsistent styling make it hard to share the output without caveats. That’s not a minor quibble; academic research has repeatedly shown that near-human renderings are where trust breaks down first, and Apple’s current models still sit too close to that edge.
Gemini is more conservative with personal likenesses but more reliable on composition, lighting, and prompt fidelity. When I request a poster for a run club or a sketch to kick off thoughts on an upcoming presentation, Gemini’s image tools tend to produce cleaner lines with fewer jittery errors. The difference shows up in edits, too. Apple’s Clean Up can remove plain background objects, but complex scenes frequently produce distorted textures or repetitive patterns. OpenAI’s image tools still leave artifacts, as do Google’s Magic Editor and Help Me Edit, but with Gemini I can iterate with natural-language prompts (crop the frame, relight the subject, change the sky color) until the result passes the social-share test.
The takeaway: Apple’s image features are improving, but Gemini gives me more control and a faster path to something usable when I want carefully tuned edits rather than novelty avatars.
Everyday Calls And Real-World Friction
Robocalls and spam remain a near-daily nuisance across the U.S.; YouMail’s Robocall Index estimates tens of billions of them go out annually. That’s why call handling is a real quality-of-life measure, not a demo. Google’s Call Screen and Hold for Me have had years of iteration, and it shows. Screening is aggressive, transcripts are quick, and the model is confident enough to filter without letting too much junk through.
Apple’s latest iterations feel like version one. Live Voicemail is nifty, and Hold Assist saves the day when an agent asks “Can you hold?”, but screening is more lax, and I still find unwanted calls seeping through. When I’m slammed with work, I want predictable behavior, and Google’s phone features deliver fewer surprises.
Assistant Depth and On‑Screen Awareness
Apple has made bold promises for Siri: richer on‑screen knowledge, personal context, and cross‑app actions. Those are the right goals; we’re moving toward agents that understand what’s on your display and can chain tasks together. But many of these marquee features are still in progress, and the versions available today suffer from limited intent coverage.
Gemini, meanwhile, handles multimodal context today. Circle to Search has become second nature to me: circle a math step in a PDF, an actor’s jacket in a video frame, or a line item on a receipt, and get an instant, grounded answer. Apple’s Visual Intelligence is sound in concept, but it too often bails on the same screenshots and frames that Gemini Live quietly handles with no fuss. When I am deep in work, those small failures disrupt flow and confidence.
There’s also a cadence difference. Google has shipped a steady drumbeat of useful connective tissue across Photos, Search, and Android via Gemini Live improvements. Apple is catching up, but the assistant gap persists.
Privacy, Performance, And Practical Trade‑Offs
Apple deserves credit for the privacy framework it has built into Apple Intelligence. On‑device models and Private Cloud Compute are genuinely thoughtful, and the restriction to newer chips does bring performance and security advantages. If privacy is your bottom line, Apple’s approach is a no-brainer.
But I evaluate these tools on productivity, and Gemini’s willingness to offload heavier work to beefier servers pays off in speed and freedom. When I’m working with massive images, posing long, context‑rich questions, or changing my mind again and again across multiple edits, Gemini responds faster and hits fewer dead ends. The stakes at scale are real: Counterpoint Research estimates that cumulative shipments of “GenAI phones” recently passed the hundred-million mark, and for use to become mainstream, an assistant has to feel immediately helpful, not aspirational.
A Year In The Seat And What Apple Needs To Fix
A year later, I still use Gemini to sort and group icons and files more neatly across my desktops; in the morning, after mounting my drives on a new iMac (I swap computers roughly once every eighteen months), my ten or so desktop spaces look like a neatly curated gallery.
Apple Intelligence could do a lot more, but it has to deliver three hard things to win me over:
- Avatar and portrait generation that avoids the uncanny valley
- Image editing that reliably handles complex backgrounds
- Siri enhancements that provide actual on‑screen awareness and cross‑app execution, not just roadmap slides
I’m going to leave Apple Intelligence turned on; privacy‑first AI is a counterweight worth keeping. But when I’m busy and only have a minute to spare, I still ask Gemini.