Forget synthetic benchmarks. The single most important improvement the Pixel 11’s Tensor G6 can deliver isn’t a bigger TOPS number or even a shiny GPU bump. It’s practical, everyday excellence in video: using Google’s Video Boost to get great results immediately, with no trip to the cloud and no digging through settings every time you open the camera.
Video Boost should be instant and fully on-device
Video Boost, which we first saw on the Pixel 8 Pro, can turn an average clip into a genuinely good one by correcting exposure, improving color grading, reducing noise, and smoothing out stabilization, among other improvements. The catch is that it sends your video to Google’s servers to crunch it, which means you have to wait, sometimes several hours, before you can share the final version. A one-minute 4K60 HEVC clip typically lands around 400MB to 800MB depending on bitrate, which is not a trivial upload even on fast Wi‑Fi, and server-side queues add further delay on top.
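To put that in perspective, here is a rough back-of-the-envelope estimate; the bitrates and uplink speed are illustrative assumptions, not measured Pixel figures.

```kotlin
// Rough estimate of clip size and upload time for a one-minute 4K60 HEVC clip.
// Bitrates (50 to 100 Mbps) and the 50 Mbps uplink are illustrative assumptions.
fun clipSizeMB(bitrateMbps: Double, seconds: Double): Double =
    bitrateMbps * seconds / 8.0                 // megabits -> megabytes

fun uploadSeconds(sizeMB: Double, uplinkMbps: Double): Double =
    sizeMB * 8.0 / uplinkMbps                   // megabytes -> megabits -> seconds

fun main() {
    for (bitrate in listOf(50.0, 100.0)) {      // typical 4K60 HEVC bitrate range
        val size = clipSizeMB(bitrate, 60.0)    // ~375 MB to ~750 MB
        val upload = uploadSeconds(size, 50.0)  // 60 s to 120 s before the queue even starts
        println("%.0f Mbps -> %.0f MB, ~%.0f s upload".format(bitrate, size, upload))
    }
}
```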
This is the type of workflow that should happen on-device. With Tensor G6, Google has an opportunity to make Video Boost run close to real time on the phone at 1080p and 4K, falling back to the cloud only in rare cases. If Apple can deliver best-in-class in-camera video features such as Cinematic mode and rock-solid stabilization without a round trip to a server, Google should at least meet that bar, if not raise it, and fold the results straight into the existing Google Photos experience.
Why the cloud model holds a great feature back
There are two points of friction that keep Video Boost from becoming a default tool. First, latency. Creators want to shoot, edit, and post; a processing wait that can stretch into hours breaks the moment. Second, the toggle doesn’t stick. The setting resets when you close the camera app, so you have to remember to turn it on before recording, and you can’t apply it after the fact. That’s the opposite of what a best-in-class camera would do.
There are cost and privacy angles too. Server-side processing costs Google compute on top of the bandwidth it costs you, which is likely why the feature stays an opt-in extra rather than a default. Offloading most of the work to the phone cuts operational overhead and limits how much personal video has to be sent to the cloud, a win for user trust.
What the Tensor G6 silicon should offer for on-device video
On-device Video Boost isn’t magic; it’s a pipeline problem. The G6 needs three things:
- A stronger image signal processor with powerful HDR fusion
- A video encoder delivering high-quality 10-bit output at 4K60 without thermal throttling
- An NPU designed for sustained load rather than short bursts
Think temporal denoising, motion-compensated super-resolution, per-frame semantic segmentation for sky, skin, and foliage, and learned tone mapping, all running within a tight power envelope. Sustained performance is the key phrase: what matters is not peak TOPS so much as holding that throughput for minutes at a time without dropping frames or dimming the display to shed heat.
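To make that budget concrete, here is a minimal sketch of the arithmetic, assuming an illustrative per-stage cost breakdown; the millisecond figures are assumptions, not measurements of any Tensor chip.

```kotlin
// At 4K60 the whole enhancement chain has to fit inside one frame interval,
// frame after frame, for the length of the clip. Stage costs are assumptions.
fun main() {
    val frameBudgetMs = 1000.0 / 60           // ~16.7 ms per frame at 60 fps
    val stageCostsMs = mapOf(
        "temporal denoise" to 5.0,
        "motion-compensated super-res" to 6.0,
        "semantic segmentation" to 3.0,
        "learned tone mapping" to 2.0,
    )
    val totalMs = stageCostsMs.values.sum()   // 16.0 ms in this example
    val verdict = if (totalMs <= frameBudgetMs) "real time" else "dropping frames"
    println("budget %.1f ms, pipeline %.1f ms -> %s".format(frameBudgetMs, totalMs, verdict))
}
```

The point is how little headroom is left: a couple of milliseconds of thermal throttling on any stage and the pipeline falls behind.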
Memory bandwidth and I/O matter too. Today’s 4K60 pipelines generate hundreds of gigabits per second of internal traffic once you account for multi-frame HDR and multi-pass processing. Efficient tiling, on-chip SRAM, and smart scheduler policies can reduce DRAM trips and keep heat in check. Competitors are already here: Qualcomm touts sustained on-device AI video features in its latest platforms, and Apple’s vertical integration of hardware and software has made the iPhone Pro the video benchmark in reviewer testing from outlets such as DXOMARK and other major tech publications.
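A rough calculation shows where that traffic figure comes from; the pass count and read-plus-write pattern are illustrative assumptions.

```kotlin
// Back-of-the-envelope frame-traffic estimate for a 4K60, 10-bit 4:2:0 pipeline.
// The number of full-frame passes and the read+write pattern are assumptions.
fun main() {
    val pixels = 3840 * 2160
    val bytesPerPixel = 1.5 * 10.0 / 8.0               // 4:2:0 sampling, 10-bit samples
    val frameMB = pixels * bytesPerPixel / 1e6         // ~15.6 MB per frame
    val passes = 8                                     // HDR merge, denoise, super-res, ...
    val trafficGbps = frameMB * 60 * passes * 2 * 8 / 1000.0  // read + write, 60 fps
    println("~%.1f MB per frame, ~%.0f Gbit/s of frame traffic".format(frameMB, trafficGbps))
}
```

Multi-frame HDR fusion, which reads several input frames per output frame, pushes the number higher still, which is why keeping intermediates in on-chip SRAM matters so much.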
Smarter software to keep up with the hardware
Tensor alone won’t fix the experience. The Pixel Camera app needs a persistent Video Boost control with three states: Auto, On-Device, and Cloud Max (a rough sketch of how such a control could route clips follows the list below).
- Auto: Perform processing locally by default, escalating to the cloud for advanced extras
- On-Device: Keep processing on the phone for speed and privacy
- Cloud Max: Use the cloud for maximum quality and heavy-duty options like 8K upscaling
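Here is a minimal sketch of that routing, assuming hypothetical class names and thresholds that are not part of any real Pixel Camera API.

```kotlin
// Hypothetical routing for a persistent Video Boost setting. Mode names mirror
// the list above; the resolution and battery thresholds are illustrative.
enum class BoostMode { AUTO, ON_DEVICE, CLOUD_MAX }

data class Clip(val heightPx: Int, val durationSec: Int)

fun routeClip(clip: Clip, mode: BoostMode, batteryPct: Int, charging: Boolean): String =
    when (mode) {
        BoostMode.ON_DEVICE -> "process locally"
        BoostMode.CLOUD_MAX -> "upload for maximum quality (8K upscaling and the like)"
        BoostMode.AUTO -> when {
            clip.heightPx > 2160 -> "upload"               // beyond 4K stays in the cloud
            batteryPct < 50 && !charging -> "queue until charging"
            else -> "process locally"
        }
    }
```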
Give users a queue that shows progress, let them set battery thresholds, and offer a “process when charging” scheduler. Provide a real-time preview so you can see how the result will look before you start recording. And let creators re-boost clips from the gallery after the fact, as they already can with photos; non-destructive editing is table stakes in 2025.
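The “process when charging” behavior maps naturally onto Android’s existing WorkManager constraints. A minimal sketch follows, where BoostWorker is an assumed wrapper around the processing job rather than a real Pixel Camera class.

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Assumed worker that would run the on-device enhancement pipeline.
class BoostWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // Process the queued clip here.
        return Result.success()
    }
}

// Queue the job so it only runs while the phone is charging and the battery
// is not low; WorkManager handles the deferral and retry logic.
fun scheduleBoost(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiresCharging(true)
        .setRequiresBatteryNotLow(true)
        .build()

    val request = OneTimeWorkRequestBuilder<BoostWorker>()
        .setConstraints(constraints)
        .build()

    WorkManager.getInstance(context).enqueue(request)
}
```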
Guardrails for battery, thermals, storage, and formats
Practical limits make sense. Scope on-device Boost to 1080p and 4K for now, save 8K upscaling for the cloud, and let users set battery thresholds, for example only boosting above 50% charge or while plugged in. With UFS 4.0 storage, Wi‑Fi 7 for faster sharing when the cloud is still needed, and a thermal profile tuned for sustained camera use, the Pixel 11 could finally bring Google’s software ambition in line with real-world usability.
If Google wants the Pixel to be the phone you pull out when the light is weird or the moment is fleeting, Tensor G6 should favor a frictionless video pipeline over raw benchmark glory. Make Video Boost instant, local, and reliable, and the Pixel 11 will feel like a leap forward where it matters most.