Samsung is rumored to be planning a big graphics shake-up for its flagship phones, with a homemade GPU expected to launch inside the Exynos 2800 and potentially power some Galaxy S28 models. The report, from Korean publication Hankyung, indicates that Samsung has put the finishing touches on its own GPU architecture, signaling that the company's reliance on licensed graphics technology may be nearing its end in favor of a more Apple-like approach of vertical integration in mobile graphics.
If true, it would be Samsung's most ambitious graphics turnabout in years. By building a custom solution instead of adapting an off-the-shelf GPU, Samsung could tailor the graphics core, AI accelerators, drivers, and One UI software stack to its own performance-per-watt targets, on-device AI workloads, and long-term feature support.

Why a Custom GPU Changes the Mobile Playbook
Modern phone silicon is as much about orchestration as brute power. A custom GPU would let Samsung co-design its graphics core not just with its CPU cluster but also with its NPU, ISP, memory fabric, and scheduler, down to how much cache each block gets and how hot the device is allowed to run. For phones, which have to operate within a few watts and a few degrees of thermal headroom, that co-design is often what decides whether a game holds 60 fps or throttles down to 30 after five minutes.
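To make that trade-off concrete, here is a toy Python model of why sustained frame rates track the chassis's thermal budget rather than the GPU's peak capability. The linear scaling assumption and the wattage figures are purely illustrative, not anything Samsung has disclosed.

```python
# Toy model (illustrative only, not Samsung's scheduler): how a fixed thermal
# budget turns a GPU's peak frame rate into a lower sustained one.

def sustained_fps(peak_fps: float, peak_watts: float, sustainable_watts: float) -> float:
    """Assume frame rate scales roughly linearly with the power the SoC can
    actually dissipate once the chassis heat-soaks."""
    return peak_fps * min(1.0, sustainable_watts / peak_watts)

# Hypothetical numbers: a GPU that bursts to 120 fps at 8 W inside a phone
# that can only shed about 4.5 W of heat after a few minutes of play.
print(sustained_fps(peak_fps=120, peak_watts=8.0, sustainable_watts=4.5))  # ~67.5 fps
```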
It also matters for AI. While many neural workloads are being offloaded to NPUs, GPUs still play a crucial role in mixed-precision math, transformers, and graphics-accelerated generative features. A homemade GPU could bake support for sparsity, low-bit operations, and attention-friendly tiling into the silicon itself, all of which enable faster, more efficient local AI. Expect closer hooks into Samsung's on-device AI stack, including models from the Samsung Gauss family.
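As a quick illustration of why low-bit support matters, the sketch below quantizes a float32 weight matrix to int8 with NumPy, cutting its memory footprint by 4x at the cost of a small rounding error. It is a generic example; real on-device pipelines use hardware-specific formats and kernels.

```python
# Minimal sketch of why "low-bit operations" matter on-device: symmetric int8
# quantization of a float32 weight matrix (generic NumPy example; real mobile
# GPU/NPU paths use hardware-specific formats and kernels).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

print(f"fp32: {w.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")   # 4.2 MB vs 1.0 MB
print(f"mean abs error: {np.abs(w - dequantize(q, scale)).mean():.4f}")  # small
```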
Breaking from AMD and the Xclipse GPU partnership
Samsung's latest high-end Exynos chips have used AMD graphics based on the RDNA architecture under the Xclipse banner, bringing PC-class features such as hardware-accelerated ray tracing to phones. On paper (and in short stints), the partnership delivered clear wins, but long-term performance has been hampered by thermals. Independent tests have shown Exynos-based phones dropping frames sooner than their Snapdragon equivalents in long gaming sessions, even when peak scores look competitive.
The Hankyung report says Samsung believes a general-purpose GPU stands in the way of "fully implementing" its AI roadmap and realizing complete software optimization. A proprietary design could let the company optimize for its own workloads rather than adapting desktop-derived IP to mobile constraints. To be clear, that doesn't mean AMD's technology disappears; it may well continue on other tiers or in other regions, but the flagship trajectory seems to point toward in-house silicon.
Pulling an Apple on silicon and software
Apple provides the template for why this move is significant. Because Apple controls both its GPU architecture and the Metal API, it can tune features and performance tightly across iPhone and iPad. When its latest A-series Pro chipset landed, Apple boasted of up to a 20% GPU uplift along with hardware ray tracing, not just because of transistors, but because everything from drivers to compilers and tooling was designed in unison.
Samsung's opportunity mirrors that playbook. A custom GPU enables deep integration with the Android Vulkan pipeline, Samsung's Game Optimization Service, and developer tooling. It can also harmonize features across form factors (phones, tablets, wearables, and extended reality) without waiting on third-party roadmaps. Hankyung also reported ambitions beyond phones, including smart glasses, infotainment systems, autonomous platforms, and even humanoid robots.

What it might actually mean for Galaxy S28 performance
For buyers, the headline benefits might not end with higher sustained frame rates and better battery life while gaming; they could extend to more capable on-device AI. Imagine higher frame rates (say, a steadier 60 or even 120 fps in demanding titles), ray tracing that lasts longer before heat sets in, and camera and creative tools running demanding generative models locally without stuttering.
Benchmarks will tell the story. If Samsung's custom GPU can improve stress-test stability in tools like UL's 3DMark, close the gap in shader-heavy workloads, and speed up ML kernels inside graphics contexts, we should see meaningful real-world gains. The more interesting wins may be in software: quicker graphics driver updates, fewer compatibility oddities, and developer APIs tuned for Galaxy-specific features.
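For context on what "stress-test stability" measures, here is a rough Python sketch in the spirit of looping stress tests such as 3DMark's, which compare the worst run against the best. The loop scores are hypothetical, and UL's exact scoring rules are not reproduced here.

```python
# Rough sketch of a looping stress-test "stability" figure: the worst loop's
# score as a percentage of the best. Loop scores below are hypothetical.

def stability_percent(loop_scores: list[float]) -> float:
    return 100.0 * min(loop_scores) / max(loop_scores)

# A phone that throttles as heat builds vs. one that holds steady.
throttling_phone = [3900, 3850, 3600, 3200, 2900, 2750, 2700]
steady_phone     = [3700, 3690, 3680, 3650, 3640, 3630, 3620]

print(f"throttling: {stability_percent(throttling_phone):.1f}%")  # ~69%
print(f"steady:     {stability_percent(steady_phone):.1f}%")      # ~98%
```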
Risks to consider and what to watch with a custom GPU
Designing a high-end GPU is one thing; delivering mature drivers, Vulkan and OpenGL ES support, and rock-solid game compatibility is another. The track records of Apple, Qualcomm, and Nvidia show just how long it takes to optimize compilers, shader caches, and power management. Samsung will also need to win over developers early with SDKs, profiling tools, and clear guidance on ray tracing and AI best practices.
The competitive stakes are high. Samsung holds roughly 20% of global smartphone shipments, and its premium phones face strong GPU competition from Snapdragon and Apple (AAPL). An in-house GPU that meaningfully boosts sustained performance and AI capability would be a tangible point of differentiation and a strategic hedge against dependence on external IP.
All signs point to a bolder graphics future for Galaxy
Of course, if Samsung's own custom GPU lands in time for the S28, it wouldn't simply be a spec-sheet change; it would be a statement of intent to control the full stack and push mobile graphics and AI well beyond what off-the-shelf parts can deliver. That's the Apple playbook, now rewritten for Exynos.
