FindArticles © 2025. All Rights Reserved.

AI Developer Enables Nvidia RTX eGPU on MacBook Pro M3

By Gregory Zuckerman | Technology
Last updated: October 30, 2025 9:18 am

An independent AI developer has shown an Nvidia RTX graphics card handling compute workloads on an M3 MacBook Pro. Long deemed impractical on Apple Silicon, the feat is demonstrated in a proof of concept shared by TinyCorp that routes AI inference to an external RTX GPU over USB4 and Thunderbolt 4. Apple offers no official eGPU support on its M-series chips, so the workaround arrives as an unexpected new option for local AI acceleration on Macs. Apple's switch to its own silicon ended third-party GPU support, and Nvidia's macOS driver pipeline was largely halted years ago; enthusiasts have tried to bridge the gap with little success. Crucially, the demonstration does not need to drive a display. It is not for gaming, and that compute-only focus is what makes it viable for AI.

How TinyCorp’s RTX eGPU compute-only approach works

TinyCorp has not released a full description of how the solution works, but it appears to tunnel PCIe traffic over USB4/Thunderbolt 4 to the GPU, with the TinyGrad machine learning stack in control. USB4 and Thunderbolt 4 can carry PCIe traffic at up to 40 Gbps, enough to enumerate the device and transfer tensors, though nowhere near a desktop PCIe x16 link. The stack appears to drive the card through user-space implementations of the CUDA and NVML interfaces rather than Nvidia's official drivers. Apple's documentation is clear that eGPUs are unsupported on Apple Silicon; TinyCorp sidesteps this by treating the card as compute-only plumbing rather than trying to revive the old macOS eGPU graphics path.
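Since TinyCorp has not published setup steps, the exact call pattern is unknown, but TinyGrad's public API suggests what routing work to the external card could look like. This is a hedged sketch: the "NV" device string (TinyGrad's user-space Nvidia backend) and the idea that it addresses the tunneled GPU are assumptions, not confirmed details of the demo.

```python
# Hypothetical usage sketch based on tinygrad's public API.
# Assumption: the eGPU shows up as tinygrad's user-space "NV" backend.
backend = "NV"

try:
    from tinygrad import Tensor, Device
    Device.DEFAULT = backend            # route all kernels to the external RTX card
    x = Tensor.rand(1024, 1024)
    y = (x @ x).realize()               # matmul executes on the GPU; result stays in VRAM
    print(y.shape)                      # (1024, 1024)
except Exception as exc:
    # tinygrad not installed, or no NV-capable device on this machine
    print(f"sketch only, backend unavailable: {type(exc).__name__}")
```

If the approach ships as described, the appeal is exactly this: no kernel extensions or macOS graphics changes, just a user-space stack selecting a different compute device.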


External RTX GPUs broaden support for AI inference on Mac

TinyCorp had already shown external AMD cards running AI compute on Apple Silicon over USB3; moving to USB4/TB4 widens the pipe and brings Nvidia RTX into the fold. The developer says the configuration recognizes RTX 30, 40, and 50 series cards, along with AMD GPUs from RDNA2 through RDNA4. Local inference is the first win. These cards carry Tensor Cores and far more VRAM than most laptops, which makes running large language models on-device feasible: a 7–8B parameter model quantized to 8-bit needs roughly 8–10 GB of VRAM, and 4-bit quantization cuts that requirement roughly in half, so a 13–14B model in 4-bit fits a similar footprint. A midrange GPU is therefore surprisingly sufficient for chat and coding assistants. Apple's M3 Neural Engine is fast for mobile-style inference, but discrete GPU cores are specialized for the dense matrix multiplication that dominates LLM and diffusion workloads. Vendor comparison figures are often apples-to-oranges, but the practical advantage can be substantial: RTX cards deliver far more throughput on the low-precision operations these models use, provided the model fits entirely in VRAM.

Bandwidth limits eGPU performance compared to desktop PCIe

Thunderbolt 4 tops out at 40 Gbps, roughly 5 GB/s effective, similar to a PCIe 3.0 x4 link; desktop GPUs enjoy PCIe 4.0/5.0 x16 links at 32–64 GB/s. That gap matters for training and for workloads that constantly stream activations between the GPU and system memory. For steady-state inference, once the model's weights are resident on the card, link speed matters far less and performance holds up much better.
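Back-of-envelope numbers make clear why the narrow link is tolerable for inference: loading the model over Thunderbolt is a one-time cost measured in seconds, after which the weights never cross the link again. A quick sketch, using the approximate effective rates cited above:

```python
# Approximate effective bandwidths (GB/s) from the comparison above
TB4_EFFECTIVE = 5        # ~5 GB/s usable out of Thunderbolt 4's 40 Gbps
PCIE4_X16 = 32           # desktop PCIe 4.0 x16

# One-time cost of loading a 4-bit 7B model's weights over TB4
model_gb = 7 * 0.5       # 4 bits = 0.5 bytes per weight -> 3.5 GB
load_seconds = model_gb / TB4_EFFECTIVE
print(f"model load over TB4: ~{load_seconds:.1f}s")   # ~0.7s

# The same link is ~6x slower than desktop PCIe for streamed traffic,
# which is why training and activation-heavy workloads suffer.
print(f"link deficit vs PCIe 4.0 x16: {PCIE4_X16 / TB4_EFFECTIVE:.1f}x")
```

Once that sub-second load completes, token generation is bound by VRAM bandwidth and compute on the card itself, not by the Thunderbolt cable.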

Power and thermal constraints complicate external RTX setups

High‑end RTX boards need substantial power delivery and cooling, typically an external enclosure with its own PSU, so this is not a travel‑friendly setup. And because the path is compute‑only, you cannot drive displays from the RTX outputs or play games; macOS's graphics stack is unchanged.

Software fragility and limited driver support are concerns

Without released drivers and documentation, reproducibility is limited, and mainstream AI frameworks such as PyTorch and JAX are not plug‑and‑play in this configuration. The demo depends on TinyGrad, which keeps the scope focused but also limits compatibility in the near term.


Compatibility and support across GPUs and Apple Silicon

TinyCorp says RTX 30/40/50‑series and AMD RDNA2/3/4 cards can be addressed; newer GPUs are favored for their Tensor Core designs and larger VRAM. Apple still does not support eGPUs on Apple Silicon, and Nvidia offers no macOS CUDA for M‑series machines. Any community rollout will need open code, stable user‑space drivers, and careful coordination to avoid breakage from OS updates.

Industry context shows strong demand for Mac-based AI

Developer surveys consistently rank macOS among the most popular workstation platforms, and many AI practitioners prefer Mac laptops for their battery life and build quality. Bridging to Nvidia compute reduces the context switching between a Mac notebook and a separate Linux box or cloud instance.

What to watch next for code releases and portability

The main questions now are when code and setup steps will be published, how portable the approach is across macOS versions, and how easily mainstream frameworks can target the external GPU. If the community turns this into a repeatable toolchain, it could change how Mac‑first developers prototype models locally at low cost.

Future interconnects may narrow the external GPU gap

Future interconnects such as Thunderbolt 5, which targets 80 Gbps with a 120 Gbps boost mode, would further narrow the gap for external GPUs. In the meantime, TinyCorp's demo is strong evidence that Apple Silicon laptops can tap Nvidia RTX compute for AI without invasive changes to macOS.

Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.