Inviting customers to stroll the halls of their own AI Factories, Amazon is taking the battle for high-performance computing and artificial intelligence directly into its customers' data centers with a managed offering it calls AI Factories, built in conjunction with chipmaker Nvidia.
The offering delivers AWS-managed, Nvidia-powered compute to on-premises data centers, letting customers keep sensitive data in place while running model training and inference on the same AWS stack they run in the cloud.
- What Amazon is shipping with its on‑prem AI Factories
- Why on‑prem AI is gaining momentum and regulatory support
- Rivals raise the stakes with competing on‑prem AI offerings
- The Nvidia plus Trainium equation for performance and TCO
- Power economics and procurement considerations for AI Factories
- What customers will get from AWS‑managed on‑prem AI Factories
- Bottom line: hybrid AI becomes a strategic operating model

What Amazon is shipping with its on‑prem AI Factories
The AI Factories offering wraps Nvidia's newest Blackwell-class GPUs (or Amazon's new Trainium3 accelerators, if customers prefer) in AWS networking, storage, security and operations. Customers supply the power, floor space and connectivity; AWS supplies, sets up and manages the cluster, and can connect it to services like Amazon Bedrock for model access and governance or SageMaker for building and fine-tuning models.
Think of it as a private supercomputer for AI, fully managed by AWS. Data never leaves the site unless the customer elects to federate with the public cloud, meeting data sovereignty, latency and IP-control needs while maintaining a single toolchain and support model.
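To make the single-toolchain claim concrete, here is a minimal sketch, assuming an application that already calls Amazon Bedrock through the standard boto3 SDK; the premise of AI Factories is that this kind of code keeps working when the capacity behind it sits on premises. The Region and model ID below are illustrative placeholders, not details Amazon has published for the offering.

```python
# Illustrative only: a standard Bedrock runtime call via boto3. The Region and
# model ID are placeholders; nothing here is specific to AI Factories.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Summarize this incident report."}]}],
    inferenceConfig={"maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])
```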
Why on‑prem AI is gaining momentum and regulatory support
Regulators and boards are demanding greater control of high-value data. Banks, hospitals, defense agencies and operators of critical infrastructure often cannot move training data outside facilities they control. GDPR, sectoral privacy regulations and national security policies all drive sensitive workloads to sovereign or customer-controlled environments.
Latency-sensitive use cases add momentum. Factory vision, clinical imaging and real-time fraud prevention all benefit from running inference right up against the data stream. By co-locating training and fine-tuning with those feeds, organizations can shorten feedback loops and harden governance without building a bespoke stack from the ground up.
Rivals raise the stakes with competing on‑prem AI offerings
Amazon is not alone. Microsoft has been building Nvidia-powered AI Factory infrastructure for large model workloads and offers Azure Local for deploying managed hardware on customer premises. Google sells its Distributed Cloud lineup, in hosted and sovereign variants delivered with regional partners, and has been pushing Anthos for hybrid orchestration. Oracle also targets regulated segments with Oracle Alloy and its Dedicated Region offerings.
The key difference is approach: AWS is tightly coupling on-prem delivery with its AI platform services and offering a choice between Nvidia Blackwell GPUs and AWS silicon, a hedge against supply constraints and a play for workload portability across cloud and customer sites.

The Nvidia plus Trainium equation for performance and TCO
Nvidia's Blackwell generation is built for large-scale training and high-throughput inference, and the company has been touting LLM efficiency gains over its previous Hopper-class systems. At the same time, AWS Trainium3 drives Amazon's push for price-performance control and multi-year roadmap certainty. Enterprises can standardize on Blackwell for the broadest software ecosystem, choose Trainium3 for TCO at scale, or mix nodes to match workload profiles.
Under the hood, AWS contributes its Nitro and Elastic Fabric Adapter technologies, managed storage tiers and security services, while the Nvidia software stack provides framework support and cluster orchestration for GPU nodes. The idea is to hide complexity: customers get capacity planning, firmware and driver currency, patching, and incident response behind an SLA‑backed service.
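As a rough illustration of what "same toolchain" means at the training layer, the sketch below is a generic PyTorch distributed-data-parallel loop; nothing in it is AWS- or AI Factories-specific, which is the point. It assumes the managed service has already provisioned the GPU nodes and the interconnect (for example, EFA via the aws-ofi-nccl plugin), so framework code only ever sees the NCCL backend.

```python
# Minimal PyTorch DDP sketch, launched with torchrun. The model and data are toy
# stand-ins; the interconnect underneath NCCL is assumed to be handled by the
# managed service, whether the nodes sit in an AWS Region or on premises.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # NCCL rides on whatever fabric is provisioned
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # toy training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```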
Power economics and procurement considerations for AI Factories
AI Factories are power-dense. These racks draw far more power than most IT teams are used to handling, commonly in the 30-60 kW range, and flagship GPU systems can push densities higher still, forcing upgrades to cooling infrastructure, power distribution and floor loading. Rising rack densities are cited by the Uptime Institute and other industry groups as among the top issues facing data center operators, and AI clusters concentrate them in a single spot.
Operationally, the managed model shifts expertise and lifecycle risk to AWS while letting customers maintain data locality. Expect flexible commercial structures, from reserved capacity to consumption-based pricing and multi-year commitments, similar to cloud procurement but delivered on customer premises. For buyers, the calculus is time-to-value versus building an internal team to integrate GPUs, networking, storage and MLOps at scale.
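A back-of-the-envelope sketch of those power numbers, using assumed figures rather than anything AWS or Nvidia has published, shows how quickly even a modest cluster outgrows a conventional server room:

```python
# Back-of-the-envelope sizing with assumed numbers (not AWS or Nvidia figures):
# how a modest GPU cluster translates into rack count and facility power.
GPUS = 256               # assumed cluster size
WATTS_PER_GPU = 1_000    # assumed board power for a flagship accelerator
OVERHEAD = 1.35          # assumed factor for CPUs, NICs and fans per server
PUE = 1.3                # assumed facility power usage effectiveness
RACK_LIMIT_KW = 40       # a rack budget within the 30-60 kW range cited above

it_load_kw = GPUS * WATTS_PER_GPU * OVERHEAD / 1_000
facility_kw = it_load_kw * PUE
racks = -(-it_load_kw // RACK_LIMIT_KW)  # ceiling division

print(f"IT load: {it_load_kw:.0f} kW, facility draw: {facility_kw:.0f} kW, racks: {racks:.0f}")
```

Even with generous 40 kW racks, a 256-accelerator cluster under these assumptions lands in the hundreds of kilowatts once facility overhead is included, which is why cooling and power distribution dominate the siting conversation.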
What customers will get from AWS‑managed on‑prem AI Factories
For regulated entities, the pitch is simple: keep models and data inside your walls, audit everything with existing controls, and still tap the global cloud ecosystem when you need burst capacity or managed foundation models. For global enterprises grappling with data residency, AI Factories could serve as regional hubs, one in each jurisdiction, each replicating where possible and running like-for-like tooling.
Early adopters range from banks fine-tuning multilingual LLMs on proprietary transaction data to hospitals training imaging models on locally governed datasets and manufacturers deploying vision systems at the edge, where milliseconds make a difference. Government agencies, meanwhile, can run classified or sensitive workloads without any cross-border data flows.
Bottom line: hybrid AI becomes a strategic operating model
With AI Factories, Amazon is extending the cloud to wherever customers can get the electricity to power and cool the gear. The move counters competing sovereign and local cloud plays, aligns with Nvidia's trajectory and gives AWS an answer for buyers who want a cloud operating model without giving up physical control. In a world where demand for AI and demands for control of data are both ratcheting up, hybrid is no longer a halfway house; it's the game plan.