Industry News | 9/2/2025
G42 diversifies AI chips with AMD, Cerebras, Qualcomm
Abu Dhabi’s G42 is pursuing partnerships with AMD, Cerebras, and Qualcomm to broaden its AI hardware stack while maintaining Nvidia as a key collaborator. The move aligns with a broader UAE-U.S. AI Campus project and signals a strategic shift toward multi-vendor resilience in a rapidly evolving market.
G42’s strategic pivot
G42, the Abu Dhabi-based AI powerhouse, is quietly pursuing a multi-vendor hardware strategy. While Nvidia remains a central partner for some of its most demanding workloads, the company is exploring ties with AMD, Cerebras Systems, and Qualcomm to diversify its supply chain and increase flexibility across its growing AI ambitions. Think of it as adding more lanes to a highway that’s already crowded with big trucks—the goal isn’t to abandon Nvidia, but to reduce risk and improve throughput under varying conditions.
Why diversify now?
- Supply-chain stability: The AI hardware market has seen bottlenecks in GPU availability as demand surges. By engaging multiple vendors, G42 aims to smooth procurement and reduce single‑supplier exposure.
- Geopolitical realignments: The company has sharpened its U.S. relationships, reinforced by a $1.5 billion Microsoft investment and a broader push to divest from certain Chinese technologies. In this context, working with American chipmakers helps align with its strategic partnerships and regional goals.
- Workload-specific economics: Different tasks, from training colossal models to running inference at scale, can benefit from hardware with distinct strengths. A mixed stack lets G42 optimize performance, cost, and power consumption on a workload-by-workload basis.
The UAE-U.S. AI Campus and Nvidia’s continuing role
G42 is at the helm of the UAE-U.S. AI Campus in Abu Dhabi, a project that could rank among the world’s largest AI-dedicated data-center deployments. Part of this effort involves a 5-gigawatt campus, with a 1-gigawatt cluster nicknamed “Stargate UAE.” Nvidia is slated to provide next‑generation Grace Blackwell GB300 systems for a significant portion of this build, underscoring Nvidia’s continued dominance in cutting‑edge, large‑scale deployments.
But the remaining four gigawatts are where a multi‑vendor approach comes in. G42 is evaluating a suite of specialized processors from AMD, Cerebras, Qualcomm, and other potential partners to power different workloads at scale. The company is also reportedly negotiating with anchor tenants such as Google, Microsoft, AWS, Meta, and xAI to shape the campus's usage model and hardware mix.
Cerebras: a specialized path for large models
Cerebras’ Condor Galaxy network represents a deliberately different approach to AI training. Instead of sprawling GPU clusters, Cerebras uses wafer-scale engines designed to handle massive on‑chip memory and bandwidth. The latest Condor Galaxy 3 setup employs the CS‑3 system with Wafer‑Scale Engine 3 technology, a configuration optimized for colossal model training where traditional GPU clusters can struggle with memory bottlenecks. For G42, Cerebras has already become more than a vendor—it's a partner and investor in projects that push the frontier of foundational‑model training.
AMD and Qualcomm: complementary strengths
- AMD Instinct MI300X: Built for large‑scale training and inference, featuring substantial memory (up to 192 GB of HBM3) and strong throughput for large language models. An eight‑GPU Instinct platform can outperform comparable Nvidia setups on specific workloads, offering a compelling option where memory capacity is the bottleneck.
- Qualcomm Cloud AI 100 Ultra: Focused on inference with high performance per watt, delivering a favorable total cost of ownership. A single 150‑watt card can handle models with well over 100 billion parameters, making it attractive for scalable inference at the network edge or cloud data centers.
What comes next?
- Workload-aware selection: G42’s approach suggests a future where the underlying hardware is chosen to fit the job—training at Cerebras, inference at Qualcomm, HPC-style throughput from AMD when appropriate.
- Strategic partnerships: The company’s discussions with U.S. technology giants point to a growing ecosystem around the UAE‑U.S. AI Campus, where regional data centers serve as anchors for hyperscale services and research collaborations.
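The workload-aware selection described above can be sketched as a simple routing rule. Everything in this snippet (the `Workload` type, the `pick_platform` function, the parameter threshold, and the hardware mappings) is a hypothetical illustration of the idea, not G42's actual scheduling logic.

```python
# Hypothetical sketch: route AI workloads to hardware families by profile.
# Mappings mirror the article's framing (training at Cerebras, inference at
# Qualcomm, HPC-style throughput from AMD); thresholds are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str              # "training", "inference", or "hpc"
    params_billions: float  # rough model size

def pick_platform(w: Workload) -> str:
    """Choose a hardware family for a workload (illustrative only)."""
    if w.kind == "training" and w.params_billions >= 100:
        # Wafer-scale engines target colossal-model training.
        return "Cerebras CS-3"
    if w.kind == "inference":
        # Inference accelerators optimized for performance per watt.
        return "Qualcomm Cloud AI 100 Ultra"
    if w.kind in ("training", "hpc"):
        # Large HBM3 capacity and strong throughput.
        return "AMD Instinct MI300X"
    # Default to frontier-scale GPU systems.
    return "Nvidia GB300"

jobs = [
    Workload("foundation-model pretrain", "training", 400),
    Workload("chatbot serving", "inference", 70),
    Workload("scientific simulation", "hpc", 0),
]
for job in jobs:
    print(f"{job.name} -> {pick_platform(job)}")
```

The point of the sketch is the shape of the decision, not the specific mappings: a multi-vendor stack turns hardware choice into a per-workload policy rather than a single default.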
Why this matters for the AI landscape
The move toward multi‑vendor AI infrastructure isn’t just a branding exercise. It reflects a broader industry trend toward supply‑chain resilience, geopolitical alignment, and specialized hardware that can tackle a wider range of AI workloads. Nvidia’s Grace Blackwell systems will still play a pivotal role in the most ambitious deployments, but G42’s multi‑vendor strategy signals a maturation in how large-scale AI platforms are built, tested, and operated.
In short, this isn’t about abandoning Nvidia; it’s about strengthening the AI backbone by weaving in different hardware partners that excel under different conditions. If the UAE‑U.S. Campus proves successful, other global AI hubs may follow suit, nudging the industry toward more nuanced, workload‑specific architectures.
Note: This piece synthesizes information from multiple public reports and industry analyses and does not rely on a single official statement. The evolving nature of partnerships means plans could shift as negotiations progress.