
Advanced Micro Devices set out bold expectations for its artificial intelligence trajectory at its Advancing AI event in San Jose on 12 June 2025, emphasising system-level openness and ecosystem collaboration. CEO Dr Lisa Su unveiled the Instinct MI350 accelerator series, announced plans for the Helios rack-scale AI server launching in 2026, and detailed upgrades to AMD’s software stack aimed at challenging the sector’s incumbent leaders.
Top-tier AI customers, including OpenAI, Meta, Microsoft, Oracle, xAI and Crusoe, pledged significant investments. OpenAI CEO Sam Altman joined Su onstage, confirming the firm’s adoption of MI400-class chips and its collaboration on the MI450 design. Crusoe disclosed a $400 million commitment to the platform.
The MI350 series, which includes the MI350X and MI355X, is shipping to hyperscalers now and delivers a sharp generational performance leap: about four times the AI compute of prior-generation chips, 288 GB of HBM3e memory per GPU, and up to 40% better tokens-per-dollar than Nvidia’s B200 models. Initial deployments are expected in Q3 2025 in both air- and liquid-cooled configurations, with racks supporting up to 128 GPUs and producing some 2.6 exaflops of FP4 compute.
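The rack-level number is consistent with the per-chip specification. Here is a back-of-the-envelope sketch, assuming roughly 20 PFLOPS of FP4 per MI355X with structured sparsity (a figure from AMD’s published spec sheets, not stated in this announcement):

```python
# Back-of-the-envelope check of the quoted rack-scale FP4 figure.
# Assumption: one MI355X peaks at ~20.1 PFLOPS of FP4 with
# structured sparsity; the dense figure is roughly half that.
PFLOPS_FP4_PER_GPU = 20.1  # petaflops, sparse FP4 (assumed spec)
GPUS_PER_RACK = 128        # maximum rack configuration

rack_exaflops = PFLOPS_FP4_PER_GPU * GPUS_PER_RACK / 1000
print(f"{rack_exaflops:.2f} exaflops FP4")  # ~2.57, i.e. "some 2.6 exaflops"
```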
Looking further ahead, AMD previewed “Helios”, a fully integrated rack combining MI400 GPUs, Zen 6-based EPYC “Venice” CPUs and Pensando Vulcano NICs. Each rack houses 72 GPUs and offers up to 50% more HBM memory bandwidth than current architectures, alongside system-scale networking improvements. Helios is slated for launch in 2026, with a more advanced MI500-based variant expected around 2027.
Dr Su underscored openness as AMD’s competitive lever. Unlike Nvidia’s proprietary NVLink interconnect, AMD’s designs will adhere to open industry standards, with its networking architectures made available even to rivals such as Intel. Su argued this approach would accelerate innovation, citing the historical parallels of the open Linux and Android ecosystems.
On the software front, the ROCm 7 stack is being upgraded with enterprise AI and MLOps features, including integrated tooling from VMware, Red Hat, Canonical and others. ROCm Enterprise AI, launching in Q3 or early Q4 2025, aims to match or exceed Nvidia’s CUDA-based offerings in usability and integration.
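The usability claim is easiest to picture at the framework level. As an illustrative sketch rather than AMD-provided code: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda namespace that CUDA users rely on, so typical device-selection code runs unmodified on Instinct hardware.

```python
import torch

# On a ROCm build of PyTorch, torch.cuda is backed by HIP/ROCm,
# so this CUDA-style device selection also finds AMD Instinct GPUs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small matrix multiply dispatched to the accelerator; the code is
# identical to what one would write for an Nvidia GPU under CUDA.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print(device, c.shape)  # e.g. cuda torch.Size([4096, 4096])
```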
Strategic acquisitions underpin AMD’s infrastructure ambitions. The purchase of ZT Systems in March 2025 brought over 1,000 engineers to accelerate rack-scale system builds. Meanwhile, AMD has onboarded engineering talent from Untether AI and Lamini to enrich its AI software capabilities.
Market reaction was muted; AMD shares fell roughly 1–2% on the event day, with analysts noting that while the announcements are ambitious, immediate market share gains are uncertain.
Financially, AMD projects its AI data centre revenue to grow from over $5 billion in 2024 to tens of billions of dollars annually, anticipating that the AI chip market will reach around $500 billion by 2028.
These developments position AMD as a serious contender in the AI infrastructure arena. Its push for rack‑scale systems and open‑standard platforms aligns with the growing trend toward modular, interoperable computing. Competition with Nvidia will intensify through 2026 and 2027, centred on performance per dollar in large‑scale deployments.