Meta deploys Nvidia Grace CPUs at scale for AI and infrastructure
Summary
Meta is deploying Nvidia's standalone Grace CPUs for general and AI workloads, expanding beyond Superchips. The move shifts Meta away from Intel and AMD x86 processors, without requiring it to design custom Arm chips in-house.
Meta adopts Nvidia Grace CPUs
Meta is deploying Nvidia’s standalone Grace CPUs to power its general-purpose infrastructure and agentic AI workloads. This move makes Meta one of the first major hyperscalers to use Nvidia’s processors without an attached GPU. The social media giant is already running these systems at scale to handle backend data center tasks.
Most organizations previously used Nvidia’s Grace chips as part of "Superchips" that paired a CPU directly with a Hopper or Blackwell GPU. Meta is breaking that trend by using the 72-core Arm-based Grace processor as a primary server chip. The company found that the Grace architecture provides 2x the performance per watt compared to the traditional x86 processors it used previously.
The deployment focuses on workloads that do not require the massive parallel processing power of a GPU. Meta uses these chips for its "agentic" AI services, which require high-speed logic and backend processing rather than raw graphical throughput. This shift signals a direct challenge to Intel and AMD, which have dominated the server CPU market for decades.
Efficiency gains from Arm architecture
Nvidia’s Grace CPU utilizes 72 Arm Neoverse V2 cores that run at speeds up to 3.35 GHz. Each chip supports up to 480 GB of LPDDR5x memory, a type of RAM usually found in high-end laptops or mobile devices. This memory choice provides significantly higher bandwidth and lower power consumption than the DDR5 modules typically found in enterprise servers.
Meta is also utilizing the Grace CPU Superchip, which connects two Grace processors over a high-speed interconnect. This configuration offers 144 cores and up to 960 GB of memory. The dual-chip setup provides a total of 1 TB/s of memory bandwidth, allowing Meta to move data between cores faster than traditional server architectures allow.
The technical specifications of the Grace deployment include:
- 72 Arm Neoverse V2 cores per single CPU socket
- 3.35 GHz peak clock speeds
- 480 GB to 960 GB of LPDDR5x memory capacity
- 1 TB/s of peak memory bandwidth in Superchip configurations
- Scalable Coherent Fabric with over 3.2 TB/s of bisection bandwidth
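To put the figures in the list above in perspective, a back-of-envelope sketch of the per-core resources each configuration offers (the per-core values are derived arithmetic, not official Nvidia numbers):

```python
# Per-core memory and bandwidth for Nvidia Grace, derived from the
# specifications listed above. Illustrative arithmetic only.

GRACE = {"cores": 72, "mem_gb": 480}             # single Grace CPU
SUPERCHIP = {"cores": 144, "mem_gb": 960,        # two linked Grace CPUs
             "mem_bw_tbs": 1.0}                  # peak memory bandwidth

def per_core(config):
    """Return (GB of memory, GB/s of bandwidth) available per core."""
    mem = config["mem_gb"] / config["cores"]
    bw = config.get("mem_bw_tbs", 0.0) * 1000 / config["cores"]
    return mem, bw

mem, bw = per_core(SUPERCHIP)
print(f"Superchip: {mem:.1f} GB and {bw:.1f} GB/s per core")
# Superchip: 6.7 GB and 6.9 GB/s per core
```

In other words, every core in the dual-chip configuration has roughly 6.7 GB of LPDDR5x and close to 7 GB/s of memory bandwidth to itself, which is the kind of headroom backend data-processing workloads benefit from.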
Vera CPUs arrive next year
Meta will upgrade its infrastructure to Nvidia's upcoming Vera CPUs starting in 2026. The Vera architecture increases the core count to 88 custom Arm cores and introduces simultaneous multi-threading. These chips will serve as the foundation for Meta's next generation of private data processing.
The Vera chips include confidential computing features that allow Meta to process data in a secure, hardware-encrypted environment. Meta plans to use this capability to enhance privacy for WhatsApp. By running encrypted messaging workloads on Vera, Meta can perform AI-driven tasks without exposing raw user data to the rest of the system.
Nvidia Vice President Ian Buck confirmed that Meta has already tested early Vera silicon on its internal workloads. The transition from Grace to Vera allows Meta to maintain a consistent Arm-based software stack while increasing compute density. This roadmap ensures that Meta remains less dependent on traditional x86 vendors for its long-term scaling needs.
Billions in AI capital expenditure
Meta expects its total capital expenditure for 2026 to fall between $115 billion and $135 billion. A significant portion of that budget will go toward Nvidia hardware, including "millions" of GB300 and Vera Rubin Superchips. These high-end systems combine Nvidia’s latest GPUs with the new Vera CPUs to handle massive AI training and inference tasks.
Industry analysts estimate that this partnership will contribute tens of billions of dollars to Nvidia’s annual revenue. A single rack of high-end Nvidia Blackwell systems can cost more than $3.5 million. If Meta deploys these at the scale indicated, the financial impact will likely exceed the $31.9 billion in net income Nvidia reported in its most recent quarter.
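A quick sanity check on that scale, using only the figures above (this is illustrative arithmetic, not an estimate of Meta's actual order size):

```python
# How many high-end Blackwell racks would it take for the purchase
# price alone to match Nvidia's most recent quarterly net income?
# Figures are from the article; this is scale-setting, not a forecast.

RACK_COST = 3.5e6              # > $3.5 million per rack
NVIDIA_Q_NET_INCOME = 31.9e9   # Nvidia's most recent quarterly net income

racks = NVIDIA_Q_NET_INCOME / RACK_COST
print(f"{racks:,.0f} racks")   # 9,114 racks
```

Roughly nine thousand racks of spending would equal that single quarter's profit, so an order measured in "millions" of Superchips clears that bar comfortably.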
This massive investment supports Meta's Andromeda recommender system and its growing fleet of AI agents. While other tech giants like Amazon and Google build their own custom Arm chips like Graviton and Axion, Meta is doubling down on Nvidia’s off-the-shelf silicon. This strategy allows Meta to move faster than competitors who must manage their own chip design cycles.
Diversifying with AMD and networking
Meta is not relying exclusively on Nvidia to build its AI empire. The company maintains a large fleet of AMD Instinct GPUs and helped design AMD’s Helios rack systems. Meta will likely deploy these Helios systems later this year to ensure it has multiple hardware suppliers for its data centers.
Alongside the CPU and GPU orders, Meta is buying Nvidia Spectrum-X networking hardware to link its servers. Spectrum-X is an Ethernet-based fabric designed specifically to reduce the "tail latency" that slows down AI clusters. By using Nvidia for the CPU, the GPU, and the network, Meta creates a tightly integrated stack that minimizes bottlenecks.
The following hardware components form the core of Meta’s new data center strategy:
- Nvidia Grace and Vera CPUs for general purpose and secure backend tasks
- Nvidia GB300 GPUs for large-scale AI model training
- Nvidia Spectrum-X networking for high-speed data transfer
- AMD Instinct GPUs for diversified AI compute workloads
- AMD Helios racks for open-standard server deployments
Meta’s aggressive hardware acquisition comes ahead of Nvidia’s Q4 earnings report later this month. The scale of Meta’s commitment suggests that the demand for high-end AI silicon remains high despite rising costs. By securing millions of chips now, Meta is insulating itself against potential supply chain shortages in the coming years.