As memory shortages persist, vendor price quotes are not long remembered
Summary
AI demand is causing memory prices to skyrocket, forcing hardware vendors like Cisco and HPE to shorten quote validity and adjust prices. Act fast on server deals.
Cisco and HPE change price rules
Cisco and HPE are shortening the time they honor price quotes as the cost of computer memory doubles. Both companies recently updated their terms to allow for price adjustments between the time a customer places an order and the date the hardware actually ships.
HPE cut its quote validity window by more than half, giving customers 14 days to commit to a purchase instead of the previous 30-day standard. This change applies to server and GreenLake orders, though the company currently excludes public sector and B2B contracts from the new policy.
Cisco notified its channel partners this week that it can now cancel compute orders up to 45 days before the scheduled shipment date. The networking giant also reserved the right to adjust pricing on those orders or reduce price protection on specific deals before the hardware leaves the warehouse.
These aggressive policy shifts respond to a volatile component market where DRAM and NAND flash prices are rising faster than vendors can track. Hardware manufacturers are struggling to maintain profit margins while component suppliers prioritize high-paying buyers in the artificial intelligence sector.
AI demand drives memory shortages
The sudden pivot to AI infrastructure is consuming the global supply of high-performance memory. IDC reports that memory prices for many products have nearly doubled over the last few months as cloud providers race to build out massive data centers.
Memory suppliers are currently "allocating" supply, which means they decide which customers get chips and which have to wait. They are prioritizing market segments capable of absorbing 100 percent price hikes, often leaving traditional enterprise server and PC markets with the leftovers.
While industry headlines focus on Nvidia GPUs, those chips require massive amounts of supporting memory to function. AI clusters use high-bandwidth memory (HBM) and high-capacity DDR5 modules that pull production capacity away from standard consumer and business hardware.
The shortage is broad-based because memory is a foundational component for almost every electronic device. When Samsung or SK Hynix shift production lines to focus on high-margin AI memory, the supply of standard RAM for laptops and office servers drops immediately.
Big Tech spends billions on infrastructure
Four major tech companies are fueling this supply crunch with unprecedented capital expenditures. AWS, Google, Meta, and Microsoft recently told investors they expect to spend a combined $600 billion on AI-related projects and data center construction.
This massive spending creates a "fluid environment" where component costs change weekly. HPE executive Simon Ewington told partners in a recent email that "significant constraints" on worldwide components forced the company to tighten its pricing windows.
Industry analysts at IDC see no signs of this demand slowing down through 2026. Cloud providers are locking in long-term supply agreements for 2025 and 2026, which effectively crowds out smaller hardware vendors and enterprise customers who buy on shorter cycles.
The current market dynamics include several specific changes to how hardware is sold:
- Vendors now frequently renegotiate existing pricing contracts before delivery.
- Price protection, which once guaranteed a set cost for long-term projects, is being scaled back or eliminated.
- Suppliers are shifting production to HBM3e and DDR5, reducing the availability of older, cheaper standards.
- Lead times for high-capacity storage arrays and enterprise servers are stretching as vendors wait for memory allocations.
Shortages could last for years
The timeline for a market recovery remains a point of contention among industry leaders. Intel CEO Lip-Bu Tan recently stated that he expects memory shortages to persist until 2028 as the industry struggles to build enough fabrication capacity.
IDC vice president Jeff Janukowicz offers a slightly more optimistic view, suggesting that prices could begin to moderate before then. However, he cautioned that as long as AI demand remains "robust," the underlying pressure on the memory market will continue to drive costs higher for everyone else.
HPE spokesman Adam Bauer described the company's price adjustment policy as a "rare" measure. He claimed the company would only trigger these clauses in response to "material increases" in forecasted commodity costs, but the policy change itself suggests those increases are now a regular occurrence.
For IT departments and procurement officers, the era of stable hardware budgets is over for the foreseeable future. A quote received on the first of the month may no longer be valid by the fifteenth, and a signed purchase order no longer guarantees the final price on the invoice.
The impact on the enterprise market
The ripple effects of these shortages extend beyond high-end AI servers to the standard hardware used by most businesses. When memory prices double, the cost of a standard 1U rack server or a high-end workstation increases by thousands of dollars.
Enterprises are now facing a difficult choice: pay the "AI tax" on standard hardware or delay necessary infrastructure refreshes. Many companies that set their current budgets against last year's pricing are finding their funds won't cover the required number of units.
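To put that budget squeeze in concrete terms, here is a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption (a hypothetical $500,000 refresh budget and a server whose memory makes up roughly a third of its cost), not real vendor pricing; the point is only to show how a doubled memory bill shrinks the number of units a fixed budget can cover.

```python
# Back-of-envelope sketch of how a memory price spike erodes a fixed hardware budget.
# All figures are illustrative assumptions, not vendor pricing.

def server_cost(non_memory_cost: float, memory_cost: float, memory_multiplier: float) -> float:
    """Per-server cost when memory prices rise by the given multiplier."""
    return non_memory_cost + memory_cost * memory_multiplier

budget = 500_000.0         # assumed annual refresh budget
non_memory_cost = 8_000.0  # assumed cost of CPU, chassis, NICs, and disks per 1U server
memory_cost = 4_000.0      # assumed memory cost per server at pre-spike prices

old_price = server_cost(non_memory_cost, memory_cost, 1.0)  # before the spike
new_price = server_cost(non_memory_cost, memory_cost, 2.0)  # memory roughly doubles

print(f"Per-server cost: ${old_price:,.0f} -> ${new_price:,.0f}")
print(f"Servers the budget covers: {int(budget // old_price)} -> {int(budget // new_price)}")
```

Under those assumptions, the per-server cost climbs from $12,000 to $16,000, and the same budget covers 31 servers instead of 41, roughly a quarter fewer machines for the same spend.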
The Register first reported these policy changes, noting that Cisco did not respond to requests for comment regarding its new cancellation rules. The silence from major vendors suggests they are still calibrating how to communicate these price hikes to a frustrated customer base.
This volatility is also hitting the storage market, where SSD prices are climbing alongside RAM. NAND manufacturers have cut production to drive up prices after a period of oversupply, and the AI boom provided the perfect opportunity to tighten the market.
The following metrics highlight the current state of the hardware market:
- $600 billion: Total projected AI spend from the four largest hyperscalers.
- 14 days: The new lifespan of an HPE price quote.
- 45 days: The window in which Cisco can now cancel a compute order.
- 100 percent: The approximate price increase for some memory components in recent months.
As long as the "Big Four" continue to pour capital into AI, the broader market for servers and PCs will remain expensive. Hardware buyers should expect more vendors to follow the lead of Cisco and HPE by shortening quote windows and adding price-adjustment clauses to their standard contracts.
