NVIDIA has announced its largest-ever acquisition, agreeing to pay $20 billion in cash to license the core technology and absorb the key talent of AI chip startup Groq. This landmark deal underscores NVIDIA's aggressive strategy to consolidate its leadership by folding a disruptive new technology into its portfolio, specifically targeting the high-growth AI inference market.

The transaction is structured as a non-exclusive licensing deal rather than a full buyout: NVIDIA gains licenses to Groq's entire technology portfolio, while Groq's GroqCloud service will continue to operate independently. Crucially, Groq's core technical team will join NVIDIA to drive the integration. This "technology + talent" approach has become a preferred model for tech giants securing cutting-edge capabilities, echoing NVIDIA's roughly $900 million deal for Enfabrica in September.
The strategic rationale centers on Groq's novel approach to AI inference. Founded by veterans of Google's original TPU team, Groq made waves in 2024 with its LPU (Language Processing Unit) inference chip. Built on a proprietary TSP (Tensor Streaming Processor) architecture on a 14nm process, the chip delivers breakthrough performance by eschewing traditional DRAM. Instead, it integrates 230 MB of on-chip SRAM, achieving a remarkable 80 TB/s of memory bandwidth alongside roughly 1,000 TOPS of compute.
Reported benchmarks are compelling. In cloud servers running models like Llama 2, the Groq LPU can reportedly generate 500 tokens per second—over 10 times faster than comparable NVIDIA GPU-based services—while consuming one-tenth the energy per token. This performance has attracted over 2 million developers and several Fortune 500 customers; a recent funding round valued Groq at $6.9 billion.
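The bandwidth figures above invite a quick back-of-envelope check. Autoregressive decoding is typically memory-bandwidth-bound, so a rough roofline sketch suggests why on-chip SRAM translates into faster token generation. Only the 80 TB/s figure comes from the reporting; the GPU baseline bandwidth and model size below are illustrative assumptions, not measured results.

```python
# Hedged back-of-envelope roofline sketch. Only the 80 TB/s SRAM figure
# comes from the article; the GPU bandwidth and model size are
# illustrative assumptions, not benchmarks.

SRAM_BW = 80e12     # Groq LPU on-chip SRAM bandwidth, bytes/s (from the article)
HBM_BW = 3.35e12    # assumed baseline: H100 SXM HBM3 bandwidth, bytes/s
MODEL_BYTES = 14e9  # assumed: ~7B-parameter model held in FP16

# Autoregressive decoding streams (roughly) every weight once per token,
# so an upper bound on single-stream throughput is bandwidth / model size.
tok_s_lpu = SRAM_BW / MODEL_BYTES  # upper bound; ignores sharding overhead
tok_s_gpu = HBM_BW / MODEL_BYTES

# In practice a single LPU holds only 230 MB, so the model is sharded
# across many chips; the ratio below still reflects the advantage per
# unit of memory bandwidth.
print(f"bandwidth-bound speedup ~ {tok_s_lpu / tok_s_gpu:.0f}x")
```

The ratio is set entirely by memory bandwidth, not TOPS, which is why the reported order-of-magnitude gap in tokens per second is plausible for bandwidth-bound decoding.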

ICgoodFind's Insight
NVIDIA's massive acquisition of Groq is a defensive and offensive masterstroke. It neutralizes a potentially disruptive competitor in the critical inference space while directly absorbing a proven, high-performance technology that could accelerate its own roadmap, further widening its moat in the AI hardware ecosystem.