Rambus Presents First GDDR7 Memory Controller in the Age of AI 2.0


Rambus has announced its latest GDDR7 memory controller IP for high-performance, AI 2.0 memory interfaces. GDDR7, or Graphics Double Data Rate 7 Synchronous Dynamic Random-Access Memory (GDDR7 SDRAM), is a JEDEC memory standard characterized by its high bandwidth and double data rate interface for high-performance computing, game consoles, and graphics cards.

 

As AI models become more complex, fast and efficient memory is critical for continued innovation. Image used courtesy of arXiv
 

Though the GDDR7 standard for graphics memory has been publicly available for some time, Rambus calls its IP the “industry’s first GDDR7 memory controller.” Designers can use the controller IP to reduce the memory latency and increase the throughput that could otherwise limit the data-heavy applications of AI 2.0.

 

The First GDDR7 Memory Controller IP

To give designers a head start in designing GDDR7 devices, Rambus equipped its newest IP with several features specific to the new standard. In addition to supporting all GDDR7 link features, the GDDR7 IP includes an optimized command sequence to maximize bus utilization.

 


The Rambus memory controller can be used alongside additional add-on cores to create an off-the-shelf GDDR7 solution, accelerating design times. Image used courtesy of Rambus
 

The Rambus GDDR7 memory controller supports up to a 2.5 GHz CK4 clock (1.25 GHz controller clock) with a 32-bit-wide internal data path. The controller supports up to 40 Gbps per pin, for an aggregate bandwidth of 160 GB/s, allowing designers to improve the performance of next-generation, high-throughput AI models running on GDDR7-equipped GPUs. In addition, Rambus integrated low-power operating modes and optimized operation under varying data loads into the design. The company claims its GDDR7 controller yields an overall 67% throughput improvement over the previous generation.
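The aggregate bandwidth figure follows directly from the per-pin rate and the data-path width. A quick back-of-the-envelope check (the constants below are taken from the numbers quoted above):

```python
# Sanity check of the quoted GDDR7 controller bandwidth:
# 40 Gbps per pin across a 32-bit data path.
GBPS_PER_PIN = 40   # maximum data rate per pin, in Gbps
DATA_WIDTH = 32     # width of the internal data path, in bits (pins)

total_gbps = GBPS_PER_PIN * DATA_WIDTH  # aggregate line rate, Gbps
total_gbytes = total_gbps // 8          # 8 bits per byte -> GB/s

print(f"{total_gbps} Gbps = {total_gbytes} GB/s")  # prints: 1280 Gbps = 160 GB/s
```

The 67% generational gain also checks out against the raw pin rates: 40 Gbps for GDDR7 versus 24 Gbps for the fastest GDDR6 parts is a 1.67x increase.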

 

GDDR7 Rises to the Challenges of AI 2.0

Advanced generative AI models rely heavily on graphics processing units (GPUs) to rapidly carry out the required computations in parallel. GPUs can tackle many simultaneous computations in models such as ChatGPT, rather than dedicating a single core to each individual operation.

 


The GDDR7 architecture supports more channels within the same memory device, improving parallel processing performance. Image used courtesy of Rambus
 

Much like traditional processors, however, GPUs depend on fast and efficient memory to operate effectively. As a result, dedicated standards such as the GDDRx standards have evolved to provide designers with a common ground on which to develop their own devices. As the limitations of GDDR6 become apparent, designers have turned to GDDR7 as a solution.

The GDDR7 standard differs from previous generations in several ways. Notably, it uses three-level pulse-amplitude modulation (PAM3) to transmit three bits over two signaling cycles, compared to the two bits that conventional two-level (NRZ) signaling carries in the same interval, providing up to 50% more data per cycle. In addition, the number of independent channels has doubled from two to four, further improving the standard’s performance.
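The 50% figure falls out of the per-symbol efficiency of the two signaling schemes, as this short illustrative calculation shows (NRZ carries one bit per symbol; PAM3 packs three bits into two symbols):

```python
# Illustrative comparison of NRZ vs. PAM3 signaling efficiency,
# matching the ~50% improvement cited above.
nrz_bits_per_symbol = 1.0     # NRZ: one of two voltage levels, 1 bit/symbol
pam3_bits_per_symbol = 3 / 2  # PAM3: three bits encoded across two symbols

gain = pam3_bits_per_symbol / nrz_bits_per_symbol - 1
print(f"PAM3 carries {gain:.0%} more data per symbol than NRZ")  # prints: 50%
```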

 

Memory Races to Keep Up With AI Advances

Although it is easy to get lost in processor specs, memory matters just as much as core count, especially for today’s data-hungry applications. The GDDR7 memory standard improves memory latency and throughput to help drive innovation in AI and edge AI applications.

Rambus built its new memory controller IP to tap into the high-bandwidth memory standard and enable development in areas such as edge AI, high-end graphics, and high-performance computing.
