Samsung unveils SOCAMM2 memory, teams up with Nvidia for AI accelerators

Samsung, the world’s biggest memory chip maker, has officially unveiled its second-generation Small Outline Compression Attached Memory Module (SOCAMM2). Like high-bandwidth memory (HBM), SOCAMM2 is aimed at AI servers, but it is designed with power efficiency as a priority.

SOCAMM2 memory modules offer higher bandwidth and improved power efficiency compared to conventional memory. They also add flexibility to a server system: SOCAMM2 modules can be detached and swapped for newer memory without replacing the whole board. Samsung’s SOCAMM2 module is built from multiple LPDDR5X DRAM chips, offering twice the memory bandwidth while consuming just 55% of the power of traditional Registered Dual In-Line Memory Modules (RDIMMs).

These new memory modules from Samsung are an alternative for servers that need performance and efficiency at the same time. Moreover, thanks to their smaller size and horizontal layout (compared to RDIMM’s vertical layout), they free up space in the system for better heat sink placement and improved airflow.

Samsung said it is working closely with Nvidia to optimise SOCAMM2 memory for Nvidia’s AI infrastructure. The company is ensuring that its new memory offers the efficiency and performance needed by Nvidia’s next-generation inference platform, Vera Rubin, which is expected to launch in 2026. Samsung is also collaborating across the AI ecosystem to drive the adoption of low-power memory in server environments.
