RELEASE: Supermicro expands its portfolio of AI-optimized products with a new generation of systems (2)

(Information sent by the signatory company).

- Supermicro expands its portfolio of AI-optimized products with a new generation of systems and rack architectures including new NVIDIA Blackwell architecture solutions

Powerful, energy-efficient solutions for large-scale CSPs and NSPs, featuring a full stack of next-generation NVIDIA GPUs and CPUs with the NVIDIA Quantum-X800 platform and NVIDIA AI Enterprise 5.0

SAN JOSE, California, March 19, 2024 /PRNewswire/ -- NVIDIA GTC 2024 -- Supermicro, Inc. (NASDAQ: SMCI), a total IT solutions provider for AI, cloud, storage and 5G/Edge, announces new AI systems for large-scale generative AI featuring NVIDIA's next generation of data center products, including the latest NVIDIA GB200 Grace™ Blackwell Superchip and the NVIDIA B200 and B100 Tensor Core GPUs. Supermicro is upgrading its current NVIDIA HGX™ H100/H200 8-GPU systems to be ready for the NVIDIA HGX™ B100 8-GPU and enhanced to support the B200, resulting in reduced delivery times. Additionally, Supermicro will further strengthen its extensive line of NVIDIA MGX™ systems with new offerings based on the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs. Supermicro is also adding new systems to its lineup, including the liquid-cooled 4U NVIDIA HGX B200 8-GPU system.

"Our focus on building-block architecture and rack-scale Total IT for AI has allowed us to design next-generation systems for the enhanced requirements of GPUs based on the NVIDIA Blackwell architecture, such as our new liquid-cooled 4U NVIDIA HGX B200 8-GPU system. These new products are built on Supermicro and NVIDIA's proven HGX and MGX system architectures, optimized for the new capabilities of NVIDIA Blackwell GPUs. Supermicro has the expertise to incorporate 1 kW GPUs into a wide range of air- and liquid-cooled systems, along with a rack-scale production capacity of 5,000 racks per month, and anticipates being first to market in deploying full rack clusters with NVIDIA Blackwell GPUs."

Supermicro's direct-to-chip liquid cooling technology will accommodate the increased thermal design power (TDP) of the latest GPUs and unlock the full potential of NVIDIA Blackwell GPUs. Supermicro's HGX and MGX systems with NVIDIA Blackwell are the cornerstones of future AI infrastructure and will deliver breakthrough performance for trillion-parameter-scale AI training and real-time AI inference:

A wide range of GPU-optimized Supermicro systems will be ready for NVIDIA Blackwell B200 and B100 Tensor Core GPUs and validated for the latest NVIDIA AI Enterprise software, which adds support for NVIDIA NIM inference microservices. Supermicro systems include:

To train massive foundational AI models, Supermicro is poised to be first to market with the NVIDIA HGX B200 8-GPU and HGX B100 8-GPU systems. These systems feature eight NVIDIA Blackwell GPUs connected via a high-speed fifth-generation NVIDIA® NVLink® interconnect at 1.8 TB/s, doubling the performance of the previous generation, with 1.5 TB of total high-bandwidth memory, and will deliver 3x faster training results for LLMs, such as the GPT-MoE-1.8T model, compared to the NVIDIA Hopper architecture generation. These systems feature advanced networking to scale to clusters, supporting both NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet options with a 1:1 GPU-to-NIC ratio.

"Supermicro continues to bring to market an amazing range of accelerated computing platform servers, optimized for AI training and inference, that can address any need in today's market," said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. "We work closely with Supermicro to deliver the most optimized solutions to customers."

For the most demanding LLM inference workloads, Supermicro is launching several new MGX systems built with the NVIDIA GB200 Grace Blackwell Superchip, which combines an NVIDIA Grace CPU with two NVIDIA Blackwell GPUs. Supermicro's NVIDIA MGX systems with the GB200 will deliver a huge jump in performance for AI inference, with speedups of up to 30x compared to NVIDIA HGX H100 systems. Supermicro and NVIDIA have developed a rack-scale solution with the NVIDIA GB200 NVL72, connecting 36 Grace CPUs and 72 Blackwell GPUs in a single rack. All 72 GPUs are interconnected with fifth-generation NVIDIA NVLink for 1.8 TB/s GPU-to-GPU communication. Additionally, for inference workloads, Supermicro announces the ARS-221GL-NHIR, a 2U server based on the GH200 product line, which will have two GH200 servers connected via a high-speed 900 Gb/s interconnect. Visit the Supermicro booth at GTC for more information.

Supermicro systems will also support the upcoming NVIDIA Quantum-X800 InfiniBand platform, consisting of the NVIDIA Quantum-X800 QM3400 switch and SuperNIC800, and the NVIDIA Spectrum-X800 Ethernet platform, consisting of the NVIDIA Spectrum-X800 SN5600 switch and SuperNIC800. Optimized for the NVIDIA Blackwell architecture, NVIDIA Quantum-X800 and Spectrum-X800 will deliver the highest level of networking performance for AI infrastructures.

To learn more about Supermicro's NVIDIA solutions, visit https://www.supermicro.com/en/accelerato...

Supermicro's upcoming line of systems with the NVIDIA B200 and GB200 consists of:

Supermicro at GTC 2024

Supermicro will demonstrate a full portfolio of GPU systems for AI at NVIDIA's GTC 2024 event March 18-21 at the San Jose Convention Center. Visit Supermicro at Booth #1016 to see solutions built for a wide range of AI applications, including generative AI model training, AI inference, and edge AI. Supermicro will also showcase two rack-level solutions, including a concept rack with systems including the upcoming NVIDIA GB200 with 72 liquid-cooled GPUs interconnected with fifth-generation NVLink.

Supermicro solutions on display at GTC 2024 include:

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in application-optimized total IT solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for 5G edge enterprise, cloud, AI, and telecom/IT infrastructure. We are a total IT solutions manufacturer with servers, AI, storage, IoT, switching systems, software and support services. Supermicro's motherboard, power and chassis design expertise further supports our development and production, delivering next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the United States, Taiwan and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® enables customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible, reusable building blocks that support a full set of form factors, processors, memory, GPU, storage, networking, power and cooling solutions (air-cooled, free-air cooled or liquid-cooled).

Supermicro, Server Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names and trademarks are the property of their respective owners.

Photo - https://mma.prnewswire.com/media/2365522...
Logo - https://mma.prnewswire.com/media/1443241...
