Nvidia DGX is a line of Nvidia-produced servers and workstations which specialize in using GPGPU to accelerate deep learning applications. The typical design of a DGX system is based upon a rackmount chassis with a motherboard that carries high-performance x86 server CPUs (typically Intel Xeons, though the more recent DGX A100 and DGX Station A100 use AMD EPYC CPUs). The main component of a DGX system is a set of 4 to 16 Nvidia Tesla GPU modules on an independent system board. DGX systems have large heatsinks and powerful fans to adequately cool thousands of watts of thermal output. The GPU modules are typically integrated into the system using a version of the SXM socket.


Models


Pascal - Volta


DGX-1

DGX-1 servers feature 8 GPUs based on Pascal or Volta daughter cards with 128 GB of total HBM2 memory, connected by an NVLink mesh network. The DGX-1 was announced on April 6, 2016. All models are based on a dual-socket configuration of Intel Xeon E5 CPUs and are equipped with the following features.
* 512 GB of DDR4-2133
* Dual 10 Gb networking
* 4 x 1.92 TB SSDs
* 3200 W of combined power supply capability
* 3U rackmount chassis
The product line is intended to bridge the gap between GPUs and AI accelerators in that the device has specific features specializing it for deep learning workloads. The initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing, while the Volta-based upgrade increased this to 960 teraflops. The DGX-1 was first available only in the Pascal-based configuration, with the first-generation SXM socket. A later revision of the DGX-1 offered support for first-generation Volta cards via the SXM-2 socket. Nvidia offered upgrade kits that allowed users with a Pascal-based DGX-1 to upgrade to a Volta-based DGX-1.
* The Pascal-based DGX-1 has two variants, one with an Intel Xeon E5-2698 V3 and one with an E5-2698 V4. Pricing for the variant equipped with an E5-2698 V4 is unavailable; the Pascal-based DGX-1 with an E5-2698 V3 was priced at launch at $129,000.
* The Volta-based DGX-1 is equipped with an E5-2698 V4 and was priced at launch at $149,000.
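As a rough sanity check on those throughput figures, a minimal sketch in Python (assuming, purely for illustration, that the system totals quoted above are eight times the per-GPU half-precision rate):

# Hypothetical sanity check, not an official per-GPU figure:
# derive the per-GPU half-precision rate from the DGX-1 system totals above.
GPUS_PER_DGX1 = 8

system_fp16_tflops = {
    "Pascal-based DGX-1": 170,   # initial configuration
    "Volta-based DGX-1": 960,    # tensor-core FP16 after the upgrade
}

for model, total_tflops in system_fp16_tflops.items():
    per_gpu = total_tflops / GPUS_PER_DGX1
    print(f"{model}: {total_tflops} TFLOPS total -> ~{per_gpu:.1f} TFLOPS per GPU")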


DGX Station

Designed as a turnkey deskside AI supercomputer, the DGX Station is a tower computer that can function completely independently of typical datacenter infrastructure such as cooling, redundant power, or 19-inch racks. The DGX Station was first available with the following specifications.
* Four Volta-based Tesla V100 accelerators, each with 16 GB of HBM2 memory
* 480 TFLOPS FP16
* Single Intel Xeon E5-2698 v4
* 256 GB DDR4
* 4 x 1.92 TB SSDs
* Dual 10 Gb Ethernet
The DGX Station is water-cooled to better manage the heat of almost 1500 W of total system components, which allows it to stay below 35 dB of noise under load. This, among other features, made the system a compelling purchase for customers without the infrastructure to run rackmount DGX systems, which can be loud, output a lot of heat, and take up a large area. This was Nvidia's first venture into bringing high-performance computing deskside, and it has since remained a prominent part of Nvidia's marketing strategy. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-station/nvidia-dgx-station-a100-datasheet.pdf
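The specification list above is consistent with the per-GPU Volta rate implied by the DGX-1 figures; a minimal sketch of that arithmetic (numbers taken from this article, not from a spec sheet):

# Hypothetical cross-check of the DGX Station figures quoted above.
NUM_GPUS = 4
HBM2_PER_GPU_GB = 16
STATION_FP16_TFLOPS = 480

print("Total HBM2:", NUM_GPUS * HBM2_PER_GPU_GB, "GB")             # 64 GB across four V100s
print("Per-GPU FP16:", STATION_FP16_TFLOPS / NUM_GPUS, "TFLOPS")   # 120, matching the Volta DGX-1 rate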


DGX-2

The successor of the Nvidia DGX-1 is the Nvidia DGX-2, which uses sixteen Volta-based V100 32 GB (second generation) cards in a single unit. It was announced on March 27, 2018. The DGX-2 delivers 2 petaflops with 512 GB of shared memory for tackling massive datasets and uses NVSwitch for high-bandwidth internal communication. The DGX-2 has a total of 512 GB of HBM2 memory and a total of 1.5 TB of DDR4. Also present are eight 100 Gb/s InfiniBand cards and 30.72 TB of SSD storage, all enclosed within a massive 10U rackmount chassis and drawing up to 10 kW under maximum load. The initial price for the DGX-2 was $399,000. The DGX-2 differs from other DGX models in that it contains two separate GPU daughterboards, each with eight GPUs. These boards are connected by an NVSwitch system that allows for full-bandwidth communication across all GPUs in the system, without additional latency between boards. A higher-performance variant of the DGX-2, the DGX-2H, was offered as well. The DGX-2H replaced the DGX-2's dual Intel Xeon Platinum 8168s with upgraded dual Intel Xeon Platinum 8174s. This upgrade does not increase the core count per system, as both CPUs have 24 cores, nor does it enable any new functions, but it does increase the base frequency of the CPUs from 2.7 GHz to 3.1 GHz.
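A minimal sketch of how the DGX-2 totals above follow from sixteen V100 32 GB cards (the per-GPU numbers here are derived from the system figures in this section, not quoted from Nvidia):

# Derive per-GPU figures from the DGX-2 system totals quoted above.
NUM_GPUS = 16
TOTAL_HBM2_GB = 512
SYSTEM_TENSOR_PFLOPS = 2

print("HBM2 per GPU:", TOTAL_HBM2_GB / NUM_GPUS, "GB")                            # 32 GB, the V100 32 GB variant
print("Tensor FP16 per GPU:", SYSTEM_TENSOR_PFLOPS * 1000 / NUM_GPUS, "TFLOPS")   # 125 TFLOPS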


Ampere


DGX A100 Server

Announced and released on May 14, 2020, the DGX A100 was the third generation of DGX server, including 8 Ampere-based A100 accelerators. Also included are 15 TB of PCIe gen 4 NVMe storage, 1 TB of RAM, and eight Mellanox-powered 200 Gb/s HDR InfiniBand ConnectX-6 NICs. The DGX A100 is housed in a much smaller enclosure than its predecessor, the DGX-2, taking up only 6 rack units. The DGX A100 also moved to an AMD EPYC 7742 CPU, making it the first DGX server not built with an Intel Xeon CPU. The initial price for the DGX A100 Server was $199,000.


DGX Station A100

As the successor to the original DGX Station, the DGX Station A100 aims to fill the same niche: a quiet, efficient, turnkey cluster-in-a-box solution that can be purchased, leased, or rented by smaller companies or individuals who want to utilize machine learning. It follows many of the design choices of the original DGX Station, such as the tower orientation and the single-socket CPU mainboard, but introduces a new refrigerant-based cooling system and carries a reduced number of accelerators compared to the corresponding rackmount DGX A100 of the same generation. The price for the DGX Station A100 320G is $149,000, and $99,000 for the 160G model; Nvidia also offers Station rental at roughly $9,000 USD per month through partners in the US (rentacomputer.com) and Europe (iRent IT Systems) to help reduce the costs of implementing these systems at a small scale. The DGX Station A100 comes with two different configurations of the built-in A100 (the naming of the two variants is illustrated after this list).
* Four Ampere-based A100 accelerators, configured with 40 GB (HBM2) or 80 GB (HBM2e) of memory each, giving a total of 160 GB or 320 GB and resulting in the DGX Station A100 160G or 320G variants, respectively
* 2.5 PFLOPS FP16
* Single 64-core AMD EPYC 7742
* 512 GB DDR4
* 1 x 1.92 TB NVMe OS drive
* 1 x 7.68 TB U.2 NVMe drive
* Dual-port 10 Gb Ethernet
* Single-port 1 Gb BMC port
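A minimal sketch, using only the figures in the list above, of how the 160G and 320G model names follow from the per-GPU memory configuration:

# Illustration of the DGX Station A100 variant naming from per-GPU memory.
NUM_GPUS = 4
for per_gpu_gb, memory_type in [(40, "HBM2"), (80, "HBM2e")]:
    total_gb = NUM_GPUS * per_gpu_gb
    print(f"4 x A100 {per_gpu_gb} GB ({memory_type}) -> DGX Station A100 {total_gb}G")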


Hopper


DGX H100 Server

Announced March 22, 2022 and planned for release in Q3 2022, the DGX H100 is the fourth generation of DGX server, built with 8 Hopper-based H100 accelerators for a total of 32 PFLOPS of FP8 AI compute and 640 GB of HBM3 memory, an upgrade over the DGX A100's HBM2 memory. This upgrade also increases VRAM bandwidth to 3 TB/s. The DGX H100 increases the rackmount size to 8U to accommodate the 700 W TDP of each H100 SXM card. The DGX H100 also has two 1.92 TB SSDs for operating system storage and 30.72 TB of solid-state storage for application data. Other notable additions are two Nvidia BlueField-3 DPUs and the upgrade to 400 Gb/s InfiniBand via Mellanox ConnectX-7 NICs, double the bandwidth of the DGX A100. The DGX H100 uses new 'Cedar Fever' cards, each with four ConnectX-7 400 Gb/s controllers, with two cards per system. This gives the DGX H100 3.2 Tb/s of fabric bandwidth across InfiniBand. The DGX H100 has two currently unspecified 4th-generation Xeon Scalable CPUs (codenamed Sapphire Rapids) and 2 terabytes of system memory. No pricing is currently available for the DGX H100.
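A minimal sketch of the fabric-bandwidth arithmetic above, comparing the DGX A100 and DGX H100 NIC configurations described in this article (the helper function is illustrative only):

# Aggregate InfiniBand fabric bandwidth implied by NIC count x link speed.
def fabric_tbps(num_links, gbps_per_link):
    """Total fabric bandwidth in Tb/s."""
    return num_links * gbps_per_link / 1000

print("DGX A100:", fabric_tbps(8, 200), "Tb/s")  # eight 200 Gb/s HDR ConnectX-6 NICs -> 1.6 Tb/s
print("DGX H100:", fabric_tbps(8, 400), "Tb/s")  # two Cedar Fever cards x four 400 Gb/s ConnectX-7 -> 3.2 Tb/s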


DGX SuperPod

The DGX SuperPod is a high-performance turnkey supercomputer solution provided by Nvidia using DGX hardware. This tightly integrated system combines high-performance DGX compute nodes with fast storage and high-bandwidth networking
to provide a unique plug-and-play solution for extremely demanding machine learning workloads. The Selene supercomputer, built and operated by Nvidia, is one example of a DGX SuperPod-based system. Selene, built from 280 DGX A100 nodes, ranked 5th on the TOP500
list of the most powerful supercomputers at the time of its completion and has continued to rank highly. This same integration is available to any customer with minimal effort on their behalf, and the new Hopper-based SuperPod can scale to 32 DGX H100 nodes, for a total of 256 H100 GPUs and 64 x86 CPUs. This gives the complete SuperPod 20 TB of HBM3 memory, 70.4 TB/s of bisection bandwidth, and up to 1 exaFLOP of FP8 AI compute. These SuperPods can then be further joined to create even larger supercomputers. The upcoming Eos supercomputer, designed, built, and operated by Nvidia, will be constructed of 18 H100-based SuperPods, totaling 576 DGX H100 systems, 500 Quantum-2 InfiniBand switches, and 360 NVLink switches; this will allow Eos to deliver 18 EFLOPS of FP8 compute and 9 EFLOPS of FP16 compute, making Eos the fastest AI supercomputer in the world. As Nvidia does not produce any storage devices or systems, Nvidia SuperPods rely on partners to provide high-performance storage. Current storage partners for Nvidia SuperPods are Dell EMC, DDN, HPE, IBM, NetApp, Pavilion Data, and VAST Data.
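A minimal sketch of how those SuperPod and Eos figures scale from the per-node DGX H100 numbers given earlier (8 GPUs, 640 GB of HBM3, and 32 PFLOPS of FP8 per node); the rounding to 20 TB and 18 EFLOPS matches the figures quoted above:

# Scale the per-node DGX H100 figures up to a SuperPod and to Eos.
NODE_GPUS = 8
NODE_HBM3_GB = 640
NODE_FP8_PFLOPS = 32

superpod_nodes = 32
print("SuperPod H100 GPUs:", superpod_nodes * NODE_GPUS)                   # 256
print("SuperPod HBM3:", superpod_nodes * NODE_HBM3_GB / 1000, "TB")        # ~20.5 TB
print("SuperPod FP8:", superpod_nodes * NODE_FP8_PFLOPS / 1000, "EFLOPS")  # ~1 EFLOP

eos_nodes = 18 * superpod_nodes
print("Eos DGX H100 systems:", eos_nodes)                                  # 576
print("Eos FP8:", eos_nodes * NODE_FP8_PFLOPS / 1000, "EFLOPS")            # ~18.4 EFLOPS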




See also

* Deep Learning Super Sampling
* Nvidia Tesla
* Supercomputer

