Nvidia reveals why it chose rival AMD over Intel for its deep learning system


PCIe 4.0 support had a lot to do with it

In context: Like Sony vs. Microsoft, Intel vs. Qualcomm, and Apple vs. everyone, Nvidia vs. AMD is one of the tech industry’s big rivalries. So, it came as a surprise when team green chose its main competitor to provide the server processors for its new DGX A100 deep learning system, rather than using Intel’s Xeon platform. Now, the company has revealed the reason behind its decision.

In Nvidia’s first two DGX systems, Intel’s Xeon CPUs were the preferred processor, but the company dropped them in the DGX A100 in favor of two of AMD’s 64-core, Zen 2-based Epyc 7742 CPUs. The system, which uses the new, Ampere-based A100 GPUs, boasts 5 petaflops of AI compute performance and 320 GB of GPU memory with 12.4 TB per second of bandwidth.
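Those headline memory figures follow directly from the per-GPU specs. A rough sketch, assuming the publicly listed configuration of eight A100 GPUs with 40 GB of HBM2 and roughly 1.555 TB/s of memory bandwidth each:

```python
# Back-of-envelope arithmetic behind the DGX A100's memory figures
# (assumes 8x A100 GPUs, 40 GB HBM2 and ~1.555 TB/s bandwidth per GPU).
NUM_GPUS = 8
MEM_PER_GPU_GB = 40        # GB of HBM2 per A100
BW_PER_GPU_TBS = 1.555     # TB/s of memory bandwidth per A100

total_mem_gb = NUM_GPUS * MEM_PER_GPU_GB   # 320 GB total GPU memory
total_bw_tbs = NUM_GPUS * BW_PER_GPU_TBS   # ~12.44 TB/s aggregate bandwidth

print(total_mem_gb)             # 320
print(round(total_bw_tbs, 1))   # 12.4
```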

Speaking to CRN, Nvidia’s Vice President and General Manager of DGX Systems, Charlie Boyle, said the decision came down to the extra features and performance offered by the Epyc processors. “To keep the GPUs in our system supplied with data, we needed a fast CPU with as many cores and PCI lanes as possible. The AMD CPUs we use have 64 cores each, lots of PCI lanes, and support PCIe Gen4,” he explained.


In addition to having eight more cores than the Xeon Platinum 9282, the Epyc 7742 also supports eight-channel memory, whereas Intel’s Xeon Scalable processors support just six memory channels. AMD’s offering is also far cheaper ($6,950 versus around $25,000 for the Intel chip) and has more cache and a lower TDP.

PCIe 4.0 support was one of the major factors in choosing Epyc, as Intel’s processors still only support PCIe 3.0. AMD’s CPUs offer 128 PCIe 4.0 lanes and a peak PCIe bandwidth of 512 GB/s. “The DGX A100 is the first accelerated system to be all PCIe Gen4, which doubles the bandwidth from PCIe Gen3. All of our IO in the system is Gen4: GPUs, Mellanox CX6 NICs, AMD CPUs, and the NVMe drives we use to stream AI data,” Boyle said.
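The "128 lanes, 512 GB/s" figure is straightforward to check. A minimal sketch, using the nominal PCIe 4.0 rate of 16 GT/s per lane with 128b/130b encoding (real-world throughput is lower due to protocol overhead):

```python
# Back-of-envelope check of the peak PCIe 4.0 bandwidth figure.
GT_PER_S = 16.0        # PCIe 4.0 raw rate per lane, gigatransfers/s
ENCODING = 128 / 130   # 128b/130b line encoding overhead
LANES = 128            # lanes per Epyc 7742 CPU

gbs_per_lane = GT_PER_S * ENCODING / 8   # ~1.97 GB/s per lane, each direction
one_way = LANES * gbs_per_lane           # ~252 GB/s in one direction
both_ways = 2 * one_way                  # ~504 GB/s bidirectional

print(round(both_ways))   # 504 -- i.e. ~512 GB/s at the nominal 2 GB/s/lane
```

The quoted 512 GB/s peak uses the rounded 2 GB/s-per-lane figure counted in both directions; with the encoding overhead included, the number lands just over 500 GB/s.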

AMD, of course, has the advantage of the 7nm manufacturing process, though Intel’s 10nm Ice Lake server CPUs, which are expected to feature PCIe 4.0 support, arrive later this year.


