New 128 mini cluster - Cascade Lake SP or EPYC Rome? |
November 21, 2019, 17:28 |
New 128 mini cluster - Cascade Lake SP or EPYC Rome?
#1 |
Member
Join Date: Jul 2011
Posts: 53
Rep Power: 15 |
I'm moving up from a 30ish core setup to a 120ish core setup.
I've already got 2 Skylake SP nodes (dual Xeon 6146) with a total of 44 cores, and licenses to run on 36 of them. I can now spend ca. €40,000 on expanding my compute setup (I'm in Norway, stuff is expensive here!). In addition I'll be expanding my HPC license so that I can run on up to 132 cores (3 ANSYS HPC Packs). I'm running Windows Server. I have two options:

Option 1 - stick with Intel: 144 cores spread across 6 nodes.
Option 2 - switch to EPYC Rome: 128 cores spread across 4 nodes.

Licensing-wise I'll be able to run on 132 cores. InfiniBand interconnect in both cases. The two options cost approximately the same. If I go with EPYC I'll probably keep one of my existing Skylake nodes to use as a head/storage node.

The Skylake SP CPUs only really scale decently up to 9-10 cores per CPU, so my 6-node Intel setup will probably only scale decently up to 120 cores. The EPYC Rome benchmarks in the OpenFOAM thread are pretty spectacular and indicate good scaling up to 32 cores on a dual-CPU single node, so the EPYC setup would most likely scale well all the way to 128 cores in my 4-node setup. If you take some artistic liberties with the numbers in that thread, I'd say a compute setup based on the EPYC 7302 is approx. 20 % faster than a Skylake SP setup for an equivalent number of cores. From ctd's post here: OpenFOAM benchmarks on various hardware

Code:
2x EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, Ubuntu 18.04.3, OF v7
# cores   Wall time (s):
------------------------
 1        711.73
 2        345.65
 4        164.97
 8         84.15
12         55.9
16         47.45
20         38.14
24         34.21
28         30.51
32         26.89

Code:
2 x Intel Xeon Gold 6136, 12 * 16 GB DDR4 2666MHz, Ubuntu 16.04 LTS
# cores   Wall time (s):
------------------------
 1        874.54
 2        463.34
 4        205.23
 6        137.95
 8        106.04
12         74.63
16         61.09
20         53.26
24         49.17

This compares to times of 47.45 s and 34.21 s for 16 and 24 cores on the dual EPYC. Thus the 2 x EPYC 7302 is ca. 16-25 % faster than the 2 x Xeon Gold 6136 for the same number of cores on a single node. I haven't found any numbers indicating the improvement of Cascade Lake over Skylake Xeons, so it's hard to say where exactly Cascade Lake stands.

Appreciate your thoughts! What would you guys do with the €40,000?

Last edited by SLC; November 22, 2019 at 13:49. Reason: Mixed up cores/nodes in a couple of places :-)
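For anyone who wants to redo this comparison with their own numbers, here is a minimal sketch (not from the benchmark thread itself, just the wall times quoted above) that computes the relative performance and single-node parallel efficiency at matching core counts:

Code:
# Wall times (s) copied from the OpenFOAM benchmark results quoted above.
epyc_7302 = {1: 711.73, 2: 345.65, 4: 164.97, 8: 84.15, 12: 55.9,
             16: 47.45, 20: 38.14, 24: 34.21, 28: 30.51, 32: 26.89}
xeon_6136 = {1: 874.54, 2: 463.34, 4: 205.23, 6: 137.95, 8: 106.04,
             12: 74.63, 16: 61.09, 20: 53.26, 24: 49.17}

# Relative performance at matching core counts
for cores in sorted(set(epyc_7302) & set(xeon_6136)):
    t_epyc, t_xeon = epyc_7302[cores], xeon_6136[cores]
    speedup = t_xeon / t_epyc            # >1 means the EPYC node is faster
    time_saved = 1.0 - t_epyc / t_xeon   # fraction of wall time saved
    print(f"{cores:2d} cores: EPYC {speedup:.2f}x faster, {time_saved:.0%} less wall time")

# Parallel efficiency within a single node, relative to the 1-core run
for name, times in (("EPYC 7302", epyc_7302), ("Xeon 6136", xeon_6136)):
    for cores, t in sorted(times.items()):
        print(f"{name}: {cores:2d} cores -> efficiency {times[1] / (cores * t):.0%}")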
November 22, 2019, 10:12 |
#2 |
Senior Member
Micael
Join Date: Mar 2009
Location: Canada
Posts: 157
Rep Power: 18 |
The Epyc system should be faster and cheaper. Faster, as you have shown, and cheaper because those EPYC CPUs are much cheaper (I am surprised you found both options to be about the same price).
Also, I think you mixed up the words "node" and "core" in a few places in your post.
November 22, 2019, 13:54 |
#3 | |
Member
Join Date: Jul 2011
Posts: 53
Rep Power: 15 |
Quote:
The CPUs are cheaper, but part of the difference is made up by the few extra sticks of RAM the EPYC systems need. I got an updated price offer today on the systems as described above, and the EPYC machines end up approx. 12.5 % cheaper than the Intel Xeon systems. One thing that could be troubling is EPYC performance on Windows Server (I will be running Windows on bare metal). Linux just isn't an option for me, I would have no idea what I was doing.
November 22, 2019, 15:24 |
#4 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,428
Rep Power: 49 |
The difference is more than just a few more DIMMs for Epyc. You configured 12x8GB vs 16x16GB. A mistake? Intel CPUs will also benefit from 2 ranks per channel.
Epyc 2nd gen should be much easier to run on Windows compared to first gen, at least when configured with only one NUMA domain per CPU. Tread with caution when using the results in our OpenFOAM benchmark thread: there were huge variances between similar setups, just based on who ran them, and there were outliers which turned out to be invalid results. So with only one result for Epyc 2nd gen so far, take it with a grain of salt.
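For some context on why the DIMM counts differ in the first place, and why the memory setup matters so much for CFD: Skylake/Cascade Lake SP has 6 memory channels per socket versus 8 on Epyc Rome. A rough theoretical-bandwidth sketch (the channel counts and the DDR4-2933 option for Cascade Lake are CPU-spec assumptions, not numbers from this thread):

Code:
# Rough theoretical memory bandwidth per socket for the platforms discussed.
# Channel counts are CPU-spec assumptions: Skylake/Cascade Lake SP = 6 channels,
# EPYC Rome = 8 channels. DIMM speeds taken from the benchmark configs quoted above.
def peak_bw_gbs(channels, mt_per_s, bytes_per_transfer=8):
    """Theoretical peak bandwidth in GB/s: channels * transfer rate * bus width."""
    return channels * mt_per_s * bytes_per_transfer / 1000

platforms = {
    "Xeon Gold, 6 ch DDR4-2666 (Skylake SP)":  peak_bw_gbs(6, 2666),
    "Xeon Gold, 6 ch DDR4-2933 (Cascade Lake)": peak_bw_gbs(6, 2933),
    "EPYC Rome, 8 ch DDR4-3200":                peak_bw_gbs(8, 3200),
}
for name, bw in platforms.items():
    print(f"{name}: ~{bw:.0f} GB/s per socket")

# CFD solvers are largely memory-bandwidth bound, which is a big part of why the
# EPYC nodes keep scaling further per node despite similar per-core performance.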
November 23, 2019, 10:43 |
#5 |
Senior Member
Micael
Join Date: Mar 2009
Location: Canada
Posts: 157
Rep Power: 18 |
I would consider building around "Supermicro A+ Server 2124BT-HTR - 2U BigTwin". Might be the most cost effective.
November 23, 2019, 15:45 |
#6 | |
Member
Join Date: Jul 2011
Posts: 53
Rep Power: 15 |
Quote:
If I put in 12 x 16 GB dual-rank sticks in the Intel nodes, they become 23 % more expensive than the EPYC nodes, so it's quite a significant saving. By going for 4 EPYC nodes I've saved enough money to pay for my InfiniBand setup (switch + NICs). Would you personally go for Xeon or EPYC nodes, flotus? I see that the max memory bandwidth is achieved using NPS4, which I guess will present 8 NUMA domains to the Windows OS. Not sure if that's problematic for Fluent/CFX. It's a fair warning to treat the benchmark result with care, but there's not much else out there. There is this benchmark from AMD on Fluent, where they state that a 2 x EPYC Rome 7542 32-core (64 cores total) setup is 62 % faster than a 2 x Xeon Gold 6248 20-core (40 cores total) setup. Of course they gleefully ignore the fact that the EPYC system has 60 % more cores and so "ought" to be about 60 % faster anyway: https://www.amd.com/system/files/doc...SYS-FLUENT.pdf

Last edited by SLC; November 24, 2019 at 03:20.
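A quick back-of-the-envelope normalization of that AMD marketing number by the core-count difference (a sketch using only the figures quoted above, ignoring any scaling effects):

Code:
# Normalize AMD's Fluent claim by core count:
# 2 x EPYC 7542 (64 cores) reported as 62% faster than 2 x Xeon 6248 (40 cores).
epyc_cores, xeon_cores = 64, 40
reported_speedup = 1.62                      # whole-system speedup claimed by AMD

core_ratio = epyc_cores / xeon_cores         # 1.60, i.e. 60% more cores
per_core_speedup = reported_speedup / core_ratio

print(f"Core count ratio: {core_ratio:.2f}x")
print(f"Per-core speedup: {per_core_speedup:.2f}x")
# ~1.01x, i.e. roughly parity per core once the extra cores are accounted for.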
November 24, 2019, 03:19 |
#7 |
Member
Join Date: Jul 2011
Posts: 53
Rep Power: 15 |
November 24, 2019, 12:55 |
#8 | |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,428
Rep Power: 49 |
Quote:
With a Linux operating system it would be much easier to decide: current Xeons can't hold a candle against Epyc 2nd gen for CFD. Due to the lack of benchmarks on Windows, I can't make a clear recommendation, other than switching to Linux of course. For maximum performance in NUMA-aware applications like Fluent, Epyc Rome CPUs need to be configured in NPS4 mode, which will result in 4 NUMA nodes per CPU presented to the OS.
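If you want to double-check what Windows actually exposes after changing the NPS setting in the BIOS, here is a minimal sketch using the Win32 API via Python's ctypes (the expected node counts in the comment are assumptions based on the NPS discussion, not measurements):

Code:
# Ask Windows how many NUMA nodes it currently exposes.
# Run after changing the NPS (NUMA nodes per socket) setting in the BIOS.
import ctypes

highest_node = ctypes.c_ulong(0)
# GetNumaHighestNodeNumber fills in the highest NUMA node number (zero-based).
if not ctypes.windll.kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest_node)):
    raise ctypes.WinError()

print(f"Windows sees {highest_node.value + 1} NUMA node(s)")
# Assumption: a dual-socket Rome box should report 2 with NPS1 and 8 with NPS4.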
December 16, 2019, 17:25 |
#9 |
Member
Join Date: Jul 2011
Posts: 53
Rep Power: 15 |
Little update.
I am waiting to receive one of each of the following machines for testing and benchmarking purposes:

Dell PowerEdge R640
Dell PowerEdge R6525

I'll post Fluent benchmark results (on Windows Server 2019) as soon as I can. It will probably be in about a month's time, as the lead times on the machines from Dell are several weeks at this point.