|
June 22, 2020, 08:34 |
Suggestions on the hardware configuration
|
#1 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Hi all,
I am planning to purchase a new desktop for my lab for numerical simulations using Fluent v16.2 and v18.1 (research license, no restriction on the number of cores) and XFlow v2020x (restricted to 32 cores), and most likely CCM+ in the near future. I will be working on multiphase problems (Euler-Lagrange, Euler-Euler and free-surface flow), conjugate heat transfer, as well as street-canyon problems. Mostly I would prefer LES, with cell counts in the range of 8 to 12 million (or more). Our budget is around $5,000. The time step size may go below 1.0e-5 s in some simulations, with a total simulated (physical) time of 20-50 seconds. What I have learnt from this forum is that the AMD Epyc series is ahead of Intel in performance (due to scalability issues on the Intel side). However, my first priority would be Intel, within the $5K range. Is it possible to get a configuration with decent scalability and overall performance? All options are welcome.
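For a sense of scale, here is a rough back-of-envelope of the step counts these numbers imply (the per-step wall-clock cost in the sketch is purely a placeholder assumption on my part, not a measured value):
Code:
# Rough back-of-envelope for the simulation length described above.
dt = 1.0e-5  # time step size [s]
for t_phys in (20.0, 50.0):  # total physical time to simulate [s]
    n_steps = t_phys / dt
    print(f"{t_phys:5.1f} s of physical time -> {n_steps:,.0f} time steps")

# Wall-clock estimate with an ASSUMED (placeholder) cost of 2 s per step:
sec_per_step = 2.0
days = (50.0 / dt) * sec_per_step / 86400.0
print(f"At {sec_per_step} s/step, 50 s of physical time is ~{days:,.0f} days of wall clock")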
|
June 22, 2020, 09:13 |
|
#2 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
Let's assume those 5000$ all go towards hardware, and not into the pockets of a big OEM. Then the fastest Intel system you could buy would look like this:
SSI-EEB case: 100$
Power supply: 130$
NVMe SSD 2TB: 300$
CPU coolers: 160$
Graphics card: 250$
Motherboard ASUS WS C621E Sage: 550$
CPUs: 2x Xeon Silver 4216: 2100$
RAM: 12x 16GB DDR4-2400 dual-rank: 950$
That's about 4550$, so some budget is left for additional hard drives or whatever else you might need. Stepping up within Intel's portfolio is next to impossible due to budget constraints. The next higher-performing CPU that makes some sense is the Xeon Gold 6226R, which costs over 1500$ and requires faster (= more expensive) memory.
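To put a number on that last point: CFD solvers are usually limited by memory bandwidth rather than raw core count, and a quick sketch of the theoretical peak (channel counts and supported memory speeds taken from the CPU spec sheets) shows why the 6226R only pays off when paired with the faster DIMMs:
Code:
# Theoretical peak memory bandwidth of a dual-socket system.
# DDR4 moves 8 bytes per channel per transfer; sustained bandwidth is lower in practice.
def peak_bw_gbs(sockets, channels_per_socket, mt_per_s):
    return sockets * channels_per_socket * mt_per_s * 8 / 1000.0

print(f"2x Xeon Silver 4216, 12x DDR4-2400: {peak_bw_gbs(2, 6, 2400):.0f} GB/s")
print(f"2x Xeon Gold 6226R,  12x DDR4-2933: {peak_bw_gbs(2, 6, 2933):.0f} GB/s")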
|
June 22, 2020, 14:44 |
|
#3 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Dear Flotus1, thanks very much for your reply and suggestion regarding the system configuration.
The Xeon Gold 6226R seems to be the better choice, and for this I will have to increase my budget by another $1K (to a total of $6K). Could you please recommend the hardware required for a 2x Xeon Gold 6226R build? Just out of curiosity, will that configuration (32 cores in total) perform better, especially in scalability, than 1x AMD EPYC 7F72 (24 cores)? This AMD model seems promising. If so, could you please provide a configuration for that as well? Also, is it possible to use a dual-socket motherboard with only one processor installed and the other socket left empty, so a second EPYC 7F72 can be added in the future?
|
June 22, 2020, 15:14 |
|
#4 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
It would be negligent of me not to ask two questions here:
1) Why Intel? Just playing it safe, or any other reasons?
2) Why the 7F Epyc CPUs? They may outperform their non-F counterparts slightly, but at the cost of much worse price/performance. Even cheaper Epyc CPUs like the 7302 outperform Intel's high-frequency models on a per-core performance metric: Xeon Gold Cascade Lake vs Epyc Rome - CFX & Fluent - Benchmarks (Windows Server 2019)
|
June 23, 2020, 10:28 |
|
#5 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Alex, thanks a lot. This information was really helpful for me.
1. Why Intel? Just playing it safe, or any other reasons?
Actually, I may move to GPU-based simulations in the future. I am not sure whether an AMD configuration would support Nvidia (CUDA cores), and to what extent. Secondly, no one in my circle (including myself) has used AMD before.
2. Why the 7F Epyc CPUs? They may outperform their non-F counterparts slightly, but at the cost of much worse price/performance.
That's correct, Alex, and I agree with you on this. Just a small thought: if one uses 2x Epyc 7302 (32 cores in total) with 128 GB RAM (16x 8 GB) versus 2x Xeon Gold 6226R (32 cores in total) with 96 GB RAM (12x 8 GB), which of the two would be faster in terms of simulation run time? I could not locate any benchmark comparing these two variants (if none exists, what is your view?). Secondly, which of the two configurations would turn out to be more cost-effective? Your views on the above would mean a lot to me in making a final decision.
|
June 23, 2020, 11:35 |
|
#6 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
Your choice of CPU has no impact on CUDA support.
Funny sidenote: just look at what Nvidia did with their DGX systems, which are at the absolute high end of what is currently possible with GPU acceleration. They used AMD Epyc CPUs due to their higher overall PCIe bandwidth.
There are other factors to consider though. If you are building a system yourself with 2 AMD Epyc CPUs, your only motherboard choice is the Supermicro H11DSi, which only has two PCIe x16 slots, both connected to CPU1. So it's not ideal in case you want to use multiple GPUs.
Then again, there are quite a few obstacles to overcome when using GPU acceleration in software like Fluent or CCM+. One of them is extremely expensive GPUs. To be frank: if you are on a budget of 6000$ now, GPU acceleration won't be a viable option. A Quadro GV100 costs around 10000€.
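To illustrate the PCIe argument with rough numbers (lane counts from the platform specs; how many Epyc lanes are actually usable on a dual-socket board depends on how the vendor configures the inter-socket links, so treat these as ballpark figures):
Code:
# Rough aggregate PCIe bandwidth, one direction.
# Per-lane throughput: PCIe 3.0 ~0.985 GB/s, PCIe 4.0 ~1.969 GB/s.
def pcie_bw_gbs(lanes, gbs_per_lane):
    return lanes * gbs_per_lane

print(f"2x Cascade Lake Xeon, 96 lanes PCIe 3.0: {pcie_bw_gbs(96, 0.985):.0f} GB/s")
print(f"2x Epyc Rome, ~128 lanes PCIe 4.0:       {pcie_bw_gbs(128, 1.969):.0f} GB/s")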
|
|
June 24, 2020, 14:11 |
|
#7 |
New Member
Join Date: Dec 2017
Posts: 11
Rep Power: 9 |
Alex, thanks very much for making things clear.
|
|
April 13, 2021, 15:10 |
|
#8 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Hi Alex.
Due to the pandemic, I could not get the required components, viz. the EPYC 7302 and related hardware. To date it is hard to get this stuff in my region; it has been out of stock for a long time even in the well-known shops. However, after much effort, I managed to source the following hardware:
1. 2x EPYC 7402 (48 cores in total)
2. 2x 2U active heatsinks
3. ASRock Rack ROME2D16-2T mainboard
4. 16x 16GB DDR4 ECC REG 3200MHz (256GB total)
5. Tower chassis with 1300W PSU
6. Quadro P2200 graphics card
7. 1TB NVMe M.2 PCIe 4.0 SSD
8. 4x 8TB SATA enterprise 7200RPM drives
The total cost would be around $8,400, which is quite high, but I have decided to go ahead with it. I could have bought a single processor with 128 GB RAM, but I am getting the dual-socket setup for another $2,400, which I can afford, and the extra resources will be helpful in the coming time too. I need your view on this. Do you think any changes are required?
1. Is the graphics card OK?
2. Is liquid cooling available for EPYC? The temperature here can reach around 48 degrees (maximum), and I hope the cooling I have chosen is sufficient.
3. I plan to combine three of the SATA drives into a single 24TB volume and keep one 8TB drive separate. The 24TB space would hold the large files from XFlow; being LBM-based, XFlow saves output at user-defined frequencies and preserves all of it, hence the need for large storage.
4. I hope the motherboard I have chosen is okay. If not, please suggest an alternative.
Your views on the above would mean a lot to me.
|
April 13, 2021, 17:00 |
|
#9 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
Or you could do what I did: a full custom-loop water cooling setup, including the CPU VRMs. The CPUs themselves are relatively easy to cool due to their large surface area. The VRM heatsinks on these boards, however, are designed for the high airflow of server cases; in a quiet workstation case, the CPU VRMs are usually the first component to cause thermal throttling. Full disclosure: water cooling wasn't really necessary in my case, I just got bored during the first lockdown.
[attached photo of the build: IMG_20200405_160041_small.jpg]
That being said, a large air cooler is usually enough for these CPUs. The "2U active heatsinks" will be loud as hell. The options you have strongly depend on the case you pick. 48°C ambient is a real challenge though; air conditioning seems like the easier solution compared to designing a cooling system that can handle it, and it also helps the human in front of the desk.
I dabble in LES with LBM myself. I have a RAID6 of spinning drives for capacity, and a single 7.68TB NVMe SSD for fast storage of current projects. Keep in mind that hard drives cap out at around 200MB/s sequential; that's rather slow for filling 24TB. And if you RAID0 them, all of the data is gone with a single drive failure.
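Just to put that in perspective, taking the 200MB/s figure at face value:
Code:
# Time to fill the planned 24TB volume at typical HDD sequential speeds.
capacity_tb = 24
seq_mb_per_s = 200  # optimistic sustained figure for a single 7200rpm drive

hours_single = capacity_tb * 1e6 / seq_mb_per_s / 3600
print(f"Filling {capacity_tb}TB at {seq_mb_per_s}MB/s: ~{hours_single:.0f} hours")

# RAID0 over 3 drives roughly triples the throughput, but one failed drive
# takes the whole volume with it.
print(f"RAID0 over 3 drives: ~{hours_single / 3:.0f} hours")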
|
|
April 14, 2021, 02:27 |
raidz
|
#10 |
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14 |
You may want to combine all four drives into a ZFS raidz configuration. With an ambient temperature of 48°C, you are exposed to drive failures.
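A rough sketch of what the common pool layouts would give you with the four 8TB drives (ignoring ZFS metadata overhead):
Code:
# Usable capacity and fault tolerance for 4x 8TB drives in common ZFS layouts.
n_drives, size_tb = 4, 8

layouts = {
    "stripe (RAID0-like)": (n_drives * size_tb, 0),
    "raidz1": ((n_drives - 1) * size_tb, 1),
    "raidz2": ((n_drives - 2) * size_tb, 2),
}
for name, (usable_tb, failures_survived) in layouts.items():
    print(f"{name:20s} ~{usable_tb}TB usable, survives {failures_survived} drive failure(s)")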
|
April 14, 2021, 07:08 |
|
#11 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Dear Alex,
Thanks very much for your valuable suggestions; they are very helpful to me.
1. There is a huge scarcity of graphics cards here, so I am left with no other option.
2. As you have mentioned, I will work on the cooling to eliminate the drawbacks. The maximum indoor temperature in my location ranges from 40 to 44 degrees during peak summer, otherwise it is lower.
3. The 7.68TB NVMe SSD you have is great stuff. Unfortunately, I am already over budget, and such a drive is very expensive for me; it can be left for a future upgrade. Instead, I plan to purchase a 2TB NVMe M.2 PCIe 4.0 SSD instead of the 1TB one, along with 2x 8TB HDDs (merged) plus 1x 8TB (single). Fluent files could be stored on the 2TB NVMe SSD and later transferred to the single HDD; small XFlow files could be handled the same way. Bigger XFlow files could be written directly to the merged 2x8 = 16TB storage (I know the write speed would be significantly lower than on the SSD). Alex, is it advisable to go for a single 16TB drive instead of 2x 8TB? Are 16TB 7200rpm drives noisy?
|
April 14, 2021, 08:24 |
|
#13 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
In order to get full capacity with two drives, you can either use JBOD or RAID0. The former usually only loses the data on the drive that fails; the latter loses all data in case of a single drive failure. Compared to a single disk, the risk of data loss is approximately doubled either way. As for noise: compared to the rest of the workstation chugging along at 40°C ambient temperature, you won't hear the hard drives.
I bought the SSD used, for around 800€ if I remember correctly. New SSDs with those specs are way outside of my comfort zone.
A more general remark: modern hardware seems to be very hard to source in your location, and at pretty steep prices. That's usually a good fit for the dual-socket 2011-3 setups you can get rather cheap from Aliexpress. They won't be as fast, but much cheaper. That's one way to "future-proof" your system, and in my opinion one of the best: buying very cheap allows for more frequent upgrades.
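The "approximately doubled" part is just basic probability. Assuming independent failures and an annualized failure rate of 1.5% per drive (an illustrative assumption, not a manufacturer spec):
Code:
# Chance of losing the volume within a year, assuming independent drive failures.
afr = 0.015  # ASSUMED annualized failure rate per drive (illustrative only)

p_single = afr
p_raid0_two = 1 - (1 - afr) ** 2  # either of two drives failing loses everything
print(f"single drive:    {p_single:.2%}")
print(f"RAID0, 2 drives: {p_raid0_two:.2%}  (~2x the single-drive risk)")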
|
April 14, 2021, 13:24 |
|
#14 |
New Member
kukka
Join Date: Sep 2018
Posts: 15
Rep Power: 8 |
Dear Alex,
Thanks for this valuable information |
|
|
|