May 29, 2020, 08:08
#281
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
Quote:
Nice, thank you. It would also be interesting to have a column for memory speed and rank, but I guess that is out of the question right now, seeing that the work to do so is quite massive. What interests me most when looking at this are the extremely impressive results of the Ryzen 3900X. It manages the same results as the Threadripper 1950X (and similar 4-channel setups) while using only half the memory channels. It is very clear that 3rd-generation Ryzen benefits immensely from tight memory timings.
June 8, 2020, 11:41
#282
New Member
Join Date: Apr 2020
Posts: 2
Rep Power: 0
HPE DL385 Gen10 Plus, 2x EPYC 7542 (32 cores each), 16x 32GB 3200 MHz memory
Nothing optimized - just set the BIOS to HPC mode and installed CentOS 8 with OpenFOAM 5.0. Code:
# cores   Wall time (s)
------------------------
  1       600.7
  2       348.89
  4       152.27
  6       101.78
  8        76.09
 12        53.49
 16        40.26
 20        36.63
 24        31.95
 28        29.98
 32        27
 36        26.45
 40        25.85
 44        25.09
 48        23.27
 52        23.47
 56        23.46
 60        22.5
 64        21.96
June 9, 2020, 04:16 |
AMD Epyc 7542 256gb Ram
#283
Member
Giovanni Medici
Join Date: Mar 2014
Posts: 48
Rep Power: 12
Hi there,
we ran the benchmark on a similar setup as @pred and found pretty similar results.
Ubuntu server. Code:
# cores   Wall time (s)
------------------------
  1       724.16
  2       346.29
  4       165.72
  6       107.43
  8        82.14
 12        55.02
 16        41.32
 20        37.03
 24        33.5
 32        26.79
 48        22.99
 64        21.5
June 9, 2020, 15:06 |
Intel Xeon Gold 6140
#284
Member
Federico Zabaleta
Join Date: May 2016
Posts: 47
Rep Power: 10
2x Intel Xeon Gold 6140
96 GB (12x 8GB) 2666 MHz. Code:
# cores   Wall time (s)
------------------------
  1       981.72
  2       488.92
  4       217.97
  6       146.36
  8       113.38
 12        85.34
 16        68.23
 20        60.27
 24        55.94
 28        52.5
 32        50.76
 36        49.87

Last edited by fedez91; June 9, 2020 at 16:35.
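To put numbers on how the scaling tails off, here is a quick sketch (not from the original post; plain Python, with wall times copied from the table above) computing speedup and parallel efficiency:

```python
# Speedup and parallel efficiency from the wall times posted above
# for the 2x Xeon Gold 6140 run (a subset of the table).
times = {1: 981.72, 2: 488.92, 4: 217.97, 8: 113.38,
         16: 68.23, 24: 55.94, 36: 49.87}

t_serial = times[1]
for cores, t in times.items():
    speedup = t_serial / t
    efficiency = speedup / cores
    print(f"{cores:3d} cores: {speedup:5.2f}x speedup, {efficiency:6.1%} efficiency")
```

The speedup saturates just under 20x, so at 36 cores the parallel efficiency is only about 55% — the memory-bandwidth bottleneck discussed in the replies that follow.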
June 9, 2020, 17:17
#285
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,428
Rep Power: 49
Quote:
June 9, 2020, 19:05
#286
Member
Federico Zabaleta
Join Date: May 2016
Posts: 47
Rep Power: 10
Is there any way of improving this, or does it just mean that I got 12 extra cores that are basically useless? Or would this change with bigger meshes? I apologise if my questions are naive, but I do not really understand the details of how CPUs work.
Thank you for your help!

Last edited by fedez91; June 9, 2020 at 20:20.
June 9, 2020, 21:20
#287
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,428
Rep Power: 49
There is not much you can do about it once you have purchased the hardware. Higher cell counts won't improve this behavior. It is a function of code balance, which does not change drastically with cell count.

With Intel Xeon CPUs, you can try to enable "cluster on die" mode in the BIOS. It might also be called "sub-NUMA cluster" with this generation. That should improve latency and bandwidth a bit for NUMA-aware software like OpenFOAM. Don't expect huge improvements though: https://en.wikichip.org/wiki/intel/m...UMA_Clustering

It might also be a good idea to check whether the DIMMs are populated correctly, so that each memory channel of each CPU has one DIMM. You should be able to look up the correct way in your server/workstation/motherboard manual and then compare it to what you got.

At least the additional cores are not entirely wasted; there is still a small speedup. And contrary to commercial CFD solvers, you don't have to buy additional licenses to use them.
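Why one DIMM per channel matters can be seen from a back-of-the-envelope peak bandwidth estimate. A sketch (the 6-channel count and DDR4-2666 speed are the published Skylake-SP specs for this CPU, not values measured in this thread):

```python
# Rough peak memory bandwidth with every channel of both CPUs populated.
# Skylake-SP (e.g. Xeon Gold 6140): 6 DDR4-2666 channels per socket.
channels_per_socket = 6
sockets = 2
transfers_per_sec = 2666e6   # DDR4-2666: 2666 million transfers/s
bytes_per_transfer = 8       # 64-bit memory bus

peak_bw = channels_per_socket * sockets * transfers_per_sec * bytes_per_transfer
print(f"Peak bandwidth: {peak_bw / 1e9:.0f} GB/s")
```

Every empty channel removes a proportional slice of that number, which a bandwidth-bound solver like OpenFOAM feels directly.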
June 10, 2020, 11:31
help with workstation configuration
#288
New Member
sida
Join Date: Dec 2019
Posts: 6
Rep Power: 6
I feel lucky to have read this thread before making my decision.
With a $2500 budget (which I was previously going to spend on a Threadripper 3970X and its cooling system), which alternative CPUs do you suggest? I can't decide between one AMD EPYC 7452 and 2x EPYC 7302. Thanks in advance.

Last edited by sida; June 10, 2020 at 17:05.
June 10, 2020, 13:36
#289
Member
Federico Zabaleta
Join Date: May 2016
Posts: 47
Rep Power: 10
Thanks flotus1. I will enable 'cluster on die' and re-run the test. I will post the results if I see improvements. Thanks again!
June 15, 2020, 01:25
#290
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Quote:
June 19, 2020, 09:34
#291
Senior Member
Niels Nielsen
Join Date: Mar 2009
Location: NJ - Denmark
Posts: 556
Rep Power: 27
Tested on 2x EPYC Rome 7302, 256 GB RAM (32x 8GB @ 2933).
No core binding or other trickery. With AOCC 2.1/GCC 9.2.0 and OpenFOAM 19.12. Code:
# cores   Wall time (s)
          AOCC 2.1.0      GCC 9.2.0       Diff %
          -march=znver2   -march=znver2
  1       693.3           692.5            0%
  2       470.3           470.88           0%
  4       167.2           164.52          -2%
  8        78.5            77.16          -2%
 12        59.5            60.26           1%
 16        42.3            41.79          -1%
 20        41.2            41.07           0%
 24        33.3            33.59           1%
 28        34.0            32.36          -5%
 32        28.2            27.95          -1%
__________________
Linnemann
PS. I do not do personal support, so please post in the forums.

Last edited by linnemann; June 24, 2020 at 06:58.
June 26, 2020, 18:29
#292
New Member
FW
Join Date: Mar 2018
Posts: 8
Rep Power: 8
Results on a Ryzen 7 3700X with overclocked memory (DDR4-3800, CL16).
Code:
# cores   Wall time (s)
  1       857.26
  2       380.49
  4       253.1
  6       219.94
  8       212.16
June 26, 2020, 21:11
#293
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
Quote:
I think you can do much better. Have you tried the Ryzen DRAM Calculator? I only managed 3600 CL16 at a 1:1 Infinity Fabric ratio, but with the secondary and tertiary timings set properly I got around 170 s in this test on my Ryzen 3700X system.
June 27, 2020, 00:59
#294
New Member
FW
Join Date: Mar 2018
Posts: 8
Rep Power: 8
Okay, that's a huge difference. I had chosen the values from a thread in the Computerbase forum. The Infinity Fabric is at 1:1 for me as well. I will check today.
July 11, 2020, 10:03
#296
New Member
Francisco
Join Date: Sep 2018
Location: Portugal
Posts: 27
Rep Power: 8
For anyone considering an older build, I ran this benchmark with 2x E5645. Mind you, it is OpenFOAM 4.1 on Debian Jessie (kernel 3.16):
Code:
# cores   Wall time (s)
------------------------
  1       1546.8
  2        859.93
  4        414.82
  8        309.71
 10        297.45
 12        295.6

My guess is that the tri-channel memory might be holding it back from better 10-12-thread scaling.

Last edited by ships26; July 19, 2020 at 19:19. Reason: There are two E5645s, not one.
July 11, 2020, 10:06
#297
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
Quote:
If you have sudo rights, you might be able to find out using: Code:
dmidecode -t 17
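If the command runs, the `Speed:` and `Rank:` fields of each Memory Device block are the interesting ones. A small Python sketch of extracting them; the sample text is a hypothetical excerpt of `dmidecode -t 17` output, included only to illustrate the parsing:

```python
import re

# Hypothetical excerpt of `dmidecode -t 17` output (illustration only).
sample = """\
Memory Device
\tSize: 8192 MB
\tSpeed: 2666 MT/s
\tRank: 1
Memory Device
\tSize: 8192 MB
\tSpeed: 2666 MT/s
\tRank: 1
"""

speeds = re.findall(r"^\s*Speed:\s*(\d+)\s*MT/s", sample, re.MULTILINE)
ranks = re.findall(r"^\s*Rank:\s*(\d+)", sample, re.MULTILINE)
print("Speeds:", speeds, "Ranks:", ranks)
```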
July 11, 2020, 10:11
#298
New Member
Francisco
Join Date: Sep 2018
Location: Portugal
Posts: 27
Rep Power: 8
Unfortunately, I don't.
I tried it without sudo, and it only gives me the info that there's 50GB, nothing about frequencies. I can try to find a way around it, though.

Last edited by ships26; July 11, 2020 at 10:13. Reason: grammar
July 14, 2020, 16:44
Doing as well as you can, ships26
#299
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Quote:
My results with the faster X5670-class processors: 2x X5675, 3.07 GHz, 6 cores per CPU.

Meshing times:
# cores   Wall time (s)
  1       1998.08
  2       1313.22
  4        719.71
  6        558.17
  8        466.22
 12        449.43

Flow calculation:
# cores   Wall time (s)
  1       1322.84
  2        787.4
  4        375.77
  6        305.44
  8        286.3
 12        278.02

Looks to me like you are doing about as well as you can with that setup.
July 15, 2020, 19:25
#300
New Member
Francisco
Join Date: Sep 2018
Location: Portugal
Posts: 27
Rep Power: 8 |
Thank you for sharing your results, wkernkamp! It's always great to have a point of comparison.
Did you stick to a single CPU with hyperthreading when running the benchmark with 12 threads?