August 20, 2022, 07:31 |
#561
Member
Quote:
But talking about the improvement from Epyc Rome to Milan, I think 15% (16/14; actually I could only get it down to around 18 seconds) is already pretty decent... I was assuming that only single-thread performance could achieve such an improvement.
August 20, 2022, 09:56 |
#562
New Member
Join Date: Mar 2022
Posts: 7
Rep Power: 4
Thanks to all of you for the benchmarks!
I can build my own configuration at the moment and am wondering which processor/configuration would be best (budget 15-20 k€). Right now I am considering 2x AMD Milan 7643 (48c). Does anyone have experience with this processor? Does the amount of cache have a big influence on performance? https://www.phoronix.com/review/epyc-7003-linux-perf/8
In that benchmark with OpenFOAM 8, the EPYC 7713 (64c) and the 75F3 (32c) are really close. Also, many benchmarks showed rather small improvements above 32 cores. So is it worth investing in 2x 48c?
August 20, 2022, 10:38 |
#563
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
Yes, both the amount of cash and the amount of cache have an impact on performance.
As you already noticed, there is not much to be gained beyond 32 cores per CPU. If you want further improvements, Milan-X CPUs are the way to go because of the larger L3 cache. The 32-core 7573X is currently the best processor for CFD. We don't have any benchmarks here, but you can expect somewhere around 20-30% higher performance compared to the 7543. Whether that is worth the 6000€ price tag is up to you.
August 20, 2022, 11:40 |
#564
New Member
Join Date: Mar 2022
Posts: 7
Rep Power: 4
Thank you, that helps a lot!
In theory: if the larger cache helps with scalability, could the Milan 7773X with 64 cores and 768 MB of cache then be expected to give a further increase in performance? Or is the "performance limitation" at 32 cores due more to the communication bottleneck between the cores?
August 20, 2022, 12:34 |
#565
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
It's less about scaling and more about performance per core.
The added L3 cache gives a performance uplift at all thread counts, so it won't enable better scaling beyond 32 cores. That, and the 9000€ per CPU won't be easy to fit into a 20000€ budget.
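To illustrate with purely made-up numbers (not benchmark data): a uniform per-core uplift divides every wall time by the same factor, so the speedup curve T1/Tn stays exactly the same.
Code:
# Illustration only: hypothetical wall times (seconds), not measurements.
baseline = {1: 1000.0, 8: 140.0, 16: 80.0, 32: 55.0, 64: 50.0}
uplift = 1.25  # assume a uniform 25% per-core uplift from the extra cache

for n, t in baseline.items():
    t_x = t / uplift                           # hypothetical Milan-X-style time, same core count
    speedup = baseline[1] / t                  # scaling of the baseline
    speedup_x = (baseline[1] / uplift) / t_x   # scaling with the uplift: identical by construction
    print(f"{n:2d} cores: {t:7.1f} s -> {t_x:7.1f} s   speedup {speedup:5.2f} vs {speedup_x:5.2f}")
The absolute times drop everywhere, but the point where adding cores stops paying off does not move.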
August 22, 2022, 03:06 |
#566
New Member
Join Date: Mar 2022
Posts: 7
Rep Power: 4
Ah, OK, thank you!
Yes, the 9000€ of course does not fit into the budget.
September 2, 2022, 12:41 |
#567
New Member
DS
Join Date: Jan 2022
Posts: 15
Rep Power: 4
Interesting article on WSL2 vs Linux (HPL, HPCG, NAMD) by Dr. Donald Kinghorn:
https://www.pugetsystems.com/labs/hp...PCG-NAMD-2354/
September 6, 2022, 16:00 |
#568
Senior Member
Join Date: Jun 2016
Posts: 102
Rep Power: 10
14in MacBook Pro base model (M1 Pro, 6P+2E, 16GB RAM), macOS 12.5.1, Apple clang 13.1.6, OpenFOAM-v2206

# cores   Wall time (s)
1         463.91
2         249.77
4         141.38
6         108.45
8         143.51

The efficiency cores don't help at all. It's slightly faster than my M1 Mac mini due to more memory bandwidth (200 GB/s). I really want to see the result on the 16-core Mac Studio with 800 GB/s bandwidth.
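Speedup and parallel efficiency make core-scaling results like this easier to compare across machines. A minimal Python sketch, using the wall times from the table above:
Code:
# Wall times from the M1 Pro run above (cores -> seconds).
times = {1: 463.91, 2: 249.77, 4: 141.38, 6: 108.45, 8: 143.51}

t1 = times[1]
for n, t in times.items():
    speedup = t1 / t
    efficiency = speedup / n
    print(f"{n} cores: {t:7.2f} s   speedup {speedup:4.2f}   efficiency {efficiency:5.1%}")
On these numbers the efficiency falls from roughly 71% on 6 performance cores to about 40% once the two efficiency cores join in.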
September 22, 2022, 12:42 |
#569
New Member
Join Date: Aug 2017
Posts: 3
Rep Power: 9
I just ran the benchmark on my laptop from 2018.
OS: Windows 10 with WSL1 running Ubuntu 22.04.1 LTS
CPU: i5-8265U (4 cores)
RAM: 2 x 8 GB DDR4-2400
OF: openfoam.com precompiled Ubuntu v2206

Code:
Meshing Times:
#Cores  Wallclock time [s]  Speedup
1       1313.91             1.00
2        905.50             1.45
4        804.88             1.63

Flow Calculation:
#Cores  Wallclock time [s]  Speedup
1       1187.68             1.00
2        543.19             2.19
4        442.45             2.68

It puzzled me to see a flow-calculation speedup of 2.19 with two cores, so I ran the test case again, but got a similar result. Does anyone have an idea why? My laptop chokes at 4 cores, which I presume is because of its low memory bandwidth.
September 25, 2022, 06:12 |
#570
New Member
Join Date: Aug 2017
Posts: 3
Rep Power: 9
Quote:
I updated from WSL1 to WSL2, which gave significantly faster single-core performance (18% faster meshing, 62% faster flow calculation), while the 2- and 4-core timings stayed more or less the same. Hence, the flow-calculation speedup is no longer superlinear.
September 28, 2022, 23:38 |
#571
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 371
Rep Power: 14
Upgraded my DL560 Gen8 with E5-4657L v2 processors:
OpenFOAM v2112, 4x E5-4657L v2, 16x *GB R2 DDR3-1866

Meshing Times:
#Cores  Wallclock time [s]
36      169.59
40      166.07
44      167.83
48      182.17

Flow Calculation:
#Cores  Wallclock time [s]
 1      1206.04
 2       599.21
 4       240.81
 6       167.69
 8       124.50
10       107.11
12        89.31
16        70.49
18        66.72
22        57.48
24        53.07
28        48.24
32        45.56
36        43.43
40        41.97
44        41.07
48        40.41

For comparison: the E5-4627 v2 fastest time was 48.62 s on 32 cores.

Last edited by wkernkamp; September 30, 2022 at 02:15.
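For anyone collecting timings like these: the wall time can be read straight from the solver logs. A minimal Python sketch, assuming the usual OpenFOAM log output where each time step prints an "ExecutionTime = ... s  ClockTime = ... s" line; the log file names here are just placeholders:
Code:
import glob
import re

# OpenFOAM prints a line like "ExecutionTime = 123.45 s  ClockTime = 125 s"
# after every time step; the last ClockTime is the total wall time.
CLOCK = re.compile(r"ClockTime\s*=\s*([\d.]+)\s*s")

for path in sorted(glob.glob("log.simpleFoam.*")):  # placeholder log names, one per run
    with open(path) as f:
        values = CLOCK.findall(f.read())
    if values:
        print(f"{path}: {float(values[-1]):.2f} s")
The same pattern should work for the meshing logs as well.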
October 3, 2022, 19:06 |
OpenFOAM benchmark of Ryzen 7000
#572
Member
dab bence
Join Date: Mar 2013
Posts: 48
Rep Power: 13
This website compares the 7950X to the 5950X and the Threadripper Pro 5995WX on the motorBike case:
https://www.pugetsystems.com/labs/hp...2368/#OpenFOAM
October 4, 2022, 01:21 |
#573
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 371
Rep Power: 14
I like seeing that the Threadripper Pro 5995WX (CPU alone ~$7000) completes the benchmark in about the same time as my cheap DL560 G8 (~$700 total).
October 9, 2022, 08:48 |
#574
New Member
Christian Reyner
Join Date: Aug 2016
Posts: 1
Rep Power: 0
Hello,
I've read almost all of the guidelines and comparisons here, which was really enlightening. Thank you for all the effort!
My first preference is a dual Xeon E5-2680/2683 v4. However, I am on a tight budget (building this for my research project) and found several cheaper options at about half the processor price: the E5-2650 v4 and the E5-2640 v4. How big is the performance difference? Is it large, or just around 5-10%? Or should I invest a bit more in an E5-2667 v4? (Based on my understanding, this might be faster than the E5-2680 v4.)
Thank you very much!
October 9, 2022, 14:24 |
#575
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 371
Rep Power: 14
Quote:
The E5-2640 v4 is less desirable for CFD because it is limited to DDR4-2133 and a QPI link of 8 GT/s, while the others go to DDR4-2400 and 9.6 GT/s. This will cause a performance difference of about 12.5% (2400/2133 ≈ 1.125), probably a bit more due to the lower core count and smaller cache. On eBay, I saw the E5-2680 v4 for $62 and the E5-2650 v4 for $14.94. The E5-2650 v4 will probably be in your range of 5-10% slower than the E5-2680 v4.
October 18, 2022, 10:03 |
M1 Pro, precompiled OpenFOAM.app
#576
Member
Sourav Mandal
Join Date: Jul 2019
Posts: 55
Rep Power: 7
Quote:
PHP Code:
Last edited by sourav90; October 18, 2022 at 11:03. Reason: More info
October 18, 2022, 11:31 |
#577
Senior Member
Join Date: Jun 2016
Posts: 102
Rep Power: 10
Quote:
October 22, 2022, 20:25 |
#578
New Member
Prince Edward Island
Join Date: May 2021
Posts: 26
Rep Power: 5
Anyone have any comparisons between Naples and Rome?
October 23, 2022, 11:41 |
#579
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
Epyc Rome: OpenFOAM benchmarks on various hardware
Epyc Naples: OpenFOAM benchmarks on various hardware

Code:
-Solver run time in seconds-
nthreads   Naples   Rome
01          907.7   643.8
64           27.7    16.0
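In ratio terms: 907.7 / 643.8 ≈ 1.41x faster for Rome on a single thread, and 27.7 / 16.0 ≈ 1.73x faster on 64 threads, so the gap widens at high core counts, presumably as more cores start competing for memory bandwidth.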
October 26, 2022, 19:57 |
#580
New Member
Prince Edward Island
Join Date: May 2021
Posts: 26
Rep Power: 5
What would be the performance advantage of going from a 64-core dual Epyc system to a 128-core dual Epyc system?