February 19, 2022, 22:53
clustering works!
#461
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
I found the link in this thread to my R810 cluster of two nodes with 1 Gb Ethernet. It scales perfectly, so you should be fine with 2.5 Gb Ethernet.
February 22, 2022, 00:05
supermicro 4x opteron 6376
#462
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Supermicro H8QG6-F CSE-828, 4x Opteron 6376, 32x8GB DDR3-1600
# cores Wall time (s):
------------------------
1       2173.62
2       1456.88
4       512.48
8       252.12
12      194.95
16      157.38
24      123.78
32      96.17
48      90.1
64      86.31
February 22, 2022, 11:36
#463
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,428
Rep Power: 49
Finally, the elusive quad-Opteron.
Can you share any insight into how this result compares to the quad-Xeon system you posted? I'm a bit surprised that the Opterons are so much faster; or, more precisely, the quad-Xeon seems a bit too slow. Was there anything holding the Dell PowerEdge R810 back?
February 22, 2022, 20:42
r810 has just 2 channels per processor
#464
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
The R810 with four processors has just two memory channels per processor (a really poor design choice). You can replace CPUs 3 and 4 with a special blank that links their memory channels to CPUs 1 and 2, so as a two-processor machine it has four channels per processor.
February 23, 2022, 21:30
#465
Member
Guy
Join Date: Jun 2019
Posts: 44
Rep Power: 7
Great numbers, ErikAdr. Your data points clearly illustrate that memory speed is the deciding factor when running these benchmarks.
Best price/performance?

Intel i5-12600 - C$359
Kingston Fury Beast 2x16 GB 6000 MHz - C$500
Asus TUF Gaming B660M-Plus WiFi - C$235
Total: C$1,095
Execution time: 107 seconds.

Used EPYC 7601 processor - C$525
New Supermicro H11SSi - C$490
8x4GB 2666 MT/s RAM - C$260
Total: C$1,275
Execution time: 65 seconds.

I included only the CPU, RAM and motherboard, as the rest of these systems are pretty much the same.
You could probably build two dual-processor 7601 systems, connect them with 10 GbE, and get a run time of 25 seconds or so. That would be cheaper and faster than four 12600 systems with fast RAM.
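For anyone who wants to try that, a minimal sketch of running the benchmark across two nodes with OpenMPI, assuming passwordless SSH between the machines and identical OpenFOAM installs and case paths on both; the hostnames node1/node2 are placeholders:
Code:
# list both machines and how many ranks each should take
cat > hostfile <<EOF
node1 slots=32
node2 slots=32
EOF
mpirun -np 64 --hostfile hostfile simpleFoam -parallel > log.simpleFoam 2>&1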
February 23, 2022, 21:45
#466
Member
Guy
Join Date: Jun 2019
Posts: 44
Rep Power: 7
107 seconds is no slouch, especially for a "desktop" processor. Personal CFD workstations just became a lot more accessible to people on budgets.
February 23, 2022, 21:55
opteron server cost with 256 GB installed
#467
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
The price for the Supermicro quad Opteron 6376 was $320.
256 GB of DDR3-1600 RDIMMs - about $480 (I had it already, but that is what you pay on eBay).
The system came with a 40 Gb/s InfiniBand card (working, I checked) and dual Gigabit Ethernet.

Total: $800
Execution time: 86 seconds.
February 23, 2022, 22:31
#468
Member
Guy
Join Date: Jun 2019
Posts: 44
Rep Power: 7
Speedup relative to the 12600:
12600: 107/107 = 1x
7601: 107/65 = 1.646x
Opteron: 107/86 = 1.244x

Cost/speed ratio:
12600: C$1,095 / 1x = C$1,095 per 1x
7601: C$1,275 / 1.646x = C$774 per 1x
Opteron: US$800 / 0.8 = C$1,000; C$1,000 / 1.244x = C$803 per 1x

It would be really interesting to see how fast the Intel 12th gen would be with more cores. The other interesting thing is that Zen4 is due out in April and Intel 13th gen later in 2022. DDR5 memory cost will fall like a rock at some point, and the performance per dollar of these desktop systems will only improve.
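For anyone who wants to plug in their own numbers, a quick shell sketch of the same arithmetic; the 0.8 is the USD-to-CAD conversion factor assumed above:
Code:
#!/bin/bash
# speedup = baseline time / system time; cost per 1x = cost / speedup
baseline=107   # i5-12600 wall time in seconds
for entry in "12600 107 1095" "7601 65 1275" "Opteron 86 1000"; do
    set -- $entry   # $1 = name, $2 = wall time (s), $3 = cost (C$)
    speedup=$(echo "scale=3; $baseline / $2" | bc)
    cost=$(echo "$3 / $speedup" | bc)
    echo "$1: ${speedup}x, C\$${cost} per 1x"
done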
February 24, 2022, 00:17
#469
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Cost of memory is the issue with the newer systems versus the old servers.
February 24, 2022, 16:15
#470
Member
Erik Andresen
Join Date: Feb 2016
Location: Denmark
Posts: 35
Rep Power: 10
Quote:
Originally Posted by linuxguy123
Your processing time was still dropping with increases in cores used. I'm guessing the run time would drop further with processors with a higher core count, i.e. the 12700, 12800 or 12900.
The i5-12600 has 6 performance cores, whereas all the i7 Alder Lake CPUs have 8 performance cores plus a number of efficiency cores. I don't think efficiency cores are useful for HPC. How does MPI work in a heterogeneous environment? I think it is possible to switch off the efficiency cores on motherboards with a Z690 chipset, but I don't know if that is the case with a B660. I took the safe solution and bought a CPU without efficiency cores.
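In case someone wants to try a hybrid CPU anyway, here is a sketch of pinning the MPI ranks to the performance cores only, assuming OpenMPI and assuming the P-cores are enumerated as the first logical CPUs; verify the numbering with lscpu on your own machine before trusting the 0-11 range:
Code:
# map logical CPUs to physical cores first
lscpu --extended
# then restrict the ranks to the P-cores, e.g. logical CPUs 0-11 (P-cores with HT)
mpirun -np 6 --cpu-set 0-11 --bind-to core simpleFoam -parallel > log.simpleFoam 2>&1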
February 24, 2022, 16:56
#471
Member
Guy
Join Date: Jun 2019
Posts: 44
Rep Power: 7
I can buy 2x16GB sticks of fast DDR5 for about the same price as 8x8GB sticks of DDR4 3200 for an EPYC setup.

Don't get me wrong, I love my EPYC 7601 system. But it won't be long until desktop systems surpass it. The first architecture to use 4 channels of fast DDR5 memory with 12 or so fast cores will make the 7001 Naples EPYC processors obsolete. Until then, I'll continue to use mine.

As the core count on desktop processors climbs, the memory subsystem needs to be faster and faster to feed the cores. Zen4 is slated to remain at 16 cores, probably because that is the limit that 2 memory channels can feed. Rumor has it that Zen5 will have more cores, 24 or maybe 32. I'm guessing they'll use a 4-channel memory system on Zen5 to feed the cores.
February 24, 2022, 17:00
#472
Member
Guy
Join Date: Jun 2019
Posts: 44
Rep Power: 7
I suspect that the Linux scheduler will get enhancements to deal with performance core and efficiency core scheduling in the near future.
February 25, 2022, 07:13
#473
New Member
Florian
Join Date: May 2021
Posts: 8
Rep Power: 5
ErikAdr's system uses 2 single-rank DDR5 sticks, if I understand correctly. As far as I know, for DDR4 you needed to have 2 dual-rank or 4 single-rank sticks to fully exploit both memory channels of a 2-channel CPU. Does DDR5 RAM work the same way?
With this in mind, the 8 performance cores of an i7-12700 combined with overclocked RAM make Alder Lake seem like a really good workstation option.
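For what it's worth, here is one way to check the rank and channel population of an existing machine under Linux; a sketch that needs root, and the exact field names vary between BIOS vendors:
Code:
# dmidecode prints one block per DIMM slot; 'Locator' shows the channel/slot
# and 'Rank' the number of ranks on the stick
sudo dmidecode -t memory | grep -Ei 'locator|rank|speed'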
March 1, 2022, 05:40
Benchmark does not run anymore.
#475
New Member
Roland Siemons
Join Date: Mar 2021
Posts: 13
Rep Power: 5
Dear Forum,

I ran the benchmark a year ago and optimized my machine. Today I made some repairs to the memory and tried to re-test its performance, but it appears that the benchmark does not run anymore. Since the successful tests, some changes have occurred in the Linux Mint OS; perhaps Python was updated, etc.

Do you know if I should make edits to the benchmark code, or perhaps make other adaptations? Here is my terminal report, showing various error messages:
Code:
@:~/Software/CFD/Benchmarking/bench_template_PostRepair$ bash run24.sh
Prepare case run_6...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_6: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_6: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_6: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_6: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,034s
user    0m0,005s
sys     0m0,020s
Prepare case run_10...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_10: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_10: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_10: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_10: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,024s
user    0m0,013s
sys     0m0,009s
Prepare case run_14...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_14: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_14: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_14: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_14: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,017s
user    0m0,011s
sys     0m0,010s
Prepare case run_18...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_18: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_18: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_18: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_18: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,017s
user    0m0,009s
sys     0m0,012s
Prepare case run_22...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_22: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_22: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_22: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_22: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,018s
user    0m0,008s
sys     0m0,014s
Prepare case run_24...
surfaceFeatureExtract already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_24: remove log file 'log.surfaceFeatureExtract' to re-run
blockMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_24: remove log file 'log.blockMesh' to re-run
decomposePar already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_24: remove log file 'log.decomposePar' to re-run
Error getting 'numberOfSubdomains' from 'system/decomposeParDict'
snappyHexMesh already run on /home/roland/Software/CFD/Benchmarking/bench_template_PostRepair/run_24: remove log file 'log.snappyHexMesh' to re-run
ls: cannot access 'processor*': No such file or directory
ls: cannot access 'processor*': No such file or directory

real    0m0,017s
user    0m0,014s
sys     0m0,006s
Run for 6...
Run for 10...
Run for 14...
Run for 18...
Run for 22...
Run for 24...
# cores Wall time (s):
------------------------
6
10
14
18
22
24
@:~/Software/CFD/Benchmarking/bench_template_PostRepair$
Code:
#!/bin/bash

# Prepare cases
for i in 6 10 14 18 22 24; do
    d=run_$i
    echo "Prepare case ${d}..."
    cp -r basecase $d
    cd $d
    if [ $i -eq 1 ]
    then
        mv Allmesh_serial Allmesh
    fi
    sed -i "s/method.*/method scotch;/" system/decomposeParDict
    sed -i "s/numberOfSubdomains.*/numberOfSubdomains ${i};/" system/decomposeParDict
    time ./Allmesh
    cd ..
done

# Run cases
for i in 6 10 14 18 22 24; do
    echo "Run for ${i}..."
    cd run_$i
    if [ $i -eq 1 ]
    then
        simpleFoam > log.simpleFoam 2>&1
    else
        mpiexec -np ${i} simpleFoam -parallel > log.simpleFoam 2>&1
        # mpiexec -np 48 simpleFoam -parallel
    fi
    cd ..
done

# Extract times
echo "# cores Wall time (s):"
echo "------------------------"
for i in 6 10 14 18 22 24; do
    echo $i `grep Execution run_${i}/log.simpleFoam | tail -n 1 | cut -d " " -f 3`
done
Greetz!
Roland
March 1, 2022, 06:01
#476
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
@RolandS
Could you post the content of decomposeParDict? Also, please check the OpenFOAM version that you are running.
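A quick way to check the version, assuming the usual OpenFOAM environment is sourced (WM_PROJECT_VERSION is set by the OpenFOAM bashrc):
Code:
echo $WM_PROJECT_VERSION
# the banner at the top of any existing solver/utility log also shows the build
head -n 15 run_6/log.blockMesh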
March 1, 2022, 07:30
#477
New Member
Roland Siemons
Join Date: Mar 2021
Posts: 13
Rep Power: 5
Hi Simbelmynė,

This is OpenFOAM-v2012 (patch=210618).

decomposeParDict:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  4.x                                   |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains  6;

method          hierarchical;
// method          ptscotch;

simpleCoeffs
{
    n           (4 1 1);
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (3 2 1);
    delta       0.001;
    order       xyz;
}

manualCoeffs
{
    dataFile    "cellDecomposition";
}

// ************************************************************************* //
March 1, 2022, 07:53
#478
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
Is that from the basecase, or is it from one of the run_* folders?

Perhaps you also need to clear the case (remove all run_* folders).
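For a clean start, something like this from the benchmark directory should work; note that with old run_* folders still present, 'cp -r basecase run_N' copies basecase inside the existing folder instead of replacing it, and the stale log.* files make Allmesh skip every step, which would explain all the 'already run' messages:
Code:
# remove the stale cases, then rerun from a fresh copy of basecase
rm -rf run_*
bash run24.sh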
March 1, 2022, 08:12
#479
New Member
Roland Siemons
Join Date: Mar 2021
Posts: 13
Rep Power: 5
Indeed, it was from a run_* folder. Here is the one from basecase:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  4.x                                   |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains  6;

method          hierarchical;
// method          ptscotch;

simpleCoeffs
{
    n           (4 1 1);
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (3 2 1);
    delta       0.001;
    order       xyz;
}

manualCoeffs
{
    dataFile    "cellDecomposition";
}

// ************************************************************************* //
March 1, 2022, 10:11
#480
Senior Member
Join Date: May 2012
Posts: 552
Rep Power: 16
As far as I can tell there is no difference between the basecase and the run_* decomposeParDict.
In that case it seems that the sed command fails: after the sed command you should have 'method scotch;', not 'method hierarchical;'.

The only suggestion I have left is to also look at the log.* files, in particular log.decomposePar, and to try to find out why the commands are executed several times for each case.
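A quick way to see what run24.sh actually wrote, e.g. for the 6-core case (assuming the run_6 folder from the failed run is still present):
Code:
# both of these lines should have been rewritten by the sed commands in run24.sh
grep -E 'method|numberOfSubdomains' run_6/system/decomposeParDict
# the decomposePar log should also say why it failed
cat run_6/log.decomposePar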