July 24, 2008, 04:28 | #1
Senior Member
Hrvoje Jasak
Join Date: Mar 2009
Location: London, England
Posts: 1,907
Rep Power: 33
Heya,
Your setup looks OK. Try playing with the pressure solver, starting from ICCG and then varying the AMG parameters if that does not help. You should get good performance at least up to 100 CPUs. Please keep us (the forum) posted in case there's a real problem. Hrv
__________________
Hrvoje Jasak Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk
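
For reference, "starting from ICCG" would look something like this in the solvers section of system/fvSolution. This is a minimal sketch in the 1.4-era one-line syntax matching the GAMG dictionary quoted later in this thread; the tolerances are illustrative, not Hrv's:

solvers
{
    // ICCG: conjugate gradient with incomplete-Cholesky preconditioning;
    // the two numbers are the absolute tolerance and the relative tolerance
    p ICCG 1e-06 0.01;
}

(In OpenFOAM 1.5 and later the same solver is selected as "solver PCG;" with "preconditioner DIC;" inside a p { ... } sub-dictionary.)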
July 24, 2008, 05:23 | #2
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Rep Power: 51
Hi Senthil!
Of course the other interesting question is: how big is your case? And how are your CPUs connected? If, for instance, you're doing the damBreak tutorial over 100 Mbit Ethernet, I'd be surprised if you saw any speedup on 6 CPUs. Bernhard
__________________
Note: I don't use the "Friend" feature on this forum, out of principle. Ah, and by the way: I'm not on Facebook either, so don't be offended if I don't accept your invitation/friend request.
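
For context on counts like "6 CPUs": the number of pieces a case is split into is set in system/decomposeParDict before running decomposePar. A minimal sketch with illustrative numbers, not anyone's actual decomposition:

numberOfSubdomains 6;

method          simple;

simpleCoeffs
{
    n           (3 2 1);   // 3 x 2 x 1 = 6 subdomains
    delta       0.001;
}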
July 24, 2008, 13:05 | #3
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi Hrv,
Thanks for the help! I'll try your suggestions and keep the forum posted. Bernhard: the mesh has 2.7 million elements (hybrid), and it's a shared-memory machine. Regards, Senthil
July 24, 2008, 13:26 | #4
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
What kind of shared memory machine?
July 24, 2008, 13:44 | #5
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi Eugene,
It's an Altix from SGI (http://www.sgi.com/products/servers/altix/). The system runs a single copy of Linux over 128 Intel Itanium 2 processors at 1.5 GHz, with 256 GB of shared memory. Thanks, Senthil
July 24, 2008, 14:13 | #6
Senior Member
Francesco Del Citto
Join Date: Mar 2009
Location: Zürich Area, Switzerland
Posts: 237
Rep Power: 18
Hi Senthil,
I have had bad experiences with some SGI shared-memory machines where the memory is physically distributed across different motherboards and shared by the kernel on top of an InfiniBand network. Is this the case here? I think so, as they write that the Altix has a "modular blade design". If so, you can see very poor performance if the machine is not configured for the software you are running... Francesco
July 24, 2008, 16:42 | #7
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi Francesco,
The same machine demonstrated excellent speedup on a steady-state simulation with simpleFoam: I got good speedup on 64 processors. Senthil
July 25, 2008, 06:25 | #8
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
When running on Itanium it is crucial that you use the Intel icc compiler and the SGI native MPI; otherwise performance will be poor.
July 25, 2008, 19:17 | #9
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi All,
When I increase the number of processors, the solver stalls while solving the pressure equation. I think the problem is with GAMG. Any suggestions for changing the following parameters?

p GAMG
{
    agglomerator          faceAreaPair;
    nCellsInCoarsestLevel 100;
    cacheAgglomeration    true;
    directSolveCoarsest   true;
    nPreSweeps            1;
    nPostSweeps           2;
    nFinestSweeps         2;
    tolerance             1e-05;
    relTol                0.1;
    smoother              GaussSeidel;
    mergeLevels           1;
    minIter               0;
    maxIter               10;
};

Thanks, Senthil
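
For readers arriving from a search, these are the "AMG parameters" Hrv referred to in post #1, with their rough effect. The values below are one illustrative variant to sweep, not a known fix:

p GAMG
{
    agglomerator          faceAreaPair;  // pairwise agglomeration (standard)
    nCellsInCoarsestLevel 1000;          // target size of the coarsest level;
                                         // the knob post #11 below identifies
                                         // as the key one
    cacheAgglomeration    true;          // reuse the agglomeration between solves
    directSolveCoarsest   false;         // iterate on the coarsest level rather
                                         // than solving it directly
    nPreSweeps            0;             // smoother sweeps before coarsening...
    nPostSweeps           2;             // ...and after refinement
    nFinestSweeps         2;
    smoother              GaussSeidel;
    mergeLevels           1;             // coarsen one level at a time
    tolerance             1e-05;
    relTol                0.1;
    minIter               0;
    maxIter               50;            // cap on iterations per solve
};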
August 13, 2008, 18:45 | #10
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi All,
I have tried various permutations and combinations for the pressure solver, but I could not attain any speedup. Any suggestions? Thanks, Senthil
October 3, 2011, 01:09 | #11
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi All,
This issue has been resolved! The key was changing nCellsInCoarsestLevel in the GAMG settings. Thanks, Senthil
October 3, 2011, 06:03 | #12
Member
Tibor Nyers
Join Date: Jul 2010
Location: Hungary
Posts: 91
Rep Power: 17
Hi Senthil,
Could you elaborate a bit, please? I have run some parallel tests with icoFoam and GAMG, and the scale-up was far from perfect. Thanks in advance!
October 3, 2011, 14:04 | #13
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi,
First, it is impossible to get linear speedup with any code. That said, icoFoam performed relatively well on multiple processors. As you increase the number of processors, each processor gets a smaller chunk of the mesh; therefore, you need to decrease the nCellsInCoarsestLevel setting in GAMG (in the fvSolution file). You can think of this number as the number of common cells shared between two processors. Also, OpenFOAM performed better with Open MPI 1.5 than with the previous version (1.4). Thanks
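
To make that concrete with the numbers from this thread (the 2.7 million cells from post #3 on the 64 processors from post #7 is roughly 42,000 cells per processor), here is a sketch in the 2.0.x dictionary syntax; the exact value is the thing to tune, not a recommendation:

p
{
    solver                GAMG;
    smoother              GaussSeidel;
    agglomerator          faceAreaPair;
    cacheAgglomeration    true;
    mergeLevels           1;
    // ~42k cells per rank: keep the coarsest level small relative to
    // that, and shrink it further as the rank count grows
    nCellsInCoarsestLevel 50;
    tolerance             1e-06;
    relTol                0.01;
}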
October 10, 2011, 12:06 | #14
Member
Tibor Nyers
Join Date: Jul 2010
Location: Hungary
Posts: 91
Rep Power: 17
Thanks Senthil for the tip, but it won't change anything in my case.
My test was a simple cavity3D case created with blockMesh, 10M cells, the default icoFoam solver with GAMG for pressure, on an Intel Xeon E5430, 200 iterations.

threads - runtime
1 - 32.2 h
2 - 27.7 h
4 - 28.9 h
8 - 23.8 h
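
Taking speedup as S(n) = T(1)/T(n), those timings give S(2) = 32.2/27.7 ≈ 1.16, S(4) = 32.2/28.9 ≈ 1.11 and S(8) = 32.2/23.8 ≈ 1.35, i.e. about 17% parallel efficiency on 8 threads. A flat profile like this on a single multi-core box is commonly attributed to saturated memory bandwidth rather than to solver settings, which would be consistent with the tip not helping here.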
October 10, 2011, 14:10 | #15
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Hi,
Which version of OpenFOAM are you using?
October 11, 2011, 03:41 | #16
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30
[The quoted text was lost in the archive. From the reply in post #18, Anton was questioning the claim in post #13 that linear speedup is impossible with any code.]
October 11, 2011, 04:14 | #17
Member
Tibor Nyers
Join Date: Jul 2010
Location: Hungary
Posts: 91
Rep Power: 17
Hi,
OS: Ubuntu 11.04
OF: a month-old 2.0.x
October 11, 2011, 14:02 | #18
Senior Member
Senthil Kabilan
Join Date: Mar 2009
Posts: 113
Rep Power: 17
Anton,
You are correct! I need to rephrase my wording. I meant that we cannot get linear speedup with the OpenFOAM utilities (at least the ones I have used). We will run into interpolation issues if nCellsInCoarsestLevel is set to a very small number; it has to be appropriate to the total number of elements on each processor.
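
A rough way to quantify "appropriate": faceAreaPair agglomeration roughly halves the cell count at each level, so with N cells per processor and a coarsest-level target of c, GAMG builds on the order of log2(N/c) levels. For a 10M-cell case on 8 ranks (about 1.25M cells per rank), c = 10 gives roughly 17 levels while c = 1000 gives roughly 10, and every extra coarse level adds another round of inter-processor communication per cycle. (Back-of-envelope only, assuming near-pairwise coarsening.)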
October 11, 2011, 14:20 | #19
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30
Personally, I've always encountered lower-than-linear speedup as well, but some people have claimed to get close-to-linear or even better speedup with OpenFOAM:
http://www.hpcadvisorycouncil.com/pd...M_at_Scale.pdf
http://web.student.chalmers.se/group...SlidesOFW5.pdf
October 11, 2011, 18:07 | #20
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Greetings to all!

I would like to add to the knowledge being shared here and point you all to the following post: "Parallel processing of OpenFOAM cases on multicore processor???", post #11. It's a bit of a rant, but several notions about OpenFOAM scalability that I've picked up over time are written there.

As for super-linear scalability, AFAIK it's only a half-truth: it is only super-linear when we don't take the serial-mode timings into account. In other words, we can observe something similar in the "Report" I mention in the post above, where scalability skyrockets once the serial run, and the real theoretical speedup that should be expected, are left out of the picture.

I've been trying to gather as much information on this subject as possible and have been keeping track of it in the blog post mentioned at the end of the post above, namely: "Notes about running OpenFOAM in parallel".

Best regards, Bruno
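
For anyone skimming: the quantities at stake are speedup S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n, both measured against a genuine serial run T(1). Bruno's point is that "super-linear" plots often normalise against the smallest parallel run instead (some T(m) with m > 1), which can push the apparent S(n) above n even when E(n) against a true serial baseline stays below 1.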