solved issue of running out of virtual memory crashes |
April 13, 2012, 17:00 | #1
Mihai Pruna, Senior Member (Boston)
I looked for this particular issue and nobody seems to have answered it.
The dreaded message: "This does not necessarily mean you have run out of virtual memory"... Basically, my SHM (snappyHexMesh) run would crash for large numbers of cells. This seems to have fixed it: on Ubuntu, use this tutorial to increase the size of the swap partition or add a new one: https://help.ubuntu.com/community/SwapFaq
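For reference, you can also add a swap file instead of repartitioning. A minimal sketch with standard Linux commands (the 4G size and the /swapfile path are just examples; size it to your mesh):

Code:
    # create and enable a 4 GiB swap file
    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile   # swap must not be world-readable
    sudo mkswap /swapfile      # format it as swap space
    sudo swapon /swapfile      # enable it immediately
    free -h                    # verify the new swap shows up
    # to keep it across reboots, add this line to /etc/fstab:
    # /swapfile none swap sw 0 0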
__________________
Mihai Pruna's Bio
April 17, 2012, 08:22 | #2
Senior Member (Groningen, The Netherlands)
Hi mihaipruna,

using a swap partition basically assigns part of the hard drive as RAM, which the computer may also use to deal with big files. But this is not an optimal solution to the problem, since the calculations (both mesh creation and solving) are slowed down significantly. Without swap, only the RAM and the processor have to communicate, and these two are built to handle the amount of data the RAM can store within a short time span. Introducing a swap partition adds an extra communication path through the disk, which is actually meant for storing (big) files permanently, not for working memory, so the data has to be hauled back and forth from there as well.

I would be interested to know whether the solver actually complains about such a big mesh, or whether it also uses the swap partition without further notice.

Another note: in the other thread where you posted the link to this one, I mentioned a rule of thumb for mesh creation of 1M cells per GB of RAM. That rule of thumb should be extended: don't use more than 2 GB of RAM per processor, so the number of processors also limits your mesh size to some degree. In practice this means that on a quad-core machine I'm limited to 8M cells, provided 8 GB of RAM are available. With these guidelines I got pretty good results without hitting any hardware limits. They are, however, only proven for PCs, not for clusters or other 'supercomputers'.

best regards
Colin
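P.S.: a quick way to check whether the mesher or solver is silently dipping into swap while it runs (standard Linux tools, nothing OpenFOAM-specific; the 5-second interval is just an example):

Code:
    free -h    # one-shot snapshot: look at the used value in the Swap row
    vmstat 5   # print stats every 5 s; nonzero si/so columns mean active swapping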
April 17, 2012, 10:45 | #3
Mihai Pruna, Senior Member (Boston)
I'm not sure how OpenFOAM handles multiple processors. Do you have to tell it to run in parallel mode?
__________________
Mihai Pruna's Bio
April 17, 2012, 16:45 | #4
Bruno Santos, Retired Super Moderator (Lisbon, Portugal)
Greetings to all!

@Mihai: Quote:
    Originally Posted by mihaipruna
    I'm not sure how OpenFOAM handles multiple processors. Do you have to tell it to run in parallel mode?
The "damBreak" tutorial (the 2nd one, I think) in the OpenFOAM User Guide explains the first steps into running OpenFOAM applications in parallel; roughly, the workflow is the sketch below.

Best regards, Bruno