May 18, 2007, 11:22
difference in CPU and wall clock time
#1
Guest
Posts: n/a
I'm doing a grid-independence study, so I'm running the same case with several mesh resolutions. On the coarse mesh (127,637 nodes, 407,270 elements), the total CPU time and the total wall clock time reported by CFX-Solver differ only slightly: 6 hours 12 minutes versus 6 hours 16 minutes. On the fine mesh (184,032 nodes, 563,071 elements), however, the difference is enormous: 8 hours 52 minutes versus 1 day 19 hours 21 minutes. Can anyone explain this, and how can I bring the two times closer together? Thank you.
May 18, 2007, 12:45
Re: difference in CPU and wall clock time
#2
Guest
Posts: n/a
I ran into this problem when doing local parallel calculations with PVM (on Windows XP); switching to MPICH helped in that case. If you are running in serial, I have no idea.
May 18, 2007, 23:32
Re: difference in CPU and wall clock time
#3
Guest
Posts: n/a
Search the forum before you post; it will save you a lot of waiting time. There has already been a very good discussion of why this happens.
May 20, 2007, 06:50
Re: difference in CPU and wall clock time
#4
Guest
Posts: n/a
Hi,
Most of the difference is disk IO. Check that your simulation fits in memory and is not using the page file/swap file. If your model does not fit in physical RAM, the gap between wall clock time and CPU time can grow rapidly. Glenn Horrocks
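One quick way to quantify the effect Glenn describes is to compare the CPU and wall clock times the solver prints at the end of the run. As a sketch, here is the arithmetic for the two cases from the original post (Python used only for the calculation):

```python
def to_hours(days=0, hours=0, minutes=0):
    """Convert a solver-style elapsed-time report to decimal hours."""
    return days * 24 + hours + minutes / 60

# Coarse mesh (127,637 nodes): CPU and wall clock nearly match.
coarse_cpu = to_hours(hours=6, minutes=12)
coarse_wall = to_hours(hours=6, minutes=16)

# Fine mesh (184,032 nodes): wall clock far exceeds CPU time.
fine_cpu = to_hours(hours=8, minutes=52)
fine_wall = to_hours(days=1, hours=19, minutes=21)

# A wall/CPU ratio near 1.0 means the solver kept the CPU busy;
# a ratio well above 1.0 means the run spent most of its time
# waiting, typically on disk IO once the model spills out of RAM.
print(f"coarse ratio: {coarse_wall / coarse_cpu:.2f}")
print(f"fine ratio:   {fine_wall / fine_cpu:.2f}")
```

For these numbers the coarse ratio comes out close to 1.0 while the fine ratio is nearly 5, which is the signature of a run that no longer fits in physical memory.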
|
May 21, 2007, 03:05
Re: difference in CPU and wall clock time
#5
Guest
Posts: n/a
Dear Mr. Horrocks, I am sorry, but I have no idea how to check whether my simulation fits in physical memory. Could you please explain in a bit more detail? Thank you very much.
Atit
|
May 21, 2007, 08:58
Re: difference in CPU and wall clock time
#6
Guest
Posts: n/a
Hi,
The easiest way to check is during a run. Start the run, wait for all the setup to finish, and let the solver begin its solution iterations. Then fire up Task Manager (I assume you are on a Windows machine). If the simulation is running in physical memory, the CPU should be at or close to 100% on a single-core machine (50% per process with two cores, and so on). It may dip briefly below maximum, but the average CPU load should be 90+%, and the disk drive should not be working too hard. If the simulation is not running in physical memory, the CPU load will be low, say below 10%, and the disk drive will be flogging itself. Some runs that are just outside physical RAM start like this but get going after a while and run reasonably; if the run stays like this, though, it will be very slow. Best to stop the run and either get more RAM, get more machines for parallel operation, or make the simulation smaller. Glenn Horrocks
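You can also do a rough pre-run sanity check on paper. The sketch below assumes a ballpark of about 1 GB of solver memory per million nodes, which is a commonly quoted rule of thumb for a coupled solver of this kind, not a figure from this thread; actual usage depends on the physics, precision, and solver settings, and the memory estimate printed in the CFX-Solver output file is the number to trust. The 256 MB machine size and 100 MB OS overhead are likewise hypothetical illustration values:

```python
# Assumed rule of thumb, NOT from this thread: roughly 1 GB of
# solver memory per million mesh nodes. Trust the memory estimate
# in the CFX-Solver output file over this ballpark.
GB_PER_MILLION_NODES = 1.0

def estimated_memory_gb(num_nodes):
    """Ballpark solver memory footprint for a given mesh size."""
    return num_nodes / 1e6 * GB_PER_MILLION_NODES

def fits_in_ram(num_nodes, physical_ram_gb, os_overhead_gb=0.1):
    """True if the estimate leaves headroom for the OS and services."""
    return estimated_memory_gb(num_nodes) <= physical_ram_gb - os_overhead_gb

# The two meshes from the thread, on a hypothetical 256 MB machine:
for nodes in (127_637, 184_032):
    print(nodes, "fits:", fits_in_ram(nodes, physical_ram_gb=0.25))
```

Under these assumed numbers the coarse mesh squeaks into RAM while the fine mesh does not, which would match the behaviour reported in the thread; but again, the Task Manager check during an actual run is the reliable test.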
|
May 22, 2007, 09:43
Re: difference in CPU and wall clock time
#7
Guest
Posts: n/a
Dear Mr. Horrocks,
Your explanation is very clear to me, and it makes a lot of sense. I think some of my cases might be too big for RAM, which would produce exactly the behaviour you describe. Next time I have a very fine mesh I'll check as you suggest. Thank you very much. Atit