|
July 14, 2000, 22:22 |
Re: Hard Disk Access Time
|
#21 |
Guest
Posts: n/a
|
Hey that was Andy, not me.
I still stand by my original hypothesis that you don't have enough memory to run the problem you are trying to run. Only physical memory counts... virtual memory is just for temporary and small overloads. If a lot of virtual memory is used, you would see a lot of disk activity as the OS swaps your process in and out. Virtual memory should typically be configured as 0.5 times your physical memory, i.e. if you have 384Mb, configure virtual memory to 192Mb. (1Gb is ridiculous... any process that used that much fake memory would surely run dog-slow.)
Of course it could be that your code is smart enough to see that you don't have enough physical memory and it actually switches over to an "out-of-core" mode where certain solution data needed every iteration are kept on disk instead of in memory... I've seen codes that try to help the user that way. Perhaps that would explain the "large" temporary files you are seeing.
Typically, for storing inter-block interpolation coefficients, these files should be small; after all, the number of points communicating is only proportional to the surface point count of your blocks, not the volumes. If these "temp" files are cumulatively even 25% the size of your final solution file, something is wrong.
Good luck - I hope you figure it all out. And when you do, please post and let us know what was really going on. |
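A rough worked example of that surface-versus-volume point (my numbers, purely for illustration): a single 100x100x100 block holds 1,000,000 volume points, but each of its faces has only 100x100 = 10,000 points, so even coupling all six faces involves roughly 60,000 points, a few percent of the volume count. The interpolation coefficients scale with that smaller number, which is why the "temp" files should stay small next to the solution file.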
|
July 14, 2000, 23:55 |
Re: Hard Disk Access Time
|
#22 |
Guest
Posts: n/a
|
Ed, I don't believe it's a physical memory problem. I have a CFD model with about 2 million grid points and about 900MB of RAM, which is more than enough!
How familiar are you with Windows NT? I am almost totally ignorant of it. My interpretation of the paging-file stats displayed by the Task Manager may be incorrect; I assume that the paging file is related to virtual memory. My main experience is with Irix, Unicos and VAX. The two interpolation files are about 20Mb, while the solution file (containing grid and flow field) is about 125Mb. As I said, there are other temp files containing interface solution data. |
|
July 15, 2000, 07:02 |
Re: Hard Disk Access Time
|
#24 |
Guest
Posts: n/a
|
I am Andy, not Ed. I was not assuming parallel operation, simply coupled blocks (although most approaches to coupling blocks give you a parallel capability if you want it).
I believe you have confirmed that you are running the code in a manner not really intended by the author. With little information it is hard to give accurate advice, but here is some anyway:
(1) Spending money on extra hardware to address a minor installation/software problem does not seem sensible.
(2) Talk to the author of the software.
(3) Install the parallel software. PVM, MPI and most parallel implementations are freely available and run quite happily in "parallel" on a single machine. It is likely the author used one of these.
(4) Since you have the Fortran code (wise move), fix it. Simply replace the read/write to the external files with a read/write to an internal buffer stored in a common block (a minimal sketch follows at the end of this post). This is a very minor modification needing only an array of offsets to mark the start of each "internal file", a large character buffer and a bit of code to calculate the size of each file. It seems curious that something like this is not already implemented.
(5) Install unix as well as NT on your PC. Of course, this may not be viable if you need to perform NT-constrained operations while running your CFD code. However, you may find the unix platform provides adequate substitutes.
(6) There is a reason that people with a wide practical knowledge of CAE tools know little about the details of NT. It is a poor platform on which to perform serious CAE (expensive, missing tools, awkward user interface, few standards, unusably buggy for the first few years, etc.) and so few have experience with it. For a passive and non-demanding user the situation is probably different, given its widespread use in the home and in industry. You may well find NT is a continuing source of problems and expense if you stick with running CFD and its related tools on it.
(7) Consider spending the money you seem to have available on good hands-on technical support. I know this is easy to suggest but very difficult to obtain, but you may get lucky. |
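To make item (4) a little more concrete, here is a minimal sketch of the sort of thing I mean. Everything in it (the MEMFIL common block, the MEMINI/MEMPUT names, the buffer size) is invented for illustration, and I have used a REAL buffer rather than a CHARACTER one just to keep it short; the real change has to match whatever record layout the code already uses.

C     Sketch only: all names and sizes are invented.
      SUBROUTINE MEMINI(NSIZE, NFILES)
C     Compute the start offset of each internal "file" from its size.
      INTEGER NFILES, NSIZE(NFILES)
      INTEGER MAXW, MAXF
      PARAMETER (MAXW = 5000000, MAXF = 32)
      REAL BUF(MAXW)
      INTEGER IOFF(MAXF)
      COMMON /MEMFIL/ BUF, IOFF
      INTEGER I
      IOFF(1) = 0
      DO 10 I = 2, NFILES
         IOFF(I) = IOFF(I-1) + NSIZE(I-1)
   10 CONTINUE
      RETURN
      END

      SUBROUTINE MEMPUT(IFILE, A, N)
C     Replaces  WRITE(unit) A : copy N reals into file IFILE.
      INTEGER IFILE, N
      REAL A(N)
      INTEGER MAXW, MAXF
      PARAMETER (MAXW = 5000000, MAXF = 32)
      REAL BUF(MAXW)
      INTEGER IOFF(MAXF)
      COMMON /MEMFIL/ BUF, IOFF
      INTEGER I
      DO 20 I = 1, N
         BUF(IOFF(IFILE) + I) = A(I)
   20 CONTINUE
      RETURN
      END

A matching MEMGET would simply copy the other way, and the existing READ/WRITE statements on the scratch units become calls to these routines. The buffer here is 5 million reals (20Mb at 4 bytes each); it should obviously be sized from the actual files.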
|
July 17, 2000, 06:41 |
Re: Hard Disk Access Time
|
#25 |
Guest
Posts: n/a
|
Steve, it is always possible that your CFD code has a switch or environment variable that makes it use physical memory rather than scratch files. For example, STAR-CD has the environment variable: RAMFILES. If I don't set this, small test jobs run really slowly (~15% CPU); if I do set it, they run at up to 98% CPU. Check the small print.
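For what it's worth, here is a hypothetical sketch of how a Fortran code might honour that kind of switch. This is not how STAR-CD actually does it, and GETENV is a common compiler extension (g77, SGI, DEC) rather than standard Fortran 77.

C     Hypothetical sketch: treat a non-blank RAMFILES environment
C     variable as "keep scratch data in memory".
      LOGICAL FUNCTION INCORE()
      CHARACTER*32 VAL
      VAL = ' '
      CALL GETENV('RAMFILES', VAL)
      INCORE = VAL .NE. ' '
      RETURN
      END

The solver would test INCORE() once at start-up and route its scratch reads and writes either to disk units or to an in-memory buffer accordingly.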
|
|
July 17, 2000, 09:44 |
Re: Hard Disk Access Time
|
#26 |
Guest
Posts: n/a
|
Steve, there is a switch to put the whole model in memory, which prevents swapping each block/zone in and out. This is ALREADY turned on.
|
|
July 17, 2000, 09:47 |
Re: Hard Disk Access Time
|
#27 |
Guest
Posts: n/a
|
Joern, I've looked at used SGI workstations and, even second-hand, they are expensive once you include sufficient RAM and disk space. Are the IRIX SGIs faster than the 770MHz Pentiums?
|
|
July 17, 2000, 09:50 |
Re: Hard Disk Access Time
|
#28 |
Guest
Posts: n/a
|
Andy, how have I "confirmed that you are running the code in a manner not really intended by the author"?
|
|
July 17, 2000, 10:04 |
Re: Hard Disk Access Time
|
#29 |
Guest
Posts: n/a
|
"(2) Talk to the author of the software"
There are several authors who have moved on to places unknown. The organization, that supported this code in the past, no longer supports this code since it has been replace by a code that DOES NOT run on Windows NT." (3) Install the parallel software. PVM, MPI and most parallel implementations are freely available and run quite happily in "parallel" on a single machine. It is likely the author used one of these. " Do you have a source for parallel implementations on WINDOWS NT? Please remember that this code was written for UNIX. "(4) Since you have the Fortran code (wise move) fix it. Simply replace the read/write to the external files with a read/write to an internal buffer stored in a common block. This is a very minor modification needing only an array of offsets to mark the start of each "internal file", a large character buffer and a bit of code to calculate the size of each file. It seems curious that something like this is not already implemented?" This mod is beyond me since I have no idea what "offsets" are; have no idea why a character buffer is needed and have no idea how to calculate the size of each file. This code was meant to be run on a Cray in serial mode or Cray and SGI multiprocessors in parallel mode. The PC version was an afterthought. "(5) There is a reason that people with a wide practical knowledge of CAE tools know little about the details of NT. It is a poor platform on which to perform serious CAE " I agree! That Windows NT or 98 can be used reliably and efficiently is one of the largest hoaxes perputuated by Microsoft on the technical community. "(6) Consider spending the money you seem to have available on obtaining good hands-on technical support. I know this is very easy to find but very difficult to obtain but you may get lucky." Our tiny company can hardly afford me! I need an efficient and cheap solution. We are not Boeing or Microsoft |
|
July 17, 2000, 12:20 |
Re: Hard Disk Access Time
|
#30 |
Guest
Posts: n/a
|
(1). Double check it, even though I am sure that it will slow down the calculation when you turn it off. It is a good idea to double check. (2). Last Saturday I was at the local computer show, and I saw an Athlon 800MHz motherboard selling for $375. They said it is the best buy. (3). So, by upgrading the motherboard alone, you can get a visible speed gain. That is the only viable option you have. So start looking into a high-speed CPU and motherboard, along with high-speed memory. And if you are sure that the high-speed HD is cost effective, it is also a simple solution. (4). So, when you add all of these upgrades together, I think you will see the difference. (Do not touch the source code; it can be a serious problem if you don't know what you are doing.)
|
|
July 17, 2000, 12:39 |
Re: Hard Disk Access Time
|
#31 |
Guest
Posts: n/a
|
The question is not whether the SGI is faster for a single job, but which machine allows you to finish a complete project faster and with less trouble.
|
|
July 18, 2000, 06:14 |
Re: Hard Disk Access Time
|
#32 |
Guest
Posts: n/a
|
By confirming (probably) that the algorithm swaps information between blocks in a "standard/normal" manner, and that the implementation operates in a "standard/normal" manner on a different platform (I hesitate to say "standard/normal" platform!). This external-files business looks like a kludge. I would guess (but may be wrong) that the authors used a shared-memory model for exchanging coupling information on the original platform, and that this was not available on NT (or, more likely, they could not access the MS information), so they spent 5-10 minutes writing some code to use external temporary files instead. I am speculating.
|
|
July 18, 2000, 07:42 |
Re: Hard Disk Access Time
|
#33 |
Guest
Posts: n/a
|
MPI and PVM home pages:
www-unix.mcs.anl.gov/mpi/
www.epm.ornl.gov/pvm/pvm_home.html
Follow the links for NT source code, but it may be wise to check what the code uses first, since it may use the underlying communications libraries directly (i.e. the libraries that MPI and PVM themselves use to communicate). A bare-bones MPI example follows at the end of this post. I assume you are aware that several stable implementations of unix are freely available for the Intel PC platform? (You have not indicated why you are not using unix.)
If I read the situation correctly (please correct me if I am wrong), you are using a CFD code for which there is no external support, and there is nobody internally with the knowledge to provide support either. Suggestions:
(1) If the CFD analysis is not key to the work you are doing then relax. Run the code (slowly), generate the pictures and get paid. The use of NT suggests you are probably in this camp.
(2) If the CFD analysis is key to the work you are performing then you are in trouble. Running specialist codes without knowledge or support is not sensible. You will not be able to fix the minor things that always need addressing in order to perform the work efficiently. A specialist would probably have spent less than an hour inspecting/coding and a few hours testing/documenting the modification to the code. Perhaps more importantly, you will not know what to believe and what to take with a pinch of salt in the answers.
I would recommend having a word with management and suggesting that the activity has to be resourced properly. There seem to be various ways forward: train yourself, buy support for this particular code, buy a commercial code with support, or adopt a freely available code which is still "alive". These options are not independent, and the key (as always) is access to people who know what they are doing. I suspect there is no efficient and cheap solution given your current position. The most effective way forward will depend on where your company wants to go and how important CAE is internally. The only common denominator is likely to be the need for someone to get trained. |
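For completeness, the bare-bones MPI program in Fortran looks like the following; it only shows what calling the library involves and has nothing to do with your particular code. With MPICH it would be built and run with something like "mpif77 hello.f" and "mpirun -np 2 a.out" (commands from the MPICH distribution; other implementations differ).

C     Minimal MPI example - nothing specific to any CFD code.
      PROGRAM HELLO
      INCLUDE 'mpif.h'
      INTEGER IERR, IRANK, NPROC
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, IRANK, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)
      WRITE(*,*) 'Process ', IRANK, ' of ', NPROC
      CALL MPI_FINALIZE(IERR)
      END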
|
July 18, 2000, 10:28 |
Re: Hard Disk Access Time
|
#34 |
Guest
Posts: n/a
|
(1). I like your conversations. (2). About the training, I must say that even in a large, world-class company it is very hard to get proper training. (3). This is because of the organizational structure, policy, responsibility, etc., and it creates a huge uncertainty in the results of analysis. But the reality is: who cares? This is real life. (4). The fact is, life goes on (as the product is produced), regardless of the results of the analysis. In many cases, the flow fields were never studied in the lifetime of the company. (5). Even today, when there is a need to estimate the loss in a highly complex duct, most often the flat-plate or pipe chart is used instead of a 3-D CFD analysis (assuming that some smart engineer has identified the area; otherwise, it does not even exist). (6). Actually, getting a commercial code with support is not much better than using a code without support at all. The reason is that, in most cases, the support engineers are not part of the code development team, so these codes are always handled as black boxes. (7). I must say that it is not a good idea to run a code originally developed for a super-computer on a PC. But if the trouble is only related to the speed of operation on the PC, then upgrading the PC seems to be a practical solution.
|
|
July 23, 2000, 12:46 |
Re: Hard Disk Access Time
|
#35 |
Guest
Posts: n/a
|
Steve,
you can run various UNIX-like operating systems on a PC: Linux is currently the most popular one, but FreeBSD, SunOS, QNX and others are also available. Unfortunately most CFD code for PCs is written for NT. That's not only caused by the software developers, but also by the customers: ask for a Linux port to express your need for a UNIX/PC solution. |
|
August 3, 2000, 14:14 |
Re: Hard Disk Access Time
|
#36 |
Guest
Posts: n/a
|
Thanks for the URLs. We do not use parallel processing since we have no more than two PCs networked. In the future we may move to Linux.
|
|