
Large jobs and memory

August 12, 2002, 15:16   #1
Peter Menegay (Guest)
We do a lot of large CFD runs at our site, mostly with TASCflow but increasingly with CFX-5. We are considering upgrading our workstations to dual Xeon 2.4 GHz machines with 8 GB RAM; the 8 GB is available in server configurations. My question is: even if we have all that memory, will CFX/TASCflow be able to access it? The OS will be Linux, presumably 32-bit. So far, with dual Xeon 1.7 GHz machines, I have never been able to access more than 2 GB.

Thanks,

Peter

August 12, 2002, 18:23   #2
Neale (Guest)
You can recompile the Linux kernel to enable up to 4 GB of memory access. I think that by default Red Hat, and possibly other distributions, ship limited to 2 GB, which is what you observe.
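
For reference, on a 2.4-series kernel the relevant setting is "High Memory Support" under "Processor type and features" in make menuconfig. The option names below are from the 2.4 kernel tree; details may vary between kernel versions and distributions:

    Processor type and features  --->
        High Memory Support  --->
            ( ) off     (CONFIG_NOHIGHMEM, up to roughly 1 GB)
            (X) 4GB     (CONFIG_HIGHMEM4G, up to 4 GB of physical RAM)
            ( ) 64GB    (CONFIG_HIGHMEM64G, PAE, up to 64 GB of physical RAM)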

Xeons are 32-bit CPUs and standard Linux is a 32-bit OS, so 4 GB per process is all you get. CFX-5 and TASCflow should be able to allocate that much under Linux. You will be able to run bigger jobs with CFX-5, though, because its parallel implementation is better.
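
As a rough illustration of that per-process ceiling, a small C program along these lines keeps allocating until malloc() fails and reports the total. This is only a sketch: the 64 MB chunk size is arbitrary, and the figure it prints also depends on the kernel's overcommit behaviour.

    /* probe.c - probe how much memory one process can allocate
     * before malloc() fails. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (64UL * 1024 * 1024)   /* allocate in 64 MB chunks */

    int main(void)
    {
        size_t total = 0;
        char *p;

        while ((p = malloc(CHUNK)) != NULL) {
            memset(p, 0, CHUNK);         /* touch the pages so they are really mapped */
            total += CHUNK;
        }
        printf("malloc failed after %lu MB\n", (unsigned long)(total >> 20));
        return 0;
    }

On a stock 32-bit kernel this typically stops somewhere below 3 GB, since the kernel reserves part of each process's 4 GB address space for itself.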

Neale

August 23, 2002, 06:39   #3
Gaikwad Suresh J (Guest)
Hi Peter,

I had similar problems handling large CFD models for combustion studies.

You may find the following data useful.

We could run small CFD cases individually on an NT machine with 2 GB of RAM, while the large case could not be handled due to insufficient memory. In terms of approximate grid size, problems with fewer than 1.0 million nodes can be handled on NT; for larger problems, the machine's physical memory needs to be increased at a rate of approximately 1.7 MB of RAM for every 1000 nodes.
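
To put that rule of thumb into numbers, here is a trivial C sketch; the 1.7 MB per 1000 nodes ratio is the observation above for TASCflow, not an official figure:

    /* memsize.c - back-of-envelope RAM estimate from node count. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        double nodes = (argc > 1) ? atof(argv[1]) : 1.0e6;
        double mb = nodes / 1000.0 * 1.7;   /* ~1.7 MB of RAM per 1000 nodes */

        printf("%.0f nodes -> about %.0f MB (%.1f GB) of RAM\n",
               nodes, mb, mb / 1024.0);
        return 0;
    }

For 1.0 million nodes this gives about 1700 MB, which already presses against a 2 GB NT machine once the OS and other processes are accounted for.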

It was found that a 32-bit machine can use a maximum of 2 GB of physical memory, while higher memory allocations require a 64-bit architecture. Therefore, the options available for solving large CFD problems with TASCflow (1.0 million nodes and above) are:

1. HP C3700 workstation with the HP-UX 11.0 operating system and 6 GB RAM, scalable up to 8 GB
2. HP J-Class workstation (HP J6700, dual processor) with 6 GB RAM, scalable up to about 16 GB
3. Itanium 2 chip on the Windows XP operating system (yet to be released to the market)

• The capability to handle large problems on HP-UX workstations is a clear advantage, with the C3700 expandable to 8 GB RAM and the J6700 to about 16 GB RAM.

• The HP team can also help size the whole solution, keeping current and future requirements in mind, with the right mix of systems to meet our needs.

• A similar exercise on the Windows XP platform, with an Itanium 2 (McKinley) chip and TASCflow version 2.12, will be carried out to explore the possibilities of solving large CFD problems.
