|
August 14, 2000, 02:43 |
cfd-hardware
|
#1 |
Guest
Posts: n/a
|
Hi, a question about the "state of the art": what hardware do you use for CFD at the moment? Chris
|
|
August 14, 2000, 06:16 |
Re: cfd-hardware
|
#2 |
Guest
Posts: n/a
|
Chris,
I notice you asked "What hardware do you use", not "What is the best hardware to use", so my answer is just that: an answer rather than an opinion. For 3D CFD (VECTIS), we use a variety of UNIX machines for pre-processing, solving and post-processing. At the present time, the quickest of our lot per CPU are our HP J-class machines (J5600 and J6000) and our Compaq DS20. For parallel jobs, we tend to use an 8-processor SGI box. We do have a Beowulf system set up here that appears to be pretty quick too, but nobody trusts it enough yet to use it for real work. For 1D CFD (WAVE), most of our work is now done on reasonably spec'd Pentium boxes running NT. Hope this is a useful data point. - Steve
|
August 14, 2000, 11:16 |
Re: cfd-hardware
|
#3 |
Guest
Posts: n/a
|
Steve,
We have a Beowulf cluster (8 x 800 MHz) that is about to go 'live' on large unsteady CFD problems. I would be interested to know what is causing the uncertainty with your cluster. Have you had 'issues'? Rich
|
August 14, 2000, 12:01 |
Re: cfd-hardware
|
#4 |
Guest
Posts: n/a
|
Rich,
No, nothing technical. Just inertia really. |
|
August 14, 2000, 15:51 |
Re: cfd-hardware
|
#5 |
Guest
Posts: n/a
|
(1). Currently, I am using a Sun Ultra 60 workstation. (2). Before that, two years ago, it was an HP C200 (?) or something like that. (3). There are other computers available, but I tend to use these two types of workstations. No big problems with commercial CFD codes.
|
|
August 15, 2000, 08:09 |
Re: cfd-hardware
|
#6 |
Guest
Posts: n/a
|
A powerful one is a "Silicon Graphics" workstation, which I use now. Before that I used a Sun "SPARCstation 5" workstation.
|
|
August 15, 2000, 08:20 |
Re: cfd-hardware
|
#7 |
Guest
Posts: n/a
|
We use HP J-class workstations for pre- and post-processing and Linux clusters (high-end PCs running RedHat) for large simulations. Nothing beats the price/performance of the Linux cluster. We used to use HP V-class parallel compute servers, but they have now largely been replaced by Linux clusters.
|
|
August 15, 2000, 14:32 |
Re: cfd-hardware
|
#8 |
Guest
Posts: n/a
|
Jonas, do you ever run into memory limitations with the Linux cluster? I assume you're using 32-bit Intel or AMD processors, which can only address 2^32 bytes, i.e. roughly 4 GB.
Any plans to switch to 64-bit, or is there a way around this that I don't know about? The reason I ask is that we are looking at a Linux cluster, but I thought we'd wait until the Sledgehammer and Itanium are out.
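(As a quick sanity check of that figure, here is a minimal C sketch of the arithmetic only; nothing in it comes from any code mentioned in this thread. A 32-bit address space covers 2^32 bytes, which works out to 4 GB.)

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* A 32-bit pointer can name 2^32 distinct byte addresses. */
    uint64_t max_bytes = (uint64_t)1 << 32;
    printf("2^32 bytes = %llu bytes = %.1f GB\n",
           (unsigned long long)max_bytes,
           max_bytes / (1024.0 * 1024.0 * 1024.0));
    return 0;   /* prints: 2^32 bytes = 4294967296 bytes = 4.0 GB */
}
```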
|
August 15, 2000, 15:28 |
Re: cfd-hardware
|
#9 |
Guest
Posts: n/a
|
True, Linux on x86 (Intel PIII etc.) can only address up to a maximum of 4 GB, and that is if you have done everything right; many Linux installations have 2 GB as the limit. This is one of the reasons why we don't do pre- and post-processing on Linux PCs.

When you run simulations, though, you always parallelize the case if it is big (i.e. demands a lot of memory). When you parallelize the case you most often split it up into smaller parts (domain decomposition), which easily fit into the memory of each Linux cluster CPU. In practice we have very seldom had any memory problems on our Linux clusters when running simulations. Parallel scaling vs. problem size often gives an optimum with something like 300 MB parts on each CPU.

The only problem we've had is with an in-house code which is parallelized in a stupid way - you have to have a "mother node" to read in the case, and this node has to be able to hold the entire case in memory. The simple solution is to place the "mother node" on an HP workstation, which can address more than 4 GB, and place all compute nodes on the Linux cluster.

We are about to benchmark an Itanium box, and if it works out well we hope that Itanium machines will be an alternative to the HP J-class machines for pre- and post-processing.
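(A minimal sketch of the domain-decomposition point above, assuming MPI and an even 1D block split of the cells; the cell count and bytes-per-cell figures are illustrative guesses, not numbers from any of the codes mentioned here.)

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: each rank allocates just its own share of the cells,
 * so the per-node memory footprint stays far below the 32-bit 4 GB limit
 * even when the global case would not fit on a single machine. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long global_cells   = 6000000;  /* hypothetical 6 million cell case */
    const long bytes_per_cell = 200;      /* rough per-cell storage guess     */

    long local_cells = global_cells / nprocs;
    if (rank < global_cells % nprocs)     /* spread any remainder cells       */
        local_cells++;

    char *local_data = malloc((size_t)(local_cells * bytes_per_cell));
    if (local_data == NULL) {
        fprintf(stderr, "rank %d: allocation failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    printf("rank %d holds %ld cells (~%.0f MB)\n",
           rank, local_cells,
           local_cells * bytes_per_cell / (1024.0 * 1024.0));

    free(local_data);
    MPI_Finalize();
    return 0;
}
```

(With 8 nodes, the hypothetical 6 million cell case comes to roughly 150 MB per node, in line with the ~300 MB-per-CPU sweet spot mentioned above.)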
|
August 15, 2000, 17:03 |
Re: cfd-hardware
|
#10 |
Guest
Posts: n/a
|
For running CFDesign I use a dual-processor Intel Pentium 600 MHz with 512 MB of RAM. As I can turn around almost all the examples on our website, at www.brni.com, overnight, I have not seen the need to invest in anything more expensive. I think the majority of our customers are now doing production models on PCs of this type of specification (although most have only one processor).
Regards, Mike
|
August 17, 2000, 12:39 |
Re: cfd-hardware
|
#11 |
Guest
Posts: n/a
|
Thanks for the info, Jonas.
|
|
August 18, 2000, 16:37 |
Re: cfd-hardware
|
#12 |
Guest
Posts: n/a
|
As CFDers we should have very little inertia ;-). Be not afraid: Beowulf is good for you, like broccoli.
|
|
August 29, 2000, 00:40 |
Re: cfd-hardware
|
#13 |
Guest
Posts: n/a
|
We couldn't be happier with our Linux cluster (USAF Academy). We have built it up to 64 processors and use the Government code, Cobalt (unstructured): http://www.va.afrl.af.mil/vaa/vaac/COBALT/. It churns out full aircraft solutions (1.5-6 million cells) in a day to a few days. The cluster came from Paralogic (www.plogic.com). Don't let inertia hold you back. We have a 10:1 price/performance advantage over SGIs.
Jim
|
|
|