Network hardware used in clusters

May 25, 2008, 09:32   #1
kar (Kārlis Repsons), Senior Member
Hello,
I'm interested in what kind of networking people use for effective parallel CFD computing. When is Cat5 wiring plus ordinary switches sufficient, and when is something faster, like InfiniBand, necessary?

Regards,
Kārlis

May 25, 2008, 11:23   #2
bastil (BastiL), Senior Member
It is strongly application- and code-dependent. As a rule of thumb:

For calculations on more than about 16 CPUs, "normal" Gigabit Ethernet is usually too slow. It also depends on the number of cores per node, among other things.

Regards

May 25, 2008, 16:41   #3
kar (Kārlis Repsons), Senior Member
You mean a machine with more than 16 cores? Otherwise it makes no sense to me: a switch should be able to handle all of its connections properly, shouldn't it?

May 25, 2008, 17:35   #4
bastil (BastiL), Senior Member
I do not understand what you mean. Of course a switch can handle all the connections. However, if you distribute a CFD case into more than about 16 parts, the communication overhead grows non-linearly, and with that much communication the network becomes the first bottleneck for speed. This is measured as speedup: running a case on one core gives a speedup of one, and running it on e.g. 8 cores has a theoretical speedup of 8, though in practice you always get less. With Gigabit Ethernet you will generally not get much faster going from 16 to 32 to 64 cores (this is of course case- and architecture-dependent), whereas a faster interconnect such as InfiniBand will still give you further speedup when you go from 16 to 32 parts. That is what I wanted to say.
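
As a rough illustration of that non-linear overhead, here is a toy model in Python. Every number in it (latencies, bandwidths, message counts, halo size) is a made-up assumption for illustration, not a benchmark from any real cluster; the point is only the shape of the curves.

Code:
# Toy speedup model: per-step compute time shrinks as 1/parts, while
# each step pays a fixed number of message latencies plus a bandwidth
# term for the halo data. All parameters are illustrative assumptions.

def speedup(parts, interconnect, t_serial=1.0, msgs_per_step=500,
            halo_bytes=2e6):
    latency, bandwidth = interconnect            # seconds, bytes/s
    t_compute = t_serial / parts
    # Halo data per part shrinks roughly as parts**(-2/3) on a 3D mesh.
    t_comm = (msgs_per_step * latency
              + halo_bytes * parts ** (-2.0 / 3.0) / bandwidth)
    return t_serial / (t_compute + t_comm)

gige = (60e-6, 125e6)    # ~60 us MPI latency, ~1 Gbit/s payload
ib   = (4e-6, 1e9)       # ~4 us latency, ~8 Gbit/s (SDR-era IB)

for n in (8, 16, 32, 64):
    print(f"{n:3d} parts: GigE ~{speedup(n, gige):5.1f}x, "
          f"IB ~{speedup(n, ib):5.1f}x")

The absolute figures mean nothing; what matters is that the fixed latency cost of the many small solver messages stops the Gigabit Ethernet curve from climbing, while the faster interconnect keeps scaling.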

All this also depends on the number of cores per CPU and CPUs per node. The numbers above are for typical nodes with two CPUs and two cores per CPU; I do not know much about nodes with more cores.

May 26, 2008, 05:18   #5
kar (Kārlis Repsons), Senior Member
So the story is about the per-timestep compute time compared with the time needed to exchange boundary values. A Gigabit network can have two speed problems: too little bandwidth and too much latency. As the case is divided further, the compute time per timestep shrinks, and the speedup falls off once the network is slower than inter-core communication.
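
To put rough numbers on that exchange, here is a back-of-the-envelope sketch. The mesh size, part count, field count, and link rates below are all assumptions for illustration, not measurements from any real case.

Code:
# Halo-exchange estimate for a decomposed 3D mesh. Plug in your own
# case sizes; every value here is an assumption for illustration.

cells  = 4_000_000    # total mesh cells
parts  = 32           # decomposition parts
fields = 5            # fields exchanged per solver sweep (U, p, ...)
bytes_per_value = 8   # double precision

cells_per_part = cells / parts
# A roughly cubic part has about 6 * cells_per_part**(2/3) boundary faces.
halo_faces = 6 * cells_per_part ** (2 / 3)
halo_bytes = halo_faces * fields * bytes_per_value

for name, bw in (("GigE", 125e6), ("InfiniBand", 1e9)):   # bytes/s
    print(f"{name:10s}: {halo_bytes / 1e6:.2f} MB halo, "
          f"{halo_bytes / bw * 1e3:.2f} ms per exchange")

Compare that per-exchange time against your per-timestep compute time: once they are the same order of magnitude, the network, and especially its latency on the many small solver messages, dominates.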

Just curious: how much do those InfiniBand NICs cost? And a ~30-port switch?

May 26, 2008, 17:46   #6
msrinath80 (Srinath Madhavan, a.k.a. pUl|), Senior Member
And please don't forget to factor in the memory-bandwidth bottleneck when using multi-core CPUs. The more cores share the memory bandwidth, the worse the speedup (even if on-board core interconnects are used).
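
One crude way to see this on a single node is a minimal NumPy sketch of a STREAM-style triad (the array size and repeat count below are arbitrary choices). Run one copy pinned to one core, then several copies pinned to cores of the same socket (e.g. with taskset on Linux), and watch the per-copy figure drop as the cores contend for the same memory bus.

Code:
import time
import numpy as np

N = 20_000_000                   # 160 MB per array, well past any cache
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

reps = 5
t0 = time.perf_counter()
for _ in range(reps):
    np.multiply(c, 3.0, out=a)   # a = 3*c
    np.add(a, b, out=a)          # a = b + 3*c  (STREAM-style triad)
elapsed = time.perf_counter() - t0

# Traffic estimate is approximate (the two-pass triad moves somewhat
# more than the classic three arrays); compare runs, don't quote it.
gbytes = 3 * N * 8 * reps / 1e9
print(f"effective triad bandwidth ~ {gbytes / elapsed:.1f} GB/s")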



