
Cost Effective Cluster Hardware

June 24, 2013, 07:10   #1
New Member
John McEntee
Join Date: Jun 2013
Posts: 8
I have been tasked with putting together a cost-effective CFD cluster for about £15,000 (use a £1 = $1 conversion to get the right idea). The cluster is mainly going to be used for STAR-CCM+ (unlimited-processor licence), for trialling OpenFOAM, and maybe other CFD software. The cluster may need to be expanded later. The options I am considering are below; a rough cost-per-core comparison is sketched after the list.

1.a) DIY-built rackmount with Z77 motherboards and Intel i5 3770K CPUs, 16 GB RAM per node. Second-hand eBay 20 Gb/s InfiniBand cards, switch and cables. The budget would therefore stretch to 30 nodes (120 cores).

1.b) The same as above but with 10GBase-T Ethernet; the budget only stretches to 17 nodes (68 cores).

2.) A DIY Opteron build: a four-CPU motherboard, 16 cores per CPU, 256 GB RAM per node. Could stretch to 2 nodes (128 cores).

3.) The STAR-CCM+ partner solution: a blade system with dual 8-core Intel Xeon CPUs and 48 GB RAM per node. Also 2 nodes (32 cores).
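
As a quick sanity check on the raw numbers, here is a minimal sketch (in Python) of the cost per core for each option. It assumes the full £15,000 budget is spent in every case, and the node and core counts are simply those quoted above, so treat the output as illustrative only.

Code:
# Rough cost-per-core comparison of the four options above.
# Assumes the full £15,000 budget is spent in each case; node and
# core counts are taken from the options as quoted (illustrative only).
BUDGET_GBP = 15_000

options = {
    "1a: i5 + used InfiniBand": {"nodes": 30, "cores": 120},
    "1b: i5 + 10GBase-T":       {"nodes": 17, "cores": 68},
    "2:  quad-Opteron":         {"nodes": 2,  "cores": 128},
    "3:  partner blades":       {"nodes": 2,  "cores": 32},
}

for name, spec in options.items():
    per_core = BUDGET_GBP / spec["cores"]
    print(f"{name:26s} {spec['nodes']:3d} nodes, "
          f"{spec['cores']:3d} cores, ~£{per_core:.0f}/core")

On raw cost per core the Opteron box comes out cheapest and the blades most expensive, but that metric ignores per-core speed and the interconnect, which is exactly where the options differ.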


I currently favour option 1a). Does anyone have an idea of the point at which a gigabit network becomes too slow and InfiniBand is needed? Is it the number of cores that matters, or the number of nodes?

Option 2) I don't like: although the AMD Opteron chips have 16 cores, they only have 8 FPUs. I don't know how the software would cope with that, and when all the cores are heavily loaded the chip slows down seriously.

Option 3) is just expensive.

Getting new InfiniBand kit seems difficult, and very expensive.

Does anyone have any good reasons why I should not go with option 1?

Thanks

John

June 24, 2013, 08:52   #2
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Give some consideration to a modified version of 1.a), using i7 CPUs rather than i5s. The reason is that the i7 Socket LGA 2011 systems give you much more memory bandwidth per CPU. You will obviously be able to afford fewer cores, but you will also need fewer IB cards and cables.

A very approximate rule of thumb for CFD performance is to start by looking at the total number of memory channels you can get for your money. Your option 1a gives you 60 in total (30 nodes × 2 channels each), with the complexity of dealing with all the IB networking. Option 1a with LGA 2011 i7s would need only 15 nodes (4 channels each) for the same number of memory channels.
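
To make the rule of thumb concrete, here is a minimal sketch of the memory-channel arithmetic. The channels-per-socket figures (2 for socket 1155, 4 for LGA 2011, 4 per Opteron or Xeon E5 socket) are the standard platform values, stated here as assumptions; the node counts are the ones discussed in this thread.

Code:
# Total memory channels for each configuration discussed in this thread.
# Channels per socket: 2 (socket 1155), 4 (LGA 2011), 4 (Opteron),
# 4 (Xeon E5) -- standard platform values, quoted here as assumptions.
configs = [
    # (name, nodes, sockets per node, channels per socket)
    ("1a: 30 socket-1155 nodes", 30, 1, 2),
    ("1a': 15 LGA 2011 nodes",   15, 1, 4),
    ("2: 2 quad-Opteron nodes",   2, 4, 4),
    ("3: 2 dual-Xeon blades",     2, 2, 4),
]

for name, nodes, sockets, channels in configs:
    print(f"{name:26s} -> {nodes * sockets * channels} memory channels")
# Prints 60, 60, 32 and 16 respectively, matching the figures above.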

The 2 × quad-Opteron system gives you only 32 memory channels (4 CPUs × 4 channels per board, across two boards), which is its biggest weakness, rather more so than the reduced number of FPU cores. On the plus side, it will be by far the least hassle, and will get you up and running much more quickly.

The blade solution, I think, gives you only 16 memory channels, which is its biggest shortcoming.

In your position, I would very likely just go for the 2 × quad-Opteron system, with the two nodes networked together directly. You could literally have it up and running in a day or so, compared with all the running around that the eBay InfiniBand system will need. That time counts too!

June 24, 2013, 09:55   #3
New Member
John McEntee
Join Date: Jun 2013
Posts: 8
Thank you for the quick response; I will look into the costs of the i7 option. As you say, memory bandwidth is the best rule of thumb. Does that mean the speed of the memory also matters? The LGA 2011 chips have a higher memory clock, so does that make them more than twice as fast? Also, the cost of a 6 (12) core CPU seems to be more than double that of a 4 (8) core CPU with the same memory bandwidth, which would suggest the cheaper CPU is more cost effective.
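
For what it's worth, the peak-bandwidth arithmetic is simple: channels × transfer rate × 8 bytes per transfer. A minimal sketch, assuming DDR3-1600 DIMMs in both systems (an assumption; the actual memory speed may differ):

Code:
# Peak theoretical memory bandwidth = channels * transfer rate * 8 bytes.
# Assumes DDR3-1600 (1600 MT/s) in both systems -- illustrative only.
MT_PER_S = 1600e6        # DDR3-1600 transfer rate
BYTES_PER_TRANSFER = 8   # each channel is 64 bits wide

for name, channels in [("socket 1155 (2 channels)", 2),
                       ("LGA 2011   (4 channels)", 4)]:
    gb_s = channels * MT_PER_S * BYTES_PER_TRANSFER / 1e9
    print(f"{name}: {gb_s:.1f} GB/s peak")
# 25.6 GB/s vs 51.2 GB/s: exactly 2x at the same DIMM speed, so the
# gain comes from the extra channels rather than the memory clock.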

I will look into the Opteron a bit more closely now, but the parts are harder to come by.

Thanks

John

June 24, 2013, 10:22   #4
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
OK, so the question is how the i7-3820 quad-core compares with the i7-3930K hex-core in CFD terms. Has anybody done this comparison? FWIW, a close approximation might be the E5-2643 vs the E5-2667. The SPECfp_rate numbers for these are 326 vs 416, which suggests a useful percentage advantage for the hex-core. However, if you dig deeper and select only the CFD-style leslie3d test, the difference narrows to 272 vs 301, only about a 10% advantage for the more expensive CPU.
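
A quick check of those ratios, using just the numbers quoted above:

Code:
# Hexcore (E5-2667) vs quadcore (E5-2643), using the scores quoted above.
specfp_rate = (326, 416)   # (quadcore, hexcore) overall SPECfp_rate
leslie3d    = (272, 301)   # (quadcore, hexcore) CFD-style subtest

for label, (quad, hexc) in [("SPECfp_rate", specfp_rate),
                            ("leslie3d", leslie3d)]:
    print(f"{label:12s}: hexcore advantage {hexc / quad - 1:+.0%}")
# SPECfp_rate: +28%; leslie3d: +11% -- hence the "only ~10%" remark.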

Last edited by CapSizer; June 24, 2013 at 13:11.

June 24, 2013, 13:01   #5
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,188
The 3770K is an i7, and you can get i7s in both the 1155 and 2011 sockets. The socket 1155 parts (Ivy Bridge, Sandy Bridge) have dual memory channels; the socket 2011 parts (Sandy Bridge-E) have four memory channels.

June 26, 2013, 08:58   #6
New Member
John McEntee
Join Date: Jun 2013
Posts: 8
Thanks for the help; my current findings are:

Opterons are very difficult to get hold of; none are in stock anywhere. Also, lots of Intel nodes could be redeployed as desktops if the CFD work finishes.

The LGA 2011 motherboards that I can find do not have on-board graphics. This means going from a 1U case per server to a 2U or 3U case, and buying a graphics card. Does anyone know of an LGA 2011 motherboard with on-board graphics?

John

June 26, 2013, 11:07   #7
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Quote:
Originally Posted by jmcentee
Thanks for the help; my current findings are:
The LGA 2011 motherboards that I can find do not have on-board graphics. This means going from a 1U case per server to a 2U or 3U case, and buying a graphics card.
John
If you are really going to go down the eBay InfiniBand route, you will need desktop cases anyway to fit the IB network cards, unless you were planning on fitting them with riser cards.

I'm quite curious to know how well an eBay IB cluster works out. It is an attractive idea, but I have a horrible suspicion it's going to be a lot of work to get everything running properly, and you will be on your own with respect to drivers and compatibility.

What the CFD world really wants is an affordable single-socket LGA 2011 board, in a blade or 1U format, with on-board IB and graphics. http://www.buildablade.com/ have some neat stuff, but I don't think LGA 2011 will work there.

June 26, 2013, 11:27   #8
New Member
John McEntee
Join Date: Jun 2013
Posts: 8
I currently plan to use PCIe riser cards. Sixteen desktop cases would take up too much space, and I'll probably need to buy a rack anyway.

The eBay IB has me curious, as it seems to be a fairly unknown, specialist technology with very little second-hand market. I guess that because all the large companies want the support/warranty, the second-hand kit ends up very cheap. There seem to be plenty of home users successfully buying cheap IB off eBay, and it is not a significant outlay. I believe I would need to work out how to get InfiniBand running even if I bought new kit. The backup plan would be to use Ethernet.

An LGA 2011 board with on-board 10GBase-T Ethernet would be good, as the new Netgear switches are quite cheap, but I can't find one of those either.
