|
May 9, 2014, 14:43 |
OpenFOAM machine recommendation
|
#1 |
New Member
Calum Douglas
Join Date: Apr 2013
Location: Coventry, UK
Posts: 26
Rep Power: 13 |
Hi,
I run OpenFOAM, mostly the pisoFoam solver, for car-body external aerodynamics. Unstructured mesh, cell count around 10 million. I want to spend under £3500 ($5900 USD) on the whole thing. (I have looked over the various existing threads on this forum, but am not quite satisfied.)

Four options:

1) Single-board, quad-socket system with 16-core Opteron 6274s: 64 cores, 256 GB RAM, Supermicro H8QGi-F motherboard, £3500. http://www.pugetsystems.com/featured...orkstation-100
Downside = only one FPU per pair of cores. Upside = an all-in-one built system, no messing around.

2) Build a HELMER-style cluster with perhaps 10 boards, each carrying an 8-core AMD FX-8350, for £3500 (one board + CPU + 8 GB RAM is about £250).
Upside = can afford more cores. Downside = have to set up the networking etc., and it would be Ethernet with a Netgear 16-port GS116 switch.

3) Rent cluster time from Penguin, Amazon or Rescale.com, who all claim to have OpenFOAM-enabled systems.
Upside = no hardware to buy. Downside = can be expensive for all the setup runs when starting new simulations, and the performance of these services is not quantified in publications.

4) Something I didn't consider for under $5900 USD...

I'm well aware that Intel CPUs are in many ways the better option, but since I have no parallel licence costs and can run as many cores as I can buy, from a cost-per-FLOP perspective I'm struggling to see a winning combination from Intel. If I go Intel I basically halve the number of CPU cores I can buy.

Current workstation spec:
Dual-socket Supermicro X7DWA-N motherboard
Dual quad-core 3.33 GHz Xeon X5470
Quadro FX 3700 graphics

Any advice much appreciated.

Regards
Calum Douglas
www.calumdouglas.ch
www.scorpion-dynamics.com |
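For context on what "running on N cores" involves on any of these options: an OpenFOAM case is split into one subdomain per core and launched over MPI. A minimal sketch, assuming a standard OpenFOAM install and a hostfile called machines listing the nodes (both placeholders, not from the thread):

Code:
# write a minimal system/decomposeParDict -- one subdomain per core
cat > system/decomposeParDict <<'EOF'
FoamFile
{
    version 2.0;
    format  ascii;
    class   dictionary;
    object  decomposeParDict;
}
numberOfSubdomains 64;      // e.g. the 64-core quad-Opteron option
method             scotch;  // automatic decomposition, no manual weights
EOF

decomposePar                                     # split the mesh
mpirun -np 64 --hostfile machines pisoFoam -parallel > log.pisoFoam 2>&1
reconstructPar                                   # stitch the result back together

On the single-box option the --hostfile part is not needed; it only matters once the cores are spread over several boards, as in options 2 and 4.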
|
May 12, 2014, 06:31 |
|
#2 |
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Rep Power: 18 |
I would suggest that you shouldn't get too hung up on the number of compute cores the AMD processors offer. Rather, look at what you can get in terms of memory-system performance. There have been some benchmarks published on this forum that quantify the benefit of memory performance quite nicely. The quad-socket AMD solution is not a bad option (because you get 16 memory channels on a single board), but you won't benefit that much from spending the money on the 16-core variants of the CPU. The downside of the single quad-socket board is that it limits you to 1600 MHz RAM, when it is possible to get significantly faster RAM on a single-socket board, but then you also need to take care of the networking, preferably InfiniBand.
Core i7 computers are favoured because you can get 4 memory channels per socket, which you can't get on other single-socket systems. |
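If you want to put a number on the memory-bandwidth point before spending money, the usual yardstick is the STREAM benchmark. A rough recipe, assuming gcc and the stream.c source downloaded from the STREAM site (the array size below is only an example, chosen to be large enough to defeat the caches):

Code:
# compile stream.c (from the STREAM benchmark site) with OpenMP enabled
gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=80000000 stream.c -o stream
# run with roughly one thread per memory channel, then look at the Triad figure
export OMP_NUM_THREADS=8
./stream

The Triad MB/s number is the one to compare between candidate machines; for memory-bound CFD solvers it tracks real performance much better than core count or clock speed.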
|
May 12, 2014, 16:34 |
network protocol
|
#3 |
New Member
Calum Douglas
Join Date: Apr 2013
Location: Coventry, UK
Posts: 26
Rep Power: 13 |
Ah yes, I was afraid the network speed would be the catch.
Anyone got an opinion on the newer 10 Gb Ethernet or Thunderbolt networking options? If I can't afford networking of sufficient speed I will probably just go for the 4-socket motherboard. Thanks for the info. Calum |
|
May 12, 2014, 17:02 |
|
#4 |
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Rep Power: 18 |
10 Gb Ethernet is a strange animal. The cards and switches are virtually as expensive as InfiniBand, but much slower: FDR IB can hit 56 Gb/s. What makes it even worse for 10 Gb Ethernet is that its latency is much higher than IB's. The cheap trick is to look for used IB cards, cables and a switch on eBay. The stuff is surprisingly cheap, but you will be on your own making it work.
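For anyone who does go the second-hand route, a rough outline of getting a link working under Linux, assuming Mellanox-type HCAs and the distribution's OFED and perftest packages (the node name below is a placeholder):

Code:
# confirm the card is detected and, once cabled, the port state reaches ACTIVE
ibv_devinfo
# with an unmanaged switch (or a direct card-to-card cable), exactly one node
# must run a subnet manager
sudo opensm &
# latency and bandwidth between two nodes, from the perftest package
ib_write_lat              # start on node A (server side)
ib_write_lat nodeA        # then run on node B, pointing at node A
ib_write_bw nodeA

If ibv_devinfo shows the port stuck in INIT, it is usually the missing subnet manager rather than a dead card.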
|
|
May 16, 2014, 12:52 |
|
#5 |
Senior Member
Rick
Join Date: Oct 2010
Posts: 1,016
Rep Power: 27 |
Hi, I agree with Charles: you can get cheap IB cards and cables on eBay.
Recently I bought two refurbished cards from the USA, two Mellanox MHGS18-XTC (20 Gb/s), plus a 5 m Mellanox cable (MCC4L28-005, 20 Gb/s) to connect them.
Each Mellanox card cost $32.99, the cable cost $19.
Shipping to Italy: $25.38 (IB cards) + $21.23 (cable).
VAT (yes, we have VAT here in Italy...): $11.66 for the cards + $0 for the cable (luckily it was cheap enough that no VAT was applied).
Conclusion: 2x 20 Gb/s Mellanox IB cards + 5 m cable for $143.25, which I think is very cheap.
Daniele
Last edited by ghost82; May 17, 2014 at 06:58. |
|
May 17, 2014, 09:28 |
|
#6 |
Senior Member
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 172
Rep Power: 13 |
My approach was to go even cheaper:
H8QME boards with 4x Opteron 8381 HE processors, giving only 16 cores per board. Unfortunately this means building custom cases.
Board £100, 4 CPUs £36, 16 GB memory £70, PSU £30, fans £15.
__________________
A CHEERING BAND OF FRIENDLY ELVES CARRY THE CONQUERING ADVENTURER OFF INTO THE SUNSET |
|
May 17, 2014, 15:37 |
|
#7 |
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Rep Power: 18 |
Daniele, did you have any trouble getting the IB stuff to work, and what operating system are you running?
|
|
May 17, 2014, 17:54 |
|
#8 |
Senior Member
Rick
Join Date: Oct 2010
Posts: 1,016
Rep Power: 27 |
I'm running Windows 7 Professional 64-bit on the two nodes.
No problems at all. As suggested by Erik in another post, the only thing to do is install the correct drivers (I downloaded WinOF 2.1.2) and enable the Winsock Direct protocol during installation.
PS: I'm using Fluent, not OpenFOAM.
http://www.cfd-online.com/Forums/har...d-win7x64.html |
|
May 19, 2014, 19:17 |
for openfoam when to go IB
|
#9 |
Senior Member
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 172
Rep Power: 13 |
With 20 x Opteron 8381 (quad-core, 2.5 GHz) processors spread across 7 boards, is it worthwhile going to IB rather than Gb Ethernet? How substantial is the performance difference?
__________________
A CHEERING BAND OF FRIENDLY ELVES CARRY THE CONQUERING ADVENTURER OFF INTO THE SUNSET |
|
May 20, 2014, 04:32 |
|
#10 |
Senior Member
Charles
Join Date: Apr 2009
Posts: 185
Rep Power: 18 |
You should absolutely go with InfiniBand. Don't even think of doing such a cluster with Gb Ethernet. Unfortunately I don't have the results with me here, but when testing a 6-node, dual-socket, hex-core Opteron system a few years ago, the results were startling. For typical test cases (something like 5 million cells), performance on Gb Ethernet was essentially flat beyond about 20 cores, and even got worse with more cores. With QDR IB the improvement was more or less linear all the way up to 72 cores. You are looking at 80 cores for your system ... it definitely must be IB. To put it differently, you would probably do better with 40 cores and IB than with 80 on GbE.
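For anyone who wants to reproduce that kind of curve on their own case before buying, a crude sketch, assuming a short test case (end time already cut down in controlDict) and a hostfile called machines, both placeholders:

Code:
for n in 8 16 32 48 64 80; do
    # set the subdomain count, re-decompose, and run a short burst of time steps
    sed -i "s/^numberOfSubdomains.*/numberOfSubdomains $n;/" system/decomposeParDict
    decomposePar -force
    mpirun -np $n --hostfile machines pisoFoam -parallel > log.$n 2>&1
    grep ExecutionTime log.$n | tail -1      # wall-clock time at this core count
done

Plotting that wall-clock time against n makes the flattening described above very obvious, and it costs nothing but a few hours of machine time.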
|
|
May 20, 2014, 06:18 |
|
#11 |
Senior Member
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 172
Rep Power: 13 |
As I thought: GbE is fine for two nodes/boards, and pushing it at three.
That means I can get to 32/48 cores if I only use the quad-socket boards with GbE, but the remaining cores will need IB. More expense ... the IB cards are quite cheap on eBay, but the switches are still much, much bigger money than GbE.
__________________
A CHEERING BAND OF FRIENDLY ELVES CARRY THE CONQUERING ADVENTURER OFF INTO THE SUNSET |
|
Tags |
cluster, hpc, openfoam |
|
|