February 6, 2018, 13:55
Small Cluster for Ansys
#1
New Member
Alex
Join Date: Feb 2018
Location: USA Colorado
Posts: 2
Rep Power: 0
Hello,
I want to build a small cluster for aero analysis with Ansys for an SAE Formula car. The cluster would be based on 10-20 4th-gen Intel i7/i5 computers with SSDs and 16 GB of RAM each. Is this feasible, or am I going to struggle for small gains? I have been told we need 1 million points for our first basic analysis. The best single computer currently available to us is a 5960X (8 cores / 16 threads, 20 MB L3) @ 4.0 GHz with 80 GB RAM @ 2800 and GTX 980s in SLI. How screwed am I? I am also new to Ansys and still learning from friends/classmates.
February 7, 2018, 06:46
#2
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
That i7-5960X is a pretty neat CFD processor. Once you sort out the memory population it should handle "small" and "medium" cases just fine; 1 million cells counts as "tiny" in this context. Our Formula Student team regularly uses ~200 GB of RAM for cases of 60 million cells or more.
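As a rough sense of scale, here is a back-of-the-envelope sketch. The per-cell figure is extrapolated from the ~200 GB for 60 million cells mentioned above, not from Ansys documentation; actual usage varies a lot with solver, precision, and physical models:

Code:
# Back-of-the-envelope RAM estimate for a CFD case.
# ASSUMPTION: ~3.3 GB per million cells, extrapolated from the
# "~200 GB for 60M cells" figure above; real usage varies widely
# with solver settings, precision, and physical models.
GB_PER_MILLION_CELLS = 200 / 60

def estimated_ram_gb(million_cells):
    """Rough RAM estimate in GB for a mesh size given in millions of cells."""
    return million_cells * GB_PER_MILLION_CELLS

for size in (1, 10, 60):
    print(f"{size:>2}M cells -> ~{estimated_ram_gb(size):.0f} GB RAM")

By that yardstick, a 1-million-cell case fits in a few GB, nowhere near the 80 GB in that machine.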
80 GB of RAM sounds like it is not configured properly. Each of the four memory channels should be populated equally for optimal performance, preferably with identical DIMMs.

Building a cluster of 10-20 smaller machines is a different category. You will need Infiniband interconnects to get decent scaling on that many nodes; Ethernet won't cut it. While decommissioned Infiniband gear can be bought cheaply on eBay, you will need quite a lot of cards and cables, plus a large switch, for this many low-power nodes.

I don't know whether you still need to buy the hardware or already have the i5/i7 nodes. If you still have to buy, you might want to consider dual Xeon E5 (v1 or v2) nodes instead. They offer a pretty decent price/performance ratio, DDR3 reg ECC is cheap, and since each node performs better you need fewer nodes and therefore less networking gear. A smaller number of nodes should also be significantly easier to manage than a large pile of consumer-grade hardware.
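To illustrate why the interconnect dominates at higher node counts, here is a toy strong-scaling model. Every constant in it is invented purely for illustration; real scaling depends on the solver, case size, decomposition, and network topology:

Code:
# Toy strong-scaling model: per-iteration time = compute/N + communication.
# ASSUMPTION: all constants are made up for illustration only. The point
# is just that higher network latency makes speedup flatten at fewer nodes.
def iteration_time(nodes, t_serial=10.0, latency_s=50e-6, msgs_per_iter=2000):
    compute = t_serial / nodes                # perfectly parallel part
    comm = nodes * msgs_per_iter * latency_s  # message traffic grows with partitions
    return compute + comm

for latency_s, name in ((50e-6, "Gigabit Ethernet, ~50 us latency"),
                        (1.5e-6, "Infiniband, ~1.5 us latency")):
    speedup = {n: iteration_time(1, latency_s=latency_s) /
                  iteration_time(n, latency_s=latency_s) for n in range(1, 21)}
    best = max(speedup, key=speedup.get)
    print(f"{name}: speedup peaks around {best} nodes ({speedup[best]:.1f}x)")

With the Ethernet-like latency the toy model tops out around 10 nodes; with the Infiniband-like latency it is still scaling at 20.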
February 7, 2018, 12:59
#3
New Member
Alex
Join Date: Feb 2018
Location: USA Colorado
Posts: 2
Rep Power: 0
Regarding the configuration of the 5960X's RAM: there are two banks of four DIMM slots, one populated with 4x16 GB sticks and the other with 4x4 GB sticks. Will this configuration cause problems, or is it fine as long as I keep the same sticks within each channel?
As for the i5/i7 nodes, they are the only ones we can afford because they are free, but they don't have ECC RAM, and neither does the 5960X. We have little to no funding, but after looking at the used Infiniband gear I think we can afford most of it. Would it be worth spending the $500-$1000 on the gear to set up the nodes, or is it not justifiable because the gains are too small and the cluster would be more likely to crash without ECC?
__________________
Thank you for your time, Alex. Student, learning Ansys, knows Solidworks.
February 8, 2018, 04:23
#4
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
If you really need 80 GB of RAM for your simulations, populate each memory channel with one 16 GB DIMM and one 4 GB DIMM. DIMM slots are usually labeled A1, A2, B1, B2, C1, C2, D1, D2 on X99 motherboards, with letters denoting the channels and numbers denoting the slots within each channel; consult the motherboard manual to make sure. For maximum performance though, just leave the 4 GB DIMMs out of the system and put each 16 GB DIMM in the first slot of each channel. We are talking about a performance increase on the order of 5% compared to a properly populated 80 GB configuration.
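To make the two population schemes concrete, here is a small sketch; the slot names assume the usual A1-D2 labeling described above, so check them against your manual:

Code:
# The two population schemes described above, written as slot -> DIMM size
# in GB (None = slot left empty). Slot labels assume the usual X99 A1..D2
# naming; verify against the motherboard manual.
full_80gb = {"A1": 16, "A2": 4, "B1": 16, "B2": 4,
             "C1": 16, "C2": 4, "D1": 16, "D2": 4}
fast_64gb = {"A1": 16, "A2": None, "B1": 16, "B2": None,
             "C1": 16, "C2": None, "D1": 16, "D2": None}

def per_channel_gb(layout):
    """Sum the DIMM capacity per channel (the letter part of the slot name)."""
    totals = {}
    for slot, size in layout.items():
        totals[slot[0]] = totals.get(slot[0], 0) + (size or 0)
    return totals

print(per_channel_gb(full_80gb))  # {'A': 20, 'B': 20, 'C': 20, 'D': 20}
print(per_channel_gb(fast_64gb))  # {'A': 16, 'B': 16, 'C': 16, 'D': 16}

Either way every channel carries the same capacity, which is what keeps the memory bandwidth balanced.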
Missing ECC is usually not that big of a deal as long as you make sure the systems run properly and the memory has no errors. I would at least run memtest86+ on every node prior to setting up a cluster with that many of them. I hope your nodes all have similar performance, otherwise you will have the additional hassle of load balancing. Reading a few of the DIY Infiniband build logs before buying hardware might be a good idea, to see whether you feel up to the task of setting it up.
Last edited by flotus1; February 8, 2018 at 05:28.
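If the nodes do end up with unequal performance, a common workaround is to weight the domain decomposition by node speed. A minimal sketch, with purely hypothetical node names and benchmark scores:

Code:
# Sketch: split the mesh so every node finishes an iteration at roughly
# the same time. Node names and relative speeds are hypothetical; in
# practice the weights would come from benchmarking your actual solver.
total_cells = 1_000_000
relative_speed = {"node01": 1.0, "node02": 1.0, "node03": 0.6}

total_speed = sum(relative_speed.values())
for node, speed in relative_speed.items():
    share = speed / total_speed
    print(f"{node}: ~{round(total_cells * share):,} cells ({share:.0%} of the mesh)")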
Tags
ansys, cluster computing, clusters, linux, sae