|
April 4, 2016, 11:10 |
HPC hardware configuration
|
#1 |
New Member
David
Join Date: Aug 2011
Posts: 6
Rep Power: 15 |
Good morning everybody,
I was wondering if you could suggest some configurations for CFD HPC calculations. I'm thinking around 128 cores max (64 min). Normally the CFD software would be ANSYS CFX / Fluent. I have quite a decent budget, around 20,000 Euros. Uses will range from research to pre-design parametric configuration studies. Any recommendations? A good pre/post-processing configuration would also be welcome. Thank you in advance |
|
April 4, 2016, 17:47 |
|
#2 |
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,188
Rep Power: 23 |
How many cores does your licensing allow you to use?
The reason I ask is that if you have 2 HPC packs you can run on 32 cores, and if you have 3 HPC packs you can run on 128 cores. So 64 cores seems like an odd number for your cluster. You should size your machines based on how many cores you can actually run on (rough scaling sketch below). Last edited by evcelica; April 6, 2016 at 14:53. |
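If I remember the HPC Pack scaling correctly, each additional pack roughly quadruples the core count you can run on. Here's a back-of-envelope sketch in Python (the 1-pack and 4-pack numbers are from memory, so check them against your actual license):
[CODE]
# Assumed ANSYS HPC Pack scaling: each extra pack quadruples the parallel core count.
# 2 packs -> 32 and 3 packs -> 128 match what I wrote above; the other values are from
# memory, so verify against your actual license terms.
def hpc_pack_cores(packs: int) -> int:
    return 2 * 4 ** packs   # 1 -> 8, 2 -> 32, 3 -> 128, 4 -> 512

for packs in range(1, 5):
    print(f"{packs} HPC Pack(s): up to {hpc_pack_cores(packs)} cores")
[/CODE]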
|
April 8, 2016, 04:32 |
|
#3 |
New Member
David
Join Date: Aug 2011
Posts: 6
Rep Power: 15 |
With ANSYS we have 2 HPC packs, with the possibility of buying 1 more (hence the 128), but we also have Star-CCM+ without core limits.
|
|
April 8, 2016, 05:12 |
|
#4 |
Senior Member
|
I don't know where to even start with a recommendation, to be honest.
Even though 20k EUR seems like a large budget, if you're buying retail I'm afraid you can't get a 128-core mini-cluster for that money.

A single fast 8-core workstation built out of "gaming" components would run you around 2200 EUR, so you could get 8-10 of those, but then you have the additional expense of Infiniband network cards and a switch, which aren't cheap (a few more grand).

If you want to go via the Xeon E7-4890 v2 (or v3), which gives you 15 fast cores and scales well up to 4 sockets in a single machine, that is around 6000 EUR per CPU, so with just 4 CPUs you've blown your budget and are at 60 cores, not 128. And you still need a custom housing with some rather expensive redundant power supplies, a motherboard that'd be nearly 1k EUR, and a whole lot of RAM (a few more grand).

Much the same happens if you pick the E5-2600 series and go for dual-socket machines. CPUs are around 2.8k EUR apiece for 14-18 or so cores (and you'd need the fast ones), the motherboard is around 500-600 EUR, RAM for a single workstation (let's say 128 GB) is around 700 EUR, and with some miscellaneous items (power supply, case, one OpenGL graphics card) you're at roughly 8000 EUR for a single workstation (rough tally below). You could get two of those plus two Infiniband cards, connect them directly without a switch, and stay within your budget, but you'd still be at 72 cores out of the 128 you want.

If you really need/want 128 cores, then you need to re-evaluate your budget, OR, if you are a fearless bastard, go for some shady ES/QS processors.
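Just to make that dual-socket arithmetic explicit, here is the rough tally behind the ~8000 EUR per workstation figure (ballpark retail prices from above; the Infiniband card price is my own guess):
[CODE]
# Back-of-envelope cost of one dual-socket E5-2600-series workstation, using the ballpark
# retail prices mentioned above (EUR, assumptions rather than quotes).
parts = {
    "2x E5-2600 series CPUs (14-18 cores each)": 2 * 2800,
    "dual-socket motherboard":                   550,
    "128 GB RAM":                                700,
    "misc (PSU, case, OpenGL graphics card)":    1150,
}
node_cost = sum(parts.values())                  # ~8000 EUR per workstation

# Two such nodes linked back to back with Infiniband (no switch needed for 2 nodes);
# the per-card price is my own guess for a used/refurbished card.
infiniband_cards = 2 * 500
total = 2 * node_cost + infiniband_cards

print(f"per node: ~{node_cost} EUR")
print(f"2-node mini-cluster: ~{total} EUR, 56-72 cores depending on the CPUs")
[/CODE]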
__________________
If you're in need of some free quality CFD video tutorials - check out SiriusCFD @ YouTube |
|
April 19, 2016, 16:28 |
|
#5 | |
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,188
Rep Power: 23 |
One thing to keep in mind is that with double precision it usually doesn't scale too well past 4 cores per CPU, since you will be memory-bandwidth bottlenecked. Single precision and lower-clock-speed CPUs will scale better, but that is just because the lower-clock-speed CPUs are slower to begin with, so it takes more cores to hit the memory bandwidth bottleneck.

My only thought would be to build as many cheap nodes as possible: maybe some bare-bones i7-5820K PCs with 64 GB RAM each. These could possibly be had for ~1300 each if you get components on the cheaper end. Let's say you got 16 of these; that would be ~21K. Then add 16 Infiniband cards and a switch; hopefully you can pick these up used/refurbished for ~1K to 2K. I would get DDR Infiniband as a minimum; QDR or better would be preferred. You would then have 96 total cores across 16 six-core nodes, and could use 64 of them if solving on only 4 cores per node (quick tally below).

I'm not too proud of this configuration, and it has its drawbacks; it's just the best bang for the buck I could think of to run 64 cores as efficiently as possible.
Drawback: hopefully meshing/pre/post-processing won't be a problem with only 64 GB of RAM on a single-CPU node.
Drawback: installing/configuring software and maintaining 16 machines will be a pain.

Here are the results of a CFX benchmark that I've run on many machines/clusters in the past; hopefully it will be of some use to your decision: |
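Spelling out the arithmetic behind that 16-node idea (a rough tally; the per-node and interconnect prices are just the guesses above):
[CODE]
# Rough tally of the 16-node i7-5820K idea (EUR; prices are the guesses from the post).
nodes = 16
cores_per_node = 6          # i7-5820K is a 6-core CPU
node_price = 1300           # bare-bones box with 64 GB RAM, cheaper components
interconnect = 1500         # 16 used/refurb Infiniband cards plus a switch, assumed 1-2k

total_cost = nodes * node_price + interconnect
total_cores = nodes * cores_per_node
solver_cores = nodes * 4    # solve on only 4 cores per node to avoid the bandwidth wall

print(f"~{total_cost} EUR for {total_cores} cores, {solver_cores} used for solving")
[/CODE]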
||
April 19, 2016, 17:38 |
|
#6 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49 |
This may seem like a good idea, and in fact a single workstation with Intel's X99 platform really is a good choice from a price/performance perspective.
However, you will have a very hard time keeping several of these machines running simultaneously. Stability will be a major issue due to the lack of ECC support with such a huge total amount of memory. And even if it should run long enough to finish a simulation, I would not trust the results. Here is a good read about the topic; single-bit memory errors occur more often than you would guess: http://spectrum.ieee.org/computing/h...and-bad-solder |
|
April 19, 2016, 19:49 |
|
#7 | |
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,188
Rep Power: 23 |
I believe someone else on here had built a 15-node cluster from "gaming" hardware; they should have some feedback on this as well.

For example: I've been running up to 5 machines in parallel for several years. A normal run for me can be several thousand CPU-hours, so they run for days/weeks at a time per run. I have had several problems with Windows updates coming in and killing my jobs with a forced restart, and power outages have killed my jobs several times too, but I've yet to lose a job to a stability problem. It seems stability and non-ECC RAM are the least of my worries. I'm not sure where you got the idea that you couldn't trust the results? That just isn't true. I guess if you really wanted to, you could change the CPUs over to the Xeon E5-1650 v3 and use ECC RAM for a few hundred Euro more per node.

Like I said before, I'm not too proud of this, and I wouldn't ever dream of doing a large cluster with 1000s of cores like this, for the exact reasons you mention. But the proposed budget doesn't really allow us to build a really nice server-level cluster made from dual- or quad-CPU machines, so this was the only way I could think of to get 64 cores of efficient solving for that low a price. A quad-CPU machine (E5-4667 v3) with 64 cores will be over 20K just for the 4 CPUs, with nothing else, and will be very slow in comparison to just half of the nodes in the single-CPU cluster option.

But yes, what you say is true, and it's another drawback that I forgot to mention: it will definitely be less reliable than a server-level machine. Thanks for that input.

On a separate note, here is the updated benchmark graph; I scaled up to using 5 and 6 cores per CPU on some benchmarks. |
||
April 20, 2016, 09:43 |
|
#8 |
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,188
Rep Power: 23 |
I thought about this and realized you only have ANSYS licenses for 32 cores.
So one option would be 4 nodes with dual Xeon E5-2643 v3. That is 48 cores total (8 six-core CPUs), and it would solve very efficiently on 8 cores per node for 32 cores total. You would need the Infiniband switch and network cards as well, but you may be able to get all of this for ~20K (rough numbers below). Stretching it to 64 cores would force you back to many single-CPU systems, like the previous option. |
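In rough numbers (assumptions, not quotes):
[CODE]
# Rough sketch of the 4-node dual E5-2643 v3 option (assumed figures, not quotes).
nodes = 4
cpus_per_node = 2
cores_per_cpu = 6                                          # E5-2643 v3 is a 6-core part

installed_cores = nodes * cpus_per_node * cores_per_cpu   # 48 physical cores
licensed_cores = 32                                        # 2 ANSYS HPC Packs
per_node = licensed_cores // nodes                         # solve on 8 of the 12 cores per node

print(f"{installed_cores} cores installed, solving on {per_node} per node = {licensed_cores} cores")
[/CODE]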
|
April 21, 2016, 03:36 |
|
#9 |
Senior Member
Maxim
Join Date: Aug 2015
Location: Germany
Posts: 413
Rep Power: 13 |
A couple of weeks ago I was asking around for a small cluster and got the following offer for a 32-core cluster:
HPC cluster solution configured as 1 DoubleTwin server (2U) with 4 independent compute nodes (hot-plug) + master/file server.

Technical data per node:
- CPU: 2x Intel Xeon E5-2637 v3 quad-core (135 W), 3.5 GHz
- RAM: 64 GB (8x 8 GB) DDR4 DIMM 2133 MHz
- HDD: 1x 1 TB hard drive
- Network: IPMI 2.0 incl. KVM over LAN and dual Intel Gigabit Ethernet, Infiniband FDR with QSFP on board

Technical data, master:
- 2U case with 740 W redundant power supplies
- CPU: 1x Intel Xeon E5-1650 v3, 6 cores, 3.5 GHz
- RAM: 64 GB (4x 16 GB) DDR4-2133 DIMM, registered, ECC, 2R
- HDD: 8x 2 TB SATA-III hard drives, 7.2k RPM, in RAID-6 (12 TB net)
- Controller: 1x LSI 9271-8i incl. CacheVault
- Network: IPMI 2.0 incl. KVM over LAN and 2x Gigabit Ethernet controller on board, Infiniband FDR card
- DVD-ROM

Including:
- 2x Gigabit switch incl. cables
- 12-port Infiniband switch incl. cables

They were asking 30k € plus taxes, including installation of the software, burn-in test, delivery, and 3 years of 1-day replacement for hardware failures.

I don't know as much as the other guys here and can't tell for sure whether that's a good offer or not. I just thought I'd throw it in as a possible configuration. You are also welcome to share your thoughts on this configuration / make recommendations.

Best regards, Maxim |
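PS: for what it's worth, a very rough price-per-core comparison against the DIY ideas above (the DIY figures are the earlier posts' ballpark guesses and exclude assembly, support and warranty, while the 30k offer is before taxes):
[CODE]
# Very rough EUR-per-core comparison; the DIY numbers are the ballpark guesses from the
# earlier posts (no assembly, support or warranty included), the 30k offer is before taxes.
options = {
    "vendor DoubleTwin offer (this post)":  (30000, 32),
    "16x i7-5820K DIY nodes (post #5)":     (22300, 96),
    "4x dual E5-2643 v3 DIY (post #8)":     (20000, 48),
}
for name, (price_eur, cores) in options.items():
    print(f"{name}: ~{price_eur / cores:.0f} EUR/core ({cores} cores)")
[/CODE]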
|
April 21, 2016, 04:23 |
|
#10 |
New Member
David
Join Date: Aug 2011
Posts: 6
Rep Power: 15 |
Thank you all for your comments, I found them very helpful.
In the end the decision was made to increase the budget, so we have now found a 128-core server for 95k. I still don't have the full configuration, but I will keep you posted if you are interested ^^ cheers |
|
September 16, 2016, 10:43 |
|
#11 |
Senior Member
Gonzalo
Join Date: Mar 2011
Location: Argentina
Posts: 122
Rep Power: 16 |
Hi, can you please post the configuration of the cluster you bought and the budget? I'm trying to upgrade a cluster I'm using and I need some numbers to get an idea of how much money I will need. Thanks
|
|
Tags |
hardware, hpc cluster |
|
|