Server/Cluster configuration to multiphase simulations


September 25, 2019, 11:35  #1
New Member
 
Hilton
Join Date: Dec 2016
Location: Rio de Janeiro
Posts: 5
Hi guys.

I'm helping my team configure a server/cluster to run multiphase simulations with approximately 50M cells in OpenFOAM.

After some research, I've decided that 3 servers, each with 2x Intel Xeon Gold 5218 + 12x 16GB 2667 MHz, would be a good choice. The storage could live in a separate unit with only SATA HDDs (7200 RPM), with everything connected through a 10 GbE switch.

I believe this strategy will make it easier to scale up the cluster's performance later if the team wants to.
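
For reference, this is roughly how I imagine running a case across the three nodes. A minimal sketch only: the hostnames are hypothetical, interFoam is just an example multiphase solver, and 2x Gold 5218 gives 32 cores per node.

Code:
# "machines" hostfile for Open MPI (hostnames are hypothetical):
#   node1 slots=32
#   node2 slots=32
#   node3 slots=32

# decompose the mesh into 96 pieces; the scotch method needs no
# manual processor coefficients in system/decomposeParDict
foamDictionary -entry numberOfSubdomains -set 96 system/decomposeParDict
foamDictionary -entry method -set scotch system/decomposeParDict
decomposePar

# run the example solver across all three nodes
mpirun -np 96 --hostfile machines interFoam -parallel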

However, Dell Brazil and my company's support are recommending a single server with:

2x Intel Gold 6252
12x 64GB 2667 MHz
12x 2.4TB 10k RPM SAS HDDs
1x 3.84 TB SSD

As I'm not totally comfortable with hardware setup, I don't know which strategy I should take.

What's your opinion, guys?

Thanks!!

September 25, 2019, 12:32  #2
Super Moderator
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Quote:
However, Dell Brazil and my company's support are recommending a single server
Did they give any particular reason for replacing 3 servers with 1? Other than the fact that it is obviously cheaper.
In terms of performance, 3 nodes will be much faster for solving your rather large models. But you will need an interconnect. 10 Gigabit Ethernet might be OK-ish for only 3 nodes, but InfiniBand becomes mandatory if you ever want to increase the node count. A quick way to check what the Ethernet link actually delivers is sketched below.
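
As a rough sanity check before committing to anything, an MPI point-to-point test, e.g. with the OSU Micro-Benchmarks, would look like this (hostnames are placeholders). Expect latencies of a few tens of microseconds over 10 GbE with TCP, versus roughly 1-2 us for InfiniBand.

Code:
# point-to-point latency and bandwidth between two nodes
mpirun -np 2 --host node1,node2 ./osu_latency
mpirun -np 2 --host node1,node2 ./osu_bw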
And of course, AMD Epyc CPUs would be the better choice in terms of price/performance. There is a sticky thread here with lots of benchmarks on various hardware. 1st-gen Epyc CPUs are available with huge discounts these days; maybe you can negotiate a price for the whole cluster.

As a side-note:
Quote:
12x 2.4TB 10k RPM SAS HDDs
Unless there is a very specific requirement that only this kind of storage setup can deliver, this is a huge waste of rack space and money. I wonder what that could be... maybe 10 users writing huge files simultaneously, so the box runs out of SSD cache?
Otherwise, I would insist on fewer, larger HDDs with 7200rpm or even 5400rpm. Seems like the SSD will be used for caching, so the speed of the HDD array does not really matter.

September 25, 2019, 19:14  #3
New Member
 
Hilton
Join Date: Dec 2016
Location: Rio de Janeiro
Posts: 5
Quote:
Did they give any particular reason for replacing 3 servers with 1?
Unfortunately, it looks like that kind of interconnect switch is not common around here, so they are really expensive.

Quote:
Otherwise, I would insist on fewer, larger HDDs with 7200rpm or even 5400rpm. Seems like the SSD will be used for caching, so the speed of the HDD array does not really matter.
Great to hear that. We don't have that requirement. I hadn't noticed that the HDDs were taking up most of the budget.

Thank you, Alex.

September 25, 2019, 19:30  #4
Super Moderator
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
I can only find two 10k SAS HDD models with 2.4TB capacity. They cost 380€ and 460€, respectively. That's quite a lot of money for 12 of them. You can get decent enterprise-grade 12TB HDDs in that price range.

And yes, new InfiniBand gear is quite expensive, not only in Brazil. The same can be said for basically any kind of node interconnect beyond 10 Gigabit Ethernet. It might be worth giving Ethernet a try to see how your simulations scale across nodes; you can still upgrade to a different interconnect later. But in my experience, multiphase simulations tend to put more pressure on the interconnect than single-phase simulations do. A simple scaling check is sketched below.
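
A minimal sketch of such a scaling test, assuming the 3-node plan above (solver, case and hostfile names are placeholders): run the same case on 1, 2 and 3 nodes and compare the ExecutionTime lines in the solver logs.

Code:
# strong-scaling check: 1, 2 and 3 nodes (32, 64, 96 cores)
for n in 32 64 96; do
    foamDictionary -entry numberOfSubdomains -set $n system/decomposeParDict
    decomposePar -force
    mpirun -np $n --hostfile machines interFoam -parallel > log.np$n
done

# compare time per step; far less than 3x speedup from 32 to 96
# cores means the interconnect is the bottleneck
grep "ExecutionTime" log.np32 | tail -n 1
grep "ExecutionTime" log.np96 | tail -n 1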


