
HPC for OpenFOAM



March 31, 2021, 15:07   #1
HPC for OpenFOAM

dimdia
New Member
Join Date: Mar 2021
Posts: 1
Hi everyone,

I am doing a PhD on the aerodynamic analysis of a hybrid electric aircraft, and I am trying to write a proposal for an HPC facility at my workplace. I will have to set up an HPC server to run OpenFOAM, but the problem is that I am not an expert in this field and I don't know where to start.

The general idea is to have max 100-150 cores for 1-2 people running simulations.

Any guidelines or useful information?

Thanks in advance.

April 1, 2021, 08:46   #2

Roman1 (Roman)
Member
Join Date: Sep 2013
Posts: 83
HPC providers such as Sabalcore, Kaleidosim, YandexCloud (rent a virtual machine), etc., or your own cluster.

April 6, 2021, 11:13   #3

chegdan (Daniel P. Combest)
Senior Member
Join Date: Mar 2009
Location: St. Louis, USA
Posts: 621
Hey Dimdia,

First off, good luck with writing the proposal ... it's a roller coaster. Since nobody has mentioned much yet, I will jump in.
  1. Look at hardware providers: Get a quote from a hardware provider (Penguin Computing, Sabalcore, etc.), tell them your application, and think about the following points on architecture, node communication, and memory.
  2. Chip Architecture: Almost all CFD applications are memory-bandwidth limited. A chip architecture with more memory channels will help you get more out of CFD; in this respect the recent generations of AMD are your best choice. Azure and Oracle Cloud have them if you want to test things out.
  3. Memory: Memory is important; for meshing, a good rule of thumb is 2 GB per million cells of your simulation. Think about this from a global perspective first, then think about the amount of memory per socket and per core running MPI instances (see the sizing sketch after this list).
  4. Storage: If you have the money to use SSDs on each node, go for it. Also, it's nice to have a storage node on your cluster and data retention policies to prevent data hoarding on your actual compute nodes. Force users to move data to the backup storage and keep your local nodes and home directories as small as possible. If you do this you can use SSDs very effectively.
  5. Cells/Core: Again, this comes down to knowing how big your problem will be, as above. I've seen anywhere from 50k to 250k cells per core do just fine for cases. It really depends on your solver, so overall that is a very loose recommendation (the sizing sketch after this list uses this range).
  6. Node Communication: If you have more than two nodes, you are going to need high-speed node interconnect; it's not simply a router you buy off the shelf. You will compile your own MPI, or use something like Intel MPI, that works with your local drivers to get maximum speed.
  7. Lots of other unknowns: There are quite a few more once you have the hardware, but how you set up the environment is key: which Linux OS, which queueing system (PBS, PBS Pro, SLURM, etc.), and more (see the job-script sketch after this list).
  8. Hardware Location: These machines are noisy and require cooling for optimal performance. Places like Sabalcore or Penguin Computing will literally keep the hardware for you, and you pay a fee to cool it. Some universities will even allow you to do this if you reach out to them.
  9. Crunch the Numbers: Look at HPC compute providers and see what their rates are. Compare the numbers between the quotes from hardware providers, the custom build you come up with, and the cloud or bare-metal compute providers (see the cost sketch after this list). Think about the lifetime of your cluster and whether you will need to hire or contract someone to maintain it. Think about cooling. We have 3 clusters in our group, and it is great to have complete and quick access and really only think about electricity costs (for now). It stings when you need to replace things.
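
To make points 3 and 5 concrete, here is a minimal back-of-the-envelope sizing sketch in Python. The 2 GB per million cells and 50k-250k cells-per-core figures are just the rules of thumb from above, and the 20-million-cell target mesh is an illustrative placeholder, not a recommendation.

Code:
# Back-of-the-envelope cluster sizing from the rules of thumb above.
# All inputs are illustrative placeholders; adjust them to your own cases.

target_cells_million = 20        # placeholder mesh size, in millions of cells
gb_per_million_cells = 2.0       # memory rule of thumb from point 3
cells_per_core_low = 50_000      # lower end of the range from point 5
cells_per_core_high = 250_000    # upper end of the range from point 5

total_memory_gb = target_cells_million * gb_per_million_cells
cores_max = target_cells_million * 1e6 / cells_per_core_low
cores_min = target_cells_million * 1e6 / cells_per_core_high

print(f"Total memory (rule of thumb): ~{total_memory_gb:.0f} GB across all nodes")
print(f"Cores at {cells_per_core_high:,} cells/core: ~{cores_min:.0f}")
print(f"Cores at {cells_per_core_low:,} cells/core:  ~{cores_max:.0f}")

# Sanity check against the 100-150 core budget from the original post.
core_budget = 128
cells_per_core = target_cells_million * 1e6 / core_budget
print(f"{target_cells_million}M cells on {core_budget} cores -> {cells_per_core / 1e3:.0f}k cells/core")

With these placeholder numbers, a 20M-cell case comes out around 40 GB of total memory and roughly 156k cells per core on 128 cores, comfortably inside the range above.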
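
For points 6 and 7, the exact MPI and scheduler setup is site-specific, but as a rough illustration, here is a minimal sketch assuming SLURM and the standard decomposePar / mpirun workflow. It just writes out a batch script; the partition name, module name, and solver (simpleFoam) are placeholders for whatever your site and case actually use.

Code:
from pathlib import Path

# Minimal sketch: write a SLURM batch script for a parallel OpenFOAM run.
# Partition and module names are site-specific placeholders, and simpleFoam
# stands in for whatever solver you actually run.
ntasks = 128   # MPI ranks; must match numberOfSubdomains in system/decomposeParDict

job_script = f"""#!/bin/bash
#SBATCH --job-name=aeroCase
#SBATCH --ntasks={ntasks}
#SBATCH --time=24:00:00
#SBATCH --partition=compute

# Load whatever provides OpenFOAM and MPI on your cluster (site-specific).
module load openfoam

decomposePar > log.decomposePar 2>&1
mpirun -np {ntasks} simpleFoam -parallel > log.simpleFoam 2>&1
reconstructPar > log.reconstructPar 2>&1
"""

Path("run_case.sbatch").write_text(job_script)
print("Submit with: sbatch run_case.sbatch")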
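
On point 9, once you have quotes in hand the comparison is mostly simple arithmetic. The sketch below only shows the shape of the calculation; every number in it is a made-up placeholder, not real pricing.

Code:
# Rough cost comparison: owned cluster vs. renting cloud/bare-metal core-hours.
# Every number below is a placeholder -- replace with your actual quotes.

# Owned cluster (from a hardware quote)
purchase_price = 60_000.0      # placeholder hardware quote
lifetime_years = 5             # expected life before replacement
power_kw = 4.0                 # assumed average draw of ~128 busy cores
electricity_per_kwh = 0.20     # placeholder local electricity price
admin_per_year = 5_000.0       # maintenance / sysadmin contract (placeholder)

busy_hours_per_year = 8760 * 0.6   # assume ~60% average utilisation
owned_per_year = (purchase_price / lifetime_years
                  + power_kw * electricity_per_kwh * busy_hours_per_year
                  + admin_per_year)

# Cloud / bare-metal provider
cores = 128
core_hour_rate = 0.05          # placeholder provider rate per core-hour
rented_per_year = cores * core_hour_rate * busy_hours_per_year

print(f"Owned cluster: ~{owned_per_year:,.0f} per year")
print(f"Rented cores:  ~{rented_per_year:,.0f} per year")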

There are many things to think about, and it is a considerable task to do it right, so it's good to at least get a professional quote, wherever you are. If you're in the US, there are quite a few providers. Best of luck.
LeOtTeRz likes this.


Tags
hpc, linux, openfoam


