August 8, 2018, 07:17 |
interDyMFoam parallel run issues
#1
Member
Akshay Patil
Join Date: Nov 2015
Location: Pune, India
Posts: 35
Rep Power: 11
Hello,
I am trying to get interDyMFoam work on our linux cluster. Please click the link to download the case files https://drive.google.com/open?id=1RR...YI0_XmO29MXLhA When I run the case on my desktop (in serial) mode, the case works fine. However, I want to do the computations on the computing cluster. When I run the case with 20 processors in parallel mode there is a "floating point exception error". I also tried using less number of processors and currently it is working perfectly with 10 processors. However, when I run the case with 20 processors, it fails. I checked all my files and cannot seem to point out what the issue is. I am running the case using openfoam-4.1 on the computing cluster. Looking forward to your suggestions! |
|
August 9, 2018, 06:59 |
#2
New Member
Join Date: May 2018
Posts: 14
Rep Power: 8
Unfortunately, I have noticed that the results vary with the number of subdomains in the decomposition (up to 10% difference between 3 and 15 processors in some of my simulations). I don't have a reliable explanation to offer you.
If you find an explanation, please share it.
|
August 9, 2018, 09:04 |
#3
Member
Akshay Patil
Join Date: Nov 2015
Location: Pune, India
Posts: 35
Rep Power: 11
I think I figured out the problem, though I don't fully understand why it happens. The velocity and pressure fields were not specified in the initial state, and that may be why interDyMFoam failed with the "floating point exception" error. To work around it, I initialised the simulation for 0.1 s with interFoam and then used that result as the base case for interDyMFoam. It now works with any number of processors, but I don't know exactly why. Perhaps interDyMFoam needs an initial velocity field to be specified.
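The workaround described above can be sketched as a command sequence. The 0.1 s pre-run time comes from the post; the processor count and controlDict entries are a generic OpenFOAM restart pattern, not taken from the case files.

```
# 1. Pre-run with interFoam to generate non-trivial U and p_rgh fields.
#    In system/controlDict, temporarily set:
#      application interFoam;
#      endTime     0.1;
decomposePar
mpirun -np 20 interFoam -parallel
reconstructPar

# 2. Restart from the 0.1 s fields with the moving-mesh solver.
#    Restore in system/controlDict:
#      application interDyMFoam;
#      startFrom   latestTime;
decomposePar -force                     # re-decompose, now including the 0.1 s fields
mpirun -np 20 interDyMFoam -parallel
```

The pre-run effectively replaces uninitialised (or uniform-zero) fields with a physically consistent state, which is plausibly why the floating point exception disappears.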
August 28, 2018, 09:55 |
#4
Member
Akshay Patil
Join Date: Nov 2015
Location: Pune, India
Posts: 35
Rep Power: 11
Hello Jeneas,
I was wondering why you said the results vary with the number of processors. Do you mean the computational results themselves, or just the decomposition? I have made further progress with the simulation, and the results do behave differently with 20 cores versus 40 cores. I am not sure why this is happening, though.
Tags: interdymfoam, parallel computation