[RapidCFD] Discussion thread on how to install and use RapidCFD
June 6, 2019, 03:49
#61
New Member
ramune
Join Date: Jun 2019
Location: Japan
Posts: 5
Rep Power: 7
Thank you!
The problem was the graphics card settings. In $WM_PROJECT_DIR/wmake/rules/linux64Nvcc (the "c" and "c++" files), changing the compiler line to
Code:
CC = nvcc -Xptxas -dlcm=cg -m64 -gencode=arch=compute_70,code=sm_70 -arch=sm_70
makes it build without errors. However, it runs more slowly than on my own PC, so I will try to solve that next. Since that is a different topic from this thread, I will work on it on my own.
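For later readers, a minimal sketch of that fix in shell form, assuming the stock rules files target an older architecture (check the actual CC line in your copy first; compute_70/sm_70 is for a Volta-class card and must be matched to your own GPU's compute capability):
Code:
# The two wmake rules files that define the nvcc compiler line:
#   $WM_PROJECT_DIR/wmake/rules/linux64Nvcc/c
#   $WM_PROJECT_DIR/wmake/rules/linux64Nvcc/c++
# In each, change the CC definition to target your GPU, for example:
#   CC = nvcc -Xptxas -dlcm=cg -m64 -gencode=arch=compute_70,code=sm_70 -arch=sm_70
# then rebuild:
cd $WM_PROJECT_DIR && ./Allwmake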
June 6, 2019, 06:22
#62
Senior Member
Agustín Villa
Join Date: Apr 2013
Location: Alcorcón
Posts: 314
Rep Power: 15
Hi,
I have downloaded RapidCFD, and also CUDA 9.1, which is the version my laptop supports. A laptop is perhaps not the optimal place to run large cases, but it lets me see how RapidCFD works and judge whether it will suit my needs in the future. Once CUDA is installed and I start building RapidCFD, I get an error saying that my gcc compiler is newer than the one supported by CUDA 9.1 (gcc 6). How can I get around this problem? Did you experience the same before? I will post the log file later.
June 6, 2019, 10:56
#63
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
agustinvo -
On the gcc compile error: there is a table of gcc versions compatible with each CUDA release. E.g., for CUDA 9.1, the compatible gcc versions are shown at https://docs.nvidia.com/cuda/archive...nux/index.html. For the latest version of CUDA (now 10.1 Update 1), the table of compatible gcc versions is at https://docs.nvidia.com/cuda/archive...nux/index.html. Hopefully you can match your gcc compiler to the CUDA version and OS you are using.

ramune -
Congratulations on solving your compile problem! On your speed issue: as has been discussed in these forums, you need a really large grid to see speed improvements with RapidCFD. Inspired by the thread "Comparison of OpenFOAM on i7, Xeon@32 cores, Xeon Phi Knights Landing, Tesla K20m", I recreated the damBreak case and placed it at https://github.com/TonkomoLLC/RapidCFD-Tests. Hopefully this case will help you with speed testing.
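Before starting a build, it can save time to confirm the pairing up front; a quick check with standard commands:
Code:
# Report the installed CUDA toolkit release and host compiler versions, then
# compare them against NVIDIA's supported-gcc table for that CUDA release.
nvcc --version
gcc --version
g++ --version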
June 6, 2019, 13:49
#64
Senior Member
Agustín Villa
Join Date: Apr 2013
Location: Alcorcón
Posts: 314
Rep Power: 15
Thank you, edaymo, for your answer. I followed the steps given here:
https://stackoverflow.com/questions/...my-gcc-version
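In short, the workaround described there is to point nvcc at a supported host compiler; a sketch, assuming gcc-6/g++-6 are installed alongside the newer system default:
Code:
# Option 1: pass the host compiler explicitly on each nvcc invocation
nvcc -ccbin /usr/bin/gcc-6 --version
# Option 2: symlink a supported gcc/g++ into the CUDA toolkit's bin directory,
# which typically precedes /usr/bin on a CUDA-configured PATH
sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++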
Now I'm facing the error:
Code:
what(): parallel_for failed: no kernel image is available for execution on the device
June 6, 2019, 18:41
#65
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, agustinvo -
Hmm, I don't think I have personally encountered the error "what(): parallel_for failed: no kernel image is available for execution on the device". As you mentioned, there are some posts on this error on the internet. One that I read suggests that you might get it if your GPU is of compute capability < 3 (I think RapidCFD requires a compute capability of at least 3). Anyhow, I hope this lead is helpful and that you find the solution.

Best regards,
Eric
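To check a card's compute capability before building, the CUDA toolkit ships a deviceQuery sample; a sketch, assuming a default toolkit install path:
Code:
# deviceQuery prints a "CUDA Capability Major/Minor version number" line
/usr/local/cuda/extras/demo_suite/deviceQuery | grep -i capability
# Alternatively, look the card up in NVIDIA's table at
# https://developer.nvidia.com/cuda-gpus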
June 6, 2019, 23:50
#66
New Member
ramune
Join Date: Jun 2019
Location: Japan
Posts: 5
Rep Power: 7
Hi, Eric,
Thanks for your reply. I want to run this benchmark, but I think RapidCFD doesn't include the mesh tools. Of course, it is possible to install OpenFOAM and use its mesh tools; I just don't know how to use them with RapidCFD. For now, I run the mesh tools from OpenFOAM before using RapidCFD.
June 7, 2019, 02:52
#67
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, Ramune,
Yes, you are correct. RapidCFD does not have blockMesh, etc., so you must use the utilities (paraFoam, setFields, blockMesh, and so on) from standard CPU OpenFOAM (e.g., 2.3.x).

Best regards,
Eric
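A minimal sketch of that two-environment workflow (installation paths are illustrative; adjust them to wherever your OpenFOAM 2.3.x and RapidCFD bashrc files actually live, and use a fresh shell for each environment to avoid stale variables):
Code:
# 1. Pre-process with standard CPU OpenFOAM
source $HOME/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc
blockMesh
setFields
# 2. In a new shell, switch to the RapidCFD environment and run the GPU solver
source $HOME/RapidCFD/RapidCFD-dev/etc/bashrc
interFoam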
June 7, 2019, 04:07
#68
Senior Member
Agustín Villa
Join Date: Apr 2013
Location: Alcorcón
Posts: 314
Rep Power: 15
Hi Eric,
As you say, the compute capability could be the problem: my humble GeForce 710M has a compute capability of 2.1... But checking the issues on GitHub, I might have found a way to use it, which is to modify the flags passed to nvcc: https://github.com/Atizar/RapidCFD-dev/issues/59. I will check it later today. And yes, this graphics card may not be the best (it is already 6 years old), but I want to play a bit and get familiar with this fork.
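The change discussed in that issue amounts to retargeting the nvcc architecture flags in the wmake rules. A hypothetical, untested sketch for a compute capability 2.1 card; note that CUDA 9.x dropped Fermi (compute 2.x) support, so this may fail regardless:
Code:
# In $WM_PROJECT_DIR/wmake/rules/linux64Nvcc/c and c++, retarget the CC line, e.g.:
#   CC = nvcc -Xptxas -dlcm=cg -m64 -gencode=arch=compute_20,code=sm_21
# then rebuild from the top:
cd $WM_PROJECT_DIR && ./Allwmake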
June 7, 2019, 10:59
#69
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, agustinvo,
I hope that issue helps solve the problem. I read about the requirement of compute capability >= 3 at https://github.com/Atizar/RapidCFD-dev/issues/29. Personally, I have not tested a card with compute capability < 3, so I cannot comment from experience. If you find a solution, please let us know.

Best regards,
Eric
June 9, 2019, 20:53
#70
New Member
ramune
Join Date: Jun 2019
Location: Japan
Posts: 5
Rep Power: 7
Hi, Eric,
I used blockMesh and setFields with OpenFOAM-v1812. After that, I ran interFoam and an error occurred.
June 9, 2019, 21:25
#71
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hello, Ramune,
Please also see https://github.com/Atizar/RapidCFD-dev/issues/56. I have seen this error when there is insufficient memory on the GPU for the case being run. To confirm, re-create the grid with fewer cells; hopefully RapidCFD will then run with a case that is appropriately sized for your GPU memory.

Best regards,
Eric
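A quick way to check whether GPU memory is the limit, using the standard NVIDIA tool:
Code:
# Show total and used GPU memory; watch it while the case starts up
nvidia-smi
# If the case does not fit, reduce the cell count (e.g. halve the block
# divisions in system/blockMeshDict) and re-run blockMesh and setFields.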
June 11, 2019, 03:51
#72
Senior Member
Agustín Villa
Join Date: Apr 2013
Location: Alcorcón
Posts: 314
Rep Power: 15
Hi Eric,
Unfortunately, I tried to build it for compute capability 2.1, but it did not get past the Allwmake step. It seems it is not possible to use this card with RapidCFD. Anyway, thanks for your comments! I hope one day I will have a good enough card to test it.
January 6, 2020, 16:20
WM_PROJECT_DIR issue when installing
#73
New Member
Join Date: Jan 2020
Posts: 1
Rep Power: 0
This is most likely a very basic issue, but I did not find any other instructions on how to fix it. Maybe this issue will result in the instructions in the README file being updated.
So I cloned the repo and tried installing it with ./Allwmake, which gives:
Code:
wmakeCheckPwd error: Current directory is not /opt/openfoam6
Error: Current directory is not $WM_PROJECT_DIR
The environment variables are inconsistent with the installation.
Check the OpenFOAM entries in your dot-files and source them.
The problem is that my current directory indeed was /opt/openfoam6. I even tried copying RapidCFD to /opt/ and setting $FOAM_INSTALL to the RapidCFD directory location. Any help is appreciated!
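Not an official fix, but the usual cause of this message is that the shell is still sourcing another installation's environment (here /opt/openfoam6), so $WM_PROJECT_DIR does not point at the RapidCFD clone. A sketch with an illustrative clone location (check the foamInstall setting in RapidCFD's own etc/bashrc):
Code:
# In a fresh shell, source RapidCFD's bashrc instead of /opt/openfoam6's
source $HOME/RapidCFD/RapidCFD-dev/etc/bashrc
# WM_PROJECT_DIR should now point at the clone...
echo $WM_PROJECT_DIR
# ...so Allwmake's directory check can pass
cd $WM_PROJECT_DIR && ./Allwmake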
November 5, 2020, 11:36
#74
Member
Join Date: Mar 2019
Posts: 81
Rep Power: 7
Dear Foamers (RapidCFDers?!),
I eventually managed to compile RapidCFD and successfully ran a simulation with icoFoam. My next step is to run a simulation with chtMultiRegionSimpleFoam, but the solver seems to be missing, as I get:
Code:
Command 'chtMultiRegionSimpleFoam' not found, but can be installed with:
sudo apt install openfoam
Regards,
MJ
November 5, 2020, 16:56
RapidCFD chtMultiRegionSimpleFoam
#75
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, MJ,
If you go to `$WM_PROJECT_DIR/applications/solvers/heatTransfer/chtMultiRegionFoam`, please try `wmake`, and then repeat `wmake` in the `chtMultiRegionSimpleFoam` subdirectory. I never tried chtMultiRegionFoam with RapidCFD, but I noticed on my computer that it was not compiled. I was able to compile the transient and steady-state CHT solvers by just going to the appropriate source directories and typing `wmake`. Hope you are successful in getting chtMultiRegionSimpleFoam to work.

Best regards,
Eric
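In shell form, the steps above are:
Code:
# Compile the transient CHT solver first, then the steady-state
# variant that lives in its subdirectory
cd $WM_PROJECT_DIR/applications/solvers/heatTransfer/chtMultiRegionFoam
wmake
cd chtMultiRegionSimpleFoam
wmake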
November 5, 2020, 17:37
#76
Member
Join Date: Mar 2019
Posts: 81
Rep Power: 7
Hi Eric,
Your solution worked, and I really appreciate it. But (I hate to say "but") now I am facing another problem:
Code:
--> FOAM FATAL IO ERROR: Unknown patchField type compressible::turbulentTemperatureCoupledBaffleMixed for patch type mappedWall
Thanks a lot.
Regards,
MJ
November 5, 2020, 18:03
#77
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, MJ,
It looks like in https://github.com/Atizar/RapidCFD-d...del/Make/files the BC you want is commented out, so it is not compiled. I tried uncommenting it, and compilation failed. Sorry, but it will probably take some work to get this BC running; I have no idea if it is a little or a lot of effort.

Best regards,
Eric
November 5, 2020, 18:20
#78
Member
Join Date: Mar 2019
Posts: 81
Rep Power: 7
Hi Eric,
I really appreciate your time. This helps me know where to begin. I hope I can get that beast running. Thank you very much.
Cheers,
MJ
December 21, 2020, 15:46
#79
Member
Join Date: Mar 2019
Posts: 81
Rep Power: 7
Dear Eric/Foamers,
I managed to resolve this issue by adding the following library entry to controlDict:
Code:
libs ( "libturbulenceModels.so" );
The solver now starts, but when I let it run straight through, it diverges at the second time step (note Min/max T):
Code:
Time = 1

Solving for fluid region water
AINVPBiCG: Solving for Ux, Initial residual = 1, Final residual = 0.000548682, No Iterations 3
AINVPBiCG: Solving for Uy, Initial residual = 1, Final residual = 0.000573363, No Iterations 3
AINVPBiCG: Solving for Uz, Initial residual = 1, Final residual = 0.000636619, No Iterations 3
AINVPBiCG: Solving for h, Initial residual = 1, Final residual = 0.000759521, No Iterations 29
Min/max T:278.9 285.15
AINVPCG: Solving for p_rgh, Initial residual = 0.99995, Final residual = 9.86198e-07, No Iterations 461
time step continuity errors : sum local = 0.166464, global = -0.00138159, cumulative = -0.00138159
Min/max rho:990 995.83
AINVPBiCG: Solving for epsilon, Initial residual = 0.946346, Final residual = 0.000120173, No Iterations 4
bounding epsilon, min: 0 max: 3522.65 average: 0.0731319
AINVPBiCG: Solving for k, Initial residual = 1, Final residual = 0.000111449, No Iterations 4
ExecutionTime = 23.36 s  ClockTime = 24 s

Time = 2

Solving for fluid region water
AINVPBiCG: Solving for Ux, Initial residual = 1, Final residual = 0.000174815, No Iterations 4
AINVPBiCG: Solving for Uy, Initial residual = 1, Final residual = 0.000173309, No Iterations 4
AINVPBiCG: Solving for Uz, Initial residual = 0.999999, Final residual = 0.000178103, No Iterations 4
AINVPBiCG: Solving for h, Initial residual = 1, Final residual = 0.000772795, No Iterations 30
Min/max T:-1.42592e+20 3.37913e+06
AINVPCG: Solving for p_rgh, Initial residual = 1, Final residual = 9.64568e-07, No Iterations 460
time step continuity errors : sum local = 2.61101e+09, global = 1.98948e+06, cumulative = 1.98948e+06
Min/max rho:990 995.83
AINVPBiCG: Solving for epsilon, Initial residual = 1, Final residual = 0.000766922, No Iterations 4
bounding epsilon, min: 0 max: 4.24267e+14 average: 9.19178e+10
AINVPBiCG: Solving for k, Initial residual = 1, Final residual = 0.000351024, No Iterations 4
ExecutionTime = 44.52 s  ClockTime = 45 s
But if I stop the run after the first time step and then resume it, the second time step behaves normally:
Code:
Time = 1

Solving for fluid region water
AINVPBiCG: Solving for Ux, Initial residual = 1, Final residual = 0.000548682, No Iterations 3
AINVPBiCG: Solving for Uy, Initial residual = 1, Final residual = 0.000573363, No Iterations 3
AINVPBiCG: Solving for Uz, Initial residual = 1, Final residual = 0.000636619, No Iterations 3
AINVPBiCG: Solving for h, Initial residual = 1, Final residual = 0.000759521, No Iterations 29
Min/max T:278.9 285.15
AINVPCG: Solving for p_rgh, Initial residual = 0.99995, Final residual = 9.86198e-07, No Iterations 461
time step continuity errors : sum local = 0.166464, global = -0.00138159, cumulative = -0.00138159
Min/max rho:990 995.83
AINVPBiCG: Solving for epsilon, Initial residual = 0.946346, Final residual = 0.000120173, No Iterations 4
bounding epsilon, min: 0 max: 3522.65 average: 0.0731319
AINVPBiCG: Solving for k, Initial residual = 1, Final residual = 0.000111449, No Iterations 4
ExecutionTime = 23.37 s  ClockTime = 23 s

Time = 2

Solving for fluid region water
AINVPBiCG: Solving for Ux, Initial residual = 0.272723, Final residual = 0.000161735, No Iterations 3
AINVPBiCG: Solving for Uy, Initial residual = 0.263371, Final residual = 0.000155051, No Iterations 3
AINVPBiCG: Solving for Uz, Initial residual = 0.199896, Final residual = 7.86134e-05, No Iterations 3
AINVPBiCG: Solving for h, Initial residual = 0.392765, Final residual = 0.000357602, No Iterations 24
Min/max T:278.9 285.151
AINVPCG: Solving for p_rgh, Initial residual = 0.0593374, Final residual = 9.71312e-07, No Iterations 478
time step continuity errors : sum local = 11.8758, global = 0.0295116, cumulative = 0.0295116
Min/max rho:990 995.83
AINVPBiCG: Solving for epsilon, Initial residual = 0.82644, Final residual = 0.000255127, No Iterations 3
bounding epsilon, min: 0 max: 29851.8 average: 0.273613
AINVPBiCG: Solving for k, Initial residual = 0.720961, Final residual = 0.000259339, No Iterations 3
ExecutionTime = 23.8 s  ClockTime = 24 s
Any help is very much appreciated.
PS: I acknowledge that there is a problem with epsilon; however, the convergence issue when the run is stopped/resumed puzzles me...
December 21, 2020, 16:10
#80
Member
Eric Daymo
Join Date: Feb 2015
Location: Gilbert, Arizona, USA
Posts: 56
Rep Power: 12
Hi, MJ,
Thanks for the update. That was a clever solution. I am honestly not sure why the mapped temperature BC is working when you include the turbulence model library ("libturbulenceModels.so"), since this BC is not supposed to be compiled in RapidCFD. But nonetheless, well done for getting it working.

However, I don't have a specific idea for your error. It could have many causes, from the grid to a bad initial condition for your turbulence parameters to something RapidCFD-specific (vs. OpenFOAM). Does the exact same case run fine in OpenFOAM 2.3.1, or does the same problem occur there? If it occurs in OpenFOAM too, it is a general OpenFOAM issue; if not, it is something related to RapidCFD in particular. This might help with finding the root cause of your problem.

Best regards,
Eric
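In practice that comparison is just re-running the identical case directory under CPU OpenFOAM; a sketch, with illustrative paths (RapidCFD is forked from 2.3.x, so a 2.3.x install is the closest comparison):
Code:
# In a fresh shell, switch to standard CPU OpenFOAM
source $HOME/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc
cd /path/to/the/same/case
chtMultiRegionSimpleFoam | tee log.cpu
# Compare residuals and Min/max T against the RapidCFD log to see
# whether the divergence is RapidCFD-specific.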