|
September 28, 2022, 18:47 |
#21
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Click on the little arrow ">" to go to the post.
September 30, 2022, 06:38 |
#22
New Member
Maik
Join Date: Sep 2022
Posts: 12
Rep Power: 4
Hi guys!
Yesterday I finished setting up key-less SSH, and it now works across the whole system. So far everything is going like clockwork, and in the next few days I will take Will's advice and test the cluster with the benchmark programs he linked in his post.
About a scheduler: only one person at a time works on the server (at the moment that is just me; later it will also be at least one of my colleagues), so my question is whether a scheduler is really necessary. Isn't a scheduler like SLURM only needed when more than one person works on the server and resources have to be allocated? Cheers, Maik
September 30, 2022, 06:49
#23
Senior Member
Join Date: Oct 2011
Posts: 242
Rep Power: 17
I use SLURM, like many others, but honestly it is always a bit tricky to set up, and in my opinion it is not worth it for you right now, given that only two users will use the cluster.
September 30, 2022, 09:03
#24
New Member
Maik
Join Date: Sep 2022
Posts: 12
Rep Power: 4
Thanks for your advice, I think I am going to try the MPI test you were talking about. In the meantime I tried out OpenFOAM and followed the instructions in the attachment.
I did that just on my laptop and it ran through. The results are in the attachments. So what exactly am I looking at? And how is all this going to work with my six machines? I know... those are really "dumb" questions, but in this economy I am dumb. But I want to learn! Cheers, Maik
September 30, 2022, 09:12
#25
Senior Member
Join Date: Oct 2011
Posts: 242
Rep Power: 17
Hello, I can't give you much support with OpenFOAM as I do not know it. But it looks like you ran the test using one process, as one of the screenshots suggests. You could first try running it with 2 processes on your laptop to check that OpenMPI is installed and detected correctly.
There are three steps in the example you posted: pre-processing (I guess creation of the mesh etc.), which is blockMesh; then the solver (incompressible, I imagine), which is simpleFoam; and post-processing (probably conversion of the results to VTK or similar), which is paraFoam. You need to know which of these programs (at least simpleFoam, for sure) benefit from parallelization through domain decomposition; those are the ones you launch on the cluster. That is why I suggest starting with a simpler MPI program, to understand the mechanism and to check your NFS/ssh/MPI installation.
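A minimal sketch of such a first check, assuming the tutorial case from the attachment and a system/decomposeParDict set up for 2 subdomains (the case path and core count are placeholders, adjust them to your setup):
Code:
# quick MPI sanity check, no OpenFOAM involved: should print your hostname twice
mpirun -np 2 hostname

# 2-core run of the tutorial case (assumes numberOfSubdomains 2 in decomposeParDict)
cd path/to/tutorialCase              # placeholder path
blockMesh                            # pre-processing: build the mesh
decomposePar                         # split the mesh into 2 subdomains
mpirun -np 2 simpleFoam -parallel    # run the solver on 2 processes
reconstructPar                       # merge the decomposed results again
If both steps run without MPI errors, the local OpenMPI and OpenFOAM installation is fine, and the remaining work is mostly about reaching the other nodes (ssh, NFS, hostfile).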
September 30, 2022, 14:18
#26
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
2. The "tutorials/incompressible/simpleFoam/motorBike" case runs on 6 cores. Try it and see if your MPI installation works. Note that OpenFOAM uses "runParallel" in the "Allrun" script to initiate the parallel run. You can change the number of cores used in "system/decomposeParDict.6".
3. Next, get the benchmark case running. Now you can compare your machines. (You can also do the comparison with the motorBike tutorial above.)
4. The benchmark files include deactivated code for multi-node runs. In the basecase directory:
a. Modify the "hostfile" file to correspond to the info for the nodes you want to use. It currently has a setup for the 3x Dell R810 that I own. For the DL1000, you should enter the IP addresses of all four nodes with slots=8. (I can call my machines by name because they are defined in DNS.) See the sketch after this list.
b. Modify the run.tst script by changing "Allmesh_parallel" to "Allmesh_cluster", commenting out the mpirun command for simpleFoam and uncommenting the mpirun command above it, and selecting the number of cases (for the DL1000: cases="32 24 16"). Also make sure that prep=1 if you haven't generated the meshes yet.
c. The run_simple.sh and run_snappy.sh scripts source your ~/.profile to ensure that the OpenFOAM environment is properly initialized. This works if you can just log in and run OpenFOAM cases without further commands; otherwise, you need to add those extra commands to the scripts as well. The reason is that MPI logs into the other machines with ssh, and before simpleFoam runs there, the environment has to be correct. You can check by logging into another machine by hand; that should happen without any interaction thanks to your key-less login setup. Then type simpleFoam: "program not found!" Try running run_simple.sh: some other errors about files not found (unless you are in the proper run_## directory) mean it is set up correctly.
d. Type "./run.tst" and watch what is running on the various machines with htop. Report the results in this thread. After some optimization, report the final results in the "OpenFOAM benchmarks on various hardware" thread.
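A sketch of what the hostfile for item 4a could look like, plus a quick way to test it; the IP addresses and the hostname "node2" are placeholders, not the actual benchmark files:
Code:
# hostfile: one line per node, slots = number of processes to start there
192.168.0.101 slots=8
192.168.0.102 slots=8
192.168.0.103 slots=8
192.168.0.104 slots=8

# cheap cluster-wide test: should print 32 hostnames (8 per node)
mpirun --hostfile hostfile -np 32 hostname

# rough check of what a non-interactive ssh session sees (mpirun starts its
# remote processes over ssh in a similar way)
ssh node2 'which simpleFoam'
Note that the slots syntax above is the Open MPI hostfile format; if your cluster ends up using a different MPI implementation, the hostfile format differs.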
September 30, 2022, 15:07
#27
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
If things go really well, you can try a run with all your computers. For the slower machines, reduce the number of slots below the maximum cores available. This will speed up the memory access per core. In this way, you can balance the calculations better. Otherwise, your performance will be determined by the slowest core in your cluster. Your speed test under Item 3 above should give a hint as to which machines are slower.
Some people record power consumption along with the run time and calculate the total kWh for the simulation as a function of the number of cores (and, in your case, machines). It will be interesting to see what that looks like. My DL560 G8 completes the benchmark in 40.4 seconds for about 0.008 kWh (685 W at load, 160 W idle).
Last edited by wkernkamp; September 30, 2022 at 17:19.
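That figure is simply the power draw at load times the wall-clock time, converted to kWh; a quick sketch of the arithmetic (the 685 W and 40.4 s numbers come from the post above, the one-liner is only an illustration):
Code:
# energy per run [kWh] = power at load [W] * wall time [s] / 3.6e6 [J per kWh]
awk 'BEGIN { printf "%.4f kWh\n", 685 * 40.4 / 3.6e6 }'    # prints 0.0077 kWh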
October 13, 2022, 04:48
#28
New Member
Maik
Join Date: Sep 2022
Posts: 12
Rep Power: 4
Hi everyone,
Just FYI, I am a bit busy at the moment; I will get back to the benchmark tests etc. as soon as I can. Best regards, Maik
Tags |
hpc, openfoam, server, setup |