
How to write these time-dependent equation (PEM Fuel Cell)

October 1, 2024, 06:33   #41
Senior Member
Lasse Brams Vinther (Swagga5aur)
Join Date: Oct 2015
Posts: 118
Hello Ron,

I just checked the case, as I hadn't touched it while updating the solver. I ran it on a single processor, and at least there it worked fine.

From what I can see, the solver needs a parprep directory, which is not part of the case, but I believe it is quite similar to the HTPEM solver from Jülich, so I will give it a whirl. However, some changes must be made to the case, and therefore to the general solver as far as I know, mainly combining the bp regions into one, since that is required as a reference with the mentioned decomposition method.

I'm currently in contact with the original HTPEM cell code developer about making it available again, as I can't find it in the old repository anymore. Its decomposition is most likely very similar and would allow you to set this up yourself, since my time is sadly limited.

A brief flow of the decomposition looks like this. It requires a custom decomposer that uses a gauge mesh (the bpp mesh) to generate the cellID directories, via the makemesh3dFoam command from the HTPEM solver. Note that region1 appears because the mesh is generated with snappyHexMesh, where the ambient mesh is not assigned to a region and therefore ends up as region1, which we then translate to interconnect.
Code:
#!/bin/bash

## Edit system/decomposeParDict for the desired decomposition.
## Set the environment variable NPROCS to the number of processors,
##     e.g., export NPROCS=2 (bash) or setenv NPROCS 2 (csh).
## Make the mesh first, then run this script.

# Remove cellID data left over from previous runs
rm -rf constant/*/cellID
rm -rf constant/cellID

echo "NPROCS = " $NPROCS

# Generate the cellID directories from the gauge (bpp) mesh
makemesh3dFoam

# snappyHexMesh leaves the ambient mesh as region1; mirror it from interconnect
rm -rf constant/region1
cp -rf constant/interconnect constant/region1

cp system/decomposeParDictSim system/decomposeParDict

cp -rf system/decomposeParDict system/air/.
cp -rf system/decomposeParDict system/fuel/.
cp -rf system/decomposeParDict system/interconnect/.
cp -rf system/decomposeParDict system/electrolyte/.
rm -rf system/region1
cp -rf system/interconnect system/region1

# To reconstruct and visualize the regions, we need the *ProcAddressing files
# created by decomposePar -region <region name>.
# After the region decomposition, the processor* directories are renamed proc_*
# to (a) allow the parallel decomposition to proceed
# while (b) saving the *ProcAddressing files for a later copy.

decomposePar -force
decomposePar -region air
decomposePar -region fuel
decomposePar -region electrolyte
decomposePar -region region1

mpirun -np $NPROCS foamExec splitMeshRegions -cellZones -parallel

# Merge the region data written by splitMeshRegions (time directory 1)
# back into each processor's constant directory
I=0
while [ $I -lt $NPROCS ]
do
	rsync -av processor$I/constant/ processor$I/1/

	cp -rf processor$I/1/. processor$I/constant/.

	rm -rf processor$I/1

	cp -rf processor$I/constant/region1 processor$I/constant/interconnect
	I=$((I+1))
done

mpirun -np $NPROCS foamExec topoSet -region fuel -parallel
mpirun -np $NPROCS foamExec topoSet -region air -parallel
I haven't tried collated threads, but in general, running these custom solvers in parallel isn't just a matter of prepping the case, decomposing, and running.
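
For completeness, a typical invocation of the workflow above might look like the sketch below; the script file name (parprep) and the solver name are assumptions, since only the directory name is mentioned in the thread.
Code:
# usage sketch (assumed names: the script above saved as ./parprep,
# <solverName> standing for the multi-region fuel cell solver)
export NPROCS=4        # must match numberOfSubdomains in decomposeParDict
./parprep              # mesh prep + region-consistent decomposition
mpirun -np $NPROCS foamExec <solverName> -parallel > log.solver 2>&1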

October 2, 2024, 09:44   #42
New Member
Ron71
Join Date: Dec 2022
Posts: 15
Quote:
Originally Posted by Swagga5aur View Post
(post #41 quoted in full; see above)
Hey Lasse again,

Now I have tried such a custom decomposer, resulting in a very nice processor map, as shown in the attached figure for the cathodeFluid region only. I can run the CFD for about 20 iterations, but then it crashes again. Dump:
Solving cathode liquid water saturation
DILUPBiCG: Solving for S, Initial residual = 0.9893958, Final residual = 1.07864e+12, No Iterations 1000
ScathodeLiquidWater min mean max = -8.829769e+11 -6.783752e+10 8.668707e+11


I understand it's important not to have decomposition cut planes through the channel regions, which is no longer the case here. Another option for improvement might be to test a finer base mesh...
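
As a side note, a generic OpenFOAM option (not something used in this thread) for keeping processor boundaries out of specific zones is a decomposition constraint in decomposeParDict; a minimal sketch, assuming the channels are available as a faceZone named channelFaces (a hypothetical name):
Code:
// system/decomposeParDict (sketch; the zone name channelFaces is assumed)
numberOfSubdomains  4;
method              scotch;

// keep owner and neighbour cells of every face in the listed faceZones
// on the same processor, so the decomposition does not cut through them
preserveFaceZones   (channelFaces);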

cheers, Ron
Attached Images: cathodenFluid_procmesh.jpg (82.2 KB)

October 2, 2024, 10:43   #43
New Member
Ron71
Join Date: Dec 2022
Posts: 15
SOLVED.
I changed the linear solver settings for the S scalar in the cathodeFluid region:
S
{
    solver                     GAMG;
    smoother                   GaussSeidel;
    tolerance                  1e-13;
    relTol                     0.0;

    nPreSweeps                 0;
    preSweepsLevelMultiplier   1;
    maxPreSweeps               4;
    nPostSweeps                2;
    postSweepsLevelMultiplier  1;
    maxPostSweeps              4;
    nFinestSweeps              2;
    interpolateCorrection      no;
    scaleCorrection            yes; // yes: symmetric, no: asymmetric
    directSolveCoarsest        no;
}

I'm not sure whether all those settings really make sense, but it succeeded. The logfile is attached.
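
For orientation, such an entry sits in the solvers sub-dictionary of the region's fvSolution; a minimal sketch, with the file path assumed from the region name mentioned above:
Code:
// system/cathodeFluid/fvSolution (path assumed from the region name above)
solvers
{
    S
    {
        solver      GAMG;
        smoother    GaussSeidel;
        tolerance   1e-13;
        relTol      0.0;
        // ...remaining GAMG settings as listed above
    }

    // entries for the other fields solved in this region go here
}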

cheers, Ron


Quote:
Originally Posted by Swagga5aur View Post
(post #41 quoted in full; see above)
Attached Files: log_parallel_MultiPhase_HTPEMFC.zip (134.2 KB)

October 2, 2024, 11:15   #44
New Member
Ron71
Join Date: Dec 2022
Posts: 15
The empty case is now attached as a zip.


Quote:
Originally Posted by Swagga5aur View Post
(post #41 quoted in full; see above)
Attached Files: empty_Case.zip (56.4 KB)
