Radiation Modeling Using Discrete Ordinates Method and Parallel Solver |
|
April 15, 2015, 19:37 |
Radiation Modeling Using Discrete Ordinates Method and Parallel Solver
|
#1 |
New Member
Join Date: Apr 2015
Posts: 7
Rep Power: 11 |
Hi, I am trying to solve a coupled fluid flow and radiation problem using the FLUENT parallel solver for a 3 m diameter x 5 m long cylinder. One end of the cylinder acts as the window, where the radiation inlet boundary condition is specified. The other end acts as the inlet for the fluid flow, and an inner tube acts as the outlet. I am using the k-epsilon model for the flow and the Discrete Ordinates (DO) radiation model. I have two user-defined functions that specify the absorption and scattering coefficients of the fluid over 11 wavelength bands. I have also split the faces on the window end of the cylinder (~4,400 faces) so that a different radiation boundary condition can be applied to each face.
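For reference, the absorption-coefficient UDF is hooked through the DEFINE_GRAY_BAND_ABS_COEFF macro, which FLUENT evaluates per cell and per band. A minimal sketch of the kind of UDF I mean (the coefficient values and the temperature dependence below are placeholders, not my actual property data):

Code:
#include "udf.h"

/* Band-wise absorption coefficient for the non-gray DO model.
 * nb is the wavelength-band index (0 ... number_of_bands - 1).
 * The coefficients below are placeholders, not real property data. */
DEFINE_GRAY_BAND_ABS_COEFF(band_abs_coeff, c, t, nb)
{
    real T = C_T(c, t);   /* local cell temperature */
    real k = 0.2;         /* default for any band not listed below */

    switch (nb)
    {
    case 0: k = 0.5 + 1.0e-4 * T; break;   /* band 0 */
    case 1: k = 1.2 + 5.0e-4 * T; break;   /* band 1 */
    /* ... cases 2-10 for the remaining bands ... */
    }
    return k;
}

The scattering coefficient is hooked separately (as far as I understand, through a DEFINE_PROPERTY UDF selected in the material panel).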
There are several areas where I would like input:

1. The case file (~52 MB) takes about 2 hours to load. How can I speed this up?
2. Once the case file loads, hybrid initialization takes about 2 hours to finish. After the user input window says initialization is done, FLUENT hangs for about 15-24 hours (the mouse pointer turns into a wheel and I cannot click any prompts). Is it normal for this to take so long?
3. Using the parallel solver, the calculation speed is 1 iteration per 24 hours; using the serial solver, it is 10 iterations per 24 hours. All calculations are computed on a single computer with multiple cores, so all nodes are on the same machine. The parallel solver should not be slower than the serial solver, and I am trying to troubleshoot why there is such a large discrepancy in solver performance.

FLUENT v14
Fluent Launcher variable: -cl -s50000
Domain: 3 m diameter x 5 m long cylinder with an outlet tube 0.6 m in diameter.
Mesh details: 495,914 nodes; 2,038,075 elements.
DO model settings: 40 theta divisions, 1 phi division, 3 phi pixels, 3 theta pixels, 11 wavelength bands.
Boundary conditions: radiation inlet conditions specified for ~4,400 faces at the window; mass flow inlet and outlet conditions specified.
Parallel solver settings: 3D, double precision, number of processors: 10, pcmpi.

Parallel Solver Auto Partition Details: 10 Active Partitions

P   Cells    I-Cells   Cell Ratio   Faces    I-Faces   Face Ratio   Neighbors   Load
0   190821   14597     0.076        396350   19710     0.050        9           1
1   190822   14945     0.078        396506   19999     0.050        9           1
2   190822   14784     0.077        396460   19922     0.050        9           1
3   190822   14752     0.077        396427   19802     0.050        9           1
4   190821   14704     0.077        396498   20009     0.050        9           1
5   190822   14725     0.077        396480   19919     0.050        9           1
6   190822   14796     0.078        396472   19918     0.050        9           1
7   190822   14869     0.078        396576   20094     0.051        9           1
8   190822   14854     0.078        396462   19878     0.050        9           1
9   190822   14534     0.076        396325   19607     0.049        9           1

Collective Partition Statistics:
                                      Minimum   Maximum   Total
Cell count                            190821    190822    1908218
Mean cell count deviation             -0.0%     0.0%
Partition boundary cell count         14534     14945     147560
Partition boundary cell count ratio   7.6%      7.8%      7.7%
Face count                            396325    396576    3865127
Mean face count deviation             -0.0%     0.0%
Partition boundary face count         19607     20094     99429
Partition boundary face count ratio   4.9%      5.1%      2.6%
Partition neighbor count              9         9

Partition method: Cylindrical Theta-Coordinate
Stored partition count: 10

Computer specifications:
- Microsoft Windows 7 (6.1) 64-bit Service Pack 1 (Build 7601)
- CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00 GHz
- Display: AMD FirePro 2270 (512 MB)
- Memory: DIMM1 (4096 MB), DIMM2 (4096 MB), DIMM3 (16384 MB), DIMM4 (4096 MB), DIMM5 (16384 MB), DIMM6 (4096 MB), DIMM7 (4096 MB), DIMM8 (4096 MB)

A couple of things to note:
1. RAM is 99% utilized during calculations. This may be a factor in the slow calculations, since FLUENT will fall back on the HDD when RAM is completely used.
2. I have specified in the parallel solver settings that FLUENT use 10 processors, but the processor in this machine has 6 cores and 12 threads. I may try running it again with 6 processors instead of 10.
3. RAM quad-channeling may not be utilized properly. I am using my university lab computer, so I may need to swap the RAM modules around, or get new ones with the same amount of memory.

Thank you for all your help! |
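Edit: a rough, back-of-the-envelope estimate of the size of the DO problem, in case it is relevant to the memory use (my own numbers, so please check them). If I read the theory guide correctly, the non-gray DO model solves one intensity transport equation per direction per band, with 8 x (theta divisions) x (phi divisions) directions in 3D:

8 octants x 40 theta x 1 phi divisions        = 320 directions
320 directions x 11 wavelength bands          = 3,520 intensity equations
3,520 intensity fields x ~2.0 million cells   ≈ 7 x 10^9 intensity unknowns

which, if correct, would go a long way toward explaining both the RAM saturation and the slow iterations.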
|
April 20, 2015, 19:04 |
|
#2 |
New Member
Join Date: Apr 2015
Posts: 7
Rep Power: 11 |
Would greatly appreciate any help or insight into this issue! Thank You!
|
|
April 21, 2015, 07:09 |
|
#3 |
Senior Member
Paritosh Vasava
Join Date: Oct 2012
Location: Lappeenranta, Finland
Posts: 732
Rep Power: 23 |
First, you should consider converting your mesh to polyhedral cells. This should reduce the mesh size significantly and thus help speed up the calculation.
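If I remember correctly, the conversion can also be done from the TUI (please check the exact menu path in your FLUENT version, this is from memory):

Code:
/mesh/polyhedra/convert-domain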
You have 40 theta divisions, which may be too high, especially at the very beginning of the solution. Start with something as low as 1 and increase it gradually as the solution develops. The 3 phi pixels and 3 theta pixels could also be lowered to 1 until your solution stabilizes. I have not used the wavelength bands, so I cannot comment on that, but you can also try to keep them low in the early phase of the simulation. If your case involves flow, turbulence and radiation, try to solve it step by step: first solve the flow, then include turbulence, then radiation, and so on. |
|
May 25, 2018, 15:25 |
|
#4 |
New Member
Zahra badiei
Join Date: Sep 2017
Posts: 5
Rep Power: 9 |
Hello,
I want to model radiation with the DO model, but I have one question. I am using 2 wavelength bands and I am not sure how to set this up: when I choose 2 bands, I have to enter 2 radiation values in the boundary conditions, and I don't know how to calculate the radiation flux for each band. Thanks for your help. |
|
Tags |
discrete ordinates, fluent, parallel, radiation |
|
|
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
multiphase solver - parallel processing - GAMG | thibault_pringuey | OpenFOAM Programming & Development | 2 | August 27, 2013 23:03 |
Finite area method (fac::div) fails in parallel | cuba | OpenFOAM Running, Solving & CFD | 10 | November 20, 2012 08:03 |
subsetMotion solver in parallel | WiWo | OpenFOAM | 0 | March 21, 2012 11:21 |
Parallel Poisson solver | nikosb | Main CFD Forum | 0 | February 27, 2012 15:24 |
Working directory via command line | Luiz | CFX | 4 | March 6, 2011 21:02 |