
Problem with CFX solution speed

March 6, 2015, 06:39   #1
kiarash kiani (reza_k)
New Member
Join Date: Mar 2015
Posts: 13
Dear experts,
I am a beginner at ANSYS CFX, and I have some problems with solution speed.

I am working on a numerical simulation of film cooling. The domain contains three zones:

1- The main flow region (which can be treated as a long flat plate)
2- The plenum (which contains the coolant fluid)
3- The passage (which links the plenum to the main flow region)

The solution setup is:

a. CPU: Core i7-4820K
b. Parallel solver on 4 real cores (only about 2 times faster than serial!?)
c. 3.2 million elements
d. Two translational periodic interfaces in the domain
e. Average y+ = 0.5
f. Material: Air Ideal Gas
g. Turbulence model: SST k-omega
h. Steady-state solution


With these settings, each iteration takes about 100 s. I know CFX is a fully coupled solver and consequently each iteration takes more time than in a segregated solver... I would appreciate it if some experts could help answer the questions below:
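
A quick back-of-the-envelope check of these numbers (a rough Python sketch only; the iteration count used for the total run time is an assumed figure for illustration, not something measured here):

Code:
# Rough arithmetic on the figures quoted above: parallel efficiency on 4 cores
# and the wall-clock time implied by ~100 s per iteration.

seconds_per_iteration = 100.0   # observed with the parallel run on 4 cores
speedup_vs_serial = 2.0         # "about 2 times faster than serial"
n_cores = 4

# Fraction of the ideal 4x speedup actually achieved.
efficiency = speedup_vs_serial / n_cores
print(f"Parallel efficiency: {efficiency:.0%}")          # 50%

# Total wall-clock time if, say, 500 iterations were needed to converge
# (an assumed value purely for illustration).
iterations_to_converge = 500
hours = iterations_to_converge * seconds_per_iteration / 3600.0
print(f"Approximate run time: {hours:.1f} h")            # ~13.9 h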


Q.1 - How much time per iteration is reasonable for this kind of problem?

Q.2 - Is there any way to solve this problem faster without changing the hardware?


Thanks a lot...

March 6, 2015, 07:45   #2
monkey1
Senior Member
Join Date: Jul 2011
Location: Berlin, Germany
Posts: 173
I can only give you one hint concerning computing speed. According to something I once heard from ANSYS support, the optimum for CFX is about 250,000 elements per core. That means for a 3.2 million element mesh you would need at least 3.2e6 / 250,000 ≈ 13 cores.
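
Written out as a small illustrative calculation (the 250,000 elements per core figure is just the rule of thumb mentioned above, not an official ANSYS number):

Code:
# Cores needed for the ~250,000 elements per core rule of thumb quoted above.
import math

elements = 3.2e6              # mesh size from the original post
elements_per_core = 250_000   # rule-of-thumb target

cores_needed = math.ceil(elements / elements_per_core)
print(f"Cores for good CFX scaling: {cores_needed}")   # 13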
Apart from reducing the number of elements, or the number of solved equations (i.e. solving only the flow without heat transfer or any extra equations), or reducing the numerical quality (by relaxing the convergence criteria, switching the solver from double to single precision, avoiding the High Resolution advection scheme, etc.), I don't have any tips on how to increase computing speed.
Except for one:
Carefully check your mesh quality. The fewer "bad" elements you have (the quality criterion in ICEM should be >0.2 for tetrahedra and >0.1 for prism layers), the faster the solution converges. Also, from my experience a structured mesh gives a faster solution than an unstructured one.

March 6, 2015, 07:48   #3
Glenn Horrocks (ghorrocks)
Super Moderator
Join Date: Mar 2009
Location: Sydney, Australia
Posts: 17,854
Your mesh is 3.2 million nodes/elements. Have you shown that you need a mesh that fine? You might have made it finer than required. Alternatively, your mesh might not be fine enough and you need it finer. Unless you have checked, you do not know and you are just guessing - and unless you check, you will have no idea whether your results are accurate or not.

So generate another mesh with double the element edge length (so approximately 1/8 the number of nodes) and compare the fully converged results. If there is no difference then the coarser mesh is fine.

And if the check says you need the finer mesh, then you will just have to put up with long run times. The longest simulation I have done ran for 6 weeks. At 100 s per iteration you would be able to do 36,288 iterations in that time - that should be enough to get convergence.
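
A small sketch of the arithmetic behind this advice (illustrative only; it restates the 1/8-node estimate and the 6-week iteration count using the ~100 s per iteration quoted earlier in the thread):

Code:
# Doubling the element edge length in 3D reduces the node count by roughly
# a factor of 2^3 = 8; and 6 weeks at ~100 s per iteration gives the
# iteration count quoted above.

nodes_current = 3.2e6
nodes_coarse = nodes_current / 2**3
print(f"Coarser mesh: ~{nodes_coarse:.0f} nodes")             # ~400,000

weeks = 6
seconds_per_iteration = 100
iterations = weeks * 7 * 24 * 3600 // seconds_per_iteration
print(f"Iterations possible in {weeks} weeks: {iterations}")  # 36288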

March 6, 2015, 09:34   #4
kiarash kiani (reza_k)
New Member
Join Date: Mar 2015
Posts: 13
Thanks monkey1, thanks ghorrocks...
I really appreciate your guidance.

Unfortunately, I have already checked most of the points you mentioned. The 3.2 million element mesh was generated for a wall-modeled Large Eddy Simulation, and the current SST simulation is only meant to provide an initial condition. However, based on the literature at least 5 million elements are needed. So it does not look good if there is no way to increase the solution speed other than improving the hardware.

March 6, 2015, 21:04   #5
Glenn Horrocks (ghorrocks)
Super Moderator
Join Date: Mar 2009
Location: Sydney, Australia
Posts: 17,854
CFD is just about the most computationally intensive science there is, and if you are doing LES then you have to expect long run times. That's why CFD relies so heavily on parallelisation - so look at putting more computers on your network to speed things up.
