[Other] Note on mesh size, iterative and direct solvers (MECHANICAL)

November 15, 2024, 11:33   #1
Nicolò Badodi
I just had a nice two-day dive into the underlying solver logic of Mechanical; here are some of my notes for anyone who needs them in the future. I will write out the complete details, even if some might seem obvious.

There are two main categories of solvers: direct and iterative. Direct solvers factorize the matrix representing the problem in one pass, while iterative ones approach the solution step by step, gradually converging on it. In practice there is only one direct solver (SPARSE) and several iterative ones (JCG, ICCG, QMR, and PCG); the snippet after the list below shows how to select one by hand. You can find details for all of them here.
  • The DIRECT solver requires a LOT of RAM to run, especially if for some reason it needs to pivot, in which case the amount of memory required rises even further. The memory allocated by this kind of solver must also be contiguous, so even if you have enough RAM, it is not guaranteed that you have enough CONTIGUOUS RAM to run the direct solver [reference]; this can also depend on your OS settings. When the DIRECT solver can't find enough RAM, it switches to the out-of-core solving process, in which the matrices are stored on disk. This gives it more memory to work with, but it greatly reduces solution speed (depending on your HDD or SSD performance).
  • ITERATIVE solvers use far less memory, but they can be less efficient in some cases, especially for smaller models. In my experience, the Mechanical application always picks PCG as its iterative solver of choice (I have never seen it try to solve with any other one). PCG is the fastest of the iterative algorithms, but it also requires the most memory.
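
For reference, you can force these choices by hand from a Commands (APDL) object inserted under the analysis branch in Mechanical, or in a plain MAPDL input file. A minimal sketch, assuming the documented EQSLV and BCSOPTION labels; the 1.0E-8 tolerance is simply PCG's documented default, shown here as a placeholder:

    /SOLU                ! solver selection must happen in the solution processor
    EQSLV,SPARSE         ! force the direct (sparse) solver
    BCSOPTION,,INCORE    ! sparse solver only: request in-core memory mode instead of out-of-core
    ! ...or, alternatively:
    EQSLV,PCG,1.0E-8     ! force the PCG iterative solver with its default tolerance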

Now, it might happen that, when solving with the iterative option active, the algorithm switches back to the DIRECT solver and then crashes because it doesn't have enough RAM. This happens in two cases:
  1. The iterative solver failed to converge. Since the program always chooses PCG as its iterative solver, there are two ways of bypassing this issue (see the snippet after this list):
  • Changing the initial conditions to be closer to the final solution. In a thermal analysis, for example, this can be done by setting the initial temperature to a value that matches the expected average temperature of your system. It can also be done node by node via the APDL command IC.
  • Modifying the PCG difficulty level. PCG estimates the difficulty level of the problem and adjusts some internal configuration parameters accordingly (such as the convergence tolerance, the maximum number of iterations, etc.). Sometimes its estimate is off. The difficulty level and the other parameters can be set with the APDL command PCGOPT; this can help the iterative solver converge, preventing the switchback to the direct solver.
  2. The matrix is incompatible with the iterative algorithm. Depending on the type of problem, but also on some BCs or other settings, the matrix generated can be either SYMMETRIC or UNSYMMETRIC. Although the EQSLV command reference says that PCG can solve both symmetric and unsymmetric matrices, in reality, when the solver meets an unsymmetric matrix it switches back from PCG to the direct solver, causing memory issues. The type of matrix your problem generates is reported in the "Solution Output". The only way around this (unless you want to modify your problem definition) is to use an iterative solver that can handle the type of matrix your setup generates (again, see the snippet after this list):
  • SYMMETRIC: JCG, ICCG, QMR, PCG. The solver of choice should be PCG because it is better optimized than the others. PCG and JCG support GPU acceleration.
  • UNSYMMETRIC: JCG, ICCG. They do not support GPU acceleration.
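
To make the fixes above concrete, here is a sketch of the corresponding APDL commands; the initial temperature of 300 and difficulty level 3 are placeholder values for illustration only, and each line addresses a different case, so pick only the one that applies to you:

    /SOLU            ! all of these belong in the solution processor
    IC,ALL,TEMP,300  ! case 1: start every node at the expected average temperature
    PCGOPT,3         ! case 1: manually set the PCG difficulty level (1-5)
    EQSLV,JCG        ! case 2: switch to an iterative solver that handles unsymmetric matrices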
So this is it; I hope someone finds it useful if they get stuck with memory problems.


Tags
ansys mechanical apdl, memory allocation, memory limit, solver control



