
Fluent UDF wrong number of cells in parallel - correct in serial

May 14, 2018, 12:51   #1
dralexpe (New Member, Join Date: May 2018, Posts: 4)
I am having problems with a UDF in Fluent 18.1. I am doing some tests to find the centroid of the cell that is closest to a specified point. Later on this will be part of a larger simulation, where I need to put momentum sources in the cells closest to some specified locations. The UDF also counts the number of cells as it loops over the domain, but in parallel it seems to find far fewer cells than in serial, and the cell it reports as closest to the specified input differs greatly. There are more header files than necessary, but that is not the problem.
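
For the longer-term goal, a minimal sketch of how such a marked-cell momentum source could look (illustrative only, not part of the current UDF; it assumes one user-defined memory location has been allocated and that the search UDF sets C_UDMI to 1 in the chosen cells):

Code:
#include "udf.h"

/* Illustrative sketch: a momentum source that acts only in cells flagged by the
   search, assuming the flag is stored in user-defined memory slot 0
   (C_UDMI = 1 in the chosen cells, 0 elsewhere). */
DEFINE_SOURCE(x_mom_source, c, t, dS, eqn)
{
    real src = 0.0;
    if (C_UDMI(c, t, 0) > 0.5)   /* cell was flagged by the search UDF */
        src = 100.0;             /* placeholder magnitude in N/m^3 */
    dS[eqn] = 0.0;               /* source does not depend on the solved variable */
    return src;
}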

Here is the UDF:

Code:
#include "udf.h"
#include "surf.h"
#include "para.h"
#include "mem.h"
#include "metric.h"
#include "prf.h"

#define ZONE1_ID 31
#define ZONE2_ID 32

DEFINE_ON_DEMAND(search)
{
    Domain *domain=Get_Domain(1);
    Thread *tf;
    cell_t c, c_min;
    size_t i,thr_min_id,j,total_cells;
    
    
    real pos[ND_ND],c_centroid[ND_ND],eps,dist,min_dist, min_pos[ND_ND],min_vol;
    
    
#if !RP_HOST
    pos[0]=RP_Get_Real("x_0");
    pos[1]=RP_Get_Real("x_1");
    pos[2]=RP_Get_Real("x_2");
    eps=RP_Get_Real("eps");
    
    i=1;
    min_dist=1.0e10;
    min_vol=0.0;
    total_cells=0;
    
    thread_loop_c(tf,domain)
    {
        Message0("Looping over domain i= %d thread id= %d\n",i,THREAD_ID(tf));
        j=1;
            begin_c_loop(c,tf)
            {
                C_CENTROID(c_centroid,c,tf);
                dist=sqrt(SQR(pos[0]-c_centroid[0])+SQR(pos[1]-c_centroid[1])+SQR(pos[2]-c_centroid[2]));
                if (dist <= min_dist)
                {
                    min_dist=dist;
                    NV_V(min_pos,=,c_centroid);
                    c_min=c;
                    thr_min_id=THREAD_ID(tf);
                    min_vol=C_VOLUME(c,tf);
                }
                j++;
            }
            end_c_loop(c,tf)
            j--;
            Message0("Finished thread id= %d cell count= %d\n\n",THREAD_ID(tf),j);
            total_cells += j;
        i++;
    }
    Message0("Input point: x= %10.6f y= %10.6f z= %10.6f precision= %e\n\n",pos[0],pos[1],pos[2],eps);
    Message0("Thread id= %d cell index= %d minimum distance= %10.6f cell volume= %e\n",thr_min_id,c_min,min_dist,min_vol);
    Message0("At cell centroid - CGx= %10.6f  CGy= %10.6f  CGz= %10.6f \n\n",min_pos[0],min_pos[1],min_pos[2]);
    Message0("Total number of cells = %d\n",total_cells);
    Message0("Done\n\n");
    
#endif
    
}
Here is the output in parallel (running on a cluster with MPI, 2 nodes * 20 cores = 40 cores, which resulted in 40 mesh partitions):

Code:
> Looping over domain i= 1 thread id= 24 
Finished thread id= 24 cell count= 1730
 

 Looping over domain i= 2 thread id= 26
 Finished thread id= 26 cell count= 7854
 

 Looping over domain i= 3 thread id= 27
 Finished thread id= 27 cell count= 43060
 

 Looping over domain i= 4 thread id= 32
 Finished thread id= 32 cell count= 12436
 

 Looping over domain i= 5 thread id= 29
 Finished thread id= 29 cell count= 0
 

 Looping over domain i= 6 thread id= 25
 Finished thread id= 25 cell count= 128
 

 Looping over domain i= 7 thread id= 28
 Finished thread id= 28 cell count= 1133
 

 Looping over domain i= 8 thread id= 30
 Finished thread id= 30 cell count= 32083
 

 Looping over domain i= 9 thread id= 31
 Finished thread id= 31 cell count= 12583
 

 Input point: x=   0.788187 y=  -5.436616 z=  -0.875589 precision= 1.000000e-03
 

 Thread id= 32 cell index= 12319 minimum distance=   1.417014 cell volume= 6.264763e-04
 At cell centroid - CGx=   0.242727  CGy=  -4.128798  CGz=  -0.871965 
 

 Total number of cells = 111007
 Done

>  Mesh Size
 

 Level    Cells    Faces    Nodes   Partitions
     0  4095572  8249927   713771           40
 

  9 cell zones, 34 face zones.
Output in serial:

Code:
Looping over domain i= 1 thread id= 24
 Finished thread id= 24 cell count= 3983
 

 Looping over domain i= 2 thread id= 26
 Finished thread id= 26 cell count= 17016
 

 Looping over domain i= 3 thread id= 27
 Finished thread id= 27 cell count= 79384
 

 Looping over domain i= 4 thread id= 32
 Finished thread id= 32 cell count= 89264
 

 Looping over domain i= 5 thread id= 29
 Finished thread id= 29 cell count= 10566
 

 Looping over domain i= 6 thread id= 25
 Finished thread id= 25 cell count= 88535
 

 Looping over domain i= 7 thread id= 28
 Finished thread id= 28 cell count= 386195
 

 Looping over domain i= 8 thread id= 30
 Finished thread id= 30 cell count= 1994277
 

 Looping over domain i= 9 thread id= 31
 Finished thread id= 31 cell count= 1426352
 

 Input point: x=   0.788187 y=  -5.436616 z=  -0.875589 precision= 1.000000e-03
 

 Thread id= 32 cell index= 61083 minimum distance=   0.042580 cell volume= 4.662508e-04
 At cell centroid - CGx=   0.750784  CGy=  -5.455092  CGz=  -0.884115 
 

 Total number of cells = 4095572
 Done
The precision variable is not yet used; later on it will define how close the chosen cells must be to the input point(s).

I have also tried "begin_c_loop_int" and "begin_c_loop_ext" to loop over only the interior or only the exterior cells, but to no avail.
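
For reference, a minimal sketch of what the three loop macros visit on each compute node (illustrative only, not the UDF above):

Code:
#include "udf.h"

DEFINE_ON_DEMAND(loop_count_demo)
{
    Domain *d = Get_Domain(1);
    Thread *t;
    cell_t c;
    int n_all = 0, n_int = 0, n_ext = 0;

#if !RP_HOST
    thread_loop_c(t, d)
    {
        /* all cells of this node's partition: interior plus exterior (ghost) */
        begin_c_loop(c, t) { n_all++; } end_c_loop(c, t)

        /* interior cells only: owned by this partition, counted once globally */
        begin_c_loop_int(c, t) { n_int++; } end_c_loop_int(c, t)

        /* exterior (ghost) cells only: copies of neighbouring partitions' cells */
        begin_c_loop_ext(c, t) { n_ext++; } end_c_loop_ext(c, t)
    }
    Message("this process: all= %d  interior= %d  exterior= %d\n", n_all, n_int, n_ext);
#endif
}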

Any help would be appreciated.

Thank you.

May 14, 2018, 22:13   #2
AlexanderZ (Senior Member, Join Date: Apr 2013, Posts: 2,363)
Look into PRF_GISUM1 in the ANSYS Fluent Customization Manual.

best regards

May 14, 2018, 22:59   #3
dralexpe (New Member, Join Date: May 2018, Posts: 4)
Quote:
Originally Posted by AlexanderZ
Look into PRF_GISUM1 in the ANSYS Fluent Customization Manual.

best regards
I understand this is a global reduction/sum macro, but I don't know which loop it goes into. Could you please expand a bit?


Thank you.

May 15, 2018, 01:29   #4
AlexanderZ (Senior Member, Join Date: Apr 2013, Posts: 2,363)
Here is an example of how to count cells in parallel:
Code:
#include "udf.h"

DEFINE_ON_DEMAND(counting)
{
	Domain *d = Get_Domain(1);
	Thread *t;
	cell_t c;
	int ncount = 0;
	#if !RP_HOST
		/* each compute node counts only the interior cells of its own partition,
		   so no cell is counted twice across partitions */
		thread_loop_c(t,d)
		{
			begin_c_loop_int(c,t)
			{
				ncount +=1;
			}
			end_c_loop_int(c,t)
		}
	#endif
	
	#if RP_NODE
		ncount = PRF_GISUM1(ncount); /* global sum of the per-node counts */
	#endif

	node_to_host_int_1(ncount); /* pass the global count from node zero to the host */
	
	#if !RP_NODE
		Message("Number of cells %d\n",ncount);
	#endif
}
best regards

May 15, 2018, 12:18   #5
obscureed (Senior Member, Join Date: Sep 2017, Posts: 246)
Hi dralexpe,

AlexanderZ's answer is good. You mentioned begin_c_loop_int -- note that it is absolutely required here to get the correct answer: exterior (ghost) cells are copies of cells owned by neighbouring partitions, so a plain begin_c_loop would count them more than once.

For your longer-term goal of finding the cell containing a point, you should consider some built-in (but poorly documented) functions such as CX_Find_Cell_With_Point. See for example the thread "CX_Find_Cell_With_Point: problem" (and note the syntax changes there compared to older versions, and the whole issue of the function not finding cells in some partitions). It might take some effort to get them working, but on large meshes they should run much faster than a crude search.

Good luck!
Ed

May 16, 2018, 14:59   #6 - Solved
dralexpe (New Member, Join Date: May 2018, Posts: 4)
I had tried the function CX_Find_Cell_With_Point before posting, but couldn't get it to work in Fluent 18.1 in parallel.

I finally managed to get the right results in parallel; see below.

Code:
#include "udf.h"
#include "surf.h"
#include "para.h"
#include "mem.h"
#include "metric.h"
#include "prf.h"

#define ZONE1_ID 31
#define ZONE2_ID 32

DEFINE_ON_DEMAND(search)
{
    Domain *domain=Get_Domain(1);
    Thread *tf;
    cell_t c, c_min;
    int i,thr_min_id,j,total_cells,id_send,thr_min_id_send,c_min_send;
    
    
    real pos[ND_ND],c_centroid[ND_ND],pos_send[ND_ND],eps,dist,min_dist, min_pos[ND_ND],min_vol,min_dist_global,min_dist_send,min_vol_send;
    
    
#if !RP_HOST
    pos[0]=RP_Get_Real("x_0");
    pos[1]=RP_Get_Real("x_1");
    pos[2]=RP_Get_Real("x_2");
    eps=RP_Get_Real("eps");
    
    i=1;
    min_dist=1.0e10;
    min_vol=0.0;
    total_cells=0;
    
    thread_loop_c(tf,domain)
    {
        j=1;
            begin_c_loop_int(c,tf)
            {
                
                C_CENTROID(c_centroid,c,tf);
                dist=sqrt(SQR(pos[0]-c_centroid[0])+SQR(pos[1]-c_centroid[1])+SQR(pos[2]-c_centroid[2]));
                if (dist <= min_dist)
                {
                    min_dist=dist;
                    NV_V(min_pos,=,c_centroid);
                    c_min=c;
                    thr_min_id=THREAD_ID(tf);
                    min_vol=C_VOLUME(c,tf);
                }
                j++;
            }
            end_c_loop_int(c,tf)
            j--;
            total_cells += j;
        i++;
    }    
#endif

#if RP_NODE
    min_dist_global=PRF_GRLOW1(min_dist);  /* global minimum over all compute nodes */
    total_cells=PRF_GISUM1(total_cells);   /* integer global sum of the per-node cell counts */
    id_send=0;
    Message("------------------node %d -----------------\n",myid);
    Message("minimum distance on it= %10.6f thread id= %d cell index= %d cell volume= %e precision= %10.6f\n",min_dist,thr_min_id,c_min,min_vol,eps);
    Message("centroid position x= %10.6f  y= %10.6f  z= %10.6f\n\n",min_pos[0],min_pos[1],min_pos[2]);

    /* only the node that owns the global minimum fills the *_send variables and,
       unless it is node zero itself, sends them to node zero */
    if (min_dist <= min_dist_global)
    {
        id_send=myid;
        thr_min_id_send=thr_min_id;
        c_min_send=c_min;
        min_vol_send=min_vol;
        min_dist_send=min_dist;
        NV_V(pos_send,=,min_pos);
        if(!I_AM_NODE_ZERO_P) //send only from nodes other than zero
        {
            PRF_CSEND_INT(node_zero,&id_send,1,myid);
            PRF_CSEND_INT(node_zero,&thr_min_id_send,1,myid);
            PRF_CSEND_REAL(node_zero,&min_dist_send,1,myid);
            PRF_CSEND_INT(node_zero,&c_min,1,myid);
            PRF_CSEND_REAL(node_zero,&min_vol_send,1,myid);
            PRF_CSEND_REAL(node_zero,pos_send,ND_ND,myid);
        }
    }
    id_send=PRF_GIHIGH1(id_send);  /* every node learns the id of the node holding the minimum */
    if(I_AM_NODE_ZERO_P)
    {
        if (id_send != myid) //if min_dist is on node zero don't send anything
        {
            PRF_CRECV_INT(id_send,&id_send,1,id_send);
            PRF_CRECV_INT(id_send,&thr_min_id_send,1,id_send);
            PRF_CRECV_REAL(id_send,&min_dist_send,1,id_send);
            PRF_CRECV_INT(id_send,&c_min_send,1,id_send);
            PRF_CRECV_REAL(id_send,&min_vol_send,1,id_send);
            PRF_CRECV_REAL(id_send,pos_send,ND_ND,id_send);
        }
    }
    Message0("======= node zero own values for minimum distance ========\n");
    Message0("id= %d thread id= %d  minimum distance= %10.6f cell index= %d cell volume= %e precision= %10.6f\n",myid,thr_min_id,min_dist,c_min,min_vol,eps);
    Message0("centroid position x= %10.6f  y= %10.6f  z= %10.6f\n",min_pos[0],min_pos[1],min_pos[2]);
    Message0("======= node zero received values for minimum distance ========\n");
    Message0("id= %d thread id= %d  minimum distance= %10.6f cell index= %d cell volume= %e\n",id_send,thr_min_id_send,min_dist_send,c_min_send,min_vol_send);
    Message0("centroid position x= %10.6f  y= %10.6f  z= %10.6f\n\n",pos_send[0],pos_send[1],pos_send[2]);

    
#endif

    

node_to_host_real_3(min_dist_global,min_vol_send,eps);
node_to_host_int_1(total_cells);
node_to_host_real(pos_send,ND_ND);
node_to_host_real(pos,ND_ND);
node_to_host_int_3(id_send,thr_min_id_send,c_min_send);




#if !RP_NODE
    Message("***** Values on the host *******\n");
    Message("Total number of cells= %d\n",total_cells);
    Message("Input point: x= %10.6f y= %10.6f z= %10.6f precision= %e\n\n",pos[0],pos[1],pos[2],eps);
    Message("Minimum distance found= %10.6f\n",min_dist_global);
    Message("Centroid closest to input point x=%10.6f  y= %10.6f  z= %10.6f\n",pos_send[0],pos_send[1],pos_send[2]);
    Message("Cell has volume= %e\n",min_vol_send);
    Message("On node id= %d  thread id= %d  cell index in thread= %d\n",id_send,thr_min_id_send,c_min_send);
    
    Message("******** Done  *************\n\n");
    
#endif
}
The sending of messages to node zero and to the host is not really necessary; I did it just to see how it works and whether it works correctly. My mistake was that I was not thinking in parallel mode, where each compute node has its own copy of the variables, with different values.
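
For reference, a stripped-down sketch of the same idea (per-node local values combined with global reductions); the names are illustrative and it is not the exact code above:

Code:
#include "udf.h"

DEFINE_ON_DEMAND(global_min_demo)
{
#if RP_NODE
    real local_min = 1.0e10;   /* this node's own copy; every node holds a different value */
    real global_min;
    int  owner = -1;

    /* ... local_min would be found by a cell loop here, as in the UDF above ... */

    global_min = PRF_GRLOW1(local_min);   /* afterwards the same value on every node */

    if (local_min == global_min)          /* only the owning node offers its id */
        owner = myid;
    owner = PRF_GIHIGH1(owner);           /* all nodes, including node zero, learn the owner id */

    Message0("Global minimum %g found on compute node %d\n", global_min, owner);
#endif
}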

Thank you both for your help. Your input was really appreciated.

May 17, 2018, 05:10   #7
AlexanderZ (Senior Member, Join Date: Apr 2013, Posts: 2,363)
Check this line:
Code:
eps=RP_Get_Real("eps");
Did you try to change the eps value?

According to the documentation, the RP_Get macros may work from the HOST only; however, something may have changed.

best regards

May 17, 2018, 09:26   #8
dralexpe (New Member, Join Date: May 2018, Posts: 4)
Quote:
Originally Posted by AlexanderZ
Check this line:
Code:
eps=RP_Get_Real("eps");
Did you try to change the eps value?

According to the documentation, the RP_Get macros may work from the HOST only; however, something may have changed.

best regards
I did a test and changed the value of eps, and it did change on all nodes. The value of "eps" was created as an input parameter, but not with "RP_Get_Input_Parameter("variable-name")", because for some reason that didn't work. I followed the procedure outlined here instead:

https://www.computationalfluiddynami...sys-workbench/

and it worked.
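
For reference, a sketch of the general RP-variable pattern this kind of setup typically uses (illustrative only; it assumes "eps" was created as a user RP variable from the text console):

Code:
#include "udf.h"

/* Sketch only: "eps" is assumed to have been created as a user RP variable from
   the Fluent text console with Scheme, for example
     (rp-var-define 'eps 0.001 'real #f)   ; create it once
     (rpsetvar 'eps 0.002)                 ; change its value later
   It can then be read inside a UDF like this: */
DEFINE_ON_DEMAND(read_eps_demo)
{
    real eps = RP_Get_Real("eps");
    Message0("eps = %g\n", eps);
}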

Thank you.
