Error in accessing C_UDMI value at partition boundary mesh |
March 10, 2023, 07:39 |
Error in accessing C_UDMI value at partition boundary mesh
|
#1 |
New Member
Join Date: Aug 2019
Posts: 12
Rep Power: 7 |
Hi Folks,
Recently I've run into difficulty computing the sum of the C_UDMI values of the neighbouring cells of a given cell c on thread t, because my cell zone is split across several mesh partitions. I tried to parallelize my code with PRF_CSEND/PRF_CRECV so that each compute node sends the requested C_UDMI values to node-0 for the calculation, but I got a weird error from MPI saying the allocated memory buffer is smaller than the received data (will share the code once I have access to my laptop). Below is the draft structure of my UDF without PRF_CSEND/PRF_CRECV, where I first noticed the issue of reading C_UDMI at the partition boundary. Code:
c_face_loop(c,t,n)
{
    f = C_FACE(c,t,n);
    tf = C_FACE_THREAD(c,t,n);
    if (!BOUNDARY_FACE_THREAD_P(tf))
    {
        /* Get neighbouring cell across this face */
        c1 = F_C1(f,tf);
        t1 = THREAD_T1(tf);
        c_udmi_calc += C_UDMI(c1,t1,0);
    }
    else
    {
        c_udmi_calc += 0;
    }
}
This thread and this post on the ANSYS Fluent forum describe a similar problem, where the C_UDMI value at a partition boundary is unavailable/wrong, and to this day no fix has been posted. I'd appreciate any thoughts/opinions on how to overcome this problem! Thanks!
|
March 14, 2023, 09:49 |
|
#2 |
New Member
Join Date: Aug 2019
Posts: 12
Rep Power: 7 |
just got access to my laptop. Below is a snapshot of the message-passing code that sends data from the other compute nodes to node-0 for averaging. I encountered an error from PRF_CSEND/PRF_CRECV because the buffer size is smaller than the transferred data size. I believe I have allocated all the memory correctly in the UDF, and I'm not sure where the error is coming from.
Code:
float neighbor_avg(cell_t c, Thread *thread, int k)
{
    /* Different variables are needed on different nodes */
#if !RP_HOST /* no host calculation */
    Domain *domain = Get_Domain(1);
    Thread *tf, *t1;
    face_t f;
    cell_t c1;
    int size;

    /* data passing variables */
    real *c_udmi_val;
    real avg_global = 0;
    int pe = 0, n, i;

    /* Each node loads up its data-passing array c_udmi_val */
    size = C_NFACES(c, thread);
    c_udmi_val = (real *)malloc(size * sizeof(real));

    c_face_loop(c, thread, n)
    {
        f = C_FACE(c, thread, n);
        tf = C_FACE_THREAD(c, thread, n);
        if (!BOUNDARY_FACE_THREAD_P(tf))
        {
            c1 = F_C1(f, tf);
            t1 = THREAD_T1(tf);
            c_udmi_val[n] = C_UDMI(c, thread, k);
        }
        else
        {
            c_udmi_val[n] = 0;
        }
    }

    /* Set node_zero to destination node */
    if (!I_AM_NODE_ZERO_P)
    {
        PRF_CSEND_INT(node_zero, &size, 1, myid);
        PRF_CSEND_REAL(node_zero, c_udmi_val, size, myid);
    }

    if (I_AM_NODE_ZERO_P)
    {
        compute_node_loop_not_zero(i) /* see definition in para.h */
        {
            PRF_CRECV_INT(i, &size, 1, i);
            /* Reallocate memory for arrays for node-i */
            c_udmi_val = (real *)realloc(c_udmi_val, size * sizeof(real));
            PRF_CRECV_REAL(i, c_udmi_val, size, i);
            /*PRF_CSEND_INT(node_host, &size, 1, myid);
            PRF_CSEND_REAL(node_host, c_udmi_val, size, myid);
            free((char *)c_udmi_val);*/
        }
        for (i = 0; i < size; i++)
        {
            avg_global += c_udmi_val[i];
        }
        avg_global = avg_global / size;
        return avg_global;
    }
    free(c_udmi_val); /* free malloc on nodes after data sent */
#endif /* ! RP_HOST */
}
Code:
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(224).......................: MPI_Recv(buf=000001DA1193D06D, count=6, MPI_BYTE, src=6, tag=6, comm=0x84000004, status=00000093B01FF178) failed
MPIDI_CH3U_Request_unpack_uebuf(618): Message truncated; 48 bytes received but buffer size is 6
[mpiexec@hostname] ..\hydra\utils\sock\sock.c (420): write error (Unknown error)
send of 24 bytes failed.
[mpiexec@hostname] ..\hydra\utils\launch\launch.c (121): shutdown failed, sock 844, error 10093
The fl process could not be started.
===============Message from the Cortex Process================================
Fatal error in one of the compute processes.
==============================================================================

Last edited by lolno; March 16, 2023 at 23:10.
|
March 21, 2023, 12:20 |
|
#3 |
Senior Member
Alexander
Join Date: Apr 2013
Posts: 2,363
Rep Power: 34 |
I don't have access to my workstation right now, so I can't check it myself.
Regarding your problem: I can see that the function has return type float, while avg_global (the value you return) has type real, which is double when you run Fluent in double precision.
best regards
|
March 22, 2023, 05:50 |
|
#4 |
New Member
Join Date: Aug 2019
Posts: 12
Rep Power: 7 |
yup, I'm running double precision, which means my float/real data should be 8 bytes long
|
|
Tags |
error, fluent - udf - parallel, partitioning mesh, udmi |