July 19, 2020, 06:09
Velocity Sampling in parallel simulation
#1
New Member
YUNPENG SONG
Join Date: Jul 2020
Location: Tokyo
Posts: 1
Rep Power: 0
Hi,
I have a problem with velocity sampling by looping over all cells. My goal is to get the velocity along 3 moving lines at every time step; the sampled velocities are then used in a source-term calculation. To do this, I loop over the cells in a DEFINE_ADJUST macro, looking for the cell closest to each target point along the 3 lines. To store those velocities for the next calculation, I created three 2D arrays as global variables. My understanding was that by looping over all cell threads and cells I could access the velocity of every cell in the entire domain, and store the values that satisfy my criterion in the global variables. However, the results show that the 2D arrays hold different values on different compute nodes. I think there is a communication issue between the nodes: each node seems to have its own copy of the global variable, with the same name but a different value. Is there a better way to achieve this? Here is my UDF code. Code:
#include "udf.h"
#include <math.h>

/* Wind Turbine Parameters ============================ */
#define B_NUM 3      /* number of blades */
#define RPM 15       /* rotation speed of blade (revolutions per minute) */
/* Projection Parameters ============================== */
#define rot_x 5.0    /* thickness of the rotor disk */
/* Velocity Sampling Parameters ======================= */
#define S_R 55.0     /* sampling radius */
#define SS_NUM 30    /* number of sampling sections */
/* Other Parameters =================================== */
#define pi 3.141592653

/* Global variables =================================== */
real Ux_bld[B_NUM][SS_NUM];    /* x velocity along blade */
real Uy_bld[B_NUM][SS_NUM];    /* y velocity along blade */
real Uz_bld[B_NUM][SS_NUM];    /* z velocity along blade */
real d_bld[B_NUM][SS_NUM];     /* radius of each point along blade */
real nearest_d[B_NUM][SS_NUM]; /* nearest distance found so far */
real azimuthAngle[B_NUM];      /* azimuth angle of each blade */
real roundCount;               /* completed revolutions */

DEFINE_INIT(initialization, d)
{
#if !RP_HOST
    real point_interval = S_R / SS_NUM;
    int i, j;
    for (i = 0; i < B_NUM; i++)
    {
        for (j = 0; j < SS_NUM; j++)
        {
            d_bld[i][j] = (j + 1) * point_interval; /* radius of each sampling point */
            Ux_bld[i][j] = -2; /* velocity initialization */
            Uy_bld[i][j] = -2;
            Uz_bld[i][j] = -2;
        }
    }
#endif
}

DEFINE_ADJUST(blade_velocity, d)
{
#if !RP_HOST
    int partitions, ids = myid;
    Thread *t;
    cell_t c;
    int i, j, p;
    real x0 = 0, y0 = 0, z0 = 114.25;
    real flow_time = RP_Get_Real("flow-time"); /* current flow time from Fluent */
    real Omega = RPM * 2 * pi / 60;            /* rotational speed (rad/s) */
    real xyz[3], x, y, z, r;
    real x_c, y_c, z_c, DITS;

    partitions = PRF_GIHIGH1(ids) + 1; /* number of compute nodes */

    /* Update blade azimuth angles */
    for (i = 0; i < B_NUM; i++)
    {
        azimuthAngle[i] = (0.0 + i * 120) * (pi / 180.0) + Omega * flow_time;
        roundCount = azimuthAngle[i] / (2 * pi);
        azimuthAngle[i] = azimuthAngle[i] - floor(roundCount) * (2 * pi);
        for (j = 0; j < SS_NUM; j++)
        {
            nearest_d[i][j] = 9999999; /* reset the nearest-distance matrix */
        }
    }

    for (p = 0; p <= partitions - 1; p++)
    {
        if (p == myid)
        {
            Message("\n >> | MYID: %d | \n", p);
            thread_loop_c(t, d)
            {
                begin_c_loop(c, t)
                {
                    C_CENTROID(xyz, c, t);
                    x = xyz[0]; y = xyz[1]; z = xyz[2];
                    r = sqrt(pow(x - x0, 2) + pow(y - y0, 2) + pow(z - z0, 2));
                    if (r <= S_R && (x - x0) >= -rot_x && (x - x0) <= rot_x)
                    {
                        for (i = 0; i < B_NUM; i++)
                        {
                            for (j = 0; j < SS_NUM; j++)
                            {
                                /* coordinates of the target point on blade i */
                                x_c = 0 + x0;
                                y_c = cos(azimuthAngle[i]) * d_bld[i][j] + y0;
                                z_c = sin(azimuthAngle[i]) * d_bld[i][j] + z0;
                                /* distance between cell centroid and target point */
                                DITS = sqrt(pow(x - x_c, 2) + pow(y - y_c, 2) + pow(z - z_c, 2));
                                if (DITS < nearest_d[i][j]) /* found a nearer cell */
                                {
                                    Ux_bld[i][j] = C_U(c, t);
                                    Uy_bld[i][j] = C_V(c, t);
                                    Uz_bld[i][j] = C_W(c, t);
                                    nearest_d[i][j] = DITS;
                                    /*Message("\n | Point Radius: %f m | Nearest DIST : %f m | Velocity < x: %f m/s y: %f m/s z:%f m/s > | \n",
                                            d_bld[i][j], nearest_d[i][j], Ux_bld[i][j], Uy_bld[i][j], Uz_bld[i][j]);*/
                                }
                            }
                        }
                    }
                }
                end_c_loop(c, t)
            }
            Message("\n ===== >> I AM: %d START << ===== \n", myid);
            for (j = 0; j < SS_NUM; j++)
            {
                Message("\n | BLADE: #%d | Nearest DIST : %f m | Point Radius: %f m | Point Velocity < x: %f m/s y: %f m/s z: %f m/s > \n",
                        1, nearest_d[1][j], d_bld[1][j], Ux_bld[1][j], Uy_bld[1][j], Uz_bld[1][j]);
            }
            Message("\n ===== >> I AM: %d END << ===== \n", myid);
        }
        PRF_GSYNC();
    }
#endif
}

Last edited by Torii_Nagi; July 19, 2020 at 13:20. Reason: missed some key information
July 20, 2020, 02:29
#2
Senior Member
Alexander
Join Date: Apr 2013
Posts: 2,363
Rep Power: 34
By default, Fluent partitioning is random, so your curve may not lie in partitions 1 and 2 (from your picture). That could be the reason for this outcome: each computational node calculates on a separate part of the domain. But frankly, I didn't understand how your code works.
__________________
best regards ****************************** press LIKE if this message was helpful
Tags |
global variable, parallel calculation |