DPM : Solution for Collection Efficiency (just sharing) |
April 20, 2015, 06:36 |
DPM : Solution for Collection Efficiency (just sharing)
|
#1 |
New Member
Join Date: Apr 2015
Posts: 28
Rep Power: 11 |
This is a UDF I wrote to compute the collection efficiency in the 2D planar case (for fun and practice), so it does not work for axisymmetric flow! If anyone sees any errors, I would be very happy to learn from them; otherwise this post is just here to help others.
Code:
/*=====================================================================
   WORKS FOR 2D ONLY : NOT AXISYMMETRIC !
  =====================================================================*/
#include "udf.h"
#include "dpm.h"
#include "surf.h"
#include "random.h"
#include "sg_mem.h"
#include "para.h"
#include "mem.h"

/* Global arrays sized for up to 400 particle streams; index 0 holds
   the reference point for the first stream. */
real beta[401];
real x[401] = { 0.0 };   /* set x[0] to the most frontal position of the impacted wall */
real y[401] = { 0.0 };   /* set y[0] to the corresponding y value */
real Vn[401];
real V0[401];
real x0[401];
real y0[401];
real dx[401];
real dy[401];
real dy0[401];
real norm2[401];
real ds[401];
int i = 1;

DEFINE_DPM_BC(dpm_report, p, t, f, f_normal, dim)
{
    real pwc[2], pt[2];

    /* impact position on the wall */
    pwc[0] = P_POS(p)[0];
    pwc[1] = P_POS(p)[1];
    x[i] = pwc[0];
    y[i] = pwc[1];

    /* injection position and initial velocity */
    pt[0] = P_INIT_POS(p)[0];
    pt[1] = P_INIT_POS(p)[1];
    x0[i] = pt[0];
    y0[i] = pt[1];
    V0[i] = P_INIT_VEL(p)[0];

    /* wall-normal impact velocity */
    Vn[i] = NV_DOT(P_VEL(p), f_normal) / NV_MAG(f_normal);

    /* spacing between this impact and the previous one */
    dx[i] = x[i] - x[i-1];
    dy[i] = y[i] - y[i-1];
    dy0[i] = y0[i] - y0[i-1];
    norm2[i] = pow(dx[i], 2) + pow(dy[i], 2);
    ds[i] = pow(norm2[i], 0.5);

    /* local collection efficiency */
    beta[i] = dy0[i] / ds[i];

    i = i + 1;
    return PATH_ABORT;  /* trap the particle (was: return x,y,... which only returned PATH_ABORT anyway) */
}

DEFINE_ON_DEMAND(beta_profile)
{
    FILE *f1;
    f1 = fopen("report_beta.txt", "a");
    fprintf(f1, "Stream Index, X impact, Y impact, x0, y0, dy, ds, beta\n");
    for (i = 1; i <= 400; i++)  /* was i <= 401: one past the end of the arrays */
    {
        fprintf(f1, "%d %f %f %f %f %f %f %f\n",
                i-1, x[i], y[i], x0[i], y0[i], dy0[i], ds[i], beta[i]);
    }
    fclose(f1);
}
Last edited by Manathan; April 21, 2015 at 14:33.
|
April 21, 2015, 06:27 |
|
#2 |
Senior Member
Join Date: Mar 2015
Posts: 892
Rep Power: 18 |
Thanks for sharing your code with the community; I have a couple of comments. First, if your goal is to count the number of particles to calculate the collection efficiency, you could use User-Defined Memory and increment face values for each particle deposition. Second, be careful with opening files from multiple nodes (processes) at the same time, because you may corrupt or otherwise adversely affect your file; perhaps run this on the host or on node 0 only (these global variables are available to all processes).
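For anyone curious about the face-UDM counting approach just described, a minimal sketch (it assumes at least two UDM slots are enabled under User-Defined > Memory, and the macro name `count_deposits` is arbitrary; this only compiles inside Fluent):

```c
#include "udf.h"

/* Each time a particle hits the wall, accumulate per-face totals in
 * user-defined memory: slot 0 counts particles, slot 1 sums the
 * stream mass flow rates. Post-process the UDM field to obtain the
 * collection efficiency on arbitrary geometries. */
DEFINE_DPM_BC(count_deposits, p, t, f, f_normal, dim)
{
    F_UDMI(f, t, 0) += 1.0;            /* particle count on this face */
    F_UDMI(f, t, 1) += P_FLOW_RATE(p); /* mass flow rate of this stream */
    return PATH_ABORT;                 /* trap the particle at the wall */
}
```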
|
|
April 21, 2015, 06:43 |
|
#3 |
New Member
Join Date: Apr 2015
Posts: 28
Rep Power: 11 |
Thanks for your comments,
1) With the 2D approach, I just need to track the particle trajectory and measure its deviation from injection; this is why I liked it: no need to work with face values, just the DPM_BC macro to get the "p" trapped at the wall. But thank you for your approach, I hadn't previously thought of doing it that way!
2) I'm not sure I understand the "multiple nodes" point and the problem this approach causes with files. I'm a junior in UDFs, Fluent files, etc.
|
April 21, 2015, 07:12 |
|
#4
Senior Member
Join Date: Mar 2015
Posts: 892
Rep Power: 18 |
However, if you're interested in the particle injection location and other parameters beyond the number of particles and particle mass fluxes, then you'd need to store this data. I've used a similar approach where I saved this data directly to a text file within the DEFINE_DPM_BC macro and post-processed it with MATLAB. This method has the benefit of not requiring storage of large arrays in memory (millions of particles) and extends easily to arbitrary geometries and dimensions (no hard-coded array sizes).
DEFINE_ON_DEMAND is an example of a macro which is called by every node, by the host and in serial (have a read of the parallelisation of UDFs in the UDF Manual for details; it's a brief read). If one node, say NodeA, opens "report_beta.txt" and begins writing the particle details line by line, but another node, say NodeB, has been tasked with the same job and attempts to open the file, then there could be issues (there is only one physical location on your HDD for this file). At the very least, even if the C and/or Fluent developers have locked the file (not the case with fopen() etc.), this writing process would take N times longer than serial, where N is the number of nodes.

On the other hand, the DEFINE_DPM_BC macro is only called by a single node when a particle collides with a boundary (each node essentially has a portion of all particles to "look after").
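Following that reasoning, one way to make the original beta_profile safe in parallel is to let only compute node 0 do the writing. A sketch using the compiler directives from the UDF Manual's parallelisation chapter (untested; only compiles inside Fluent):

```c
#include "udf.h"

/* Write the report from a single process: in serial the body runs as
 * normal; in parallel the host skips it and only compute node 0 writes. */
DEFINE_ON_DEMAND(beta_profile_node0)
{
#if RP_HOST
    /* do nothing on the host process */
#else
  #if RP_NODE
    if (I_AM_NODE_ZERO_P)   /* only compute node 0 touches the file */
  #endif
    {
        FILE *f1 = fopen("report_beta.txt", "a");
        /* ... fprintf() the arrays as in the original macro ... */
        fclose(f1);
    }
#endif
}
```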
April 21, 2015, 08:56 |
|
#5 |
New Member
Join Date: Apr 2015
Posts: 28
Rep Power: 11 |
Thank you for your replies,
Everything is clearer now! I will be careful with DEFINE_ON_DEMAND then. So, if I understand well, for my example about writing files: if I put these lines from DEFINE_ON_DEMAND directly into the DPM_BC macro, will it do the job correctly and write the data to the file step by step at each impact?
Code:
FILE *f1;
f1 = fopen("report_beta.txt", "a");
fprintf(f1, "%d %f %f %f %f %f %f %f\n",
        i-1, x[i], y[i], x0[i], y0[i], dy0[i], ds[i], beta[i]);
fclose(f1);
|
April 21, 2015, 09:19 |
|
#6 |
Senior Member
Join Date: Mar 2015
Posts: 892
Rep Power: 18 |
If you write to one single file every time this DEFINE_DPM_BC is called then there's still a chance that two or more processes attempt to access this file (when two particles deposit at the same wall clock time). I've written to multiple files which are unique to each node using the "myid" variable (which is defined by Fluent, you don't declare or initialise this variable). For example:
Code:
FILE *f1;
char fileName[100];

/* myid is Fluent's ID for the compute node running this code */
sprintf(fileName, "deposited_particles_%d.dat", myid);
f1 = fopen(fileName, "a");
/* ... fprintf() the particle data here ... */
fclose(f1);
|
April 21, 2015, 09:22 |
|
#7 |
New Member
Join Date: Apr 2015
Posts: 28
Rep Power: 11 |
ohhhhh ! that's an amazing trick oO, thank you so much !!!