
Varying times during MPI parallel runs


May 18, 2015, 12:31   #1
New Member
 
Deep Ray
Join Date: Jan 2014
Posts: 6
I have written a C++ code for a finite volume solver to simulate 2D compressible flows on unstructured meshes, and parallelised it using MPI (Open MPI 1.8.1). I partition the initial mesh into N parts (equal to the number of processors being used) using gmsh-Metis. In the solver, there is a function that calculates the numerical flux across each local face in the various partitions. This function takes the left/right values and reconstructed states (evaluated prior to the function call) as input, and returns the corresponding flux. During this function call there is no inter-processor communication, since all the input data is available locally. I use MPI_Wtime to measure the time taken for each such function call (accumulated roughly as in the sketch after the results below). With 6 processors (Intel® Core™ i7-3770), I get the following results:

Processor 1: 1406599932 calls in 127.467 minutes

Processor 2: 1478383662 calls in 18.5758 minutes

Processor 3: 1422943146 calls in 65.3507 minutes

Processor 4: 1439105772 calls in 40.379 minutes

Processor 5: 1451746932 calls in 23.9294 minutes

Processor 6: 1467187206 calls in 32.5326 minutes
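
The per-call timing is accumulated with MPI_Wtime roughly as in the following minimal sketch. This is an illustration only: computeNumericalFlux, FaceState and Flux are placeholder names standing in for the actual solver routines, and the loop simply mimics repeated flux calls on one rank.

Code:
// Minimal sketch of per-call timing with MPI_Wtime, accumulated per rank.
// computeNumericalFlux, FaceState and Flux are placeholder names, not the
// actual solver code; the loop stands in for the real sweep over local faces.
#include <mpi.h>
#include <cstdio>

struct FaceState { double left, right; };   // reconstructed left/right states (placeholder)
struct Flux      { double value; };         // returned numerical flux (placeholder)

Flux computeNumericalFlux(const FaceState& s)
{
    // Stand-in for the real flux evaluation; no MPI communication happens here.
    return { 0.5 * (s.left + s.right) };
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double    totalSeconds = 0.0;           // accumulated time spent in the flux function
    long long nCalls       = 0;             // number of flux calls on this rank
    double    checksum     = 0.0;           // keeps the work from being optimised away
    const FaceState s{1.0, 2.0};

    for (int i = 0; i < 1000000; ++i)
    {
        const double t0 = MPI_Wtime();      // wall-clock time before the call
        const Flux f = computeNumericalFlux(s);
        totalSeconds += MPI_Wtime() - t0;   // add this call's elapsed wall time
        ++nCalls;
        checksum += f.value;
    }

    std::printf("Processor %d: %lld calls in %g minutes (checksum %g)\n",
                rank + 1, nCalls, totalSeconds / 60.0, checksum);

    MPI_Finalize();
    return 0;
}
Note that in a sketch like this the timer is read twice per call; at over a billion calls per rank that overhead is not negligible, which is worth keeping in mind when comparing per-call costs at the microsecond scale.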

I am really surprised by the timings, especially those from processors 1 and 2. Processor 2 makes about 72 million more calls than processor 1, yet takes roughly 1/7 of the time (about 0.75 µs per call versus 5.4 µs per call). I reiterate that there is no inter-processor communication taking place in this function. Could the following cause this large a variation in time?

1. Conditional if-statements (branches) inside the function
2. The magnitude of the input values; for instance, a majority of the values on a given processor being very close to 0 (see the isolation test sketched below)

If not these, could there be any other reason behind this disparity?
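
For cause 2 in particular, one way to check it in isolation (outside MPI) would be a microbenchmark like the sketch below: run the same kernel once on ordinary-magnitude operands and once on operands extremely close to 0 (subnormal/denormal numbers, which on many x86 CPUs take a much slower arithmetic path) and compare the time per call. fluxKernel and timePerCall are hypothetical placeholders; the real flux routine, and ideally each rank's actual face data, would be substituted.

Code:
// Hedged sketch: compare the per-call cost of a stand-in flux kernel on
// ordinary-magnitude operands versus subnormal (near-zero) operands.
// fluxKernel is a hypothetical placeholder for the real flux routine.
#include <chrono>
#include <cstdio>
#include <vector>

static double fluxKernel(double uL, double uR)
{
    // Stand-in arithmetic only; substitute the actual flux evaluation here.
    const double avg  = 0.5 * (uL + uR);
    const double jump = uR - uL;
    return avg * avg - 0.25 * jump * jump;
}

// Returns the average wall-clock time per kernel call for the given operands.
static double timePerCall(const std::vector<double>& u)
{
    using clock = std::chrono::steady_clock;
    double sum = 0.0;
    const auto t0 = clock::now();
    for (std::size_t i = 0; i + 1 < u.size(); ++i)
        sum += fluxKernel(u[i], u[i + 1]);
    const auto t1 = clock::now();
    volatile double sink = sum;                 // keep the compiler from discarding the work
    (void)sink;
    return std::chrono::duration<double>(t1 - t0).count() / double(u.size() - 1);
}

int main()
{
    const std::size_t n = 10000000;
    std::vector<double> normalOps(n, 1.0e-3);   // ordinary magnitudes
    std::vector<double> tinyOps(n, 1.0e-320);   // subnormal magnitudes (very close to 0)

    std::printf("normal operands   : %.3g s per call\n", timePerCall(normalOps));
    std::printf("subnormal operands: %.3g s per call\n", timePerCall(tinyOps));
    return 0;
}
If subnormal operands do turn out to dominate the cost on the slow ranks, enabling flush-to-zero (for example with _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON) from <xmmintrin.h> on x86) is a common mitigation, at the price of strict IEEE behaviour near 0. The same harness would also cover cause 1: feeding each rank's real left/right states through it would expose any data-dependent branching cost.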


Tags
c++, finite volume method, mpi parallel, timings



