
Running shape_optimization.py with multiple compute nodes in parallel


March 2, 2016, 04:31   #1
Xie Chen
New Member
Join Date: Mar 2015
Posts: 3
I have built SU2 with OpenMPI, and I can run this command line successfully:

mpirun -hostfile hostfile -n 6 SU2_CFD inv_NACA0012.cfg

My hostfile is set up like this:
node1 slot=4
node2 slot=4
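
(One thing I still want to double-check: the OpenMPI documentation uses the plural keyword slots=, i.e.

node1 slots=4
node2 slots=4

but since plain mpirun accepts my file as written, I am not sure this matters.)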

I have also modified the MPI command templates in ~/SU2/run/interface.py like this:
'srun -hostfile hostfile -n %i %s'
'mpirun -hostfile hostfile -n %i %s'
'mpiexec -hostfile hostfile -n %i %s'

With that change I can also run the command "parallel_compute.py -n 6 -f inv_NACA0012.cfg" on 6 cores.
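
As I understand it, these templates are format strings that interface.py expands before launching the job, roughly like this (a simplified sketch, not the verbatim SU2 source; the hostfile path is the one from my setup):

    mpi_Command = 'mpirun -hostfile /home/su2/openmpi/hostfile -n %i %s'

    def build_command( the_Command, processes=0 ):
        # expand the template: %i -> number of processes, %s -> the SU2 binary call
        if processes > 0:
            the_Command = mpi_Command % ( processes, the_Command )
        return the_Command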

But when I try the command "shape_optimization.py -n 6 -f inv_NACA0012_adv.cfg", there is always a problem:

[su2@node1 bin]$ shape_optimization.py -n 6 -f inv_NACA0012_adv.cfg

Release 4.0.2 "Cardinal"

Found: mesh_NACA0012_inv.su2
New Project: ./
Removing old designs in 10s. Done!

Sequential Least SQuares Programming (SLSQP) parameters:
Number of design variables: 38
Objective function scaling factor: 0.001
Maximum number of iterations: 100
Requested accuracy: 1e-13
Initial guess for the independent variable(s): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Lower and upper bound for each independent variable: [(-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0), (-10000000000.0, 10000000000.0)]
Traceback (most recent call last):
  File "/home/su2/su2/bin/shape_optimization.py", line 154, in <module>
    main()
  File "/home/su2/su2/bin/shape_optimization.py", line 99, in main
    options.quiet )
  File "/home/su2/su2/bin/shape_optimization.py", line 137, in shape_optimization
    SU2.opt.SLSQP(project,x0,xb,its,accu)
  File "/home/su2/su2/bin/SU2/opt/scipy_tools.py", line 128, in scipy_slsqp
    epsilon = eps )
  File "/usr/lib64/python2.6/site-packages/scipy/optimize/slsqp.py", line 217, in fmin_slsqp
    mieq = len(f_ieqcons(x))
  File "/usr/lib64/python2.6/site-packages/scipy/optimize/optimize.py", line 97, in function_wrapper
    return function(x, *args)
  File "/home/su2/su2/bin/SU2/opt/scipy_tools.py", line 211, in con_cieq
    cons = project.con_cieq(x)
  File "/home/su2/su2/bin/SU2/opt/project.py", line 233, in con_cieq
    return self._eval(konfig, func,dvs)
  File "/home/su2/su2/bin/SU2/opt/project.py", line 182, in _eval
    vals = design._eval(func,*args)
  File "/home/su2/su2/bin/SU2/eval/design.py", line 142, in _eval
    vals = eval_func(*inputs)
  File "/home/su2/su2/bin/SU2/eval/design.py", line 432, in con_cieq
    func = su2func(this_con,config,state)
  File "/home/su2/su2/bin/SU2/eval/functions.py", line 85, in function
    aerodynamics( config, state )
  File "/home/su2/su2/bin/SU2/eval/functions.py", line 224, in aerodynamics
    info = su2run.direct(config)
  File "/home/su2/su2/bin/SU2/run/direct.py", line 81, in direct
    SU2_CFD(konfig)
  File "/home/su2/su2/bin/SU2/run/interface.py", line 110, in CFD
    run_command( the_Command )
  File "/home/su2/su2/bin/SU2/run/interface.py", line 268, in run_command
    raise exception , message
RuntimeError: Path = /home/su2/su2/bin/DESIGNS/DSN_001/DIRECT/,
Command = mpirun -hostfile /home/su2/openmpi/hostfile -n 6 /home/su2/su2/bin/SU2_CFD config_CFD.cfg
SU2 process returned error '143'
[node1][[17791,1],3][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 109145 on node node2 exited on signal 15 (Terminated).
--------------------------------------------------------------------------
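
If I read the last lines correctly, the error '143' is just the shell convention 128 + 15, i.e. the SU2_CFD processes were killed with SIGTERM (mpirun reports signal 15) rather than crashing on their own. run_command in interface.py only sees the non-zero return code; a minimal sketch of what it is doing (assuming a subprocess-based launcher, not the verbatim source):

    import subprocess

    def run_command( the_Command ):
        # launch the mpirun line shown above and wait for it to finish
        return_code = subprocess.call( the_Command, shell=True )
        if return_code != 0:
            # 143 = 128 + 15 (SIGTERM): the job was terminated externally
            raise RuntimeError( "Command '%s' returned error %d" % ( the_Command, return_code ) )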

Did I miss something??
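
For what it's worth, the traceback shows the run dies before the optimizer takes a single step: SciPy calls the inequality-constraint function once up front just to count the constraints (the mieq = len(f_ieqcons(x)) line), and in SU2 that first call already launches a full SU2_CFD direct run under DESIGNS/DSN_001/DIRECT. A small runnable illustration of that SciPy behaviour (stand-in functions, not SU2 code):

    import numpy as np
    from scipy.optimize import fmin_slsqp

    def obj_f( x ):
        return float( (x**2).sum() )     # stand-in objective

    def con_cieq( x ):
        return [ 1.0 - x.sum() ]         # stand-in inequality constraint

    # fmin_slsqp evaluates con_cieq(x0) up front just to count constraints;
    # in SU2 the very first SU2_CFD launch happens inside this call
    x0 = np.zeros(38)                    # 38 design variables, as above
    fmin_slsqp( obj_f, x0, f_ieqcons=con_cieq, iter=100, acc=1e-13 )

So it is the very first parallel direct run under the optimizer that gets terminated.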

March 2, 2016, 21:10   #2
Xie Chen
New Member
Join Date: Mar 2015
Posts: 3
I tried it again last night.

Nothing happened...

I'm going mad...


Tags
optimization parallel

