Profiling scipy.weave inline code

I am using scipy.weave for mission-critical components in my Python scripts. Where possible, I parallelize this code with OpenMP. In some cases I run into bottlenecks, which are probably due to false sharing. How can I profile this inline code? Are there suitable tools for this on Linux?

Below is a poor implementation of vector addition that is prone to false sharing (see the comment in the listing); a variant that avoids the problem is sketched after it.

from scipy.weave import inline
import numpy as np
import time

N = 1000
a = np.random.rand(N)
b = np.random.rand(N)
c = np.random.rand(N)

cpus = 4
weave_options = {'headers'           : ['<omp.h>'],
                 'extra_compile_args': ['-fopenmp', '-O3'],
                 'extra_link_args'   : ['-lgomp'],
                 'compiler'          : 'gcc'}

code = \
r"""
omp_set_num_threads(cpus);
#pragma omp parallel
{
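   /* schedule(dynamic) below defaults to a chunk size of 1, so neighbouring
      iterations -- and hence neighbouring elements of c -- go to different
      threads, which is what provokes the false sharing. */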
   #pragma omp for schedule(dynamic)
      for ( int i=0; i<N; i++ ){
         c[i] = a[i]+b[i];
      }
}
"""

now = time.time()
inline(code,['a','b','c','N','cpus'],**weave_options)
print "TOOK {0:.4f}".format(time.time()-now)
print "SUCCESS" if np.all(np.equal(a,a)) else "FAIL"

      

EDIT:

One can use

valgrind --tool=callgrind --simulate-cache=yes python ***.py

and

kcachegrind ./callgrind.out.****

to get at least a rough picture. But the output tends to get cluttered for this kind of generated wrapper code.
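
One way to reduce that clutter (a sketch, not part of the original setup): callgrind can be started with instrumentation switched off (--instr-atstart=no) and toggled from inside the inline C code via the client-request macros from <valgrind/callgrind.h>, so that only the kernel is measured. The names weave_options_cg and code_cg below are just illustrative:

# Sketch: restrict callgrind instrumentation to the weave kernel.
# Run with:
#   valgrind --tool=callgrind --simulate-cache=yes --instr-atstart=no python ***.py
weave_options_cg = {'headers'           : ['<omp.h>', '<valgrind/callgrind.h>'],
                    'extra_compile_args': ['-fopenmp', '-O3'],
                    'extra_link_args'   : ['-lgomp'],
                    'compiler'          : 'gcc'}

code_cg = \
r"""
CALLGRIND_START_INSTRUMENTATION;   /* start collecting only around the kernel */
omp_set_num_threads(cpus);
#pragma omp parallel
{
   #pragma omp for schedule(dynamic)
      for ( int i=0; i<N; i++ ){
         c[i] = a[i]+b[i];
      }
}
CALLGRIND_STOP_INSTRUMENTATION;
CALLGRIND_DUMP_STATS;              /* write a counter dump for this region */
"""

inline(code_cg,['a','b','c','N','cpus'],**weave_options_cg)

As far as I understand, valgrind serializes the threads and simulates a single cache, so this gives instruction and cache-miss counts for the kernel rather than a faithful picture of the coherence traffic behind false sharing.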
