NumPy eig is about 4x slower than MATLAB eig on Mac OS X 10.6. What am I doing wrong?

I profiled the eig function in MATLAB and NumPy to compare performance on my MacBook Pro (2 GHz quad-core i7, OS X 10.6). NumPy's eig looks quite slow compared to MATLAB's.

Here's the code I profiled in NumPy:

s = '''\
x = numpy.random.random((2000,2000));
numpy.linalg.eig(x);
'''
t = timeit.Timer(stmt=s,setup="import numpy")
result = t.repeat(repeat=5,number=10)
result
Out[22]: 
[197.1737039089203,
 197.34872913360596,
 196.8160741329193,
 197.94081807136536,
 194.5740351676941]


That's around 19.7 s / exec in NumPy.

Here's the same code in MATLAB:

clear all
tic;
for i = 1:50
    x = rand(2000,2000);
    eig(x);
end
toc;
Elapsed time is 267.976645 seconds.


That's around 5.36 sec / exec on MATLAB.

I suspect the cause is something simple: eig shouldn't depend much on JIT performance, so it probably comes down to the BLAS/LAPACK libraries each product links against. I know MATLAB uses the Accelerate Framework on Mac.

NumPy also seems to be using the Accelerate Framework BLAS on my MacBook Pro; here is the output of numpy.show_config():

numpy.show_config()
lapack_opt_info:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    extra_compile_args = ['-msse3']
    define_macros = [('NO_ATLAS_INFO', 3)]
blas_opt_info:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers']
    define_macros = [('NO_ATLAS_INFO', 3)]


I am using Python 2.7.2 and NumPy 1.6 (both installed from MacPorts).

So here's my question for the NumPy folks: why is NumPy slower in this case? Did I miss some optimization when installing NumPy?

+3




3 answers


As far as I know, MATLAB uses the MKL libraries for its BLAS, not the Accelerate Framework, and in my experience Accelerate is significantly slower than MKL. To test this, you can try the academic version of the Enthought Python Distribution (EPD), where NumPy is compiled against MKL, and compare the timings. Also, MATLAB uses all threads by default (try running it in single-threaded mode), but NumPy does not. In EPD, the thread count can be set by running:



import mkl 
mkl.set_num_threads(4) 


+5




If I am reading this right, you are profiling the random number generator in addition to the eig function. I made this mistake myself once when comparing GAUSS to MATLAB: factor the random number generation out of the timed code and see what you get.
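To illustrate, here is a sketch of the benchmark with the random-matrix generation moved into the timeit setup, so that only numpy.linalg.eig itself is timed (the matrix is smaller here just so the sketch runs quickly):

```python
import timeit

# Generate the matrix once in setup; only the eig call is timed.
setup = """
import numpy
x = numpy.random.random((500, 500))
"""
t = timeit.Timer(stmt="numpy.linalg.eig(x)", setup=setup)
print(min(t.repeat(repeat=3, number=1)), "s per eig call")
```

For a 2000x2000 matrix the rand call is cheap relative to eig, so this mostly sharpens the measurement rather than closing the gap, but it removes one variable from the comparison.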

One more note: for some LAPACK/BLAS routines you can get better performance if you make sure your NumPy arrays are stored in Fortran order internally:



In [12]: x = numpy.random.random((200,200))

In [13]: x.flags
Out[13]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False


In [15]: x = numpy.array(x, order="F")

In [16]: x.flags
Out[16]: 
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
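For eig specifically the effect may be minor, since NumPy copies a C-ordered array before handing it to LAPACK and that copy is cheap relative to the eigendecomposition itself, but a rough timing sketch of the two layouts would look like this:

```python
import time

import numpy

x_c = numpy.random.random((500, 500))   # C-contiguous (NumPy default)
x_f = numpy.asfortranarray(x_c)         # same values, Fortran-ordered

for label, a in [("C order", x_c), ("F order", x_f)]:
    t0 = time.time()
    numpy.linalg.eig(a)
    print(label, time.time() - t0, "s")
```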


-Chris

+2




The huge difference is that in MATLAB you only compute the eigenvalues, while in Python/NumPy you compute both the eigenvalues and the eigenvectors. To make the comparison fair, do one of the following:

1. Change numpy.linalg.eig(x) to numpy.linalg.eigvals(x) and leave the MATLAB code as is, OR
2. Change eig(x) to [V, D] = eig(x) in MATLAB and leave the Python/NumPy code as is (this may increase the memory consumed by the MATLAB script).

In my experience, Python/NumPy built against MKL (I use Windows and don't know much about the Accelerate framework) is just as fast as, or slightly faster than, MATLAB with MKL.
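As a sketch, the difference can be seen directly by timing numpy.linalg.eigvals (eigenvalues only, like MATLAB's one-output eig(x)) against numpy.linalg.eig (eigenvalues and eigenvectors); the exact ratio will depend on your BLAS/LAPACK:

```python
import time

import numpy

x = numpy.random.random((1000, 1000))

t0 = time.time()
w = numpy.linalg.eigvals(x)    # eigenvalues only, like MATLAB's eig(x)
t_vals = time.time() - t0

t0 = time.time()
w2, v = numpy.linalg.eig(x)    # eigenvalues AND eigenvectors
t_full = time.time() - t0

print("eigvals: %.2f s   eig: %.2f s" % (t_vals, t_full))
```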

+1








