Bit hacking with NumPy

I was trying to implement a generalized version of the fast inverse square root I found here, and this is what I came up with:

import numpy as np

def get_K(exponent, B=127, L=2**23, sigma=0.0450465, f=np.float32):
    return f((1 - exponent) * L * (B - f(sigma)))

def get_result(exponent, B=127, L=2**23, sigma=0.0450465, f=np.float32):
    K = f(get_K(exponent, 127, 2**23, f(0.0450465)))
    return lambda num: (K + f(num*exponent))

if __name__ == '__main__':
    print((get_result(0.5)(2)).astype(np.int32))

      

but when I run the above example I get 532487680, which is just the numpy.float32 value of get_result(0.5)(2) converted to an integer.

What am I doing wrong? In other words, how do I reinterpret a 32-bit floating-point number as a 32-bit integer with NumPy, the same way I would in C?



1 answer


The following fast inverse square root implementation can be used with numpy (adapted from [1]):

import numpy as np

def fast_inv_sqrt(x):
    x = x.astype('float32')
    x2 = x * 0.5
    # Reinterpret the float32 bits as int32 (the equivalent of the C pointer cast).
    y = x.view(dtype='int32')
    # Magic constant and bit shift give the initial guess.
    y = 0x5f3759df - np.right_shift(y, 1)
    # Reinterpret the bits back as float32.
    y = y.view(dtype='float32')
    # One Newton-Raphson iteration refines the estimate.
    y = y * (1.5 - (x2 * y * y))
    return y
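
The key difference from the question's code is that view reinterprets the underlying bits of the array (the equivalent of the C pointer cast), while astype converts by value. A minimal illustration, plus a check (assuming the inverse square root corresponds to exponent -1/2) that the question's get_K formula reproduces the magic constant used above:

import numpy as np

x = np.array([1.0], dtype=np.float32)

# Value conversion: 1.0 simply becomes the integer 1.
print(x.astype(np.int32))    # [1]

# Bit reinterpretation: the raw IEEE 754 bits of 1.0f, i.e. 0x3F800000.
print(x.view(np.int32))      # [1065353216]

# For exponent -1/2, (1 - p) * L * (B - sigma) gives the magic constant.
print(hex(int((1 - (-0.5)) * 2**23 * (127 - 0.0450465))))    # 0x5f3759df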

      

Now, since NumPy allocates several temporary arrays along the way, this is not very fast:



import numpy as np

x = np.arange(1, 10000, dtype='float32')

%timeit fast_inv_sqrt(x)
# 10000 loops, best of 3: 36.2 µs per loop

%timeit 1./np.sqrt(x)
# 10000 loops, best of 3: 13.1 µs per loop
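
As a quick sanity check of the approximation itself (reusing x and fast_inv_sqrt from above; not part of the original answer), the relative error of a single Newton iteration stays well below one percent:

approx = fast_inv_sqrt(x)
exact = 1. / np.sqrt(x)

# Maximum relative error over x, roughly on the order of 1e-3.
rel_err = np.abs(approx - exact) / exact
print(rel_err.max())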

      

If you want speed, you are better off doing this calculation in C and writing a Python interface to it with Cython, f2py, etc.
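
Short of going to C, some of the temporary allocations can be trimmed within NumPy itself by writing each ufunc's result into an existing buffer via its out= argument; whether that closes much of the gap would have to be measured. A sketch of that idea (the name fast_inv_sqrt_fewer_temps is mine, not from the original answer):

import numpy as np

def fast_inv_sqrt_fewer_temps(x):
    # Work on a float32 copy; y and its int32 view share the same memory.
    y = x.astype('float32')
    x2 = y * 0.5                        # one temporary we keep
    i = y.view('int32')
    np.right_shift(i, 1, out=i)         # i >>= 1, in place
    np.subtract(0x5f3759df, i, out=i)   # i = magic - i, in place
    # y now holds the float32 reinterpretation of the modified bits.
    t = y * y                           # second temporary
    np.multiply(t, x2, out=t)
    np.subtract(np.float32(1.5), t, out=t)
    np.multiply(y, t, out=y)            # one Newton iteration, in place
    return y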







