Fastest way to use numpy.interp on a two dimensional array
I have the following problem: I am trying to find the fastest way to use NumPy's interpolation method on a two-dimensional array of data points.

import numpy as np

xp = [0.0, 0.25, 0.5, 0.75, 1.0]
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)
Here xp
holds the x-coordinates of the data points, x
is an array containing the x-coordinates of the values I want to interpolate, and fp
is a two-dimensional array containing the y-coordinates of the data points.
xp
[0.0, 0.25, 0.5, 0.75, 1.0]
x
array([ 0.54340494, 0.27836939, 0.42451759, 0.84477613, 0.00471886,
0.12156912, 0.67074908, 0.82585276, 0.13670659, 0.57509333])
fp
array([[ 0.89132195, 0.20920212, 0.18532822, 0.10837689, 0.21969749],
[ 0.97862378, 0.81168315, 0.17194101, 0.81622475, 0.27407375],
[ 0.43170418, 0.94002982, 0.81764938, 0.33611195, 0.17541045],
[ 0.37283205, 0.00568851, 0.25242635, 0.79566251, 0.01525497],
[ 0.59884338, 0.60380454, 0.10514769, 0.38194344, 0.03647606],
[ 0.89041156, 0.98092086, 0.05994199, 0.89054594, 0.5769015 ],
[ 0.74247969, 0.63018394, 0.58184219, 0.02043913, 0.21002658],
[ 0.54468488, 0.76911517, 0.25069523, 0.28589569, 0.85239509],
[ 0.97500649, 0.88485329, 0.35950784, 0.59885895, 0.35479561],
[ 0.34019022, 0.17808099, 0.23769421, 0.04486228, 0.50543143]])
The desired output should look like this:
array([ 0.17196795, 0.73908678, 0.85459966, 0.49980648, 0.59893702,
0.9344241 , 0.19840596, 0.45777785, 0.92570835, 0.17977264])
Again, this is a simplified version of my problem; the real arrays are roughly 1 million and 10 million elements long, which is why finding the fastest way to do this matters.
Thanks!
So basically you want output equivalent to
np.array([np.interp(x[i], xp, fp[i]) for i in range(x.size)])
But the for loop
makes this pretty slow for large x.size
This should work:
def multiInterp(x, xp, fp):
    xp = np.asarray(xp)
    # cast the boolean mask to int: np.diff cannot subtract boolean arrays
    i, j = np.nonzero(np.diff((xp[None, :] < x[:, None]).astype(np.int8)))
    d = (x - xp[j]) / np.diff(xp)[j]
    return fp[i, j] + np.diff(fp)[i, j] * d
EDIT: This works even better and can handle large arrays:
def multiInterp2(x, xp, fp):
    xp = np.asarray(xp)
    i = np.arange(x.size)
    j = np.searchsorted(xp, x) - 1
    d = (x - xp[j]) / (xp[j + 1] - xp[j])
    return (1 - d) * fp[i, j] + fp[i, j + 1] * d
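To sanity-check the vectorized version, here is a short sketch (using the same setup as the question) that compares multiInterp2 against the straightforward per-row np.interp loop:

```python
import numpy as np

def multiInterp2(x, xp, fp):
    # Index of the left grid point for each query: xp[j] <= x < xp[j + 1]
    i = np.arange(x.size)
    j = np.searchsorted(xp, x) - 1
    # Fractional position of each x inside its interval
    d = (x - xp[j]) / (xp[j + 1] - xp[j])
    # Linear blend of the two bracketing y-values, one row of fp per query
    return (1 - d) * fp[i, j] + fp[i, j + 1] * d

# Setup from the question (xp as an ndarray so fancy indexing works)
xp = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)

expected = np.array([np.interp(x[k], xp, fp[k]) for k in range(x.size)])
result = multiInterp2(x, xp, fp)
print(np.allclose(result, expected))  # True
```

Note this sketch assumes every x lies inside [xp[0], xp[-1]]; unlike np.interp, it does not clamp out-of-range queries.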
Testing:
multiInterp2(x, xp, fp)
Out:
array([ 0.17196795, 0.73908678, 0.85459966, 0.49980648, 0.59893702,
0.9344241 , 0.19840596, 0.45777785, 0.92570835, 0.17977264])
Timings with the original data:
%timeit multiInterp2(x, xp, fp)
The slowest run took 6.87 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 25.5 µs per loop
%timeit np.concatenate([compiled_interp(x[[i]], xp, fp[i]) for i in range(fp.shape[0])])
The slowest run took 4.03 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 39.3 µs per loop
It seems to be faster even for small x.
Let's try something much, much bigger:
n = 10000
m = 10000
xp = np.linspace(0, 1, n)
x = np.random.rand(m)
fp = np.random.rand(m, n)
%timeit b()  # kazemakase's b(), defined in the other answer
10 loops, best of 3: 38.4 ms per loop
%timeit multiInterp2(x, xp, fp)
100 loops, best of 3: 2.4 ms per loop
The advantage scales much better than the looped np.interp version.
np.interp
is basically a wrapper around the compiled numpy.core.multiarray.interp
. We can shave off a bit of overhead by using it directly:
from numpy.core.multiarray import interp as compiled_interp

def a(x=x, xp=xp, fp=fp):
    return np.array([np.interp(x[i], xp, fp[i]) for i in range(fp.shape[0])])

def b(x=x, xp=xp, fp=fp):
    return np.concatenate([compiled_interp(x[[i]], xp, fp[i]) for i in range(fp.shape[0])])

def multiInterp(x=x, xp=xp, fp=fp):
    xp = np.asarray(xp)
    # cast the boolean mask to int: np.diff cannot subtract boolean arrays
    i, j = np.nonzero(np.diff((xp[None, :] < x[:, None]).astype(np.int8)))
    d = (x - xp[j]) / np.diff(xp)[j]
    return fp[i, j] + np.diff(fp)[i, j] * d
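As a quick equivalence check, the three variants can be compared on the question's data. This is a sketch: the boolean mask is cast to int (modern NumPy no longer allows subtracting booleans inside np.diff), and the private compiled_interp import gets a fallback since it may move in newer NumPy releases:

```python
import numpy as np

try:
    # private entry point; may move or disappear in newer NumPy releases
    from numpy.core.multiarray import interp as compiled_interp
except ImportError:
    compiled_interp = np.interp

def a(x, xp, fp):
    # plain per-row loop over np.interp
    return np.array([np.interp(x[k], xp, fp[k]) for k in range(fp.shape[0])])

def b(x, xp, fp):
    # same loop, calling the compiled function directly
    return np.concatenate([compiled_interp(x[[k]], xp, fp[k]) for k in range(fp.shape[0])])

def multiInterp(x, xp, fp):
    # The comparison matrix flips from True to False exactly once per row;
    # diff + nonzero locate that flip, i.e. the bracketing interval
    i, j = np.nonzero(np.diff((xp[None, :] < x[:, None]).astype(np.int8)))
    d = (x - xp[j]) / np.diff(xp)[j]
    return fp[i, j] + np.diff(fp)[i, j] * d

xp = np.linspace(0.0, 1.0, 5)
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)

r_a, r_b, r_m = a(x, xp, fp), b(x, xp, fp), multiInterp(x, xp, fp)
print(np.allclose(r_a, r_b) and np.allclose(r_a, r_m))  # True
```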
Timing tests show that for the example arrays this is on par with Daniel Forsman's solution:
%timeit a()
10000 loops, best of 3: 44.7 µs per loop
%timeit b()
10000 loops, best of 3: 32 µs per loop
%timeit multiInterp()
10000 loops, best of 3: 33.3 µs per loop
Update
For moderately large arrays, multiInterp takes the lead:
n = 100
m = 1000
xp = np.linspace(0, 1, n)
x = np.random.rand(m)
fp = np.random.rand(m, n)

%timeit a()
100 loops, best of 3: 4.14 ms per loop
%timeit b()
100 loops, best of 3: 2.97 ms per loop
%timeit multiInterp()
1000 loops, best of 3: 1.42 ms per loop
But for even bigger arrays it falls behind:
n = 1000
m = 10000
%timeit a()
10 loops, best of 3: 43.3 ms per loop
%timeit b()
10 loops, best of 3: 32.9 ms per loop
%timeit multiInterp()
10 loops, best of 3: 132 ms per loop
Finally, for very large arrays (I'm on 32 bit), the temporary arrays become a problem:
n = 10000
m = 10000
%timeit a()
10 loops, best of 3: 46.2 ms per loop
%timeit b()
10 loops, best of 3: 32.1 ms per loop
%timeit multiInterp()
# MemoryError
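The MemoryError comes from multiInterp's intermediates: the broadcast comparison xp[None, :] < x[:, None] and its diff are both m × n arrays, so memory grows with the product of the two sizes. multiInterp2's temporaries are all length m, independent of n, which is why it keeps working at this scale. A small sketch illustrating the difference in intermediate sizes (shapes only, at a reduced scale):

```python
import numpy as np

n, m = 1000, 1000
xp = np.linspace(0, 1, n)
rng = np.random.RandomState(0)
x = rng.rand(m)
fp = rng.rand(m, n)

# multiInterp materializes an (m, n) boolean matrix before anything else
mask = xp[None, :] < x[:, None]
print(mask.shape)  # (1000, 1000) -> scales as m * n

# multiInterp2's temporaries are all length m, independent of n
i = np.arange(m)
j = np.searchsorted(xp, x) - 1
d = (x - xp[j]) / (xp[j + 1] - xp[j])
out = (1 - d) * fp[i, j] + fp[i, j + 1] * d
print(j.shape, d.shape, out.shape)  # (1000,) (1000,) (1000,)
```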