Local maxima in point cloud

I have a point cloud C where each point has an associated value. Let's say the points are in 2D space, so each point can be represented as a triplet (x, y, v).

I would like to find the subset of points that are local maxima. That is, for some radius R, I would like to find the subset S of points in C such that for any point Pi (with value vi) in S, there is no point Pj in C within distance R of Pi whose value vj is greater than vi.

I can see how to do this in O(N^2), but that seems wasteful. Is there a more efficient way?
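For concreteness, the O(N^2) brute force I have in mind looks something like this (the toy data is just for illustration):

```python
import numpy as np

# toy (x, y, v) triplets
pts = np.array([[0., 0., 1.], [1., 0., 3.], [0., 1., 2.], [5., 5., 0.5]])
R = 1.5

maxima = []
for i, (x, y, v) in enumerate(pts):
    # squared Euclidean distance from point i to every point
    d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
    # keep point i if no point within R has a strictly larger value
    if not np.any((d2 <= R ** 2) & (pts[:, 2] > v)):
        maxima.append(i)

print(maxima)  # [1, 3]
```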


Side notes:

  • The source of this problem is that I am trying to find local maxima in a sparse matrix, so in my case x and y are ordered integer indices; if that simplifies the problem, let me know!
  • I'm perfectly happy if the solution only works for Manhattan distance or the like.
  • I'm working in Python, so if there is a good vectorized numpy way to do this, that's great.


3 answers


Following on from Eve's suggestion, here is an answer using scipy's KDTree:



from scipy.spatial import KDTree
import numpy as np

def locally_extreme_points(coords, data, neighbourhood, lookfor = 'max', p_norm = 2.):
    '''
    Find local maxima of points in a pointcloud.  Ties result in both points passing through the filter.

    Not to be used for high-dimensional data.  It will be slow.

    coords: A shape (n_points, n_dims) array of point locations
    data: A shape (n_points, ) vector of point values
    neighbourhood: The (scalar) size of the neighbourhood in which to search.
    lookfor: Either 'max', or 'min', depending on whether you want local maxima or minima
    p_norm: The p-norm to use for measuring distance (e.g. 1=Manhattan, 2=Euclidean)

    returns
        filtered_coords: The coordinates of locally extreme points
        filtered_data: The values of these points
    '''
    assert coords.shape[0] == data.shape[0], 'You must have one coordinate per data point'
    extreme_fcn = {'min': np.min, 'max': np.max}[lookfor]
    kdtree = KDTree(coords)
    neighbours = kdtree.query_ball_tree(kdtree, r=neighbourhood, p = p_norm)
    i_am_extreme = [data[i]==extreme_fcn(data[n]) for i, n in enumerate(neighbours)]
    extrema, = np.nonzero(i_am_extreme)  # This line just saves time on indexing
    return coords[extrema], data[extrema]
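A quick usage sketch of the key call above: `query_ball_tree` returns, for every point, the indices of all points within `r` of it (each point includes itself), so a point is locally extreme exactly when its value equals the extremum over its neighbour list. The toy data here is mine:

```python
import numpy as np
from scipy.spatial import KDTree

coords = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])  # toy points
data = np.array([1., 3., 2., 5.])                            # toy values

tree = KDTree(coords)
# for each point: indices of all points within r=1.5 (including itself)
neighbours = tree.query_ball_tree(tree, r=1.5)

i_am_max = [bool(data[i] == np.max(data[n])) for i, n in enumerate(neighbours)]
print(i_am_max)  # [False, True, False, True] -> points 1 and 3 are local maxima
```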

      



Use a 2-d tree (a k-d tree instance for 2 dimensions). After O(N log N) preprocessing time, it lets you search for the close neighbors of a point in roughly O(log N + K) time (on average, for K neighbors found), for a total of O(N log N + KN) across all your points. It works just fine with the Manhattan distance.
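To illustrate the Manhattan-distance point: scipy's `KDTree` query methods take a `p` parameter for the Minkowski norm, so p=1 (Manhattan) and p=2 (Euclidean) can disagree about who counts as a neighbour. A minimal sketch with toy coordinates of my own:

```python
import numpy as np
from scipy.spatial import KDTree

# toy 2D coordinates, chosen so the two norms disagree
coords = np.array([[0., 0.], [1., 1.], [3., 3.]])
tree = KDTree(coords)

# point (1, 1) is within r=1.5 of the origin under the Euclidean norm
# (distance ~1.41) but not under the Manhattan norm (distance 2)
print(sorted(tree.query_ball_point([0., 0.], r=1.5, p=2)))  # [0, 1]
print(sorted(tree.query_ball_point([0., 0.], r=1.5, p=1)))  # [0]
```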





I came up with this solution, but it is probably O(N^2):

import numpy as np

# generate test data
n = 10
foo = np.random.rand(n,n)

# fixed test data for visual back-checking
# foo = np.array([[ 0.12439309,  0.88878825,  0.21675684,  0.21422532,  0.7016789 ],
#                 [ 0.14486462,  0.40642871,  0.4898418 ,  0.41611303,  0.12764404],
#                 [ 0.41853585,  0.22216484,  0.36113181,  0.5708699 ,  0.3874901 ],
#                 [ 0.24314391,  0.22488507,  0.22054467,  0.25387521,  0.46272496],
#                 [ 0.99097341,  0.76083447,  0.37941783,  0.932519  ,  0.9668254 ]])

# list to collect local maxima
local_maxima = []

# distance in x / y to define region of interest around current center coordinate
# roi = 1 corresponds to a region of interest of 3x3 (except at borders)
roi = 1

# give pseudo-coordinates
x, y = np.meshgrid(range(foo.shape[0]), range(foo.shape[1]), indexing='ij')

for i in range(foo.shape[0]):
    for j in range(foo.shape[1]):
        x0 = x[i,j]
        y0 = y[i,j]
        z0 = foo[i,j]
        # boolean masks avoid out-of-bounds issues when taking the sub-matrix
        mask_x = abs(x - x0) <= roi
        mask_y = abs(y - y0) <= roi
        mask = mask_x & mask_y
        if np.max(foo[mask]) == z0:
            local_maxima.append((i, j))

print(local_maxima)

      

It all comes down to defining sliding windows / filters over your matrix. All other solutions that come to mind would rather find absolute maxima (e.g. histograms)...
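For the gridded case, that sliding-window idea can also be expressed with `scipy.ndimage.maximum_filter`, which computes the windowed maximum in one vectorized pass. A sketch, assuming `mode='nearest'` (edge replication) is an acceptable stand-in for clipping the window at the borders as the loop above does:

```python
import numpy as np
from scipy.ndimage import maximum_filter

foo = np.random.rand(10, 10)
roi = 1  # 3x3 window

# windowed maximum; edge replication matches a window clipped at the borders
win_max = maximum_filter(foo, size=2 * roi + 1, mode='nearest')
local_maxima = [tuple(ij) for ij in np.argwhere(foo == win_max)]
print(local_maxima)
```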

Anyway, I hope my ansatz is useful to some extent...

EDIT: Here's another solution, which should be faster than the first but is still O(N^2), and it doesn't depend on rectangularly gridded data:

import numpy as np

# generate test data
# points = np.random.rand(10,3)

points = np.array([[ 0.08198248,  0.25999721,  0.07041999],
                   [ 0.19091977,  0.05404123,  0.25826508],
                   [ 0.8842875 ,  0.90132467,  0.50512316],
                   [ 0.33320528,  0.74069399,  0.36643752],
                   [ 0.27789568,  0.14381512,  0.13405309],
                   [ 0.73586202,  0.4406952 ,  0.52345838],
                   [ 0.76639731,  0.70796547,  0.70692905],
                   [ 0.09164532,  0.53234394,  0.88298593],
                   [ 0.96164975,  0.60700481,  0.22605181],
                   [ 0.53892635,  0.95173308,  0.22371167]])

# list to collect local maxima
local_maxima = []

# distance in x / y to define region of interest around current center coordinate
radius = 0.25

for i in range(points.shape[0]):
    # radial mask of radius `radius`; could be tidied up with numpy.linalg.norm
    mask = np.sqrt((points[:,0] - points[i,0])**2 + (points[:,1] - points[i,1])**2) <= radius
    # if the current z value equals z_max within the region of interest, append to the result list
    if points[i,2] == np.max(points[mask][:,2]):
        local_maxima.append(tuple(points[i]))

      

Result:

local_maxima = [
 (0.19091976999999999, 0.054041230000000003, 0.25826507999999998), 
 (0.33320527999999999, 0.74069399000000002, 0.36643752000000002), 
 (0.73586202000000001, 0.44069520000000001, 0.52345838), 
 (0.76639731, 0.70796546999999999, 0.70692904999999995), 
 (0.091645320000000002, 0.53234393999999996, 0.88298593000000003), 
 (0.53892635, 0.95173308000000001, 0.22371167)
]
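As the comment in the loop hints, the same computation can be written without the explicit Python loop by building the full pairwise-distance matrix with broadcasting and `np.linalg.norm`. It is still O(N^2) in time (and now memory), but much faster in practice for moderate N; a sketch:

```python
import numpy as np

points = np.random.rand(10, 3)  # (x, y, z) rows, as in the loop version
radius = 0.25

# (n, n) matrix of pairwise XY distances via broadcasting
d = np.linalg.norm(points[:, None, :2] - points[None, :, :2], axis=-1)
within = d <= radius

# for each point, the max z among its neighbours (every point neighbours itself)
neigh_max = np.where(within, points[None, :, 2], -np.inf).max(axis=1)
local_maxima = [tuple(p) for p in points[points[:, 2] == neigh_max]]
```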

      







