How to align SURF and Harris points MATLAB
As you know, MATLAB has separate functions to detect Harris and SURF features. I need to combine the two feature lists, from Harris and SURF, to make the matching more efficient.
The default procedure looks like this:
points_image_Harris = detectHarrisFeatures(image);
[feature_image_Harris, validpoints_image_Harris] = extractFeatures(image, points_image_Harris);
indexPairs_Harris = matchFeatures(feature_template_Harris, feature_image_Harris);
but I want to combine the two lists of points before doing the matching, something like this:
points_image_Harris =detectHarrisFeatures(image );
points_image_SURF =detectSURFFeatures(image );
Points = points_image_Harris + points_image_SURF
then use the Points list to extract the features and match them. How can I do this when the points are of two different types, cornerPoints and SURFPoints?
I want to use features from both SURF and Harris together, but I don't know if such a combination is possible, or how else to get corresponding features from both detectors.
What I actually want is to detect these features, get the pixel positions of the points in the frames, and then calculate the difference between their X and Y positions. I also don't know how to get the coordinates of the matched objects from the SURF and Harris matches.
Both detectHarrisFeatures and detectSURFFeatures return an object whose properties contain the relevant information about the interest points found in the image. For a reproducible example, use the image cameraman.tif, which ships with the Image Processing Toolbox. Let's run both detectors with default parameters:
>> im = imread('cameraman.tif');
>> harrisPoints = detectHarrisFeatures(im);
>> surfPoints = detectSURFFeatures(im);
When we display harrisPoints, this is what we get:
harrisPoints =
184x1 cornerPoints array with properties:
Location: [184x2 single]
Metric: [184x1 single]
Count: 184
When we display surfPoints, this is what we get:
surfPoints =
180x1 SURFPoints array with properties:
Scale: [180x1 single]
SignOfLaplacian: [180x1 int8]
Orientation: [180x1 single]
Location: [180x2 single]
Metric: [180x1 single]
Count: 180
Thus, both harrisPoints and surfPoints have a property named Location that contains the spatial coordinates of the features you want. This is an N x 2 matrix where each row gives the location of one interest point: the first column is x, the horizontal coordinate, and the second column is y, the vertical coordinate. The origin is at the upper-left corner of the image, and y increases as you move down.
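As a quick illustration of this convention (a sketch; insertMarker is from the Computer Vision Toolbox):

```matlab
im = imread('cameraman.tif');
harrisPoints = detectHarrisFeatures(im);
xy = harrisPoints.Location;            % N x 2 single: [x y] per row
x = xy(:,1);                           % horizontal coordinate (columns)
y = xy(:,2);                           % vertical coordinate (rows, increasing downward)
imshow(insertMarker(im, xy, 'plus'));  % overlay the detected corners on the image
```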
Therefore, if you want to combine the points from both detectors, extract the Location property from both objects and concatenate them into one matrix:
>> Points = [harrisPoints.Location; surfPoints.Location];
Points should now be a matrix in which each row gives you the location of one feature.
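To get the coordinates of matched points and the X/Y displacement between two frames, which is what the question asks for, here is one possible sketch. The variable names frame1 and frame2 are assumptions, standing in for two grayscale frames:

```matlab
% Detect and describe SURF features in two frames (illustrative sketch)
pts1 = detectSURFFeatures(frame1);
pts2 = detectSURFFeatures(frame2);
[f1, vpts1] = extractFeatures(frame1, pts1);
[f2, vpts2] = extractFeatures(frame2, pts2);

% Match descriptors; indexPairs(k,:) = [index into vpts1, index into vpts2]
indexPairs = matchFeatures(f1, f2);
loc1 = vpts1(indexPairs(:,1)).Location;   % M x 2 [x y] in frame 1
loc2 = vpts2(indexPairs(:,2)).Location;   % M x 2 [x y] in frame 2

% Per-match displacement between the frames
dx = loc2(:,1) - loc1(:,1);
dy = loc2(:,2) - loc1(:,2);
```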
I would like to add a small note: the Harris corner detector is only an interest-point detection algorithm. All it gives you is the location of interest points in the image. SURF is a detection-and-description framework, where you not only get interest points but also a robust descriptor for each one, which you can use to perform comparisons against interest points in other images. Therefore, combining Harris and SURF descriptors directly is not possible, because Harris does not come with its own descriptor for interest points.
In fact, it is not recommended to combine the points returned by different detectors before matching. It is better to match descriptors extracted from different points of interest separately and then combine the matched points. Otherwise, you will be comparing apples and oranges.
Think of it this way: Harris detects corners, while SURF detects blob centers. A Harris corner and a SURF keypoint are unlikely to correspond to the same physical point in the world, so it makes sense to match them separately.
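A sketch of that recommendation, assuming two images im1 and im2 (illustrative names); extractFeatures chooses a suitable descriptor for each point type, so each detector is matched against its own kind:

```matlab
% Match Harris corners between the two images
hp1 = detectHarrisFeatures(im1);
hp2 = detectHarrisFeatures(im2);
[hf1, hv1] = extractFeatures(im1, hp1);
[hf2, hv2] = extractFeatures(im2, hp2);
hPairs = matchFeatures(hf1, hf2);

% Match SURF points between the same two images
sp1 = detectSURFFeatures(im1);
sp2 = detectSURFFeatures(im2);
[sf1, sv1] = extractFeatures(im1, sp1);
[sf2, sv2] = extractFeatures(im2, sp2);
sPairs = matchFeatures(sf1, sf2);

% Combine the matched locations only after matching each detector separately
matched1 = [hv1(hPairs(:,1)).Location; sv1(sPairs(:,1)).Location];
matched2 = [hv2(hPairs(:,2)).Location; sv2(sPairs(:,2)).Location];
```

This way each descriptor type is only ever compared against descriptors of the same type, and the combined matched1/matched2 pairs can then be fed to a geometric estimation step.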