Fine-tuning the parameters of OpenCV's HoughLines function

I am trying to fit 4 lines around a square so that I can get its vertices. For precision, I'm using this approach instead of finding the corners directly with the Harris corner detector or with contours. Using OpenCV's built-in HoughLines function, I cannot get full-length lines to compute the intersection points, and I also end up with too many irrelevant lines. I would like to know whether the parameters can be tuned to meet these requirements, and if so, how. My question is essentially the same as this one here. However, I am not getting those lines even after changing the parameters. I have attached the original image along with the code and output:

Original Image:

Original image

code:

#include <Windows.h>
#include "opencv2\highgui.hpp"
#include "opencv2\imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/videoio/videoio.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{

    Mat image,src;
    image = imread("c:/pics/output2_1.bmp");
    src = image.clone();
    cvtColor(image, image, CV_BGR2GRAY);
    threshold(image, image, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY_INV);

    namedWindow("thresh", WINDOW_NORMAL);
    resizeWindow("thresh", 600, 400);

    imshow("thresh", image);

    cv::Mat edges;

    cv::Canny(image, edges, 0, 255);

    vector<Vec2f> lines;
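    // accumulator resolution: rho = 1 px, theta = 1 degree; threshold = 100 votes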
    HoughLines(edges, lines, 1, CV_PI / 180, 100, 0, 0);
    for (size_t i = 0; i < lines.size(); i++)
    {
        float rho = lines[i][0], theta = lines[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        pt1.x = cvRound(x0 + 1000 * (-b));
        pt1.y = cvRound(y0 + 1000 * (a));
        pt2.x = cvRound(x0 - 1000 * (-b));
        pt2.y = cvRound(y0 - 1000 * (a));
        line(src, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
    }

namedWindow("Edges Structure", WINDOW_NORMAL);
resizeWindow("Edges Structure", 600, 400);

imshow("Edges Structure", src);
waitKey(0);

return(0);
}

      

Output Image:

HoughLinesOutput

Update: There is a frame around this image; by removing it I was able to get rid of the nonessential lines at the image border, but I still don't get full-length lines covering the square.



1 answer


There are many ways to do this; I will give an example of just one. However, I'm fastest with Python, so my sample code will be in that language. It shouldn't be too hard to translate, though (feel free to edit your post with your C++ solution once you've finished it, for the benefit of others).

For preprocessing, I highly recommend dilate() on your edge image. This makes the lines thicker, which suits the Hough transform better. Roughly speaking, the Hough lines function sweeps a grid of candidate lines through a range of angles and distances, and whenever a candidate line crosses white pixels from Canny, it earns a vote for every pixel it passes through. The lines from Canny are not perfectly straight, however, so several slightly different candidates end up splitting the votes. Thickening those Canny lines means each candidate that really is a good fit has a much better chance of accumulating the winning score.
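
As a minimal sketch of that preprocessing step (the filename and the 3x3 kernel are my assumptions; a larger kernel thickens the edges more):

import cv2
import numpy as np

img = cv2.imread('image.png')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# a 3x3 all-ones kernel grows every edge pixel into its 8-neighborhood
dilated = cv2.dilate(edges, np.ones((3, 3), dtype=np.uint8))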

Dilated Canny

If you are going to use HoughLinesP, then your output will be line segments, where all you have is two points on each line.
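
A minimal, self-contained sketch of that output format (the synthetic image and all parameter values here are my assumptions):

import cv2
import numpy as np

# synthetic test image: one thick white horizontal line
canvas = np.zeros((200, 200), dtype=np.uint8)
cv2.line(canvas, (20, 100), (180, 100), 255, 2)

lines = cv2.HoughLinesP(canvas, rho=1, theta=np.pi/180, threshold=50,
                        minLineLength=30, maxLineGap=10)
# lines has shape (N, 1, 4): each entry is one segment's two endpoints
for x1, y1, x2, y2 in lines[0]:
    print(x1, y1, x2, y2)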

Since the lines are mostly vertical and horizontal, you can easily separate them based on their orientation. If the two y-coordinates of a segment are near each other, the line is mostly horizontal; if the two x-coordinates are near each other, the line is mostly vertical. So you can segment your lines into vertical and horizontal lines this way:

def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2-x1) < delta: # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2-y1) < delta: # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines

      

Segmented Hough lines

Then you can compute the intersection point of two line segments from their endpoints using determinants.

def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py

      

So now, if you loop through all your lines, you will have intersection points from every pairing of a horizontal line with a vertical line, but since you have many lines, you will have many intersection points for the same corner of the square.

Intersections

However, they are all in one vector, so you need not only to average the points at each corner but also to group them together first. You can achieve this with k-means clustering, which is implemented in OpenCV as kmeans().



def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers

      

Finally, we can just plot these centers (making sure we round them first, since everything so far is a float) onto the image with circle() to make sure we did it right.
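
A minimal sketch of that drawing step (the image and the centers array here are stand-ins for the real image and the kmeans output):

import cv2
import numpy as np

img = np.zeros((100, 200, 3), dtype=np.uint8)        # stand-in image
centers = np.float32([[50.2, 40.7], [150.8, 41.1]])  # stand-in cluster centers
for cx, cy in centers:
    # cv2.circle needs integer pixel coordinates, hence the rounding
    cv2.circle(img, (int(round(cx)), int(round(cy))), radius=4,
               color=(0, 0, 255), thickness=-1)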

k-means clustered corner points

And there we have it: four points at the corners of the square.

Here's my complete code in Python, including the code for generating the figures above:

import cv2
import numpy as np 

def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py

def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2-x1) < delta: # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2-y1) < delta: # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines

def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers

img = cv2.imread('image.png')

# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
dilated = cv2.dilate(edges, np.ones((3,3), dtype=np.uint8))

cv2.imshow("Dilated", dilated)
cv2.waitKey(0)
cv2.imwrite('dilated.png', dilated)

# run the Hough transform
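# rho and theta set the accumulator resolution (1 px, 1 degree); threshold is
# the minimum vote count for a line; minLineLength drops short spurious
# segments; maxLineGap bridges small breaks within one segment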
lines = cv2.HoughLinesP(dilated, rho=1, theta=np.pi/180, threshold=100, maxLineGap=20, minLineLength=50)

# segment the lines
delta = 10
h_lines, v_lines = segment_lines(lines, delta)

# draw the segmented lines
houghimg = img.copy()
for line in h_lines:
    for x1, y1, x2, y2 in line:
        color = [0,0,255] # color horizontal lines red
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)
for line in v_lines:
    for x1, y1, x2, y2 in line:
        color = [255,0,0] # color vertical lines blue
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)

cv2.imshow("Segmented Hough Lines", houghimg)
cv2.waitKey(0)
cv2.imwrite('hough.png', houghimg)

# find the line intersection points
Px = []
Py = []
for h_line in h_lines:
    for v_line in v_lines:
        px, py = find_intersection(h_line, v_line)
        Px.append(px)
        Py.append(py)

# draw the intersection points
intersectsimg = img.copy()
for cx, cy in zip(Px, Py):
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    color = np.random.randint(0,255,3).tolist() # random colors
    cv2.circle(intersectsimg, (cx, cy), radius=2, color=color, thickness=-1) # -1: filled circle

cv2.imshow("Intersections", intersectsimg)
cv2.waitKey(0)
cv2.imwrite('intersections.png', intersectsimg)

# use clustering to find the centers of the data clusters
P = np.float32(np.column_stack((Px, Py)))
nclusters = 4
centers = cluster_points(P, nclusters)
print(centers)

# draw the center of the clusters
for cx, cy in centers:
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    cv2.circle(img, (cx, cy), radius=4, color=[0,0,255], thickness=-1) # -1: filled circle

cv2.imshow("Center of intersection clusters", img)
cv2.waitKey(0)
cv2.imwrite('corners.png', img)

      


Finally, just one question... why not use the Harris corner detector implemented in OpenCV as cornerHarris()? Because it works really well, with very minimal code. I thresholded the grayscale image and then gave it a little blur to remove spurious corners, and, well...

Harris corner detector

This was done with the following code:

import cv2
import numpy as np

img = cv2.imread('image.png')

# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
r, gray = cv2.threshold(gray, 120, 255, type=cv2.THRESH_BINARY)
gray = cv2.GaussianBlur(gray, (3,3), 3)

# run harris
gray = np.float32(gray)
dst = cv2.cornerHarris(gray,2,3,0.04)

# dilate the corner points for marking
dst = cv2.dilate(dst,None)
dst = cv2.dilate(dst,None)

# threshold
img[dst>0.01*dst.max()]=[0,0,255]

cv2.imshow('dst',img)
cv2.waitKey(0)
cv2.imwrite('harris.png', img)

      

I think that with a few small adjustments, the Harris corner detector could probably be much more accurate than extrapolating intersections from the Hough lines.
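
For example, one such small adjustment (my own suggestion, an untested sketch) could be goodFeaturesToTrack(), which can score corners with the same Harris response and return only the strongest ones:

import cv2
import numpy as np

img = cv2.imread('image.png')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# useHarrisDetector=True uses the same Harris corner response under the hood;
# minDistance keeps the four returned corners from crowding around one spot
corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01,
                                  minDistance=50, useHarrisDetector=True, k=0.04)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(img, (int(x), int(y)), radius=4, color=(0, 0, 255), thickness=-1)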
