Otsu Thresholding on Sobel Filtered Image gives different results

I am creating a Sudoku solver app on the Android platform and I am having a problem processing an image. I am trying to find the horizontal lines of a puzzle with OpenCV, using a Sobel filter followed by a threshold with Otsu's algorithm:

// Kernel that is wide and short, to connect horizontal line segments
Mat kernaly = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(10, 2));
Mat dy = new Mat();
Mat close = new Mat();
// Second derivative in the y direction: responds at horizontal lines
Imgproc.Sobel(img, dy, CvType.CV_16S, 0, 2);
Core.convertScaleAbs(dy, dy);
Core.normalize(dy, dy, 0, 255, Core.NORM_MINMAX);
// Otsu chooses the threshold automatically; the threshold value 0 passed here is ignored
Imgproc.threshold(dy, close, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Imgproc.morphologyEx(close, close, Imgproc.MORPH_DILATE, kernaly);

      

This method works really well for most images, for example:

[image: puzzle where the horizontal lines are detected correctly]

However, this fails for the following image:

[image: puzzle where only one row is detected]

Can someone explain why the results are so different, and why the second image only returns one row? Also, should I switch to a different method such as Canny edge detection or Hough lines?

Thanks in advance!


Edit: Following marola's advice, I tried to remove as much of the black border as possible without warping the image. Below are the results of applying the same process to these reworked images.

Image 1: [image]

Image 2: [image]

As you can see, the results are better, as most of the lines were found. However, this is still not enough. It could be improved by adding a fixed threshold, but the right value would be different for each image.

I will probably just switch to a new approach, as this method doesn't seem reliable enough. Any advice would be greatly appreciated.



3 answers


I found a quick fix that significantly improves the results by doing some image processing before running the code above:

  • Cropped out as much of the black border as possible, as marola suggested.
  • Resized the images to a standard-size square using warpPerspective (a sketch of this step follows below).
  • Ran the code above and then dilated the result.


This is not the most reliable solution, but it works for all 10 of my sample test images.
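
A rough sketch of the warpPerspective step, assuming the four puzzle corners have already been detected; the srcCorners values and the 450x450 output size are placeholders, not values from the post:

// Placeholder corners; in practice they come from a separate corner-detection step
MatOfPoint2f srcCorners = new MatOfPoint2f(
        new Point(32, 40), new Point(470, 35),
        new Point(480, 460), new Point(28, 455));
MatOfPoint2f dstCorners = new MatOfPoint2f(
        new Point(0, 0), new Point(449, 0),
        new Point(449, 449), new Point(0, 449));
Mat transform = Imgproc.getPerspectiveTransform(srcCorners, dstCorners);
Mat square = new Mat();
Imgproc.warpPerspective(img, square, transform, new Size(450, 450)); // standard-size square
// 'square' then goes through the Sobel/Otsu code from the question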



The problem may be caused by the intensity distribution. If you look at the histogram of the image after the Sobel step:

[histogram of the Sobel image where Otsu fails]

and compare it to the histogram of the image where Otsu detection succeeds:

[histogram of the Sobel image where Otsu succeeds]



you can easily see why it failed in the first case: the computed threshold ends up shifted to the right rather than to the left, even though the dominant peak on the left accounts for all the black pixels. In the second case the distribution is split into a peak plus a flat remainder, and there are not so many white pixels to "shift" the computed threshold to the right.
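
If you want to reproduce such a histogram yourself, a minimal sketch (assuming dy is the 8-bit, normalized Sobel image from the question) could look like this:

Mat hist = new Mat();
Imgproc.calcHist(java.util.Arrays.asList(dy), new MatOfInt(0), new Mat(),
        hist, new MatOfInt(256), new MatOfFloat(0f, 256f));
// hist is a 256x1 CV_32F Mat; dump or plot it to inspect the distribution Otsu works with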

In other words, you need to get rid of the dominance of the black pixels: try cropping or scaling the Sudoku so that the black border around it is as small as possible. This will make the distribution more similar to the second case.
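
A minimal sketch of that cropping, assuming the puzzle's bounding box is already known; the Rect values below are placeholders, not taken from the post:

Rect puzzleRoi = new Rect(20, 20, img.cols() - 40, img.rows() - 40); // placeholder bounds
Mat cropped = new Mat(img, puzzleRoi); // view on the region without the dark border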

IMHO these histograms show that the method is quite sensitive: the balance between the "black" and "white" parts of the image varies, so the computed threshold level depends heavily on the particular image. I wouldn't rely on this approach. How about a fixed threshold level? In general that may not sound very good, but here it could behave more deterministically and still be correct.
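
For what it's worth, a sketch of that fixed-threshold variant, reusing the variables from the question; the level of 40 is only an illustrative guess and would need tuning:

Imgproc.Sobel(img, dy, CvType.CV_16S, 0, 2);
Core.convertScaleAbs(dy, dy);
Core.normalize(dy, dy, 0, 255, Core.NORM_MINMAX);
Imgproc.threshold(dy, close, 40, 255, Imgproc.THRESH_BINARY); // fixed level instead of THRESH_OTSU
Imgproc.morphologyEx(close, close, Imgproc.MORPH_DILATE, kernaly);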



An alternative suggestion instead of Otsu:

You can search for local maxima in the Y direction of the Sobel image:

  • Dilate the Sobel image with a 1x10 rectangular kernel ('10' should be the minimum spacing between grid lines in your images).
  • Compare the dilated image with the original; only pixels that equal their local maximum remain non-zero ( cmp(dy, dilatedImg, comparisonImg, CMP_GE) ).
  • Remove all flat areas that have no edge response at all, since those would otherwise also count as maxima ( threshold(dy, mask, 1, 255, THRESH_BINARY); And(comparisonImg, mask, comparisonImg); ).

You now have all the pixels corresponding to the strongest horizontal edge in the local area.
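
Put together in OpenCV's Java API, those steps could look roughly like this (a sketch only; the fragments above are C++-style, and the 1x10 kernel is interpreted here as 1 pixel wide and 10 pixels tall):

// Dilate dy with a tall, 1-pixel-wide kernel so each pixel sees the
// maximum of its vertical neighbourhood.
Mat vertKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 10));
Mat dilated = new Mat();
Imgproc.dilate(dy, dilated, vertKernel);

// A pixel is a vertical local maximum where dy >= its dilated value.
Mat localMax = new Mat();
Core.compare(dy, dilated, localMax, Core.CMP_GE);

// Drop flat regions with no edge response at all.
Mat mask = new Mat();
Imgproc.threshold(dy, mask, 1, 255, Imgproc.THRESH_BINARY);
Core.bitwise_and(localMax, mask, localMax);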


As a side note, your use of Sobel is a little strange:

You take the second Sobel derivative in y with Imgproc.Sobel(img, dy, CvType.CV_16S, 0, 2);

I assume you do this because you want to detect the centers of the black lines rather than their borders. But in that case the next step, convertScaleAbs, seems inconsistent: it keeps the lows as well as the highs of the second Sobel derivative. The highs correspond to the centers of the black lines, while the lows introduce the "triple line" artifacts visible in your edge image. Taking the absolute value would make more sense with the first Sobel derivative; in your case it would be better to simply drop the negative values.
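
For example, dropping the negative values could look roughly like this (a sketch reusing the question's variables, not code from the original post):

Imgproc.Sobel(img, dy, CvType.CV_16S, 0, 2);
Core.max(dy, new Scalar(0), dy);                  // drop negative responses instead of abs()
Core.normalize(dy, dy, 0, 255, Core.NORM_MINMAX); // rescale while still 16-bit
dy.convertTo(dy, CvType.CV_8U);                   // convert to 8-bit for thresholding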







