Convert a rectangle detected in the portrait CIImage to the corresponding rectangle in the landscape CIImage

I am modifying the code I found here. In that code, we capture video from the phone camera using AVCaptureSession and use CIDetector to detect a rectangle in the image feed. The feed image is 640x852 (iPhone 5 in portrait). We then draw on the image so that the user can see the detected rectangle (which in most cases is actually a trapezoid).

When the user taps a button in the UI, we capture a still image from the video and rerun the rectangle detection on that much larger image (3264x2448), which, as you can see, is landscape. We then perspective-correct the image to the detected rectangle and crop it.

This works pretty well, but the problem I am having is that in roughly 1 capture out of 5, the rectangle detected in the large image differs from the one detected in (and presented to the user from) the smaller image. Even though I only capture when the phone is (relatively) stationary, the final image is then not the rectangle the user expects.

To solve this, I want to take the coordinates of the originally detected rectangle and translate them into a rectangle in the captured still image. This is where I am stuck.

I tried this with the rectangle I detected:

CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);

CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);

CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);


So for a given rectangle:

TopLeft: (88.213425, 632.31329)
TopRight: (545.59302, 632.15546)
BottomRight: (575.57819, 369.22321)
BottomLeft: (49.973862, 369.40466)

I now get this rotated rectangle:

origin = (x = 369.223206, y = -575.578186) size = (width = 263.090088, height = 525.604309)
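For reference, those numbers can be reproduced by hand: CGAffineTransformMakeRotation(-M_PI/2) maps each point (x, y) to (y, -x), and CGRectApplyAffineTransform returns the axis-aligned bounding box of the four rotated corners. A minimal sketch in plain Swift (tuples instead of CGRect, so it stands alone) reproduces the rect above:

```swift
// A -90° rotation about the origin maps (x, y) -> (y, -x).
func rotateMinus90(_ p: (x: Double, y: Double)) -> (x: Double, y: Double) {
    return (p.y, -p.x)
}

// Corners of the detected rect's axis-aligned bounds
// (minX 49.973862, minY 369.223206, maxX 575.578186, maxY 632.31329).
let corners: [(x: Double, y: Double)] = [
    (49.973862, 369.223206),
    (575.578186, 369.223206),
    (49.973862, 632.31329),
    (575.578186, 632.31329),
]

let rotated = corners.map(rotateMinus90)
let minX = rotated.map { $0.x }.min()!
let minY = rotated.map { $0.y }.min()!
let width = rotated.map { $0.x }.max()! - minX
let height = rotated.map { $0.y }.max()! - minY
// origin ≈ (369.22, -575.58), size ≈ 263.09 x 525.60 — the rotated rectangle above
```

Note that the result is only the bounding box of the rotated bounds, not the rotated trapezoid corners themselves.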

How do I translate the coordinates of this rotated rectangle from the smaller portrait image into coordinates in the 3264x2448 landscape image?
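The translation is a 90° rotation plus a per-axis scale. The sketch below (plain Swift, with a hypothetical helper name feedPointToStillPoint) assumes the 640x852 portrait feed is the 3264x2448 landscape still rotated 90° counterclockwise and scaled down, both in Core Image's bottom-left-origin coordinates; if your connection rotates the other way, the axes must be swapped or mirrored accordingly. The aspect ratios also don't match exactly (852/640 vs 3264/2448), so expect an error of a pixel or two:

```swift
// Hypothetical helper: map a point detected in the small portrait feed into
// the large landscape still. ASSUMPTION: the portrait feed is the landscape
// still rotated 90° counterclockwise (and scaled down), both with a
// bottom-left origin as Core Image uses.
struct ImageSize { let width: Double; let height: Double }

func feedPointToStillPoint(_ p: (x: Double, y: Double),
                           feed: ImageSize, still: ImageSize) -> (x: Double, y: Double) {
    // The feed's y axis runs along the still's x axis and vice versa.
    let scaleX = still.width / feed.height   // ≈ 3264 / 852
    let scaleY = still.height / feed.width   // ≈ 2448 / 640
    return (x: p.y * scaleX, y: still.height - p.x * scaleY)
}

let feed = ImageSize(width: 640, height: 852)
let still = ImageSize(width: 3264, height: 2448)

// Where the detected TopLeft (88.213425, 632.31329) from the feed lands in the still:
let topLeft = feedPointToStillPoint((x: 88.213425, y: 632.31329), feed: feed, still: still)
```

Mapping all four trapezoid corners this way (rather than a bounding CGRect) preserves the shape the perspective correction needs.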

Edit: Rereading my own approach, I realized that making a rectangle out of the trapezoid would not solve my problem anyway!

Supporting code for rectangle detection, etc.

// In this method we detect a rect from the video feed and overlay

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // image is 640x852 on iPhone 5

    NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];

    // Guard against frames with no detected rectangle; rects[0] would crash otherwise.
    if (rects.count == 0) {
        return;
    }

    CIRectangleFeature *detectedRect = rects[0];

    // draw overlay on image code....
}
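One detail that matters in the "draw overlay" step: CIDetector returns corner points in Core Image coordinates, whose origin is the bottom-left of the image, while UIKit draws from the top-left, so each corner needs a vertical flip (plus scaling to the view size) before being drawn. A minimal sketch of the flip, with a hypothetical helper name:

```swift
// Hypothetical helper: convert a Core Image point (origin bottom-left) to
// UIKit/view coordinates (origin top-left) for an image of the given height.
func ciPointToViewPoint(_ p: (x: Double, y: Double), imageHeight: Double) -> (x: Double, y: Double) {
    return (x: p.x, y: imageHeight - p.y)
}

// The detected TopLeft (88.213425, 632.31329) in the 852-high feed draws near the top:
let viewTopLeft = ciPointToViewPoint((x: 88.213425, y: 632.31329), imageHeight: 852)
```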


This is a generalized version of taking a still image:

// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace:[NSNull null]}];
             imageData = nil;

         CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];

         if (rectangleFeature) {
             enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
         }
     }]; // closing "]" — the block is the last argument of captureStillImageAsynchronouslyFromConnection:


Thanks.



1 answer


I had the same problem when doing something like this. I solved it by running the code below in Swift; see if it helps you.



if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
        var img = UIImage(data: imageData!)!

        // metadataOutputRectOfInterest(for:) maps the preview layer's bounds to a
        // normalized (0...1) rect in the capture output's coordinate space.
        let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
        let takenCGImage = img.cgImage
        let width = (takenCGImage?.width)!
        let height = (takenCGImage?.height)!

        // Scale the normalized rect up to pixel coordinates and crop the still to it.
        let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width),
                              y: (outputRect?.origin.y)! * CGFloat(height),
                              width: (outputRect?.size.width)! * CGFloat(width),
                              height: (outputRect?.size.height)! * CGFloat(height))
        let cropCGImage = takenCGImage!.cropping(to: cropRect)
        img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)

        let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
        cropViewController.delegate = self
        self.navigationController?.pushViewController(cropViewController, animated: true)
    }
}

