iOS - convert CGPoints (CIRectangleFeature) from image to UIImageView

I am creating an iPhone application that detects a rectangle and captures an image using the camera. I draw an overlay over the largest rectangle found, and after that I have the 4 CGPoints from the CIRectangleFeature, as well as the image itself.

All 4 points in the CIRectangleFeature are in landscape orientation, while my app is in portrait.
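
(Note for context: Core Image reports feature points with a bottom-left origin, while UIKit measures from the top-left, so the y-axis usually needs flipping before any view math. A minimal sketch of that flip; `flipToUIKit` is a hypothetical helper name, not part of any framework:)

```swift
import Foundation

// Core Image's y grows upward; UIKit's y grows downward.
// Hypothetical helper: mirror a feature point into UIKit's top-left space,
// given the image height in the same units as the point.
func flipToUIKit(_ p: CGPoint, imageHeight: CGFloat) -> CGPoint {
    return CGPoint(x: p.x, y: imageHeight - p.y)
}
```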

When I display the image in a UIImageView on the next controller, the coordinates break. The image is in AspectFit mode. I searched around and found several solutions; one of them was:

extension CGPoint {
    func scalePointByCeficient(ƒ_x: CGFloat, ƒ_y: CGFloat, viewWidth: CGSize, imageWidth: CGSize) -> CGPoint {
        // Aspect-fit uses the smaller of the two axis scales
        let scale = min(ƒ_x, ƒ_y)

        var p = CGPoint(x: self.x, y: self.y)

        p.x *= scale
        p.y *= scale

        // Offset by the letterbox bars that center the scaled image in the view
        p.x += (viewWidth.width - imageWidth.width * scale) / 2.0
        p.y += (viewWidth.height - imageWidth.height * scale) / 2.0

        return p
    }

    func reversePointCoordinates() -> CGPoint {
        return CGPoint(x: self.y, y: self.x)
    }

    func sumPointCoordinates(add: CGPoint) -> CGPoint {
        return CGPoint(x: self.x + add.x, y: self.y + add.y)
    }

    func substractPointCoordinates(sub: CGPoint) -> CGPoint {
        return CGPoint(x: self.x - sub.x, y: self.y - sub.y)
    }
}

class ObyRectangleFeature: NSObject {

    public var topLeft: CGPoint
    public var topRight: CGPoint
    public var bottomLeft: CGPoint
    public var bottomRight: CGPoint

    var myRect: CIRectangleFeature?

    public var viewWidth: CGSize
    public var imageWidth: CGSize
    var centerPoint: CGPoint {
        // CIRectangleFeature's corner properties are read-only, so instead of
        // writing the corners back into myRect, compute the bounding box of
        // the four stored points and return its center.
        let xs = [topLeft.x, topRight.x, bottomLeft.x, bottomRight.x]
        let ys = [topLeft.y, topRight.y, bottomLeft.y, bottomRight.y]
        return CGPoint(x: (xs.min()! + xs.max()!) / 2.0,
                       y: (ys.min()! + ys.max()!) / 2.0)
    }

    convenience init(rectObj rectangleFeature: CIRectangleFeature) {
        self.init()

        myRect = rectangleFeature

        topLeft = rectangleFeature.topLeft
        topRight = rectangleFeature.topRight
        bottomLeft = rectangleFeature.bottomLeft
        bottomRight = rectangleFeature.bottomRight
    }

    override init() {
        self.topLeft = CGPoint.zero
        self.topRight = CGPoint.zero
        self.bottomLeft = CGPoint.zero
        self.bottomRight = CGPoint.zero

        self.viewWidth = CGSize.zero
        self.imageWidth = CGSize.zero

        super.init()
    }


    public func rotate90Degree() {

        let centerPoint = self.centerPoint

        // rotate: cos(90°) = 0, sin(90°) = 1
        topLeft = CGPoint(x: centerPoint.x + (topLeft.y - centerPoint.y), y: centerPoint.y + (topLeft.x - centerPoint.x))
        topRight = CGPoint(x: centerPoint.x + (topRight.y - centerPoint.y), y: centerPoint.y + (topRight.x - centerPoint.x))
        bottomLeft = CGPoint(x: centerPoint.x + (bottomLeft.y - centerPoint.y), y: centerPoint.y + (bottomLeft.x - centerPoint.x))
        bottomRight = CGPoint(x: centerPoint.x + (bottomRight.y - centerPoint.y), y: centerPoint.y + (bottomRight.x - centerPoint.x))

        print(self.centerPoint)
    }

    public func scaleRectWithCoeficient(ƒ_x: CGFloat, ƒ_y: CGFloat) {
        topLeft = topLeft.scalePointByCeficient(ƒ_x: ƒ_x, ƒ_y: ƒ_y, viewWidth: self.viewWidth, imageWidth: self.imageWidth)
        topRight = topRight.scalePointByCeficient(ƒ_x: ƒ_x, ƒ_y: ƒ_y, viewWidth: self.viewWidth, imageWidth: self.imageWidth)
        bottomLeft = bottomLeft.scalePointByCeficient(ƒ_x: ƒ_x, ƒ_y: ƒ_y, viewWidth: self.viewWidth, imageWidth: self.imageWidth)
        bottomRight = bottomRight.scalePointByCeficient(ƒ_x: ƒ_x, ƒ_y: ƒ_y, viewWidth: self.viewWidth, imageWidth: self.imageWidth)
    }

    public func correctOriginPoints() {

        let deltaCenter = self.centerPoint.reversePointCoordinates().substractPointCoordinates(sub: self.centerPoint)

        let TL = topLeft
        let TR = topRight
        let BL = bottomLeft
        let BR = bottomRight

        topLeft = BL.sumPointCoordinates(add: deltaCenter)
        topRight = TL.sumPointCoordinates(add: deltaCenter)
        bottomLeft = BR.sumPointCoordinates(add: deltaCenter)
        bottomRight = TR.sumPointCoordinates(add: deltaCenter)

        print(self.centerPoint)
    }
}


The call site looks like this:

ObyRectangleFeature *scaledRect = [[ObyRectangleFeature alloc] initWithRectObj:(id)rect_rect];

float f_x = _sourceImageView.frame.size.width / _sourceImageView.image.size.width;
float f_y = _sourceImageView.frame.size.height / _sourceImageView.image.size.height;

[scaledRect setViewWidth:_sourceImageView.bounds.size];
[scaledRect setImageWidth:_sourceImageView.image.size];

[scaledRect scaleRectWithCoeficientWithƒ_x:f_y ƒ_y:f_x];
[scaledRect rotate90Degree];
[scaledRect correctOriginPoints];


Basically it computes the scale factor and converts the points to UIImageView coordinates, and then takes the orientation into account, rotating by 90 degrees or more as required. But the result I get is a bit problematic.

[screenshot: the drawn overlay rectangle appears shifted below the detected object]

As you can see, the rectangle that is drawn ends up shifted downward. Any ideas on how to resolve this issue?



1 answer


The problem is that the contentMode of the UIImageView changes where the image is placed within the view. Since you are using aspect fit, you need to work out how much of the UIImageView is taken up by the letterbox (black) bars, divide the view size by the image size to get the scale, and then add an offset based on those bars. It really isn't that hard, but the math can be tricky. I would recommend using https://github.com/nubbel/UIImageView-GeometryConversion, which does the math for all the different content modes.
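
The steps above can be sketched as pure math (a minimal sketch, assuming contentMode is aspect fit, with `viewSize` standing in for `imageView.bounds.size` and `imageSize` for `image.size`; `aspectFitPoint` is an illustrative name, not part of the linked library):

```swift
import Foundation

// Map a point from image space (top-left origin, image pixels) into the
// coordinates of an aspect-fit image view.
func aspectFitPoint(_ p: CGPoint, imageSize: CGSize, viewSize: CGSize) -> CGPoint {
    // Aspect fit scales by the smaller of the two axis ratios.
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // The scaled image is centered, leaving letterbox bars on one axis.
    let offsetX = (viewSize.width - imageSize.width * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2
    return CGPoint(x: p.x * scale + offsetX,
                   y: p.y * scale + offsetY)
}
```

For example, a 100×200 image in a 100×100 view scales by 0.5 and is centered horizontally, so image point (0, 0) lands at (25, 0) in the view.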


