SceneKit 3D Marker Augmented Reality iOS

For the past two weeks I have been working on a simple proof-of-concept app where a 3D model is projected onto a specific Augmented Reality marker (in my case I am using ArUco markers) on iOS (with Swift and Objective-C).

I calibrated the iPad camera with a specific fixed lens position and used that to estimate the pose of the AR marker (which, from my debug analysis, seems pretty accurate). The problem appears (surprise, surprise) when I try to use the SceneKit scene to project the model over the marker.
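To clarify what I mean by a fixed lens position: the focus is locked so that the intrinsics from the offline calibration stay valid at runtime. A minimal sketch of that (the function and the way it is called are illustrative, not my exact project code; the position value must match whatever was used during calibration):

    import AVFoundation

    func lockLensPosition(of device: AVCaptureDevice, at position: Float) {
        guard device.isLockingFocusWithCustomLensPositionSupported else { return }
        do {
            try device.lockForConfiguration()
            // Must match the lens position the camera had during calibration
            device.setFocusModeLocked(lensPosition: position, completionHandler: nil)
            device.unlockForConfiguration()
        } catch {
            print("Could not lock focus: \(error)")
        }
    }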

I know that the axes in OpenCV and SceneKit are different (Y and Z are inverted) and have already made that adjustment, as well as handled the row/column order difference between the two libraries.

After constructing the projection matrix, I apply the same transformation to the 3D model and, from my debug analysis, the object seems to be translated to the correct position with the desired rotation. The problem is that it never overlaps the marker's pixel position in the image. I am using an AVCaptureVideoPreviewLayer to put the video in the background, with the same bounds as my SceneKit view.

Does anyone know why this is happening? I tried playing with the camera's FOV, but that had no real impact on the results.

Thanks everyone for your time.

EDIT1: I'll post some of the code here to show what I am currently doing.

I have two subviews inside the main view: one is the background AVCaptureVideoPreviewLayer and the other is an SCNView. Both have the same bounds as the main view.
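The layering is set up roughly like this (a sketch with assumed names, not my exact project code; the SCNView is transparent so the video layer shows through):

    import AVFoundation
    import SceneKit
    import UIKit

    func setupLayers(in view: UIView, session: AVCaptureSession, scene: SCNScene) {
        // Video preview in the back, same bounds as the main view
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)

        // Transparent SceneKit view on top, same bounds as the main view
        let scnView = SCNView(frame: view.bounds)
        scnView.scene = scene
        scnView.backgroundColor = .clear
        view.addSubview(scnView)
    }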

On each frame, I use the OpenCV wrapper, which outputs the pose of each marker:

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners, rejected;

    cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);
    if (ids.size() > 0) {
        cv::aruco::drawDetectedMarkers(frame, corners, ids);
        cv::Mat rvecs, tvecs;
        cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);

        // Let's protect ourselves against multiple markers
        if (rvecs.total() > 1)
            return;
        _markerFound = true;

        cv::Rodrigues(rvecs, _currentR);
        _currentT = tvecs;

        // Copy the rotation and translation into the 4x4 extrinsics matrix
        for (int row = 0; row < _currentR.rows; row++) {
            for (int col = 0; col < _currentR.cols; col++) {
                _currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
            }
            _currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
        }
        _currentExtrinsics.at<double>(3, 3) = 1;
        std::cout << tvecs << std::endl;

        // Convert the coordinate system of OpenCV to OpenGL (SceneKit).
        // Note that in OpenCV z points away from the camera (in OpenGL it goes into the camera)
        // and y points down, while in OpenGL it points up.
        // Another note: OpenCV has a column-order matrix representation, while SceneKit
        // has a row-order matrix, but we'll take care of that later.
        cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
        cvToGl.at<double>(0, 0) = 1.0;
        cvToGl.at<double>(1, 1) = -1.0; // invert the y axis
        cvToGl.at<double>(2, 2) = -1.0; // invert the z axis
        cvToGl.at<double>(3, 3) = 1.0;
        _currentExtrinsics = cvToGl * _currentExtrinsics;
        cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
    }

Then on each frame I convert the OpenCV matrix to an SCNMatrix4:

    - (SCNMatrix4)transformToSceneKit:(cv::Mat &)openCVTransformation {
        SCNMatrix4 mat = SCNMatrix4Identity;
        // Transpose (OpenCV and SceneKit differ in row/column order)
        openCVTransformation = openCVTransformation.t();

        // Copy the rotation rows
        mat.m11 = (float)openCVTransformation.at<double>(0, 0);
        mat.m12 = (float)openCVTransformation.at<double>(0, 1);
        mat.m13 = (float)openCVTransformation.at<double>(0, 2);
        mat.m14 = (float)openCVTransformation.at<double>(0, 3);

        mat.m21 = (float)openCVTransformation.at<double>(1, 0);
        mat.m22 = (float)openCVTransformation.at<double>(1, 1);
        mat.m23 = (float)openCVTransformation.at<double>(1, 2);
        mat.m24 = (float)openCVTransformation.at<double>(1, 3);

        mat.m31 = (float)openCVTransformation.at<double>(2, 0);
        mat.m32 = (float)openCVTransformation.at<double>(2, 1);
        mat.m33 = (float)openCVTransformation.at<double>(2, 2);
        mat.m34 = (float)openCVTransformation.at<double>(2, 3);

        // Copy the translation row
        mat.m41 = (float)openCVTransformation.at<double>(3, 0);
        mat.m42 = (float)openCVTransformation.at<double>(3, 1) + 2.5;
        mat.m43 = (float)openCVTransformation.at<double>(3, 2);
        mat.m44 = (float)openCVTransformation.at<double>(3, 3);

        return mat;
    }

In every frame where an AR marker is found, I add a box to the scene and apply the transformation to the node object:

    SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
    _boxNode = [SCNNode nodeWithGeometry:box];
    if (found) {
        [self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
        Mat R, T;
        [self.delegate returnRotationMat:R];
        [self.delegate returnTranslationMat:T];
        SCNMatrix4 Transformation;
        Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
        //_cameraNode.transform = SCNMatrix4Invert(Transformation);
        [_sceneKitScene.rootNode addChildNode:_cameraNode];
        //_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
        //_cameraNode.camera.zNear = 0.0;
        _sceneKitView.pointOfView = _cameraNode;
        _boxNode.transform = Transformation;

        [_sceneKitScene.rootNode addChildNode:_boxNode];
        //_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);

        std::cout << _boxNode.position.x << " " << _boxNode.position.y << " " << _boxNode.position.z << std::endl << std::endl;
    }

For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), with the correct rotation (which is expected, since the cvToGl matrix flips the Y and Z components). The problem is that it never appears at the correct position over the background image. I will add some images to show the result.

Result1

Result2

Does anyone know why this is happening?


1 answer


Solution found. Instead of applying the transform to the node object, I applied the inverted transform matrix to the camera node. Then I set the following matrix as the camera's projection transform:

    var projection = SCNMatrix4Identity
    projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)
    projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)
    projection.m13 = (width - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)
    projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)
    projection.m23 = (-height + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5)
    projection.m33 = (-far - near) / (far - near)
    projection.m34 = (-2 * far * near) / (far - near)
    projection.m43 = -1
    projection.m44 = 0



where far and near are the z clipping planes.

I also had to correct the original position of the box to center it on the marker.
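Putting the two changes together, the per-frame update looks roughly like this (a sketch; the function and names like markerTransform and projection are illustrative stand-ins for the matrices computed above):

    import SceneKit

    func updateCamera(_ cameraNode: SCNNode, boxNode: SCNNode,
                      markerTransform: SCNMatrix4, projection: SCNMatrix4) {
        // Move the camera, not the model: the camera pose is the inverse
        // of the marker-to-camera extrinsics.
        cameraNode.transform = SCNMatrix4Invert(markerTransform)
        // Replace SceneKit's default perspective with the one derived from
        // the calibrated intrinsics so rendered pixels line up with the video.
        cameraNode.camera?.projectionTransform = projection
        // The box stays at the world origin, i.e. centered on the marker.
        boxNode.transform = SCNMatrix4Identity
    }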
