SceneKit: get texture coordinates after a touch in Swift

I want to manipulate 2D textures in a 3D SceneKit scene. So I used this code to get the local coordinates:

@IBAction func tap(sender: UITapGestureRecognizer) {
    var arr:NSArray = my3dView.hitTest(sender.locationInView(my3dView), options: NSDictionary(dictionary: [SCNHitTestFirstFoundOnlyKey:true]))
    var res:SCNHitTestResult = arr.firstObject as SCNHitTestResult

    var vect:SCNVector3 = res.localCoordinates
}

I have a texture read from the scene:

    var mat:SCNNode = myscene.rootNode.childNodes[0] as SCNNode
    var child:SCNNode = mat.childNodeWithName("ID12", recursively: false)
    var geo:SCNMaterial = child.geometry.firstMaterial
    var channel = geo.diffuse.mappingChannel        
    var textureimg:UIImage = geo.diffuse.contents as UIImage

      

Now I want to draw onto the texture at the touched point. How can I do this? How do I convert the touch coordinates to coordinates in the texture image?



1 answer


It sounds like you have two problems. (And that without even using regular expressions. :)

First, you need to get the texture coordinates of the tapped point, that is, the point in 2D texture space on the object's surface. You almost have it: SCNHitTestResult provides a textureCoordinatesWithMappingChannel method for exactly this. (You are currently using localCoordinates, which gives you a point in the 3D space owned by the node in the hit-test result.) And since you have already found the mapping-channel business, you know what to pass to that method.
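For instance, in your existing tap handler you could ask the hit-test result for texture coordinates instead of local coordinates. A minimal sketch, assuming res is the SCNHitTestResult you already have and channel is the mapping channel you read from geo.diffuse.mappingChannel:

// res is the SCNHitTestResult from your tap handler;
// channel is assumed to come from geo.diffuse.mappingChannel
let texcoord: CGPoint = res.textureCoordinatesWithMappingChannel(channel)
// texcoord.x and texcoord.y are in 0.0-1.0 texture space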

Problem #2 is how to draw.

You are on the right track getting the material's contents as a UIImage. From there, you could look at drawing with the UIGraphics and CGContext functions: create an image context with UIGraphicsBeginImageContext, draw the existing image into it, then draw whatever new content you want to add at the tapped point. Afterwards, grab the image you have drawn with UIGraphicsGetImageFromCurrentImageContext and set it as the new diffuse.contents of your material. That is probably not the best way, though: you are shuffling a lot of image data around on the CPU, and the code is a bit cumbersome, too.

A better approach might be to use the integration between SceneKit and SpriteKit. That way, all your 2D drawing happens in the same graphics context as the 3D rendering, and the code is a bit simpler, too.

You can set your material's diffuse.contents to a SpriteKit scene. (To keep using the UIImage you currently have for this texture, just put it on an SKSpriteNode that fills the scene.) Then, once you have the texture coordinates of a tap, you can simply add a sprite to the scene at that point.



var nodeToDrawOn: SCNNode!
var skScene: SKScene!

func mySetup() { // or viewDidLoad, or wherever you do setup
    // whatever else you're doing for setup, plus:

    // 1. remember which node we want to draw on
    nodeToDrawOn = myScene.rootNode.childNodeWithName("ID12", recursively: true)

    // 2. set up that node texture as a SpriteKit scene
    let currentImage = nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents as UIImage
    skScene = SKScene(size: currentImage.size)
    nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents = skScene

    // 3. put the currentImage into a background sprite for the skScene
    let background = SKSpriteNode(texture: SKTexture(image: currentImage))
    background.position = CGPoint(x: skScene.frame.midX, y: skScene.frame.midY)
    skScene.addChild(background)
}

@IBAction func tap(sender: UITapGestureRecognizer) {
    let results = my3dView.hitTest(sender.locationInView(my3dView), options: [SCNHitTestFirstFoundOnlyKey: true]) as [SCNHitTestResult]
    if let result = results.first {
        if result.node === nodeToDrawOn {
            // 1. get the texture coordinates
            let channel = nodeToDrawOn.geometry!.firstMaterial!.diffuse.mappingChannel
            let texcoord = result.textureCoordinatesWithMappingChannel(channel)

            // 2. place a sprite there
            let sprite = SKSpriteNode(color: SKColor.greenColor(), size: CGSize(width: 10, height: 10))
            // scale coords: texcoords go 0.0-1.0, skScene space is in pixels
            sprite.position.x = texcoord.x * skScene.size.width
            sprite.position.y = texcoord.y * skScene.size.height
            skScene.addChild(sprite)
        }
    }
}

      

For more on the SpriteKit approach (in Objective-C), see the SceneKit "State of the Union" demo from WWDC14. It shows a SpriteKit scene being used as the texture map on a torus, with balls of paint being thrown at it: wherever a ball hits the torus, it gets an SCNHitTestResult and uses its texture coordinates to create paint splatters in the SpriteKit scene.


Finally, some Swift style comments on your code (unrelated to the question and answer); a rewritten version of your tap handler that applies them is shown after the list:

  • Use let instead of var wherever you don't need to reassign a value; the optimizer will also make your code faster.
  • Explicit type annotations (res: SCNHitTestResult) are rarely necessary.
  • Swift dictionaries are bridged to NSDictionary, so you can pass them directly to any API that takes an NSDictionary.
  • Casting to a typed Swift array (hitTest(...) as [SCNHitTestResult]) saves you the trouble of casting the contents individually.
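For example, the tap handler from the question might look like this with those tips applied (a sketch only; it assumes the same my3dView outlet as above):

@IBAction func tap(sender: UITapGestureRecognizer) {
    // typed array cast + Swift dictionary passed straight to the API
    let results = my3dView.hitTest(sender.locationInView(my3dView),
        options: [SCNHitTestFirstFoundOnlyKey: true]) as [SCNHitTestResult]
    if let result = results.first {
        // prefer texture coordinates over localCoordinates for 2D drawing
        let channel = result.node.geometry!.firstMaterial!.diffuse.mappingChannel
        let texcoord = result.textureCoordinatesWithMappingChannel(channel)
        // ... draw at texcoord, as in the SpriteKit example above
    }
}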