SceneKit camera: how to compensate for changes in Euler angles (orientation)
This may be more of a math question, but I hope someone can help me figure out how to compensate for changes in camera orientation.
What I would like to do is move the camera around the scene using game-style controls. I have the W, S, A, and D keys configured to move the camera forward, backward, left, and right respectively. I do this by changing the values of the node's position vector. I also have the left/right and up/down arrow keys mapped to pan and tilt the camera. I do this by setting the camera node's Euler angles (pitch and yaw).
Everything works well until I rotate the camera using the right and left arrows. Pressing W then moves the camera in the direction it was facing before I rotated it. The behavior makes sense to me, but I can't figure out what I need to do to adjust my position vector to compensate for the yaw I applied.
Do I need to somehow multiply my position vector by a new transform? Or better yet, is there some other property, like a pivot, that I can change so that I can keep my position code the same?
Thanks for any pointers.
Update: here's the code I'm working with so far:
public func displayTimerDidFire(timer: MyCustomDisplayTimer!) {
    if timer === cameraMoveTimer {
        var cameraTransform = cameraNode.transform
        var x = CGFloat(0.0)
        var y = CGFloat(0.0)
        var z = CGFloat(0.0)
        let step: CGFloat = 10.0
        if moveUpKeyDown {
            y += step
        }
        if moveDownKeyDown {
            y -= step
        }
        if moveLeftKeyDown {
            x -= step
        }
        if moveRightKeyDown {
            x += step
        }
        if moveForwardKeyDown {
            z -= step
        }
        if moveBackwardKeyDown {
            z += step
        }
        cameraTransform = SCNMatrix4Translate(cameraTransform, x, y, z)
        cameraNode.transform = cameraTransform
    } else if timer === cameraTiltTimer {
        var angles = cameraNode.eulerAngles
        let stepAngle: CGFloat = CGFloat(M_PI_2 / 90.0)
        if turnLeftKeyDown {
            angles.y += stepAngle
        }
        if turnRightKeyDown {
            angles.y -= stepAngle
        }
        if tiltForwardKeyDown {
            angles.x -= stepAngle
        }
        if tiltBackwardKeyDown {
            angles.x += stepAngle
        }
        cameraNode.eulerAngles = angles
    }
}
I have two timers: one updates the camera's position and the other updates its orientation.
The part I'm stuck on is that I know my new transform (the old position plus the increment from WSAD) and I know the camera's rotation, but I don't know how to combine the two into my actual new transform. The transform is an SCNMatrix4 and the rotation is an SCNVector4.
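Put differently, the WSAD delta is expressed in the camera's local space, so it has to be rotated into world space before it is added to the position. A minimal sketch of the yaw-only case, in plain Swift with no SceneKit (the `Vec3` type here is just an illustrative stand-in for `SCNVector3`):

```swift
import Foundation

// Hypothetical stand-in for SCNVector3, for illustration only.
struct Vec3 {
    var x, y, z: Double
}

// Apply the standard y-axis rotation matrix to a vector:
//   x' =  x·cosθ + z·sinθ
//   z' = -x·sinθ + z·cosθ
func rotateAboutY(_ v: Vec3, byYaw angle: Double) -> Vec3 {
    let c = cos(angle), s = sin(angle)
    return Vec3(x: c * v.x + s * v.z,
                y: v.y,
                z: -s * v.x + c * v.z)
}

// Pressing W produces a local delta of (0, 0, -step). After yawing the
// camera 90° to the left, that same key press should move it along -x.
let localDelta = Vec3(x: 0, y: 0, z: -10)
let worldDelta = rotateAboutY(localDelta, byYaw: .pi / 2)
// worldDelta is approximately (-10, 0, 0)
```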
Update 2
Here's a revised version of the code. This time I switched to working with the transform directly. However, I still can't find the correct manipulation to account for the rotation.
public func displayTimerDidFire(timer: MyDisplayTimer!) {
    var cameraTransform = cameraNode.transform
    var x = CGFloat(0.0)
    var y = CGFloat(0.0)
    var z = CGFloat(0.0)
    let step: CGFloat = 10.0
    if moveUpKeyDown {
        y += step
    }
    if moveDownKeyDown {
        y -= step
    }
    if moveLeftKeyDown {
        x -= step
    }
    if moveRightKeyDown {
        x += step
    }
    if moveForwardKeyDown {
        z -= step
    }
    if moveBackwardKeyDown {
        z += step
    }
    cameraTransform = SCNMatrix4Translate(cameraTransform, x, y, z)
    // Do something with the transform to compensate for rotation..
    // ???
    cameraNode.transform = cameraTransform

    var angles = cameraNode.eulerAngles
    let stepAngle: CGFloat = CGFloat(M_PI_2 / 90.0)
    if turnLeftKeyDown {
        angles.y += stepAngle
    }
    if turnRightKeyDown {
        angles.y -= stepAngle
    }
    if tiltForwardKeyDown {
        angles.x -= stepAngle
    }
    if tiltBackwardKeyDown {
        angles.x += stepAngle
    }
    cameraNode.eulerAngles = angles
    // printNode(cameraNode)
}
Last update
Thanks for the suggestions. Here is the implementation that works for me
I need this objc helper as some of these features don't seem to be available in Swift:
@implementation MySceneKitUtils

+ (SCNVector3)position:(SCNVector3)position multipliedByRotation:(SCNVector4)rotation
{
    if (rotation.w == 0) {
        return position;
    }
    GLKVector3 gPosition = SCNVector3ToGLKVector3(position);
    GLKMatrix4 gRotation = GLKMatrix4MakeRotation(rotation.w, rotation.x, rotation.y, rotation.z);
    GLKVector3 r = GLKMatrix4MultiplyVector3(gRotation, gPosition);
    return SCNVector3FromGLKVector3(r);
}

@end
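For anyone who would rather stay in Swift, the same axis-angle rotation can be written out directly with Rodrigues' rotation formula. This is a sketch, not the GLKit call above; `Vector3` and `Vector4` are hypothetical stand-ins for `SCNVector3` and `SCNVector4`, with `x`, `y`, `z` as the rotation axis and `w` as the angle, matching `SCNNode.rotation`'s convention:

```swift
import Foundation

// Hypothetical value types standing in for SCNVector3 / SCNVector4.
struct Vector3 { var x, y, z: Double }
struct Vector4 { var x, y, z, w: Double }   // axis (x, y, z) + angle w

// Rotate `v` by an axis-angle rotation via Rodrigues' formula:
//   v' = v·cosθ + (k × v)·sinθ + k·(k · v)·(1 − cosθ)
// where k is the normalized rotation axis and θ the angle.
func rotate(_ v: Vector3, by rotation: Vector4) -> Vector3 {
    if rotation.w == 0 { return v }
    let len = sqrt(rotation.x * rotation.x + rotation.y * rotation.y + rotation.z * rotation.z)
    let kx = rotation.x / len, ky = rotation.y / len, kz = rotation.z / len
    let c = cos(rotation.w), s = sin(rotation.w)
    // Cross product k × v
    let cx = ky * v.z - kz * v.y
    let cy = kz * v.x - kx * v.z
    let cz = kx * v.y - ky * v.x
    // Dot product k · v
    let d = kx * v.x + ky * v.y + kz * v.z
    return Vector3(x: v.x * c + cx * s + kx * d * (1 - c),
                   y: v.y * c + cy * s + ky * d * (1 - c),
                   z: v.z * c + cz * s + kz * d * (1 - c))
}

// A 90° yaw about the y axis turns a forward step (0, 0, -10) into (-10, 0, 0).
let moved = rotate(Vector3(x: 0, y: 0, z: -10),
                   by: Vector4(x: 0, y: 1, z: 0, w: .pi / 2))
```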
And the implementation:
public func displayTimerDidFire(timer: MyDisplayTimer!) {
    var x = CGFloat(0.0)
    var y = CGFloat(0.0)
    var z = CGFloat(0.0)
    let step: CGFloat = 10.0
    if moveUpKeyDown {
        y += step
    }
    if moveDownKeyDown {
        y -= step
    }
    if moveLeftKeyDown {
        x -= step
    }
    if moveRightKeyDown {
        x += step
    }
    if moveForwardKeyDown {
        z -= step
    }
    if moveBackwardKeyDown {
        z += step
    }

    // The movement: rotate the local key delta by the camera's current
    // rotation before translating, so W always moves in the view direction.
    var cameraTransform = cameraNode.transform
    let rotation = cameraNode.rotation
    let rotatedPosition = MySceneKitUtils.position(SCNVector3Make(x, y, z), multipliedByRotation: rotation)
    cameraTransform = SCNMatrix4Translate(cameraTransform, rotatedPosition.x, rotatedPosition.y, rotatedPosition.z)
    cameraNode.transform = cameraTransform

    // The rotation
    cameraTransform = cameraNode.transform
    let stepAngle = CGFloat(0.05)
    if turnLeftKeyDown {
        cameraTransform = SCNMatrix4Rotate(cameraTransform, stepAngle, 0.0, 1.0, 0.0)
    }
    if turnRightKeyDown {
        cameraTransform = SCNMatrix4Rotate(cameraTransform, -stepAngle, 0.0, 1.0, 0.0)
    }
    if tiltForwardKeyDown {
        cameraTransform = SCNMatrix4Rotate(cameraTransform, stepAngle, 1.0, 0.0, 0.0)
    }
    if tiltBackwardKeyDown {
        cameraTransform = SCNMatrix4Rotate(cameraTransform, -stepAngle, 1.0, 0.0, 0.0)
    }
    cameraNode.transform = cameraTransform
}
This builds on the answer above. I used this approach for pinch-to-zoom scaling.
// Create the camera-forward unit vector in homogeneous coordinates
let unitVector = GLKVector4Make(0, 0, -1, 1)
// Convert the transform matrix to a GLKit matrix
let glkTransform = SCNMatrix4ToGLKMatrix4(cameraNode.transform)
// Multiply the unit vector by the transform matrix
let rotatedVector = GLKMatrix4MultiplyVector4(glkTransform, unitVector)
// Scale it by the translation you compute from the WSAD keys
let finalScaledVector = GLKVector4MultiplyScalar(rotatedVector, translation)
cameraNode.position.x = finalScaledVector.x
cameraNode.position.y = finalScaledVector.y
cameraNode.position.z = finalScaledVector.z
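One thing to watch: multiplying the full 4×4 transform by (0, 0, -1, 1) also picks up the camera's translation, because of the w = 1 homogeneous coordinate. If only the view direction is wanted, it can be read from the rotation part of the matrix alone. A plain-Swift sketch of that idea (`Matrix3` is an illustrative type, not a GLKit one), assuming column-major storage like `GLKMatrix4`'s upper-left 3×3 block:

```swift
import Foundation

// Illustrative 3×3 rotation matrix, stored column-major: m[column][row].
struct Matrix3 {
    var m: [[Double]]
}

// Build a rotation of `angle` radians about the y axis.
func yawMatrix(_ angle: Double) -> Matrix3 {
    let c = cos(angle), s = sin(angle)
    return Matrix3(m: [[c, 0, -s],   // column 0: rotated x axis
                       [0, 1, 0],    // column 1: rotated y axis
                       [s, 0, c]])   // column 2: rotated z axis
}

// The camera looks down its local -z axis, so camera-forward in world
// space is the rotation applied to (0, 0, -1): the negated third column.
func forward(of r: Matrix3) -> (x: Double, y: Double, z: Double) {
    return (-r.m[2][0], -r.m[2][1], -r.m[2][2])
}

// After a 90° left yaw, forward is approximately (-1, 0, 0).
let f = forward(of: yawMatrix(.pi / 2))
```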
Hope this helps.