How to correctly use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint

I'm developing some code on iOS that uses SceneKit. In my code, I want to determine the x and y coordinates on the global z plane, where z is 0.0 and x and y are determined from a tap gesture. My setup is as follows:

    override func viewDidLoad() {
        super.viewDidLoad()

        // create a new scene
        let scene = SCNScene()

        // create and add a camera to the scene
        let cameraNode = SCNNode()
        let camera = SCNCamera()
        cameraNode.camera = camera
        scene.rootNode.addChildNode(cameraNode)

        // place the camera
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        // create and add an ambient light to the scene
        let ambientLightNode = SCNNode()
        ambientLightNode.light = SCNLight()
        ambientLightNode.light.type = SCNLightTypeAmbient
        ambientLightNode.light.color = UIColor.darkGrayColor()
        scene.rootNode.addChildNode(ambientLightNode)

        let triangleNode = SCNNode()
        triangleNode.geometry = defineTriangle()
        scene.rootNode.addChildNode(triangleNode)

        // retrieve the SCNView
        let scnView = self.view as SCNView

        // set the scene to the view
        scnView.scene = scene

        // configure the view
        scnView.backgroundColor = UIColor.blackColor()

        // add a tap gesture recognizer
        let tapGesture = UITapGestureRecognizer(target: self, action: "handleTap:")
        let gestureRecognizers = NSMutableArray()
        gestureRecognizers.addObject(tapGesture)
        scnView.gestureRecognizers = gestureRecognizers
    }

    func handleTap(gestureRecognize: UIGestureRecognizer) {
        // retrieve the SCNView
        let scnView = self.view as SCNView

        // get the tap location in view coordinates
        let p = gestureRecognize.locationInView(scnView)

        // get the camera
        let camera = scnView.pointOfView.camera

        // screenZ is the percentage between z near and far
        let screenZ = Float((15.0 - camera.zNear) / (camera.zFar - camera.zNear))
        let scenePoint = scnView.unprojectPoint(SCNVector3Make(Float(p.x), Float(p.y), screenZ))
        println("tapPoint: (\(p.x), \(p.y)) scenePoint: (\(scenePoint.x), \(scenePoint.y), \(scenePoint.z))")
    }

    func defineTriangle() -> SCNGeometry {
        // Vertices
        let vertices: [SCNVector3] = [
            SCNVector3Make(-2.0, -2.0, 0.0),
            SCNVector3Make(2.0, -2.0, 0.0),
            SCNVector3Make(0.0, 2.0, 0.0)
        ]
        let vertexData = NSData(bytes: vertices, length: vertices.count * sizeof(SCNVector3))
        let vertexSource = SCNGeometrySource(data: vertexData,
            semantic: SCNGeometrySourceSemanticVertex,
            vectorCount: vertices.count,
            floatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: sizeof(Float),
            dataOffset: 0,
            dataStride: sizeof(SCNVector3))

        // Normals
        let normals: [SCNVector3] = [
            SCNVector3Make(0.0, 0.0, 1.0),
            SCNVector3Make(0.0, 0.0, 1.0),
            SCNVector3Make(0.0, 0.0, 1.0)
        ]
        let normalData = NSData(bytes: normals, length: normals.count * sizeof(SCNVector3))
        let normalSource = SCNGeometrySource(data: normalData,
            semantic: SCNGeometrySourceSemanticNormal,
            vectorCount: normals.count,
            floatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: sizeof(Float),
            dataOffset: 0,
            dataStride: sizeof(SCNVector3))

        // Indices
        let indices: [CInt] = [0, 1, 2]
        let indexData = NSData(bytes: indices, length: sizeof(CInt) * indices.count)
        let indexElement = SCNGeometryElement(
            data: indexData,
            primitiveType: .Triangles,
            primitiveCount: 1,
            bytesPerIndex: sizeof(CInt)
        )
        let geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])

        // material
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.redColor()
        material.doubleSided = true
        material.shininess = 1.0
        geo.materials = [material]
        return geo
    }

As you can see, I have a triangle four units tall and four units wide, centered at x, y (0.0, 0.0) on the z plane (z = 0). The camera is the default SCNCamera, which looks in the negative z direction, and I placed it at (0, 0, 15). The default values of zNear and zFar are 1.0 and 100.0 respectively. In my handleTap method, I take the x and y screen coordinates of the tap and try to find the x and y global scene coordinates where z = 0.0, using a call to unprojectPoint.

The documentation for unprojectPoint states:

Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.

While it does not specifically say that points in between relate linearly to the near and far planes, I have made that assumption and calculated screenZ as the percentage of the distance between the near and far planes at which the z = 0 plane sits. To check my answer, I can tap the corners of the triangle, since I know where those are in global coordinates.
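For concreteness, with the values in the code above (camera at z = 15, zNear = 1.0, zFar = 100.0), that linear assumption works out to:

    screenZ = (15.0 - 1.0) / (100.0 - 1.0)  // ≈ 0.141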

My problem is that I'm not getting the correct values, and I don't get consistent values once I start changing the zNear and zFar clipping planes on the camera. So my question is: how should I do this? In the end, I will create a new geometry and place it on the z plane at the position corresponding to where the user tapped.

Thanks in advance for your help.

The typical depth buffer in a 3D graphics pipeline is not linear. The perspective divide puts depth in normalized device coordinates on a different scale (see also here).

So the z coordinate you're feeding into unprojectPoint isn't actually the one you want.
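To make the nonlinearity concrete: under a GL-style perspective projection with depth remapped to the [0, 1] range (an assumption about the convention SceneKit used here, not documented API behavior), a point at eye-space distance d in front of the camera lands at normalized depth f(d - n) / (d(f - n)), which is hyperbolic in d rather than linear. A minimal Swift sketch, where windowDepth is a helper name of my own:

    // Sketch: normalized window-space depth under an assumed GL-style
    // perspective projection, with n = near plane, f = far plane, and
    // d = eye-space distance in front of the camera (d > n).
    func windowDepth(d: Double, n: Double, f: Double) -> Double {
        return f * (d - n) / (d * (f - n))
    }

    let actual = windowDepth(d: 15.0, n: 1.0, f: 100.0)  // ≈ 0.943
    let linear = (15.0 - 1.0) / (100.0 - 1.0)            // ≈ 0.141

Under this assumption, the z = 0 plane sits much closer to depth 1.0 than the linear estimate suggests, and the mismatch shifts as zNear and zFar change, which matches the inconsistent results described in the question.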

So how do you find the normalized-depth coordinate that matches a plane in world space? Well, it helps if that plane is orthogonal to the camera's view direction, which yours is. Then all you need to do is project a point that lies on that plane:

    let projectedOrigin = gameView.projectPoint(SCNVector3Zero)

Now you have the location of the world origin in view space plus normalized depth. To map other points in 2D view space onto this plane, use the z coordinate of this vector:

    let vp = gestureRecognizer.locationInView(scnView)
    let vpWithZ = SCNVector3(x: Float(vp.x), y: Float(vp.y), z: projectedOrigin.z)
    let worldPoint = gameView.unprojectPoint(vpWithZ)

This gets you a point in world space that maps the click/tap location onto the z = 0 plane, which is suitable for use as a node's position if you want to show that location to the user.
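For example, a minimal sketch of dropping a marker node at that point (the sphere geometry and its radius are arbitrary choices of mine):

    // Sketch: visualize the unprojected point with a small sphere node.
    let markerNode = SCNNode(geometry: SCNSphere(radius: 0.1))
    markerNode.position = worldPoint
    scnView.scene!.rootNode.addChildNode(markerNode)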

(Note that this approach only works when you're mapping onto a plane perpendicular to the camera's view direction. If you want to map view coordinates onto a differently oriented surface, the normalized-depth value in vpWithZ won't be constant.)
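For the general case of an arbitrarily oriented plane, one approach is to unproject the tap at depths 0 and 1, form a ray through the two results, and intersect that ray with the plane analytically. A sketch, assuming the plane is described by a point on it and a non-zero normal (the helper name and its parameters are mine, not SceneKit API):

    // Sketch: intersect the tap ray with an arbitrary plane.
    // Returns nil if the ray is parallel to the plane.
    func tapPointOnPlane(tap: CGPoint, view: SCNView,
                         planePoint: SCNVector3, planeNormal: SCNVector3) -> SCNVector3? {
        // Unproject the tap at the near (z = 0) and far (z = 1) depths to get a ray.
        let near = view.unprojectPoint(SCNVector3Make(Float(tap.x), Float(tap.y), 0.0))
        let far = view.unprojectPoint(SCNVector3Make(Float(tap.x), Float(tap.y), 1.0))
        let dir = SCNVector3Make(far.x - near.x, far.y - near.y, far.z - near.z)

        // Solve dot(normal, near + t * dir - planePoint) = 0 for t.
        let denom = planeNormal.x * dir.x + planeNormal.y * dir.y + planeNormal.z * dir.z
        if abs(denom) < 1e-6 { return nil } // ray is parallel to the plane
        let diff = SCNVector3Make(planePoint.x - near.x, planePoint.y - near.y, planePoint.z - near.z)
        let t = (planeNormal.x * diff.x + planeNormal.y * diff.y + planeNormal.z * diff.z) / denom
        return SCNVector3Make(near.x + t * dir.x, near.y + t * dir.y, near.z + t * dir.z)
    }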

After some experimentation, here is what we developed for translating a touch point to a scene point at some arbitrary depth in the scene.

The modification you need is to compute the intersection of the z = 0 plane with this line; that gives you your point (see the sketch after the function below).

    private func touchPointToScenePoint(recognizer: UIGestureRecognizer) -> SCNVector3 {
        // Get the touch point in view coordinates
        let touchPoint = recognizer.locationInView(sceneView)

        // Unproject the touch point at the near (z = 0) and far (z = 1) clipping planes
        let nearVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 0)
        let nearScenePoint = sceneView.unprojectPoint(nearVector)
        let farVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 1)
        let farScenePoint = sceneView.unprojectPoint(farVector)

        // Compute the view vector (the ray from the near point toward the far point)
        let viewVector = SCNVector3(x: farScenePoint.x - nearScenePoint.x,
                                    y: farScenePoint.y - nearScenePoint.y,
                                    z: farScenePoint.z - nearScenePoint.z)

        // Normalize the view vector
        let vectorLength = sqrt(viewVector.x * viewVector.x + viewVector.y * viewVector.y + viewVector.z * viewVector.z)
        let normalizedViewVector = SCNVector3(x: viewVector.x / vectorLength,
                                              y: viewVector.y / vectorLength,
                                              z: viewVector.z / vectorLength)

        // Walk <scale> units along the ray from the near plane to find the scene point
        let scale = Float(15)
        let scenePoint = SCNVector3(x: nearScenePoint.x + normalizedViewVector.x * scale,
                                    y: nearScenePoint.y + normalizedViewVector.y * scale,
                                    z: nearScenePoint.z + normalizedViewVector.z * scale)

        print("2D point: \(touchPoint). Near point: \(nearScenePoint). Far point: \(farScenePoint). Scene point: \(scenePoint)")

        return scenePoint
    }
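To apply the modification described above, one sketch is to replace the hard-coded scale in touchPointToScenePoint with the ray parameter at which z reaches 0:

    // Sketch: intersect the near-to-far ray with the z = 0 plane
    // instead of walking a fixed 15 units along the ray.
    // Assumes the ray is not parallel to the z = 0 plane.
    let t = -nearScenePoint.z / normalizedViewVector.z
    let scenePoint = SCNVector3(x: nearScenePoint.x + normalizedViewVector.x * t,
                                y: nearScenePoint.y + normalizedViewVector.y * t,
                                z: 0.0)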