Asked by pto*_*son · 17 votes · Tags: 3d, ios, scenekit, swift
I'm developing some code using SceneKit on iOS, and in my code I want to determine the x and y coordinates on the global z-plane, where z is 0.0 and x and y are determined from a tap gesture. My setup is as follows:
override func viewDidLoad() {
    super.viewDidLoad()
    // create a new scene
    let scene = SCNScene()

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    let camera = SCNCamera()
    cameraNode.camera = camera
    scene.rootNode.addChildNode(cameraNode)
    // place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // create and add an ambient light to the scene
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light.type = SCNLightTypeAmbient
    ambientLightNode.light.color = UIColor.darkGrayColor()
    scene.rootNode.addChildNode(ambientLightNode)

    let triangleNode = SCNNode()
    triangleNode.geometry = defineTriangle()
    scene.rootNode.addChildNode(triangleNode)

    // retrieve the SCNView
    let scnView = self.view as SCNView
    // set the scene to the view
    scnView.scene = scene
    // configure the view
    scnView.backgroundColor = UIColor.blackColor()

    // add a tap gesture recognizer
    let tapGesture = UITapGestureRecognizer(target: self, action: "handleTap:")
    let gestureRecognizers = NSMutableArray()
    gestureRecognizers.addObject(tapGesture)
    scnView.gestureRecognizers = gestureRecognizers
}
func handleTap(gestureRecognize: UIGestureRecognizer) {
    // retrieve the SCNView
    let scnView = self.view as SCNView
    // get the tap location
    let p = gestureRecognize.locationInView(scnView)
    // get the camera
    var camera = scnView.pointOfView.camera
    // screenZ is the percentage between zNear and zFar
    var screenZ = Float((15.0 - camera.zNear) / (camera.zFar - camera.zNear))
    var scenePoint = scnView.unprojectPoint(SCNVector3Make(Float(p.x), Float(p.y), screenZ))
    println("tapPoint: (\(p.x), \(p.y)) scenePoint: (\(scenePoint.x), \(scenePoint.y), \(scenePoint.z))")
}
func defineTriangle() -> SCNGeometry {
    // Vertices
    var vertices: [SCNVector3] = [
        SCNVector3Make(-2.0, -2.0, 0.0),
        SCNVector3Make(2.0, -2.0, 0.0),
        SCNVector3Make(0.0, 2.0, 0.0)
    ]
    let vertexData = NSData(bytes: vertices, length: vertices.count * sizeof(SCNVector3))
    var vertexSource = SCNGeometrySource(data: vertexData,
        semantic: SCNGeometrySourceSemanticVertex,
        vectorCount: vertices.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Normals
    var normals: [SCNVector3] = [
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0)
    ]
    let normalData = NSData(bytes: normals, length: normals.count * sizeof(SCNVector3))
    var normalSource = SCNGeometrySource(data: normalData,
        semantic: SCNGeometrySourceSemanticNormal,
        vectorCount: normals.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Indices
    var indices: [CInt] = [0, 1, 2]
    var indexData = NSData(bytes: indices, length: sizeof(CInt) * indices.count)
    var indexElement = SCNGeometryElement(
        data: indexData,
        primitiveType: .Triangles,
        primitiveCount: 1,
        bytesPerIndex: sizeof(CInt)
    )
    var geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])

    // material
    var material = SCNMaterial()
    material.diffuse.contents = UIColor.redColor()
    material.doubleSided = true
    material.shininess = 1.0
    geo.materials = [material]
    return geo
}
As you can see, I have a triangle 4 units tall and 4 units wide, sitting on the z-plane (z = 0) centered at x, y (0.0, 0.0). The camera is the default SCNCamera, which looks in the negative z direction, and I've placed it at (0, 0, 15). The default values of zNear and zFar are 1.0 and 100.0, respectively. In my handleTap method, I take the x and y screen coordinates of the tap and try to find the x and y global scene coordinates where z = 0.0, using a call to unprojectPoint.
The documentation for unprojectPoint states:
Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.
Although it doesn't specifically say there's a linear relationship between the near and far planes for points in between, I've made that assumption and computed the value of screenZ as the percentage of the distance between the near and far planes at which the z = 0 plane sits. To check my answer, I can tap near the corners of the triangle, because I know where they are in global coordinates.
My problem is that I'm not getting the correct values, and I don't get consistent values when I start changing the zNear and zFar clipping planes on the camera. So my question is, how can I do this? In the end, I'll create a new geometry and place it on the z-plane at the location corresponding to where the user tapped.
Thanks in advance for your help.
Answered by ric*_*ter · 30 votes
Typical depth buffers in a 3D graphics pipeline are not linear. The perspective divide causes depth in normalized device coordinates to be on a different scale. (See also here.)
So the z coordinate you're feeding into unprojectPoint isn't actually the one you want.
So, how do you find the normalized depth coordinate that matches a plane in world space? Well, it helps if that plane is orthogonal to the camera's view direction, which yours is. Then all you need to do is project a point on that plane:
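To make the nonlinearity concrete, here is a small sketch (not from the answer; it plugs the question's own numbers into the standard OpenGL-style projection of the z coordinate) showing how far the true normalized depth of a point 15 units in front of the camera is from a linear interpolation between zNear = 1 and zFar = 100:

```swift
// Demonstrates that normalized depth is not a linear function of eye-space
// distance. Uses the question's setup: zNear = 1, zFar = 100, and a point
// 15 units in front of the camera. Plain math, no SceneKit required.

let n = 1.0, f = 100.0
let zEye = -15.0  // eye space looks down -z, so a point 15 units ahead is at -15

// OpenGL-style perspective projection of the z coordinate:
// z_ndc = (f + n)/(f - n) + 2*f*n / ((f - n) * z_eye), in [-1, 1]
let zNDC = (f + n) / (f - n) + 2 * f * n / ((f - n) * zEye)

// Remap from NDC [-1, 1] to the [0, 1] depth range used by unprojectPoint
let depth = (zNDC + 1) / 2

// The question's linear assumption:
let linearGuess = (15.0 - n) / (f - n)

print(depth)        // ≈ 0.943 -- most of the depth range is spent near the camera
print(linearGuess)  // ≈ 0.141 -- nowhere close
```

This is why tapping near the triangle's corners never produced the expected coordinates: the depth value passed to unprojectPoint was off by most of the depth range.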
let projectedOrigin = gameView.projectPoint(SCNVector3Zero)
Now you have the location of the world origin in 3D view + normalized-depth space. To map other points from 2D view space onto this plane, use the z coordinate from this vector:
let vp = gestureRecognizer.locationInView(scnView)
let vpWithZ = SCNVector3(x: vp.x, y: vp.y, z: projectedOrigin.z)
let worldPoint = gameView.unprojectPoint(vpWithZ)
This gets you a point in world space that maps the tap/click location onto the z = 0 plane, suitable for use as the position of a node if you want to show that location to the user.
(Note that this approach only works when mapping onto a plane perpendicular to the camera's view direction. If you want to map view coordinates onto a differently oriented surface, the normalized depth value in vpWithZ won't be constant.)
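To see why the projectPoint/unprojectPoint round trip works, here is a minimal sketch in plain Swift (no SceneKit) that reimplements a project/unproject pair for a camera at (0, 0, 15) with the question's clip planes; the 60° vertical field of view and square viewport are assumptions for illustration, as are the helper names. Projecting the world origin yields the depth of the z = 0 plane, and unprojecting a point at that depth lands back on z = 0:

```swift
import Foundation

// Hand-rolled perspective project/unproject mirroring what the answer relies
// on. Camera sits at (0, 0, 15) looking down -z; zNear = 1, zFar = 100 as in
// the question. The 60-degree FOV and square viewport are assumptions.
let n = 1.0, f = 100.0
let cameraZ = 15.0
let fovY = 60.0 * Double.pi / 180
let t = tan(fovY / 2)

// World point -> normalized coordinates (x, y in [-1, 1], depth in [0, 1])
func project(_ p: (x: Double, y: Double, z: Double)) -> (x: Double, y: Double, z: Double) {
    let zEye = p.z - cameraZ             // view transform: translate by -cameraZ
    let xNDC = p.x / (t * -zEye)         // perspective divide by -zEye
    let yNDC = p.y / (t * -zEye)
    let zNDC = (f + n) / (f - n) + 2 * f * n / ((f - n) * zEye)
    return (xNDC, yNDC, (zNDC + 1) / 2)  // remap depth to [0, 1]
}

// Normalized coordinates + depth -> world point (exact inverse of the above)
func unproject(_ p: (x: Double, y: Double, z: Double)) -> (x: Double, y: Double, z: Double) {
    let zNDC = p.z * 2 - 1
    let zEye = 2 * f * n / ((f - n) * (zNDC - (f + n) / (f - n)))
    return (p.x * t * -zEye, p.y * t * -zEye, zEye + cameraZ)
}

// Step 1: project the world origin to learn the depth of the z = 0 plane
let projectedOrigin = project((0, 0, 0))

// Step 2: unproject an arbitrary tap position using that depth
let tap = (x: 0.37, y: -0.58, z: projectedOrigin.z)
let worldPoint = unproject(tap)
print(worldPoint.z)  // ≈ 0.0 -- the result lies on the z = 0 plane
```

Every point unprojected with projectedOrigin.z recovers world z = 0 exactly, which is why the trick is independent of the zNear/zFar values that broke the linear-interpolation approach.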
Archived · Viewed 8293 times