I'm new to iOS programming. I want to build an app that lets the user tap a specific object shown on the screen. I'm using addGestureRecognizer to detect whether the user has tapped the displayed object, and when they have, I just want to add another object to the screen.
Here is what I have done so far:
objpizza = make2dNode(image:#imageLiteral(resourceName: "pizza"),width: 0.07,height: 0.07)
objpizza.position = SCNVector3(0,0,-0.2)
objpizza.name = "none"
self.arView.addGestureRecognizer(UIGestureRecognizer(target: self, action: #selector(selectObject)))
arView.scene.rootNode.addChildNode(objpizza)
Here is the make2dNode function, which just sizes the object:
func make2dNode(image: UIImage, width: CGFloat = 0.1, height: CGFloat = 0.1) -> SCNNode {
    let plane = SCNPlane(width: width, height: height)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}
And this is the function that is never called when I use self.arView.addGestureRecognizer(UIGestureRecognizer(target: self, action: #selector(selectObject))):
@objc func selectObject() {
    print("Image has been selected")
}
Detecting a touch on an SCNNode takes a little more than simply adding a UITapGestureRecognizer to your view.
In order to detect which SCNNode you have touched, you need to use an SCNHitTest (in conjunction with your gesture recognizer), which is:
"The process of finding elements of a scene located at a specified point, or along a specified line segment (or ray)."
An SCNHitTest looks for:
"SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface."
OK, you may be thinking, but how does this actually work in practice?
Well, let's begin by creating an SCNNode with an SCNSphere geometry and adding it to the scene:
//1. Create An SCNNode With An SCNSphere Geometry
let nodeOneGeometry = SCNSphere(radius: 0.2)
//2. Set It's Colour To Cyan
nodeOneGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
//3. Assign The Geometry To The Node
nodeOne = SCNNode(geometry: nodeOneGeometry)
//4. Assign A Name For Our Node
nodeOne.name = "Node One"
//5. Position It & Add It To Our ARSCNView
nodeOne.position = SCNVector3(0, 0, -1.5)
augmentedRealityView.scene.rootNode.addChildNode(nodeOne)
You will notice here that I have assigned a name to the SCNNode, which makes it much easier to keep track of (e.g. to identify it in a hitTest).
Now that we have added our SCNNode to the hierarchy, let's create a UITapGestureRecognizer, like so:
//1. Create A UITapGestureRecognizer & Add It To Our MainView
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(checkNodeHit(_:)))
tapGesture.numberOfTapsRequired = 1
self.view.addGestureRecognizer(tapGesture)
Now that everything is set up, we need to create our checkNodeHit function to detect which node the user has tapped:
/// Runs An SCNHitTest To Check If An SCNNode Has Been Hit
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func checkNodeHit(_ gesture: UITapGestureRecognizer){
    //1. Get The Current Touch Location In The View
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. Perform An SCNHitTest To Determine If We Have Hit An SCNNode
    guard let hitTestNode = self.augmentedRealityView.hitTest(currentTouchLocation, options: nil).first?.node else { return }
    if hitTestNode.name == "Node One"{
        print("The User Has Successfully Tapped On \(hitTestNode.name!)")
    }
}
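Once you have the node, you can of course do more than just print. As a small sketch (the colour choice is arbitrary), the if block of checkNodeHit above could highlight whichever node was hit:

if hitTestNode.name == "Node One" {
    // Sketch only: visually confirm the hit by recolouring the tapped node.
    hitTestNode.geometry?.firstMaterial?.diffuse.contents = UIColor.orange
}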
Now, if you wanted to place an SCNNode at the location the user taps, you would need to use an ARSCNHitTest instead, which provides:
"Information about a real-world surface found by examining a point in the device camera view of an AR session."
Having done this, we can then use the worldTransform property of the result to place virtual content at that location.
For your reference, the worldTransform is:
"The position and orientation of the hit test result relative to the world coordinate system."
Positioning in ARKit therefore comes down to this transform: it is a 4×4 matrix, and its fourth column holds the position.
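In code, the worldTransform is a simd_float4x4. Purely as a sketch to make the "fourth column" point concrete (the helper name is mine, not part of the answer):

import SceneKit
import simd

/// Sketch: pull the position out of a worldTransform.
/// The matrix is column-major, and columns.3 holds the translation (x, y, z, 1).
func position(from transform: simd_float4x4) -> SCNVector3 {
    let translation = transform.columns.3
    return SCNVector3(translation.x, translation.y, translation.z)
}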
Again, let's look at how we can use this to place a virtual object:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    //1. Get The Current Touch Location In Our ARSCNView & Perform An ARSCNHitTest For Any Viable Feature Points
    guard let currentTouchLocation = touches.first?.location(in: self.augmentedRealityView),
          let hitTest = self.augmentedRealityView.hitTest(currentTouchLocation, types: .featurePoint).first else { return }
    //2. Get The World Transform From The HitTest & Get The Positional Data From The Matrix (3rd Column)
    let worldPositionFromTouch = hitTest.worldTransform.columns.3
    //3. Create An SCNNode At The Touch Location
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    boxGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    boxNode.geometry = boxGeometry
    boxNode.position = SCNVector3(worldPositionFromTouch.x, worldPositionFromTouch.y, worldPositionFromTouch.z)
    //4. Add It To The Scene Hierarchy
    self.augmentedRealityView.scene.rootNode.addChildNode(boxNode)
}
Hope this helps...
You need to implement:
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
arView.addGestureRecognizer(tapGesture)
and then a function that identifies that the user has tapped:
@objc func didTap(_ gesture: UITapGestureRecognizer) {
    object2 = make2dNode(image: nameOfimage, width: 0.07, height: 0.07)
    object2.position = SCNVector3(0, 0, 0.2)
    arView.scene.rootNode.addChildNode(object2)
}
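One thing to note: as written, didTap adds object2 no matter where in the view the tap lands. If, as in the question, you only want to add it when the pizza object itself is tapped, you can combine this with the SCNHitTest from the answer above. A rough sketch, reusing the question's arView, make2dNode, object2 and nameOfimage, and assuming objpizza.name has been set to something like "pizza" instead of "none":

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    // Sketch only: assumes objpizza.name == "pizza" and the question's arView / make2dNode.
    let touchLocation = gesture.location(in: arView)
    // Bail out unless the tap actually landed on the pizza node.
    guard let hitNode = arView.hitTest(touchLocation, options: nil).first?.node,
          hitNode.name == "pizza" else { return }
    // Only then add the second object, exactly as in the answer above.
    object2 = make2dNode(image: nameOfimage, width: 0.07, height: 0.07)
    object2.position = SCNVector3(0, 0, 0.2)
    arView.scene.rootNode.addChildNode(object2)
}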
Enjoy coding, bro.