oma*_*ojo · 11 · tags: xcode, ios, scenekit, arkit
I've been reading tons of StackOverflow answers about how to move an object by dragging it across the screen. Some use a hit test against .featurePoints, some use the gesture translation, or just track the lastPosition of the object. But honestly... none of them work the way everyone expects them to.

Hit testing against .featurePoints just makes the object jump around, because you don't always hit a feature point while dragging your finger. I don't understand why everyone keeps suggesting this.
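For context, the widely suggested approach being criticized here usually looks roughly like this sketch (the names `sceneView`, `draggedNode`, and `handlePan` are illustrative, not taken from any particular answer; this only runs inside an iOS view controller):

```swift
// Sketch of the commonly suggested - and jumpy - featurePoint drag, for illustration only.
// Assumes `sceneView: ARSCNView` and `draggedNode: SCNNode?` exist on the view controller.
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let location = gesture.location(in: sceneView)

    // Hit test against the sparse feature-point cloud: often there is NO point
    // under the finger at all, and consecutive hits can lie at very different depths...
    let results = sceneView.hitTest(location, types: .featurePoint)
    guard let result = results.first else { return } // ...so the node simply stays put,

    // ...or snaps to the next feature point's depth - the jumping described above.
    let t = result.worldTransform.columns.3
    draggedNode?.position = SCNVector3(t.x, t.y, t.z)
}
```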
Solutions like this one: Dragging SCNNode in ARKit Using SceneKit

But the object doesn't really follow your finger, and the moment you walk a few steps or change the angle of the object or the camera... and then try to move the object... x and z are inverted... which, given how the translation is applied, makes total sense.

I really want to move objects the way the Apple demo does, but when I look at Apple's code... it is so strangely over-complicated that I can't even understand it. Their technique for moving objects so beautifully isn't even close to anything everyone proposes online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality

There has to be a simpler way to do this.
Short answer:
To get that nice and fluid dragging as in the Apple demo project, you'll have to do it as it is done in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code may be confusing if you look at it for the first time. Calculating the correct movement for an object placed on the floor - correct from every position and every viewing angle - is not easy at all. It's a complex construct of code that implements this superb dragging effect. Apple did a great job achieving it, but didn't make it too easy for us.
Full answer:
Stripping down the AR Interaction template for your needs results in a nightmare - but it should work too, if you invest enough time. If you prefer to start from scratch, you can basically begin with the common Swift ARKit/SceneKit Xcode template (the one containing the spaceship).
You'll also need the entire AR Interaction template project from Apple. (The link is included in the SO question.)
At the end you should be able to drag something called a VirtualObject, which is in fact a special SCNNode. In addition, you'll have a nice focus square that can be useful for any purpose - like initially placing an object, or adding a floor or a wall. (Some of the code for the dragging effect and for the focus square usage is kind of merged or linked together - doing it without the focus square would actually be more complicated.)
Getting started:
Copy the following files from the AR Interaction template into your empty project:
Add UIGestureRecognizerDelegate to the ViewController class definition, like this:

```swift
class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {
```

Add this code to the definitions section of ViewController.swift, before viewDidLoad:
```swift
// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled: Bool = true


// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently interacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
```

In viewDidLoad, before you set up the scene, add this code:
```swift
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self

// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
```

At the very end of ViewController.swift, add this code:
```swift
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }

    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)

        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))

        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)

        gesture.setTranslation(.zero, in: sceneView)

    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
        break

    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough

    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}

// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }

    // Create a new anchor with the object's current transform and add it to the session.
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}


private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)

        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }

    // As a last resort look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    return virtualObject(at: (gesture.view?.center)!)
}


/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {

    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    // let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])

    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
    }.first
}

/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}

/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
        let result = smartHitTest(screenPos,
                                  infinitePlane: infinitePlane,
                                  objectPosition: object.simdWorldPosition,
                                  allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }

    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }

    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
```

Add some focus square code:
```swift
// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }

    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
        let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}
```

And add some control functions:
```swift
func hideFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } }  // to hide the focus square
func showFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square
```

Copy the entire function smartHitTest from VirtualObjectARView.swift into ViewController.swift (so it exists twice):
```swift
func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {

    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
        let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {

        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)

        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}
```

You'll probably see some errors related to hitTest in the copied function. Just correct them like this:
```swift
hitTest...           // which gives an error
sceneView.hitTest... // this should correct it
```

Implement the renderer updateAtTime function and add these lines:
```swift
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}
```

At this point you may still see about a dozen errors and warnings in the imported files; this can happen when you do this in Swift 5 while some of the files are still Swift 4. Just let Xcode correct the errors. (It's all about renaming some code statements; Xcode knows best.)
Go into VirtualObject.swift and search for this code block:
```swift
if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)

    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))

    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
```

Comment out or replace the entire block with this single line of code:
```swift
simdPosition = cameraWorldPosition + positionOffsetFromCamera
```

At this point you should be able to compile the project and run it on a device. You should see the spaceship and a yellow focus square that should already work.
To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.

Use this example function to test (add it somewhere in your view controller):
```swift
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {

    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
        let testObject = VirtualObject() // give it some name, when you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position

        sceneView.scene.rootNode.addChildNode(testObject)
    }
}
```

Note: everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else about a VirtualObject is the same as an SCNNode.
(You can also add some common SCNNode extensions, like one that loads a scene by its name - useful when referencing imported models.)

Have fun!
A bit late, but I know I had some trouble solving this as well. Eventually, I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.

First, I perform a hit test against my 3D object to detect whether I'm currently pressing an object (you would get results for pressing featurePoints, planes, etc. if you don't specify any options). I do this by using the .categoryBitMask value of SCNHitTestOption.

Keep in mind that you have to assign the correct .categoryBitMask value to your object node and all its child nodes beforehand for the hit test to work. I declare an enum that I can use for that:
```swift
enum BodyType: Int {
    case ObjectModel = 2
}
```
As should be apparent from the answers to the question about .categoryBitMask values that I posted here, it is important to consider which values you assign to your bit masks.
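The reason the values matter is that each category should occupy its own bit, so that several categories can be combined in one mask. A sketch of how that might look if this answer's enum were extended (only ObjectModel is actually used here; the other cases are hypothetical):

```swift
// Each category gets its own bit, so masks can be OR-ed together.
// Only ObjectModel (binary 10) appears in this answer; the rest are hypothetical examples.
enum BodyType: Int {
    case ObjectModel = 0b0010  // 2 - same value as the enum above
    case FloorPlane  = 0b0100  // 4 (hypothetical)
    case WallPlane   = 0b1000  // 8 (hypothetical)
}

// A single hit test could then match objects AND floor planes at once:
let combinedMask = BodyType.ObjectModel.rawValue | BodyType.FloorPlane.rawValue  // 0b0110 = 6
```

Had the cases been assigned 1, 2, 3, ... instead, value 3 would collide with the combination of 1 and 2, which is why powers of two are used.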
Below is the code I use in combination with a UILongPressGestureRecognizer to select the object I'm currently pressing:
```swift
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)

let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }
```
After that, I perform a second hit test to find the plane I'm pressing on. You can use the type .existingPlaneUsingExtent if you don't want to move your object beyond the edges of a plane, or .existingPlane if you want to move your object indefinitely along the detected plane surface.
```swift
var planeHit: ARHitTestResult!

if recognizer.state == .changed {
    let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
    guard hitTestPlane.first != nil else { return }
    planeHit = hitTestPlane.first!
    // Keep the node's own y so the object slides along the plane instead of snapping into it.
    modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x,
                                       modelNodeHit.position.y,
                                       planeHit.worldTransform.columns.3.z)
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
    modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x,
                                       modelNodeHit.position.y,
                                       planeHit.worldTransform.columns.3.z)
}
```
I came up with this while I was also experimenting with ARAnchors. If you'd like to see my approach in practice, you can check it out, but I didn't make it with the intention of other people using it, so it's quite unfinished. Additionally, the develop branch should support some functionality for objects with more child nodes.

EDIT:
To clarify: if you want to use a .scn object instead of a regular geometry, you need to iterate through all the child nodes of the object when creating it, setting the bit mask of each child, like this:
```swift
let objectModelScene = SCNScene(named: "art.scnassets/object/object.scn")!
let objectNode = objectModelScene.rootNode.childNode(withName: "theNameOfTheParentNodeOfTheObject",
                                                     recursively: true)!
objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
objectNode.enumerateChildNodes { (node, _) in
    node.categoryBitMask = BodyType.ObjectModel.rawValue
}
```
Then, once you've got the hitTestResult in your gesture recognizer:

```swift
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
```
you need to find the parent node, since otherwise you might move only the individual child node you just pressed. Do this by searching recursively upward through the node tree of the node you just found:

```swift
guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }
```
where you can declare the getParentNodeOf method like this:

```swift
func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? {
    if let node = nodeFound {
        if node.name == "theNameOfTheParentNodeOfTheObject" {
            return node
        } else if let parent = node.parent {
            return getParentNodeOf(parent)
        }
    }
    return nil
}
```
Then you are free to perform any operation on objectNode, since it will be the parent node of your .scn object, meaning any transform applied to it will also be applied to its child nodes.