I'm trying to overlay the ARFaceAnchor vertices on screen for two scenarios: 1) a virtual face that stays centered on screen but reflects the changes in the vertex geometry, and 2) a virtual face that overlaps the actual face (from the preview layer).
I followed Rickster's suggestion here, but I have only managed to project the face onto the screen at certain angles (it shows up only in the bottom left and rotated). I'm not very familiar with the purpose of each matrix, but this is what I have so far. Any suggestions?
let modelMatrix = faceAnchor.transform
var points: [CGPoint] = []
faceAnchor.geometry.vertices.forEach {
    // Convert the vertex position from model space to camera space (use the anchor's transform)
    let vertex4 = vector_float4($0.x, $0.y, $0.z, 1)
    let vertexCamera = simd_mul(modelMatrix, vertex4)
    // Multiply the camera projection with that vector to get normalized image coordinates
    let normalizedImageCoordinates = simd_mul(projectionMatrix, vertexCamera)
    let point = CGPoint(x: CGFloat(normalizedImageCoordinates.x),
                        y: CGFloat(normalizedImageCoordinates.y))
    points.append(point)
}
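For reference, the anchor's transform only takes the vertices from the face's model space into world space; getting true normalized coordinates also needs the camera's view matrix (world to camera space), the projection matrix (camera to clip space), and a final perspective divide by w. A minimal sketch of that full chain, where `camera` and `viewportSize` are hypothetical placeholders rather than names from the snippet above:

```swift
import ARKit
import UIKit
import simd

// Sketch: project one face-geometry vertex (anchor/model space) all the way
// to normalized device coordinates. `camera` and `viewportSize` are assumed inputs.
func projectedNDC(for vertex: vector_float3,
                  faceAnchor: ARFaceAnchor,
                  camera: ARCamera,
                  viewportSize: CGSize) -> CGPoint {
    // Model (anchor) space -> world space
    let world = simd_mul(faceAnchor.transform, vector_float4(vertex.x, vertex.y, vertex.z, 1))
    // World space -> camera (eye) space
    let eye = simd_mul(camera.viewMatrix(for: .portrait), world)
    // Camera space -> clip space
    let projection = camera.projectionMatrix(for: .portrait,
                                             viewportSize: viewportSize,
                                             zNear: 0.001, zFar: 1000)
    let clip = simd_mul(projection, eye)
    // Perspective divide -> normalized device coordinates in -1...1
    return CGPoint(x: CGFloat(clip.x / clip.w), y: CGFloat(clip.y / clip.w))
}
```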
For those interested, here is the solution for (2). You can also normalize the points to keep the face centered, which covers (1).
let faceAnchors = anchors.compactMap { $0 as? ARFaceAnchor }
guard !faceAnchors.isEmpty,
      let camera = session.currentFrame?.camera else { return }
let targetView = SomeUIView() // placeholder for the view being projected into

// Calculate face points to project to screen
// A transform matrix appropriate for rendering 3D content to match the image captured by the camera
let projectionMatrix = camera.projectionMatrix(for: .portrait,
                                               viewportSize: targetView.bounds.size,
                                               zNear: 0.001,
                                               zFar: 1000)
// A transform matrix for converting from world space to camera space
let viewMatrix = camera.viewMatrix(for: .portrait)
let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)

for faceAnchor in faceAnchors {
    // Describes the face's current position and orientation in world coordinates, i.e. in the
    // coordinate space specified by the session configuration's worldAlignment property.
    // Use this transform matrix to position virtual content you want to "attach" to the face.
    let modelMatrix = faceAnchor.transform
    let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)

    // Calculate points
    let points: [CGPoint] = faceAnchor.geometry.vertices.map { vertex in
        let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
        let normalizedImageCoordinates = simd_mul(mvpMatrix, vertex4)
        return CGPoint(x: CGFloat(normalizedImageCoordinates.x),
                       y: CGFloat(normalizedImageCoordinates.y))
    }
}
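The multiplication above stops at clip space, so before drawing, the x/y values still need a perspective divide and a mapping from NDC (-1...1, y up) into the target view's coordinate space (points, y down). One way to finish that step, as a sketch; the helper name and the y-flip for UIKit's coordinate system are assumptions, not part of the original post:

```swift
import UIKit
import simd

// Hypothetical helper: maps a clip-space position (mvpMatrix * vertex)
// to a point in `view`'s coordinate space, assuming the NDC axes map
// directly onto the view.
func viewPoint(fromClipSpace clip: vector_float4, in view: UIView) -> CGPoint {
    let ndcX = CGFloat(clip.x / clip.w)   // -1...1, left to right
    let ndcY = CGFloat(clip.y / clip.w)   // -1...1, bottom to top
    return CGPoint(x: (ndcX + 1) / 2 * view.bounds.width,
                   y: (1 - ndcY) / 2 * view.bounds.height) // flip y for UIKit
}
```

For scenario (1), one option is to subtract the centroid of the projected `points` and re-anchor them to the view's center, so the face stays centered while the vertices still deform.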