I'm using an iPhone X and ARFaceKit to capture the user's face. The goal is to texture the face mesh with an image of the user.

I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a jpeg representation of the current frame's capturedImage.

I then want to find the texture coordinates that map the created jpeg onto the mesh vertices. I want to: 1. map the vertices from model space to world space; 2. map the vertices from world space to camera space; 3. divide by the image dimensions to get pixel coordinates for the texture.
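The three steps above amount to a standard pinhole projection. Here is a minimal, language-agnostic sketch of that math in Python (this is not the ARKit API; the 4×4 anchor transform, view matrix, and the intrinsics fx, fy, cx, cy are made-up stand-ins for ARFaceAnchor.transform, ARCamera.viewMatrix(for:), and ARCamera.intrinsics):

```python
# Pinhole projection sketch: model space -> world -> camera -> pixel -> texture coords.
# Matrices are plain 4x4 nested lists; all numeric values below are illustrative only.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

IDENTITY = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def project_vertex(vertex, anchor_transform, view_matrix,
                   fx, fy, cx, cy, width, height):
    x, y, z = vertex
    world = mat_vec(anchor_transform, [x, y, z, 1.0])  # step 1: model -> world
    cam = mat_vec(view_matrix, world)                  # step 2: world -> camera
    # The camera looks down -Z, so perform the perspective divide by -cam[2],
    # then apply the intrinsics to get pixel coordinates.
    u = fx * (cam[0] / -cam[2]) + cx
    v = fy * (cam[1] / -cam[2]) + cy
    return (u / width, v / height)                     # step 3: pixels -> [0, 1] UVs

# A point 1 m straight ahead of an identity camera projects to the principal point.
uv = project_vertex((0.0, 0.0, -1.0), IDENTITY, IDENTITY,
                    fx=1000.0, fy=1000.0, cx=960.0, cy=540.0,
                    width=1920.0, height=1080.0)
print(uv)  # -> (0.5, 0.5)
```

On device, ARCamera.projectPoint(_:orientation:viewportSize:) performs steps 2 and 3 in a single call, so the manual intrinsics math is only needed if you want full control over the projection.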
let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
let theCamera = session.currentFrame?.camera
let theFaceAnchor: SCNNode = contentUpdater.faceNode
let anchorTransform = float4x4(theFaceAnchor.transform)

for index in 0..<totalVertices {
    let vertex = geometry.vertices[index]

    // Step 1: model space to world space, using the anchor's transform
    let vertex4 = float4(vertex.x, vertex.y, vertex.z, 1.0)
    let worldSpace = anchorTransform * vertex4

    // Step 2: world space to camera space
    let world3 = float3(worldSpace.x, worldSpace.y, …

I created a sample app that draws on a 3D object, but the output drawing is not smooth (as pictured). I spent almost a whole day trying to figure this out and still can't resolve it. What could be the problem?
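A common cause of jagged strokes is building geometry straight from the raw, widely spaced touch samples. One possible mitigation (my suggestion, not something from the original post) is to smooth the 2D points before projecting them to 3D, for example with Chaikin corner cutting; a minimal Python sketch with made-up points:

```python
# Chaikin corner cutting: each pass replaces every segment (p, q) with two
# points at 25% and 75% along it, progressively rounding off sharp corners.

def chaikin(points, passes=2):
    for _ in range(passes):
        smoothed = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

# A sharp right-angle corner in the stroke gets rounded after one pass.
stroke = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(chaikin(stroke, passes=1))
# -> [(0.25, 0.0), (0.75, 0.0), (1.0, 0.25), (1.0, 0.75)]
```

Feeding the smoothed points into createGeometryForPoints should reduce the visible kinks; collecting more samples per stroke (e.g. coalesced touches) helps as well.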
I am creating the custom drawing geometry as follows:

// I create self.drawingNode on touchesBegan and add it to the root node.
// points are just the 2D points from the touches.
func createGeometryForPoints(points: [CGPoint]) -> SCNGeometry {
    var all_3d_points: [SCNVector3] = []
    for point in points {
        let result = self.get3dPoint(point)
        if result.1 == true {
            all_3d_points.append(result.0)
        } else {
            print("INVALID POINT")
        }
    }

    var indices: [Int32] = []
    var index: Int32 = 0
    var previousIndex: Int32 = -1
    for _ in all_3d_points {
        if previousIndex != -1 {
            indices.append(previousIndex)
        }
        indices.append(index)
        index = index …

I have read the complete documentation for all of the ARKit classes top to bottom. Nowhere do I see any description of an ability to actually get the user's face texture.
ARFaceAnchor contains the ARFaceGeometry (topology and geometry composed of vertices) and a dictionary of blend shape coefficients (keyed by BlendShapeLocation), which allow individual facial features to be manipulated by adjusting the geometry of the user's face vertices.

But where can I get the actual texture of the user's face? For example: the actual skin tone/color/texture, facial hair, and other unique features such as scars or birthmarks? Or is this simply not possible?
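For reference, the blend shape coefficients mentioned above are morph-target weights: conceptually, each coefficient in [0, 1] linearly blends a per-vertex delta into the neutral mesh. A toy illustration of that math in Python (the vertex data and shape names here are invented, not ARKit's actual values):

```python
# Morph-target blending: final vertex = neutral vertex + sum(weight_k * delta_k).
# Meshes are lists of (x, y, z) tuples; all numbers below are made up.

def blend(neutral, deltas, weights):
    result = []
    for i, (x, y, z) in enumerate(neutral):
        dx = sum(weights[name] * deltas[name][i][0] for name in weights)
        dy = sum(weights[name] * deltas[name][i][1] for name in weights)
        dz = sum(weights[name] * deltas[name][i][2] for name in weights)
        result.append((x + dx, y + dy, z + dz))
    return result

neutral = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
deltas = {"jawOpen": [(0.0, -0.2, 0.0), (0.0, 0.0, 0.0)]}
print(blend(neutral, deltas, {"jawOpen": 0.5}))
# -> [(0.0, -0.1, 0.0), (0.0, 1.0, 0.0)]
```

Note that this is geometry manipulation only; nothing in it carries color information, which is why the blend shapes alone cannot supply a skin texture.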