How do I correctly unproject 2D points into an SCNView in all orientations?
Summary:
I'm using the Vision framework together with AVFoundation to detect face landmarks for each face in the camera feed (via VNDetectFaceLandmarksRequest). From there, I take the resulting observations, unproject each landmark point into a SceneKit view (SCNView), and then use those points as vertices to draw a custom geometry that overlays a material on each detected face.
Effectively, I'm trying to recreate the functionality of ARFaceTrackingConfiguration. This generally works as expected, but only when the device is in landscape-right orientation using the front camera. When I rotate the device or switch to the rear camera, the unprojected points no longer align with the detected face the way they do in landscape-right / front-camera mode.
The problem: when testing this code, the mesh displays correctly (i.e. it appears glued to the user's face), but only when the front camera is used in landscape-right orientation. Although the code runs as expected in every orientation (i.e. a face mesh is generated for every detected face), the mesh is badly misaligned in all other cases.
I suspect the issue comes from how I convert the face's bounding box (using VNImageRectForNormalizedRect with my SCNView's width/height rather than the pixel buffer's dimensions, which are usually much larger), although every modification I've tried leads to the same problem.
Beyond that, I also suspect it could be an issue with my SCNCamera, since I don't fully understand how its transform/projection matrices work or whether I need to adjust them.
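For context, my current suspicion is that the normalized points need an orientation-dependent remap before being scaled into view coordinates. A hypothetical sketch of that missing step (the per-case mappings are assumptions I have not verified, not working code):

```swift
import UIKit

// Hypothetical sketch of the step I suspect is missing: remap a Vision
// normalized point (bottom-left origin) for the current orientation and
// camera before scaling into view coordinates. The case mappings below
// are assumptions, not verified values.
func remap(_ point: CGPoint,
           orientation: UIDeviceOrientation,
           usingFrontCamera: Bool) -> CGPoint {
    var p = point
    switch orientation {
    case .landscapeRight:
        break                                // the one case that already works
    case .landscapeLeft:
        p = CGPoint(x: 1 - p.x, y: 1 - p.y)  // assumed: 180° rotation
    case .portrait:
        p = CGPoint(x: p.y, y: 1 - p.x)      // assumed: 90° rotation
    case .portraitUpsideDown:
        p = CGPoint(x: 1 - p.y, y: p.x)      // assumed: -90° rotation
    default:
        break
    }
    if !usingFrontCamera {
        p.x = 1 - p.x                        // assumed: rear camera is unmirrored
    }
    return p
}
```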
Example of my Vision request setup:
// Set up Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Pass along the camera intrinsics, if available
if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Determine the EXIF orientation for the current device orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Set up the Vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestHandlerOptions)

// Set up the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]
    // Draw faces on the main queue
    DispatchQueue.main.async {
        self.drawFaceGeometry(observations: observations)
    }
}

// Set up the landmarks request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Perform the request
do {
    try handler.perform([request])
} catch {
    print(error)
}
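The exifOrientationForCurrentDeviceOrientation() helper is not shown above. For reference, a typical implementation follows the pattern from Apple's face-tracking sample code; the sketch below assumes the front camera (the mirrored values would differ for the rear camera):

```swift
import UIKit
import ImageIO

// Sketch of the helper called above. The mapping depends on the camera
// position; these values assume the front (selfie) camera.
func exifOrientationForCurrentDeviceOrientation() -> CGImagePropertyOrientation {
    switch UIDevice.current.orientation {
    case .portrait:
        return .leftMirrored     // device upright, home button at the bottom
    case .portraitUpsideDown:
        return .rightMirrored
    case .landscapeLeft:
        return .downMirrored     // home button on the right
    case .landscapeRight:
        return .upMirrored       // home button on the left
    default:
        return .leftMirrored     // assume portrait when orientation is unknown
    }
}
```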
Example of my SCNView setup:
// Set up the SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Set up the scene
let scene = SCNScene()
scnView.scene = scene

// Set up the camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

// Set up an ambient light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)
Example of my face processing:
func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNode = [SCNNode]()

    // Project the world origin to get a reference screen-space depth (z)
    let projectedOrigin = scnView.projectPoint(SCNVector3Zero)

    // Iterate through each detected face
    for observation in observations {
        // Set up an SCNNode for the face
        let face = SCNNode()

        // Convert the normalized bounding box into view coordinates
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))

        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within the face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

            // Unproject every landmark point into a vertex
            var vertices = [SCNVector3]()
            if let allPoints = landmarks.allPoints {
                for point in allPoints.normalizedPoints {
                    // Map the point from face-bounds space into view coordinates
                    let normalizedPoint = point.applying(affineTransform)
                    // Unproject the 2D point at the depth of the world origin
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = scnView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }

            // Set up indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...

            // Set up texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...

            // Normalize the texture coordinates to the texture image size
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                return CGPoint(x: x, y: y)
            }

            // Set up sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

            // Set up elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

            // Set up the geometry and its material
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage

            // Set up the node and add it to the scene
            let customFace = SCNNode(geometry: geometry)
            scnView.scene?.rootNode.addChildNode(customFace)

            // Append the face to the face nodes array
            faceNode.append(face)
        }
    }

    // Iterate the face nodes and append them to the scene
    for node in faceNode {
        scnView.scene?.rootNode.addChildNode(node)
    }
}
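The project/unproject round trip above relies on keeping a fixed depth: projecting SCNVector3Zero yields the screen-space z of the world origin, and any 2D point unprojected at that same z lands on the plane through the origin parallel to the screen. A minimal sketch of that idea in isolation (using a hypothetical helper name):

```swift
import SceneKit

// Sketch: unproject a 2D view point onto the world-space plane that
// passes through the origin, parallel to the near plane.
func worldPoint(for viewPoint: CGPoint, in view: SCNView) -> SCNVector3 {
    // Screen-space depth of the world origin
    let originDepth = view.projectPoint(SCNVector3Zero).z
    // Unproject the 2D point at that fixed depth
    return view.unprojectPoint(SCNVector3(Float(viewPoint.x),
                                          Float(viewPoint.y),
                                          originDepth))
}
```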