Tags: opencv, image, ios, swift, cvpixelbuffer
Is there a standard, high-performance way to edit or draw into a CVImageBuffer/CVPixelBuffer directly?
Every video-editing demo I have found online overlays the drawing (rectangles or text) on screen and never edits the CVPixelBuffer itself.
Update: I tried using a CGContext, but the saved video does not show the context drawing:
private var adapter: AVAssetWriterInputPixelBufferAdaptor?

extension TrainViewController: CameraFeedManagerDelegate {
    func didOutput(sampleBuffer: CMSampleBuffer) {
        let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)

        // Wrap the pixel buffer's memory in a CGContext and draw into it.
        // colorSpace, alphaInfo, timestamp and _time are defined elsewhere in the class (not shown).
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: alphaInfo.rawValue)
        else {
            return
        }

        context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
        context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
        context.flush()

        adapter?.append(pixelBuffer, withPresentationTime: time)
    }
}
You need to call CVPixelBufferLockBaseAddress(pixelBuffer, ...) before creating the bitmap CGContext, and CVPixelBufferUnlockBaseAddress(pixelBuffer, ...) after you have finished drawing into the context.
Without the lock, CVPixelBufferGetBaseAddress() returns NULL. Your CGContext then allocates fresh memory of its own to draw into, and that memory is simply discarded afterwards.
Also double-check your color space: it is easy to mix up the component order (see the sketch after the example below).
For example:
guard
    CVPixelBufferLockBaseAddress(pixelBuffer, []) == kCVReturnSuccess,
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: colorSpace,
                            bitmapInfo: alphaInfo.rawValue)
else {
    return
}

context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))

// Unlock once drawing is finished, then hand the modified buffer to the writer.
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
adapter?.append(pixelBuffer, withPresentationTime: time)
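To tie the locking requirement and the color-space caveat together, here is a minimal sketch of a draw helper, assuming the camera delivers kCVPixelFormatType_32BGRA frames (the common default for AVCaptureVideoDataOutput). The name drawOverlay and the red ellipse are illustrative only; the parts that matter are the lock/defer-unlock pair and the colorSpace/bitmapInfo pairing that matches BGRA memory layout.

import CoreGraphics
import CoreVideo

// Sketch only: assumes 32BGRA frames; drawOverlay is a hypothetical helper name.
func drawOverlay(on pixelBuffer: CVPixelBuffer) {
    // The base address is only valid between lock and unlock.
    guard CVPixelBufferLockBaseAddress(pixelBuffer, []) == kCVReturnSuccess else { return }
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // 32BGRA = little-endian 32-bit samples, alpha stored first and premultiplied, device RGB.
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue
        | CGImageAlphaInfo.premultipliedFirst.rawValue

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: colorSpace,
                                  bitmapInfo: bitmapInfo)
    else { return }

    // Draw directly into the pixel buffer's backing memory.
    context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
    context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
}

Called from didOutput(sampleBuffer:) right before adapter?.append(...), this keeps the drawing inside the same buffer that is written to the output file.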