Tag: cmsamplebuffer

Pulling data from a CMSampleBuffer in order to create a deep copy

I am trying to create a copy of the CMSampleBuffer returned by captureOutput in an AVCaptureVideoDataOutputSampleBufferDelegate.

Since the CMSampleBuffers come from a preallocated pool of (15) buffers, if I keep references to them they cannot be recollected. This causes all remaining frames to be dropped.

As Apple's documentation puts it: To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.

If your application is causing samples to be dropped by retaining the provided CMSampleBufferRef objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then releasing the sample buffer (if it was previously retained) so that the memory it references can be reused.

Obviously I have to copy the CMSampleBuffer, but CMSampleBufferCreateCopy() only creates a shallow copy. So I concluded that I must use CMSampleBufferCreate(). I filled in the 12 (!) parameters the constructor requires, but ran into the problem that my CMSampleBuffers don't contain a blockBuffer (not entirely sure what that is, but it seems important).

This question has been asked several times, but never answered:

Deep copy of CMImageBuffer or CVImageBuffer
Creating a copy of CMSampleBuffer in Swift 2.0

One possible answer is: "I finally figured out how to use this to create a deep clone. All of the copy methods reuse the data in the heap, which keeps the AVCaptureSession tied up. So I had to pull the data into an NSMutableData object and then created a new sample buffer." Credit to Rob on SO. However, I don't know how to do this correctly.

In case you are interested, here is the output of print(sampleBuffer). There is no mention of a blockBuffer, i.e. CMSampleBufferGetDataBuffer returns nil. There is an imageBuffer, but creating a "copy" using CMSampleBufferCreateForImageBuffer doesn't seem to free the CMSampleBuffer either.


EDIT: Since this question was posted, I have been trying even more ways to copy the memory.

I did the same thing that user Kametrixom attempted. Here is my try at the same idea: first copy the CVPixelBuffer, then use CMSampleBufferCreateForImageBuffer to create the final sample buffer. However, this results in one of two errors:

  • An EXC_BAD_ACCESS on the memcpy instruction, i.e. a segfault from trying to access memory outside the application.
  • Or the memory copies successfully, but CMSampleBufferCreateReadyWithImageBuffer() fails with result code -12743, which "Indicates that the format of the given media does not match the given format description. For example, a format description paired with a CVImageBuffer that fails CMVideoFormatDescriptionMatchesImageBuffer."

You can see that both Kametrixom and I used CMSampleBufferGetFormatDescription(sampleBuffer) in an attempt to copy the source buffer's format description. So I am not sure why the format of the given media doesn't match the given format description.
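
For concreteness, here is a minimal sketch of the deep-copy idea under discussion: clone the CVPixelBuffer byte for byte, then wrap the clone in a brand-new CMSampleBuffer. It assumes a non-planar pixel format such as BGRA (planar formats like 420 YpCbCr would need a per-plane copy loop), and the helper name deepCopy is purely illustrative:

import CoreMedia
import CoreVideo
import Foundation

func deepCopy(_ sampleBuffer: CMSampleBuffer) -> CMSampleBuffer? {
    guard let src = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    // 1. Allocate a brand-new pixel buffer with the same geometry and format.
    var dstOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        nil,
                        &dstOut)
    guard let dst = dstOut else { return nil }
    // Carry the source's attachments (color space etc.) over to the clone.
    CVBufferPropagateAttachments(src, dst)

    // 2. Copy the pixels row by row; bytes-per-row can differ between buffers.
    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    if let srcBase = CVPixelBufferGetBaseAddress(src),
       let dstBase = CVPixelBufferGetBaseAddress(dst) {
        let srcRowBytes = CVPixelBufferGetBytesPerRow(src)
        let dstRowBytes = CVPixelBufferGetBytesPerRow(dst)
        for row in 0..<CVPixelBufferGetHeight(src) {
            memcpy(dstBase + row * dstRowBytes,
                   srcBase + row * srcRowBytes,
                   min(srcRowBytes, dstRowBytes))
        }
    }
    CVPixelBufferUnlockBaseAddress(dst, [])
    CVPixelBufferUnlockBaseAddress(src, .readOnly)

    // 3. Give the clone its own format description plus the original timing,
    //    then wrap it in a new, fully independent sample buffer.
    var desc: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: dst,
                                                 formatDescriptionOut: &desc)
    guard let format = desc else { return nil }

    var timing = CMSampleTimingInfo()
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, at: 0, timingInfoOut: &timing)

    var copyOut: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: dst,
                                             formatDescription: format,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &copyOut)
    return copyOut
}

Note that the sketch builds a fresh format description from the clone instead of reusing CMSampleBufferGetFormatDescription(sampleBuffer); together with the propagated attachments, that appears to be what the CMVideoFormatDescriptionMatchesImageBuffer check cares about.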

pool deep-copy ios cmsamplebuffer swift

18 votes · 2 answers · 4410 views

Getting desired data from a CVPixelBuffer reference

I have a program that views the camera input in real time and gets the color value of the middle pixel. I use a captureOutput: method to grab the CMSampleBuffer from the AVCaptureSession output (which happens to be readable as a CVPixelBuffer) and then grab the rgb values of the pixel with the following code:

// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0); 

// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer); 
size_t height = CVPixelBufferGetHeight(imageBuffer); 
unsigned char* pixel = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

NSLog(@"Middle pixel: %hhu", pixel[((width*height)*4)/2]);
int red = pixel[(((width*height)*4)/2)+2];
int green = pixel[(((width*height)*4)/2)+1];
int …
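
One caveat with the indexing above: it treats the buffer as a packed width*height*4 array, but Core Video rows are often padded, so the middle of the raw buffer is not necessarily the middle pixel. Below is a hedged Swift sketch (not the code from the question) that indexes through bytesPerRow instead; it assumes a BGRA buffer, e.g. when the output's videoSettings request kCVPixelFormatType_32BGRA:

import CoreVideo

func middlePixelBGRA(of pixelBuffer: CVPixelBuffer) -> (r: UInt8, g: UInt8, b: UInt8)? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let x = CVPixelBufferGetWidth(pixelBuffer) / 2
    let y = CVPixelBufferGetHeight(pixelBuffer) / 2

    // Each BGRA pixel is 4 bytes; rows are bytesPerRow (not width * 4) apart.
    let p = base.advanced(by: y * bytesPerRow + x * 4)
                .assumingMemoryBound(to: UInt8.self)
    return (r: p[2], g: p[1], b: p[0])
}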

iphone ios avcapturesession cmsamplebufferref cmsamplebuffer

10 votes · 1 answer · 6706 views

Create a CMSampleBuffer from a CVPixelBuffer

I am provided with a pixel buffer that I need to append to the rtmpStream object from the lf.swift library in order to stream it to YouTube. It looks like this: rtmpStream.appendSampleBuffer(sampleBuffer: CMSampleBuffer, withType: CMSampleBufferType)

So I need to somehow convert the CVPixelBuffer into a CMSampleBuffer so it can be appended to the rtmpStream.

var sampleBuffer: CMSampleBuffer? = nil
var sampleTimingInfo: CMSampleTimingInfo = kCMTimingInfoInvalid
sampleTimingInfo.presentationTimeStamp = presentationTime

var formatDesc: CMVideoFormatDescription? = nil
_ = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDesc)

if let formatDesc = formatDesc {
    CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, pixelBuffer, formatDesc, &sampleTimingInfo, &sampleBuffer)
}

if let sampleBuffer = sampleBuffer {
    self.rtmpStream.appendSampleBuffer(sampleBuffer, withType: CMSampleBufferType.video)
}

Unfortunately, however, this doesn't work. The streaming library is tested and works fine when I stream camera input or a screen capture. I suspect the problem may be the sampleTimingInfo, since it requires a decodeTime and a duration, and I don't know how to obtain those for the provided CVPixelBuffer.
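
For what it's worth, the missing timing fields may not need to come from anywhere: below is a sketch under the assumption that the stream only cares about presentation times, using an invalid decodeTimeStamp (fine when there is no frame reordering) and a duration derived from an assumed frame rate of 30 fps:

import CoreMedia

func makeSampleBuffer(from pixelBuffer: CVPixelBuffer,
                      presentationTime: CMTime,
                      fps: Int32 = 30) -> CMSampleBuffer? {
    var formatDesc: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDesc)
    guard let desc = formatDesc else { return nil }

    // decodeTimeStamp may stay invalid; duration is one frame at the given rate.
    var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: fps),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: desc,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}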

type-conversion cmsamplebuffer swift cvpixelbuffer

9 votes · 0 answers · 1455 views

Playing audio from a CMSampleBuffer

I have created a video chat app for groups in iOS. I have been looking for ways to control the audio volume of the different participants separately. I found a way to mute and unmute using isPlaybackEnabled on RemoteAudioTrack, but no way to control the volume.

I also wondered whether we could use it with AVAudioPlayer. I came across addSink. This is what I tried, from here:

class Audio: NSObject, AudioSink {
    var a = 1
    func renderSample(_ audioSample: CMSampleBuffer!) {
        print("audio found", a)
        a += 1

        var audioBufferList = AudioBufferList()
        var data = Data()
        var blockBuffer : CMBlockBuffer?

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioSample, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &blockBuffer)
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))

        for audioBuffer in buffers {
            let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
            data.append(frame!, count: …
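
One direction that might work, sketched below with no claim that it is the Twilio-sanctioned approach: convert each CMSampleBuffer the sink delivers into an AVAudioPCMBuffer, which an AVAudioPlayerNode (whose volume is settable per node) can schedule through an AVAudioEngine. It assumes the sink delivers uncompressed PCM:

import AVFoundation
import CoreMedia

func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }

    let frames = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else { return nil }
    pcm.frameLength = frames

    // Copy the sample data straight into the PCM buffer's AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer, at: 0, frameCount: Int32(frames),
        into: pcm.mutableAudioBufferList)
    return status == noErr ? pcm : nil
}

Each remote participant would then get their own player node, and that node's volume property becomes the per-participant control.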

avaudioplayer twilio ios cmsamplebuffer swift

8 votes · 2 answers · 453 views

Correct way to stop video recording (finish writing) with AVAssetWriter without crashing

I record video with AVAssetWriter. The user can either send the video, in which case I call finishWriting, or cancel the recording, in which case I call cancelWriting.

Here is how I record:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!){
        guard !message_to_send else{
            return
        }
        guard is_recording else{
            return
        }
        guard CMSampleBufferDataIsReady(sampleBuffer) else{
            print("data not ready")
            return
        }
        guard let w=file_writer else{
            print("video writer nil")
            return
        }
        guard let sb=sampleBuffer else{
            return
        }

        if w.status == .unknown /*&& start_recording_time==nil*/{
            if captureOutput==video_output{
                print("\nSTART RECORDING")
                w.startWriting()
                start_recording_time=CMSampleBufferGetPresentationTimeStamp(sb)
                w.startSession(atSourceTime: start_recording_time!)
            }else{
                return
            }
        }

        if w.status == .failed{
            print("failed with error:", w.error ?? "") …
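
For anyone in the same spot, here is a minimal sketch of a shutdown path that avoids the classic "append after finishing" crash. It reuses the property names from above (file_writer, is_recording) and assumes a serial session_queue that also services captureOutput: stop appending on that queue first, check the writer's status, and only then finish or cancel:

func stopRecording(send: Bool) {
    session_queue.async { [weak self] in
        guard let self = self, let w = self.file_writer else { return }
        self.is_recording = false            // no more appends after this point
        guard w.status == .writing else {
            if w.status == .failed { print("writer failed:", w.error ?? "") }
            return
        }
        if send {
            w.finishWriting {
                print("finished writing:", w.outputURL)
            }
        } else {
            w.cancelWriting()
        }
    }
}

Because the flag is flipped on the same queue that delivers sample buffers, no captureOutput call can slip an append in between the status check and finishWriting.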

ios avassetwriter cmsamplebuffer swift avassetwriterinput

7 votes · 0 answers · 1594 views

How to record audio from the mic while appending modified sampleBuffer images to AVAssetWriter

This is an extension of my previous, unanswered question: AVCaptureSession is not recording audio from the mic in Swift.

I am very unclear on how to write both the live-modified video and the audio recorded from the microphone. I have been searching for months and have found nothing. What seems to set my problem apart from the others is that, in the captureOutput function, I take the image buffer from the sampleBuffer, convert it to an image, modify it, and then write it back into an AVAssetWriterInputPixelBufferAdaptor, rather than recording everything from the output as a normal video. From there, I don't know how to get the audio from the sampleBuffer, or whether this is even the right approach, although I have seen other people get an AudioBufferList from captureOutput.

At the very least, this is what I have in my main class:

class CaptureVC: UIViewController, AVCapturePhotoCaptureDelegate, AVCaptureVideoDataOutputSampleBufferDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate,UIPickerViewDataSource,UIPickerViewDelegate {
    var captureSession: AVCaptureSession?
    var stillImageOutput: AVCapturePhotoOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    let videoOutput = AVCaptureVideoDataOutput()
    let audioOutput = AVCaptureAudioDataOutput()

    var assetWriter: AVAssetWriter?
    var assetWriterPixelBufferInput: AVAssetWriterInputPixelBufferAdaptor?
    var assetWriterAudioInput: AVAssetWriterInput?
    var currentSampleTime: CMTime?
    var currentVideoDimensions: CMVideoDimensions?
    var videoIsRecording = false

    override func viewDidLoad() {
        super.viewDidLoad()

        let backCamera = AVCaptureDevice.default(for: AVMediaType.video)
        let microphone = AVCaptureDevice.default(.builtInMicrophone, for: AVMediaType.audio, position: .unspecified)

        var …
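
For the audio side specifically, here is a hedged sketch of the delegate routing I would expect, reusing the property names from the class above (the image-modification step is elided): audio buffers, identified by their output, go straight into assetWriterAudioInput, while video goes through the pixel-buffer adaptor:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard videoIsRecording else { return }

    if output == audioOutput {
        // Audio needs no pixel processing: append the sample buffer as-is.
        if assetWriterAudioInput?.isReadyForMoreMediaData == true {
            assetWriterAudioInput?.append(sampleBuffer)
        }
        return
    }

    // Video path: modify the pixel buffer, then append it via the adaptor
    // using the buffer's own presentation timestamp.
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
       assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
        // ... image modification would happen here ...
        assetWriterPixelBufferInput?.append(pixelBuffer, withPresentationTime: time)
    }
}

(The class would also need to adopt AVCaptureAudioDataOutputSampleBufferDelegate and add audioOutput to the session for this callback to receive audio at all.)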

avcapturesession avassetwriter cmsamplebuffer swift

7 votes · 0 answers · 792 views

How to fill an audio AVFrame (ffmpeg) with data obtained from a CMSampleBufferRef (AVFoundation)?

I am writing a program for streaming live audio and video from a web camera to an rtmp-server. I work on MacOS X 10.8, so I use the AVFoundation framework to get audio and video frames from the input devices. The frames arrive in the delegate:

-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection ,

where sampleBuffer contains the audio or video data.

When I receive audio data in the sampleBuffer, I try to convert this data into a libavcodec AVFrame and encode that AVFrame:

    aframe = avcodec_alloc_frame();  // AVFrame *aframe;
    int got_packet, ret;
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer); // CMSampleBufferRef

    NSUInteger channelIndex = 0;

    CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
    size_t lengthAtOffset = 0;
    size_t totalLength = 0;
    SInt16 *samples = NULL;
    CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

    const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

    aframe->nb_samples = (int)numSamples;
    aframe->channels = audioDescription->mChannelsPerFrame;
    aframe->sample_rate = (int)audioDescription->mSampleRate;

    // my webCamera configured to …

ffmpeg avcodec cmsamplebuffer

6 votes · 0 answers · 5911 views

Getting values from an UnsafeMutablePointer<Int16> in Swift for audio data

I am struggling to convert this code, which helps me get audio data for visualization, to Swift. The code I was using in Obj-C, which worked well, is:

    while (reader.status == AVAssetReaderStatusReading) {
        AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        self.sampleBufferRef = [trackOutput copyNextSampleBuffer];
        if (self.sampleBufferRef) {
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(self.sampleBufferRef);
            size_t bufferLength = CMBlockBufferGetDataLength(blockBufferRef);
            void *data = malloc(bufferLength);
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, bufferLength, data);

            SInt16 *samples = (SInt16 *)data;
            int sampleCount = bufferLength / bytesPerInputSample;

            for (int i = 0; i < sampleCount; i += 100) {
                Float32 sample = (Float32) *samples++;

                sample = decibel(sample);
                sample = minMaxX(sample, noiseFloor, 0);
                tally += sample;

                for (int j = 1; j < channelCount; j++)
                    samples++;
                tallyCount++;

                if (tallyCount == downsampleFactor) {
                    sample …
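
A hedged Swift translation of the extraction part (the decibel/downsampling math is left out; samples are assumed to be 16-bit, as the Obj-C version's SInt16 cast implies, and channelCount must be at least 1):

import CoreMedia

func readSamples(from sampleBuffer: CMSampleBuffer, channelCount: Int) -> [Int16] {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return [] }
    let length = CMBlockBufferGetDataLength(blockBuffer)

    // Copy the block buffer's bytes out, then reinterpret them as Int16.
    var data = [Int16](repeating: 0, count: length / MemoryLayout<Int16>.size)
    data.withUnsafeMutableBytes { raw in
        _ = CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0,
                                       dataLength: length,
                                       destination: raw.baseAddress!)
    }
    // Keep only the first channel, mirroring the inner `samples++` loop above.
    return stride(from: 0, to: data.count, by: channelCount).map { data[$0] }
}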

audio unsafe-pointers avassetreader cmsamplebuffer swift

6 votes · 1 answer · 658 views

Creating a copy of a CMSampleBuffer in Swift returns OSStatus -12743 (Invalid Media Format)

I am trying to perform a deep clone of a CMSampleBuffer in order to store the output of an AVCaptureSession. I receive the error kCMSampleBufferError_InvalidMediaFormat (OSStatus -12743) when I run the function CMSampleBufferCreateForImageBuffer. I don't see how my CVImageBuffer fails to match the CMSampleBuffer's format description. Does anybody know where I have gone wrong? Here is my test code.

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    let allocator: CFAllocator = CFAllocatorGetDefault().takeRetainedValue()

    func cloneImageBuffer(imageBuffer: CVImageBuffer!) -> CVImageBuffer? {
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        let bytesPerRow: size_t = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width: size_t = CVPixelBufferGetWidth(imageBuffer)
        let height: size_t = CVPixelBufferGetHeight(imageBuffer)
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let pixelFormatType = CVPixelBufferGetPixelFormatType(imageBuffer)

        let data = NSMutableData(bytes: baseAddress, length: bytesPerRow * height)
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

        var clonedImageBuffer: CVPixelBuffer?
        let refCon = NSMutableData()

        if …
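
In case it helps, the mismatch is commonly reported to come from the clone lacking attachments (color space and the like) that the original's format description encodes. A hedged fix, in modern Swift spelling, is to propagate the attachments onto the clone and build a fresh description from it rather than reusing the source's:

// `imageBuffer` is the source, `clonedImageBuffer` the copy made above.
CVBufferPropagateAttachments(imageBuffer, clonedImageBuffer)

var clonedDesc: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: clonedImageBuffer,
                                             formatDescriptionOut: &clonedDesc)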

core-video core-media ios avcapturesession cmsamplebuffer

6 votes · 1 answer · 1869 views

Real-time AVAssetWriter: synchronizing audio and video when pausing/resuming

I am trying to record video with sound using the iPhone's front camera. Since I also need to support pause/resume functionality, I need to use AVAssetWriter. I found an example online, written in Objective-C, that almost achieves the desired functionality (http://www.gdcl.co.uk/2013/02/20/iPhone-Pause.html).

Unfortunately, after converting this example to Swift, I noticed that if I pause/resume, there is a small but noticeable period at the end of each "section" during which the video is just a still frame while the audio keeps playing. So it seems that when isPaused is triggered, the recorded audio track ends up longer than the recorded video track.

Sorry if this seems like a noob question, but I am not an expert in AVFoundation and some help would be much appreciated!

Below is my implementation of didOutput sampleBuffer:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    var isVideo = true
    if videoConntection != connection {
        isVideo = false
    }
    if (!isCapturing || isPaused) {
        return
    }

    if (encoder == nil) {
        if isVideo {
            return
        }
        if let fmt = CMSampleBufferGetFormatDescription(sampleBuffer) {
            let desc = CMAudioFormatDescriptionGetStreamBasicDescription(fmt as CMAudioFormatDescription)
            if let chan = desc?.pointee.mChannelsPerFrame, …
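
The fix in the gdcl example, as far as I can tell, is to keep a running offset of the time spent paused and shift every buffer back by it before appending, so the audio and video tracks stay contiguous across a pause. A hedged sketch of that bookkeeping (names assumed; re-timing the buffer itself, e.g. via CMSampleBufferCreateCopyWithNewTiming, is left out):

import CoreMedia

var pauseOffset = CMTime.zero       // total time spent paused so far
var lastSeenPTS = CMTime.invalid    // PTS of the last buffer before pausing

// Shift a buffer's presentation time back by the accumulated pause time.
func adjustedTimestamp(for sampleBuffer: CMSampleBuffer) -> CMTime {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    return CMTimeSubtract(pts, pauseOffset)
}

// On resume, grow the offset by the gap between the first buffer after the
// pause and the last buffer seen before it.
func didResume(firstPTSAfterPause: CMTime) {
    if lastSeenPTS.isValid {
        let gap = CMTimeSubtract(firstPTSAfterPause, lastSeenPTS)
        pauseOffset = CMTimeAdd(pauseOffset, gap)
    }
}

Crucially, the same offset has to be applied to both the audio and the video connection; offsetting only one of them would produce exactly the trailing still-frame/extra-audio effect described above.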

avfoundation ios avassetwriter cmsamplebuffer swift

6 votes · 1 answer · 1906 views