Tags: audio, unsafe-pointers, avassetreader, cmsamplebuffer, swift
I'm working on converting this code to Swift; it gets me the audio data I need for visualization. The Objective-C code I've been using works well:
    while (reader.status == AVAssetReaderStatusReading) {
        AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        self.sampleBufferRef = [trackOutput copyNextSampleBuffer];
        if (self.sampleBufferRef) {
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(self.sampleBufferRef);
            size_t bufferLength = CMBlockBufferGetDataLength(blockBufferRef);
            void *data = malloc(bufferLength);
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, bufferLength, data);

            SInt16 *samples = (SInt16 *)data;
            int sampleCount = bufferLength / bytesPerInputSample;
            for (int i = 0; i < sampleCount; i += 100) {
                Float32 sample = (Float32) *samples++;
                sample = decibel(sample);
                sample = minMaxX(sample, noiseFloor, 0);
                tally += sample;
                for (int j = 1; j < channelCount; j++)
                    samples++;
                tallyCount++;
                if (tallyCount == downsampleFactor) {
                    sample = tally / tallyCount;
                    maximum = maximum > sample ? maximum : sample;
                    [fullSongData appendBytes:&sample length:sizeof(sample)]; // tried dividing the sample by 2
                    tally = 0;
                    tallyCount = 0;
                    outSamples++;
                }
            }
            CMSampleBufferInvalidate(self.sampleBufferRef);
            CFRelease(self.sampleBufferRef);
            free(data);
        }
    }
In Swift, I'm trying to write this part:
    while (reader.status == AVAssetReaderStatus.Reading) {
        var trackOutput = reader.outputs[0] as! AVAssetReaderTrackOutput
        self.sampleBufferRef = trackOutput.copyNextSampleBuffer()
        if (self.sampleBufferRef != nil) {
            let blockBufferRef = CMSampleBufferGetDataBuffer(self.sampleBufferRef)
            let bufferLength = CMBlockBufferGetDataLength(blockBufferRef)
            var data = NSMutableData(length: bufferLength)
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, bufferLength, data!.mutableBytes)
            var samples = UnsafeMutablePointer<Int16>(data!.mutableBytes)
            var sampleCount = Int32(bufferLength) / bytesPerInputSample
            for var i = 0; i < Int(sampleCount); i++ {
                var sampleValue = CGFloat(samples[i]) etc. etc.
But when I println() sampleValue to the console, it just comes out as (Opaque Value), and I can't figure out how to actually read the sample value.
I'm a newbie trying to read audio data for visualization purposes. Any help with getting at the audio data in the buffer would be appreciated. Thanks.
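For reference, in current Swift the copy out of the block buffer can go into a typed array instead of through raw pointers. This is only a sketch, and it assumes the track output is configured for interleaved 16-bit linear PCM (the output settings aren't shown above):

```swift
import AVFoundation
import CoreMedia

// Sketch only: copies one sample buffer's bytes into an [Int16].
// Assumes the AVAssetReaderTrackOutput was set up for interleaved
// 16-bit linear PCM (an assumption; not shown in the question).
func int16Samples(from sampleBuffer: CMSampleBuffer) -> [Int16] {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return [] }
    let length = CMBlockBufferGetDataLength(blockBuffer)
    var samples = [Int16](repeating: 0, count: length / MemoryLayout<Int16>.size)
    samples.withUnsafeMutableBytes { raw in
        guard let base = raw.baseAddress else { return }
        // Copy the raw PCM bytes out of the block buffer into the array.
        _ = CMBlockBufferCopyDataBytes(blockBuffer,
                                       atOffset: 0,
                                       dataLength: length,
                                       destination: base)
    }
    return samples
}
```

With that, `Float(samples[i])` is an ordinary number that prints as expected and can be fed through the decibel()/minMaxX() steps from the Objective-C version.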
How about using stride?
let bytesPerInputSample = 4 // assumption ;)
var samplePtr = data.mutableBytes // `data` is the NSMutableData from the question
for _ in stride(from: 0, to: data.length, by: bytesPerInputSample) {
    let currentSample = Data(bytes: samplePtr, count: bytesPerInputSample)
    // do whatever is needed with the current sample
    // ...
    // advance the pointer by one sample
    samplePtr = samplePtr + bytesPerInputSample
}
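Getting an actual number out of each chunk still depends on how your reader's output settings lay out those 4 bytes. If they are, for example, two interleaved 16-bit channels (again, an assumption), the chunk can be read like this:

```swift
// Assumption: the 4-byte chunk holds two interleaved Int16 channels.
let channelValues = currentSample.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
    Array(raw.bindMemory(to: Int16.self)) // e.g. [left, right]
}
let sampleValue = Float(channelValues[0]) // a plain number you can print
print(sampleValue)
```

If your output settings produce 32-bit float samples instead, interpret the same bytes as a single Float32 rather than two Int16s.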