Loading audio data from an &lt;audio /&gt; element into an AudioBufferSourceNode via createMediaElementSource?

Max*_*ens 12 javascript html5 html5-audio web-audio-api

Is it possible to load an audio file from an &lt;audio/&gt; element via createMediaElementSource and then transfer the audio data into an AudioBufferSourceNode?

Using the audio element as the source (MediaElementSource) does not seem to be an option, because I want to use buffer methods like noteOn and noteGrainOn.

Unfortunately, the audio file cannot be loaded directly into a buffer via XHR (see Opening a Soundcloud track's stream_url via client-side XHR?).

Nevertheless, loading the buffer contents from an audio element seems to be possible:

http://www.w3.org/2011/audio/wiki/Spec_Differences#Reading_Data_from_a_Media_Element

Or is it even possible to use the &lt;audio/&gt; element's buffer directly as the sourceNode?
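
For context, a minimal sketch of the buffer-based playback this is asking for, assuming a same-origin myfile.mp3 (placeholder name) and the legacy webkit-prefixed API used in the answers below:

// Load a same-origin file into an AudioBuffer via XHR, then play a grain.
var context = new webkitAudioContext();
var request = new XMLHttpRequest();
request.open('GET', 'myfile.mp3', true); // placeholder URL
request.responseType = 'arraybuffer';
request.onload = function() {
  context.decodeAudioData(request.response, function(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    // The buffer methods the question refers to:
    source.noteGrainOn(0, 1.5, 0.5); // play 0.5 s starting 1.5 s into the buffer
  });
};
request.send();

This is exactly what fails for cross-origin resources such as the Soundcloud stream_url above.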

Max*_*ens 8

It seems to be impossible to extract the audio buffer from a MediaElementSourceNode.

See https://groups.google.com/a/chromium.org/forum/?fromgroups#!topic/chromium-html5/HkX1sP8ONKs

Any reply proving me wrong is very welcome!


ebi*_*del 6

This is possible. See the post at http://updates.html5rocks.com/2012/02/HTML5-audio-and-the-Web-Audio-API-are-BFFs. There is also a code snippet and a demo there. There are a few outstanding bugs, but loading an &lt;audio&gt; element into the Web Audio API should work as you need.

// Create an <audio> element dynamically.
var audio = new Audio();
audio.src = 'myfile.mp3';
audio.controls = true;
audio.autoplay = true;
document.body.appendChild(audio);

var context = new webkitAudioContext();
var analyser = context.createAnalyser();

// Wait for window.onload to fire. See crbug.com/112368
window.addEventListener('load', function(e) {
  // Our <audio> element will be the audio source.
  var source = context.createMediaElementSource(audio);
  source.connect(analyser);
  analyser.connect(context.destination);

  // ...call requestAnimationFrame() and render the analyser's output to canvas.
}, false);
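A sketch of that last render step, assuming a &lt;canvas&gt; element on the page (the drawing code is illustrative, not from the original post):

var canvas = document.querySelector('canvas');
var drawCtx = canvas.getContext('2d');
var freqData = new Uint8Array(analyser.frequencyBinCount);

function render() {
  analyser.getByteFrequencyData(freqData);
  drawCtx.clearRect(0, 0, canvas.width, canvas.height);
  // Draw one 1px bar per frequency bin.
  for (var i = 0; i < freqData.length; i++) {
    var h = (freqData[i] / 255) * canvas.height;
    drawCtx.fillRect(i, canvas.height - h, 1, h);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);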

  • Yes, but I want to know whether it is possible to get the **buffer** from the MediaElementSource so it can be used as/in a bufferSourceNode (with noteOn, noteOff, noteGrainOn, etc.)? (4 upvotes)

fra*_*eed 3

Today, in 2020 and beyond, this is possible via an AudioWorklet node.

https://developer.mozilla.org/en-US/docs/Web/API/AudioWorkletProcessor/AudioWorkletProcessor

It runs in the AudioWorkletGlobalScope, so you can pass the raw binary data out via message passing:

// test-processor.js
class RecorderWorkletProcessor extends AudioWorkletProcessor {
  constructor (options) {
    super()
    console.log(options.numberOfInputs)
    console.log(options.processorOptions.someUsefulVariable)
  }
  process(inputs, outputs, parameters) {
    /**
     * Each render quantum is a non-interleaved IEEE 754 32-bit linear PCM
     * Float32Array of 128 samples per channel, with a nominal range
     * of -1.0 to +1.0. The sample rate depends on the AudioContext
     * and is variable.
     */
    const inputChannel = inputs[0][0]; // Float32Array(128), first channel of first input
    // Do not destructure postMessage off the port; it would lose its `this` binding.
    this.port.postMessage(inputChannel); // Float32Array sent as 512 bytes
    return true; // keep the processor alive
  }
}

// Register the processor under the name the AudioWorkletNode below refers to.
registerProcessor('test-processor', RecorderWorkletProcessor)

Main code

const audioContext = new AudioContext()

// Grab the <audio> element whose output we want to capture.
const audio = /** @type {HTMLAudioElement} */ (document.querySelector('audio'));
const audioMediaElement = audioContext.createMediaElementSource(audio);

await audioContext.audioWorklet.addModule('test-processor.js')
const recorder = new AudioWorkletNode(audioContext, 'test-processor', {
  processorOptions: {
    someUsefulVariable: new Map([[1, 'one'], [2, 'two']])
  }
});

/**
 * AudioBuffer objects are designed to hold small audio snippets,
 * typically less than 45 s; for longer sounds a
 * MediaElementAudioSourceNode is more suitable.
 * The buffer contains data in the following format:
 * non-interleaved IEEE 754 32-bit linear PCM (LPCM), that is,
 * a 32-bit floating point buffer with each sample between -1.0 and +1.0.
 * @param {ArrayBufferLike|Float32Array} data
 */
const convertFloatToAudioBuffer = (data) => {
    const sampleRate = audioContext.sampleRate || 8000; // `||` fallback, not bitwise `|`
    const channels = 1;
    const sampleLength = data.length || 128; // one render quantum is 128 samples
    const audioBuffer = audioContext.createBuffer(channels, sampleLength, sampleRate); // empty audio
    // Depending on your processing, `data` may already be a Float32Array.
    audioBuffer.copyToChannel(new Float32Array(data), 0);
    return audioBuffer;
}
let startAt = 0
// Alternative sink: streamDestination.stream gives you a MediaStream.
const streamDestination = audioContext.createMediaStreamDestination();

/**
 * Note this is a minimal example: it schedules each chunk right after the
 * previous one on the main audio context. Connect to streamDestination
 * instead if you want a MediaStream on its .stream property.
 *
 * @param {ArrayBufferLike|Float32Array} data
 */
const play = (data) => {
    const audioBufferSourceNode = audioContext.createBufferSource();
    audioBufferSourceNode.buffer = convertFloatToAudioBuffer(data);

    audioBufferSourceNode.connect(audioContext.destination);

    // A real implementation needs a proper enqueueing algorithm,
    // which is out of scope for this answer.
    startAt = Math.max(audioContext.currentTime, startAt);
    audioBufferSourceNode.start(startAt);
    startAt += audioBufferSourceNode.buffer.duration;
}

// The raw Float32Array from the worklet arrives as ev.data.
recorder.port.onmessage = (ev) => play(ev.data);

// Connect the source to the processor.
audioMediaElement.connect(recorder);
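
One caveat worth adding: autoplay policies leave an AudioContext suspended until a user gesture, so something like the following is typically needed as well (the #start button is hypothetical):

// Resume the context and start the <audio> element on a user gesture.
document.querySelector('#start').addEventListener('click', async () => {
  await audioContext.resume();
  await audio.play();
});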


Notes

This is just the bare minimum that console.logs the data coming from the recorder processor, and we only take one channel. When you actually want to process that data, you should consider registering a handler in a worker thread and posting the data directly to it; if you do heavy processing on the main thread, it can become unresponsive.