Qua*_*kle 17 javascript html5 webkit web-audio-api
I've been playing with the Web Audio API, trying to find a way to import an mp3 (so this is Chrome-only) and generate its waveform on a canvas. I can do this in real time, but my goal is to do it faster than real time.
All of the examples I've been able to find involve reading the frequency data off an analyser object, from a function attached to the onaudioprocess event:
processor = context.createJavaScriptNode(2048, 1, 1);
processor.onaudioprocess = processAudio;
...
function processAudio(e) {
  var freqByteData = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(freqByteData);
  // calculate magnitude & render to canvas
}
But it seems that the analyser's frequency data is only populated while the sound is actually playing (i.e., while the buffer is being filled).

What I want is to be able to step through the file manually/programmatically, as fast as possible, to generate the canvas image.

What I have so far is this:
$("#files").on('change',function(e){
var FileList = e.target.files,
Reader = new FileReader();
var File = FileList[0];
Reader.onload = (function(theFile){
return function(e){
context.decodeAudioData(e.target.result,function(buffer){
source.buffer = buffer;
source.connect(analyser);
analyser.connect(jsNode);
var freqData = new Uint8Array(buffer.getChannelData(0));
console.dir(analyser);
console.dir(jsNode);
jsNode.connect(context.destination);
//source.noteOn(0);
});
};
})(File);
Reader.readAsArrayBuffer(File);
});
But getChannelData() always returns an empty typed array.

Any insight would be appreciated - even if it turns out it can't be done. I think I'm the only person on the Internet who doesn't want to do stuff in real time.

Thanks.
ebi*_*del 25
The Web Audio API has a really amazing "offline" mode that lets you pre-process an entire file through an audio context and then do something with the result:
// Note: the constructor takes (numberOfChannels, length, sampleRate);
// here: stereo, with the same length and sample rate as the decoded buffer.
var context = new webkitOfflineAudioContext(2, buffer.length, buffer.sampleRate);
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.noteOn(0);

context.oncomplete = function(e) {
  var audioBuffer = e.renderedBuffer;
};

context.startRendering();
So the setup looks exactly the same as the real-time processing mode, except for the oncomplete callback and the call to startRendering(). The e.renderedBuffer you get back is an AudioBuffer.
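Since that rendered result is just an AudioBuffer, the canvas waveform the question asks about is then a plain loop over its sample data. A minimal sketch of that step (the drawWaveform helper and the "waveform" canvas id are my own illustration, not part of the answer above):

// Hypothetical helper: draw min/max peaks of a rendered AudioBuffer onto a
// canvas (assumes a <canvas id="waveform"> element exists on the page).
function drawWaveform(audioBuffer) {
  var canvas = document.getElementById('waveform');
  var ctx2d = canvas.getContext('2d');
  var data = audioBuffer.getChannelData(0); // Float32Array of samples in [-1, 1]
  var samplesPerPixel = Math.ceil(data.length / canvas.width);
  var amp = canvas.height / 2;

  ctx2d.clearRect(0, 0, canvas.width, canvas.height);
  for (var x = 0; x < canvas.width; x++) {
    // Track the min/max sample within this pixel column's slice of the buffer.
    var min = 1.0, max = -1.0;
    for (var i = 0; i < samplesPerPixel; i++) {
      var sample = data[x * samplesPerPixel + i] || 0;
      if (sample < min) min = sample;
      if (sample > max) max = sample;
    }
    // Draw one vertical bar per pixel column, spanning min..max.
    ctx2d.fillRect(x, (1 - max) * amp, 1, Math.max(1, (max - min) * amp));
  }
}

// Wired into the oncomplete handler above:
// context.oncomplete = function(e) { drawWaveform(e.renderedBuffer); };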
I got this working with OfflineAudioContext using the code below. The full example here shows how to use it to compute the FFT magnitudes of a linear chirp. Once you grasp the concept of wiring nodes together, you can do just about anything offline.
function fsin(freq, phase, t) {
  return Math.sin(2 * Math.PI * freq * t + phase);
}

function linearChirp(startFreq, endFreq, duration, sampleRate) {
  if (duration === undefined) {
    duration = 1; // seconds
  }
  if (sampleRate === undefined) {
    sampleRate = 44100; // per second
  }
  var numSamples = Math.floor(duration * sampleRate);
  var chirp = new Array(numSamples);
  var df = (endFreq - startFreq) / numSamples;
  for (var i = 0; i < numSamples; i++) {
    chirp[i] = fsin(startFreq + df * i, 0, i / sampleRate);
  }
  return chirp;
}

function AnalyzeWithFFT() {
  var numChannels = 1;    // mono
  var duration = 1;       // seconds
  var sampleRate = 44100; // Any value in [22050, 96000] is allowed
  var chirp = linearChirp(10000, 20000, duration, sampleRate);
  var numSamples = chirp.length;

  // Now we create the offline context to render this with.
  var ctx = new OfflineAudioContext(numChannels, numSamples, sampleRate);

  // Our example wires up an analyser node in between source and destination.
  // You may or may not want to do that, but if you can follow how things are
  // connected, it will at least give you an idea of what is possible.
  //
  // This is what computes the spectrum (FFT) information for us.
  var analyser = ctx.createAnalyser();

  // There are abundant examples of how to get audio from a URL or the
  // microphone. This one shows you how to create it programmatically (we'll
  // use the chirp array above).
  var source = ctx.createBufferSource();
  var chirpBuffer = ctx.createBuffer(numChannels, numSamples, sampleRate);
  var data = chirpBuffer.getChannelData(0); // first and only channel
  for (var i = 0; i < numSamples; i++) {
    data[i] = chirp[i]; // AudioBuffer samples are floats in [-1, 1]
  }
  source.buffer = chirpBuffer;

  // Now we wire things up: source (data) -> analyser -> offline destination.
  source.connect(analyser);
  analyser.connect(ctx.destination);

  // When the audio buffer has been processed, this will be called.
  ctx.oncomplete = function(event) {
    console.log("audio processed");
    // To get the spectrum data (e.g., if you want to plot it), you use this.
    var frequencyBins = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(frequencyBins);
    console.log(frequencyBins);
    // You can also get the result of any filtering or any other stage here:
    console.log(event.renderedBuffer);
  };

  // Everything is now wired up - start the source so that it produces a
  // signal, and tell the context to start rendering.
  //
  // oncomplete above will be called when it is done.
  source.start();
  ctx.startRendering();
}
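As a side note, in current browsers OfflineAudioContext is unprefixed and startRendering() also returns a promise that resolves with the rendered AudioBuffer, so the oncomplete handler can be dropped. A sketch of the equivalent modern call:

// Modern equivalent: startRendering() returns a promise, so no oncomplete.
var offline = new OfflineAudioContext(1, 44100, 44100); // 1 channel, 1 s at 44.1 kHz
// ...create and connect source/analyser nodes as above, then:
offline.startRendering().then(function(renderedBuffer) {
  console.log('rendered ' + renderedBuffer.length + ' samples');
});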