I'm trying to merge two buffers into one. I've been able to create two buffers from audio files, then load and play them. Now I need to merge the two buffers into a single buffer. How can they be merged?
context = new webkitAudioContext();
bufferLoader = new BufferLoader(
  context,
  [
    'audio1.mp3',
    'audio2.mp3',
  ],
  finishedLoading
);
bufferLoader.load();

function finishedLoading(bufferList) {
  // Create the two buffer sources and play them both together.
  var source1 = context.createBufferSource();
  var source2 = context.createBufferSource();
  source1.buffer = bufferList[0];
  source2.buffer = bufferList[1];
  source1.connect(context.destination);
  source2.connect(context.destination);
  source1.start(0);
  source2.start(0);
}
Right now the sources are loaded separately and play at the same time; but how can I merge these two sources into a single buffer source? I don't want to append them, I want to overlay/mix them.
An explanation and/or a snippet would be great.
In audio terms, to mix two audio streams (here, buffers) into one, you simply add their sample values together. We can build that on top of your snippet:
/* `buffers` is a JavaScript array containing all the AudioBuffers you want
 * to mix. */
function mix(buffers) {
  /* Find the maximum duration and maximum number of channels across all the
   * buffers, so we can allocate an AudioBuffer of the right size. */
  var maxChannels = 0;
  var maxDuration = 0;
  for (var i = 0; i < buffers.length; i++) {
    if (buffers[i].numberOfChannels > maxChannels) {
      maxChannels = buffers[i].numberOfChannels;
    }
    if (buffers[i].duration > maxDuration) {
      maxDuration = buffers[i].duration;
    }
  }

  var mixed = context.createBuffer(maxChannels,
                                   context.sampleRate * maxDuration,
                                   context.sampleRate);

  for (var j = 0; j < buffers.length; j++) {
    for (var srcChannel = 0; srcChannel < buffers[j].numberOfChannels; srcChannel++) {
      /* Get the channel we will mix into. */
      var outChannel = mixed.getChannelData(srcChannel);
      /* Get the channel we want to mix in. */
      var inChannel = buffers[j].getChannelData(srcChannel);
      /* Mixing is just summing the sample values. */
      for (var s = 0; s < inChannel.length; s++) {
        outChannel[s] += inChannel[s];
      }
    }
  }
  return mixed;
}
Then simply assign the buffer returned by this function to a new AudioBufferSourceNode's buffer property and play it as usual.
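For example, a minimal usage sketch (assuming the context and the bufferList from the snippets above are in scope):

var mixedSource = context.createBufferSource();
mixedSource.buffer = mix(bufferList);      // the merged AudioBuffer
mixedSource.connect(context.destination);
mixedSource.start(0);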
A couple of notes. For simplicity, my snippet assumes that all the buffers share the context's sample rate (the output length is computed as context.sampleRate * maxDuration). If you want to adjust the relative volume of a buffer before mixing, multiply that buffer's sample values by a number smaller than 1.0 to make it quieter, or greater than 1.0 to make it louder. Then again, the Web Audio API can do all of this for you (with GainNodes), so I wonder why you need to do it yourself, but at least now you know how :-).
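For reference, a minimal sketch of that GainNode approach, reusing source1 from the question's snippet (the 0.5 factor is just an illustrative value):

var gain1 = context.createGain();   // createGainNode() in older WebKit builds
gain1.gain.value = 0.5;             // play audio1 at half volume
source1.connect(gain1);
gain1.connect(context.destination);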