I'm playing with WebSockets, the media APIs, browsers and Node.js. My goal is to let two users talk to each other in real time.
Desired logic:
client-1_mic -> client-1_browser -> nodejs server -> client-2_browser -> client-2_speaker
I know WebRTC is the usual way to do this, but I'd like to implement it over WebSockets.
Note: I'm using Firefox v56/57, Node.js (v8), NPM and an Ubuntu server.
My current solution already communicates over WebSockets (rooms exist as well), so basic text chat works.
The server-side part I use for communication (server.js):
socket.on('audio', function (data) {
    if (socket.username in userTargets && userTargets[socket.username] in usersInRoom) {
        // notify the targeted user
        io.sockets.connected[usersInRoom[userTargets[socket.username]].socket].emit('updatechat', socket.username, 'New audio arrived');
        // notify the sender itself
        io.sockets.connected[socket.id].emit('updatechat', socket.username, 'To ' + userTargets[socket.username] + '> here is some new audio');
        // pass the audio on to the target
        io.sockets.connected[usersInRoom[userTargets[socket.username]].socket].emit('audio', socket.username, data);
    } else {
        io.sockets.in(socket.room).emit('updatechat', socket.username, 'Some audio arrived');
        // broadcast the audio to the whole room @todo later add restriction against the sender
        io.sockets.in(socket.room).emit('audio', socket.username, data);
    }
});
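For context, the handler above relies on two lookup tables maintained elsewhere in my code; simplified, their shape (as the usage above implies) is roughly:

// filled on join and on picking a chat partner
var usersInRoom = {}; // username -> { socket: <socket.id>, ... }
var userTargets = {}; // sender username -> chosen target username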
My client-side part, audio capture (client.js):
media = mediaOptions.audio;
navigator.mediaDevices.getUserMedia(media.gUM).then(_stream => {
    stream = _stream;
    recorder = new MediaRecorder(stream);
    chunks = [];
    recorder.ondataavailable = e => {
        chunks.push(e.data);
        socket.emit('audio', {audioBlob: e.data}); // sending only the audio blob
    };
    // a timeslice (ms) makes ondataavailable fire periodically instead of only on stop
    recorder.start(500);
    log('got media successfully');
}).catch(log);
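For completeness, the mediaOptions object is defined elsewhere; the part used above boils down to something like:

var mediaOptions = {
    audio: {
        gUM: {audio: true, video: false} // constraints passed to getUserMedia
    }
};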
And the audio receiving part (also client.js):
socket.on('audio', function (senderUser, data) {
    // Let's try to play the audio
    try {
        // audioBlob: ArrayBuffer { byteLength: 13542 }
        // audioBlob: <Buffer 4f 67 67 53 00 00 80 3f 04 00 00 00 00 00 b8 51 43 3c 0a 00 00 00 e3 d9 48 13 20 80 81 80 81 80 81 80 81 80 81 7e 7f 78 78 7c 82 89 86 8a 81 80 81 80 ...
        audioBufferContainer.push(data.audioBlob);
        console.log('Blob chunk added!');
        /*
        // missing: time/duration handling & empty-space filler for the audio
        // missing: blob to ArrayBuffer conversion (decodeAudioData needs an ArrayBuffer)
        audioCtx.decodeAudioData(data.audioBlob, function(myAudioBuffer) {
            audioBufferSource.buffer = myAudioBuffer;
            audioBufferSource.connect(audioCtx.destination); // was audioCtx.audioDestination, which does not exist
            console.log('Audio buffer passed to the buffer source');
            audioBufferSource.start(); // AudioBufferSourceNode has start(), not play()
        });
        */
    }
    catch(e) {
        console.error('Some error during replaying: ' + e.message);
    }
});
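To make the commented-out idea above concrete, here is a sketch of what I have in mind: convert each Blob to an ArrayBuffer with a FileReader (Blob.arrayBuffer() is not available in Firefox 56/57), decode it, and schedule it at a running playhead so the chunks play back-to-back. One caveat I expect: with MediaRecorder only the first chunk carries the container header, so later chunks may not decode on their own; the sketch assumes each received blob is independently decodable (e.g. a complete file, per the last question below). Names and numbers are just illustrative.

var audioCtx = new AudioContext();
var playhead = 0; // next free point on the context's clock

function playChunk(audioBlob) {
    var reader = new FileReader();
    reader.onload = function () {
        audioCtx.decodeAudioData(reader.result, function (decoded) {
            var source = audioCtx.createBufferSource(); // source nodes are single-use
            source.buffer = decoded;
            source.connect(audioCtx.destination);
            // schedule right after whatever is already queued
            var startAt = Math.max(playhead, audioCtx.currentTime);
            source.start(startAt);
            playhead = startAt + decoded.duration;
        }, function (err) {
            console.error('decodeAudioData failed: ' + err);
        });
    };
    reader.readAsArrayBuffer(audioBlob);
}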
If I simply collect the sent audio blobs (from beginning to end) and concatenate them, I get a playable audio file, i.e. the transfer itself is correct, with no data loss or corruption.
My questions:
- How should I handle the gathered/received audio chunks (blobs)? Should I try to convert them into an audio ArrayBuffer, pass it to the AudioContext, set the time/offset and play it (and maybe fill the gaps with empty sound), as in the sketch above?
- Or is there any other lib/API that I can use for playing back a "stream" like this?
- Or should I just add to the client an <audio src="ws://<host>:<port>"> type of tag, and the browser will then handle the playback?
- Or is it better to add a timer to the sender side and always send a complete, say half-second or second-long audio file (I mean one that has the header, the metadata and the closing as well), repeatedly spamming them like different tracks, and on the client side just stack them into a source queue and play them like an album? (a sketch of this follows below)
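To make that last option concrete, here is a rough sketch of what I imagine the sender side would look like; restarting the recorder every interval should make each blob a complete, self-contained file (header and all), at the cost of a small seam between segments (the interval and helper name are just illustrative):

var SEGMENT_MS = 500; // half-second segments, as described above

function startSegmentLoop(stream) {
    var recorder = new MediaRecorder(stream);
    recorder.ondataavailable = function (e) {
        // each blob should now be a complete file (header, data and closing)
        socket.emit('audio', {audioBlob: e.data});
    };
    recorder.onstop = function () {
        startSegmentLoop(stream); // immediately start the next segment
    };
    recorder.start();
    setTimeout(function () { recorder.stop(); }, SEGMENT_MS);
}

On the receiving side each such blob decodes on its own, so something like the playChunk() sketch above could queue them back-to-back.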
Note: I know there is a library called binaryJS, but it is unmaintained (~5 years) and its examples don't work.
Can anyone give me some advice, or a hint about what I'm not getting or where I went wrong?
P.S.: I don't have an extensive JS background, only other languages, so I'm not familiar with async constructs or advanced Node.js solutions either.