Tags: html5, video-streaming, html5-video, media-source
Update:
So I was able to get this working by using the timestampOffset property (incrementing it after appending each video).
My questions now are: 1) Why wasn't this handled properly when setting SourceBuffer.mode to 'sequence'?
2) Why does my MediaSource.duration always come out as "Infinity" rather than the correct duration?
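For reference, this is roughly what the working append loop looks like with the timestampOffset approach. It is just a sketch: it reuses downloadData, videoSources, sourceBuffer and video from the full code below, leaves the SourceBuffer in its default 'segments' mode, and assumes each file's internal timestamps start at 0.

// Sketch: before appending the next file, move timestampOffset to the end of
// what is already buffered, so the new data lands right after the previous
// file instead of overwriting it.
sourceBuffer.addEventListener('updateend', function() {
    if (videoSources.length === 0) {
        mediaSource.endOfStream();
        video.play();
        return;
    }
    var buffered = sourceBuffer.buffered;
    sourceBuffer.timestampOffset = buffered.end(buffered.length - 1);
    downloadData(videoSources.pop(), function(arrayBuffer) {
        sourceBuffer.appendBuffer(arrayBuffer);
    });
});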
I am trying to use the MediaSource API to append separate video files and play them back seamlessly, as if they were a single video.
I transcoded my videos to the proper format according to the spec (MPEG-DASH), and they play fine when played individually.
However, when I try to append several of them, I run into problems: segments overwrite one another, the duration is wrong, and so on, even though everything appears to execute as expected.
I have already tried using timestampOffset, but according to the docs, setting SourceBuffer.mode to 'sequence' should take care of this automatically. Also, for some reason, MediaSource.duration always seems to be "Infinity", even after appending just one segment.
Here is my code:
<script>
// Download a media segment as an ArrayBuffer and hand it to the callback.
function downloadData(url, cb) {
    console.log("Downloading " + url);
    var xhr = new XMLHttpRequest();
    xhr.open('get', url);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        cb(new Uint8Array(xhr.response));
    };
    xhr.send();
}

if (MediaSource.isTypeSupported('video/mp4; codecs="avc1.64001E"')) {
    console.log("mp4 codec supported");
}

var videoSources = [
    "{% static 'mp4/ff_97.mp4' %}",
    "{% static 'mp4/ff_98.mp4' %}",
    "{% static 'mp4/ff_99.mp4' %}",
    "{% static 'mp4/ff_118.mp4' %}"
];

var mediaSource = new MediaSource();

mediaSource.addEventListener('sourceopen', function(e) {
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
    sourceBuffer.mode = 'sequence';
    console.log('SourceBuffer mode set to ' + sourceBuffer.mode);

    sourceBuffer.addEventListener('updateend', function(e) {
        console.log('Finished updating buffer');
        console.log('New duration is ' + String(mediaSource.duration));

        // All segments appended: signal end of stream and start playback.
        if (videoSources.length == 0) {
            mediaSource.endOfStream();
            video.currentTime = 0;
            video.play();
            return;
        }

        // Otherwise download and append the next segment.
        downloadData(videoSources.pop(), function(arrayBuffer) {
            console.log('Finished downloading buffer of size ' + String(arrayBuffer.length));
            console.log('Updating buffer');
            sourceBuffer.appendBuffer(arrayBuffer);
        });

        console.log('New duration: ' + String(mediaSource.duration));
    });

    // Kick off the chain with the first segment.
    downloadData(videoSources.pop(), function(arrayBuffer) {
        console.log('Finished downloading buffer of size ' + String(arrayBuffer.length));
        console.log('Updating buffer');
        sourceBuffer.appendBuffer(arrayBuffer);
    });
}, false);

var video = document.querySelector('video');
video.src = window.URL.createObjectURL(mediaSource);
</script>
Here is the console log:
mp4 codec supported
(index):78 SourceBuffer mode set to sequence
(index):45 Downloading /static/mp4/ff_118.mp4
(index):103 Finished downloading buffer of size 89107
(index):104 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_99.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 46651
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_98.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 79242
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
(index):45 Downloading /static/mp4/ff_97.mp4
(index):98 New duration: Infinity
(index):92 Finished downloading buffer of size 380070
(index):93 Updating buffer
(index):81 Finished updating buffer
(index):82 New duration is Infinity
2) Why is my MediaSource.duration always "Infinity" and not the correct duration?
You need to call MediaSource.endOfStream() for the MediaSource object to calculate its duration from the contents of its SourceBuffer. I see that you are doing this, but it looks like you are trying to access MediaSource.duration before endOfStream() has been called. I suggest reading the end of stream algorithm in the MSE spec; you will notice that it ends up invoking the duration change algorithm.
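For instance (a minimal sketch, assuming the rest of your setup stays the same), you can observe the duration becoming finite by listening for the sourceended event, which fires once endOfStream() has run:

// The duration change algorithm has run by the time 'sourceended' fires,
// so duration now reflects the end of the buffered data instead of Infinity.
mediaSource.addEventListener('sourceended', function() {
    console.log('Final duration: ' + mediaSource.duration);
});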
If you want the <video> element to report a duration before you call MediaSource.endOfStream(), you can set MediaSource.duration yourself, based on your own estimate of the appended segments.
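Something along these lines (only a sketch; SEGMENT_DURATION is a made-up placeholder for whatever per-file estimate you have, e.g. from a manifest):

// Bump the advertised duration after each successful append. The duration
// setter only works while readyState is 'open' and no SourceBuffer is
// updating, and it must not be set below the end of the buffered data,
// hence the guards.
var appendedCount = 0;
var SEGMENT_DURATION = 5; // assumed length of each file, in seconds
sourceBuffer.addEventListener('updateend', function() {
    appendedCount++;
    if (mediaSource.readyState === 'open' && !sourceBuffer.updating) {
        var buffered = sourceBuffer.buffered;
        var bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;
        mediaSource.duration = Math.max(appendedCount * SEGMENT_DURATION, bufferedEnd);
    }
});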
1) Why wasn't this handled properly when setting SourceBuffer.mode to 'sequence'?
As far as I know, it should be. But I personally prefer the explicit timestampOffset approach, since it gives more flexibility when you want to append segments further ahead in the buffer (that is, if the user seeks past the end of the current buffer, you will want to start loading and appending after the gap). Although I am guessing that seeking is not required in your use case.
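For example (again only a sketch; fetchSegmentForTime is a hypothetical helper that maps a playback time to the right file, it is not part of your code):

// If the user seeks past the end of the buffer, place the next appended
// segment at the seek target instead of right after the existing data.
video.addEventListener('seeking', function() {
    var buffered = sourceBuffer.buffered;
    var bufferedEnd = buffered.length ? buffered.end(buffered.length - 1) : 0;
    if (video.currentTime > bufferedEnd && !sourceBuffer.updating) {
        sourceBuffer.timestampOffset = video.currentTime;
        fetchSegmentForTime(video.currentTime, function(arrayBuffer) {
            sourceBuffer.appendBuffer(arrayBuffer);
        });
    }
});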