I'm trying to troubleshoot the poor acceptance of custom HTML build reports, which take a very long time to load in Google Chrome, while load times in Firefox are much better.

IMO there is nothing "special" about the HTML files.

The reports are usually loaded with an anchor suffix, so the browser should jump to the end of the file (= the "summary section") when it loads.
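For illustration, a minimal sketch of that pattern (the anchor name summary is just assumed here, it is not taken from the actual reports):

    <!-- report is opened as e.g. report.html#summary,
         so the browser should scroll straight to this element
         at the very end of a long file -->
    <h2 id="summary">Summary</h2>
    <!-- ... summary tables ... -->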
I've put a few example files in a GitHub repo => browser bongo test
Turns out, you can have TOO LITTLE JAVASCRIPT in your HTML :-/
If you take a closer look at the Chrome profiler tool, you realize the "initial rendering" of any page is really quick, often less than 100 ms, no matter whether the requested page is a "big" or "small" HTML / plaintext file.
After the initial rendering, Chromium seems to prefer receiving small chunks of data, performing an additional rendering after each and every chunk/part of the full content it receives - and that's what causes Chromium-based browsers to be MUCH slower in processing large amounts of data.
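(As a rough sketch of how to check this yourself: Chrome exposes the paint timings via the Performance API, so you can see when that quick initial rendering actually happened.)

    // Sketch: log the first-paint timings of the current page.
    // Run in the DevTools console after the page has (partially) loaded.
    performance.getEntriesByType("paint").forEach(function (entry) {
        console.log(entry.name + ": " + Math.round(entry.startTime) + " ms");
    });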
You can easily bypass this weird "performance flaw" by rubbing a little JavaScript on it: simply create a wrapper page which loads the actual content via an XMLHttpRequest and updates the DOM only once. 1 initial rendering + 1 rendering after the content is loaded and set into the DOM = 2 renderings, instead of 100,000-ish.
By using the following code, I've been able to get the load time of a 20 MB plaintext file down from ~280 seconds to approx. 4 seconds in the current version of Google Chrome.
    <body>
        <div id="file-content">loading, please wait</div>
        <script type="text/javascript">
            // Fetch the file via XMLHttpRequest and hand the response text to the
            // callback (null on error), so the DOM is only touched once the full
            // content is available.
            function delayLoad(path, callback) {
                var xhr = new XMLHttpRequest();
                xhr.onreadystatechange = function () {
                    if (xhr.readyState == 4) {
                        if (xhr.status == 200) {
                            callback(xhr.responseText);
                        } else {
                            callback(null);
                        }
                    }
                };
                xhr.open("GET", path);
                xhr.send();
            }

            // Single DOM update: write the whole file content (or an error
            // message) in one go.
            function setFileContent(fileData) {
                var element = document.getElementById('file-content');
                if (!fileData) {
                    element.innerHTML = "error loading data";
                    return;
                }
                element.innerHTML = fileData;
            }

            delayLoad("bongo_files/bongo_20M.txt", setFileContent);
        </script>
    </body>
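(Just as a side note, and only a sketch rather than something I measured: the same single-update idea can be written with the newer fetch API instead of XMLHttpRequest, using the same file path as above.)

    <script type="text/javascript">
        // Sketch: same approach with fetch() - still only one DOM update
        // once the complete file content has arrived.
        function delayLoadFetch(path) {
            fetch(path)
                .then(function (response) {
                    if (!response.ok) {
                        throw new Error("HTTP " + response.status);
                    }
                    return response.text();
                })
                .then(function (fileData) {
                    document.getElementById("file-content").innerHTML = fileData;
                })
                .catch(function () {
                    document.getElementById("file-content").innerHTML = "error loading data";
                });
        }

        delayLoadFetch("bongo_files/bongo_20M.txt");
    </script>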