Tags: javascript, canvas, konvajs, offscreen-canvas
Hi everyone. I have a fairly complex canvas editor that lets users pick a video background and add text, GIFs, and Lottie animations, using the Konva.js and Gifler libraries. It has come a long way, but now I'm trying to speed up the app's performance. I've read a lot about offscreen canvases, but I don't quite understand them. Say I have a regular HTML canvas object — how would I create an offscreen canvas from it and get the result back into the browser? Ideally I'd like to draw images from an array onto the canvas at 30 fps without lag. I'm also concerned that, according to caniuse.com, OffscreenCanvas doesn't seem to be widely supported yet. Whenever I try to create an offscreen canvas from my canvas, I get:
Failed to execute 'transferControlToOffscreen' on
'HTMLCanvasElement': Cannot transfer control from a canvas that has a rendering context.
As I said, I'm just trying to figure out how to render my animations smoothly and don't know how to go about it. Any help here would be great. Here's the code.
<template>
<div>
<button @click="render">Render</button>
<h2>Backgrounds</h2>
<template v-for="background in backgrounds">
<img
:src="background.poster"
class="backgrounds"
@click="changeBackground(background.video)"
/>
</template>
<h2>Images</h2>
<template v-for="image in images">
<img
:src="image.source"
@click="addImage(image)"
class="images"
/>
</template>
<br />
<button @click="addText">Add Text</button>
<button v-if="selectedNode" @click="removeNode">
Remove selected {{ selectedNode.type }}
</button>
<label>Font:</label>
<select v-model="selectedFont">
<option value="Arial">Arial</option>
<option value="Courier New">Courier New</option>
<option value="Times New Roman">Times New Roman</option>
<option value="Desoto">Desoto</option>
<option value="Kalam">Kalam</option>
</select>
<label>Font Size</label>
<input type="number" v-model="selectedFontSize" />
<label>Font Style:</label>
<select v-model="selectedFontStyle">
<option value="normal">Normal</option>
<option value="bold">Bold</option>
<option value="italic">Italic</option>
</select>
<label>Color:</label>
<input type="color" v-model="selectedColor" />
<button
v-if="selectedNode && selectedNode.type === 'text'"
@click="updateText"
>
Update Text
</button>
<template v-if="selectedNode && selectedNode.lottie">
<input type="text" v-model="text">
<button @click="updateAnim(selectedNode.image)">
Update Animation
</button>
</template>
<br />
<video
id="preview"
v-show="preview"
:src="preview"
:width="width"
:height="height"
preload="auto"
controls
/>
<a v-if="file" :href="file" download="dopeness.mp4">download</a>
<div id="container"></div>
</div>
</template>
<script>
import lottie from "lottie-web";
import * as anim from "../AEAnim/anim.json";
import * as anim2 from "../AEAnim/anim2.json";
import * as anim3 from "../AEAnim/anim3.json";
import * as anim4 from "../AEAnim/anim4.json";
import * as anim5 from "../AEAnim/anim5.json";
export default {
data() {
return {
source: null,
stage: null,
layer: null,
video: null,
animations: [],
text: "",
animationData: null,
captures: [],
backgrounds: [
{
poster: "/api/files/stock/3oref310k1uud86w/poster/poster.jpg",
video:
"/api/files/stock/3oref310k1uud86w/main/1080/3oref310k1uud86w_1080.mp4"
},
{
poster: "/api/files/stock/3yj2e30tk5x6x0ww/poster/poster.jpg",
video:
"/api/files/stock/3yj2e30tk5x6x0ww/main/1080/3yj2e30tk5x6x0ww_1080.mp4"
},
{
poster: "/api/files/stock/2ez931ik1mggd6j/poster/poster.jpg",
video:
"/api/files/stock/2ez931ik1mggd6j/main/1080/2ez931ik1mggd6j_1080.mp4"
},
{
poster: "/api/files/stock/yxrt4ej4jvimyk15/poster/poster.jpg",
video:
"/api/files/stock/yxrt4ej4jvimyk15/main/1080/yxrt4ej4jvimyk15_1080.mp4"
},
{
poster:
"https://images.costco-static.com/ImageDelivery/imageService?profileId=12026540&itemId=100424771-847&recipeName=680",
video: "/api/files/jedi/surfing.mp4"
},
{
poster:
"https://thedefensepost.com/wp-content/uploads/2018/04/us-soldiers-afghanistan-4308413-1170x610.jpg",
video: "/api/files/jedi/soldiers.mp4"
}
],
images: [
{ source: "/api/files/jedi/solo.jpg" },
{ source: "api/files/jedi/yoda.jpg" },
{ source: "api/files/jedi/yodaChristmas.jpg" },
{ source: "api/files/jedi/darthMaul.jpg" },
{ source: "api/files/jedi/darthMaul1.jpg" },
{ source: "api/files/jedi/trump.jpg" },
{ source: "api/files/jedi/hat.png" },
{ source: "api/files/jedi/trump.png" },
{ source: "api/files/jedi/bernie.png" },
{ source: "api/files/jedi/skywalker.png" },
{ source: "api/files/jedi/vader.gif" },
{ source: "api/files/jedi/vader2.gif" },
{ source: "api/files/jedi/yoda.gif" },
{ source: "api/files/jedi/kylo.gif" },
{
source: "https://media3.giphy.com/media/R3IxJW14a3QNa/source.gif",
animation: anim
},
{
source: "https://bestanimations.com/Text/Cool/cool-story-3.gif",
animation: anim2
},
{
source: "https://freefrontend.com/assets/img/css-text-animations/HTML-CSS-Animated-Text-Fill.gif",
animation: anim3
},
{
source: "api/files/jedi/zoomer.gif",
animation: anim4
},
{
source: "api/files/jedi/slider.gif",
animation: anim5
}
],
backgroundVideo: null,
imageGroups: [],
anim: null,
selectedNode: null,
selectedFont: "Arial",
selectedColor: "black",
selectedFontSize: 20,
selectedFontStyle: "normal",
width: 1920,
height: 1080,
texts: [],
preview: null,
file: null,
canvas: null
};
},
mounted: function() {
this.initCanvas();
},
methods: {
changeBackground(source) {
this.source = source;
this.video.src = this.source;
this.anim.stop();
this.anim.start();
this.video.play();
},
removeNode() {
if (this.selectedNode && this.selectedNode.type === "text") {
this.selectedNode.transformer.destroy(
this.selectedNode.text.transformer
);
this.selectedNode.text.destroy(this.selectedNode.text);
this.texts.splice(this.selectedNode.text.index - 1, 1);
this.selectedNode = null;
this.layer.draw();
} else if (this.selectedNode && this.selectedNode.type == "image") {
this.selectedNode.group.destroy(this.selectedNode);
this.imageGroups.splice(this.selectedNode.group.index - 1, 1);
if (this.selectedNode.lottie) {
clearTimeout(this.animations.interval);
this.selectedNode.lottie.destroy();
this.animations.splice(this.selectedNode.lottie.index - 1, 1);
}
this.selectedNode = null;
this.layer.draw();
}
},
async addImage(imageToAdd, isUpdate) {
let lottieAnimation = null;
let imageObj = null;
const type = imageToAdd.source.slice(imageToAdd.source.lastIndexOf("."));
const vm = this;
function process(img) {
return new Promise((resolve, reject) => {
img.onload = () => resolve({ width: img.width, height: img.height });
});
}
imageObj = new Image();
imageObj.src = imageToAdd.source;
imageObj.width = 200;
imageObj.height = 200;
await process(imageObj);
if (type === ".gif" && !imageToAdd.animation) {
const canvas = document.createElement("canvas");
canvas.setAttribute("id", "gif");
async function onDrawFrame(ctx, frame) {
ctx.drawImage(frame.buffer, frame.x, frame.y);
// redraw the layer
vm.layer.draw();
}
gifler(imageToAdd.source).frames(canvas, onDrawFrame);
canvas.onload = async () => {
canvas.parentNode.removeChild(canvas);
};
imageObj = canvas;
const gif = new Image();
gif.src = imageToAdd.source;
const gifImage = await process(gif);
imageObj.width = gifImage.width;
imageObj.height = gifImage.height;
} else if (imageToAdd.animation) {
if (!isUpdate) { this.text = "new text"; }
const canvas = document.createElement("canvas");
canvas.width = 1920;
canvas.height = 1080;
canvas.setAttribute("id", "animationCanvas");
const ctx = canvas.getContext("2d");
const div = document.createElement("div");
div.setAttribute("id", "animationContainer");
div.style.display = "none";
canvas.style.display = "none";
this.animationData = imageToAdd.animation.default;
for (let i = 0; i < this.animationData.layers.length; i++) {
for (let b = 0; b < this.animationData.layers[i].t.d.k.length; b++) {
this.animationData.layers[i].t.d.k[b].s.t = this.text;
}
}
lottieAnimation = lottie.loadAnimation({
container: div, // the dom element that will contain the animation
renderer: "svg",
loop: true,
autoplay: true,
animationData: this.animationData
});
lottieAnimation.imgSrc = imageToAdd.source;
lottieAnimation.text = this.text;
const svg = div.getElementsByTagName("svg")[0];
const timer = setInterval(async () => {
const xml = new XMLSerializer().serializeToString(svg);
const svg64 = window.btoa(xml);
const b64Start = "data:image/svg+xml;base64,";
const image64 = b64Start + svg64;
imageObj = new Image({ width: canvas.width, height: canvas.height });
imageObj.src = image64;
await process(imageObj);
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(imageObj, 0, 0, canvas.width, canvas.height);
this.layer.batchDraw();
}, 1000 / 30);
this.animations.push({ lottie: lottieAnimation, interval: timer });
imageObj = canvas;
canvas.onload = async () => {
canvas.parentNode.removeChild(canvas);
};
}
const image = new Konva.Image({
x: 50,
y: 50,
image: imageObj,
width: imageObj.width,
height: imageObj.height,
strokeWidth: 10,
stroke: "blue",
strokeEnabled: false
});
const group = new Konva.Group({
draggable: true
});
// add the shape to the layer
addAnchor(group, 0, 0, "topLeft");
addAnchor(group, imageObj.width, 0, "topRight");
addAnchor(group, imageObj.width, imageObj.height, "bottomRight");
addAnchor(group, 0, imageObj.height, "bottomLeft");
imageObj = null;
image.on("click", function () {
vm.hideAllHelpers();
vm.selectedNode = {
type: "image",
group,
lottie: lottieAnimation,
image: imageToAdd
};
if(lottieAnimation && lottieAnimation.text){vm.text = lottieAnimation.text}
group.find("Circle").show();
vm.layer.draw();
});
image.on("mouseover", function(evt) {
if (vm.selectedNode && vm.selectedNode.type === "image") {
const index = image.getParent().index;
const groupId = vm.selectedNode.group.index;
if (index != groupId) {
evt.target.strokeEnabled(true);
vm.layer.draw();
}
} else {
evt.target.strokeEnabled(true);
vm.layer.draw();
}
});
image.on("mouseout", function(evt) {
evt.target.strokeEnabled(false);
vm.layer.draw();
});
vm.hideAllHelpers();
group.find("Circle").show();
group.add(image);
vm.layer.add(group);
vm.imageGroups.push(group);
vm.selectedNode = {
type: "image",
group,
lottie: lottieAnimation,
image: imageToAdd
};
function update(activeAnchor) {
const group = activeAnchor.getParent();
let topLeft = group.get(".topLeft")[0];
let topRight = group.get(".topRight")[0];
let bottomRight = group.get(".bottomRight")[0];
let bottomLeft = group.get(".bottomLeft")[0];
let image = group.get("Image")[0];
let anchorX = activeAnchor.getX();
let anchorY = activeAnchor.getY();
// update anchor positions
switch (activeAnchor.getName()) {
case "topLeft":
topRight.y(anchorY);
bottomLeft.x(anchorX);
break;
case "topRight":
topLeft.y(anchorY);
bottomRight.x(anchorX);
break;
case "bottomRight":
bottomLeft.y(anchorY);
topRight.x(anchorX);
break;
case "bottomLeft":
bottomRight.y(anchorY);
topLeft.x(anchorX);
break;
}
image.position(topLeft.position());
let width = topRight.getX() - topLeft.getX();
let height = bottomLeft.getY() - topLeft.getY();
if (width && height) {
image.width(width);
image.height(height);
}
}
function addAnchor(group, x, y, name) {
let stage = vm.stage;
let layer = vm.layer;
let anchor = new Konva.Circle({
x: x,
y: y,
stroke: "#666",
fill: "#ddd",
strokeWidth: 2,
radius: 8,
name: name,
draggable: true,
dragOnTop: false
});
anchor.on("dragmove", function() {
update(this);
layer.draw();
});
anchor.on("mousedown touchstart", function() {
group.draggable(false);
this.moveToTop();
});
anchor.on("dragend", function() {
group.draggable(true);
layer.draw();
});
// add hover styling
anchor.on("mouseover", function() {
let layer = this.getLayer();
document.body.style.cursor = "pointer";
this.strokeWidth(4);
layer.draw();
});
anchor.on("mouseout", function() {
let layer = this.getLayer();
document.body.style.cursor = "default";
this.strokeWidth(2);
layer.draw();
});
group.add(anchor);
}
},
async updateAnim(image){
this.addImage(image, true);
this.removeNode();
},
hideAllHelpers() {
for (let i = 0; i < this.texts.length; i++) {
this.texts[i].transformer.hide();
}
for (let b = 0; b < this.imageGroups.length; b++) {
this.imageGroups[b].find("Circle").hide();
}
},
async startRecording(duration) {
const chunks = []; // here we will store our recorded media chunks (Blobs)
const stream = this.canvas.captureStream(30); // grab our canvas MediaStream
const rec = new MediaRecorder(stream, {
videoBitsPerSecond: 20000 * 1000
});
// every time the recorder has new data, we will store it in our array
rec.ondataavailable = e => chunks.push(e.data);
// only when the recorder stops, we construct a complete Blob from all the chunks
rec.onstop = async e => {
this.anim.stop();
const blob = new Blob(chunks, {
type: "video/webm"
});
this.preview = await URL.createObjectURL(blob);
const video = window.document.getElementById("preview");
const previewVideo = new Konva.Image({
image: video,
draggable: false,
width: this.width,
height: this.height
});
this.layer.add(previewVideo);
console.log("video", video);
video.addEventListener("ended", () => {
console.log("preview ended");
if (!this.file) {
const vid = new Whammy.fromImageArray(this.captures, 30);
this.file = URL.createObjectURL(vid);
}
previewVideo.destroy();
this.anim.stop();
this.anim.start();
this.video.play();
});
let seekResolve;
video.addEventListener("seeked", async () => {
if (seekResolve) seekResolve();
});
video.addEventListener("loadeddata", async () => {
let interval = 1 / 30;
let currentTime = 0;
while (currentTime <= duration && !this.file) {
video.currentTime = currentTime;
await new Promise(r => (seekResolve = r));
this.layer.draw();
let base64ImageData = this.canvas.toDataURL("image/webp");
this.captures.push(base64ImageData);
currentTime += interval;
video.currentTime = currentTime;
}
this.layer.draw();
});
};
rec.start();
setTimeout(() => rec.stop(), duration);
},
async render() {
this.captures = [];
this.preview = null;
this.file = null;
this.hideAllHelpers();
this.selectedNode = null;
this.video.currentTime = 0;
this.video.loop = false;
const duration = this.video.duration * 1000;
this.startRecording(duration);
this.layer.draw();
},
updateText() {
if (this.selectedNode && this.selectedNode.type === "text") {
const text = this.selectedNode.text;
const transformer = this.selectedNode.transformer;
text.fontSize(this.selectedFontSize);
text.fontFamily(this.selectedFont);
text.fontStyle(this.selectedFontStyle);
text.fill(this.selectedColor);
this.layer.draw();
}
},
addText() {
const vm = this;
const text = new Konva.Text({
text: "new text " + (vm.texts.length + 1),
x: 50,
y: 80,
fontSize: this.selectedFontSize,
fontFamily: this.selectedFont,
fontStyle: this.selectedFontStyle,
fill: this.selectedColor,
align: "center",
width: this.width * 0.5,
draggable: true
});
const transformer = new Konva.Transformer({
node: text,
keepRatio: true,
enabledAnchors: ["top-left", "top-right", "bottom-left", "bottom-right"]
});
text.on("click", async () => {
for (let i = 0; i < this.texts.length; i++) {
let item = this.texts[i];
if (item.index === text.index) {
let transformer = item.transformer;
this.selectedNode = { type: "text", text, transformer };
this.selectedFontSize = text.fontSize();
this.selectedFont = text.fontFamily();
this.selectedFontStyle = text.fontStyle();
this.selectedColor = text.fill();
vm.hideAllHelpers();
transformer.show();
transformer.moveToTop();
text.moveToTop();
vm.layer.draw();
break;
}
}
});
text.on("mouseover", () => {
transformer.show();
this.layer.draw();
});
text.on("mouseout", () => {
if (
(this.selectedNode &&
this.selectedNode.text &&
this.selectedNode.text.index != text.index) ||
(this.selecte
Just as you cannot request context "B" after having obtained context "A", you also cannot transfer control of a DOM canvas to an OffscreenCanvas after a rendering context has been requested from it.
Here you are using the Konva.js library (which I don't know particularly well) to initialize your DOM canvas. That library needs access to one of the canvas's available contexts (apparently the "2d" one). This means that by the time you can access that canvas, the library has already requested a context, and you won't be able to transfer its control to an OffscreenCanvas.
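The constraint can be shown with a minimal sketch using the plain DOM API (nothing Konva-specific; both function names are made up for illustration):

```javascript
// The order of the two calls is the whole story.

function wrongOrder(canvas) {
  canvas.getContext("2d");                    // context is now bound to this element
  return canvas.transferControlToOffscreen(); // throws InvalidStateError:
                                              // "Cannot transfer control from a canvas
                                              //  that has a rendering context."
}

function rightOrder(canvas) {
  const offscreen = canvas.transferControlToOffscreen(); // must happen first
  return offscreen.getContext("2d");          // draw through the OffscreenCanvas instead
}
```

Since Konva calls getContext() internally during stage setup, the `wrongOrder` path is effectively what happens whenever you grab Konva's canvas afterward.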
There is an issue about this on the library's repo, which notes that, no later than 12 days ago, they added some preliminary support for OffscreenCanvas. So I invite you to look at the examples there on how to proceed with that library.
By itself, an OffscreenCanvas offers no performance gain over a regular canvas. It won't magically make code that runs at 10 FPS run at 60 FPS.
What it allows is to not block the main thread, and to not be blocked by it. For that, you need to transfer it to a Web Worker.
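A rough sketch of that hand-off (the worker file name "render.worker.js" is a placeholder, not something from your project):

```javascript
// Main thread: transfer the canvas once, then the worker owns all drawing.
function startWorkerRendering(canvas) {
  const offscreen = canvas.transferControlToOffscreen();
  const worker = new Worker("render.worker.js");
  // OffscreenCanvas is Transferable: it goes in the transfer list,
  // so ownership moves instead of being copied.
  worker.postMessage({ canvas: offscreen }, [offscreen]);
  return worker;
}

// render.worker.js (worker side), as comments so this file stays self-contained:
// self.onmessage = (e) => {
//   const canvas = e.data.canvas;
//   const ctx = canvas.getContext("2d");
//   function frame() {
//     ctx.clearRect(0, 0, canvas.width, canvas.height);
//     // ...draw your scene...
//     requestAnimationFrame(frame); // rAF is available inside workers for this
//   }
//   requestAnimationFrame(frame);
// };
```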
This means you could use it that way; but in your case, it seems only your own code is running, so you probably wouldn't win anything by going this route.
So we see that to really take advantage of an OffscreenCanvas, we need to run it in a parallel thread, from a Web Worker. But Web Workers have no access to the DOM. That is a huge limitation which makes a lot of things harder to deal with.
For example, to draw a video, you currently have no way around playing it first through a <video> element. The Worker script cannot access that <video> element, nor can it create one on its own thread. So the only solution is to create an ImageBitmap from the main thread and pass it back to your Worker script. All the hard work (video decoding + bitmap generation) is done on the main thread. It's worth noting that even though createImageBitmap() returns a Promise, when we use a video as the source the browser has no choice but to create the bitmap from the video synchronously.
So when fetching ImageBitmaps for your worker to consume, you are actually overloading the main thread, and if the main thread is locked doing something else, you obviously have to wait for it to finish before you can get your frame from the video.
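A sketch of that main-thread hand-off (here `frameWorker` is a hypothetical Worker assumed to be set up elsewhere):

```javascript
// Main thread: decode the current video frame into an ImageBitmap and
// transfer it to the worker. The decoding cost stays on the main thread.
async function pushVideoFrame(video, frameWorker) {
  // createImageBitmap returns a Promise, but with a <video> source the
  // browser still has to grab the frame synchronously on this thread.
  const bitmap = await createImageBitmap(video);
  frameWorker.postMessage({ bitmap }, [bitmap]); // transfer, don't clone
}
```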
Another big limitation is that, for now*, Web Workers cannot react to DOM events. So you have to set up a proxy that forwards the events received on the main thread to your Worker thread. Once again, this requires your main thread to be free, and it means a fair amount of new code.
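Such a proxy might look like this sketch — note that an Event object itself can't be cloned across threads, so only serializable fields are forwarded:

```javascript
// Main thread: forward pointer events to the worker as plain data.
function proxyEvents(canvas, worker) {
  for (const type of ["pointerdown", "pointermove", "pointerup"]) {
    canvas.addEventListener(type, (e) => {
      // Only plain, structured-cloneable values cross the thread boundary.
      worker.postMessage({ kind: "event", type, x: e.offsetX, y: e.offsetY });
    });
  }
}
```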
Because, yes, I believe your own code is where you should be looking if you want performance improvements.
I only had a quick glance, but I can already see that you are using setInterval at high rates in several places. Don't. If you need to animate something visible, always use requestAnimationFrame; if you don't need full speed, add internal logic to skip frames, but keep rAF as the main engine.
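That frame-skipping pattern can be sketched like this (a generic throttled loop, not tied to any of your specific draw calls):

```javascript
// Drive everything from requestAnimationFrame; throttle by timestamp
// instead of stacking setInterval timers.
function makeLoop(draw, fps = 30) {
  const frameTime = 1000 / fps;
  let last = 0;
  function tick(now) {
    if (now - last >= frameTime) {
      last = now;
      draw(now); // only do the heavy work when enough time has passed
    }
    requestAnimationFrame(tick); // but keep rAF as the engine every frame
  }
  return () => requestAnimationFrame(tick); // call the returned fn to start
}
```

Used as `makeLoop(() => layer.batchDraw(), 30)()`, this would replace the `setInterval(..., 1000 / 30)` in your Lottie rasterization code.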
You are asking the browser to perform heavy operations every frame. For example, your SVG part creates a brand-new SVG markup string from the DOM node every single frame, then loads that markup into an <img> (which means the browser has to spin up an entirely new DOM for that image), and then rasterizes it onto the canvas.
That alone is hard to sustain at high frame rates, and an OffscreenCanvas won't help with it.
You are also storing all your frames as still images in order to build the final video. That will eat a lot of memory.
There are probably many other things like this in your code, so review it thoroughly and hunt for whatever keeps it from reaching the screen's refresh rate. Improve what can be improved, look for alternatives (e.g. MediaRecorder is probably better than Whammy where it's supported), and good luck.
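Since your code already captures a MediaStream for the preview, the same API can produce the final file directly, skipping the still-image array entirely; a sketch:

```javascript
// Record a canvas straight to a webm Blob with MediaRecorder,
// instead of collecting dataURL frames for Whammy.
function recordCanvas(canvas, durationMs) {
  return new Promise((resolve) => {
    const chunks = [];
    const rec = new MediaRecorder(canvas.captureStream(30), {
      mimeType: "video/webm"
    });
    rec.ondataavailable = (e) => chunks.push(e.data);
    rec.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
    rec.start();
    setTimeout(() => rec.stop(), durationMs);
  });
}

// Usage (browser): recordCanvas(myCanvas, 5000).then(blob => URL.createObjectURL(blob))
```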
*There is an ongoing proposal to address this.