Tags: javascript, face-recognition, reactjs, face-api, react-hooks
I have implemented Face-API in my React project; it detects a single face from an image using detectSingleFace.
Now I want to go a step further. I would like face-api to automatically crop the face after detection, so that I can store it on a server, in state, or in local storage. Is there any way to do this?
Here is a screenshot example of what I want to achieve: the picture on one side, and the automatically cropped face (the part I want to implement) on the other.

Here is the link to my live code on CodeSandbox.
Below is my face-api code module.
PhotoFaceDetection.js
import React, { useState, useEffect, useRef } from "react";
import * as faceapi from "face-api.js";
import Img from "./assets/mFace.jpg";
import "./styles.css";

const PhotoFaceDetection = () => {
  const [initializing, setInitializing] = useState(false);
  const [image, setImage] = useState(Img);
  const canvasRef = useRef();
  const imageRef = useRef();
  // I want to store the cropped image in this state
  const [pic, setPic] = useState();

  useEffect(() => {
    const loadModels = async () => {
      setInitializing(true);
      Promise.all([
        // models are served from the public/models directory
        faceapi.nets.tinyFaceDetector.load("/models"),
        faceapi.nets.faceLandmark68Net.load("/models"),
        faceapi.nets.faceRecognitionNet.load("/models"),
        faceapi.nets.faceExpressionNet.load("/models")
      ])
        .then(() => console.log("success", "/models"))
        .then(handleImageClick)
        .catch((e) => console.error(e));
    };
    loadModels();
  }, []);

  const handleImageClick = async () => {
    if (initializing) {
      setInitializing(false);
    }
    canvasRef.current.innerHTML = faceapi.createCanvasFromMedia(
      imageRef.current
    );
    const displaySize = {
      width: 500,
      height: 350
    };
    faceapi.matchDimensions(canvasRef.current, displaySize);
    const detections = await faceapi.detectSingleFace(
      imageRef.current,
      new faceapi.TinyFaceDetectorOptions()
    );
    const resizeDetections = faceapi.resizeResults(detections, displaySize);
    canvasRef.current
      .getContext("2d")
      .clearRect(0, 0, displaySize.width, displaySize.height);
    faceapi.draw.drawDetections(canvasRef.current, resizeDetections);
    console.log(
      `Width ${detections.box.width} and Height ${detections.box.height}`
    );
    setPic(detections);
    console.log(detections);
  };

  return (
    <div className="App">
      <span>{initializing ? "Initializing" : "Ready"}</span>
      <div className="display-flex justify-content-center">
        <img ref={imageRef} src={image} alt="face" crossOrigin="anonymous" />
        <canvas ref={canvasRef} className="position-absolute" />
      </div>
    </div>
  );
};

export default PhotoFaceDetection;
After a lot of research and development, I figured it out. For future readers who run into the same problem, here is the guide. I created another function that takes the original image reference and the bounding-box dimensions, i.e. width and height. After that, I used the faceapi extractFaces method to extract the face, and then, with the help of the toDataURL method, converted it to a base64 string that can be rendered into any image src or stored anywhere. This is the function I just described:
async function extractFaceFromBox(imageRef, box) {
  const regionsToExtract = [
    new faceapi.Rect(box.x, box.y, box.width, box.height)
  ];
  // extractFaces returns an array of canvases, one per extracted region
  const faceImages = await faceapi.extractFaces(imageRef, regionsToExtract);

  if (faceImages.length === 0) {
    console.log("No face found");
  } else {
    faceImages.forEach((cnv) => {
      // convert the cropped canvas to a base64 data URL and keep it in state
      setPic(cnv.toDataURL());
    });
    console.log("Face found");
    // note: logging `pic` right here would still show the previous value,
    // because React state updates are asynchronous
  }
}
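Since the question also asks about storing the cropped face on a server: the base64 data URL from toDataURL can be converted to a Blob and uploaded with FormData. This is a sketch of my own; the dataURLToBlob helper and the /api/upload endpoint are assumptions, not part of the original code:

```javascript
// Convert a base64 data URL (as returned by canvas.toDataURL) into a Blob,
// which is more efficient to upload than a raw base64 string.
function dataURLToBlob(dataURL) {
  const [header, base64] = dataURL.split(",");
  const mime = header.match(/data:(.*?);base64/)[1];
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}

// Hypothetical usage with the `pic` state from the component:
// const formData = new FormData();
// formData.append("face", dataURLToBlob(pic), "face.png");
// await fetch("/api/upload", { method: "POST", body: formData });
```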
Run Code Online (Sandbox Code Playgroud)
Then I call the function above from my main function, where I used the faceapi tiny face detector model:
extractFaceFromBox(imageRef.current, detections.box);
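One caveat: the crop produced from detections.box is tight around the face. If you want some margin, a small helper (my own addition, not part of face-api.js) can expand the box by a fraction of its size and clamp it to the image bounds before passing it to extractFaces:

```javascript
// Expand a detection box by `margin` (a fraction of the box size) and clamp
// it to the image dimensions, so extractFaces never reads outside the image.
function padBox(box, imgWidth, imgHeight, margin = 0.2) {
  const dx = box.width * margin;
  const dy = box.height * margin;
  const x = Math.max(0, box.x - dx);
  const y = Math.max(0, box.y - dy);
  const width = Math.min(imgWidth - x, box.width + 2 * dx);
  const height = Math.min(imgHeight - y, box.height + 2 * dy);
  return { x, y, width, height };
}

// Hypothetical usage before extraction:
// extractFaceFromBox(imageRef.current, padBox(detections.box, 500, 350));
```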
You can also visit the live code here to check the complete implementation.