Ian*_*han 6 opencv flutter firebase-mlkit
I am currently developing an app that needs real-time face detection. I have the mlkit library in the app and I am using the Firebase face detector. At the moment, every time I try to detect a face from a file, it produces this error:
DynamiteModule(13840): Local module descriptor class for com.google.android.gms.vision.dynamite.face not found.
As for the real-time part, I tried using a RepaintBoundary in Flutter to take a screenshot of the camera widget on (almost) every frame and convert it to binary data for face detection. But for some reason Flutter crashes whenever I try to screenshot the camera widget; it works fine with other widgets.
After running into these two problems and spending a long time trying to solve them, I have been thinking about doing the camera part of the app in native Android/iOS code (I would do it with OpenCV, so that I can get truly real-time detection). Is there a way to implement the camera view in Kotlin and Swift and bring it into a Flutter widget using platform channels? Or is there another, simpler way to achieve this?
Regarding real-time access to the camera image stream: I answered this in another question, How to access Camera Frames in flutter fast. What you want to use is CameraController#startImageStream.
import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';

void main() => runApp(MaterialApp(home: _MyHomePage()));

class _MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<_MyHomePage> {
  dynamic _scanResults;
  CameraController _camera;
  bool _isDetecting = false;
  CameraLensDirection _direction = CameraLensDirection.back;

  @override
  void initState() {
    super.initState();
    _initializeCamera();
  }

  @override
  void dispose() {
    _camera?.dispose();
    super.dispose();
  }

  // Picks the first camera facing the requested direction.
  Future<CameraDescription> _getCamera(CameraLensDirection dir) async {
    return await availableCameras().then(
      (List<CameraDescription> cameras) => cameras.firstWhere(
        (CameraDescription camera) => camera.lensDirection == dir,
      ),
    );
  }

  void _initializeCamera() async {
    _camera = CameraController(
      await _getCamera(_direction),
      defaultTargetPlatform == TargetPlatform.iOS
          ? ResolutionPreset.low
          : ResolutionPreset.medium,
    );
    await _camera.initialize();
    _camera.startImageStream((CameraImage image) async {
      // Drop incoming frames while the previous one is still being processed.
      if (_isDetecting) return;
      _isDetecting = true;
      try {
        // await doOpenCVDetectionHere(image);
      } catch (e) {
        // await handleException(e);
      } finally {
        _isDetecting = false;
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    if (_camera == null || !_camera.value.isInitialized) {
      return Container();
    }
    return AspectRatio(
      aspectRatio: _camera.value.aspectRatio,
      child: CameraPreview(_camera),
    );
  }
}
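Each CameraImage delivered by the stream arrives as separate YUV420 planes rather than one contiguous buffer. Before handing a frame to a native detector (ML Kit, or OpenCV over a platform channel), you typically flatten the planes into a single byte buffer. A minimal sketch of that step, assuming a YUV420-format frame (the function name is just illustrative):

```dart
import 'dart:typed_data';

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';

/// Concatenates the Y, U and V planes of a [CameraImage] into one
/// Uint8List, the layout most native detectors expect for YUV420 input.
Uint8List concatenatePlanes(CameraImage image) {
  final WriteBuffer allBytes = WriteBuffer();
  for (Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  return allBytes.done().buffer.asUint8List();
}
```

You would call this inside the startImageStream callback and pass the resulting bytes (together with the image's width, height and per-plane bytesPerRow, which the detector needs to interpret the buffer) across the platform channel.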