How do I get the current frame (as a Bitmap) of the Android FaceDetector in a Tracker event?

Joh*_*Doe 9 android vision

I have the standard com.google.android.gms.vision.Tracker sample running successfully on my Android device, and now I need to post-process the image to find the iris of the current face that was reported in the Tracker's event methods.

So how do I get a Bitmap of the frame that exactly matches the com.google.android.gms.vision.face.Face I received in the Tracker event? This also means that the resulting bitmap should match the camera resolution, not the screen resolution.
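
For context, here is a minimal sketch of the Tracker callback I mean (using the vision API's Tracker<Face>):

class GraphicFaceTracker extends Tracker<Face> {
    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        // The Face (position, width, height, landmarks) is available here,
        // but the underlying camera frame is not exposed.
    }
}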

A poor alternative would be to call takePicture on my CameraSource every few milliseconds and process that picture separately with a FaceDetector. This works, but the video stream freezes while the picture is being taken, and I get tons of GC_FOR_ALLOC messages from the memory churn of the one-off bitmap FaceDetector.

Bat*_*aGG 7

You have to create your own version of the face detector that wraps the stock google.vision face detector, by extending Detector<Face> and delegating to it. In your MainActivity or FaceTrackerActivity (in the Google tracking sample), create your version of the FaceDetector class like this:

class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Despite its name, getGrayscaleImageData() returns the full NV21
        // buffer of the preview frame, so it can be wrapped in a YuvImage.
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        // Note: JPEG-compressing every frame at quality 100 is expensive;
        // lower the quality or skip frames if performance becomes a problem.
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        // tempBitmap is a Bitmap version of the frame currently captured by
        // your CameraSource in real time, at the camera preview resolution.
        // Process tempBitmap for your own purposes by adding extra code here.

        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
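
If the goal is iris detection, you will usually want to narrow tempBitmap down to the face itself before post-processing. The following is a possible sketch, not part of the original answer: it reorders detect() so the delegate runs first, then crops the bitmap to each reported face box; processIris() is a hypothetical placeholder for your own routine:

@Override
public SparseArray<Face> detect(Frame frame) {
    SparseArray<Face> faces = mDelegate.detect(frame);

    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
            ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] jpeg = out.toByteArray();
    Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);

    for (int i = 0; i < faces.size(); i++) {
        Face face = faces.valueAt(i);
        // Face positions are in frame coordinates, so clamp the box
        // to the bitmap bounds before cropping.
        int x = Math.max((int) face.getPosition().x, 0);
        int y = Math.max((int) face.getPosition().y, 0);
        int w = Math.min((int) face.getWidth(), tempBitmap.getWidth() - x);
        int h = Math.min((int) face.getHeight(), tempBitmap.getHeight() - y);
        if (w > 0 && h > 0) {
            Bitmap faceBitmap = Bitmap.createBitmap(tempBitmap, x, y, w, h);
            processIris(faceBitmap); // hypothetical iris post-processing
        }
    }
    return faces;
}

Because the face coordinates refer to the frame, the crop lines up with the camera preview resolution rather than the screen, which is exactly what the question asks for.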

Then you have to wire your own FaceDetector into the CameraSource by modifying the createCameraSource method like this:

private void createCameraSource() {

    Context context = getApplicationContext();

    // You can use your own settings for your detector
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // This is how you wrap the stock google.vision detector in your own version
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for CameraSource
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
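
The detector only starts receiving frames once the CameraSource is running. A minimal sketch of starting it, modeled on the Google sample (mPreview is the sample's CameraSourcePreview and mGraphicOverlay its GraphicOverlay; adjust the names to your project):

private void startCameraSource() {
    try {
        mPreview.start(mCameraSource, mGraphicOverlay);
    } catch (IOException e) {
        Log.e(TAG, "Unable to start camera source.", e);
        mCameraSource.release();
        mCameraSource = null;
    }
}

With setRequestedPreviewSize(640, 480), the bitmap decoded in detect() will have roughly those dimensions (the device picks the closest supported preview size), independent of the screen resolution.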