How to get the current frame (as a Bitmap) for android facedetector in a Tracker event?

I am successfully running the com.google.android.gms.vision.Tracker example on an Android device, and now I need to post-process the image to find the iris of the face that was just reported in the Tracker's event method.

So, how do I get a Bitmap of the frame that exactly matches the com.google.android.gms.vision.face.Face I receive in the Tracker event? This also means the resulting bitmap should match the camera resolution, not the screen resolution.

A poor alternative solution is to call takePicture on my CameraSource every few milliseconds and process each photo separately with a FaceDetector. Although this works, the video stream freezes while a picture is being taken, and I get a lot of GC_FOR_ALLOC messages caused by the memory wasted on the single-bitmap FaceDetector.

You have to create your own version of a face detector that wraps the google.vision face detector. In your MainActivity or FaceTrackerActivity (in the Google tracking sample) class, create your own version of the FaceDetector class as follows:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();

        // Despite its name, getGrayscaleImageData() returns the frame's full
        // NV21 buffer, so compress it to a JPEG and decode that into a Bitmap
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        // tempBitmap is a Bitmap version of the frame currently captured by your
        // CameraSource in real time, so you can process it for your own purposes
        // by adding extra code here

        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
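
Note that the NV21 buffer arrives in sensor orientation, so tempBitmap may come out rotated relative to what is shown on screen. Below is a minimal sketch of a helper that compensates using the frame metadata; rotateBitmap is a hypothetical name, not part of the vision API, and it needs android.graphics.Matrix imported:

// Hypothetical helper: Frame.ROTATION_0..ROTATION_270 are the constants 0..3,
// so multiplying by 90 yields degrees. Call it right after decoding, e.g.:
//     tempBitmap = rotateBitmap(tempBitmap, frame.getMetadata().getRotation());
private static Bitmap rotateBitmap(Bitmap source, int rotationConstant) {
    float degrees = rotationConstant * 90f;
    if (degrees == 0f) {
        return source;
    }
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}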

Then you have to plug your own FaceDetector and the CameraSource together by modifying your createCameraSource method as follows:

private void createCameraSource() {

    Context context = getApplicationContext();

    // You can use your own settings for your detector
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // This is how you wrap the stock google.vision detector inside your MyFaceDetector
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for CameraSource
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
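
If you need the bitmap inside the Tracker callbacks themselves, rather than only inside detect(), one possible approach (a sketch, not part of the original answer) is to let MyFaceDetector publish its most recent bitmap through a getter and hand the detector instance to your tracker. The names mLatestFrame, getLatestFrame and IrisTracker below are illustrative; the sketch assumes you add a field private volatile Bitmap mLatestFrame; to MyFaceDetector, set it in detect() right after decoding (mLatestFrame = tempBitmap;), and expose it via getLatestFrame():

import android.graphics.Bitmap;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;

// Sketch: a Tracker that reads the latest frame bitmap captured by MyFaceDetector
class IrisTracker extends Tracker<Face> {
    private final MyFaceDetector mDetector;

    IrisTracker(MyFaceDetector detector) {
        mDetector = detector;
    }

    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        Bitmap frameBitmap = mDetector.getLatestFrame();  // assumed getter, see above
        if (frameBitmap == null) {
            return;
        }
        // The bitmap is at camera resolution, so face.getPosition() and
        // face.getWidth()/getHeight() map onto it directly; crop the eye
        // region here and run your iris post-processing.
    }
}

You would then return an IrisTracker from your GraphicFaceTrackerFactory's create() method instead of the stock tracker.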