Augmented Faces API – How are facial landmarks generated?

I'm an IT student and I'd like to learn more about the Augmented Faces API in ARCore.

I've just seen the ARCore v1.7 release and the new Augmented Faces API. I can see the huge potential of this API, but I haven't found any questions or articles on this subject, so here are some assumptions and questions about the release.

Assumptions

Questions

So if you have any advice or comments on this subject, please share!

  1. ARCore's new Augmented Faces API, which works on the front-facing camera without a depth sensor, offers a high-quality, 468-point 3D canonical mesh that lets users attach effects such as animated masks, glasses, skin retouching, etc. to their faces. The mesh provides coordinates and region-specific anchors that make it possible to attach these effects (see the sketch just below).
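To make the mesh and region anchors concrete, here's a minimal sketch of reading data off a tracked face. The `inspectFace` helper is my own illustration, but `getCenterPose()`, `getRegionPose()` and `getMeshVertices()` are the actual ARCore `AugmentedFace` calls:

    import com.google.ar.core.AugmentedFace;
    import com.google.ar.core.Pose;
    import java.nio.FloatBuffer;

    // Hypothetical helper: queries pose and mesh data from a tracked face.
    void inspectFace(AugmentedFace face) {
        // Pose of the center of the face mesh
        Pose centerPose = face.getCenterPose();

        // Region-specific poses, handy for attaching assets like glasses or hats
        Pose noseTip       = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
        Pose foreheadLeft  = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
        Pose foreheadRight = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);

        // The 468 vertices of the canonical face mesh, packed as (x, y, z) triplets
        FloatBuffer meshVertices = face.getMeshVertices();
    }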

I firmly believe that the facial landmark detection is generated with the help of computer vision algorithms under the hood of ARCore 1.7. It's also important to note that you can get started in Unity or in Sceneform by creating an ARCore session with the front-facing camera and the Augmented Faces "mesh" mode enabled (the Java code for this appears further below). Note that other AR features, such as plane detection, aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images and other Trackables.

As you know, more than two years ago Google released the Face API, which performs face detection: it locates faces in pictures, along with their position (where they are in the picture) and orientation (which way they're facing, relative to the camera). The Face API lets you detect landmarks (points of interest on a face) and perform classifications to determine whether the eyes are open or closed and whether or not a face is smiling. The Face API also detects and follows faces in moving images, which is known as face tracking.
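For context, here's a minimal sketch of how that Face API (the Mobile Vision `com.google.android.gms.vision.face` package) is typically used; the `detectFaces` wrapper and its parameters are my own illustration:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;

    // Hypothetical wrapper: detects faces in a still image.
    void detectFaces(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(true)                                 // follow faces across frames
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)              // eyes, nose base, mouth, etc.
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)  // smiling / eyes-open probabilities
                .build();

        SparseArray<Face> faces =
                detector.detect(new Frame.Builder().setBitmap(bitmap).build());

        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            float smilingProbability = face.getIsSmilingProbability();  // classification result
            float leftEyeOpen = face.getIsLeftEyeOpenProbability();
        }

        detector.release();  // free native detector resources when done
    }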

So ARCore 1.7 simply borrowed some architectural elements from the Face API, and now it not only detects facial landmarks and generates 468 points for them, but also tracks them in real time at 60 fps and sticks 3D facial geometry onto them.

See Google's Face Detection Concepts Overview.

  2. Calculating a depth channel from video shot with a moving RGB camera is not rocket science. You just need to apply a parallax formula to the tracked features: if the translation amplitude of a feature on a static object is rather high, the tracked object is closer to the camera, and if the amplitude is rather low, the tracked object is farther from the camera. These approaches for calculating a depth channel have been used in compositing applications such as The Foundry NUKE and Blackmagic Fusion for more than 10 years, and the same principles are now accessible in ARCore (see the toy sketch after this list).

  3. You can't repurpose the face detection/tracking algorithm for a custom object or another body part, such as a hand. The Augmented Faces API was developed for faces only.
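As a toy illustration of the parallax principle in point 2 (my own example, not ARCore code): under a pinhole-camera model, depth follows from a feature's apparent displacement by similar triangles.

    // Toy pinhole-camera model of the parallax idea above (not an ARCore API).
    // A feature that shifts by `disparityPixels` between two views whose optical
    // centers are `baselineMeters` apart lies at the returned depth:
    static float depthFromDisparity(float focalLengthPixels,
                                    float baselineMeters,
                                    float disparityPixels) {
        // Bigger apparent motion => the feature is closer to the camera,
        // exactly the intuition described in point 2.
        return focalLengthPixels * baselineMeters / disparityPixels;
    }

For example, with a 1000 px focal length, a 5 cm baseline and a 10 px disparity, the feature sits at 1000 × 0.05 / 10 = 5 m from the camera.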

Here's the Java code for activating the Augmented Faces feature:

// Create an ARCore session that supports Augmented Faces
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {

    // Use the selfie camera
    Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));

    // Enable Augmented Faces
    Config config = session.getConfig();
    config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
    session.configure(config);
    return session;
}

Then get the list of detected faces:

Collection<AugmentedFace> faceList = session.getAllTrackables(AugmentedFace.class);
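In a real frame loop you would usually guard against faces that aren't actively tracked; `TrackingState.TRACKING` is the actual ARCore enum, and the guard below (my own addition) could sit at the top of the render loop that follows:

    // Skip faces whose poses and mesh data aren't currently valid
    if (face.getTrackingState() != TrackingState.TRACKING) {
        continue;
    }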

And finally render the effects:

for (AugmentedFace face : faceList) {

    // Create a face node and add it to the scene.
    AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
    faceNode.setParent(scene);

    // Overlay the 3D assets on the face
    faceNode.setFaceRegionsRenderable(faceRegionsRenderable);

    // Overlay a texture on the face
    faceNode.setFaceMeshTexture(faceMeshTexture);
    
    // .......
}
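One thing the loop above leaves out: when a face stops being tracked, its node should be detached so stale effects don't linger on screen. A minimal sketch, assuming you keep a `Map<AugmentedFace, AugmentedFaceNode>` filled in the render loop (the bookkeeping map is my own assumption, not part of the snippet above):

    // Assumes java.util.Iterator and java.util.Map are imported, and that
    // faceNodeMap maps each detected face to the node created for it above.
    Iterator<Map.Entry<AugmentedFace, AugmentedFaceNode>> iterator =
            faceNodeMap.entrySet().iterator();

    while (iterator.hasNext()) {
        Map.Entry<AugmentedFace, AugmentedFaceNode> entry = iterator.next();
        if (entry.getKey().getTrackingState() == TrackingState.STOPPED) {
            entry.getValue().setParent(null);  // detach the node from the scene
            iterator.remove();                 // drop the stale map entry
        }
    }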