Face Tracking

Face Tracking detects and tracks the user's face. With Zappar's Face Tracking library, you can attach 3D objects to the face itself, or render a 3D mesh that fits to, and deforms with, the face as the user moves and changes their expression. You could build face-filter experiences to allow users to try on different virtual sunglasses, for example, or to simulate face paint.

To place content on or around a user's face, create a new FaceTracker object when your page loads, passing in your pipeline:

let faceTracker = new Zappar.FaceTracker(pipeline);

Model File

The face tracking algorithm requires a model file of data in order to operate - you can call loadDefaultModel() to load the one that's included by default with the library. The function returns a promise that resolves when the model has been loaded successfully, which you may wish to use to show a loading screen to the user while the file is downloaded.

let faceTracker = new Zappar.FaceTracker(pipeline);
faceTracker.loadDefaultModel().then(() => {
    // The model has been loaded successfully
});
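
As a minimal sketch, the promise can also be used to hide a loading overlay while the model file downloads (the "loading" element ID below is illustrative, not part of the library):

// Hypothetical loading overlay element in your page
let loadingElement = document.getElementById("loading");

faceTracker.loadDefaultModel().then(() => {
    // Model ready - hide the loading UI
    loadingElement.style.display = "none";
});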

Face Anchors

Each FaceTracker exposes anchors for faces detected and tracked in the camera view. By default a maximum of one face is tracked at a time; however, you can change this using the maxFaces parameter:

faceTracker.maxFaces = 2;

Note that tracking two or more faces may impact the performance and frame rate of the library. We recommend sticking with the default value of one unless your use case requires tracking multiple faces.

Anchors have the following parameters:

id - a string that's unique for this anchor
visible - a boolean indicating if this anchor is visible in the current camera frame
identity and expression - Float32Arrays containing data used for rendering a face-fitting mesh (see below)
onVisible - an event handler that emits when the anchor becomes visible; this event is emitted during your call to pipeline.frameUpdate()
onNotVisible - an event handler that emits when the anchor disappears from the camera view; this event is emitted during your call to pipeline.frameUpdate()
pose(cameraPose: Float32Array, mirror?: boolean) - returns the pose matrix for this anchor
poseCameraRelative(mirror?: boolean) - returns the pose matrix (relative to the camera) for this anchor, for use with cameraPoseWithOrigin

The transformation returned by the pose(...) function provides a coordinate system that has its origin at the center of the head, with the positive X axis to the right, the positive Y axis towards the top, and the positive Z axis coming forward out of the user's head.
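
For example, to place content such as a hat just above the head, you could offset the anchor pose along its positive Y axis. This is a minimal sketch assuming the gl-matrix library is available, an illustrative offset of 0.1 units, and that anchor and cameraPoseMatrix come from a frame loop like the one shown later:

const { mat4 } = glMatrix;

let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);

// Translate along the anchor's positive Y axis (towards the top of the head)
let hatPoseMatrix = mat4.create();
mat4.translate(hatPoseMatrix, anchorPoseMatrix, [0, 0.1, 0]);

// Render the hat using hatPoseMatrix in place of anchorPoseMatrix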

You can access the anchors of a tracker using its anchors parameter - it's a JavaScript Map keyed with the IDs of the anchors. Trackers will reuse existing non-visible anchors for new faces that appear and thus there are never more than maxFaces anchors handled by a given tracker.
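
Since anchors is a standard JavaScript Map, you can iterate it, or look up a single anchor by ID, like this:

for (let [id, anchor] of faceTracker.anchors) {
    console.log("Anchor", id, "visible:", anchor.visible);
}

// Or fetch a specific anchor by its ID (someAnchorId is a placeholder)
let anchor = faceTracker.anchors.get(someAnchorId);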

Each tracker exposes a JavaScript Set of anchors visible in the current camera frame as its visible parameter. Thus a frame loop for rendering content on faces might look like this:

// Not shown - initialization, camera setup & permissions

let faceTracker = new Zappar.FaceTracker(pipeline);
faceTracker.loadDefaultModel();

function animate() {
    // Ask the browser to call this function again next frame
    requestAnimationFrame(animate);

    // Zappar's library uses this function to prepare camera frames for processing
    // Note this function will change some WebGL state (including the viewport), so you must change it back
    pipeline.processGL();

    // Restore the viewport to cover your canvas
    gl.viewport(0, 0, canvas.width, canvas.height);

    // This function allows us to use the tracking data from the most recently processed camera frame
    pipeline.frameUpdate();

    // Upload the current camera frame to a WebGL texture for us to draw
    pipeline.cameraFrameUploadGL();

    // Draw the camera to the screen - the width and height passed should be those of your canvas
    pipeline.cameraFrameDrawGL(canvas.width, canvas.height);

    let model = pipeline.cameraModel();
    let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, canvas.width, canvas.height);
    let cameraPoseMatrix = pipeline.cameraPoseDefault();

    for (let anchor of faceTracker.visible) {
        let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);

        // Render content using projectionMatrix, cameraPoseMatrix and anchorPoseMatrix
    }
}

// Start things off
animate();

Users typically expect to see a mirrored view of any user-facing camera feed. Please see the Drawing the Camera article for more information.
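
If you do draw the camera feed mirrored, you can pass true as the optional mirror argument of pose(...) (or poseCameraRelative(...)) so that the anchor pose matches the mirrored image, for example:

// Request a pose suitable for rendering over a mirrored camera feed
let anchorPoseMatrix = anchor.pose(cameraPoseMatrix, true);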

Events

In addition to using the anchors and visible parameters, FaceTrackers expose event handlers that you can use to be notified of changes in the anchors or their visibility. The events are emitted during your call to pipeline.frameUpdate().

onNewAnchor - emitted when a new anchor is created by the tracker
onVisible - emitted when an anchor becomes visible in a camera frame
onNotVisible - emitted when an anchor goes from being visible in the previous camera frame to being not visible in the current frame

Here is an example of using these events:

faceTracker.onNewAnchor.bind(anchor => {
    console.log("New anchor has appeared:", anchor.id);
});

faceTracker.onVisible.bind(anchor => {
    console.log("Anchor is visible:", anchor.id);
});

faceTracker.onNotVisible.bind(anchor => {
    console.log("Anchor is not visible:", anchor.id);
});

Face Landmarks

In addition to poses for the center of the head, you can use FaceLandmark to obtain poses for various points on the user's face. These landmarks will remain accurate, even as the user's expression changes.

To get the pose for a landmark, construct a new FaceLandmark object, passing the name of the landmark you'd like to track:

let faceLandmark = new Zappar.FaceLandmark(Zappar.FaceLandmarkName.CHIN);

The following landmarks are available:

[Diagram: face landmark locations shown from the front and side of the face]

Face Landmark - Diagram ID
EYE_LEFT - A
EYE_RIGHT - B
EAR_LEFT - C
EAR_RIGHT - D
NOSE_BRIDGE - E
NOSE_TIP - F
NOSE_BASE - G
LIP_TOP - H
MOUTH_CENTER - I
LIP_BOTTOM - J
CHIN - K
EYEBROW_LEFT - L
EYEBROW_RIGHT - M

Note that 'left' and 'right' here are from the user's perspective.

Each frame, after pipeline.frameUpdate(), call one of the following functions to update the face landmark to be accurately positioned according to the most recent identity and expression output from a face anchor:

// Update directly from a face anchor
faceLandmark.updateFromFaceAnchor(myFaceAnchor);

// Or, update from identity and expression Float32Arrays:
faceLandmark.updateFromIdentityExpression(identity, expression);

Once this is done, the pose of the landmark can be accessed using the pose parameter:

// This is a Float32Array 4x4 matrix
faceLandmark.pose

This pose is relative to the center of the head so, to obtain a modelView matrix to use for drawing your content, make sure to include the pose from your face anchor in addition to that from the landmark:

mat4 modelViewProjection = projectionMatrix * inverseCameraPoseMatrix * anchorPoseMatrix * landmarkPoseMatrix;
gl_Position = modelViewProjection * vertexPosition;
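
The same matrix can equally be built on the CPU and uploaded as a uniform. Here is a minimal sketch using the gl-matrix library; the uniform location name is illustrative:

const { mat4 } = glMatrix;

let inverseCameraPoseMatrix = mat4.invert(mat4.create(), cameraPoseMatrix);

let modelViewProjection = mat4.create();
mat4.multiply(modelViewProjection, projectionMatrix, inverseCameraPoseMatrix);
mat4.multiply(modelViewProjection, modelViewProjection, anchorPoseMatrix);
mat4.multiply(modelViewProjection, modelViewProjection, faceLandmark.pose);

// Upload to your shader program (uniform location name is illustrative)
gl.uniformMatrix4fv(modelViewProjectionLocation, false, modelViewProjection);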

Face Mesh

In addition to getting a pose for the center of the face using FaceTracker, the Zappar library provides a face mesh that will fit to the face and deform as the user's expression changes. This can be used to apply a texture to the user's skin, much like face paint.

To use the face mesh, first construct a new FaceMesh object and load its data file. The loadDefaultFace function returns a promise that resolves when the data file has been loaded successfully, which you may wish to use to show a loading screen to the user while this is taking place.

let faceMesh = new Zappar.FaceMesh();
faceMesh.loadDefaultFace().then(() => {
    // Face mesh loaded
});

Each frame, after pipeline.frameUpdate(), call one of the following functions to update the face mesh to the most recent identity and expression output from a face anchor:

// Update directly from a face anchor
faceMesh.updateFromFaceAnchor(myFaceAnchor);

// Or, update from identity and expression Float32Arrays:
faceMesh.updateFromIdentityExpression(identity, expression);

Once this is done, you can use the following parameters of the FaceMesh object to get mesh data that can be uploaded directly to WebGL buffers and rendered using gl.TRIANGLES with gl.drawElements(...):

// These are Float32Arrays of the raw mesh data
faceMesh.vertices
faceMesh.uvs
faceMesh.normals

// This is a Uint16Array of the vertex indices
faceMesh.indices
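
As a minimal sketch, the data might be uploaded and drawn like this each frame; positionAttributeLocation and uvAttributeLocation are assumed to be attribute locations from your own shader program:

// Upload vertex positions (3 floats per vertex)
let vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, faceMesh.vertices, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(positionAttributeLocation, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionAttributeLocation);

// Upload texture coordinates (2 floats per vertex)
let uvBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
gl.bufferData(gl.ARRAY_BUFFER, faceMesh.uvs, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(uvAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(uvAttributeLocation);

// Upload the indices and draw the mesh as triangles
let indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, faceMesh.indices, gl.DYNAMIC_DRAW);
gl.drawElements(gl.TRIANGLES, faceMesh.indices.length, gl.UNSIGNED_SHORT, 0);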

There are two meshes included with the JavaScript library, detailed below.

Default Mesh: The default mesh covers the user's face, from the chin at the bottom to the forehead, and from the sideburns on each side. There are optional parameters that determine if the mouth and eyes are filled or not:

loadDefaultFace(fillMouth?: boolean, fillEyeLeft?: boolean, fillEyeRight?: boolean)
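
For example, to load the default mesh with the mouth filled and both eyes left open, you might call:

faceMesh.loadDefaultFace(true, false, false).then(() => {
    // Mesh loaded with the mouth filled and the eyes left open
});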

Full Head Simplified Mesh: The full head simplified mesh covers the whole of the user's head, including some neck. It's ideal for drawing into the depth buffer in order to mask out the back of 3D models placed on the user's head. There are optional parameters that determine if the mouth, eyes and neck are filled or not:

loadDefaultFullHeadSimplified(fillMouth?: boolean, fillEyeLeft?: boolean, fillEyeRight?: boolean, fillNeck?: boolean)
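
One common way to use it as a depth mask (a general WebGL sketch, not an API of this library) is to draw the head mesh with color writes disabled so that it only fills the depth buffer, then draw the rest of your 3D content as normal:

// Fill the depth buffer with the head geometry without drawing any color
gl.enable(gl.DEPTH_TEST);
gl.colorMask(false, false, false, false);
// ... draw the full head simplified mesh here with your shader ...

// Re-enable color writes and draw your 3D content;
// anything positioned behind the user's head will now fail the depth test
gl.colorMask(true, true, true, true);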