Face Tracking

Face Tracking detects and tracks the user's face. With Zappar's Face Tracking library you can attach 3D objects to the face itself, or render a 3D mesh that fits to, and deforms with, the face as the user moves and changes their expression. You could build face-filter experiences to allow users to try on different virtual sunglasses, for example, or to simulate face paint.

Before setting up face tracking, you must first add a ZapparCamera to your scene, replacing any existing camera. See the Camera Setup documentation for more details.

To place content on or around a user's face, create a new FaceTracker component:

<ZapparCamera/>
<FaceTracker>
  {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
</FaceTracker>

The FaceTracker group provides a coordinate system that has its origin at the center of the head, with the positive X axis to the right, the positive Y axis towards the top of the head, and the positive Z axis coming forward out of the user's face.
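
For illustration, here's a minimal, self-contained sketch of a face-tracked scene. The sphere, its size and its position along the Y axis are placeholders for your own content; anything placed inside the FaceTracker follows the head using the coordinate system described above:

import { ZapparCanvas, ZapparCamera, FaceTracker } from '@zappar/zappar-react-three-fiber'

export default function App() {
  return (
    <ZapparCanvas>
      <ZapparCamera />
      <FaceTracker>
        {/* An illustrative sphere placed above the origin (positive Y is towards the top of the head) */}
        <mesh position={[0, 0.5, 0]}>
          <sphereGeometry args={[0.25, 32, 32]} />
          <meshStandardMaterial color="hotpink" />
        </mesh>
      </FaceTracker>
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
  );
}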

Note that users typically expect to see a mirrored view of any user-facing camera feed. Please see the Camera Setup documentation to learn more about mirroring the camera.

Events

The FaceTracker component accepts the following event callbacks as props:

Event          Description
onVisible      emitted when the face appears in the camera view
onNotVisible   emitted when the face is no longer visible in the camera view

Here is an example of using these events:

<ZapparCamera />
<FaceTracker
  onNotVisible={(anchor) => console.log(`Not visible ${anchor.id}`)}
  onVisible={(anchor) => console.log(`Visible ${anchor.id}`)}
>
  {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
</FaceTracker>
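
These callbacks can also drive React state. Below is a sketch (the component structure and state name are illustrative, not part of the library) that only renders the tracked content while a face is in view:

import { useState } from 'react'
import { ZapparCanvas, ZapparCamera, FaceTracker } from '@zappar/zappar-react-three-fiber'

export default function App() {
  // Track whether a face is currently in the camera view
  const [faceVisible, setFaceVisible] = useState(false);
  return (
    <ZapparCanvas>
      <ZapparCamera />
      <FaceTracker
        onVisible={() => setFaceVisible(true)}
        onNotVisible={() => setFaceVisible(false)}
      >
        {/* Only render the face content while the face is being tracked */}
        {faceVisible && (
          <mesh>
            <sphereGeometry args={[0.25, 32, 32]} />
            <meshStandardMaterial color="hotpink" />
          </mesh>
        )}
      </FaceTracker>
    </ZapparCanvas>
  );
}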

Face Landmarks

In addition to tracking the center of the head, you can use a FaceLandmark to attach content to specific points on the user's face. These landmarks remain accurate even as the user's expression changes.

To track a landmark, add a FaceLandmark component, passing the name of the landmark you'd like to track as the target prop:

import { ZapparCanvas, ZapparCamera, FaceTracker, FaceLandmark } from '@zappar/zappar-react-three-fiber'

export default function App() {
  return (
    <ZapparCanvas>
      <ZapparCamera  />
      <FaceTracker />
      <FaceLandmark
        target="nose-bridge"
      >
        {/*PLACE CONTENT TO APPEAR ON THE NOSE BRIDGE HERE*/}
      </FaceLandmark>
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
  );
}

The following landmarks are available:

Face Landmark locations from the front and side of the face

Face Landmark    Diagram ID
eye-left         A
eye-right        B
ear-left         C
ear-right        D
nose-bridge      E
nose-tip         F
nose-base        G
lip-top          H
mouth-center     I
lip-bottom       J
chin             K
eyebrow-left     L
eyebrow-right    M

Note that 'left' and 'right' here are from the user's perspective.
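
As an illustration of using the landmark names above, the sketch below (the lens geometry, size and material are placeholders) anchors a simple "lens" over each eye, following the same structure as the earlier landmark example:

<ZapparCamera />
<FaceTracker />
<FaceLandmark target="eye-left">
  {/* Placeholder lens over the user's left eye */}
  <mesh>
    <circleGeometry args={[0.3, 32]} />
    <meshStandardMaterial color="black" />
  </mesh>
</FaceLandmark>
<FaceLandmark target="eye-right">
  {/* Placeholder lens over the user's right eye */}
  <mesh>
    <circleGeometry args={[0.3, 32]} />
    <meshStandardMaterial color="black" />
  </mesh>
</FaceLandmark>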

Face Mesh

In addition to tracking the center of the face using FaceTracker, the Zappar library provides a number of meshes that will fit to the face/head and deform as the user's expression changes. These can be used to apply a texture to the user's skin, much like face paint, or to mask out the back of 3D models so the user's head is not occluded where it shouldn't be.

To use a face mesh, create a mesh component with a FaceBufferGeometry as its geometry, nested inside your FaceTracker component, like this:

import { FaceBufferGeometry, /* ... */ } from '@zappar/zappar-react-three-fiber'
// ...
return (
  <ZapparCanvas>
    <ZapparCamera />
    <FaceTracker>
      <mesh>
        {/* The FaceBufferGeometry deforms to match the face tracked by the enclosing FaceTracker */}
        <FaceBufferGeometry />
      </mesh>
      {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
    </FaceTracker>
  </ZapparCanvas>
);
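
As an example of the face-paint use case, the sketch below gives the mesh a textured material; 'face-paint.png' is a placeholder asset whose layout would need to match the face mesh's UV mapping:

import { useLoader } from '@react-three/fiber'
import { TextureLoader } from 'three'
import { ZapparCanvas, ZapparCamera, FaceTracker, FaceBufferGeometry } from '@zappar/zappar-react-three-fiber'

function FacePaint() {
  // Load the placeholder face-paint texture
  const facePaint = useLoader(TextureLoader, 'face-paint.png')
  return (
    <mesh>
      <FaceBufferGeometry />
      {/* Transparent areas of the texture let the camera image show through */}
      <meshBasicMaterial map={facePaint} transparent />
    </mesh>
  )
}

export default function App() {
  return (
    <ZapparCanvas>
      <ZapparCamera />
      <FaceTracker>
        <FacePaint />
      </FaceTracker>
    </ZapparCanvas>
  )
}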

At this time, there are two face meshes included with the library.

Default mesh: The default mesh covers the user's face, from the chin at the bottom to the forehead, and from the sideburns on each side. There are optional parameters that determine if the mouth and eyes are filled or not:

<mesh>
  <FaceBufferGeometry
    fillEyeLeft
    fillEyeRight
    fillMouth
  />
</mesh>

Full head simplified mesh: The full head simplified mesh covers the whole of the user's head, including a portion of the neck. It's ideal for drawing into the depth buffer in order to mask out the back of 3D models placed on the user's head (see Head Masking below). There are optional parameters that determine if the mouth, eyes and neck are filled or not:

<mesh>
  <FaceBufferGeometry fullHead fillEyeLeft fillEyeRight fillMouth fillNeck />
</mesh>

Head Masking

If you're placing a 3D model around the user's head, such as a helmet, it's important to make sure the camera view of the user's real face is not hidden by the back of the model. To achieve this, the library provides HeadMaskMesh. This is an entity that fits the user's head and fills the depth buffer, ensuring that the camera image shows instead of any 3D elements behind it in the scene.

To use it, add the HeadMaskMesh entity inside your FaceTracker, before any other 3D content:

import { HeadMaskMesh, /* ... */ } from '@zappar/zappar-react-three-fiber'
// ...
<ZapparCamera />
<FaceTracker>
  <HeadMaskMesh  />
  {/*OTHER 3D CONTENT GOES HERE*/}
</FaceTracker>
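
As an illustration, the sketch below combines the mask with a placeholder "helmet" (a simple torus ring around the head; its size is a guess at scale). The mask hides the part of the ring that passes behind the user's head:

import { ZapparCanvas, ZapparCamera, FaceTracker, HeadMaskMesh } from '@zappar/zappar-react-three-fiber'

export default function App() {
  return (
    <ZapparCanvas>
      <ZapparCamera />
      <FaceTracker>
        {/* The mask fills the depth buffer with the shape of the user's head... */}
        <HeadMaskMesh />
        {/* ...so the rear of this placeholder ring is hidden behind the real head */}
        <mesh rotation={[Math.PI / 2, 0, 0]}>
          <torusGeometry args={[1.2, 0.15, 16, 64]} />
          <meshStandardMaterial color="orange" />
        </mesh>
      </FaceTracker>
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
  );
}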