Face Tracking

The Zappar for Unity library enables face tracked experiences on native mobile and WebGL, with no changes to your code required between the build platforms.

Face tracked experiences consist of two main components: a face tracker and a face mesh. The face tracker reports back the pose of the face, while the face mesh reports the information necessary to generate a mesh that tracks the shape and expressions of the face over time. The following snippet demonstrates how to initialize a face tracker and a face mesh:

/* This can be any material you wish */
public Material faceMaterial;
private IntPtr faceTracker;
private IntPtr faceMesh;
private Mesh mesh;
private bool initialised = false;

public bool fillEyeLeft;
public bool fillEyeRight;
public bool fillMouth;
public bool fillNeck;

public void OnZapparInitialised(IntPtr pipeline)
{
    faceTracker = Z.FaceTrackerCreate(pipeline);
    faceMesh = Z.FaceMeshCreate();

    Z.FaceTrackerModelLoadDefault(faceTracker);
    Z.FaceMeshLoadDefaultFullHeadSimplified(faceMesh, fillMouth, fillEyeLeft, fillEyeRight, fillNeck);

    MeshRenderer meshRenderer = gameObject.AddComponent<MeshRenderer>();
    meshRenderer.sharedMaterial = faceMaterial;

    MeshFilter meshFilter = gameObject.AddComponent<MeshFilter>();

    mesh = new Mesh();
    meshFilter.mesh = mesh;

    initialised = true;
}

The vertices, uvs, and triangles of the mesh are updated at runtime to adapt to the current shape and expression of the face.

Face Anchors

Each face tracker exposes anchors for the faces detected and tracked in the camera view. By default a maximum of one face is tracked at a time; you can change this by calling:

int numFacesToTrack = 2;
Z.FaceTrackerMaxFacesSet(faceTracker, numFacesToTrack);

Setting a value of two or higher may reduce the performance and frame rate of the library. We recommend tracking a single face unless your use case requires tracking multiple faces.

You can query the number of face anchors present in a scene by calling:

int numAnchors = Z.FaceTrackerAnchorCount(faceTracker);

The current pose of any of the face anchors can also be retrieved:

int faceAnchorIndex = 0;
Matrix4x4 cameraPose = ZapparCamera.Instance.Pose();
Matrix4x4 facePose = Z.FaceTrackerAnchorPose(faceTracker, faceAnchorIndex, cameraPose, m_isMirrored);
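As an illustrative sketch (not a prescribed usage), the anchor count and pose queries can be combined in a per-frame update to position one GameObject per tracked face. Here `faceObjects` is a hypothetical array you would populate yourself, and `m_isMirrored` is the same mirroring flag used above:

```csharp
// Hypothetical per-frame update: iterate the tracked anchors and apply
// each pose to a corresponding GameObject's transform.
public GameObject[] faceObjects;

void Update()
{
    Matrix4x4 cameraPose = ZapparCamera.Instance.Pose();
    int numAnchors = Z.FaceTrackerAnchorCount(faceTracker);

    for (int i = 0; i < numAnchors && i < faceObjects.Length; ++i)
    {
        Matrix4x4 facePose = Z.FaceTrackerAnchorPose(faceTracker, i, cameraPose, m_isMirrored);

        // Decompose the pose matrix into Unity transform components
        faceObjects[i].transform.position = facePose.GetColumn(3);
        faceObjects[i].transform.rotation = facePose.rotation;
    }
}
```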

Face Mesh

In addition to tracking the center of the face using a face tracker, the Zappar library provides a face mesh that will fit to the face and deform as the user's expression changes. This can be used to apply a texture to the user's skin, much like face paint.

On each frame you must call the following three functions to update the internal state of the face mesh:

int faceAnchorIndex = 0;
float[] identity = Z.FaceTrackerAnchorIdentityCoefficients(faceTracker, faceAnchorIndex);
float[] expression = Z.FaceTrackerAnchorExpressionCoefficients(faceTracker, faceAnchorIndex);
Z.FaceMeshUpdate(faceMesh, identity, expression, m_isMirrored);

The raw vertices, normals, uvs, and triangles can then be retrieved by calling:

float[] vertices = Z.FaceMeshVertices(faceMesh);
float[] normals = Z.FaceMeshNormals(faceMesh);
float[] uv = Z.FaceMeshUvs(faceMesh);
int[] triangles = Z.FaceMeshIndices(faceMesh);

Due to platform rendering differences you must transform the mesh data before rendering. The Zappar library provides a set of utility functions for this:

Mesh mesh = new Mesh();
mesh.vertices = Z.UpdateMeshVerticesForPlatform(vertices);
mesh.normals = Z.UpdateMeshNormalsForPlatform(normals);
mesh.triangles = Z.UpdateMeshTrianglesForPlatform(triangles);
mesh.uv = Z.UpdateMeshUVsForPlatform(uv);

The raw vertex and normal arrays are arranged as successive triplets of values, each triplet giving the position or normal of a single vertex; the raw uv array is arranged as successive pairs of values, one pair per vertex.
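To make that layout concrete, here is a hypothetical helper (illustration only; the platform utility functions above already perform this conversion for you) that unpacks the flat arrays into Unity vector types:

```csharp
// Illustrative only: vertex i occupies raw[3*i .. 3*i+2] in the vertex and
// normal arrays, and raw[2*i .. 2*i+1] in the uv array.
Vector3[] UnpackTriplets(float[] raw)
{
    var result = new Vector3[raw.Length / 3];
    for (int i = 0; i < result.Length; ++i)
        result[i] = new Vector3(raw[3 * i], raw[3 * i + 1], raw[3 * i + 2]);
    return result;
}

Vector2[] UnpackPairs(float[] raw)
{
    var result = new Vector2[raw.Length / 2];
    for (int i = 0; i < result.Length; ++i)
        result[i] = new Vector2(raw[2 * i], raw[2 * i + 1]);
    return result;
}
```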
