Image Tracking
Image tracking can detect and track a flat image in 3D space. This is great for building content that's augmented onto business cards, posters, magazine pages, etc.
Before setting up image tracking, you must first add a camera to your scene (or replace any existing camera). Find out more in the camera setup documentation.
To track content from a flat image in the camera view, create a new ImageTracker object, passing in your pipeline:

```javascript
let imageTracker = new Zappar.ImageTracker(pipeline);
```
Target File
ImageTrackers use a special 'target file' that's been generated from the source image you'd like to track. You can generate one using the ZapWorks command-line utility like this:

```bash
zapworks train myImage.png
```
For more information on generating target files, check out the ZapWorks CLI documentation.
The resulting file can be loaded into an image tracker object by passing it to the loadTarget(...) function as either a URL or an ArrayBuffer. The function returns a promise that resolves when the target file has been loaded successfully, which you may wish to use to show a 'loading' screen to the user while the file is downloaded.

```javascript
let imageTracker = new Zappar.ImageTracker(pipeline);
imageTracker.loadTarget("myImage.zpt").then(() => {
  // Image target has been loaded
});
```
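Since loadTarget(...) returns a standard promise, you can also attach error handling and drive a loading indicator from it. Here's a minimal sketch; the ui object and its showLoading/hideLoading/showError callbacks are placeholders for your own UI code, not part of the Zappar API:

```javascript
// A sketch of wiring loadTarget's promise to a loading indicator.
// showLoading, hideLoading and showError are hypothetical UI helpers
// you would supply yourself.
function loadWithIndicator(tracker, url, ui) {
  ui.showLoading();
  return tracker.loadTarget(url)
    .then(() => ui.hideLoading())
    .catch(err => {
      // Loading can fail, e.g. if the .zpt file's URL is wrong
      ui.hideLoading();
      ui.showError(err);
    });
}
```

You might call this as `loadWithIndicator(imageTracker, "myImage.zpt", myUi)` in place of the bare loadTarget call above.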
Image Anchors
Each ImageTracker exposes anchors for images detected and tracked in the camera view. At this time, ImageTrackers only track one image in view at a time.
Anchors have the following parameters:

| Parameter | Description |
|---|---|
| id | A string that's unique for this anchor |
| visible | A boolean indicating if this anchor is visible in the current camera frame |
| onVisible and onNotVisible | Event handlers that emit when the anchor becomes visible in, or disappears from, the camera view. These events are emitted during your call to pipeline.frameUpdate() |
| pose(cameraPose: Float32Array, mirror?: boolean) | A function that returns the pose matrix for this anchor |
| poseCameraRelative(mirror?: boolean) | A function that returns the pose matrix (relative to the camera) for this anchor, for use with cameraPoseWithOrigin |
The transformation returned by the pose(...) function provides a coordinate system that has its origin at the center of the image, with the positive X axis to the right, the positive Y axis towards the top, and the positive Z axis coming up out of the plane of the image. The scale of the coordinate system is such that a Y value of +1 corresponds to the top of the image, and a Y value of -1 corresponds to the bottom. The X positions of the left and right edges of the target image therefore depend on the aspect ratio of the image.
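To illustrate, here's a small helper (not part of the Zappar API) that computes the corner positions of a target image in this anchor coordinate system from its aspect ratio (width divided by height):

```javascript
// Illustrative helper, not part of the Zappar API: given the source image's
// aspect ratio (width / height), return the corners of the target in anchor
// space. Y runs from -1 (bottom) to +1 (top), so the X extent equals the
// aspect ratio.
function targetCorners(aspectRatio) {
  const halfWidth = aspectRatio;
  return {
    topLeft:     { x: -halfWidth, y:  1, z: 0 },
    topRight:    { x:  halfWidth, y:  1, z: 0 },
    bottomLeft:  { x: -halfWidth, y: -1, z: 0 },
    bottomRight: { x:  halfWidth, y: -1, z: 0 },
  };
}

// A 3:2 landscape image spans x in [-1.5, 1.5] and y in [-1, 1]:
console.log(targetCorners(3 / 2).topRight); // { x: 1.5, y: 1, z: 0 }
```

This is useful when positioning content at a specific point on the image, such as a button in a corner of a tracked poster.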
You can access the anchors of a tracker using its anchors parameter - it's a JavaScript Map keyed with the IDs of the anchors. Trackers will reuse existing non-visible anchors for new images that appear, so until ImageTracker supports tracking more than one image at a time, there is never more than one anchor managed by each ImageTracker.
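Because anchors is a standard JavaScript Map, you can look up anchors by ID or iterate them with the usual Map methods. Here's a sketch of grabbing the single managed anchor; the firstAnchor helper is illustrative, and a plain Map stands in for a real tracker's anchors:

```javascript
// anchors is a standard JavaScript Map of anchor ID -> anchor. Since an
// ImageTracker currently manages at most one anchor, this helper returns
// that single anchor, or undefined if none has been created yet.
function firstAnchor(anchors) {
  return anchors.values().next().value;
}

// Demonstrated with a plain Map standing in for imageTracker.anchors:
const anchors = new Map([["anchor-0", { id: "anchor-0", visible: true }]]);
console.log(firstAnchor(anchors).id); // "anchor-0"
console.log(firstAnchor(new Map())); // undefined
```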
Each tracker exposes a JavaScript Set of anchors visible in the current camera frame as its visible parameter. Thus a frame loop for rendering content on images might look like this:
```javascript
// Not shown - initialization, pipeline and source setup & permissions
let imageTracker = new Zappar.ImageTracker(pipeline);
imageTracker.loadTarget("myTarget.zpt");

function animate() {
  // Ask the browser to call this function again next frame
  requestAnimationFrame(animate);

  // Zappar's library uses this function to prepare camera frames for processing
  // Note this function will change some WebGL state (including the viewport), so you must change it back
  pipeline.processGL();
  gl.viewport(...);

  // This function allows us to use the tracking data from the most recently processed camera frame
  pipeline.frameUpdate();

  // Upload the current camera frame to a WebGL texture for us to draw
  pipeline.cameraFrameUploadGL();

  // Draw the camera to the screen - width and height here should be those of your canvas
  pipeline.cameraFrameDrawGL(width, height);

  let model = pipeline.cameraModel();
  let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, canvas.width, canvas.height);
  let cameraPoseMatrix = pipeline.cameraPoseDefault();

  for (let anchor of imageTracker.visible) {
    let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);
    // Render content using projectionMatrix, cameraPoseMatrix and anchorPoseMatrix
  }
}

// Start things off
animate();
```
Events
In addition to using the anchors and visible parameters, ImageTrackers expose event handlers that you can use to be notified of changes in the anchors or their visibility. The events are emitted during your call to pipeline.frameUpdate().
| Event | Description |
|---|---|
| onNewAnchor | Emitted when a new anchor is created by the tracker |
| onVisible | Emitted when an anchor becomes visible in a camera frame |
| onNotVisible | Emitted when an anchor goes from being visible in the previous camera frame to being not visible in the current frame |
Here's an example of using these events:

```javascript
imageTracker.onNewAnchor.bind(anchor => {
  console.log("New anchor has appeared:", anchor.id);
});

imageTracker.onVisible.bind(anchor => {
  console.log("Anchor is visible:", anchor.id);
});

imageTracker.onNotVisible.bind(anchor => {
  console.log("Anchor is not visible:", anchor.id);
});
```