Instant World Tracking
Instant World Tracking lets you track 3D content to a point chosen by the user in the room or immediate environment around them. With this tracking type you could build a 3D model viewer that lets users walk around to view the model from different angles, or an experience that places an animated character in their room.
To track content from a point on a surface in front of the user, create a new `InstantWorldTracker`, passing in your pipeline:

```javascript
let instantTracker = new Zappar.InstantWorldTracker(pipeline);
```
Each `InstantWorldTracker` exposes a single anchor through its `anchor` parameter. That anchor has the following parameters of its own:
| Parameter | Description |
|---|---|
| `pose(cameraPose: Float32Array, mirror?: boolean)` | A function that returns the pose matrix for this anchor |
| `poseCameraRelative(mirror?: boolean)` | A function that returns the pose matrix (relative to the camera) for this anchor, for use with `cameraPoseWithOrigin` |
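These pose matrices are 16-element `Float32Array`s in column-major order, as WebGL expects. As a minimal illustration (the helper below is hypothetical, not part of the Zappar API), you can read the anchor's world-space position out of such a matrix:

```javascript
// Hypothetical helper: extract the translation (position) from a
// 4x4 column-major pose matrix, such as the one returned by pose(...).
function positionFromPose(pose) {
  // In column-major order, the translation occupies elements 12, 13 and 14.
  return [pose[12], pose[13], pose[14]];
}

// Example with a plain translation matrix (identity rotation, offset 1, 2, 3):
const pose = new Float32Array([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  1, 2, 3, 1,
]);
console.log(positionFromPose(pose)); // → [1, 2, 3]
```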
To choose the point in the user's environment that the anchor tracks from, use the `setAnchorPoseFromCameraOffset(...)` function, like this:

```javascript
instantTracker.setAnchorPoseFromCameraOffset(0, 0, -5);
```
The parameters passed in to this function correspond to the X, Y and Z coordinates (in camera space) of the point to track. Choosing a position with X and Y coordinates of zero, and a negative Z coordinate, will select a point on a surface directly in front of the center of the screen.
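To make the camera-space convention concrete, here's a self-contained sketch using plain matrix math (no Zappar calls; the helper is illustrative only) that transforms a camera-space offset by a 4x4 camera pose matrix to find the world-space point the anchor would be placed at:

```javascript
// Illustrative helper: transform a camera-space point (x, y, z) by a
// 4x4 column-major pose matrix to get the corresponding world-space point.
function transformPoint(matrix, x, y, z) {
  return [
    matrix[0] * x + matrix[4] * y + matrix[8] * z + matrix[12],
    matrix[1] * x + matrix[5] * y + matrix[9] * z + matrix[13],
    matrix[2] * x + matrix[6] * y + matrix[10] * z + matrix[14],
  ];
}

// With an identity camera pose, the offset (0, 0, -5) is a point
// 5 units straight ahead of the camera (negative Z is forward).
const identity = new Float32Array([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
]);
console.log(transformPoint(identity, 0, 0, -5)); // → [0, 0, -5]
```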
The transformation returned by the `pose(...)` function provides a coordinate system that has its origin at the point that's been set, with the positive Y axis pointing up out of the surface, and the X and Z axes in the plane of the surface. How far the chosen point is from the camera (i.e. how negative the Z coordinate provided to `setAnchorPoseFromCameraOffset` is) determines the scale of the coordinate system exposed by the anchor.
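One back-of-envelope way to think about that scale (an illustrative calculation inferred from the statement above, not a Zappar API call, and with hypothetical numbers): the anchor's coordinate system places the chosen point |Z| units from the camera, so if the real surface happens to be a known distance away, one anchor-space unit corresponds to roughly that distance divided by |Z|:

```javascript
// Rough rule of thumb: if the real surface is realDistanceMeters away and
// you passed offsetZ to setAnchorPoseFromCameraOffset, then one anchor-space
// unit corresponds to approximately realDistanceMeters / |offsetZ| meters.
function metersPerAnchorUnit(realDistanceMeters, offsetZ) {
  return realDistanceMeters / Math.abs(offsetZ);
}

// e.g. a surface about 2 m away, with the anchor placed at Z = -5:
console.log(metersPerAnchorUnit(2, -5)); // → 0.4
```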
A typical application will call `setAnchorPoseFromCameraOffset` each frame until the user confirms their choice of placement by tapping a button, like this:

```javascript
// Not shown - initialization, camera setup & permissions
let instantTracker = new Zappar.InstantWorldTracker(pipeline);
let hasPlaced = false;
myConfirmButton.addEventListener("click", () => { hasPlaced = true });

function animate() {
    // Ask the browser to call this function again next frame
    requestAnimationFrame(animate);

    // Zappar's library uses this function to prepare camera frames for processing
    // Note this function will change some WebGL state (including the viewport), so you must change it back
    pipeline.processGL();
    gl.viewport(...);

    // This function allows us to use the tracking data from the most recently processed camera frame
    pipeline.frameUpdate();

    // Upload the current camera frame to a WebGL texture for us to draw
    pipeline.cameraFrameUploadGL();

    // Draw the camera to the screen - width and height here should be those of your canvas
    pipeline.cameraFrameDrawGL(width, height);

    if (!hasPlaced) instantTracker.setAnchorPoseFromCameraOffset(0, 0, -5);

    let model = pipeline.cameraModel();
    let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, canvas.width, canvas.height);
    let cameraPoseMatrix = pipeline.cameraPoseDefault();
    let anchorPoseMatrix = instantTracker.anchor.pose(cameraPoseMatrix);

    // Render content using projectionMatrix, cameraPoseMatrix and anchorPoseMatrix
}

// Start things off
animate();
```