Coordinate Systems and Poses
The Zappar library models the 3D space of an AR experience using three transformations:
| Transformation | Description |
|---|---|
| Projection matrix | Applies a perspective projection corresponding to a camera with a focal length, as well as near and far clipping planes |
| Camera pose | Applies the position and rotation of the camera relative to a 'world' origin |
| Anchor pose | Applies the position, scale and rotation of a given tracked object (e.g. an image, or face) relative to the world origin |
To render content relative to an anchor, the following transformation (the ModelViewProjection matrix) is computed, typically in the vertex shader:
```glsl
mat4 modelViewProjection = projectionMatrix * inverseCameraPoseMatrix * anchorPoseMatrix;
gl_Position = modelViewProjection * vertexPosition;
```
The following sections show how to compute these various constituent transformation matrices.
Projection Matrix
When rendering an AR experience, it's important that the projection matrix used to render the virtual 3D content matches the parameters (e.g. focal length) of the physical camera whose frames are being processed and displayed. The Zappar library provides a function to get these parameters for the current frame (herein referred to as the camera model) and a function to convert them into a projection matrix you can use during rendering:
```js
let model = pipeline.cameraModel();
let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, renderWidth, renderHeight);
```
Pass the dimensions of your canvas for `renderWidth` and `renderHeight` and, if needed, the `zNear` and `zFar` clipping plane parameters. The resulting `projectionMatrix` is a `Float32Array` containing a 4x4 column-major matrix that you can use directly as a uniform in your vertex shader.

You should call these functions every frame, after your `pipeline.frameUpdate()` call, since the Zappar library may change the camera model over time as it learns more about the physical camera.
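As a minimal sketch of that per-frame flow (assuming an existing `pipeline` and a `canvas` element used for rendering; the `zNear` and `zFar` values here are purely illustrative):

```js
// Sketch: recompute the projection matrix after each frameUpdate() call.
// `pipeline` and `canvas` are assumed to exist; zNear/zFar are illustrative.
function onFrame() {
  pipeline.frameUpdate();

  let model = pipeline.cameraModel();
  let projectionMatrix = Zappar.projectionMatrixFromCameraModel(
    model, canvas.width, canvas.height,
    0.01, // zNear (illustrative)
    100   // zFar (illustrative)
  );

  // ...upload projectionMatrix as a uniform and render your content...

  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```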
Camera Pose
The Zappar library provides multiple functions for obtaining a camera pose. Each function defines a different world space, and thus different behavior for the camera as the user moves their device in space and around any anchors that are being tracked. Each of these functions returns a `Float32Array` containing a 4x4 column-major matrix (a brief sketch of the three options follows the table).
| Function | Returns |
|---|---|
| `pipeline.cameraPoseDefault()` | A transformation where the camera sits, stationary, at the origin of world space and points down the negative Z axis. Tracked anchors move in world space as the user moves the device or tracked objects in the real world. |
| `pipeline.cameraPoseWithAttitude(mirror?: boolean)` | A transformation where the camera sits at the origin of world space but rotates as the user rotates the physical device. When the Zappar library initializes, the negative Z axis of world space points forward in front of the user. |
| `pipeline.cameraPoseWithOrigin(o: Float32Array)` | A transformation with the (camera-relative) origin specified by the supplied parameter. This is used with the `poseCameraRelative(...): Float32Array` functions provided by the various anchor types to allow a given anchor (e.g. a tracked image or face) to be the origin of world space. In this case the camera moves and rotates in world space around the anchor at the origin. |
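As a brief sketch of the three options (assuming an existing `pipeline`, and with `myAnchor` standing in for any tracked anchor you already have):

```js
// Sketch: the three world-space conventions. You only need one per frame.
let defaultPose  = pipeline.cameraPoseDefault();
let attitudePose = pipeline.cameraPoseWithAttitude(false); // optional mirror flag, e.g. for a mirrored user-facing view
let originPose   = pipeline.cameraPoseWithOrigin(myAnchor.poseCameraRelative());
```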
The correct choice of camera pose will depend on your use case and content. Here are some examples you might like to consider when choosing which is best for you:

- To have a light that always shines down from above the user, regardless of the angle of the device or anchors, use `cameraPoseWithAttitude` and simulate a light shining down the negative Y axis in world space (a minimal sketch of this follows the list).
- In an application with a physics simulation of stacked blocks, and with gravity pointing down the negative Y axis of world space, using `cameraPoseWithOrigin` would allow the blocks to rest on a tracked image regardless of how the image is held by the user, while using `cameraPoseWithAttitude` would allow the user to tip the blocks off the image by tilting it.
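Here is a minimal sketch of the first scenario; the shader and uniform handling are left out and the variable names are illustrative:

```js
// Sketch: with cameraPoseWithAttitude, 'down' is simply the negative Y axis
// of world space, regardless of how the device is angled.
let cameraPoseMatrix = pipeline.cameraPoseWithAttitude();
let worldLightDirection = new Float32Array([0, -1, 0]);
// Pass worldLightDirection to your shader and light your world-space normals
// with it; the light stays gravity-aligned as the device tilts.
```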
The matrices returned by these functions represent the transformation of the camera relative to a world space, but if you're forming a full ModelViewProjection matrix in order to render content, then you need to use the inverse of the camera transformation. For this purpose, the Zappar library provides a convenience function to compute the inverse:
```js
let inverseCameraPoseMatrix = Zappar.invert(cameraPose);
```
Anchor Pose
Each of the tracking types provided by the Zappar library exposes anchors with a function to obtain an anchor pose for a given camera pose, e.g.:
```js
let cameraPoseMatrix = pipeline.cameraPoseDefault();
let anchorPoseMatrix = myFaceAnchor.pose(cameraPoseMatrix);
```
It's best to use this structure even if you're using `cameraPoseWithOrigin` and the anchor is forming the origin of your world space, like this:
```js
let cameraPoseMatrix = pipeline.cameraPoseWithOrigin(myFaceAnchor.poseCameraRelative());
let anchorPoseMatrix = myFaceAnchor.pose(cameraPoseMatrix);
```
This pose matrix forms the final transformation for a complete ModelViewProjection matrix for rendering:
```glsl
mat4 modelViewProjection = projectionMatrix * inverseCameraPoseMatrix * anchorPoseMatrix;
gl_Position = modelViewProjection * vertexPosition;
```
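Putting the pieces together, here is a hedged end-to-end sketch. It assumes a face tracker, a WebGL context `gl`, and a compiled `program` whose vertex shader declares the three `mat4` uniforms above; face model loading, camera permissions and the actual draw calls are omitted, and the iteration over `faceTracker.visible` assumes the tracker exposes its currently visible anchors as a set:

```js
import * as Zappar from "@zappar/zappar";

let pipeline = new Zappar.Pipeline();
let faceTracker = new Zappar.FaceTracker(pipeline);

// Sketch only: `gl` and `program` are assumed to exist, and the uniform names
// match the vertex shader snippet above.
function renderFrame(gl, program, renderWidth, renderHeight) {
  pipeline.frameUpdate();

  // 1. Projection matrix from the current camera model
  let projectionMatrix = Zappar.projectionMatrixFromCameraModel(
    pipeline.cameraModel(), renderWidth, renderHeight);

  // 2. Camera pose and its inverse (the 'view' part of model-view-projection)
  let cameraPoseMatrix = pipeline.cameraPoseDefault();
  let inverseCameraPoseMatrix = Zappar.invert(cameraPoseMatrix);

  gl.useProgram(program);
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "projectionMatrix"), false, projectionMatrix);
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "inverseCameraPoseMatrix"), false, inverseCameraPoseMatrix);

  // 3. Anchor pose for each currently visible anchor (the 'model' part)
  for (let anchor of faceTracker.visible) {
    let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);
    gl.uniformMatrix4fv(gl.getUniformLocation(program, "anchorPoseMatrix"), false, anchorPoseMatrix);
    // ...issue draw calls for this anchor's content here...
  }
}
```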
The following sections give more details about the various tracking types and their associated anchors: