Pipelines and Camera Processing

In the Zappar package, a pipeline is used to manage the flow of data coming in (i.e. the frames) through to the output from the different tracking types and computer vision algorithms. While most projects will only need one pipeline, it is possible to create as many as you like. Each pipeline can only have one active source of frames (i.e. one camera, or one video), so if you'd like to simultaneously process frames from multiple sources then you'll need a pipeline for each.
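For example, to process frames from two sources at the same time you would create a pipeline for each. A minimal sketch, using the Z.PipelineCreate and Z.CameraSourceCreate calls covered in the sections below:

IntPtr rearPipeline = Z.PipelineCreate();
IntPtr frontPipeline = Z.PipelineCreate();

// One camera source per pipeline; each pipeline processes its own frames.
IntPtr rearCamera = Z.CameraSourceCreate(rearPipeline, Z.CameraDefaultDeviceId(false));
IntPtr frontCamera = Z.CameraSourceCreate(frontPipeline, Z.CameraDefaultDeviceId(true));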

We have provided a default ZapparCamera implementation that acts as a wrapper around a single pipeline. This camera manages a list of listeners (tracking targets) and ensures that both the Zappar library and its own pipeline are initialized before tracking targets themselves initialize.

Constructing and Managing Pipelines

To create and initialize a pipeline you should call the following function:

IntPtr pipeline = Z.PipelineCreate();

Constructing the Camera

Zappar cameras act as inputs to a particular pipeline. To create a camera source, call the following function:

bool useFrontFacingCamera = false;
string device = Z.CameraDefaultDeviceId(useFrontFacingCamera);
IntPtr camera = Z.CameraSourceCreate(pipeline, device);

If you'd like to start the user-facing 'selfie' camera, set useFrontFacingCamera = true.

Permissions

The library needs to ask the user for permission to access the camera and motion sensors on the device.

To do this, you can call the following function, which shows a built-in UI explaining why the permissions are needed and provides a button that triggers the underlying platform's permission prompts:

Z.PermissionRequestUi();

This will trigger the native permissions prompt on both iOS and Android, and will display a default UI on WebGL. Calling

bool permissionGranted = Z.PermissionGrantedAll();

will return true once the user has granted the relevant permissions.
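Permission may not be granted immediately, so a common pattern is to request the permissions once and then poll Z.PermissionGrantedAll() each frame, only starting the camera when it returns true. A minimal sketch (the cameraStarted flag is illustrative; pipeline and camera are the handles created above, and the start calls are covered in the next section):

private bool cameraStarted = false;

void Start()
{
    // Show the built-in permissions UI as soon as the experience begins.
    Z.PermissionRequestUi();
}

void Update()
{
    if (!cameraStarted && Z.PermissionGrantedAll())
    {
        // Permissions granted - safe to start the camera (see 'Starting the Camera' below).
        Z.PipelineGLContextSet(pipeline);
        Z.CameraSourceStart(camera);
        cameraStarted = true;
    }
}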

Starting the Camera

Once the user has granted the necessary permissions, you can start the camera on the device with the following functions:

Z.PipelineGLContextSet(pipeline);
Z.CameraSourceStart(camera);

Note that if you are not using OpenGL as your rendering engine, Z.PipelineGLContextSet(...) is a no-op and can be omitted.

To switch between the front- and rear-facing cameras during your experience, just create a camera source for each and call Z.CameraSourceStart(...) as appropriate.
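For example, a minimal sketch of a toggle between the two sources (the SwitchCamera method name and the two handles are illustrative):

IntPtr rearCamera = Z.CameraSourceCreate(pipeline, Z.CameraDefaultDeviceId(false));
IntPtr frontCamera = Z.CameraSourceCreate(pipeline, Z.CameraDefaultDeviceId(true));

void SwitchCamera(bool useFrontFacing)
{
    // Start whichever source should supply frames to the pipeline.
    Z.CameraSourceStart(useFrontFacing ? frontCamera : rearCamera);
}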

Processing Camera Frames

Call the following functions in the render loop of a Zappar camera to process the current frame and populate its texture data:

Z.Process(pipeline);
Z.PipelineFrameUpdate(pipeline);
Z.CameraFrameUpload(pipeline);

This will upload a frame for rendering, and will update the internal state of any associated tracking targets.
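These calls are typically made once per frame. A minimal sketch of an Update() loop, assuming pipeline is the handle created above:

void Update()
{
    // Update the internal state of any associated tracking targets
    // and upload the current frame's texture data for rendering.
    Z.Process(pipeline);
    Z.PipelineFrameUpdate(pipeline);
    Z.CameraFrameUpload(pipeline);
}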

Render Frames

In the render loop, be sure to update both the projection and texture matrices for the camera:

int width = Screen.width;
int height = Screen.height;

Matrix4x4 projection = Z.PipelineProjectionMatrix(pipeline, width, height);
Matrix4x4 textureMatrix = Z.PipelineCameraFrameTextureMatrix(pipeline, width, height, useFrontFacingCamera);

GetComponent<Camera>().projectionMatrix = projection;
material.SetMatrix("_nativeTextureMatrix", textureMatrix);

A default CameraMaterial material is provided in the Assets/Materials folder and can be assigned to the material referenced above. This material must use a custom shader of type Unlit/CameraBackgroundShader, an implementation of which is available in the Assets/Shaders folder.
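If you'd rather construct the material at runtime than reference the provided asset, a minimal sketch (assuming the shader is included in the build so that Shader.Find can locate it):

// Create a material using the camera background shader at runtime.
Material material = new Material(Shader.Find("Unlit/CameraBackgroundShader"));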

The current frame texture can then be attached to the material by calling:

Texture2D texture = Z.PipelineCameraFrameTexture(pipeline);
if (texture != null) material.mainTexture = texture;
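Putting these pieces together, a minimal per-frame sketch (assuming pipeline, material, and useFrontFacingCamera fields as set up in the previous sections):

void Update()
{
    // Process the frame and upload its texture data (see 'Processing Camera Frames').
    Z.Process(pipeline);
    Z.PipelineFrameUpdate(pipeline);
    Z.CameraFrameUpload(pipeline);

    // Keep the Unity camera and background material in sync with the native frame.
    GetComponent<Camera>().projectionMatrix =
        Z.PipelineProjectionMatrix(pipeline, Screen.width, Screen.height);
    material.SetMatrix("_nativeTextureMatrix",
        Z.PipelineCameraFrameTextureMatrix(pipeline, Screen.width, Screen.height, useFrontFacingCamera));

    // Attach the current frame texture if one is available.
    Texture2D texture = Z.PipelineCameraFrameTexture(pipeline);
    if (texture != null) material.mainTexture = texture;
}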