Usage

You can integrate the Zappar library into your WebGL project's existing requestAnimationFrame loop. A typical project may look like the example below. The remainder of our JavaScript documentation covers each element of this example in more detail.
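The example assumes you already have a canvas and a WebGL context in the variables canvas and gl. A minimal setup might look like this sketch (the npm package name and the full-window canvas are assumptions for illustration, not requirements of the library):

// Import the library (assuming it has been installed from npm)
import * as Zappar from "@zappar/zappar";

// Create a full-window canvas and get its WebGL context
// These 'canvas' and 'gl' variables are used throughout the example below
const canvas = document.createElement("canvas");
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
const gl = canvas.getContext("webgl");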

// The Zappar library uses a 'pipeline' to manage the data flowing in (e.g. camera frames)
// along with the output from the various computer vision algorithms
// Most projects will just have one pipeline
let pipeline = new Zappar.Pipeline();

// The Zappar library needs the WebGL context to process camera images
// Use this function to tell the pipeline about your context
pipeline.glContextSet(gl);

// We want to process images from the user's camera, so create a CameraSource object
// for our pipeline, using the device's default camera
let source = new Zappar.CameraSource(pipeline, Zappar.cameraDefaultDeviceID());

// Request camera permissions and start the camera
Zappar.permissionRequestUI().then(granted => {
    if (granted) source.start();
    else Zappar.permissionDeniedUI();
});

// Set up a tracker, in this case an image tracker
let imageTracker = new Zappar.ImageTracker(pipeline);
imageTracker.loadTarget("myImage.zpt");

function animate() {
    // Ask the browser to call this function again next frame
    requestAnimationFrame(animate);

    // Your pipeline uses this function to prepare camera frames for processing
    // Note this function will change some WebGL state (including the viewport), so you must change it back
    pipeline.processGL();

    // Restore the viewport so it covers your canvas
    gl.viewport(0, 0, canvas.width, canvas.height);

    // This function allows us to use the tracking data from the most recently processed camera frame
    pipeline.frameUpdate();

    // Upload the current camera frame to a WebGL texture for us to draw
    pipeline.cameraFrameUploadGL();

    // Draw the camera to the screen - the width and height passed here should be those of your canvas
    pipeline.cameraFrameDrawGL(canvas.width, canvas.height);

    // Get our 3D projection matrix
    let model = pipeline.cameraModel();
    let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, canvas.width, canvas.height);

    // Get our camera's pose
    let cameraPoseMatrix = pipeline.cameraPoseDefault();
    let inverseCameraPoseMatrix = Zappar.invert(cameraPoseMatrix);

    // Loop through visible image tracker anchors, rendering some content
    for (let anchor of imageTracker.visible) {
        let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);

        // Render content using the following ModelViewProjection matrix:
        // projectionMatrix * inverseCameraPoseMatrix * anchorPoseMatrix
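        // For example, with the gl-matrix library (an assumption for illustration,
        // not part of the Zappar API; assumes: import { mat4 } from "gl-matrix")
        let mvp = mat4.create();
        mat4.multiply(mvp, projectionMatrix, inverseCameraPoseMatrix);
        mat4.multiply(mvp, mvp, anchorPoseMatrix);
        // ...then pass 'mvp' to your shader as its model-view-projection uniform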
    }
}

// Start things off
animate();
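
If you render to a full-window canvas as in the setup sketch above, you will probably also want to keep its size in sync with the window so the projection matrix computed each frame stays correct:

// Keep the canvas sized to the window; animate() recomputes the projection
// matrix from canvas.width and canvas.height every frame
window.addEventListener("resize", () => {
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
});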