JavaScript

Our JavaScript library provides the low-level tools you need to build world-, face-, or image-tracked AR experiences directly with WebGL, or to integrate AR into a JavaScript 3D platform of your choice.

You may also be interested in our ThreeJS and A-Frame libraries.

You can use this library by downloading a standalone zip containing the necessary files, by linking to our CDN, or by installing from NPM for use in a webpack project.


Installation

Standalone Download

Download the bundle from this link: https://libs.zappar.com/zappar-js/0.2.6/zappar-js.zip

Unzip into your web project and reference from your HTML like this:

<script src="zappar.js"></script>

CDN

Reference the zappar.js library from your HTML like this:

<script src="https://libs.zappar.com/zappar-js/0.2.6/zappar.js"></script>

NPM Webpack Module

Run the following NPM command inside your project directory:

npm install --save @zappar/zappar

Then import the library into your JavaScript or TypeScript files:

import * as Zappar from "@zappar/zappar";

Finally, add the following entry to your webpack rules so that the library's WebAssembly file (zcv.wasm) is bundled correctly:

module.exports = {
  //...
  module: {
    rules: [
      //...
      {
        test: /zcv\.wasm$/,
        type: "javascript/auto",
        loader: "file-loader"
      }
      //...
    ]
  }
};
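
This rule relies on the file-loader package. If your project doesn't already include it, install it as a development dependency:

npm install --save-dev file-loader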


Terminology

Below you'll find a description of terminology used across the Universal AR JavaScript documentation:

  • Frames: the individual pictures that comprise a video or that come from a webcam or phone camera.
  • Source: the origin of frames, such as a given camera or video.
  • Tracking: the processing of frames to find and follow a position in 3D space. With this library you can track images, faces, or even points on a surface in the user's environment.
  • Pipeline: in the Zappar library, a pipeline is used to manage the flow of data coming in (i.e. the frames) through to the output from the different tracking types and computer vision algorithms. Use of the library will typically involve creating a pipeline, attaching a camera source and some trackers, then, each frame, advancing the pipeline and drawing results on screen.
  • Anchor: a tracked point in space, e.g. a tracked image that has appeared in the camera view, or the center of the head of a person that appears in the frame.
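
As a brief illustration of how these pieces fit together, here's a minimal sketch (assuming the library's face tracker with its default model; the same pattern applies to the other tracker types) that creates a pipeline, attaches a tracker, and enumerates the anchors visible in the current frame:

// Create a pipeline and attach a face tracker to it
let pipeline = new Zappar.Pipeline();
let faceTracker = new Zappar.FaceTracker(pipeline);
faceTracker.loadDefaultModel();

function onFrame() {
    // Advance the pipeline to the most recently processed camera frame
    pipeline.frameUpdate();

    // Each visible anchor corresponds to a face currently tracked in the frame
    for (let anchor of faceTracker.visible) {
        // anchor.pose(...) gives a matrix you can use to position content on the face
    }
}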


Usage

You can integrate the Zappar library with the existing requestAnimationFrame loop of your WebGL project. A typical project may look like this. The remainder of our JavaScript documentation goes into more detail about each of the component elements of this example.

// The Zappar library uses a 'pipeline' to manage data flowing in (e.g. camera frames)
// with the output from the various computer vision algorithms
// Most projects will just have one pipeline
let pipeline = new Zappar.Pipeline();

// The Zappar library needs the WebGL context to process camera images
// Use this function to tell the pipeline about your context
pipeline.glContextSet(gl);

// We want to process images from the user's camera, so create a camera source object
// for our pipeline, with the device's default camera
let source = new Zappar.Camera(pipeline, Zappar.cameraDefaultDeviceID());

// Request camera permissions and start the camera
Zappar.permissionRequestUI().then(granted => {
    if (granted) source.start();
    else Zappar.permissionDeniedUI();
});

// Set up a tracker, in this case an image tracker
let imageTracker = new Zappar.ImageTracker(pipeline);
imageTracker.loadTarget("myImage.zpt");

function animate() {
    // Ask the browser to call this function again next frame
    requestAnimationFrame(animate);

    // Your pipeline uses this function to prepare camera frames for processing
    // Note this function will change some WebGL state (including the viewport), so you must change it back
    pipeline.processGL();

    // Restore the viewport to cover your canvas
    gl.viewport(0, 0, canvas.width, canvas.height);

    // This function allows us to use the tracking data from the most recently processed camera frame
    pipeline.frameUpdate();

    // Upload the current camera frame to a WebGL texture for us to draw
    pipeline.cameraFrameUploadGL();

    // Draw the camera to the screen - width and height here should be those of your canvas
    pipeline.cameraFrameDrawGL(width, height);

    // Get our 3D projection matrix
    let model = pipeline.cameraModel();
    let projectionMatrix = Zappar.projectionMatrixFromCameraModel(model, canvas.width, canvas.height);

    // Get our camera's pose
    let cameraPoseMatrix = pipeline.cameraPoseDefault();
    let inverseCameraPoseMatrix = Zappar.invert(cameraPoseMatrix);

    // Loop through visible image tracker anchors, rendering some content
    for (let anchor of imageTracker.visible) {
        let anchorPoseMatrix = anchor.pose(cameraPoseMatrix);

        // Render content using the following ModelViewProjection matrix:
        // projectionMatrix * inverseCameraPoseMatrix * anchorPoseMatrix
    }
}

// Start things off
animate();
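
The matrices returned by the library are WebGL-style 4x4 matrices (16-element Float32Arrays), so they can be combined with the matrix library of your choice. As a minimal sketch, assuming the gl-matrix library (which is not part of Zappar), the ModelViewProjection matrix mentioned in the loop above could be composed like this:

import { mat4 } from "gl-matrix";

function buildModelViewProjection(projectionMatrix, inverseCameraPoseMatrix, anchorPoseMatrix) {
    // modelView = inverseCameraPose * anchorPose
    const modelView = mat4.create();
    mat4.multiply(modelView, inverseCameraPoseMatrix, anchorPoseMatrix);

    // modelViewProjection = projection * modelView
    const modelViewProjection = mat4.create();
    mat4.multiply(modelViewProjection, projectionMatrix, modelView);

    // Pass the result to your shader, e.g. with gl.uniformMatrix4fv
    return modelViewProjection;
}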


Local Preview and Testing

Due to browser restrictions surrounding use of the camera, you must use HTTPS to view or preview your site, even if doing so locally from your computer. If you're using webpack, consider using webpack-dev-server which has an https option to enable this.
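
As a minimal sketch (assuming webpack-dev-server v3/v4-style options; more recent versions configure this through the server option instead), HTTPS can be enabled in your webpack configuration like this:

module.exports = {
  //...
  devServer: {
    https: true
  }
};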

Alternatively you can use the ZapWorks command-line tool to serve a folder over HTTPS for access on your local computer, like this:

zapworks serve .

The command also lets you serve the folder for access by other devices on your local network, like this:

zapworks serve . --lan
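
If the ZapWorks command-line tool isn't already installed, it's distributed through NPM and can be installed globally; the package name assumed here is @zapworks/cli:

npm install -g @zapworks/cli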