Pipelines and Camera Processing
In the Zappar library, a pipeline is used to manage the flow of data coming in (i.e. the frames) through to the output from the different tracking types and computer vision algorithms. It's straightforward to construct a new pipeline:
let pipeline = new Zappar.Pipeline();
The Zappar library needs to use your WebGL context in order to process camera frames. You can set it on your pipeline immediately after it's been created:
pipeline.glContextSet(gl);
While most projects will only need one pipeline, it is possible to create as many as you like. Each pipeline can only have one active source of frames (i.e. one camera, or one video), so if you'd like to simultaneously process frames from multiple sources then you'll need a pipeline for each. These pipelines can share the same GL context (if you're drawing the camera frames from multiple sources onto the same canvas), or use different contexts if you have multiple canvases on your page.
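Putting those two steps together, a minimal setup might look like the following sketch (the canvas id used here is just an assumption for illustration):
// Grab a canvas and a WebGL context to render into
let canvas = document.getElementById("myCanvas");
let gl = canvas.getContext("webgl");

// Create a pipeline and hand it the GL context so it can process camera frames
let pipeline = new Zappar.Pipeline();
pipeline.glContextSet(gl);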
Creating a Frame Source
To start the user's camera, first create a new camera source for your pipeline:
let deviceId = Zappar.cameraDefaultDeviceID();
let source = new Zappar.CameraSource(pipeline, deviceId);
If you'd like to use the user-facing 'selfie' camera, pass true to the cameraDefaultDeviceID function:
let deviceId = Zappar.cameraDefaultDeviceID(true);
let source = new Zappar.CameraSource(pipeline, deviceId);
User-facing cameras are normally shown mirrored to users, so if you start one, please check out the options for mirroring the view covered in this article.
Alternatively, you can pass any device ID obtained from the browser's navigator.mediaDevices.enumerateDevices() function as the second parameter of the CameraSource constructor.
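As a sketch of that approach (the filtering and the choice of device here are just illustrative, not part of the library):
// List the devices the browser knows about and keep only the cameras
// Note: deviceId values may be empty until the user has granted camera permission
navigator.mediaDevices.enumerateDevices().then(devices => {
    let cameras = devices.filter(device => device.kind === "videoinput");
    if (cameras.length > 0) {
        // Use the first camera found - you could instead let the user pick one
        let source = new Zappar.CameraSource(pipeline, cameras[0].deviceId);
    }
});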
Permissions
The library needs to ask the user for permission to access the camera and motion sensors on the device.
To do this, you can use the following function to show a built-in UI informing the user of the need and providing a button to trigger the browser's permission prompts. The function returns a promise that lets you know if the user granted the permissions or not.
// Show Zappar's built-in UI to request camera permissions
Zappar.permissionRequestUI().then(granted => {
    if (granted) {
        // User granted the permissions so start the camera
        source.start();
    } else {
        // User denied the permissions so show Zappar's built-in 'permission denied' UI
        Zappar.permissionDeniedUI();
    }
});
If you'd rather show your own permissions UI, you can use the following function to trigger the browser's permission prompts directly. The function returns a promise that resolves to true if the user granted all the necessary permissions, otherwise false.
Due to browser restrictions, this function must be called from within a user event, e.g. in the event handler of a button click.
Zappar.permissionRequest().then(granted => {
    if (granted) {
        // User granted the permissions so start the camera
    } else {
        // User denied the permissions
        // You can show your own 'permission denied' UI here or use Zappar's built-in one
        Zappar.permissionDeniedUI();
    }
});
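For example, you might call it from the click handler of your own start button (the button id below is hypothetical, used only for this sketch):
// Trigger the browser's permission prompts from a user gesture
document.getElementById("startButton").addEventListener("click", () => {
    Zappar.permissionRequest().then(granted => {
        if (granted) {
            source.start();
        } else {
            Zappar.permissionDeniedUI();
        }
    });
});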
Starting the Frame Source
Once the user has granted the necessary permissions, you can start the camera on the device with the following function:
source.start();
If you'd like to switch between cameras after the source has started you can do that with this function:
// Switch to the self-facing camera:
source.setDevice(Zappar.cameraDefaultDeviceID(true));
If you'd like to pause camera processing (and shut down the camera device), just call the pause() function:
source.pause();
Camera processing can be started again using start().
Pipelines can only have one source running at a time, so if you create and start a second source, it will pause the first one. If you'd like to let the user switch between the rear and user-facing cameras, just have two sources and call start() on each as appropriate, as shown in the sketch below.
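A minimal sketch of that approach (the toggle helper and its wiring are assumptions, not part of the library):
// One source per camera - starting one automatically pauses the other
let rearSource = new Zappar.CameraSource(pipeline, Zappar.cameraDefaultDeviceID());
let userSource = new Zappar.CameraSource(pipeline, Zappar.cameraDefaultDeviceID(true));

let useUserFacing = false;
function toggleCamera() {
    useUserFacing = !useUserFacing;
    // No explicit pause() needed - start() on one source pauses the other
    (useUserFacing ? userSource : rearSource).start();
}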
In addition to CameraSource, we also provide HTMLElementSource which you can use to process frames from an HTML video element, or an img element. If you're processing a video element, you must remember to call play() on that element - the start() and pause() functions of the source only control whether frames are being supplied to the pipeline, not the play state of the video itself.
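A minimal sketch of using a video element as a source - the exact HTMLElementSource constructor arguments and the video file name are assumptions here:
// Create a video element to supply frames to the pipeline
let video = document.createElement("video");
video.src = "my-video.mp4";   // hypothetical file name
video.muted = true;           // muted videos can usually autoplay without a user gesture
video.loop = true;

let videoSource = new Zappar.HTMLElementSource(pipeline, video);
videoSource.start();   // supply frames to the pipeline...
video.play();          // ...and remember to play the video element itself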
Handling Window Events
Users may switch tabs away from your page during their experience (and hopefully return!). It's good practice to detect these events and pause/start the camera as appropriate. This avoids doing unnecessary computation when the user is not watching, and ensures that the camera is correctly restarted should the browser choose to stop it while the user is away. You can do this using the document's visibilitychange event:
document.addEventListener("visibilitychange", () => {
    switch (document.visibilityState) {
        case "hidden":
            source.pause();
            break;
        case "visible":
            source.start();
            break;
    }
});
Processing Frames
Call the following function on your pipeline once per animation frame (e.g. during your requestAnimationFrame callback) in order to process incoming camera frames:
pipeline.processGL();
Please note that this function modifies some GL state during its operation, so if you rely on any of the following GL state you may need to reset it after each call:
GL state | Example
---|---
The currently bound framebuffer is set to null | gl.bindFramebuffer(gl.FRAMEBUFFER, null)
The currently bound texture 2D is set to null | gl.bindTexture(gl.TEXTURE_2D, null)
The currently bound array buffer is set to null | gl.bindBuffer(gl.ARRAY_BUFFER, null)
The currently bound element array buffer is set to null | gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null)
The currently bound program is set to null | gl.useProgram(null)
These features are disabled | gl.SCISSOR_TEST, gl.DEPTH_TEST, gl.BLEND, gl.CULL_FACE
The pixel store flip-Y mode is disabled | gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false)
The viewport is changed | gl.viewport(...)
The clear color is changed | gl.clearColor(...)
After calling processGL(), call the following function to ask the Zappar library to return results from the most recently processed camera frame:
pipeline.frameUpdate();
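Putting these together, a typical render loop might look like this sketch (drawing the camera frame itself is covered in the next section):
function frame() {
    // Process any new camera frames and fetch the latest tracking results
    pipeline.processGL();
    gl.viewport(0, 0, canvas.width, canvas.height);   // processGL() changes the viewport
    pipeline.frameUpdate();

    // ...draw the camera frame and your content here...

    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);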
Next Steps: Drawing the Camera