Converting Existing Projects
You may have an existing project that needs converting in order to use Zappar’s best-in-class tracking technology. This article shows how to convert a three.js example built with a third-party library into an experience that uses the Zappar Universal AR SDK for three.js.
It compares two similar examples created with three.js: 8th Wall's three.js - Flyer and Zappar's three.js Image Tracking .GLB Animation.
Setting up the scene
8th Wall & three.js
When building with 8th Wall, you need to set up your variables as you usually would in a regular three.js or JavaScript project. You then need to create a function which includes your scene items.
// Copyright (c) 2021 8th Wall, Inc.
/* globals XR8 XRExtras THREE */
const imageTargetPipelineModule = () => {
const modelFile = 'jellyfish-model.glb'
const videoFile = 'jellyfish-video.mp4'
const loader = new THREE.GLTFLoader() // This comes from GLTFLoader.js.
let model
let video, videoObj
// Populates some object into an XR scene and sets the initial camera position. The scene and
// camera come from xr3js, and are only available in the camera loop lifecycle onStart() or later.
const initXrScene = ({scene, camera}) => {
Be aware of possible pathing issues, and bear in mind that the 8th Wall scene and camera are created elsewhere by the engine: they are only available during the camera loop lifecycle, in onStart() or later.
Zappar's Universal AR SDK for three.js
With Zappar’s Universal AR SDK, you are able to include your dependencies as imports if you choose to do so.
/// Zappar for ThreeJS Examples
//...
import * as ZapparThree from '@zappar/zappar-threejs';
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';
import './index.css';
const model = new URL('./assets/waving.glb', import.meta.url).href;
const target = new URL('./assets/example-tracking-image.zpt', import.meta.url).href;
The Zappar example shown uses Parcel, which allows developers to easily bundle their projects. However, you have the freedom to include assets as you ordinarily would. Here is a standalone example that does not use Parcel.
You can then set up the scene as if it were a regular three.js project.
// Construct our ThreeJS renderer and scene as usual
const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
document.body.appendChild(renderer.domElement);
// As with a normal ThreeJS scene, resize the canvas if the window resizes
renderer.setSize(window.innerWidth, window.innerHeight);
window.addEventListener('resize', () => {
renderer.setSize(window.innerWidth, window.innerHeight);
});
Starting the camera
8th Wall & three.js
With 8th Wall, you have little control over the camera; however, you must still sync the camera to your scene.
// Grab a handle to the threejs scene and set the camera position on pipeline startup.
const onStart = ({canvas}) => {
const {scene, camera} = XR8.Threejs.xrScene() // Get the 3js scene from XR
initXrScene({scene, camera}) // Add content to the scene and set starting camera position.
// prevent scroll/pinch gestures on canvas
canvas.addEventListener('touchmove', (event) => {
event.preventDefault()
})
// Sync the xr controller's 6DoF position and camera parameters with our scene.
XR8.XrController.updateCameraProjectionMatrix({
origin: camera.position,
facing: camera.quaternion,
})
}
Zappar's Universal AR SDK for three.js
With Zappar’s Universal AR SDK, the camera object itself is already set up for you with the correct properties. This allows you to have greater control over how you want to use the camera.
By default, ZapparThree.Camera is set to use the rear camera. You can swap between the front and rear cameras programmatically by using camera.start(true); and camera.start(false); respectively.
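As a minimal sketch of how you might wire this up, the helper below flips between the two cameras each time it is called. The camera object stands in for a ZapparThree.Camera, which exposes start(userFacing) as described above; makeCameraToggle and the button wiring are hypothetical names, not part of the SDK.

```javascript
// Minimal sketch of a camera-flip helper. `camera` stands in for a
// ZapparThree.Camera, which exposes start(userFacing) as described above.
function makeCameraToggle(camera) {
  let userFacing = false; // Zappar defaults to the rear camera
  return function toggle() {
    userFacing = !userFacing;
    camera.start(userFacing); // true = front camera, false = rear camera
    return userFacing;
  };
}

// Example wiring, e.g. to a hypothetical "flip camera" button:
// const toggle = makeCameraToggle(camera);
// flipButton.addEventListener('click', toggle);
```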
You should create and use the ZapparThree.Camera() object, as it has been configured for our best-in-class tracking. This ensures that content will appear in the expected positions as per the device camera.
// Create a Zappar camera that we'll use instead of a ThreeJS camera
const camera = new ZapparThree.Camera();
// In order to use camera and motion data, we need to ask the users for permission
// The Zappar library comes with some UI to help with that, so let's use it
ZapparThree.permissionRequestUI().then((granted) => {
// If the user granted us the permissions we need then we can start the camera
// Otherwise, let them know that it's necessary with Zappar's permission denied UI
if (granted) camera.start();
else ZapparThree.permissionDeniedUI();
});
We also have a handy permission request function, though you can customize how you use this to your liking.
To finalize setting up the camera, you must do the following:
- Allow Universal AR to understand the WebGL context, so content is able to render as expected.
- Set the scene.background to the camera feed in order to see the camera’s view.
// The Zappar component needs to know our WebGL context, so set it like this:
ZapparThree.glContextSet(renderer.getContext());
// Set the background of our scene to be the camera background texture
// that's provided by the Zappar camera
scene.background = camera.backgroundTexture;
Adding the tracker
8th Wall & three.js
With 8th Wall, you will have an image target in your console that is linked to an app key.
You cannot do very much with targets within the project file itself, so you should rely on the imageTargetPipelineModule() function for your target to behave and track as expected.
// Custom pipeline modules.
imageTargetPipelineModule(), // Places 3d model and video content over detected image targets.
Zappar's Universal AR SDK for three.js
With Zappar’s Universal AR SDK, you have more control over your targets and anchor groups. This is because they are defined within your project files. Therefore, you are able to carry out actions such as changing your target image easily at any time, or having multiple anchors (target images) at once.
After creating your tracker, you simply need to add it to your scene.
// Create a zappar image_tracker and wrap it in an image_tracker_group to put our ThreeJS content into
// Pass our loading manager in to ensure the progress bar works correctly
const imageTracker = new ZapparThree.ImageTrackerLoader(manager).load(target);
const imageTrackerGroup = new ZapparThree.ImageAnchorGroup(camera, imageTracker);
// Add image tracker group into the ThreeJS scene
scene.add(imageTrackerGroup);
You can enable or disable the tracker programmatically by using the variable name of your image tracker and then setting the .enabled boolean. For example, to disable the tracker, you could use imageTracker.enabled = false;.
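One common use for this flag is pausing tracking while some other UI (such as a menu) has focus. The sketch below is a hypothetical illustration, assuming only the .enabled boolean described above; withTrackingPaused is an invented name, and tracker stands in for the Zappar image tracker.

```javascript
// Hypothetical sketch: pause tracking while an overlay (e.g. a menu) is
// open, then restore the previous state. `tracker` stands in for the
// ZapparThree image tracker, which exposes the `enabled` boolean.
function withTrackingPaused(tracker, fn) {
  const wasEnabled = tracker.enabled;
  tracker.enabled = false; // stop tracking for the duration of fn()
  try {
    return fn();
  } finally {
    tracker.enabled = wasEnabled; // restore whatever state we started in
  }
}

// Example: withTrackingPaused(imageTracker, () => openMenu());
```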
Adding the content
8th Wall & three.js
To add content to your experience when developing with 8th Wall, create a function that includes the content for the renderer to run. The content in the example below is created with JavaScript and is then set to a three.js texture.
// Populates some object into an XR scene and sets the initial camera position. The scene and
// camera come from xr3js, and are only available in the camera loop lifecycle onStart() or later.
const initXrScene = ({scene, camera}) => {
// create the video element
video = document.createElement('video')
video.src = videoFile
//...set up video
const texture = new THREE.VideoTexture(video)
//...set up texture
videoObj = new THREE.Mesh(
new THREE.PlaneGeometry(0.75, 1),
new THREE.MeshBasicMaterial({map: texture})
)
Zappar's Universal AR SDK for three.js
With Zappar’s Universal AR SDK, you can create your project in the same way if you wish; however, because you have already set up your scene, you do not need to include it or the camera. This means that you can organize your code according to your best workflow.
To add the 3D model (or any other tracked content) to your scene, you simply need to add it to your tracker group, as you normally would in a regular three.js project. This allows developers to add or remove elements programmatically.
In the example below, three.js's GLTFLoader is used to load a .GLB model.
// Load a 3D model to place within our group (using ThreeJS's GLTF loader)
const gltfLoader = new GLTFLoader(manager);
gltfLoader.load(model, (gltf) => {
// get the animation and re-declare mixer and action.
// which will then be triggered on button press
mixer = new THREE.AnimationMixer(gltf.scene);
action = mixer.clipAction(gltf.animations[0]);
// Now the model has been loaded, we can rotate it and add it to our image_tracker_group
imageTrackerGroup.add(gltf.scene.rotateX(Math.PI / 2));
}, undefined, () => {
console.log('An error occurred loading the GLTF model');
});
Target visibility behaviour
8th Wall & three.js
Using 8th Wall, you can use an if (detail.name === 'model-target') check to detect when the target is in view.
// When the image target named 'model-target' is detected, show 3D model.
// This string must match the name of the image target uploaded to 8th Wall.
if (detail.name === 'model-target') {
model.position.copy(detail.position)
model.quaternion.copy(detail.rotation)
model.scale.set(detail.scale, detail.scale, detail.scale)
model.visible = true
}
Zappar's Universal AR SDK for three.js
To get the same result in Universal AR, you simply need to use the imageTracker.onVisible.bind() function.
// When we lose sight of the target image, hide the scene contents.
imageTracker.onVisible.bind(() => { scene.visible = true; });
imageTracker.onNotVisible.bind(() => { scene.visible = false; });
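If you would rather keep untracked objects on screen, you can bind the same events to just the tracker group instead of the whole scene. The helper below is a hypothetical sketch: bindVisibility is an invented name, and onVisible/onNotVisible mirror the event API used above.

```javascript
// Hypothetical helper: bind any object with a boolean `visible` flag
// (the whole scene, or just the imageTrackerGroup) to a tracker's
// visibility events, mirroring the onVisible/onNotVisible API above.
function bindVisibility(tracker, target) {
  tracker.onVisible.bind(() => { target.visible = true; });
  tracker.onNotVisible.bind(() => { target.visible = false; });
}

// Example: hide only the tracked content, leaving other scene objects alone:
// bindVisibility(imageTracker, imageTrackerGroup);
```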
Rendering the scene
8th Wall & three.js
To render a three.js scene with 8th Wall, create a render function with all of your camera pipeline modules.
const onxrloaded = () => {
// If your app only interacts with image targets and not the world, disabling world tracking can
// improve speed.
XR8.XrController.configure({disableWorldTracking: true})
XR8.addCameraPipelineModules([ // Add camera pipeline modules.
// Existing pipeline modules.
XR8.GlTextureRenderer.pipelineModule(), // Draws the camera feed.
XR8.Threejs.pipelineModule(), // Creates a ThreeJS AR Scene.
XR8.XrController.pipelineModule(), // Enables SLAM tracking.
XRExtras.AlmostThere.pipelineModule(), // Detects unsupported browsers and gives hints.
XRExtras.FullWindowCanvas.pipelineModule(), // Modifies the canvas to fill the window.
XRExtras.Loading.pipelineModule(), // Manages the loading screen on startup.
XRExtras.RuntimeError.pipelineModule(), // Shows an error image on runtime error.
// Custom pipeline modules.
imageTargetPipelineModule(), // Places 3d model and video content over detected image targets.
])
// Open the camera and start running the camera run loop.
XR8.run({canvas: document.getElementById('camerafeed')})
}
Zappar's Universal AR SDK for three.js
To render the scene with Zappar’s Universal AR SDK, create a regular three.js render() or animate() function.
Because you have already created your three.js scene, you do not need to implement it again within your render loop. You also do not need to modify the canvas in this function, as it was already prepared when you set up your scene.
Within this function, you should call:
- camera.updateFrame(renderer); to update the camera frames.
- renderer.render(scene, camera); to update the rendered scene.
You can also add anything else relevant to the render loop here. In the example below, an animated model needs to listen to a THREE.Clock() in order to update.
// Use a function to render our scene as usual
function render(): void {
// If the mixer has been declared, update our animations with delta time
if (mixer) mixer.update(clock.getDelta());
// The Zappar camera must have updateFrame called every frame
camera.updateFrame(renderer);
// Draw the ThreeJS scene in the usual way, but using the Zappar camera
renderer.render(scene, camera);
// Call render() again next frame
requestAnimationFrame(render);
}
// Start things off
render();
You can choose not to use the requestAnimationFrame(render); method and instead use renderer.setAnimationLoop(render); outside the render function, should you wish.
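Either way, the per-frame work stays the same, so one option is to factor it into a single callback that works with both loop styles. This is only a sketch: makeFrameCallback is an invented name, and renderer, camera, scene, mixer and clock stand in for the objects created in the example above.

```javascript
// Sketch: factor the per-frame work into one callback so the same body
// works with requestAnimationFrame or renderer.setAnimationLoop.
// All dependencies stand in for the objects from the example above.
function makeFrameCallback({ renderer, camera, scene, mixer, clock }) {
  return function renderFrame() {
    if (mixer) mixer.update(clock.getDelta()); // advance any animations
    camera.updateFrame(renderer);              // Zappar requires this every frame
    renderer.render(scene, camera);            // draw the scene as usual
  };
}

// Drive the loop yourself:
//   const render = makeFrameCallback(deps);
//   (function loop() { render(); requestAnimationFrame(loop); })();
// Or let three.js drive it:
//   renderer.setAnimationLoop(makeFrameCallback(deps));
```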
Further reading
To learn more about the concepts covered in this guide, please see the following pages: