The first step for any A-Frame Universal AR project is to add a camera using the
zappar-camera component (replacing any existing camera in your scene), like this:
<a-entity camera zappar-camera></a-entity>
Don't change the position or rotation of the camera yourself - the Zappar library will do this automatically.
The library needs to ask the user for permission to access the camera and motion sensors on the device.
To do this, you can use the following entity to show a built-in UI informing the user of the need and providing a button to trigger the browser's permission prompts.
<!-- Ask user for camera permissions, display some text if permission is denied -->
<a-entity zappar-permissions-ui id="permissions">
  <!-- Remove the text entity to use Zappar's default permission denied UI -->
  <a-entity text="value: Please reload the page, accepting the camera permissions." position="0 0 -2"></a-entity>
</a-entity>
User Facing Camera
Some experiences, e.g. face-tracked experiences, require the use of the user-facing camera on the device. To activate the user-facing camera, provide the
user-facing: true parameter to the zappar-camera component:
<a-entity camera zappar-camera="user-facing: true;"></a-entity>
Mirroring the Camera
Users expect user-facing cameras to be shown mirrored, so by default the
zappar-camera will mirror the camera view for the user-facing camera.
Configure this behavior with the following option:
<a-entity camera zappar-camera="user-facing: true; user-camera-mirror-mode: poses"> </a-entity>
The values you can pass to user-camera-mirror-mode are:
- poses: this option mirrors the camera view and makes sure your content aligns correctly with what you're tracking on screen. Your content itself is not mirrored - so text, for example, is readable. This option is the default.
- css: this option mirrors the entire canvas. With this mode selected, both the camera and your content appear mirrored.
- no-mirror: no mirroring of content or camera view is performed.
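For example, to disable mirroring of the user-facing camera entirely, you might set:

```html
<a-entity camera zappar-camera="user-facing: true; user-camera-mirror-mode: no-mirror;"></a-entity>
```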
There's also a rear-camera-mirror-mode parameter that takes the same values should you want to mirror the rear-facing camera. The default for the rear-facing camera is no-mirror.
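As a sketch, mirroring the rear camera's view (an uncommon choice, shown here for illustration) might look like:

```html
<a-entity camera zappar-camera="rear-camera-mirror-mode: css;"></a-entity>
```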
Realtime Camera-based Reflections
The SDK provides an automatically generated environment map that's useful if you're using materials that support reflections (e.g. MeshStandardMaterial, MeshPhysicalMaterial). The map uses the camera feed to create an approximate environment that can add some realism to your scene.
To apply the map to your scene, simply add the zappar-environment-map attribute to your scene element:
<a-scene zappar-environment-map> </a-scene>
Alternatively, you may apply the texture to specific object materials using the environment-map attribute:
<a-sphere position="0 0 -5" environment-map metalness="1" roughness="0" radius="1"></a-sphere>
Camera Pose
The Zappar library provides multiple modes for the camera to move around in the A-Frame scene. You can set this mode with the
poseMode parameter of the zappar-camera component.
There are the following options:
- default: in this mode the camera stays at the origin of the scene, pointing down the negative Z axis. Any tracked groups will move around in your scene as the user moves the physical camera and real-world tracked objects.
- attitude: the camera stays at the origin of the scene, but rotates as the user rotates the physical device. When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.
- anchor-origin: the origin of the scene is the center of the group specified by the camera's poseAnchorOrigin parameter. In this case the camera moves and rotates in world space around the group at the origin.
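As a sketch, an anchor-origin setup might look like the following (the #my-anchor-group selector is a hypothetical id for a tracked group in your scene):

```html
<a-entity camera zappar-camera="poseMode: anchor-origin; poseAnchorOrigin: #my-anchor-group"></a-entity>
```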
The correct choice of camera pose will depend on your given use case and content. Here are some examples you might like to consider when choosing which is best for you:
- To have a light that always shines down from above the user, regardless of the angle of the device or anchors, use attitude and place a light shining down the negative Y axis in world space.
- In an application with a physics simulation of stacked blocks, and with gravity pointing down the negative Y axis of world space, using
anchor-origin would allow the blocks to rest on a tracked image regardless of how the image is held by the user, while using
attitude would allow the user to tip the blocks off the image by tilting it.
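A minimal sketch of the first example above - an attitude-mode camera with a light shining down world space's negative Y axis (the light's position is an assumption; A-Frame directional lights point from their position toward the scene origin by default):

```html
<a-entity camera zappar-camera="poseMode: attitude"></a-entity>
<!-- Directional light above the origin, shining straight down -->
<a-entity light="type: directional" position="0 5 0"></a-entity>
```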
With the camera set up, you can then create a tracked experience.