Camera Setup
Add the ZapparCamera component to your scene, replacing any existing camera, like this:
```tsx
import { ZapparCamera } from "@zappar/zappar-react-three-fiber";
// ...
return (
  <ZapparCanvas>
    <ZapparCamera />
  </ZapparCanvas>
);
```
You don't need to change the position or rotation of the camera yourself; the Zappar library will do this for you automatically.
User Facing Camera
Some experiences, e.g. face-tracked experiences, require the use of the user-facing camera on the device. To activate the user-facing camera, provide the `userFacing` prop to the `ZapparCamera` component:

```tsx
<ZapparCamera userFacing />
```
Mirroring the Camera
Users expect user-facing cameras to be shown mirrored, so by default the `ZapparCamera` will mirror the camera view for the user-facing camera.
Configure this behavior with the following option:

```tsx
<ZapparCamera userCameraMirrorMode="poses" />
```
The values you can pass to `userCameraMirrorMode` are:

| Camera Mirroring | Description |
|---|---|
| `poses` | This option mirrors the camera view and makes sure your content aligns correctly with what you're tracking on screen. Your content itself is not mirrored, so text, for example, remains readable. This option is the default. |
| `css` | This option mirrors the entire canvas. With this mode selected, both the camera and your content appear mirrored. |
| `none` | No mirroring of content or camera view is performed. |
There's also a `rearCameraMirrorMode` prop that takes the same values, should you want to mirror the rear-facing camera. The default `rearCameraMirrorMode` is `none`.
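For example, to show the rear camera feed mirrored along with your content (using the `css` mode described in the table above):

```tsx
// Mirror the whole canvas for the rear-facing camera:
// both the camera feed and rendered content appear flipped.
<ZapparCamera rearCameraMirrorMode="css" />
```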
Realtime Camera-based Reflections
The SDK provides an automatically generated environment map that's useful if you're using materials that support reflections (e.g. `MeshStandardMaterial`, `MeshPhysicalMaterial`). The map uses the camera feed to create an approximate environment that can add some realism to your scene.
To apply the map to your scene, simply pass the `environmentMap` prop to the `ZapparCamera` component:

```tsx
<ZapparCamera environmentMap />
```
Alternatively, you can get the texture and attach it to specific object materials by passing a callback function to the `useEnvironmentMap` prop:
```tsx
import { useState } from "react";
import * as THREE from "three";
import { ZapparCanvas, ZapparCamera } from "@zappar/zappar-react-three-fiber";

const App = () => {
  const [envMap, setEnvMap] = useState<THREE.Texture>();
  return (
    <ZapparCanvas>
      <ZapparCamera useEnvironmentMap={setEnvMap} />
      <mesh position={[0, 0, -5]}>
        <sphereBufferGeometry />
        <meshStandardMaterial metalness={1} roughness={0} envMap={envMap} />
      </mesh>
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
  );
};
```
Camera Pose
The Zappar library provides multiple modes for the camera to move around in the scene. You can set this mode with the `poseMode` prop of the `ZapparCamera` component. The following options are available:
| Camera Pose | Description |
|---|---|
| `default` | In this mode the camera stays at the origin of the scene, pointing down the negative Z axis. Any tracked groups will move around in your scene as the user moves the physical camera and real-world tracked objects. |
| `attitude` | The camera stays at the origin of the scene, but rotates as the user rotates the physical device. When the Zappar library initializes, the negative Z axis of world space points forward in front of the user. |
| `anchor-origin` | The origin of the scene is the center of the group specified by the camera's `poseAnchorOrigin` prop. In this case the camera moves and rotates in world space around the group at the origin. |
The correct choice of camera pose will depend on your use case and content. Here are some examples you might like to consider when choosing which is best for you:

- To have a light that always shines down from above the user, regardless of the angle of the device or anchors, use `attitude` and place a light shining down the negative Y axis in world space.
- In an application with a physics simulation of stacked blocks, and with gravity pointing down the negative Y axis of world space, using `anchor-origin` would allow the blocks to rest on a tracked image regardless of how the image is held by the user, while using `attitude` would allow the user to tip the blocks off the image by tilting it.
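As a sketch of the `anchor-origin` mode, the camera's `poseAnchorOrigin` prop can be pointed at a tracked group via a ref. The tracker component and its props here are illustrative; substitute whichever tracker your experience uses:

```tsx
// Sketch: anchor the world origin to a tracked group.
// `trackerGroup` and `targetFile` are hypothetical names for this example.
const trackerGroup = useRef();

return (
  <ZapparCanvas>
    <ZapparCamera poseMode="anchor-origin" poseAnchorOrigin={trackerGroup} />
    <ImageTracker ref={trackerGroup} targetImage={targetFile} />
  </ZapparCanvas>
);
```

With this setup the camera, rather than the tracked content, moves through world space as the user moves the device.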
Advanced Usage
Custom Video Device
Custom video device IDs can be provided as props on the `ZapparCamera` component:

```tsx
<ZapparCamera
  sources={{
    userCamera: "csO9c0YpAf274OuCPUA53CNE0YHlIr2yXCi+SqfBZZ8=",
    rearCamera: "RKxXByjnabbADGQNNZqLVLdmXlS0YkETYCIbg+XxnvM=",
  }}
/>
```
First Frame
Use the `onFirstFrame` callback prop to detect when the first frame has been processed:

```tsx
<ZapparCamera
  onFirstFrame={() => {
    console.log("first frame");
  }}
/>
```
Setting the default camera
When the camera component is mounted, it sets itself as the scene's main camera with a render priority of 1. You can change this behavior with the following props:

```tsx
<ZapparCamera
  makeDefault={false} // default: true
  renderPriority={0} // default: 1
/>
```
To set the camera as your main scene camera yourself, use `useThree`:

```tsx
import { useLayoutEffect, useRef } from "react";
import { useThree } from "@react-three/fiber";

const set = useThree((state) => state.set);
const cameraRef = useRef();
useLayoutEffect(() => {
  set(() => ({ camera: cameraRef.current }));
}, []);
// ...
<ZapparCamera makeDefault={false} ref={cameraRef} />
// ...
```
// ...
With the camera set up, you can then create a tracked experience.