Drawing the Camera
This article follows on directly from Pipelines and Camera Processing; we highly recommend reading that article first.
The platform provides the most recent camera frame as a WebGL texture that you can draw to the screen. To make this texture available, call the following function after your frameUpdate() call:
pipeline.cameraFrameUploadGL();
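For example, in a requestAnimationFrame-driven render loop this might look as follows. This is a minimal sketch; it assumes pipeline and your WebGL context have already been set up as described in the previous article:

function drawFrame() {
  requestAnimationFrame(drawFrame);

  // Advance the pipeline to the most recent camera frame...
  pipeline.frameUpdate();
  // ...then upload that frame as a WebGL texture
  pipeline.cameraFrameUploadGL();

  // Draw the camera (and the rest of your scene) here
}
requestAnimationFrame(drawFrame);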
Once uploaded, there are two ways to draw the camera:
- Using the cameraFrameDrawGL convenience function
- Manually drawing a full-screen quad with WebGL
Drawing the camera with cameraFrameDrawGL
You can use the following function:
pipeline.cameraFrameDrawGL(renderWidth: number, renderHeight: number, mirror?: boolean)
It automatically draws the camera to the screen as a full-screen quad. Note that this function modifies some GL state during its operation, so you may need to reset the following state after calling it (see the example below the table):
GL state | Example |
---|---|
The currently bound texture 2D is set to null | gl.bindTexture(gl.TEXTURE_2D, null) |
The currently bound array buffer is set to null | gl.bindBuffer(gl.ARRAY_BUFFER, null) |
The currently bound program is set to null | gl.useProgram(null) |
The currently active texture is set to gl.TEXTURE0 | gl.activeTexture(gl.TEXTURE0) |
These features are disabled | gl.SCISSOR_TEST, gl.DEPTH_TEST, gl.BLEND, gl.CULL_FACE |
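For example, a per-frame draw using this convenience function might look like the following sketch, where canvas is assumed to be your rendering canvas and mirror a boolean of your choosing:

// Draw the latest camera frame as a full-screen quad
pipeline.cameraFrameDrawGL(canvas.width, canvas.height, mirror);

// cameraFrameDrawGL leaves the GL state as described in the table above,
// so re-enable anything your own rendering depends on, for example:
gl.enable(gl.DEPTH_TEST);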
Manually drawing the camera
Alternatively, you can draw the camera manually to a full-screen quad. To do so, you can use the following functions:
- pipeline.cameraFrameTextureGL() : WebGLTexture | undefined: returns a WebGLTexture containing the current camera image, or undefined if none is available.
- pipeline.cameraFrameTextureMatrix(renderWidth : number, renderHeight : number, mirror?: boolean) : Float32Array: pass in your canvas's width and height and it returns a 4x4 column-major matrix that you can use to transform the UV coordinates of the following quad:
Quad | UV Coordinates |
---|---|
Vertex 0: -1, -1, 0 | UV 0: 0, 0 |
Vertex 1: -1, 1, 0 | UV 1: 0, 1 |
Vertex 2: 1, -1, 0 | UV 2: 1, 0 |
Vertex 3: 1, 1, 0 | UV 3: 1, 1 |
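On the JavaScript side, the quad above could be uploaded as two buffers, one for positions and one for UVs. This is a minimal sketch; the names positionBuffer and uvBuffer are illustrative, and gl is your WebGL context:

// Full-screen quad matching the table above, drawn as a triangle strip
const positions = new Float32Array([
  -1, -1, 0,   // Vertex 0
  -1,  1, 0,   // Vertex 1
   1, -1, 0,   // Vertex 2
   1,  1, 0,   // Vertex 3
]);
const uvs = new Float32Array([
  0, 0,   // UV 0
  0, 1,   // UV 1
  1, 0,   // UV 2
  1, 1,   // UV 3
]);

const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

const uvBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
gl.bufferData(gl.ARRAY_BUFFER, uvs, gl.STATIC_DRAW);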
Here's an example vertex shader to show how this can be accomplished:
attribute vec4 position; // Bound to a buffer with the vertex data above
attribute vec4 texCoord; // Bound to a buffer with the UV data above
uniform mat4 texTransform; // Set to the matrix returned by cameraFrameTextureMatrix(...)
varying vec4 texVarying; // Used to pass the UV coordinates to the fragment shader
void main()
{
gl_Position = position;
texVarying = texTransform * texCoord;
}
And the corresponding fragment shader:
precision mediump float; // WebGL fragment shaders require a default float precision
varying vec4 texVarying; // The UV coordinate from the vertex shader
uniform sampler2D texSampler; // Bound to the texture returned by cameraFrameTextureGL()
void main()
{
gl_FragColor = texture2DProj(texSampler, texVarying);
}
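Tying it together, a per-frame draw on the JavaScript side might look like this. It's a sketch rather than a complete renderer: it assumes the buffers from the earlier snippet, a program compiled and linked from the two shaders above, attribute and uniform locations already looked up (positionLocation, texCoordLocation, texSamplerLocation and texTransformLocation are illustrative names), and a mirror boolean of your choosing:

const texture = pipeline.cameraFrameTextureGL();
if (texture !== undefined) {
  const textureMatrix = pipeline.cameraFrameTextureMatrix(canvas.width, canvas.height, mirror);

  gl.useProgram(program);

  // Bind the quad's vertex data to the shader attributes
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.enableVertexAttribArray(positionLocation);
  gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
  gl.enableVertexAttribArray(texCoordLocation);
  gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, 0, 0);

  // Bind the camera texture and the UV transform matrix
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.uniform1i(texSamplerLocation, 0);
  gl.uniformMatrix4fv(texTransformLocation, false, textureMatrix);

  // Draw the 4 vertices as a triangle strip covering the screen
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}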