With instanced rendering, per-instance data, such as colors and positions, are transferred through vertex buffers
rather than uniforms (see Listing 9-6). First, one or more vertex buffers need to be created to hold the individual
instance data. From there on, the extension is used to notify the implementation that the buffer itself holds instanced
data and to render the geometries.
Listing 9-6. Instanced Array Setup
// Point the color attribute at the buffer holding the instance data
gl.vertexAttribPointer(colorLocation, 4, gl.FLOAT, false, 16, 0);
// Advance the attribute once per instance rather than once per vertex
ext.vertexAttribDivisorANGLE(colorLocation, 1);
// Draw the instanced meshes
ext.drawElementsInstancedANGLE(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0, instanceCount);
To use instancing, the shaders have to be modified, as the values now come from a vertex buffer rather than
a uniform. Any value that was a uniform but is now supplied by the instanced vertex buffer simply changes its
declaration from uniform to attribute. Also, if a uniform now being passed in as instance data was used only in the
fragment shader, it must be moved to the vertex shader and then passed over to the fragment shader as a varying
variable, because fragment shaders cannot access vertex attributes.
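As a sketch of that change, a hypothetical shader pair for the instanced case might look like the following; the names (instanceColor, mvpMatrix) are illustrative assumptions, not from the original renderer. The diffuse color, formerly a uniform, is declared as a per-instance attribute in the vertex shader and forwarded through a varying:

```javascript
// Hypothetical instanced shader pair. The diffuse color that used to be a
// uniform is now a per-instance vertex attribute, handed to the fragment
// shader through a varying.
var vertexShaderSource = [
  'attribute vec3 position;',
  'attribute vec4 instanceColor;', // was: uniform vec4 instanceColor;
  'uniform mat4 mvpMatrix;',
  'varying vec4 vColor;',
  'void main() {',
  '  vColor = instanceColor;', // forward the per-instance value
  '  gl_Position = mvpMatrix * vec4(position, 1.0);',
  '}'
].join('\n');

var fragmentShaderSource = [
  'precision mediump float;',
  'varying vec4 vColor;', // the fragment shader reads the varying instead
  'void main() {',
  '  gl_FragColor = vColor;',
  '}'
].join('\n');
```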
The instanced arrays extension allows many geometries to be drawn with a single call. But, supporting devices
without this extension requires a bit of work as well. Any shaders that act on instanced geometry will require a fallback
version. Additionally, the renderer will need to handle setting uniform values directly instead of using a vertex buffer,
which could be complicated, depending on how the renderer is set up. Another word of caution: the number of
API calls is based on the number of instances and the number of values per instance and thus could go up drastically.
If the scene is bound by the number of draw calls being submitted, the number of instances will have to be slashed
when the extension is not available.
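One possible shape for such a fallback is sketched below; the function and parameter names are illustrative assumptions, not from an actual renderer. Each instance costs one uniform upload plus one draw call:

```javascript
// Minimal sketch of the non-instanced fallback path: the value that the
// instance buffer would otherwise supply is set as a uniform, and one
// draw call is issued per instance.
function drawInstancesFallback(gl, colorLocation, indexCount, instanceColors) {
  var apiCalls = 0;
  for (var i = 0; i < instanceColors.length; i++) {
    // Set the per-instance value directly instead of reading it from a buffer
    gl.uniform4fv(colorLocation, instanceColors[i]);
    gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
    apiCalls += 2; // one uniform call + one draw call per instance
  }
  return apiCalls; // grows linearly with the number of instances
}
```

With 100 instances and a single vec4 per instance, this path already issues 200 API calls where the instanced path issues one, which is why the instance count may need to be cut when the extension is missing.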
Beyond the promoted extensions, the next iteration of WebGL makes some further optimizations possible. At the time
of writing, the specification has been released only in draft form, so there is a possibility that things could change.
WebGL 2 provides a wrapper over OpenGL ES 3.0 functionality, which is already available for use, so it's doubtful that
the finalized specification will deviate too far from what has already been proposed. Because at this time no browser
vendors have implemented the specification, no example code is given here.
In WebGL 1.x all the information on how a texture is sampled is contained within the texture object itself. In
WebGL 2.x this state is replicated within a sampler object. This allows the state of the texture to be easily changed and
the same texture to be bound to multiple texture units with different sampling options.
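Purely as a hypothetical sketch of the draft API (which mirrors OpenGL ES 3.0 and could still change before browsers ship it), binding one texture to two units with different sampling state might look like this; every name here is an assumption:

```javascript
// Sampler objects as proposed in the WebGL 2 draft, following the
// OpenGL ES 3.0 API. The texture's own state is untouched; each unit
// gets its filtering from the sampler bound alongside it.
function bindTextureWithTwoSamplers(gl, texture) {
  var nearestSampler = gl.createSampler();
  gl.samplerParameteri(nearestSampler, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.samplerParameteri(nearestSampler, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  var linearSampler = gl.createSampler();
  gl.samplerParameteri(linearSampler, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.samplerParameteri(linearSampler, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

  // Bind the same texture to two units, each with its own sampling options
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.bindSampler(0, nearestSampler);
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.bindSampler(1, linearSampler);
  return [nearestSampler, linearSampler];
}
```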
When switching between geometries to draw, the uniform values will need to be updated. This can result in a lot
of calls to set uniform data, even though the data may remain constant for the object. Uniform buffer objects offer a
new way to set uniform data. Instead of setting each value through the shader program, data are set directly on the
uniform buffer. The buffer is then bound to the shader program, allowing multiple uniform values to be set with a
single call.
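Again as a hypothetical sketch of the draft API, following the OpenGL ES 3.0 calls the draft wraps (names are assumptions until the specification is finalized), a uniform block might be filled and attached like this:

```javascript
// Sketch of a uniform buffer object per the WebGL 2 draft. All of the
// block's values are uploaded in one bufferData call and reach the
// program through a single binding point.
function setupUniformBuffer(gl, program, blockName, data, bindingPoint) {
  var buffer = gl.createBuffer();
  gl.bindBuffer(gl.UNIFORM_BUFFER, buffer);
  // Upload every uniform value in the block at once
  gl.bufferData(gl.UNIFORM_BUFFER, data, gl.DYNAMIC_DRAW);
  // Associate the program's uniform block with a binding point...
  var blockIndex = gl.getUniformBlockIndex(program, blockName);
  gl.uniformBlockBinding(program, blockIndex, bindingPoint);
  // ...and attach the buffer to that same binding point
  gl.bindBufferBase(gl.UNIFORM_BUFFER, bindingPoint, buffer);
  return buffer;
}
```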
Rendering the Scene
With the renderer ensuring that no redundant calls are occurring, it's time to focus on how the scene is being
submitted. Even with the underlying optimizations present, the API usage may not be optimal when rendering the
scene. For example, imagine that 100 cubes are visible and that each one can be one of three colors. Let's also assume
that there's a uniform for the diffuse color and another uniform for the MVP matrix. The worst case scenario for the