vectors are used as bases in a vector space (i.e., each vertex in an output mesh
can be constructed as a linear combination of these bases using a weight
vector). More precisely, for each output vertex v_i at time t, with N morph targets,
base target vertex b_i, weight vector w, and target pose vertices p_{k,i}, we have
    v_i(t) = b_i + \sum_{k=0}^{N} w_k(t) \cdot (p_{k,i} - b_i).
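As a concrete illustration, here is a minimal CPU-side sketch of this blend. The names, the two-target setup, and the single-float "positions" are hypothetical simplifications for clarity; real mesh data would use three- or four-component vectors.

```c
#include <assert.h>

#define NUM_TARGETS 2
#define NUM_VERTS   3

/* Hypothetical base pose and target poses, one float per "vertex". */
static const float base_pose[NUM_VERTS]           = { 0.0f, 1.0f, 2.0f };
static const float target[NUM_TARGETS][NUM_VERTS] = {
    { 1.0f, 1.0f, 2.0f },   /* pose 0: displaces vertex 0 */
    { 0.0f, 3.0f, 2.0f },   /* pose 1: displaces vertex 1 */
};

/* v_i = b_i + sum_k w_k * (p_{k,i} - b_i) */
static void blend(const float w[NUM_TARGETS], float out[NUM_VERTS]) {
    for (int i = 0; i < NUM_VERTS; ++i) {
        out[i] = base_pose[i];
        for (int k = 0; k < NUM_TARGETS; ++k)
            out[i] += w[k] * (target[k][i] - base_pose[i]);
    }
}
```

Note that with all weights zero the output is exactly the base pose, and with a single weight of one the output is that target pose.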
The formula above summarizes all that is necessary for a morph target implementation.
However, in many scenes only some of the weights change each
frame. Because of this, we want to avoid wastefully recalculating the entire
contribution from all target poses every frame. A better approach is to keep the
current pose in memory and track the changes in the weight vector.
For a change in frame time h, the new position is equal
to the current pose position plus the change in this position:
    v_i(t + h) = v_i(t) + \Delta_h[v_i](t).
We also see that the per-frame change in the position depends only on the per-
frame changes in the weights:
    \Delta_h[v_i](t) = \sum_{k=0}^{N} \Delta_h[w_k](t) \cdot (p_{k,i} - b_i).
Using this information, we develop an approach where we only need to compute
and update the pose with the per-frame contribution of the weights that have
changed, i.e., those for which \Delta_h[w_k](t) \neq 0.
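A sketch of this incremental update, under the same hypothetical single-float layout as before: the stored pose is advanced using only the pre-computed difference meshes and the nonzero weight deltas, so unchanged weights cost nothing.

```c
#include <assert.h>

#define NUM_TARGETS 2
#define NUM_VERTS   3

/* Pre-computed difference meshes, diff[k][i] = p_{k,i} - b_i. */
static const float diff[NUM_TARGETS][NUM_VERTS] = {
    { 1.0f, 0.0f, 0.0f },
    { 0.0f, 2.0f, 0.0f },
};

/* v_i(t+h) = v_i(t) + sum_k dw_k * diff_{k,i}; zero deltas are skipped. */
static void update_pose(float pose[NUM_VERTS], const float dw[NUM_TARGETS]) {
    for (int k = 0; k < NUM_TARGETS; ++k) {
        if (dw[k] == 0.0f)
            continue;               /* only changed weights contribute */
        for (int i = 0; i < NUM_VERTS; ++i)
            pose[i] += dw[k] * diff[k][i];
    }
}
```

Applying a sequence of weight deltas this way yields the same pose as re-blending from scratch with the accumulated weights, which is what makes the incremental formulation safe.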
4.4 Implementation
This implementation uses vertex buffers bound to a transform feedback
object to store and update the current pose across frames. Unfortunately,
OpenGL ES 3.0 prevents reading from and writing to the same buffer simultaneously,
so a secondary vertex buffer is used to ping-pong the current pose (i.e., the output
buffer is swapped with the input buffer every frame). The difference meshes are
computed in a pre-processing pass by iterating over the vertices and subtracting
the base pose. Sensible starting values are loaded into the feedback vertex
buffers (in this case the base pose is used). Every frame, we update the current
pose in the vertex buffers using the changes in the weights. This update can
be performed with or without batching. Finally, we render the contents of the
updated vertex buffer as usual. We perform the computation on vertex normals
in the same fashion as vertex positions; this gives us correctly animated normals
for rendering. (See Figure 4.2.)
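The update pass might look something like the following GLSL ES 3.00 sketch. The attribute names, the fixed two-target count, and the omission of normal deltas are assumptions for illustration, not the chapter's actual shader; the outputs are the varyings captured by transform feedback into the other buffer of the ping-pong pair.

```glsl
#version 300 es
// Current pose, read from the buffer being consumed this frame.
in vec3 currentPosition;
in vec3 currentNormal;
// Per-vertex difference meshes (p_k - b), uploaded once in pre-processing.
in vec3 positionDelta0;
in vec3 positionDelta1;
// Per-frame weight changes; a zero component contributes nothing.
uniform vec2 deltaWeights;
// Captured via transform feedback into the output buffer.
out vec3 updatedPosition;
out vec3 updatedNormal;

void main() {
    updatedPosition = currentPosition
                    + deltaWeights.x * positionDelta0
                    + deltaWeights.y * positionDelta1;
    // Normals would be updated the same way from their own delta
    // attributes; passed through unchanged in this sketch.
    updatedNormal = currentNormal;
}
```

A rasterizer-discard draw with this program performs the update; the render pass then sources positions and normals from the freshly written buffer.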