you also are thinking of discretizing on a grid—if you want to avoid that
confusion, it's often a good idea to use only Greek letters for your tensor
indices, e.g., α, β, γ instead.
The gradient of a function is (∂f/∂x₁, ∂f/∂x₂, ∂f/∂x₃). This is still a
bit long-winded, so we instead use the generic ∂f/∂xᵢ without specifying
what i is: it's a "free" index.
We could then write the divergence, for example, as

∑ᵢ ∂uᵢ/∂xᵢ.
This brings us to the Einstein summation convention. It's tedious to have
to write the sum symbol ∑ again and again. Thus we just won't bother
writing it: instead, we will assume that in any expression that contains the
index i twice, there is an implicit sum over i in front of it. If we don't want
a sum, we use different indices, like i and j. For example, the dot product
of two vectors u and n can be written very succinctly as
uᵢnᵢ.
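The summation convention maps directly onto NumPy's `einsum`, whose subscript strings use exactly this rule: a repeated index is summed. As a small sketch (the vectors here are made-up example data):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
n = np.array([4.0, 5.0, 6.0])

# u_i n_i: the repeated index i is implicitly summed, giving a scalar,
# which is just the dot product of u and n.
dot = np.einsum("i,i->", u, n)
assert dot == np.dot(u, n)  # 1*4 + 2*5 + 3*6 = 32
```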
Note that by expression I mean a single term or a product—it does not
include addition. So this

uᵢ + rᵢ

is a vector, u + r, not a scalar sum.
Einstein notation makes it very simple to write a matrix-vector product,
such as Ax:

Aᵢⱼxⱼ.
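In `einsum` notation the summed index j appears twice on the input side and the free index i survives to the output, mirroring the expression above (the matrix and vector are arbitrary example values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

# A_ij x_j: j is repeated and therefore summed; i is free,
# indexing the components of the result Ax.
y = np.einsum("ij,j->i", A, x)
assert np.allclose(y, A @ x)
```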
Note that the free index in this expression is i: this is telling us the ith
component of the result. This is also an introduction to second-order ten-
sors, which really are a fancy name for matrices: they have two indices
instead of the one for a vector (which can be called a first-order tensor).
We can write matrix multiplication just as easily: the product AB is

AᵢⱼBⱼₖ

with free indices i and k: this is the i, k entry of the result. Similarly, the
outer-product matrix of vectors u and n is

uᵢnⱼ.
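Both expressions translate to `einsum` strings the same way: the repeated index j in the matrix product is summed away, while the outer product has no repeated index, so both i and j remain free. A sketch with made-up example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
u = np.array([1.0, 2.0])
n = np.array([3.0, 4.0])

# A_ij B_jk: j is summed; the free indices i and k give the (i, k)
# entry of the matrix product AB.
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# u_i n_j: no index is repeated, so nothing is summed; the two free
# indices produce the outer-product matrix.
M = np.einsum("i,j->ij", u, n)
assert np.allclose(M, np.outer(u, n))
```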