In the BTF synthesis method described above, the color of a pixel from the BTF data is copied directly to the point being rendered. 2D texture synthesis works by searching the sample image for the pixel whose neighborhood best matches the neighborhood of the pixel in the image being synthesized. In general, neighborhood similarity is measured by summing the differences between corresponding pixels in the two neighborhoods. In contrast, 3D textons, by their very construction, fit the local neighborhood over images taken from a collection of viewpoints and lighting directions.
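The neighborhood-matching step can be sketched as follows. This is a minimal brute-force NumPy illustration of summed squared pixel differences; the function and parameter names are hypothetical, and practical systems accelerate this search (e.g., with tree structures or approximate nearest neighbors) rather than scanning every pixel:

```python
import numpy as np

def best_match(sample, neighborhood, size=5):
    """Return the (y, x) pixel in `sample` whose size-by-size
    neighborhood best matches `neighborhood`, using the sum of
    squared pixel differences. `sample` is a 2D grayscale array;
    pixels too close to the border are skipped for simplicity."""
    r = size // 2
    best, best_cost = None, np.inf
    h, w = sample.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = sample[y - r:y + r + 1, x - r:x + r + 1]
            cost = np.sum((patch - neighborhood) ** 2)
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```

A synthesizer would call this once per output pixel, copying the color at the returned location into the image being rendered.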
In terms of BTF synthesis, the importance of the texton method is that it enables BTF synthesis to handle arbitrary mesostructure geometry as well as arbitrary surface geometry onto which the BTF is mapped. This advance arguably pushed BTF synthesis to the practical stage. As will be described in Chapter 10, this method influenced the development of “bi-scale” (mesoscale and macroscale) precomputed radiance transfer [Sloan et al. 02].
9.3 BTF Models
The BTF synthesis methods introduced in the previous section copy an appropriate pixel value from the BTF sample data set directly to every location on the surface of the object being rendered. This approach is intuitive and easy to understand, but some form of tiling or repeated synthesis is unavoidable, simply because the surface to be rendered is normally much larger than the original BTF sample surface. Moreover, these methods assume that the BTF data is sampled densely enough for interpolation to be applicable. Preparing such BTF data is not impossible, but it may be infeasible in practice. Therefore, researchers have studied mathematical models that capture how the appearance of a texture changes with the lighting and viewing directions. Such models build on advances in image-based rendering, from which several mathematical representations have arisen that make BTF image synthesis more efficient.
9.3.1 Algebraic Analysis for Image-Based Representation
BTF synthesis is a kind of interpolation problem: it involves synthesizing a new
image from the existing images captured from nearby lighting/viewing directions.
For example, suppose images A and B were captured from the same viewpoint, but
with slightly different lighting directions. Synthesizing a new image for the same
viewpoint with a lighting direction somewhere between the lighting directions of
A and B is a straightforward interpolation problem. Simple linear interpolation between the pixels of A and B can be applied to synthesize the new image (although the visual plausibility depends on the particular surface and lighting conditions).
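This interpolation can be sketched in a few lines of NumPy. The sketch below blends the two images by the angular fraction of the new lighting direction between the two captured directions; the function name and weighting scheme are illustrative assumptions, not a specific published BTF method:

```python
import numpy as np

def blend_for_lighting(img_a, img_b, light_a, light_b, light_new):
    """Linearly blend images A and B for an in-between lighting
    direction. The blend weight t is the angular fraction of
    light_new along the arc from light_a to light_b; all light
    directions are unit 3-vectors. (Illustrative sketch only.)"""
    ang = lambda u, v: np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    total = ang(light_a, light_b)
    t = 0.0 if total == 0.0 else ang(light_a, light_new) / total
    # t = 0 reproduces img_a; t = 1 reproduces img_b.
    return (1.0 - t) * img_a + t * img_b
```

For a lighting direction halfway between the two captures, this reduces to averaging the two images pixel by pixel.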