Approaches to Geometric Modeling (Basic Computer Graphics) Part 8

Round-off Error and Robustness Issues

Accuracy and robustness are important issues in numerical computations. For an overview of some common pitfalls see [McCa98]. Because geometric modeling involves a great many numerical algorithms, one wants to minimize the potential for round-off errors and avoid situations where small changes can lead to radically different results. To accomplish this one must choose carefully between different algorithms and representations for objects. In this section we only scratch the surface of this difficult subject. Currently, solutions to numerical problems tend to be targeted at specific areas; general solutions are still needed. In the meantime it is largely up to users to be aware of potential problems and to take steps on their own to avoid them.

One of the first things one learns about real numbers when programming is that one almost never tests for equality but rather asks whether numbers are close enough. Modeling systems tend to use plenty of epsilons, small positive constants that define when quantities are to be considered equal. Getting correct yes/no answers when one only has approximations is not easy, so often the mathematical answer to a geometric problem is short and simple but the computer program that implements it is much longer and messier. For example, determining whether a point in the plane lies on a line is trivial mathematically: one simply checks whether the point satisfies the equation of the line, which amounts to checking whether some expression equals zero. In a computer program, testing for zero would probably be too strong a test, and one would be satisfied if the expression is suitably small. This might not cause any problems by itself, but errors can propagate. Treating two lines as parallel if they are only almost parallel might be all right, but if each line in a sequence of lines is almost parallel to the next, then the first and the last might be quite far from parallel. Furthermore, no matter how small the epsilons, there is a potential for inconsistencies in the geometric database. In Figure 5.48, if the segment AB is close to being parallel to the top face f of the solid, then in the process of intersecting it with the solid, one may conclude that AB intersects the face f in the segment CD. On the other hand, one may get a more accurate intersection with the side face g and conclude in that case that the line does not intersect CD. This might leave an incorrect description of AB in the database as a composition of three segments. The author has personally known of commercial CAD systems that (at least in their initial version) would crash in some constructions that involved solids touching each other along essentially parallel faces.
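For illustration, tolerance tests of the kind described above might look as follows (a minimal sketch; the epsilon values are arbitrary choices that would be tuned per application in practice):

```python
import math

def nearly_equal(a, b, rel_eps=1e-9, abs_eps=1e-12):
    # Combined relative/absolute tolerance: the absolute epsilon handles
    # comparisons near zero, where a purely relative test would fail.
    return abs(a - b) <= max(abs_eps, rel_eps * max(abs(a), abs(b)))

def point_on_line(p, a, b, eps=1e-9):
    # Test whether point p lies on the line through a and b by checking
    # that the cross product is "suitably small" rather than exactly zero.
    # The tolerance is scaled by the segment length so the test does not
    # depend on the units of the data.
    ax, ay = a
    bx, by = b
    px, py = p
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    scale = math.hypot(bx - ax, by - ay)
    return abs(cross) <= eps * max(scale, 1.0)
```

An exact test such as `0.1 + 0.2 == 0.3` fails in floating point, while the tolerance versions above give the mathematically expected answer.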


Maintaining the orthogonality of orthogonal matrices is another problem in computer graphics. Orthogonal matrices may become nonorthogonal after numerous transformations. This is a serious problem in robotics, where moving joints means transforming matrices. What one needs to do here is maintain a count of the number of transformations that have been performed and then periodically reorthogonalize the matrices. Of course, since one does not know which values are incorrect, the new matrices, although orthogonal, are probably not what they should be mathematically.
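Reorthogonalization can be done, for example, with Gram-Schmidt; here is a minimal sketch for 3 x 3 matrices (one choice among several; an SVD-based projection to the nearest orthogonal matrix is another common option):

```python
def reorthogonalize(m):
    # Gram-Schmidt on the rows of a 3x3 matrix that has drifted from
    # orthogonality after many composed transforms.  The result is
    # orthogonal but, as noted above, not necessarily the matrix one
    # "should" have mathematically.
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def sub(u, v, s):
        # u - s*v componentwise
        return [a - s * b for a, b in zip(u, v)]

    def normalize(u):
        n = dot(u, u) ** 0.5
        return [a / n for a in u]

    r0 = normalize(m[0])
    r1 = normalize(sub(m[1], r0, dot(m[1], r0)))
    r2 = normalize(sub(sub(m[2], r0, dot(m[2], r0)), r1, dot(m[2], r1)))
    return [r0, r1, r2]
```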

Figure 5.48. Intersection inconsistencies due to round-off errors.

Numerical analysis is clearly the first place to look for answers about accuracy in a world of approximations. A lot is known about the accuracy of the output of a computation given the accuracy of the input. For example, to get precise answers with linear problems one would have to perform computations using four to five times the precision of the initial data. In the case of quadratic problems, one would need forty to fifty times the precision. Sometimes there may be guidelines that help one improve the accuracy of results. Given the problems with floating point arithmetic, one could try other types of arithmetic.

Bounded Rational Arithmetic. This is suggested in [Hoff89] and refers to restricting numbers to being rational numbers with denominators that are bounded by a given fixed integer. One can use the method of continued fractions to find the best approximation to a real by such rationals.
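A sketch of the continued-fraction approach follows. It returns the last convergent whose denominator stays within the bound; strictly speaking, the best bounded approximation can be a semiconvergent, which this simplified version ignores.

```python
from fractions import Fraction

def best_bounded_rational(x, max_den):
    # Approximate x by a rational with denominator <= max_den using the
    # convergents of the continued-fraction expansion of x.  Convergents
    # p_k/q_k satisfy the standard recurrence
    #   p_k = a_k*p_{k-1} + p_{k-2},   q_k = a_k*q_{k-1} + q_{k-2}.
    p0, q0, p1, q1 = 0, 1, 1, 0
    r = x
    while True:
        a = int(r)
        p2, q2 = a * p1 + p0, a * q1 + q0
        if q2 > max_den:
            break                     # next convergent exceeds the bound
        p0, q0, p1, q1 = p1, q1, p2, q2
        if r == a:
            break                     # expansion terminated exactly
        r = 1.0 / (r - a)
    return Fraction(p1, q1)
```

For example, bounding denominators by 1000 recovers the classical approximation 355/113 for pi.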

Infinite Precision Arithmetic. Of course, there are substantial costs involved in this.

“Exact” Arithmetic. This does not mean the same thing as infinite precision arithmetic. The approach is described in [Fort95]. The idea is to have a fixed but relatively small upper bound on the bit-length of arithmetic operations needed to compute geometric predicates. This means that one can do integer arithmetic. Although one does not get “exact” answers, they are reliable. It is claimed that boundary-based faceted modelers supporting regularized set operators can be implemented with minimal overhead (compared with floating point arithmetic). Exact arithmetic works well for linear objects but has problems with smooth ones. See also [Yu92] and [CuKM99].
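As an illustration of the flavor of this idea (not the scheme of [Fort95] itself), here is the standard two-dimensional orientation predicate evaluated on integer coordinates, e.g., fixed-point scaled input data. Because the predicate is a small polynomial of bounded degree, integer arithmetic gives a sign that is always reliable:

```python
def orient2d(a, b, c):
    # Sign of twice the signed area of triangle (a, b, c), with points
    # given as integer (e.g., fixed-point) coordinate pairs.  With exact
    # integer arithmetic the returned sign is reliable:
    #   1 = left turn, -1 = right turn, 0 = exactly collinear.
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (det > 0) - (det < 0)
```

The same expression evaluated in floating point near collinear configurations can return an inconsistent sign, which is precisely the kind of failure the exact-arithmetic approach rules out.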

Just knowing the accuracy is not always enough if it is worse than one would like. Geometric computations often involve many steps. Rather than worrying about accuracy only after data structures and algorithms have been chosen, one should perhaps also use accuracy as one criterion for choosing the data structures and algorithms.

One cause for the problem indicated in Figure 5.48 is that one often uses different computations to establish a common fact. The question of whether the line segment intersected the edge of the cube was answered twice – first by using the face f and second by using the face g. The problem is in the redundancy in the representation of the edge and the fact that the intersection is determined from a collection of isolated computations. If one could represent geometry in a nonredundant way, then one would be able to eliminate quite a few inconsistency problems. Furthermore, the problem shown in Figure 5.48 would be resolved if, after one found the intersection with face f, one would check for intersections with all the faces adjacent to f and then resolve any inconsistencies.

We give a brief overview of an approach to achieving robust set operations on linear polyhedra described in [HoHK89] and [Hoff89]. It involves both data structures and algorithms. Their data structure allows for nonmanifold objects and specifies the following information about vertices, edges, and faces:

vertex:

the adjacent edges and faces

edge:

the bounding vertices and adjacent faces with the faces listed in a contiguous cyclical order with respect to their intersection with a plane orthogonal to the edge

face:

the bounding edges and vertices as circular lists with the face to the right

Normal planes to edges and the planes associated to faces are assumed to be oriented appropriately. To achieve irredundancy of information, planes are defined via equations that are oriented so that the associated normals point out of the associated solid. Vertices are defined as the intersection of the planes to which their adjacent faces belong. Edges are oriented and defined by their bounding vertices. The entire geometry is defined by a unique set of plane equations.

Since all Boolean set operations can be defined by the intersection and complement operation, the interesting operation is intersection. We consider the simplest, but most important, case of finding the intersection of two solids X and Y with connected boundaries. The initial overall strategy is

(1)  If no pair of faces from X and Y intersect, check if one solid is contained in the other and quit.

(2)  Intersect the boundaries of X and Y. For each face f of X that intersects Y, find the cross-section of Y with respect to the plane of f. Determine the part of X ∩* Y contained in that cross-section. These regions will be defined by points and line segments lying in the boundary of X.

(3)  The cells obtained in Step (2) will also lie in faces of Y. We use these cells to determine the subdivision of those faces of Y. Then using face adjacency information for Y, we find and add all the faces of Y lying in the interior of X.

(4)  Assemble all the intersection faces into the solid X ∩* Y.

The problem with Step (2) is that intersections are computed in isolation, which can lead to the inconsistencies mentioned above. Therefore, Steps (2) and (3) are replaced by:

(2′) For each intersecting pair of faces f and g from X and Y, respectively, determine the points and segments in which they intersect. Using three-dimensional neighborhood information for each intersection, the relevant parts are then transferred to all adjacent faces of X and Y.

(3′) Finally, those faces of either solid that are contained in the other are found using face adjacency information for the solids.

Bounding boxes are used for faces to speed up the algorithm and avoid unnecessary checks for face/face intersections along with efficient algorithms for determining whether intervals intersect. The most work takes place in Step (2). It is here that one makes sure that each element of the intersection is computed in a consistent way with respect to all adjacent cells. For intersecting faces one has to
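The interval and bounding-box rejection tests mentioned here are one-liners; a sketch (the eps tolerance is an illustrative assumption, not part of the cited algorithm):

```python
def intervals_intersect(a0, a1, b0, b1, eps=0.0):
    # Closed intervals [a0,a1] and [b0,b1] overlap iff each one starts
    # before the other ends; eps widens the test for tolerance work.
    return a0 <= b1 + eps and b0 <= a1 + eps

def boxes_intersect(box_a, box_b, eps=0.0):
    # Axis-aligned bounding boxes given as (min_corner, max_corner)
    # tuples.  A cheap rejection test applied per axis before any
    # expensive exact face/face intersection work.
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(
        intervals_intersect(amin[i], amax[i], bmin[i], bmax[i], eps)
        for i in range(len(amin))
    )
```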

(1)    find and analyze the points and segments in the intersection,

(2)    transfer the results to adjacent faces in X and Y, and

(3)    link the various intersection pieces into complete face and edge descriptions.

These steps involve careful analysis of neighborhoods of cells. The six major cases arise from face/face, face/edge, face/vertex, edge/edge, edge/vertex, and vertex/vertex intersections.

The authors of [HoHK89] reported that when the algorithm we just described was implemented, very substantial improvements in robustness were realized compared with other modelers. Their test cases included such typically difficult cases as finding the intersection of two cubes, where the second is a slightly rotated version of the first. We refer the reader to that paper and [Hoff89] for additional ideas about dealing with accuracy and robustness that we do not have space to get into here. More papers on robustness can be found in [LinM96]. See also [DeSB92] and [EdaL99]. Often the problems we have talked about are caused by special cases or degeneracies of some sort. There is no problem determining whether two lines intersect if they are reasonably skew. Therefore, perhaps one can always arrange things so that they are, or so that objects are not almost touching, etc., by perturbing them slightly. This is the basis for an approach to robustness described in [EdeM90]. Objects are put into general position by a small perturbation; however, the perturbation is done symbolically. Nothing is actually ever perturbed.

Finally, because conics are such an important class of spaces in modeling, we finish this section with some facts about the robustness of some of their standard representations. Four common ways to define conic curves are:

(1)    via the general quadratic equation

(2)    in standard form at the origin along with a transformation

(3)    via a few points and/or reals (For example, one can define an ellipse in terms of its center, the direction of its major axis, and its major and minor axis lengths.)

(4)    via projective geometry type constructions

Which is best? It is well known that (1) is by far the worst representation. Changing coefficients even slightly can, in certain circumstances, lead to incorrect conclusions about the type of the conic. According to [Wils87], (2) and (3) are the best, with (3) slightly better.
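A small illustration of why representation (1) is fragile: the type of the conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 is governed by the sign of the discriminant B^2 - 4AC, so perturbing the last digit of one coefficient can flip the classification (a sketch; the eps parameter is an arbitrary tolerance, not a standard value):

```python
def conic_type(A, B, C, eps=0.0):
    # Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by the sign of the
    # discriminant B^2 - 4AC (degenerate cases are ignored here).
    d = B * B - 4 * A * C
    if d < -eps:
        return "ellipse"
    if d > eps:
        return "hyperbola"
    return "parabola"
```

With A = 1 and B = 0, the parabola y = x^2 has discriminant exactly zero, yet a perturbation of C on the order of 1e-12 reclassifies it as an ellipse or a hyperbola.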

Algorithmic Modeling

Sections 5.3.1-5.3.9 discussed various specific approaches to geometric modeling. This section takes a more global view and tries to identify some unifying principles behind some of the details. Specifically, the relatively recent generative modeling scheme and the natural phenomena and physically based modeling schemes are examples of what is referred to as algorithmic or procedural modeling in [GHSV93]. Algorithmic modeling refers to that part of geometric modeling where one uses algorithms to define and manipulate objects or functions rather than nonconstructive definitions.

Sometimes one has a choice: even when a space can be described by a function that parameterizes it or defines it implicitly, there are reasons why it might be useful to generate the space algorithmically:

(1)    Not all spaces can be computed, and sometimes, as in the case of fractals, one can only describe them by a limiting process.

(2)    The geometry of a space sometimes changes over time or due to external influences and the best way to describe this may be algorithmically.

(3)    Some complex geometric structures are too complicated to describe by functions.

(4)    Algorithmic descriptions may give rise to added flexibility and power.

In [GHSV93] algorithmic models are divided into four main classes.

Geometry-based models:

Generative modeling is an example. Its models are parameterized by properties and transformations.

Functional-based models:

These models are defined by functions and modified by other functions. Texture functions are an example of how functions can modify geometry.

Grammar-based models:

Here the structure is defined by a language and a grammar for that language. The grammars can be divided into geometric (fractals and their initiator/generator paradigm) and topological (graftals) grammars.

Physics-based models:

The models are controlled by the laws of physics. See Section 5.5. Also included here are particle systems, deformable models (elastic/inelastic), and constraint systems.

For more details see [GHSV93].

Looked at abstractly, what the theory of algorithmic modeling adds to “standard” geometric modeling is a symbol generation mechanism that can be either deterministic or probabilistic. To formalize this we must generalize the notion of a Turing machine to one that can deal with continuous functions.

Although the theory of computation is a well-developed subject, the classical theory of Turing machines deals with discrete computations. Here, an analysis of the complexity of a computation takes into account the size of numbers in terms of the number of bits in their binary representation. On the other hand, when we deal with continuous geometry, as we do in geometric modeling, it would be nice to make our baseline the reals and to consider the cost of basic arithmetic operations, such as multiplication, independent of their “size.” To put it another way, since geometric shapes are usually defined by parameterizations or implicitly, we would like to have a concept of computability based on reals and appropriate “computable” continuous functions. We would then like to ask the same types of questions as in the discrete Turing machine case. For example, which topological spaces are “computable” in this setting? In recent years, such a continuous theory of computation has been developed that formalizes the notion of a machine over the reals. See [BlSS89] or [GHSV93]. We would like to mention a few of the highlights of the theory in this section.

Figure 5.49. State diagram for Example 5.11.1.

Figure 5.50. Computing the greatest integer in x.

We motivate our definition of a machine over the reals with two examples from [BlSS89].

5.11.1    Example. Let g : C → C be a complex polynomial map. Since the highest-order term of g dominates, one can show that there exists a positive constant C_g so that |z| ≥ C_g implies that |g^k(z)| → ∞ as k goes to infinity. Figure 5.49 shows a flow chart for an algorithm based on g. Right now we prefer to think of it as a state diagram for a machine M. Clearly, M will halt precisely on those inputs z for which |g^k(z)| → ∞ as k goes to infinity.

5.11.2    Example. Figure 5.50 shows an algorithm that computes the greatest integer in x for x ≥ 0. We again shall think of it as a state diagram of a machine that operates on a pair (k,x). To find the greatest integer in x, one inputs (0,x).
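The machine of Example 5.11.2 is easy to simulate directly as an ordinary program; the following sketch mirrors the state diagram (input node, computation node, branch node, output node):

```python
def floor_machine(x):
    # Simulation of the machine of Example 5.11.2 for x >= 0.
    # Input node:       x is mapped to the state (0, x).
    # Computation node: (k, x) is mapped to (k + 1, x - 1).
    # Branch node:      loop until x < 1.
    # Output node:      return the counter k.
    k = 0
    while x >= 1:
        k, x = k + 1, x - 1
    return k
```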

What is notable about the two examples here is the state diagram of the “machines” that carried out the associated algorithms. This is what we want to generalize now.

Let R be an ordered commutative ring with unity. Because there is no space to go into lots of details, the definitions we give here will not be as general as they could be but are aimed at the special cases R = Z or R.

Definition. A machine M over R consists of three sets I = Rm (the input space), O = Rn (the output space), and S = Rk (the state space), along with a finite connected directed graph G whose nodes are one of four types:

Input node:

A node with no incoming edges and one outgoing edge. It is associated with a map i from I to S that places the input in the state space.

Output node:

A node with no outgoing edges. It is associated with a map o from S to O that reads the answer off the state.

Computation node:

A node with one outgoing edge. It is associated with a polynomial map g from S to S with coefficients in R.

Branch node:

A node with two outgoing edges. It is associated with a polynomial h on S, and which edge is followed is determined by whether or not h(s) ≥ 0 at the current state s.

By expressing the definition of a machine over a ring graphically, it is not as complicated as it may sound. It leads to state diagrams just like in the Turing machine case. The two examples above are, in fact, special cases of a machine M over the reals. In Example 5.11.1, we identify the complex numbers C with R2. The input, output, and state spaces of M are all R2. The function i associated to the input node is the identity map, the computation node applies g, and the branch node tests whether |z| ≥ C_g. In Example 5.11.2, the input, output, and state spaces are R, R, and R2, respectively. The function associated to the input node is the function from R to R2 that maps x to (0,x). At the computation node, the associated function maps (k,x) to (k+1,x-1), and the branch node tests whether x < 1. The output function is the map from R2 to R that maps the state (k,x) to k.
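A sketch of the machine of Example 5.11.1 as an ordinary program. This is only an illustrative simulation: a genuine machine over R has no step bound, so the max_steps cap and the None return value are artifacts of the simulation, not part of the formal machine.

```python
def escape_machine(z, g, c_g, max_steps=10_000):
    # Simulation of the machine of Example 5.11.1: repeatedly apply the
    # polynomial g and halt when |z| >= c_g.  The halting set consists of
    # exactly those inputs whose orbit escapes to infinity.
    for _ in range(max_steps):
        if abs(z) >= c_g:
            return z      # reached the output node: the orbit escapes
        z = g(z)
    return None           # did not halt within the simulation budget
```

For g(z) = z^2 one may take C_g = 2: points of modulus at least 2 have orbits that escape, while, for example, the orbit of 0.5 converges to 0 and the machine never halts on it.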

Now, given a machine M over a ring R with node set N, define a partial map

G : N × S → N × S

by

G(n,s) = (n′, g_n(s)),

where n′ is the next node of the computation and

(1)   G is not defined if n is an input or output node.

(2)    If n is a computation node, then g_n is the polynomial map associated to n and n′ is the node at the end of the unique outgoing edge of n.

(3)    If n is a branch node, then g_n is the identity map on S and n′ is the node at the end of one of the two outgoing edges of n, chosen according to whether or not h_n(s) ≥ 0, where h_n is the polynomial associated to n.

If we allowed rational functions in (2), then g_n might not be defined at s because its denominator vanishes there. It turns out that it is easy to overcome this technical problem, and so there is no loss of generality if we assume that g_n is always defined on any value where we want to evaluate it.

Definition. A computation of length t is a sequence

(n_0, s_0), (n_1, s_1), . . . , (n_t, s_t),    (5.8)

where n_0 is the input node, s_0 = i(x) for some x in I, and (n_{j+1}, s_{j+1}) = G(n_j, s_j) for 0 ≤ j < t. The set

Ω_M = { x in I | the computation that starts with s_0 = i(x) reaches an output node }

is called the halting set of M. The input-output map

φ_M : Ω_M → O

is defined as follows: let x belong to Ω_M and let (5.8) be the computation with s_0 = i(x) and n_t an output node. Then

φ_M(x) = o(s_t),

where o is the map associated to the output node n_t.

Definition. A map f : A ⊆ R^m → R^n is said to be computable over R if there exists a machine M over R so that φ_M = f. In this case we say that M computes f.

Definition. A set A ⊆ R^n is recursively enumerable over R if A = Ω_M for some machine M over R. It is decidable if it and its complement are both recursively enumerable over R; otherwise it is said to be undecidable.

One can show that decidability of a set A is equivalent to its characteristic function being computable. Lots of interesting results about machines over a ring are proved in [BlSS89]. The ones most interesting for us here will now be mentioned.

5.11.3 Theorem. A recursively enumerable set over R is the countable union of semialgebraic sets.

Proof. See [BlSS89].

5.11.4  Corollary. The halting set of a machine over R is the countable union of disjoint semialgebraic sets. The input-output map for the machine is a piecewise polynomial map.

Since one can show that the Hausdorff-Besicovitch dimension of a semialgebraic set is an integer, we also get

5.11.5  Corollary. The halting set of a machine over R has integral Hausdorff dimension.

Corollary 5.11.4 indicates why it is reasonable to stay with semialgebraic sets in traditional geometric modeling. Corollary 5.11.5, on the other hand, shows that we do not get fractals in this way. The next result tells us something about the sets one encounters when trying to define fractals.

5.11.6 Theorem. The basin of attraction of a complex rational function g : C → C (which means the union of the basins of attraction of all attractive periodic points) is a recursively enumerable set over R. In particular, it is the countable union of semialgebraic sets.

Proof. See [BlSS89]. It can be shown that g has only a finite number of attractive periodic points and that there is a polynomial h so that a point is in the basin of g if and only if h(z) < 0 for some point z in its orbit. The theorem is proved using this h and a machine very similar to the one in Example 5.11.1.

Not all basins of attraction are decidable. In fact, it is shown in [BlSS89] that the Julia set and most of its analogs are not recursively enumerable. On the other hand, one can compute semialgebraic set approximations to Julia sets.

Conclusions

In the 1970s and 1980s most modelers were based on either the boundary or CSG representations or both. Here is a summary of the differences between these two representations. Roughly speaking, the advantages of the boundary representation are disadvantages for the CSG representation and vice versa.

Advantages of b-reps:

(1)    It is good for efficient rendering algorithms.

(2)    It can handle general “free-form” or “sculptured” surfaces, that is, “curved” surfaces that typically are defined implicitly or via parameterizations.

(3)    It is good for local modification of models.

Disadvantages of b-reps:

(1)    It takes a lot of data structures and space to define objects.

(2)    Object definitions tend to be complicated.

(3)    Verification of validity is difficult.

Advantages of CSG:

(1)    It is very compact.

(2)    It is a natural way to define many objects and “perfect” for most mechanical engineering parts.

(3)    Validity checking is “built in.”

Disadvantages of CSG:

(1)    It is not good for rendering because one needs a separate boundary evaluation algorithm.

(2)    It may not be easy to define the motions that place objects in position for the Boolean set operations.

(3)    It is impractical to use for sculptured surfaces or solids bounded by such surfaces except in the most trivial cases. With the typical set of primitives, the best that one could do for such objects, as for example an airplane wing, is get a very inefficient approximation.

(4)    The definition of what is a “face” is more complicated.

Some modelers were hybrid systems that used the best features of boundary and CSG representations. In fact, the user interfaces for modelers are reasonably standard and hide the actual representation that is used. It is like a database program, where the user usually does not really know (or care) whether it is truly relational or not. The only way one might get an inkling of the representation on which a modeler is based is from the speed and ease of completing certain queries. For example, boundary representations have an easier time with queries that deal with faces. A hybrid system does have problems, however:

(1)    It must be able to convert between the different representations, and the b-rep-to-CSG conversion is very hard.

(2)    It must maintain consistency between representations. This limits its coverage. For example, if a b-rep object came from a CSG representation and one modifies it using a parametric surface for blending, the original CSG structure can probably not be kept consistent with it. See [ShaV95].

Initially, the typical operators in b-rep modelers were the set operations basic to CSG, but gradually more and more operations that were not easy to implement in a CSG modeler, such as local blending operations, were introduced. This has caused pure CSG modelers to disappear, probably also because of the many advantages of using spline surfaces, especially NURBS surfaces, and the fact that there is no general b-rep-to-CSG algorithm. The result is that most modelers are now b-rep based. Volume-based modelers will also probably become more prevalent in the future with faster computers and large amounts of memory. Nevertheless, CSG has had a fundamental impact on the way that one views geometric modeling. CSG can be viewed as an abstract description of objects, and so, whether or not a modeler is based on it, user interfaces will continue to support it. It should be pointed out that the parametric modeling systems that developed did not entirely avoid the problems found in the dual b-rep/CSG-rep systems. When a slot in the middle of a block is moved to the edge of the block, it and the associated blend will disappear. Shapiro and Vossler ([ShaV95]) argue that such difficulties are caused by the fact that the concept “parametric family” is not well-defined. A designer may not be able to predict whether certain parameterizations will remain valid throughout the design process. A lot more work needs to be done in this area if one wants a design process that does not require human intervention in the parametric structure of an object.

A modeler’s ability to edit a model is extremely important to a user. Finding a good way to do this is the basis of a lot of current research. We briefly discussed the medial axis representation. Another approach called Erep is described in [GHSV93]. Its goal is to be an editable, high-level textual representation for feature based solid modeling. It is a representation that is independent of the underlying modeler.

Figure 5.51. Various types of edits.

Editing involves performing operations on models. Below are some typical local operations supported by modeling systems:

(1)    extrusions, or more generally sweeps (Figure 5.51(a))

(2)    beveling or defining wedges (Figure 5.51(b))

(3)    blending (see Section 15.6), which is a general operation that involves finding blending curves or surfaces.

A blending curve is one that smoothly connects the endpoints of two curves.

A blending surface is one that smoothly connects two surfaces along two given curves, one in each surface, and meets the surfaces tangentially at those curves.

Fillets are a special case of blending. See Figure 5.51(c). Filleting refers to a “rounding” along an edge of a concave corner. This could mean, e.g., matching a circle to a corner which is tangent to the edges.

Chamfering refers to “rounding” at a vertex or along a convex edge. See Figure 5.51(d).

The term “rounding” itself has sometimes been used to mean chamfering.

(4)    local deformations of an edge or a face

(5)    undoing one or more operations to restore a model to a previous state

(6)    skinning (see Section 14.7)

(7) transforming shapes by scaling, mirroring, rotating, translating, . . .

In addition to the geometric operations, commercial modeling systems must also have the ability to annotate the geometry so that users can produce standard mechanical drawings. As a matter of fact, dimensioning and tolerancing is not an afterthought, but an important aspect of the total design and manufacturing process. Dimensions specify the “nominal” or perfect geometry of an object. Since one can never create the perfect object, one needs to be able to specify tolerances within which it would be considered acceptable. Other annotations relate to the type of material of an object and how it is to be rendered – in what color, etc. The representation schemes described in this topic dealt with dimensions. The representation problem for tolerances is much harder. Coming up with a rigorous theoretical foundation for what humans have been doing manually for hundreds of years has been a nontrivial task and many problems remain as one moves towards the goal of automating annotations. See [Just92] for a brief overview of this topic.

Another property that can distinguish modelers is whether they use exact or faceted representations of objects in computations. Many modelers use a faceted representation for display purposes because the display hardware has been optimized to deal with that. The display itself is not the issue here, but rather, whether an algorithm, such as one that determines if two objects intersect, uses a faceted or exact representation of these objects. Finding the intersection of linear spaces is much easier than doing this for curved spaces and so many modelers did this in the 1980s. There are potentially serious problems with this however. To give a two-dimensional example, consider what happens when slightly overlapping circles are approximated by polygons. If they are rotated, an intersection algorithm would sometimes find an intersection and sometimes not, like in Figure 5.52. An incorrect determination of this type could cause problems if passed on to another operation.
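The two-dimensional phenomenon of Figure 5.52 is easy to reproduce: approximate a circle by an inscribed regular polygon and test a point that lies between the polygon's apothem and the true radius. Whether the point registers as inside then depends on how the polygon happens to be rotated (a self-contained sketch):

```python
import math

def point_in_faceted_circle(p, center, r, n, rot=0.0):
    # A circle approximated by a regular n-gon inscribed in it.  The
    # polygon boundary lies between the apothem r*cos(pi/n) and r, so a
    # point at distance just under r from the center is inside the true
    # circle but inside the polygon only for some rotations rot.
    cx, cy = center
    verts = [(cx + r * math.cos(rot + 2 * math.pi * i / n),
              cy + r * math.sin(rot + 2 * math.pi * i / n))
             for i in range(n)]
    # Convex point-in-polygon test: p must lie to the left of every
    # directed edge of the counterclockwise polygon.
    for i in range(n):
        ax, ay = verts[i]
        bx, by = verts[(i + 1) % n]
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0:
            return False
    return True
```

For a 16-gon of radius 1, the apothem is cos(pi/16), about 0.981, so a point at distance 0.99 from the center is found inside for one rotation and outside for another, exactly the inconsistency described above.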

This topic has presented an overview of the evolution of geometric modeling. Although we have come a long way, the current state of the field is far from satisfactory. Attempts have been made to develop theoretical foundations so that one can talk about issues in a rigorous way, but by and large the field is still advancing in an ad hoc way. Some specific defects are:

(1)    R-sets may be inadequate. One may want to be able to represent nonmanifold solids with dangling edges or faces. Simply enlarging the domain would still leave unanswered the question of which operations to allow and what they would preserve. See [GHSV93].

(2)    In practice the definition of a representation scheme r is rather ad hoc. Usually only r^-1 is well-defined and r itself is not. It is hard to compare different representation schemes. For example, when converting from a boundary to a CSG representation, which CSG representation of the object does one want? Which is the “correct” one? Ad hoc answers are not good enough. See [Shap91] and [GHSV93].

Figure 5.52. An intersection problem with faceted circles.

(3) A theoretical foundation for operations on objects is lacking or not well integrated into the representation scheme concept.

From a very global point of view however, a basic bottleneck is computer power. The fact is that (at least in some important aspects) there is a great body of known mathematics about geometry and topology that is central to doing geometric modeling “right” and which is simply waiting for the time that computers are fast enough to make dealing with it feasible. It is easy to see this relationship between progress in geometric modeling and the development of speedier computers. One simply has to look at the fantastic advances in rendering photorealistic images. This is not to say that no innovations are needed. The use of computers has brought along with it a host of additional problems that have to be solved while at the same time creating opportunities for new ways of understanding. An example of the former is the fact that computers do not have infinite precision arithmetic, so that algorithms that are mathematically simple become very tricky to implement when one has to worry about round-off errors. An example of the latter is the ability to use computers to visualize data in ways that were not possible before. This by itself can lead to advances in knowledge. Coming up with good user interfaces is also a great challenge. Nevertheless, if computers could just deal with all the mathematics related to geometric modeling that is known right now we would be a giant step further along. Along these lines, two features that modeling systems should support but do not because the algorithms are too expensive computationally are:

(1)  The ability to differentiate between objects topologically.

(One would need to implement the basic machinery of algebraic topology.)

(2) The ability to represent space and objects intrinsically.

Another aspect of this is that one should represent not only objects but the space in which they are imbedded. With the rise of volume rendering we are beginning to see some movement on that front.

Given the certainty of rapid advances in hardware, geometric modeling has an exciting future.

Finally, here are some “philosophical” comments, alluded to before, having to do with the way that one should approach the subject of geometric modeling ideally. One of the things one learns from mathematics is that whenever one introduces a certain structure, be it that of a group, vector space, or whatever, it has always been fruitful to define maps that preserve that structure, like homomorphisms, linear transformations, and so on, respectively. The sets and maps are in a sense interchangeable. One could start with a class of maps and define a structure in terms of that which is left invariant by the maps. Furthermore, new structures and maps are usually studied by associating simpler invariants to them. A perfect example of this is the field of algebraic topology. In computer science one looks for “representations” and everything boils down to finding suitable representations for abstract concepts, representations that mimic the abstract concepts in every way.

Figure 5.53. Commutative diagrams for representations.

The point of these remarks is encapsulated by Figure 5.53. Ideally, one starts with abstract sets and transformations and then represents them by other sets and transformations in such a way that everything is preserved, meaning that one gets commutative diagrams. In the context of geometric modeling, there are objects, transformations of objects, and functions on these objects. Representations act on maps as well as sets and need to produce commutative diagrams. It would be nice if representations were what are called functors between categories, but they are not. (The terms “functor” and “category” have a very definite meaning in mathematics. They help make precise the notion of commutative diagrams that we talk about at various places in this topic, but that is another story that we cannot go into here.) Nevertheless it is worth reminding the reader of what one should at least strive for, or at least keep in mind, even if it is not possible to achieve in practice. Recall our discussion in Section 5.7 and Figure 5.35.
