# Noncommutative Field Theory in the Visual Cortex (Computer Vision) Part 3

## Probability Measure

The norm of the (normalized) Bargmann transform has a probabilistic interpretation. Hence, we can interpret the norm of the output of simple cells as the probability that the image I is in a specific coherent state. More precisely, the probability that the image has a boundary with orientation θ at the point (x, y) is expressed by

P(x, y, θ) = ||O(x, y, θ)||²,

where O denotes the output of the normalized Bargmann transform of I.

Let us explicitly note that the probability is higher if the gradient of I is higher and that this information is neurally provided by the energy output of complex cells.
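As a numerical illustration, a toy version of this probability measure can be computed by squaring the responses of oriented Gabor-like filters and normalizing. The filters below stand in for the coherent states; their size, scale, frequency, and orientation sampling are illustrative assumptions, not the model's actual receptive profiles:

```python
import numpy as np

def gabor(size, theta, sigma=3.0, freq=0.25):
    """Odd-symmetric Gabor patch, used here as a toy stand-in for a
    simple-cell receptive profile (coherent state) with orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the gradient
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * freq * xr)
    return g / np.linalg.norm(g)

def boundary_probability(image, thetas, size=9):
    """P(x, y, theta): squared inner product of each image patch with the
    oriented filter, normalized to sum to 1 over the whole phase space."""
    half = size // 2
    H, W = image.shape
    P = np.zeros((H, W, len(thetas)))
    filters = [gabor(size, t) for t in thetas]
    for i in range(half, H - half):
        for j in range(half, W - half):
            patch = image[i - half:i + half + 1, j - half:j + half + 1]
            for k, f in enumerate(filters):
                P[i, j, k] = np.sum(patch * f) ** 2
    s = P.sum()
    return P / s if s > 0 else P

# A vertical step edge: on the edge, the probability should peak at the
# orientation channel whose filter oscillates horizontally (theta = 0 here).
img = np.zeros((21, 21))
img[:, 10:] = 1.0
thetas = np.linspace(0, np.pi, 8, endpoint=False)
P = boundary_probability(img, thetas)
k_best = int(P[10, 10].argmax())
```

On the step edge the mass concentrates, as expected, on the orientation channel aligned with the boundary, and it grows with the strength of the gradient.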

When in Equation (7.3) we applied the nonmaxima suppression procedure to the function ||O(x, y, θ)||, we assigned, for each point (x, y), a deterministic value to the function θ: the value attaining the highest probability ||O(x, y, θ)||.
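The selection step can be sketched as follows. This is a minimal version that keeps, for each pixel, only the winning orientation channel; the spatial thinning that a full nonmaxima suppression would also perform is omitted:

```python
import numpy as np

def orientation_nonmax_suppression(P):
    """For each retinal point (x, y), keep only the orientation attaining
    the highest probability, assigning a deterministic orientation to theta.
    Minimal sketch: thinning across (x, y) is not performed."""
    theta_idx = P.argmax(axis=2)              # winning orientation per pixel
    mask = np.zeros(P.shape, dtype=bool)
    ii, jj = np.indices(theta_idx.shape)
    mask[ii, jj, theta_idx] = True
    return np.where(mask, P, 0.0), theta_idx

rng = np.random.default_rng(1)
P = rng.random((5, 5, 8))                     # toy probability volume
suppressed, theta_idx = orientation_nonmax_suppression(P)
```

After suppression, each retinal point carries exactly one surviving orientation value, the maximum of its orientation column.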

## The Density Operator and P-Representation

The probability P represents the structure of the image in the phase space. In order to further understand its role, we want to project it back onto the retinal 2D space, obtaining the density operator ρ, as defined, for example, in Carmichael (2002):

The operator is not local, in the sense that it does not depend only on the probability at the point (x₀, y₀, θ₀): it is obtained via convolution with the coherent states, which do not have compact support.

Because

then ρ can be represented as

which allows an interpretation of ρ(ξ, η, ξ′, η′) in terms of correlations. In fact, the function ρ is the sum over θ₀ of all correlations between the filters centered at the points (ξ, η) and (ξ′, η′), weighted by the probability measure P.

The function ρ is usually called an operator, with an abuse of language, even if it is just the integral kernel of the operator:

The representation we provide here is the P-representation, because it is a diagonal representation of the operator ρ in terms of the coherent states. We explicitly recall that we have to use this diagonal P-representation because the coherent states form an overcomplete frame. In this situation, we can apply the following theorem, due to Glauber and Sudarshan:

Theorem 7.1 The diagonal representation of ρ is invertible, in the sense that given P, we have the expression of ρ; vice versa, if ρ is known, P can be recovered in a unique way.

In this sense, the probability P and the density operator ρ are equivalent.
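The invertibility stated in Theorem 7.1 can be illustrated in a finite-dimensional toy setting: with a small overcomplete frame in the plane, the diagonal representation is a linear map from the weights P to the operator ρ, and P is recovered by solving that linear system. The frame and the weights below are illustrative choices:

```python
import numpy as np

# Finite-dimensional toy analogue of the Glauber-Sudarshan correspondence:
# rho = sum_k P_k |psi_k><psi_k| over an overcomplete frame of R^2, and P is
# recovered from rho by inverting the linear map P -> rho.
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])          # illustrative frame
frame = np.stack([(np.cos(a), np.sin(a)) for a in angles])  # 3 unit vectors

P_true = np.array([0.5, 0.3, 0.2])                          # diagonal weights
rho = sum(p * np.outer(v, v) for p, v in zip(P_true, frame))

# Each column of M is a flattened rank-1 projector; for this frame the three
# projectors are linearly independent, so the system has a unique solution.
M = np.stack([np.outer(v, v).ravel() for v in frame], axis=1)
P_rec, *_ = np.linalg.lstsq(M, rho.ravel(), rcond=None)
```

The recovered weights coincide with the original ones, mirroring in a discrete setting the one-to-one correspondence between P and ρ.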

## Tensorial Structure of the Boundaries

In order to understand the meaning of this operator when applied to an image, we will consider the simple case in which receptive profiles centered at (x₀, y₀) and with preferred orientation θ₀ are replaced by 2D vectors (cos(θ₀), sin(θ₀))ᵀ applied at (x₀, y₀) (ᵀ denoting the transposition of vectors). In this way, the previous operator becomes the one expressed in Equation (7.20)

FIGURE 7.7 Boundary representation of a gray square (a) by means of second-order tensors following Equation (7.20) (b) and by means of infinity-order tensors following the density operator Equation (7.18) (c). In this example, the dimensions of the receptive profiles are much smaller than the dimensions of the square.

Hence, ρ is different from 0 only at points of the boundary, and it has matrix values:

With this reduction, we simply associate to every point (ξ, η) a rank 2 tensor, which expresses the geometric properties of the boundary and can be considered a second-order approximation of the kernel ρ. On the other hand, because ρ defines an operator on the infinite-dimensional space of functions, we will interpret the density operator ρ as an infinity-order tensor.
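The rank 2 reduction described above can be sketched numerically: the orientation probabilities at a point are summed into the 2 × 2 tensor Σ_θ P(θ) v(θ) v(θ)ᵀ, which is a stick at a boundary point and a ball at a corner. The orientation sampling below is an illustrative choice:

```python
import numpy as np

def second_order_tensor(P_theta, thetas):
    """Second-order approximation of the kernel rho at a single point:
    T = sum_theta P(theta) v(theta) v(theta)^T, with v = (cos, sin)^T."""
    T = np.zeros((2, 2))
    for p, t in zip(P_theta, thetas):
        v = np.array([np.cos(t), np.sin(t)])
        T += p * np.outer(v, v)
    return T

thetas = np.linspace(0, np.pi, 8, endpoint=False)

# Boundary point: one orientation only -> "stick" tensor (rank 1).
P_edge = np.zeros(8)
P_edge[0] = 1.0
T_edge = second_order_tensor(P_edge, thetas)

# Corner: two orthogonal orientations -> "ball" tensor (isotropic).
P_corner = np.zeros(8)
P_corner[0] = P_corner[4] = 0.5
T_corner = second_order_tensor(P_corner, thetas)
```

The eigenvalues of T_edge are (0, 1), a stick along the boundary orientation, while T_corner is one half times the identity, an isotropic ball; note that the two distinct corner directions are no longer recoverable from T_corner, which is exactly the information the infinity-order representation preserves.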

Let us consider, for example, the image of a gray square (see Figure 7.7, top). First, the probability P has been obtained as the energy of the Bargmann transform Equation (7.20), and nonmaxima suppression has been applied. The probability P is 0 far from the boundary. Over each point (x₀, y₀) of the boundary, the probability P is a Dirac mass in the 3D cortical space, concentrated at the point (x₀, y₀, θ₀), where θ₀ is the orientation of the boundary at that point, while at the corners P is the sum of two Dirac masses.

FIGURE 7.8 The fundamental solution of the Fokker-Planck equation in the phase space. An isosurface of intensity is visualized.

In this example, the dimensions of the coherent states are much smaller than those of the square. Hence, we can apply the rank 2 tensor approximation. This provides us with a stick tensor along the boundary and a ball-shaped tensor at the corners, as in the classical approach of Medioni, Lee, and Tang (2000). Figure 7.7(b) shows the second-rank tensor field obtained by Equation (7.20).

Figure 7.7(c) shows the density operator (Equation (7.18)). The density operator is in one-to-one correspondence with the probability P by the Glauber-Sudarshan theorem, so it keeps all the information contained in the probability density distribution, and it corresponds to an infinity-order tensor field. The infinity-order tensorial representation provides us with a stick along the boundary, as in the second-order case, and with an entire cross at the corners, keeping the two distinct directions of the borders.

The density operator is then able to represent arbitrarily complex structures, as we will see in the next sections.

## Propagation of Cortical Activity and the Fokker-Planck Equation

In the section Association Fields and Integral Curves of the Structure, we observed that different points of the group are connected by the integral curves of the vector fields, cortically implemented by horizontal connectivity. Such connectivity can be modeled in a stochastic setting by the following stochastic differential equation (SDE), first introduced by Mumford (1994) and further discussed by August and Zucker (2003), Williams and Jacobs (1997a), and Sarti and Citti (2010):

(x′, y′, θ′) = (cos θ, sin θ, N(0, σ²)),

where N(0, σ²) is a normally distributed variable with zero mean and variance σ². Note that this is the probabilistic counterpart of the deterministic Equation (7.7), naturally defined in the group structure. Both systems are expressed in terms of left-invariant operators of the Lie group, the first with deterministic curvature and the second with a normal random variable as curvature. These equations describe the motion of a particle moving with constant speed in a direction that changes randomly according to the stochastic process N. Let u denote the probability density of finding a particle at the point (x, y), moving with direction X1, at the instant of time t, conditioned on the fact that it started from a given location with some known velocity. This probability density satisfies a deterministic equation, known in the literature as the Kolmogorov forward equation or Fokker-Planck (FP) equation:

∂u/∂t = −X1 u + (σ²/2) X2² u.
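Mumford's direction process described above can be sampled with a standard Euler-Maruyama scheme for the SDE dx = cos(θ) dt, dy = sin(θ) dt, dθ = σ dW; the step size, horizon, and σ below are illustrative choices:

```python
import numpy as np

def mumford_paths(n_paths=1000, n_steps=200, dt=0.05, sigma=0.3, seed=0):
    """Euler-Maruyama sampling of Mumford's direction process:
    dx = cos(theta) dt, dy = sin(theta) dt, dtheta = sigma dW.
    Particles advance at unit speed while their direction diffuses."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    theta = np.zeros(n_paths)
    for _ in range(n_steps):
        x += np.cos(theta) * dt
        y += np.sin(theta) * dt
        theta += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x, y, theta

x, y, theta = mumford_paths()
```

Starting from the origin with horizontal direction, the ensemble of endpoints drifts forward along x while fanning out symmetrically in y, which is the advection-diffusion behavior captured by the FP equation.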

In this formulation, the FP equation consists of an advection term in the direction X1, the direction tangent to the path, and a diffusion term in the orientation variable θ (X2² being the second derivative in the direction θ). This equation has been largely used in computer vision and applied to perceptual completion-related problems. It was first used by Williams and Jacobs (1997a) to compute the stochastic completion field; by August and Zucker (2003) and Zucker (2000) to define the curve indicator random field; and more recently, by Duits and Franken (2007) and Franken, Duits, and ter Haar Romeny (2007), who applied it to perform contour completion, denoising, and contour enhancement. Its stationary counterpart was proposed in Sarti and Citti (2010) to model the probability of the co-occurrence of contours in natural images.

Here we propose to use the FP equation for modeling the weights of horizontal connectivity in the primary visual cortex. For this purpose, we are not interested in the propagation in time of u, as given by Equation (7.24), but in the fundamental solution of the stationary equation

X1 u − (σ²/2) X2² u = δ,   (7.22)

where δ is a Dirac mass concentrated at the source point (x₀, y₀, θ₀).

The fundamental solution of Equation (7.22) is visualized in Figure 7.8. Equation (7.22) is strongly biased in the direction X1; to take into account the symmetry of horizontal connectivity, the model for the probability density propagation has to be symmetrized, for example, by considering the sum of the Green functions corresponding to the forward and backward FP equations.
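The symmetrized kernel can be sketched by Monte Carlo: estimate the forward and backward Green functions by histogramming endpoints of direction-process paths started at the origin, then sum the two estimates. All numerical parameters below are illustrative:

```python
import numpy as np

def green_histogram(sign, n_paths=20000, n_steps=100, dt=0.05, sigma=0.4,
                    bins=40, extent=3.0, seed=0):
    """Monte Carlo estimate of the Green function of the forward (sign=+1)
    or backward (sign=-1) FP equation, marginalized over theta, obtained by
    histogramming endpoints of direction-process paths from the origin."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    theta = np.zeros(n_paths)
    for _ in range(n_steps):
        x += sign * np.cos(theta) * dt
        y += sign * np.sin(theta) * dt
        theta += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    H, _, _ = np.histogram2d(x, y, bins=bins,
                             range=[[-extent, extent], [-extent, extent]],
                             density=True)
    return H

# Symmetrized connectivity kernel: sum of the forward and backward estimates.
G = green_histogram(+1) + green_histogram(-1)
```

By construction the summed histogram is invariant under a rotation of 180 degrees about the origin, reflecting the symmetry of the horizontal connectivity that the single forward kernel lacks.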

This model is in agreement with Sarti and Citti (2010), if we assume that the connectivity is learned from the edge distribution of natural images.

In Figure 7.9, the fundamental solution r of the Fokker-Planck equation is visualized as second-order tensors (left) and as infinity-order tensors by means of the density operator (right).

## Tensorial Structure of the Image

The probability density P(x, y, θ), the norm of the Bargmann transform, which contains the information about the boundaries of the image, is then propagated by the following equation:

X1 u − (σ²/2) X2² u = P(x, y, θ),

where the Dirac delta in Equation (7.22) has been substituted with the forcing term P(x, y, θ). Equivalently, the distribution u(x, y, θ) can be obtained by the convolution product

u(x, y, θ) = (P ∗ r)(x, y, θ),

where r(x, y, θ) is the Green function, the solution of Equation (7.22).

For the square of Figure 7.7, the resulting u(x, y, θ) fills in the entire figure and structures its interior. In Figure 7.10, a 2D projection of u(x, y, θ) is visualized by means of a rank 2 tensor field (left) and a rank infinity tensor field corresponding to the density operator (right). In both cases, the tensors induced inside the figure are similar to balls (i.e., they are more isotropic than the tensors on the boundaries). The rank infinity tensor field preserves more information about the global shape of the object: it faithfully represents the information content of the whole cortical 3D phase space after propagation by the horizontal connectivity.
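The convolution with the Green function is a group convolution, not the ordinary planar one: the kernel must be shifted and rotated according to the position and orientation of each source point. A minimal brute-force sketch on a tiny grid, with nearest-neighbor resampling and an illustrative orientation discretization:

```python
import numpy as np

def se2_convolve(P, Gamma, thetas):
    """Noncommutative group convolution on a small discretization of the
    roto-translation space: u(g) = sum_{g0} P(g0) Gamma(g0^{-1} g), where
    g = (x, y, theta) and Gamma is sampled on the same grid, centered at
    the middle pixel with theta index 0. Brute force, for tiny grids only."""
    H, W, K = P.shape
    cx, cy = W // 2, H // 2
    u = np.zeros_like(P)
    for i0, j0, k0 in zip(*np.nonzero(P)):
        c, s = np.cos(thetas[k0]), np.sin(thetas[k0])
        for i in range(H):
            for j in range(W):
                dx, dy = j - j0, i - i0
                xr = c * dx + s * dy       # displacement rotated into the
                yr = -s * dx + c * dy      # frame of the source point g0
                jj, ii = int(round(xr)) + cx, int(round(yr)) + cy
                if 0 <= ii < H and 0 <= jj < W:
                    for k in range(K):
                        u[i, j, k] += P[i0, j0, k0] * Gamma[ii, jj, (k - k0) % K]
    return u

rng = np.random.default_rng(0)
K = 4
thetas = np.linspace(0, np.pi, K, endpoint=False)
Gamma = rng.random((9, 9, K))              # stand-in for the Green function
P = np.zeros((9, 9, K))
P[4, 4, 0] = 1.0                           # a single lifted boundary point
u = se2_convolve(P, Gamma, thetas)
```

With P a single Dirac mass at the grid center with orientation index 0, u reproduces Gamma itself, as expected for a fundamental solution; a lifted boundary such as the square's produces the superposition of rotated and translated copies of the kernel that structures the interior of the figure.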

FIGURE 7.9 The fundamental solution r of the Fokker-Planck equation visualized as second-order tensors (left [a]) and as infinity-order tensors by means of the density operator (right [b]).

FIGURE 7.10 The inner structure of the square obtained after propagating the lifted boundaries via the Fokker-Planck fundamental solution. The probability density is visualized as second-order tensors (left [a]) and as infinity-order tensors by means of the density operator (right [b]).