For the second approach, digital phantoms can be used. Digital phantoms are
synthetically generated, realistic data, produced from a known ground truth model
(see, e.g., [37]). Given a known ground truth M, the 3D volume V is generated in a straightforward manner, and then image formation, distortion, and noise models are used to produce the final product, the image stack I (Figure 2.17).
The KESM imaging process can be modeled (as in [38]), and based on this, noisy
digital phantoms (I) can be generated from a synthetic ground truth (M). The reconstruction algorithm can then be executed on this digital phantom, and the estimated microstructure M̂ compared to the ground truth M.
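A minimal sketch of this validation loop is given below, with hypothetical stand-ins for the ground-truth generator, the KESM imaging model of [38], and the reconstruction algorithm; only the overall M → V → I → M̂ flow and the final comparison against the ground truth are meant to carry over.

```python
import numpy as np

# Hypothetical helpers: the real ground-truth generator, KESM imaging
# model [38], and reconstruction algorithm would be substituted here.
def generate_ground_truth(shape, seed=0):
    """Synthesize a known ground-truth model M as a binary fiber volume."""
    rng = np.random.default_rng(seed)
    M = np.zeros(shape, dtype=np.uint8)
    for _ in range(5):                      # a few straight "fibers" as a stand-in
        z, y = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        M[z, y, :] = 1
    return M

def kesm_imaging_model(V, noise_sigma=0.1, seed=0):
    """Apply a simple image-formation + noise model to produce the stack I."""
    rng = np.random.default_rng(seed)
    noisy = V.astype(float) + rng.normal(0, noise_sigma, V.shape)
    return np.clip(noisy, 0, 1)             # a real model would add PSF blur, sectioning artifacts, etc.

def reconstruct(I, threshold=0.5):
    """Placeholder reconstruction: threshold back to a binary microstructure."""
    return (I > threshold).astype(np.uint8)

# Validation loop: M -> V -> I -> M_hat, then compare M_hat against M.
M = generate_ground_truth((64, 64, 64))
V = M                                       # here the 3D volume is the model itself
I = kesm_imaging_model(V)
M_hat = reconstruct(I)

# Simple voxel-wise agreement score (Dice coefficient).
dice = 2 * np.logical_and(M, M_hat).sum() / (M.sum() + M_hat.sum())
print(f"Dice overlap between M_hat and ground truth M: {dice:.3f}")
```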
These two approaches will help perform large-scale validation of our auto-
mated algorithms.
2.6.2 Exploiting Parallelism
With the advent of high-throughput microscopy instruments like KESM, data
acquisition is no longer a bottleneck: reconstruction and computational analysis become the major bottleneck [16]. Fortunately, many tasks involved in
reconstruction and analysis can be conducted locally, thus allowing straightfor-
ward parallelization (see, e.g., [39]). For example, the large data volume can be
partitioned into small unit cubes that can fit in several gigabytes of memory, and
the reconstruction can be done within each cube on multiple computing nodes in
parallel. The latest techniques developed for parallel feature extraction and 3D
reconstruction using high-performance computing can be adapted easily for these
tasks (see Chapter 6 by Rao and Cecchi and Chapter 8 by Cooper et al.).
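As a rough illustration of this partition-and-distribute scheme, the sketch below tiles a volume into unit cubes and farms them out to worker processes; the file name, volume shape, and the reconstruct_cube stub are all hypothetical, and a real deployment would dispatch cubes to cluster nodes rather than local processes.

```python
import numpy as np
from itertools import product
from multiprocessing import Pool

CUBE = 128  # unit-cube edge length (voxels), sized to fit comfortably in memory

def cube_origins(shape, cube=CUBE):
    """Enumerate the (z, y, x) origins of the unit cubes tiling the volume."""
    return list(product(*(range(0, s, cube) for s in shape)))

def reconstruct_cube(origin, path, shape):
    """Load one unit cube from the on-disk volume and reconstruct it independently.

    Stand-in for the real tracing algorithm: it only loads the cube and
    returns an empty trace list.
    """
    z, y, x = origin
    vol = np.memmap(path, dtype=np.uint8, mode="r", shape=shape)
    cube = np.array(vol[z:z + CUBE, y:y + CUBE, x:x + CUBE])  # local copy
    traces = []  # fiber traces found inside this cube would go here
    return origin, traces

if __name__ == "__main__":
    shape = (256, 256, 256)
    # Write a synthetic volume to disk so the sketch is self-contained.
    np.memmap("volume.dat", dtype=np.uint8, mode="w+", shape=shape).flush()
    jobs = [(o, "volume.dat", shape) for o in cube_origins(shape)]
    with Pool() as pool:                      # one worker per available core
        results = pool.starmap(reconstruct_cube, jobs)
    print(f"Reconstructed {len(results)} unit cubes in parallel")
```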
Besides parallelizing by dividing our dataset into multiple regions, each to
be handled by a single computing node, we can exploit the available GPUs and the multiple cores on modern CPUs to run several processes that simultaneously track
different fibers in the same unit cube. This will allow for larger memory blocks,
and thus fewer artificial boundaries across which fiber tracking is inherently more
difficult.
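For instance, within a single cube held in memory, several trackers can be launched from different seed points at once; the sketch below uses a thread pool and a trivial trace_fiber stub (the seeds and the stub are hypothetical), and the same pattern maps onto GPU kernels or worker processes for heavier per-fiber computation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# One large unit cube kept in memory and shared (read-only) by all trackers,
# avoiding the extra artificial boundaries a finer partitioning would create.
cube = np.zeros((256, 256, 256), dtype=np.uint8)

def trace_fiber(seed):
    """Hypothetical tracker: follow one fiber starting from its seed voxel.

    A real tracker would step along the locally estimated fiber direction
    in `cube`; here we simply return the seed as a one-point path.
    """
    return [seed]

# Seed points, e.g., fiber cross-sections detected on the cube faces.
seeds = [(10, 20, 30), (100, 200, 50), (200, 40, 128)]

with ThreadPoolExecutor() as pool:               # multicore CPU; a GPU variant
    traces = list(pool.map(trace_fiber, seeds))  # would batch seeds per kernel

print(f"Tracked {len(traces)} fibers concurrently in the shared cube")
```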
Even though these computations can be carried out locally, in the end, the
results of these computations need to be merged, since processes from neurons
(especially long axons) can project across the full span of the brain. After initial
unit cubes are processed, we must merge data from adjacent unit cubes. It may
also be necessary to modify our reconstruction in one unit cube based on the data
obtained from the boundary of an adjacent unit cube. Some ideas on how a similar
merger can be achieved are discussed in [40].
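One simple way such a merge could proceed is to match fiber endpoints that lie close to the shared face of two adjacent cubes; the sketch below assumes each trace is a list of (z, y, x) points in global coordinates, and both the gap threshold and the function name are illustrative rather than taken from [40].

```python
import numpy as np

def merge_across_boundary(traces_a, traces_b, max_gap=3.0):
    """Join fiber fragments from two adjacent unit cubes.

    A fragment from cube A is linked to a fragment from cube B when the
    endpoint of the former lies within max_gap voxels of the start point
    of the latter; unmatched fragments are passed through unchanged.
    """
    merged, unmatched_b = [], list(traces_b)
    for ta in traces_a:
        end_a = np.asarray(ta[-1], dtype=float)
        best, best_dist = None, max_gap
        for tb in unmatched_b:
            dist = np.linalg.norm(end_a - np.asarray(tb[0], dtype=float))
            if dist <= best_dist:
                best, best_dist = tb, dist
        if best is not None:
            merged.append(ta + best)          # concatenate the point lists
            unmatched_b.remove(best)
        else:
            merged.append(ta)
    return merged + unmatched_b

# Example: a fragment ending at the face z = 255 joins one starting at z = 256.
a = [[(250, 10, 10), (255, 11, 10)]]
b = [[(256, 11, 10), (260, 12, 11)]]
print(merge_across_boundary(a, b))
```

In practice the matching would also take local fiber orientation and radius into account, and a poor match near a face could trigger the kind of boundary-driven revision of the neighboring cube's reconstruction mentioned above.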
Another important observation is that the results of parallel reconstruction and merging need to be stored. As morphological information from adjacent unit cubes is combined, the fibers within each area must be merged. This can create
potential problems, as the merged fiber-like data loses the locality it previously
enjoyed. Furthermore, the data structure could become extremely complex (e.g.,
the entire vascular system could be one connected component); methods for rep-
resenting such complex structures hierarchically will be needed.
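As one building block for such a representation, component membership can be tracked with a union-find (disjoint-set) structure over fiber identifiers, so that even a component spanning the whole volume (such as the vasculature) is stored implicitly rather than materialized as one object; a minimal sketch, with hypothetical fiber labels:

```python
class DisjointSet:
    """Union-find over fiber identifiers.

    Merging two fibers that were joined at a cube boundary places them in
    the same connected component; membership is resolved lazily via find(),
    so a huge component never has to be held in memory as a single object.
    """

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Example: fibers joined across boundaries collapse into one component,
# e.g., the entire vascular tree.
components = DisjointSet()
components.union("fiber_001", "fiber_002")
components.union("fiber_002", "fiber_003")
print(components.find("fiber_003") == components.find("fiber_001"))  # True
```

A spatial hierarchy (e.g., octree-style grouping of unit cubes) could then be layered on top of such component labels, although the right hierarchical representation remains an open design question.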
Once all the reconstructions are done and stored so that a network of connected
components emerges (e.g., neuronal or vascular networks), large-scale network
analysis needs to be conducted to extract principles that connect network structure
 