6.1 Lidar-derived building models: related work
6.1.1 Building detection
Building detection is often performed on resampled (i.e., interpolated) grid data, thus simplifying the 3D content of lidar data to 2.5D. Roughness measures, i.e., local height variations, are often used to identify vegetation. Open areas and buildings can be differentiated by first computing a digital terrain model (DTM) with so-called filtering methods (Kraus and Pfeifer, 1998; Sithole and Vosselman, 2004). Thereafter, a normalized digital surface model (nDSM) is computed by subtracting the DTM from the DSM (Weidner and Förstner, 1995; Haala and Brenner, 1999; Gamba and Houshmand, 2002; Rottensteiner and Briese, 2002; Hu, Tao and Collins, 2003), hence representing local object heights (Hug and Wehr, 1997; Maas and Vosselman, 1999; Alharthy and Bethel, 2002; Tóvári and Vögtle, 2004; Gross, Thoennessen and Hansen, 2005). High objects with low roughness correspond to building areas. Other approaches identify blobs in the DSM, based on height jumps, high curvature, etc. (Morgan and Tempfli, 2000; Nardinocchi and Forlani, 2001; Matikainen, Hyyppä and Hyyppä, 2003; Rutzinger et al., 2006).
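As a concrete illustration, the nDSM computation and the height/roughness classification described above can be sketched as follows. The function name, window size and thresholds are illustrative assumptions for this sketch, not values taken from the cited literature.

```python
import numpy as np

def classify_cells(dsm, dtm, height_thresh=2.5, roughness_thresh=0.5):
    """Classify raster cells as building / vegetation / ground.

    dsm, dtm: 2D elevation grids (m) on the same raster.
    Thresholds are illustrative, not values from the cited studies.
    """
    ndsm = dsm - dtm  # normalized DSM: local object heights above terrain

    # Roughness as local height variation: std. dev. in a 3x3 window.
    padded = np.pad(ndsm, 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    roughness = windows.std(axis=(-2, -1))

    labels = np.full(ndsm.shape, "ground", dtype=object)
    high = ndsm > height_thresh
    labels[high & (roughness <= roughness_thresh)] = "building"    # high, smooth
    labels[high & (roughness > roughness_thresh)] = "vegetation"   # high, rough
    return ndsm, labels
```

Note that cells on the plateau edge of a flat roof are labeled as rough, which anticipates the boundary ambiguity discussed later in this section.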
Building reconstruction may include two parts: building
footprint detection and roof reconstruction. For those stud-
ies focusing on geometric reconstruction of upper roof lines
instead of building footprints, the reliable reconstruction of
complex building roof boundaries is a key step. Most algorithms work well only under specific assumptions, which limit roofs to simple shapes such as rectangles or low-quality polygons (Weidner and Förstner, 1995; Vosselman, 1999; Wang and Schenk, 2000). Other algorithms, which do not make such assumptions, often produce distorted boundaries traced from edges detected in lidar DSMs (Baltsavias, 1999; Weidner, 1995; Yoon et al., 1999; Wang and Schenk, 2000; Rottensteiner and Briese, 2002). These boundaries need to be refined using a set of geometric regularity constraints (Vestri and Devernay, 2001).
To distinguish buildings from vegetated regions, the classification is often based on shape measures assuming some geometric regularity constraints (Wang and Schenk, 2000) or on the roughness of the point clouds. These measures restrict detection to a narrower spectrum of buildings and are not very reliable for complex scenes such as densely forested areas. The shape measures often rely on 2D properties such as area and perimeter, while complex building roofs may yield roughness values close to those of vegetation. The use of lidar multiple-return data can benefit the separation of buildings from vegetation, since building roofs must be solid surfaces (Zhan, Molenaar and Tempfli, 2002; Hu, Tao and Collins, 2003). Lidar pulses cannot penetrate solid surfaces and therefore produce only a single return from them. That is, the first and last returns have the same elevation over solid surfaces but differ over vegetated regions. However, lidar exhibits a similar first-to-last-return difference at building boundaries as it does over vegetated areas.
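A minimal sketch of this first/last-return test on gridded return data follows; the function name and the noise tolerance are illustrative assumptions.

```python
import numpy as np

def separate_by_returns(first_return, last_return, diff_thresh=0.3):
    """Flag cells whose first and last returns differ in elevation.

    first_return, last_return: 2D elevation grids (m) of the first and
    last lidar returns. diff_thresh is an illustrative tolerance for
    ranging noise, not a value from the text.
    Returns True where the returns differ (vegetation-like penetration)
    and False where the surface is solid (roofs, bare ground).
    """
    return np.abs(first_return - last_return) > diff_thresh
```

As the text notes, this test also fires at building edges, where a single pulse can straddle the roof boundary, so it is best combined with the roughness and height criteria above.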
This section first provides an overview of lidar data processing towards 3D building reconstruction. The output of a lidar mapping system is a cloud of irregularly spaced 3D points which includes not only the bare ground but also all kinds of objects (buildings, trees, cars, etc.). Therefore, the generation of reliable and accurate building models from lidar data requires a number of processes, including building detection, outline extraction, roof shape reconstruction, model generation and regularization, and finally, model quality analysis (Dorninger and Pfeifer, 2008). The majority of available literature concentrates on individual aspects only. For example, methods for building region detection in rasterized lidar data were described in the literature (e.g., Hug and Wehr, 1997; Maas, 1999; Morgan and Tempfli, 2000; Nardinocchi and Forlani, 2001; Alharthy and Bethel, 2002; Matikainen, Hyyppä and Hyyppä, 2003; Tóvári and Vögtle, 2004; Gross, Thoennessen and Hansen, 2005; Li, Li and Chapman, 2010).
Techniques on roof reconstruction in lidar point clouds with
known building boundaries were presented in the literature (e.g.,
Vosselman and Dijkman, 2001; Hofmann, Maas and Streilein,
2003). Approaches considering both detection and reconstruction were presented in the literature (e.g., Rottensteiner and Briese, 2002; Lafarge et al., 2008). The reconstructed models presented in these two references are, however, restricted. In both cases digital surface model (DSM) data of relatively low density is processed. This does not allow for exact positioning of building outlines and prevents the reconstruction of small roof features. Furthermore, in the latter reference the complexity of building models is restricted to a composition of predefined building parts. A general and up-to-date overview of lidar mapping technology and data processing, in particular information extraction, can be found in Shan and Toth (2008). This chapter does not aim to cover a complete bibliography, but gives a brief summary of the existing building extraction methods developed thus far, mainly in Europe and North America.
To increase the reliability of building reconstruction, addi-
tional knowledge on buildings has to be incorporated into the
modeling process. Typical assumptions are to define walls as
being vertical and roofs as being a composition of planar faces.
This leads to an idealization of the buildings. The transition zone
of two neighboring roof faces, for example, becomes a straight
line defined by the intersection of two roof planes (Dorninger
and Pfeifer, 2008).
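This ridge-line idealization can be computed directly from the two roof-plane equations. The following is a minimal sketch; the function name and the sample plane coefficients are illustrative assumptions.

```python
import numpy as np

def roof_ridge(n1, d1, n2, d2):
    """Line of intersection of the planes n1·x = d1 and n2·x = d2.

    Returns (point, direction): a point on the ridge line and its unit
    direction vector. Assumes the two roof planes are not parallel.
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)  # ridge runs along both planes
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel; no unique ridge line")
    direction /= norm
    # A point satisfying both plane equations, pinned down by the extra
    # constraint direction·x = 0 to make the 3x3 system solvable.
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction
```

For two gabled faces z = 10 - 0.5y and z = 10 + 0.5y (illustrative coefficients), this yields a horizontal ridge along the x-axis at height 10, as expected.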
Many methods have been developed for semiautomated or fully automated extraction of buildings using lidar data in the past 15 years. Recognizing that full automation is not yet attainable, we aim to reduce the complexity of the building reconstruction task by having a user and automated processes work sequentially in the system. For example, the user supplies the automated processes with inputs and cues. The automated processes then produce a scene model based on these inputs. Finally, the user corrects mistakes in that scene model. In this chapter, the building extraction task is addressed by a two-step strategy: building detection followed by building reconstruction (Li, 1999; Hu, 2003).
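This sequential division of labor can be sketched as a skeleton in which the concrete detection, reconstruction and review steps are supplied as callables; all names here are illustrative assumptions rather than the authors' implementation.

```python
def extract_buildings(point_cloud, user_cues, detect, reconstruct, review):
    """Two-step, user-in-the-loop building extraction sketch.

    point_cloud: raw lidar points; user_cues: operator hints (e.g.
    approximate building locations); detect/reconstruct/review: callables
    for the two automated steps and the final manual correction.
    Only the order of operations is fixed here (illustrative skeleton).
    """
    regions = detect(point_cloud, user_cues)                 # step 1: detection
    models = [reconstruct(point_cloud, r) for r in regions]  # step 2: roofs
    return review(models)                                    # user fixes mistakes
```

The key design choice is that automation produces a candidate scene model while the user only seeds and corrects it, rather than digitizing buildings manually.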
6.1.2 Building reconstruction
Building reconstruction recovers the geometrical parameters of
the roofs and walls of a located building (Weidner and Förstner,