Figure: A classification example. The boundary between the classes of solvent (black triangles) and insolvent (white squares) companies was estimated using DA, the logit regression (two indistinguishable linear boundaries) and an SVM (a nonlinear boundary) for a subsample of the Bundesbank data. The background corresponds to the PDs computed with an SVM.
Following a traditional approach, we would expect a monotonic relationship between predictors and PDs, like the falling relation for the interest coverage ratio. However, in reality this dependence is often nonmonotonic for important indicators such as company size or the change in net income. In the latter case, companies that grow too quickly or too slowly have a higher probability of default. This is why nonlinear techniques are considered as alternatives. Two prominent examples are recursive partitioning (Frydman et al., 1985) and neural networks (Tam and Kiang, 1992). Despite their strengths, these two approaches also have noticeable drawbacks: recursive partitioning divides the data space orthogonally, which is usually not justified, and neural networks require a heuristic model specification.
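To make the contrast concrete, the following is a minimal sketch, not the chapter's own model: it uses scikit-learn on synthetic data (not the Bundesbank sample), with a feature deliberately constructed to have a U-shaped relation to default. A logit model's predicted probability is monotone in each predictor by construction, whereas an RBF-kernel SVM can recover the nonmonotonic shape.

```python
# Hedged sketch: linear logit vs. nonlinear SVM class probabilities ("PDs").
# All data and feature behavior here are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic analogue of a nonmonotonic indicator: defaults (y = 1) cluster
# at both extremes of the first feature, as with very fast or very slow growth.
x = rng.normal(size=(500, 2))
y = (np.abs(x[:, 0]) + 0.3 * rng.normal(size=500) > 1.2).astype(int)

logit = LogisticRegression().fit(x, y)
svm = SVC(kernel="rbf", probability=True).fit(x, y)  # Platt-scaled probabilities

# Probe the estimated PD at the low end, the middle, and the high end.
grid = np.array([[-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
print("logit PD:", logit.predict_proba(grid)[:, 1])  # monotone by construction
print("SVM   PD:", svm.predict_proba(grid)[:, 1])    # high at both extremes
```

The logit probabilities are a sigmoid of a linear score, so they can only rise or fall along the feature and miss the U-shape; the SVM assigns high PDs at both extremes and a low PD in the middle.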
Recursive partitioning, also known as classification and regression trees (CART) (Breiman et al., 1984), performs classification by dividing up the data space orthogonally. At each step a division (split) is possible along just one of the axes. The axis is chosen such that a split along it reduces the misclassification impurity; entropy-based criteria can also be used. The visible drawback is the orthogonal division itself, which imposes severe restrictions on the smoothness of the classifying function and may not adequately capture the correlation structure between the variables. Orthogonal division means that the separating hyperplane can only consist of orthogonal segments parallel to the coordinate grid, whereas the boundary between the classes has a smoothly changing gradient.
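The orthogonal character of the splits is easy to see by printing a fitted tree. The sketch below again uses synthetic data rather than the chapter's Bundesbank sample, and the feature names are purely illustrative assumptions; every learned rule is a threshold on a single feature, i.e. an axis-parallel split.

```python
# Hedged sketch of CART-style recursive partitioning on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
y = (np.abs(x[:, 0]) + 0.3 * rng.normal(size=500) > 1.2).astype(int)

tree = DecisionTreeClassifier(
    criterion="entropy",  # an entropy-based impurity criterion
    max_depth=3,          # a shallow tree keeps the printed rules readable
).fit(x, y)

# Each printed rule has the form "feature <= threshold": an orthogonal split,
# so the resulting class boundary is a union of axis-parallel segments.
print(export_text(tree, feature_names=["net_income_change", "interest_coverage"]))
```

The printed rules make the limitation explicit: no rule ever combines two features in one condition, so the tree can only approximate a smoothly curved class boundary by a staircase of axis-parallel pieces.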