training and validation data sets), we will end up with a number of distributional maps that often vary in their similarity both for the present and the future. The logical question that arises in this situation is, which one of the maps is more accurate? The answer is by no means an easy one. Several validation methods have been adopted to assess model accuracy, including the chi-square/binomial test, Kappa index, True Skill Statistic (TSS), Area Under the Curve (AUC) of the Receiver-Operating Characteristics (ROC) plots, and others that have been less popular (Fielding and Bell 1997). All of these validation methods are based on the calculation of omission (false absence) and commission (false presence) errors derived from comparing model predictions against a validation data set in a contingency table.
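To make the role of that contingency (confusion) table concrete, the following Python sketch (not part of the original text; the counts and function name are hypothetical) shows how omission and commission errors, sensitivity, specificity, TSS, and Cohen's Kappa are all derived from the same four cell counts.

def confusion_metrics(tp, fp, fn, tn):
    """Common SDM validation metrics from one 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # presences correctly predicted
    specificity = tn / (tn + fp)          # absences correctly predicted
    omission = fn / (tp + fn)             # false absences (1 - sensitivity)
    commission = fp / (fp + tn)           # false presences (1 - specificity)
    tss = sensitivity + specificity - 1   # True Skill Statistic
    # Cohen's Kappa: observed agreement corrected for chance agreement
    p_observed = (tp + tn) / n
    p_chance = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return dict(sensitivity=sensitivity, specificity=specificity,
                omission=omission, commission=commission,
                tss=tss, kappa=kappa)

# Example with made-up counts: 40 true presences, 10 omissions,
# 30 false presences, 120 true absences.
print(confusion_metrics(tp=40, fp=30, fn=10, tn=120))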
All of these tests provide some information regarding model quality, but none of them are fully adequate for evaluating these types of geographic models, for several reasons. First, tests that incorporate both types of errors (i.e., Kappa index, TSS, AUC) do not take into account the fact that omission and commission errors are not equivalent, because the former is estimated with presence data and the latter with absence data. Although presence data may contain positional and identification errors (Hortal et al. 2008), these are relatively reliable data compared to absence data, because even with exhaustive sampling there is always the possibility that the target species went undetected (Kéry 2002; MacKenzie et al. 2006). Furthermore, even if we are sure that the species is not present, it is not always possible to determine whether its absence reflects historical or dispersal reasons, or instead indicates that the site lacks suitable conditions. This is critical because only the latter is within the scope of niche modeling efforts (Jiménez-Valverde and Lobo 2007).
Second, all these tests strongly depend on the proportion of the predicted area relative to the extent of the whole study area (i.e., species' prevalence) (Manel et al. 2001). If the predicted area is minimal with respect to the study area (e.g., 10% occupancy), then the expected likelihood of predicting presences under a null model is so low that even a relatively large amount of omission error (e.g., 50%) will still produce statistical results significantly higher than random, and this may lead investigators to conclude that the model is acceptably predictive, when failing to predict half of the validation points is clearly poor performance (Lobo et al. 2008; Peterson et al. 2008). Alternatively, if prevalence is too great, the power of statistical tests to detect significant differences against random expectations is dramatically reduced, potentially misleading investigators about the biological meaning of their conclusions (Manel et al. 2001).
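The prevalence effect can be illustrated with a one-sided binomial test against the null expectation that validation presences fall inside the predicted area at random. The sketch below uses assumed numbers (10% or 90% prevalence, 50 hypothetical validation records) purely to show the asymmetry described above.

from scipy.stats import binom

n_validation = 50                      # hypothetical number of validation presences
low_prevalence, high_prevalence = 0.10, 0.90

# Low prevalence: even 50% omission (25 of 50 hits) looks highly significant,
# because a null model would hit only about 10% of the points by chance.
p_low = binom.sf(25 - 1, n_validation, low_prevalence)    # P(X >= 25) under the null
print(f"10% prevalence, 50% omission: p = {p_low:.1e}")   # far below 0.05

# High prevalence: the same test loses power; correctly predicting 48 of 50
# presences is not distinguishable from chance at alpha = 0.05.
p_high = binom.sf(48 - 1, n_validation, high_prevalence)  # P(X >= 48) under the null
print(f"90% prevalence, 96% sensitivity: p = {p_high:.2f}")  # roughly 0.11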
Third, none of the above-mentioned validation tests are designed for geographic analyses. It is not uncommon that two prediction maps produced from randomizations of the same data set have exactly the same values of sensitivity (i.e., presences correctly predicted), specificity (i.e., absences correctly predicted), omission, and commission errors, but different spatial configurations. If these maps were compared with such statistical tests, both would obtain the same scores, because the tests do not account for geographic differences and only pay attention to the numbers in the success/error matrix (i.e., the confusion matrix).
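A toy example (hypothetical 4 x 4 grids, not from the source) makes the point: the two prediction maps below disagree on where the species is predicted to occur, yet scored against the same validation layer they produce identical confusion-matrix counts and therefore identical sensitivity, specificity, omission, and commission values.

import numpy as np

validation = np.array([[1, 1, 0, 0],
                       [1, 1, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])   # observed presences (1) and absences (0)

map_a = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])        # 3 of 4 presences hit, 1 false presence

map_b = np.array([[0, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [1, 0, 0, 0]])        # same counts, different spatial arrangement

def counts(pred, obs):
    tp = int(np.sum((pred == 1) & (obs == 1)))
    fp = int(np.sum((pred == 1) & (obs == 0)))
    fn = int(np.sum((pred == 0) & (obs == 1)))
    tn = int(np.sum((pred == 0) & (obs == 0)))
    return tp, fp, fn, tn

print(counts(map_a, validation))   # (3, 1, 1, 11)
print(counts(map_b, validation))   # (3, 1, 1, 11): identical scores despite different maps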
Finally, the legitimacy of these tests relies on the independence of the data set used for validation; in other words, the data used for testing the models should be independent from the data used to build them. However, the geographic distribution