multivariate structures. Previous work (e.g., Lawson and Hansen 2005; Chen and
Snyder 2007) examining the relationship between phase-error uncertainty and data
assimilation has noted the role of the non-Gaussian shape of the distribution and
some of the potential avenues for failure of the EnKF. Lawson and Hansen (2005)
suggested a two-step procedure in which the DA is first performed to account
for the position errors; then, after shifting the ensemble consistent with the
updated positions (which attempts to reduce the non-Gaussianity by reducing the
variance in position space), a second set of observations is assimilated to account
for the structural update. Chen and Snyder (2007) suggested that assimilating
observations of vortex shape and intensity, as well as position, helps reduce the
errors made by the EnKF. This chapter intends to first provide a detailed examination of the
specific structure of distributions that arise from phase uncertainty and then show
how incorporating information about the third moments of the prior impacts the
DA. The focus here will therefore be to understand and then attempt to use the
non-Gaussian information in the prior distribution rather than find ways to avoid or
eliminate it.
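To make the source of this non-Gaussianity concrete, the following sketch (an illustration constructed for this discussion, not code from the chapter) draws Gaussian position errors for a smooth feature and shows that the resulting amplitude distribution at a fixed grid point is strongly skewed; the feature shape, parameter values, and function names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth feature (e.g., a Gaussian-shaped vortex profile).  The shape and
# parameters are illustrative assumptions, not taken from the chapter.
def feature(x, center, width=1.0, amplitude=1.0):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

x0 = 0.0                                                 # fixed grid point
centers = rng.normal(loc=0.8, scale=0.5, size=100_000)   # Gaussian position error
amplitudes = feature(x0, centers)                        # amplitude seen at x0

# Although the position error is Gaussian, the amplitude distribution at a
# fixed point is non-Gaussian (skewed), because the map from position to
# amplitude is nonlinear.
mean = amplitudes.mean()
skew = ((amplitudes - mean) ** 3).mean() / amplitudes.std() ** 3
print(f"mean = {mean:.3f}, skewness = {skew:.3f}")
```

A nonzero sample skewness here is exactly the third-moment information in the prior that the later sections attempt to exploit.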
The organization of this chapter is as follows: In Sect. 7.2 we will describe DA
through a Bayesian perspective and illustrate both linear and nonlinear regression.
In Sect. 7.3 we show how an error in the location of a feature in the fluid leads to
a non-Gaussian distribution. In Sect. 7.4 we will apply Kalman and higher-order
DA methods to the assimilation of observations in which the prior uncertainty is
described by errors in location. Section 7.5 closes the chapter with a summary
of the most important results and conclusions, and points out avenues of future
research presently being investigated.
7.2 Understanding Data Assimilation Through Bayes' Rule
7.2.1 Bayes' Rule: The Posterior Distribution
We imagine the true state, x, to be an N-vector and that it is drawn from a
distribution whose pdf we label p(x). In addition, we will collect the sum total of all
previous information about this true state in a previous estimate we label x^f. At the
present time we have available a vector of observations y such that we may use
Bayes' rule to obtain a density that describes the combined knowledge of the likely
distribution of states:
p(x | y; x^f) = p(y | x) p(x | x^f) / ∫ p(y | x) p(x | x^f) dx .    (7.1)
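Equation (7.1) can be evaluated numerically for a scalar state by discretizing x on a grid and approximating the normalizing integral by quadrature. The sketch below is an illustration under assumed Gaussian forms for the prior and likelihood; the parameter values are invented for the example.

```python
import numpy as np

# Grid for a scalar state x; the denominator of Bayes' rule (7.1) is
# approximated by a Riemann sum.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gaussian(z, mean, var):
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

x_f, prior_var = 1.0, 2.0      # prior estimate x^f and its variance (assumed)
y, obs_var = 2.5, 1.0          # observation and its error variance (assumed)

prior = gaussian(x, x_f, prior_var)      # p(x | x^f)
likelihood = gaussian(x, y, obs_var)     # p(y | x), viewed as a function of x
posterior = likelihood * prior
posterior /= np.sum(posterior) * dx      # the integral in the denominator of (7.1)

# With Gaussian prior and likelihood, the posterior mean reproduces the
# familiar Kalman update x^f + K (y - x^f), K = prior_var / (prior_var + obs_var).
post_mean = np.sum(x * posterior) * dx
K = prior_var / (prior_var + obs_var)
print(post_mean, x_f + K * (y - x_f))
```

For this Gaussian case the two printed values agree to quadrature accuracy; with a skewed prior, such as one generated by position errors, the grid-based posterior and the Kalman update would differ, which is the point developed in the sections that follow.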
The density p(y | x) describes the conditional distribution of observations given a
particular value of the truth (the observation likelihood), while p(x | x^f) describes the