commercial receivers tracking L1 C/A code plus L2 Y code using semicodeless processing—see Section 5.14) pseudorange measurements were all processed in accordance with the algorithms in [4], with no filtering to reduce measurement noise. The measurements from the best four satellites (based on a minimum PDOP criterion) were then used to generate an instantaneous position solution relative to the reference station's surveyed location. The resulting instantaneous position errors indicated in Figure 7.28 are predictably noisy but provide performance in line with the trends shown in Figure 7.27.
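To make the best-four selection step concrete, the sketch below picks the four-satellite subset with the minimum PDOP from a set of receiver-to-satellite unit line-of-sight vectors. It uses the standard geometry-matrix formulation of PDOP; the function names and data layout are illustrative assumptions, not the actual NSTB processing software.

```python
# Sketch: best-four satellite selection by minimum PDOP, assuming the
# receiver-to-satellite unit line-of-sight vectors are already available
# (the azimuth/elevation-to-unit-vector step is omitted here).
from itertools import combinations
import numpy as np

def pdop(unit_vectors):
    """PDOP from receiver-to-satellite unit line-of-sight vectors."""
    # Geometry matrix: one row per satellite, [-ux, -uy, -uz, 1]
    G = np.hstack([-np.asarray(unit_vectors), np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(G.T @ G)           # cofactor matrix of the solution
    return np.sqrt(np.trace(Q[:3, :3]))  # spatial terms only

def best_four(unit_vectors):
    """Return the indices of the 4-satellite subset with minimum PDOP."""
    best_idx, best_val = None, np.inf
    for idx in combinations(range(len(unit_vectors)), 4):
        try:
            val = pdop([unit_vectors[i] for i in idx])
        except np.linalg.LinAlgError:    # skip degenerate geometries
            continue
        if val < best_val:
            best_idx, best_val = idx, val
    return best_idx, best_val
```

For a receiver tracking eight satellites, for example, best_four evaluates all 70 four-satellite combinations and returns the subset whose geometry yields the smallest PDOP.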
The position error behavior in Figure 7.28 is contrasted with other perspectives in Figure 7.29. The GPS CS computes smoothed UREs called observed range deviations (ORDs) every 15 minutes for each CS monitor station. In Figure 7.29, we compare position solution errors using interpolated ORDs from the CS Hawaii monitor station located at Kaena Point with the three-dimensional position errors from Figure 7.28. The primary distinction between the two sets of data is the relative smoothness of the position error for the CS Hawaii monitor station. If we filter the data from the NSTB Hawaii reference station, we see that it is reasonably consistent with the CS Hawaii monitor station solution error using ORD values. Most of the divergence between the filtered NSTB solution and the CS ORD-based solution arises because the optimum satellite selection varied slightly over the approximately 43 km separating the two locations.
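The following sketch illustrates the two operations just described, under assumed data layouts: interpolating 15-minute ORD-style values onto the epochs of an instantaneous solution, and applying a simple moving-average filter to a noisy 3-D position-error series so that it can be compared with the inherently smoother ORD-based results. The function names, window length, and sampling rate are assumptions for this example, not the processing used to produce Figure 7.29.

```python
import numpy as np

def interpolate_ords(ord_times_s, ord_values_m, epoch_times_s):
    """Linearly interpolate 15-minute ORD values onto measurement epochs."""
    return np.interp(epoch_times_s, ord_times_s, ord_values_m)

def smooth_errors(errors_m, window_epochs=900):
    """Moving-average filter over an instantaneous 3-D position-error series
    (900 epochs corresponds to 15 minutes at an assumed 1 Hz measurement rate)."""
    kernel = np.ones(window_epochs) / window_epochs
    return np.convolve(errors_m, kernel, mode="same")
```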
One final element of Figure 7.29 is the line representing an all-in-view (AIV) position solution error using interpolated CS ORD values. Over the 24 hours of data presented, the AIV position solution provided a 29% improvement in performance using the same basic measurements. The current constellation's geometry provides an overall 27% improvement across the globe when using an AIV position solution instead of one based on a best four-satellite selection algorithm.
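The difference between a best-four and an AIV solution lies only in how many satellites contribute rows to the geometry matrix. As a sketch, under the same assumed data layout as above, the following function performs one least-squares update of the position and receiver clock-bias states from pseudorange residuals using every satellite in view; with more than four satellites the system is simply overdetermined.

```python
import numpy as np

def aiv_position_update(unit_vectors, prefit_residuals_m):
    """One least-squares update of (x, y, z, clock bias) from pseudorange
    residuals, using all satellites in view rather than a best-four subset."""
    G = np.hstack([-np.asarray(unit_vectors),
                   np.ones((len(unit_vectors), 1))])
    delta, *_ = np.linalg.lstsq(G, np.asarray(prefit_residuals_m), rcond=None)
    # Position correction (meters) and clock-bias correction (meters)
    return delta[:3], delta[3]
```

Iterating this update, with the unit vectors recomputed from each new position estimate, yields the familiar iterative least-squares pseudorange solution.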
The key point here is that we used four different ways to measure GPS accuracy at two locations very close together and witnessed a 40% spread in our resulting statistics. If we can see such divergence resulting from very similar measurement processing techniques, how much variation can we expect with the wider range of possible GPS measurement techniques and environments?
Before answering that question, we need to step back and briefly examine the major factors that affect measured GPS performance. Figure 7.30 provides an overview of these factors. Figure 7.30 also illustrates an approach for breaking the problem into different levels of abstraction based on the scope and fidelity required by any given group for their GPS performance assessment.
Note that we have begun to use the term assessment as opposed to measurement. This change in terminology reflects the fact that all but the most basic GPS performance assessments sometimes require complementing measurements with estimates or predictive statistics. Situations where direct measurement of all necessary information to assess performance is not practical include global performance monitoring for GPS CS constellation management purposes, and near-real-time monitoring of aircraft accuracy inside a national airspace.
Figure 7.30 establishes a framework for developing application-appropriate methods for assessing GPS performance. All of the performance assessments provided in this section were generated using this framework. In the figure, we establish three related paradigms for performance assessment. The three paradigms are: