Tracking Filter Engineering: The Gauss–Newton and Polynomial Filters
This book provides a complete discussion of the Gauss–Newton filters, including all necessary theoretical background. It also covers the expanding and fading memory polynomial filters, based on the Legendre and Laguerre orthogonal polynomials, and shows how these can serve as prefilters for Gauss–Newton. Of particular interest is a new approach to the tracking of manoeuvring targets that the Gauss–Newton filters make possible. Fourteen carefully constructed computer programs demonstrate the use and power of Gauss–Newton and the polynomial filters. Two of these include Kalman and Swerling filters in addition to Gauss–Newton, all three processing identical data that have been prefiltered by polynomial filters. These two programs demonstrate Kalman and Swerling instability, to which Gauss–Newton is immune, and also show that if an attempt is made to forestall Kalman/Swerling instability by the use of a Q matrix, then those filters cease to be Cramér–Rao consistent and become less accurate than the always Cramér–Rao consistent Gauss–Newton filters.
Inspec keywords: Kalman filters; Legendre polynomials; tracking filters; matrix algebra
Other keywords: Swerling filters; Laguerre orthogonal polynomials; Legendre orthogonal polynomials; Gauss–Newton filters; manoeuvring target tracking; computer programs; tracking filter engineering; polynomial filters; matrix; Kalman filters
Subjects: Filtering methods in signal processing; Algebra; Signal processing theory
Book DOI: 10.1049/PBRA023E
Chapter DOI: 10.1049/PBRA023E
ISBN: 9781849195546
eISBN: 9781849195553
Page count: 576
Format: PDF

Front Matter
(1 page)

Part 1: Background
1 Readme_First
pp. 7–47 (41 pages)
This chapter explains the meanings of three concepts that are discussed throughout the book: error/covariance-matrix (ECM) consistency, Cramér–Rao consistency and memory, as they relate to filter engineering.
2 Models, differential equations and transition matrices
pp. 49–107 (59 pages)
Models in many disciplines are specified by algebraic equations. In filter engineering, they are always specified by differential equations (DEs), and in this chapter we develop the necessary background to enable us to use DEs as models. For each DE we will see that there is a unique transition matrix, and it is through the transition matrix that the DE is actually implemented. Our discussion is thus about DEs and their transition matrices, and how such matrices are derived.
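The chapter's central object can be previewed with a minimal numerical sketch (illustrative model and values of my own, not the book's): for a linear DE X' = AX, the transition matrix over a time step tau is the matrix exponential e^(A·tau).

```python
import numpy as np

# Minimal sketch: approximate the matrix exponential e^(A*tau) by its
# Taylor series. This is the transition matrix of the linear DE X' = A X.
def transition_matrix(A, tau, terms=20):
    Phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * tau) / k   # accumulates (A*tau)^k / k!
        Phi = Phi + term
    return Phi

# Constant-velocity model: d/dt [position, velocity] = A [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Phi = transition_matrix(A, tau=2.5)

# A is nilpotent here, so the series terminates exactly:
# Phi = [[1, 2.5], [0, 1]] - position advances by velocity * tau.
print(Phi)
```

The same matrix propagates a filter's estimate between observation instants via X(t + tau) = Phi · X(t), which is the sense in which the DE is "implemented through" its transition matrix.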
3 Observation schemes
pp. 109–138 (30 pages)
The filter model will be implemented when the T matrix is included in the equations of the filter. It could be the same as, or different from, the external model. In the same way, the observation equation(s) will become part of the filtering algorithm when T is included in the filter algorithm. All of the results derived in this chapter will be needed when we discuss the Gauss–Aitken and Gauss–Newton filters in Chapters 8 and 9.
4 Random vectors and covariance matrices – theory
pp. 139–171 (33 pages)
Random vectors and covariance matrices are the basic building blocks of filter engineering. In this chapter we review some of their properties, and in the next we consider how they are used. In the first and second sections of this chapter we discuss random vectors and their covariance matrices, and at first sight the reader may feel that the material has been well covered in first courses in probability and statistics. However, that is not the case. It is here that we lay the foundation for the split between the two types of covariance matrices, supposed and actual, and between what is theoretical and covered in most such courses, and what is empirical and more often encountered in Monte-Carlo simulations. In the third and fourth sections of the chapter we discuss the positive-definite property of certain matrices, a concept that is often not covered in introductory linear algebra and which plays a key role in filter engineering.
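The supposed/actual split can be illustrated with a small Monte-Carlo sketch (illustrative numbers of my own): draw many zero-mean random vectors with a "supposed" covariance matrix, estimate the "actual" covariance empirically, and confirm positive definiteness via a Cholesky factorisation.

```python
import numpy as np

# Supposed covariance matrix: what we claim the random vectors have.
R_supposed = np.array([[2.0, 0.6],
                       [0.6, 1.0]])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=R_supposed, size=200_000)

# Actual (empirical) covariance: the average of the outer products nu * nu'.
R_actual = (samples.T @ samples) / len(samples)

# A valid covariance matrix is positive definite; Cholesky succeeds only
# in that case, so this line doubles as a positive-definiteness check.
np.linalg.cholesky(R_actual)

print(np.round(R_actual, 2))  # close to R_supposed for large sample counts
```

With enough samples the two matrices agree closely; when a filter's supposed covariance drifts away from the actual one, that is precisely the ECM inconsistency the later chapters test for.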
5 Random vectors and covariance matrices in filter engineering
pp. 173–196 (24 pages)
In this chapter we explore the role that random vectors and covariance matrices play in filter engineering. For now we are making the assumption that all errors are zero mean.
6 Bias errors
pp. 197–217 (21 pages)
For a filter to produce unbiased estimates, the following conditions must all be present: the external model must be a good representation of the physical process; the filter model must either be the same as the external model or emulate it closely; the observation instrument must be properly calibrated (boresighted) so that the total observation vector Y_{n} is acceptably free of bias errors; and, cycle by cycle, the filter matrix W_{n} must satisfy the exactness constraint relative to the matrix T_{n} that has been incorporated into the filter.
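A numerical sketch of the last condition, under the assumption (mine, previewing the minimum-variance weight matrix of the later chapters) that the exactness constraint takes the form W T = I:

```python
import numpy as np

# Sketch: the minimum-variance weight matrix W = (T' R^-1 T)^-1 T' R^-1
# satisfies W @ T = I by construction, so an exact (noise-free)
# observation vector Y = T X maps back to exactly X.
rng = np.random.default_rng(1)
T = rng.standard_normal((6, 3))        # total observation matrix: 6 obs, 3 states
R = np.diag(rng.uniform(0.5, 2.0, 6))  # observation-noise covariance

Rinv = np.linalg.inv(R)
W = np.linalg.inv(T.T @ Rinv @ T) @ T.T @ Rinv

print(np.allclose(W @ T, np.eye(3)))   # True: the exactness constraint holds
```

A W matrix that fails this check introduces bias even when the models and the instrument are perfect.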
7 Three tests for ECM consistency
pp. 219–245 (27 pages)
There is little point in providing an estimate if we do not also give an indication of its accuracy. For this reason a complete filter always includes a filter covariance matrix. However, it doesn't end there. Critical decisions are often based on that matrix, and so providing it is only the first step. We must also be as certain as possible that the covariance matrix provided by the filter really does match the actual estimation-error covariance matrix, i.e. that the filter is ECM consistent. In this chapter we focus on three tests that determine whether ECM consistency is present or absent. The first, called the matrix-to-matrix ECM test, is a Monte-Carlo test. The second and third, called the 3-sigma and the Chi-squared ECM tests, can be run both as Monte-Carlo and as single-shot tests. All three tests can only be used in simulations.
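A hedged sketch of a Chi-squared-style ECM check (the chapter gives the precise tests; the model and numbers here are my own): if the covariance matrix S claimed by the filter matches the actual errors, the quadratic form e' S^-1 e is Chi-squared distributed with d degrees of freedom, so its Monte-Carlo average should be close to d.

```python
import numpy as np

d = 3
S = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.5]])   # covariance matrix the filter claims

# Simulate actual estimation errors that really do have covariance S,
# i.e. a perfectly ECM-consistent filter.
rng = np.random.default_rng(2)
errors = rng.multivariate_normal(np.zeros(d), S, size=100_000)

Sinv = np.linalg.inv(S)
q = np.einsum('ni,ij,nj->n', errors, Sinv, errors)  # e' S^-1 e per run

print(round(float(q.mean()), 1))  # close to d = 3 when ECM consistency holds
```

If the filter's S understates the actual errors (the typical failure mode), the average of q rises well above d and the test flags ECM inconsistency. This requires knowing the true errors, which is why such tests can only be run in simulations.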

Part 2: Non-recursive filtering
8 Minimum variance and the Gauss–Aitken filters
pp. 249–290 (42 pages)
The Gauss filters discussed in this chapter and the next are implementations of what is known as the minimum variance algorithm (MVA). In this chapter we derive the MVA and begin our exploration of how it is used in filter engineering. Eight Gauss filters will emerge, all of them based on the MVA: two Gauss–Aitken filters in this chapter for use in the all-linear environment of Case 1, and six Gauss–Newton filters in Chapter 9 for use in the nonlinear environments of Cases 2, 3 and 4. Later in this chapter we discuss the meanings of the words non-recursive and recursive, and it will become clear that the Gauss filters are all non-recursive implementations of the MVA.
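In the all-linear Case 1, the MVA reduces to the familiar weighted least-squares form. A minimal sketch (notation and values are my own, assuming stacked observations Y = T X + N with noise covariance R): the estimate is X* = (T' R^-1 T)^-1 T' R^-1 Y, and the same bracketed inverse is the filter covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # observation instants
X_true = np.array([10.0, -2.0])             # true [position, velocity]
T = np.column_stack([np.ones_like(t), t])   # total observation matrix
R = 0.25 * np.eye(len(t))                   # observation-noise covariance

# Noisy position observations of a constant-velocity target.
Y = T @ X_true + rng.multivariate_normal(np.zeros(len(t)), R)

Rinv = np.linalg.inv(R)
S = np.linalg.inv(T.T @ Rinv @ T)           # filter covariance matrix
X_est = S @ T.T @ Rinv @ Y                  # minimum-variance estimate

print(np.round(X_est, 1))                   # close to X_true = [10, -2]
```

Because the whole observation stack is processed in one step, rather than observation by observation, this is a non-recursive implementation, which is the distinction the chapter develops.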
9 Minimum variance and the Gauss–Newton filters
pp. 291–322 (32 pages)
In this chapter, we rederive the MVA by solving what is called Problem Statement 2. This will accomplish three objectives: explain what the words minimum variance mean, provide the link between the MVA and Cramér–Rao and enable us to create three tests by which to determine whether or not a filter is CR consistent.
10 The master control algorithms and goodness-of-fit
pp. 323–355 (33 pages)
In this chapter we discuss two versions of what we call the master control algorithm (MCA), which is used to control the Gauss filters when manoeuvring targets are being tracked. We refer to them as MCA-1 and MCA-2. The goodness-of-fit (GOF) test is used in both versions of the MCA, and it can also be implemented stand-alone, i.e. independently of the MCAs. Unlike the ECM consistency tests, the GOF test does not involve the true trajectory, and so it, and hence also the two versions of the MCA, can be operated both in the field and in simulations. This makes them particularly powerful constructs for use in filter engineering.
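A hedged sketch of a GOF-style test (the chapter gives the book's actual statistic; the model here is my own): it uses only the observations and the fitted estimate, never the true trajectory, which is why it can run in the field. Under the fitted model, the weighted residual sum r = e' R^-1 e is Chi-squared distributed with (m - n) degrees of freedom, and an unusually large r flags a model/data mismatch, such as the onset of a manoeuvre.

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0.0, 4.0, 9)
T = np.column_stack([np.ones_like(t), t])   # m = 9 observations, n = 2 states
R = 0.09 * np.eye(len(t))                   # observation-noise covariance
Y = T @ np.array([5.0, 1.5]) + rng.multivariate_normal(np.zeros(len(t)), R)

# Fit the model, then form the weighted sum of squared residuals.
Rinv = np.linalg.inv(R)
X_est = np.linalg.inv(T.T @ Rinv @ T) @ T.T @ Rinv @ Y
e = Y - T @ X_est                           # residuals: observed minus fitted
r = float(e @ Rinv @ e)                     # compare with a Chi-squared(m - n) threshold

print(round(r, 1))  # typically near m - n = 7 when the model fits
```

A threshold such as the 99th percentile of Chi-squared(7), about 18.5, would separate a good fit from a bad one; nothing in the computation required knowledge of the true trajectory.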

Part 3: Recursive filtering
11 The Kalman and Swerling filters
pp. 359–387 (29 pages)
This is a book about Gauss–Newton and polynomial filtering, and not about the Kalman filter. However, no book on filter engineering would be complete without at least a brief look at both the Kalman filter and its forerunner, the Swerling filter. It is not our intention to list advantages and disadvantages of various filters; an undertaking of that sort would be both futile and senseless, since such advantages and disadvantages must of necessity relate to particular applications. Nevertheless, there is one item that we will discuss, and it is the following: in theory the extended Kalman and Swerling filters are CR consistent, which means that, for a given set of inputs, they should produce results that have the same accuracy as those of the (CR-consistent) Gauss–Newton filters. In practice, however, the extended Kalman and Swerling filters must be operated in a way that makes them CR inconsistent, and so the results that they produce are in fact less accurate than those of the Gauss–Newton filters, and sometimes significantly so.
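For readers new to the recursive side, here is the generic textbook Kalman predict/update cycle (a standard formulation, not the book's own equations, with illustrative data of my own) for the constant-velocity model:

```python
import numpy as np

Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])   # transition matrix over a unit time step
H = np.array([[1.0, 0.0]])     # we observe position only
R = np.array([[0.25]])         # observation-noise variance

X = np.array([0.0, 0.0])       # initial estimate [position, velocity]
P = 100.0 * np.eye(2)          # initial (diffuse) covariance

for y in [1.1, 1.9, 3.2, 3.8, 5.1]:
    # Predict: propagate the estimate and covariance through the model.
    X = Phi @ X
    P = Phi @ P @ Phi.T        # a Q matrix would be added here to forestall instability
    # Update: blend in the new observation via the Kalman gain K.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    X = X + K @ (np.array([y]) - H @ X)
    P = (np.eye(2) - K @ H) @ P

print(np.round(X, 1))          # estimate near [5, 1] for this ramp-like data
```

Each cycle digests one observation and discards it, which is what makes the filter recursive; the book's point is that the Q matrix inserted to stabilise this recursion is also what destroys CR consistency.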
12 Polynomial filtering – 1
pp. 389–468 (80 pages)
The polynomial filters are based on the orthogonal polynomials of Legendre and Laguerre. Orthogonal polynomials are widely used in applied mathematics, physics and engineering, and the Legendre and Laguerre polynomials are only two of infinitely many sets, each of which has its own weight function.
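The orthogonality being referred to can be checked numerically (standard mathematics, not the book's derivation): the Legendre polynomials are orthogonal on [-1, 1] with weight function 1, so the integral of P_j · P_k vanishes for j ≠ k.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Gauss-Legendre quadrature nodes and weights: exact for the polynomial
# integrands below.
x, w = leggauss(20)

P2 = Legendre.basis(2)(x)   # P_2 evaluated at the nodes
P3 = Legendre.basis(3)(x)   # P_3 evaluated at the nodes

print(round(float(abs(np.sum(w * P2 * P3))), 12))  # 0.0: orthogonal
print(round(float(np.sum(w * P2 * P2)), 6))        # 2/(2k+1) with k = 2, i.e. 0.4
```

The Laguerre polynomials satisfy an analogous relation on [0, infinity) with an exponential weight, which is what gives the fading-memory filters their geometric discounting of old data.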
13 Polynomial filtering – 2
pp. 469–516 (48 pages)
Chapter 12 was an introduction to the EMP (expanding memory polynomial) and FMP (fading memory polynomial) filters and to some of the ways in which they can be used. In this chapter we derive the equations for the filters and the expressions for their covariance matrices.
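The simplest members of the two families make the expanding/fading distinction concrete (a degree-0 sketch of my own, not the book's general equations): the degree-0 EMP filter is a running average whose memory keeps expanding, while the degree-0 FMP filter discounts old data geometrically through a fading factor theta.

```python
def emp0(observations):
    """Degree-0 expanding memory polynomial filter: a running average."""
    est = 0.0
    history = []
    for n, y in enumerate(observations):
        est = est + (y - est) / (n + 1)   # gain 1/(n+1): memory keeps expanding
        history.append(est)
    return history

def fmp0(observations, theta=0.8):
    """Degree-0 fading memory polynomial filter with fading factor theta."""
    est = observations[0]
    history = [est]
    for y in observations[1:]:
        est = theta * est + (1.0 - theta) * y   # data k steps old is weighted by theta^k
        history.append(est)
    return history

data = [4.0, 6.0, 5.0, 5.0, 8.0]
print(emp0(data))   # final value is the mean of all five observations, 5.6
print(fmp0(data))
```

The EMP form suits start-up, when every observation should count equally; the FMP form suits steady state, when stale data must be forgotten, and this is the basis of their use as prefilters for Gauss–Newton.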

Back Matter
p. 517 (1 page)