This book provides a complete discussion of the Gauss-Newton filters, including all necessary theoretical background. It also covers the expanding and fading memory polynomial filters, based on the Legendre and Laguerre orthogonal polynomials, and shows how these can serve as pre-filters for Gauss-Newton. Of particular interest is a new approach to the tracking of manoeuvring targets that the Gauss-Newton filters make possible. Fourteen carefully constructed computer programs demonstrate the use and power of Gauss-Newton and the polynomial filters. Two of these include Kalman and Swerling filters in addition to Gauss-Newton, all three processing identical data that have been pre-filtered by polynomial filters. These two programs demonstrate the Kalman and Swerling instability to which Gauss-Newton is immune. They also show that if an attempt is made to forestall Kalman/Swerling instability by the use of a Q matrix, then those filters cease to be Cramér-Rao consistent and become less accurate than the always Cramér-Rao consistent Gauss-Newton filters.
Inspec keywords: Kalman filters; Legendre polynomials; tracking filters; matrix algebra
Other keywords: Swerling filters; Laguerre orthogonal polynomials; Legendre orthogonal polynomials; Gauss-Newton filters; manoeuvring targets tracking; computer programs; tracking filter engineering; polynomial filters; matrix; Kalman filters
Subjects: Filtering methods in signal processing; Algebra; Signal processing theory
This chapter explains the meanings of three concepts that have been discussed throughout the book: error/covariance-matrix consistency, Cramér-Rao consistency and memory, as it relates to filter engineering.
Models in many disciplines are specified by algebraic equations. In filter engineering, they are always specified by differential equations (DEs), and in this chapter we develop the necessary background to enable us to use DEs as models. For each DE we will see that there is a unique transition matrix, and it is through the transition matrix that the DE is actually implemented. Our discussion is thus about DEs and their transition matrices, and how such matrices are derived.
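As a small sketch of the idea (not taken from the book), take the simplest constant-velocity model, the DE x''(t) = 0. Written in state form d/dt [x, xdot]' = A [x, xdot]', its transition matrix is Phi(tau) = exp(A*tau), and because this particular A is nilpotent the matrix-exponential series terminates after the linear term:

```python
import numpy as np

# Constant-velocity model x''(t) = 0 in state form: d/dt [x, xdot]' = A [x, xdot]'.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def transition_matrix(tau, terms=10):
    """Phi(tau) = exp(A * tau), via the (here terminating) matrix-exponential series."""
    Phi = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ (A * tau) / k
        Phi = Phi + term
    return Phi

# For this A the series stops after the linear term: Phi(tau) = [[1, tau], [0, 1]].
```

The transition matrix is what actually propagates the state in a filter, x(t + tau) = Phi(tau) x(t), and it obeys the semigroup property Phi(a) @ Phi(b) = Phi(a + b).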
The filter model, which may be the same as the external model or differ from it, is implemented when the T matrix is included in the equations of the filter. In the same way, the observation equation(s) become part of the filtering algorithm when T is included in it. All of the results derived in this chapter will be needed when we discuss the Gauss-Aitken and Gauss-Newton filters in Chapters 8 and 9.
Random vectors and covariance matrices are the basic building blocks of filter engineering. In this chapter we review some of their properties, and in the next we consider how they are used. In the first and second sections of this chapter we discuss random vectors and their covariance matrices, and at first sight the reader may feel that the material has been well covered in first courses in probability and statistics. However, that is not the case. It is here that we lay the foundation for the split between the two types of covariance matrices - supposed and actual - and between what is theoretical and covered in most such courses, and what is empirical and more often encountered in Monte-Carlo simulations. In the third and fourth sections of the chapter we discuss the positive-definite property of certain matrices, a concept that is often not covered in introductory linear algebra and which plays a key role in filter engineering.
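To make the supposed/actual split concrete, here is an illustrative sketch (the matrix values are invented): a theoretical ("supposed") covariance matrix, an empirical ("actual") one estimated Monte-Carlo style from samples, and a Cholesky-based check of the positive-definite property:

```python
import numpy as np

def is_positive_definite(M):
    """A symmetric matrix is positive definite iff its Cholesky factor exists."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

R_supposed = np.array([[4.0, 1.0],
                       [1.0, 3.0]])    # theoretical covariance (invented values)

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(2), R_supposed, size=200_000)
R_actual = np.cov(samples.T)           # empirical covariance built from the samples

# R_actual converges to R_supposed as the number of Monte-Carlo runs grows,
# and every legitimate covariance matrix must pass the positive-definite test.
```

A matrix such as [[1, 2], [2, 1]] (eigenvalues 3 and -1) fails the test and therefore cannot be a covariance matrix.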
In this chapter we explore the role that random vectors and covariance matrices play in filter engineering. For now we are making the assumption that all errors are zero mean.
For a filter to produce unbiased estimates, the following conditions must all be satisfied: the external model must be a good representation of the physical process; the filter model must either be the same as the external model or must emulate it closely; the observation instrument must be properly calibrated (bore-sighted) so that the total observation vector Yn is acceptably free of bias errors; and, cycle by cycle, the filter matrix Wn must satisfy the exactness constraint relative to the matrix Tn that has been incorporated into the filter.
There is little point in providing an estimate if we do not also give an indication of its accuracy. For this reason a complete filter always includes a filter covariance matrix. However, it doesn't end there. Critical decisions are often based on that matrix, and so providing it is only the first step. We must also be as certain as possible that the covariance matrix provided by the filter really does match the actual estimation-error covariance matrix - i.e. that the filter is ECM consistent. In this chapter, we focus on three tests that determine if ECM consistency is present or absent. The first - called the matrix-to-matrix ECM test - is a Monte-Carlo type test. The second and third - called the 3-sigma and the Chi-squared ECM tests - can be run both as Monte-Carlo and as single-shot tests. All three tests can only be used in simulations.
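As an illustrative sketch of the Chi-squared ECM test (the dimension and covariance values are invented, and the filter itself is replaced by errors drawn directly from the claimed covariance): if the covariance matrix S provided by the filter matches the actual error statistics, the statistic q = e' S^-1 e follows a Chi-squared distribution with k degrees of freedom, so its Monte-Carlo mean should be k:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3                                   # state dimension (illustrative)
S_filter = np.diag([1.0, 0.5, 0.25])    # covariance matrix the filter provides

# Stand-in for an ECM-consistent filter: estimation errors whose actual
# covariance equals the one the filter claims.
errors = rng.multivariate_normal(np.zeros(k), S_filter, size=100_000)

# Chi-squared ECM statistic, one value per Monte-Carlo run: q = e' S^-1 e.
S_inv = np.linalg.inv(S_filter)
q = np.einsum('ij,jk,ik->i', errors, S_inv, errors)

# ECM consistency => q ~ chi-squared(k), so mean(q) should be close to k.
```

Forming e requires the true state, which is exactly why these tests can only be run in simulations.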
The Gauss filters discussed in this chapter and the next are implementations of what is known as the minimum variance algorithm (MVA). In this chapter we derive the MVA and we begin our exploration of how it is used in filter engineering. Eight Gauss filters will emerge, all of them based on the MVA - two Gauss-Aitken filters in this chapter for use in the all-linear environment of Case 1, and six Gauss-Newton filters in Chapter 9 for use in the non-linear environments of Cases 2, 3 and 4. Later in this chapter we discuss the meanings of the words non-recursive and recursive, and it will become clear that the Gauss filters are all non-recursive implementations of the MVA.
In this chapter, we re-derive the MVA by solving what is called Problem Statement 2. This will accomplish three objectives: explain what the words minimum variance mean, provide the link between the MVA and Cramér-Rao, and enable us to create three tests by which to determine whether or not a filter is CR consistent.
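A minimal numerical sketch of the MVA in the linear Case-1 setting (the observation schedule and noise values are invented): with total observation equation Y = T X + N and cov(N) = R, the minimum variance estimate is X* = (T' R^-1 T)^-1 T' R^-1 Y, and (T' R^-1 T)^-1 serves as the filter covariance matrix, which for Gaussian errors coincides with the Cramér-Rao bound:

```python
import numpy as np

def mva(Y, T, R):
    """Minimum variance estimate X* = W Y with W = (T' R^-1 T)^-1 T' R^-1."""
    R_inv = np.linalg.inv(R)
    S = np.linalg.inv(T.T @ R_inv @ T)   # filter covariance (CR bound, Gaussian case)
    W = S @ T.T @ R_inv                  # filter matrix; note W @ T = I
    return W @ Y, S, W

# Illustrative data: position observed at four instants, constant-velocity state.
T = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
R = 0.2 * np.eye(4)
rng = np.random.default_rng(2)
X_true = np.array([5.0, -1.0])
Y = T @ X_true + rng.multivariate_normal(np.zeros(4), R)

X_hat, S, W = mva(Y, T, R)
```

Note that W @ T = I holds by construction; this is the exactness constraint, from which unbiasedness of the estimate follows.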
In this chapter we discuss two versions of what we call the master control algorithm (MCA), which is used to control the Gauss filters when manoeuvring targets are being tracked. We refer to them as MCA-1 and MCA-2. The goodness-of-fit (GOF) test is used in both versions of the MCA, and it can also be implemented stand-alone, i.e. independently of the MCAs. Unlike the ECM-consistency tests, the GOF test does not require knowledge of the true trajectory, and so it - and hence also the two versions of the MCA - can be operated both in the field and in simulations. This makes them particularly powerful constructs for use in filter engineering.
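As a sketch of the kind of statistic a goodness-of-fit test rests on (illustrative, not the book's exact formulation): fit a model to the observations, form the residuals, and compare the scaled residual sum of squares with its Chi-squared distribution. Only observations are involved, never the true trajectory, which is what makes such a test usable in the field:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 50, 2                             # observations, fitted parameters
t = np.arange(N, dtype=float)
T = np.column_stack([np.ones(N), t])     # degree-1 polynomial model
sigma = 0.3                              # observation-noise standard deviation

def gof_statistic(Y):
    """Scaled residual sum of squares; chi-squared(N - m) when the model fits."""
    X_hat = np.linalg.lstsq(T, Y, rcond=None)[0]
    r = Y - T @ X_hat                    # residuals: built from observations only
    return (r @ r) / sigma**2

stats = [gof_statistic(2.0 + 0.1 * t + rng.normal(0.0, sigma, N))
         for _ in range(5000)]

# When the model matches the data, the statistic has mean N - m; a manoeuvre
# that breaks the fit inflates it well beyond the chi-squared thresholds.
```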
This is a book about Gauss-Newton and polynomial filtering and not about the Kalman filter. However, no book on filter engineering would be complete without at least a brief look at both the Kalman filter and its forerunner, the Swerling filter. It is not our intention to list advantages and disadvantages of various filters. An undertaking of that sort would be both futile and senseless, since such advantages and disadvantages must of necessity relate to particular applications. Nevertheless, there is one item that we will discuss, and that is the following: In theory the extended Kalman and Swerling filters are CR consistent, which means that - for a given set of inputs - they should produce results that have the same accuracy as those of the (CR-consistent) Gauss-Newton filters. In practice however, the extended Kalman and Swerling filters must be operated in a way that makes them CR inconsistent, and so the results that they produce are in fact less accurate than those of the Gauss-Newton filters, and sometimes significantly so.
The polynomial filters are based on the orthogonal polynomials of Legendre and Laguerre. Orthogonal polynomials are widely used in applied mathematics, physics and engineering, and the Legendre and Laguerre polynomials are only two of infinitely many sets, each of which has its own weight function.
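As a quick numerical illustration of what orthogonality means here (the book works with discrete versions of these polynomials; this sketch uses the continuous Legendre case): on [-1, 1] with weight function 1, distinct Legendre polynomials integrate to zero against each other, while the integral of P_n squared is 2/(2n + 1):

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Gauss-Legendre quadrature: exact for polynomial integrands up to degree 39.
x, w = leg.leggauss(20)

def inner(m, n):
    """Integral over [-1, 1] of P_m(x) * P_n(x) dx, with weight function 1."""
    return np.sum(w * leg.Legendre.basis(m)(x) * leg.Legendre.basis(n)(x))

# inner(m, n) is 0 for m != n, and 2 / (2*n + 1) for m == n.
```

The Laguerre polynomials play the same role on [0, infinity) with the weight function exp(-x), which is what gives the fading-memory filters their exponentially decaying weighting of past data.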
Chapter 12 was an introduction to the EMP (expanding memory polynomial) and FMP (fading memory polynomial) filters and to some of the ways in which they can be used. In this chapter we derive the equations for the filters and the expressions for their covariance matrices.
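As a foretaste, here is an illustrative sketch of the two degree-0 filters (standard results consistent with the EMP/FMP framework; the chapter's derivations cover general degree): the degree-0 EMP is the expanding-memory running average with gain 1/(n + 1), and the degree-0 FMP is the exponential smoother with fading factor theta:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, theta = 1.0, 0.9
y = 5.0 + rng.normal(0.0, sigma, 2000)   # constant true value plus noise

# Degree-0 EMP: gain 1/(n+1) gives an expanding memory (exact running average).
emp = 0.0
for n, yn in enumerate(y):
    emp += (yn - emp) / (n + 1)

# Degree-0 FMP: constant gain (1 - theta) gives a fading (exponential) memory.
fmp = y[0]
for yn in y[1:]:
    fmp = theta * fmp + (1 - theta) * yn

# EMP variance decays as sigma^2/(n+1); the FMP estimate settles at a
# steady-state variance of sigma^2 * (1 - theta) / (1 + theta).
```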