Adaptive Prediction and Predictive Control
This book is about the prediction and control of processes which can be expressed by discrete-time models and whose characteristics vary in some way with time. The aim of the book is to provide a unified and comprehensive coverage of the principles, perspectives and methods of adaptive prediction, which is used by scientists and researchers in a wide variety of disciplines.
Inspec keywords: neurocontrollers; nonlinear control systems; adaptive control; parameter estimation; transfer functions; Kalman filters; state-space methods; predictive control
Other keywords: parameter estimation; adaptive prediction; process models; Kalman filter; quasi-periodic series; nonlinear processes; transfer-function models; input-output model; neural networks; predictive control; state-space approaches; smoothing; GMDH
Subjects: Nonlinear control systems; Simulation, modelling and identification; Optimal control; Self-adjusting control systems; Neurocontrol; Control system analysis and synthesis methods
 Book DOI: 10.1049/PBCE052E
 Chapter DOI: 10.1049/PBCE052E
 ISBN: 9780863411939
 eISBN: 9781849193481
 Page count: 536
 Format: PDF

Front Matter
(1 p.)

1 Introduction
pp. 1–8 (8)
Meaningful predictions, control based on predictive performance, and robust implementation are the main themes of this book.

2 Process models
pp. 9–55 (47)
Modelling concerns the mathematical representation of the nature of the process with respect to its environment; the purpose of modelling and the type of data available are important considerations.

3 Parameter Estimation
pp. 56–110 (55)
System identification is a prerequisite to adaptive prediction and control; it concerns the generation (for example, through specific experimentation) and collection of information revealing the characteristic behaviour of the process, and the development of a mathematical representation of the process. Thus, while parameter estimation concerns the determination of the numerical values of the parameters of the process model which best describe the dynamics of the process, identification involves model structure selection, collection of relevant information, parameter estimation, and model validation. The nature of the model is very much process and problem dependent, as discussed in Chapter 2. This chapter is primarily devoted to the problems of parameter estimation, model order selection and validation. There are different methods of parameter estimation; the suitability of a method depends on the quality of information contained in the data, the conceptual model structure and the application concerned. A detailed study of the estimation methods is beyond the scope of this book. The discussion focuses mainly on the least squares (LS) method, which is a basic method of parameter estimation.
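As a hypothetical numerical illustration (not taken from the book), the sketch below estimates the parameters a and b of a first-order ARX process model y(k) = a·y(k−1) + b·u(k−1) + e(k) by ordinary least squares, the basic method on which the chapter focuses; the model, true parameter values and noise level are all assumed for the example.

```python
import numpy as np

# Simulate an assumed first-order ARX process: y(k) = a*y(k-1) + b*u(k-1) + e(k)
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
n = 200
u = rng.standard_normal(n)                 # excitation input
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

# Stack the regressors [y(k-1), u(k-1)] and solve the LS problem.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

With persistent excitation and low noise, the estimates land very close to the assumed true values.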

4 Some popular methods of prediction
pp. 111–132 (22)
The following sections are included: introduction; smoothing methods of prediction; the Box–Jenkins method; other selected methods; and concluding remarks.

5 Adaptive prediction using transfer-function models
pp. 133–159 (27)
Adaptive prediction is usually based on minimization of the mean square prediction error. The prediction involves a two-stage procedure: (i) estimation of the parameters of an appropriate model of the time series or the process, and (ii) reconfiguration of the process model into a prediction model, and computation of the prediction using the estimated parameters.
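The two-stage procedure can be sketched as follows (an assumed minimal example, not the book's algorithm): stage (i) uses recursive least squares (RLS) to track the parameters of an assumed first-order model, and stage (ii) uses the current estimates to form the one-step-ahead prediction.

```python
import numpy as np

# Simulate an assumed process y(k) = 0.7*y(k-1) + 0.3*u(k-1) + e(k)
rng = np.random.default_rng(1)
n = 300
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.7 * y[k - 1] + 0.3 * u[k - 1] + 0.01 * rng.standard_normal()

theta = np.zeros(2)             # parameter estimates [a, b]
P = 1000.0 * np.eye(2)          # estimation-error covariance (large initial value)
preds = np.zeros(n)
for k in range(1, n):
    phi = np.array([y[k - 1], u[k - 1]])
    preds[k] = phi @ theta                       # stage (ii): one-step prediction
    err = y[k] - preds[k]                        # prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)          # stage (i): RLS gain
    theta = theta + K * err                      # parameter update
    P = P - np.outer(K, phi @ P)                 # covariance update

a_hat, b_hat = theta
```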

6 Kalman filter and state-space approaches
pp. 160–199 (40)
In this chapter, state-space modelling and optimal estimation of states using the Kalman filter are studied. Compared with transfer-function models based on input-output data, the state-space approach offers the additional flexibility of accommodating internal variables as states, which may not be accessible or measurable. The state-space approach permits the use of a large number of widely studied and well-established methods and algorithms for estimation, prediction, smoothing and control. The Kalman filter can produce optimal state estimates under steady-state conditions, even when the measurements are noisy, provided the noise is independent with a Gaussian distribution.
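A minimal scalar sketch (an assumed example, not the book's notation) shows the two-step time/measurement update cycle: a random-walk state x(k+1) = x(k) + w(k) is tracked from noisy measurements z(k) = x(k) + v(k).

```python
import numpy as np

# Scalar Kalman filter for an assumed random-walk state with noisy measurements.
rng = np.random.default_rng(2)
n = 500
q, r = 1e-4, 1.0                   # process and measurement noise variances
x = np.cumsum(np.sqrt(q) * rng.standard_normal(n)) + 5.0   # true state
z = x + np.sqrt(r) * rng.standard_normal(n)                # measurements

x_hat, p = 0.0, 100.0              # initial estimate and error covariance
for k in range(n):
    p = p + q                                  # time update (predict)
    K = p / (p + r)                            # Kalman gain
    x_hat = x_hat + K * (z[k] - x_hat)         # measurement update (correct)
    p = (1.0 - K) * p
```

Despite a unit-variance measurement noise, the steady-state estimation error covariance settles near sqrt(q·r) = 0.01, far below the raw measurement variance.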

7 Orthogonal transformation and modelling of periodic series
pp. 200–235 (36)
Two basic consequences of orthogonal transformation are relative decorrelation of data and compression of information, which can be used for modelling and prediction of periodic series.
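As an assumed illustration of the compression-of-information idea: if consecutive periods of a periodic series are placed in the rows of a matrix, the SVD concentrates the periodic component into the dominant singular value, while noise spreads thinly over the rest.

```python
import numpy as np

# Arrange a noisy periodic series period-by-period and examine its SVD.
rng = np.random.default_rng(3)
period, cycles = 16, 10
t = np.arange(period * cycles)
y = np.sin(2 * np.pi * t / period) + 0.01 * rng.standard_normal(t.size)

A = y.reshape(cycles, period)            # one period per row
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
ratio = s[0] / s[1]                      # dominance of the periodic component
```

The large first-to-second singular value ratio is what makes the transformation useful for modelling and prediction of the periodic part.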

8 Modelling of nonlinear processes: an introduction
pp. 236–260 (25)
Certain special features characterize a nonlinear process, which can be represented by a single-stage or a multi-stage model, linear or nonlinear in the parameters.

9 Modelling of nonlinear processes using GMDH
pp. 261–273 (13)
Nonlinear processes can be modelled using hierarchical stages of simple nonlinearity, where each building block is represented by a linear-in-the-parameters model.
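A single building block of this kind can be sketched as follows (an assumed quadratic form, which is a common GMDH choice): the block is nonlinear in the inputs but linear in the parameters, so it can be fitted by least squares; full GMDH stacks such blocks hierarchically.

```python
import numpy as np

# One assumed GMDH-style building block:
#   z = w0 + w1*x1 + w2*x2 + w3*x1*x2 + w4*x1^2 + w5*x2^2
rng = np.random.default_rng(4)
n = 400
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
w_true = np.array([1.0, -0.5, 2.0, 0.7, 0.3, -1.2])   # assumed coefficients
Phi = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
z = Phi @ w_true + 0.01 * rng.standard_normal(n)

# Linear-in-the-parameters, so ordinary least squares recovers the weights.
w_hat, *_ = np.linalg.lstsq(Phi, z, rcond=None)
```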

10 Modelling and prediction of nonlinear processes using neural networks
pp. 274–303 (30)
Nonlinear series, with or without periodicity, as well as nonlinear input-output processes can be modelled using neural networks.

11 Modelling and prediction of quasi-periodic series
pp. 304–330 (27)
Modelling of nearly periodic time series is quite straightforward and can be done, for example, using the singular value decomposition based method or the Box–Jenkins method. So if a quasi-periodic series can be configured into multiple nearly periodic series through decomposition or transformation, the modelling problem can be simplified; this is the basic concept used for modelling quasi-periodic series in this chapter.

12 Predictive control (Part I): input-output model based
pp. 331–365 (35)
Predictive control aims at obtaining the predicted performance of the process as specified. This chapter describes some of the popularly used predictive control methods for linear systems. A real-life process is usually dynamic in nature and works in a stochastic environment, so it is necessary for the controller to be adaptive.

13 Predictive control (Part II): state-space model based
pp. 366–398 (33)
Long-range predictive control (LRPC) methods formulated using transfer-function models were discussed in Chapter 12. This chapter is devoted to the study of the state-space formulation of linear quadratic (LQ) controllers, which form another popular class of predictive controllers. Here, the process is represented by a linear state-space model, and the cost criterion is a quadratic function of the states and the control inputs. If the disturbances to the process (as expressed by the model) are Gaussian in nature, the LQ control is referred to as linear quadratic Gaussian (LQG) control.
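For a scalar assumed example (not from the book), the steady-state LQ gain can be computed by iterating the discrete-time Riccati equation for x(k+1) = a·x(k) + b·u(k) with cost Σ(q·x² + r·u²); the values of a, b, q, r below are chosen only for illustration.

```python
import numpy as np

# Scalar discrete-time Riccati iteration for an assumed unstable process.
a, b, q, r = 1.2, 1.0, 1.0, 0.1
p = q
for _ in range(1000):
    p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    if abs(p_next - p) < 1e-12:      # converged to the steady-state solution
        break
    p = p_next

# Optimal state-feedback gain for the control law u(k) = -K*x(k).
K = a * b * p / (r + b * b * p)
```

The closed-loop pole a − b·K lies inside the unit circle, so the controller stabilizes the open-loop-unstable process.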

14 Smoothing and filtering
pp. 399–430 (32)
Some methods for the separation or extraction of usable information from the available data are presented. First, optimum smoothing in a state-space framework is discussed, to familiarize the reader with the various issues connected with smoothing. This is followed by studies on bidirectional filtering, which can be used to perform smoothing with minimum lag or phase shift. Bidirectional processing is a characteristic feature incorporated in many algorithms for bias-free processing; for example, the fixed-interval smoother uses similar forward-backward passes, and the centred moving average used in time series analysis is also effectively similar in concept. Orthogonal transformation offers a numerically robust method of smoothing and signal extraction; the smoothing is performed through the elimination of insignificant singular values, based on singular value decomposition (SVD). Besides smoothing, the potential of the SVD-based methods for signal extraction and pattern estimation in a noisy environment is also demonstrated through application studies. The approach depends on the repetitive nature of the signal component of interest, and hence the data are appropriately configured for analysis. A case study on fetal ECG extraction from the maternal ECG shows that extraction is possible with only one signal (i.e. the maternal ECG signal from the abdominal lead), irrespective of a low signal-to-noise ratio; the other available methods of fetal ECG extraction require one or more additional signals. The application of orthogonal transformation for smoothing and filtering is an area of active research, and the present study has been only a glimpse of its enormous potential.
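The lag-cancelling effect of bidirectional filtering can be sketched with an assumed minimal example (not the book's algorithm): a first-order low-pass filter applied forward delays a symmetric pulse, but applying the same filter backward over the result cancels the phase shift, leaving the smoothed peak where the true peak is.

```python
import numpy as np

def ewma(x, alpha=0.8):
    """First-order (exponentially weighted) low-pass filter, causal direction."""
    z = np.empty_like(x)
    z[0] = x[0]
    for k in range(1, len(x)):
        z[k] = alpha * z[k - 1] + (1.0 - alpha) * x[k]
    return z

t = np.arange(101)
clean = np.exp(-((t - 50.0) ** 2) / 50.0)    # symmetric pulse peaking at t = 50
forward = ewma(clean)                        # forward pass only: peak is delayed
smooth = ewma(forward[::-1])[::-1]           # backward pass cancels the lag
```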

Appendix 1: Vector and matrix operations
pp. 431–438 (8)
The following topics are dealt with: basic definitions; matrix multiplication; determinant; matrix inversion; and differentiation.

Appendix 2: Exponential Fourier series
pp. 439–440 (2)
The trigonometric Fourier series representation of a periodic process f(t), with period T, was discussed in Sec. 2.5.1. The derivation of the exponential Fourier series representation is presented here. The sinusoidal functions can be expressed in terms of exponential functions as follows: e^{iθ} = cos θ + i sin θ, cos θ = (e^{iθ} + e^{−iθ})/2, sin θ = (e^{iθ} − e^{−iθ})/(2i), with i = √(−1). So the trigonometric Fourier series (2.5.2) can be expressed as f(t) = a₀ + Σₙ [(aₙ/2)(e^{inω₀t} + e^{−inω₀t}) − (i bₙ/2)(e^{inω₀t} − e^{−inω₀t})].

Appendix 3: UD covariance measurement update
pp. 441–448 (8)
In sequential state estimation, the covariance of the estimation error undergoes two different updates: (i) the measurement update, and (ii) the time update. The generic expressions for these two updates with respect to the Kalman filter state estimator are discussed in Sec. 6.6. These updates appear in different areas of estimation and control.

Appendix 4: Centred moving average
pp. 449–450 (2)
The two basic purposes of the centred moving average (CMA) are (i) estimation of the trend in a time series without any time lag, and (ii) reduction of the effects of random or spurious noise associated with the data. The general consequence of averaging is low-pass filtering; that is, the high-frequency components are attenuated whereas the low-frequency components are retained. Any averaging which uses the present and past data only will produce a time lag in the averaged data. In CMA, since both past and future data are used, a lag-free estimate of the series is produced.
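The lag-free property can be checked with an assumed example: a centred moving average over a symmetric odd-length window reproduces a linear trend exactly in the interior of the record, whereas a purely backward-looking average would lag behind it.

```python
import numpy as np

# Centred moving average of an assumed linear trend y = 2t + 1.
w = 5                                        # odd window length, centred
t = np.arange(50)
y = 2.0 * t + 1.0
kernel = np.ones(w) / w
# mode="valid" aligns each average with the centre of its window,
# i.e. with y[w//2 : -(w//2)].
cma = np.convolve(y, kernel, mode="valid")
```

Because the window is symmetric about each point, the average of five consecutive linear values equals the centre value, so the trend is recovered without lag.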

Appendix 5A: Recursion of the Diophantine equation
pp. 451–453 (3)
The following sections are included: problem statement; recursive solution; and implementation.

Appendix 5B: Predictor for a multivariable process
pp. 454–455 (2)
This section of the book deals with the predictor algorithm formulated for a multivariable stochastic process.

Appendix 6: The covariance matrix for Pstep predictor
pp. 456–457 (2)
The covariance matrix P(k+p|k) is indicative of the degree of confidence that the estimator has in x̂(k+p|k), the p-step-ahead prediction of the state x(k). The derivation of P(k+p|k) is presented in this appendix.

Appendix 7A: Details of selected examples of Chapter 7
pp. 458–459 (2)
The supporting details of the examples presented for the electrical power load problem and the air traffic problem are given in this appendix.

Appendix 7B: Data on ozone column thickness
p. 460 (1)
The ozone column thickness of the atmosphere measured at Arosa, Switzerland, is reproduced here from P. Bloomfield (1985).

Appendix 7C: Data on atmospheric concentration of carbon dioxide
p. 461 (1)
The atmospheric concentration of carbon dioxide in parts per million for 22 consecutive years, from 1959 to 1980, measured at the Mauna Loa Observatory in Hawaii, is given in a table; the data are presented row-wise for each year (from January to December) starting from 1959.

Appendix 7D: Data on electrical power load on a substation
pp. 462–463 (2)
The electrical power load on a substation for 25 consecutive Mondays of the year 1983 is presented here. The hourly data are arranged row-wise in the following table for the consecutive Mondays.

Appendix 7E: Data on unemployment in Germany
p. 464 (1)
In this appendix, the monthly figures on the number of people unemployed in Germany during the period 1948 to 1978 are given. The monthly data for each year are presented row-wise.

Appendix 7F: Data on rainfall in India
pp. 465–466 (2)
In India, the summer monsoon rainfall shows considerable spatial variability. The data presented here concern a spatially coherent rainfall pattern over the northwestern and central parts of India, covering about 55% of the total area of the country. The monthly rainfall data at 14 meteorological subdivisions over the years 1871–1990 were used by Parthasarathy, Rupa Kumar and Munot (1993) to prepare the homogeneous rainfall data set. The data from 1940 to 1990 are extracted and presented here.

Appendix 8A: Data on yearly averaged sunspot numbers
p. 467 (1)
The count of the number of spots on the Sun's surface is of interest in astronomy and climatology for geophysical reasons; the series is also of interest to time series analysts for its time-varying nature. Daily observations from more than 50 observatories are used to arrive at the relative values of the sunspot numbers; the yearly averaged values for the years 1700 to 1987 are presented here.

Appendix 8B: Data on variations in the rotation rate of Earth
p. 468 (1)
The variations in the rate of rotation of the Earth are reproduced. The series has been of interest for its possible relationship with the sunspot numbers (Appendix 8A) and, in general, with the planetary system. Yearly data for the period 1820 to 1970 are presented here.

Appendix 9: Data on COD process in the Osaka Bay
pp. 469–470 (2)
The chemical oxygen demand (COD) can be considered to be an index of water pollution in the sea. The COD concentration is monitored at a number of stations in Osaka Bay along with water temperature, transparency and dissolved oxygen concentration. Altogether, 84 sets of monthly data are available, corresponding to the years 1976 to 1983. The output variable is the COD concentration, which is related to the input variables: water temperature, water transparency and dissolved oxygen concentration. The observed values of filtered COD (i.e. COD values of sea water free from suspended materials) are also presented.

Appendix 10: Generalized delta rule
pp. 471–473 (3)
The derivation of the generalized delta rule (GDR), which is due to Rumelhart, Hinton and Williams (1986), is presented here. The GDR gives an expression for the adaptive change in the weights on the interconnections between the nodes, minimizing the cost. The GDR is based on the gradient descent algorithm, according to which the adaptation in the weights is proportional to the gradient of the error.
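The gradient-descent weight update can be sketched for the simplest case, a single sigmoid unit (an assumed minimal example; the full GDR back-propagates the same kind of update through hidden layers). All data and hyperparameters below are assumptions for illustration.

```python
import numpy as np

# Delta-rule training of one sigmoid unit on an assumed separable toy problem.
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 2))
targets = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
eta = 0.5                                        # learning rate

def loss(w, b):
    out = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.mean((targets - out) ** 2)         # mean-square-error cost

loss0 = loss(w, b)
for _ in range(500):
    out = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    delta = (targets - out) * out * (1.0 - out)  # error * sigmoid derivative
    w += eta * X.T @ delta / len(X)              # weight change ∝ error gradient
    b += eta * np.mean(delta)
loss1 = loss(w, b)
```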

Appendix 11: SVR spectrum
pp. 474–478 (5)
The SVR spectrum is a method of determining the period length of the periodic components present, if any, in a signal or data sequence; the periodic components need not be sinusoidal. The data <y(k)> are arranged into the consecutive rows of a matrix A, which is singular value decomposed; the generic term singular value ratio (SVR) spectrum stands for the spectrum of a function (usually the squared ratio) of the most dominant and other singular values against varying row lengths of the data matrix.
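A sketch of the computation (an assumed minimal implementation, not the book's code): for each trial row length L, the data are reshaped into a matrix and the squared ratio of the two largest singular values is recorded; the ratio peaks when L matches the true period, here a deliberately non-sinusoidal assumed pattern of length 8.

```python
import numpy as np

rng = np.random.default_rng(6)
pattern = np.array([0.0, 1.0, 3.0, 2.0, 5.0, 1.0, 0.0, 4.0])  # assumed period 8
y = np.tile(pattern, 12) + 0.001 * rng.standard_normal(96)

def svr_spectrum(y, max_len):
    """Squared ratio of the two largest singular values vs. trial row length."""
    spectrum = {}
    for L in range(2, max_len + 1):
        rows = len(y) // L
        A = y[: rows * L].reshape(rows, L)       # one candidate period per row
        s = np.linalg.svd(A, compute_uv=False)
        spectrum[L] = (s[0] / s[1]) ** 2
    return spectrum

spec = svr_spectrum(y, 12)
best_len = max(spec, key=spec.get)               # row length with the peak ratio
```

When L equals the period, the rows are (nearly) identical, the matrix is nearly rank one, and the ratio becomes very large.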

Appendix 12A: Systems and controls basics
pp. 479–484 (6)
Some basic concepts related to systems, models and control methods are introduced here. Every system or process is, by itself, a continuous-time process. However, for considerations related to measurement and computation, process measurements are often recorded at discrete-time intervals, and the monitored process is loosely called a discrete-time process.

Appendix 12B: Smith predictor
pp. 485–487 (3)
One of the fundamental works in predictive control is by Smith (1957), who designed a controller essentially free from the effects of the time delay. A typical process shows an inherent time delay between the input u and the process output y. Let the time delay be expressed as G_{d}, given by exp(−sτ) in continuous time, s being the Laplace operator. The control action can at best force the output to be equal to the set point in a time equal to the dead time of the process; faster control performance is not possible. Underestimation of the time delay and unnecessary control action can lead to instability.

Appendix 13A: Derivation of statespace deterministic LQ control
pp. 488–490 (3)
In this appendix, the state-space formulation of the deterministic LQ control problem is presented. A multi-input multi-output process is considered. The derivation is based on dynamic programming, which was developed by Bellman in 1953 (Bellman and Dreyfus, 1962). The derivation can easily be simplified to the single-input single-output case.

Appendix 13B: Transmittance matrix: formulation and implementation
pp. 491–497 (7)
Use of transmittance matrices offers a straightforward method for simplifying a polynomial matrix multiplication problem into an ordinary matrix multiplication problem. The transmittance matrix formulation can be very useful in performing state estimation. This appendix discusses the vector formulation of the transmittance matrix and its application to state estimation.

Appendix 13C: Covariance time update using UD factorization
pp. 498–501 (4)
UD covariance factorization is attractive because of its numerical robustness, algorithmic simplicity and computational efficiency. Bierman (1977, p. 124) studies the general covariance time update problem: P = ΦP*Φ^{T} + GQG^{T}, where the UD factors of P*, P* = U*D*U*^{T}, are given and the updated UD factors are computed. A particular form of the UD covariance update problem relating to the LQ state-space controller is discussed in Clarke et al. (1985), where the vector implementation is also presented; the material presented here is largely based on this reference.
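The factorization itself can be sketched as follows (an assumed minimal Bierman-style routine, not the book's or Clarke et al.'s implementation): P is decomposed as P = U D U^{T} with U unit upper triangular and D diagonal.

```python
import numpy as np

def ud_factorize(P):
    """Assumed minimal UD factorization: P = U @ diag(d) @ U.T,
    U unit upper triangular, for symmetric positive-definite P."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):       # eliminate columns right to left
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        P[:j, :j] -= np.outer(U[:j, j], U[:j, j]) * d[j]
    return U, d

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
P = A @ A.T + np.eye(4)                  # a symmetric positive-definite matrix
U, d = ud_factorize(P)
```

Because only U and d are propagated, squaring of rounding errors in the covariance is avoided, which is the source of the numerical robustness mentioned above.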

Appendix 14A: Lowpass filter
pp. 502–505 (4)
A low-pass filter is a frequency-domain filter which allows the specified low-frequency part of the signal or data sequence to pass through, whereas the higher-frequency components are attenuated. The objective is to separate from the data undesirable high-frequency components, which may be due to external disturbances or noise. The filter may be characterized by the passband, the transition band, the stopband and the gain or passband magnitude. The smaller the transition region, the sharper is the separation between the frequency components passed and those attenuated.
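As an assumed illustration of the passband/stopband idea, an ideal "brick-wall" low-pass filter can be built in the frequency domain: bins below the cutoff are kept, all higher bins are removed (a zero-width transition band), so a low-frequency component passes unchanged while a high-frequency component is eliminated.

```python
import numpy as np

# Assumed test signal: a passband sine plus a stopband sine.
n = 256
t = np.arange(n)
low = np.sin(2 * np.pi * 4 * t / n)           # 4 cycles per record (passband)
high = 0.5 * np.sin(2 * np.pi * 60 * t / n)   # 60 cycles per record (stopband)
x = low + high

X = np.fft.rfft(x)
X[10:] = 0.0                                  # brick-wall cutoff below bin 10
filtered = np.fft.irfft(X, n)                 # back to the time domain
```

Because both components fall on exact FFT bins, the low-frequency sine is recovered to machine precision.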

Appendix 14B: Permeability data
p. 506 (1)
This section of the book presents sets of data comprising measurements of the green-mix permeability, recorded every 2 minutes in the iron-ore sintering process, collected from an iron and steel plant.

Appendix 14C: Composite data on maternal ECG containing fetal ECG
pp. 507–508 (2)
This appendix presents composite data on the maternal ECG containing the fetal ECG, recorded during the gestation period; the digitized data have been downsampled.

Back Matter
p. 267 (1)