
The paper presents an adaptive control algorithm for unknown time-varying systems subject to purely deterministic time-varying disturbances. The algorithm extends the author's earlier work on adaptive control of deterministic time-varying systems. It is shown that global convergence of the algorithm depends on the observability of a time-varying system. Simulation results illustrate the algorithm's ability to track both time-varying disturbances and parameters.
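The paper's algorithm is not reproduced here, but the core task it addresses, tracking a drifting parameter of a deterministic plant, can be sketched with recursive least squares and a forgetting factor (a standard device for time-varying systems; all numbers and the plant model below are assumptions for illustration):

```python
import numpy as np

# Illustrative sketch only: track the drifting parameter a[t] of the
# assumed plant y[t] = a[t]*y[t-1] + u[t] using scalar recursive least
# squares (RLS) with a forgetting factor.
rng = np.random.default_rng(0)
lam = 0.9              # forgetting factor (< 1 discounts old data)
theta_hat = 0.0        # estimate of the drifting parameter a[t]
P = 1000.0             # scalar covariance

y_prev, errs = 1.0, []
for t in range(200):
    a_true = 0.5 + 0.3 * np.sin(0.05 * t)   # slowly time-varying parameter
    u = rng.normal()                         # exciting input
    y = a_true * y_prev + u                  # plant output
    phi = y_prev                             # regressor
    e = (y - u) - theta_hat * phi            # prediction error
    k = P * phi / (lam + phi * P * phi)      # RLS gain
    theta_hat += k * e
    P = (P - k * phi * P) / lam
    errs.append(abs(theta_hat - a_true))
    y_prev = y

print(round(float(np.mean(errs[-50:])), 3))  # mean tracking error, last 50 steps
```

The forgetting factor is what gives the estimator its tracking ability: with `lam = 1` ordinary RLS would average over all data and fail to follow the sinusoidal drift.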

The paper considers an exact model-matching control problem via periodic state-output feedback for discrete-time systems. A frequency-domain approach is employed in which the prototype system (model) is given by the transfer function *G _{m}(Z)*. The approach is based on equating the closed-loop transfer function *G _{c}(Z)* to *G _{m}(Z)* and solving the resulting equation for the required feedback gain. The solvability of this equation leads to a sufficient condition for the exact model-matching problem to have a solution. An example illustrates the proposed method.

We present a model, based on a fuzzy relation obtained from fuzzy referential sets on the input and output spaces, for predicting the behaviour of nonlinear dynamic systems. The model can be made to learn from experience, and the computing requirements are modest, making online application feasible. Some numerical results are compared with those of earlier models.
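A minimal sketch of the fuzzy-relational idea (assumed details, not the paper's exact model): a relation R is "learned" from input/output membership pairs, and prediction is done by max-min composition of an input fuzzy set with R.

```python
import numpy as np

def maxmin(x, R):
    """Compose fuzzy set x (n,) with relation R (n, m): max over i of min."""
    return np.max(np.minimum(x[:, None], R), axis=0)

X = np.array([[1.0, 0.5, 0.0],      # input membership grades (2 examples,
              [0.0, 0.5, 1.0]])     # 3 referential sets on the input space)
Y = np.array([[0.9, 0.1],           # corresponding output grades
              [0.2, 0.8]])          # (2 referential sets on the output space)

# Learn R as the pointwise maximum of min(x_k, y_k) over the training pairs.
R = np.zeros((3, 2))
for xk, yk in zip(X, Y):
    R = np.maximum(R, np.minimum(xk[:, None], yk[None, :]))

y_pred = maxmin(X[0], R)
print(y_pred)
```

Note that basic max-min learning can overestimate membership grades relative to the training data (here the second predicted grade exceeds the training value), which is one motivation for the refinements that later relational models introduced.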

The characteristic polynomial assignment for discrete 2-dimensional systems described by the Fornasini-Marchesini model is considered. The similarities and differences between the solutions to this problem for the Roesser and the Fornasini-Marchesini models are discussed. Furthermore, the assignment of a part of the characteristic polynomial, together with the simultaneous calculation of the residual polynomial, is considered for the Fornasini-Marchesini model.

A global convergence and stability proof is presented for an indirect LQG self-tuning controller which employs a stochastic-approximation type of identification algorithm. A discrete linear single-input/single-output time-invariant stochastic system with correlated noise inputs is considered. The plant model need not be stable or minimum phase. The usual assumption that unstable common factors do not occur in the estimated plant model is replaced by a weaker condition. The first set of stability and convergence results presented in the paper is independent of the control law employed. These results are then applied to the specific case of the LQG self-tuner. The control and tracking error signals are shown to be sample mean-square bounded, prediction-error convergence is demonstrated, and optimal pole locations are shown to be achieved asymptotically. A persistency-of-excitation condition is not assumed.

A joint characterisation of the controllability and observability of a particular kind of discrete system has been developed. The key idea of the procedure reduces to a correct choice of the sampling sequence. The freedom afforded by the arbitrary choice of the sampling instants is used to improve the sensitivity of system controllability and observability by exploiting an adequate geometric structure. Some qualitative examples are presented for illustrative purposes.

This paper presents a summary and consolidation of stability and robustness results based on input-output theory for discrete adaptive control systems. The objective of this paper is to clarify the techniques involved in applying this stability approach to the adaptive control problem. It is intended that this tutorial may provide a basis for continuing work in this area.

This chapter presents digital controller design. The controllers are designed directly in the discrete domain, based on the time-domain specification of the closed-loop system response. The controlled plant is represented either by a discrete model, as in the case of certain industrial processes where continuous dynamics are inappropriate, or by a discretised model, i.e. a continuous system observed, analysed and controlled at discrete intervals of time. Since the time response is the ultimate objective of the design, this approach provides a direct path to the design of controllers.
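The flavour of designing directly in the discrete domain can be shown with a hypothetical first-order example (the plant numbers and the use of state feedback are assumptions, not taken from the chapter): a time-domain specification is translated into a desired closed-loop pole, and the gain follows algebraically.

```python
# Hypothetical example: place the closed-loop pole of the discretised plant
# x[k+1] = a*x[k] + b*u[k] at z = p (chosen from a time-domain spec) via
# state feedback u[k] = -K*x[k], so that a - b*K = p.
a, b = 1.2, 0.5          # unstable discretised plant (assumed values)
p = 0.4                  # desired closed-loop pole
K = (a - p) / b          # gain follows directly: K = 1.6

x, traj = 1.0, [1.0]
for _ in range(10):
    x = (a - b * K) * x  # closed loop: x[k+1] = p * x[k]
    traj.append(x)
print(round(traj[-1], 6))  # state decays geometrically at rate p
```

The point of the direct approach is visible here: the specification (pole at `p`) appears explicitly in the design equation, with no detour through a continuous-time prototype.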

In the paper, a discrete-time algorithm is presented which is based on a predictive control scheme in the form of dynamic matrix control. A set of control inputs is calculated and made available at each time instant, the actual input applied being a weighted summation of the inputs within the set. The algorithm is directly applicable in a self-tuning format and is therefore suitable for slowly time-varying systems in a noisy environment.
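A sketch of the dynamic-matrix-control computation with assumed numbers (the paper's own weighting and self-tuning machinery are not reproduced): predicted outputs over a horizon are built from step-response coefficients, a least-squares problem yields a set of future input moves, and a weighted combination of that set is applied.

```python
import numpy as np

# Assumed step-response coefficients of the plant and horizon lengths.
s = np.array([0.2, 0.5, 0.8, 0.95, 1.0])   # step-response coefficients
P_h, M = 5, 3                               # prediction / control horizons

# Dynamic matrix: predicted output = A @ (future input moves) + free response.
A = np.zeros((P_h, M))
for i in range(P_h):
    for j in range(M):
        if i >= j:
            A[i, j] = s[i - j]

r = np.ones(P_h)                # setpoint trajectory
free = np.zeros(P_h)            # free response (plant assumed at rest)
du, *_ = np.linalg.lstsq(A, r - free, rcond=None)  # set of input moves

# Apply a weighted summation of the computed set (weights are assumptions).
w = np.array([0.6, 0.3, 0.1])
u_applied = float(w @ du)
print(round(u_applied, 3))
```

In classical DMC only the first computed move would be applied; the weighted summation over the whole set is the feature the abstract highlights.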

In this chapter, we present a perturbation method for the TPBVP arising in the open-loop optimal control of singularly perturbed discrete systems. A state-space model with a three-time-scale property exhibiting boundary-layer behaviour at the initial and final points is formulated in Section 5.1. The solution of the model is obtained as the sum of an outer series solution and two correction series solutions for the initial and final boundary layers. In Section 5.2, the optimal control problem with a quadratic cost function is then considered. Using the discrete maximum principle, the state and costate equations are obtained and cast in the singularly perturbed form which exhibits the three-time-scale property. In Section 5.3, a method is described to solve the resulting two-point boundary-value problem.

In this chapter, first in Section 7.1, a method is described to analyse the singularly perturbed nonlinear difference equations for initial- and boundary-value problems. The approximate solution is obtained in the form of an outer series and a correction series. It is seen that considerable care has to be taken in formulating the equations for the boundary-layer correction series in the case of nonlinear equations. Then, in Section 7.2, the closed-loop optimal control problem is formulated, resulting in the singularly perturbed nonlinear matrix Riccati difference equation. It is seen that the degeneration (the process of suppressing a small parameter) affects some of the final conditions of the Riccati equation. In Section 7.3, a method is given to obtain approximate solutions in terms of an outer series and a terminal boundary-layer correction series. A method is also discussed in Section 7.4 for the important case of the steady-state solution of the matrix Riccati equation. The time-scale analysis of the regulator problem is also given. It is found that these methods, with the special feature of order reduction, offer considerable computational simplicity in evaluating the inverse of a matrix associated with the solution of the Riccati equation. Examples are given to illustrate these methods.
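For reference, the matrix Riccati difference equation mentioned above has, in its standard (unpartitioned) discrete LQ form, the backward recursion below; the chapter works with a singularly perturbed, partitioned version of it, so the notation here is the usual one rather than the chapter's:

$$
P_k = Q + A^{T} P_{k+1} A - A^{T} P_{k+1} B \left( R + B^{T} P_{k+1} B \right)^{-1} B^{T} P_{k+1} A, \qquad P_N = S,
$$

where $A$, $B$ are the plant matrices, $Q \ge 0$, $R > 0$, $S \ge 0$ are the cost weights, and the matrix inverse in the middle term is the computation whose order reduction the chapter's methods simplify.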

This book presents the twin topics of singular perturbation methods and time-scale analysis for problems in systems and control. The heart of the book is singularly perturbed optimal control systems, which are notorious for their excessive computational cost. The book addresses both continuous control systems (described by differential equations) and discrete control systems (characterised by difference equations).