Adaptive Filtering Prediction and Control

eBook

$20.99 (original price $27.95; save 25%)

Overview

This unified survey of the theory of adaptive filtering, prediction, and control focuses on linear discrete-time systems and explores the natural extensions to nonlinear systems. In keeping with the importance of computers to practical applications, the authors emphasize discrete-time systems. Their approach summarizes the theoretical and practical aspects of a large class of adaptive algorithms.
Ideal for advanced undergraduate and graduate classes, this treatment consists of two parts. The first section concerns deterministic systems, covering models, parameter estimation, and adaptive prediction and control. The second part examines stochastic systems, exploring optimal filtering and prediction, parameter estimation, adaptive filtering and prediction, and adaptive control. Extensive appendices offer a summary of relevant background material, making this volume largely self-contained. Readers will find that these theories, formulas, and applications are related to a variety of fields, including biotechnology, aerospace engineering, computer sciences, and electrical engineering.

Product Details

ISBN-13: 9780486137728
Publisher: Dover Publications
Publication date: 04/07/2014
Series: Dover Books on Electrical Engineering
Sold by: Barnes & Noble
Format: eBook
Pages: 560
File size: 48 MB
Note: This product may take a few minutes to download.

Read an Excerpt

Adaptive Filtering Prediction and Control


By Graham C. Goodwin, Kwai Sang Sin

Dover Publications, Inc.

Copyright © 1984 Graham C. Goodwin and Kwai Sang Sin
All rights reserved.
ISBN: 978-0-486-13772-8



CHAPTER 1

Introduction to Adaptive Techniques


This book is concerned with the design of adaptive filters, predictors, and controllers. When the system model is completely specified, standard design techniques can be employed. Our emphasis here, however, is on design techniques that are applicable when the system model is only partially known. We shall use the term adaptive to describe this class of design techniques. Ab initio, these techniques will incorporate some form of on-line parameter adjustment scheme. We shall further distinguish two types of parameter adjustment algorithm: those applicable when the system parameters, although unknown, are time invariant; and those applicable when the system parameters are time varying.

A fact well appreciated by those involved in applications is that any realistic design problem is usually complex, involving many factors, such as engineering constraints, economic trade-offs, human considerations, and model uncertainty. However, just as classical control theory provides a useful way of thinking about control problems, we believe that the adaptive approach provides a useful set of tools and guidelines for design. The advantages of the approach have been borne out in numerous applications that have been reported in the literature.

We shall now outline briefly the types of problems intrinsic to filtering, prediction, and control.


1.1 FILTERING

Filtering is concerned with the extraction of signals from noise. Typical applications include:

• Reception and discrimination of radio signals

• Digital data transmission on telephone lines

• Beam-forming arrays

• Detection of radar signals

• Analysis of seismic data

• Processing of pictures sent from spacecraft

• Analysis of electrocardiogram (ECG) and electroencephalogram (EEG) signals


If models are available for the signal and noise, it is possible, at least in principle, to design a filter that optimally enhances the signal relative to the noise. A great deal is known about filter design procedures in simple cases, for example when the signal model is linear. However, some applications demand advanced nonlinear filtering algorithms, and less is known about these procedures.

When the signal and noise models are not completely specified, it seems plausible that appropriate models could be estimated by analyzing actual data. This is frequently done in practice, especially when the models are ill defined or are time varying. This leads to adaptive filters.
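
As a concrete illustration (not drawn from the book's own algorithms), the least-mean-squares (LMS) filter is one of the simplest adaptive filters: an FIR filter whose weights are adjusted on-line so as to reduce the instantaneous squared error between its output and a desired signal. A minimal Python/NumPy sketch, assuming sequences u (reference input) and d (desired signal) are available, and with an illustrative function name:

    import numpy as np

    def lms_filter(u, d, n_taps=8, mu=0.01):
        """Adaptive FIR filter adjusted on-line by the LMS rule.

        u  : reference input sequence
        d  : desired signal sequence
        mu : step size (must be small enough for convergence)
        """
        w = np.zeros(n_taps)               # filter weights, adapted on-line
        y = np.zeros(len(u))               # filter output
        e = np.zeros(len(u))               # estimation error
        for t in range(n_taps, len(u)):
            x = u[t - n_taps:t][::-1]      # most recent n_taps input samples
            y[t] = w @ x                   # filter output
            e[t] = d[t] - y[t]             # error drives the adaptation
            w += mu * e[t] * x             # LMS weight update
        return y, e, w

With a sufficiently small step size, the weights settle near the values that best reproduce d from u in a least-squares sense; if the signal or noise characteristics drift, the same update continues to track them.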


1.2 PREDICTION

Prediction is concerned with the problem of extrapolating a given time series into the future. As for filtering, if the underlying model of the time series is known, it is possible, in principle, to design an optimal predictor for future values. Typical applications include:

• Forecasting of product demand

• Predicting water consumption based on temperature and rainfall measurements

• Predicting population growth

• Encoding speech to conserve bandwidth

• Predicting future outputs as a guide to plant operators in industrial process control


When the model of the time series is not completely specified, it again seems plausible that a model could be estimated by analyzing past data from the time series. This leads to adaptive predictors.
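
As an illustration only (with a hypothetical series and a model order assumed known), one common way to build such a predictor is to fit an autoregressive (AR) model to the observed past of the series by least squares and use it to extrapolate one step ahead:

    import numpy as np

    def ar_predictor(y, order=3):
        """Fit an AR(order) model to the series y by least squares and
        return a one-step-ahead prediction of the next value."""
        p = order
        # Regression: y[t] is approximated by a1*y[t-1] + ... + ap*y[t-p]
        Phi = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
        a, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
        # Predict the value that follows the last observation
        return a @ y[-1:-p - 1:-1]

    # e.g.  y = np.sin(0.05 * np.arange(300));  y_hat = ar_predictor(y)

Made adaptive, the coefficients would simply be re-estimated recursively as each new observation arrives, rather than once from a fixed batch of data.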


1.3 CONTROL

Control is concerned with the manipulation of the inputs to a system so that the outputs achieve certain specified objectives. Typical applications include:

• Control of prosthetic devices and robots

• Control of the aileron and elevators on an aircraft

• Firing of retro rockets on spacecraft to control attitude

• Control of the flow of raw materials in an industrial plant to yield a desired product

• Control of interest and tariff rates to regulate the economy

• Control of anaesthetic dosage to produce a desired level of unconsciousness in a patient


Again there is a vast array of design techniques for generating control strategies when the model of the system is known. When the model is unknown, on-line parameter estimation could be combined with on-line control. This leads to adaptive, or self-learning, controllers. The basic structure of an adaptive controller is shown in Fig. 1.3.1.
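
The sketch below (illustrative only, not an algorithm taken from the book) shows this structure in code for a first-order plant y(t + 1) = a·y(t) + b·u(t) with unknown a and b: a parameter estimator runs on-line, and the control is computed at each step as if the current estimates were the true parameters (certainty equivalence):

    import numpy as np

    a_true, b_true = 0.8, 0.5        # plant parameters, unknown to the controller
    a_hat, b_hat = 0.0, 1.0          # initial estimates (b_hat kept away from zero)
    gamma = 0.5                      # adaptation gain
    y, u = 0.0, 0.0
    y_ref = 1.0                      # constant set point

    for t in range(200):
        y_next = a_true * y + b_true * u                 # plant response
        # Parameter estimator: normalized gradient update
        phi = np.array([y, u])                           # regressor
        err = y_next - np.array([a_hat, b_hat]) @ phi    # prediction error
        a_hat, b_hat = np.array([a_hat, b_hat]) + gamma * err * phi / (1.0 + phi @ phi)
        b_hat = max(b_hat, 0.1)                          # keep the gain estimate away from zero
        # Certainty-equivalence control law: aim for y(t + 1) = y_ref
        y = y_next
        u = (y_ref - a_hat * y) / b_hat

As the estimates improve, the output approaches the set point even though a and b were never known in advance.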

One can distinguish the following types of control problem in increasing order of difficulty:

• Deterministic control (when there are no disturbances and the system model is known)

• Stochastic control (when there are disturbances and models are available for the system and disturbances)

• Adaptive control (when there may be disturbances and the models are not completely specified)


We shall emphasize the latter type of control problem in this book.

From the discussion above, it can be seen that underlying each of the problems of adaptive filtering, prediction, and control there is some form of parameter estimator. In fact, parameter estimation forms an integral part of any adaptive scheme. We shall discuss in detail various forms of parameter estimator.

It is useful to distinguish between situations in which the system parameters are constant and situations in which the parameters vary with time. Obviously, the former situation is easier to deal with than the latter. A feature of many of the parameter estimation algorithms used in the time-invariant case is that the gain of the algorithms ultimately decreases to zero. Heuristically, this means that when all the parameters have been estimated, the algorithm "turns off." On the other hand, if the parameters are time varying, it is necessary for the estimation algorithm to have the capability of tracking the time variations. Some of the algorithms aimed at the constant-parameter case will automatically have this capability. Other algorithms will need minor modifications to ensure that the turn-off phenomenon does not occur.
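
One standard way (among several) to avoid the turn-off phenomenon is to introduce an exponential forgetting factor into a recursive least-squares estimator, so that old data are discounted and the estimator gain stays alive. A hedged sketch of a single update step, with illustrative names:

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.98):
        """One step of recursive least squares with exponential forgetting.

        theta : current parameter estimate
        P     : current "covariance" matrix
        phi   : regressor vector at time t
        y     : measured output at time t
        lam   : forgetting factor in (0, 1]; lam = 1 gives ordinary RLS,
                whose gain decays to zero, while lam < 1 keeps the gain
                alive so that slowly varying parameters can be tracked.
        """
        k = P @ phi / (lam + phi @ P @ phi)       # estimator gain
        theta = theta + k * (y - phi @ theta)     # update the estimate
        P = (P - np.outer(k, phi) @ P) / lam      # update the "covariance"
        return theta, P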

CHAPTER 2

Models for Deterministic Dynamical Systems

2.1 INTRODUCTION

In this chapter we give a brief account of various models for deterministic dynamical systems. By deterministic in this context, we mean that the model gives a complete description of the system response. Later we shall contrast this with stochastic models, when the response contains a random component defined on some probability space. The chapter gives a self-contained description of certain aspects of system modeling, with emphasis on those ideas that form the basis of our subsequent treatment of filtering, prediction, and control.

We will begin by looking at models for linear deterministic finite-dimensional systems. In particular, we discuss state-space models, difference operator representations, autoregressive moving-average models, and transfer functions. Models for certain classes of nonlinear systems will also be introduced. Our emphasis here is on the interrelation between these different model formats. This is important since the choice of model is often the first step toward the prediction or control of a process. An appropriately chosen model structure can greatly simplify the parameter estimation procedure and facilitate the design of prediction and control algorithms for the process.

This chapter is intended to give a concise, but relatively complete picture of various models for linear systems. The chapter can be read in one of two ways: either in detail to gain a full appreciation of the results, or selectively to get the flavor of the results and to establish notation. If the latter approach is adopted, the reader's attention is directed in particular to Section 2.3.4, as the DARMA models covered in this section are used extensively in subsequent chapters.

Although many of the topics discussed here are covered in other books which deal specifically with systems theory, we believe that our treatment is interesting and distinctive in that it is tailored to the kind of application subsequently discussed in the book. We also present some novel insights and new material, especially concerning the modeling of deterministic disturbances and the interrelationship between different types of models. A key point that we want to bring out is that certain model structures are more convenient for some applications than for others. It is important, therefore, to appreciate the interconnection between different model formats and their realm of applicability. For alternative viewpoints and further discussion on the topic of linear systems modeling, we refer the reader to standard books on the subject, including Brockett (1970), Desoer (1970), Franklin and Powell (1980), and Fortmann and Hitz (1977), and at a more advanced level Kailath (1980), Rosenbrock (1970), and Wolovich (1974).


2.2 STATE-SPACE MODELS

2.2.1 General

The internal and external behavior of a linear finite-dimensional system can be described by a state-space model. Some of the basic notions associated with state-space modeling are summarized in Section A.1 of Appendix A and those readers who would like to revise these ideas are encouraged to review this section before proceeding.

Here we shall use the following notation for a state-space model:

x(t + 1) = Ax(t) + Bu(t); x(t0) = x0 (2.2.1)

y(t) = Cx(t) (2.2.2)

where {u(t)}, {y(t)}, and {x(t)} denote the r × 1 input sequence, m × 1 output sequence, and n × 1 state sequence, respectively, and x0 is the initial state.
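
For concreteness, the model (2.2.1)–(2.2.2) is simulated by iterating the state update; the matrices below are arbitrary illustrative values rather than an example from the text:

    import numpy as np

    # Simulate x(t + 1) = A x(t) + B u(t),  y(t) = C x(t)
    A = np.array([[0.9, 0.2],
                  [0.0, 0.7]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])

    x = np.array([[1.0],
                  [0.0]])               # initial state x0
    u = np.array([[1.0]])               # constant (unit-step) input
    outputs = []
    for t in range(10):
        outputs.append((C @ x).item())  # output equation (2.2.2)
        x = A @ x + B @ u               # state update    (2.2.1)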

We know from the canonical structure theorem (Section A.1) that a general state-space model can be decomposed into controllable (strictly speaking, reachable—see Appendix A) and observable subsystems. We shall study these subsystems in a little more detail in the sections to follow.


2.2.2 Controllable State-Space Models

In this section we turn our attention to the completely controllable part of a system. Given a finite-dimensional linear system, there exists a transformation that isolates the controllable and uncontrollable parts (see Lemma A.1.2 of Appendix A). Consequently, a general system can be partitioned as in Fig. 2.2.1.

It can be seen from Fig. 2.2.1 that the input signal influences only the completely controllable part of the system. In fact, we shall show later that the poles of the completely controllable part can be arbitrarily assigned by state-variable feedback (this will be explored in Chapter 5). This will give further motivation for the term completely controllable.

For the remainder of this section we shall ignore the uncontrollable part of the system and assume complete controllability (strictly speaking, reachability—see Exercise 2.20). We shall explore some of the implications of this assumption. In particular, we shall describe two special structures that can be used to describe any completely controllable state-space model.

Consider the following completely controllable model (of order n, with r inputs and m outputs):

x(t + 1) = Ax(t) + Bu(t); x(0) = x0 (2.2.3)

y(t) = Cx(t) (2.2.4)

An equivalent representation to (2.2.3)–(2.2.4) can be obtained by simply choosing a new basis for the state space, for example by using a transformation such that

x̄(t) = P⁻¹x(t) (2.2.5)

giving

x̄(t + 1) = Āx̄(t) + B̄u(t); x̄(0) = P⁻¹x0

y(t) = C̄x̄(t)

where Ā = P⁻¹AP, B̄ = P⁻¹B, and C̄ = CP. An infinite number of choices exists for the transformation P, leading to an infinite number of equivalent completely controllable state-space models. Special forms for the state-space model can be achieved by forming P in particular ways, for example, by using any n linearly independent columns chosen from the controllability matrix. Note this can always be done, since for a completely controllable system the controllability matrix has rank n (see Lemma A.1.1). We shall find that certain choices of the transformation matrix result in the model having a specific structure which is convenient to use for particular applications.
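
The equivalence is easy to check numerically: for any invertible P (the one below is arbitrary), the transformed matrices follow directly from the originals, and the input-output behavior, summarized here by the Markov parameters CAᵏB, is unchanged:

    import numpy as np

    A = np.array([[0.9, 0.2], [0.0, 0.7]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    P = np.array([[1.0, 1.0], [0.0, 2.0]])     # arbitrary invertible transformation

    Pinv = np.linalg.inv(P)
    Abar, Bbar, Cbar = Pinv @ A @ P, Pinv @ B, C @ P

    # The Markov parameters C A^k B (and hence the input-output map) are unchanged:
    for k in range(4):
        assert np.allclose(C @ np.linalg.matrix_power(A, k) @ B,
                           Cbar @ np.linalg.matrix_power(Abar, k) @ Bbar)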

We discuss two particular structures: the controllability form and controller form.


Controllability Form

The single-input case. For ease of explanation, we first consider the single-input case and construct the transformation P from the columns of the controllability matrix as follows:

P = [B  AB  A²B  ···  Aⁿ⁻¹B]

Since the rank of the controllability matrix is n, AⁿB will be a linear combination of the columns of P; that is, there exist scalars α1, ..., αn such that

AⁿB = α1B + α2AB + ··· + αnAⁿ⁻¹B

It can then be readily seen that the resulting state-space model for x̄ = P⁻¹x has the following structure:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The structure above will, of course, be familiar to those readers who have taken an elementary course in linear systems theory.
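
A short numerical sketch of the single-input construction (with an arbitrary illustrative system): P is assembled from the columns B, AB, ..., Aⁿ⁻¹B, and the transformed pair then has the expected controllability-form structure, with the transformed input vector equal to the first unit vector and the transformed A matrix nonzero only on its subdiagonal and in its last column:

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.1, -0.5, 0.9]])
    b = np.array([[1.0],
                  [1.0],
                  [0.0]])

    n = A.shape[0]
    P = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    assert np.linalg.matrix_rank(P) == n           # complete controllability

    Pinv = np.linalg.inv(P)
    Abar, bbar = Pinv @ A @ P, Pinv @ b

    assert np.allclose(bbar.ravel(), [1.0, 0.0, 0.0])      # bbar is the first unit vector
    assert np.allclose(Abar[:, :-1], np.eye(n)[:, 1:])     # ones on the subdiagonal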

* The multi-input case. The ideas above can be extended to the multi-input case in a relatively straightforward way. (The remainder of this subsection may be omitted on a first reading.) For the multi-input case, we may construct the transformation matrix by simply searching the controllability matrix [B  AB  ···  Aⁿ⁻¹B] from left to right to isolate a set of linearly independent columns. The resulting columns are then rearranged into the following form:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.6)

where B = [b1 ··· br]. The indices k1, ..., kr are called the controllability indices, and max{ki : i = 1, ..., r} is called the controllability index.

We thus see that A^(ki)bi is linearly dependent on the preceding columns in the controllability matrix, and we can write

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where, because of the order of the vectors in the controllability matrix, the integers kij are given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The total number of scalars, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] thus defined is ns, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The resulting state-space model has the following structure (illustrated for the three-input case in which no ki is zero and k1 = k3 > k2 + 1):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.7)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.8)

(The reader is asked to verify that the structure above results from the use of the transformation P. Note that in the structure above, k12 = k32 = k2, k21 = k2 + 1, k23 = k2, and k13 = k31 = k1 = k3.)

An alternative in the multi-input case is to search the controllability matrix starting with b1 and then to proceed to Ab1, A²b1, ..., until the vector A^(v1)b1 can be expressed as a linear combination of previous vectors. Then one proceeds to b2, Ab2, ..., etc. Again, the resulting columns are then arranged in the form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.9)

The resulting state-space model then has the following structure (illustrated for r = 4 and s = 3):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.10)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2.2.11)

where * denotes a possibly nonzero element.

Both of the forms above are called controllability forms [see Kailath (1980, p. 428)].
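
The left-to-right column search described above can be sketched numerically as follows (illustrative only; the function name and the rank-based independence test are choices made here, not the book's):

    import numpy as np

    def controllability_indices(A, B):
        """Search the columns of [B, AB, ..., A^(n-1)B] from left to right,
        keeping each column that is linearly independent of those already
        kept; the number of kept columns of the form A^j b_i is k_i."""
        n, r = A.shape[0], B.shape[1]
        kept, k = [], [0] * r
        for j in range(n):                         # powers A^0, A^1, ..., A^(n-1)
            AjB = np.linalg.matrix_power(A, j) @ B
            for i in range(r):                     # left to right within each block
                candidate = kept + [AjB[:, i]]
                if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
                    kept.append(AjB[:, i])
                    k[i] += 1
        return k                                   # controllability indices k_1, ..., k_r

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-0.1, 0.3, 0.5]])
    B = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])
    print(controllability_indices(A, B))           # -> [2, 1]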


Controller Forms

The single-input case. Here we take the transformation found in the preceding section and postmultiply it by a nonsingular matrix, M. The matrix M is formed from the characteristic polynomial of A as follows.


(Continues...)

Excerpted from Adaptive Filtering Prediction and Control by Graham C. Goodwin, Kwai Sang Sin. Copyright © 1984 Graham C. Goodwin and Kwai Sang Sin. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Preface
1. Introduction to Adaptive Techniques
Part 1. Deterministic Systems
2. Models for Deterministic Dynamical Systems
3. Parameter Estimation for Deterministic Systems
4. Deterministic Adaptive Prediction
5. Control of Linear Deterministic Systems
6. Adaptive Control of Linear Deterministic Systems
Part 2. Stochastic Systems
7. Optimal Filtering and Prediction
8. Parameter Estimation for Stochastic Dynamic Systems
9. Adaptive Filtering and Prediction
10. Control of Stochastic Systems
11. Adaptive Control of Stochastic Systems
Appendices
A. A Brief Review of Some Results from Systems Theory
B. A Summary of Some Stability Results
C. Passive Systems Theory
D. Probability Theory and Stochastic Processes
E. Matrix Riccati Equations
References
Index