Foundation of Control System Design


Post by Magneto » Sun Oct 18, 2009 6:58 pm

The Conventional Foundation for Design

Although control mechanisms have been known since antiquity, two well
known papers, Maxwell’s (1868) and Nyquist’s (1932), have been influential
in forming the foundation of what is now the mainstream theory for the
design of control systems, with its remarkable successes and, as will be seen,
some significant limitations. These papers introduced two key ideas, respectively
called stability and sensitivity, which constitute the foundation of the
conventional framework for control systems design. The two ideas are here
reviewed informally and briefly, in the context of more recent developments
that concern the foundation of control systems design. Accordingly attention
is, in this section, focused on the meaning of control and not on the means
(that is to say, the design methods) needed to achieve it. This makes it possible
to examine the foundation of conventional design theory and to see how
it was originated.

It is assumed that a control system operates in an environment that generates
the input for the system. See Figure 1.1. The input is a vector of time
functions, the components of which occur at corresponding input ports of the
system. The input is transformed by the system into a response, which is a set
of time functions. Some of the individual responses are classified as outputs,
which occur at corresponding output ports. Some of the output ports are classified
as error ports. The response at an error port is called an error. Unless
an explicit distinction is made, the word system means either the concrete
physical situation, which is a primary focus of interest, or its mathematical
model. The word environment also has similar dual meaning. The term port
means a location, either on the physical system or on a corresponding block
diagram model, where a scalar input or a scalar response occurs. The notion
of a port provides a convenient way of taking into account the fact that the
input and the output are, in general, vectors. The system comprises
two subsystems, the plant and the controller, connected together by mutual
interaction; that is to say, in a feedback arrangement.
[Figure 1.1 (attachment aa.JPG): the environment-system arrangement]
The way that certain output ports of the system are classified as error
ports depends on the design situation. An error, which is the response at the
corresponding error port, is required to be small. How small it is required to
be is one of the crucial aspects of control that is considered in the new design
framework proposed in this article.

The environment is modelled by a set of generators and, if necessary, corresponding
filters. Each generator produces a scalar function of time, called
an input, which feeds a corresponding filter, the output of which is a scalar
function of time that feeds into an input port of the system. In some cases,
a filter is not necessary and is replaced by the identity transformation. The
term filter-system combination will denote such an arrangement but this will
sometimes be abbreviated to the simpler term system, when there is no risk
of confusion. Similarly, the term input port will refer either to the input of the
system or to the input of the filter, depending on the context. This terminology
could be simplified by defining the system so as to include the filters but
that would obscure the fact that the filters are part of the environment. It is
important to maintain a clear distinction between the environment and the
control system, especially when the model of the environment is considered
in detail.

Unless otherwise stated, it is assumed that the filter-system combination
can be represented by ordinary linear differential equations with constant
coefficients, expressed in the standard state-space form by the two equations:
ẋ = Ax + Bf, e = Cx + Df. Here, as usual, x is the state vector, f is the
input vector of dimension n, produced by the generators, and e is the system
output vector of dimension m. The integers n and m are the numbers of input
and output ports, respectively. A response is any linear combination of states
and inputs. An output is a particular linear combination of states and inputs
defined by the matrices C and D. The value of the state vector at time zero is
assumed to be zero. The matrix D characterises the non-dynamic part of the
input-output behaviour of the system and is called the direct transmission
matrix.
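
As a concrete illustration (mine, not from the text: the matrices, the forward-Euler step and the unit-step input are all hypothetical choices), the model ẋ = Ax + Bf, e = Cx + Df can be simulated numerically from the zero initial state, for example in Python:

[code]
import numpy as np

# Hypothetical filter-system combination: two states, one input port, one error port.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # state matrix
B = np.array([[0.0],
              [1.0]])          # input matrix
C = np.array([[1.0, 0.0]])     # output matrix
D = np.array([[0.0]])          # direct transmission matrix (zero here)

def simulate(A, B, C, D, f, t):
    """Forward-Euler integration of x' = Ax + Bf, e = Cx + Df,
    starting from the zero initial state assumed in the text."""
    x = np.zeros(A.shape[0])
    dt = t[1] - t[0]
    e = []
    for tk in t:
        fk = f(tk)
        e.append(C @ x + D @ fk)
        x = x + dt * (A @ x + B @ fk)
    return np.array(e).squeeze()

t = np.linspace(0.0, 10.0, 2001)
error = simulate(A, B, C, D, lambda tk: np.array([1.0]), t)  # response to a unit step
[/code]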

The assumption that the standard state-space equations represent the
filter-system combination is made here partly to simplify the presentation.
Many of the ideas in this chapter are applicable to very general systems,
or to more general linear time-invariant systems, notably those having time
delays, and can also be translated to make them applicable to sampled-data
systems. Also, the
ideas can be extended to any vague system, the input-output transformation
of which is characterised by a known set of rational transfer functions, which
can be used to characterise a time-varying or non-linear system or a linear
time invariant system whose parameters are not precisely known.

There are two ways of determining the environment
filters. In one way, if all the environment filters are chosen appropriately
(in some cases every filter can be chosen to be the identity transformation)
then the direct transmission matrix D is equal to zero. In the other way, every
filter is chosen to be the identity transformation and suitable restrictions
are imposed on the derivative of the input.
The characteristic polynomial of the system is defined by det(sI − A), and
its zeros are called the characteristic roots of the system. For each characteristic
root λi, there is a mode, of the form t^(ki) exp(λi t), which characterises
the behaviour of the system.
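
A small numerical sketch, reusing the hypothetical A from the snippet above:

[code]
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # hypothetical state matrix

char_poly = np.poly(A)              # coefficients of det(sI - A)
roots = np.linalg.eigvals(A)        # its zeros: the characteristic roots

# Each root lambda_i contributes a mode t^(k_i) * exp(lambda_i * t);
# here k_i = 0 because these hypothetical roots are distinct.
t = np.linspace(0.0, 5.0, 501)
modes = [np.exp(lam * t) for lam in roots]

print(char_poly)   # [1. 3. 2.], i.e. s^2 + 3s + 2
print(roots)       # the roots -1 and -2: both real parts negative, so both modes are stable
[/code]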

A mode is said to be controllable if it can be excited by some input. If
every mode is controllable then the system is said to be controllable. If every
mode can be excited by some nonzero input from the environment then the
environment is said to be probing. Although, for design purposes, an adequate
working model of the environment might not be probing, it is safe to
assume that an accurate model would be probing because an accurate model
might take into account small parasitic inputs that are ignored in the working
model. Such parasitic inputs can be significant, as will be seen, however
small they might be. Notice that, if an environment is probing with respect
to a given system, then the system is controllable. This emphasises that the
properties of the system are dependent on the properties of the environment
and vice versa. Thus, for design purposes, the environment and the system
must also be considered as a single unit, called the environment-system couple,
and not only as two separate entities. The notion of environment-system
couple plays a major role in the framework presented in this chapter. Accordingly,
the environment and the ways it can be modelled play a central
role in the new framework.
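
Controllability of the individual modes can be checked with the standard PBH rank test (a textbook criterion, not something defined in the text above): the mode with root λ is controllable exactly when rank[λI − A, B] equals the state dimension. A sketch with the same hypothetical matrices:

[code]
import numpy as np

def uncontrollable_roots(A, B, tol=1e-9):
    """Characteristic roots whose modes fail the PBH rank test,
    i.e. modes that cannot be excited by any input."""
    n = A.shape[0]
    bad = []
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol) < n:
            bad.append(lam)
    return bad

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
print(uncontrollable_roots(A, B))   # [] -> every mode is controllable
[/code]

An empty list means every mode is controllable which, as noted above, is necessary for an environment to be probing with respect to the system.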
For each input-output pair of ports there is an input-output transformation
(operator) that, for the purpose of analysis, can be considered in
isolation from the system. Under the assumption that the system can be
represented by standard state space equations, this transformation can be
represented by a rational transfer function, which is proper (numerator degree
not greater than the denominator degree) and without common factors
between the numerator and denominator. If the filter-system combination has
zero direct transmission matrix D then, for every input-output pair of ports,
the transfer function is strictly proper (has denominator degree greater than
the numerator degree).
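
A sketch of how one such input-output transfer function might be recovered from the state-space model using scipy (the matrices are the same hypothetical ones; note that scipy.signal.ss2tf does not cancel common factors between numerator and denominator, so in general a cancellation step would still be needed):

[code]
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)   # coefficients of the rational transfer function
print(num)                     # [[0. 0. 1.]]  -> numerator degree 0
print(den)                     # [1. 3. 2.]    -> denominator degree 2
# With D = 0 the numerator degree is lower than the denominator degree,
# so the transfer function is strictly proper, as stated above.
[/code]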

Informal definition of control

Suppose that the environment-system couple
is such that the environment is probing. Then the environment-system
couple is said to be under control if the following three conditions are satisfied.
First, for every input generated by the environment, all the states are
bounded. Second, for every error port, the response (the error) stays close to
zero for all time. Third, for some specified output ports (other than the error
ports), the response is not too large for all time.
This definition involves only input-response concepts. However, the definition
is purely qualitative because the second and third conditions for control
are not quantified. That is to say, how small the errors are required to be is
not stated in quantitative terms and, for the remaining output ports, what
is considered to be too large a response, is again not stated in quantitative
terms. Usually, the responses at those output ports that are not error ports,
represent the behaviour of actuators or other physical devices, whose range of
operation is limited, often because of saturation but sometimes also for other
reasons, such as limits imposed on the consumption of power. Notice also that
it is not just the system that is under control but the environment-system
couple. This is because the environment produces the input that, together
with the system, determines the size of the responses. However, the environment
is not specified quantitatively and, consequently, the responses at the
output ports cannot be quantified.
For the purpose of analysis, Maxwell considered the system in isolation
from the environment and defined a practical algebraic concept of stability.
According to this, a system is, by definition, stable if all its modes are stable
and each mode of the system is, by definition, stable if its characteristic root
has negative real part. Hence the absolute value of a stable mode is bounded
by a constant multiplied by an exponential function that decays with time.
Maxwell’s condition for stability is relevant and necessary for control. To
see this, suppose that the environment is restricted so that, for every input
generated by the environment, the states are bounded if the filter-system
combination is stable (in fact, stability of the filter-system combination is
necessary and sufficient to ensure that, for every bounded input, all the states
are bounded). This implies that, provided the filter-system is stable, any
environment that generates only bounded inputs causes only bounded states
and hence bounded outputs. These ideas are generalised somewhat in Section
1.4 for filter-system combinations with zero direct transmission matrix D.
It may be noted that, by convention, modes that cannot be excited by any
input (that is, uncontrollable modes) are assumed to be quiescent and therefore
to generate zero, and hence bounded, responses. However, if the system is
not stable then the controllable but unstable modes become unbounded, for
some non-zero bounded input, and the uncontrollable and unstable modes,
although theoretically quiescent, become unbounded if some stray or parasitic
non-zero bounded inputs, however small they might be, are introduced into
those modes. Hence Maxwell’s condition for stability is necessary to achieve
control in the sense of the informal definition. The condition provides the necessary
assurance that the states are bounded if the environment is probing
even if the working model of the environment, with which practical designs
are obtained, is not probing.
A transfer function, if it is rational and proper, is said to be stable if
the real parts of all its poles are negative. If the transfer functions of all the
input-output transformations of the system are stable, this does not imply
that all the modes of the system are stable. The reason is that those modes,
which do not contribute to the transfer functions, can be stable or unstable.
If some of those modes are not stable, the system is said to be internally
unstable. This is to distinguish it from input-output or external (involving
ports) stability, which is determined by the stability of the corresponding
transfer functions. A system is said to be input-output stable if the transfer
function of every input-output transformation is stable. The concept of system
stability is equivalent to the concept of input-output stability if and only
if all the modes of the system contribute to the input-output transformations.
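
The distinction between internal and input-output stability can be illustrated with a small hypothetical example in which an unstable mode cannot be excited from the input port, so it never shows up in the (reduced) transfer function:

[code]
import numpy as np
from scipy.signal import ss2tf

# Hypothetical system with a hidden unstable mode at +1.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

modes = np.linalg.eigvals(A)            # [+1, -1]: the system is internally unstable
num, den = ss2tf(A, B, C, D)
zeros, poles = np.roots(num[0]), np.roots(den)

# Crude numerical cancellation of common factors, adequate for this illustration.
reduced_poles = [p for p in poles if not np.any(np.isclose(p, zeros, atol=1e-8))]

print(modes)          # internal stability fails: one root has positive real part
print(reduced_poles)  # only the pole at -1 survives: input-output stable
[/code]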
The foundation of conventional theory for the design of control systems
contains two primary concepts. The first is the stability of the system. The
second is a notion of sensitivity of a transfer function that, in its original form,
is well known in terms of the concepts of phase margin and gain margin of the
Nyquist diagram. This notion follows from Nyquist’s practical condition for
the stability of a transfer function. Roughly, sensitivity of a stable transfer
function is one or more non-negative numbers that measure the extent to
which a non-zero input is magnified (or attenuated, depending on whether
the number is greater than or less than one) in the process of becoming the
response. As sensitivity tends to become infinite, so magnification becomes
infinite and hence, for some bounded input, the response tends to become
unbounded, in which case the transfer function and hence the system to
which it belongs, tends to become unstable. It is convenient to assign the
value infinity to the sensitivity of an unstable transfer function. Thus, any
way of quantifying or measuring sensitivity provides a way of quantifying
the stability of a transfer function. Terms such as ‘degree of input-output
stability’ and ‘margin of stability’ have also been used elsewhere to mean an
inverse measure of sensitivity. The term input-output sensitivity is used to
mean the sensitivity of a transfer function or, more generally, the sensitivity
of an input-output transformation, from an input port to an output port, or
the combined sensitivities of all the input-output transformations from all
the input ports to one output port or to all the output ports. The precise
meaning will be obvious from the context.
The notions of stability and sensitivity merge together to form the following
definition of control. This definition constitutes the core of the current
paradigm of control. In effect, mainstream theories and methods of design are
all those that aim to achieve control in the sense of this definition. No other
theories are, by general consensus, part of the mainstream. The definition
therefore characterises the foundation of conventional control theory.

Conventional definition of control.

A system is said to be under control
if the following three conditions are satisfied. First, the system is stable.
Second, for every error port, every input-output transformation feeding the
error port has low sensitivity or minimal sensitivity. Third, for every output
port that is not an error port, every input-output transformation feeding the
output port has sensitivity that is not too large.
This definition is partly equivalent to the informal definition but is much
more convenient in practice, because its requirements of stability and minimal
input-output sensitivity are more easily achieved than the corresponding
requirements of bounded states and small errors resulting from a probing environment.
In fact, stability implies that all the states are bounded, whether
or not the modes are excited by the input, provided that the environment
produces bounded inputs. More generally, if the direct transmission matrix
D of the filter-system combination is zero and provided that the environment
produces inputs whose p-norms are finite then all the states are bounded if
the system is stable.

The extent to which the conventional definition simplifies the design problem
is worth emphasising. The point to note is that the definition does not
involve a model of the environment. In fact, it is obvious that the system can
be designed to satisfy this definition of control, without taking into account
the environment. However, as will be seen, this neglect of the environment is
sometimes an oversimplification of the real design problem.
Minimal input-output sensitivity implies that the output is made as small
as possible, in some sense. But again, like the informal definition, how small
and in what sense the output is required to be small, is not stated. This lack
of quantification and precision implies that, provided a system can be made
stable, control can always be achieved by minimising the appropriate sensitivities.
Clearly, the only firm constraint imposed by the conventional definition
of control is the stability of the system. As will be seen, this constraint is not
sufficiently stringent and does not represent the notion of control needed in
some important situations.
A further difficulty with the conventional definition is its third condition,
which is intended to limit the size of the responses at the corresponding output
ports. The difficulty arises because the condition is stated in qualitative
terms that are not easy to quantify, even if the meaning of an output being
too large is defined quantitatively. Evidently, it is not possible to specify
quantitatively when the sensitivity of a transfer function is too large, even if
a restriction on the corresponding output is specified quantitatively, without
taking into account the magnitude of the input.
The concept of sensitivity is central to control theory but has been given
various, somewhat arbitrary, mathematical interpretations, each leading to a
separate branch of mainstream control theory and design. Although sensitivity
is a way of quantifying the stability of a transfer function, there appears
to be no universally agreed way of defining this concept and the various definitions
that have been adopted are arbitrary. This lack of agreement will be
seen to have significance in motivating the introduction of the new framework
for control systems design.

One well known interpretation of sensitivity of a transfer function, derived
from its definition for stability, is to measure sensitivity by the size of the
real parts of all its poles, assuming that these poles are confined to a wedge-shaped
region of the left-half plane, to ensure that any oscillations of the
corresponding modes decay quickly. The methods of design called the root
locus and pole placement are based on this interpretation of input-output
sensitivity.
As already mentioned, the original meaning of sensitivity was defined by
the phase margin or the gain margin of the Nyquist diagram. As these two
quantities become smaller, so the sensitivity becomes larger. As the margins
tend to zero, so some of the real parts of the poles of the transfer function
tend to zero.
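
A rough numerical sketch of reading gain and phase margins off a frequency sweep of a hypothetical open-loop transfer function L(s) = 4 / (s(s + 1)(s + 2)); the dense grid and nearest-point search are adequate for illustration but not for precise work:

[code]
import numpy as np

def margins(num, den, w=np.logspace(-2, 3, 20000)):
    """Crude gain margin and phase margin (degrees) from a frequency sweep."""
    L = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
    mag, phase = np.abs(L), np.unwrap(np.angle(L))
    # Gain margin: 1/|L| where the phase is closest to -180 degrees.
    i_pc = np.argmin(np.abs(phase + np.pi))
    gain_margin = 1.0 / mag[i_pc]
    # Phase margin: 180 degrees plus the phase where |L| is closest to 1.
    i_gc = np.argmin(np.abs(mag - 1.0))
    phase_margin = 180.0 + np.degrees(phase[i_gc])
    return gain_margin, phase_margin

den = np.convolve([1.0, 0.0], np.convolve([1.0, 1.0], [1.0, 2.0]))  # s(s+1)(s+2)
print(margins([4.0], den))   # roughly (1.5, 12): small margins, high sensitivity
[/code]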
Classical methods of design, such as those of Nyquist and root-locus, are
characterised by the use of measures of sensitivity that are derived naturally
from their respective practical conditions for stability of a transfer function.
However, many other well-known measures of sensitivity, which are not derived
from a practical criterion of stability of a transfer function, have been
defined.

These various well-known measures of sensitivity include the characteristics
(settling time and undershoot) of the error due to a step input, as well as
certain q-norms of the (possibly weighted) error resulting from a step input
or delta function input. Well-known examples of this are, for the q = 1 norm,
the integral of the absolute error (IAE) or, for the q = 2 norm, the square
root of the integral of the square of the error (ISE). Another measure of
sensitivity is provided by the H∞-norm of the frequency response. All these
measures of sensitivity are defined when the transfer function is stable.
However, in some cases, for example the step-response characteristics or the
H∞-norm, if the transfer function is unstable then the measure of sensitivity
is not defined by the same process that defines its value for a stable transfer
function but is defined by assigning to it the value infinity.
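
A sketch of how three of these measures could be computed numerically for a hypothetical stable error transfer function E(s) = s/(s² + s + 1), taken to be the error caused by a unit step at the corresponding input port: the IAE, the square root of the ISE, and the H∞-norm evaluated as the peak magnitude of the frequency response.

[code]
import numpy as np
from scipy.signal import TransferFunction, step

# Hypothetical stable error transfer function E(s) = s / (s^2 + s + 1).
E = TransferFunction([1.0, 0.0], [1.0, 1.0, 1.0])

t = np.linspace(0.0, 40.0, 8001)
t, e = step(E, T=t)                       # step-response error e(t)

iae = np.trapz(np.abs(e), t)              # q = 1: integral of absolute error (IAE)
ise_sqrt = np.sqrt(np.trapz(e**2, t))     # q = 2: square root of the ISE

w = np.logspace(-2, 2, 4000)
mag = np.abs(np.polyval(E.num, 1j * w) / np.polyval(E.den, 1j * w))
h_inf = mag.max()                         # H-infinity norm: frequency-response peak

print(iae, ise_sqrt, h_inf)
[/code]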
Yet other measures of sensitivity are obtained by considering the transfer
function as an input-response operator, defined by a convolution integral, and
deriving certain functionals, in some cases representing the operator norm (a
q-norm of an impulse response), that depends on the p-norm (1/p + 1/q = 1)
used to characterise the input space, to act as measures of sensitivity.

Using positive weights, any weighted sum of different measures of sensitivity,
related to one transfer function, defines another sensitivity of that
transfer function. Also, a weighted sum of sensitivities, which correspond to
different transfer functions of a system, defines a composite scalar sensitivity
for those transfer functions considered all together. This scalar composite
type is characteristic of certain optimal control methods, which minimise a
scalar composite measure of the sensitivities of the system.
As has been noted, the concept of sensitivity provides a useful measure
of the stability of a transfer function. However, if the transfer function is
unstable then the sensitivity is infinite, whatever the extent of instability.
Clearly therefore, sensitivity does not provide a measure of the extent of
instability of a transfer function. It follows that, whereas a stable transfer
function can, for the purpose of design, be represented by its sensitivity, an
unstable transfer function cannot be so represented. This also points to the
difference between design and tuning. If a system is stable, all its sensitivities
can be measured or computed and, by some means, tuned (adjusted) to
the required values, without knowing the transfer functions of the system.
Otherwise, stability has to be achieved first.
Design, in the conventional sense, therefore involves achieving stability
first and then tuning the sensitivities to the required values. This emphasises
further the central role played in design by the two concepts of stability of
a transfer function and stability of a system. However, stability of a system,
which is what is required by the conventional definition of control (and also
by the new definition given below), can be achieved in different ways. One
particular way is to employ numerical methods to satisfy the inequality that
states that the abscissa of stability (this is also called the spectral abscissa
of the matrix A of the system and is defined as the largest of the real parts
of all the characteristic roots) is negative.

Design, in the conventional framework, involves selecting one system, from
a given set of systems, called the system design space S, so that control is
achieved, in accordance with the conventional definition of control. Because
the sensitivities can be tuned only when the system is input-output stable
and because all the modes of the system are required to be stable, it is useful
to have a convenient characterisation of the stable subset S_Stable, comprising
every element of the set S such that the system is stable. An initial step in
design involves determining one element of the stability set S_Stable. This can
be done by defining stability either in terms of the abscissa of stability (see
Section 1.6) or in terms of the concept of internal stability. This aspect of
design is here called the principle of uniform stability, because every element
of the set S_Stable is a stable system and the search for a satisfactory design
is restricted to this uniformly stable set. This principle is an obvious extension
of the concept of stability and it is named in this way to emphasise its
importance in design.
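
A toy sketch of that initial step: sweep a one-parameter family of candidate designs (here a hypothetical scalar output-feedback gain k), compute the spectral abscissa of each closed-loop state matrix, and keep the candidates whose abscissa is negative, i.e. the stable subset of this one-dimensional design space.

[code]
import numpy as np

# Hypothetical plant x' = Ax + Bu, y = Cx, with output feedback u = -k*y.
A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open loop: spectral abscissa > 0 (unstable)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def spectral_abscissa(M):
    """Abscissa of stability: the largest real part of the characteristic roots of M."""
    return np.max(np.real(np.linalg.eigvals(M)))

gains = np.linspace(0.0, 20.0, 401)                        # the design space
stable_subset = [k for k in gains
                 if spectral_abscissa(A - k * (B @ C)) < 0.0]

print(stable_subset[:3])   # first few elements of the stable subset (gains above 2)
[/code]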

Crisis in Control

Although some strong preferences have existed among practitioners, the many
versions of the concept of sensitivity, and the corresponding distinct design
methods that are used to achieve control in the sense of the conventional
definition of control are, by this very definition, essentially equivalent. That
is to say, the conventional framework for design includes all design methods
that achieve control, in the sense of the conventional definition of control,
where each method is characterised by a distinct way of defining the concept
of sensitivity. All such design methods are therefore equivalent. Some design
methods might have advantages, with respect to ease of modelling or computations,
but these aspects are concerned with the means and not with the
ends of design.
This overabundance of distinct, but essentially equivalent, versions of the
same theory suggests that the practitioners of the subject are making futile
attempts to transcend its limitations. After Nyquist’s work, each new way
of defining sensitivity has been introduced on grounds that somehow, unlike
previous versions, it captures more accurately the real meaning of control.
The historian of science, Kuhn (1970), has pointed out that this is a symptom
of crisis in a subject. The following quotation from Page 70 of his influential
book illustrates the point: “By the time Lavoisier began his experiments on
airs in the early 1770s there were as many versions of the phlogiston theory
as there were pneumatic chemists. That proliferation of versions of a theory
is a very usual symptom of crisis. In his preface, Copernicus complained of
it as well.”
The current mainstream approach to control is the product of a merger
between control (servomechanism and regulator) theory and amplifier (circuit)
theory that took place somewhat hastily during the wartime period of
1940-1945. This merger gave rise to the conventional definition of control,
as stated above. However, too restricted a focus on the concept of feedback,
which is shared by both subjects, has sometimes obscured the differences between
them. The long-term validity of the consensus that followed the merger
has been questioned by Bode (1960). Although his well-known book appeared
in 1945, Bode contributed to feedback theory up to but not after, the year
1940, which is just before the merger. He expressed his “misgivings” about
the “fusion” of the two fields by means of incisive metaphors, used delicately
and with humour but nevertheless with serious intent, when he came to the
conclusion that control theory and amplifier theory are “quite different in
fundamental intellectual texture” and the “shotgun [that is to say, hasty and
forced] marriage between [these] two incompatible personalities” (that took
place during the Second World War), which resulted in the current mainstream
approach to control, should perhaps be dissolved with an “amicable
divorce”. There has since been ample time to reconsider the long-term wisdom
of that merger. However, although the analysis given below provides
added reasons for Bode’s conclusions, the reasons given by him were perhaps
not sufficient for his conclusions to be acted upon, not least because the nature
of the crisis in control had yet to be clarified.
The conventional definition of control has been accepted, as characterising
the foundation for mainstream theory and design of control systems, since
the year 1945. Although this has been largely a fruitful move, it has also been
insufficient because, like the informal definition of control, the conventional
definition is not quantified. The consequences of this are now considered.