
Discrete event system

From Encyclopedia of Mathematics
Revision as of 19:35, 5 June 2020 by Ulf Rehmann.


The theory of discrete event systems, sometimes also referred to as discrete event dynamic systems to emphasize the evolution in time, encompasses a variety of classes of problems and of modelling approaches. A succinct definition of a discrete event system that sets such systems apart from others is hard to give. Two key features of a discrete event system are: i) its dynamics is event driven as opposed to time driven, i.e. the behaviour of a discrete event system is governed only by occurrences of different types of events over time rather than by ticks of a clock; and ii) its state variables belong to a discrete (not necessarily finite) set. Theoretical disciplines that support the study of discrete event systems include systems theory (cf. Automatic control theory), operations research and theoretical computer science. In this article, three important approaches to the description and analysis of discrete event systems are given; which of them is the most suitable depends on the field of application. Common to these approaches are various monotonicity properties, see [a5]. Related theoretical disciplines are characterized by terms like queueing theory, hybrid systems, Petri nets (cf. Petri net), path algebras, and generalized Markov processes (cf. Markov process). Applications are found specifically in the areas of flexible manufacturing, communication networks and logistic systems.

Logical approach.

Problems in which the precise ordering of events, and not their timing, is of interest, and in which this ordering must satisfy a given set of specifications, are conveniently modelled within the context of the theory of automata and formal languages (cf. Formal languages and automata). A discrete event system is typically represented as a set of state variables that are linked by transitions. The set of state transitions, called events, can be considered as the alphabet of a language, and sequences of events as words (cf. Word) within that language. An automaton can be viewed simply as a device that generates a language by the manipulation of the alphabet according to a specific set of rules. Control aspects can be added in the sense that certain events (i.e. transitions) can be disabled by an external controller. The resulting systems and control theory for discrete event systems is known as supervisory control theory. This theory addresses the synthesis of controllers (i.e. supervisors) for discrete event systems to satisfy a set of qualitative specifications on the admissible orderings of the events that can be executed by the system.

Let the automaton be described by

$$ G = ( X, \Sigma,f, \Sigma _ {G} ,x _ {0} ,X _ { { \mathop{\rm m} } } ) , $$

where

$ X $ is the set of states;

$ \Sigma $ is the set of events associated with the transitions;

$ f : {X \times \Sigma } \rightarrow X $ is the partial transition function;

$ {\Sigma _ {G} } : X \rightarrow {2 ^ \Sigma } $ is the feasible event function: $ \Sigma _ {G} ( x ) $ is the set of all events $ e $ for which $ f ( x,e ) $ is defined;

$ x _ {0} $ is the initial state;

$ X _ { { \mathop{\rm m} } } \subseteq X $ is the set of marked states (upon entering a marked state, the system has completed some operation or task). The language generated by $ G $ is denoted by $ L ( G ) $; for a language $ L $, the prefix closure $ \overline{L} $ is the language consisting of all prefixes of words in $ L $. The language marked by $ G $ is defined by $ L _ { { \mathop{\rm m} } } ( G ) = \{ {s \in L ( G ) } : {f ( x _ {0} ,s ) \in X _ { { \mathop{\rm m} } } } \} $. Let $ \Sigma $ be partitioned into two disjoint subsets,

$$ \Sigma = \Sigma _ { { \mathop{\rm c} } } \cup \Sigma _ { { \mathop{\rm uc} } } , $$

where $ \Sigma _ { { \mathop{\rm c} } } $ is the set of controllable events, i.e. events that can be disabled by the supervisor, and $ \Sigma _ { { \mathop{\rm uc} } } $ is the set of uncontrollable events, i.e. events that cannot be prevented from happening by control. Formally, a supervisor is a function

$$ S : {L ( G ) } \rightarrow {\Gamma = \left \{ {\gamma \in 2 ^ \Sigma } : {\Sigma _ { { \mathop{\rm uc} } } \subseteq \gamma } \right \} } . $$

For each $ s \in L ( G ) $ generated so far by $ G $ (under the control of $ S $), $ S ( s ) \cap \Sigma _ {G} ( f ( x _ {0} ,s ) ) $ is the set of enabled events that $ G $ can execute at its current state $ f ( x _ {0} ,s ) $. The resulting system of $ G $ and $ S $ is denoted by $ S/G $, to suggest that "G is under the supervision of S" . The supervisor $ S $ is said to be non-blocking if $ S/G $ is non-blocking, i.e. $ L ( S/G ) = \overline{L _ { { \mathop{\rm m} } } ( S/G ) } $. A principal result is the following. Let $ K $, the desired language, be given and satisfy $ K \subseteq L _ { { \mathop{\rm m} } } ( G ) $ and $ K \neq \emptyset $. There exists a non-blocking supervisor $ S $ for the given $ G $ such that $ L _ { { \mathop{\rm m} } } ( S/G ) = K $ if and only if the following two conditions hold:

1) $ \overline{K} \Sigma _ { { \mathop{\rm uc} } } \cap L ( G ) \subseteq \overline{K} $;

2) $ K = \overline{K} \cap L _ { { \mathop{\rm m} } } ( G ) $. Condition 1) is known as controllability of $ K $: any uncontrollable event extending a prefix of $ K $ inside $ L ( G ) $ must again yield a prefix of $ K $. References for this area are [a2] and [a7]. See also Control system.
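For finite languages, condition 1) can be checked directly. The following sketch (illustrative code, not from the references; all names are hypothetical) represents a language as a set of event tuples and tests whether the prefix closure of $ K $ is invariant under uncontrollable extensions inside $ L ( G ) $:

```python
# Hypothetical finite-language sketch: a language is a set of words,
# each word a tuple of event labels.

def prefixes(lang):
    """Prefix closure: all prefixes (including the empty word) of all words."""
    return {w[:i] for w in lang for i in range(len(w) + 1)}

def is_controllable(K, L, sigma_uc):
    """Condition 1): prefix(K) . Sigma_uc  intersected with  L  lies in prefix(K)."""
    Kbar = prefixes(K)
    return all(s + (e,) in Kbar
               for s in Kbar for e in sigma_uc
               if s + (e,) in L)

# Toy plant: controllable event 'a' may be followed by uncontrollable 'u'.
L = {(), ('a',), ('a', 'u')}
K1 = {('a', 'u')}   # keeps 'u' after 'a': controllable
K2 = {('a',)}       # tries to forbid the uncontrollable 'u': not controllable
print(is_controllable(K1, L, {'u'}))  # True
print(is_controllable(K2, L, {'u'}))  # False
```

In the second case no supervisor can achieve $ K $: after executing 'a', the plant can always execute the uncontrollable 'u', which leaves the prefix closure of $ K $.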

Timed approach.

Modelling of discrete event systems via the max-plus algebra approach is suitable when timing of the events plays an essential role. The starting point is a "linear" model, linear in the sense of the max-plus algebra, i.e. the algebra in which the admissible operations are maximization and addition, rather than the more conventional addition and multiplication. The basic system studied is defined by the recurrence relation

$$ x ( k + 1 ) = Ax ( k ) , $$

where $ x $ is an $ n $- vector and $ A = \{ a _ {ij } \} $ an $ ( n \times n ) $- matrix with elements in $ \mathbf R \cup \{ - \infty \} $. The "number" $ - \infty $ has been added because it is the neutral element with respect to maximization. The numerical evaluation of the recurrence relation is completely standard, except that the conventional addition is replaced by maximization, and the conventional multiplication is replaced by addition. Just as with linear recurrence relations in the conventional setting, the eigenvalue and eigenvector are important quantities to describe the behaviour of the recurrence relation in the max-plus algebra setting (cf. also Eigenvalue). The eigenvalue $ \lambda $ and eigenvector $ v \neq - \infty $ are defined as usual, i.e. by $ Av = \lambda v $, where, however, the operations used are again maximization and addition. In the max-plus algebra, the inverse of the eigenvalue is a measure for the throughput of the system. Thus, one has

$$ \left ( \begin{array}{cc} 2 & 5 \\ 3 & 1 \\ \end{array} \right ) \left ( \begin{array}{c} 0 \\ - 1 \\ \end{array} \right ) = 4 \left ( \begin{array}{c} 0 \\ - 1 \\ \end{array} \right ) . $$
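The max-plus matrix-vector product is easy to evaluate numerically. As an illustrative sketch (not part of the original article), the following code verifies the eigenpair in the example above:

```python
# Max-plus matrix-vector product: multiplication becomes addition,
# addition becomes maximization (-inf would be the neutral element).

def maxplus_matvec(A, v):
    """(A v)_i = max_j (a_ij + v_j) in the max-plus algebra."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

A = [[2, 5],
     [3, 1]]
v = [0, -1]
print(maxplus_matvec(A, v))  # [4, 3], i.e. 4 added to v componentwise
```

Iterating $ x ( k + 1 ) = Ax ( k ) $ from the eigenvector thus adds $ \lambda = 4 $ to every component at each step, which illustrates why the inverse of the eigenvalue measures throughput.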

Associated with the recurrence relation is a weighted digraph with $ n $ nodes and an arc from node $ j $ to node $ i $ if $ a _ {ij } \neq - \infty $. This graph is called the precedence graph of $ A $ and is denoted by $ {\mathcal G} ( A ) $. If $ a _ {ij } \neq - \infty $, then the weight of the corresponding arc in the graph equals $ a _ {ij } $. The mean weight of a path of $ {\mathcal G} ( A ) $ is defined as the sum of all weights along this path (called the weight of the path), divided by the length of this path. If such a path happens to be a circuit, then one talks about the mean weight of the circuit, or equivalently, the cycle mean. The quantity $ ( A ^ {k} ) _ {ij } $ can be interpreted as the maximum of the weights of all paths of length $ k $ that start at node $ j $ and end at node $ i $. A basic result is the following. If $ A $ is a square matrix with $ {\mathcal G} ( A ) $ strongly connected, then $ A $ has one and only one eigenvalue and at least one eigenvector. The eigenvalue equals the maximum cycle mean of $ {\mathcal G} ( A ) $, where the maximum is taken with respect to all circuits of $ {\mathcal G} ( A ) $. A well-known numerical procedure to calculate the eigenvalue is Karp's algorithm, which reads:

$$ \lambda = \max _ {i = 1 \dots n } \left ( \min _ {k = 0 \dots n - 1 } { \frac{( A ^ {n} ) _ {ij } - ( A ^ {k} ) _ {ij } }{n - k } } \right ) , $$

where $ j $ is an arbitrary fixed node (the result does not depend on the choice of $ j $), and $ A ^ {n} $ and $ A ^ {k} $ are to be evaluated in the max-plus algebra; the other operations (subtraction and division) are conventional ones. Terms in which $ ( A ^ {k} ) _ {ij } = - \infty $ or $ ( A ^ {n} ) _ {ij } = - \infty $ are omitted.
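Karp's algorithm can be sketched as follows (illustrative code, not from the references; it assumes $ {\mathcal G} ( A ) $ is strongly connected). The max-plus powers $ ( A ^ {k} ) _ {ij } $ are built up by dynamic programming; the subtraction and division at the end are conventional:

```python
# Karp's algorithm for the maximum cycle mean of a max-plus matrix A,
# assuming the precedence graph of A is strongly connected.
NEG_INF = float('-inf')

def karp_max_cycle_mean(A, j=0):
    n = len(A)
    # D[k][i] = (A^k)_{ij}: max weight of a path of length k from node j to i.
    D = [[NEG_INF] * n for _ in range(n + 1)]
    D[0][j] = 0.0
    for k in range(1, n + 1):
        for i in range(n):
            D[k][i] = max((A[i][m] + D[k - 1][m] for m in range(n)
                           if A[i][m] != NEG_INF and D[k - 1][m] != NEG_INF),
                          default=NEG_INF)
    # lambda = max_i min_k (D_n(i) - D_k(i)) / (n - k), -inf terms omitted.
    return max(
        min((D[n][i] - D[k][i]) / (n - k)
            for k in range(n) if D[k][i] != NEG_INF)
        for i in range(n) if D[n][i] != NEG_INF
    )

A = [[2, 5],
     [3, 1]]
print(karp_max_cycle_mean(A))  # 4.0, the eigenvalue of the example above
```

For this matrix the circuits have cycle means $ 2 $, $ 1 $ and $ ( 5 + 3 ) /2 = 4 $, so the maximum cycle mean, and hence the eigenvalue, is $ 4 $.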

Various extensions exist. If input and output terms are added to the recurrence relation, one enters the field of "discrete event systems theory" , which has many parallels with the conventional linear systems theory. Other extensions are, for instance, the study of systems where the underlying algebra is more exotic than the conventional or max-plus algebra, and the study of recurrence relations where the elements $ a _ {ij } $ are stochastic quantities. There is a natural relationship with the description of flows of tokens in so-called (timed) event graphs, which are a special class of (timed) Petri nets. Within the terminology of the theory of Petri nets (cf. Petri net), it is easy to give a discrete event interpretation of max-plus linear systems. The $ i $ th component of $ x ( k ) $ can often be interpreted as the time instant at which node $ i $ in $ {\mathcal G} ( A ) $ becomes active for the $ k $ th time. Thus, the argument $ k $ refers to a counter rather than to time. Within such applications, $ a _ {ij } $ refers to the travelling time of the $ k $ th token from node $ j $ to node $ i $, or to its holding time at node $ j $. References for this area are [a1], [a2] and [a3].

Stochastic approach.

Yet another approach to discrete event systems is given by the theory of perturbation analysis. This theory is motivated by the fact that the state trajectory (or sample path) of a discrete event system observed under a given set of conditions contains a surprising amount of information about the behaviour of the system under different conditions one might be interested in. This is particularly important when analytical models are unavailable or simply inadequate for discrete event systems of considerable complexity, especially in a stochastic environment. The aim is to satisfy a set of quantitative specifications and to optimize the performance of a discrete event system. Two basic problems are

i) evaluate $ J ( \theta ) = {\mathsf E} [ L ( \theta ) ] $ for all $ \theta \in \Theta $;

ii) find $ \theta \in \Theta $ to minimize $ J ( \theta ) = {\mathsf E} [ L ( \theta ) ] $. Here, $ \theta $ is some parameter to be chosen from some set $ \Theta $ and $ J ( \theta ) $ is the performance measure of interest; $ L ( \theta ) $ is the performance obtained over a specific sample path of the system and $ {\mathsf E} $ denotes the expectation with respect to all $ \omega \in \Omega $, where $ \Omega $ represents all random occurrences in the system. The theory of perturbation analysis describes conditions under which $ L ( \theta + \Delta \theta ) $, where $ \Delta \theta $ is a possibly small quantity and $ \theta + \Delta \theta \in \Theta $, can be determined from the original experiment through which $ L ( \theta ) $ was obtained. Thus, it is not necessary to execute a new experiment with parameter $ \theta + \Delta \theta $; moreover, such a new experiment would be run with respect to another (also unknown) $ \omega $, which complicates a direct comparison. For small $ \Delta \theta $ one thus obtains an estimate for $ \partial L/ \partial \theta $. The next question to be considered is under which conditions one has

$$ {\mathsf E} \left [ { \frac{\partial L ( \theta, \omega ) }{\partial \theta } } \right ] = { \frac{\partial {\mathsf E} [ L ( \theta, \omega ) ] }{\partial \theta } } , $$

where the argument $ \omega $ has been added for clarity. This equality refers to unbiasedness of the estimator (cf. Unbiased estimator). For many situations this question has been answered satisfactorily. References for this area are [a2], [a4] and [a6].
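The idea can be illustrated on a single-server queue (a hypothetical example, not from the article): service times are $ \theta u _ {i} $ for fixed random requirements $ u _ {i} $, and $ L ( \theta ) $ is the total waiting time. One sample path $ \omega $ (here the draws $ u, a $) yields both $ L ( \theta ) $ and a sample-path derivative, which a finite difference on the same $ \omega $ confirms:

```python
# Sketch of infinitesimal perturbation analysis (IPA) on a single-server
# queue: one simulated sample path yields both L(theta) and dL/dtheta.
import random

def waiting_times(theta, u, a):
    """Lindley recursion: W_{i+1} = max(0, W_i + theta*u_i - a_{i+1})."""
    W = [0.0]
    for ui, ai in zip(u, a):
        W.append(max(0.0, W[-1] + theta * ui - ai))
    return W

def ipa_estimate(theta, u, a):
    """Accumulate dW/dtheta along the same recursion: the derivative of the
    max is dW + u_i on busy steps and 0 when the queue empties."""
    W, dW, dL = 0.0, 0.0, 0.0
    for ui, ai in zip(u, a):
        if W + theta * ui - ai > 0:
            W, dW = W + theta * ui - ai, dW + ui
        else:
            W, dW = 0.0, 0.0
        dL += dW
    return dL

rng = random.Random(1)
u = [rng.expovariate(1.0) for _ in range(1000)]   # service requirements
a = [rng.expovariate(0.8) for _ in range(1000)]   # interarrival times
theta, h = 0.5, 1e-6
fd = (sum(waiting_times(theta + h, u, a)) - sum(waiting_times(theta, u, a))) / h
print(ipa_estimate(theta, u, a), fd)  # agree up to finite-difference error
```

Note that the finite difference here reuses the same $ \omega $ (the same draws $ u, a $); in practice a rerun with a fresh $ \omega $ would be needed, which is exactly the cost that perturbation analysis avoids.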

References

[a1] F.L. Baccelli, G. Cohen, G.J. Olsder, and J.-P. Quadrat, "Synchronization and linearity: an algebra for discrete event systems" , Wiley (1992)
[a2] C.G. Cassandras, S. Lafortune, G.J. Olsder, "Introduction to the modelling, control and optimization of discrete event systems" A. Isidori (ed.) , European Control Conf. , Springer (1995) pp. 75
[a3] R.A. Cuninghame-Green, "Minimax algebra" , Lecture Notes in Economics and Mathematical Systems , 166 , Springer (1979)
[a4] P. Glasserman, "Gradient estimation via perturbation analysis" , Kluwer Acad. Publ. (1991)
[a5] P. Glasserman, D.D. Yao, "Monotone structure in discrete-event systems" , Wiley (1994)
[a6] Y.C. Ho, X. Cao, "Perturbation analysis of discrete event dynamic systems" , Kluwer Acad. Publ. (1991)
[a7] P.J. Ramadge, W.M. Wonham, "Supervisory control of a class of discrete event processes" SIAM J. Control Optim. , 25 (1987) pp. 206–230
How to Cite This Entry:
Discrete event system. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Discrete_event_system&oldid=18173
This article was adapted from an original article by G.J. Olsder (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article