Discrete event system

From Encyclopedia of Mathematics
 
The theory of discrete event systems, sometimes also referred to as discrete event dynamic systems to emphasize the evolution in time, encompasses a variety of classes of problems and of modelling approaches. A succinct definition of a discrete event system that sets such systems apart from others is hard to give. Two key features of a discrete event system are: i) its dynamics is event driven as opposed to time driven, i.e. the behaviour of a discrete event system is governed only by occurrences of different types of events over time rather than by ticks of a clock; and ii) its state variables belong to a discrete (not necessarily finite) set. Theoretical disciplines that support the study of discrete event systems include systems theory (cf. [[Automatic control theory|Automatic control theory]]), [[Operations research|operations research]] and theoretical computer science. In this article, three important approaches to the description and analysis of discrete event systems will be given; which one is the most suitable depends on the field of application. Common to these approaches are various monotonicity properties, see [[#References|[a5]]]. Related theoretical disciplines are characterized by terms like [[Queueing theory|queueing theory]], hybrid systems, Petri nets (cf. [[Petri net|Petri net]]), path algebras, and generalized Markov processes (cf. [[Markov process|Markov process]]). Applications are found specifically in the areas of flexible manufacturing, communication networks and logistic systems.
 
Let the automaton be described by

$$
G = ( X, \Sigma, f, \Sigma_G, x_0, X_{\mathrm{m}} ),
$$
 
where

$X$ is the set of states;
  
$\Sigma$ is the set of events associated with the transitions;
  
$f : X \times \Sigma \rightarrow X$ is the partial transition function;
  
$\Sigma_G : X \rightarrow 2^\Sigma$ is the feasible event function: $\Sigma_G ( x )$ is the set of all events $e$ for which $f ( x, e )$ is defined;
  
$x_0$ is the initial state;
  
$X_{\mathrm{m}} \subseteq X$ is the set of marked states (upon entering a marked state, the system has completed some operation or task). The language generated by $G$ is denoted by $L ( G )$; $\overline{L}$ is the language consisting of all prefixes of $L$. The language marked by $G$ is defined by $L_{\mathrm{m}} ( G ) = \{ s \in L ( G ) : f ( x_0, s ) \in X_{\mathrm{m}} \}$. Let $\Sigma$ be partitioned into two disjoint subsets,
  
$$
\Sigma = \Sigma_{\mathrm{c}} \cup \Sigma_{\mathrm{uc}},
$$
  
where $\Sigma_{\mathrm{c}}$ is the set of controllable events, i.e. events that can be disabled by the supervisor, and $\Sigma_{\mathrm{uc}}$ is the set of uncontrollable events (i.e. events that cannot be prevented from happening by control). Formally, a supervisor is a function
  
$$
S : L ( G ) \rightarrow \Gamma = \left\{ \gamma \in 2^\Sigma : \Sigma_{\mathrm{uc}} \subseteq \gamma \right\}.
$$
  
For each $s \in L ( G )$ generated up till now by $G$ (under the control of $S$), $S ( s ) \cap \Sigma_G ( f ( x_0, s ) )$ is the set of enabled events that $G$ can execute at its current state $f ( x_0, s )$. The resulting system of $G$ and $S$ is denoted by $S/G$, to suggest that "$G$ is under the supervision of $S$". The supervisor $S$ is said to be non-blocking if $S/G$ is non-blocking, i.e. $L ( S/G ) = \overline{L_{\mathrm{m}} ( S/G )}$. A principal result is the following. Let $K$, the desired language, be given and satisfy $K \subseteq L_{\mathrm{m}} ( G )$ and $K \neq \emptyset$. There exists a non-blocking supervisor $S$ for $G$ (given) such that $L_{\mathrm{m}} ( S/G ) = K$ if and only if the following two conditions hold:
  
1) $\overline{K} \Sigma_{\mathrm{uc}} \cap L ( G ) \subseteq \overline{K}$;
  
2) $K = \overline{K} \cap L_{\mathrm{m}} ( G )$.

References for this area are [[#References|[a2]]] and [[#References|[a7]]]. See also [[Control system|Control system]].
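The two conditions can be checked by brute force on a small example. The following sketch is illustrative only: the automaton $G$, its events, and the choice of $K$ are hypothetical and not from the article, and the languages are truncated to strings of a fixed maximum length.

```python
from itertools import product

# Hypothetical two-state automaton G.  Events: 'a' (controllable start),
# 'b' (uncontrollable breakdown), 'c' (controllable finish).
f = {(0, 'a'): 1, (1, 'b'): 0, (1, 'c'): 0}   # partial transition function
x0 = 0
Xm = {0}                                       # marked states
Sigma_uc = {'b'}                               # uncontrollable events

def run(s):
    """Return f(x0, s), or None if the string s is not executable."""
    x = x0
    for e in s:
        if (x, e) not in f:
            return None
        x = f[(x, e)]
    return x

# Enumerate L(G) and Lm(G), truncated to strings of length <= N.
N = 4
L = [s for n in range(N + 1)
     for s in map(''.join, product('abc', repeat=n)) if run(s) is not None]
Lm = [s for s in L if run(s) in Xm]

# Desired language K (an illustrative choice): marked strings without 'c'.
K = [s for s in Lm if 'c' not in s]
Kbar = {s[:i] for s in K for i in range(len(s) + 1)}   # prefix closure of K

# 1) controllability:  Kbar Sigma_uc  intersect  L(G)  is contained in  Kbar
cond1 = all(s + e in Kbar for s in Kbar for e in Sigma_uc if s + e in L)
# 2) Lm(G)-closure:  K = Kbar intersect Lm(G)
cond2 = set(K) == Kbar & set(Lm)
print(cond1, cond2)
```

For this $K$ both conditions hold, so a non-blocking supervisor achieving $K$ exists; it would disable `'a'` after two completed cycles (within the truncated horizon) and `'c'` always, which is permitted since both are controllable.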
  
 
==Timed approach.==
 
Modelling of discrete event systems via the max-plus algebra approach is suitable when timing of the events plays an essential role. The starting point is a  "linear"  model, linear in the sense of the max-plus algebra, i.e. the [[Algebra|algebra]] in which the admissible operations are maximization and addition, rather than the more conventional addition and multiplication. The basic system studied is defined by the recurrence relation
  
$$
x ( k + 1 ) = A x ( k ),
$$

where $x$ is an $n$-vector and $A = \{ a_{ij} \}$ an $( n \times n )$-matrix with elements in $\mathbf{R} \cup \{ - \infty \}$. The "number" $- \infty$ has been added because it is the neutral element with respect to maximization. The numerical evaluation of the recurrence relation is completely standard, except that the conventional addition is replaced by maximization, and the conventional multiplication is replaced by addition. Just as with linear recurrence relations in the conventional setting, the eigenvalue and eigenvector are important quantities to describe the behaviour of the recurrence relation in the max-plus algebra setting (cf. also [[Eigen value|Eigen value]]). In the max-plus algebra, the inverse of the eigenvalue is a measure for the throughput of the system. The eigenvalue $\lambda$ and eigenvector $v \neq - \infty$ in the latter case are defined as usual, i.e. by $Av = \lambda v$, where, however, the operations used are maximization and addition again. Thus, one has

$$
\left( \begin{array}{cc} 2 & 5 \\ 3 & 1 \end{array} \right)
\left( \begin{array}{c} 0 \\ -1 \end{array} \right)
= 4 \left( \begin{array}{c} 0 \\ -1 \end{array} \right).
$$
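The max-plus arithmetic of this eigenpair can be verified directly; a minimal sketch in which the "product" is conventional addition and the "sum" is maximization:

```python
NEG_INF = float('-inf')   # neutral element for maximization

def maxplus_matvec(A, v):
    """Max-plus matrix-vector product: (A x v)_i = max_j (a_ij + v_j)."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

A = [[2, 5],
     [3, 1]]
v = [0, -1]

lhs = maxplus_matvec(A, v)    # row 1: max(2+0, 5-1) = 4; row 2: max(3+0, 1-1) = 3
rhs = [4 + x for x in v]      # lambda x v in max-plus is ordinary addition of 4
print(lhs, rhs)               # both equal [4, 3]
```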
  
Associated with the recurrence relation is a weighted digraph with $n$ nodes and an arc from node $j$ to node $i$ if $a_{ij} \neq - \infty$. This graph is called the precedence graph of $A$ and is denoted by ${\mathcal G} ( A )$. If $a_{ij} \neq - \infty$, then the weight of the corresponding arc in the graph equals $a_{ij}$. The mean weight of a path of ${\mathcal G} ( A )$ is defined as the sum of all weights along this path (called the weight of the path), divided by the length of this path. If such a path happens to be a circuit, then one speaks of the mean weight of the circuit, or equivalently, the cycle mean. The quantity $( A^k )_{ij}$ can be interpreted as the maximum of the weights of all paths of length $k$ that start at node $j$ and end at node $i$. A basic result is the following: if $A$ is a square matrix with ${\mathcal G} ( A )$ strongly connected, then $A$ has one and only one eigenvalue and at least one eigenvector. The eigenvalue equals the maximum cycle mean of ${\mathcal G} ( A )$, where the maximum is taken with respect to all circuits of ${\mathcal G} ( A )$. A well-known numerical procedure to calculate the eigenvalue is the Karp algorithm, which reads:
  
$$
\lambda = \max_{i = 1, \dots, n} \left( \min_{k = 0, \dots, n - 1}
\frac{( A^n )_{ij} - ( A^k )_{ij}}{n - k} \right), \quad \forall j,
$$
  
where $A^n$ and $A^k$ are to be evaluated in the max-plus algebra; the other operations (subtraction and division) are conventional ones.
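The formula above transcribes directly into code; a sketch, assuming ${\mathcal G}(A)$ is strongly connected and fixing the free index $j$ to the first node:

```python
NEG_INF = float('-inf')   # max-plus "zero"

def karp_eigenvalue(A, j=0):
    """Karp's formula for the max-plus eigenvalue (the maximum cycle mean),
    assuming the precedence graph of A is strongly connected."""
    n = len(A)
    # cols[k][i] = (A^k)_{ij}: maximal weight of a length-k path j -> i,
    # computed with max-plus powers (max replaces +, + replaces *).
    col = [NEG_INF] * n
    col[j] = 0.0                      # (A^0)_{jj} = 0, the max-plus "one"
    cols = [col]
    for _ in range(n):
        col = [max(A[i][m] + col[m] for m in range(n)) for i in range(n)]
        cols.append(col)
    # max over i of min over k of ((A^n)_{ij} - (A^k)_{ij}) / (n - k);
    # entries equal to -infinity (no path) are skipped.
    return max(
        min((cols[n][i] - cols[k][i]) / (n - k)
            for k in range(n) if cols[k][i] > NEG_INF)
        for i in range(n) if cols[n][i] > NEG_INF)

A = [[2, 5],
     [3, 1]]
print(karp_eigenvalue(A))   # the eigenvalue of the 2x2 example
```

On the $2 \times 2$ example above this returns $4$, the maximum cycle mean realized by the circuit $1 \rightarrow 2 \rightarrow 1$ of weight $5 + 3 = 8$ and length $2$.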
  
Various extensions exist. If input and output terms are added to the recurrence relation, one enters the field of "discrete event systems theory", which has many parallels with the conventional linear systems theory. Other extensions are, for instance, the study of systems where the underlying algebra is more exotic than the conventional or max-plus algebra, and the study of recurrence relations where the elements $a_{ij}$ are stochastic quantities. There is a natural relationship with the description of flows of tokens in so-called (timed) event graphs, which are a special class of (timed) Petri nets. Within the terminology of the theory of Petri nets (cf. [[Petri net|Petri net]]), it is easy to give a discrete event interpretation of max-plus linear systems. The $i$th component of $x ( k )$ can often be interpreted as the time instant at which node $i$ in ${\mathcal G} ( A )$ has become active for the $k$th time. Thus, the argument $k$ refers to a counter rather than to time. Within such applications, $a_{ij}$ refers to the travelling time from node $j$ to node $i$, or to the holding time at node $j$, of the $k$th token. References for this area are [[#References|[a2]]], [[#References|[a3]]] and [[#References|[a1]]].
  
 
==Stochastic approach.==
 
Yet another approach to discrete event systems is given by the theory of perturbation analysis. This theory is motivated by the fact that the state trajectory (or sample path) of a discrete event system observed under a given set of conditions contains a surprising amount of information about the behaviour of the system under different conditions one might be interested in. This is particularly important when analytical models are unavailable or simply inadequate for discrete event systems of considerable complexity, especially in a stochastic environment. The aim is to satisfy a set of quantitative specifications and to optimize the performance of a discrete event system. Two basic problems are
  
i) evaluate $J ( \theta ) = {\mathsf E} [ L ( \theta ) ]$ for all $\theta \in \Theta$;
  
ii) find $\theta \in \Theta$ to minimize $J ( \theta ) = {\mathsf E} [ L ( \theta ) ]$. Here, $\theta$ is some parameter to be chosen from some set $\Theta$ and $J ( \theta )$ is the performance measure of interest; $L ( \theta )$ is the performance obtained over a specific sample path of the system and ${\mathsf E}$ denotes the expectation with respect to all $\omega \in \Omega$, where $\Omega$ represents all random occurrences in the system. The theory of perturbation analysis describes conditions under which $L ( \theta + \Delta \theta )$, where $\Delta \theta$ is a possibly small quantity and $\theta + \Delta \theta \in \Theta$, can be determined from the original experiment through which $L ( \theta )$ was obtained. Thus, it is not necessary to execute a new experiment with parameter $\theta + \Delta \theta$; moreover, a complicating factor is that such a new experiment would be run with respect to another (also unknown) $\omega$. For small $\Delta \theta$ one thus obtains an estimate for $\partial L / \partial \theta$. The next question to be considered is under which conditions one has
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d110/d110230/d110230105.png" /></td> </tr></table>
+
$$
 +
{\mathsf E} \left [ {
 +
\frac{\partial  L ( \theta, \omega ) }{\partial  \theta }
 +
} \right ] = {
 +
\frac{\partial  {\mathsf E} [ L ( \theta, \omega ) ] }{\partial  \theta }
 +
} ,
 +
$$
  
where the argument <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d110/d110230/d110230106.png" /> has been added for clarity. This equality refers to unbiasedness of the estimator (cf. [[Unbiased estimator|Unbiased estimator]]). For many situations this question has been answered satisfactorily. References for this area are [[#References|[a2]]], [[#References|[a4]]] and [[#References|[a6]]].
+
where the argument $  \omega $
 +
has been added for clarity. This equality refers to unbiasedness of the estimator (cf. [[Unbiased estimator|Unbiased estimator]]). For many situations this question has been answered satisfactorily. References for this area are [[#References|[a2]]], [[#References|[a4]]] and [[#References|[a6]]].
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  F.L. Baccelli,  G. Cohen,  G.J. Olsder,  and J.-P. Quadrat,  "Synchronization and linearity: an algebra for discrete event systems" , Wiley  (1992)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  C.G. Cassandras,  S. Lafortune,  G.J. Olsder,  "Introduction to the modelling, control and optimization of discrete event systems"  A. Isidori (ed.) , ''European Control Conf.'' , Springer  (1995)  pp. 75</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  R.A. Cuninghame-Green,  "Minimax algebra" , ''Lecture Notes in Economics and Mathematical Systems'' , '''166''' , Springer  (1979)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  P. Glasserman,  "Gradient estimation via perturbation analysis" , Kluwer Acad. Publ.  (1991)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  P. Glasserman,  D.D. Yao,  "Monotone structure in discrete-event systems" , Wiley  (1994)</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  Y.C. Ho,  X. Cao,  "Perturbation analysis of discrete event dynamic systems" , Kluwer Acad. Publ.  (1991)</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  P.J. Ramadge,  W.M. Wonham,  "Supervisory control of a class of discrete event processes"  ''SIAM J. Control Optim.'' , '''25'''  (1987)  pp. 206–230</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  F.L. Baccelli,  G. Cohen,  G.J. Olsder,  and J.-P. Quadrat,  "Synchronization and linearity: an algebra for discrete event systems" , Wiley  (1992)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  C.G. Cassandras,  S. Lafortune,  G.J. Olsder,  "Introduction to the modelling, control and optimization of discrete event systems"  A. Isidori (ed.) , ''European Control Conf.'' , Springer  (1995)  pp. 75</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  R.A. Cuninghame-Green,  "Minimax algebra" , ''Lecture Notes in Economics and Mathematical Systems'' , '''166''' , Springer  (1979)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  P. Glasserman,  "Gradient estimation via perturbation analysis" , Kluwer Acad. Publ.  (1991)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  P. Glasserman,  D.D. Yao,  "Monotone structure in discrete-event systems" , Wiley  (1994)</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  Y.C. Ho,  X. Cao,  "Perturbation analysis of discrete event dynamic systems" , Kluwer Acad. Publ.  (1991)</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  P.J. Ramadge,  W.M. Wonham,  "Supervisory control of a class of discrete event processes"  ''SIAM J. Control Optim.'' , '''25'''  (1987)  pp. 206–230</TD></TR></table>

Latest revision as of 19:35, 5 June 2020


The theory of discrete event systems, sometimes also referred to as discrete event dynamic systems to emphasize the evolution in time, encompasses a variety of classes of problems and of modelling approaches. A succinct definition of a discrete event system that sets such systems apart from others is hard to give. Two key features of a discrete event system are: i) its dynamics is event driven as opposed to time driven, i.e. the behaviour of a discrete event system is governed only by occurrences of different types of events over time rather than by ticks of a clock; and ii) its state variables belong to a discrete (not necessarily finite) set. Theoretical disciplines that support the study of discrete event systems include systems theory (cf. Automatic control theory), operations research and theoretical computer science. In this article, three important approaches to the description and analysis of discrete event systems are given; which one is the most suitable to use depends on the field of application. Common to these approaches are various monotonicity properties; see [a5]. Related theoretical disciplines are characterized by terms like queueing theory, hybrid systems, Petri nets (cf. Petri net), path algebras, and generalized Markov processes (cf. Markov process). Applications are found specifically in the areas of flexible manufacturing, communication networks and logistic systems.

Logical approach.

Problems in which the precise ordering of events, not their timing, is of interest, and in which this ordering must satisfy a given set of specifications, are conveniently modelled within the context of the theory of automata and formal languages (cf. Formal languages and automata). A discrete event system is typically represented as a set of state variables that are linked by transitions. The set of state transitions, called events, can be considered as the alphabet of a language, and sequences of events as words (cf. Word) within that language. An automaton can be viewed simply as a device that generates a language by manipulating the alphabet according to a specific set of rules. Control aspects can be added in the sense that certain events (i.e. transitions) can be disabled by an external controller. The resulting systems and control theory for discrete event systems is known as supervisory control theory. This theory addresses the synthesis of controllers (i.e. supervisors) for discrete event systems so as to satisfy a set of qualitative specifications on the admissible orderings of the events that can be executed by the system.

Let the automaton be described by

$$ G = ( X, \Sigma,f, \Sigma _ {G} ,x _ {0} ,X _ { { \mathop{\rm m} } } ) , $$

where

$ X $ is the set of states;

$ \Sigma $ is the set of events associated with the transitions;

$ f : {X \times \Sigma } \rightarrow X $ is the partial transition function;

$ {\Sigma _ {G} } : X \rightarrow {2 ^ \Sigma } $ is the feasible event function: $ \Sigma _ {G} ( x ) $ is the set of all events $ e $ for which $ f ( x,e ) $ is defined;

$ x _ {0} $ is the initial state;

$ X _ {\mathrm m} \subseteq X $ is the set of marked states (upon entering a marked state, the system has completed some operation or task). The language generated by $ G $ is denoted by $ L ( G ) $; for any language $ L $, $ \overline{L} $ denotes the language consisting of all prefixes of strings in $ L $. The language marked by $ G $ is defined by $ L _ {\mathrm m} ( G ) = \{ {s \in L ( G ) } : {f ( x _ {0} ,s ) \in X _ {\mathrm m} } \} $. Let $ \Sigma $ be partitioned into two disjoint subsets,

$$ \Sigma = \Sigma _ { { \mathop{\rm c} } } \cup \Sigma _ { { \mathop{\rm uc} } } , $$

where $ \Sigma _ {\mathrm c} $ is the set of controllable events, i.e. events that can be disabled by the supervisor, and $ \Sigma _ {\mathrm{uc}} $ is the set of uncontrollable events, i.e. events that cannot be prevented from happening by control. Formally, a supervisor is a function

$$ S : {L ( G ) } \rightarrow {\Gamma = \left \{ {\gamma \in 2 ^ \Sigma } : {\Sigma _ { { \mathop{\rm uc} } } \subseteq \gamma } \right \} } . $$

For each $ s \in L ( G ) $ generated thus far by $ G $ (under the control of $ S $), $ S ( s ) \cap \Sigma _ {G} ( f ( x _ {0} ,s ) ) $ is the set of enabled events that $ G $ can execute at its current state $ f ( x _ {0} ,s ) $. The resulting system of $ G $ and $ S $ is denoted by $ S/G $, to suggest that "$ G $ is under the supervision of $ S $". The supervisor $ S $ is said to be non-blocking if $ S/G $ is non-blocking, i.e. if $ L ( S/G ) = \overline{L _ {\mathrm m} ( S/G ) } $. A principal result is the following. Let $ K $, the desired language, be given and satisfy $ K \subseteq L _ {\mathrm m} ( G ) $ and $ K \neq \emptyset $. There exists a non-blocking supervisor $ S $ for the given $ G $ such that $ L _ {\mathrm m} ( S/G ) = K $ if and only if the following two conditions hold:

1) $ \overline{K} \Sigma _ {\mathrm{uc}} \cap L ( G ) \subseteq \overline{K} $;

2) $ K = \overline{K} \cap L _ {\mathrm m} ( G ) $. References for this area are [a2] and [a7]. See also Control system.
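The definitions above can be illustrated by a small program sketch. This is a hypothetical example (the automaton, its event names and the supervisor rule are invented for illustration, not taken from the theory above): a one-machine plant $ G $ with a controllable event 'start' and an uncontrollable event 'finish', supervised so that at most two jobs are started.

```python
# Minimal sketch of an automaton G and a supervisor S that may disable
# only controllable events. All names here are hypothetical.

class Automaton:
    def __init__(self, delta, x0, marked):
        self.delta = delta      # partial transition function f as a dict: (x, e) -> x'
        self.x0 = x0            # initial state
        self.marked = marked    # X_m, the set of marked states

    def feasible(self, x):      # Sigma_G(x): events e for which f(x, e) is defined
        return {e for (s, e) in self.delta if s == x}

# One machine: state 0 = idle, state 1 = busy.
G = Automaton(
    delta={(0, "start"): 1, (1, "finish"): 0},
    x0=0,
    marked={0},
)

UNCONTROLLABLE = {"finish"}     # Sigma_uc: cannot be disabled

def supervisor(word):
    """S : L(G) -> Gamma. Disables the controllable event 'start' after
    two jobs; every control pattern contains Sigma_uc, as required."""
    enabled = set(UNCONTROLLABLE)
    if word.count("start") < 2:
        enabled.add("start")
    return enabled

# Generate one maximal trace of the supervised system S/G.
word, x = [], G.x0
while True:
    allowed = supervisor(word) & G.feasible(x)
    if not allowed:
        break
    e = sorted(allowed)[0]      # here every step offers exactly one event
    word.append(e)
    x = G.delta[(x, e)]

print(word)           # ['start', 'finish', 'start', 'finish']
print(x in G.marked)  # True: the supervised run ends in a marked state
```

The supervised system is non-blocking in this toy case: its only maximal trace ends in the marked (idle) state, after exactly the two jobs the specification allows.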

Timed approach.

Modelling of discrete event systems via the max-plus algebra approach is suitable when timing of the events plays an essential role. The starting point is a "linear" model, linear in the sense of the max-plus algebra, i.e. the algebra in which the admissible operations are maximization and addition, rather than the more conventional addition and multiplication. The basic system studied is defined by the recurrence relation

$$ x ( k + 1 ) = Ax ( k ) , $$

where $ x $ is an $ n $-vector and $ A = \{ a _ {ij} \} $ an $ ( n \times n ) $-matrix with elements in $ \mathbf R \cup \{ - \infty \} $. The "number" $ - \infty $ has been added because it is the neutral element with respect to maximization. The numerical evaluation of the recurrence relation is completely standard, except that conventional addition is replaced by maximization, and conventional multiplication is replaced by addition. Just as with linear recurrence relations in the conventional setting, the eigenvalue and eigenvector are important quantities for describing the behaviour of the recurrence relation in the max-plus algebra setting (cf. also Eigen value); in the max-plus algebra, the inverse of the eigenvalue is a measure for the throughput of the system. The eigenvalue $ \lambda $ and eigenvector $ v \neq - \infty $ are defined as usual, i.e. by $ Av = \lambda v $, where, however, the operations used are again maximization and addition. Thus, one has

$$ \left ( \begin{array}{cc} 2 & 5 \\ 3 & 1 \\ \end{array} \right ) \left ( \begin{array}{c} 0 \\ - 1 \\ \end{array} \right ) = 4 \left ( \begin{array}{c} 0 \\ - 1 \\ \end{array} \right ) . $$
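This eigenvalue equation is easy to check numerically. The following sketch (the helper name mp_matvec is invented for illustration) evaluates both sides of $ Av = \lambda v $ in the max-plus algebra:

```python
def mp_matvec(A, v):
    """Max-plus matrix-vector product: (A v)_i = max_j (a_ij + v_j)."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

A = [[2, 5],
     [3, 1]]
v = [0, -1]
lam = 4

lhs = mp_matvec(A, v)        # max-plus A v
rhs = [lam + x for x in v]   # max-plus "scalar multiplication" is ordinary addition

print(lhs, rhs)  # [4, 3] [4, 3]
```

Both sides equal $ ( 4, 3 ) ^ {T} $, confirming that $ v = ( 0, -1 ) ^ {T} $ is an eigenvector with eigenvalue $ 4 $.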

Associated with the recurrence relation is a weighted digraph with $ n $ nodes and an arc from node $ j $ to node $ i $ if $ a _ {ij} \neq - \infty $. This graph is called the precedence graph of $ A $ and is denoted by $ {\mathcal G} ( A ) $. If $ a _ {ij} \neq - \infty $, then the weight of the corresponding arc in the graph equals $ a _ {ij} $. The mean weight of a path of $ {\mathcal G} ( A ) $ is defined as the sum of all weights along this path (called the weight of the path), divided by the length of this path. If such a path happens to be a circuit, then one speaks of the mean weight of the circuit or, equivalently, the cycle mean. The quantity $ ( A ^ {k} ) _ {ij} $ can be interpreted as the maximum of the weights of all paths of length $ k $ that start at node $ j $ and end at node $ i $. A basic result is the following: if $ {\mathcal G} ( A ) $ is strongly connected, then the square matrix $ A $ has one and only one eigenvalue and at least one eigenvector. The eigenvalue equals the maximum cycle mean of $ {\mathcal G} ( A ) $, where the maximum is taken with respect to all circuits of $ {\mathcal G} ( A ) $. A well-known numerical procedure to calculate the eigenvalue is Karp's algorithm, which reads:

$$ \lambda = \max _ {i = 1 \dots n } \min _ {k = 0 \dots n - 1 } { \frac{( A ^ {n} ) _ {ij} - ( A ^ {k} ) _ {ij} }{n - k } } , \quad \forall j, $$

where $ A ^ {n} $ and $ A ^ {k} $ are to be evaluated in the max-plus algebra; the other operations (subtraction and division) are conventional ones.
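Karp's formula translates directly into code. The sketch below (function name invented for illustration) builds the quantities $ ( A ^ {k} ) _ {ij} $ as longest-path weights of length $ k $ from a fixed node $ j $, and then applies the formula; for the example matrix above it recovers the eigenvalue $ 4 $, the mean weight of the circuit through both nodes.

```python
import math

NEG = -math.inf  # max-plus "zero", marking absent arcs

def karp_max_cycle_mean(A, j=0):
    """Maximum cycle mean of the precedence graph G(A) via Karp's formula,
    assuming G(A) is strongly connected. D[k][i] = (A^k)_{ij}: the maximum
    weight over all paths of length k from node j to node i."""
    n = len(A)
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][j] = 0.0
    for k in range(1, n + 1):
        for i in range(n):
            D[k][i] = max(A[i][m] + D[k - 1][m] for m in range(n))
    # max over i, min over k of (D[n][i] - D[k][i]) / (n - k),
    # skipping entries that correspond to non-existent paths.
    return max(
        min((D[n][i] - D[k][i]) / (n - k) for k in range(n) if D[k][i] > NEG)
        for i in range(n) if D[n][i] > NEG
    )

A = [[2, 5],
     [3, 1]]
print(karp_max_cycle_mean(A))  # 4.0, the eigenvalue found above
```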

Various extensions exist. If input and output terms are added to the recurrence relation, one enters the field of "discrete event systems theory", which has many parallels with conventional linear systems theory. Other extensions are, for instance, the study of systems where the underlying algebra is more exotic than the conventional or max-plus algebra, and the study of recurrence relations where the elements $ a _ {ij} $ are stochastic quantities. There is a natural relationship with the description of flows of tokens in so-called (timed) event graphs, which are a special class of (timed) Petri nets. Within the terminology of the theory of Petri nets (cf. Petri net), it is easy to give a discrete event interpretation of max-plus linear systems. The $ i $-th component of $ x ( k ) $ can often be interpreted as the time instant at which node $ i $ in $ {\mathcal G} ( A ) $ becomes active for the $ k $-th time; thus, the argument $ k $ refers to a counter rather than to time. Within such applications, $ a _ {ij} $ refers to the travelling time of the $ k $-th token from node $ j $ to node $ i $, or to its holding time at node $ j $. References for this area are [a1], [a2] and [a3].
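Under this interpretation the role of the eigenvalue as inverse throughput can be seen by simply iterating the recurrence: the $ k $-th activation times grow, on average, by $ \lambda $ per event. A small sketch (illustrative numbers, same example matrix as above):

```python
def mp_matvec(A, v):
    # max-plus matrix-vector product: (A v)_i = max_j (a_ij + v_j)
    return [max(a + x for a, x in zip(row, v)) for row in A]

A = [[2, 5],
     [3, 1]]   # a_ij: travelling/holding times
x = [0, 0]     # x_i(0): both nodes first become active at time 0
K = 20
for _ in range(K):
    x = mp_matvec(A, x)  # x(k+1) = A x(k) in the max-plus algebra

print(x[0] / K)  # 4.0: asymptotically one activation per lambda = 4 time units
```

After a short transient the sequence becomes periodic, and each node fires once every $ \lambda = 4 $ time units on average, matching the eigenvalue (the maximum cycle mean).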

Stochastic approach.

Yet another approach to discrete event systems is given by the theory of perturbation analysis. This theory is motivated by the fact that the state trajectory (or sample path) of a discrete event system observed under a given set of conditions contains a surprising amount of information about the behaviour of the system under different conditions one might be interested in. This is particularly important when analytical models are unavailable or simply inadequate for discrete event systems of considerable complexity, especially in a stochastic environment. The aim is to satisfy a set of quantitative specifications and to optimize the performance of a discrete event system. Two basic problems are

i) evaluate $ J ( \theta ) = {\mathsf E} [ L ( \theta ) ] $ for all $ \theta \in \Theta $;

ii) find $ \theta \in \Theta $ to minimize $ J ( \theta ) = {\mathsf E} [ L ( \theta ) ] $. Here, $ \theta $ is a parameter to be chosen from some set $ \Theta $ and $ J ( \theta ) $ is the performance measure of interest; $ L ( \theta ) $ is the performance obtained over a specific sample path of the system and $ {\mathsf E} $ denotes the expectation with respect to all $ \omega \in \Omega $, where $ \Omega $ represents all random occurrences in the system. The theory of perturbation analysis describes conditions under which $ L ( \theta + \Delta \theta ) $, where $ \Delta \theta $ is a possibly small quantity and $ \theta + \Delta \theta \in \Theta $, can be determined from the original experiment through which $ L ( \theta ) $ was obtained. Thus, it is not necessary to execute a new experiment with parameter $ \theta + \Delta \theta $; moreover, such a new experiment would in any case be run with respect to another (also unknown) $ \omega $, which complicates a direct comparison. For small $ \Delta \theta $ one thus obtains an estimate for $ \partial L / \partial \theta $. The next question to be considered is under which conditions one has

$$ {\mathsf E} \left [ { \frac{\partial L ( \theta, \omega ) }{\partial \theta } } \right ] = { \frac{\partial {\mathsf E} [ L ( \theta, \omega ) ] }{\partial \theta } } , $$

where the argument $ \omega $ has been added for clarity. This equality refers to unbiasedness of the estimator (cf. Unbiased estimator). For many situations this question has been answered satisfactorily. References for this area are [a2], [a4] and [a6].
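A toy example (invented for illustration, not from the perturbation-analysis literature cited below) shows what such a sample-path derivative looks like. Take $ L ( \theta, \omega ) = \max ( \theta, U ) $ with $ U $ uniform on $ ( 0, 1 ) $; then $ J ( \theta ) = ( 1 + \theta ^ {2} ) / 2 $, so $ dJ / d \theta = \theta $ for $ \theta \in ( 0, 1 ) $, while the sample-path derivative is $ \partial L / \partial \theta = 1 $ if $ U < \theta $ and $ 0 $ otherwise. Averaging the sample-path derivative over many realizations recovers $ dJ / d \theta $, i.e. in this case the estimator is unbiased:

```python
import random

random.seed(1)
theta = 0.6
n = 200_000

# Sample-path derivative of L(theta, omega) = max(theta, U):
# equal to 1 exactly when U < theta, and 0 otherwise.
ipa = sum(1.0 for _ in range(n) if random.random() < theta) / n

print(ipa)  # close to 0.6 = dJ/dtheta: E[dL/dtheta] = dE[L]/dtheta here
```

Note that here $ \theta \mapsto L ( \theta, \omega ) $ is continuous (piecewise differentiable) for every fixed $ \omega $; such sample-path continuity is typical of the conditions under which the interchange of expectation and differentiation above is justified.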

References

[a1] F.L. Baccelli, G. Cohen, G.J. Olsder, and J.-P. Quadrat, "Synchronization and linearity: an algebra for discrete event systems" , Wiley (1992)
[a2] C.G. Cassandras, S. Lafortune, G.J. Olsder, "Introduction to the modelling, control and optimization of discrete event systems" A. Isidori (ed.) , European Control Conf. , Springer (1995) pp. 75
[a3] R.A. Cuninghame-Green, "Minimax algebra" , Lecture Notes in Economics and Mathematical Systems , 166 , Springer (1979)
[a4] P. Glasserman, "Gradient estimation via perturbation analysis" , Kluwer Acad. Publ. (1991)
[a5] P. Glasserman, D.D. Yao, "Monotone structure in discrete-event systems" , Wiley (1994)
[a6] Y.C. Ho, X. Cao, "Perturbation analysis of discrete event dynamic systems" , Kluwer Acad. Publ. (1991)
[a7] P.J. Ramadge, W.M. Wonham, "Supervisory control of a class of discrete event processes" SIAM J. Control Optim. , 25 (1987) pp. 206–230
How to Cite This Entry:
Discrete event system. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Discrete_event_system&oldid=18173
This article was adapted from an original article by G.J. Olsder (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article