Formal languages and automata

From Encyclopedia of Mathematics
 
Both natural and programming languages can be viewed as sets of sentences, that is, finite strings of elements from some basic vocabulary. The notion of a language introduced below is very general. It certainly includes both natural and programming languages and also all kinds of nonsense languages one might think of. Traditionally, formal language theory is concerned with the syntactic specification of a language rather than with any semantic issues. A syntactic specification of a language with finitely many sentences can be given, at least in principle, by listing the sentences. This is not possible for languages with infinitely many sentences. The main task of formal language theory is the study of finitary specifications of infinite languages.

Formal language theory is — together with automata theory (cf. [[Automata, theory of|Automata, theory of]]), which is really inseparable from language theory — the oldest branch of theoretical computer science. In some sense, the role of language and automata theory in computer science is analogous to that of philosophy in general science: it constitutes the stem from which the individual branches of knowledge emerge.

An [[Alphabet|alphabet]] is a finite non-empty set. The elements of an alphabet $V$ are called letters. A word over an alphabet $V$ is a finite string of letters (cf. also [[Word|Word]]). The word consisting of zero letters is called the empty word, written $\lambda$. The set of all words (respectively, all non-empty words) over an alphabet $V$ is denoted by $V^{*}$ (respectively, $V^{+}$). (Thus, algebraically $V^{*}$ and $V^{+}$ are the free [[Monoid|monoid]] and [[Free semi-group|free semi-group]] generated by the finite set $V$.) For words $w_{1}$ and $w_{2}$, the juxtaposition $w_{1} w_{2}$ is called the catenation of $w_{1}$ and $w_{2}$. The empty word $\lambda$ is an identity with respect to catenation. Catenation being associative, the notation $w^{i}$, where $i$ is a non-negative integer, is used in the customary sense, and $w^{0}$ denotes the empty word. The length of a word $w$, in symbols $\mathop{\rm lg} (w)$, also sometimes $|w|$, means the number of letters in $w$ when each letter is counted as many times as it occurs. A word $w$ is a subword of a word $u$ if and only if there are words $w_{1}$ and $w_{2}$ such that $u = w_{1} w w_{2}$ (cf. also [[Imbedded word|Imbedded word]]). Subsets of $V^{*}$ are referred to as (formal) languages over the alphabet $V$.
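These basic notions map directly onto strings in a programming language. The following Python sketch (all names are illustrative, not from the article) models words over an alphabet as strings, with the empty word $\lambda$ as the empty string, catenation as concatenation, and the subword test as substring containment.

```python
# Words over an alphabet V are modelled as Python strings; the empty
# word (lambda in the article) is the empty string "".

def catenation(w1: str, w2: str) -> str:
    """Catenation of two words: the juxtaposition w1 w2."""
    return w1 + w2

def power(w: str, i: int) -> str:
    """w^i: w catenated with itself i times; w^0 is the empty word."""
    return w * i

def length(w: str) -> int:
    """lg(w): the number of letters, counted with multiplicity."""
    return len(w)

def is_subword(w: str, u: str) -> bool:
    """w is a subword of u iff u = w1 w w2 for some words w1, w2."""
    return w in u

# The empty word is an identity with respect to catenation:
assert catenation("ab", "") == "ab" == catenation("", "ab")
```

Note that `is_subword` implements the article's contiguous-subword notion (a factor), not the scattered-subsequence notion.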
  
Various unary and binary operations defined for languages will be considered in the sequel. Regarding languages as sets, one may immediately define the Boolean operations of union, intersection, complementation (the complementation of a language $L$ is denoted by $L^{c}$), difference, and symmetric difference in the usual manner. The catenation (or product) of two languages $L_{1}$ and $L_{2}$, in symbols $L_{1} L_{2}$, is defined by
  
$$
L_{1} L_{2} = \{ w_{1} w_{2} : w_{1} \in L_{1} \textrm{ and } w_{2} \in L_{2} \} .
$$
  
The notation $L^{i}$ is extended to the catenation of languages. By definition, $L^{0} = \{ \lambda \}$. The catenation closure or Kleene star (respectively, $\lambda$-free catenation closure) of a language $L$, in symbols $L^{*}$ (respectively, $L^{+}$), is defined to be the union of all non-negative powers of $L$ (respectively, the union of all positive powers of $L$).
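Since $L^{*}$ is in general infinite, a program can only enumerate it up to some bound. The sketch below (helper names are my own) implements catenation of languages and the finite union $L^{0} \cup \dots \cup L^{k}$ as a bounded approximation of the Kleene star.

```python
def cat_lang(L1: set, L2: set) -> set:
    """Catenation of languages: { w1 w2 : w1 in L1 and w2 in L2 }."""
    return {w1 + w2 for w1 in L1 for w2 in L2}

def lang_power(L: set, i: int) -> set:
    """L^i, with L^0 = {lambda} (the language of the empty word)."""
    result = {""}
    for _ in range(i):
        result = cat_lang(result, L)
    return result

def star_up_to(L: set, k: int) -> set:
    """Bounded Kleene star: the union of L^0, L^1, ..., L^k."""
    out = set()
    for i in range(k + 1):
        out |= lang_power(L, i)
    return out
```

For example, `star_up_to({"a"}, 2)` yields the words $\lambda$, $a$, $aa$; letting $k$ grow exhausts $\{a\}^{*}$.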
  
The operation of substitution. For each letter $a$ of an alphabet $V$, let $\sigma (a)$ denote a language (possibly over a different alphabet). Define, furthermore,
  
$$
\sigma ( \lambda ) = \{ \lambda \} , \quad \sigma ( w_{1} w_{2} ) = \sigma ( w_{1} ) \sigma ( w_{2} ) , \quad \textrm{for } w_{1} , w_{2} \in V^{*} .
$$
  
For a language $L$ over $V$, one defines
  
$$
\sigma (L) = \{ u : u \in \sigma (w) \ \textrm{ for some } \ w \in L \} .
$$
  
Such a mapping $\sigma$ is called a substitution. A substitution $\sigma$ such that each $\sigma (a)$ consists of a single word is called a homomorphism. (Algebraically, a homomorphism of languages is a [[Homomorphism|homomorphism]] of monoids linearly extended to subsets of monoids.) In connection with homomorphisms (and also often elsewhere) one identifies a word $w$ with the singleton set $\{ w \}$, writing $\sigma (a) = w$ rather than $\sigma (a) = \{ w \}$.
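The two defining equations for $\sigma$ say that a substitution is applied letter by letter and the results are catenated. A minimal Python sketch of this (function names are illustrative), with a homomorphism appearing as the singleton special case:

```python
def substitute_word(sigma: dict, w: str) -> set:
    """sigma(w) for a word w, computed letter by letter from
    sigma(lambda) = {lambda} and sigma(w1 w2) = sigma(w1) sigma(w2)."""
    result = {""}
    for a in w:
        result = {u + v for u in result for v in sigma[a]}
    return result

def substitute_lang(sigma: dict, L: set) -> set:
    """sigma(L) = { u : u in sigma(w) for some w in L }."""
    out = set()
    for w in L:
        out |= substitute_word(sigma, w)
    return out

# A homomorphism is the special case where each sigma(a) is a singleton:
h = {"a": {"01"}, "b": {"1"}}
assert substitute_word(h, "ab") == {"011"}
```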
  
The main objects of study in formal language theory are finitary specifications of infinite languages. Most such specifications are obtained as special cases of the notion of a rewriting system. By definition, a rewriting system is an ordered pair $(V, P)$, where $V$ is an alphabet and $P$ a finite set of ordered pairs of words over $V$. The elements $(w, u)$ of $P$ are referred to as rewriting rules or productions, and are denoted by $w \rightarrow u$. Given a rewriting system, the yield relation $\Rightarrow$ in the set $V^{*}$ is defined as follows. For any two words $w$ and $u$, $w \Rightarrow u$ holds if and only if there are words $w'$, $w_{1}$, $w''$, $u_{1}$ such that $w = w' w_{1} w''$, $u = w' u_{1} w''$ and $w_{1} \rightarrow u_{1}$ is a production in the system. The reflexive transitive closure (respectively, transitive closure) of the relation $\Rightarrow$ is denoted by $\Rightarrow^{*}$ (respectively, $\Rightarrow^{+}$).
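The one-step yield relation can be computed directly from the definition: $w \Rightarrow u$ whenever some production $w_{1} \rightarrow u_{1}$ rewrites one occurrence of $w_{1}$ inside $w$. A sketch (names illustrative), returning the set of all words reachable from $w$ in one step:

```python
def yields(w: str, P: list) -> set:
    """All u with w => u: for every production (w1, u1) in P, replace
    each single occurrence of w1 in w by u1.  Assumes non-empty
    left-hand sides, as is the case for grammars below."""
    out = set()
    for w1, u1 in P:
        start = 0
        while True:
            i = w.find(w1, start)     # next occurrence of the lhs
            if i < 0:
                break
            out.add(w[:i] + u1 + w[i + len(w1):])
            start = i + 1             # allow overlapping occurrences
    return out
```

Iterating `yields` computes $\Rightarrow^{+}$; adding $w$ itself gives $\Rightarrow^{*}$.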
  
A phrase-structure grammar, shortly a grammar, is an ordered quadruple $G = (V_{N}, V_{T}, S, P)$, where $V_{N}$ and $V_{T}$ are disjoint alphabets (the alphabets of non-terminals and terminals), $S \in V_{N}$ (the initial letter) and $P$ is a finite set of ordered pairs $(w, u)$ such that $u$ is a word over the alphabet $V = V_{N} \cup V_{T}$ and $w$ is a word over $V$ containing at least one letter of $V_{N}$. Again, the elements of $P$ are referred to as rewriting rules or productions, and are written as $w \rightarrow u$. A phrase-structure grammar $G$ as above defines a rewriting system $(V, P)$. Let $\Rightarrow$ and $\Rightarrow^{*}$ be the relations determined by this rewriting system. Then the language $L(G)$ generated by $G$ is defined by
  
$$
L (G) = \{ w \in V_{T}^{*} : S \Rightarrow^{*} w \} .
$$
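$L(G)$ is in general infinite, and for arbitrary grammars membership is undecidable, so a program can only enumerate the derivable terminal words within a bound. The following hedged sketch (names and the length cap are my own choices) approximates $L(G)$ by exploring derivations of at most a given number of steps:

```python
def language_up_to(P, S, terminals, depth, max_len=20):
    """Terminal words w with S =>* w, restricted to derivations of at
    most `depth` steps (a finite approximation of L(G); the length
    cap keeps the search space finite)."""
    current = {S}
    seen = set(current)
    for _ in range(depth):
        nxt = set()
        for w in current:
            for w1, u1 in P:               # apply each production once
                i = w.find(w1)
                while i >= 0:
                    u = w[:i] + u1 + w[i + len(w1):]
                    if len(u) <= max_len and u not in seen:
                        seen.add(u)
                        nxt.add(u)
                    i = w.find(w1, i + 1)
        current = nxt
    # keep only sentential forms consisting purely of terminals
    return {w for w in seen if all(c in terminals for c in w)}
```

For the context-free grammar with productions $S \rightarrow aSb$, $S \rightarrow ab$ this enumerates $ab$, $aabb$, $aaabbb$, … — the language $\{ a^{n} b^{n} : n \geq 1 \}$.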
  
Two grammars $G$ and $G_{1}$ are termed equivalent if and only if $L(G) = L(G_{1})$. For $i = 0, 1, 2, 3$, a grammar $G = (V_{N}, V_{T}, S, P)$ is of type $i$ if and only if the restrictions $i$) on $P$, as given below, are satisfied:
  
 
0) No restrictions.

1) Each production in $P$ is of the form $w_{1} A w_{2} \rightarrow w_{1} w w_{2}$, where $w_{1}$ and $w_{2}$ are arbitrary words, $A \in V_{N}$ and $w$ is a non-empty word (with the possible exception of the production $S \rightarrow \lambda$, whose occurrence in $P$ implies, however, that $S$ does not occur on the right-hand side of any production).
  
2) Each production in $P$ is of the form $A \rightarrow w$, where $A \in V_{N}$.
  
3) Each production is of one of the two forms $A \rightarrow Bw$ or $A \rightarrow w$, where $A, B \in V_{N}$ and $w \in V_{T}^{*}$.
  
A language is of type $i$ if and only if it is generated by a grammar of type $i$. Type-0 languages are also called recursively enumerable. Type-1 grammars and languages are also called context-sensitive. Type-2 grammars and languages are also called context-free. Type-3 grammars and languages are also referred to as finite-state or regular. (See also [[Grammar, context-sensitive|Grammar, context-sensitive]]; [[Grammar, context-free|Grammar, context-free]].)
  
Type-$i$ languages, $i = 0, 1, 2, 3$, form a strict hierarchy: the family of type-$i$ languages is strictly included in the family of type-$(i-1)$ languages, for $i = 1, 2, 3$.
  
A specific context-free language over an alphabet with $2t$ letters, $V_{t} = \{ a_{1} , \dots , a_{t} , \overline{a}_{1} , \dots , \overline{a}_{t} \}$, $t \geq 1$, is the Dyck language generated by the grammar
  
$$
( \{ S \} , \ V_{t} , \ S , \ \{ S \rightarrow SS , \ S \rightarrow \lambda , \ S \rightarrow a_{1} S \overline{a}_{1} , \dots , S \rightarrow a_{t} S \overline{a}_{t} \} ) .
$$
  
The Dyck language consists of all words over $V_{t}$ that can be reduced to $\lambda$ using the relations $a_{i} \overline{a}_{i} = \lambda$, $i = 1 \dots t$. If the pairs $(a_{i}, \overline{a}_{i})$, $i = 1 \dots t$, are viewed as parentheses of different types, then the Dyck language consists of all sequences of correctly nested parentheses.
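The reduction to $\lambda$ can be decided by a single left-to-right pass with a stack. A minimal sketch (illustrative, not part of the article; the integer encoding of letters is an assumption of the example), encoding $a_{i}$ as the integer $i$ and $\overline{a}_{i}$ as $-i$:

```python
def is_dyck(word, t):
    """Membership test for the Dyck language over V_t, encoding a_i as the
    integer i and its barred copy as -i.  A word belongs to the language
    iff it reduces to the empty word by cancelling adjacent pairs
    a_i a_i-bar, which one left-to-right pass with a stack decides."""
    stack = []
    for letter in word:
        if not 1 <= abs(letter) <= t:
            raise ValueError("letter outside the alphabet V_t")
        if letter > 0:
            stack.append(letter)          # opening letter a_i
        elif stack and stack[-1] == -letter:
            stack.pop()                   # cancel an adjacent pair a_i a_i-bar
        else:
            return False                  # barred letter with no match
    return not stack                      # word reduced to lambda

print(is_dyck([1, 2, -2, -1], t=2))  # True: a1 a2 a2bar a1bar is correctly nested
print(is_dyck([1, 2, -1, -2], t=2))  # False: the two pairs cross
```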
  
The family of regular languages over an alphabet $V$ equals the family of languages obtained from "atomic languages" $\{ \lambda \}$ and $\{ a \}$, where $a \in V$, by a finite number of applications of "regular operations": union, catenation and catenation closure. The formula expressing how a specific regular language is obtained from atomic languages by regular operations is termed a regular expression.
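The three regular operations can be sketched directly on languages represented as sets of words (an illustrative sketch, not part of the article; since catenation closure yields an infinite language, the example truncates it at a length bound):

```python
# The regular operations on languages represented as Python sets of strings.
def union(L1, L2):
    return L1 | L2

def catenation(L1, L2):
    return {u + v for u in L1 for v in L2}

def closure(L, max_len):
    """Catenation closure L*, truncated to words of length <= max_len."""
    result = {""}                          # lambda, the empty word
    frontier = {""}
    while frontier:
        frontier = {w + v for w in frontier for v in L
                    if len(w + v) <= max_len} - result
        result |= frontier
    return result

# The regular expression (a + b) b* built from atomic languages {a}, {b}
# by regular operations, up to the length bound:
L = catenation(union({"a"}, {"b"}), closure({"b"}, max_len=3))
print(sorted(L))
```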
  
Every context-free language which is $\lambda$-free (i.e., does not contain the empty word) is generated by a grammar in Chomsky normal form, as well as by a grammar in Greibach normal form. In the former, all productions are of the types $A \rightarrow BC$, $A \rightarrow a$, and in the latter of the types $A \rightarrow aBC$, $A \rightarrow aB$, $A \rightarrow a$, where capital letters denote non-terminals and $a$ is a terminal letter. According to the theorem of Chomsky–Schützenberger, every context-free language $L$ can be expressed as
  
$$
L = h (D \cap R),
$$
  
for some regular language $R$, Dyck language $D$ and homomorphism $h$. According to the lemma of Bar-Hillel (also called the pumping lemma), every sufficiently long word $w$ in a context-free language $L$ can be written in the form
  
$$
w = u_{1} w_{1} u_{2} w_{2} u_{3}, \qquad w_{1} w_{2} \neq \lambda,
$$
  
where for every $i \geq 0$ the word $u_{1} w_{1}^{i} u_{2} w_{2}^{i} u_{3}$ belongs to $L$. For regular languages, the corresponding result reads as follows: every sufficiently long word $w$ in a regular language $L$ can be written in the form $w = u_{1} w_{1} u_{2}$, $w_{1} \neq \lambda$, where for every $i \geq 0$ the word $u_{1} w_{1}^{i} u_{2}$ belongs to $L$.
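The regular case can be checked concretely. A sketch (illustrative, not from the article) for the regular language $L = ab^{*}$ with the decomposition chosen by hand, using a regular-expression match as the membership test:

```python
import re

# L = a b*; decompose w = u1 w1 u2 = "abbb" with w1 != lambda.
u1, w1, u2 = "a", "b", "bb"

for i in range(5):
    word = u1 + w1 * i + u2          # the pumped word u1 w1^i u2
    assert re.fullmatch(r"ab*", word)  # stays in L for every i >= 0
print("u1 w1^i u2 belongs to L for i = 0..4")
```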
  
Derivations according to a context-free grammar (i.e., finite sequences of words where every two consecutive words are in the relation $\Rightarrow$) can in a natural way be visualized as labelled trees, the so-called derivation trees (cf. also [[Derivation tree|Derivation tree]]). A context-free grammar $G$ is termed ambiguous if and only if some word in $L (G)$ has two derivation trees. Otherwise, $G$ is termed unambiguous. A context-free language $L$ is unambiguous if and only if $L = L (G)$ for some unambiguous grammar $G$. Otherwise, $L$ is termed (inherently) ambiguous. One also speaks of degrees of ambiguity. A context-free grammar $G$ is ambiguous of degree $k$ (a natural number or $\infty$) if and only if every word in $L (G)$ possesses at most $k$ derivation trees and some word in $L (G)$ possesses exactly $k$ derivation trees. A language $L$ is ambiguous of degree $k$ if and only if $L = L (G)$ for some $G$ ambiguous of degree $k$, and there is no $G _{1}$ ambiguous of degree less than $k$ such that $L = L (G _{1} )$.
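Degrees of ambiguity can be made concrete by counting derivation trees. A sketch (illustrative, not from the article) for the ambiguous grammar with productions $S \rightarrow SS$, $S \rightarrow a$, whose words are $a^{n}$, $n \geq 1$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def trees(n):
    """Number of derivation trees of the word a^n from S
    in the grammar S -> SS | a."""
    count = 1 if n == 1 else 0       # the production S -> a
    for k in range(1, n):            # S -> SS, splitting a^n = a^k a^(n-k)
        count += trees(k) * trees(n - k)
    return count

print([trees(n) for n in range(1, 6)])  # [1, 1, 2, 5, 14]: the Catalan numbers
```

The word counts grow without bound, so this grammar is ambiguous of degree $\infty$; the same language is generated by the unambiguous grammar $S \rightarrow aS$, $S \rightarrow a$, so the ambiguity is a property of the grammar, not of the language.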
  
The families of type-$i$ languages, $i = 0, 1, 2, 3$, defined above using generative devices, can be obtained also by recognition devices. A recognition device defining a language $L$ receives arbitrary words as inputs and "accepts" exactly the words belonging to $L$. The recognition devices finite and pushdown automata, corresponding to type-3 and type-2 languages, will be defined below. The definitions of the recognition devices linear bounded automata and Turing machines, corresponding to type-1 and type-0 languages, are analogous. (Cf. [[Computable function|Computable function]]; [[Complexity theory|Complexity theory]].)
  
A rewriting system $(V, P)$ is called a finite deterministic automaton if and only if: a) $V$ is divided into two disjoint alphabets $V_{s}$ and $V_{I}$ (the state and the input alphabet); b) an element $s_{0} \in V_{s}$ and a subset $S_{1} \subseteq V_{s}$ are specified (initial state and final state set); and c) the productions in $P$ are of the form
  
$$
s_{i} a_{k} \rightarrow s_{j}, \qquad s_{i}, s_{j} \in V_{s};\ a_{k} \in V_{I},
$$
  
and for each pair $(s_{i}, a_{k})$ there is exactly one such production in $P$.
  
A finite deterministic automaton over an input alphabet $V_{I}$ is usually defined by specifying an ordered quadruple $(s_{0}, V_{s}, f, S_{1})$, where $f$ is a mapping of $V_{s} \times V_{I}$ into $V_{s}$, the other items being as above. (Clearly, the values of $f$ are obtained from the right-hand sides of the productions listed above.)
  
The language accepted or recognized by a finite deterministic automaton $\mathop{\rm FDA}$ is defined by
  
$$
L( \mathop{\rm FDA} ) = \{ w \in V_{I}^{*} :\ s_{0} w \Rightarrow^{*} s_{1} \ \textrm{ for some }\ s_{1} \in S_{1} \}.
$$
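The quadruple form translates directly into a simulation. A sketch (illustrative, not from the article; the example automaton and its state names are chosen here), with $f$ as a dictionary on pairs $(s, a)$:

```python
# Simulating a finite deterministic automaton (s0, Vs, f, S1).
# The example automaton accepts words over {a, b} with an even number of b's.
def accepts(s0, f, S1, word):
    state = s0
    for a in word:
        state = f[(state, a)]   # exactly one production s a -> s' applies
    return state in S1          # accepted iff a final state is reached

f = {("even", "a"): "even", ("even", "b"): "odd",
     ("odd", "a"): "odd",  ("odd", "b"): "even"}

print(accepts("even", f, {"even"}, "abba"))  # True: two b's
print(accepts("even", f, {"even"}, "ab"))    # False: one b
```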
  
A finite non-deterministic automaton $\mathop{\rm FNA}$ is defined as a finite deterministic automaton with the following two exceptions. In b) $s_{0}$ is replaced by a subset $S_{0} \subseteq V_{s}$. In c) the second half of the sentence (beginning with "and") is omitted. The language accepted by an $\mathop{\rm FNA}$ is defined by
  
$$
L( \mathop{\rm FNA} ) = \{ w \in V_{I}^{*} :\ s_{0} w \Rightarrow^{*} s_{1} \ \textrm{ for some }\ s_{0} \in S_{0} \ \textrm{ and }\ s_{1} \in S_{1} \}.
$$
  
 
A language is of type 3 if and only if it is accepted by some finite deterministic automaton and if and only if it is accepted by some finite non-deterministic automaton.
 
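The equivalence rests on tracking the set of states a non-deterministic automaton can be in, which is the idea of the subset construction. A sketch (illustrative, not from the article; the example automaton is an assumption of the example):

```python
# A finite non-deterministic automaton simulated by following the set of
# reachable states.  This FNA accepts words over {a, b} whose
# second-to-last letter is a.
P = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
     ("q1", "a"): {"q2"},       ("q1", "b"): {"q2"}}

def fna_accepts(S0, P, S1, word):
    states = set(S0)                     # all states reachable so far
    for letter in word:
        states = {t for s in states for t in P.get((s, letter), set())}
    return bool(states & S1)             # some computation ends in S1

print(fna_accepts({"q0"}, P, {"q2"}, "bab"))  # True
print(fna_accepts({"q0"}, P, {"q2"}, "abb"))  # False
```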
  
A rewriting system $(V, P)$ is called a pushdown automaton if and only if each of the following conditions 1)–3) is satisfied.
  
1) $V$ is divided into two disjoint alphabets $V_{s}$ and $V_{I} \cup V_{z}$. The sets $V_{s}$, $V_{I}$ and $V_{z}$ are called the state, input and pushdown alphabet, respectively. The sets $V_{I}$ and $V_{z}$ are non-empty but not necessarily disjoint.
  
2) Elements $s_{0} \in V_{s}$, $z_{0} \in V_{z}$ and a subset $S_{1} \subseteq V_{s}$ are specified, namely, the so-called initial state, start letter and final state set.
  
3) The productions in $P$ are of the two forms
  
$$ \tag{a1}
z s_{i} \rightarrow w s_{j}, \qquad z \in V_{z},\ w \in V_{z}^{*},\ s_{i}, s_{j} \in V_{s};
$$
  
$$ \tag{a2}
z s_{i} a \rightarrow w s_{j}, \qquad z \in V_{z},\ w \in V_{z}^{*},\ a \in V_{I},\ s_{i}, s_{j} \in V_{s}.
$$
  
The language accepted by a pushdown automaton $\mathop{\rm PDA}$ is defined by
  
$$
L( \mathop{\rm PDA} ) = \{ w \in V_{I}^{*} :\ z_{0} s_{0} w \Rightarrow^{*} u s_{1} \ \textrm{ for some }\ u \in V_{z}^{*},\ s_{1} \in S_{1} \}.
$$
  
A pushdown automaton is deterministic if and only if, for every pair $(s_{i}, z)$, $P$ contains either exactly one production (a1) and no productions (a2), or no productions (a1) and exactly one production (a2) for every $a \in V_{I}$.
  
 
The family of context-free languages equals the family of languages accepted by pushdown automata. Languages accepted by deterministic pushdown automata are referred to as deterministic (context-free) languages. The role of determinism is different in connection with pushdown and finite automata: the family of deterministic languages is a proper subfamily of the family of context-free languages.
 
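A deterministic pushdown automaton can be simulated with an explicit stack. A sketch (illustrative, not from the article; the example recognizes $\{ a^{n} b^{n} : n \geq 0 \}$, and acceptance is simplified here to reaching the end of the input with the stack reduced to the start letter, rather than the final-state definition above):

```python
# A deterministic pushdown computation for {a^n b^n : n >= 0}:
# push one A per a read, pop one A per b read.
def pda_accepts(word):
    stack = ["Z"]                    # z0, the start letter
    state = "s0"                     # reading a's
    for letter in word:
        if state == "s0" and letter == "a":
            stack.append("A")        # push: count the a
        elif letter == "b" and stack[-1] == "A":
            stack.pop()              # pop: match a b against an a
            state = "s1"             # no further a's allowed
        else:
            return False             # no production applies
    return stack == ["Z"]            # every a matched by a b

print(pda_accepts("aabb"))  # True
print(pda_accepts("aab"))   # False
print(pda_accepts(""))      # True (n = 0)
```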
 
The automata considered above have no other output facilities than being or not being in a final state, i.e., they are only capable of accepting or rejecting inputs. Occasionally devices (transducers) capable of having words as outputs, i.e., capable of translating words into words, are considered. A formal definition is given below only for the transducer corresponding to a finite automaton. A pushdown transducer is defined analogously.
 
  
A rewriting system $(V, P)$ is called a sequential transducer if and only if each of the following conditions 1)–3) is satisfied.
  
1) $V$ is divided into two disjoint alphabets $V_{s}$ and $V_{1} \cup V_{0}$. The sets $V_{s}$, $V_{1}$, $V_{0}$ are called the state, input and output alphabet, respectively. (The two latter alphabets are non-empty but not necessarily disjoint.)
  
2) An element $s_{0} \in V_{s}$ and a subset $S_{1} \subseteq V_{s}$ are specified (initial state and final state set).
  
3) The productions in $P$ are of the form
  
$$
s_{i} w \rightarrow u s_{j}, \qquad s_{i}, s_{j} \in V_{s},\ w \in V_{1}^{*},\ u \in V_{0}^{*}.
$$
  
If, in addition, $w \neq \lambda$ in all productions, then the rewriting system is called a generalized sequential machine (gsm).
  
For a sequential transducer $\mathop{\rm ST}$, words $w_{1} \in V_{1}^{*}$ and $w_{2} \in V_{0}^{*}$, and languages $L_{1} \subseteq V_{1}^{*}$ and $L_{2} \subseteq V_{0}^{*}$, one defines
  
$$
\mathop{\rm ST}\nolimits (w_{1}) \  = \
\{ {w} : {s_{0} w_{1} \Rightarrow^{*} w s_{1} \  \textrm{ for } \  \textrm{ some } \  s_{1} \in S_{1}} \} ,
$$
  
$$
\mathop{\rm ST}\nolimits (L_{1}) \  = \  \{ u : \  u \in \mathop{\rm ST}\nolimits (w)
\  \textrm{ for } \  \textrm{ some } \  w \in L_{1} \} ,
$$
  
$$
\mathop{\rm ST}\nolimits^{-1} (w_{2}) \  = \  \{ u : \  w_{2} \in \mathop{\rm ST}\nolimits (u) \} ,
$$
  
$$
\mathop{\rm ST}\nolimits^{-1} (L_{2}) \  = \  \{ u : \  u
\in \mathop{\rm ST}\nolimits^{-1} (w) \  \textrm{ for } \  \textrm{ some } \  w \in L_{2} \} .
$$
  
Mappings of languages thus defined are referred to as (rational) transductions and inverse (rational) transductions. If $\mathop{\rm ST}\nolimits$ is also a gsm, one speaks of gsm mappings and inverse gsm mappings.
  
Homomorphisms, inverse homomorphisms and the mappings $\tau (L) = L \cap R$, where $R$ is a fixed regular language, are all rational transductions, the first and the last being also gsm mappings. The composite of two rational transductions (respectively, gsm mappings) is again a rational transduction (respectively, a gsm mapping). Every rational transduction $\tau$ can be expressed in the form
  
$$
\tau (L) \  = \
h_{1} (h_{2}^{-1} (L) \cap R),
$$
  
where $h_{1}$ and $h_{2}$ are homomorphisms and $R$ is a regular language. These results show that a language family is closed under rational transductions if and only if it is closed under homomorphisms, inverse homomorphisms and intersections with regular languages.
  
 
A finite probabilistic automaton (or stochastic automaton, cf. [[Automaton, probabilistic|Automaton, probabilistic]]) is an ordered quintuple
  
$$
\mathop{\rm PA}\nolimits \  = \
(V_{1} ,\  V_{s} ,\  s_{0} ,\  S_{1} ,\  H),
$$
  
where $V_{1}$ and $V_{s} = \{ s_{0} \dots s_{n-1} \}$ are disjoint alphabets (inputs and states), $s_{0} \in V_{s}$ and $S_{1} \subseteq V_{s}$ (initial state and final state set), and $H$ is a mapping of $V_{1}$ into the set of $n$-dimensional stochastic matrices. (A [[Stochastic matrix|stochastic matrix]] is a square matrix with non-negative real entries and with row sums equal to 1.) The mapping $H$ is extended to a homomorphism of $V_{1}^{*}$ into the monoid of $n$-dimensional stochastic matrices. Consider $V_{s}$ to be an ordered set as indicated, let $\pi$ be the $n$-dimensional stochastic row vector whose first component equals 1, and let $\eta$ be the $n$-dimensional column vector consisting of 0's and 1's such that the $i$-th component of $\eta$ equals 1 if and only if the $i$-th element of $V_{s}$ belongs to $S_{1}$. The language accepted by a $\mathop{\rm PA}\nolimits$ with cut-point $\alpha$, where $\alpha$ is a real number satisfying $0 \leq \alpha < 1$, is defined by
  
$$
L ( \mathop{\rm PA}\nolimits ,\  \alpha ) \  = \
\{ {w \in V_{1}^{*}} : {\pi H (w) \eta > \alpha} \} .
$$
  
 
Languages obtained in this fashion are referred to as stochastic languages.
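The acceptance condition $\pi H (w) \eta > \alpha$ can be computed directly by matrix multiplication. Below is a minimal Python sketch of a hypothetical two-state probabilistic automaton over the input alphabet $\{ a \}$, with $s_{0}$ initial and $S_{1} = \{ s_{1} \}$:

```python
# A hypothetical two-state probabilistic automaton over the alphabet {a}.

def mat_mult(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# H maps each input letter to a stochastic matrix (rows sum to 1).
H = {"a": [[0.5, 0.5],
           [0.0, 1.0]]}

def accept_prob(word):
    """Compute pi H(w) eta, the probability of ending in a final state."""
    pi = [[1.0, 0.0]]            # row vector: start in the first state s0
    for letter in word:
        pi = mat_mult(pi, H[letter])
    eta = [[0.0], [1.0]]         # column vector selecting the final state s1
    return mat_mult(pi, eta)[0][0]

print(accept_prob("a"))   # 0.5
print(accept_prob("aa"))  # 0.75
# With cut-point alpha = 1/2 the accepted words are a^n with n >= 2.
```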
  
Decision problems (cf. [[Decision problem|Decision problem]]) play an important role in language theory. The usual method of proving that a problem is undecidable is to reduce to it some problem already known to be undecidable. The most useful tool in this respect for problems in language theory is the Post correspondence problem. By definition, a Post correspondence problem is an ordered quadruple $\mathop{\rm PCP}\nolimits = ( \Sigma ,\  n,\  \alpha ,\  \beta )$, where $\Sigma$ is an alphabet, $n \geq 1$, and $\alpha = ( \alpha_{1} \dots \alpha_{n} )$, $\beta = ( \beta_{1} \dots \beta_{n} )$ are ordered $n$-tuples of elements of $\Sigma^{+}$. A solution to the $\mathop{\rm PCP}\nolimits$ is a non-empty finite sequence of indices $i_{1} \dots i_{k}$ such that
  
$$
\alpha_{i_{1}} \dots \alpha_{i_{k}} \  = \
\beta_{i_{1}} \dots \beta_{i_{k}} .
$$
  
It is undecidable whether an arbitrary given $\mathop{\rm PCP}\nolimits$ (or an arbitrary given $\mathop{\rm PCP}\nolimits$ over the alphabet $\Sigma = \{ a_{1} ,\  a_{2} \}$) has a solution.
  
Also Hilbert's tenth problem is undecidable: Given a polynomial $P (x_{1} \dots x_{k} )$ with integer coefficients, one has to decide whether or not there are non-negative integers $x_{i}$, $i = 1 \dots k$, satisfying the equation
  
$$
P (x_{1} \dots x_{k} ) \  = \  0.
$$
  
The membership problem is decidable for context-sensitive languages but undecidable for type-0 languages. (More specifically, given a context-sensitive grammar $G$ and a word $w$, it is decidable whether or not $w \in L (G)$.) It is decidable whether two given regular languages are equal and also whether one of them is contained in the other. Both of these problems (the equivalence problem and the inclusion problem) are undecidable for context-free languages. It is also decidable whether a given regular or a given context-free language is empty or infinite, whereas both of these problems are undecidable for context-sensitive languages. It is undecidable whether the intersection of two context-free languages is empty. The intersection of a context-free and a regular language is always context-free; and, hence, its emptiness is decidable. It is undecidable whether a given context-free language is regular.
  
The following example is due to M. Soittola. Consider the grammar $G$ determined by the productions
  
$$
S \  \rightarrow \  abc, \qquad
S \  \rightarrow \  aAbc,
$$
  
$$
Ab \  \rightarrow \  bA, \qquad Ac \  \rightarrow \  Bbcc,
$$
  
$$
bB \  \rightarrow \  Bb, \qquad aB \  \rightarrow \  aaA, \qquad aB \  \rightarrow \  aa.
$$
  
 
Then
  
$$
L (G) \  = \
\{ {a^{n} b^{n} c^{n}} : {n \geq 1} \} .
$$
  
In fact, any derivation according to $G$ begins with an application of the first or second production, the first production directly yielding the terminal word $abc$. Consider any derivation $D$ from a word $a^{i} A b^{i} c^{i}$, where $i \geq 1$, leading to a word over the terminal alphabet. $D$ must begin with $i$ applications of the third production ($A$ travels to the right) and then continue with an application of the fourth production (one further occurrence of $b$ and $c$ is deposited). Now the word $a^{i} b^{i} Bbc^{i+1}$ has been derived. From here the only possibility is to apply the fifth production $i$ times ($B$ travels to the left), after which one obtains the word $a^{i} Bb^{i+1} c^{i+1}$. This word directly yields one of the two words
  
$$
a^{i+1} Ab^{i+1} c^{i+1} \qquad
\textrm{ or } \qquad
a^{i+1} b^{i+1} c^{i+1}
$$
  
by the last two productions (one further $a$ is deposited, and either a new cycle is entered or the derivation is terminated). This argument shows that $G$ generates all words belonging to the language and nothing else; every step in a derivation is uniquely determined, the only exception being that there is a choice between termination and entrance into a new cycle.
  
 
====References====
 
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  N. Chomsky,  "Three models for the description of language"  ''IRE Trans. Information Theory'' , '''IT-2'''  (1956)  pp. 113–124</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  N. Chomsky,  "Syntactic structures" , Mouton  (1957)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  M. Davis,  "Computability and unsolvability" , McGraw-Hill  (1958)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  S. Eilenberg,  "Automata, languages and machines" , '''A''' , Acad. Press  (1974)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  D. Hilbert,  "Mathematische Probleme. Vortrag, gehalten auf dem internationalen Mathematiker-Kongress zu Paris, 1900" , ''Gesammelte Abhandlungen'' , '''III''' , Springer  (1935)</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  S.C. Kleene,  "Representation of events in nerve nets and finite automata" , ''Automata studies'' , '''34''' , Princeton Univ. Press  (1956)  pp. 3–42</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  A. Paz,  "Introduction to probabilistic automata" , Acad. Press  (1971)</TD></TR><TR><TD valign="top">[a8]</TD> <TD valign="top">  E.L. Post,  "A variant of a recursively unsolvable problem"  ''Bull. Amer. Math. Soc.'' , '''52'''  (1946)  pp. 264–268</TD></TR><TR><TD valign="top">[a9]</TD> <TD valign="top">  H. Rogers jr.,  "Theory of recursive functions and effective computability" , McGraw-Hill  (1967)  pp. 164–165</TD></TR><TR><TD valign="top">[a10]</TD> <TD valign="top">  G. Rozenberg,  "Selective substitution grammars"  ''Elektronische Informationsverarbeitung und Kybernetik (EIK)'' , '''13'''  (1977)  pp. 455–463</TD></TR><TR><TD valign="top">[a11]</TD> <TD valign="top">  A. Salomaa,  "Theory of automata" , Pergamon  (1969)</TD></TR><TR><TD valign="top">[a12]</TD> <TD valign="top">  A. Salomaa,  "Formal languages" , Acad. Press  (1973)</TD></TR><TR><TD valign="top">[a13]</TD> <TD valign="top">  A. 
Salomaa,  "Jewels of formal language theory" , Computer Science Press  (1981)</TD></TR><TR><TD valign="top">[a14]</TD> <TD valign="top">  A. Salomaa,  M. Soittola,  "Automata-theoretic aspects of formal power series" , Springer  (1978)</TD></TR><TR><TD valign="top">[a15]</TD> <TD valign="top">  A. Thue,  "Ueber unendliche Zeichenreihen"  ''Skrifter utgit av Videnskapsselskapet i Kristiania'' , '''I'''  (1906)  pp. 1–22</TD></TR><TR><TD valign="top">[a16]</TD> <TD valign="top">  A. Thue,  "Probleme über Veränderungen von Zeichenreihen nach gegebenen Regeln"  ''Skrifter utgit av Videnskapsselskapet i Kristiania'' , '''I.10'''  (1914)</TD></TR><TR><TD valign="top">[a17]</TD> <TD valign="top">  A.M. Turing,  "On computable numbers, with an application to the Entscheidungsproblem"  ''Proc. London Math. Soc.'' , '''42'''  (1936)  pp. 230–265</TD></TR></table>
 

Latest revision as of 08:53, 1 February 2020


Both natural and programming languages can be viewed as sets of sentences, that is, finite strings of elements from some basic vocabulary. The notion of a language introduced below is very general. It certainly includes both natural and programming languages and also all kinds of nonsense languages one might think of. Traditionally, formal language theory is concerned with the syntactic specification of a language rather than with any semantic issues. A syntactic specification of a language with finitely many sentences can be given, at least in principle, by listing the sentences. This is not possible for languages with infinitely many sentences. The main task of formal language theory is the study of finitary specifications of infinite languages.

The basic theory of computation, as well as of its various branches, such as cryptography, is inseparably connected with language theory. The input and output sets of a computational device can be viewed as languages, and — more profoundly — models of computation can be identified with classes of language specifications, in a sense to be made more precise. Thus, for instance, Turing machines can be identified with phrase-structure grammars and finite automata with regular grammars (cf. also Turing machine; Automaton, finite; Grammar, formal).

Formal language theory is — together with automata theory, (cf. Automata, theory of) which is really inseparable from language theory — the oldest branch of theoretical computer science. In some sense, the role of language and automata theory in computer science is analogous to that of philosophy in general science: it constitutes the stem from which the individual branches of knowledge emerge.

An alphabet is a finite non-empty set. The elements of an alphabet $V$ are called letters. A word over an alphabet $V$ is a finite string of letters (cf. also Word). The word consisting of zero letters is called the empty word, written $\lambda$. The set of all words (respectively, all non-empty words) over an alphabet $V$ is denoted by $V^{*}$ (respectively, $V^{+}$). (Thus, algebraically $V^{*}$ and $V^{+}$ are the free monoid and free semi-group generated by the finite set $V$.) For words $w_{1}$ and $w_{2}$, the juxtaposition $w_{1} w_{2}$ is called the catenation of $w_{1}$ and $w_{2}$. The empty word $\lambda$ is an identity with respect to catenation. Catenation being associative, the notation $w^{i}$, where $i$ is a non-negative integer, is used in the customary sense, and $w^{0}$ denotes the empty word. The length of a word $w$, in symbols $\mathop{\rm lg}\nolimits (w)$, also sometimes $| w |$, means the number of letters in $w$ when each letter is counted as many times as it occurs. A word $w$ is a subword of a word $u$ if and only if there are words $w_{1}$ and $w_{2}$ such that $u = w_{1} w w_{2}$ (cf. also Imbedded word). Subsets of $V^{*}$ are referred to as (formal) languages over the alphabet $V$.
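In programming terms, words over an alphabet are simply strings. The following Python sketch illustrates catenation, length and the subword relation on a small example:

```python
# Words as strings: catenation, length, subword.

w1, w2 = "ab", "ba"
catenation = w1 + w2              # juxtaposition w1 w2, here "abba"
length = len(catenation)          # lg(w) counts each occurrence: 4

def is_subword(w, u):
    """w is a subword of u iff u = w1 w w2 for some (possibly empty) w1, w2."""
    return w in u                 # Python's substring test matches the definition

print(catenation, length)             # abba 4
print(is_subword("bb", "abba"))       # True
print(is_subword("ca", "abba"))       # False
```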

Various unary and binary operations defined for languages will be considered in the sequel. Regarding languages as sets, one may immediately define the Boolean operations of union, intersection, complementation (the complementation of a language $ L $ is denoted by $ L^{c} $), difference, and symmetric difference in the usual manner. The catenation (or product) of two languages $ L _{1} $ and $ L _{2} $, in symbols $ L _{1} L _{2} $, is defined by

$$ L _{1} L _{2} \ = \ \{ {w _{1} w _ 2} : { w _{1} \in L _{1} \ \textrm{ and } \ w _{2} \in L _ 2} \} . $$

The notation $ L^{i} $ is extended to the catenation of languages. By definition $ L^{0} = \{ \lambda \} $. The catenation closure or Kleene star (respectively, $ \lambda $- free catenation closure) of a language $ L $, in symbols $ L^{*} $ (respectively, $ L^{+} $), is defined to be the union of all non-negative powers of $ L $ (respectively, the union of all positive powers of $ L $).
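For finite languages these operations are easy to make concrete. The following Python sketch (function names are illustrative) represents words as strings and languages as sets of strings, and approximates the Kleene star by a bounded union of powers:

```python
# Sketch: language operations on finite languages; words are Python
# strings and languages are sets of strings (names are illustrative).

def catenation(L1, L2):
    """Catenation L1 L2 = { w1 w2 : w1 in L1 and w2 in L2 }."""
    return {w1 + w2 for w1 in L1 for w2 in L2}

def power(L, i):
    """L^i, with L^0 = {""} (the language containing only the empty word)."""
    result = {""}
    for _ in range(i):
        result = catenation(result, L)
    return result

def star_up_to(L, n):
    """Finite approximation of L^*: the union of L^0, ..., L^n."""
    out = set()
    for i in range(n + 1):
        out |= power(L, i)
    return out

L1 = {"a", "ab"}
L2 = {"b"}
print(catenation(L1, L2))    # {'ab', 'abb'}
print(star_up_to({"a"}, 3))  # {'', 'a', 'aa', 'aaa'}
```

The true Kleene star of a non-trivial language is infinite, which is why the sketch can only compute a bounded prefix of the union.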

The operation of substitution. For each letter $ a $ of an alphabet $ V $, let $ \sigma (a) $ denote a language (possibly over a different alphabet). Define, furthermore,

$$ \sigma ( \lambda ) \ = \ \{ \lambda \} ,\ \ \sigma (w _{1} w _{2} ) \ = \ \sigma (w _{1} ) \sigma (w _{2} ),\ \ \textrm{ for } \ w _{1} ,\ w _{2} \in V^{*} . $$

For a language $ L $ over $ V $, one defines

$$ \sigma (L) \ = \ \{ {u} : {u \in \sigma (w) \ \textrm{ for } \ \textrm{ some } \ w \in L} \} . $$

Such a mapping $ \sigma $ is called a substitution. A substitution $ \sigma $ such that each $ \sigma (a) $ consists of a single word is called a homomorphism. (Algebraically, a homomorphism of languages is a homomorphism of monoids linearly extended to subsets of monoids.) In connection with homomorphisms (and also often elsewhere) one identifies a word $ w $ with the singleton set $ \{ w \} $, writing $ \sigma (a) = w $ rather than $ \sigma (a) = \{ w \} $.
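For finite languages, substitutions and homomorphisms can be sketched directly in Python; the mapping `h` below is an illustrative homomorphism, and the function names are assumptions:

```python
# Sketch: a substitution sigma, given as a dict mapping each letter to a
# finite language, extended first to words and then to languages.

def apply_substitution(sigma, word):
    """sigma(w1 w2) = sigma(w1) sigma(w2), with sigma(lambda) = {lambda}."""
    result = {""}
    for letter in word:
        result = {u + v for u in result for v in sigma[letter]}
    return result

def substitute_language(sigma, L):
    """sigma(L) = the union of sigma(w) over all w in L."""
    out = set()
    for w in L:
        out |= apply_substitution(sigma, w)
    return out

# A homomorphism is the special case where each sigma(a) is a singleton.
h = {"a": {"01"}, "b": {"1"}}
print(substitute_language(h, {"ab", "b"}))  # {'011', '1'}
```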

The main objects of study in formal language theory are finitary specifications of infinite languages. Most of such specifications are obtained as special cases from the notion of a rewriting system. By definition, a rewriting system is an ordered pair $ (V,\ P) $, where $ V $ is an alphabet and $ P $ a finite set of ordered pairs of words over $ V $. The elements $ (w,\ u ) $ of $ P $ are referred to as rewriting rules or productions, and are denoted by $ w \rightarrow u $. Given a rewriting system, the yield relation $ \Rightarrow $ in the set $ V^{*} $ is defined as follows. For any two words $ w $ and $ u $, $ w \Rightarrow u $ holds if and only if there are words $ w^ \prime $, $ w _{1} $, $ w^{\prime\prime} $, $ u _{1} $ such that $ w = w^ \prime w _{1} w^{\prime\prime} $, $ u = w^ \prime u _{1} w^{\prime\prime} $ and $ w _{1} \rightarrow u _{1} $ is a production in the system. The reflexive transitive closure (respectively, transitive closure) of the relation $ \Rightarrow $ is denoted by $ \Rightarrow^{*} $( respectively $ \Rightarrow^{+} $).
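The one-step yield relation can be made concrete: a minimal Python sketch (function name illustrative) that, for a rewriting system given as a list of productions, computes all words directly derivable from a given word:

```python
# Sketch: one step of the yield relation of a rewriting system (V, P).
# A production (left, right) may be applied at any occurrence of left.

def yield_step(productions, w):
    """All u with w => u: replace one occurrence of a left-hand side."""
    results = set()
    for left, right in productions:
        start = 0
        while True:
            pos = w.find(left, start)
            if pos == -1:
                break
            results.add(w[:pos] + right + w[pos + len(left):])
            start = pos + 1
    return results

# Illustrative system with the single production ab -> ba.
P = [("ab", "ba")]
print(yield_step(P, "abab"))  # {'baab', 'abba'}
```

Iterating `yield_step` computes the relations $ \Rightarrow^{+} $ and $ \Rightarrow^{*} $, although in general the set of derivable words is infinite.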

A phrase-structure grammar, a grammar for short, is an ordered quadruple $ G = (V _{N} ,\ V _{T} ,\ S,\ P) $, where $ V _{N} $ and $ V _{T} $ are disjoint alphabets (the alphabets of non-terminals and terminals), $ S \in V _{N} $ (the initial letter) and $ P $ is a finite set of ordered pairs $ (w,\ u) $ such that $ u $ is a word over the alphabet $ V = V _{N} \cup V _{T} $ and $ w $ is a word over $ V $ containing at least one letter of $ V _{N} $. Again, the elements of $ P $ are referred to as rewriting rules or productions, and are written as $ w \rightarrow u $. A phrase-structure grammar $ G $ as above defines a rewriting system $ (V,\ P) $. Let $ \Rightarrow $ and $ \Rightarrow^{*} $ be the relations determined by this rewriting system. Then the language $ L (G) $ generated by $ G $ is defined by

$$ L (G) \ = \ \{ {w \in V _{T} ^ *} : {S \Rightarrow^{*} w} \} . $$

Two grammars $ G $ and $ G _{1} $ are termed equivalent if and only if $ L (G) = L (G _{1} ) $. For $ i = 0,\ 1,\ 2,\ 3 $, a grammar $ G = (V _{N} ,\ V _{T} ,\ S,\ P) $ is of type $ i $ if and only if restriction $ i $) on $ P $, as given below, is satisfied:

0) No restrictions.

1) Each production in $ P $ is of the form $ w _{1} Aw _{2} \rightarrow w _{1} ww _{2} $, where $ w _{1} $ and $ w _{2} $ are arbitrary words, $ A \in V _{N} $ and $ w $ is a non-empty word (with the possible exception of the production $ S \rightarrow \lambda $ whose occurrence in $ P $ implies, however, that $ S $ does not occur on the right-hand side of any production).

2) Each production in $ P $ is of the form $ A \rightarrow w $, where $ A \in V _{N} $.

3) Each production is of one of the two forms $ A \rightarrow Bw $ or $ A \rightarrow w $, where $ A,\ B \in V _{N} $ and $ w \in V _ T^{*} $.

A language is of type $ i $ if and only if it is generated by a grammar of type $ i $. Type-0 languages are also called recursively enumerable. Type-1 grammars and languages are also called context-sensitive. Type-2 grammars and languages are also called context-free. Type-3 grammars and languages are also referred to as finite-state or regular. (See also Grammar, context-sensitive; Grammar, context-free.)

Type- $ i $, $ i = 0,\ 1,\ 2,\ 3 $, languages form a strict hierarchy: the family of type- $ i $ languages is strictly included in the family of type- $ (i - 1) $ languages, for $ i = 1,\ 2,\ 3 $.

A specific context-free language over an alphabet with $ 2t $ letters, $ V _{t} = \{ a _{1} \dots a _{t} ,\ \overline{a}\; _{1} \dots \overline{a}\; _{t} \} $, $ t \geq 1 $, is the Dyck language generated by the grammar

$$ ( \{ S \} ,\ V _{t} ,\ S,\ \{ S \rightarrow SS,\ S \rightarrow \lambda ,\ S \rightarrow a _{1} S \overline{a}\; _{1} \dots S \rightarrow a _{t} S \overline{a}\; _{t} \} ). $$

The Dyck language consists of all words over $ V _{t} $ that can be reduced to $ \lambda $ using the relations $ a _{i} \overline{a}\; _{i} = \lambda $, $ i = 1 \dots t $. If the pairs $ (a _{i} ,\ \overline{a}\; _{i} ) $, $ i = 1 \dots t $, are viewed as parentheses of different types, then the Dyck language consists of all sequences of correctly nested parentheses.
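The reduction to $ \lambda $ can be sketched with a stack; in the Python illustration below the ASCII pairs ("(", ")") and ("[", "]") play the role of the pairs $ (a _{i} ,\ \overline{a}\; _{i} ) $:

```python
# Sketch: membership in the Dyck language over t parenthesis pairs,
# by cancelling a_i \bar{a}_i with a stack (pairs are illustrative).

def in_dyck(word, pairs):
    """Push openers; on a closer, it must cancel the most recent opener."""
    closer_of = dict(pairs)
    openers = set(closer_of)
    stack = []
    for c in word:
        if c in openers:
            stack.append(c)
        elif not stack or closer_of[stack.pop()] != c:
            return False
    return not stack  # reduced to the empty word

pairs = [("(", ")"), ("[", "]")]
print(in_dyck("([])()", pairs))  # True
print(in_dyck("([)]", pairs))    # False
```

The stack mirrors the fact that the Dyck language is context-free but not regular: the amount of memory needed grows with the nesting depth.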

The family of regular languages over an alphabet $ V $ equals the family of languages obtained from "atomic languages" $ \{ \lambda \} $ and $ \{ a \} $, where $ a \in V $, by a finite number of applications of "regular operations" : union, catenation and catenation closure. The formula expressing how a specific regular language is obtained from atomic languages by regular operations is termed a regular expression.
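A regular expression can be tested directly. The Python sketch below uses the `re` module; its patterns extend classical regular expressions, but the pattern shown uses only the regular operations union (`|`), catenation and star (`*`), so it denotes a regular language (the example language is illustrative):

```python
# Sketch: membership in the regular language ({a}{b})^* union {a},
# denoted by the regular expression (ab)* + a.
import re

pattern = re.compile(r"(?:ab)*|a")

def in_language(w):
    """True iff the whole word w matches the regular expression."""
    return pattern.fullmatch(w) is not None

print(in_language("abab"))  # True
print(in_language("a"))     # True
print(in_language("ba"))    # False
```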

Every context-free language which is $ \lambda $- free (i.e., does not contain the empty word) is generated by a grammar in Chomsky normal form, as well as by a grammar in Greibach normal form. In the former, all productions are of the types $ A \rightarrow BC $, $ A \rightarrow a $; in the latter, of the types $ A \rightarrow aBC $, $ A \rightarrow aB $, $ A \rightarrow a $, where capital letters denote non-terminals and $ a $ is a terminal letter. According to the theorem of Chomsky–Schützenberger, every context-free language $ L $ can be expressed as

$$ L \ = \ h (D \cap R), $$

for some regular language $ R $, Dyck language $ D $ and homomorphism $ h $. According to the Lemma of Bar-Hillel (also called the pumping lemma), every sufficiently long word $ w $ in a context-free language $ L $ can be written in the form

$$ w \ = \ u _{1} w _{1} u _{2} w _{2} u _{3} ,\ \ w _{1} w _{2} \ \neq \ \lambda , $$

where for every $ i \geq 0 $ the word $ u _{1} w _ 1^{i} u _{2} w _ 2^{i} u _{3} $ belongs to $ L $. For regular languages, the corresponding result reads as follows: Every sufficiently long word $ w $ in a regular language $ L $ can be written in the form $ w = u _{1} w _{1} u _{2} $, $ w _{1} \neq \lambda $, where for every $ i \geq 0 $ the word $ u _{1} w _ 1^{i} u _{2} $ belongs to $ L $.
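As a concrete illustration of the regular case, take $ L $ denoted by the regular expression $ ab^{*} c $ and the factorization $ u _{1} = a $, $ w _{1} = b $, $ u _{2} = c $ of the word $ abc $; every pumped word then stays in $ L $. A small Python check (a finite experiment, not a proof of the lemma):

```python
# Illustrative check of the pumping property for the regular language
# denoted by the regular expression a b* c.
import re

L = re.compile(r"ab*c")
u1, w1, u2 = "a", "b", "c"  # a factorization of the word abc
pumped_ok = all(L.fullmatch(u1 + w1 * i + u2) for i in range(5))
print(pumped_ok)  # True
```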

Derivations according to a context-free grammar (i.e., finite sequences of words where every two consecutive words are in the relation $ \Rightarrow $) can in a natural way be visualized as labelled trees, the so-called derivation trees (cf. also Derivation tree). A context-free grammar $ G $ is termed ambiguous if and only if some word in $ L (G) $ has two derivation trees. Otherwise, $ G $ is termed unambiguous. A context-free language $ L $ is unambiguous if and only if $ L = L (G) $ for some unambiguous grammar $ G $. Otherwise, $ L $ is termed (inherently) ambiguous. One also speaks of degrees of ambiguity. A context-free grammar $ G $ is ambiguous of degree $ k $( a natural number or $ \infty $) if and only if every word in $ L (G) $ possesses at most $ k $ derivation trees and some word in $ L (G) $ possesses exactly $ k $ derivation trees. A language $ L $ is ambiguous of degree $ k $ if and only if $ L = L (G) $ for some $ G $ ambiguous of degree $ k $, and there is no $ G _{1} $ ambiguous of degree less than $ k $ such that $ L = L (G _{1} ) $.

The families of type- $ i $ languages, $ i = 0,\ 1,\ 2,\ 3 $, defined above using generative devices, can be obtained also by recognition devices. A recognition device defining a language $ L $ receives arbitrary words as inputs and "accepts" exactly the words belonging to $ L $. The recognition devices corresponding to type-3 and type-2 languages, finite automata and pushdown automata, are defined below. The definitions of the recognition devices corresponding to type-1 and type-0 languages, linear bounded automata and Turing machines, are analogous. (Cf. Computable function; Complexity theory.)

A rewriting system $ (V,\ P) $ is called a finite deterministic automaton if and only if: a) $ V $ is divided into two disjoint alphabets $ V _{s} $ and $ V _{I} $( the state and the input alphabet); b) an element $ s _{0} \in V _{s} $ and a subset $ S _{1} \subseteq V _{s} $ are specified (initial state and final state set); and c) the productions in $ P $ are of the form

$$ s _{i} a _{k} \ \rightarrow \ s _{j} ,\ \ s _{i} ,\ s _{j} \in V _{s} ; \ \ a _{k} \in V _{I} , $$

and for each pair $ (s _{i} ,\ a _{k} ) $ there is exactly one such production in $ P $.

A finite deterministic automaton over an input alphabet $ V _{I} $ is usually defined by specifying an ordered quadruple $ (s _{0} ,\ V _{s} ,\ f,\ S _{1} ) $, where $ f $ is a mapping of $ V _{s} \times V _{I} $ into $ V _{s} $, the other items being as above. (Clearly, the values of $ f $ are obtained from the right-hand sides of the productions listed above.)

The language accepted or recognized by a finite deterministic automaton $ \mathop{\rm FDA}\nolimits $ is defined by

$$ L ( \mathop{\rm FDA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { s _{0} w \Rightarrow^{*} s _{1} \ \textrm{ for } \ \textrm{ some } \ s _{1} \in S _ 1} \} . $$
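A finite deterministic automaton $ (s _{0} ,\ V _{s} ,\ f,\ S _{1} ) $ is readily simulated. In the minimal Python sketch below, $ f $ is a dictionary, and the example automaton (accepting the words with an even number of $ a $'s) is illustrative:

```python
# Sketch: simulating a finite deterministic automaton given by its
# initial state s0, transition function f and final state set S1.

def accepts(s0, f, S1, word):
    """Run f: (state, letter) -> state along word; accept in S1."""
    state = s0
    for letter in word:
        state = f[(state, letter)]
    return state in S1

# States q0, q1 over the input alphabet {a, b}; accept iff the
# number of a's in the word is even.
f = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q0", ("q1", "b"): "q1",
}
print(accepts("q0", f, {"q0"}, "abab"))  # True
print(accepts("q0", f, {"q0"}, "ab"))    # False
```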

A finite non-deterministic automaton $ \mathop{\rm FNA}\nolimits $ is defined as a finite deterministic automaton with the following two exceptions. In b) $ s _{0} $ is replaced by a subset $ S _{0} \subseteq V _{s} $. In c) the second half of the sentence (beginning with "and" ) is omitted. The language accepted by an $ \mathop{\rm FNA}\nolimits $ is defined by

$$ L ( \mathop{\rm FNA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { s _{0} w \Rightarrow^{*} s _{1} \ \textrm{ for } \ \textrm{ some } \ s _{0} \in S _{0} \ \textrm{ and } \ s _{1} \in S _ 1} \} . $$

A language is of type 3 if and only if it is accepted by some finite deterministic automaton and if and only if it is accepted by some finite non-deterministic automaton.
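A finite non-deterministic automaton can be simulated by tracking the set of states reachable after each letter (in effect the subset construction behind the equivalence above). The example automaton below, accepting the words ending in $ ab $, is illustrative:

```python
# Sketch: simulating a finite non-deterministic automaton; delta maps
# (state, letter) to a set of successor states (possibly empty).

def nfa_accepts(S0, delta, S1, word):
    """Accept iff some final state is reachable from some initial state."""
    current = set(S0)
    for letter in word:
        current = {t for s in current for t in delta.get((s, letter), set())}
    return bool(current & S1)

# Accepts the words over {a, b} that end in ab.
delta = {
    ("p", "a"): {"p", "q"}, ("p", "b"): {"p"},
    ("q", "b"): {"r"},
}
print(nfa_accepts({"p"}, delta, {"r"}, "aab"))  # True
print(nfa_accepts({"p"}, delta, {"r"}, "aba"))  # False
```

Tracking state sets rather than single states is exactly why determinizing an automaton may blow up its state count exponentially, even though the accepted family of languages stays the same.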

A rewriting system $ (V,\ P) $ is called a pushdown automaton if and only if each of the following conditions 1)–3) is satisfied.

1) $ V $ is divided into two disjoint alphabets $ V _{s} $ and $ V _{I} \cup V _{z} $. The sets $ V _{s} $, $ V _{I} $ and $ V _{z} $ are called the state, input and pushdown alphabet, respectively. The sets $ V _{I} $ and $ V _{z} $ are non-empty but not necessarily disjoint.

2) Elements $ s _{0} \in V _{s} $, $ z _{0} \in V _{z} $ and a subset $ S _{1} \subseteq V _{s} $ are specified, namely, the so-called initial state, start letter and final state set.

3) The productions in $ P $ are of the two forms

$$ \tag{a1} zs _{i} \ \rightarrow \ ws _{j} ,\ \ z \in V _{z} ,\ \ w \in V _ z^{*} ,\ \ s _{i} ,\ s _{j} \in V _{s} ; $$

$$ \tag{a2} zs _{i} a \ \rightarrow \ ws _{j} ,\ \ z \in V _{z} ,\ \ w \in V _ z^{*} ,\ \ a \in V _{I} ,\ \ s _{i} ,\ s _{j} \in V _{s} . $$

The language accepted by a pushdown automaton $ \mathop{\rm PDA}\nolimits $ is defined by

$$ L ( \mathop{\rm PDA}\nolimits ) \ = \ \{ {w \in V _{I} ^ *} : { z _{0} s _{0} w \Rightarrow^{*} us _{1} \ \textrm{ for } \ \textrm{ some } \ u \in V _ z^{*} ,\ s _{1} \in S _ 1} \} . $$

A pushdown automaton is deterministic if and only if, for every pair $ (s _{i} ,\ z) $, $ P $ contains either exactly one production (a1) and no productions (a2), or no productions (a1) and exactly one production (a2) for every $ a \in V _{I} $.

The family of context-free languages equals the family of languages accepted by pushdown automata. Languages accepted by deterministic pushdown automata are referred to as deterministic (context-free) languages. The role of determinism is different in connection with pushdown and finite automata: the family of deterministic languages is a proper subfamily of the family of context-free languages.
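As an illustration, the language $ \{ a^{n} b^{n} : n \geq 1 \} $ is a deterministic context-free language. The Python sketch below simulates a deterministic pushdown automaton for it with an explicit stack (state and stack-symbol names are illustrative):

```python
# Sketch: a deterministic pushdown automaton for { a^n b^n : n >= 1 }.
# Push one stack symbol Z per a; pop one per b; accept when, after at
# least one a, every pushed symbol has been popped.

def pda_accepts(word):
    stack = []
    state = "reading_a"
    for c in word:
        if state == "reading_a" and c == "a":
            stack.append("Z")          # push for each a
        elif c == "b" and stack:
            state = "reading_b"        # once b's start, no a's allowed
            stack.pop()                # pop for each b
        else:
            return False               # no applicable production
    return state == "reading_b" and not stack

print(pda_accepts("aabb"))  # True
print(pda_accepts("aab"))   # False
```

The stack is what separates pushdown automata from finite automata here: no fixed number of states can count an unbounded number of $ a $'s.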

The automata considered above have no other output facilities than being or not being in a final state, i.e., they are only capable of accepting or rejecting inputs. Occasionally devices (transducers) capable of having words as outputs, i.e., capable of translating words into words, are considered. A formal definition is given below only for the transducer corresponding to a finite automaton. A pushdown transducer is defined analogously.

A rewriting system $ (V,\ P) $ is called a sequential transducer if and only if each of the following conditions 1)–3) is satisfied.

1) $ V $ is divided into two disjoint alphabets $ V _{s} $ and $ V _{1} \cup V _{0} $. The sets $ V _{s} $, $ V _{1} $, $ V _{0} $ are called the state, input and output alphabet, respectively. (The two latter alphabets are non-empty but not necessarily disjoint.)

2) An element $ s _{0} \in V _{s} $ and a subset $ S _{1} \subseteq V _{s} $ are specified (initial state and final state set).

3) The productions in $ P $ are of the form

$$ s _{i} w \ \rightarrow \ us _{j} ,\ \ s _{i} ,\ s _{j} \in V _{s} ,\ \ w \in V _ 1^{*} ,\ \ u \in V _ 0^{*} . $$

If, in addition, $ w \neq \lambda $ in all productions, then the rewriting system is called a generalized sequential machine (gsm).

For a sequential transducer $ \mathop{\rm ST}\nolimits $, words $ w _{1} \in V _ 1^{*} $ and $ w _{2} \in V _ 0^{*} $, and languages $ L _{1} \subseteq V _ 1^{*} $ and $ L _{2} \subseteq V _ 0^{*} $, one defines

$$ \mathop{\rm ST}\nolimits (w _{1} ) \ = \ \{ {w} : { s _{0} w _{1} \Rightarrow^{*} ws _{1} \ \textrm{ for } \ \textrm{ some } \ s _{1} \in S _ 1} \} , $$

$$ \mathop{\rm ST}\nolimits (L _{1} ) \ = \ \{ u: \ u \in \mathop{\rm ST}\nolimits (w) \ \textrm{ for } \ \textrm{ some } \ w \in L _{1} \} , $$

$$ \mathop{\rm ST}\nolimits^{-1} (w _{2} ) \ = \ \{ u: \ w _{2} \in \mathop{\rm ST}\nolimits (u) \} , $$

$$ \mathop{\rm ST}\nolimits^{-1} (L _{2} ) \ = \ \{ u: \ u \in \mathop{\rm ST}\nolimits^{-1} (w) \ \textrm{ for } \ \textrm{ some } \ w \in L _{2} \} . $$

Mappings of languages thus defined are referred to as (rational) transductions and inverse (rational) transductions. If $ \mathop{\rm ST}\nolimits $ is also a gsm, one speaks of gsm mappings and inverse gsm mappings.
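A simple gsm mapping can be sketched as follows. The one-state machine below, with productions $ s _{0} a \rightarrow aas _{0} $ and $ s _{0} b \rightarrow s _{0} $ and with $ s _{0} $ both initial and final, is an illustrative example, not a general transducer simulator:

```python
# Sketch: the gsm mapping of a one-state gsm that doubles every a
# and erases every b (the machine is illustrative).

def gsm_map(word):
    out = []
    for c in word:
        if c == "a":
            out.append("aa")   # production s0 a -> aa s0
        elif c == "b":
            pass               # production s0 b -> s0, output lambda
        else:
            raise ValueError("letter not in the input alphabet {a, b}")
    return "".join(out)

print(gsm_map("abba"))  # 'aaaa'
```

Because the machine is deterministic and single-state, each input word has exactly one image; a general sequential transducer would map a word to a set of words.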

Homomorphisms, inverse homomorphisms and the mappings $ \tau (L) = L \cap R $, where $ R $ is a fixed regular language, are all rational transductions, the first and the last being also gsm mappings. The composite of two rational transductions (respectively, gsm mappings) is again a rational transduction (respectively, a gsm mapping). Every rational transduction $ \tau $ can be expressed in the form

$$ \tau (L) \ = \ h _{1} (h _ 2^{-1} (L) \cap R), $$

where $ h _{1} $ and $ h _{2} $ are homomorphisms and $ R $ is a regular language. These results show that a language family is closed under rational transductions if and only if it is closed under homomorphisms, inverse homomorphisms and intersections with regular languages.

A finite probabilistic automaton (or stochastic automaton, cf. Automaton, probabilistic) is an ordered quintuple

$$ \mathop{\rm PA}\nolimits \ = \ (V _{1} ,\ V _{s} ,\ s _{0} ,\ S _{1} ,\ H), $$

where $ V _{1} $ and $ V _{s} = \{ s _{0} \dots s _{ {n - 1}} \} $ are disjoint alphabets (inputs and states), $ s _{0} \in V _{s} $ and $ S _{1} \subseteq V _{s} $ (initial state and final state set), and $ H $ is a mapping of $ V _{1} $ into the set of $ n $- dimensional stochastic matrices. (A stochastic matrix is a square matrix with non-negative real entries and with row sums equal to 1.) The mapping $ H $ is extended to a homomorphism of $ V _ 1^{*} $ into the monoid of $ n $- dimensional stochastic matrices. Consider $ V _{s} $ to be an ordered set as indicated, let $ \pi $ be the $ n $- dimensional stochastic row vector whose first component equals 1, and let $ \eta $ be the $ n $- dimensional column vector consisting of 0's and 1's such that the $ i $- th component of $ \eta $ equals 1 if and only if the $ i $- th element of $ V _{s} $ belongs to $ S _{1} $. The language accepted by a $ \mathop{\rm PA}\nolimits $ with cut-point $ \alpha $, where $ \alpha $ is a real number satisfying $ 0 \leq \alpha < 1 $, is defined by

$$ L ( \mathop{\rm PA}\nolimits ,\ \alpha ) \ = \ \{ {w \in V _{1} ^ *} : { \pi H (w) \eta > \alpha} \} . $$

Languages obtained in this fashion are referred to as stochastic languages.
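The quantity $ \pi H (w) \eta $ can be computed directly by iterated matrix–vector products. The two-state example below, with a single input letter and illustrative matrix entries, is a minimal sketch:

```python
# Sketch: acceptance probability pi H(w) eta of a finite probabilistic
# automaton; H maps each letter to an n x n stochastic matrix given as
# a list of rows (the example automaton is illustrative).

def acceptance_probability(H, word, n, final_states):
    """pi = (1, 0, ..., 0); multiply by H(letter) for each letter of word."""
    pi = [1.0] + [0.0] * (n - 1)
    for letter in word:
        M = H[letter]
        pi = [sum(pi[i] * M[i][j] for i in range(n)) for j in range(n)]
    return sum(pi[j] for j in final_states)  # inner product with eta

# Two states; state 1 (0-indexed) is the only final state.
H = {"a": [[0.5, 0.5], [0.0, 1.0]]}
p = acceptance_probability(H, "aa", 2, {1})
print(p)        # 0.75
print(p > 0.5)  # accepted with cut-point alpha = 0.5
```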

Decision problems (cf. Decision problem) play an important role in language theory. The usual method of proving that a problem is undecidable is to reduce to it some problem whose undecidability is already known. The most useful tool for problems in language theory is in this respect the Post correspondence problem. By definition, a Post correspondence problem is an ordered quadruple $ \mathop{\rm PCP}\nolimits = ( \Sigma ,\ n,\ \alpha ,\ \beta ) $, where $ \Sigma $ is an alphabet, $ n \geq 1 $, and $ \alpha = ( \alpha _{1} \dots \alpha _{n} ) $, $ \beta = ( \beta _{1} \dots \beta _{n} ) $ are ordered $ n $- tuples of elements of $ \Sigma^{+} $. A solution to the $ \mathop{\rm PCP}\nolimits $ is a non-empty finite sequence of indices $ i _{1} \dots i _{k} $ such that

$$ \alpha _{ {i _ 1}} \dots \alpha _{ {i _ k}} \ = \ \beta _{ {i _ 1}} \dots \beta _{ {i _ k}} . $$

It is undecidable whether an arbitrary given $ \mathop{\rm PCP}\nolimits $( or an arbitrary given $ \mathop{\rm PCP}\nolimits $ over the alphabet $ \Sigma = \{ a _{1} ,\ a _{2} \} $) has a solution.
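Since solvability is undecidable, no algorithm can decide it in general; a brute-force search over index sequences of bounded length can, however, find solutions when they are short. A Python sketch (the instance below is a classical illustrative example):

```python
# Sketch: bounded brute-force search for a PCP solution. The problem is
# undecidable, so the search must be cut off at a fixed length; failure
# to find a solution within the bound proves nothing.
from itertools import product

def pcp_solution(alpha, beta, max_len):
    """Try all index sequences of length 1..max_len (0-indexed indices)."""
    n = len(alpha)
    for k in range(1, max_len + 1):
        for seq in product(range(n), repeat=k):
            if "".join(alpha[i] for i in seq) == "".join(beta[i] for i in seq):
                return list(seq)
    return None

alpha = ("a", "ab", "bba")
beta = ("baa", "aa", "bb")
print(pcp_solution(alpha, beta, 4))  # [2, 1, 2, 0]
```

Here the sequence $ (3,\ 2,\ 3,\ 1) $ in the 1-indexed notation of the text yields $ bba \cdot ab \cdot bba \cdot a = bb \cdot aa \cdot bb \cdot baa $.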

Also Hilbert's tenth problem is undecidable: Given a polynomial $ P (x _{1} \dots x _{k} ) $ with integer coefficients, one has to decide whether or not there are non-negative integers $ x _{i} $, $ i = 1 \dots k $, satisfying the equation

$$ P (x _{1} \dots x _{k} ) \ = \ 0. $$

The membership problem is decidable for context-sensitive languages but undecidable for type-0 languages. (More specifically, given a context-sensitive grammar $ G $ and a word $ w $, it is decidable whether or not $ w \in L (G) $.) It is decidable whether two given regular languages are equal and also whether one of them is contained in the other. Both of these problems (the equivalence problem and the inclusion problem) are undecidable for context-free languages. It is also decidable whether a given regular or context-free language is empty or infinite, whereas both of these problems are undecidable for context-sensitive languages. It is undecidable whether the intersection of two context-free languages is empty. The intersection of a context-free and a regular language is always context-free; and, hence, its emptiness is decidable. It is undecidable whether a given context-free language is regular.

The following example is due to M. Soittola. Consider the grammar $ G $ determined by the productions

$$ S \ \rightarrow \ abc,\ \ S \ \rightarrow \ aAbc, $$

$$ Ab \ \rightarrow \ bA,\ \ Ac \ \rightarrow \ Bbcc, $$

$$ bB \ \rightarrow \ Bb,\ \ aB \ \rightarrow \ aaA,\ \ aB \ \rightarrow \ aa. $$

Then

$$ L (G) \ = \ \{ {a^{n} b^{n} c ^ n} : { n \geq 1} \} . $$

In fact, any derivation according to $ G $ begins with an application of the first or second production, the first production directly yielding the terminal word $ abc $. Consider any derivation $ D $ from a word $ a^{i} A b^{i} c^{i} $, where $ i \geq 1 $, leading to a word over the terminal alphabet. $ D $ must begin with $ i $ applications of the third production ( $ A $ travels to the right) and then continue with an application of the fourth production (one further occurrence of $ b $ and $ c $ is deposited). Now the word $ a^{i} b^{i} Bbc ^ {i + 1} $ has been derived. For this the only possibility is to apply the fifth production $ i $ times ( $ B $ travels to the left), after which one obtains the word $ a^{i} Bb ^ {i + 1} c ^ {i + 1} $. This word directly yields one of the two words

$$ a ^ {i + 1} Ab ^ {i + 1} c ^ {i + 1} \ \ \textrm{ or } \ \ a ^ {i + 1} b ^ {i + 1} c ^ {i + 1} $$

by the last two productions (one further $ a $ is deposited, and either a new cycle is entered or the derivation is terminated). This argument shows that $ G $ generates all words belonging to the language and nothing else; every step in a derivation is uniquely determined, the only exception being that there is a choice between termination and entrance into a new cycle.
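The derivation argument above can be checked mechanically by exhaustive rewriting up to a length bound. The Python experiment below (a finite check, not a proof; names are illustrative) collects all terminal words derivable from $ S $ and confirms that, up to length 9, they are exactly $ abc $, $ a^{2} b^{2} c^{2} $, $ a^{3} b^{3} c^{3} $:

```python
# Sketch: exhaustive derivation in Soittola's grammar, pruned at a
# length bound; terminal words are those over {a, b, c} only.

PRODUCTIONS = [
    ("S", "abc"), ("S", "aAbc"),
    ("Ab", "bA"), ("Ac", "Bbcc"),
    ("bB", "Bb"), ("aB", "aaA"), ("aB", "aa"),
]

def terminal_words(max_len):
    seen = {"S"}
    frontier = ["S"]
    terminals = set()
    while frontier:
        w = frontier.pop()
        for left, right in PRODUCTIONS:
            pos = w.find(left)
            while pos != -1:
                u = w[:pos] + right + w[pos + len(left):]
                if len(u) <= max_len and u not in seen:
                    seen.add(u)
                    if all(c in "abc" for c in u):
                        terminals.add(u)
                    else:
                        frontier.append(u)
                pos = w.find(left, pos + 1)
    return terminals

expected = {"a" * n + "b" * n + "c" * n for n in (1, 2, 3)}
print(terminal_words(9) == expected)  # True
```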

References

[a1] N. Chomsky, "Three models for the description of language" IRE Trans. Information Theory , IT-2 (1956) pp. 113–124
[a2] N. Chomsky, "Syntactic structures" , Mouton (1957)
[a3] M. Davis, "Computability and unsolvability" , McGraw-Hill (1958)
[a4] S. Eilenberg, "Automata, languages and machines" , A , Acad. Press (1974)
[a5] D. Hilbert, "Mathematische Probleme. Vortrag, gehalten auf dem internationalen Mathematiker-Kongress zu Paris, 1900" , Gesammelte Abhandlungen , III , Springer (1935)
[a6] S.C. Kleene, "Representation of events in nerve nets and finite automata" , Automata studies , 34 , Princeton Univ. Press (1956) pp. 3–42
[a7] A. Paz, "Introduction to probabilistic automata" , Acad. Press (1971)
[a8] E.L. Post, "A variant of a recursively unsolvable problem" Bull. Amer. Math. Soc. , 52 (1946) pp. 264–268
[a9] H. Rogers jr., "Theory of recursive functions and effective computability" , McGraw-Hill (1967) pp. 164–165
[a10] G. Rozenberg, "Selective substitution grammars" Elektronische Informationsverarbeitung und Kybernetik (EIK) , 13 (1977) pp. 455–463
[a11] A. Salomaa, "Theory of automata" , Pergamon (1969)
[a12] A. Salomaa, "Formal languages" , Acad. Press (1973)
[a13] A. Salomaa, "Jewels of formal language theory" , Computer Science Press (1981)
[a14] A. Salomaa, M. Soittola, "Automata-theoretic aspects of formal power series" , Springer (1978)
[a15] A. Thue, "Ueber unendliche Zeichenreihen" Skrifter utgit av Videnskapsselskapet i Kristiania , I (1906) pp. 1–22
[a16] A. Thue, "Probleme über Veränderungen von Zeichenreihen nach gegebenen Regeln" Skrifter utgit av Videnskapsselskapet i Kristiania , I.10 (1914)
[a17] A.M. Turing, "On computable numbers, with an application to the Entscheidungsproblem" Proc. London Math. Soc. , 42 (1936) pp. 230–265
How to Cite This Entry:
Formal languages and automata. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Formal_languages_and_automata&oldid=18174
This article was adapted from an original article by G. Rozenberg, A. Salomaa (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article