Difference between revisions of "User:Joachim Draeger/sandbox"

From Encyclopedia of Mathematics
{{MSC|91A05}}

{{MSC|68Q05}}
 
 
{{TEX|done}}

A probabilistic Turing machine (PTM) is a [[Turing machine]] (TM) modified for executing a randomized [[Computable function|computation]]. From the computability point of view, a PTM is equivalent to a TM. In other respects, however, the behavior of a PTM differs profoundly from the deterministic computation of a TM; false results, for example, can only be excluded statistically in this model. A physical realization of true random number generation is possible by performing a measurement process in quantum theory.

==Rules==
 
 
Sprouts is a 2-person paper-and-pencil game created by J.H. Conway and M.S. Paterson in 1967 {{Cite|G1}}. It starts with $n$ points on a sheet of paper. The two players alternate making moves. Each move consists of drawing a line between two points $a$ and $b$ ($a=b$ is a valid choice) and of adding a new point anywhere on the new line with the exception of the endpoints. Planarity of the graph must be maintained, i.e. the line must not intersect another line or itself, and must not cross one of the already existing points. The number of lines incident to a point is limited to three, whereby drawing a line from a point $p$ to itself counts as two lines ending at $p$. In the normal version of Sprouts, the player making the last valid move wins; in the misère version, the player making the last valid move loses.
 
 
 
{| class="wikitable" style="border: 2px solid darkgray;"
|- align="center"
| [[File:sprouts_1.png|350px|Start Position]]
| [[File:sprouts_2.png|350px|Move 1]]
|- align="center"
| [[File:sprouts_3.png|350px|Move 2]]
| [[File:sprouts_4.png|350px|Move 3]]
|- align="center"
| [[File:sprouts_5.png|350px|Move 4]]
| [[File:sprouts_6.png|350px|Move 5]]
|- align="center"
| [[File:sprouts_7.png|350px|Move 6]]
| [[File:sprouts_8.png|350px|Move 7 and End of Game]]
|-
| align="center" colspan="2" |
Example: Game of Sprouts starting with 3 points (with the second player as winner in the normal version).
|}
 
 
 
==Formal Representation==
 
 
 
A Sprouts position can be formalized quite naturally as a topological graph $\bar G=(\bar V,\bar E)$ embedded in the plane ${{\mathbb{R}}}^2$. The starting position of an $n$-point game is a planar graph $\bar G_0=(\bar V_0,\bar E_0)$ with $\bar V_0=\{p_1,\ldots,p_n\}$ and $\bar E_0=\emptyset$. A move in a position $\bar G_i=(\bar V_i,\bar E_i)$ consists of connecting two points $a,b\in \bar V_i$ with a line and placing a point $q$ on this line. The new line has to preserve planarity and consists of two edges $e_1,e_2$ (modelled as Jordan curves) with endpoints $a,q$ and $q,b$, respectively. The execution of this move gives a new graph {{Cite|FL1}} $\bar G_{i+1}=(\bar V_{i+1},\bar E_{i+1})$ with $\bar V_{i+1}:=\bar V_i\cup \{q\}$ and $\bar E_{i+1}:=\bar E_i\cup \{e_1,e_2\}$.
 
 
 
The representation as a topological graph is far too detailed if only the outcome of a game is of interest. One possible way to abstract from irrelevant information --- like the precise shape of the edges --- is to consider equivalence classes of Sprouts positions with respect to homeomorphisms, because the gaming aspects of Sprouts positions --- the (non)existence of valid moves --- are invariant under homeomorphisms {{Cite|DHKR}}: for two positions $A, B$ homeomorphic to each other, all moves executed in the position $A$ can be transferred to the position $B$ and vice versa.
 
 
 
The elements of an equivalence class can be characterized on a purely combinatorial level by describing their node, edge, and face structure. Such a description may consist of
* a set $V$ of nodes;
* a set $E$ of edges;
* a set $F$ of faces;
* a set $O$ giving for each node $v\in V$ the (cyclic) order of the edges incident to $v$;
* a set $B$ giving for each face $f\in F$ the subsets of nodes and edges contained in its boundary.
Equivalent descriptions are known and used in programs analyzing Sprouts.
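As an illustration, such a combinatorial description can be sketched as a small data structure. The following Python sketch is hypothetical: the names $V, E, F, O, B$ follow the list above, the order information $O$ is simplified to a plain degree count, and none of this is the encoding used by actual Sprouts solvers.

```python
from dataclasses import dataclass

# Hypothetical encoding of a Sprouts position as plain combinatorial data.
# The names V, E, F, O, B follow the list above; O is simplified here to a
# degree count, and this is not the encoding used by real Sprouts solvers.
@dataclass
class SproutsPosition:
    V: set    # node names
    E: dict   # edge name -> (endpoint, endpoint); endpoints may coincide
    F: set    # face names
    O: dict   # node name -> number of edge ends incident to the node
    B: dict   # face name -> (boundary nodes, boundary edges)

    def lives(self, v):
        # a node of degree d has 3 - d lives; a self-loop contributes 2 to d
        return 3 - self.O[v]

    def is_final_candidate(self):
        # necessary condition for a final position: no node has 2 or 3 lives
        return all(self.lives(v) <= 1 for v in self.V)

# starting position of a 2-point game: two isolated nodes in a single face
start = SproutsPosition(
    V={"p1", "p2"},
    E={},
    F={"f0"},
    O={"p1": 0, "p2": 0},
    B={"f0": ({"p1", "p2"}, set())},
)
print(start.lives("p1"))           # 3
print(start.is_final_candidate())  # False
```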
 
 
 
{| class="wikitable" style="border: 2px solid darkgray;"
|-
| [[File:sprouts_multiedge.png|400px]]
|-
| The representation of edges by their endpoints only will not suffice, because there may be more than one edge in a Sprouts position with the same endpoints.
|}
 
 
 
==Length of a Sprouts game==
 
 
 
===Maximum length of a game===
 
 
 
A node is said to have $k$ lives, $k\le 3$, if its degree is $3-k$ {{Cite|Pr}}. A node is called dead if it has no lives left; otherwise, it is called alive. In a final position {{Cite|L}}, nodes still alive have exactly one life; otherwise it would be possible to make another move (draw a line connecting this point with itself). A new point introduced in a move already has two lines incident to it and thus only one life remaining.
 
 
 
A Sprouts game starting with $n$ points has a total number of $3n$ lives at the beginning. Each move reduces the total number of lives in the position by $1$, since the connecting line uses up two lives, whereas the new node adds one life. If only $1$ life remains, a valid move is not possible anymore. Thus a Sprouts game with $n$ initial points cannot have more than $3n-1$ moves {{Cite|C}}.
 
 
 
===Minimum length of a game===
 
 
 
Two nodes of a topological graph are called topological neighbors if they can be connected with an edge preserving planarity. In a final position, the topological neighbors of a node $p$ still alive are dead; otherwise it would be possible to make another move according to the rules of the game. For $p$, at least two (dead) topological neighbors must exist {{Cite|L}}.
 
 
 
{| class="wikitable" style="border: 2px solid darkgray;"
|-
| [[File:sprouts_neighbour.png|400px]]
|-
| The two cases of topological neighbors of a node $p$. Left: $p$ is connected with two different nodes $a,b$ with one edge each. The nodes $a,b$ are topological neighbors of $p$. Right: $p$ is connected with a node $p'$ via two edges. The node $p'$ must be dead, because otherwise another move would be possible. Thus it is connected with another point $p''$. The nodes $p',p''$ are topological neighbors of $p$.
|}
 
  
Some applications of computer science can be modeled better by a PTM than by a classical TM. Examples are environments with strong radiation, like space missions crossing the radiation belt of Jupiter or robots handling accidents in a nuclear plant. But even an ordinary calculation involving a very large number of single operations (e.g. a calculation of $\pi$ with record precision) may potentially be influenced by cosmic rays, making the calculation probabilistic.

If a game starts with $n$ nodes and ends after $m$ moves, the final position consists of exactly $n+m$ nodes. This node set consists of
 
$s$ nodes still alive, $s'$ dead nodes that are topological neighbors of nodes still alive, and $s''$ dead nodes that are not topological neighbors of a node still alive. This leads to the equation $n+m=s+s'+s''$. Since each of the $s$ points has exactly one life, it holds that $s=3n-m$. Furthermore, the inequalities $s'\ge 2s= 2(3n-m)$ and $s''\ge 0$ hold. This leads to $n+m \ge 3n-m+6n-2m$ and thus $m\ge 2n$, i.e. a Sprouts game will have at least $2n$ moves {{Cite|C}},{{Cite|L}}.
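The two bounds derived above can be collected in a small helper function; the sketch below simply evaluates the counting arguments and is not code from the cited analyses.

```python
def sprouts_move_bounds(n: int) -> tuple:
    """Bounds on the number of moves of an n-point Sprouts game,
    following the lives-counting arguments above."""
    if n < 1:
        raise ValueError("need at least one initial point")
    max_moves = 3 * n - 1   # 3n lives at the start, each move removes one life
    min_moves = 2 * n       # from n + m >= s + s' + s'' with s = 3n - m, s' >= 2s
    return min_moves, max_moves

# an n = 3 game (as in the example above) lasts between 6 and 8 moves;
# the pictured example game indeed ends after move 7
print(sprouts_move_bounds(3))   # (6, 8)
```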
 
  
===Definition of a Probabilistic Turing Machine===

A PTM $(Q,\Sigma,\Gamma,\sqcup,q_0,q_f,\delta)$ has the same components as a TM. The set $Q$ is a finite set of states, $\Sigma$ is a finite input/output alphabet, $\Gamma$ is a finite tape alphabet with $\Sigma\subseteq\Gamma$, $\sqcup\in \Gamma$ is a blank symbol with $\sqcup \notin \Sigma$, the state $q_0 \in Q$ is the start state, and $q_f \in Q$ is the stop state. The transition function $\delta$, however, does not define deterministic transitions as in the case of a Turing machine, but gives a probability distribution of possible transitions according to $\delta\colon Q \times \Gamma \times Q \times \Gamma \times \{L,R\} \longrightarrow [0,1]$, whereby for each pair $(q,a)\in Q\times\Gamma$ the probabilities of all transitions sum up to $1$.

For probabilistic Turing machines, the set $C$ of <i>configurations</i> is defined in the same way as for Turing machines. It is also called the set of <i>basic states</i>. The set $\Omega$ of <i>states</i> is the set of possible probability distributions on the basic states, i.e.
$$\Omega=\left\{(p_c)_{c\in C}\in [0,1]^C \,\,\,\left| \,\,\,\sum\limits_{c\in C} p_c=1\right.\right\}.$$
The set of states serves as memory for the computation history. Since the run of the computation is probabilistic, the definition of a state must be probabilistic as well; this motivates the distinction between basic states and states.

==Gaming Aspects==

===Game Strategy===

As stated above, the maximum number of moves in a Sprouts game starting with $n$ nodes equals $3n-1$. The term '$-1$' results from the one node still being alive at the
end of the game. More generally, the number $m$ of moves of a Sprouts game is equal to $3n$ minus the number of nodes still alive in the final position. This leads immediately to an important conclusion: the creation of nodes which remain alive but cannot be connected with other living nodes anymore controls the number of moves in a Sprouts game. Each player can create such a node $p$ by separating it from other parts of the position using a closed curve. The node $p$ is thus excluded from further play, making it effectively dead. Since, according to the winning condition, the outcome of a game of Sprouts depends on the number of moves {{Cite|BCG}}, this approach also controls the game itself. It was used as a starting point for the development of winning strategies for specific values of $n$ {{Cite|FL2}}. Nevertheless, no general winning strategy has been developed so far.
 
  
The transition function $\delta$ can be considered as a [[Stochastic matrix|stochastic matrix]] $M$ defined on the space $C$ of configurations with entries $M_{ji} = \mathrm{Prob}[\delta\colon c_i \mapsto c_j] \in [0,1]$. As a stochastic matrix, the $L_1$-norm of each column of $M$ is equal to $1$, i.e. $\sum_j M_{ji} = 1$. $L_1$-norms of states are preserved by $M$ according to $L_1(M\cdot p) = L_1(p) = \sum_{i} p_i$ for a state $p\in\Omega$. Not every stochastic matrix provides the transition function $\delta$ of a PTM, however, because such a $\delta$ must additionally fulfill a locality constraint: a Turing machine changes only a single symbol in each step and moves its head only to a position in its immediate neighborhood.

{| class="wikitable" style="border: 2px solid darkgray;"
|-
| [[File:sprouts_separation.png|400px]]
|-
| A move connecting the node $c$ with itself creates a closed curve, which splits the plane into two faces $A$ and $B$. In the final position, both faces will contain at least one node that is still alive.
|}
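The matrix view of the transition function can be illustrated with a toy example. The sketch below uses an invented $3\times 3$ column-stochastic matrix on three abstract configurations; it demonstrates only the normalization and the preservation of the $L_1$-norm, not a full machine with a locality constraint.

```python
# Toy illustration of the stochastic-matrix view of delta: three abstract
# configurations c0, c1, c2 and an invented column-stochastic matrix M with
# M[j][i] = Prob[c_i -> c_j].  (The numbers are made up for illustration.)
M = [
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def step(M, p):
    """One computation step: map the state (distribution) p to M * p."""
    n = len(p)
    return [sum(M[j][i] * p[i] for i in range(n)) for j in range(n)]

# each column of M sums to 1 ...
assert all(abs(sum(M[j][i] for j in range(3)) - 1.0) < 1e-12 for i in range(3))

p = [1.0, 0.0, 0.0]   # start deterministically in configuration c0
p = step(M, p)        # -> [0.5, 0.5, 0.0]
# ... and the L1 norm of the state is preserved by the step
assert abs(sum(p) - 1.0) < 1e-12
print(p)
```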
 
  
Some alternative definitions of probabilistic Turing machines can be shown to be equivalent to the definition given here.
* A probabilistic Turing machine can also be understood as a Turing machine $(Q,\Sigma,\Gamma,\sqcup,q_0,q_f,\delta_0,\delta_1)$ having two transition functions $\delta_0$ and $\delta_1$. Which one of these two functions has to be applied in the next transition step is chosen randomly with probability $1/2$ each. This can be understood as a random number generator executing a coin toss for the binary decision between two possible continuations.
* In a slight variation of the above approach, a probabilistic Turing machine is a deterministic Turing machine with an additional tape (usually considered as read-only and with its head moving only to the right) containing binary random numbers. Though $\delta$ is a deterministic transition function, the additional tape introduces a random decision for each step.

===Complexity Theory of Probabilistic Turing Machines===

===Game Tree===

Sprouts is a finite game. Thus a winning strategy exists for one of the two players. Which
 
player can enforce the win is still an open question in the general case, however. Lacking other suitable techniques, its discussion relies on a construction of the game tree. For $n\le 7$ initial nodes this was done by hand (for $n=7$ see {{Cite|FL2}}); for games with $n>7$, computer support seems to be mandatory {{Cite|AJS}},{{Cite|J}},{{Cite|LV1}},{{Cite|LV4}}.
 
  
For a TM, the sequence of computation steps is uniquely determined. Such a machine accepts an input $x\in\Sigma^\ast$ if the terminating state of the computation is an accepting state. For a nondeterministic Turing machine, the input $x$ is accepted if there exists a computation sequence starting with $x$ and terminating in an accepting state. For probabilistic Turing machines, such a computation sequence exists in each case, even though its probability may be zero. Thus, for defining acceptance, the probability of computation sequences has to be taken into consideration. This leads to the definition given below.

The algorithmical construction of the game tree is based on a discrete and finite representation of Sprouts positions
 
like the one given in the section [[#Formal Representation|Formal Representation]]. To enable an efficient computation, redundant information still contained in the description (e.g. renamings of nodes, edges, and faces, or the mirroring of a position at a face boundary according to the Jordan curve theorem) has to be minimized. Another important technique for reducing the necessary computation effort is the decomposition of a Sprouts position into independent subpositions whenever possible {{Cite|LV5}}. Despite all efforts, the question whether a Sprouts position can continue for more than $k$ moves remains NP-complete. The same holds for the analogous question whether a Sprouts position can end in fewer than $k$ moves {{Cite|BS}}. Thus, only small values of $n$ have been checked up to now ($n\le 44$ as of 2011). Tables of the known results can be found at [http://sprouts.tuxfamily.org/wiki/doku.php?id=records]. It is conjectured {{Cite|AJS}} that the first player wins for $n\equiv 3,4,5 \pmod 6$.
 
  
The misère version is analyzed in a similar way. It is conjectured that here the first player wins for $n\equiv 0,4,5 \pmod 6$, except for $n=1,4$. Outcomes are only known up to $n=20$, however. The smaller number of data points results from difficulties in the mathematical and computational handling of misère games {{Cite|Pl}},{{Cite|LV3}}.

For $T\colon \mathbb{N} \longrightarrow \mathbb{N}$, a PTM $M$ [[Decidable predicate|decides]] a language $L\subseteq \Sigma^\ast$ in time $T(n)$ if
* for each $x\in \Sigma^\ast$ and each possible computation sequence resulting from input $x$, the machine $M$ terminates after at most $T(|x|)$ computation steps;
* $\forall x \in L \colon \mathrm{Prob}[M(x)=1] \ge 2/3$;
* $\forall x \notin L \colon \mathrm{Prob}[M(x)=0] \ge 2/3$.
In this definition, $M(x)$ designates the result of processing the input $x$ with $M$. The expression $M(x)=1$ indicates termination in an accepting state, whereas $M(x)=0$ indicates termination in a nonaccepting state. $\mathrm{Prob}[M(x)=1]$ denotes the probability of the computations leading to $M(x)=1$. The class of languages decided by PTMs in $O(T(n))$ computation steps is designated as $\mathrm{BPTIME}(T(n))$.
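The acceptance criterion can be illustrated by estimating $\mathrm{Prob}[M(x)=1]$ empirically. The decider below is an invented stand-in with error probability $0.1$ for the trivial language of even numbers (any error below $1/3$ meets the threshold); it is a sketch, not a construction from the literature.

```python
import random

# Toy stand-in for a PTM deciding the (trivial) language of even numbers:
# it answers correctly with probability 0.9, so it meets the 2/3 threshold.
# (The machine and its error rate are invented for illustration.)
def M(x: int) -> int:
    correct = 1 if x % 2 == 0 else 0
    return correct if random.random() < 0.9 else 1 - correct

def estimate_accept_prob(x, runs=10000):
    """Estimate Prob[M(x) = 1] as the fraction of runs answering 1."""
    return sum(M(x) for _ in range(runs)) / runs

random.seed(0)
print(estimate_accept_prob(4) >= 2/3)   # True: 4 is in the language
print(estimate_accept_prob(7) <= 1/3)   # True: 7 is not in the language
```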
  
Based on $\mathrm{BPTIME}(T(n))$, the [[Complexity theory|complexity class]] $\mathrm{BPP}$ (an abbreviation of bounded-error, probabilistic, polynomial-time) is formally defined as
$$\mathrm{BPP}:=\bigcup\limits_{c\in\mathbb{R},c>0} \mathrm{BPTIME}(n^c).$$
This means that $L\in \mathrm{BPP}$ holds if a polynomial-time PTM $M$ exists with
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3. \end{align*}
Since the transition function $\delta$ can be chosen in such a way that a specific continuation is preferred with probability $1$, a deterministic TM is a special case of a PTM. Thus $\mathrm{P}\subseteq \mathrm{BPP}$ holds. Up to now (2013), it is unknown whether $\mathrm{BPP} = \mathrm{P}$ holds or not.

==Variants==
  
===Brussels Sprouts===
The complexity class $\mathrm{BPP}$ defines the polynomial-time complexity for a PTM $M$ based on a two-sided error, i.e. $M$ may indicate $0$ despite $x\in L$ and $1$ despite $x\notin L$. It is also possible to define complexity classes with a one-sided error. In this case, $M(x)$ may still indicate, say, a false reject, but not a false accept. This leads to the definition of the complexity class $\mathrm{RP}$ (an abbreviation of random polynomial-time). It holds that $L\in \mathrm{RP}$ if a polynomial-time PTM $M$ exists with
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 0. \end{align*}
This is equivalent to
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] = 1. \end{align*}
An immediate consequence of the definition is the inclusion $\mathrm{RP} \subseteq \mathrm{NP}$, where $\mathrm{NP}$ is the complexity class of nondeterministically polynomial-time languages. Analogously, it holds that $L\in \mathrm{coRP}$ if a polynomial-time PTM $M$ exists with
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 1 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3 \end{align*}
or, equivalently,
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 0] = 0 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3. \end{align*}
One can show both $\mathrm{RP}\subseteq \mathrm{BPP}$ and $\mathrm{coRP}\subseteq \mathrm{BPP}$. The members of $\mathrm{RP}$ give no false accepts, while the members of $\mathrm{coRP}$ give no false rejects. For avoiding both false accepts and false rejects, i.e. false answers at all, one has to use algorithms belonging to the complexity class $\mathrm{ZPP}$.
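The one-sided error of $\mathrm{RP}$-style algorithms can be illustrated with a toy compositeness test based on guessing a divisor: it may miss a witness (a false reject) but never accepts a prime, so there are no false accepts. The test below is an illustrative sketch, not an efficient algorithm.

```python
import random

# Toy RP-style decider for "x is composite" by guessing a divisor:
# on a composite input it sometimes finds a witness (answer 1), on a
# prime input it can never answer 1 -- a one-sided error, as in RP.
# (This test is an invented illustration, not a practical algorithm.)
def rp_composite_witness(x: int) -> int:
    d = random.randint(2, max(2, x - 1))
    return 1 if x % d == 0 else 0

random.seed(3)
# on the prime input 13 the answer is always 0 (no false accept) ...
assert all(rp_composite_witness(13) == 0 for _ in range(1000))
# ... while on the composite input 12 a witness shows up eventually
assert any(rp_composite_witness(12) == 1 for _ in range(1000))
```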
  
The complexity class $\mathrm{ZPP}$ of zero-sided error, expected polynomial-time languages consists of all languages $L$ for which there exists a $c\in\mathbb{R}$, $c>0$, such that for all inputs $x$ the average running time is bounded by $|x|^c$, while the probability of providing the correct answer is equal to $1$, i.e.
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 1 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] = 1. \end{align*}
For $L\in \mathrm{ZPP}$, the probability that $M(x)$ does not terminate is equal to $0$. It holds that $\mathrm{ZPP} = \mathrm{RP}\cap \mathrm{coRP}$.

This version of Sprouts uses crosses instead of points {{Cite|BCG}},{{Cite|CG}},{{Cite|G2}}. Each line connecting two crosses (or a cross with itself) must start and end at a free arm of a cross. Anywhere on the new line a cross-bar is added, giving one new free arm on each side of the line. The cross-bar represents a new cross (with two opposing arms used up by the line just drawn).
 
  
{| class="wikitable" style="border: 2px solid darkgray;"
|- align="center"
| [[File:brussels_sprouts_1.png|300px|Start Position]]
| [[File:brussels_sprouts_2.png|300px|Move 1]]
| [[File:brussels_sprouts_3.png|300px|Move 2]]
|- align="center"
| [[File:brussels_sprouts_4.png|300px|Move 3]]
| [[File:brussels_sprouts_5.png|300px|Move 4]]
| [[File:brussels_sprouts_6.png|300px|Move 5]]
|- align="center"
| [[File:brussels_sprouts_7.png|300px|Move 6]]
| [[File:brussels_sprouts_8.png|300px|Move 7]]
| [[File:brussels_sprouts_9.png|300px|Move 8 and End of Game]]
|-
| align="center" colspan="3" |
Example: Game of Brussels Sprouts starting with 2 crosses
|}

===Improvement of Probabilistic Computations===
 
  
The definitions of probabilistic complexity classes given above use the specific value $2/3$ as the required minimal probability. This somewhat arbitrarily chosen value can be replaced by any other value $1/2+\epsilon$, $\epsilon > 0$, without changing the essential meaning of the definitions. Moreover, the error probability can be reduced by repetition. In the case of $\mathrm{RP}$, for example, an [[Algorithm|algorithm]] fulfilling
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 0 \end{align*}
iterated $m$ times, accepting as soon as one iteration outputs $1$, leads to an algorithm fulfilling
\begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 1-(1/3)^m \\
\forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 0. \end{align*}
In the same way, algorithms belonging to the complexity class $\mathrm{coRP}$ can be modified.

The termination of Brussels Sprouts is not obvious, because a move does not change the total number of free arms (two arms are used up, and two new free arms are introduced). However, it can be shown that a game starting with $n$ crosses always ends after $m=5n-2$ moves. To see this, apply the Euler formula for planar graphs. It holds:
* The number $v$ of nodes is equal to $n+m$ (each move adds one cross, i.e. a node).
* The number $e$ of edges is equal to $2m$ (each move adds two edges).
* The number $f$ of faces is equal to $4n$ (in a final position, each face contains exactly one free arm; a move does not change the total number of free arms; thus the number of faces is equal to the number of free arms at the beginning, and at the beginning each cross has $4$ free arms).
* The number of connected components is equal to one (otherwise there would exist a face $F$ containing nodes --- i.e. crosses --- of two connected components $X, Y$ in its boundary; for both $X$ and $Y$ there must exist at least one such cross with a free arm accessible from $F$; thus a move connecting these free arms of $X$ and $Y$ with a line across $F$ would be possible, a contradiction to the assumption that the position is final).
For a connected planar graph, it holds that $v-e+f=2$ and thus $n+m-2m+4n=5n-m=2$, i.e. $m=5n-2$.
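The counting argument can be checked mechanically; the following sketch just evaluates the formulas derived above and verifies the Euler formula for several values of $n$.

```python
def brussels_counts(n: int):
    """Node/edge/face counts of a final Brussels Sprouts position with n
    initial crosses, following the counting argument above."""
    m = 5 * n - 2          # claimed total number of moves
    v = n + m              # each move adds one cross
    e = 2 * m              # each move adds two edges
    f = 4 * n              # one face per free arm, 4n free arms at the start
    return v, e, f

# Euler's formula v - e + f = 2 for a connected planar graph
# confirms the consistency of m = 5n - 2
for n in range(1, 6):
    v, e, f = brussels_counts(n)
    assert v - e + f == 2
print(brussels_counts(2))   # (10, 16, 8)
```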
 
  
Algorithms belonging to the complexity class $\mathrm{BPP}$ require some more effort for improving the probability of correctness. Here, an $m$-fold repetition is used, whereby the results $b_1,\ldots,b_m$ are evaluated using a voting mechanism. Assuming that $M(x)$ decides the predicate $x\in L$ by producing the result $0$ or $1$, and that $m$ is an odd number, the modified algorithm outputs $1$ if $\sum_i b_i > m/2$ and $0$ otherwise. The probability of correctness improves according to the Chernoff bounds as follows.

Due to the fixed number of moves (independent of the play), Brussels Sprouts is uninteresting from the gaming perspective. A player only has the freedom to create new separated regions by a closed curve; the rules ('add a cross-bar on the line newly drawn') determine the number of new lives inside and outside of the region to be exactly one. Sprouts, in contrast, offers the additional freedom to use the single new life added by a new point either inside or outside of a separated region.
 
  
Analogous to Sprouts, Brussels Sprouts can be generalized to be played on orientable compact surfaces. On non-orientable compact surfaces, a 'real' game results in this way {{Cite|CC}}, whereby winning strategies have been found for some special cases {{Cite|Gi}}. Another interesting option is to make the addition of a new cross in a move optional.

The quantitative behavior of the voting mechanism follows from the Chernoff bounds. Let $x_1,\ldots,x_m$ be independent random variables having the same probability distribution with image set $\{0,1\}$. For $p:= \mathrm{Prob}[x_i=1]$, $X:=\sum_{i=1}^m x_i$, and $\Theta \in [0,1]$ it holds that
$$\begin{array}{rcl}
\mathrm{Prob}[X\ge (1+\Theta)pm] &\le & \exp\left(-{\Theta^2\over 3}pm\right) \\
\mathrm{Prob}[X\le (1-\Theta)pm] &\le & \exp\left(-{\Theta^2\over 2}pm\right)
\end{array}$$
The random variables $x_i$ are now interpreted as error variables, i.e. $x_i=1$ if the $i$-th repetition of the decision algorithm gives a wrong answer and $x_i=0$ otherwise. According to the definition of the class $\mathrm{BPP}$, it holds in the worst case that $p=1-2/3=1/3$. Taking $\Theta=1/2$ in the first Chernoff bound gives
$$\mathrm{Prob}[X\ge m/2] \le \exp\left(-{\Theta^2\over 3}pm\right) = \exp\left(-{1\over 36}m\right),$$
i.e. the error of the voting algorithm is smaller than or equal to $\exp(-m/36)$.
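The effect of the voting mechanism can be observed in a small simulation. The decider below is an invented stand-in with the worst-case error $1/3$; the simulation shows the error dropping with the number of repetitions $m$, in line with the $\exp(-m/36)$ estimate.

```python
import random

def noisy_decider(error=1/3):
    """One run of a BPP-style algorithm: returns the correct answer 1
    with probability 1 - error (the decider is a toy stand-in)."""
    return 1 if random.random() >= error else 0

def majority_vote(m, error=1/3):
    """m-fold repetition with majority voting, m odd."""
    votes = sum(noisy_decider(error) for _ in range(m))
    return 1 if votes > m / 2 else 0

def empirical_error(m, trials=2000):
    """Fraction of trials in which the majority vote is wrong."""
    return sum(majority_vote(m) == 0 for _ in range(trials)) / trials

random.seed(1)
# the observed error probability drops quickly with m
for m in (1, 11, 41):
    print(m, empirical_error(m))
```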
  
===Stars-and-Stripes, also called Cloves===

This version of Sprouts is similar to Brussels Sprouts, but stars with $2, 3, 4, \ldots$ arms are allowed in the starting position instead of only crosses with exactly $4$ arms {{Cite|BCG}}. In a move, always a 4-arm star (stripe) is added. Winning strategies for special cases are presented in {{Cite|CC}}.

===Applications of Probabilistic Computations===

Some examples of probabilistic algorithms may be given; only their basic ideas are presented.
* A probabilistic primality test for a natural number $n\in\mathbb{N}$ can be realized as follows: generate random natural numbers $k_1, \ldots, k_r$ with $1 < k_i < n$. For each $k_i$, calculate the greatest common divisor $g_i:=\gcd(n,k_i)$. If there exists an $i$ with $g_i>1$, output $0$ for 'not prime'; otherwise, output $1$.
* The question whether two polynomials $f(x)$, $g(x)$ are equal on a region $D$ can be reduced to the question whether $f(x)-g(x)=0$ for $x\in D$. Thus, the algorithm generates random numbers $x_1, \ldots, x_r$ with $x_i \in D$. For each $x_i$, the algorithm calculates the difference $d_i:= f(x_i)-g(x_i)$. If there exists an $i$ with $d_i\neq 0$, output $0$ representing 'unequal'; otherwise, output $1$ representing 'equal'.
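The two probabilistic algorithms sketched above can be written in a few lines. The parameter $r$ (the number of random samples) follows the description; the gcd-based test is, as the description implies, only a weak illustration of the idea (practical testers use e.g. Miller-Rabin instead).

```python
import math
import random

def probably_prime_gcd(n: int, r: int = 50) -> int:
    """The gcd-based idea sketched above: output 0 ('not prime') if some
    random 1 < k < n shares a factor with n, else 1.  This simple test
    is weak; it is only an illustration of the one-sided-error idea."""
    for _ in range(r):
        k = random.randint(2, n - 1)
        if math.gcd(n, k) > 1:
            return 0
    return 1

def probably_equal(f, g, domain, r: int = 50) -> int:
    """Randomized polynomial comparison: output 0 ('unequal') if f and g
    differ at some random sample point of the domain, else 1."""
    for _ in range(r):
        x = random.choice(domain)
        if f(x) != g(x):
            return 0
    return 1

random.seed(2)
print(probably_prime_gcd(15))   # 0: a factor 3 or 5 is found almost surely
f = lambda x: (x + 1) ** 2
g = lambda x: x ** 2 + 2 * x + 1
print(probably_equal(f, g, list(range(100))))   # 1: identical polynomials
```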
 
  
===Black-and-white Sprouts, also called Weeds===
 
 
In this version, the player has the choice whether or not to add a new dot on the line newly drawn in a move. A winning strategy exists for the first player according to [http://www.webcitation.org/5o1OOC1x1].
 
 
 
===Sprouts on compact surfaces===
 
 
 
The game of Sprouts played on a sphere is equivalent to the play on a plane, because the plane is homeomorphic to a punctured sphere. All other compact surfaces can be represented on the plane using the fundamental polygon. In this way, start positions with up to $n=14$ points on orientable compact surfaces with genus $g\le 9$ were analyzed {{Cite|LV2}}. The results lead to the conjecture that the first player wins for $n\equiv 3,4,5 \pmod 6$ (as in the plane), independent of $g$. For nonorientable compact surfaces, the resulting pattern (start positions with $n\le 14$ on nonorientable compact surfaces with $g\le 8$ were analyzed) shows a higher complexity. Tables of the known results (for both normal and misère play) can be found at [http://sprouts.tuxfamily.org/wiki/doku.php?id=records].

An interesting theoretical result in this respect is the following: for a given $n$, it suffices to solve the game on compact surfaces up to a certain limiting genus if one wants to formulate a winning strategy for compact surfaces of arbitrary genus.
 
 
 
===Sorokin's Game===
 
 
 
Sprouts is played on a sheet of paper, considering a specific embedding of the developing graph. Sorokin's game instead is played with the constraint that the developing graph must remain embeddable in the plane. In other words, graphs admissible in Sorokin's game are planar (and have nodes of degree $\le 3$).
 
 
 
===Antwerp Sprouts===
 
 
 
Antwerp Sprouts is played with colored points. For the rules of two
 
different versions, see [http://www.neverendingbooks.org/index.php/antwerp-sprouts.html].
 
 
 
===Another generalization===
 
 
 
A move in Sprouts consists of adding a new point and connecting it with the already existing points using $2$ edges. A possible generalization is to always use $q$ edges, $q\ge 2$, instead of specifically $q=2$, and to allow graphs of maximum degree $p$ (see [http://mathforum.org/kb/message.jspa?messageID=1091035]). For $p=4$, $q=3$, for example, this results in games with at most $2n-1$ moves. Correspondingly, the choice $p=5$, $q=3$ leads to games with at most $5n-2$ moves.
 
 
 
==References==
 
  
 
{|
|-
|valign="top"|{{Ref|AJS}}||valign="top"| D. Applegate, G. Jacobson, D. Sleator, [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.9472 "Computer analysis of Sprouts"], Carnegie Mellon University Computer Science Technical Report No. CMU-CS-91-144, May 1991
|-
|valign="top"|{{Ref|DK2000}}||valign="top"| Ding-Zhu Du, Ker-I Ko, "Theory of Computational Complexity", Wiley 2000
|-
|valign="top"|{{Ref|BS}}||valign="top"| L. Baird III, D. Schweitzer, [http://www.usafa.edu/df/dfe/dfer/centers/accr/docs/baird2010b.pdf "Complexity of the Game of Sprouts"], FCS'10 (6th International Conference on Foundations of Computer Science), Las Vegas 2010
|-
|valign="top"|{{Ref|BCG}}||valign="top"| E. Berlekamp, J. Conway, R. Guy, "Winning Ways for your Mathematical Plays", vol. 2, Academic Press, London, 1982, chapter 17, pp. 564-568
|-
|valign="top"|{{Ref|CC}}||valign="top"| G. Cairns, K. Chartarrayawadee, "Brussels Sprouts and Cloves", Mathematics Magazine 80 (2007), February, pp. 46-58
|-
|valign="top"|{{Ref|C}}||valign="top"| M. Copper, [http://compmath.files.wordpress.com/2008/08/graph_theory_and_the_game_of_sprouts.pdf "Graph theory and the game of Sprouts"], American Mathematical Monthly 100 (1993), pp. 478-482
|-
|valign="top"|{{Ref|DHKR}}||valign="top"| J. Draeger, S. Hahndel, G. Koestler, P. Rossmanith, "Sprouts: Formalisierung eines topologischen Spiels", Technical Report TUM-I9015, Technische Universität München 1990
|-
|valign="top"|{{Ref|FL1}}||valign="top"| R. Focardi, F. Luccio, [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.212 "A new analysis technique for the Sprouts Game"], FUN 2001 (2nd International Conference on Fun with Algorithms), Isola d'Elba 2001, Carleton Scientific, pp. 117-132
|-
|valign="top"|{{Ref|FL2}}||valign="top"| R. Focardi, F. Luccio, [http://dx.doi.org/10.1016/j.dam.2003.11.008 "A modular approach to Sprouts"], Discrete Applied Mathematics 144 (2004), pp. 303-319
|-
|valign="top"|{{Ref|G1}}||valign="top"| M. Gardner, "Mathematical games: of sprouts and brussels sprouts, games with a topological flavor", Scientific American 217 (1967), July, pp. 112-115
|-
|valign="top"|{{Ref|G2}}||valign="top"| M. Gardner, "Sprouts and Brussels Sprouts", in M. Gardner, "Mathematical Carnival", Mathematical Association of America 1989
|-
|valign="top"|{{Ref|Gi}}||valign="top"| P.J. Giblin, "Graphs, Surfaces and Homology", Second edition, Chapman & Hall, London, 1981
|-
|valign="top"|{{Ref|J}}||valign="top"| Josh Jordan
|-
|valign="top"|{{Ref|L}}||valign="top"| T.K. Lam, [http://compmath.files.wordpress.com/2008/08/connected_sprouts.doc "Connected sprouts"], American Mathematical Monthly 104 (1997), February, pp. 116-119
|-
|valign="top"|{{Ref|LV1}}||valign="top"| J. Lemoine, S. Viennot, [http://download.tuxfamily.org/sprouts/sprouts-lemoine-viennot-070407.pdf "A further computer analysis of sprouts"], draft, 2007
|-
|valign="top"|{{Ref|LV2}}||valign="top"| J. Lemoine, S. Viennot, [http://arxiv.org/pdf/0812.0081v2 "Sprouts games on compact surfaces"], arXiv:0812.0081v2 [math.CO], 2 Mar 2009
|-
|valign="top"|{{Ref|LV3}}||valign="top"| J. Lemoine, S. Viennot, [http://arxiv.org/pdf/0908.4407v1 "Analysis of misere Sprouts game with reduced canonical trees"], arXiv:0908.4407v1 [math.CO], 30 Aug 2009
|-
|valign="top"|{{Ref|LV4}}||valign="top"| J. Lemoine, S. Viennot, [http://arxiv.org/abs/1008.2320 "Computer analysis of Sprouts with nimbers"], arXiv:1008.2320v1 [math.CO], 13 Aug 2010
|-
|valign="top"|{{Ref|LV5}}||valign="top"| J. Lemoine, S. Viennot, [http://arxiv.org/abs/1011.5841 "Nimbers are inevitable"], arXiv:1011.5841v1 [math.CO], 26 Nov 2010
|-
|valign="top"|{{Ref|Pl}}||valign="top"| T. Plambeck, [http://arxiv.org/abs/math/0603027 "Advances in Losing"], arXiv:math/0603027v1 [math.CO], 1 Mar 2006
|-
|valign="top"|{{Ref|AB2009}}||valign="top"| Sanjeev Arora, Boaz Barak, "Computational Complexity: A Modern Approach", Cambridge University Press 2009
|-
|valign="top"|{{Ref|Pr}}||valign="top"| G. Pritchett, "The game of Sprouts", Two-Year College Mathematics Journal 7 (1976), December, pp. 21-25
|-
|valign="top"|{{Ref|BC2006}}||valign="top"| Daniel Pierre Bovet, Pierluigi Crescenzi, [http://parlevink.cs.utwente.nl/~vdhoeven/CCC/bCC.pdf "Introduction to the Theory of Complexity"], 2006
|}

Latest revision as of 17:42, 26 December 2013

2020 Mathematics Subject Classification: Primary: 68Q05 [MSN][ZBL]


A probabilistic Turing machine (PTM) is a Turing machine (TM) modified for executing a randomized computation. From the computability point of view, a PTM is equivalent to a TM. In other respects, however, the behavior of a PTM differs profoundly from the deterministic computation of a TM; false results, for example, can only be excluded statistically in this model. A physical realization of true random number generation is possible by performing a measurement process in quantum theory.

Some applications in computer science are better modeled by a PTM than by a classical TM. Examples are environments with strong radiation, like space missions crossing the radiation belt of Jupiter or robots handling accidents in a nuclear plant. But even an ordinary calculation involving a very large number of single operations (e.g. a calculation of $\pi$ with record precision) may potentially be influenced by cosmic rays, making the calculation probabilistic.

==Definition of a Probabilistic Turing Machine==

A PTM $(Q,\Sigma,\Gamma,\sqcup,q_0,q_f,\delta)$ has the same components as a TM. The set $Q$ is a finite set of states, $\Sigma$ is a finite input/output alphabet, $\Gamma$ is a finite tape alphabet with $\Sigma\subseteq\Gamma$, $\sqcup\in \Gamma$ is a blank symbol with $\sqcup \notin \Sigma$, the state $q_0 \in Q$ is the start state, and $q_f \in Q$ is the stop state. The transition function $\delta$, however, does not define deterministic transitions as in the case of a Turing machine, but gives a probability distribution of possible transitions according to $\delta: Q \times \Gamma \times Q \times \Gamma \times \{L,R\} \longrightarrow [0,1]$.

For probabilistic Turing machines, the set $C$ of configurations is defined in the same way as for Turing machines. It is also called the set of basic states. The set $\Omega$ of states is the set of possible probability distributions on the basic states, i.e. $$\Omega=\left\{(p_c)_{c\in C}\in [0,1]^C \,\,\,\left| \,\,\,\sum\limits_{c\in C} p_c=1\right.\right\}.$$ The set of states serves as memory for the computation history. Since the run of the computation is probabilistic, the notion of a state must be probabilistic as well; hence the distinction between basic states and states.

The transition function $\delta$ can be considered as a stochastic matrix $M_{ji}$ defined on the space $C$ of configurations with $ M_{ji} = \mathrm{Prob}[\delta\colon c_i \mapsto c_j] \in [0,1]$. As a stochastic matrix, the $L_1$-norm of each column of $M_{ji}$ is equal to 1, i.e. $\sum_j M_{ji} = 1$. Consequently, $L_1$-norms are preserved by $M$ according to $L_1(M\cdot p) = L_1(p) = \sum_{i} p_i$ for a state $p\in \Omega$. Not every stochastic matrix provides the transition function $\delta$ of a PTM, however, because such a $\delta$ must additionally fulfill a locality constraint: a Turing machine changes only a single symbol in each step and moves its head to a new position in its immediate neighborhood.
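The matrix view can be illustrated with a toy example (three configurations invented purely for illustration, not derived from any concrete machine): a stochastic matrix with unit column sums is applied to a probability distribution over basic states, and the $L_1$-norm is preserved at every step.

```python
# Toy illustration of the stochastic-matrix view of delta.
# M[j][i] = Prob[configuration c_i -> c_j]; each column sums to 1.

def step(M, p):
    """Apply one probabilistic transition: p' = M . p."""
    n = len(p)
    return [sum(M[j][i] * p[i] for i in range(n)) for j in range(n)]

M = [
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

p = [1.0, 0.0, 0.0]               # start deterministically in c_0
for _ in range(3):
    p = step(M, p)

print(p)                           # [0.125, 0.875, 0.0]
print(abs(sum(p) - 1.0) < 1e-12)   # L1 norm is preserved: True
```

Each application of `step` redistributes probability mass among the basic states without creating or destroying any, which is exactly the $L_1$-preservation property stated above.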

Some alternative definitions of probabilistic Turing machines can be shown to be equivalent to the definition given here.

* A probabilistic Turing machine can also be understood as a Turing machine $(Q,\Sigma,\Gamma,\sqcup,q_0,q_f,\delta_0,\delta_1)$ having two transition functions $\delta_0$ and $\delta_1$. Which of these two functions is applied in the next transition step is chosen randomly with probability $1/2$ each. This can be understood as a random number generator executing a coin toss for the binary decision between two possible continuations.
* In a slight variation of the above approach, a probabilistic Turing machine is a deterministic Turing machine with an additional tape (usually considered as read-only, with its head moving only to the right) containing binary random numbers. Though $\delta$ is a deterministic transition function, the additional tape introduces a random decision at each step.
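The coin-toss formulation can be sketched as follows. The miniature machine, its single state, and its two transition functions are invented for this illustration; in each step one of the deterministic functions $\delta_0$, $\delta_1$ is selected with probability $1/2$.

```python
import random

# Sketch of the two-transition-function PTM: delta0 and delta1 are
# deterministic; a fair coin chooses between them at every step.

def delta0(state, symbol):
    return state, "0", +1      # (new state, written symbol, head move)

def delta1(state, symbol):
    return state, "1", +1

def run_ptm(n, rng=random.Random(0)):
    """Run the toy machine for n steps and return the written tape."""
    tape, head, state = {}, 0, "run"
    for _ in range(n):
        delta = delta0 if rng.random() < 0.5 else delta1
        state, written, move = delta(state, tape.get(head, " "))
        tape[head] = written
        head += move
    return "".join(tape[i] for i in range(n))

print(run_ptm(8))   # a random 8-symbol binary string
```

Because each step branches with probability $1/2$, a run of $n$ steps realizes one of $2^n$ computation sequences, each with probability $2^{-n}$, matching the distribution-over-configurations picture above.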

==Complexity Theory of Probabilistic Turing Machines==

For a TM, the sequence of computation steps is uniquely determined. Such a machine accepts an input $x\in\Sigma^\ast$ if the terminating state of the computation is an accepting state. For a nondeterministic Turing machine, the input $x$ is accepted if there exists a computation sequence starting with $x$ and terminating in an accepting state. For a probabilistic Turing machine, such a computation sequence exists in each case, even though its probability may be zero. Thus, for defining acceptance, the probability of computation sequences is taken into consideration. This leads to the following definition.

For $T\colon \mathbb{N} \longrightarrow \mathbb{N}$, a PTM $M$ decides a language $L\subseteq \Sigma^\ast$ in time $T(n)$ if

* For each $x\in \Sigma^\ast$ and each possible computation sequence resulting from input $x$, $M$ terminates after at most $T(|x|)$ computation steps.
* $\forall x \in L \colon \mathrm{Prob}[M(x)=1 ] \ge 2/3$
* $\forall x \notin L \colon \mathrm{Prob}[M(x)=0 ] \ge 2/3$

In this definition, $M(x)$ designates the result of the processing of input $x$ by $M$. The expression $M(x)=1 $ indicates a termination in an accepting state, whereas $M(x)=0$ indicates a termination in a nonaccepting state. $\mathrm{Prob}[M(x)=1 ]$ denotes the fraction of computations leading to $M(x)=1$. The class of languages decided by PTMs in $O(T(n))$ computation steps is designated as $\mathrm{BPTIME}(T(n))$.

Based on $\mathrm{BPTIME}(T(n))$, the complexity class $\mathrm{BPP}$ (an abbreviation of bounded-error, probabilistic, polynomial-time) is formally defined as $$\mathrm{BPP}:=\bigcup\limits_{c\in\mathbb{R},c>0} \mathrm{BPTIME}(n^c).$$ This means that $L\in \mathrm{BPP}$ holds if a polynomial-time PTM $M$ exists with \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3. \end{align*} Since the transition function $\delta$ can be chosen in such a way that a specific continuation is preferred with probability $1$, a deterministic TM is a special case of a PTM. Thus it holds $\mathrm{P}\subseteq \mathrm{BPP}$. To date (2013), it is unknown whether $\mathrm{BPP} = \mathrm{P}$ holds or not.

The complexity class $\mathrm{BPP}$ defines the polynomial-time complexity for a PTM $M$ based on a two-sided error, i.e. $M$ may indicate $0$ despite $x\in L$ and $1$ despite $x\notin L$. It is also possible to define complexity classes with one-sided error. In this case, $M(x)$ may still indicate, say, a false reject, but not a false accept. This leads to the definition of the complexity class $\mathrm{RP}$ (abbreviation for random polynomial-time). It holds that $L\in \mathrm{RP}$ if a polynomial-time PTM $M$ exists with \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 0. \end{align*} This is equivalent to \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] =1. \end{align*} An immediate consequence of the definition is the inclusion $\mathrm{RP} \subseteq \mathrm{NP}$, where $\mathrm{NP}$ is the complexity class of nondeterministically polynomial-time languages. Analogously, it holds that $L\in \mathrm{coRP}$ if a polynomial-time PTM $M$ exists with \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 1 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3 \end{align*} or, equivalently, \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 0] = 0 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] \ge 2/3. \end{align*} One can show both $\mathrm{RP}\subseteq \mathrm{BPP}$ and $\mathrm{coRP}\subseteq \mathrm{BPP}$. Members of $\mathrm{RP}$ give no false accepts, while members of $\mathrm{coRP}$ give no false rejects. For avoiding both false accepts and false rejects, i.e. all false answers, one has to use algorithms belonging to the complexity class $\mathrm{ZPP}$.

The complexity class $\mathrm{ZPP}$ of zero-sided error, expected polynomial-time languages consists of all languages $L$ for which there exists a $c\in\mathbb{R}$, $c>0$, such that for all inputs $x$ the average running time is at most $|x|^c$, while the probability of providing the correct answer is equal to $1$, i.e. \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 1 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 0] = 1. \end{align*} For $L\in \mathrm{ZPP}$, the probability that $M(x)$ does not terminate is equal to $0$. It holds $\mathrm{ZPP} = \mathrm{RP}\cap \mathrm{coRP}$.
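The direction $\mathrm{RP}\cap \mathrm{coRP} \subseteq \mathrm{ZPP}$ can be sketched as a Las Vegas loop. The two testers below are synthetic stand-ins with the assumed one-sided error behavior (not deciders for a real language): the answer returned is always correct, and only the running time is random.

```python
import random

# Las Vegas sketch of ZPP = RP /\ coRP: given an RP-type test (no
# false accepts) and a coRP-type test (no false rejects) for the same
# language, loop until one of them gives a definitive answer.

def rp_test(x_in_L, rng):
    # Says 1 only if x is in L; for x in L it says 1 with prob. 2/3.
    return 1 if x_in_L and rng.random() < 2 / 3 else 0

def corp_test(x_in_L, rng):
    # Says 0 only if x is not in L; then it says 0 with prob. 2/3.
    return 0 if (not x_in_L) and rng.random() < 2 / 3 else 1

def zpp_decide(x_in_L, rng):
    while True:
        if rp_test(x_in_L, rng) == 1:
            return 1            # RP never falsely accepts
        if corp_test(x_in_L, rng) == 0:
            return 0            # coRP never falsely rejects

rng = random.Random(7)
print(zpp_decide(True, rng), zpp_decide(False, rng))   # prints: 1 0
```

Each loop iteration terminates with probability at least $2/3$, so the expected number of iterations is constant; this is why the running time stays polynomial on average.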

==Improvement of Probabilistic Computations==

The definitions of probabilistic complexity classes given above use the specific value $2/3$ as required minimal probability. This somewhat arbitrarily chosen value can be replaced by any other value $1/2+\epsilon$, $\epsilon > 0$, without changing the essential meaning of the definitions. In the case of $\mathrm{RP}$, for example, an algorithm fulfilling \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M(x) = 1] \ge 2/3 \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M(x) = 1] = 0 \end{align*} iterated $m$ times, with the iterated algorithm $M'$ accepting as soon as one iteration yields $M(x) = 1$, leads to an algorithm fulfilling \begin{align*}\forall x \in L \colon & \,\, \mathrm{Prob}[M'(x) = 1] \ge 1-(1/3)^m \\ \forall x \notin L \colon & \,\, \mathrm{Prob}[M'(x) = 1] = 0, \end{align*} since a false reject occurs only if all $m$ independent iterations reject. In the same way, algorithms belonging to the complexity class $\mathrm{coRP}$ can be modified.
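A minimal simulation of this one-sided amplification; the test function is a synthetic stand-in with exactly the assumed acceptance probability $2/3$ for members of the language and no false accepts.

```python
import random

# Monte-Carlo sketch of RP amplification: repeating a one-sided test
# m times and accepting if ANY run accepts reduces the probability of
# a false reject from 1/3 to (1/3)**m.

def one_sided_test(x_in_L, rng):
    if not x_in_L:
        return 0                      # never a false accept
    return 1 if rng.random() < 2 / 3 else 0

def amplified(x_in_L, m, rng):
    return 1 if any(one_sided_test(x_in_L, rng) for _ in range(m)) else 0

rng = random.Random(42)
trials = 10000
errors = sum(amplified(True, 10, rng) == 0 for _ in range(trials))
print(errors / trials)   # false-reject rate, near (1/3)**10 ~ 1.7e-5
```

With $m=10$ the false-reject probability is about $1.7\cdot 10^{-5}$, so in $10^4$ trials one expects essentially no errors, while a single run would fail about a third of the time.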

Algorithms belonging to the complexity class $\mathrm{BPP}$ require some more effort for improving the probability of correctness. Here, an $m$-fold repetition is used, whereby the results $b_1,\ldots,b_m$ are evaluated using a voting mechanism. Assuming that $M(x)$ decides the predicate $x\in L$ by producing the result $0$ or $1$ and that $m$ is an odd number, the modified algorithm gives $1$ if $\sum_i b_i > m/2$ and $0$ otherwise. The error probability of this voting algorithm can be bounded using Chernoff bounds as follows.

Let $x_1,\ldots,x_m$ be independent random variables having the same probability distribution with image set $\{0,1\}$. For $p:= \mathrm{Prob}[x_i=1]$, $X:=\sum_{i=1}^mx_i$, and $\Theta \in [0,1]$ it holds $$\begin{array}{rcl} \mathrm{Prob}[X\ge (1+\Theta)pm] &\le & \exp\left(-{\Theta^2\over 3}pm\right) \\ \mathrm{Prob}[X\le (1-\Theta)pm] &\le & \exp\left(-{\Theta^2\over 2}pm\right) \end{array}$$ The random variables $x_i$ are now interpreted as error variables, i.e. $x_i=1$ if the $i$-th repetition of the decision algorithm gives a wrong answer and $x_i=0$ otherwise. According to the definition of the class $\mathrm{BPP}$, it holds $p=1-2/3=1/3$. Taking $\Theta=1/2$ in the first Chernoff bound gives $$\mathrm{Prob}[X\ge m/2] \le \exp\left(-{\Theta^2\over 3}pm\right) = \exp\left(-{1\over 36}m\right), $$ i.e. the error of the voting algorithm is smaller than or equal to $\exp(-m/36)$.
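The voting mechanism and the Chernoff-type bound can be checked empirically. The noisy test below is a synthetic stand-in that errs with probability exactly $1/3$ per run, as in the definition of $\mathrm{BPP}$.

```python
import math
import random

# Empirical check of majority-vote amplification: the majority of m
# independent runs of a test with per-run error 1/3 errs with
# probability at most exp(-m/36).

def noisy_test(correct_bit, rng):
    # Returns the correct bit with probability 2/3, the wrong one with 1/3.
    return correct_bit if rng.random() < 2 / 3 else 1 - correct_bit

def majority_vote(correct_bit, m, rng):
    ones = sum(noisy_test(correct_bit, rng) for _ in range(m))
    return 1 if ones > m / 2 else 0

rng = random.Random(1)
m, trials = 25, 4000
wrong = sum(majority_vote(1, m, rng) != 1 for _ in range(trials))
print(wrong / trials)       # empirical error, well below the bound
print(math.exp(-m / 36))    # Chernoff-type bound, roughly 0.5 for m = 25
```

For $m=25$ the bound $\exp(-25/36)\approx 0.5$ is quite loose; the empirical error of the majority vote is already only a few percent, illustrating that the bound certifies but understates the improvement.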

==Applications of Probabilistic Computations==

Some examples of probabilistic algorithms are given below. Only their basic ideas are presented.

* A probabilistic primality test for a natural number $n\in\mathbb{N}$ can be realized as follows: Generate random natural numbers $k_1, \ldots, k_r$ with $1 < k_i < n$. For each $k_i$, calculate the greatest common divisor $g_i:=\gcd(n,k_i)$. If there exists an $i$ with $g_i>1$, output $0$ for 'not prime'. Otherwise, output $1$. Note that this simple test may erroneously output $1$ for composite numbers having only large prime factors.
* The question whether two polynomials $f(x)$, $g(x)$ are equal on a region $D$ can be reduced to the question whether $f(x)-g(x)=0$ for all $x\in D$. Thus, the algorithm generates random numbers $x_1, \ldots, x_r$ with $x_i \in D$. For each $x_i$, the algorithm calculates the difference $d_i:= f(x_i)-g(x_i)$. If there exists an $i$ with $d_i\neq 0$, output $0$ representing 'unequal'. Otherwise, output $1$ representing 'equal'.
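The polynomial-equality idea can be sketched in a few lines. Working with coefficient lists modulo a large prime is an assumption made here for the illustration (the article leaves the domain $D$ abstract); a nonzero polynomial of degree $\le d$ has at most $d$ roots, so a random sample point exposes a difference with high probability.

```python
import random

# Randomized equality test for polynomials over the integers mod a
# prime P. coeffs[i] is the coefficient of x**i.

P = 10**9 + 7          # a large prime modulus (assumed for the example)

def poly_eval(coeffs, x):
    # Horner's rule, evaluated modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def probably_equal(f, g, r=20, rng=random.Random(0)):
    for _ in range(r):
        x = rng.randrange(P)
        if poly_eval(f, x) != poly_eval(g, x):
            return 0       # definitely unequal
    return 1               # probably equal

f = [1, 2, 1]              # 1 + 2x + x^2
g = [1, 2, 1]              # (x + 1)^2 expanded
h = [0, 0, 1]              # x^2
print(probably_equal(f, g))   # 1: identical polynomials
print(probably_equal(f, h))   # 0: a random point separates them
```

This is a one-sided test in the sense of $\mathrm{coRP}$: an output of $0$ is always correct, while an output of $1$ is wrong only with probability at most $(d/P)^r$ for polynomials of degree $\le d$.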

==References==

* [DK2000] D.-Z. Du, K.-I. Ko, "Theory of Computational Complexity", Wiley, 2000
* [AB2009] S. Arora, B. Barak, "Computational Complexity: A Modern Approach", Cambridge University Press, 2009
* [BC2006] D.P. Bovet, P. Crescenzi, "Introduction to the Theory of Complexity", 2006
How to Cite This Entry:
Joachim Draeger/sandbox. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Joachim_Draeger/sandbox&oldid=28760