# Holographic proof

*transparent proof, instantly checkable proof, probabilistically checkable proof, PCP*

A form in which every proof or record of computation can be presented. This form has a remarkable property: the presence of any errors (essential deviations from the form or other requirements) is instantly apparent after checking just a negligible fraction of bits of the proof. Traditional proofs are verifiable in time that is a constant power (say, quadratic) of the proof length $n$. Verifying holographic proofs takes a poly-logarithmic, i.e., a constant power of the logarithm of $n$, number of bit operations. This is a tiny fraction: the binary logarithm of the number of atoms in the known Universe is under $300$.
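The scale of this logarithmic bound can be checked directly. A minimal sketch, assuming the common rough estimate of about $10^{80}$ atoms in the observable Universe (the estimate itself is not from the text):

```python
import math

# A common estimate puts the number of atoms in the observable
# Universe at roughly 10**80; its binary logarithm is well under 300.
log_atoms = math.log2(10 ** 80)  # about 265.75
```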

(Of the names in the header, the phrase "probabilistically checkable" is somewhat misleading, since both holographic and traditional proofs can be checked either deterministically or probabilistically, though randomness does not speed up the checking of traditional proofs.)

There are four caveats: First, the verification is a so-called Monte-Carlo algorithm (cf. also Monte-Carlo method). It makes certain random choices, and any single round of verification has a chance of overlooking essential errors due to an unlucky choice. This chance never exceeds $50$%, regardless of the nature of errors, and vanishes as $1/2^k$ with $k$ independent rounds.
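The amplification by independent rounds can be sketched as follows. This is a toy model, not the actual verifier: the hypothetical `verify` function abstracts a round as a coin flip that detects an essential error with probability at least $1/2$.

```python
import random

def verify(proof_has_error: bool, rounds: int = 20) -> bool:
    """Toy model of the Monte-Carlo verifier: each round inspects a few
    random bits and, if the proof has an essential error, detects it
    with probability at least 1/2.  k independent rounds overlook the
    error with probability at most 1/2**k."""
    for _ in range(rounds):
        if proof_has_error and random.random() < 0.5:
            return False  # error detected in this round: reject
    return True  # every round passed: accept
```

With $64$ rounds the chance of overlooking an essential error is at most $2^{-64}$, i.e., negligible in practice.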

Second, only essential errors (i.e., not correctable from the context) have this probabilistic detection guarantee. There is a proofreader procedure, also running in poly-logarithmic Monte-Carlo time, that confirms or corrects any given bit in any proof accepted by the verifier. Of course, the instant verification can only assure the success of this proofreading; it has no time to actually perform it for all bits. An enhanced version can be made very tolerant: it can guarantee that any errors affecting a small, say $5$%, fraction of bits will be inessential, correctable by the proofreader and tolerated by the verifier with high probability.
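The distinction between essential and inessential errors can be illustrated with a deliberately crude redundancy scheme. Actual holographic proofs use polynomial codes, not repetition; the `encode`/`proofread` functions below are hypothetical and serve only to show a proofreader confirming or correcting a single queried bit.

```python
def encode(bits, r=5):
    # Toy redundancy: store each bit of the proof r times.
    return [b for b in bits for _ in range(r)]

def proofread(code, i, r=5):
    """Confirm or correct bit i of the original proof by majority vote
    over its r stored copies; up to (r - 1) // 2 corrupted copies per
    bit are inessential, i.e. correctable from context."""
    copies = code[i * r:(i + 1) * r]
    return int(sum(copies) > r // 2)

word = encode([1, 0, 1])
word[1] = 0  # corrupt one copy of bit 0: an inessential error
```

Note that the proofreader touches only the $r$ copies of the queried bit, mirroring the poly-logarithmic locality of the real procedure.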

The third caveat is trivial but often overlooked. The claim must be formal and self-contained: one cannot just write a mathematical theorem in English. The translation to a formal system, e.g., Zermelo–Fraenkel set theory, may be quite involved. Suppose one wants to state the Jordan theorem that every closed curve breaks a plane into two disconnected parts. One must give a number of concepts from advanced calculus just to explain what a curve is. This requires some background from topology, algebra, geometry, etc. One must add some set theory to formalize this background. Throw in some necessary logic, parsing, syntax procedures and one obtains a book instead of the above informal one-line formulation.

Fourth, the claim which the proof is to support (or the input/output, the matching of which the computation is to confirm) must also be given in error-correcting form. Otherwise one could supply a remarkable claim with a perfect proof of a useless variation of it, obtained by altering a single random bit. Were the claim given in a plain format, such tiny but devastating discrepancies could not be noticed without reading a significant fraction of it, for which the instant verifier has no time. The error-correcting form does not have to be special: any classical (e.g., Reed–Solomon) code, correcting a constant fraction of errors in nearly linear time, will do. The verifier then confirms that the code is within the required distance of a unique codeword encoding the claim supported by the given perfect (when filtered through the proofreader) proof.
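The key property of such codes is that distinct claims yield codewords differing in a large fraction of positions. A minimal Reed–Solomon-style sketch over a small prime field (the modulus $97$ and the specific messages are illustrative choices):

```python
P = 97  # a small prime modulus, chosen only for illustration

def rs_encode(msg, n):
    """Reed-Solomon-style encoding over GF(P): treat the k symbols of
    msg as coefficients of a degree-<k polynomial and record its
    values at n fixed points."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

c1 = rs_encode([3, 1, 4], 10)  # an encoded claim
c2 = rs_encode([3, 1, 5], 10)  # the claim with one symbol altered
# Distinct degree-<3 polynomials agree on at most 2 of the 10 points,
# so the two codewords differ in at least 8 positions.
diff = sum(a != b for a, b in zip(c1, c2))
```

A one-symbol change in the plain claim thus becomes a change visible at a constant fraction of randomly sampled positions of the codeword.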

Despite these caveats the result is surprising. Some known mathematical proofs are so huge that no single human has been able to verify them. Examples are the four-colour theorem (verified with the aid of a computer, see [a1] and Four-colour problem), the classification of simple finite groups (broken into many pieces, each supposedly verified by one member of a large group of researchers, see [a5] and Simple finite group), and others. Even more problematic seems the verification of large computations. In a holographic form, however, the verification time barely grows at all, even if the proof fills up the whole Universe.

## Some details.

Transforming arbitrary proofs into a holographic form starts with reducing an arbitrary proof system to a standard (not yet holographic) one: the domino pattern. A domino is a directed graph (cf. also Graph, oriented) of two nodes coloured with a fixed (independent of the proof size) set of colours. The nodes are renamed $1$ and $2$, so that the domino is the same wherever it appears in the graph. The domino problem is to restore a colouring of a graph, given the colouring of its first segment and a set of dominos appearing in this graph.
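Checking that a given colouring uses only allowed dominos is straightforward; the hard part (restoring a colouring) is what carries the proof. A hypothetical sketch with made-up colours and an arbitrary allowed set:

```python
# A hypothetical set of allowed dominos: ordered pairs of colours that
# may appear on a directed edge.
ALLOWED = {("red", "blue"), ("blue", "blue"), ("blue", "red")}

def check_colouring(edges, colouring):
    """Accept iff every directed edge of the graph carries one of the
    allowed dominos (the pair of its endpoint colours)."""
    return all((colouring[u], colouring[v]) in ALLOWED for u, v in edges)

edges = [(0, 1), (1, 2)]
good = {0: "red", 1: "blue", 2: "red"}
bad = {0: "red", 1: "red", 2: "blue"}
```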

The graphs are taken from a standard family: only sizes and colourings can differ. An example is the family of shuffle exchange graphs. Their nodes are enumerated by binary strings (addresses) $x$. The neighbours of $x$ are obtained by simple manipulations of its bits: adding $1$ to the first (or last) bit or shifting bits left. These operations (or their constant combinations) define the edges of the graph. The graphs are finite: the length of $x$ is fixed for each. In the actual construction, $x$ is broken into several variables, so it is convenient to shuffle bits just within the first variable and permute variables (cycle-shifts of all variables and of the first two suffice). These graphs may be replaced by any other family with edges expressed as linear transformations of variables, as long as it has sufficient connectivity to implement an efficient sorting network.
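The bit manipulations defining the edges can be made concrete. A sketch of one common variant of the shuffle-exchange neighbourhood (flip the last bit, cyclically shift left); the text also mentions variants acting on the first bit:

```python
def neighbours(x: str):
    """Neighbours of node x (a binary string) in one common variant of
    the shuffle-exchange graph: flip the last bit (exchange) and
    cyclically shift all bits left (shuffle)."""
    exchange = x[:-1] + ("1" if x[-1] == "0" else "0")
    shuffle = x[1:] + x[0]
    return [exchange, shuffle]
```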

A proof system is an algorithm that verifies a proof (given as input) and outputs the proven statement. Such an algorithm can be efficiently simulated, first on a special form of a random access machine and then on a sorting network. This allows one to reduce the problem of finding a proof in any particular proof system to the above standard domino problem.

Then comes the arithmetization stage. The colouring is a function from nodes of the graph to colours. Nodes are interpreted as elements of a field (a finite field or a segment of the field of rationals) and the colouring is a polynomial on it. This function is then extended from its original domain to a larger one (a larger field or a larger segment of rationals). The extension is done using the same expression, i.e., without increasing the degree of the colouring polynomial.
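The low-degree extension can be sketched by Lagrange interpolation over a small prime field (the modulus $101$ and the sample colouring are illustrative): the unique polynomial of degree below the domain size reproduces the colouring on the original points and is defined, with the same degree, everywhere in the larger field.

```python
P = 101  # a small prime field, chosen only for illustration

def extend(values, x):
    """Evaluate at x the unique polynomial of degree < len(values)
    taking the given values on points 0, 1, ..., len(values)-1
    (Lagrange interpolation over GF(P))."""
    m = len(values)
    total = 0
    for i, v in enumerate(values):
        num, den = 1, 1
        for j in range(m):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        total = (total + v * num * pow(den, P - 2, P)) % P
    return total

colouring = [2, 5, 3]  # colours of nodes 0, 1, 2
# extend(colouring, x) agrees with the colouring for x in {0, 1, 2}
# and is also defined at every other point of the field.
```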

The condition that all dominos are restricted to given types is also expressed as equality to $0$ of a low-degree polynomial $P(x)$ of a node $x=x_1,\dots,x_k$, its neighbours, and their colours. Over the rationals, one needs to check $0=\sum_{x_1,\dots,x_k}P^2(x)$, where $x_i$ vary over the original domain. (In finite fields, constructing the equation to check is trickier.) A transparent proof is the collection of values of this expression, where summation is taken only over the first several variables. The other variables are taken with all values that exist in the extended domain.
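A toy two-variable instance over the integers shows the shape of these partial sums and the consistency condition between successive ones (the polynomial `Psq` is made up for illustration; the real construction tabulates sums over the first variables, here one variable is summed out):

```python
def Psq(x1, x2):
    # An illustrative squared constraint polynomial; the sum of such
    # squares is 0 exactly when every local condition holds.
    return (x1 * x2 - x1) ** 2

original = [0, 1]          # the original domain of each variable
extended = list(range(5))  # the larger extension domain

# Part of the transparent proof: partial sums with x2 summed out,
# tabulated as a low-degree polynomial in x1 over the extended domain.
g = {x1: sum(Psq(x1, x2) for x2 in original) for x1 in extended}

# Consistency of successive sums: the full sum over the original
# domain must equal the sum of the tabulated partial sums.
total = sum(Psq(x1, x2) for x1 in original for x2 in original)
```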

The verification consists of statistical checking that all partial sums (with possibly a small fraction of deviating points) are polynomials of degree that is low compared to the size of the extended domain. Then the consistency of each sum with its successor (having one more variable summed over) is checked too. This is easy to do, since two low-degree polynomials cannot differ in only a small fraction of points: e.g., two distinct straight lines must differ in all points but one.
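The fact about straight lines is immediate to check numerically (the two lines below are arbitrary illustrative choices):

```python
# Two distinct straight lines (degree-1 polynomials) agree in at most
# one point, so a low-degree table corrupted in a small fraction of
# points still determines a unique nearby polynomial.
f = lambda x: 2 * x + 1
h = lambda x: 3 * x - 4
agreements = [x for x in range(100) if f(x) == h(x)]
```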

Of course, all parameters must be finely tuned and many other details addressed. The above description is just a general sketch.

## History.

Holographic proofs came as a result of a number of previous major advances. Error-correcting codes (cf. also Error-correcting code) based on low-degree polynomials and their Fourier transforms have been a major area of research since the 1950s. Surprisingly, these techniques were not used in general computer theory until the middle of the 1980s; one of the first such uses was for generating pseudo-random strings in [a6]. A closer set of preceding results was a number of remarkable papers on the relations between high-level complexity classes associated with interactive proof systems. This exciting development was initiated by N. Nisan in a 1989 electronically distributed article. It was quickly followed by improved theorems, which contained powerful techniques used later in the construction of holographic proofs. [a7], [a8], and, especially, [a2] were the most relevant papers. [a3] introduced holographic proofs (called transparent proofs there). Another interesting application of similar techniques, due to [a4], is in proving NP-completeness of approximation problems (cf. also Complexity theory). These results were significantly extended in a number of subsequent papers.

#### References

[a1] K. Appel, W. Haken, J. Koch, "Every planar map is four colorable. Part II: Reducibility", Illinois J. Math. 21 (1977), pp. 491–567

[a2] L. Babai, L. Fortnow, C. Lund, "Non-deterministic exponential time has two-prover interactive protocols", Comput. Complexity 1 (1991), pp. 3–40

[a3] L. Babai, L. Fortnow, L. Levin, M. Szegedy, "Checking computations in polylogarithmic time", 23rd ACM Symp. Theory of Computing, New Orleans, May 1991, ACM (1991), pp. 21–31

[a4] U. Feige, S. Goldwasser, L. Lovász, S. Safra, M. Szegedy, "Interactive proofs and the hardness of approximating cliques", J. Assoc. Comput. Mach. 43:2 (1996), pp. 268–292

[a5] D. Gorenstein, "The enormous theorem", Scientific Amer. 253:6 (1985), pp. 104–115

[a6] L. Levin, "One-way functions and pseudorandom generators", Combinatorica 7:4 (1987), pp. 357–363

[a7] C. Lund, L. Fortnow, H. Karloff, N. Nisan, "Algebraic methods for interactive proof systems", J. Assoc. Comput. Mach. 39:4 (1992), pp. 859–868

[a8] A. Shamir, "IP=PSPACE", J. Assoc. Comput. Mach. 39:4 (1992), pp. 869–877

**How to Cite This Entry:**

Holographic proof.

*Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Holographic_proof&oldid=33124