Factorization of polynomials

From Encyclopedia of Mathematics
 
''factoring polynomials''
  
 
Since C.F. Gauss it is known that an arbitrary [[Polynomial|polynomial]] over a [[Field|field]] or over the integers can be factored into irreducible factors, essentially uniquely (cf. also [[Factorial ring|Factorial ring]]). For an efficient version of Gauss' theorem, one asks to find these factors algorithmically, and to devise such algorithms with low cost.
  
Based on a precocious uncomputability result in [[#References|[a13]]], one can construct sufficiently bizarre (but still  "computable" ) fields over which even square-freeness of polynomials is undecidable in the sense of A.M. Turing (cf. also [[Undecidability|Undecidability]]; [[Turing machine|Turing machine]]). But for the fields of practical interest, there are algorithms that perform extremely well, both in theory and practice. Of course, factorization of integers and thus also in $\mathbf Z [ x ]$ remains difficult; much of cryptography (cf. [[Cryptology|Cryptology]]) is based on the belief that it will remain so. The base case concerns factoring univariate polynomials over a [[Finite field|finite field]] $\mathbf{F} _ { q }$ with $q$ elements, where $q$ is a prime power. A first step is to make the input polynomial $f \in \mathbf{F} _ { q } [ x ]$, of degree $n$, square-free. This is easy to do by computing $\operatorname { gcd } ( f , \partial f / \partial x )$ and possibly extracting $p$th roots, where $p = \operatorname { char } \mathbf{F} _ { q }$. The main tool of all algorithms is the [[Frobenius automorphism|Frobenius automorphism]] $\sigma : R \rightarrow R$ on the $\mathbf{F} _ { q }$-algebra $R = \mathbf{F} _ { q } [ x ] / ( f )$. The pioneering algorithms are due to E.R. Berlekamp [[#References|[a1]]], [[#References|[a2]]], who represents $\sigma$ by its matrix on the basis $1 , x , x ^ { 2 } , \ldots , x ^ { n - 1 } ( \operatorname { mod } f )$ of $R$. A second approach, due to D.G. Cantor and H. Zassenhaus [[#References|[a3]]], is to compute $\sigma$ by repeated squaring. A third method (J. von zur Gathen and V. Shoup [[#References|[a8]]]) uses the so-called polynomial representation of $\sigma$ as its basic tool. The last two algorithms are based on Gauss' theorem that $x ^ { q ^ { d } } - x$ is the product of all monic irreducible polynomials in $\mathbf{F} _ { q } [ x ]$ whose degree divides $d$. 
Thus, $f _ { 1 } = \operatorname { gcd } ( x ^ {q } - x , f )$ is the product of all linear factors of $f$; next, $f _ { 2 } = \operatorname { gcd } ( x ^ { q ^ { 2 } } - x , f / f _ { 1 } )$ consists of all quadratic factors, and so on. This yields the distinct-degree factorization $( f _ { 1 } , f _ { 2 } , \ldots )$ of $f$.
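The distinct-degree stage can be sketched in a few dozen lines. The following is a minimal pure-Python illustration over a prime field $\mathbf{F} _ { p }$ (not taken from any of the cited references; all function names are invented for this sketch), with polynomials stored as coefficient lists, lowest degree first.

```python
# Illustrative sketch: distinct-degree factorization over F_p for a monic,
# square-free input polynomial. Coefficient lists, lowest degree first.

def trim(a):
    """Drop zero leading coefficients (stored at the end of the list)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b, p):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return trim(r)

def polymod(a, b, p):
    a = a[:]
    while len(a) >= len(b):
        c = a[-1] * pow(b[-1], -1, p) % p
        s = len(a) - len(b)
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        trim(a)
    return a

def polydiv(a, b, p):
    """Exact quotient a / b (assumes b divides a)."""
    a, q = a[:], [0] * max(len(a) - len(b) + 1, 0)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        c = a[-1] * inv % p
        s = len(a) - len(b)
        q[s] = c
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        trim(a)
    return trim(q)

def polysub(a, b, p):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) -
                  (b[i] if i < len(b) else 0)) % p for i in range(n)])

def polygcd(a, b, p):
    while b:
        a, b = b, polymod(a, b, p)
    inv = pow(a[-1], -1, p)
    return [x * inv % p for x in a]      # normalized to be monic

def polypowmod(a, e, f, p):
    """a**e mod f by repeated squaring."""
    r, a = [1], polymod(a, f, p)
    while e:
        if e & 1:
            r = polymod(polymul(r, a, p), f, p)
        a = polymod(polymul(a, a, p), f, p)
        e >>= 1
    return r

def distinct_degree(f, p):
    """List of (d, product of all irreducible factors of degree d) for a
    monic square-free f, via f_d = gcd(x**(p**d) - x, f) as in the text."""
    out, h, d = [], [0, 1], 0            # h = x
    while len(f) > 1:
        d += 1
        h = polypowmod(h, p, f, p)       # now h = x**(p**d) mod f
        g = polygcd(polysub(h, [0, 1], p), f, p)
        if len(g) > 1:
            out.append((d, g))
            f = polydiv(f, g, p)
            h = polymod(h, f, p)
    return out
```

For example, for $f = x ^ { 4 } + 2 = ( x + 1 ) ( x + 2 ) ( x ^ { 2 } + 1 )$ over $\mathbf{F} _ { 3 }$, `distinct_degree([2, 0, 0, 0, 1], 3)` returns the degree-$1$ part $x ^ { 2 } + 2$ and the degree-$2$ part $x ^ { 2 } + 1$.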
  
The equal-degree factorization step splits any of the resulting factors $f_i$. This is only necessary if $\operatorname { deg } f _ { i } > i$. Since all irreducible factors of $f_i$ have degree $i$, the algebra $R _ { i } = \mathbf{F} _ { q } [ x ] / ( f _ { i } )$ is a direct product of (at least two) copies of $\mathbf{F} _ { q ^ { i } }$. A random element $a$ of $R_i$ is likely to have [[Legendre symbol|Legendre symbol]] $+ 1$ in some and $- 1$ in other copies; then $\operatorname { gcd } ( a ^ { ( q ^ { i } - 1 ) / 2 } - 1 , f _ { i } )$ is a non-trivial factor of $f_i$. To describe the cost of these methods, one uses fast arithmetic, so that polynomials over $\mathbf{F} _ { q }$ of degree up to $n$ can be multiplied with $O ( n \operatorname { log } n \operatorname { loglog } n )$ operations in $\mathbf{F} _ { q }$, or $O ^ { \sim } ( n )$ for short, where the so-called "soft O" hides factors that are logarithmic in $n$. Furthermore, $\omega$ is an exponent for matrix multiplication, with the current (in 2000) world record $\omega < 2.376$, from [[#References|[a4]]].
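The splitting just described can be illustrated as follows: a self-contained pure-Python sketch with invented names, for odd $p$; for reproducibility the candidates $a = x + c$ are tried deterministically here rather than at random.

```python
# Illustrative sketch of equal-degree splitting over F_p (p odd) in the
# Cantor-Zassenhaus style. Polynomials are coefficient lists, lowest
# degree first; f is monic, square-free, all irreducible factors degree i.

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b, p):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return trim(r)

def polymod(a, b, p):
    a = a[:]
    while len(a) >= len(b):
        c = a[-1] * pow(b[-1], -1, p) % p
        s = len(a) - len(b)
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        trim(a)
    return a

def polysub(a, b, p):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) -
                  (b[i] if i < len(b) else 0)) % p for i in range(n)])

def polygcd(a, b, p):
    while b:
        a, b = b, polymod(a, b, p)
    inv = pow(a[-1], -1, p)
    return [x * inv % p for x in a]      # monic gcd

def polypowmod(a, e, f, p):
    r, a = [1], polymod(a, f, p)
    while e:
        if e & 1:
            r = polymod(polymul(r, a, p), f, p)
        a = polymod(polymul(a, a, p), f, p)
        e >>= 1
    return r

def equal_degree_split(f, i, p):
    """Return a proper monic factor of f via gcd(a**((p**i - 1)//2) - 1, f),
    trying a = x + c deterministically; None if every candidate fails."""
    q = p ** i
    for c in range(p):
        b = polypowmod([c, 1], (q - 1) // 2, f, p)   # a = x + c
        g = polygcd(polysub(b, [1], p), f, p)
        if 1 <= len(g) - 1 < len(f) - 1:             # proper factor found
            return g
    return None
```

For instance, over $\mathbf{F} _ { 5 }$ the polynomial $f = x ^ { 2 } + 3 x + 2 = ( x + 1 ) ( x + 2 )$ is split by the very first candidate $a = x$, since $\operatorname { gcd } ( x ^ { 2 } - 1 , f ) = x + 1$.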
  
All algorithms first compute $x ^ { q }$ modulo $f$, with $O ^ { \sim } ( n \operatorname { log } q )$ operations in $\mathbf{F} _ { q }$. One can show that in an appropriate model, $\Omega ( \operatorname { log } q )$ operations are necessary, even for $n = 2$. The further cost is as follows:<table border="0" cellpadding="0" cellspacing="0" style="background-color:black;"> <tr><td> <table border="0" cellpadding="4" cellspacing="1" style="background-color:black;"> <tr> <td colname="1" colspan="1" style="background-color:white;"></td> <td colname="2" colspan="1" style="background-color:white;">$\mathbf{F} _ { q } [ x ] / ( f )$ as</td> <td colname="3" colspan="1" style="background-color:white;">Cost in $O ^ { \sim }$</td> </tr> <tr> <td colname="1" colspan="1" style="background-color:white;">Berlekamp</td> <td colname="2" colspan="1" style="background-color:white;">$\mathbf{F} _ { q }$-vector space</td> <td colname="3" colspan="1" style="background-color:white;">$n ^ { \omega }$</td> </tr> <tr> <td colname="1" colspan="1" style="background-color:white;">Cantor–Zassenhaus</td> <td colname="2" colspan="1" style="background-color:white;">multiplicative semi-group</td> <td colname="3" colspan="1" style="background-color:white;">$n ^ { 2 } \operatorname { log } q$</td> </tr> <tr> <td colname="1" colspan="1" style="background-color:white;">von zur Gathen–Shoup</td> <td colname="2" colspan="1" style="background-color:white;">$\mathbf{F} _ { q }$-algebra</td> <td colname="3" colspan="1" style="background-color:white;">$n ^ { 2 }$</td> </tr> </table>
  
 
</td></tr> </table>
 
 
For small fields, even better algorithms exist.
  
The next problem is factorization of some $f \in \mathbf Q [ x ]$. The central tool is Hensel lifting, which lifts a factorization of $f$ modulo an appropriate prime number $p$ to one modulo a large power $p ^ { k }$ of $p$. Irreducible factors of $f$ will usually factor modulo $p$, according to the [[Chebotarev density theorem|Chebotarev density theorem]]. One can then try various factor combinations of the irreducible factors modulo $p ^ { k }$ to recover a true factor in $\mathbf{Q} [ x ]$. This works quite well in practice, at least for polynomials of moderate degree, but uses exponential time on some inputs (for example, on the Swinnerton-Dyer polynomials). In a celebrated paper, A.K. Lenstra, H.W. Lenstra, Jr. and L. Lovász [[#References|[a12]]] introduced basis reduction of integer lattices (cf. [[LLL basis reduction method|LLL basis reduction method]]), and applied this to obtain a polynomial-time algorithm. Their reduction method has since found many applications, notably in cryptanalysis (cf. also [[Cryptology|Cryptology]]). The method of [[#References|[a10]]] promises even faster factorization.
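A single quadratic Hensel step can be sketched as follows. This is a minimal illustration with invented names, following the standard lifting pattern: from $f \equiv g h \pmod { m }$ with $s g + t h \equiv 1 \pmod { m }$ and $h$ monic, it produces a factorization modulo $m ^ { 2 }$.

```python
# Illustrative sketch of one quadratic Hensel step over Z. Polynomials are
# integer coefficient lists, lowest degree first; h is assumed monic.

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def padd(a, b, M):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % M for i in range(n)])

def psub(a, b, M):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) -
                  (b[i] if i < len(b) else 0)) % M for i in range(n)])

def pmul(a, b, M):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % M
    return trim(r)

def pdivmod(a, b, M):
    """Quotient and remainder of a by a monic b, coefficients mod M."""
    a, q = [x % M for x in a], [0] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        c = a[-1]                     # leading coefficient of b is 1
        s = len(a) - len(b)
        q[s] = c
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % M
        trim(a)
    return trim(q), a

def hensel_step(f, g, h, s, t, m):
    """Given f = g*h (mod m), s*g + t*h = 1 (mod m), h monic, return
    (g*, h*) with f = g* h* (mod m**2)."""
    M = m * m
    e = psub(f, pmul(g, h, M), M)          # error term f - g*h mod m**2
    q, r = pdivmod(pmul(s, e, M), h, M)
    g2 = padd(padd(g, pmul(t, e, M), M), pmul(q, g, M), M)
    h2 = padd(h, r, M)
    return g2, h2
```

For instance, $x ^ { 2 } - 2 \equiv ( x + 4 ) ( x + 3 ) \pmod { 7 }$, with Bézout cofactors $s = 1$, $t = 6$, lifts to $( x + 39 ) ( x + 10 ) \pmod { 49 }$.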
  
 
The next task is the factorization of bivariate polynomials. It can be solved in a similar fashion, with Hensel lifting, say, modulo one variable, and an appropriate version of basis reduction, which is easy in this case. Algebraic extensions of the ground field are handled similarly.
 
Multivariate polynomials pose a new type of problem: how to represent them? The dense representation, where each term up to the degree is written out, is often too long. One would like to work with the sparse representation, using only the non-zero coefficients. The methods discussed above can be adapted and work reasonably well on many examples, but no guarantees of polynomial time are given. Two new ingredients are required. The first are efficient versions (due to E. Kaltofen and von zur Gathen) of Hilbert's irreducibility theorem (cf. also [[Hilbert theorem|Hilbert theorem]]). These versions say that if one reduces from many to two variables with a certain type of random linear substitution, then each irreducible factor is very likely to remain irreducible. The second ingredient is an even more concise representation, namely by a black box which returns the polynomial's value at any requested point. A highlight of this theory is the random polynomial-time factorization method in [[#References|[a11]]].
  
Each major computer algebra system has some variant of these methods implemented. Special-purpose software can factor huge polynomials, for example of degree more than one million over $\mathbf{F} _ { 2 }$. Several textbooks describe the details of some of these methods, e.g. [[#References|[a9]]], [[#References|[a5]]], [[#References|[a6]]], [[#References|[a14]]].
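As a small illustration with one such system (SymPy, chosen here only as an example), the polynomial $x ^ { 4 } + 1$ is irreducible over the rationals but factors modulo every prime:

```python
# A short demonstration with SymPy (one such system, chosen only as an
# example): x**4 + 1 is irreducible over Q yet factors modulo every prime.
from sympy import factor, symbols

x = symbols('x')
print(factor(x**4 + 1))              # x**4 + 1 (irreducible over Q)
print(factor(x**4 + 1, modulus=2))   # (x + 1)**4
print(factor(x**4 + 1, modulus=17))  # four linear factors mod 17
```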
  
 
Factorization of polynomials modulo a composite number presents some surprises, such as the possibility of exponentially many irreducible factors, which can nevertheless be determined in polynomial time, in an appropriate data structure; see [[#References|[a7]]].
  
 
====References====
<table><tr><td valign="top">[a1]</td> <td valign="top">  E.R. Berlekamp,  "Factoring polynomials over finite fields"  ''Bell Syst. Techn. J.'' , '''46'''  (1967)  pp. 1853–1859</td></tr><tr><td valign="top">[a2]</td> <td valign="top">  E.R. Berlekamp,  "Factoring polynomials over large finite fields"  ''Math. Comput.'' , '''24''' :  11  (1970)  pp. 713–735</td></tr><tr><td valign="top">[a3]</td> <td valign="top">  D.G. Cantor,  H. Zassenhaus,  "A new algorithm for factoring polynomials over finite fields"  ''Math. Comput.'' , '''36''' :  154  (1981)  pp. 587–592</td></tr><tr><td valign="top">[a4]</td> <td valign="top">  D. Coppersmith,  S. Winograd,  "Matrix multiplication via arithmetic progressions"  ''J. Symbolic Comput.'' , '''9'''  (1990)  pp. 251–280</td></tr><tr><td valign="top">[a5]</td> <td valign="top">  I.E. Shparlinski,  "Finite fields: theory and computation" , Kluwer Acad. Publ.  (1999)</td></tr><tr><td valign="top">[a6]</td> <td valign="top">  J. von zur Gathen,  J. Gerhard,  "Modern computer algebra" , Cambridge Univ. Press  (1999)</td></tr><tr><td valign="top">[a7]</td> <td valign="top">  J. von zur Gathen,  S. Hartlieb,  "Factoring modular polynomials"  ''J. Symbolic Comput.'' , '''26''' :  5  (1998)  pp. 583–606</td></tr><tr><td valign="top">[a8]</td> <td valign="top">  J. von zur Gathen,  V. Shoup,  "Computing Frobenius maps and factoring polynomials"  ''Comput. Complexity'' , '''2'''  (1992)  pp. 187–224</td></tr><tr><td valign="top">[a9]</td> <td valign="top">  K.O. Geddes,  S.R. Czapor,  G. Labahn,  "Algorithms for computer algebra" , Kluwer Acad. Publ.  (1992)</td></tr><tr><td valign="top">[a10]</td> <td valign="top">  M. van Hoeij,  "Factoring polynomials and the knapsack problem"  ''www.math.fsu.edu/~hoeij/knapsack/paper/knapsack.ps''  (2000)</td></tr><tr><td valign="top">[a11]</td> <td valign="top">  E. Kaltofen,  B.M. 
Trager,  "Computing with polynomials given by black boxes for their evaluations: Greatest common divisors, factorization, separation of numerators and denominators"  ''J. Symbolic Comput.'' , '''9'''  (1990)  pp. 301–320</td></tr><tr><td valign="top">[a12]</td> <td valign="top">  A.K. Lenstra,  H.W. Lenstra, Jr.,  L. Lovász,  "Factoring polynomials with rational coefficients"  ''Math. Ann.'' , '''261'''  (1982)  pp. 515–534</td></tr><tr><td valign="top">[a13]</td> <td valign="top">  B.L. van der Waerden,  "Eine Bemerkung über die Unzerlegbarkeit von Polynomen"  ''Math. Ann.'' , '''102'''  (1930)  pp. 738–739</td></tr><tr><td valign="top">[a14]</td> <td valign="top">  Chee Keng Yap,  "Fundamental problems of algorithmic algebra" , Oxford Univ. Press  (2000)</td></tr></table>

Latest revision as of 17:44, 1 July 2020

factoring polynomials

Since C.F. Gauss it is known that an arbitrary polynomial over a field or over the integers can be factored into irreducible factors, essentially uniquely (cf. also Factorial ring). For an efficient version of Gauss' theorem, one asks to find these factors algorithmically, and to devise such algorithms with low cost.

Based on a precocious uncomputability result in [a13], one can construct sufficiently bizarre (but still "computable" ) fields over which even square-freeness of polynomials is undecidable in the sense of A.M. Turing (cf. also Undecidability; Turing machine). But for the fields of practical interest, there are algorithms that perform extremely well, both in theory and practice. Of course, factorization of integers and thus also in $\mathbf Z [ x ]$ remains difficult; much of cryptography (cf. Cryptology) is based on the belief that it will remain so. The base case concerns factoring univariate polynomials over a finite field $\mathbf{F} _ { q }$ with $q$ elements, where $q$ is a prime power. A first step is to make the input polynomial $f \in \mathbf{F} _ { q } [ x ]$, of degree $n$, square-free. This is easy to do by computing $\operatorname { gcd } ( f , \partial f / \partial x )$ and possibly extracting $p$th roots, where $p = \operatorname { char } \mathbf{F} _ { q }$. The main tool of all algorithms is the Frobenius automorphism $\sigma : R \rightarrow R$ on the $\mathbf{F} _ { q }$-algebra $R = \mathbf{F} _ { q } [ x ] / ( f )$. The pioneering algorithms are due to E.R. Berlekamp [a1], [a2], who represents $\sigma$ by its matrix on the basis $1 , x , x ^ { 2 } , \ldots , x ^ { n - 1 } ( \operatorname { mod } f )$ of $R$. A second approach, due to D.G. Cantor and H. Zassenhaus [a3], is to compute $\sigma$ by repeated squaring. A third method (J. von zur Gathen and V. Shoup [a8]) uses the so-called polynomial representation of $\sigma$ as its basic tool. The last two algorithms are based on Gauss' theorem that $x ^ { q ^ { d } } - x$ is the product of all monic irreducible polynomials in $\mathbf{F} _ { q } [ x ]$ whose degree divides $d$. 
Thus, $f _ { 1 } = \operatorname { gcd } ( x ^ {q } - x , f )$ is the product of all linear factors of $f$; next, $f _ { 2 } = \operatorname { gcd } ( x ^ { q ^ { 2 } } - x , f / f _ { 1 } )$ consists of all quadratic factors, and so on. This yields the distinct-degree factorization $( f _ { 1 } , f _ { 2 } , \ldots )$ of $f$.
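The distinct-degree step just described can be sketched in a toy Python implementation. Polynomials are coefficient lists, lowest degree first; the helper names are illustrative (not from any library), the modulus p is assumed prime, and the input is assumed monic and square-free:

```python
# Toy distinct-degree factorization over F_p (p prime), for a monic
# square-free f given as a coefficient list, lowest degree first.

def poly_trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mul(a, b, p):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return poly_trim(r)

def poly_divmod(a, b, p):
    a, inv = a[:], pow(b[-1], p - 2, p)      # inverse of the leading coefficient
    q = [0] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b):
        s, c = len(a) - len(b), a[-1] * inv % p
        q[s] = c
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        poly_trim(a)
    return poly_trim(q), a

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return poly_trim([(x - y) % p for x, y in zip(a, b)])

def poly_gcd(a, b, p):
    while b:
        a, b = b, poly_divmod(a, b, p)[1]
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]          # normalize to a monic gcd

def poly_powmod(a, e, m, p):
    r = [1]                                  # repeated squaring modulo m
    while e:
        if e & 1:
            r = poly_divmod(poly_mul(r, a, p), m, p)[1]
        a = poly_divmod(poly_mul(a, a, p), m, p)[1]
        e >>= 1
    return r

def distinct_degree(f, p):
    """Return the pairs (d, product of the irreducible factors of f of degree d)."""
    out, h, d = [], [0, 1], 0                # h = x
    while len(f) > 1:
        d += 1
        if 2 * d > len(f) - 1:               # what is left must be irreducible
            out.append((len(f) - 1, f))
            break
        h = poly_powmod(h, p, f, p)          # h = x^(p^d) mod f
        g = poly_gcd(poly_sub(h, [0, 1], p), f, p)
        if len(g) > 1:                       # gcd(x^(p^d) - x, f) is non-trivial
            out.append((d, g))
            f = poly_divmod(f, g, p)[0]
            h = poly_divmod(h, f, p)[1]
    return out
```

For example, over $\mathbf{F} _ { 3 }$ the polynomial $( x + 1 ) ( x ^ { 2 } + 1 )$ yields the degree-one part $x + 1$ and the degree-two part $x ^ { 2 } + 1$.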

The equal-degree factorization step splits any of the resulting factors $f_i$. This is only necessary if $\operatorname { deg } f _ { i } > i$. Since all irreducible factors of $f_i$ have degree $i$, the algebra $R _ { i } = \mathbf{F} _ { q } [ x ] / ( f _ { i } )$ is a direct product of (at least two) copies of $\mathbf{F} _ { q ^ { i } }$. A random element $a$ of $R_i$ is likely to have Legendre symbol $+ 1$ in some and $- 1$ in other copies; then $\operatorname { gcd } ( a ^ { ( q ^ { i } - 1 ) / 2 } - 1 , f _ { i } )$ is a non-trivial factor of $f_i$.

To describe the cost of these methods, one uses fast arithmetic, so that polynomials over $\mathbf{F} _ { q }$ of degree up to $n$ can be multiplied with $O ( n \operatorname { log } n \operatorname { log } \operatorname { log } n )$ operations in $\mathbf{F} _ { q }$, or $O ^ { \sim } ( n )$ for short, where the so-called "soft O" hides factors that are logarithmic in $n$. Furthermore, $\omega$ is an exponent for matrix multiplication, with the current (in 2000) world record $\omega < 2.376$, from [a4].
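A toy version of one equal-degree splitting attempt, specialized to the simplest case $i = 1$ (a monic product of at least two distinct linear factors over $\mathbf{F} _ { p }$, $p$ an odd prime). The elementary polynomial helpers use illustrative names, with coefficient lists stored lowest degree first:

```python
# One random Cantor-Zassenhaus-style splitting attempt for a product of
# distinct linear factors over F_p (p an odd prime).
import random

def poly_trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mul(a, b, p):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return poly_trim(r)

def poly_rem(a, b, p):
    a, inv = a[:], pow(b[-1], p - 2, p)
    while len(a) >= len(b):
        s, c = len(a) - len(b), a[-1] * inv % p
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        poly_trim(a)
    return a

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return poly_trim([(x - y) % p for x, y in zip(a, b)])

def poly_gcd(a, b, p):
    while b:
        a, b = b, poly_rem(a, b, p)
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]              # monic gcd

def poly_powmod(a, e, m, p):
    r = [1]
    while e:
        if e & 1:
            r = poly_rem(poly_mul(r, a, p), m, p)
        a = poly_rem(poly_mul(a, a, p), m, p)
        e >>= 1
    return r

def equal_degree_split(f, p):
    """One random splitting attempt; a proper monic factor of f, or None."""
    c = random.randrange(p)
    a = poly_powmod([c, 1], (p - 1) // 2, f, p)  # (x + c)^((p-1)/2) mod f
    g = poly_gcd(poly_sub(a, [1], p), f, p)      # gcd(a - 1, f)
    return g if 1 < len(g) < len(f) else None

# Split f = (x - 1)(x - 2) = x^2 + 4x + 2 over F_7; retry until the random
# shift c separates the two roots (each attempt succeeds with fair probability).
g = None
while g is None:
    g = equal_degree_split([2, 4, 1], 7)
# g is now the monic factor x - 1 or x - 2, i.e. [6, 1] or [5, 1]
```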

All algorithms first compute $x ^ { q }$ modulo $f$, with $O ^ { \sim } ( n \operatorname { log } q )$ operations in $\mathbf{F} _ { q }$. One can show that in an appropriate model, $\Omega ( \operatorname { log } q )$ operations are necessary, even for $n = 2$. The further cost is as follows:

Algorithm | views ${\bf F} _ { q } [ x ] / ( f )$ as | Cost in $O ^ { \sim }$
Berlekamp | $\mathbf{F} _ { q }$-vector space | $n ^ { \omega }$
Cantor–Zassenhaus | multiplicative semi-group | $n ^ { 2 } \operatorname { log } q$
von zur Gathen–Shoup | $\mathbf{F} _ { q }$-algebra | $n ^ { 2 }$

For small fields, even better algorithms exist.

The next problem is factorization of some $f \in \mathbf Q [ x ]$. The central tool is Hensel lifting, which lifts a factorization of $f$ modulo an appropriate prime number $p$ to one modulo a large power $p ^ { k }$ of $p$. Irreducible factors of $f$ will usually factor modulo $p$, according to the Chebotarev density theorem. One can then try various factor combinations of the irreducible factors modulo $p ^ { k }$ to recover a true factor in $\mathbf{Q} [ x ]$. This works quite well in practice, at least for polynomials of moderate degree, but uses exponential time on some inputs (for example, on the Swinnerton-Dyer polynomials). In a celebrated paper, A.K. Lenstra, H.W. Lenstra, Jr. and L. Lovász [a12] introduced basis reduction of integer lattices (cf. LLL basis reduction method), and applied this to obtain a polynomial-time algorithm. Their reduction method has since found many applications, notably in cryptanalysis (cf. also Cryptology). The method of [a10] promises an even faster factorization algorithm.
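The need for factor combinations can be seen on a classical example: $x ^ { 4 } + 1$ is irreducible in $\mathbf{Q} [ x ]$, yet has a proper factor modulo every prime. A brute-force check (the helper name is hypothetical, and the search is exponential, so this is for tiny demonstrations only):

```python
# Does a monic polynomial (coefficient list, lowest degree first) have a
# proper monic divisor over F_p?  Brute-force trial division over all
# monic candidates of degree up to deg(f)/2 -- exponential, demo only.
from itertools import product

def has_proper_factor_mod_p(coeffs, p):
    def divides(d):
        # synthetic division of coeffs by the monic candidate d
        r = [c % p for c in coeffs]
        while len(r) >= len(d):
            c, s = r[-1], len(r) - len(d)
            for i, di in enumerate(d):
                r[s + i] = (r[s + i] - c * di) % p
            while r and r[-1] == 0:
                r.pop()
        return not r                         # exact division: zero remainder
    n = len(coeffs) - 1                      # degree of f
    for deg in range(1, n // 2 + 1):
        for tail in product(range(p), repeat=deg):
            if divides(list(tail) + [1]):    # monic candidate of degree deg
                return True
    return False

# x^4 + 1 is irreducible over Q, but splits modulo every small prime:
for p in [2, 3, 5, 7, 11, 13]:
    assert has_proper_factor_mod_p([1, 0, 0, 0, 1], p)
```

No combination of the modular factors can be ruled out a priori, which is why the naive factor-combination stage can take exponential time.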

The next task is the factorization of bivariate polynomials. It can be solved in a similar fashion, with Hensel lifting, say, modulo one variable, and an appropriate version of basis reduction, which is easy in this case. Algebraic extensions of the ground field are handled similarly.

Multivariate polynomials pose a new type of problem: how to represent them? The dense representation, where each term up to the degree is written out, is often too long. One would like to work with the sparse representation, using only the non-zero coefficients. The methods discussed above can be adapted and work reasonably well on many examples, but no guarantees of polynomial time are given. Two new ingredients are required. The first consists of efficient versions (due to E. Kaltofen and von zur Gathen) of Hilbert's irreducibility theorem (cf. also Hilbert theorem), which say that if one reduces from many variables to two by a certain type of random linear substitution, then each irreducible factor is very likely to remain irreducible. The second ingredient is an even more concise representation, namely by a black box which returns the polynomial's value at any requested point. A highlight of this theory is the random polynomial-time factorization method in [a11].
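The two concise representations can be sketched as follows, for the univariate case; the names are illustrative, not from any library. A sparse polynomial stores only its non-zero terms, and a black box hides the polynomial behind an evaluation-only interface:

```python
# Sparse representation: a dictionary mapping exponents to non-zero
# coefficients, so x^1000000 + x + 1 occupies three entries, not a million.

def sparse_mul(f, g):
    """Multiply two sparse polynomials given as {exponent: coefficient}."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c}     # drop cancelled terms

def black_box(f):
    """Wrap f so that only evaluation at requested points is possible."""
    return lambda x: sum(c * x ** e for e, c in f.items())

f = {1000000: 1, 1: 1, 0: 1}        # x^1000000 + x + 1, three stored terms
bb = black_box({2: 1, 0: 1})        # a black box for x^2 + 1
```

Black-box algorithms such as [a11] only ever call `bb`, never inspecting the coefficients directly.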

Each major computer algebra system has some variant of these methods implemented. Special-purpose software can factor huge polynomials, for example of degree more than one million over $\mathbf{F} _ { 2 }$. Several textbooks describe the details of some of these methods, e.g. [a9], [a5], [a6], [a14].

Factorization of polynomials modulo a composite number presents some surprises, such as the possibility of exponentially many irreducible factors, which can nevertheless be determined in polynomial time, in an appropriate data structure; see [a7].
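A small illustration of the phenomena behind this: modulo $8$, the polynomial $x ^ { 2 } - 1$ has the two genuinely different monic factorizations $( x - 1 ) ( x + 1 )$ and $( x - 3 ) ( x + 3 )$, since unique factorization fails over $\mathbf{Z} / 8 \mathbf{Z}$. A minimal check:

```python
# Multiply coefficient lists (lowest degree first) modulo m.
def mul_mod(a, b, m):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % m
    return r

f1 = mul_mod([7, 1], [1, 1], 8)   # (x - 1)(x + 1) mod 8
f2 = mul_mod([5, 1], [3, 1], 8)   # (x - 3)(x + 3) mod 8
# both products equal x^2 - 1, i.e. the list [7, 0, 1]
```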

For a historical perspective, note that the basic idea of equal-degree factorization was known to A.M. Legendre, while Gauss had found, around 1798, the distinct-degree factorization algorithm and Hensel lifting. They were to form part of the eighth chapter of his "Disquisitiones Arithmeticae", but only seven chapters were published, due to lack of funds.

References

[a1] E.R. Berlekamp, "Factoring polynomials over finite fields" Bell Syst. Techn. J. , 46 (1967) pp. 1853–1859
[a2] E.R. Berlekamp, "Factoring polynomials over large finite fields" Math. Comput. , 24 : 111 (1970) pp. 713–735
[a3] D.G. Cantor, H. Zassenhaus, "A new algorithm for factoring polynomials over finite fields" Math. Comput. , 36 : 154 (1981) pp. 587–592
[a4] D. Coppersmith, S. Winograd, "Matrix multiplication via arithmetic progressions" J. Symbolic Comput. , 9 (1990) pp. 251–280
[a5] I.E. Shparlinski, "Finite fields: theory and computation" , Kluwer Acad. Publ. (1999)
[a6] J. von zur Gathen, J. Gerhard, "Modern computer algebra" , Cambridge Univ. Press (1999)
[a7] J. von zur Gathen, S. Hartlieb, "Factoring modular polynomials" J. Symbolic Comput. , 26 : 5 (1998) pp. 583–606
[a8] J. von zur Gathen, V. Shoup, "Computing Frobenius maps and factoring polynomials" Comput. Complexity , 2 (1992) pp. 187–224
[a9] K.O. Geddes, S.R. Czapor, G. Labahn, "Algorithms for computer algebra" , Kluwer Acad. Publ. (1992)
[a10] M. van Hoeij, "Factoring polynomials and the knapsack problem" www.math.fsu.edu/~hoeij/knapsack/paper/knapsack.ps (2000)
[a11] E. Kaltofen, B.M. Trager, "Computing with polynomials given by black boxes for their evaluations: Greatest common divisors, factorization, separation of numerators and denominators" J. Symbolic Comput. , 9 (1990) pp. 301–320
[a12] A.K. Lenstra, H.W. Lenstra, Jr., L. Lovász, "Factoring polynomials with rational coefficients" Math. Ann. , 261 (1982) pp. 515–534
[a13] B.L. van der Waerden, "Eine Bemerkung über die Unzerlegbarkeit von Polynomen" Math. Ann. , 102 (1930) pp. 738–739
[a14] Chee Keng Yap, "Fundamental problems of algorithmic algebra" , Oxford Univ. Press (2000)
How to Cite This Entry:
Factorization of polynomials. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Factorization_of_polynomials&oldid=12787
This article was adapted from an original article by Joachim von zur Gathen (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article