# Number theory, probabilistic methods in


In the broad sense, that part of number theory in which ideas and methods from probability theory are used.

By probabilistic number theory in the narrow sense one means the statistical theory of the distribution of values of an arithmetic function.

The great majority of arithmetic functions studied in number theory are either additive or multiplicative (cf. Additive arithmetic function; Multiplicative arithmetic function). Their values are usually distributed in a very complicated way. If one traces the change of values of such functions as the argument runs through the sequence of natural numbers, one obtains a highly chaotic graph such as is usually observed when the additive and multiplicative properties of numbers are considered simultaneously. In classical investigations concerning the distribution of the values of a real arithmetic function $f(m)$, one usually studied the asymptotic behaviour of $f(m)$ itself, or of its average values. In the first case one has to find two simple functions $\psi_1(m)$, $\psi_2(m)$ such that $\psi_1(m) \le f(m) \le \psi_2(m)$ for all $m$, or at least for all sufficiently large $m$. For example, if $\omega(m)$ denotes the number of distinct prime divisors of a number $m$, then $\omega(m) \ge 1$ for all $m > 1$, and $\omega(m) \le 2(\ln \ln m)^{-1} \ln m$ for $m \ge m_0$;

$$\liminf_{m\to\infty} \omega(m) = 1,$$

$$\limsup_{m\to\infty} \omega(m) (\ln m)^{-1} \ln \ln m = 1.$$ In the second case one considers the behaviour of

$$\frac{1}{n} \sum_{m=1}^n f(m). \tag{1}$$ For $\omega(m)$ the average value (1) is equal to $(1 + o(1)) \ln \ln n$. A solution to the first and to the second problem in the general case gives little information about the function $f(m)$ or about its oscillations: a function may deviate significantly from its average value. However, it turns out that large deviations occur rather seldom. The problem thus arises of finding bounds within which the values of the function $f(m)$ fluctuate for an overwhelming majority of values of the argument. If $f(m)$ is a real additive arithmetic function, and if

$$A_n = \sum_{p \le n} \frac{f(p)}{p}, \qquad B_n^2 = \sum_{p^\alpha \le n} \frac{f^2(p^\alpha)}{p^\alpha}, \tag{2}$$ where the sums are taken over all prime numbers $p \le n$ and all powers of primes $p^\alpha \le n$, respectively, then

$$\frac{1}{n} \sum_{m=1}^n (f(m) - A_n)^2 \le B_n^2 \left( \frac32 + \frac{c}{\ln n}\right),$$ where $c$ is an absolute constant. Thus, for any $t > 0$ and for all $m \le n$ with the exception of fewer than $(3/2 + c/\ln n) n t^{-2}$ numbers, the inequality

$$|f(m) - A_n| < t B_n$$ holds (an analogue of the law of large numbers in probability theory). For the function $\omega(m)$ this inequality can be written in the form

$$|\omega(m) - \ln \ln n| < t \sqrt{\ln \ln n}.$$ Denote by $N_n(\cdots)$ the number of natural numbers $m \le n$ that satisfy a condition to be described in between the brackets in place of the dots. If one wants to characterize more exactly the distribution of the values of a real arithmetic function $f(m)$, then one is forced to consider the asymptotic behaviour of the frequencies

$$\frac 1n N_n(f(m) \in E) \tag{3}$$ as $n \to \infty$, where $E$ is an arbitrary Borel set. Among the asymptotic laws for (3), the most interesting ones are of local or integral form.
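As a small numerical illustration of the average value (1) and of the law-of-large-numbers analogue above (a sketch that is not part of the original text; the helper names are my own), one can tabulate $\omega(m)$ by a sieve and check that its mean over $m \le n$ is close to $\ln \ln n$, while the proportion of $m$ with $|\omega(m) - \ln\ln n| \ge t\sqrt{\ln\ln n}$ stays below the bound $(3/2 + c/\ln n)t^{-2}$:

```python
import math

def omega_sieve(n):
    """omega[m] = number of distinct prime divisors of m, for 0 <= m <= n."""
    omega = [0] * (n + 1)
    for p in range(2, n + 1):
        if omega[p] == 0:  # no smaller prime divides p, so p is prime
            for multiple in range(p, n + 1, p):
                omega[multiple] += 1
    return omega

n = 10**5
om = omega_sieve(n)
loglog = math.log(math.log(n))

# Average value (1): (1/n) * sum omega(m) is close to ln ln n.
avg = sum(om[1:]) / n

# Law-of-large-numbers analogue: for t = 3 the exceptional set
# |omega(m) - ln ln n| >= t * sqrt(ln ln n) is a small fraction of [1, n].
t = 3.0
exceptional = sum(1 for m in range(1, n + 1)
                  if abs(om[m] - loglog) >= t * math.sqrt(loglog))
print(avg, loglog, exceptional / n)
```

For $n = 10^5$ the exceptional set is in fact empty, since $\omega(m) \ge 8$ already forces $m \ge 2\cdot3\cdot5\cdot7\cdot11\cdot13\cdot17\cdot19 > 10^5$.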

## Integral laws.

One studies the asymptotic behaviour of the distribution function

$$F_n(C_n + D_n x) = \frac 1n N_n(f(m) < C_n + D_n x),$$ as $n \to \infty$, for given $C_n, D_n$.

In the case of an additive arithmetic function one searches for conditions under which $F_n(C_n + D_n x)$ converges to some distribution function $F(x)$ at all its points of continuity. Here, if $F(x)$ is non-degenerate, then $D_n$ must converge to a finite limit (distinct from $0$) or to an infinite limit.

In the case of a finite limit, it is sufficient to consider only $F_n(C_n + x)$. The function $F_n(C_n + x)$ has a non-degenerate limit distribution as $n \to \infty$, for some $C_n$, if and only if $f(m)$ has the form

$$f(m) = a \ln m + g(m),$$ where $a$ is a constant and the function $g(m)$ satisfies the conditions

$$\sum_{|g(p)| \ge 1} \frac 1p < \infty, \qquad \sum_{|g(p)|<1} \frac{g^2(p)}{p} < \infty.$$ Here $C_n$ must be equal to

$$C_n = a \ln n + \sum_{\substack{p \le n \\ |g(p)| < 1}} \frac{g(p)}{p} + C + o(1),$$ where $C$ is a constant. The choice of $C_n$ is unique up to an additive term $C + o(1)$. The limit distribution is discrete when $\sum_{f(p) \ne 0} 1/p < \infty$, and continuous otherwise.
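A concrete illustration of the discreteness criterion (a sketch not in the original; the particular function and all names below are my own choice): take the strongly additive function $f$ with $f(p) = 1$ for $p \in \{2,3,5,7\}$ and $f(p) = 0$ otherwise. Then $\sum_{f(p) \ne 0} 1/p = 1/2 + 1/3 + 1/5 + 1/7 < \infty$, so the limit distribution is discrete; its atom at $0$ is the density of integers coprime to $210$, namely $\prod_{p \le 7}(1 - 1/p) = 8/35$.

```python
# Strongly additive f with f(p) = 1 for p in {2, 3, 5, 7}, f(p) = 0 otherwise,
# i.e. f(m) = number of primes among 2, 3, 5, 7 that divide m.
SMALL = (2, 3, 5, 7)

def f(m):
    return sum(1 for p in SMALL if m % p == 0)

n = 10**5
counts = {}
for m in range(1, n + 1):
    k = f(m)
    counts[k] = counts.get(k, 0) + 1

# sum_{f(p) != 0} 1/p = 1/2 + 1/3 + 1/5 + 1/7 < infinity, so the limit law
# is discrete; its atom at 0 is the density of integers coprime to 210:
atom0 = 1.0
for p in SMALL:
    atom0 *= 1 - 1 / p        # prod over p <= 7 of (1 - 1/p) = 8/35
print(counts[0] / n, atom0)   # the two values nearly agree
```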

In particular, $F_n(x)$ (the case $C_n \equiv 0$) has a limit distribution if and only if the series

$$\sum_{|f(p)| \ge 1} \frac 1p, \qquad \sum_{|f(p)| < 1} \frac{f(p)}{p}, \qquad \sum_{|f(p)| < 1} \frac{f^2(p)}{p}$$ converge (an analogue of the three-series theorem in probability theory).
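For instance (an illustrative sketch, not from the original; the helper names are mine), the strongly additive function $f$ with $f(p) = 1/p$, i.e. $f(m) = \sum_{p \mid m} 1/p$, satisfies all three conditions: the first sum is empty since $1/p < 1$ for every prime, and the other two reduce to $\sum_p p^{-2}$ and $\sum_p p^{-3}$, both convergent. Hence $F_n(x)$ itself converges without any centering, which can be seen empirically by comparing $F_n$ at a fixed point for two values of $n$:

```python
def f_values(n):
    """f(m) = sum of 1/p over the distinct prime divisors p of m, m <= n."""
    f = [0.0] * (n + 1)
    for p in range(2, n + 1):
        if f[p] == 0.0:          # no smaller prime divides p, so p is prime
            for m in range(p, n + 1, p):
                f[m] += 1.0 / p
    return f

def empirical_cdf(vals, x):
    """F_n(x) = (1/n) * #{ m <= n : f(m) < x } for vals = f(1), ..., f(n)."""
    return sum(1 for v in vals if v < x) / len(vals)

x = 0.6
F1 = empirical_cdf(f_values(20000)[1:], x)
F2 = empirical_cdf(f_values(80000)[1:], x)
print(F1, F2)  # the two frequencies nearly agree: F_n(x) stabilizes
```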

The case $D_n \to \infty$ has not been completely investigated. Some of the simpler results are presented below, when $C_n = A_n$ and $D_n = B_n$ are defined by formula (2).

If, for any fixed $\epsilon > 0$,

$$B_n^{-2} \sum_{\substack{p^\alpha \le n \\ |f(p^\alpha)| > \epsilon B_n}} \frac{f^2(p^\alpha)}{p^\alpha} \to 0, \tag{4}$$ as $n \to \infty$ (an analogue of the Lindeberg condition, cf. Lindeberg–Feller theorem), then

$$F_n(A_n + B_n x) \to \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-u^2/2} du = \Phi(x) \tag{5}$$ (the normal law). If (4) is satisfied, then $B_n$ is a slowly-varying function of $\ln n$ in the sense of Karamata. Moreover, if $B_n$ is such a function, then (4) is a necessary condition for (5).

Let $B_n$ be a slowly-varying function in $\ln n$. Then $F_n(A_n + B_n x)$ converges to a limit distribution with variance 1 if and only if there exists a non-decreasing function $V(u)$, $-\infty < u < \infty$, such that $V(-\infty) = 0$, $V(\infty) = 1$, and such that as $n \to \infty$, for all $u$ with the possible exception of $u = 0$,

$$B_n^{-2} \sum_{\substack{p^\alpha \le n \\ f(p^\alpha) < u B_n}} \frac{f^2(p^\alpha)}{p^\alpha} \to V(u) .$$ The characteristic function $\phi(t)$ of the limit law, if it exists at all, is given by the formula

$$\phi(t) = \exp \left( \int_{-\infty}^\infty (e^{itu} - 1 - itu) u^{-2} dV(u) \right).$$ One has studied the rate of convergence to the limit law. For example, if $f(m)$ is a strongly-additive function and if

$$\mu_n = B_n^{-1} \max_{p\le n} |f(p)| \to 0$$ as $n \to \infty$, then

$$F_n(A_n + B_n x) = \Phi(x) + O(\mu_n),$$ uniformly in $x$.
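A numerical sketch (not part of the original text; the code and names are illustrative) for the strongly additive function $\omega$, for which $f(p^\alpha) = 1$: formula (2) gives $A_n = \sum_{p \le n} 1/p$ and $B_n^2 = \sum_{p^\alpha \le n} p^{-\alpha} = \ln\ln n + O(1)$, so $B_n$ is indeed slowly varying in $\ln n$, while $\mu_n = 1/B_n$ tends to $0$ only like $(\ln\ln n)^{-1/2}$, which is why the convergence in (5) is slow:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def A_B(n, primes):
    """A_n and B_n of formula (2) for f = omega, where f(p^a) = 1."""
    A = sum(1.0 / p for p in primes if p <= n)
    B2 = 0.0
    for p in primes:
        if p > n:
            break
        q = p
        while q <= n:        # sum over all prime powers p^a <= n
            B2 += 1.0 / q
            q *= p
    return A, math.sqrt(B2)

primes = primes_up_to(10**6)
stats = {}
for n in (10**4, 10**6):
    A, B = A_B(n, primes)
    stats[n] = (A, B)
    print(n, A, B, 1.0 / B)   # mu_n = max_p |f(p)| / B_n = 1 / B_n
```

Although $\ln n$ grows by a factor $1.5$ between the two values of $n$, the ratio of the two values of $B_n$ stays close to $1$, and $\mu_n$ decreases only very slowly.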

There are similar results for multiplicative arithmetic functions.

## Local laws.

Here one studies the behaviour of the frequency $N_n(f(m) = c)/n$ as $n \to \infty$ for a fixed $c$. In the case of a real additive arithmetic function, this frequency always has a limit, which is distinct from zero only for a countable set of values of $c$. Let

$$\lambda_l(f) = \lim_{n\to\infty} \frac 1n N_n(f(m) = c_l), \quad l=1, 2, \ldots,$$ be the collection of non-zero limits, assuming that at least one such limit exists. Then

$$\sum_l \lambda_l(f) = 1$$ and

$$\sum_l \lambda_l(f) e^{itc_l} = \prod_p \left( 1 - \frac 1p \right) \sum_{\alpha = 0}^\infty p^{-\alpha} e^{itf(p^\alpha)}.$$ If $f(m)$ takes only integer values and if

$$\mu_k(f) = \lim_{n\to\infty} \frac 1n N_n(f(m) = k),$$ then

$$\sum_k \mu_k(f) = 1$$ if and only if

$$\sum_{f(p)\ne 0} \frac 1p < \infty.$$ One has studied the rate of convergence to $\mu_k(f)$. There exists an absolute constant $C$ such that for all integers $k$ and all integer-valued additive arithmetic functions $f(m)$ such that $f(p) = 0$ for all primes $p$,

$$\left| \frac 1n N_n(f(m) = k) - \mu_k(f) \right| \le C n^{-1/2}.$$ One has also studied the asymptotic behaviour of the frequency $N_n(f(m) = k_n)/n$ when $k_n$ increases with $n$.
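As a concrete check (a sketch outside the original text; helper names are mine), take $f = \Omega - \omega$, the difference between the number of prime factors counted with and without multiplicity. It is an integer-valued additive function with $f(p^\alpha) = \alpha - 1$, so $f(p) = 0$ for every prime; moreover $f(m) = 0$ exactly when $m$ is squarefree, so $\mu_0(f) = 6/\pi^2$, and the criterion $\sum_{f(p) \ne 0} 1/p = 0 < \infty$ guarantees $\sum_k \mu_k(f) = 1$:

```python
import math
from collections import Counter

def omega_excess(n):
    """e[m] = Omega(m) - omega(m): integer-valued additive, f(p^a) = a - 1."""
    Omega = [0] * (n + 1)   # prime factors counted with multiplicity
    omega = [0] * (n + 1)   # distinct prime factors
    for p in range(2, n + 1):
        if Omega[p] == 0:               # p is prime
            q = p
            while q <= n:               # each power q = p^a adds 1 to Omega
                for m in range(q, n + 1, q):
                    Omega[m] += 1
                q *= p
            for m in range(p, n + 1, p):
                omega[m] += 1
    return [O - o for O, o in zip(Omega, omega)]

n = 10**5
freq = Counter(omega_excess(n)[1:])   # frequencies of f(m) = k over m <= n
mu0 = freq[0] / n                     # f(m) = 0 iff m is squarefree
print(mu0, 6 / math.pi**2)            # empirical density vs. 6/pi^2
```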

#### References

[1] M. Kac, "Statistical independence in probability, analysis and number theory", Math. Assoc. Amer. (1963)
[2] I.P. Kubilius [I.P. Kubilyus], "Probabilistic methods in the theory of numbers", Amer. Math. Soc. (1978) (Translated from Russian)
[3] I.P. Kubilius, "Current problems of analytic number theory", Minsk (1974) (In Russian)
[4] Yu.V. Linnik, "The dispersion method in binary additive problems", Amer. Math. Soc. (1963) (Translated from Russian)
[5] Yu.V. Linnik, "Ergodic properties of algebraic fields", Springer (1968) (Translated from Russian)
[6] A.G. Postnikov, "Ergodic problems in the theory of congruences and of Diophantine approximations", Amer. Math. Soc. (1967) (Translated from Russian)
[7] P.D.T.A. Elliott, "Probabilistic number theory", 1–2, Springer (1979–1980)

The Erdős–Kac theorem gives some more precise information on $\omega(n)$. Denote by $N(x;a,b)$ the number of integers in the interval $[3,x]$ for which the inequalities $$a \le \frac{ \omega(n) - \log\log n }{ \sqrt{\log\log n} } \le b$$ hold. Then, as $x \rightarrow \infty$, $$N(x;a,b) = (x+o(x)) \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2} dt \ .$$