Minimax estimator


A statistical estimator obtained by applying the notion of a minimax statistical procedure to the problem of statistical estimation.
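
That is, an estimator $t^*$ is called minimax relative to a loss function $L(\theta,t)$ if it minimizes the maximal risk over the parameter space:

$$\sup_\theta\mathsf E_\theta L(\theta,t^*)=\inf_t\sup_\theta\mathsf E_\theta L(\theta,t),$$

where the infimum is taken over all estimators $t$ of $\theta$.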

Example 1. Let a random variable $X$ be subject to the binomial law with parameters $n$ and $\theta$, where $\theta$, $0<\theta<1$, is unknown. The statistic

$$t=\frac Xn\frac{\sqrt n}{1+\sqrt n}+\frac1{2(1+\sqrt n)}$$

is a minimax estimator for the parameter $\theta$ with respect to the loss function $L(\theta,t)=(\theta-t)^2$.
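
In compact form the statistic above is $t=(X+\sqrt n/2)/(n+\sqrt n)$, the Bayes estimator for the Beta$(\sqrt n/2,\sqrt n/2)$ prior; its risk under $L(\theta,t)=(\theta-t)^2$ is the constant $1/(4(1+\sqrt n)^2)$, which establishes minimaxity. The following minimal Python sketch (illustrative code, not part of the original entry; the function names are ad hoc) verifies the constant risk numerically:

# Minimal sketch: the minimax estimator of Example 1 and its exact risk.
from math import comb, sqrt
import numpy as np

def minimax_estimate(x, n):
    """t = (X/n) * sqrt(n)/(1+sqrt(n)) + 1/(2*(1+sqrt(n)))."""
    s = sqrt(n)
    return (x / n) * s / (1 + s) + 1 / (2 * (1 + s))

def risk(theta, n):
    """Exact risk E_theta[(theta - t(X))^2], summing over x = 0, ..., n."""
    x = np.arange(n + 1)
    pmf = np.array([comb(n, k) for k in range(n + 1)]) * theta**x * (1 - theta)**(n - x)
    return float(np.sum(pmf * (theta - minimax_estimate(x, n)) ** 2))

n = 20
print(1 / (4 * (1 + sqrt(n)) ** 2))   # theoretical constant risk
for theta in (0.1, 0.3, 0.5, 0.9):
    print(theta, risk(theta, n))      # the same value for every theta
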
Example 2. Let $X_1,\dots,X_n$ be independent identically distributed random variables with a continuous probability density $f(x-\theta)$, $|x|<\infty$, $|\theta|<\infty$. The Pitman estimator

$$t=t(X_1,\dots,X_n)=X_{(1)}-\frac{\int\limits_{-\infty}^\infty xf(x)\prod_{i=2}^nf(x+Y_{(i)})dx}{\int\limits_{-\infty}^\infty f(x)\prod_{i=2}^nf(x+Y_{(i)})dx}$$

is a minimax estimator for the unknown shift parameter $\theta$ relative to the loss function $L(\theta,t)=(\theta-t)^2$, where $X_{(1)}\leq\dots\leq X_{(n)}$ are the order statistics (cf. Order statistic) obtained from the sample $X_1,\dots,X_n$ and $Y_{(i)}=X_{(i)}-X_{(1)}$. In particular, if $X_1,\dots,X_n\sim N(\theta,1)$, then $t=(X_1+\dots+X_n)/n$.
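
The Pitman estimator can be evaluated directly from the displayed formula by numerical integration. The following minimal Python sketch (illustrative code, not part of the original entry; it assumes SciPy is available and that $f$ is light-tailed, so the integrands are negligible outside the chosen grid) checks that for the standard normal density it reproduces the sample mean, as stated above:

# Minimal sketch: the Pitman estimator of Example 2 by numerical quadrature.
import numpy as np
from scipy import integrate, stats

def pitman_estimate(sample, f, half_width=10.0, m=20001):
    """Pitman estimator of the shift parameter theta for density f(x - theta)."""
    x1 = np.min(sample)                  # X_(1)
    y = np.sort(sample) - x1             # Y_(i) = X_(i) - X_(1); y[0] == 0
    # Grid covering the region where f(x) * prod f(x + Y_(i)) is non-negligible
    # (assumes a light-tailed density f).
    xs = np.linspace(-np.max(y) - half_width, half_width, m)
    vals = f(xs) * np.prod(f(xs[:, None] + y[1:]), axis=1)
    num = integrate.trapezoid(xs * vals, xs)
    den = integrate.trapezoid(vals, xs)
    return x1 - num / den

rng = np.random.default_rng(0)
sample = rng.normal(loc=3.0, size=10)
print(pitman_estimate(sample, stats.norm.pdf))   # ~ sample.mean() for normal f
print(sample.mean())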


This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.