Minimax statistical procedure
One of the versions of optimality in mathematical statistics, according to which a statistical procedure is pronounced optimal in the minimax sense if it minimizes the maximal risk. In terms of decision functions (cf. Decision function) the notion of a minimax statistical procedure is defined as follows. Let a random variable $ X $
take values in a sampling space $ ( \mathfrak X , \mathfrak B , {\mathsf P} _ \theta ) $,
$ \theta \in \Theta $,
and let $ \Delta = \{ \delta \} $
be the class of decision functions which are used to make a decision $ d $
from the decision space $ D $
on the basis of a realization of $ X $,
that is, $ \delta ( \cdot ) : \mathfrak X \rightarrow D $.
In this connection, the loss function $ L ( \theta , d) $,
defined on $ \Theta \times D $,
is assumed given. In such a case a statistical procedure $ \delta ^ {*} \in \Delta $
is called a minimax procedure in the problem of making a statistical decision relative to the loss function $ L ( \theta , d ) $
if for all $ \delta \in \Delta $,
$$ \sup _ {\theta \in \Theta } \ {\mathsf E} _ \theta L ( \theta , \delta ^ {*} ( X) ) \leq \sup _ {\theta \in \Theta } \ {\mathsf E} _ \theta L ( \theta , \delta ( X) ) , $$
where
$$ {\mathsf E} _ \theta L ( \theta , \delta ( X) ) = \ R ( \theta , \delta ) = \ \int\limits _ { \mathfrak X } L ( \theta , \delta ( x) ) \ d {\mathsf P} _ \theta ( x) $$
is the risk function associated with the statistical procedure (decision rule) $ \delta $; the decision $ d ^ {*} = \delta ^ {*} ( x) $ corresponding to an observation $ x $ under the minimax procedure $ \delta ^ {*} $ is called the minimax decision. Since the quantity
$$ \sup _ {\theta \in \Theta } \ {\mathsf E} _ \theta L ( \theta , \delta ( X) ) $$
shows the maximal expected loss incurred by the procedure $ \delta \in \Delta $, the fact that $ \delta ^ {*} $ is minimax means that if $ \delta ^ {*} $ is used to choose a decision $ d $ from $ D $, then the largest expected risk,
$$ \sup _ {\theta \in \Theta } \ R ( \theta , \delta ^ {*} ) , $$
will be as small as possible.
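For a concrete feel for this criterion, here is a minimal numerical sketch (the parameter values, decision rules and risk values below are hypothetical, chosen only for illustration): when both $ \Theta $ and the class of decision rules are finite, the risks form a matrix, and the minimax rule is the one whose largest risk over $ \Theta $ is smallest.

```python
import numpy as np

# Hypothetical finite decision problem: 3 parameter values, 4 decision rules.
# risk[i, j] = R(theta_i, delta_j), the risk of rule delta_j at parameter theta_i.
risk = np.array([
    [0.10, 0.30, 0.20, 0.25],
    [0.40, 0.25, 0.35, 0.30],
    [0.20, 0.50, 0.30, 0.28],
])

# Maximal risk of each rule: max over theta of R(theta, delta_j).
max_risk = risk.max(axis=0)            # [0.40, 0.50, 0.35, 0.30]

# The minimax rule minimizes this maximal risk.
j_star = int(max_risk.argmin())        # index 3
print("maximal risks:", max_risk)
print("minimax rule:", j_star, "with maximal risk", max_risk[j_star])
```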
Figure: m063970a
The minimax principle for a statistical procedure does not always lead to a reasonable conclusion (see Fig. a): a procedure $ \delta _ {1} $ whose risk is much smaller than that of $ \delta _ {2} $ for the vast majority of values of $ \theta $ may nevertheless have a slightly larger maximal risk; in such a case one must be guided by $ \delta _ {1} $ and not by $ \delta _ {2} $, although
$$ \sup _ {\theta \in \Theta } \ R ( \theta , \delta _ {1} ) > \ \sup _ {\theta \in \Theta } \ R ( \theta , \delta _ {2} ) . $$
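The following sketch reproduces this qualitative picture numerically (the two risk functions are hypothetical, chosen only to mimic the situation of Fig. a): $ \delta _ {1} $ has a slightly larger maximal risk because of a narrow peak, but a much smaller risk everywhere else.

```python
import numpy as np

# Grid of parameter values and two hypothetical risk functions.
theta = np.linspace(0.0, 1.0, 1001)

# delta_1: small risk everywhere except a narrow peak near theta = 0.5.
risk_1 = 0.05 + 0.30 * np.exp(-((theta - 0.5) / 0.02) ** 2)

# delta_2: flat risk, slightly below the peak of risk_1.
risk_2 = np.full_like(theta, 0.30)

# The minimax criterion prefers delta_2 ...
print("sup risk of delta_1:", risk_1.max())     # 0.35
print("sup risk of delta_2:", risk_2.max())     # 0.30

# ... even though delta_1 is strictly better for almost all parameter values.
print("fraction of the grid where delta_1 beats delta_2:",
      np.mean(risk_1 < risk_2))                 # about 0.98
```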
The notion of a minimax statistical procedure is useful in problems of statistical decision making in the absence of a priori information regarding $ \theta $.
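A classical example, not discussed in the article itself: for $ X $ having a binomial distribution with parameters $ ( n , \theta ) $ and squared-error loss $ L ( \theta , d ) = ( \theta - d ) ^ {2} $, the estimator $ \delta ^ {*} ( X) = ( X + \sqrt n / 2 ) / ( n + \sqrt n ) $ has constant risk $ 1 / ( 4 ( 1 + \sqrt n ) ^ {2} ) $ and is minimax, while the usual estimator $ X / n $ has maximal risk $ 1 / ( 4 n ) $. A minimal sketch checking these risk values numerically:

```python
import numpy as np
from math import comb, sqrt

n = 20
x = np.arange(n + 1)

def risk(estimate, theta):
    # R(theta, delta) = E_theta (theta - delta(X))^2 for X ~ Binomial(n, theta).
    pmf = np.array([comb(n, k) * theta**k * (1 - theta)**(n - k) for k in range(n + 1)])
    return float(np.sum(pmf * (theta - estimate) ** 2))

theta_grid = np.linspace(0.01, 0.99, 99)        # includes theta = 0.5
mle = x / n                                     # usual estimator X/n
mm  = (x + sqrt(n) / 2) / (n + sqrt(n))         # minimax estimator

risk_mle = np.array([risk(mle, t) for t in theta_grid])
risk_mm  = np.array([risk(mm,  t) for t in theta_grid])

print("max risk of X/n        :", risk_mle.max())   # 1/(4n) = 0.0125
print("max risk of delta_star :", risk_mm.max())    # 1/(4(1+sqrt(n))^2), about 0.00835
print("risk of delta_star is constant:", np.allclose(risk_mm, risk_mm[0]))
```

Note that for large $ n $ the estimator $ X / n $ has a smaller risk than $ \delta ^ {*} $ except in a small neighbourhood of $ \theta = 1/2 $, which is exactly the kind of situation discussed above in connection with Fig. a.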
References
[1] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1986)
[2] S. Zacks, "The theory of statistical inference", Wiley (1971)
Minimax statistical procedure. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Minimax_statistical_procedure&oldid=47846