Bernoulli experiment
''of size $n$''
  
The special case of a statistical experiment $( \Omega , \mathcal{A} , \mathcal{P} )$ (cf. also [[Probability space|Probability space]]; [[Statistical experiments, method of|Statistical experiments, method of]]) consisting of a set $\mathcal{P}$ of probability measures $\mathsf{P}$ on a $\sigma$-algebra $\mathcal{A}$ of subsets of a set $\Omega$, where $\Omega = \{ 0,1 \} ^ { n }$ ($n \in \mathbf N$, $\mathbf{N}$ the set of natural numbers), $\mathcal{A}$ is the $\sigma$-algebra of all subsets of $\{ 0,1 \} ^ { n }$ and $\mathcal{P} = \{ \mathsf{P} _ { p } : p \in [ 0,1 ] \}$. Here, the [[Probability measure|probability measure]] $\mathsf{P} _ { p }$ describes the probability
  
\begin{equation*} p^{\sum _ { j = 1 } ^ { n } x _ { j }} (1-p)^{ n - \sum _ { j = 1 } ^ { n } x _ { j }} \end{equation*}
 
for a given probability $p \in [ 0,1 ]$ of success that $( x _ { 1 } , \dots , x _ { n } ) \in \{ 0,1 \} ^ { n }$ will be observed. Clearly, decision-theoretical procedures associated with Bernoulli experiments are based on the sum $\sum _ { j = 1 } ^ { n } x _ { j }$ of observations $( x _ { 1 } , \dots , x _ { n } ) \in \{ 0,1 \} ^ { n }$ because of the corresponding sufficient and complete data reduction (cf. [[#References|[a2]]] and [[#References|[a3]]]). Therefore, uniformly most powerful, as well as uniformly most powerful unbiased, level tests for one-sided and two-sided hypotheses about the probability $p$ of success are based on $\sum _ { j = 1 } ^ { n } x _ { j }$ (cf. [[#References|[a2]]]; see also [[Statistical hypotheses, verification of|Statistical hypotheses, verification of]]). Moreover, based on the quadratic loss function, the sample mean
 
\begin{equation*} \overline{x} = \frac { 1 } { n } \sum _ { j = 1 } ^ { n } x_{j} \end{equation*}
  
 
is admissible on account of the [[Rao–Cramér inequality|Rao–Cramér inequality]] (cf. [[#References|[a3]]]) and the estimator (cf. also [[Statistical estimator|Statistical estimator]])
 
  
\begin{equation*} \frac { 1 } { 1 + \sqrt { n } } \left( \bar{x} \sqrt { n } + \frac { 1 } { 2 } \right) \end{equation*}
  
is minimax by means of equalizer decision rules (cf. [[#References|[a2]]]). Furthermore, the Lehmann–Scheffé theorem implies that $\bar{x}$ is a uniform minimum-variance unbiased estimator (an UMVU estimator; cf. also [[Unbiased estimator|Unbiased estimator]]) for the probability $p$ of success (cf. [[#References|[a2]]] and [[#References|[a3]]]).
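
As a numerical illustration (an addition, not part of the original article), the exact quadratic risk of both estimators can be computed directly from the binomial distribution of the sum $S = \sum _ { j = 1 } ^ { n } x _ { j }$; the sketch below is plain Python, and the two estimator formulas are exactly the ones displayed above.

<pre>
# Illustrative sketch (added): exact quadratic risk E_p[(d(S) - p)^2],
# where S = x_1 + ... + x_n has a Binomial(n, p) distribution.
from math import comb, sqrt

def risk(d, n, p):
    """Exact risk of the estimator d(s, n) under quadratic loss."""
    return sum(comb(n, s) * p**s * (1 - p)**(n - s) * (d(s, n) - p)**2
               for s in range(n + 1))

def sample_mean(s, n):
    return s / n

def minimax(s, n):
    # (xbar * sqrt(n) + 1/2) / (1 + sqrt(n)), rewritten with s = n * xbar
    return (s / sqrt(n) + 0.5) / (1 + sqrt(n))

n = 25
for p in (0.1, 0.3, 0.5):
    print(f"p={p}: mean {risk(sample_mean, n, p):.6f}, "
          f"minimax {risk(minimax, n, p):.6f}")

# The sample mean has risk p(1 - p)/n, while the minimax rule has the
# constant risk 1/(4 (1 + sqrt(n))^2), independent of p.
</pre>

The constant risk exhibited by the second estimator is precisely what makes it an equalizer rule.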
  
All UMVU estimators, as well as all unbiased estimators of zero, can be characterized in connection with Bernoulli experiments by introducing the following notion for general statistical experiments $( \Omega , \mathcal{A} , \mathcal{P} )$: A $d ^ { * } \in \cap_{ \mathsf{P} \in \mathcal{P}} L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$, being square-integrable for all $\mathsf{P} \in \mathcal{P}$, is called an UMVU estimator if
  
\begin{equation*} \operatorname { Var } _ { \mathsf{P} } ( d ^ { * } ) = \operatorname { min } \{ \operatorname { Var } _ { \mathsf{P} } ( d ) : d \in \cap _ { \mathsf{Q} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{Q} ) , \mathsf{E} _ { \mathsf{Q} } ( d ) = \mathsf{E} _ { \mathsf{Q} } ( d ^ { * } ) , \mathsf{Q} \in \mathcal{P} \} \end{equation*}
  
for all $\mathsf{P} \in \mathcal{P}$. The covariance method shows that $d ^ { * } \in \cap_{ \mathsf{P} \in \mathcal{P}} L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$ is a UMVU estimator if and only if $\operatorname { Cov } _ { \mathsf{P} } ( d ^ { * } , d _ { 0 } ) = 0$, $\mathsf{P} \in \mathcal{P}$, for all unbiased estimators $d _ { 0 } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$ of zero, i.e. estimators with $\mathsf{E} _ { \mathsf{P} } ( d _ { 0 } ) = 0$, $\mathsf{P} \in \mathcal{P}$ (cf. [[#References|[a3]]]). In particular, the covariance method implies the following properties of UMVU estimators:
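
The computation behind the covariance method is short and is supplied here for completeness: if $d = d ^ { * } + d _ { 0 }$ is any estimator with the same expectation as $d ^ { * }$ under every $\mathsf{P} \in \mathcal{P}$, so that $d _ { 0 }$ is an unbiased estimator of zero, then

\begin{equation*} \operatorname { Var } _ { \mathsf{P} } ( d ) = \operatorname { Var } _ { \mathsf{P} } ( d ^ { * } ) + 2 \operatorname { Cov } _ { \mathsf{P} } ( d ^ { * } , d _ { 0 } ) + \operatorname { Var } _ { \mathsf{P} } ( d _ { 0 } ) \geq \operatorname { Var } _ { \mathsf{P} } ( d ^ { * } ) \end{equation*}

whenever the covariance term vanishes; conversely, if $\operatorname { Cov } _ { \mathsf{P} } ( d ^ { * } , d _ { 0 } ) \neq 0$ for some $\mathsf{P}$ and some $d _ { 0 }$, then $d ^ { * } + t d _ { 0 }$ has smaller variance at $\mathsf{P}$ for a suitably chosen small $t \in \mathbf{R}$.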
  
i) (uniqueness) $d _ { j } ^ { * } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$, $j = 1,2$, UMVU estimators with $\mathsf{E} _ { \mathsf{P} } ( d _ { 1 } ^ { * } ) = \mathsf{E} _ { \mathsf{P} } ( d _ { 2 } ^ { * } )$, $\mathsf{P} \in \mathcal{P}$, implies $d _ { 1 } ^ { * } = d _ { 2 } ^ { * }$ $\mathsf{P}$-a.e. for all $\mathsf{P} \in \mathcal{P}$.
  
ii) (linearity) $d _ { j } ^ { * } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$, UMVU estimators, $a_j  \in \mathbf{R}$ ($\mathbf{R}$ the set of real numbers), $j = 1,2$, implies that $a _ { 1 } d _ { 1 } ^ { * } + a _ { 2 } d _ { 2 } ^ { * }$ is also an UMVU estimator.
  
iii) (multiplicativity) $d _ { j } ^ { * } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$, $j = 1,2$, UMVU estimators with $d _ { 1 } ^ { * }$ or $d _ { 2 } ^ { * }$ bounded, implies that $d _ { 1 } ^ { * } d _ { 2 } ^ { * }$ is also an UMVU estimator.
  
iv) (closedness) $d _ { n } ^ { * } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$, $n = 1,2 , \dots$, UMVU estimators satisfying $\operatorname { lim } _ { n \rightarrow \infty } \mathsf E _ { \mathsf P } [ ( d _ { n } ^ { * } - d ^ { * } ) ^ { 2 } ] = 0$ for some $d ^ { * } \in \cap_{ \mathsf{P} \in \mathcal{P}} L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} )$ and all $\mathsf{P} \in \mathcal{P}$ implies that $d ^ { * }$ is an UMVU estimator.
  
In the special case of a Bernoulli experiment of size $n$ one arrives by the property of uniqueness i) and the property of linearity ii), together with an argument based on interpolation polynomials, at the following characterization of UMVU estimators: $d ^ { * } : \{ 0,1 \} ^ { n } \rightarrow \mathbf R$ is a UMVU estimator if and only if one of the following conditions is valid:
  
v) $d ^ { * }$ is a polynomial in $\sum _ { j = 1 } ^ { n } x _ { j }$, $x _ { j } \in \{ 0,1 \}$, $j = 1 , \ldots , n$, of degree not exceeding $n$;
  
vi) $d ^ { * }$ is symmetric (permutation invariant).
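
For example (an illustration added here; it follows from v) and vi)): for $n \geq 2$, the statistic $S ( S - 1 ) / ( n ( n - 1 ) )$ with $S = \sum _ { j = 1 } ^ { n } x _ { j }$ is a symmetric polynomial in $S$ of degree $2 \leq n$, and

\begin{equation*} \mathsf{E} _ { \mathsf{P} _ { p } } \left( \frac { S ( S - 1 ) } { n ( n - 1 ) } \right) = \frac { n ( n - 1 ) p ^ { 2 } } { n ( n - 1 ) } = p ^ { 2 } , \end{equation*}

so it is the UMVU estimator of $p ^ { 2 }$.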
  
Moreover, the set of all real-valued parameter functions $f : [ 0,1 ] \rightarrow \mathbf{R}$ admitting some $d : \{ 0,1 \} ^ { n } \rightarrow \mathbf{R}$ with $\mathsf{E} _ { \mathsf{P} _ { p } } ( d ) = f ( p )$, $p \in [ 0,1 ]$, coincides with the set consisting of all polynomials in $p \in [ 0,1 ]$ of degree not exceeding $n$. In particular, $d : \{ 0,1 \} ^ { n } \rightarrow \mathbf{R}$ is an unbiased estimator of zero if and only if its symmetrization $d_{s}$, defined by
  
\begin{equation*} d _ { s } ( x _ { 1 } , \ldots , x _ { n } ) = \frac { 1 } { n ! } \sum _ { \pi \text { a permutation } } d ( x _ { \pi ( 1 ) } , \ldots , x _ { \pi ( n ) } ) , \quad ( x _ { 1 } , \ldots , x _ { n } ) \in \{ 0,1 \} ^ { n } , \end{equation*}
  
vanishes. Therefore, the set $D$ consisting of all estimators $d : \{ 0,1 \} ^ { n } \rightarrow \mathbf{R}$ is equal to the direct sum $D _ { s } \oplus D _ { s } ^ { \perp }$, where $D _ { s }$ stands for $\{ d \in D : d = d _ { s } \}$ and $D _ { s } ^ { \perp }$ is equal to $\{ d \in D : d _ { s } = 0 \}$. In particular, $\operatorname { dim } D = 2 ^ { n }$, $\operatorname { dim } D _ { s } = n + 1$ and $\operatorname { dim } D _ { s } ^ { \perp } = 2 ^ { n } - n - 1$.
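
These dimension counts can be verified by brute force for small $n$; the following sketch (an addition, assuming the numpy package is available) computes the rank of the symmetrization projection $d \mapsto d _ { s }$ on $D$.

<pre>
# Brute-force check (added for illustration) of dim D_s = n + 1 and
# dim D_s^perp = 2^n - n - 1 via the symmetrization projection d -> d_s.
from itertools import permutations, product
import numpy as np

n = 3
points = list(product((0, 1), repeat=n))   # the 2^n points of {0,1}^n
index = {x: i for i, x in enumerate(points)}
perms = list(permutations(range(n)))

# Matrix of d -> d_s in the canonical basis of D: (Sd)(x) averages d over
# all coordinate permutations of x.
S = np.zeros((2**n, 2**n))
for i, x in enumerate(points):
    for pi in perms:
        S[i, index[tuple(x[k] for k in pi)]] += 1.0 / len(perms)

rank = np.linalg.matrix_rank(S)   # = dim D_s, since S projects D onto D_s
print(rank, 2**n - rank)          # expect n + 1 and 2^n - n - 1
</pre>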
  
If one is interested, in connection with general statistical experiments $( \Omega , \mathcal{A} , \mathcal{P} )$, only in locally minimum-variance unbiased estimators at some $\mathsf{P} _ { 0 } \in \mathcal{P}$, one might start from $d ^ { * } \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 1 } ( \Omega , \mathcal{A} , \mathsf{P} ) \cap L _ { 2 } ( \Omega ,\mathcal{A} , \mathsf{P}_ { 0 } )$ satisfying
  
\begin{equation*} \operatorname { Var } _ { \mathsf {P} _ { 0 } } ( d ^ { * } ) = \operatorname { min } \{ \operatorname { Var } _ { \mathsf{P} _ { 0 } } ( d ) : d \in \cap _ { \mathsf{P} \in \mathcal{P} } L _ { 1 } ( \Omega , \mathcal{A} , \mathsf{P} ) \cap L _ { 2 } ( \Omega , \mathcal{A} , \mathsf{P} _ { 0 } ) , \mathsf{E} _ { \mathsf{P} } ( d ) = \mathsf{E} _ { \mathsf{P} } ( d ^ { * } ) , \mathsf{P} \in \mathcal{P} \} . \end{equation*}
  
Then the covariance method yields again the properties of uniqueness, linearity and closedness (with respect to $\mathsf{P} _ { 0 }$), whereas the property of multiplicativity does not hold, in general, for locally minimum-variance unbiased estimators; this can be illustrated by infinite Bernoulli experiments, where the probability $p$ of success is equal to $1/2$, as follows.
  
Let $( \Omega , \mathcal{A} , \mathcal{P} )$ be the special statistical experiment with $\Omega = \mathbf{N} \cup \{ 0 \}$, $\mathcal{A}$ coinciding with the set of all subsets of $\mathbf{N} \cup \{ 0 \}$, and $\mathcal{P}$ being the set of all binomial distributions $B ( n , 1 / 2 )$ with integer-valued parameter $n \in \mathbf N$ and probability of success $p = 1 / 2$ (cf. also [[Binomial distribution|Binomial distribution]]). Then the covariance method, together with an argument based on interpolation polynomials, yields the following characterization of locally optimal unbiased estimators: $d ^ { * } : \mathbf{N} \cup \{ 0 \} \rightarrow \mathbf{R}$ is locally optimal at $\mathsf{P} _ { n }$ for all $n > \delta$ ($\delta \in \mathbf{N} \cup \{ 0 \}$ fixed) among all estimators $d : \mathbf{N} \cup \{ 0 \} \rightarrow \mathbf{R}$ with $\mathsf{E} _ { \mathsf{P} _ { n } } ( d ) = \mathsf{E} _ { \mathsf{P}_ { n } } ( d ^ { * } )$, $n \in \mathbf N$, if and only if $d ^ { * }$ is a polynomial in $k \in \mathbf{N} \cup \{ 0 \}$ of degree not exceeding $\delta$. In particular, $d ^ { * } : \mathbf{N} \cup \{ 0 \} \rightarrow \mathbf{R}$ is a UMVU estimator if and only if $d ^ { * }$ is already deterministic. Moreover, the property of multiplicativity of locally optimal unbiased estimators is not valid.
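
For instance (an example made explicit here, read off from the characterization above): $d ^ { * } ( k ) = k$ satisfies the criterion with $\delta = 1$, but its product with itself, $k ^ { 2 }$, is a polynomial of degree $2$ and hence satisfies the criterion only with $\delta = 2$; so the product of locally optimal unbiased estimators need not be locally optimal over the same range $n > \delta$.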
  
There is also the following version of the preceding characterization of locally optimal unbiased estimators for $m$ realizations of independent, identically distributed random variables with some binomial distribution $B ( n , 1 / 2 )$, $n \in \mathbf N$. Let $\Omega = ( \mathbf{N} \cup \{ 0 \} ) ^ { m }$, let $\mathcal{A}$ be the set of all subsets of $\Omega$, and let $\mathcal{P} = \{ \mathsf{P} _ { n } ^ { m } : n \in \mathbf{N} \}$, where $\mathsf{P} _ { n } ^ { m }$ denotes the $m$-fold direct product of $\mathsf{P} _ { n }$ having the binomial distribution $B ( n , 1 / 2 )$. Then $d ^ { * } : \Omega \rightarrow \mathbf{R}$ is locally optimal at $\mathsf{P} _ { n } ^ { m }$ for all $n > \delta$ ($\delta \in \mathbf{N} \cup \{ 0 \}$ fixed) among all estimators $d : \Omega \rightarrow \mathbf{R}$ with $\mathsf{E} _ { \mathsf{P} _ { n } ^ { m } } ( d ) = \mathsf{E} _ { \mathsf{P} _ { n } ^ { m } } ( d ^ { * } )$, $n \in \mathbf N$, if $d ^ { * }$ is a [[Symmetric polynomial|symmetric polynomial]] in $( k _ { 1 } , \dots , k _ { m } ) \in ( \mathbf{N} \cup \{ 0 \} ) ^ { m }$ and, keeping the remaining variables $k _ { i }$, $i \in \{ 1 , \ldots , m \} \setminus \{ j \}$, fixed, a [[Polynomial|polynomial]] in $k _ { j } \in \mathbf{N} \cup \{ 0 \}$ of degree not exceeding $\delta$, $j = 1 , \ldots , m$. In particular, for $m > 1$ the sample mean
  
\begin{equation*} \frac { 1 } { m } \sum _ { j = 1 } ^ { m } k _ { j } \end{equation*}
  
is not locally optimal at $\mathsf{P} _ { n } ^ { m }$ for any $n > \delta$ and some fixed $\delta \in \mathbf{N} \cup \{ 0 \}$.
  
Finally, there are also interesting results about Bernoulli experiments of size $n$ with varying probabilities of success, which, in connection with the randomized response model (cf. [[#References|[a1]]]), have the form $p p _ { i } + ( 1 - p ) ( 1 - p _ { i } )$, $i = 1 , \dots , n$, with $p _ { i } \neq 1 / 2$, $i = 1 , \dots , n$, fixed and $p \in [ 0,1 ]$. Then there exists an UMVU estimator for $p$ based on $( x _ { 1 } , \ldots , x _ { n } ) \in \{ 0,1 \} ^ { n }$ if and only if $p _ { i } = p _ { j }$ or $p _ { i } = 1 - p _ { j }$ for all $i , j \in \{ 1 , \ldots , n \}$. In this case
  
\begin{equation*} \frac { 1 } { n } \sum _ { j = 1 } ^ { n } \frac { x _ { j } - 1 + p _ { j } } { 2 p _ { j } - 1 } \end{equation*}
  
is a UMVU estimator for $p$.
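
Unbiasedness of this estimator can be checked directly (a one-step computation added for completeness): writing $\mathsf{E} _ { p }$ for the expectation under the success parameter $p$, one has $\mathsf{E} _ { p } ( x _ { j } ) = p p _ { j } + ( 1 - p ) ( 1 - p _ { j } ) = p ( 2 p _ { j } - 1 ) + 1 - p _ { j }$, so that every summand satisfies

\begin{equation*} \mathsf{E} _ { p } \left( \frac { x _ { j } - 1 + p _ { j } } { 2 p _ { j } - 1 } \right) = \frac { p ( 2 p _ { j } - 1 ) + 1 - p _ { j } - 1 + p _ { j } } { 2 p _ { j } - 1 } = p . \end{equation*}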
  
If the probabilities of success $p _ { i }$ are functions $f _ { i } : \Theta \rightarrow [ 0,1 ]$, $i = 1 , \dots , n$, with $\Theta$ as parameter space, there exists a symmetric and sufficient data reduction of $( x _ { 1 } , \dots , x _ { n } ) \in \{ 0,1 \} ^ { n }$ if and only if there are functions $g : \Theta \rightarrow \mathbf{R}$, $h : \{ 1 , \dots , n \} \rightarrow \mathbf{R}$ such that
  
\begin{equation*} f _ { i } ( \vartheta ) = \frac { \operatorname { exp } ( g ( \vartheta ) + h ( i ) ) } { 1 + \operatorname { exp } ( g ( \vartheta ) + h ( i ) ) } , \quad \vartheta \in \Theta , \; i = 1 , \ldots , n . \end{equation*}
  
 
In particular, the sample mean is sufficient in this case.
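
Indeed (a standard factorization argument, supplied here for completeness), under the above form of the $f _ { i }$ the joint distribution of $( x _ { 1 } , \dots , x _ { n } )$ factorizes as

\begin{equation*} \prod _ { i = 1 } ^ { n } f _ { i } ( \vartheta ) ^ { x _ { i } } ( 1 - f _ { i } ( \vartheta ) ) ^ { 1 - x _ { i } } = \operatorname { exp } \left( g ( \vartheta ) \sum _ { i = 1 } ^ { n } x _ { i } \right) \frac { \operatorname { exp } \left( \sum _ { i = 1 } ^ { n } h ( i ) x _ { i } \right) } { \prod _ { i = 1 } ^ { n } ( 1 + \operatorname { exp } ( g ( \vartheta ) + h ( i ) ) ) } , \end{equation*}

so the dependence on $\vartheta$ enters only through $\sum _ { i = 1 } ^ { n } x _ { i }$, and sufficiency of the sample mean follows from the factorization criterion.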
 
  
 
====References====
 
<table><tr><td valign="top">[a1]</td> <td valign="top">  A. Chaudhuri,  R. Mukerjee,  "Randomized response" , M. Dekker  (1988)</td></tr><tr><td valign="top">[a2]</td> <td valign="top">  T.S. Ferguson,  "Mathematical statistics: a decision theoretic approach" , Acad. Press  (1967)</td></tr><tr><td valign="top">[a3]</td> <td valign="top">  E.L. Lehmann,  "Theory of point estimation" , Wiley  (1983)</td></tr></table>
