Nuisance parameter
Any unknown parameter of a probability distribution in a statistical problem connected with the study of other parameters of the given distribution. More precisely, suppose that, from a realization of a random variable $X$ taking values in a sample space $(\mathfrak X, \mathfrak B, \mathsf P_\theta)$, $\theta = (\theta_1, \dots, \theta_n)$, $\theta \in \mathbf R^n$, it is necessary to make a statistical inference about the parameters $\theta_1, \dots, \theta_k$, $k < n$. Then $\theta_{k+1}, \dots, \theta_n$ are nuisance parameters in this problem. For example, let $X_1, \dots, X_n$ be independent random variables, subject to the normal law $\Phi((x - \xi)/\sigma)$ with unknown parameters $\xi$ and $\sigma^2$, and suppose one wishes to test the hypothesis $H_0\colon \xi = \xi_0$, where $\xi_0$ is some fixed number. The unknown variance $\sigma^2$ is a nuisance parameter in the problem of testing $H_0$.
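A standard way to remove the influence of the nuisance parameter in this example is studentization (sketched here for illustration; the particular statistic is not prescribed above). Under $H_0$ the statistic

$$ t = \frac{\sqrt{n}\,(\overline X - \xi_0)}{s}, \qquad \overline X = \frac1n \sum_{i=1}^n X_i, \quad s^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \overline X)^2, $$

has the Student distribution with $n-1$ degrees of freedom for every value of $\sigma^2$, so a test based on $t$ does not depend on the unknown variance.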
Another important example of a problem with a nuisance parameter is the Behrens–Fisher problem. Naturally, in solving a statistical problem with nuisance parameters it is desirable to make statistical inferences that do not depend on these parameters. In the theory of statistical hypothesis testing this is often achieved by narrowing the class of tests intended for testing a given hypothesis $H_0$ in the presence of a nuisance parameter to the class of similar tests (cf. Statistical test).
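A test is called similar if its rejection probability under $H_0$ is the same for all values of the nuisance parameter. In the notation of the example above (stated here for illustration) this condition reads

$$ \mathsf P_{\xi_0, \sigma^2}\{\, \text{reject } H_0 \,\} = \alpha \quad \text{for every } \sigma^2 > 0, $$

where $\alpha$ is the prescribed significance level; the Student $t$-test of the example above is similar in this sense.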
References
[1] Yu.V. Linnik, "Statistical problems with nuisance parameters", Amer. Math. Soc. (1968) (Translated from Russian)
Comments
References
[a1] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1978)
[a2] E.L. Lehmann, "Theory of point estimation", Wiley (1983)