Cramér-von Mises test


2020 Mathematics Subject Classification: Primary: 62G10

A non-parametric test for testing a hypothesis $H_0$ which states that independent and identically distributed random variables $X_1, \dots, X_n$ have a given continuous distribution function $F(x)$. The Cramér–von Mises test is based on a statistic of the type

$$ \omega_n^2 [ \Psi(F(x)) ] = \int\limits_{-\infty}^{+\infty} \left[ \sqrt{n}\, ( F_n(x) - F(x) ) \right]^2 \Psi(F(x)) \, dF(x), $$

where $F_n(x)$ is the empirical distribution function constructed from the sample $X_1, \dots, X_n$ and $\Psi(t)$ is a certain non-negative function defined on the interval $[0, 1]$ such that $\Psi(t)$, $t\Psi(t)$ and $t^2\Psi(t)$ are integrable on $[0, 1]$. Tests of this type, based on the "square metric", were first considered by H. Cramér [C] and R. von Mises [M]. N.V. Smirnov proposed putting $\Psi(t) \equiv 1$, and showed that in that case, if the hypothesis $H_0$ is valid and $n \rightarrow \infty$, the statistic $\omega^2 = \omega_n^2$ has in the limit an "omega-squared" distribution, independent of the hypothetical distribution function $F(x)$. A statistical test for testing $H_0$ based on the statistic $\omega_n^2$ is called an $\omega^2$ (Cramér–von Mises–Smirnov) test, and the numerical value of $\omega_n^2$ is found using the following representation:

$$ \omega_n^2 = \frac{1}{12n} + \sum_{j=1}^{n} \left[ F( X_{(j)} ) - \frac{2j-1}{2n} \right]^2 , $$

where $X_{(1)} \leq \dots \leq X_{(n)}$ is the variational series based on the sample $X_1, \dots, X_n$. According to the $\omega^2$ test with significance level $\alpha$, the hypothesis $H_0$ is rejected whenever $\omega_n^2 \geq \omega_\alpha^2$, where $\omega_\alpha^2$ is the upper $\alpha$-quantile of the distribution of $\omega^2$, i.e. ${\mathsf P} \{ \omega^2 < \omega_\alpha^2 \} = 1 - \alpha$. T.W. Anderson and D.A. Darling proposed a similarly constructed test, based on the statistic $\omega_n^2 [ \{ F(x)(1 - F(x)) \}^{-1} ]$ (see [AD]).
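As a concrete illustration (not part of the original article), the following Python sketch evaluates $\omega_n^2$ from the representation above for a sample and a hypothesised standard normal $F$, and applies the decision rule at level $\alpha = 0.05$. The function name, the sample and the numerical critical value are illustrative assumptions; in practice $\omega_\alpha^2$ should be taken from tables of the $\omega^2$ distribution such as [BS].

```python
# Minimal sketch of the omega^2 (Cramér–von Mises–Smirnov) test, using the exact
# representation omega_n^2 = 1/(12n) + sum_j [ F(X_(j)) - (2j-1)/(2n) ]^2.
import numpy as np
from scipy.stats import norm

def cvm_statistic(x, F):
    """omega_n^2 computed from the order statistics X_(1) <= ... <= X_(n)."""
    u = np.sort(F(np.asarray(x)))        # F(X_(j)), j = 1, ..., n
    n = u.size
    j = np.arange(1, n + 1)
    return 1.0 / (12.0 * n) + np.sum((u - (2.0 * j - 1.0) / (2.0 * n)) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=100)                 # illustrative sample

omega2 = cvm_statistic(x, norm.cdf)      # hypothesised F = standard normal CDF
omega2_alpha = 0.461                     # approximate upper 5%-quantile of the limiting
                                         # omega^2 law; check against the tables in [BS]
print("omega_n^2 =", omega2, " reject H_0:", omega2 >= omega2_alpha)

# SciPy (>= 1.6) provides the same statistic together with an asymptotic p-value:
# from scipy.stats import cramervonmises
# res = cramervonmises(x, "norm"); print(res.statistic, res.pvalue)
```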

References

[C] H. Cramér, "Sannolikhetskalkylen och några av dess användningar", Stockholm (1926)
[M] R. von Mises, "Mathematical theory of probability and statistics" (1964) (Translated from German)
[S] N.V. Smirnov, "On the $\omega^2$-distribution of von Mises", Mat. Sb., 2 : 5 (1937) pp. 973–993 (In Russian) (French abstract)
[BS] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics", Libr. math. tables, 46, Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova)
[AD] T.W. Anderson, D.A. Darling, "Asymptotic theory of certain "goodness-of-fit" criteria based on stochastic processes", Ann. of Math. Stat., 23 (1952) pp. 193–212

Comments

Usually, the choice $\Psi(t) \equiv 1$ is simply called the Cramér–von Mises test in Western literature. However, Smirnov first proposed making this choice and rewrote the statistic in the distribution-free form above. The limit distribution of $\omega_n^2$ is independent of $F$ whatever the choice of $\Psi$. (The term "square metric" refers to the expression $[ \sqrt{n} ( F_n(x) - F(x) ) ]^2$, not to some choice of $\Psi$.) Cramér actually considered the test with $\Psi(F(x)) \, dF(x)$ replaced by $dx$, while von Mises used $\lambda(x) \, dx$.
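To make the role of the weight function $\Psi$ concrete, here is a numerical sketch (again illustrative, with assumed function names, not from the original article). The substitution $t = F(x)$ turns the defining integral into $n \int_0^1 ( G_n(t) - t )^2 \Psi(t) \, dt$, where $G_n$ is the empirical distribution function of the values $F(X_i)$; with $\Psi \equiv 1$ this approximates the Cramér–von Mises–Smirnov statistic above, while $\Psi(t) = 1/(t(1-t))$ gives the Anderson–Darling-type weighting.

```python
# Midpoint-rule approximation of the weighted statistic omega_n^2[Psi(F(x))],
# written via the substitution t = F(x):  n * int_0^1 (G_n(t) - t)^2 Psi(t) dt.
import numpy as np
from scipy.stats import norm

def weighted_cvm_statistic(x, F, Psi, grid_size=200_000):
    """Approximate omega_n^2[Psi(F(x))] for sample x, hypothesised CDF F, weight Psi."""
    u = np.sort(F(np.asarray(x)))                   # values F(X_i), sorted
    n = u.size
    t = (np.arange(grid_size) + 0.5) / grid_size    # midpoints of a grid on (0, 1)
    G_n = np.searchsorted(u, t, side="right") / n   # empirical d.f. of the F(X_i) at t
    return n * np.mean((G_n - t) ** 2 * Psi(t))     # midpoint rule for the integral

rng = np.random.default_rng(0)
x = rng.normal(size=100)

# Psi = 1 approximately recovers the statistic computed above;
# Psi(t) = 1/(t(1-t)) corresponds to the Anderson–Darling-type weighting.
print(weighted_cvm_statistic(x, norm.cdf, lambda t: np.ones_like(t)))
print(weighted_cvm_statistic(x, norm.cdf, lambda t: 1.0 / (t * (1.0 - t))))
```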

An alternative to [C] is [C2].

References

[C2] H. Cramér, "On the composition of elementary errors II", Skand. Aktuarietidskr. (1928) pp. 171–280
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.