Fisher F-distribution

From Encyclopedia of Mathematics

Revision as of 17:02, 1 July 2020

$F$-distribution, Fisher–Snedecor distribution, Snedecor distribution

A continuous probability distribution concentrated on $(0,\infty)$ with density

\begin{equation} \tag{1} f _ { \nu _ { 1 } , \nu _ { 2 } } ( x ) = \frac { 1 } { B ( \nu _ { 1 } / 2 , \nu _ { 2 } / 2 ) } \left( \frac { \nu _ { 1 } } { \nu _ { 2 } } \right) ^ { \nu _ { 1 } / 2 } x ^ { ( \nu _ { 1 } / 2 ) - 1 } \left( 1 + \frac { \nu _ { 1 } } { \nu _ { 2 } } x \right) ^ { - ( \nu _ { 1 } + \nu _ { 2 } ) / 2 } , \quad x > 0 , \end{equation}

where $\nu _ { 1 } , \nu _ { 2 } > 0$ are parameters, and $B ( \nu _ { 1 } / 2 , \nu _ { 2 } / 2 )$ is the beta-function. For $\nu _ { 1 } > 2$ it is a unimodal positive asymmetric distribution with mode at the point $x = [ ( \nu _ { 1 } - 2 ) / \nu _ { 1 } ] \cdot [ \nu _ { 2 } / ( \nu _ { 2 } + 2 ) ]$. Its mathematical expectation and variance are, respectively, equal to

\begin{equation*} \frac { \nu _ { 2 } } { \nu _ { 2 } - 2 } \quad \text { for } \nu _ { 2 } > 2 \end{equation*}

and

\begin{equation*} \frac { 2 \nu_2 ^ { 2 }( \nu _ { 1 } + \nu _ { 2 } - 2 ) } { \nu _ { 1 } ( \nu _ { 2 } - 2 ) ^ { 2 } ( \nu _ { 2 } - 4 ) } \quad \text { for } \nu _ { 2 } > 4. \end{equation*}
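These closed-form moments can be sanity-checked against density (1). The Python sketch below (an illustration, not part of the original article; the integration cutoff and step size are ad-hoc choices) integrates $x\,f_{\nu_1,\nu_2}(x)$ numerically and compares it with $\nu_2/(\nu_2-2)$:

```python
import math

def f_pdf(x, v1, v2):
    # Density (1), in the standard form with exponent -(v1 + v2)/2.
    log_b = math.lgamma(v1 / 2) + math.lgamma(v2 / 2) - math.lgamma((v1 + v2) / 2)
    return math.exp(-log_b
                    + (v1 / 2) * math.log(v1 / v2)
                    + (v1 / 2 - 1) * math.log(x)
                    - ((v1 + v2) / 2) * math.log1p(v1 * x / v2))

v1, v2 = 5.0, 12.0
mean = v2 / (v2 - 2)                                              # valid for v2 > 2
var = 2 * v2**2 * (v1 + v2 - 2) / (v1 * (v2 - 2)**2 * (v2 - 4))   # valid for v2 > 4

# Midpoint quadrature of x * f(x) over (0, 200); the tail beyond 200
# is negligible for these parameters (it decays like x**(-v2/2)).
h = 1e-3
num_mean = sum((i + 0.5) * h * f_pdf((i + 0.5) * h, v1, v2)
               for i in range(200000)) * h
assert abs(num_mean - mean) < 1e-3
```

For $\nu_1 = 5$, $\nu_2 = 12$ this gives mean $1.2$ and variance $1.08$.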

The Fisher $F$-distribution reduces to a beta-distribution of the second kind (a type-VI distribution in Pearson's classification). It can be regarded as the distribution of a random variable represented in the form of the quotient

\begin{equation*} F _ { \nu _ { 1 } , \nu _ { 2 } } = \frac { \nu _ { 2 } X _ { 1 }} { \nu _ { 1 } X _ { 2 } } , \end{equation*}

where the independent random variables $X _ { 1 }$ and $X _ { 2 }$ have gamma-distributions (cf. Gamma-distribution) with parameters $\nu _ { 1 } / 2$ and $\nu _ { 2 } / 2$, respectively. The distribution function of $F _ { \nu _ { 1 } , \nu _ { 2 } }$ can be expressed in terms of the distribution function $B _ { \nu _ { 1 } / 2 , \nu _ { 2 } / 2 } ( x )$ of the beta-distribution:

\begin{equation} \tag{2} \mathsf{P} \{ F _ { \nu _ { 1 } , \nu _ { 2 } } < x \} = B _ { \nu _ { 1 } / 2 , \nu _ { 2 } / 2} \left( \frac { ( \nu _ { 1 } / \nu _ { 2 } ) x } { 1 + ( \nu _ { 1 } / \nu _ { 2 } ) x } \right). \end{equation}
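Relation (2) lends itself to direct computation. The sketch below (an illustration only, not the article's method) stands in for tables of the incomplete beta-function with crude midpoint quadrature of the beta density; since for $\nu_1 = \nu_2$ the quotient $F$ and $1/F$ have the same distribution, the median is 1, which provides a check:

```python
import math

def beta_cdf(y, a, b, n=200000):
    # Distribution function B_{a,b}(y) of the beta-distribution,
    # by midpoint quadrature of the beta density on (0, y).
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = y / n
    s = sum(math.exp((a - 1) * math.log((i + 0.5) * h)
                     + (b - 1) * math.log1p(-(i + 0.5) * h))
            for i in range(n))
    return s * h * math.exp(-log_b)

def F_cdf(x, v1, v2):
    # P{F_{v1,v2} < x} via relation (2).
    y = (v1 / v2) * x / (1 + (v1 / v2) * x)
    return beta_cdf(y, v1 / 2, v2 / 2)

# Symmetry check: the median of F_{7,7} is 1.
assert abs(F_cdf(1.0, 7, 7) - 0.5) < 1e-3
```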

This relation is used for calculating the values of the Fisher $F$-distribution by means of tables of the beta-distribution. If $\nu _ { 1 } = m$ and $\nu _ { 2 } = n$ are integers, then the Fisher $F$-distribution with $m$ and $n$ degrees of freedom is the distribution of the $F$-quotient

\begin{equation} \tag{3} F _ { m n } = \frac { \chi _ { m } ^ { 2 } / m } { \chi _ { n } ^ { 2 } / n }, \end{equation}

where $\chi ^ { 2 }_{m}$ and $\chi_n ^ { 2 }$ are independent random variables with "chi-squared" distributions (cf. Chi-squared distribution) with $m$ and $n$ degrees of freedom, respectively.
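Representation (3) also gives a direct way to simulate the $F$-quotient. A minimal Python sketch, assuming nothing beyond the standard library (sample size and seed are arbitrary), checks the empirical mean against $n/(n-2)$:

```python
import random

random.seed(0)

def chi2(k):
    # A chi-squared variate with k degrees of freedom,
    # realized as a sum of k squared standard normals.
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

m, n = 5, 12
N = 50000
# The F-quotient (3): ratio of chi-squareds each divided by its
# degrees of freedom.
samples = [(chi2(m) / m) / (chi2(n) / n) for _ in range(N)]

emp_mean = sum(samples) / N
assert abs(emp_mean - n / (n - 2)) < 0.05   # theoretical mean 1.2
```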

The Fisher $F$-distribution plays a fundamental role in mathematical statistics and appears in the first place as the distribution of the quotient of two sample variances. Namely, let $X _ { 1 } , \dots , X _ { m }$ and $Y _ { 1 } , \ldots , Y _ { n }$ be samples from normal populations with parameters $( a _ { 1 } , \sigma _ { 1 } ^ { 2 } )$ and $( a _ { 2 } , \sigma _ { 2 } ^ { 2 } )$. The expressions

\begin{equation*} s _ { 1 } ^ { 2 } = \frac { 1 } { m - 1 } \sum _ { i } ( X _ { i } - \overline{X} ) ^ { 2 } \quad \text { and } \quad s _ { 2 } ^ { 2 } = \frac { 1 } { n - 1 } \sum _ { j } ( Y _ { j } - \overline{Y} ) ^ { 2 }, \end{equation*}

where $\overline{X} = \sum _ { i } X _ { i } / m$, $\overline{Y} = \sum _ { j } Y _ { j } / n$, serve as estimators of the variances $\sigma _ { 1 } ^ { 2 }$ and $\sigma _ { 2 } ^ { 2 }$. Then the so-called dispersion proportion $F = \sigma _ { 2 } ^ { 2 } s _ { 1 } ^ { 2 } / \sigma _ { 1 } ^ { 2 } s _ { 2 } ^ { 2 }$ has a Fisher $F$-distribution with $m - 1$ and $n - 1$ degrees of freedom under the hypothesis that $\sigma _ { 1 } = \sigma _ { 2 }$ (in this capacity the Fisher $F$-distribution is also called the distribution of the dispersion proportion). The $F$-test is based on the statistic $F$, and it is used, in particular, for testing the hypothesis that the variances of two populations are equal, in the analysis of variance, regression analysis and multi-dimensional statistical analysis.
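The statistic the $F$-test is based on can be sketched as follows (the samples are hypothetical, drawn here from two normal populations with equal variance; the names and sizes are illustrative, not from the article):

```python
import random
import statistics

random.seed(1)
# Hypothetical samples from normal populations with sigma_1 = sigma_2.
X = [random.gauss(0.0, 2.0) for _ in range(20)]   # m = 20
Y = [random.gauss(0.0, 2.0) for _ in range(15)]   # n = 15

s1_sq = statistics.variance(X)    # unbiased estimator, divisor m - 1
s2_sq = statistics.variance(Y)    # unbiased estimator, divisor n - 1
F = s1_sq / s2_sq                 # variance ratio; under H0: sigma_1 = sigma_2
df = (len(X) - 1, len(Y) - 1)     # (m - 1, n - 1) degrees of freedom

# F would now be compared with a tabulated critical value of the
# Fisher F-distribution with df degrees of freedom.
```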

The universality of the Fisher $F$-distribution is underlined by its connections with other distributions. For $m = 1$ the quantity $F _ { m n }$ in (3) is the square of a random variable having the Student distribution with $n$ degrees of freedom. There are a number of approximations of the Fisher $F$-distribution using the normal and "chi-squared" distributions.

The introduction of the Fisher $F$-distribution in the analysis of variance is connected with the name of R.A. Fisher (1924), although Fisher himself used for the dispersion proportion a quantity $z$, connected with $F$ by the relation $z = \frac { 1 } { 2 } \ln F$. The distribution of $z$ was tabulated by Fisher, and the Fisher $F$-distribution by G. Snedecor (1937). At present the simpler Fisher $F$-distribution is preferred, making use of its connection with the beta-distribution and tables of the incomplete beta-function.

See also Dispersion analysis; Fisher $z$-distribution.

References

[1] R.A. Fisher, "On a distribution yielding the error functions of several well-known statistics", Proc. Internat. Congress of Mathematicians (Toronto, 1924), 2, Univ. Toronto Press (1928), pp. 805–813
[2] M.G. Kendall, A. Stuart, "The advanced theory of statistics. Distribution theory", 3. Design and analysis, Griffin (1969)
[3] H. Scheffé, "The analysis of variance", Wiley (1959)
[4] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics", Libr. math. tables, 46, Nauka (1983) (in Russian) (processed by L.S. Bark and E.S. Kedrova)


Comments

The dispersion proportion is also known as the variance ratio, and is in the case of the $F$-distribution also called the $F$-ratio. Cf. also Dispersion proportion.

This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article