Discriminant informant

From Encyclopedia of Mathematics
Latest revision as of 12:24, 14 February 2020

A term in discriminant analysis denoting a variable used to establish a rule for assigning an object with measurements $x=(x_1,\dotsc,x_p)$, drawn from a mixture of $k$ sets with distribution densities $p_1(x),\dotsc,p_k(x)$ and a priori probabilities $q_1,\dotsc,q_k$, to one of these sets. The $i$-th discriminant informant of the object with measurement $x$ is defined as

$$S_i=-[q_1p_1(x)r_{1i}+\dotsb+q_kp_k(x)r_{ki}],\quad i=1,\dotsc,k,$$

where $r_{ij}$ is the loss incurred by assigning an element from distribution $i$ to distribution $j$. The rule that assigns an object to the distribution with the largest discriminant informant minimizes the expected loss. In particular, if all $k$ distributions are normal with identical covariance matrices, all discriminant informants are linear functions of $x$; for $k=2$, the difference $S_1-S_2$ is then Fisher's linear discriminant function.
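The decision rule can be illustrated numerically. The sketch below (a hedged example, not part of the original article: the function names, the univariate normal densities, and all numerical values are illustrative assumptions) computes the discriminant informants $S_i$ for a two-component mixture with 0–1 loss ($r_{ii}=0$, $r_{ij}=1$ for $i\neq j$) and assigns the observation to the class with the largest informant:

```python
import numpy as np

def normal_pdf(x, mean, var):
    """Density of the univariate normal N(mean, var) at x."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def discriminant_informants(x, means, variances, priors, losses):
    """S_i = -[q_1 p_1(x) r_{1i} + ... + q_k p_k(x) r_{ki}] for i = 1..k.

    losses[j][i] is r_{ji}, the loss of assigning an element of
    distribution j to distribution i.
    """
    p = np.array([normal_pdf(x, m, v) for m, v in zip(means, variances)])
    q = np.asarray(priors)
    r = np.asarray(losses)
    return -(q * p) @ r  # vector (S_1, ..., S_k)

# Illustrative two-class mixture: equal priors, equal variances,
# 0-1 loss, so maximizing S_i coincides with the Bayes rule
# "choose the class with the largest posterior q_i p_i(x)".
means, variances = [0.0, 2.0], [1.0, 1.0]
priors = [0.5, 0.5]
losses = [[0.0, 1.0], [1.0, 0.0]]

S = discriminant_informants(0.7, means, variances, priors, losses)
assigned = int(np.argmax(S))  # 0: x = 0.7 lies left of the boundary x = 1
```

With 0–1 loss each $S_i$ reduces to $-\sum_{j\neq i} q_j p_j(x)$, so maximizing $S_i$ is the same as maximizing $q_i p_i(x)$; and since the two variances here are equal, $S_1-S_2$ is linear in $x$, in line with the remark on Fisher's linear discriminant function above.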

References

[1] C.R. Rao, "Linear statistical inference and its applications", Wiley (1965)
How to Cite This Entry:
Discriminant informant. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Discriminant_informant&oldid=44583
This article was adapted from an original article by N.M. Mitrofanova and A.P. Khusu (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.