Kullback-Leibler information

Kullback–Leibler quantity of information, Kullback–Leibler information quantity, directed divergence

For discrete distributions (cf. Discrete distribution) given by probability vectors $p = (p_1, \dots, p_n)$, $q = (q_1, \dots, q_n)$, the Kullback–Leibler (quantity of) information of $p$ with respect to $q$ is:

$$ I(p;q) = \sum_{i=1}^{n} p_i ( \log p_i - \log q_i ), $$

where $\log$ is the natural logarithm (cf. also Logarithm of a number).
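
For instance (an illustrative computation, not part of the original article), taking $p = (1/2, 1/2)$ and $q = (1/4, 3/4)$ gives

$$ I(p;q) = \frac{1}{2} \log \frac{1/2}{1/4} + \frac{1}{2} \log \frac{1/2}{3/4} = \frac{1}{2} \log \frac{4}{3} \approx 0.144; $$

in general $I(p;q) \ge 0$, with equality precisely when $p = q$.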

More generally, one has:

$$ I(P;Q) = \int\limits_\Omega \log \frac{p(\omega)}{q(\omega)} \, P(d\omega) $$

for probability distributions $P(d\omega)$ and $Q(d\omega)$ with densities $p(\omega)$ and $q(\omega)$ (cf. Density of a probability distribution).
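
As an illustration (a standard computation, not taken from the original article), let $P$ and $Q$ be normal distributions on the real line with common variance $\sigma^2$ and means $\mu_1$ and $\mu_2$. Then

$$ \log \frac{p(\omega)}{q(\omega)} = \frac{(\omega - \mu_2)^2 - (\omega - \mu_1)^2}{2 \sigma^2}, \qquad I(P;Q) = \frac{(\mu_1 - \mu_2)^2}{2 \sigma^2}, $$

since integration against $P(d\omega)$ gives $\int (\omega - \mu_1)^2 \, P(d\omega) = \sigma^2$ and $\int (\omega - \mu_2)^2 \, P(d\omega) = \sigma^2 + (\mu_1 - \mu_2)^2$.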

The negative of $I(P;Q)$ is the conditional entropy (or relative entropy) of $P(d\omega)$ with respect to $Q(d\omega)$; see Entropy.

Various notions of (asymmetric and symmetric) information distances are based on the Kullback–Leibler information.
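
For example (an illustrative remark, not part of the original article), the Kullback–Leibler information is not symmetric in its arguments: for $p = (1/2, 1/2)$ and $q = (1/4, 3/4)$ as above,

$$ I(q;p) = \frac{1}{4} \log \frac{1/4}{1/2} + \frac{3}{4} \log \frac{3/4}{1/2} \approx 0.131 \ne 0.144 \approx I(p;q), $$

while a symmetric quantity is obtained, for instance, from the divergence $J(p,q) = I(p;q) + I(q;p)$.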

The quantity $I(p;q)$ is also called the informational divergence (see Huffman code).
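
In coding terms (an illustrative remark not spelled out in the original article), if symbols drawn from $p$ are encoded with ideal codeword lengths $-\log q_i$ matched to $q$, the expected length exceeds the entropy of $p$ by exactly the informational divergence:

$$ \sum_{i} p_i ( - \log q_i ) - \sum_{i} p_i ( - \log p_i ) = I(p;q). $$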

See also Information distance; Kullback–Leibler-type distance measures.
