Determinant

2020 Mathematics Subject Classification: Primary: 15-XX [MSN][ZBL]


The determinant of a square matrix $A = (a_{ij})$ of order $n$ over a commutative associative ring $R$ with unit 1 is the element of $R$ equal to the sum of all terms of the form

$$(-1)^k a_{1i_1}\cdots a_{ni_n},$$ where $i_1,\dots,i_n$ is a permutation of the numbers $1,\dots,n$ and $k$ is the number of inversions of the permutation $1\mapsto i_1,\dots,n\mapsto i_n$, so that $(-1)^k$ is the signature of this permutation. The determinant of the matrix

$$A=\begin{pmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn} \end{pmatrix}$$ is written as

$$\begin{vmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn} \end{vmatrix} \textrm{ or } \det A.$$ The determinant of the matrix $A$ contains $n!$ terms. When $n=1$, $\det A = a_{11}$; when $n=2$, $\det A = a_{11}a_{22} - a_{21}a_{12}$. The most important instances in practice are those in which $R$ is a field (especially a number field), a ring of functions (especially a ring of polynomials) or a ring of integers.
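
For computation, this definition translates directly into code. Below is a minimal Python sketch, assuming the matrix is given as a list of rows whose entries support + and * (integers, fractions, polynomial objects); the function name `det` is an illustrative choice, not part of the article:

```python
from itertools import permutations

def det(a):
    """Permutation-sum determinant: the sum of (-1)^k a[0][i_1] ... a[n-1][i_n]
    over all permutations, where k is the number of inversions."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        # k = number of inversions of the permutation p
        k = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = 1
        for row in range(n):
            term *= a[row][p[row]]
        total += (-1) ** k * term
    return total

assert det([[7]]) == 7                      # n = 1: a_11
assert det([[1, 2], [3, 4]]) == 1*4 - 3*2   # n = 2: a_11 a_22 - a_21 a_12
```

The loop makes the $n!$ growth explicit; in practice one uses Gaussian elimination or the cofactor formulas discussed below.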

From now on, $R$ is a commutative associative ring with 1, $\def\Mn{\textrm{M}_n(R)}\Mn$ is the set of all square matrices of order $n$ over $R$ and $E_n$ is the identity matrix over $R$. Let $A\in\Mn$, while $a_1,\dots,a_n$ are the rows of the matrix $A$. (All that is said from here on is equally true for the columns of $A$.) The determinant of $A$ can be considered as a function of its rows:

$$\det A = D(a_1,\dots,a_n).$$ The mapping

$$d:\Mn\to R\quad(A\mapsto \det A)$$ is subject to the following three conditions:

1) $d$ is a linear function of any row of $A$: $$\def\l{\lambda}\def\m{\mu}D(a_1,\dots,\l a_i+\m b_i,\dots,a_n) = \l D(a_1,\dots,a_i,\dots,a_n) + \m D(a_1,\dots,b_i,\dots,a_n),$$ where $\l,\m\in R$;

2) if the matrix $B$ is obtained from $A$ by replacing a row $a_i$ by a row $a_i+a_j$, $i\ne j$, then $d(B)=d(A)$;

3) $d(E_n) = 1$.

Conditions 1)–3) uniquely define $d$, i.e. if a mapping $h:\Mn\to R$ satisfies conditions 1)–3), then $h(A) = \det(A)$. An axiomatic construction of the theory of determinants is obtained in this way.

Let a mapping $f:\Mn\to R$ satisfy the condition:

$1'$) if $B$ is obtained from $A$ by multiplying one row by $\l\in R$, then $f(B)=\l f(A)$. Clearly 1) implies $1'$). If $R$ is a field, the conditions 1)–3) prove to be equivalent to the conditions $1'$), 2), 3).

The determinant of a diagonal matrix is equal to the product of its diagonal entries. The surjectivity of the mapping $d:\Mn\to R$ follows from this. The determinant of a triangular matrix is also equal to the product of its diagonal entries. For a matrix

$$A=\begin{pmatrix}B&0\\D&C\end{pmatrix}\;,$$ where $B$ and $C$ are square matrices,

$$\det A = \det B \det C.$$ It follows from the properties of transposition that $\det A^t = \det A$, where ${}^t$ denotes transposition. If the matrix $A$ has two identical rows, its determinant equals zero; if two rows of a matrix $A$ change places, then its determinant changes its sign;

$$D(a_1,\dots,a_i+\l a_j,\dots,a_n) = D(a_1,\dots,a_i,\dots,a_n)$$ when $i\ne j$, $\l\in R$; for $A$ and $B$ from $\Mn$,

$$\det (AB) = (\det A)(\det B).$$ Thus, $d$ is an epimorphism of the multiplicative semi-groups $\Mn$ and $R$.
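
These properties are easy to spot-check numerically. A small sketch (the permutation-sum `det` of the first example is repeated so the block runs on its own; the concrete matrices are arbitrary):

```python
from itertools import permutations
from math import prod

def det(a):  # permutation-sum determinant, as in the first sketch
    n = len(a)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(a[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

B, C = [[2, 3], [1, 4]], [[5, 6], [7, 8]]
A = [[2, 3, 0, 0],            # A = [[B, 0], [D, C]] in block form
     [1, 4, 0, 0],
     [9, 8, 5, 6],
     [7, 6, 7, 8]]
assert det(A) == det(B) * det(C)                  # block-triangular rule
assert det([list(r) for r in zip(*A)]) == det(A)  # det A^t = det A
assert det([[1, 2], [1, 2]]) == 0                 # two identical rows

X, Y = [[1, 2], [3, 4]], [[0, 1], [5, 2]]
XY = [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert det(XY) == det(X) * det(Y)                 # multiplicativity
```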

Let $m\le n$, let $A=(a_{ij})$ be an $(m\times n)$-matrix, let $B=(b_{ij})$ be an $(n\times m)$-matrix over $R$, and let $C=AB$. Then the Binet–Cauchy formula holds:

$$\det C = \sum_{1\le j_1<\cdots<j_m\le n} \begin{vmatrix} a_{1j_1}&\dots&a_{1j_m}\\ \vdots&\ddots&\vdots\\ a_{mj_1}&\dots&a_{mj_m} \end{vmatrix}\; \begin{vmatrix} b_{j_11}&\dots&b_{j_1m}\\ \vdots&\ddots&\vdots\\ b_{j_m1}&\dots&b_{j_mm} \end{vmatrix}.$$
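
A quick numerical check of the Binet–Cauchy formula for $m=2$, $n=3$, again with the permutation-sum `det`; the matrices $A$ and $B$ are arbitrary examples:

```python
from itertools import combinations, permutations
from math import prod

def det(a):  # permutation-sum determinant, as in the first sketch
    n = len(a)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(a[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

m, n = 2, 3
A = [[1, 2, 3], [4, 5, 6]]                 # (m x n)-matrix
B = [[7, 1], [0, 2], [5, 3]]               # (n x m)-matrix
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(m)] for i in range(m)]

rhs = sum(det([[A[i][j] for j in js] for i in range(m)])    # minor of A in columns js
          * det([[B[j][i] for i in range(m)] for j in js])  # minor of B in rows js
          for js in combinations(range(n), m))              # all j_1 < ... < j_m
assert det(C) == rhs
```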

Let $A=(a_{ij})\in \Mn$, and let $A_{ij}$ be the cofactor of the entry $a_{ij}$. The following formulas are then true:

$$\begin{equation}\left.\begin{aligned} \sum_{j=1}^n a_{ij}A_{kj} &= \delta_{ik} \det A\\ \sum_{i=1}^n a_{ij}A_{ik} &= \delta_{jk} \det A \end{aligned}\right\}\label{1}\end{equation}$$ where $\delta_{ij}$ is the Kronecker symbol. Determinants are often calculated by development according to the elements of a row or column, i.e. by the formulas (1), by the Laplace theorem (see Cofactor) and by transformations of $A$ which do not alter the determinant.
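
Development according to the first row can be written as a recursive sketch; `det_laplace` and `cof` are illustrative names, and the last two assertions check instances of the formulas (1) with $k=i$ and $k\ne i$:

```python
def det_laplace(a):
    """Determinant by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det_laplace([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

def cof(a, i, j):
    """Cofactor A_ij: signed determinant of the minor at (i, j)."""
    minor = [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]
    return (-1) ** (i + j) * det_laplace(minor)

A = [[2, 0, 1], [1, 3, 2], [0, 1, 4]]
assert det_laplace(A) == 21
assert sum(A[0][j] * cof(A, 0, j) for j in range(3)) == 21  # k = i: det A
assert sum(A[0][j] * cof(A, 1, j) for j in range(3)) == 0   # k != i: zero
```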

For a matrix $A$ from $\Mn$, the inverse matrix $A^{-1}$ in $\Mn$ exists if and only if there is an element in $R$ which is the inverse of $\det A$. Consequently, the mapping

$$\def\GL{\textrm{GL}}\GL(n,R)\to R^*\quad (A\mapsto \det A),$$ where $\GL(n,R)$ is the group of all invertible matrices in $\Mn$ (i.e. the general linear group) and where $R^*$ is the group of invertible elements in $R$, is an epimorphism of these groups.

A square matrix over a field is invertible if and only if its determinant is not zero. The $n$-dimensional vectors $a_1,\dots,a_n$ over a field $F$ are linearly dependent if and only if

$$D(a_1,\dots,a_n) = 0.$$ The determinant of a matrix $A$ of order $n>1$ over a field is equal to 1 if and only if $A$ is the product of elementary matrices of the form

$$x_{ij}(\l) = E_n+\l e_{ij},$$ where $i\ne j$, while $e_{ij}$ is the matrix whose only non-zero entry is a 1 in position $(i,j)$.

The theory of determinants was developed in relation to the problem of solving systems of linear equations:

$$\begin{equation}\left.\begin{aligned} a_{11}x_1+\cdots+a_{1n}x_n &=b_1\\ \cdots &\\ a_{n1}x_1+\cdots+a_{nn}x_n &=b_n\\ \end{aligned}\right\}\label{2}\end{equation}$$ where $a_{ij}, b_j$ are elements of the field $R$. If $\det A\ne 0$, where $A=(a_{ij})$ is the matrix of the system (2), then this system has a unique solution, which can be calculated by Cramer's formulas (see Cramer rule). When the system (2) is given over a ring $R$ and $\det A$ is invertible in $R$, the system also has a unique solution, also given by Cramer's formulas.
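
Over the rationals, Cramer's formulas can be transcribed directly. A sketch with exact arithmetic (`cramer` is an illustrative name and the $2\times 2$ system an arbitrary example; `det` is the permutation-sum determinant of the first sketch):

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det(a):  # permutation-sum determinant, as in the first sketch
    n = len(a)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(a[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

def cramer(A, b):
    """Solve Ax = b over the rationals; assumes det A is non-zero."""
    n, d = len(A), det(A)
    # x_j = det(A with column j replaced by b) / det A
    return [Fraction(det([row[:j] + [b[i]] + row[j + 1:]
                          for i, row in enumerate(A)]), d)
            for j in range(n)]

A, b = [[2, 1], [1, 3]], [3, 5]
x = cramer(A, b)                          # [Fraction(4, 5), Fraction(7, 5)]
assert all(sum(A[i][j] * x[j] for j in range(2)) == b[i] for i in range(2))
```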

A theory of determinants has also been constructed for matrices over non-commutative associative skew-fields. The determinant of a matrix over a skew-field $k$ (the Dieudonné determinant) is introduced in the following way. The skew-field $k$ is considered as a semi-group, and its commutative homomorphic image $\bar k$ is formed. $k$ consists of a group, $k^*$, with added zero 0, while the role of $\bar k$ is taken by the group $\overline{k^*}$ with added zero $\bar 0$, where $\overline{k^*}$ is the quotient group of $k^*$ by the commutator subgroup. The epimorphism $k\to \bar k$, $\l \mapsto \bar\l$, is given by the canonical epimorphism of groups $k^*\to \overline{k^*}$ and by the condition $0\to \bar0$. Clearly, $\bar 1$ is the unit of the semi-group $\bar k$.

The theory of determinants over a skew-field is based on the following theorem: There exists a unique mapping

$$\delta:\textrm{M}_n(k)\to \bar k$$ satisfying the following three axioms:

I) if the matrix $B$ is obtained from the matrix $A$ by multiplying one row from the left by $\l \in k$, then $\delta(B) = \bar\l \delta(A)$;

II) if $B$ is obtained from $A$ by replacing a row $a_i$ by a row $a_i+a_j$, where $i\ne j$, then $\delta(B)=\delta(A)$;

III) $\delta(E_n)=\bar1$.

The element $\delta(A)$ is called the determinant of $A$ and is written as $\det A$. For a commutative skew-field, axioms I), II) and III) coincide with conditions $1'$), 2) and 3), respectively, and, consequently, in this instance ordinary determinants over a field are obtained. If $A=\textrm{diag}(a_{11},\dots,a_{nn})$, then $\det A = \overline{a_{11}\cdots a_{nn}}$; thus, the mapping $\delta:\textrm{M}_n(k)\to \bar k$ is surjective. A matrix $A$ from $\textrm{M}_n(k)$ is invertible if and only if $\det A \ne 0$. The equation $\det AB = (\det A)(\det B)$ holds. As in the commutative case, $\det A$ will not change if a row $a_i$ of $A$ is replaced by a row $a_i+\l a_j$, where $i\ne j$, $\l\in k$. If $n>1$, $\det A = \bar1$ if and only if $A$ is the product of elementary matrices of the form $x_{ij}(\l) = E_n+\l e_{ij}$, $i\ne j$, $\l\in k$. If $a\ne 0$, then

$$\begin{vmatrix}a&b\\c&d\end{vmatrix} = \overline{ad-aca^{-1}b},\quad \begin{vmatrix}0&b\\c&d\end{vmatrix} = -\overline{cb}. $$ Unlike the commutative case, $\det A^t$ does not have to coincide with $\det A$. For example, for the matrix

$$A=\begin{pmatrix}i&j\\k&-1\end{pmatrix}$$ over the skew-field of quaternions (cf. Quaternion), $\det A = -\overline{2i}$, while $\det A^t = \bar0$.
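
This example can be verified by computing the representative $ad - aca^{-1}b$, whose class in $\bar k$ is the Dieudonné determinant. A sketch encoding quaternions as 4-tuples $(w,x,y,z) = w + xi + yj + zk$; the encoding and helper names are illustrative choices:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qinv(q):
    """Inverse: conjugate divided by the norm."""
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

def rep(a, b, c, d):
    """Representative a*d - a*c*a^{-1}*b of the 2x2 Dieudonne determinant (a != 0)."""
    s, t = qmul(a, d), qmul(qmul(qmul(a, c), qinv(a)), b)
    return tuple(x - y for x, y in zip(s, t))

assert rep(i, j, k, minus_one) == (0, -2, 0, 0)   # det A: class of -2i, non-zero
assert rep(i, k, j, minus_one) == (0, 0, 0, 0)    # det A^t: the zero class
```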

Infinite determinants, i.e. determinants of infinite matrices, are defined as the limit towards which the determinants of finite submatrices converge as their order grows without bound. If this limit exists, the determinant is called convergent; otherwise it is called divergent.

The concept of a determinant goes back to G. Leibniz (1678). H. Cramer was the first to publish on the subject (1750). The theory of determinants is based on the work of A. Vandermonde, P. Laplace, A.L. Cauchy and C.G.J. Jacobi. The term "determinant" was coined by C.F. Gauss (1801). The modern meaning was introduced by A. Cayley (1841).

References

[Ar] E. Artin, "Geometric algebra", Interscience (1957) MR0082463 Zbl 0077.02101
[Bo] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra", 1, Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French) MR0354207
[Di] J.A. Dieudonné, "La géométrie des groups classiques", Springer (1955)
[EfRo] N.V. Efimov, E.R. Rozendorn, "Linear algebra and multi-dimensional geometry", Moscow (1970) (In Russian)
[HoKu] K. Hoffman, R. Kunze, "Linear algebra", Prentice-Hall (1961) MR0125849
[Ka] V.F. Kagan, "Foundations of the theory of determinants", Odessa (1922) (In Russian)
[Ko] A.I. Kostrikin, "Introduction to algebra", Springer (1982) (Translated from Russian) MR0661256 Zbl 0482.00001
[Ko2] M. Koecher, "Lineare Algebra und analytische Geometrie", Springer (1983) MR0725166 Zbl 0517.15001
[Ku] A.G. Kurosh, "Higher algebra", MIR (1972) (Translated from Russian) Zbl 0237.13001
[La] S. Lang, "Linear algebra", Addison-Wesley (1970) Zbl 0216.06001
[TyFe] R.I. Tyshkevich, A.S. Fedenko, "Linear algebra and analytic geometry", Minsk (1976) (In Russian)
How to Cite This Entry:
Determinant. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Determinant&oldid=12692
This article was adapted from an original article by D.A. Suprunenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article