Vector algebra

From Encyclopedia of Mathematics
 
{{TEX|done}}
 
{{MSC|15A72}}
 
A branch of [[Vector calculus|vector calculus]] dealing with the simplest operations involving (free) vectors (cf. [[Vector|Vector]]). These include linear operations, viz. addition of vectors and multiplication of a vector by a number.
 
  
The sum $\mathbf a + \mathbf b$ of two vectors $\mathbf a$ and $\mathbf b$ is the vector drawn from the origin of $\mathbf a$ to the end of $\mathbf b$ if the end of $\mathbf a$ and the origin of $\mathbf b$ coincide. The operation of vector addition has the following properties:
  
$\mathbf a + \mathbf b = \mathbf b + \mathbf a$ (commutativity);

$( \mathbf a + \mathbf b ) + \mathbf c = \mathbf a + ( \mathbf b + \mathbf c )$ (associativity);

$\mathbf a + \mathbf 0 = \mathbf a$ (existence of a zero-element);

$\mathbf a + (- \mathbf a ) = \mathbf 0$ (existence of an inverse element).
  
Here $\mathbf 0$ is the zero vector, and $- \mathbf a$ is the vector opposite to the vector $\mathbf a$ (its inverse). The difference $\mathbf a - \mathbf b$ of two vectors $\mathbf a$ and $\mathbf b$ is the vector $\mathbf x$ for which $\mathbf x + \mathbf b = \mathbf a$.
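The defining property of the difference can be checked numerically. A minimal sketch, with plain Python tuples standing in for coordinate triples (this representation is an illustration, not part of the article):

```python
# Free vectors modelled as coordinate triples (illustrative only).
def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def neg(u):
    return tuple(-x for x in u)

a = (3.0, -1.0, 2.0)
b = (1.0, 4.0, -2.0)

# The difference a - b is the vector x with x + b = a, i.e. x = a + (-b).
x = add(a, neg(b))
assert add(x, b) == a                       # x + b = a
assert add(a, neg(a)) == (0.0, 0.0, 0.0)    # a + (-a) = 0
```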
  
The product $\lambda \mathbf a$ of a vector $\mathbf a$ by a number $\lambda$ is, if $\lambda \neq 0$ and $\mathbf a \neq \mathbf 0$, the vector whose modulus equals $| \lambda | | \mathbf a |$ and whose direction is that of $\mathbf a$ if $\lambda > 0$, and that of the inverse of $\mathbf a$ if $\lambda < 0$. If $\lambda = 0$ or $\mathbf a = \mathbf 0$ (or both), then $\lambda \mathbf a = \mathbf 0$. The operation of multiplication of a vector by a number has the properties:
  
$\lambda ( \mathbf a + \mathbf b ) = \lambda \mathbf a + \lambda \mathbf b$ (distributivity with respect to vector addition);

$( \lambda + \mu ) \mathbf a = \lambda \mathbf a + \mu \mathbf a$ (distributivity with respect to addition of numbers);

$\lambda ( \mu \mathbf a ) = ( \lambda \mu ) \mathbf a$ (associativity);

$1 \cdot \mathbf a = \mathbf a$ (multiplication by one).
  
 
The set of all free vectors of a space with the induced operations of addition and multiplication by a number forms a [[Vector space|vector space]] (a linear space). Below "vector" means free vector, or equivalently, element of a given vector space.
  
An important concept in vector algebra is that of linear dependence of vectors. Vectors $\mathbf a , \mathbf b \dots \mathbf c$ are said to be linearly dependent if there exist numbers $\alpha , \beta \dots \gamma$, at least one of which is non-zero, such that the equation
  
$$ \tag{1}
\alpha \mathbf a + \beta \mathbf b + \dots + \gamma \mathbf c = \mathbf 0
$$
  
is valid. For two vectors to be linearly dependent it is necessary and sufficient that they are collinear; for three vectors to be linearly dependent it is necessary and sufficient that they are coplanar. If one of the vectors $\mathbf a , \mathbf b \dots \mathbf c$ is zero, the vectors are linearly dependent. The vectors $\mathbf a , \mathbf b \dots \mathbf c$ are said to be linearly independent if it follows from (1) that all the numbers $\alpha , \beta \dots \gamma$ are equal to zero. At most two, respectively three, linearly independent vectors exist in a plane, respectively in three-dimensional space.
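For two vectors, linear dependence coincides with collinearity, which in coordinates means that all $2 \times 2$ minors of the coordinate pair vanish. A small illustrative helper (the function name is ours, not from the article):

```python
# Two vectors are linearly dependent exactly when they are collinear,
# i.e. all three 2x2 minors of their coordinate rows vanish.
def linearly_dependent_2(a, b):
    (a1, a2, a3), (b1, b2, b3) = a, b
    return (a1 * b2 - a2 * b1 == 0 and
            a1 * b3 - a3 * b1 == 0 and
            a2 * b3 - a3 * b2 == 0)

assert linearly_dependent_2((1, 2, 3), (2, 4, 6))      # b = 2a
assert linearly_dependent_2((0, 0, 0), (5, 1, -2))     # a zero vector always depends
assert not linearly_dependent_2((1, 0, 0), (0, 1, 0))  # independent
```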
  
A set of three (two) linearly independent vectors $\mathbf e_{1} , \mathbf e_{2} , \mathbf e_{3}$ of three-dimensional space (a plane), taken in a certain order, forms a basis. Any vector $\mathbf a$ can be uniquely represented as the sum
  
$$
\mathbf a = a_{1} \mathbf e_{1} + a_{2} \mathbf e_{2} + a_{3} \mathbf e_{3} .
$$
  
The numbers $a_{1} , a_{2} , a_{3}$ are said to be the coordinates (components) of $\mathbf a$ in the given basis; this is written as $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$.
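The coordinates of $\mathbf a$ in a basis can be found by Cramer's rule, since they solve a linear system whose matrix is built from the basis vectors. A sketch under the assumption that the basis vectors are themselves given in some ambient coordinates (all names here are hypothetical helpers):

```python
# Determinant of the 3x3 matrix with rows u, v, w.
def det3(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

# Coordinates (a1, a2, a3) with a = a1*e1 + a2*e2 + a3*e3, via Cramer's rule.
def coordinates(a, e1, e2, e3):
    d = det3(e1, e2, e3)          # non-zero iff e1, e2, e3 form a basis
    return (det3(a, e2, e3) / d,
            det3(e1, a, e3) / d,
            det3(e1, e2, a) / d)

e1, e2, e3 = (1, 0, 0), (1, 1, 0), (1, 1, 1)
a1, a2, a3 = coordinates((2, 3, 1), e1, e2, e3)   # -1.0, 2.0, 1.0
```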
  
Two vectors $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$ and $\mathbf b = \{ b_{1} , b_{2} , b_{3} \}$ are equal if and only if their coordinates in the same basis are equal. A necessary and sufficient condition for two vectors $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$ and $\mathbf b = \{ b_{1} , b_{2} , b_{3} \}$, $\mathbf b \neq \mathbf 0$, to be collinear is the proportionality of their corresponding coordinates: $a_{1} = \lambda b_{1}$, $a_{2} = \lambda b_{2}$, $a_{3} = \lambda b_{3}$. A necessary and sufficient condition for three vectors $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$, $\mathbf b = \{ b_{1} , b_{2} , b_{3} \}$ and $\mathbf c = \{ c_{1} , c_{2} , c_{3} \}$ to be coplanar is the equality
  
$$
\left |
\begin{array}{ccc}
a_{1} & a_{2} & a_{3} \\
b_{1} & b_{2} & b_{3} \\
c_{1} & c_{2} & c_{3}
\end{array}
\right | = 0 .
$$
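The coplanarity criterion translates directly into code: expand the $3 \times 3$ determinant of the coordinate rows and test it against zero. A minimal sketch (exact integer coordinates assumed, to keep the zero test exact):

```python
# Three vectors are coplanar iff the determinant of their coordinate rows vanishes.
def coplanar(a, b, c):
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = a, b, c
    det = (a1 * (b2 * c3 - b3 * c2)
         - a2 * (b1 * c3 - b3 * c1)
         + a3 * (b1 * c2 - b2 * c1))
    return det == 0

assert coplanar((1, 0, 0), (0, 1, 0), (1, 1, 0))       # all lie in one plane
assert coplanar((1, 2, 3), (4, 5, 6), (5, 7, 9))       # c = a + b is dependent
assert not coplanar((1, 0, 0), (0, 1, 0), (0, 0, 1))
```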
  
Linear operations on vectors can be reduced to linear operations on coordinates. The coordinates of the sum of two vectors $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$ and $\mathbf b = \{ b_{1} , b_{2} , b_{3} \}$ are equal to the sums of the corresponding coordinates: $\mathbf a + \mathbf b = \{ a_{1} + b_{1} , a_{2} + b_{2} , a_{3} + b_{3} \}$. The coordinates of the product of the vector $\mathbf a$ by a number $\lambda$ are equal to the products of the coordinates of $\mathbf a$ by $\lambda$: $\lambda \mathbf a = \{ \lambda a_{1} , \lambda a_{2} , \lambda a_{3} \}$.
  
The scalar product (or [[Inner product|inner product]]) $( \mathbf a , \mathbf b )$ of two non-zero vectors $\mathbf a$ and $\mathbf b$ is the product of their moduli by the cosine of the angle $\phi$ between them:
  
$$
( \mathbf a , \mathbf b ) = | \mathbf a | \cdot | \mathbf b | \cos \phi .
$$
  
In this context, $\phi$ is understood as the angle between the vectors that does not exceed $\pi$. If $\mathbf a = \mathbf 0$ or $\mathbf b = \mathbf 0$, their scalar product is defined to be zero. The scalar product has the following properties:
  
$( \mathbf a , \mathbf b ) = ( \mathbf b , \mathbf a )$ (commutativity);

$( \mathbf a , \mathbf b + \mathbf c ) = ( \mathbf a , \mathbf b ) + ( \mathbf a , \mathbf c )$ (distributivity with respect to vector addition);

$\lambda ( \mathbf a , \mathbf b ) = ( \lambda \mathbf a , \mathbf b ) = ( \mathbf a , \lambda \mathbf b )$ (associativity with respect to multiplication by a number);

$( \mathbf a , \mathbf b ) = 0$ only if $\mathbf a = \mathbf 0$ and/or $\mathbf b = \mathbf 0$, or $\mathbf a \perp \mathbf b$.
  
Scalar products of vectors are often calculated using orthogonal Cartesian coordinates, i.e. vector coordinates in a basis consisting of mutually perpendicular unit vectors $\mathbf i , \mathbf j , \mathbf k$ (an orthonormal basis). The scalar product of two vectors
  
$$
\mathbf a = \{ a_{1} , a_{2} , a_{3} \} \ \ \textrm{ and } \ \ \mathbf b = \{ b_{1} , b_{2} , b_{3} \} ,
$$
  
 
defined in an orthonormal basis, is calculated by the formula
  
$$
( \mathbf a , \mathbf b ) = a_{1} b_{1} + a_{2} b_{2} + a_{3} b_{3} .
$$
The cosine of the angle $\phi$ between two non-zero vectors $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$ and $\mathbf b = \{ b_{1} , b_{2} , b_{3} \}$ may be calculated by the formula

$$
\cos \phi = \frac{( \mathbf a , \mathbf b )}{| \mathbf a | \cdot | \mathbf b |} ,
$$

where $| \mathbf a | = \sqrt{a_{1}^{2} + a_{2}^{2} + a_{3}^{2}}$ and $| \mathbf b | = \sqrt{b_{1}^{2} + b_{2}^{2} + b_{3}^{2}}$.

The cosines of the angles formed by the vector $\mathbf a = \{ a_{1} , a_{2} , a_{3} \}$ with the basis vectors $\mathbf i , \mathbf j , \mathbf k$ are said to be the direction cosines of $\mathbf a$:

$$
\cos \alpha = \frac{a_{1}}{\sqrt{a_{1}^{2} + a_{2}^{2} + a_{3}^{2}}} , \quad
\cos \beta = \frac{a_{2}}{\sqrt{a_{1}^{2} + a_{2}^{2} + a_{3}^{2}}} ,
$$

$$
\cos \gamma = \frac{a_{3}}{\sqrt{a_{1}^{2} + a_{2}^{2} + a_{3}^{2}}} .
$$
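The coordinate formulas for the scalar product, the angle, and the direction cosines transcribe directly into code. A minimal sketch in an orthonormal basis (helper names are ours):

```python
import math

# Scalar product and modulus in an orthonormal basis.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def modulus(a):
    return math.sqrt(dot(a, a))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
cos_phi = dot(a, b) / (modulus(a) * modulus(b))
phi = math.acos(cos_phi)                       # pi/4 for these two vectors

# Direction cosines of a, and the identity cos^2 a + cos^2 b + cos^2 c = 1.
cosines = tuple(x / modulus(a) for x in a)
assert abs(sum(c * c for c in cosines) - 1.0) < 1e-12
```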
  
 
The direction cosines have the following property:
  
$$
\cos^{2} \alpha + \cos^{2} \beta + \cos^{2} \gamma = 1 .
$$
  
A straight line with a unit vector $\mathbf e$ chosen on it, which specifies the positive direction on the straight line, is said to be an axis. The projection $Pr_{\mathbf e} ( \mathbf a )$ of a vector $\mathbf a$ onto the axis is the directed segment on the axis whose algebraic value is equal to the scalar product of $\mathbf a$ and $\mathbf e$. Projections are additive:
  
$$
Pr_{\mathbf e} ( \mathbf a + \mathbf b ) = Pr_{\mathbf e} \mathbf a + Pr_{\mathbf e} \mathbf b ,
$$
  
 
and homogeneous:
  
$$
\lambda Pr_{\mathbf e} ( \mathbf a ) = Pr_{\mathbf e} ( \lambda \mathbf a ) .
$$
  
 
Each coordinate of a vector in an orthonormal basis is equal to the projection of this vector on the axis defined by the respective basis vector.
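Since the algebraic value of the projection is just the scalar product with the unit vector $\mathbf e$, additivity of projections follows from distributivity of the scalar product. A short numerical check (illustrative helpers only):

```python
import math

# Algebraic value of the projection of v onto the axis with unit vector e.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

e = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0), 0.0)   # a unit vector
a, b = (3.0, 1.0, 5.0), (-1.0, 2.0, 0.0)

proj = lambda v: dot(v, e)
a_plus_b = tuple(x + y for x, y in zip(a, b))
# Pr_e(a + b) = Pr_e a + Pr_e b, up to floating-point rounding:
assert abs(proj(a_plus_b) - (proj(a) + proj(b))) < 1e-12
```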
 
Figure: v096350a
  
Left and right vector triples are distinguished in space. A triple of non-coplanar vectors $\mathbf a , \mathbf b , \mathbf c$ is said to be right if, to the observer at the common vector origin, the movement $\mathbf a , \mathbf b , \mathbf c$, in that order, appears to be clockwise. If it appears to be counterclockwise, $\mathbf a , \mathbf b , \mathbf c$ is a left triple. The direction in space of the right (left) vector triples may be represented by stretching out the thumb, index finger and middle finger of the right (left) hand, as shown in the figure. All right (left) vector triples are said to be identically directed. In what follows, the triple of basis vectors $\mathbf i , \mathbf j , \mathbf k$ will be assumed to be a right triple.
  
Let the direction of positive rotation (from $\mathbf i$ to $\mathbf j$) be given on a plane. Then the pseudo-scalar product $\mathbf a \lor \mathbf b$ of two non-zero vectors $\mathbf a$ and $\mathbf b$ is defined as the product of their lengths (moduli) by the sine of the angle $\phi$ of positive rotation from $\mathbf a$ to $\mathbf b$:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350119.png" /></td> </tr></table>
+
$$
 +
\mathbf a \lor \mathbf b  = \
 +
| \mathbf a | \cdot | \mathbf b |  \sin  \phi .
 +
$$
  
By definition, if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350120.png" /> or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350121.png" /> is zero, their pseudo-scalar product is set equal to zero. The pseudo-scalar product has the following properties:
+
By definition, if $  a $
 +
or $  b $
 +
is zero, their pseudo-scalar product is set equal to zero. The pseudo-scalar product has the following properties:
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350122.png" /> (anti-commutativity);
+
$  \mathbf a \lor \mathbf b = - \mathbf b \lor \mathbf a $(
 +
anti-commutativity);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350123.png" /> (distributivity with respect to vector addition);
+
$  \mathbf a \lor ( \mathbf b + \mathbf c ) = \mathbf a \lor \mathbf b + \mathbf a \lor \mathbf c $(
 +
distributivity with respect to vector addition);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350124.png" /> (associativity with respect to multiplication by a number);
+
$  \lambda ( \mathbf a \lor \mathbf b ) = \lambda \mathbf a \lor \mathbf b $(
 +
associativity with respect to multiplication by a number);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350125.png" /> only if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350126.png" /> and/or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350127.png" />, or if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350128.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350129.png" /> are collinear.
+
$  \mathbf a \lor \mathbf b = 0 $
 +
only if $  \mathbf a = \mathbf 0 $
 +
and/or $  \mathbf b = \mathbf 0 $,  
 +
or if $  \mathbf a $
 +
and $  \mathbf b $
 +
are collinear.
  
If, in an orthonormal basis, the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350130.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350131.png" /> have coordinates <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350132.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350133.png" />, then
+
If, in an orthonormal basis, the vectors $  \mathbf a $
 +
and $  \mathbf b $
 +
have coordinates $  \{ a _ {1} , a _ {2} \} $
 +
and $  \{ b _ {1} , b _ {2} \} $,  
 +
then
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350134.png" /></td> </tr></table>
+
$$
 +
\mathbf a \lor \mathbf b  = a _ {1} b _ {2} - a _ {2} b _ {1} .
 +
$$
  
The vector product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350135.png" /> of two non-zero non-collinear vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350136.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350137.png" /> is the vector whose modulus is equal to the product of the moduli by the sine of the angle <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350138.png" /> between them, which is perpendicular to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350139.png" /> and to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350140.png" /> and is so directed that the vector triple <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350141.png" /> is a right triple:
+
The vector product $  [ \mathbf a , \mathbf b ] $
 +
of two non-zero non-collinear vectors $  \mathbf a $
 +
and $  \mathbf b $
 +
is the vector whose modulus is equal to the product of the moduli by the sine of the angle $  \phi $
 +
between them, which is perpendicular to $  \mathbf a $
 +
and to $  \mathbf b $
 +
and is so directed that the vector triple $  \mathbf a , \mathbf b , [ \mathbf a , \mathbf b ] $
 +
is a right triple:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350142.png" /></td> </tr></table>
+
$$
 +
| [ \mathbf a , \mathbf b ] |  = \
 +
| \mathbf a | \cdot | \mathbf b |  \sin  \phi .
 +
$$
  
This product is defined as zero if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350143.png" /> and/or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350144.png" />, or if the two vectors are collinear. The vector product has the following properties:
+
This product is defined as zero if $  \mathbf a = \mathbf 0 $
 +
and/or $  \mathbf b = \mathbf 0 $,  
 +
or if the two vectors are collinear. The vector product has the following properties:
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350145.png" /> (anti-commutativity);
+
$  [ \mathbf a , \mathbf b ] = -[ \mathbf b , \mathbf a ] $(
 +
anti-commutativity);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350146.png" /> (distributivity with respect to vector addition);
+
$  [ \mathbf a , \mathbf b + \mathbf c ] = [ \mathbf a , \mathbf b ]+[ \mathbf a , \mathbf c ] $(
 +
distributivity with respect to vector addition);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350147.png" /> (associativity with respect to multiplication by a number);
+
$  \lambda [ \mathbf a , \mathbf b ] = [ \lambda \mathbf a , \mathbf b ] = [ \mathbf a , \lambda \mathbf b ] $(
 +
associativity with respect to multiplication by a number);
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350148.png" /> only if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350149.png" /> and/or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350150.png" />, or if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350151.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350152.png" /> are collinear.
+
$  [ \mathbf a , \mathbf b ] = 0 $
 +
only if $  \mathbf a = \mathbf 0 $
 +
and/or $  \mathbf b = \mathbf 0 $,  
 +
or if $  \mathbf a $
 +
and $  \mathbf b $
 +
are collinear.
  
If the coordinates of two vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350153.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350154.png" /> in an orthonormal basis are <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350155.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350156.png" />, then
+
If the coordinates of two vectors $  \mathbf a $
 +
and $  \mathbf b $
 +
in an orthonormal basis are $  \{ a _ {1} , a _ {2} , a _ {3} \} $
 +
and $  \{ b _ {1} , b _ {2} , b _ {3} \} $,  
 +
then
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350157.png" /></td> </tr></table>
+
$$
 +
[ \mathbf a , \mathbf b ]  = \left \{
 +
\left |
  
The mixed product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350158.png" /> of three vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350159.png" /> is the scalar product of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350160.png" /> and the vector product of the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350161.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350162.png" />:
+
The mixed product $  ( \mathbf a , \mathbf b , \mathbf c ) $
 +
of three vectors $  \mathbf a , \mathbf b , \mathbf c $
 +
is the scalar product of $  \mathbf a $
 +
and the vector product of the vectors $  \mathbf b $
 +
and $  \mathbf c $:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350163.png" /></td> </tr></table>
+
$$
 +
( \mathbf a , \mathbf b , \mathbf c )  = \
 +
( \mathbf a , [ \mathbf b , \mathbf c ] ).
 +
$$
  
 
The mixed product has the following properties:
 
The mixed product has the following properties:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350164.png" /></td> </tr></table>
+
$$
 +
( \mathbf a , \mathbf b , \mathbf c )  = \
 +
( \mathbf b , \mathbf c , \mathbf a )  = \
 +
( \mathbf c , \mathbf a , \mathbf b )  = \
 +
- ( \mathbf b , \mathbf a , \mathbf c ) =
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350165.png" /></td> </tr></table>
+
$$
 +
= \
 +
-( \mathbf c , \mathbf b , \mathbf a )  = -( \mathbf a , \mathbf c , \mathbf b );
 +
$$
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350166.png" /> only if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350167.png" /> and/or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350168.png" /> and/or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350169.png" />, or if the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350170.png" /> are coplanar;
+
$  ( \mathbf a , \mathbf b , \mathbf c ) = 0 $
 +
only if $  \mathbf a = \mathbf 0 $
 +
and/or $  \mathbf b = \mathbf 0 $
 +
and/or $  \mathbf c = \mathbf 0 $,  
 +
or if the vectors $  \mathbf a , \mathbf b , \mathbf c $
 +
are coplanar;
  
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350171.png" /> if the vector triple <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350172.png" /> is a right triple; <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350173.png" /> if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350174.png" /> is a left triple.
+
$  ( \mathbf a , \mathbf b , \mathbf c ) > 0 $
 +
if the vector triple $  \mathbf a , \mathbf b , \mathbf c $
 +
is a right triple; $  ( \mathbf a , \mathbf b , \mathbf c ) < 0 $
 +
if $  \mathbf a , \mathbf b , \mathbf c $
 +
is a left triple.
  
The modulus of the mixed product is equal to the volume of the parallelepipedon constructed on the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350175.png" />. If, in an orthonormal basis, the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350176.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350177.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350178.png" /> have coordinates <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350179.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350180.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350181.png" />, then
+
The modulus of the mixed product is equal to the volume of the parallelepipedon constructed on the vectors $  \mathbf a , \mathbf b , \mathbf c $.  
 +
If, in an orthonormal basis, the vectors $  \mathbf a $,  
 +
$  \mathbf b $
 +
and $  \mathbf c $
 +
have coordinates $  \{ a _ {1} , a _ {2} , a _ {3} \} $,
 +
$  \{ b _ {1} , b _ {2} , b _ {3} \} $
 +
and $  \{ c _ {1} , c _ {2} , c _ {3} \} $,  
 +
then
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350182.png" /></td> </tr></table>
+
$$
 +
( \mathbf a , \mathbf b , \mathbf c )  = \left |
  
The double vector product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350183.png" /> of three vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350184.png" /> is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350185.png" />.
+
The double vector product $  [ \mathbf a , \mathbf b , \mathbf c ] $
 +
of three vectors $  \mathbf a , \mathbf b , \mathbf c $
 +
is $  [ \mathbf a , [ \mathbf b , \mathbf c ]] $.
  
 
The following formulas are used in calculating double vector products:
 
The following formulas are used in calculating double vector products:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350186.png" /></td> </tr></table>
+
$$
 +
[ \mathbf a , \mathbf b , \mathbf c ]  = \
 +
[ \mathbf b , ( \mathbf a , \mathbf c )] -
 +
[ \mathbf c , ( \mathbf a , \mathbf b )] ,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350187.png" /></td> </tr></table>
+
$$
 +
([ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d
 +
])  = ( \mathbf a , \mathbf c )( \mathbf b , \mathbf d
 +
)- ( \mathbf a , \mathbf d )( \mathbf b , \mathbf c ),
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350188.png" /></td> </tr></table>
+
$$
 +
[[ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d ]]
 +
= ( \mathbf a , \mathbf c , \mathbf d ) \mathbf b - ( \mathbf b , \mathbf c , \mathbf d ) \mathbf a =
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096350/v096350189.png" /></td> </tr></table>
+
$$
 +
= \
 +
( \mathbf a , \mathbf b , \mathbf d ) \mathbf c - ( \mathbf a , \mathbf b , \mathbf c ) \mathbf d .
 +
$$
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  P.S. Aleksandrov,  "Lectures on analytical geometry" , Moscow  (1968)  (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.V. Efimov,  "A short course of analytical geometry" , Moscow  (1967)  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.A. Il'in,  E.G. Poznyak,  "Analytical geometry" , MIR  (1984)  (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  A.V. Pogorelov,  "Analytical geometry" , Moscow  (1968)  (In Russian)</TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  P.S. Aleksandrov,  "Lectures on analytical geometry" , Moscow  (1968)  (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.V. Efimov,  "A short course of analytical geometry" , Moscow  (1967)  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.A. Il'in,  E.G. Poznyak,  "Analytical geometry" , MIR  (1984)  (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  A.V. Pogorelov,  "Analytical geometry" , Moscow  (1968)  (In Russian)</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
 
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  P.R. Halmos,  "Finite-dimensional vector spaces" , v. Nostrand  (1958)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  R. Capildeo,  "Vector algebra and mechanics" , Addison-Wesley  (1968)</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  P.R. Halmos,  "Finite-dimensional vector spaces" , v. Nostrand  (1958)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  R. Capildeo,  "Vector algebra and mechanics" , Addison-Wesley  (1968)</TD></TR></table>

Latest revision as of 08:28, 6 June 2020


2020 Mathematics Subject Classification: Primary: 15A72 [MSN][ZBL]

A branch of vector calculus dealing with the simplest operations involving (free) vectors (cf. Vector). These include linear operations, viz. addition of vectors and multiplication of a vector by a number.

The sum $ \mathbf a + \mathbf b $ of two vectors $ \mathbf a $ and $ \mathbf b $ is the vector drawn from the origin of $ \mathbf a $ to the end of $ \mathbf b $ if the end of $ \mathbf a $ and the origin of $ \mathbf b $ coincide. The operation of vector addition has the following properties:

$ \mathbf a + \mathbf b = \mathbf b + \mathbf a $ (commutativity);

$ ( \mathbf a + \mathbf b ) + \mathbf c = \mathbf a + ( \mathbf b + \mathbf c ) $ (associativity);

$ \mathbf a + \mathbf 0 = \mathbf a $ (existence of a zero-element);

$ \mathbf a + (- \mathbf a ) = \mathbf 0 $ (existence of an inverse element).

Here $ \mathbf 0 $ is the zero vector, and $ - \mathbf a $ is the vector opposite to the vector $ \mathbf a $ (its inverse). The difference $ \mathbf a - \mathbf b $ of two vectors $ \mathbf a $ and $ \mathbf b $ is the vector $ \mathbf x $ for which $ \mathbf x + \mathbf b = \mathbf a $.

The product $ \lambda \mathbf a $ of a vector $ \mathbf a $ by a number $ \lambda $ is, if $ \lambda \neq 0 $, $ \mathbf a \neq \mathbf 0 $, the vector whose modulus equals $ | \lambda | \cdot | \mathbf a | $ and whose direction is that of $ \mathbf a $ if $ \lambda > 0 $, and that of the inverse of $ \mathbf a $ if $ \lambda < 0 $. If $ \lambda = 0 $ and/or $ \mathbf a = \mathbf 0 $, then $ \lambda \mathbf a = \mathbf 0 $. The operation of multiplication of a vector by a number has the properties:

$ \lambda ( \mathbf a + \mathbf b ) = \lambda \mathbf a + \lambda \mathbf b $ (distributivity with respect to vector addition);

$ ( \lambda + \mu ) \mathbf a = \lambda \mathbf a + \mu \mathbf a $ (distributivity with respect to addition of numbers);

$ \lambda ( \mu \mathbf a ) = ( \lambda \mu ) \mathbf a $ (associativity);

$ 1 \cdot \mathbf a = \mathbf a $ (multiplication by one).

The set of all free vectors of a space with the induced operations of addition and multiplication by a number forms a vector space (a linear space). Below "vector" means free vector, or equivalently, element of a given vector space.

An important concept in vector algebra is that of linear dependence of vectors. Vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ are said to be linearly dependent if there exist numbers $ \alpha , \beta , \dots, \gamma $, at least one of which is non-zero, such that the equation

$$ \tag{1 } \alpha \mathbf a + \beta \mathbf b + \dots + \gamma \mathbf c = \mathbf 0 $$

is valid. For two vectors to be linearly dependent it is necessary and sufficient that they are collinear; for three vectors to be linearly dependent it is necessary and sufficient that they are coplanar. If one of the vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ is zero, the vectors are linearly dependent. The vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ are said to be linearly independent if it follows from (1) that the numbers $ \alpha , \beta , \dots, \gamma $ are all equal to zero. In a plane there exist at most two linearly independent vectors; in three-dimensional space, at most three.
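The coplanarity criterion for three vectors can be checked numerically. A minimal sketch (Python used purely for illustration; the helper names `det3` and `linearly_dependent` are mine, not from the article): three vectors of three-dimensional space are linearly dependent exactly when the determinant of their coordinate matrix vanishes.

```python
def det3(a, b, c):
    """3x3 determinant with rows a, b, c (coordinate triples)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def linearly_dependent(a, b, c, eps=1e-12):
    """Three vectors in 3-space are dependent iff they are coplanar."""
    return abs(det3(a, b, c)) < eps

# c = 2a + 3b lies in the plane spanned by a and b, so the triple is dependent.
a = (1.0, 0.0, 2.0)
b = (0.0, 1.0, -1.0)
c = tuple(2 * ai + 3 * bi for ai, bi in zip(a, b))
```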

A set of three (two) linearly independent vectors $ \mathbf e _ {1} , \mathbf e _ {2} , \mathbf e _ {3} $ of three-dimensional space (a plane), taken in a certain order, forms a basis. Any vector $ \mathbf a $ can be uniquely represented as the sum

$$ \mathbf a = a _ {1} \mathbf e _ {1} + a _ {2} \mathbf e _ {2} + a _ {3} \mathbf e _ {3} . $$

The numbers $ a _ {1} , a _ {2} , a _ {3} $ are said to be the coordinates (components) of $ \mathbf a $ in the given basis; this is written as $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $.
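The decomposition above can be sketched in a few lines (illustrative Python; the function name `combine` is mine): a vector is recovered from its coordinates as the corresponding linear combination of the basis vectors.

```python
def combine(coords, basis):
    """a = a1*e1 + a2*e2 + a3*e3: linear combination of basis vectors."""
    return tuple(
        sum(c * e[i] for c, e in zip(coords, basis))
        for i in range(len(basis[0]))
    )

# Orthonormal basis i, j, k of three-dimensional space.
basis = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
a = combine((2.0, -1.0, 4.0), basis)  # the vector {2, -1, 4}
```

The same helper works for any (not necessarily orthonormal) basis, since the decomposition only uses linear operations.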

Two vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ are equal if and only if their coordinates in the same basis are equal. A necessary and sufficient condition for two vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $, $ \mathbf b \neq \mathbf 0 $, to be collinear is proportionality of their corresponding coordinates: $ a _ {1} = \lambda b _ {1} $, $ a _ {2} = \lambda b _ {2} $, $ a _ {3} = \lambda b _ {3} $. A necessary and sufficient condition for three vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $, $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ and $ \mathbf c = \{ c _ {1} , c _ {2} , c _ {3} \} $ to be coplanar is the equality

$$ \left | \begin{array}{ccc} a _ {1} & a _ {2} & a _ {3} \\ b _ {1} & b _ {2} & b _ {3} \\ c _ {1} & c _ {2} & c _ {3} \end{array} \right | = 0 . $$

Linear operations on vectors can be reduced to linear operations on coordinates. The coordinates of the sum of two vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ are equal to the sums of the corresponding coordinates: $ \mathbf a + \mathbf b = \{ a _ {1} + b _ {1} , a _ {2} + b _ {2} , a _ {3} + b _ {3} \} $. The coordinates of the product of the vector $ \mathbf a $ by a number $ \lambda $ are equal to the products of the coordinates of $ \mathbf a $ by $ \lambda $: $ \lambda \mathbf a = \{ \lambda a _ {1} , \lambda a _ {2} , \lambda a _ {3} \} $.

The scalar product (or inner product) $ ( \mathbf a , \mathbf b ) $ of two non-zero vectors $ \mathbf a $ and $ \mathbf b $ is the product of their moduli by the cosine of the angle $ \phi $ between them:

$$ ( \mathbf a , \mathbf b ) = | \mathbf a | \cdot | \mathbf b | \cos \phi . $$

Here $ \phi $ is understood as the angle between the vectors that does not exceed $ \pi $. If $ \mathbf a = \mathbf 0 $ or $ \mathbf b = \mathbf 0 $, their scalar product is defined as zero. The scalar product has the following properties:

$ ( \mathbf a , \mathbf b ) = ( \mathbf b , \mathbf a ) $ (commutativity);

$ ( \mathbf a , \mathbf b + \mathbf c ) = ( \mathbf a , \mathbf b ) + ( \mathbf a , \mathbf c ) $ (distributivity with respect to vector addition);

$ \lambda ( \mathbf a , \mathbf b ) = ( \lambda \mathbf a , \mathbf b ) = ( \mathbf a , \lambda \mathbf b ) $ (associativity with respect to multiplication by a number);

$ ( \mathbf a , \mathbf b ) = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or $ \mathbf a \perp \mathbf b $.

Scalar products are often calculated using orthogonal Cartesian coordinates, i.e. vector coordinates in a basis consisting of mutually perpendicular unit vectors $ \mathbf i , \mathbf j , \mathbf k $ (an orthonormal basis).
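The coordinate rules for linear operations and the scalar product lend themselves to a short numerical sketch (illustrative Python; the function names are mine, and the standard orthonormal-coordinate formulas are assumed):

```python
import math

def add(a, b):
    """Coordinatewise sum a + b."""
    return tuple(x + y for x, y in zip(a, b))

def scale(lam, a):
    """Product lam * a of a vector by a number, coordinatewise."""
    return tuple(lam * x for x in a)

def dot(a, b):
    """Scalar product in an orthonormal basis: sum of products of coordinates."""
    return sum(x * y for x, y in zip(a, b))

def modulus(a):
    """|a| = sqrt((a, a))."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle phi between two non-zero vectors, 0 <= phi <= pi."""
    return math.acos(dot(a, b) / (modulus(a) * modulus(b)))

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)
phi = angle(a, b)  # pi/4 for these two vectors
```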
The scalar product of two vectors

$$ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} \ \textrm{ and } \ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} , $$

defined in an orthonormal basis, is calculated by the formula

$$ ( \mathbf a , \mathbf b ) = a _ {1} b _ {1} + a _ {2} b _ {2} + a _ {3} b _ {3} . $$

The cosine of the angle $ \phi $ between two non-zero vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ may be calculated by the formula

$$ \cos \phi = \frac{( \mathbf a , \mathbf b ) }{ | \mathbf a | \cdot | \mathbf b | } , $$

where $ | \mathbf a | = \sqrt {a _ {1} ^ {2} + a _ {2} ^ {2} + a _ {3} ^ {2} } $ and $ | \mathbf b | = \sqrt {b _ {1} ^ {2} + b _ {2} ^ {2} + b _ {3} ^ {2} } $.

The cosines of the angles formed by the vector $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ with the basis vectors $ \mathbf i , \mathbf j , \mathbf k $ are said to be the direction cosines of $ \mathbf a $:

$$ \cos \alpha = \frac{a _ {1} }{\sqrt {a _ {1} ^ {2} + a _ {2} ^ {2} + a _ {3} ^ {2} } } , \quad \cos \beta = \frac{a _ {2} }{\sqrt {a _ {1} ^ {2} + a _ {2} ^ {2} + a _ {3} ^ {2} } } , \quad \cos \gamma = \frac{a _ {3} }{\sqrt {a _ {1} ^ {2} + a _ {2} ^ {2} + a _ {3} ^ {2} } } . $$

The direction cosines have the following property:

$$ \cos ^ {2} \alpha + \cos ^ {2} \beta + \cos ^ {2} \gamma = 1 . $$

A straight line with a unit vector $ \mathbf e $ chosen on it, which specifies the positive direction on the straight line, is said to be an axis. The projection $ \mathrm{Pr} _ {\mathbf e } ( \mathbf a ) $ of a vector $ \mathbf a $ onto the axis is the directed segment on the axis whose algebraic value is equal to the scalar product of $ \mathbf a $ and $ \mathbf e $. Projections are additive:

$$ \mathrm{Pr} _ {\mathbf e } ( \mathbf a + \mathbf b ) = \mathrm{Pr} _ {\mathbf e } \mathbf a + \mathrm{Pr} _ {\mathbf e } \mathbf b , $$

and homogeneous:

$$ \lambda \mathrm{Pr} _ {\mathbf e } ( \mathbf a ) = \mathrm{Pr} _ {\mathbf e } ( \lambda \mathbf a ) . $$

Each coordinate of a vector in an orthonormal basis is equal to the projection of this vector on the axis defined by the respective basis vector.

<img style="border:1px solid;" src="https://www.encyclopediaofmath.org/legacyimages/common_img/v096350a.gif"/>

Figure: v096350a

Left and right vector triples are distinguished in space. A triple of non-coplanar vectors $ \mathbf a , \mathbf b , \mathbf c $ is said to be right if, to the observer at the common vector origin, the movement $ \mathbf a , \mathbf b , \mathbf c $, in that order, appears to be clockwise. If it appears to be counterclockwise, $ \mathbf a , \mathbf b , \mathbf c $ is a left triple. The direction in space of the right (left) vector triples may be represented by stretching out the thumb, index finger and middle finger of the right (left) hand, as shown in the figure. All right (left) vector triples are said to be identically directed. In what follows, the vector triple of basis vectors $ \mathbf i , \mathbf j , \mathbf k $ will be assumed to be a right triple.

Let the direction of positive rotation (from $ \mathbf i $ to $ \mathbf j $) be given on a plane.
Then the pseudo-scalar product $ \mathbf a \lor \mathbf b $ of two non-zero vectors $ \mathbf a $ and $ \mathbf b $ is defined as the product of their lengths (moduli) by the sine of the angle $ \phi $ of positive rotation from $ \mathbf a $ to $ \mathbf b $:

$$ \mathbf a \lor \mathbf b = | \mathbf a | \cdot | \mathbf b | \sin \phi . $$

By definition, if $ \mathbf a $ or $ \mathbf b $ is zero, their pseudo-scalar product is set equal to zero. The pseudo-scalar product has the following properties:

$ \mathbf a \lor \mathbf b = - \mathbf b \lor \mathbf a $ (anti-commutativity);

$ \mathbf a \lor ( \mathbf b + \mathbf c ) = \mathbf a \lor \mathbf b + \mathbf a \lor \mathbf c $ (distributivity with respect to vector addition);

$ \lambda ( \mathbf a \lor \mathbf b ) = \lambda \mathbf a \lor \mathbf b $ (associativity with respect to multiplication by a number);

$ \mathbf a \lor \mathbf b = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if $ \mathbf a $ and $ \mathbf b $ are collinear.

If, in an orthonormal basis, the vectors $ \mathbf a $ and $ \mathbf b $ have coordinates $ \{ a _ {1} , a _ {2} \} $ and $ \{ b _ {1} , b _ {2} \} $, then

$$ \mathbf a \lor \mathbf b = a _ {1} b _ {2} - a _ {2} b _ {1} . $$

The vector product $ [ \mathbf a , \mathbf b ] $ of two non-zero non-collinear vectors $ \mathbf a $ and $ \mathbf b $ is the vector whose modulus is equal to the product of their moduli by the sine of the angle $ \phi $ between them, which is perpendicular to $ \mathbf a $ and to $ \mathbf b $ and is so directed that the vector triple $ \mathbf a , \mathbf b , [ \mathbf a , \mathbf b ] $ is a right triple:

$$ | [ \mathbf a , \mathbf b ] | = | \mathbf a | \cdot | \mathbf b | \sin \phi . $$

This product is defined as zero if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if the two vectors are collinear.
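The pseudo-scalar product in the plane reduces to the coordinate formula $ a _ {1} b _ {2} - a _ {2} b _ {1} $; a minimal sketch (illustrative Python; the name `pseudo_scalar` is mine):

```python
def pseudo_scalar(a, b):
    """a ∨ b = a1*b2 - a2*b1 for plane vectors in an orthonormal basis."""
    return a[0] * b[1] - a[1] * b[0]

a = (3.0, 0.0)
b = (0.0, 2.0)  # the angle of positive rotation from a to b is pi/2

area = pseudo_scalar(a, b)  # equals |a|*|b|*sin(pi/2) = 6
```

The value is the signed area of the parallelogram spanned by the two vectors, which is why it changes sign when the arguments are swapped.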
The vector product has the following properties:

$ [ \mathbf a , \mathbf b ] = - [ \mathbf b , \mathbf a ] $ (anti-commutativity);

$ [ \mathbf a , \mathbf b + \mathbf c ] = [ \mathbf a , \mathbf b ] + [ \mathbf a , \mathbf c ] $ (distributivity with respect to vector addition);

$ \lambda [ \mathbf a , \mathbf b ] = [ \lambda \mathbf a , \mathbf b ] = [ \mathbf a , \lambda \mathbf b ] $ (associativity with respect to multiplication by a number);

$ [ \mathbf a , \mathbf b ] = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if $ \mathbf a $ and $ \mathbf b $ are collinear.

If the coordinates of two vectors $ \mathbf a $ and $ \mathbf b $ in an orthonormal basis are $ \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \{ b _ {1} , b _ {2} , b _ {3} \} $, then

$$ [ \mathbf a , \mathbf b ] = \left \{ \left | \begin{array}{cc} a _ {2} & a _ {3} \\ b _ {2} & b _ {3} \end{array} \right | ,\ \left | \begin{array}{cc} a _ {3} & a _ {1} \\ b _ {3} & b _ {1} \end{array} \right | ,\ \left | \begin{array}{cc} a _ {1} & a _ {2} \\ b _ {1} & b _ {2} \end{array} \right | \right \} . $$
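The standard coordinate formula for the vector product, together with its anti-commutativity and perpendicularity to both factors, can be checked with a short sketch (illustrative Python; names mine):

```python
def cross(a, b):
    """Vector product [a, b] in a right orthonormal basis."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Scalar product in an orthonormal basis."""
    return sum(x * y for x, y in zip(a, b))

# Right orthonormal basis: [i, j] = k, [j, k] = i, [k, i] = j.
i = (1.0, 0.0, 0.0)
j = (0.0, 1.0, 0.0)
k = (0.0, 0.0, 1.0)

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)
n = cross(a, b)  # perpendicular to both a and b
```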

The mixed product $ ( \mathbf a , \mathbf b , \mathbf c ) $ of three vectors $ \mathbf a , \mathbf b , \mathbf c $ is the scalar product of $ \mathbf a $ and the vector product of the vectors $ \mathbf b $ and $ \mathbf c $:

$$ ( \mathbf a , \mathbf b , \mathbf c ) = \ ( \mathbf a , [ \mathbf b , \mathbf c ] ). $$

The mixed product has the following properties:

$$ ( \mathbf a , \mathbf b , \mathbf c ) = ( \mathbf b , \mathbf c , \mathbf a ) = ( \mathbf c , \mathbf a , \mathbf b ) = - ( \mathbf b , \mathbf a , \mathbf c ) = - ( \mathbf c , \mathbf b , \mathbf a ) = - ( \mathbf a , \mathbf c , \mathbf b ) ; $$

$ ( \mathbf a , \mathbf b , \mathbf c ) = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $ and/or $ \mathbf c = \mathbf 0 $, or if the vectors $ \mathbf a , \mathbf b , \mathbf c $ are coplanar;

$ ( \mathbf a , \mathbf b , \mathbf c ) > 0 $ if the vector triple $ \mathbf a , \mathbf b , \mathbf c $ is a right triple; $ ( \mathbf a , \mathbf b , \mathbf c ) < 0 $ if $ \mathbf a , \mathbf b , \mathbf c $ is a left triple.

The modulus of the mixed product is equal to the volume of the parallelepiped constructed on the vectors $ \mathbf a , \mathbf b , \mathbf c $. If, in an orthonormal basis, the vectors $ \mathbf a $, $ \mathbf b $ and $ \mathbf c $ have coordinates $ \{ a _ {1} , a _ {2} , a _ {3} \} $, $ \{ b _ {1} , b _ {2} , b _ {3} \} $ and $ \{ c _ {1} , c _ {2} , c _ {3} \} $, then
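As an illustrative sketch (not part of the original article), the mixed product, its volume interpretation and its sign behaviour can be checked numerically in coordinates:

```python
def cross(a, b):
    """Vector product [a, b] in a right-handed orthonormal basis."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mixed(a, b, c):
    """Mixed product (a, b, c) = (a, [b, c])."""
    return dot(a, cross(b, c))

a, b, c = (2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 4.0)
assert mixed(a, b, c) == 24.0            # volume of the 2 x 3 x 4 box; right triple, so positive
assert mixed(b, c, a) == mixed(a, b, c)  # invariant under cyclic permutation
assert mixed(b, a, c) == -24.0           # changes sign under a transposition
assert mixed(a, b, (1.0, 1.5, 0.0)) == 0.0  # coplanar vectors give 0
```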

$$ ( \mathbf a , \mathbf b , \mathbf c ) = \ \left | \begin{array}{ccc} a _ {1} & a _ {2} & a _ {3} \\ b _ {1} & b _ {2} & b _ {3} \\ c _ {1} & c _ {2} & c _ {3} \end{array} \right | . $$

The double vector product $ [ \mathbf a , \mathbf b , \mathbf c ] $ of three vectors $ \mathbf a , \mathbf b , \mathbf c $ is $ [ \mathbf a , [ \mathbf b , \mathbf c ]] $. The following formulas are used in calculating double vector products:

$$ [ \mathbf a , \mathbf b , \mathbf c ] = \ \mathbf b ( \mathbf a , \mathbf c ) - \mathbf c ( \mathbf a , \mathbf b ) , $$

$$ ([ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d ]) = ( \mathbf a , \mathbf c )( \mathbf b , \mathbf d ) - ( \mathbf a , \mathbf d )( \mathbf b , \mathbf c ), $$

$$ [[ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d ]] = \ ( \mathbf a , \mathbf c , \mathbf d ) \mathbf b - ( \mathbf b , \mathbf c , \mathbf d ) \mathbf a = \ ( \mathbf a , \mathbf b , \mathbf d ) \mathbf c - ( \mathbf a , \mathbf b , \mathbf c ) \mathbf d . $$
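These identities can be sanity-checked on concrete vectors (a sketch added here, not part of the original article; the test values are arbitrary):

```python
def cross(a, b):
    """Vector product [a, b] in a right-handed orthonormal basis."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(t, v):
    return tuple(t * x for x in v)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

a, b, c, d = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 10.0), (1.0, 0.0, 2.0)

# [a, [b, c]] = b (a, c) - c (a, b)
lhs = cross(a, cross(b, c))
rhs = sub(scale(dot(a, c), b), scale(dot(a, b), c))
assert lhs == rhs

# ([a, b], [c, d]) = (a, c)(b, d) - (a, d)(b, c)
assert dot(cross(a, b), cross(c, d)) == dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)
```

All values are integer-valued floats, so equality is exact.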

References

[1] P.S. Aleksandrov, "Lectures on analytical geometry" , Moscow (1968) (In Russian)
[2] N.V. Efimov, "A short course of analytical geometry" , Moscow (1967) (In Russian)
[3] V.A. Il'in, E.G. Poznyak, "Analytical geometry" , MIR (1984) (Translated from Russian)
[4] A.V. Pogorelov, "Analytical geometry" , Moscow (1968) (In Russian)

Comments

References

[a1] P.R. Halmos, "Finite-dimensional vector spaces" , Van Nostrand (1958)
[a2] R. Capildeo, "Vector algebra and mechanics" , Addison-Wesley (1968)
This article was adapted from an original article by Yu.P. Pyt'ev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article