
Vector algebra



2020 Mathematics Subject Classification: Primary: 15A72

A branch of vector calculus dealing with the simplest operations involving (free) vectors (cf. Vector). These include linear operations, viz. addition of vectors and multiplication of a vector by a number.

The sum $ \mathbf a + \mathbf b $ of two vectors $ \mathbf a $ and $ \mathbf b $ is the vector drawn from the origin of $ \mathbf a $ to the end of $ \mathbf b $ if the end of $ \mathbf a $ and the origin of $ \mathbf b $ coincide. The operation of vector addition has the following properties:

$ \mathbf a + \mathbf b = \mathbf b + \mathbf a $ (commutativity);

$ ( \mathbf a + \mathbf b ) + \mathbf c = \mathbf a + ( \mathbf b + \mathbf c ) $ (associativity);

$ \mathbf a + \mathbf 0 = \mathbf a $ (existence of a zero element);

$ \mathbf a + (- \mathbf a ) = \mathbf 0 $ (existence of an inverse element).

Here $ \mathbf 0 $ is the zero vector, and $ - \mathbf a $ is the vector opposite to $ \mathbf a $ (its inverse). The difference $ \mathbf a - \mathbf b $ of two vectors $ \mathbf a $ and $ \mathbf b $ is the vector $ \mathbf x $ for which $ \mathbf x + \mathbf b = \mathbf a $.

The product $ \lambda \mathbf a $ of a vector $ \mathbf a $ by a number $ \lambda $ is, if $ \lambda \neq 0 $ and $ \mathbf a \neq \mathbf 0 $, the vector whose modulus equals $ | \lambda | \, | \mathbf a | $ and whose direction coincides with that of $ \mathbf a $ if $ \lambda > 0 $ and is opposite to it if $ \lambda < 0 $. If $ \lambda = 0 $ or $ \mathbf a = \mathbf 0 $ (or both), then $ \lambda \mathbf a = \mathbf 0 $. The operation of multiplication of a vector by a number has the following properties:

$ \lambda ( \mathbf a + \mathbf b ) = \lambda \mathbf a + \lambda \mathbf b $ (distributivity with respect to vector addition);

$ ( \lambda + \mu ) \mathbf a = \lambda \mathbf a + \mu \mathbf a $ (distributivity with respect to addition of numbers);

$ \lambda ( \mu \mathbf a ) = ( \lambda \mu ) \mathbf a $ (associativity);

$ 1 \cdot \mathbf a = \mathbf a $ (multiplication by one).

The set of all free vectors of a space with the induced operations of addition and multiplication by a number forms a vector space (a linear space). Below "vector" means free vector, or equivalently, element of a given vector space.
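For illustration, the properties listed above can be checked numerically. The following Python sketch (using numpy; the coordinate values and the names a, b, lam, mu are made up for this example and are not part of the article) verifies the stated identities for one concrete choice of vectors:

```python
import numpy as np

# Two free vectors given by their coordinates in some fixed basis (made-up values).
a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])
lam, mu = 2.0, -3.0

# Properties of addition and of multiplication by a number.
assert np.allclose(a + b, b + a)                      # commutativity
assert np.allclose((a + b) + a, a + (b + a))          # associativity
assert np.allclose(a + np.zeros(3), a)                # zero element
assert np.allclose(a + (-a), np.zeros(3))             # inverse element
assert np.allclose(lam * (a + b), lam * a + lam * b)  # distributivity over vectors
assert np.allclose((lam + mu) * a, lam * a + mu * a)  # distributivity over numbers
assert np.allclose(lam * (mu * a), (lam * mu) * a)    # associativity of scaling
assert np.allclose(1.0 * a, a)                        # multiplication by one
```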

An important concept in vector algebra is that of linear dependence of vectors. Vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ are said to be linearly dependent if there exist numbers $ \alpha , \beta , \dots, \gamma $, at least one of which is non-zero, such that the equation

$$ \tag{1 } \alpha \mathbf a + \beta \mathbf b + \dots + \gamma \mathbf c = \mathbf 0 $$

is valid. For two vectors to be linearly dependent it is necessary and sufficient that they be collinear; for three vectors to be linearly dependent it is necessary and sufficient that they be coplanar. If one of the vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ is zero, the vectors are linearly dependent. The vectors $ \mathbf a , \mathbf b , \dots, \mathbf c $ are said to be linearly independent if it follows from (1) that all the numbers $ \alpha , \beta , \dots, \gamma $ are equal to zero. In a plane there exist at most two linearly independent vectors; in three-dimensional space, at most three.
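As a numerical illustration (not part of the original article), linear dependence of three vectors in space can be tested through the rank, or the determinant, of the matrix formed by their coordinates; the vectors below are made up so that one of them is a linear combination of the other two:

```python
import numpy as np

# Sample vectors: c is a linear combination of a and b,
# so the triple is coplanar and hence linearly dependent.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, -1.0])
c = 3.0 * a - 2.0 * b

# For three vectors in space, linear dependence is equivalent to
# the 3x3 matrix of their coordinates being singular (rank < 3).
M = np.vstack([a, b, c])
print(np.linalg.matrix_rank(M))         # 2  -> linearly dependent
print(np.isclose(np.linalg.det(M), 0))  # True
```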

A set of three (in a plane, two) linearly independent vectors $ \mathbf e _ {1} , \mathbf e _ {2} , \mathbf e _ {3} $ of three-dimensional space, taken in a certain order, forms a basis. Any vector $ \mathbf a $ can be uniquely represented as the sum

$$ \mathbf a = a _ {1} \mathbf e _ {1} + a _ {2} \mathbf e _ {2} + a _ {3} \mathbf e _ {3} . $$

The numbers $ a _ {1} , a _ {2} , a _ {3} $ are said to be the coordinates (components) of $ \mathbf a $ in the given basis; this is written as $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $.
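The coordinates of a vector in a given, not necessarily orthonormal, basis can be found by solving a linear system. A minimal Python sketch follows; the basis e1, e2, e3 and the vector a are illustrative values, not taken from the article:

```python
import numpy as np

# A (not necessarily orthonormal) basis and a target vector, all written
# in standard coordinates; the numbers are made up for illustration.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([1.0, 1.0, 1.0])
a  = np.array([2.0, 3.0, 4.0])

# The coordinates a1, a2, a3 of a in the basis solve  a1*e1 + a2*e2 + a3*e3 = a.
E = np.column_stack([e1, e2, e3])
a1, a2, a3 = np.linalg.solve(E, a)
print(a1, a2, a3)                                   # -1.0 -1.0 4.0
assert np.allclose(a1 * e1 + a2 * e2 + a3 * e3, a)  # the representation is unique
```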

Two vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ are equal if and only if their coordinates in the same basis are equal. A necessary and sufficient condition for two vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $ and $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $, $ \mathbf b \neq \mathbf 0 $, to be collinear is proportionality of their corresponding coordinates: $ a _ {1} = \lambda b _ {1} $, $ a _ {2} = \lambda b _ {2} $, $ a _ {3} = \lambda b _ {3} $. A necessary and sufficient condition for three vectors $ \mathbf a = \{ a _ {1} , a _ {2} , a _ {3} \} $, $ \mathbf b = \{ b _ {1} , b _ {2} , b _ {3} \} $ and $ \mathbf c = \{ c _ {1} , c _ {2} , c _ {3} \} $ to be coplanar is the equality

$$ \left | \begin{array}{lll} a_{1} &a_{2} &a_{3} \\ b_{1} &b_{2} &b_{3} \\ c_{1} &c_{2} &c_{3} \end{array} \right | = 0 . $$

Linear operations on vectors can be reduced to linear operations on coordinates. The coordinates of the sum of two vectors $ \mathbf a = \{ a_{1}, a_{2}, a_{3} \} $ and $ \mathbf b = \{ b_{1}, b_{2}, b_{3} \} $ are the sums of the corresponding coordinates: $ \mathbf a + \mathbf b = \{ a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3} \} $. The coordinates of the product of the vector $ \mathbf a $ by a number $ \lambda $ are the products of the coordinates of $ \mathbf a $ by $ \lambda $: $ \lambda \mathbf a = \{ \lambda a_{1}, \lambda a_{2}, \lambda a_{3} \} $.

The scalar product (or inner product, cf. Inner product) $ ( \mathbf a , \mathbf b ) $ of two non-zero vectors $ \mathbf a $ and $ \mathbf b $ is the product of their moduli by the cosine of the angle $ \phi $ between them:

$$ ( \mathbf a , \mathbf b ) = | \mathbf a | \cdot | \mathbf b | \cos \phi . $$

Here $ \phi $ is understood as the angle between the vectors not exceeding $ \pi $. If $ \mathbf a = \mathbf 0 $ or $ \mathbf b = \mathbf 0 $, the scalar product is defined to be zero. The scalar product has the following properties:

$ ( \mathbf a , \mathbf b ) = ( \mathbf b , \mathbf a ) $ (commutativity);

$ ( \mathbf a , \mathbf b + \mathbf c ) = ( \mathbf a , \mathbf b ) + ( \mathbf a , \mathbf c ) $ (distributivity with respect to vector addition);

$ \lambda ( \mathbf a , \mathbf b ) = ( \lambda \mathbf a , \mathbf b ) = ( \mathbf a , \lambda \mathbf b ) $ (associativity with respect to multiplication by a number);

$ ( \mathbf a , \mathbf b ) = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or $ \mathbf a \perp \mathbf b $.

Scalar products are often calculated using orthogonal Cartesian coordinates, i.e. vector coordinates in a basis consisting of mutually perpendicular unit vectors $ \mathbf i , \mathbf j , \mathbf k $ (an orthonormal basis). The scalar product of two vectors

$$ \mathbf a = \{ a_{1}, a_{2}, a_{3} \} \quad \textrm{ and } \quad \mathbf b = \{ b_{1}, b_{2}, b_{3} \} , $$

given in an orthonormal basis, is calculated by the formula

$$ ( \mathbf a , \mathbf b ) = a_{1} b_{1} + a_{2} b_{2} + a_{3} b_{3} . $$

The cosine of the angle $ \phi $ between two non-zero vectors $ \mathbf a = \{ a_{1}, a_{2}, a_{3} \} $ and $ \mathbf b = \{ b_{1}, b_{2}, b_{3} \} $ may be calculated by the formula

$$ \cos \phi = \frac{( \mathbf a , \mathbf b ) }{ | \mathbf a | \cdot | \mathbf b | } , $$

where $ | \mathbf a | = \sqrt{ a_{1}^{2} + a_{2}^{2} + a_{3}^{2} } $ and $ | \mathbf b | = \sqrt{ b_{1}^{2} + b_{2}^{2} + b_{3}^{2} } $.

The cosines of the angles formed by the vector $ \mathbf a = \{ a_{1}, a_{2}, a_{3} \} $ with the basis vectors $ \mathbf i , \mathbf j , \mathbf k $ are said to be the direction cosines of $ \mathbf a $:

$$ \cos \alpha = \frac{a_{1} }{ \sqrt{ a_{1}^{2} + a_{2}^{2} + a_{3}^{2} } } , \quad \cos \beta = \frac{a_{2} }{ \sqrt{ a_{1}^{2} + a_{2}^{2} + a_{3}^{2} } } , \quad \cos \gamma = \frac{a_{3} }{ \sqrt{ a_{1}^{2} + a_{2}^{2} + a_{3}^{2} } } . $$

The direction cosines have the following property:

$$ \cos^{2} \alpha + \cos^{2} \beta + \cos^{2} \gamma = 1 . $$

A straight line with a unit vector $ \mathbf e $ chosen on it, which specifies the positive direction on the straight line, is said to be an axis. The projection $ \operatorname{Pr}_{\mathbf e } ( \mathbf a ) $ of a vector $ \mathbf a $ onto the axis is the directed segment on the axis whose algebraic value is equal to the scalar product of $ \mathbf a $ and $ \mathbf e $. Projections are additive:

$$ \operatorname{Pr}_{\mathbf e } ( \mathbf a + \mathbf b ) = \operatorname{Pr}_{\mathbf e } \mathbf a + \operatorname{Pr}_{\mathbf e } \mathbf b , $$

and homogeneous:

$$ \lambda \operatorname{Pr}_{\mathbf e } ( \mathbf a ) = \operatorname{Pr}_{\mathbf e } ( \lambda \mathbf a ) . $$

Each coordinate of a vector in an orthonormal basis is equal to the projection of this vector onto the axis defined by the respective basis vector.

Figure: v096350a. Right and left vector triples, illustrated by the right and left hand.

Left and right vector triples are distinguished in space. A triple of non-coplanar vectors $ \mathbf a , \mathbf b , \mathbf c $ is said to be right if, to the observer at the common vector origin, the movement $ \mathbf a , \mathbf b , \mathbf c $, in that order, appears to be clockwise. If it appears to be counterclockwise, $ \mathbf a , \mathbf b , \mathbf c $ is a left triple. The direction in space of the right (left) vector triples may be represented by stretching out the thumb, index finger and middle finger of the right (left) hand, as shown in the figure. All right (left) vector triples are said to be identically directed. In what follows, the triple of basis vectors $ \mathbf i , \mathbf j , \mathbf k $ is assumed to be a right triple.

Let the direction of positive rotation (from $ \mathbf i $ to $ \mathbf j $) be given on a plane. Then the pseudo-scalar product $ \mathbf a \lor \mathbf b $ of two non-zero vectors $ \mathbf a $ and $ \mathbf b $ is defined as the product of their moduli by the sine of the angle $ \phi $ of positive rotation from $ \mathbf a $ to $ \mathbf b $:

$$ \mathbf a \lor \mathbf b = | \mathbf a | \cdot | \mathbf b | \sin \phi . $$

By definition, if $ \mathbf a $ or $ \mathbf b $ is zero, their pseudo-scalar product is set equal to zero. The pseudo-scalar product has the following properties:

$ \mathbf a \lor \mathbf b = - \mathbf b \lor \mathbf a $ (anti-commutativity);

$ \mathbf a \lor ( \mathbf b + \mathbf c ) = \mathbf a \lor \mathbf b + \mathbf a \lor \mathbf c $ (distributivity with respect to vector addition);

$ \lambda ( \mathbf a \lor \mathbf b ) = \lambda \mathbf a \lor \mathbf b $ (associativity with respect to multiplication by a number);

$ \mathbf a \lor \mathbf b = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if $ \mathbf a $ and $ \mathbf b $ are collinear.

If, in an orthonormal basis, the vectors $ \mathbf a $ and $ \mathbf b $ have coordinates $ \{ a_{1}, a_{2} \} $ and $ \{ b_{1}, b_{2} \} $, then

$$ \mathbf a \lor \mathbf b = a_{1} b_{2} - a_{2} b_{1} . $$
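A short numerical sketch of the metric notions just introduced (scalar product, angle, direction cosines, projection onto an axis, and the pseudo-scalar product of plane vectors) follows; all coordinate values and names are illustrative and not part of the article:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # made-up coordinates in an orthonormal basis
b = np.array([4.0, 0.0, 3.0])

# Scalar product and the angle between the vectors.
dot = np.dot(a, b)                                     # a1*b1 + a2*b2 + a3*b3
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))
phi = np.arccos(cos_phi)                               # angle in [0, pi]

# Direction cosines of a and their defining property.
dir_cos = a / np.linalg.norm(a)
assert np.isclose(np.sum(dir_cos ** 2), 1.0)

# Projection of a onto the axis with unit vector e.
e = b / np.linalg.norm(b)
proj = np.dot(a, e)

# Pseudo-scalar product of two plane vectors {a1, a2} and {b1, b2}.
u, v = np.array([1.0, 2.0]), np.array([3.0, 1.0])
pseudo = u[0] * v[1] - u[1] * v[0]                     # a1*b2 - a2*b1
print(dot, phi, proj, pseudo)
```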
The vector product $ [ \mathbf a , \mathbf b ] $ of two non-zero non-collinear vectors $ \mathbf a $ and $ \mathbf b $ is the vector whose modulus is equal to the product of their moduli by the sine of the angle $ \phi $ between them,

$$ | [ \mathbf a , \mathbf b ] | = | \mathbf a | \cdot | \mathbf b | \sin \phi , $$

which is perpendicular to $ \mathbf a $ and to $ \mathbf b $ and is so directed that the triple $ \mathbf a , \mathbf b , [ \mathbf a , \mathbf b ] $ is a right triple. The vector product is defined to be zero if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if the two vectors are collinear. The vector product has the following properties:

$ [ \mathbf a , \mathbf b ] = - [ \mathbf b , \mathbf a ] $ (anti-commutativity);

$ [ \mathbf a , \mathbf b + \mathbf c ] = [ \mathbf a , \mathbf b ] + [ \mathbf a , \mathbf c ] $ (distributivity with respect to vector addition);

$ \lambda [ \mathbf a , \mathbf b ] = [ \lambda \mathbf a , \mathbf b ] = [ \mathbf a , \lambda \mathbf b ] $ (associativity with respect to multiplication by a number);

$ [ \mathbf a , \mathbf b ] = \mathbf 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $, or if $ \mathbf a $ and $ \mathbf b $ are collinear.

If the coordinates of two vectors $ \mathbf a $ and $ \mathbf b $ in an orthonormal basis are $ \{ a_{1}, a_{2}, a_{3} \} $ and $ \{ b_{1}, b_{2}, b_{3} \} $, then

$$ [ \mathbf a , \mathbf b ] = \left \{ \left | \begin{array}{ll} a_{2} &a_{3} \\ b_{2} &b_{3} \end{array} \right | ,\ \left | \begin{array}{ll} a_{3} &a_{1} \\ b_{3} &b_{1} \end{array} \right | ,\ \left | \begin{array}{ll} a_{1} &a_{2} \\ b_{1} &b_{2} \end{array} \right | \right \} . $$
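The coordinate formula for the vector product can be checked against numpy's built-in cross product, which uses the same right-handed convention; the vectors below are illustrative only:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])    # made-up coordinates in a right orthonormal basis
b = np.array([-1.0, 0.0, 2.0])

# Coordinate formula for the vector product (the three 2x2 determinants expanded).
cross = np.array([
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
])
assert np.allclose(cross, np.cross(a, b))

# [a, b] is perpendicular to both factors, and its modulus is |a||b| sin(phi).
assert np.isclose(np.dot(cross, a), 0.0)
assert np.isclose(np.dot(cross, b), 0.0)
sin_phi = np.linalg.norm(cross) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cross, sin_phi)
```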

The mixed product $ ( \mathbf a , \mathbf b , \mathbf c ) $ of three vectors $ \mathbf a , \mathbf b , \mathbf c $ is the scalar product of $ \mathbf a $ and the vector product of the vectors $ \mathbf b $ and $ \mathbf c $:

$$ ( \mathbf a , \mathbf b , \mathbf c ) = \ ( \mathbf a , [ \mathbf b , \mathbf c ] ). $$

The mixed product has the following properties:

$$ ( \mathbf a , \mathbf b , \mathbf c ) = ( \mathbf b , \mathbf c , \mathbf a ) = ( \mathbf c , \mathbf a , \mathbf b ) = - ( \mathbf b , \mathbf a , \mathbf c ) = - ( \mathbf c , \mathbf b , \mathbf a ) = - ( \mathbf a , \mathbf c , \mathbf b ) ; $$

$ ( \mathbf a , \mathbf b , \mathbf c ) = 0 $ only if $ \mathbf a = \mathbf 0 $ and/or $ \mathbf b = \mathbf 0 $ and/or $ \mathbf c = \mathbf 0 $, or if the vectors $ \mathbf a , \mathbf b , \mathbf c $ are coplanar;

$ ( \mathbf a , \mathbf b , \mathbf c ) > 0 $ if the vector triple $ \mathbf a , \mathbf b , \mathbf c $ is a right triple; $ ( \mathbf a , \mathbf b , \mathbf c ) < 0 $ if $ \mathbf a , \mathbf b , \mathbf c $ is a left triple.

The modulus of the mixed product is equal to the volume of the parallelepiped constructed on the vectors $ \mathbf a , \mathbf b , \mathbf c $. If, in an orthonormal basis, the vectors $ \mathbf a $, $ \mathbf b $ and $ \mathbf c $ have coordinates $ \{ a_{1}, a_{2}, a_{3} \} $, $ \{ b_{1}, b_{2}, b_{3} \} $ and $ \{ c_{1}, c_{2}, c_{3} \} $, then

$$ ( \mathbf a , \mathbf b , \mathbf c ) = \left | \begin{array}{lll} a_{1} &a_{2} &a_{3} \\ b_{1} &b_{2} &b_{3} \\ c_{1} &c_{2} &c_{3} \end{array} \right | . $$

The double vector product $ [ \mathbf a , \mathbf b , \mathbf c ] $ of three vectors $ \mathbf a , \mathbf b , \mathbf c $ is the vector $ [ \mathbf a , [ \mathbf b , \mathbf c ]] $. The following formulas are used in calculating double vector products:

$$ [ \mathbf a , \mathbf b , \mathbf c ] = \mathbf b ( \mathbf a , \mathbf c ) - \mathbf c ( \mathbf a , \mathbf b ) , $$

$$ ([ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d ]) = ( \mathbf a , \mathbf c )( \mathbf b , \mathbf d ) - ( \mathbf a , \mathbf d )( \mathbf b , \mathbf c ) , $$

$$ [[ \mathbf a , \mathbf b ] , [ \mathbf c , \mathbf d ]] = ( \mathbf a , \mathbf c , \mathbf d ) \mathbf b - ( \mathbf b , \mathbf c , \mathbf d ) \mathbf a = ( \mathbf a , \mathbf b , \mathbf d ) \mathbf c - ( \mathbf a , \mathbf b , \mathbf c ) \mathbf d . $$
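The determinant formula for the mixed product and the double-product identities above can be verified numerically; the following sketch uses illustrative vectors and a small helper mixed() that is not part of the article:

```python
import numpy as np

# Illustrative vectors in a right orthonormal basis.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
c = np.array([2.0, 1.0, 0.0])
d = np.array([1.0, 1.0, 1.0])

def mixed(u, v, w):
    """Mixed product (u, v, w) = (u, [v, w]) = determinant of the coordinate rows."""
    return np.dot(u, np.cross(v, w))

assert np.isclose(mixed(a, b, c), np.linalg.det(np.vstack([a, b, c])))
print(abs(mixed(a, b, c)))   # volume of the parallelepiped spanned by a, b, c

# Double vector product: [a, [b, c]] = b(a, c) - c(a, b).
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)

# ([a, b], [c, d]) = (a, c)(b, d) - (a, d)(b, c).
assert np.isclose(np.dot(np.cross(a, b), np.cross(c, d)),
                  np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c))
```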

References

[1] P.S. Aleksandrov, "Lectures on analytical geometry" , Moscow (1968) (In Russian)
[2] N.V. Efimov, "A short course of analytical geometry" , Moscow (1967) (In Russian)
[3] V.A. Il'in, E.G. Poznyak, "Analytical geometry" , MIR (1984) (Translated from Russian)
[4] A.V. Pogorelov, "Analytical geometry" , Moscow (1968) (In Russian)

Comments

References

[a1] P.R. Halmos, "Finite-dimensional vector spaces" , v. Nostrand (1958)
[a2] R. Capildeo, "Vector algebra and mechanics" , Addison-Wesley (1968)
This article was adapted from an original article by Yu.P. Pyt'ev (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.