Vector space/Linear independence/Introduction/Section



Definition  

Let $K$ be a field, and let $V$ be a $K$-vector space. A family of vectors $v_i$, $i \in I$ (where $I$ denotes a finite index set), is called linearly independent if an equation of the form

$$\sum_{i \in I} s_i v_i = 0 \quad \text{with } s_i \in K$$

is only possible when

$$s_i = 0$$

for all $i \in I$.

If a family is not linearly independent, then it is called linearly dependent. A linear combination $\sum_{i \in I} s_i v_i = 0$ is called a representation of the null vector. It is called the trivial representation if all coefficients $s_i$ equal $0$, and, if at least one coefficient is not $0$, a nontrivial representation of the null vector. A family of vectors is linearly independent if and only if the null vector can be represented with it only in the trivial way. This is equivalent to the property that no vector of the family can be expressed as a linear combination of the others.
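
This criterion can also be checked computationally. The following is a minimal sketch (assuming Python with SymPy; the helper name is_linearly_independent is chosen here for illustration): a finite family $v_1, \ldots, v_n$ of column vectors is linearly independent exactly when the matrix with the $v_i$ as columns has rank $n$.

    from sympy import Matrix

    def is_linearly_independent(vectors):
        # Stack the vectors as columns of a matrix. By the definition above,
        # the family is linearly independent iff the only solution of
        # sum_i s_i v_i = 0 is s_i = 0, i.e. iff the rank equals the
        # number of vectors.
        A = Matrix.hstack(*(Matrix(v) for v in vectors))
        return A.rank() == len(vectors)

    print(is_linearly_independent([[1, 0], [0, 1]]))  # True
    print(is_linearly_independent([[1, 2], [2, 4]]))  # False: (2,4) = 2*(1,2)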


Example

The standard vectors $e_1, \ldots, e_n$ in $K^n$ are linearly independent. A representation

$$\sum_{i=1}^{n} s_i e_i = 0$$

just means

$$s_1 \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + s_2 \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \cdots + s_n \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

The $i$-th row yields directly $s_i = 0$.
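
As a quick machine check of this example (under the same SymPy assumption as above): the matrix whose columns are $e_1, \ldots, e_n$ is the identity matrix, which has full rank.

    from sympy import eye

    n = 4
    A = eye(n)            # the columns of the identity matrix are e_1, ..., e_n
    print(A.rank() == n)  # True: the standard vectors are linearly independent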


Example

The three vectors

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},\, \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix},\, \begin{pmatrix} 7 \\ 8 \\ 9 \end{pmatrix}$$

are linearly dependent. The equation

$$1 \cdot \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} - 2 \cdot \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix} + 1 \cdot \begin{pmatrix} 7 \\ 8 \\ 9 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

is a nontrivial representation of the null vector.
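
The coefficients of such a nontrivial representation can be recovered as a basis of the null space of the matrix having the three vectors as columns (again a sketch assuming SymPy):

    from sympy import Matrix

    A = Matrix([[1, 4, 7],
                [2, 5, 8],
                [3, 6, 9]])   # the three vectors as columns
    # A basis vector of the null space gives the coefficients (1, -2, 1):
    print(A.nullspace())      # [Matrix([[1], [-2], [1]])]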


Lemma

Let $K$ be a field, let $V$ be a $K$-vector space, and let $v_i$, $i \in I$,

be a family of vectors in $V$. Then the following statements hold.
  1. If the family is linearly independent, then for each subset $J \subseteq I$, also the family $v_i$, $i \in J$, is linearly independent.
  2. The empty family is linearly independent.
  3. If the family contains the null vector, then it is not linearly independent.
  4. If a vector appears several times in the family, then the family is not linearly independent.
  5. A single vector $v$ is linearly independent if and only if $v \neq 0$.
  6. Two vectors $u$ and $v$ are linearly independent if and only if $u$ is not a scalar multiple of $v$, and vice versa.

Proof
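
As an informal sanity check of parts (3), (4), and (5), not a substitute for the proof, the rank criterion from the first sketch can be tried on small instances (assuming Python with SymPy, as above):

    from sympy import Matrix

    def is_linearly_independent(vectors):
        # Rank criterion, as in the first sketch above.
        A = Matrix.hstack(*(Matrix(v) for v in vectors))
        return A.rank() == len(vectors)

    # Part (3): a family containing the null vector is linearly dependent.
    print(is_linearly_independent([[0, 0], [1, 2]]))  # False

    # Part (4): repeating a vector makes the family linearly dependent.
    print(is_linearly_independent([[1, 2], [1, 2]]))  # False

    # Part (5): a single vector is linearly independent iff it is nonzero.
    print(is_linearly_independent([[3, 5]]))          # True
    print(is_linearly_independent([[0, 0]]))          # False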



Remark

The vectors $v_1, \ldots, v_n$ in $K^m$ are linearly dependent if and only if the homogeneous linear system

$$x_1 v_1 + x_2 v_2 + \cdots + x_n v_n = 0$$

has a nontrivial solution.
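
In this formulation, deciding dependence amounts to solving the homogeneous system $Ax = 0$, where $A$ has the vectors as columns. A sketch under the same SymPy assumption, applied to the dependent vectors from the example above:

    from sympy import Matrix, linsolve, symbols, zeros

    x1, x2, x3 = symbols('x1 x2 x3')
    A = Matrix([[1, 4, 7],
                [2, 5, 8],
                [3, 6, 9]])              # columns: the three vectors from the example
    b = zeros(3, 1)
    # The solution set is a one-parameter family, so nontrivial solutions exist:
    print(linsolve((A, b), x1, x2, x3))  # {(x3, -2*x3, x3)}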