Vector space/Base change/Diagram/Introduction/Section


We know, due to a fact proved earlier, that in a finite-dimensional vector space any two bases have the same length, that is, the same number of vectors. Moreover, every vector has, with respect to every basis, unique coordinates (its coefficient tuple). How do these coordinates behave when we change the basis? This is answered by the following statement.


Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $\mathfrak{v} = v_1 , \ldots , v_n$ and $\mathfrak{w} = w_1 , \ldots , w_n$ denote bases of $V$. Suppose that

$$v_j = \sum_{i = 1}^n c_{ij} w_i$$

with coefficients $c_{ij} \in K$, which we collect into the $n \times n$-matrix

$$M^{\mathfrak{v}}_{\mathfrak{w}} = (c_{ij})_{ij}.$$

Then a vector $u$, which has the coordinates $(s_1 , \ldots , s_n)$ with respect to the basis $\mathfrak{v}$, has the coordinates

$$\begin{pmatrix} t_1 \\ \vdots \\ t_n \end{pmatrix} = M^{\mathfrak{v}}_{\mathfrak{w}} \begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix}, \quad \text{that is,} \quad t_i = \sum_{j = 1}^n c_{ij} s_j,$$

with respect to the basis $\mathfrak{w}$.

This follows directly from

$$u = \sum_{j = 1}^n s_j v_j = \sum_{j = 1}^n s_j \left( \sum_{i = 1}^n c_{ij} w_i \right) = \sum_{i = 1}^n \left( \sum_{j = 1}^n c_{ij} s_j \right) w_i$$

and the definition of matrix multiplication.
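As a small numerical sketch of this coordinate transformation (the two bases of $\mathbb{R}^2$ below are our own illustrative choice, not taken from the text):

```python
import numpy as np

# Illustrative bases of R^2:
# v-basis: v1 = (1, 0), v2 = (1, 1);  w-basis: w1 = (2, 0), w2 = (0, 1).
v1, v2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
w1, w2 = np.array([2.0, 0.0]), np.array([0.0, 1.0])

# The coefficients c_ij are defined by v_j = sum_i c_ij * w_i.
# Solving W @ c_j = v_j column-wise (W has w1, w2 as columns) gives M^v_w.
W = np.column_stack([w1, w2])
M = np.linalg.solve(W, np.column_stack([v1, v2]))
assert np.allclose(M, [[0.5, 0.5], [0.0, 1.0]])

# A vector with coordinates s = (3, 4) with respect to the v-basis ...
s = np.array([3.0, 4.0])
u = 3 * v1 + 4 * v2              # the actual vector of R^2, here (7, 4)

# ... has coordinates t = M @ s with respect to the w-basis.
t = M @ s
assert np.allclose(t, [3.5, 4.0])
assert np.allclose(t[0] * w1 + t[1] * w2, u)
```

The solve step mirrors the definition: each column of $M^{\mathfrak{v}}_{\mathfrak{w}}$ holds the coefficients of one $v_j$ in the $w$-basis.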


If for a basis $\mathfrak{v} = v_1 , \ldots , v_n$, we consider the corresponding bijective mapping (see remark)

$$\Psi_{\mathfrak{v}} \colon K^n \longrightarrow V , \quad (s_1 , \ldots , s_n) \longmapsto \sum_{i = 1}^n s_i v_i,$$

then we can express the preceding statement as saying that the triangle

$$\begin{array}{ccc} K^n & \stackrel{M^{\mathfrak{v}}_{\mathfrak{w}}}{\longrightarrow} & K^n \\ \Psi_{\mathfrak{v}} \searrow & & \swarrow \Psi_{\mathfrak{w}} \\ & V & \end{array}$$

commutes.[1]

Let $K$ denote a field, and let $V$ denote a $K$-vector space of dimension $n$. Let $\mathfrak{v} = v_1 , \ldots , v_n$ and $\mathfrak{w} = w_1 , \ldots , w_n$ denote two bases of $V$. Let

$$v_j = \sum_{i = 1}^n c_{ij} w_i$$

with coefficients $c_{ij} \in K$. Then the $n \times n$-matrix

$$M^{\mathfrak{v}}_{\mathfrak{w}} = (c_{ij})_{ij}$$

is called the transformation matrix of the base change from $\mathfrak{v}$ to $\mathfrak{w}$.


The $j$-th column of the transformation matrix $M^{\mathfrak{v}}_{\mathfrak{w}}$ consists of the coordinates of $v_j$ with respect to the basis $\mathfrak{w}$. The vector $v_j$ has the coordinate tuple $e_j$ (the $j$-th standard vector) with respect to the basis $\mathfrak{v}$; when we apply the matrix to $e_j$, we get the $j$-th column of the matrix, and this is just the coordinate tuple of $v_j$ with respect to the basis $\mathfrak{w}$.

For a one-dimensional space $V$ and bases $\mathfrak{v} = v$ and $\mathfrak{w} = w$, we have

$$M^{\mathfrak{v}}_{\mathfrak{w}} = \left( \frac{v}{w} \right),$$

where the fraction is well-defined: it is the unique scalar $c \in K$ with $v = c w$. This might help in memorizing the order of the bases in this notation.

Another important relation is

$$\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \left( M^{\mathfrak{v}}_{\mathfrak{w}} \right)^{\operatorname{tr}} \begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}.$$

Note that here, the matrix is not applied to an $n$-tuple of $K$, but to an $n$-tuple of $V$, yielding a new $n$-tuple of $V$. This equation might be taken as an argument to define the transformation matrix the other way around; however, we consider the coordinate behavior described in the fact above as decisive.
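This relation on tuples of vectors can be sketched numerically; here the rows of a stacked array play the role of the $n$-tuple of vectors of $V$, and the bases are again our own illustrative choice:

```python
import numpy as np

# Illustrative data: w-basis of R^2 and a transformation matrix M^v_w,
# whose columns define v_j = sum_i c_ij * w_i.
w1, w2 = np.array([2.0, 0.0]), np.array([0.0, 1.0])
M = np.array([[0.5, 0.5],
              [0.0, 1.0]])          # M^v_w

v1 = M[0, 0] * w1 + M[1, 0] * w2    # v1 = (1, 0)
v2 = M[0, 1] * w1 + M[1, 1] * w2    # v2 = (1, 1)

# The tuple (v1, v2) equals the transpose of M^v_w applied to (w1, w2):
V_tuple = np.stack([v1, v2])        # rows are vectors of V, not coordinate tuples
W_tuple = np.stack([w1, w2])
assert np.allclose(V_tuple, M.T @ W_tuple)
```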

In case $V = K^n$,

if $\mathfrak{e} = e_1 , \ldots , e_n$ is the standard basis, and $\mathfrak{v} = v_1 , \ldots , v_n$ some further basis, we obtain the transformation matrix $M^{\mathfrak{e}}_{\mathfrak{v}}$ of the base change from $\mathfrak{e}$ to $\mathfrak{v}$ by expressing each $e_j$ as a linear combination of the basis vectors $v_1 , \ldots , v_n$, and writing down the corresponding coefficient tuples as columns. The inverse transformation matrix, $M^{\mathfrak{v}}_{\mathfrak{e}}$, consists simply in the vectors $v_1 , \ldots , v_n$, written as columns.
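A short sketch of this special case (the basis of $\mathbb{R}^3$ below is an illustrative choice; any linearly independent collection works):

```python
import numpy as np

# Illustrative basis of R^3:
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 1.0])

# From v to the standard basis e: simply the v_j, written as columns.
M_v_e = np.column_stack([v1, v2, v3])

# From e to v: the coordinates of each e_j with respect to v,
# i.e. the inverse matrix.
M_e_v = np.linalg.inv(M_v_e)
assert np.allclose(M_v_e @ M_e_v, np.eye(3))

# Column j of M_e_v expresses e_j as a linear combination of v1, v2, v3:
c = M_e_v[:, 0]
assert np.allclose(c[0] * v1 + c[1] * v2 + c[2] * v3, [1.0, 0.0, 0.0])
```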


We consider $\mathbb{R}^2$ with the standard basis

$$\mathfrak{e} = e_1 , e_2,$$

and the basis $\mathfrak{u} = u_1 , u_2$ given by

$$u_1 = \begin{pmatrix} a \\ b \end{pmatrix} , \quad u_2 = \begin{pmatrix} c \\ d \end{pmatrix} , \quad \text{where } ad - bc \neq 0.$$

The basis vectors of $\mathfrak{u}$ can be expressed directly with the standard basis, namely

$$u_1 = a e_1 + b e_2 \quad \text{and} \quad u_2 = c e_1 + d e_2.$$

Therefore, we get immediately

$$M^{\mathfrak{u}}_{\mathfrak{e}} = \begin{pmatrix} a & c \\ b & d \end{pmatrix}.$$

For example, the vector that has the coordinates $(s, t)$ with respect to $\mathfrak{u}$ has the coordinates

$$M^{\mathfrak{u}}_{\mathfrak{e}} \begin{pmatrix} s \\ t \end{pmatrix} = \begin{pmatrix} as + ct \\ bs + dt \end{pmatrix}$$

with respect to the standard basis $\mathfrak{e}$. The transformation matrix $M^{\mathfrak{e}}_{\mathfrak{u}}$ is more difficult to compute. We have to write the standard vectors as linear combinations of $u_1$ and $u_2$. A direct computation (solving two linear systems) yields

$$e_1 = \frac{d}{ad - bc} u_1 - \frac{b}{ad - bc} u_2$$

and

$$e_2 = \frac{-c}{ad - bc} u_1 + \frac{a}{ad - bc} u_2.$$

Hence,

$$M^{\mathfrak{e}}_{\mathfrak{u}} = \frac{1}{ad - bc} \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}.$$

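The computation of a base change between the standard basis and another basis of $\mathbb{R}^2$ can be replayed numerically; the concrete entries $u_1 = (1, 2)$, $u_2 = (-2, 3)$ below are our own illustrative choice:

```python
import numpy as np

# Illustrative basis of R^2 (our choice): a = 1, b = 2, c = -2, d = 3.
u1 = np.array([1.0, 2.0])
u2 = np.array([-2.0, 3.0])

# M^u_e: the basis vectors written as columns -- no computation needed.
M_u_e = np.column_stack([u1, u2])            # [[1, -2], [2, 3]]

# M^e_u: solve the two linear systems expressing e1 and e2 in terms of u1, u2.
e = np.eye(2)
M_e_u = np.column_stack(
    [np.linalg.solve(M_u_e, e[:, j]) for j in range(2)]
)

# With ad - bc = 7 this is (1/7) * [[3, 2], [-2, 1]].
assert np.allclose(M_e_u, np.array([[3.0, 2.0], [-2.0, 1.0]]) / 7)

# The two transformation matrices are inverse to each other.
assert np.allclose(M_u_e @ M_e_u, np.eye(2))
```

Solving column by column mirrors the two linear systems in the text; `np.linalg.inv(M_u_e)` would give the same matrix in one step.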

Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $\mathfrak{u}$, $\mathfrak{v}$, and $\mathfrak{w}$ denote bases of $V$. Then the three transformation matrices fulfill the relation

$$M^{\mathfrak{u}}_{\mathfrak{w}} = M^{\mathfrak{v}}_{\mathfrak{w}} \circ M^{\mathfrak{u}}_{\mathfrak{v}}.$$

In particular, we have

$$M^{\mathfrak{v}}_{\mathfrak{u}} = \left( M^{\mathfrak{u}}_{\mathfrak{v}} \right)^{-1}.$$

Proof
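Independently of the formal proof, the composition relation can be checked numerically for $V = \mathbb{R}^3$; the three bases below are random illustrative choices, and the helper `transformation_matrix` is our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three illustrative bases of R^3, given as the columns of (almost surely
# invertible) random matrices.
U = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3))
W = rng.standard_normal((3, 3))

def transformation_matrix(A, B):
    """Transformation matrix from basis A to basis B (columns = basis vectors).

    Coordinates s w.r.t. A describe the vector A @ s; its coordinates w.r.t.
    B solve B @ t = A @ s, so the matrix is B^{-1} A.
    """
    return np.linalg.solve(B, A)

M_u_v = transformation_matrix(U, V)
M_v_w = transformation_matrix(V, W)
M_u_w = transformation_matrix(U, W)

# M^u_w = M^v_w o M^u_v, and M^v_u is the inverse of M^u_v.
assert np.allclose(M_u_w, M_v_w @ M_u_v)
assert np.allclose(transformation_matrix(V, U), np.linalg.inv(M_u_v))
```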

  1. The commutativity of such a diagram of arrows and mappings means that all composed mappings coincide, as long as their domains and codomains coincide. In this case, it simply means that $\Psi_{\mathfrak{v}} = \Psi_{\mathfrak{w}} \circ M^{\mathfrak{v}}_{\mathfrak{w}}$ holds.