
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 9/refcontrol




Base change

We know, due to Theorem 8.4, that in a finite-dimensional vector space any two bases have the same length, that is, the same number of vectors. Every vector has, with respect to every basis, unique coordinates (the coefficient tuple). How do these coordinates behave when we change the bases? This is answered by the following statement.


Lemma 9.1

Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $\mathfrak{v} = v_1 , \ldots , v_n$ and $\mathfrak{w} = w_1 , \ldots , w_n$ denote bases of $V$. Suppose that
$$v_j = \sum_{i = 1}^n c_{ij} w_i$$
with coefficients $c_{ij} \in K$, which we collect into the $n \times n$-matrix
$$M^{\mathfrak{v}}_{\mathfrak{w}} = ( c_{ij} )_{ij} .$$
Then a vector $u$, which has the coordinates $( s_1 , \ldots , s_n )$ with respect to the basis $\mathfrak{v}$, has the coordinates
$$\begin{pmatrix} t_1 \\ \vdots \\ t_n \end{pmatrix} = M^{\mathfrak{v}}_{\mathfrak{w}} \begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix}$$
with respect to the basis $\mathfrak{w}$.

This follows directly from
$$u = \sum_{j = 1}^n s_j v_j = \sum_{j = 1}^n s_j \left( \sum_{i = 1}^n c_{ij} w_i \right) = \sum_{i = 1}^n \left( \sum_{j = 1}^n c_{ij} s_j \right) w_i$$
and the definition of matrix multiplication.


If, for a basis $\mathfrak{v} = v_1 , \ldots , v_n$, we consider the corresponding bijective mapping (see Remark 7.12)
$$\Psi_{\mathfrak{v}} \colon K^n \longrightarrow V , \quad e_i \longmapsto v_i ,$$
then we can express the preceding statement as saying that the triangle formed by the mappings $M^{\mathfrak{v}}_{\mathfrak{w}} \colon K^n \to K^n$, $\Psi_{\mathfrak{v}} \colon K^n \to V$, and $\Psi_{\mathfrak{w}} \colon K^n \to V$ commutes.[1]
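The base change can also be illustrated numerically. The following is a minimal sketch, assuming Python with NumPy; the concrete bases and the names `M_vw`, `s`, `t` are our own illustration, not part of the lecture. It obtains the transformation matrix from the defining relation (the $j$-th column of $M^{\mathfrak{v}}_{\mathfrak{w}}$ holds the $\mathfrak{w}$-coordinates of $v_j$) and checks the commuting triangle:

```python
import numpy as np

# Two hypothetical bases of R^2, written as columns.
v = np.array([[1.0, -2.0], [2.0, 3.0]])   # columns v_1, v_2
w = np.array([[1.0, 1.0], [0.0, 1.0]])    # columns w_1, w_2

# Transformation matrix M_vw of the base change from v to w:
# its j-th column holds the coordinates of v_j with respect to w,
# i.e. w @ M_vw == v, hence M_vw = w^{-1} v.
M_vw = np.linalg.solve(w, v)

s = np.array([4.0, -3.0])   # coordinates of a vector u w.r.t. v
t = M_vw @ s                # coordinates of the same vector u w.r.t. w

# Both coordinate tuples describe the same vector (Psi_w ∘ M_vw = Psi_v).
assert np.allclose(v @ s, w @ t)
```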

Let $K$ denote a field, and let $V$ denote a $K$-vector space of dimension $n$. Let $\mathfrak{v} = v_1 , \ldots , v_n$ and $\mathfrak{w} = w_1 , \ldots , w_n$ denote two bases of $V$. Let
$$v_j = \sum_{i = 1}^n c_{ij} w_i$$
with coefficients $c_{ij} \in K$. Then the $n \times n$-matrix
$$M^{\mathfrak{v}}_{\mathfrak{w}} = ( c_{ij} )_{ij}$$
is called the transformation matrix of the base change from $\mathfrak{v}$ to $\mathfrak{w}$.

The $j$-th column of a transformation matrix $M^{\mathfrak{v}}_{\mathfrak{w}}$ consists of the coordinates of $v_j$ with respect to the basis $\mathfrak{w}$. The vector $v_j$ has the coordinate tuple $e_j$ with respect to the basis $\mathfrak{v}$, and when we apply the matrix to $e_j$, we get the $j$-th column of the matrix, and this is just the coordinate tuple of $v_j$ with respect to the basis $\mathfrak{w}$.

For a one-dimensional space $V$ and $v = s w$ (with $s \neq 0$),
we have
$$M^{\mathfrak{v}}_{\mathfrak{w}} = ( s ) = \left( \frac{v}{w} \right) ,$$
where the fraction is well-defined. This might help in memorizing the order of the bases in this notation.

Another important relation is
$$( v_1 , \ldots , v_n ) = ( w_1 , \ldots , w_n ) \circ M^{\mathfrak{v}}_{\mathfrak{w}} .$$
Note that here, the matrix is not applied to an $n$-tuple of $K$, but to an $n$-tuple of $V$, yielding a new $n$-tuple of $V$. This equation might be an argument to define the transformation matrix the other way around; however, we consider the behavior in Lemma 9.1 as decisive.

In case $V = K^n$, if $\mathfrak{u}$ is the standard basis and $\mathfrak{v}$ some further basis, we obtain the transformation matrix $M^{\mathfrak{v}}_{\mathfrak{u}}$ of the base change from $\mathfrak{v}$ to $\mathfrak{u}$ by expressing each $v_j$ as a linear combination of the standard basis vectors $e_1 , \ldots , e_n$, and writing down the corresponding coefficient tuples as columns. The inverse transformation matrix, $M^{\mathfrak{u}}_{\mathfrak{v}} = ( M^{\mathfrak{v}}_{\mathfrak{u}} )^{-1}$, consists simply in the coordinates of the standard vectors with respect to $\mathfrak{v}$, written as columns.


We consider in $\mathbb{R}^2$ the standard basis
$$\mathfrak{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \, \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
and the basis
$$\mathfrak{v} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} , \, \begin{pmatrix} -2 \\ 3 \end{pmatrix} .$$
The basis vectors of $\mathfrak{v}$ can be expressed directly with the standard basis, namely
$$v_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = e_1 + 2 e_2 \quad \text{and} \quad v_2 = \begin{pmatrix} -2 \\ 3 \end{pmatrix} = -2 e_1 + 3 e_2 .$$
Therefore, we get immediately
$$M^{\mathfrak{v}}_{\mathfrak{u}} = \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} .$$
For example, the vector that has the coordinates $(4, -3)$ with respect to $\mathfrak{v}$ has the coordinates
$$M^{\mathfrak{v}}_{\mathfrak{u}} \begin{pmatrix} 4 \\ -3 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 4 \\ -3 \end{pmatrix} = \begin{pmatrix} 10 \\ -1 \end{pmatrix}$$
with respect to the standard basis $\mathfrak{u}$. The transformation matrix $M^{\mathfrak{u}}_{\mathfrak{v}}$ is more difficult to compute: we have to write the standard vectors as linear combinations of $v_1$ and $v_2$. A direct computation (solving two linear systems) yields
$$e_1 = \frac{3}{7} v_1 - \frac{2}{7} v_2$$
and
$$e_2 = \frac{2}{7} v_1 + \frac{1}{7} v_2 .$$
Hence,
$$M^{\mathfrak{u}}_{\mathfrak{v}} = \begin{pmatrix} \frac{3}{7} & \frac{2}{7} \\ -\frac{2}{7} & \frac{1}{7} \end{pmatrix} .$$
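The hand computation above can be double-checked numerically; a minimal sketch, assuming NumPy, with the matrix entries taken from this example:

```python
import numpy as np

M_vu = np.array([[1.0, -2.0],
                 [2.0, 3.0]])   # base change from v to the standard basis u

M_uv = np.linalg.inv(M_vu)      # base change from u to v
print(7 * M_uv)                 # [[3, 2], [-2, 1]], matching the hand computation

# The v-coordinates (4, -3) give the standard coordinates (10, -1):
print(M_vu @ np.array([4.0, -3.0]))
```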


Lemma 9.5

Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $\mathfrak{u}$, $\mathfrak{v}$, and $\mathfrak{w}$ denote bases of $V$. Then the three transformation matrices fulfill the relation
$$M^{\mathfrak{u}}_{\mathfrak{w}} = M^{\mathfrak{v}}_{\mathfrak{w}} \circ M^{\mathfrak{u}}_{\mathfrak{v}} .$$
In particular, we have
$$M^{\mathfrak{v}}_{\mathfrak{u}} = ( M^{\mathfrak{u}}_{\mathfrak{v}} )^{-1} .$$

Proof
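The relation can also be tested numerically for concrete bases. A small sketch, assuming NumPy; the helper `trans` and the randomly chosen bases are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three random bases of R^3, written as columns (almost surely invertible).
u = rng.random((3, 3)) + np.eye(3)
v = rng.random((3, 3)) + np.eye(3)
w = rng.random((3, 3)) + np.eye(3)

def trans(a, b):
    # Transformation matrix of the base change from a to b: b @ M == a.
    return np.linalg.solve(b, a)

# M^u_w = M^v_w ∘ M^u_v  and  M^v_u = (M^u_v)^{-1}
assert np.allclose(trans(u, w), trans(v, w) @ trans(u, v))
assert np.allclose(trans(v, u), np.linalg.inv(trans(u, v)))
```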



Sum of linear subspaces

For a $K$-vector space $V$ and a family of linear subspaces $U_1 , \ldots , U_m \subseteq V$, we define the sum of these linear subspaces by
$$U_1 + \cdots + U_m = \{ u_1 + \cdots + u_m \mid u_i \in U_i \} .$$
This sum is again a linear subspace. In case
$$V = U_1 + \cdots + U_m ,$$
we say that $V$ is the sum of the linear subspaces $U_1 , \ldots , U_m$. The following theorem describes an important relation between the dimension of the sum of two linear subspaces and the dimension of their intersection.


Theorem 9.7

Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let $U_1 , U_2 \subseteq V$ denote linear subspaces. Then
$$\dim ( U_1 ) + \dim ( U_2 ) = \dim ( U_1 \cap U_2 ) + \dim ( U_1 + U_2 ) .$$

Let $t_1 , \ldots , t_k$ be a basis of $U_1 \cap U_2$. On one hand, we can extend this basis, according to Theorem 8.10, to a basis $t_1 , \ldots , t_k , u_1 , \ldots , u_m$ of $U_1$; on the other hand, we can extend it to a basis $t_1 , \ldots , t_k , w_1 , \ldots , w_p$ of $U_2$. Then
$$t_1 , \ldots , t_k , u_1 , \ldots , u_m , w_1 , \ldots , w_p$$
is a generating system of $U_1 + U_2$. We claim that it is even a basis. To see this, let
$$\sum_{i = 1}^k a_i t_i + \sum_{j = 1}^m b_j u_j + \sum_{l = 1}^p c_l w_l = 0 .$$
This implies that the element
$$v = \sum_{i = 1}^k a_i t_i + \sum_{j = 1}^m b_j u_j = - \sum_{l = 1}^p c_l w_l$$
belongs to $U_1 \cap U_2$, as the middle expression lies in $U_1$ and the right-hand expression lies in $U_2$. Hence, we can write $v = \sum_{i = 1}^k d_i t_i$, which gives
$$\sum_{i = 1}^k d_i t_i + \sum_{l = 1}^p c_l w_l = 0 .$$
From this, since $t_1 , \ldots , t_k , w_1 , \ldots , w_p$ is a basis of $U_2$, we get directly $c_l = 0$ for all $l$, and $d_i = 0$ for all $i$. From the equation before, we can then infer that $\sum_{i = 1}^k a_i t_i + \sum_{j = 1}^m b_j u_j = 0$; since $t_1 , \ldots , t_k , u_1 , \ldots , u_m$ is a basis of $U_1$, also $a_i = 0$ and $b_j = 0$ holds for all $i$ and $j$. Hence, we have linear independence. This gives altogether
$$\dim ( U_1 ) + \dim ( U_2 ) = (k + m) + (k + p) = k + (k + m + p) = \dim ( U_1 \cap U_2 ) + \dim ( U_1 + U_2 ) .$$

The intersection of two planes (through the origin) in $\mathbb{R}^3$ is "usually" a line; it is the plane itself if the same plane is taken twice, but it is never just a point. This observation is generalized in the following statement.
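The observation can be reproduced with the dimension formula. A small numerical sketch, assuming NumPy; the concrete vectors are our own illustration, chosen as in the proof above (a shared vector extended separately to bases of two planes):

```python
import numpy as np

# Two planes in R^3: a shared vector t, extended differently.
t = np.array([1.0, 1.0, 0.0])
u = np.array([1.0, 0.0, 0.0])   # U1 = span(t, u), the plane z = 0
w = np.array([0.0, 0.0, 1.0])   # U2 = span(t, w), the plane x = y

U1 = np.column_stack([t, u])
U2 = np.column_stack([t, w])

dim_U1 = np.linalg.matrix_rank(U1)                           # 2
dim_U2 = np.linalg.matrix_rank(U2)                           # 2
dim_sum = np.linalg.matrix_rank(np.column_stack([U1, U2]))   # 3

# By the dimension formula, dim(U1 ∩ U2) = 2 + 2 - 3 = 1:
# the two planes meet in the line spanned by t.
print(dim_U1 + dim_U2 - dim_sum)
```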


Corollary 9.8

Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $U_1 , U_2 \subseteq V$ denote linear subspaces of dimensions $r = \dim ( U_1 )$ and $s = \dim ( U_2 )$. Then
$$\dim ( U_1 \cap U_2 ) \geq r + s - n .$$

Due to Theorem 9.7 and $\dim ( U_1 + U_2 ) \leq n$, we have
$$\dim ( U_1 \cap U_2 ) = \dim ( U_1 ) + \dim ( U_2 ) - \dim ( U_1 + U_2 ) \geq r + s - n .$$


Recall that, for a linear subspace $U \subseteq V$, the difference $\dim ( V ) - \dim ( U )$ is called the codimension of $U$ in $V$. With this concept, we can paraphrase the statement above by saying that the codimension of the intersection of two linear subspaces is at most the sum of their codimensions.


Let a homogeneous system of $m$ linear equations in $n$ variables be given. Then the dimension of the solution space of the system is at least $n - m$.

The solution space of one linear equation in $n$ variables has dimension $n - 1$ or $n$ (the latter if the equation is trivial). The solution space of the system is the intersection of the solution spaces of the individual equations. Therefore, the statement follows by applying Corollary 9.8 to the individual solution spaces.
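For a concrete system, the bound can be checked via the rank, since the solution space has dimension $n - \operatorname{rank}$. A minimal sketch, assuming NumPy; the matrix `A` is our own illustration:

```python
import numpy as np

# A homogeneous system with m = 2 equations in n = 4 variables.
A = np.array([[1.0, 2.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])

m, n = A.shape
# Dimension of the solution space = n - rank(A) >= n - m.
nullity = n - np.linalg.matrix_rank(A)
print(nullity, ">=", n - m)   # here: 2 >= 2
```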



Direct sum

Let $K$ denote a field, and let $V$ denote a $K$-vector space. Let $U_1 , \ldots , U_m$ be a family of linear subspaces of $V$. We say that $V$ is the direct sum of the $U_i$ if the following conditions are fulfilled.

  1. Every vector $v \in V$ has a representation
$$v = u_1 + u_2 + \cdots + u_m ,$$
    where $u_i \in U_i$.

  2. $U_i \cap \bigl( \sum_{j \neq i} U_j \bigr) = 0$ for all $i$.

If the sum of the $U_i$ is direct, then we also write $U_1 \oplus \cdots \oplus U_m$ instead of $U_1 + \cdots + U_m$. For two linear subspaces $U_1 , U_2 \subseteq V$, the second condition just means $U_1 \cap U_2 = 0$.


Let $V$ denote a finite-dimensional $K$-vector space together with a basis $v_1 , \ldots , v_n$. Let
$$\{ 1 , \ldots , n \} = I_1 \uplus \ldots \uplus I_k$$
be a partition of the index set. Let
$$U_j = \langle v_i , \, i \in I_j \rangle$$
be the linear subspaces generated by the subfamilies. Then
$$V = U_1 \oplus \cdots \oplus U_k .$$
The extreme case $I_j = \{ j \}$ yields the direct sum
$$V = K v_1 \oplus K v_2 \oplus \cdots \oplus K v_n$$
with one-dimensional linear subspaces.


Lemma 9.12

Let $V$ be a finite-dimensional $K$-vector space, and let $U \subseteq V$ be a linear subspace. Then there exists a linear subspace $W \subseteq V$ such that we have the direct sum decomposition
$$V = U \oplus W .$$

Let $u_1 , \ldots , u_k$ denote a basis of $U$. We can extend this basis, according to Theorem 8.10, to a basis $u_1 , \ldots , u_k , w_1 , \ldots , w_{n - k}$ of $V$. Then
$$W = \langle w_1 , \ldots , w_{n - k} \rangle$$
fulfills all the properties of a direct sum.


In the preceding statement, the linear subspace $W$ is called a direct complement for $U$ (in $V$). In general, there are many different direct complements.
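The proof of Lemma 9.12 is constructive and can be carried out numerically: extend a basis of $U$ by standard vectors, keeping exactly those that increase the rank. A sketch, assuming NumPy; the function `direct_complement` is our own illustration, not part of the lecture:

```python
import numpy as np

def direct_complement(U):
    """Extend a basis of U (given as the columns of U, assumed linearly
    independent) by standard vectors to a basis of R^n, as in the proof
    above; the added standard vectors span a direct complement W."""
    n = U.shape[0]
    cols = [U[:, j] for j in range(U.shape[1])]
    added = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        # Keep e_i exactly when it is not in the span of the vectors so far.
        if np.linalg.matrix_rank(np.column_stack(cols + [e])) > len(cols):
            cols.append(e)
            added.append(e)
    return np.column_stack(added)

U = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])          # a plane in R^3
W = direct_complement(U)
# V = U ⊕ W: together the columns span R^3, and the dimensions add up.
assert np.linalg.matrix_rank(np.column_stack([U, W])) == 3
```

Different orderings of the candidate vectors generally produce different complements, matching the remark that a direct complement is far from unique.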



Direct sum and product

Recall that, for a family $V_i$, $i \in I$, of sets, the product set $\prod_{i \in I} V_i$ is defined. If all $V_i$ are vector spaces over a field $K$, then this product is, using componentwise addition and scalar multiplication, again a $K$-vector space. It is called the direct product of the vector spaces. If it is always the same space $V$, then we also write $V^I$. This is just the mapping space $\operatorname{Map} (I, V)$.

Each vector space $V_j$ is a linear subspace inside the direct product, namely as the set of all tuples
$$( \ldots , 0 , v_j , 0 , \ldots )$$
that are $0$ at every place $i \neq j$. The set of all tuples that are different from $0$ at (at most) one place generates a linear subspace of the direct product. For $I$ infinite, this subspace is not the whole direct product.


Let $I$ denote a set, and let $K$ denote a field. Suppose that, for every $i \in I$, a $K$-vector space $V_i$ is given. Then the set
$$\bigoplus_{i \in I} V_i = \bigl\{ (v_i)_{i \in I} \mid v_i \in V_i , \ v_i \neq 0 \text{ for only finitely many } i \bigr\}$$
is called the direct sum of the $V_i$.

We have the linear subspace relation
$$\bigoplus_{i \in I} V_i \subseteq \prod_{i \in I} V_i .$$
If we always have the same vector space $V$, then we write $V^{(I)}$ for this direct sum. In particular,
$$K^{(I)} \subseteq K^I$$
is a linear subspace. For $I$ finite, there is no difference, but for an infinite index set, this inclusion is strict. For example, $\mathbb{R}^{\mathbb{N}}$ is the space of all real sequences, but $\mathbb{R}^{(\mathbb{N})}$ consists only of those sequences satisfying the property that only finitely many members are different from $0$. The polynomial ring $K[X]$ is the direct sum of the vector spaces $K X^n$, $n \in \mathbb{N}$. Every $K$-vector space $V$ with a basis $v_i$, $i \in I$, is "isomorphic" to the direct sum $K^{(I)}$.



Footnotes
  1. The commutativity of such a diagram of arrows and mappings means that all composed mappings coincide as long as their domain and codomain coincide. In this case, it simply means that $\Psi_{\mathfrak{v}} = \Psi_{\mathfrak{w}} \circ M^{\mathfrak{v}}_{\mathfrak{w}}$ holds.


<< | Linear algebra (Osnabrück 2024-2025)/Part I | >>
PDF-version of this lecture
Exercise sheet for this lecture (PDF)