
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 15

Linear subspaces and dual space

Linear subspaces of a $K$-vector space $V$ are in direct relation with linear subspaces of the dual space $V^*$.


For a linear subspace $U \subseteq V$ in a $K$-vector space $V$, we call

$U^{\perp} = \{ f \in V^* \mid f(u) = 0 \text{ for all } u \in U \}$

the orthogonal space of $U$.

These orthogonal spaces are again linear subspaces, see Exercise 15.4. Whether a linear form belongs to $U^{\perp}$ can be checked on a generating system of $U$, see Exercise 15.5. In the second semester, when we have inner products at hand, there will also be an orthogonal space for $U$ inside $V$ itself.
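Because membership of a linear form in $U^{\perp}$ only has to be checked on a generating system of $U$, the orthogonal space of a subspace of $K^n$ can be computed as a null space. The following sketch is not part of the lecture; it uses numpy/scipy and hypothetical generators, and identifies a linear form with its coefficient row vector.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical generators of a subspace U of R^4, written as the rows of A.
# A linear form f(x) = a_1 x_1 + ... + a_4 x_4 belongs to U^perp exactly when
# it vanishes on every generator, i.e. when A @ a = 0.
A = np.array([[1.0, 2.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])

# Coefficient vectors of a basis of U^perp: a basis of the null space of A.
perp_basis = null_space(A)               # shape (4, 4 - rank(A))

print(np.allclose(A @ perp_basis, 0))    # True: these forms vanish on U
print(perp_basis.shape[1])               # 2 = dim U^perp
```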


We consider the linear subspace

The orthogonal space of consists of all linear forms

with and . Because a linear form is described by a row matrix with respect to the standard basis, we are dealing with the solution set of the linear system

The solution space is


Let $V$ be a finite-dimensional $K$-vector space with a basis $v_1, \ldots, v_n$, and let $v_1^*, \ldots, v_n^*$ be the corresponding dual basis. Let

$U = \langle v_i ,\, i \in I \rangle$

for a subset $I \subseteq \{1, \ldots, n\}$. Then

$U^{\perp} = \langle v_j^* ,\, j \in \{1, \ldots, n\} \setminus I \rangle.$

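As a quick coordinate check of this lemma (an illustration, not part of the lecture), take $V = K^5$ with the standard basis and $I = \{1, 2, 4\}$; a basis of $U^{\perp}$ is then given by the dual basis vectors with indices $3$ and $5$. The indices in the code below are 0-based.

```python
import numpy as np
from scipy.linalg import null_space

n = 5
E = np.eye(n)
U_gens = E[[0, 1, 3]]                    # rows e_1, e_2, e_4 span U

perp = null_space(U_gens)                # coefficient vectors of a basis of U^perp
expected = E[[2, 4]].T                   # e_3^*, e_5^* as coefficient vectors

# Same span: stacking the two bases does not increase the rank.
print(np.linalg.matrix_rank(np.hstack([perp, expected])) == 2)   # True
```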

Let $V$ be a $K$-vector space, and let $F \subseteq V^*$ be a linear subspace of the dual space of $V$. Then

$F^{\perp} = \{ v \in V \mid f(v) = 0 \text{ for all } f \in F \}$

is called the orthogonal space of $F$.

Let a homogeneous linear system

$\sum_{j=1}^n a_{ij} x_j = 0, \quad i = 1, \ldots, m,$

over a field $K$ be given. We consider the $i$-th equation as a kernel condition for the linear form

$L_i \colon K^n \longrightarrow K, \quad (x_1, \ldots, x_n) \longmapsto \sum_{j=1}^n a_{ij} x_j.$

Let

$F = \langle L_1, \ldots, L_m \rangle \subseteq (K^n)^*$

denote the linear subspace of the dual space generated by these linear forms. Then $F^{\perp}$ is the solution space of this linear system.

In general, we have the relation

$\bigcap_{i=1}^m \ker L_i = \langle L_1, \ldots, L_m \rangle^{\perp} = F^{\perp}.$

In particular, the solution space of the system depends only on the linear subspace $F$ generated by the linear forms, not on the chosen equations.

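The following sketch (not from the lecture) checks this independence numerically for two hypothetical systems over the real numbers whose equations generate the same subspace of linear forms.

```python
import numpy as np
from scipy.linalg import null_space

# Two homogeneous systems whose rows span the same subspace of linear forms:
# the rows of B are linear combinations of the rows of A (and vice versa).
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])
B = np.array([[1.0, 3.0,  2.0],    # row1 + row2
              [2.0, 5.0,  1.0]])   # 2*row1 + row2

NA, NB = null_space(A), null_space(B)

# Each solution space is annihilated by the other system as well.
print(np.allclose(A @ NB, 0) and np.allclose(B @ NA, 0))   # True
```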

Let $V$ be a $K$-vector space with dual space $V^*$. Then the following statements hold.
  1. For linear subspaces $U_1 \subseteq U_2 \subseteq V$, we have $U_2^{\perp} \subseteq U_1^{\perp}$.
  2. For linear subspaces $F_1 \subseteq F_2 \subseteq V^*$, we have $F_2^{\perp} \subseteq F_1^{\perp}$.
  3. Let $V$ be finite-dimensional. Then, for linear subspaces $U \subseteq V$ and $F \subseteq V^*$,

    $(U^{\perp})^{\perp} = U$

    and

    $(F^{\perp})^{\perp} = F.$

  4. Let $V$ be finite-dimensional. Then, for linear subspaces $U \subseteq V$ and $F \subseteq V^*$,

    $\dim(U^{\perp}) = \dim(V) - \dim(U)$

    and

    $\dim(F^{\perp}) = \dim(V) - \dim(F).$

(1) and (2) are clear. (3). The inclusion

$U \subseteq (U^{\perp})^{\perp}$

is also clear. Let $v \in V$, $v \notin U$. Then we can choose a basis $u_1, \ldots, u_k$ of $U$ and extend $u_1, \ldots, u_k, v$ to a basis of $V$. The linear form $f$ that sends $v$ to $1$ and the other basis vectors to $0$ vanishes on $U$; therefore, it belongs to $U^{\perp}$. Because of

$f(v) = 1 \neq 0,$

we have $v \notin (U^{\perp})^{\perp}$.

The inclusion

$F \subseteq (F^{\perp})^{\perp}$

holds immediately. Let $g \in (F^{\perp})^{\perp}$, that is,

$g(v) = 0 \text{ for all } v \in F^{\perp}.$

Let $f_1, \ldots, f_m$ be a generating system of $F$. Due to exercise ***** we have that $g$ is a linear combination of the $f_i$; therefore, $g \in F$.

(4). We first prove the second part. Let $f_1, \ldots, f_m$ be a basis of $F$, and let

$\varphi \colon V \longrightarrow K^m, \quad v \longmapsto (f_1(v), \ldots, f_m(v)),$

denote the mapping where these linear forms are the components. Here, we have

$\ker \varphi = F^{\perp}.$

Assume that the mapping $\varphi$ is not surjective. Then the image $\varphi(V)$ is a strict linear subspace of $K^m$, and its dimension is at most $m-1$. Let $W \subseteq K^m$ be an $(m-1)$-dimensional linear subspace with

$\varphi(V) \subseteq W.$

Due to Lemma 14.6, there is a linear form $h \neq 0$ on $K^m$ whose kernel is exactly $W$. Write $h = \sum_{i=1}^m a_i p_i$, where $p_i$ is the $i$-th projection. Then

$0 = h \circ \varphi = \sum_{i=1}^m a_i \left( p_i \circ \varphi \right) = \sum_{i=1}^m a_i f_i,$

contradicting the linear independence of the $f_i$. Hence, $\varphi$ is surjective, and the statement follows from Theorem 11.5, since

$\dim(V) = \dim(\ker \varphi) + \dim(\operatorname{im} \varphi) = \dim(F^{\perp}) + m = \dim(F^{\perp}) + \dim(F).$

The first part follows by using part (3), which gives $(U^{\perp})^{\perp} = U$, and applying the second part to $F = U^{\perp}$.
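As a numerical illustration of the dimension formula $\dim(U^{\perp}) = \dim(V) - \dim(U)$ (a sketch, not part of the lecture), the following code draws random generators of subspaces of $\mathbb{R}^6$ and compares dimensions; the orthogonal space is computed as a null space as above.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n = 6                                        # V = R^6

for k in range(1, n):
    U_gens = rng.standard_normal((k, n))     # rows span a (generically) k-dimensional U
    dim_U = np.linalg.matrix_rank(U_gens)
    dim_U_perp = null_space(U_gens).shape[1]
    print(dim_U + dim_U_perp == n)           # True in every round
```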


Let $V$ be a finite-dimensional $K$-vector space and $U \subseteq V$ a linear subspace. Then the following hold.

a) There exist linear forms $L_1, \ldots, L_m$ on $V$ such that

$U = \ker L_1 \cap \ldots \cap \ker L_m.$


b) $U$ is the kernel of a linear mapping from $V$ to some $K^m$.


c) Every linear subspace $U \subseteq K^n$ is the solution space of a homogeneous

system of linear equations.

Proof
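As a computational illustration of part c) (a sketch with hypothetical spanning vectors, not a proof), a homogeneous system cutting out a given subspace $U \subseteq \mathbb{R}^4$ can be obtained by taking a basis of $U^{\perp}$ as the rows of the coefficient matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical subspace U of R^4, spanned by the rows of U_gens.
U_gens = np.array([[1.0, 0.0, 2.0, -1.0],
                   [0.0, 1.0, 1.0,  1.0]])

# Rows of A are coefficient vectors of a basis of U^perp; the homogeneous
# system A x = 0 then has exactly U as its solution space.
A = null_space(U_gens).T
solution_basis = null_space(A)

# Same subspace: joining the two spanning sets does not increase the rank.
joint = np.vstack([U_gens, solution_basis.T])
print(np.linalg.matrix_rank(joint) == np.linalg.matrix_rank(U_gens))   # True
```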




The dual mapping

Let $K$ denote a field, let $V$ and $W$ denote $K$-vector spaces, and let

$\varphi \colon V \longrightarrow W$

denote a $K$-linear mapping. Then the mapping

$\varphi^* \colon W^* \longrightarrow V^*, \quad f \longmapsto f \circ \varphi,$

is called the dual mapping of $\varphi$.

This assignment arises from just considering the composition

$V \stackrel{\varphi}{\longrightarrow} W \stackrel{f}{\longrightarrow} K.$

The dual mapping is a special case of the situation described in Lemma 13.8 (1). In particular, the dual mapping is again linear.
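In coordinates, precomposition with $\varphi$ is just multiplication of a row vector by the matrix of $\varphi$. A minimal sketch, not part of the lecture and using a hypothetical matrix:

```python
import numpy as np

# Hypothetical linear map phi: R^3 -> R^2 with matrix M (standard bases).
M = np.array([[1.0,  2.0, 0.0],
              [0.0, -1.0, 3.0]])
phi = lambda v: M @ v

# A linear form f on R^2, represented by a row vector.
f_row = np.array([4.0, 5.0])
f = lambda w: f_row @ w

# The dual mapping sends f to the linear form f o phi on R^3,
# whose row vector is f_row @ M.
dual_f = lambda v: f(phi(v))
v = np.array([1.0, 1.0, 2.0])
print(np.isclose(dual_f(v), (f_row @ M) @ v))   # True
```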


Let $U$, $V$, and $W$ denote vector spaces over a field $K$, and let

$\varphi \colon V \longrightarrow W$

and

$\psi \colon W \longrightarrow U$

be linear mappings. Then the following hold.
  1. For the dual mapping, we have $(\psi \circ \varphi)^* = \varphi^* \circ \psi^*$.
  2. For the identity $\operatorname{Id}_V$ on $V$, we have $(\operatorname{Id}_V)^* = \operatorname{Id}_{V^*}$.
  3. If $\varphi$ is surjective, then $\varphi^*$ is injective.
  4. If $\varphi$ is injective, then $\varphi^*$ is surjective.

(1). For $f \in U^*$, we have

$(\psi \circ \varphi)^*(f) = f \circ (\psi \circ \varphi) = (f \circ \psi) \circ \varphi = \varphi^* \left( \psi^*(f) \right) = \left( \varphi^* \circ \psi^* \right)(f).$

(2) follows directly from $(\operatorname{Id}_V)^*(f) = f \circ \operatorname{Id}_V = f$.

(3). Let $f \in W^*$ and

$\varphi^*(f) = f \circ \varphi = 0.$

Because of the surjectivity of $\varphi$, there exists for every $w \in W$ a $v \in V$ such that $\varphi(v) = w$. Therefore,

$f(w) = f(\varphi(v)) = (f \circ \varphi)(v) = 0,$

and $f$ is itself the zero mapping. Due to Lemma 11.4, $\varphi^*$ is injective.

(4). The condition means that we may consider $V \subseteq W$ as a linear subspace. Because of Lemma 9.12, we can write

$W = V \oplus V'$

with another $K$-linear subspace $V' \subseteq W$. A linear form

$f \colon V \longrightarrow K$

can always be extended to a linear form

$\tilde{f} \colon W \longrightarrow K,$

for example, by defining $\tilde{f}$ on $V'$ to be the zero form. Since $\varphi^*(\tilde{f}) = \tilde{f} \circ \varphi = f$, this establishes the surjectivity of $\varphi^*$.


Let $K$ be a field, and let $V$ and $W$ be vector spaces over $K$, where $V$ is finite-dimensional. Let

$\varphi \colon V \longrightarrow W$

denote a linear mapping. Then there exist vectors $w_1, \ldots, w_n \in W$ and linear forms $f_1, \ldots, f_n$ on $V$ such that[1]

$\varphi = \sum_{i=1}^n f_i \cdot w_i$

holds.

Let $v_1, \ldots, v_n$ be a basis of $V$ and $v_1^*, \ldots, v_n^*$ the corresponding dual basis. We set

$w_i := \varphi(v_i).$

Then, for every vector $v \in V$, we have

$\left( \sum_{i=1}^n v_i^* \cdot w_i \right)(v) = \sum_{i=1}^n v_i^*(v)\, w_i = \sum_{i=1}^n v_i^*(v)\, \varphi(v_i) = \varphi \left( \sum_{i=1}^n v_i^*(v)\, v_i \right) = \varphi(v),$

where the last equation rests on Lemma 14.13.
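In matrix terms (a sketch, not part of the lecture), with the standard bases of $\mathbb{R}^3$ and $\mathbb{R}^2$ and a hypothetical matrix $M$, the forms $v_i^*$ are the coordinate functions and $w_i$ is the $i$-th column of $M$, so $M$ decomposes into rank-one pieces:

```python
import numpy as np

# Hypothetical matrix of phi w.r.t. the standard bases.
M = np.array([[1.0,  2.0, 0.0],
              [0.0, -1.0, 3.0]])

# Rank-one maps v_i^* . w_i : their matrices are the outer products w_i e_i^T.
rank_one_parts = [np.outer(M[:, i], np.eye(3)[i]) for i in range(3)]
print(np.allclose(sum(rank_one_parts), M))   # True: phi = sum_i v_i^* . w_i

# Pointwise version: phi(v) = sum_i v_i^*(v) w_i.
v = np.array([2.0, -1.0, 4.0])
print(np.allclose(sum(v[i] * M[:, i] for i in range(3)), M @ v))   # True
```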



Let $K$ be a field, let $V$ denote an $n$-dimensional $K$-vector space with a basis $v_1, \ldots, v_n$, and let $W$ be an $m$-dimensional vector space with a basis $w_1, \ldots, w_m$. Let $v_1^*, \ldots, v_n^*$ and $w_1^*, \ldots, w_m^*$ be the corresponding dual bases. Let

$\varphi \colon V \longrightarrow W$

be a linear mapping, and suppose that it is described by the $m \times n$-matrix $M = (a_{ij})_{ij}$ with respect to the given bases. Then the dual mapping

$\varphi^* \colon W^* \longrightarrow V^*$

is described by the transposed matrix $M^{\text{tr}}$ with respect to the dual bases of

$W^*$ and $V^*$.

The claim means the equality[2]

$\varphi^*(w_j^*) = \sum_{i=1}^n a_{ji} v_i^*$

in $V^*$. This can be checked on the basis $v_k$, $k = 1, \ldots, n$. On one hand, we have

$\left( \varphi^*(w_j^*) \right)(v_k) = \left( w_j^* \circ \varphi \right)(v_k) = w_j^* \left( \sum_{i=1}^m a_{ik} w_i \right) = a_{jk};$

on the other hand, we have

$\left( \sum_{i=1}^n a_{ji} v_i^* \right)(v_k) = a_{jk}.$

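A quick numerical check (not part of the lecture, with a hypothetical matrix): collecting the row vectors of the forms $w_j^* \circ \varphi$ as columns of the matrix of $\varphi^*$ yields exactly the transpose.

```python
import numpy as np

# Hypothetical matrix of phi w.r.t. the standard bases of R^3 and R^2.
M = np.array([[1.0,  2.0, 0.0],
              [0.0, -1.0, 3.0]])

# w_j^* picks the j-th coordinate, so w_j^* o phi has row vector M[j, :].
# Writing these coordinate vectors (w.r.t. v_1^*, ..., v_n^*) as columns
# gives the matrix of the dual mapping.
dual_matrix = np.column_stack([M[j, :] for j in range(M.shape[0])])
print(np.array_equal(dual_matrix, M.T))   # True
```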



The bidual

Let $K$ be a field, and let $V$ be a $K$-vector space. Then the dual space of the dual space $V^*$, that is,

$V^{**} := (V^*)^*,$

is called the bidual of $V$.

Let $K$ be a field, and let $V$ be a $K$-vector space. Then there exists a natural injective linear mapping

$\Psi \colon V \longrightarrow V^{**}, \quad v \longmapsto (f \mapsto f(v)).$

If $V$ has finite dimension, then $\Psi$ is an isomorphism.

Let $v \in V$ be fixed. First of all, we show that $\Psi(v)$ is a linear form on the dual space $V^*$. Obviously, $\Psi(v) \colon f \mapsto f(v)$ is a mapping from $V^*$ to $K$. The additivity follows from

$\left( \Psi(v) \right)(f + g) = (f + g)(v) = f(v) + g(v) = \left( \Psi(v) \right)(f) + \left( \Psi(v) \right)(g),$

where we have used the definition of the addition on the dual space. The compatibility with the scalar multiplication follows similarly from

$\left( \Psi(v) \right)(\lambda f) = (\lambda f)(v) = \lambda f(v) = \lambda \left( \Psi(v) \right)(f).$

In order to prove the additivity of $\Psi$, let $u, v \in V$ be given. We have to show the equality

$\Psi(u + v) = \Psi(u) + \Psi(v).$

This is an equality inside of $V^{**}$; in particular, it is an equality of mappings. So let $f \in V^*$ be given. Then, the additivity follows from

$\left( \Psi(u + v) \right)(f) = f(u + v) = f(u) + f(v) = \left( \Psi(u) \right)(f) + \left( \Psi(v) \right)(f).$

The scalar compatibility follows from

$\left( \Psi(\lambda v) \right)(f) = f(\lambda v) = \lambda f(v) = \lambda \left( \Psi(v) \right)(f).$

In order to prove injectivity, let $v \in V$ with $\Psi(v) = 0$ be given. This means that for all linear forms $f \in V^*$, we have $f(v) = 0$. But then, due to Fact *****, we have

$v = 0.$

By the criterion for injectivity, $\Psi$ is injective.

In the finite-dimensional case, the bijectivity follows from injectivity and from Corollary 13.12.


Thus, the mapping $\Psi$ sends a vector $v \in V$ to the evaluation (or evaluation mapping) $f \mapsto f(v)$, which evaluates a linear form $f$ at the point $v$.
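In coordinates ($V = \mathbb{R}^n$ with the standard basis), a sketch of this evaluation map, again not part of the lecture: applying $\Psi(v)$ to the dual basis forms recovers the coordinates of $v$.

```python
import numpy as np

n = 4
v = np.array([3.0, -1.0, 0.5, 2.0])

# Psi(v) sends a linear form (given by its row vector) to its value at v.
Psi_v = lambda f_row: f_row @ v

dual_basis = np.eye(n)        # rows are the coordinate forms e_1^*, ..., e_n^*
print(np.allclose([Psi_v(f) for f in dual_basis], v))   # True
```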



Footnotes
  1. Here, $f_i \cdot w_i$ is to be understood in the sense of Remark 14.4.
  2. In $W$, the relation $\varphi(v_j) = \sum_{i=1}^m a_{ij} w_i$ holds. Note that here, the running index is the first index; in the equation claimed, the running index is the second index, corresponding to transposing.

