
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 14




Linear forms

Let $K$ be a field and let $V$ be a $K$-vector space. A linear mapping
\[ \varphi \colon V \longrightarrow K \]
is called a linear form on $V$.

A linear form on $K^n$ is of the form
\[ (x_1 , \ldots , x_n) \longmapsto a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \]
for a tuple $(a_1 , \ldots , a_n) \in K^n$. The projections
\[ p_i \colon K^n \longrightarrow K , \, (x_1 , \ldots , x_n) \longmapsto x_i , \]
are the easiest linear forms.

The zero mapping $V \rightarrow K$, $v \mapsto 0$, is also a linear form, called the zero form.

We have encountered many linear forms already, for example, the price function for a purchase of several products, or the vitamin content of fruit salads made from various fruits. With respect to a basis of $V$ and a basis of $K$ (where a basis of $K$ is just an element of $K$ different from $0$), the describing matrix of a linear form is simply a row with $n$ entries.
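A price function is a linear form in the following sense; the three products and the prices below are chosen here purely for illustration. If one kilogram of the first, second, and third product costs $2$, $5$, and $3$, respectively, then a purchase of $x_1$, $x_2$, $x_3$ kilograms has the total price given by the linear form
\[ \varphi \colon \mathbb{R}^3 \longrightarrow \mathbb{R} , \, (x_1 , x_2 , x_3) \longmapsto 2 x_1 + 5 x_2 + 3 x_3 , \]
and the describing matrix of this linear form with respect to the standard bases is the single row $\begin{pmatrix} 2 & 5 & 3 \end{pmatrix}$.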


Many important examples of linear forms on vector spaces of infinite dimension arise in analysis. For a real interval $[a, b] \subseteq \mathbb{R}$, the set of all functions $f \colon [a, b] \rightarrow \mathbb{R}$, or the set of continuous functions on $[a, b]$, or the set of continuously differentiable functions on $[a, b]$, form real vector spaces. For a point $P \in [a, b]$, the evaluation
\[ f \longmapsto f(P) \]
is a linear form (because addition and scalar multiplication are defined pointwise on these spaces). Also, the evaluation of the derivative at $P$,
\[ f \longmapsto f'(P) , \]
is a linear form (on the space of continuously differentiable functions). The integral, that is, the mapping
\[ f \longmapsto \int_a^b f(t) \, dt , \]
is a linear form. This rests on the linearity of the integral.
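That the point evaluation is linear follows directly from the pointwise definitions; writing $\delta_P$ for the evaluation at $P$ (a notation introduced here only for this check), we have
\[ \delta_P(f + g) = (f + g)(P) = f(P) + g(P) = \delta_P(f) + \delta_P(g) \quad \text{and} \quad \delta_P(s f) = (s f)(P) = s \, f(P) = s \, \delta_P(f) . \]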


Let $K$ be a field, and let $V$ and $W$ denote vector spaces over $K$. For a linear form
\[ f \colon V \longrightarrow K \]
and a vector $w \in W$, the mapping
\[ V \longrightarrow W , \, v \longmapsto f(v) \, w , \]
is linear. It is just the composition
\[ V \stackrel{f}{\longrightarrow} K \stackrel{\cdot w}{\longrightarrow} W , \]
where $\cdot w$ denotes the mapping $K \rightarrow W , \, s \mapsto s w$.

The kernel of the zero form is the total space; for any other linear form on a vector space $V$ of dimension $n$, the dimension of the kernel is $n - 1$. This follows from the dimension formula. With the exception of the zero form, a linear form is always surjective.
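A minimal version of the dimension-formula argument, for a linear form $f \colon V \rightarrow K$ different from the zero form on an $n$-dimensional space $V$: the image of $f$ is a linear subspace of $K$ containing a nonzero element, hence it equals $K$, and therefore
\[ n = \dim_K(V) = \dim_K(\ker f) + \dim_K(\operatorname{Im} f) = \dim_K(\ker f) + 1 . \]
This gives $\dim_K(\ker f) = n - 1$ and, at the same time, the surjectivity.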


Let $V$ be an $n$-dimensional $K$-vector space, and let $U \subseteq V$ denote an $(n-1)$-dimensional linear subspace. Then there exists a linear form $f \colon V \rightarrow K$ such that
\[ U = \ker f . \]

Proof
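A minimal sketch of one standard argument: choose a basis $u_1 , \ldots , u_{n-1}$ of $U$ and extend it to a basis $u_1 , \ldots , u_{n-1} , u_n$ of $V$. Since a linear mapping may be prescribed arbitrarily on a basis, there is a linear form $f \colon V \rightarrow K$ with
\[ f(u_1) = \cdots = f(u_{n-1}) = 0 \quad \text{and} \quad f(u_n) = 1 . \]
A vector $v = \sum_{i = 1}^n s_i u_i$ then satisfies $f(v) = s_n$, so $v \in \ker f$ if and only if $s_n = 0$, that is, if and only if $v \in U$.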



Let $V$ denote a $K$-vector space and let $v \in V$ be a vector different from $0$. Then there exists a linear form $f \colon V \rightarrow K$ such that
\[ f(v) \neq 0 . \]

The one-dimensional $K$-linear subspace $Kv \subseteq V$ has a direct complement, that is,
\[ V = Kv \oplus U \]
with some linear subspace $U \subseteq V$. The projection onto $Kv \cong K$ determined by this decomposition is a linear form that sends $v$ to $1 \neq 0$.



Let $K$ be a field, and let $V$ be a $K$-vector space. Let $v_1 , \ldots , v_n \in V$ be vectors. Suppose that for every $i$, there exists a linear form
\[ f_i \colon V \longrightarrow K \]
such that
\[ f_i(v_i) \neq 0 \quad \text{and} \quad f_i(v_j) = 0 \text{ for all } j \neq i . \]
Then the $v_1 , \ldots , v_n$ are linearly independent.

Proof
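A minimal sketch of the standard argument: suppose that
\[ \sum_{j = 1}^n s_j v_j = 0 \]
with $s_j \in K$. Applying the linear form $f_i$ to this equation gives
\[ 0 = f_i \Bigl( \sum_{j = 1}^n s_j v_j \Bigr) = \sum_{j = 1}^n s_j f_i(v_j) = s_i f_i(v_i) . \]
Since $f_i(v_i) \neq 0$, it follows that $s_i = 0$, and this holds for every $i$.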




The dual space

Let $K$ be a field and let $V$ denote a $K$-vector space. Then the space of homomorphisms
\[ V^{*} := \operatorname{Hom}_K(V, K) \]
is called the dual space of $V$.

Addition and scalar multiplication are defined as in the general case of a space of homomorphisms, thus $(f + g)(v) := f(v) + g(v)$ and $(s f)(v) := s \cdot f(v)$. For a finite-dimensional $V$, we obtain, due to Corollary 13.12, that the dimension of the dual space equals the dimension of $V$.


Let $V$ denote a finite-dimensional $K$-vector space, endowed with a basis $v_1 , \ldots , v_n$. Then the linear forms
\[ v_1^{*} , \ldots , v_n^{*} , \]
defined by[1]
\[ v_i^{*}(v_j) = \delta_{ij} = \begin{cases} 1 , & \text{if } i = j , \\ 0 , & \text{if } i \neq j , \end{cases} \]
are called the dual basis of the given basis.


Because of Theorem 10.10, this rule indeed defines a linear form. The linear form $v_i^{*}$ assigns to an arbitrary vector the $i$-th coordinate of that vector with respect to the given basis. Note that for $v = s_1 v_1 + \cdots + s_n v_n$, we have
\[ v_i^{*}(v) = v_i^{*}( s_1 v_1 + \cdots + s_n v_n ) = s_i . \]
It is important to stress that $v_i^{*}$ does not only depend on the vector $v_i$, but on the whole basis. There does not exist something like a "dual vector" for a single vector. This looks different in the situation where an inner product is given on $V$.
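A small example for this dependence on the basis, with vectors chosen here only for illustration: in $K^2$, the bases $e_1 , e_2$ and $u_1 = e_1 , \, u_2 = e_1 + e_2$ both have $e_1$ as their first vector, but the corresponding first dual linear forms differ, namely
\[ e_1^{*}(x_1 , x_2) = x_1 , \qquad \text{whereas} \qquad u_1^{*}(x_1 , x_2) = x_1 - x_2 , \]
because $(x_1 , x_2) = (x_1 - x_2) u_1 + x_2 u_2$.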


For the standard basis $e_1 , \ldots , e_n$ of $K^n$, the dual basis consists of the projections onto the components, that is, we have $e_i^{*} = p_i$, where
\[ p_i \colon K^n \longrightarrow K , \, (x_1 , \ldots , x_n) \longmapsto x_i . \]
This basis is called the standard dual basis.


Let $V$ be a finite-dimensional $K$-vector space, endowed with a basis $v_1 , \ldots , v_n$. Then the dual basis
\[ v_1^{*} , \ldots , v_n^{*} \]
is a basis of the dual space $V^{*}$.

Suppose that
\[ \sum_{i = 1}^n s_i v_i^{*} = 0 , \]
where $s_i \in K$. If we apply this linear form to $v_j$, we get directly
\[ 0 = \Bigl( \sum_{i = 1}^n s_i v_i^{*} \Bigr) (v_j) = s_j . \]
Therefore, the $v_i^{*}$ are linearly independent. Due to Corollary 13.12, the dual space has dimension $n$; thus, we have a basis already.



Let $V$ be a finite-dimensional $K$-vector space, endowed with a basis $v_1 , \ldots , v_n$, and the corresponding dual basis
\[ v_1^{*} , \ldots , v_n^{*} . \]
Then, for every vector $v \in V$, the equality
\[ v = \sum_{i = 1}^n v_i^{*}(v) \, v_i \]
holds. The linear forms $v_i^{*}$ yield the scalars (coordinates) of a vector with respect to a basis.

The vector $v$ has a unique representation
\[ v = \sum_{i = 1}^n s_i v_i \]
with $s_i \in K$. Because of $v_i^{*}(v) = s_i$, the right hand side of the claimed equality is therefore
\[ \sum_{i = 1}^n v_i^{*}(v) \, v_i = \sum_{i = 1}^n s_i v_i = v . \]
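A concrete illustration of this formula, with the basis chosen here only for illustration: in $K^2$, take $u_1 = (1,0)$ and $u_2 = (1,1)$ with the dual basis given by $u_1^{*}(x_1 , x_2) = x_1 - x_2$ and $u_2^{*}(x_1 , x_2) = x_2$. Then indeed
\[ (x_1 , x_2) = u_1^{*}(x_1 , x_2) \, u_1 + u_2^{*}(x_1 , x_2) \, u_2 = (x_1 - x_2) \cdot (1,0) + x_2 \cdot (1,1) . \]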



Let $V$ be a finite-dimensional $K$-vector space. Let $v_1 , \ldots , v_n$ be a basis of $V$ with the dual basis $v_1^{*} , \ldots , v_n^{*}$, and let $u_1 , \ldots , u_n$ be another basis with the dual basis $u_1^{*} , \ldots , u_n^{*}$, and with
\[ u_j = \sum_{i = 1}^n a_{ij} v_i . \]
Then
\[ u_i^{*} = \sum_{k = 1}^n b_{ki} v_k^{*} , \]
where $B = ( b_{ki} )_{ki} = \bigl( A^{-1} \bigr)^{\operatorname{tr}}$ is the transposed matrix of the inverse matrix of
\[ A = ( a_{ij} )_{ij} . \]

We have
\[ \Bigl( \sum_{k = 1}^n b_{ki} v_k^{*} \Bigr) (u_j) = \sum_{k = 1}^n b_{ki} v_k^{*} \Bigl( \sum_{l = 1}^n a_{lj} v_l \Bigr) = \sum_{k = 1}^n b_{ki} a_{kj} . \]
Here, we have the "product" of the $i$-th column of $B = \bigl( A^{-1} \bigr)^{\operatorname{tr}}$ and the $j$-th column of $A$, which is also the product of the $i$-th row of $A^{-1}$ and the $j$-th column of $A$. For $i = j$, this is $1$, and for $i \neq j$, this is $0$. Therefore, the given linear form coincides with $u_i^{*}$.


With the help of transformation matrices, this can be expressed as follows: the matrix expressing the dual basis $u_1^{*} , \ldots , u_n^{*}$ in terms of $v_1^{*} , \ldots , v_n^{*}$ is the transpose of the inverse of the matrix expressing $u_1 , \ldots , u_n$ in terms of $v_1 , \ldots , v_n$.


We consider with the standard basis , its dual basis , and the basis consisting of and . We want to express the dual basis and as linear combinations of the standard dual basis, that is, we want to determine the coefficients and (and and ) in

(and in ). Here, and . In order to compute this, we have to express and as a linear combination of and . This is

and

Therefore, we have

and

Hence,

With similar computations we get

The transformation matrix from to is thus

The transposed matrix of this is

The inverse task, to express the standard dual basis with and , is easier to solve, because we can read off directly the representations of the with respect to the standard basis. We have

and

as becomes clear by evaluating both sides.
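Because the concrete numbers depend on the chosen basis, here is a self-contained computation of the same kind for a basis chosen purely for illustration (not necessarily the basis used in the example above), namely $u_1 = e_1 + 2 e_2$ and $u_2 = e_1 + 3 e_2$ in $K^2$. Here,
\[ A = \begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix} , \qquad A^{-1} = \begin{pmatrix} 3 & -1 \\ -2 & 1 \end{pmatrix} , \qquad \bigl( A^{-1} \bigr)^{\operatorname{tr}} = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix} , \]
so, by the theorem above, the columns of $\bigl( A^{-1} \bigr)^{\operatorname{tr}}$ give
\[ u_1^{*} = 3 e_1^{*} - e_2^{*} \quad \text{and} \quad u_2^{*} = -2 e_1^{*} + e_2^{*} . \]
This can be checked directly, for instance $u_1^{*}(u_1) = 3 \cdot 1 - 1 \cdot 2 = 1$ and $u_1^{*}(u_2) = 3 \cdot 1 - 1 \cdot 3 = 0$.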



The trace

Let $K$ be a field and let $M = ( a_{ij} )_{ij}$ be an $n \times n$-matrix over $K$. Then
\[ \operatorname{Tr}(M) := \sum_{i = 1}^n a_{ii} \]
is called the trace of $M$.

Let $K$ be a field, and let $V$ denote a finite-dimensional $K$-vector space. Let $\varphi \colon V \rightarrow V$ be a linear mapping, which is described by the matrix $M$ with respect to a basis. Then $\operatorname{Tr}(M)$ is called the trace of $\varphi$, written as $\operatorname{Tr}(\varphi)$.

Because of Exercise 14.15, this is independent of the basis chosen. The trace is a linear form on the vector space of all $n \times n$-matrices, and on the vector space of all endomorphisms of $V$.
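A minimal check of these two statements, for $n \times n$-matrices $M = ( a_{ij} )_{ij}$ and $N = ( b_{ij} )_{ij}$ and a scalar $s \in K$: the linearity follows from
\[ \operatorname{Tr}(M + N) = \sum_{i = 1}^n ( a_{ii} + b_{ii} ) = \operatorname{Tr}(M) + \operatorname{Tr}(N) \quad \text{and} \quad \operatorname{Tr}(s M) = \sum_{i = 1}^n s a_{ii} = s \operatorname{Tr}(M) , \]
and the independence of the basis rests on the identity $\operatorname{Tr}(MN) = \operatorname{Tr}(NM)$: if $C M C^{-1}$ describes the same endomorphism with respect to another basis (with an invertible transformation matrix $C$), then
\[ \operatorname{Tr}\bigl( C M C^{-1} \bigr) = \operatorname{Tr}\bigl( C^{-1} ( C M ) \bigr) = \operatorname{Tr}(M) . \]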



Footnotes
  1. This symbol is called the Kronecker delta.


PDF-version of this lecture
Exercise sheet for this lecture (PDF)