Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 10/refcontrol

Linear mappings

We are interested in mappings between two vector spaces that respect the structures, that is, they are compatible with addition and with scalar multiplication.


Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. A mapping

$$\varphi \colon V \longrightarrow W$$

is called a linear mapping if the following two properties are fulfilled.

  1. $\varphi(u + v) = \varphi(u) + \varphi(v)$ for all $u, v \in V$.
  2. $\varphi(s v) = s \varphi(v)$ for all $s \in K$ and $v \in V$.


Here, the first property is called additivity and the second property is called compatibility with scaling. When we want to stress the base field, we say $K$-linear. The identity $\operatorname{Id}_V \colon V \rightarrow V$, the null mapping $V \rightarrow W$, $v \mapsto 0$, and the inclusion $U \subseteq V$ of a linear subspace are the simplest examples of a linear mapping. For a linear mapping, the compatibility with arbitrary linear combinations holds, that is,

$$\varphi\Big( \sum_{i=1}^n s_i v_i \Big) = \sum_{i=1}^n s_i \varphi(v_i),$$

see Exercise 10.2. Instead of linear mapping, we also say homomorphism.
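
To check the two properties in a concrete case (the mapping and its coefficients are chosen only for illustration), consider

$$\varphi \colon \mathbb{R}^2 \longrightarrow \mathbb{R}, \, (x, y) \longmapsto 3x - 5y.$$

Here,

$$\varphi\big( (x_1, y_1) + (x_2, y_2) \big) = 3(x_1 + x_2) - 5(y_1 + y_2) = (3x_1 - 5y_1) + (3x_2 - 5y_2) = \varphi(x_1, y_1) + \varphi(x_2, y_2)$$

and

$$\varphi\big( s (x, y) \big) = 3 s x - 5 s y = s (3x - 5y) = s \varphi(x, y),$$

so both properties are fulfilled.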

The graph of a linear mapping from $\mathbb{R}$ to $\mathbb{R}$; the mapping is determined by the proportionality factor $a$ alone.

The easiest linear mappings are (beside the null mapping) the linear maps from $K$ to $K$. Such a linear mapping

$$\varphi \colon K \longrightarrow K$$

is determined (by Theorem 10.10, but this is also directly clear) by $\varphi(1)$, or by the value $\varphi(x)$ for a single element $x \in K$, $x \neq 0$. In particular, $\varphi(x) = a x$ with a uniquely determined $a \in K$. In the context of physics, for $K = \mathbb{R}$, if there is a linear relation between two measurable quantities, we talk about proportionality, and $a$ is called the proportionality factor. In school, such a linear relation occurs as the "rule of three".

Many important functions, in particular from $\mathbb{R}$ to $\mathbb{R}$, are not linear. For example, the squaring $x \mapsto x^2$, the square root, the trigonometric functions, the exponential function, and the logarithm are not linear. But also for such more complicated functions there exist, in the framework of differential calculus, linear approximations, which help to understand these functions.
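
For instance, for the squaring $x \mapsto x^2$, additivity already fails for $x = y = 1$:

$$(1 + 1)^2 = 4 \neq 2 = 1^2 + 1^2.$$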


Let $K$ denote a field, and let $K^n$ be the $n$-dimensional standard space. Then the $i$-th projection, that is, the mapping

$$K^n \longrightarrow K, \, (x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) \longmapsto x_i,$$

is a $K$-linear mapping. This follows immediately from the componentwise addition and scalar multiplication on the standard space. The $i$-th projection is also called the $i$-th coordinate function.
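
For example (with $n = 3$ and $i = 2$, chosen only for illustration), the second projection $K^3 \longrightarrow K$, $(x_1, x_2, x_3) \longmapsto x_2$, sends the sum $(x_1, x_2, x_3) + (y_1, y_2, y_3)$ to $x_2 + y_2$ and the scalar multiple $s (x_1, x_2, x_3)$ to $s x_2$, as required.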

If you buy ten times this stuff, you have to pay ten times as much. In the linear world, there is no rebate.

In a shop, there are $n$ different products to buy, and the price of the $i$-th product (with respect to a certain unit) is $a_i$. A purchase is described by the $n$-tuple

$$(x_1, x_2, \ldots, x_n),$$

where $x_i$ is the amount of the $i$-th product bought. The price for the purchase is hence $\sum_{i=1}^n a_i x_i$. The price mapping

$$\mathbb{R}^n \longrightarrow \mathbb{R}, \, (x_1, x_2, \ldots, x_n) \longmapsto \sum_{i=1}^n a_i x_i,$$

is linear. This means, for example, that if we first do the purchase $x = (x_1, x_2, \ldots, x_n)$ and then, a week later, the purchase $y = (y_1, y_2, \ldots, y_n)$, then the price of the two purchases together is the same as the price of the combined purchase $x + y$.
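
As a numerical illustration (prices and amounts are made up for the example), suppose $n = 3$ and $(a_1, a_2, a_3) = (2, 5, 3)$. The purchase $x = (1, 0, 2)$ costs $2 \cdot 1 + 5 \cdot 0 + 3 \cdot 2 = 8$, the purchase $y = (3, 1, 0)$ costs $2 \cdot 3 + 5 \cdot 1 + 3 \cdot 0 = 11$, and the combined purchase $x + y = (4, 1, 2)$ costs

$$2 \cdot 4 + 5 \cdot 1 + 3 \cdot 2 = 19 = 8 + 11.$$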


The mapping

$$K^n \longrightarrow K^m, \, x \longmapsto M x,$$

which is given (see Example 2.8) by an $m \times n$-matrix $M$, is linear.
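
For example (matrix and vector chosen only for illustration), the $2 \times 3$-matrix $M = \begin{pmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \end{pmatrix}$ defines a linear mapping $K^3 \longrightarrow K^2$ with

$$\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \longmapsto \begin{pmatrix} 1 \cdot 1 + 2 \cdot 1 + 0 \cdot 2 \\ 3 \cdot 1 + (-1) \cdot 1 + 4 \cdot 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 10 \end{pmatrix}.$$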


Let $V$ be a vector space over a field $K$. For $s \in K$, the linear mapping

$$V \longrightarrow V, \, v \longmapsto s v,$$

is called the homothety (or dilation) with scaling factor $s$.

For a homothety, the domain space and the target space are the same. The number $s$ is called the scaling factor. For $s = 1$, we get the identity; for $s = -1$, we get the point reflection at the origin.
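
For instance, in $\mathbb{R}^2$, the homothety with scaling factor $s = 2$ is $(x, y) \mapsto (2x, 2y)$, which stretches every vector to twice its length, while the homothety with $s = -1$ is the point reflection $(x, y) \mapsto (-x, -y)$.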


Let $C^0(\mathbb{R}, \mathbb{R})$ denote the space of continuous functions from $\mathbb{R}$ to $\mathbb{R}$, and let $C^1(\mathbb{R}, \mathbb{R})$ denote the space of continuously differentiable functions. Then the mapping

$$C^1(\mathbb{R}, \mathbb{R}) \longrightarrow C^0(\mathbb{R}, \mathbb{R}), \, f \longmapsto f',$$

which assigns to a function its derivative, is linear. In analysis, we prove that

$$(f + g)' = f' + g' \quad \text{and} \quad (s f)' = s f'$$

hold for $s \in \mathbb{R}$, a function $f \in C^1(\mathbb{R}, \mathbb{R})$, and another function $g \in C^1(\mathbb{R}, \mathbb{R})$.
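
For example, for $f(x) = x^2$ and $g(x) = \sin x$ (functions chosen only for illustration),

$$(2 f + 3 g)'(x) = 2 \cdot 2x + 3 \cos x = 2 f'(x) + 3 g'(x).$$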


Let $K$ denote a field, and let $U$, $V$, and $W$ denote vector spaces over $K$. Suppose that

$$\varphi \colon U \longrightarrow V \quad \text{and} \quad \psi \colon V \longrightarrow W$$

are linear mappings. Then also the composition

$$\psi \circ \varphi \colon U \longrightarrow W$$

is a linear mapping.

Proof


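As a concrete instance of this lemma (the mappings are chosen only for illustration), the linear mappings

$$\varphi \colon \mathbb{R} \longrightarrow \mathbb{R}^2, \, x \longmapsto (x, 2x), \quad \text{and} \quad \psi \colon \mathbb{R}^2 \longrightarrow \mathbb{R}, \, (u, v) \longmapsto u + v,$$

have the composition $\psi \circ \varphi \colon \mathbb{R} \longrightarrow \mathbb{R}$, $x \longmapsto x + 2x = 3x$, which is again linear.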

Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. Let

$$\varphi \colon V \longrightarrow W$$

be a bijective linear map. Then also the inverse mapping

$$\varphi^{-1} \colon W \longrightarrow V$$

is linear.

Proof



Determination on a basis

Behind the following statement (the determination theorem) lies the important principle that, in linear algebra (of finite-dimensional vector spaces), the objects are determined by finitely many data.


Theorem 10.10

Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. Let $v_i$, $i \in I$, denote a basis of $V$, and let $w_i$, $i \in I$, denote elements in $W$. Then there exists a unique linear mapping

$$\varphi \colon V \longrightarrow W$$

with

$$\varphi(v_i) = w_i \text{ for all } i \in I.$$
Since we want $\varphi(v_i) = w_i$, and since a linear mapping respects all linear combinations, that is,[1]

$$\varphi\Big( \sum_{i \in I} s_i v_i \Big) = \sum_{i \in I} s_i \varphi(v_i)$$

holds, and since every vector $v \in V$ is such a linear combination, there can exist at most one such linear mapping.
We define now a mapping

$$\varphi \colon V \longrightarrow W$$

in the following way: we write every vector $v \in V$ in terms of the given basis as

$$v = \sum_{i \in I} s_i v_i$$

(where $s_i = 0$ for almost all $i \in I$) and define

$$\varphi(v) := \sum_{i \in I} s_i w_i.$$

Since the representation of $v$ as such a linear combination is unique, this mapping is well-defined. Also, $\varphi(v_i) = w_i$ is clear.
Linearity: for two vectors $u = \sum_{i \in I} s_i v_i$ and $v = \sum_{i \in I} t_i v_i$, we have

$$\varphi(u + v) = \varphi\Big( \sum_{i \in I} (s_i + t_i) v_i \Big) = \sum_{i \in I} (s_i + t_i) w_i = \sum_{i \in I} s_i w_i + \sum_{i \in I} t_i w_i = \varphi(u) + \varphi(v).$$

The compatibility with scalar multiplication is shown in a similar way, see Exercise 10.21.


In particular, a linear mapping $\varphi \colon K^n \longrightarrow W$ is uniquely determined by the images $\varphi(e_1), \ldots, \varphi(e_n)$ of the standard vectors.
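
As a small application of this (the prescribed values are chosen only for illustration), prescribing $e_1 \mapsto (1, 1)$ and $e_2 \mapsto (0, 2)$ determines a unique linear mapping $\varphi \colon \mathbb{R}^2 \longrightarrow \mathbb{R}^2$; for instance,

$$\varphi(3, 4) = 3 \varphi(e_1) + 4 \varphi(e_2) = 3 \cdot (1, 1) + 4 \cdot (0, 2) = (3, 11).$$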


Example 10.11

In many situations, a certain object (like a cube) in space $\mathbb{R}^3$ shall be drawn in the plane $\mathbb{R}^2$. One possibility is to work with a projection. This is a linear mapping

$$\pi \colon \mathbb{R}^3 \longrightarrow \mathbb{R}^2,$$

which is given (with respect to the standard bases of $\mathbb{R}^3$ and of $\mathbb{R}^2$) by

$$e_1 \longmapsto \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad e_2 \longmapsto \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad e_3 \longmapsto \begin{pmatrix} a \\ b \end{pmatrix},$$

where the coefficients $a$ and $b$ are usually chosen in a suitable range. Linearity has the effect that parallel lines are mapped to parallel lines (unless they are mapped to a point). The point $(x, y, z)$ is mapped to $(x + a z, y + b z)$. The image of the object under such a linear mapping is called a projection image.
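
For illustration (the coefficients here are made up), with $a = b = \tfrac{1}{2}$, the cube vertex $(1, 1, 1)$ is mapped to $\big( 1 + \tfrac{1}{2}, \, 1 + \tfrac{1}{2} \big) = \big( \tfrac{3}{2}, \tfrac{3}{2} \big)$, while the vertex $(1, 1, 0)$ stays at $(1, 1)$.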



Linear mappings and matrices
The effect of several linear mappings from $\mathbb{R}^2$ to itself, represented on a brain cell.

A linear mapping

$$\varphi \colon K^n \longrightarrow K^m$$

is determined uniquely by the images $\varphi(e_j)$, $j = 1, \ldots, n$, of the standard vectors, and every $\varphi(e_j)$ is a linear combination

$$\varphi(e_j) = \sum_{i=1}^m a_{ij} e_i,$$

and hence determined by the elements $a_{ij}$. Altogether, this means that such a linear mapping is given by the $m n$ elements $a_{ij}$, $1 \leq i \leq m$, $1 \leq j \leq n$. Such a set of data can be written as a matrix. Due to Theorem 10.10, this observation holds for all finite-dimensional vector spaces, as long as bases are fixed on the source space and on the target space of the linear mapping.


Let $K$ denote a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v} = v_1, \ldots, v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w} = w_1, \ldots, w_m$.

For a linear mapping

$$\varphi \colon V \longrightarrow W,$$

the matrix

$$M = M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) = (a_{ij})_{ij},$$

where $a_{ij}$ is the $i$-th coordinate of $\varphi(v_j)$ with respect to the basis $\mathfrak{w}$, is called the describing matrix for $\varphi$ with respect to the bases.

For a matrix $M = (a_{ij})_{ij} \in \operatorname{Mat}_{m \times n}(K)$, the linear mapping $\varphi^{\mathfrak{v}}_{\mathfrak{w}}(M)$ determined by

$$v_j \longmapsto \sum_{i=1}^m a_{ij} w_i$$

in the sense of Theorem 10.10 is called the linear mapping determined by the matrix $M$.

For a linear mapping $\varphi \colon K^n \longrightarrow K^m$, we always assume that everything is with respect to the standard bases, unless otherwise stated. For a linear mapping $\varphi \colon V \longrightarrow V$ from a vector space into itself (what is called an endomorphism), one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix, with respect to every basis.

If $V = W$, then we are usually interested in the describing matrix with respect to one and the same basis $\mathfrak{v}$ of $V$.
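
As a concrete example (the mapping is chosen only for illustration), let $\varphi \colon \mathbb{R}^2 \longrightarrow \mathbb{R}^3$ be the linear mapping with $\varphi(e_1) = (1, 0, 2)$ and $\varphi(e_2) = (3, 1, 1)$. With respect to the standard bases, the describing matrix has these image vectors as its columns,

$$M = \begin{pmatrix} 1 & 3 \\ 0 & 1 \\ 2 & 1 \end{pmatrix}.$$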


Example 10.13

Let $V$ denote a vector space with bases $\mathfrak{v} = v_1, \ldots, v_n$ and $\mathfrak{u} = u_1, \ldots, u_n$. If we consider the identity

$$\operatorname{Id} \colon V \longrightarrow V$$

with respect to the basis $\mathfrak{v}$ on the source and the basis $\mathfrak{u}$ on the target, we get, because of

$$\operatorname{Id}(v_j) = v_j = \sum_{i=1}^n c_{ij} u_i,$$

directly

$$M^{\mathfrak{v}}_{\mathfrak{u}}(\operatorname{Id}) = (c_{ij})_{ij}.$$

This means that the describing matrix of the identical linear mapping is the transformation matrix for the base change from $\mathfrak{v}$ to $\mathfrak{u}$.
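
For instance (the bases are chosen only for illustration), take $V = \mathbb{R}^2$ with the standard basis $\mathfrak{u} = e_1, e_2$ and the basis $\mathfrak{v} = v_1, v_2$ given by $v_1 = (1, 1)$ and $v_2 = (0, 1)$. Because of $v_1 = 1 \cdot e_1 + 1 \cdot e_2$ and $v_2 = 0 \cdot e_1 + 1 \cdot e_2$, the describing matrix of the identity with respect to $\mathfrak{v}$ on the source and $\mathfrak{u}$ on the target is

$$M^{\mathfrak{v}}_{\mathfrak{u}}(\operatorname{Id}) = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$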


Lemma 10.14

Let $K$ denote a field, and let $V$ denote an $n$-dimensional vector space with a basis $\mathfrak{v} = v_1, \ldots, v_n$. Let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w} = w_1, \ldots, w_m$, and let

$$\Psi_{\mathfrak{v}} \colon K^n \longrightarrow V, \, e_i \longmapsto v_i,$$

and

$$\Psi_{\mathfrak{w}} \colon K^m \longrightarrow W, \, e_i \longmapsto w_i,$$

be the corresponding mappings. Let

$$\varphi \colon V \longrightarrow W$$

denote a linear mapping with describing matrix $M = M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi)$. Then

$$\varphi \circ \Psi_{\mathfrak{v}} = \Psi_{\mathfrak{w}} \circ M$$

holds, that is, the diagram

$$\begin{array}{ccc} K^n & \stackrel{M}{\longrightarrow} & K^m \\ \Psi_{\mathfrak{v}} \downarrow & & \downarrow \Psi_{\mathfrak{w}} \\ V & \stackrel{\varphi}{\longrightarrow} & W \end{array}$$

commutes. For a vector $v \in V$, we can compute $\varphi(v)$ by determining the coefficient tuple of $v$ with respect to the basis $\mathfrak{v}$, applying the matrix $M$, and determining for the resulting $m$-tuple the corresponding vector with respect to $\mathfrak{w}$.

Proof


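As a worked instance of Lemma 10.14 (the data are made up for illustration), let $V = \mathbb{R}^2$ with the basis $v_1 = (1, 1)$, $v_2 = (1, -1)$, let $W = \mathbb{R}$ with the basis $w_1 = 1$, and let $\varphi(x, y) = x + y$. Then $\varphi(v_1) = 2 = 2 w_1$ and $\varphi(v_2) = 0$, so $M = \begin{pmatrix} 2 & 0 \end{pmatrix}$. For $v = (3, 1) = 2 v_1 + 1 \cdot v_2$, the coefficient tuple is $(2, 1)$, the matrix yields

$$\begin{pmatrix} 2 & 0 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = 4,$$

and indeed $4 \cdot w_1 = 4 = \varphi(3, 1)$.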

Theorem 10.15

Let $K$ be a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v} = v_1, \ldots, v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w} = w_1, \ldots, w_m$. Then the mappings

$$\varphi \longmapsto M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \quad \text{and} \quad M \longmapsto \varphi^{\mathfrak{v}}_{\mathfrak{w}}(M),$$

defined in the definition above, are inverse to each other.

We show that both compositions are the identity. We start with a matrix $M = (a_{ij})_{ij}$ and consider the matrix

$$M^{\mathfrak{v}}_{\mathfrak{w}}\big( \varphi^{\mathfrak{v}}_{\mathfrak{w}}(M) \big).$$

Two matrices are equal when their entries coincide for every index pair $(i, j)$. We have

$$\big( M^{\mathfrak{v}}_{\mathfrak{w}}\big( \varphi^{\mathfrak{v}}_{\mathfrak{w}}(M) \big) \big)_{ij} = i\text{-th coordinate of } \varphi^{\mathfrak{v}}_{\mathfrak{w}}(M)(v_j) = i\text{-th coordinate of } \sum_{k=1}^m a_{kj} w_k = a_{ij}.$$

Now, let $\varphi$ be a linear mapping; we consider

$$\varphi^{\mathfrak{v}}_{\mathfrak{w}}\big( M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \big).$$

Two linear mappings coincide, due to Theorem 10.10, when they have the same values on the basis $\mathfrak{v}$. We have

$$\varphi^{\mathfrak{v}}_{\mathfrak{w}}\big( M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \big)(v_j) = \sum_{i=1}^m \big( M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \big)_{ij} w_i.$$

Due to the definition, the coefficient $\big( M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \big)_{ij}$ is the $i$-th coordinate of $\varphi(v_j)$ with respect to the basis $\mathfrak{w}$. Hence, this sum equals $\varphi(v_j)$.


We denote the set of all linear mappings from $V$ to $W$ by $\operatorname{Hom}_K(V, W)$. Theorem 10.15 means that the mapping

$$\operatorname{Hom}_K(V, W) \longrightarrow \operatorname{Mat}_{m \times n}(K), \, \varphi \longmapsto M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi),$$

is bijective, with the given inverse mapping. A linear mapping

$$\varphi \colon V \longrightarrow V$$

is called an endomorphism. The set of all endomorphisms on $V$ is denoted by $\operatorname{End}_K(V)$.



Isomorphic vector spaces

Let $K$ denote a field, and let $V$ and $W$ denote $K$-vector spaces. A bijective linear mapping

$$\varphi \colon V \longrightarrow W$$

is called an isomorphism.

An isomorphism from $V$ to $V$ is called an automorphism.


Let $K$ denote a field. Two $K$-vector spaces $V$ and $W$ are called isomorphic if there exists an isomorphism

$$\varphi \colon V \longrightarrow W$$

from $V$ to $W$.

Let $K$ denote a field, and let $V$ and $W$ denote finite-dimensional $K$-vector spaces. Then $V$ and $W$ are isomorphic to each other if and only if their dimensions coincide. In particular, an $n$-dimensional $K$-vector space is isomorphic to the standard space $K^n$.

Proof


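For example, the space $\operatorname{Mat}_{2 \times 2}(\mathbb{R})$ of real $2 \times 2$-matrices is four-dimensional and is therefore isomorphic to $\mathbb{R}^4$; an isomorphism is given (for instance) by

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \longmapsto (a, b, c, d).$$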

An isomorphism between an $n$-dimensional vector space $V$ and the standard space $K^n$ is essentially equivalent to the choice of a basis of $V$. For a basis

$$\mathfrak{v} = v_1, \ldots, v_n,$$

we associate the linear mapping

$$\Psi_{\mathfrak{v}} \colon K^n \longrightarrow V, \, e_i \longmapsto v_i,$$

which maps from the standard space to the vector space by sending the $i$-th standard vector $e_i$ to the $i$-th basis vector $v_i$ of the given basis. This defines a unique linear mapping due to Theorem 10.10. Due to Exercise 10.23, this mapping is bijective. It is just the mapping

$$(s_1, \ldots, s_n) \longmapsto \sum_{i=1}^n s_i v_i.$$

The inverse mapping

$$\Psi_{\mathfrak{v}}^{-1} \colon V \longrightarrow K^n$$

is also linear, and it is called the coordinate mapping for this basis. The $i$-th component of this map, that is, the composed mapping

$$V \longrightarrow K^n \longrightarrow K,$$

is called the $i$-th coordinate function. It is denoted by $v_i^*$. It assigns to a vector $v \in V$ with the unique representation

$$v = s_1 v_1 + s_2 v_2 + \cdots + s_n v_n$$

the coordinate $s_i$. Note that the linear mapping $v_i^*$ depends on the basis, not only on the vector $v_i$.
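
As a small example (the basis is chosen only for illustration), take $V = \mathbb{R}^2$ with the basis $v_1 = (1, 1)$, $v_2 = (0, 1)$. Because of

$$(3, 5) = 3 v_1 + 2 v_2,$$

the coordinate mapping sends $(3, 5)$ to $(3, 2)$, and the coordinate functions take the values $v_1^*(3, 5) = 3$ and $v_2^*(3, 5) = 2$.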

If an isomorphism

$$\varphi \colon K^n \longrightarrow V$$

is given, then the images

$$v_i := \varphi(e_i), \quad i = 1, \ldots, n,$$

form a basis of $V$.



Footnotes
  1. If $I$ is an infinite index set, then, in all sums considered here, only finitely many coefficients are not $0$.

