
Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 24




Base change

We know, due to Theorem 23.15, that in a finite-dimensional vector space any two bases have the same length, that is, the same number of vectors. Every vector has, with respect to every basis, unique coordinates (the coefficient tuple). How do these coordinates behave when we change the bases? This is answered by the following statement.


Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let $\mathfrak{v} = v_1 , \ldots , v_n$ and $\mathfrak{w} = w_1 , \ldots , w_n$ denote bases of $V$. Suppose that

$$v_j = \sum_{i = 1}^n c_{ij} w_i$$

with coefficients $c_{ij} \in K$, which we collect into the $n \times n$-matrix

$$M^{\mathfrak{v}}_{\mathfrak{w}} = (c_{ij})_{ij} .$$

Then a vector $u$, which has the coordinates $(s_1 , \ldots , s_n)$ with respect to the basis $\mathfrak{v}$, has the coordinates

$$\begin{pmatrix} t_1 \\ \vdots \\ t_n \end{pmatrix} = M^{\mathfrak{v}}_{\mathfrak{w}} \begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix}$$

with respect to the basis $\mathfrak{w}$.

This follows directly from

$$u = \sum_{j = 1}^n s_j v_j = \sum_{j = 1}^n s_j \left( \sum_{i = 1}^n c_{ij} w_i \right) = \sum_{i = 1}^n \left( \sum_{j = 1}^n c_{ij} s_j \right) w_i$$

and the definition of matrix multiplication.


The matrix $M^{\mathfrak{v}}_{\mathfrak{w}}$, which describes the base change from $\mathfrak{v}$ to $\mathfrak{w}$, is called the transformation matrix. In the $j$-th column of the transformation matrix, there are the coordinates of $v_j$ with respect to the basis $\mathfrak{w}$. When we denote, for a vector $u \in V$ and a basis $\mathfrak{v}$, the corresponding coordinate tuple by $\Psi_{\mathfrak{v}}(u)$, then the transformation can be quickly written as

$$\Psi_{\mathfrak{w}}(u) = M^{\mathfrak{v}}_{\mathfrak{w}} \, \Psi_{\mathfrak{v}}(u) .$$
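As a small computational illustration (not part of the lecture text), the following Python sketch applies a transformation matrix to a coordinate tuple; the matrix entries and the coordinates are arbitrary illustrative values.

    import numpy as np

    # Hypothetical transformation matrix: its columns are the coordinates of the
    # old basis vectors v_j expressed in the new basis w (values chosen arbitrarily).
    M = np.array([[1.0, -2.0],
                  [2.0,  3.0]])

    # Coordinates of a vector u with respect to the old basis v (chosen arbitrarily).
    s = np.array([4.0, -3.0])

    # Coordinates of the same vector u with respect to the new basis w.
    t = M @ s
    print(t)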


We consider in $\mathbb{R}^2$ the standard basis $e_1 , e_2$,

and the basis

The basis vectors of the second basis can be expressed directly in the standard basis, namely

Therefore, we get immediately

For example, the vector that has the coordinates with respect to , has the coordinates

with respect to the standard basis. The transformation matrix in the opposite direction is more difficult to compute. We have to write the standard vectors $e_1$ and $e_2$ as linear combinations of the two vectors of the second basis. A direct computation (solving two linear systems) yields

and

Hence,
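As a computational aside, such a reverse transformation matrix can also be obtained by inverting the forward one: the columns of the forward matrix are the second basis vectors in standard coordinates, and inverting it solves both linear systems at once. The following Python sketch demonstrates this with arbitrary illustrative basis vectors, not necessarily those of the example above.

    import numpy as np

    # Illustrative basis vectors v1, v2 of R^2, written in standard coordinates.
    v1 = np.array([1.0, 2.0])
    v2 = np.array([-2.0, 3.0])

    # Forward transformation matrix: its columns are v1 and v2.
    M_v_to_u = np.column_stack([v1, v2])

    # The reverse matrix expresses e1 and e2 in terms of v1 and v2;
    # it is exactly the inverse of the forward matrix.
    M_u_to_v = np.linalg.inv(M_v_to_u)
    print(M_u_to_v)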



Linear mappings

Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. A mapping

$$\varphi \colon V \longrightarrow W$$

is called a linear mapping if the following two properties are fulfilled.

  1. $\varphi(u + v) = \varphi(u) + \varphi(v)$ for all $u, v \in V$.
  2. $\varphi(s v) = s \varphi(v)$ for all $s \in K$ and $v \in V$.


Here, the first property is called additivity and the second property is called compatibility with scaling. When we want to stress the base field, then we say $K$-linear. The identity $\operatorname{Id}_V \colon V \rightarrow V$, the null mapping $V \rightarrow W$, $v \mapsto 0$, and the inclusion $U \subseteq V$ of a linear subspace are the simplest examples of a linear mapping. For a linear mapping, the compatibility with arbitrary linear combinations holds, that is,

$$\varphi \left( \sum_{i = 1}^n s_i v_i \right) = \sum_{i = 1}^n s_i \varphi(v_i) ,$$

see exercise *****.
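These two defining properties can be checked numerically for a map given by a matrix. The following Python sketch (an illustration only, with arbitrary values) verifies additivity and compatibility with scaling up to floating-point rounding.

    import numpy as np

    # A linear map from R^3 to R^2, given by an arbitrary matrix.
    A = np.array([[1.0,  0.0, 2.0],
                  [3.0, -1.0, 1.0]])
    phi = lambda x: A @ x

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.5, 2.0])
    s = 4.0

    # Additivity and compatibility with scaling.
    print(np.allclose(phi(u + v), phi(u) + phi(v)))   # True
    print(np.allclose(phi(s * u), s * phi(u)))        # True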


Let $K$ denote a field, and let $K^n$ be the $n$-dimensional standard space. Then the $i$-th projection, this is the mapping

$$K^n \longrightarrow K , \, (x_1 , \ldots , x_{i-1} , x_i , x_{i+1} , \ldots , x_n) \longmapsto x_i ,$$

is a $K$-linear mapping. This follows immediately from componentwise addition and scalar multiplication on the standard space. The $i$-th projection is also called the $i$-th coordinate function.
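Described by a matrix, the $i$-th projection is just a $1 \times n$ row with a single $1$ in position $i$. The following Python sketch (illustrative, with an arbitrarily chosen index and vector) builds this matrix and applies it.

    import numpy as np

    def projection(i, n):
        # Matrix of the i-th coordinate projection from K^n to K (0-based index i).
        P = np.zeros((1, n))
        P[0, i] = 1.0
        return P

    x = np.array([5.0, 7.0, 9.0])
    P = projection(1, 3)      # the second coordinate function on R^3
    print(P @ x)              # [7.]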


Let $K$ denote a field, and let $U, V, W$ denote vector spaces over $K$. Suppose that

$$\varphi \colon U \longrightarrow V \text{ and } \psi \colon V \longrightarrow W$$

are linear mappings. Then also the composition

$$\psi \circ \varphi \colon U \longrightarrow W$$

is a linear mapping.

Proof
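As a side remark (not the proof of the lemma): for maps between standard spaces that are given by matrices, applying the two maps one after the other agrees with applying the single map given by the matrix product. The following Python sketch checks this for arbitrary illustrative matrices.

    import numpy as np

    # psi: R^3 -> R^2 and phi: R^4 -> R^3, given by arbitrary matrices.
    B = np.array([[1.0, 0.0,  2.0],
                  [0.0, 1.0, -1.0]])
    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [2.0, 0.0, 1.0, 1.0]])

    x = np.array([1.0, -1.0, 2.0, 0.5])

    # Applying phi, then psi, agrees with the map described by the product B @ A.
    print(np.allclose(B @ (A @ x), (B @ A) @ x))   # True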



Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. Let

$$\varphi \colon V \longrightarrow W$$

be a bijective linear map. Then also the inverse mapping

$$\varphi^{-1} \colon W \longrightarrow V$$

is linear.

Proof



Determination on a basis

Behind the following statement (the determination theorem), there is the important principle that in linear algebra (of finite-dimensional vector spaces), the objects are determined by finitely many pieces of data.


Let $K$ be a field, and let $V$ and $W$ be $K$-vector spaces. Let $v_i$, $i \in I$, denote a basis of $V$, and let $w_i$, $i \in I$, denote elements in $W$. Then there exists a unique linear mapping

$$f \colon V \longrightarrow W$$

with

$$f(v_i) = w_i \text{ for all } i \in I .$$
Proof

This proof was not presented in the lecture.
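To make the determination theorem concrete for standard spaces, the following Python sketch constructs the unique linear map sending a chosen basis $v_1, v_2$ of $\mathbb{R}^2$ to prescribed images $w_1, w_2$ in $\mathbb{R}^3$; all vectors are arbitrary illustrative choices. Its standard matrix is $W V^{-1}$, since applying $V^{-1}$ recovers the coefficients with respect to $v_1, v_2$, and $W$ then maps these coefficients onto $w_1, w_2$.

    import numpy as np

    # Basis v1, v2 of R^2 (as columns) and prescribed images w1, w2 in R^3 (as columns).
    V = np.column_stack([np.array([1.0, 1.0]), np.array([1.0, -1.0])])
    W = np.column_stack([np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])])

    # Standard matrix of the unique linear map with v_i -> w_i.
    A = W @ np.linalg.inv(V)

    print(np.allclose(A @ V[:, 0], W[:, 0]))   # v1 is sent to w1
    print(np.allclose(A @ V[:, 1], W[:, 1]))   # v2 is sent to w2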


The graph of a linear mapping from $\mathbb{R}$ to $\mathbb{R}$; the mapping is determined by the proportionality factor alone.

The easiest linear mappings are (beside the null mapping) the linear maps from $K$ to $K$. Such a linear mapping

$$\varphi \colon K \longrightarrow K$$

is determined (by Theorem 24.7, but this is also directly clear) by $\varphi(1)$, or by the value $\varphi(x)$ for a single element $x \in K$, $x \neq 0$. In particular, $\varphi(x) = s x$, with a uniquely determined $s \in K$. In the context of physics, for $K = \mathbb{R}$, and if there is a linear relation between two measurable quantities, we talk about proportionality, and $s$ is called the proportionality factor. In school, such a linear relation occurs as the "rule of three".



Linear mappings and matrices
The effect of several linear mappings from $\mathbb{R}^2$ to itself, represented on a brain cell.

Due to Theorem 24.7, a linear mapping

$$\varphi \colon K^n \longrightarrow K^m$$

is determined by the images $\varphi(e_j)$, $j = 1 , \ldots , n$, of the standard vectors. Every $\varphi(e_j)$ is a linear combination

$$\varphi(e_j) = \sum_{i = 1}^m a_{ij} e_i ,$$

and therefore the linear mapping is determined by the elements $a_{ij}$. So, such a linear map is determined by the $mn$ elements $a_{ij}$, $1 \leq i \leq m$, $1 \leq j \leq n$, from the field. We can write such a data set as a matrix. Because of the determination theorem, this holds for linear maps in general, as soon as bases are fixed in both vector spaces.


Let $K$ denote a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v} = v_1 , \ldots , v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w} = w_1 , \ldots , w_m$.

For a linear mapping

$$\varphi \colon V \longrightarrow W ,$$

the matrix

$$M = M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) = (a_{ij})_{ij} ,$$

where $a_{ij}$ is the $i$-th coordinate of $\varphi(v_j)$ with respect to the basis $\mathfrak{w}$, is called the describing matrix for $\varphi$ with respect to the bases.

For a matrix $M = (a_{ij})_{ij} \in \operatorname{Mat}_{m \times n}(K)$, the linear mapping $\varphi^{\mathfrak{v}}_{\mathfrak{w}}(M)$ determined by

$$v_j \longmapsto \sum_{i = 1}^m a_{ij} w_i ,$$

in the sense of Theorem 24.7,

is called the linear mapping determined by the matrix $M$.

For a linear mapping $\varphi \colon K^n \rightarrow K^m$, we always assume that everything is with respect to the standard bases, unless otherwise stated. For a linear mapping $\varphi \colon V \rightarrow V$ from a vector space to itself (what is called an endomorphism), one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix, with respect to every basis.
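The describing matrix can be computed column by column, following the definition: the $j$-th column consists of the coordinates of $\varphi(v_j)$ with respect to the basis $\mathfrak{w}$. The Python sketch below does this for a map on $\mathbb{R}^2$ given by its standard matrix and for arbitrarily chosen bases; all concrete values are illustrative.

    import numpy as np

    # A linear map on R^2, given by its standard matrix (illustrative values).
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # Chosen bases: the columns of V are v1, v2; the columns of W are w1, w2.
    V = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    W = np.array([[1.0, 0.0],
                  [1.0, 1.0]])

    # Describing matrix: column j solves W x = A v_j, i.e. it holds the
    # coordinates of the image of v_j with respect to the basis w.
    M = np.linalg.solve(W, A @ V)
    print(M)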


Let $K$ be a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v} = v_1 , \ldots , v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w} = w_1 , \ldots , w_m$. Then the mappings

$$\varphi \longmapsto M^{\mathfrak{v}}_{\mathfrak{w}}(\varphi) \text{ and } M \longmapsto \varphi^{\mathfrak{v}}_{\mathfrak{w}}(M) ,$$

defined in the preceding definition, are inverse

to each other.

Proof

This proof was not presented in the lecture.



A linear mapping

$$\varphi \colon K^n \longrightarrow K^m$$

is usually described by the matrix $M \in \operatorname{Mat}_{m \times n}(K)$ with respect to the standard bases on the left and on the right. The result of the matrix multiplication

$$M \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$

can be interpreted directly as a point in $K^m$. The $j$-th column of $M$ is the image of the $j$-th standard vector $e_j$.
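The last observation is easy to verify numerically: multiplying a matrix by a standard vector simply extracts the corresponding column. The following Python sketch checks this for an arbitrary matrix.

    import numpy as np

    M = np.array([[1.0, 4.0, 0.0],
                  [2.0, 5.0, 1.0]])      # an arbitrary 2 x 3 matrix

    e2 = np.array([0.0, 1.0, 0.0])       # the second standard vector of R^3
    print(np.allclose(M @ e2, M[:, 1]))  # the image of e2 is the second column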



Rotations

A rotation of the real plane $\mathbb{R}^2$ around the origin, by the angle $\alpha$ counterclockwise, maps $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ to $\begin{pmatrix} \cos \alpha \\ \sin \alpha \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ to $\begin{pmatrix} - \sin \alpha \\ \cos \alpha \end{pmatrix}$. Therefore, plane rotations are described in the following way.


A linear mapping

$$D(\alpha) \colon \mathbb{R}^2 \longrightarrow \mathbb{R}^2 ,$$

which is given by a rotation matrix

$$\begin{pmatrix} \cos \alpha & - \sin \alpha \\ \sin \alpha & \cos \alpha \end{pmatrix}$$

(with some $\alpha \in \mathbb{R}$) with respect to the standard basis, is called a

rotation.

A space rotation is a linear mapping of the space $\mathbb{R}^3$ to itself around a rotation axis (a line through the origin) by a certain angle $\alpha$. If the vector $v_1 \neq 0$ defines the axis, and $v_2$ and $v_3$ are orthogonal to $v_1$ and to each other, and all three have length $1$, then the rotation is described by the matrix

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \alpha & - \sin \alpha \\ 0 & \sin \alpha & \cos \alpha \end{pmatrix}$$

with respect to the basis $v_1 , v_2 , v_3$.
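A plane rotation matrix can be written down directly from the angle. The following Python sketch (with an arbitrarily chosen angle) builds it and confirms where the two standard vectors are mapped.

    import numpy as np

    def rotation_matrix(alpha):
        # Counterclockwise rotation of the plane by the angle alpha.
        return np.array([[np.cos(alpha), -np.sin(alpha)],
                         [np.sin(alpha),  np.cos(alpha)]])

    alpha = np.pi / 6                      # 30 degrees, chosen arbitrarily
    D = rotation_matrix(alpha)
    print(D @ np.array([1.0, 0.0]))        # (cos alpha, sin alpha)
    print(D @ np.array([0.0, 1.0]))        # (-sin alpha, cos alpha)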



The kernel of a linear mapping

Let $K$ denote a field, let $V$ and $W$ denote $K$-vector spaces, and let

$$\varphi \colon V \longrightarrow W$$

denote a $K$-linear mapping. Then

$$\ker \varphi = \varphi^{-1}(0) = \{ v \in V \mid \varphi(v) = 0 \}$$

is called the kernel of $\varphi$.

The kernel is a linear subspace of $V$.
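For a map given by a matrix, the kernel is the solution space of the corresponding homogeneous linear system, and a basis of it can be computed. The following Python sketch uses sympy (assumed to be available) on an arbitrary illustrative matrix.

    from sympy import Matrix

    # An arbitrary 2 x 3 matrix; the corresponding map from Q^3 to Q^2 must have
    # a nontrivial kernel, since the domain dimension exceeds the target dimension.
    A = Matrix([[1, 2, 1],
                [0, 1, 1]])

    # nullspace() returns a list of basis vectors of the kernel.
    for v in A.nullspace():
        print(v.T)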

The following criterion for injectivity is important.


Let $K$ denote a field, let $V$ and $W$ denote $K$-vector spaces, and let

$$\varphi \colon V \longrightarrow W$$

denote a $K$-linear mapping. Then $\varphi$ is injective if and only if

$$\ker \varphi = 0$$

holds.

If the mapping is injective, then there can exist, apart from $0$, no other vector $v \in V$ with $\varphi(v) = 0$. Hence, $\ker \varphi = 0$.
So suppose that $\ker \varphi = 0$, and let $v_1 , v_2 \in V$ be given with $\varphi(v_1) = \varphi(v_2)$. Then, due to linearity,

$$\varphi(v_1 - v_2) = \varphi(v_1) - \varphi(v_2) = 0 .$$

Therefore, $v_1 - v_2 \in \ker \varphi$, and so $v_1 = v_2$.
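For a map between standard spaces described by a matrix, the injectivity criterion can be checked via the rank: by a standard fact of linear algebra, the kernel is trivial exactly when the rank of the matrix equals the dimension of the domain. The Python sketch below applies this to two arbitrary illustrative matrices.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0]])            # rank 2 = domain dimension 2: injective
    B = np.array([[1.0, 2.0, 1.0],
                  [0.0, 1.0, 1.0]])       # rank 2 < domain dimension 3: not injective

    print(np.linalg.matrix_rank(A) == A.shape[1])   # True
    print(np.linalg.matrix_rank(B) == B.shape[1])   # False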


PDF-version of this lecture
Exercise sheet for this lecture (PDF)