Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 24/latex


\setcounter{section}{24}






\zwischenueberschrift{Base change}

We know, due to Theorem 23.15 , that in a finite-dimensional vector space any two bases have the same length, that is, the same number of vectors. Moreover, every vector has, with respect to every basis, unique coordinates \zusatzklammer {the coefficient tuple} {} {.} How do these coordinates behave when we change the basis? This is answered by the following statement.




\inputfaktbeweis
{Vector space/Finite dimensional/Change of basis/Fact}
{Lemma}
{}
{

\faktsituation {Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let \mathkor {} {\mathfrak{ v } = v_1 , \ldots , v_n} {and} {\mathfrak{ w } = w_1 , \ldots , w_n} {} denote bases of $V$.}
\faktvoraussetzung {Suppose that
\mavergleichskettedisp
{\vergleichskette
{v_j }
{ =} { \sum_{ i = 1 }^{ n } c_{ij} w_i }
{ } { }
{ } { }
{ } { }
} {}{}{} with coefficients
\mavergleichskette
{\vergleichskette
{ c_{ij} }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} which we collect into the $n \times n$-matrix
\mavergleichskettedisp
{\vergleichskette
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }
{ =} { { \left( c_{ij} \right) }_{ij} }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\faktfolgerung {Then a vector $u$, which has the coordinates $\begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix}$ with respect to the basis $\mathfrak{ v }$, has the coordinates
\mavergleichskettedisp
{\vergleichskette
{\begin{pmatrix} t _{1 } \\ \vdots\\ t _{ n } \end{pmatrix} }
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix} }
{ =} { \begin{pmatrix} c_{11 } & c_{1 2} & \ldots & c_{1 n } \\ c_{21 } & c_{2 2} & \ldots & c_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ c_{ n 1 } & c_{ n 2 } & \ldots & c_{ n n } \end{pmatrix} \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix} }
{ } { }
{ } { }
} {}{}{} with respect to the basis $\mathfrak{ w }$.}
\faktzusatz {}

}
{

This follows directly from
\mavergleichskettedisp
{\vergleichskette
{ u }
{ =} { \sum_{ j = 1 }^{ n } s_j v_j }
{ =} { \sum_{ j = 1 }^{ n } s_j { \left( \sum_{ i = 1 }^{ n } c_{ij} w_i \right) } }
{ =} { \sum_{ i = 1 }^{ n } { \left( \sum_{ j = 1 }^{ n } s_j c_{ij} \right) } w_i }
{ } { }
} {}{}{,} and the definition of matrix multiplication.

}


The matrix \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } }}{,} which describes the base change from $\mathfrak{ v }$ to $\mathfrak{ w }$, is called the \stichwort {transformation matrix} {.} The $j$-th column of the transformation matrix contains the coordinates of $v_j$ with respect to the basis $\mathfrak{ w }$. When we denote, for a vector
\mavergleichskette
{\vergleichskette
{u }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} and a basis $\mathfrak{ v }$, the corresponding coordinate tuple by $\Psi_{ \mathfrak{ v } } (u)$, then the transformation can be quickly written as
\mavergleichskettedisp
{\vergleichskette
{ \Psi_{ \mathfrak{ w } } (u) }
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \Psi_{ \mathfrak{ v } } (u)) }
{ } { }
{ } { }
{ } { }
} {}{}{.}




\inputbeispiel{}
{

We consider in $\R^2$ the standard basis,
\mavergleichskettedisp
{\vergleichskette
{ \mathfrak{ u } }
{ =} { \begin{pmatrix} 1 \\0 \end{pmatrix} , \, \begin{pmatrix} 0 \\1 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{,} and the basis
\mavergleichskettedisp
{\vergleichskette
{ \mathfrak{ v } }
{ =} { \begin{pmatrix} 1 \\2 \end{pmatrix} , \, \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} The basis vectors of $\mathfrak{ v }$ can be expressed directly with the standard basis, namely
\mathdisp {v_1= \begin{pmatrix} 1 \\2 \end{pmatrix} = 1 \begin{pmatrix} 1 \\0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\1 \end{pmatrix} \text{ and } v_2= \begin{pmatrix} -2 \\3 \end{pmatrix} = -2 \begin{pmatrix} 1 \\0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\1 \end{pmatrix}} { . }
Therefore, we get immediately
\mavergleichskettedisp
{\vergleichskette
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } }
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} For example, the vector that has the coordinates \mathl{(4,-3)}{} with respect to $\mathfrak{ v }$ has the coordinates
\mavergleichskettedisp
{\vergleichskette
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } \begin{pmatrix} 4 \\-3 \end{pmatrix} }
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 4 \\-3 \end{pmatrix} }
{ =} { \begin{pmatrix} 10 \\-1 \end{pmatrix} }
{ } { }
{ } { }
} {}{}{} with respect to the standard basis $\mathfrak{ u }$. The transformation matrix \mathl{M^{ \mathfrak{ u } }_{ \mathfrak{ v } }}{} is more difficult to compute: we have to write the standard vectors as linear combinations of \mathkor {} {v_1} {and} {v_2} {.} A direct computation \zusatzklammer {solving two linear systems} {} {} yields
\mavergleichskettedisp
{\vergleichskette
{ \begin{pmatrix} 1 \\0 \end{pmatrix} }
{ =} { { \frac{ 3 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} - { \frac{ 2 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mavergleichskettedisp
{\vergleichskette
{ \begin{pmatrix} 0 \\1 \end{pmatrix} }
{ =} { { \frac{ 2 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} + { \frac{ 1 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} Hence,
\mavergleichskettedisp
{\vergleichskette
{ M^{ \mathfrak{ u } }_{ \mathfrak{ v } } }
{ =} { \begin{pmatrix} { \frac{ 3 }{ 7 } } & { \frac{ 2 }{ 7 } } \\ - { \frac{ 2 }{ 7 } } & { \frac{ 1 }{ 7 } } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}
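The computations of this example can be checked with a few lines of Python; the following is only an illustrative sketch using exact fractions, with the two transformation matrices taken from above.

```python
from fractions import Fraction as F

def mat_mul(A, B):
    # product of two matrices, given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, x):
    # apply the matrix A to the coordinate tuple x
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

M_v_u = [[F(1), F(-2)], [F(2), F(3)]]              # transformation matrix M^v_u
M_u_v = [[F(3, 7), F(2, 7)], [F(-2, 7), F(1, 7)]]  # transformation matrix M^u_v

# the coordinates (4, -3) w.r.t. v become (10, -1) w.r.t. the standard basis
assert mat_vec(M_v_u, [F(4), F(-3)]) == [F(10), F(-1)]
# the two base-change matrices are inverse to each other
assert mat_mul(M_u_v, M_v_u) == [[F(1), F(0)], [F(0), F(1)]]
```

That the two matrices are inverse to each other is no coincidence: changing the basis from $\mathfrak{ v }$ to $\mathfrak{ u }$ and back must return the original coordinates.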






\zwischenueberschrift{Linear mappings}




\inputdefinition
{ }
{

Let $K$ be a field, and let \mathkor {} {V} {and} {W} {} be $K$-vector spaces. A mapping
\mathdisp {\varphi \colon V \longrightarrow W} { }
is called a \definitionswort {linear mapping}{,} if the following two properties are fulfilled. \aufzaehlungzwei {
\mavergleichskette
{\vergleichskette
{ \varphi(u+v) }
{ = }{ \varphi(u) + \varphi(v) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for all
\mavergleichskette
{\vergleichskette
{ u,v }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {
\mavergleichskette
{\vergleichskette
{ \varphi( s v) }
{ = }{ s \varphi(v) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for all \mathkor {} {s \in K} {and} {v \in V} {.}

}

}

Here, the first property is called \stichwort {additivity} {} and the second property is called \stichwort {compatibility with scaling} {.} When we want to stress the base field, we talk about $K$-linearity. The identity $\operatorname{Id}_{ V } \colon V \rightarrow V$, the null mapping $V \rightarrow 0$ and the inclusion
\mavergleichskette
{\vergleichskette
{U }
{ \subseteq }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} of a linear subspace are the simplest examples of linear mappings.




\inputbeispiel{}
{

Let $K$ denote a field, and let $K^n$ be the $n$-dimensional standard space. Then the $i$-th \stichwort {projection} {,} that is, the mapping
\mathdisp {K^n \longrightarrow K , \left( x_1 , \, \ldots , \, x_{i-1} , \, x_i , \, x_{i+1} , \, \ldots , \, x_n \right) \longmapsto x_i} { , }
is a $K$-linear mapping. This follows immediately from componentwise addition and scalar multiplication on the standard space. The $i$-th projection is also called the $i$-th \stichwort {coordinate function} {.}

}
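Additivity and compatibility with scaling can be verified for the projection on concrete tuples; a minimal Python sketch, with arbitrarily chosen sample vectors:

```python
def proj(i, x):
    # the i-th projection K^n -> K (1-based index i)
    return x[i - 1]

u, v, s = [1, 2, 3], [4, 5, 6], 7
u_plus_v = [a + b for a, b in zip(u, v)]
assert proj(2, u_plus_v) == proj(2, u) + proj(2, v)   # additivity
assert proj(2, [s * a for a in u]) == s * proj(2, u)  # compatibility with scaling
```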




\inputfaktbeweis
{Linear mapping/Composition/Fact}
{Lemma}
{}
{

\faktsituation {Let $K$ denote a field, and let \mathl{U,V,W}{} denote vector spaces over $K$. Suppose that
\mathdisp {\varphi : U \longrightarrow V \text{ and } \psi : V \longrightarrow W} { }
are linear mappings.}
\faktfolgerung {Then also the composition
\mathdisp {\psi \circ \varphi \colon U \longrightarrow W} { }
is a linear mapping.}
\faktzusatz {}

}
{See Exercise 24.17 . }





\inputfaktbeweis
{Linear mapping/Bijective/Inverse mapping linear/Fact}
{Lemma}
{}
{

\faktsituation {Let $K$ be a field, and let \mathkor {} {V} {and} {W} {} be $K$-vector spaces. Let
\mathdisp {\varphi \colon V \longrightarrow W} { }
be a bijective linear map.}
\faktfolgerung {Then also the inverse mapping
\mathdisp {\varphi^{-1} \colon W \longrightarrow V} { }
is linear.}
\faktzusatz {}

}
{See Exercise 24.18 . }






\zwischenueberschrift{Determination on a basis}

Behind the following statement \zusatzklammer {the \stichwort {determination theorem} {}} {} {,} there is the important principle that in linear algebra \zusatzklammer {of finite-dimensional vector spaces} {} {,} the objects are determined by finitely many data.




\inputfaktbeweisnichtvorgefuehrt
{Linear mapping/Determination on basis/Fact}
{Theorem}
{}
{

\faktsituation {Let $K$ be a field, and let \mathkor {} {V} {and} {W} {} be $K$-vector spaces. Let
\mathbed {v_i} {}
{i \in I} {}
{} {} {} {,} denote a basis of $V$, and let
\mathbed {w_i} {}
{i \in I} {}
{} {} {} {,} denote elements in $W$.}
\faktfolgerung {Then there exists a unique linear mapping
\mathdisp {f \colon V \longrightarrow W} { }
with
\mathdisp {f(v_i)= w_i \text { for all } i \in I} { . }
}
\faktzusatz {}

}
{Linear mapping/Determination on basis/Fact/Proof

}
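For $V = K^n$ with the standard basis, the unique mapping of the theorem can be written down explicitly: a coordinate tuple is sent to the corresponding linear combination of the prescribed values. A Python sketch, where $W = K^m$ and the image tuples $w_j$ are arbitrary example data:

```python
def determined_map(w, x):
    # the unique linear mapping with e_j |-> w[j]; here w is the list of
    # prescribed image tuples and x a coordinate tuple, so f(x) = sum_j x_j * w_j
    m = len(w[0])
    return [sum(x[j] * w[j][i] for j in range(len(w))) for i in range(m)]

# prescribed images of the standard vectors e_1, e_2 (arbitrary data)
w = [[1, 0, 2], [0, 1, -1]]
assert determined_map(w, [1, 0]) == [1, 0, 2]  # f(e_1) = w_1
assert determined_map(w, [2, 3]) == [2, 3, 1]  # f(2 e_1 + 3 e_2) = 2 w_1 + 3 w_2
```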


The graph of a linear mapping from $\R$ to $\R$; the mapping is determined by the proportionality factor $k$ alone.




\inputbeispiel{}
{

The easiest linear mappings are \zusatzklammer {besides the null mapping} {} {} the linear maps from $K$ to $K$. Such a linear mapping
\mathdisp {\varphi \colon K \longrightarrow K , x \longmapsto \varphi(x)} { , }
is determined \zusatzklammer {by Theorem 24.7 , but this is also directly clear} {} {} by \mathl{\varphi(1)}{,} or by the value \mathl{\varphi(t)}{} for a single element
\mathbed {t \in K} {}
{t \neq 0} {}
{} {} {} {.} In particular,
\mavergleichskette
{\vergleichskette
{ \varphi(x) }
{ = }{ ax }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} with a uniquely determined
\mavergleichskette
{\vergleichskette
{a }
{ \in }{K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} In the context of physics, for
\mavergleichskette
{\vergleichskette
{K }
{ = }{\R }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and if there is a linear relation between two measurable quantities, we talk about \stichwort {proportionality} {,} and $a$ is called the \stichwort {proportionality factor} {.} In school, such a linear relation occurs as the \anfuehrung{rule of three}{.}

}






\zwischenueberschrift{Linear mappings and matrices}

The effect of several linear mappings from $\R^2$ to itself, represented on a brain cell.

Due to Theorem 24.7 , a linear mapping
\mathdisp {\varphi \colon K^n \longrightarrow K^m} { }
is determined by the images
\mathbed {\varphi(e_j)} {}
{j = 1 , \ldots , n} {}
{} {} {} {,} of the standard vectors. Every \mathl{\varphi(e_j)}{} is a linear combination
\mavergleichskettedisp
{\vergleichskette
{ \varphi(e_j) }
{ =} { \sum_{i = 1}^m a_{ij} e_i }
{ } { }
{ } { }
{ } { }
} {}{}{,} and therefore the linear mapping is determined by the elements \mathl{a_{ij}}{.} So, such a linear map is determined by the $mn$ elements
\mathbed {a_{ij}} {}
{1 \leq i \leq m} {}
{1 \leq j \leq n} {} {} {,} from the field. We can write such a data set as a matrix. Because of the determination theorem, this holds for linear maps in general, as soon as bases are fixed in both vector spaces.




\inputdefinition
{ }
{

Let $K$ denote a field, and let $V$ be an $n$-dimensional vector space with a basis
\mavergleichskette
{\vergleichskette
{ \mathfrak{ v } }
{ = }{ v_1 , \ldots , v_n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and let $W$ be an $m$-dimensional vector space with a basis
\mavergleichskette
{\vergleichskette
{ \mathfrak{ w } }
{ = }{ w_1 , \ldots , w_m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

For a linear mapping
\mathdisp {\varphi \colon V \longrightarrow W} { , }
the matrix
\mavergleichskettedisp
{\vergleichskette
{ M }
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi) }
{ =} { (a_{ij})_{ij} }
{ } { }
{ } { }
} {}{}{,} where \mathl{a_{ij}}{} is the $i$-th coordinate of \mathl{\varphi(v_j )}{} with respect to the basis $\mathfrak{ w }$, is called the \definitionswort {describing matrix for}{} $\varphi$ with respect to the bases.

For a matrix
\mavergleichskette
{\vergleichskette
{M }
{ = }{ (a_{ij})_{ij} }
{ \in }{ \operatorname{Mat}_{ m \times n } (K) }
{ }{ }
{ }{ }
} {}{}{,} the linear mapping \mathl{\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)}{} determined by
\mathdisp {v_j \longmapsto \sum_{ i = 1 }^{ m } a_{ij} w_i} { }
in the sense of Theorem 24.7 ,

is called the \definitionswort {linear mapping determined by the matrix}{} $M$.

}

For a linear mapping $\varphi \colon K^n \rightarrow K^m$, we always assume, unless otherwise stated, that everything refers to the standard bases. For a linear mapping $\varphi \colon V \rightarrow V$ from a vector space to itself \zusatzklammer {such a mapping is called an \stichwort {endomorphism} {}} {} {,} one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix, with respect to every basis.




\inputfaktbeweisnichtvorgefuehrt
{Linear mapping/Matrix to bases/Correspondence/Fact}
{Theorem}
{}
{

\faktsituation {Let $K$ be a field, and let $V$ be an $n$-dimensional vector space with a basis
\mavergleichskette
{\vergleichskette
{ \mathfrak{ v } }
{ = }{ v_1 , \ldots , v_n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and let $W$ be an $m$-dimensional vector space with a basis
\mavergleichskette
{\vergleichskette
{ \mathfrak{ w } }
{ = }{ w_1 , \ldots , w_m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}}
\faktfolgerung {Then the mappings
\mathdisp {\varphi \longmapsto M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi) \text{ and } M \longmapsto \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)} { , }
defined in the definition above, are inverse to each other.}
\faktzusatz {}

}
{Linear mapping/Matrix to bases/Correspondence/Fact/Proof

}





\inputbeispiel{}
{

A linear mapping
\mathdisp {\varphi \colon K^n \longrightarrow K^m} { }
is usually described by the matrix $M$ with respect to the standard bases on the left and on the right. The result of the matrix multiplication
\mavergleichskettedisp
{\vergleichskette
{ \begin{pmatrix} y_{1 } \\ \vdots\\ y_{ m } \end{pmatrix} }
{ =} { M \begin{pmatrix} x_{1 } \\ \vdots\\ x_{ n } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} can be interpreted directly as a point in $K^m$. The $j$-th column of $M$ is the image of the $j$-th standard vector $e_j$.

}
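The last remark, that the $j$-th column of $M$ is the image of the $j$-th standard vector, can be checked mechanically; a short Python sketch with an arbitrarily chosen example matrix:

```python
def mat_vec(M, x):
    # interpret the m x n matrix M as the mapping K^n -> K^m, x |-> Mx
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

M = [[1, 2, 0],
     [3, -1, 4]]  # an arbitrary 2 x 3 matrix
n = len(M[0])
for j in range(n):
    e_j = [1 if k == j else 0 for k in range(n)]
    # the image of e_j equals the j-th column of M
    assert mat_vec(M, e_j) == [row[j] for row in M]
```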






\zwischenueberschrift{Rotations}

A rotation of the real plane $\R^2$ around the origin by the angle $\alpha$ counterclockwise maps \mathl{\begin{pmatrix} 1 \\0 \end{pmatrix}}{} to \mathl{\begin{pmatrix} \cos \alpha \\ \sin \alpha \end{pmatrix}}{} and \mathl{\begin{pmatrix} 0 \\1 \end{pmatrix}}{} to \mathl{\begin{pmatrix} - \sin \alpha \\ \cos \alpha \end{pmatrix}}{.} Therefore, plane rotations are described in the following way.


\inputdefinition
{ {{{2}}} }
{

A linear mapping
\mathdisp {D(\alpha) \colon \R^2 \longrightarrow \R^2} { , }
which is given by a \definitionswort {rotation matrix}{} \mathl{\begin{pmatrix} \operatorname{cos} \, \alpha & - \operatorname{sin} \, \alpha \\ \operatorname{sin} \, \alpha & \operatorname{cos} \,\alpha \end{pmatrix}}{} \zusatzklammer {with some
\mavergleichskette
{\vergleichskette
{ \alpha }
{ \in }{\R }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {} is called a

\definitionswort {rotation}{.}

}

A \stichwort {space rotation} {} is a linear mapping of the space $\R^3$ to itself, given by a rotation around a rotation axis \zusatzklammer {a line through the origin} {} {} by a certain angle $\alpha$. If the vector
\mavergleichskette
{\vergleichskette
{ v_1 }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} defines the axis, and $u_2$ and $u_3$ are orthogonal to $v_1$ and to each other, and all have length $1$, then the rotation is described by the matrix
\mathdisp {\begin{pmatrix} 1 & 0 & 0 \\ 0 & \operatorname{cos} \, \alpha & - \operatorname{sin} \, \alpha \\ 0 &\operatorname{sin} \, \alpha & \operatorname{cos} \,\alpha \end{pmatrix}} { }
with respect to the basis \mathl{v_1,u_2,u_3}{.}
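A basic property of plane rotations is that composing $D(\alpha)$ and $D(\beta)$ yields $D(\alpha + \beta)$; this can be confirmed numerically. A Python sketch, with arbitrarily chosen angles and comparison up to floating-point error:

```python
import math

def D(alpha):
    # rotation matrix for the angle alpha, counterclockwise
    return [[math.cos(alpha), -math.sin(alpha)],
            [math.sin(alpha),  math.cos(alpha)]]

def apply(M, x):
    # apply a 2 x 2 matrix to a vector of the plane
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

a, b, v = 0.7, 1.1, [1.0, 0.0]
rotated_twice = apply(D(a), apply(D(b), v))  # rotate by b, then by a
rotated_once = apply(D(a + b), v)            # rotate by a + b in one step
assert all(math.isclose(p, q) for p, q in zip(rotated_twice, rotated_once))
```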






\zwischenueberschrift{The kernel of a linear mapping}




\inputdefinition
{ }
{

Let $K$ denote a field, let \mathkor {} {V} {and} {W} {} denote $K$-vector spaces, and let
\mathdisp {\varphi \colon V \longrightarrow W} { }
denote a $K$-linear mapping. Then
\mavergleichskettedisp
{\vergleichskette
{ \operatorname{kern} \varphi }
{ \defeq} { \varphi^{-1}(0) }
{ =} { { \left\{ v \in V \mid \varphi(v) = 0 \right\} } }
{ } { }
{ } { }
} {}{}{}

is called the \definitionswort {kernel}{} of $\varphi$.

}

The kernel is a linear subspace of $V$.

The following \stichwort {criterion for injectivity} {} is important.




\inputfaktbeweis
{Linear mapping/Kernel/Injectivity/Fact}
{Lemma}
{}
{

\faktsituation {Let $K$ denote a field, let \mathkor {} {V} {and} {W} {} denote $K$-vector spaces, and let
\mathdisp {\varphi \colon V \longrightarrow W} { }
denote a $K$-linear mapping.}
\faktfolgerung {Then $\varphi$ is injective if and only if
\mavergleichskette
{\vergleichskette
{ \operatorname{kern} \varphi }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds.}
\faktzusatz {}

}
{

If the mapping is injective, then, since $\varphi(0) = 0$, there can exist, apart from
\mavergleichskette
{\vergleichskette
{0 }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} no other vector
\mavergleichskette
{\vergleichskette
{v }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} with
\mavergleichskette
{\vergleichskette
{ \varphi(v) }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Hence,
\mavergleichskette
{\vergleichskette
{ \varphi^{-1}(0) }
{ = }{ \{ 0 \} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}
So suppose that
\mavergleichskette
{\vergleichskette
{ \operatorname{kern} \varphi }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and let
\mavergleichskette
{\vergleichskette
{ v_1,v_2 }
{ \in }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} be given with
\mavergleichskette
{\vergleichskette
{ \varphi(v_1) }
{ = }{ \varphi(v_2) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Then, due to linearity,
\mavergleichskettedisp
{\vergleichskette
{\varphi(v_1 - v_2) }
{ =} {\varphi(v_1) - \varphi(v_2) }
{ =} { 0 }
{ } { }
{ } { }
} {}{}{.} Therefore,
\mavergleichskette
{\vergleichskette
{ v_1-v_2 }
{ \in }{ \operatorname{kern} \varphi }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and so
\mavergleichskette
{\vergleichskette
{v_1 }
{ = }{v_2 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

}
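The criterion can be illustrated with a concrete matrix whose columns are linearly dependent: its kernel contains a nonzero vector, so the associated mapping is not injective. A Python sketch, where the matrix is an arbitrary example:

```python
def mat_vec(M, x):
    # the linear mapping x |-> Mx
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

M = [[1, 2],
     [2, 4]]  # second column = 2 * first column
v = [2, -1]   # nonzero kernel vector: 2*(first column) - (second column) = 0
assert mat_vec(M, v) == [0, 0]
# two distinct vectors with the same image, so the mapping is not injective
assert mat_vec(M, [1, 1]) == mat_vec(M, [3, 0])
```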