Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 24/latex
\setcounter{section}{24}
\subtitle {Base change}
We know, due to Theorem 23.15, that in a finite-dimensional vector space, any two bases have the same length, that is, the same number of vectors. Every vector has, with respect to every basis, unique coordinates \extrabracket {the coefficient tuple} {} {.} How do these coordinates behave under a change of basis? This is answered by the following statement.
\inputfactproof
{Vector space/Finite dimensional/Change of basis/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ be a
field,
and let $V$ be a
$K$-vector space
of
dimension
$n$. Let
\mathcor {} {\mathfrak{ v } = v_1 , \ldots , v_n} {and} {\mathfrak{ w } = w_1 , \ldots , w_n} {}
denote
bases
of $V$.}
\factcondition {Suppose that
\mathrelationchaindisplay
{\relationchain
{v_j
}
{ =} { \sum_{ i = 1 }^{ n } c_{ij} w_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
with coefficients
\mathrelationchain
{\relationchain
{ c_{ij}
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
which we collect into the
$n \times n$-matrix
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } }
}
{ =} { { \left( c_{ij} \right) }_{ij}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}}
\factconclusion {Then a vector $u$, which has the coordinates $\begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix}$ with respect to the basis $\mathfrak{ v }$, has the coordinates
\mathrelationchaindisplay
{\relationchain
{\begin{pmatrix} t _{1 } \\ \vdots\\ t _{ n } \end{pmatrix}
}
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix}
}
{ =} { \begin{pmatrix} c_{11 } & c_{1 2} & \ldots & c_{1 n } \\
c_{21 } & c_{2 2} & \ldots & c_{2 n } \\
\vdots & \vdots & \ddots & \vdots \\ c_{ n 1 } & c_{ n 2 } & \ldots & c_{ n n } \end{pmatrix} \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix}
}
{ } {
}
{ } {
}
}
{}{}{}
with respect to the basis $\mathfrak{ w }$.}
\factextra {}
}
{
This follows directly from
\mathrelationchaindisplay
{\relationchain
{ u
}
{ =} { \sum_{ j = 1 }^{ n } s_j v_j
}
{ =} { \sum_{ j = 1 }^{ n } s_j { \left( \sum_{ i = 1 }^{ n } c_{ij} w_i \right) }
}
{ =} { \sum_{ i = 1 }^{ n } { \left( \sum_{ j = 1 }^{ n } s_j c_{ij} \right) } w_i
}
{ } {
}
}
{}{}{,}
and the definition of
matrix multiplication.
}
The matrix \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } }}{,} which describes the base change from $\mathfrak{ v }$ to $\mathfrak{ w }$, is called the \keyword {transformation matrix} {.} The $j$-th column of the transformation matrix contains the coordinates of $v_j$ with respect to the basis $\mathfrak{ w }$. When we denote, for a vector
\mathrelationchain
{\relationchain
{u
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
and a basis $\mathfrak{ v }$, the corresponding coordinate tuple by $\Psi_{ \mathfrak{ v } } (u)$, then the transformation can be written concisely as
\mathrelationchaindisplay
{\relationchain
{ \Psi_{ \mathfrak{ w } } (u)
}
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \Psi_{ \mathfrak{ v } } (u))
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
\inputexample{}
{
We consider in $\R^2$ the
standard basis,
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ u }
}
{ =} { \begin{pmatrix} 1 \\0 \end{pmatrix} , \, \begin{pmatrix} 0 \\1 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
and the basis
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ v }
}
{ =} { \begin{pmatrix} 1 \\2 \end{pmatrix} , \, \begin{pmatrix} -2 \\3 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
The basis vectors of $\mathfrak{ v }$ can be expressed directly with the standard basis, namely
\mathdisp {v_1= \begin{pmatrix} 1 \\2 \end{pmatrix} = 1 \begin{pmatrix} 1 \\0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\1 \end{pmatrix} \text{ and } v_2= \begin{pmatrix} -2 \\3 \end{pmatrix} = -2 \begin{pmatrix} 1 \\0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\1 \end{pmatrix}} { . }
Therefore, we get immediately
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } }
}
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
For example, the vector that has the
coordinates
\mathl{(4,-3)}{} with respect to $\mathfrak{ v }$, has the coordinates
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } \begin{pmatrix} 4 \\-3 \end{pmatrix}
}
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 4 \\-3 \end{pmatrix}
}
{ =} { \begin{pmatrix} 10 \\-1 \end{pmatrix}
}
{ } {
}
{ } {
}
}
{}{}{}
with respect to the standard basis $\mathfrak{ u }$. The transformation matrix \mathl{M^{ \mathfrak{ u } }_{ \mathfrak{ v } }}{} is more difficult to compute. We have to write the standard vectors as
linear combinations
of
\mathcor {} {v_1} {and} {v_2} {.}
A direct computation
\extrabracket {solving two linear systems} {} {}
yields
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 1 \\0 \end{pmatrix}
}
{ =} { { \frac{ 3 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} - { \frac{ 2 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
and
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 0 \\1 \end{pmatrix}
}
{ =} { { \frac{ 2 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} + { \frac{ 1 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Hence,
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ u } }_{ \mathfrak{ v } }
}
{ =} { \begin{pmatrix} { \frac{ 3 }{ 7 } } & { \frac{ 2 }{ 7 } } \\ - { \frac{ 2 }{ 7 } } & { \frac{ 1 }{ 7 } } \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
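The two transformation matrices are inverse to each other; indeed, a direct computation gives
\mathdisp {M^{ \mathfrak{ u } }_{ \mathfrak{ v } } M^{ \mathfrak{ v } }_{ \mathfrak{ u } } = \begin{pmatrix} { \frac{ 3 }{ 7 } } & { \frac{ 2 }{ 7 } } \\ - { \frac{ 2 }{ 7 } } & { \frac{ 1 }{ 7 } } \end{pmatrix} \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}} { , }
as it must be, since expressing coordinates with respect to $\mathfrak{ v }$ in terms of $\mathfrak{ u }$ and then again in terms of $\mathfrak{ v }$ returns the original coordinate tuple.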
}
\subtitle {Linear mappings}
\inputdefinition
{ }
{
Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
A
mapping
\mathdisp {\varphi \colon V \longrightarrow W} { }
is called a \definitionword {linear mapping}{} if the following two properties are fulfilled.
\enumerationtwo {
\mathrelationchain
{\relationchain
{ \varphi(u+v)
}
{ = }{ \varphi(u) + \varphi(v)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for all
\mathrelationchain
{\relationchain
{ u,v
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
} {
\mathrelationchain
{\relationchain
{ \varphi( s v)
}
{ = }{ s \varphi(v)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for all
\mathcor {} {s \in K} {and} {v \in V} {.}
}
Here, the first property is called \keyword {additivity} {} and the second property is called \keyword {compatibility with scaling} {.} If we want to stress the base field, we say that the mapping is $K$-linear. The identity
$\operatorname{Id}_{ V } \colon V \rightarrow V$,
the null mapping
$V \rightarrow 0$,
and the inclusion
\mathrelationchain
{\relationchain
{U
}
{ \subseteq }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
of a linear subspace are the simplest examples of linear mappings. For a linear mapping, compatibility with arbitrary linear combinations holds, that is,
\mathrelationchaindisplay
{\relationchain
{ \varphi { \left( \sum_{i = 1}^n s_iv_i \right) }
}
{ =} { \sum_{i = 1}^n s_i \varphi(v_i)
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
see
exercise *****.
\inputexample{}
{
Let $K$ denote a
field,
and let $K^n$ be the
$n$-dimensional
standard space.
Then the $i$-th \keyword {projection} {,} that is, the
mapping
\mathdisp {K^n \longrightarrow K
, \left( x_1 , \, \ldots , \, x_{i-1} , \, x_i , \, x_{i+1} , \, \ldots , \, x_n \right) \longmapsto x_i} { , }
is a
$K$-linear mapping.
This follows immediately from componentwise addition and scalar multiplication on the standard space. The $i$-th projection is also called the $i$-th \keyword {coordinate function} {.}
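Explicitly, writing $p_2$ for the second projection on $K^3$, the additivity reads
\mathdisp {p_2 { \left( \left( x_1 , \, x_2 , \, x_3 \right) + \left( y_1 , \, y_2 , \, y_3 \right) \right) } = x_2 + y_2 = p_2 { \left( x_1 , \, x_2 , \, x_3 \right) } + p_2 { \left( y_1 , \, y_2 , \, y_3 \right) }} { , }
and the compatibility with scalar multiplication is checked in the same way.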
}
\inputfactproof
{Linear mapping/Composition/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ denote a
field,
and let \mathl{U,V,W}{} denote vector spaces
over $K$. Suppose that
\mathdisp {\varphi : U \longrightarrow V \text{ and } \psi : V \longrightarrow W} { }
are
linear mappings.}
\factconclusion {Then also the
composition
\mathdisp {\psi \circ \varphi \colon U \longrightarrow W} { }
is a linear mapping.}
\factextra {}
}
{See Exercise 24.17.}
\inputfactproof
{Linear mapping/Bijective/Inverse mapping linear/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
Let
\mathdisp {\varphi \colon V \longrightarrow W} { }
be a
bijective
linear map.}
\factconclusion {Then also the
inverse mapping
\mathdisp {\varphi^{-1} \colon W \longrightarrow V} { }
is linear.}
\factextra {}
}
{See Exercise 24.18.}
\subtitle {Determination on a basis}
Behind the following statement \extrabracket {the \keyword {determination theorem} {}} {} {,} there is the important principle that in linear algebra \extrabracket {of finite-dimensional vector spaces} {} {,} the objects are determined by finitely many data.
\inputfaktbeweisnichtvorgefuehrt
{Linear mapping/Determination on basis/Fact}
{Theorem}
{}
{
\factsituation {Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
Let
\mathcond {v_i} {}
{i \in I} {}
{} {} {} {,}
denote a
basis
of $V$, and let
\mathcond {w_i} {}
{i \in I} {}
{} {} {} {,}
denote elements in $W$.}
\factconclusion {Then there exists a unique
linear mapping
\mathdisp {f \colon V \longrightarrow W} { }
with
\mathdisp {f(v_i)= w_i \text { for all } i \in I} { . }
}
\factextra {}
}
{
Since we want
\mathrelationchain
{\relationchain
{f(v_i)
}
{ = }{ w_i
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and since a
linear mapping
respects all
linear combinations,
that is
\mathrelationchaindisplay
{\relationchain
{ f { \left( \sum_{i \in I} s_i v_i \right) }
}
{ =} { \sum_{i \in I} s_i f { \left( v_i \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
holds, and since every vector
\mathrelationchain
{\relationchain
{ v
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
is such a linear combination, there can exist at most one such linear mapping.
We now define a
mapping
\mathdisp {f \colon V \longrightarrow W} { }
in the following way: we write every vector
\mathrelationchain
{\relationchain
{ v
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with the given basis as
\mathrelationchaindisplay
{\relationchain
{v
}
{ =} {\sum_{i \in I} s_i v_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
\extrabracket {where
\mathrelationchain
{\relationchain
{ s_i
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for almost all
\mathrelationchain
{\relationchain
{ i
}
{ \in }{ I
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}} {} {}
and define
\mathrelationchaindisplay
{\relationchain
{ f(v)
}
{ \defeq} { \sum_{i \in I} s_i w_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Since the representation of $v$ as such a
linear combination
is unique, this mapping is well-defined. Also,
\mathrelationchain
{\relationchain
{ f(v_i)
}
{ = }{ w_i
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
is clear.
Linearity. For two vectors
\mathcor {} {u= \sum_{i \in I} s_iv_i} {and} {v= \sum_{i \in I} t_iv_i} {,}
we have
\mathrelationchainalign
{\relationchainalign
{ f { \left( u+v \right) }
}
{ =} { f { \left( { \left( \sum_{i \in I} s_iv_i \right) } + { \left( \sum_{i \in I} t_iv_i \right) } \right) }
}
{ =} { f { \left( \sum_{i \in I} { \left( s_i + t_i \right) } v_i \right) }
}
{ =} { \sum_{i \in I} (s_i + t_i) f { \left( v_i \right) }
}
{ =} { \sum_{i \in I} s_i f { \left( v_i \right) } + \sum_{i \in I} t_i f(v_i)
}
}
{
\relationchainextensionalign
{ =} { f { \left( \sum_{i \in I} s_iv_i \right) } + f { \left( \sum_{i \in I} t_iv_i \right) }
}
{ =} { f(u) +f(v)
}
{ } {}
{ } {}
}
{}{.}
The compatibility with scalar multiplication is shown in a similar way, see
exercise *****.
}
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Variables proporcionals.png} }
\end{center}
\imagetext {The graph of a linear mapping from $\R$ to $\R$, the mapping is determined by the proportionality factor $k$ alone.} }
\imagelicense { Variables proporcionals.png } {} {Coronellian} {Commons} {CC-by-sa 3.0} {}
\inputexample{}
{
The easiest
linear mappings
are
\extrabracket {besides the null mapping} {} {}
the linear maps from $K$ to $K$. Such a linear mapping
\mathdisp {\varphi \colon K \longrightarrow K
, x \longmapsto \varphi(x)} { , }
is determined
\extrabracket {by
Theorem 24.7
,
but this is also directly clear} {} {}
by \mathl{\varphi(1)}{,} or by the value \mathl{\varphi(t)}{} for a single element
\mathcond {t \in K} {}
{t \neq 0} {}
{} {} {} {.}
In particular,
\mathrelationchain
{\relationchain
{ \varphi(x)
}
{ = }{ ax
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
with a uniquely determined
\mathrelationchain
{\relationchain
{a
}
{ \in }{K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
In the context of physics, for
\mathrelationchain
{\relationchain
{K
}
{ = }{\R
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
if there is a linear relation between two measurable quantities, we talk about \keyword {proportionality} {,} and $a$ is called the \keyword {proportionality factor} {.} In school, such a linear relation occurs as the \quotationshort{rule of three}{.}
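For example, if a measurement of such a proportional relation yields \mathl{\varphi(3) = 12}{} \extrabracket {the values here are chosen just for illustration} {} {,} then
\mathdisp {a = \varphi(1) = { \frac{ \varphi(3) }{ 3 } } = 4 \text{ and hence } \varphi(x) = 4x \text{ for all } x \in \R} { . }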
}
\subtitle {Linear mappings and matrices}
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Some_linear_maps_kpv_without_eigenspaces.svg} }
\end{center}
\imagetext {The effect of several linear mappings from $\R^2$ to itself, represented on a brain cell.} }
\imagelicense { Some linear maps kpv without eigenspaces.svg } {} {Dividuum} {Commons} {CC-by-sa 3.0} {}
Due to
Theorem 24.7
,
a linear mapping
\mathdisp {\varphi \colon K^n \longrightarrow K^m} { }
is determined by the images
\mathcond {\varphi(e_j)} {}
{j = 1 , \ldots , n} {}
{} {} {} {,}
of the standard vectors. Every \mathl{\varphi(e_j)}{} is a linear combination
\mathrelationchaindisplay
{\relationchain
{ \varphi(e_j)
}
{ =} { \sum_{i = 1}^m a_{ij} e_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
and therefore the linear mapping is determined by the elements \mathl{a_{ij}}{.} Altogether, such a linear mapping is determined by the $mn$ elements
\mathcond {a_{ij}} {}
{1 \leq i \leq m} {}
{1 \leq j \leq n} {} {} {,}
from the field. We can write such a data set as a matrix. Because of
the determination theorem,
this holds for linear mappings in general, as soon as bases are fixed in both vector spaces.
\inputdefinition
{ }
{
Let $K$ denote a
field,
and let $V$ be an
$n$-dimensional vector space
with a
basis
\mathrelationchain
{\relationchain
{ \mathfrak{ v }
}
{ = }{ v_1 , \ldots , v_n
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let $W$ be an $m$-dimensional vector space with a basis
\mathrelationchain
{\relationchain
{ \mathfrak{ w }
}
{ = }{ w_1 , \ldots , w_m
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
For a
linear mapping
\mathdisp {\varphi \colon V \longrightarrow W} { , }
the
matrix
\mathrelationchaindisplay
{\relationchain
{ M
}
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)
}
{ =} { (a_{ij})_{ij}
}
{ } {
}
{ } {
}
}
{}{}{,}
where \mathl{a_{ij}}{} is the $i$-th
coordinate
of \mathl{\varphi(v_j )}{} with respect to the basis $\mathfrak{ w }$, is called the \definitionword {describing matrix for}{} $\varphi$ with respect to the bases.
For a matrix
\mathrelationchain
{\relationchain
{M
}
{ = }{ (a_{ij})_{ij}
}
{ \in }{ \operatorname{Mat}_{ m \times n } (K)
}
{ }{
}
{ }{
}
}
{}{}{,}
the linear mapping \mathl{\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)}{} is defined as the unique linear mapping determined by
\mathdisp {v_j \longmapsto \sum_{ i = 1 }^{ m } a_{ij} w_i} { }
in the sense of
Theorem 24.7.
}
For a linear mapping $\varphi \colon K^n \rightarrow K^m$, we always assume, unless stated otherwise, that everything refers to the standard bases. For a linear mapping $\varphi \colon V \rightarrow V$ from a vector space to itself \extrabracket {such a mapping is called an \keyword {endomorphism} {}} {} {,} one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix with respect to every basis.
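For the last claim, note that for every basis \mathl{\mathfrak{ v } = v_1 , \ldots , v_n}{} we have
\mathdisp {\operatorname{Id}_{ V } (v_j) = v_j = 0 \cdot v_1 + \cdots + 1 \cdot v_j + \cdots + 0 \cdot v_n} { , }
so that the $j$-th column of the describing matrix is the $j$-th standard vector.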
\inputfaktbeweisnichtvorgefuehrt
{Linear mapping/Matrix to bases/Correspondence/Fact}
{Theorem}
{}
{
\factsituation {Let $K$ be a field, and let $V$ be an
$n$-dimensional
vector space
with a
basis
\mathrelationchain
{\relationchain
{ \mathfrak{ v }
}
{ = }{ v_1 , \ldots , v_n
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let $W$ be an $m$-dimensional vector space with a basis
\mathrelationchain
{\relationchain
{ \mathfrak{ w }
}
{ = }{ w_1 , \ldots , w_m
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}}
\factconclusion {Then the mappings
\mathdisp {\varphi \longmapsto M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi) \text{ and } M \longmapsto \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)} { , }
defined in
the preceding definition,
are
inverse
to each other.}
\factextra {}
}
{
We show that both compositions are the identity. We start with a matrix
\mathrelationchain
{\relationchain
{ M
}
{ = }{ { \left( a_{ij} \right) }_{ij}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
and consider the matrix
\mathdisp {M^{ \mathfrak{ v } }_{ \mathfrak{ w } }( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M) )} { . }
Two matrices are equal when their entries coincide for every index pair \mathl{(i,j)}{.} We have
\mathrelationchainalign
{\relationchainalign
{(M^{ \mathfrak{ v } }_{ \mathfrak{ w } }( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M) ))_{ij}
}
{ =} { i-\text{th coordinate of } ( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M)) (v_j)
}
{ =} { i-\text{th coordinate of } \sum_{ k = 1 }^{ m } a_{kj} w_k
}
{ =} { a_{ij}
}
{ } {
}
}
{}
{}{.}
Now, let $\varphi$ be a linear mapping; we consider
\mathdisp {\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }( M^{ \mathfrak{ v } }_{ \mathfrak{ w } }(\varphi) )} { . }
Two linear mappings coincide, due to
Theorem 24.7
,
when they have the same values on the basis \mathl{v_1 , \ldots , v_n}{.} We have
\mathrelationchaindisplay
{\relationchain
{ (\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }( M^{ \mathfrak{ v } }_{ \mathfrak{ w } }(\varphi) ))(v_j)
}
{ =} { \sum_{ i = 1 }^{ m } (M^{ \mathfrak{ v } }_{ \mathfrak{ w } } (\varphi))_{ij} \, w_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Due to the definition, the coefficient \mathl{(M^{ \mathfrak{ v } }_{ \mathfrak{ w } } (\varphi))_{ij}}{} is the $i$-th coordinate of \mathl{\varphi(v_j)}{} with respect to the basis \mathl{w_1 , \ldots , w_m}{.} Hence, this sum equals \mathl{\varphi(v_j)}{.}
}
\inputexample{}
{
A
linear mapping
\mathdisp {\varphi \colon K^n \longrightarrow K^m} { }
is usually described by the matrix $M$ with respect to the
standard bases
on the left and on the right. The result of the matrix multiplication
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} y_{1 } \\ \vdots\\ y_{ m } \end{pmatrix}
}
{ =} { M \begin{pmatrix} x_{1 } \\ \vdots\\ x_{ n } \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
can be interpreted directly as a point in $K^m$. The $j$-th column of $M$ is the image of the $j$-th standard vector $e_j$.
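As a concrete illustration \extrabracket {with arbitrarily chosen entries} {} {,} consider
\mathdisp {M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} , \quad M \begin{pmatrix} 0 \\1 \\0 \end{pmatrix} = \begin{pmatrix} 2 \\5 \end{pmatrix}} { , }
so the image of $e_2$ under the corresponding linear mapping \mathl{K^3 \rightarrow K^2}{} is exactly the second column of $M$.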
}
\subtitle {Rotations}
A rotation of the real plane $\R^2$ around the origin by the angle $\alpha$ counterclockwise maps \mathl{\begin{pmatrix} 1 \\0 \end{pmatrix}}{} to \mathl{\begin{pmatrix} \cos \alpha \\ \sin \alpha \end{pmatrix}}{} and \mathl{\begin{pmatrix} 0 \\1 \end{pmatrix}}{} to \mathl{\begin{pmatrix} - \sin \alpha \\ \cos \alpha \end{pmatrix}}{.} Therefore, plane rotations are described in the following way.
\inputdefinition
{ }
{
A
linear mapping
\mathdisp {D(\alpha) \colon \R^2 \longrightarrow \R^2} { , }
which is given by a
\definitionword {rotation matrix}{}
\mathl{\begin{pmatrix}
\operatorname{cos} \, \alpha & - \operatorname{sin} \, \alpha \\
\operatorname{sin} \, \alpha & \operatorname{cos} \,\alpha
\end{pmatrix}}{}
\extrabracket {with some
\mathrelationchain
{\relationchain
{ \alpha
}
{ \in }{\R
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}} {} {} is called a \definitionword {rotation}{.}
}
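For example, for \mathl{\alpha = { \frac{ \pi }{ 2 } }}{} \extrabracket {a quarter turn counterclockwise} {} {,} the rotation matrix is
\mathdisp {\begin{pmatrix} \operatorname{cos} \, { \frac{ \pi }{ 2 } } & - \operatorname{sin} \, { \frac{ \pi }{ 2 } } \\ \operatorname{sin} \, { \frac{ \pi }{ 2 } } & \operatorname{cos} \, { \frac{ \pi }{ 2 } } \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}} { , }
and \mathl{D { \left( { \frac{ \pi }{ 2 } } \right) }}{} sends \mathl{\begin{pmatrix} 1 \\0 \end{pmatrix}}{} to \mathl{\begin{pmatrix} 0 \\1 \end{pmatrix}}{} and \mathl{\begin{pmatrix} 0 \\1 \end{pmatrix}}{} to \mathl{\begin{pmatrix} -1 \\0 \end{pmatrix}}{.}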
A \keyword {space rotation} {} is a linear mapping of the space $\R^3$ into itself that rotates the space around a rotation axis
\extrabracket {a line through the origin} {} {}
by a certain angle $\alpha$. If the vector
\mathrelationchain
{\relationchain
{ v_1
}
{ \neq }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
defines the axis, and $u_2$ and $u_3$ are orthogonal to $v_1$ and to each other, and both have length $1$, then the rotation is described by the matrix
\mathdisp {\begin{pmatrix}
1 & 0 & 0 \\
0 & \operatorname{cos} \, \alpha & - \operatorname{sin} \, \alpha \\
0 &\operatorname{sin} \, \alpha & \operatorname{cos} \,\alpha
\end{pmatrix}} { }
with respect to the basis \mathl{v_1,u_2,u_3}{.}
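For example, if the rotation axis is the $z$-axis, we may take \mathcor {} {v_1 = e_3} {and} {u_2 = e_1 ,\, u_3 = e_2} {;} with respect to the standard basis \mathl{e_1,e_2,e_3}{,} the same space rotation is then described by the matrix
\mathdisp {\begin{pmatrix}
\operatorname{cos} \, \alpha & - \operatorname{sin} \, \alpha & 0 \\
\operatorname{sin} \, \alpha & \operatorname{cos} \,\alpha & 0 \\
0 & 0 & 1
\end{pmatrix}} { , }
since the rotation takes place in the plane spanned by \mathcor {} {e_1} {and} {e_2} {,} while the third coordinate stays fixed.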
\subtitle {The kernel of a linear mapping}
\inputdefinition
{ }
{
Let $K$ denote a
field,
let
\mathcor {} {V} {and} {W} {}
denote
$K$-vector spaces,
and let
\mathdisp {\varphi \colon V \longrightarrow W} { }
denote a
$K$-linear mapping.
Then
\mathrelationchaindisplay
{\relationchain
{ \operatorname{kern} \varphi
}
{ \defeq} { \varphi^{-1}(0)
}
{ =} { { \left\{ v \in V \mid \varphi(v) = 0 \right\} }
}
{ } {
}
{ } {
}
}
{}{}{}
is called the \definitionword {kernel}{} of $\varphi$.
}
The kernel is a linear subspace of $V$: we have \mathl{\varphi(0) = 0}{,} and for \mathl{u,v \in \operatorname{kern} \varphi}{} and \mathl{s \in K}{,} linearity yields \mathl{\varphi(u+v) = \varphi(u) + \varphi(v) = 0}{} and \mathl{\varphi(s v) = s \varphi(v) = 0}{.}
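For example \extrabracket {with arbitrarily chosen coefficients} {} {,} for the linear mapping
\mathdisp {\varphi \colon \R^2 \longrightarrow \R
, (x,y) \longmapsto 2x+3y} { , }
the kernel is
\mathdisp {\operatorname{kern} \varphi = { \left\{ (x,y) \in \R^2 \mid 2x+3y = 0 \right\} } = { \left\{ s \cdot (3,-2) \mid s \in \R \right\} }} { , }
a line through the origin.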
The following \keyword {criterion for injectivity} {} is important.
\inputfactproof
{Linear mapping/Kernel/Injectivity/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ denote a
field,
let
\mathcor {} {V} {and} {W} {}
denote
$K$-vector spaces,
and let
\mathdisp {\varphi \colon V \longrightarrow W} { }
denote a
$K$-linear mapping.}
\factconclusion {Then $\varphi$ is
injective
if and only if
\mathrelationchain
{\relationchain
{ \operatorname{kern} \varphi
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
holds.}
\factextra {}
}
{
If the mapping is injective, then, since \mathl{\varphi(0) = 0}{,} there can exist, apart from
\mathrelationchain
{\relationchain
{0
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
no other vector
\mathrelationchain
{\relationchain
{v
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with
\mathrelationchain
{\relationchain
{ \varphi(v)
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
Hence,
\mathrelationchain
{\relationchain
{ \varphi^{-1}(0)
}
{ = }{ \{ 0 \}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
So suppose that
\mathrelationchain
{\relationchain
{ \operatorname{kern} \varphi
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let
\mathrelationchain
{\relationchain
{ v_1,v_2
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
be given with
\mathrelationchain
{\relationchain
{ \varphi(v_1)
}
{ = }{ \varphi(v_2)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
Then, due to linearity,
\mathrelationchaindisplay
{\relationchain
{\varphi(v_1 - v_2)
}
{ =} {\varphi(v_1) - \varphi(v_2)
}
{ =} { 0
}
{ } {
}
{ } {
}
}
{}{}{.}
Therefore,
\mathrelationchain
{\relationchain
{ v_1-v_2
}
{ \in }{ \operatorname{kern} \varphi
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and so
\mathrelationchain
{\relationchain
{v_1
}
{ = }{v_2
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}