Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 10/latex
\setcounter{section}{10}
\subtitle {Linear mappings}
We are interested in mappings between two vector spaces that respect the structures, that is, that are compatible with addition and with scalar multiplication.
\inputdefinition
{ }
{
Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
A
mapping
\mathdisp {\varphi \colon V \longrightarrow W} { }
is called a \definitionword {linear mapping}{} if the following two properties are fulfilled.
\enumerationtwo {
\mathrelationchain
{\relationchain
{ \varphi(u+v)
}
{ = }{ \varphi(u) + \varphi(v)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for all
\mathrelationchain
{\relationchain
{ u,v
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
} {
\mathrelationchain
{\relationchain
{ \varphi( s v)
}
{ = }{ s \varphi(v)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for all
\mathcor {} {s \in K} {and} {v \in V} {.}
}
Here, the first property is called \keyword {additivity} {} and the second property is called \keyword {compatibility with scaling} {.} When we want to stress the base field, we say that the mapping is $K$-linear. The identity
$\operatorname{Id}_{ V } \colon V \rightarrow V$,
the null mapping
$V \rightarrow 0$,
and the inclusion
\mathrelationchain
{\relationchain
{U
}
{ \subseteq }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
of a linear subspace are the simplest examples of linear mappings. Every linear mapping is compatible with arbitrary linear combinations, that is,
\mathrelationchaindisplay
{\relationchain
{ \varphi { \left( \sum_{i = 1}^n s_iv_i \right) }
}
{ =} { \sum_{i = 1}^n s_i \varphi(v_i)
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
see
Exercise 10.2
. Instead of linear mapping, we also say \keyword {homomorphism} {.}
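For instance, for two vectors
\mathcor {} {u} {and} {v} {}
and scalars
\mathcor {} {s} {and} {t} {}
in $K$, the two properties together yield
\mathdisp {\varphi( su+tv ) = \varphi(su) + \varphi(tv) = s \varphi(u) + t \varphi(v)} { , }
which is the case \mathl{n = 2}{} of this formula; the general case follows by induction on $n$.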
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Variables proporcionals.png} }
\end{center}
\imagetext {The graph of a linear mapping from $\R$ to $\R$; the mapping is determined by the proportionality factor $k$ alone.} }
\imagelicense { Variables proporcionals.png } {} {Coronellian} {Commons} {CC-by-sa 3.0} {}
\inputexample{
}
{
The easiest
linear mappings
are
\extrabracket {besides the null mapping} {} {}
the linear maps from $K$ to $K$. Such a linear mapping
\mathdisp {\varphi \colon K \longrightarrow K
, x \longmapsto \varphi(x)} { , }
is determined
\extrabracket {by
Theorem 10.10
,
but this is also directly clear} {} {}
by \mathl{\varphi(1)}{,} or by the value \mathl{\varphi(t)}{} for a single element
\mathcond {t \in K} {}
{t \neq 0} {}
{} {} {} {.}
In particular,
\mathrelationchain
{\relationchain
{ \varphi(x)
}
{ = }{ ax
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
with a uniquely determined
\mathrelationchain
{\relationchain
{a
}
{ \in }{K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
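Indeed, by compatibility with scaling, we have
\mathdisp {\varphi(x) = \varphi( x \cdot 1 ) = x \varphi( 1 )} { , }
so that
\mathl{a = \varphi(1)}{}
is the only possible factor.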
In the context of physics, for
\mathrelationchain
{\relationchain
{K
}
{ = }{\R
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and if there is a linear relation between two measurable quantities, we talk about \keyword {proportionality} {,} and $a$ is called the \keyword {proportionality factor} {.} In school, such a linear relation appears as the \quotationshort{rule of three}{.}
}
Many important functions, in particular from $\R$ to $\R$, are not linear. For example, the squaring \mathl{x \mapsto x^2}{,} the square root, the trigonometric functions, the exponential function, and the logarithm are not linear. But even for such more complicated functions there exist, in the framework of differential calculus, linear approximations, which help to understand these functions.
\inputexample{}
{
Let $K$ denote a
field,
and let $K^n$ be the
$n$-dimensional
standard space.
Then the $i$-th \keyword {projection} {,} that is, the
mapping
\mathdisp {K^n \longrightarrow K
, \left( x_1 , \, \ldots , \, x_{i-1} , \, x_i , \, x_{i+1} , \, \ldots , \, x_n \right) \longmapsto x_i} { , }
is a
$K$-linear mapping.
This follows immediately from componentwise addition and scalar multiplication on the standard space. The $i$-th projection is also called the $i$-th \keyword {coordinate function} {.}
}
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Korea-grocery shopping-01.jpg} }
\end{center}
\imagetext {If you buy ten times this stuff, you have to pay ten times as much. In the linear world, there is no rebate.} }
\imagelicense { Korea-grocery shopping-01.jpg } {L. W. Yang} {} {Commons} {CC-by-sa 2.0} {}
\inputexample{}
{
In a shop, there are $n$ different products to buy, and the price of the $i$-th product
\extrabracket {with respect to a certain unit} {} {}
is $a_i$. A purchase is described by the $n$-tuple
\mathdisp {\left( x_1 , \, x_2 , \, \ldots , \, x_n \right)} { , }
where $x_i$ is the amount of the $i$-th product bought. The price for the purchase is hence \mathl{\sum_{i=1}^n a_ix_i}{.} The price mapping
\mathdisp {\R^n \longrightarrow \R
, \left( x_1 , \, x_2 , \, \ldots , \, x_n \right) \longmapsto \sum_{i = 1}^n a_ix_i} { , }
is
linear.
This means, for example, that if we do first the purchase \mathl{\left( x_1 , \, x_2 , \, \ldots , \, x_n \right)}{} and then, a week later, the purchase \mathl{\left( y_1 , \, y_2 , \, \ldots , \, y_n \right)}{,} then the price of the two purchases together is the same as the price of the purchase \mathl{\left( x_1+y_1 , \, x_2+y_2 , \, \ldots , \, x_n+y_n \right)}{.}
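For instance, suppose that there are two products with the prices
\mathcor {} {a_1 = 2} {and} {a_2 = 5} {.}
The purchase \mathl{\left( 3 , \, 1 \right)}{} costs \mathl{2 \cdot 3 + 5 \cdot 1 = 11}{,} the purchase \mathl{\left( 1 , \, 4 \right)}{} costs \mathl{2 \cdot 1 + 5 \cdot 4 = 22}{,} and the combined purchase \mathl{\left( 4 , \, 5 \right)}{} costs \mathl{2 \cdot 4 + 5 \cdot 5 = 33 = 11 + 22}{,} in accordance with additivity.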
}
\inputexample{}
{
The mapping
\mathdisp {K^n \longrightarrow K^m
, \begin{pmatrix} s_1 \\\vdots\\ s_n \end{pmatrix} \longmapsto M \begin{pmatrix} s_1 \\\vdots\\ s_n \end{pmatrix} = \begin{pmatrix} \sum_{j = 1}^n a_{1j} s_j \\ \sum_{j = 1}^n a_{2j} s_j \\ \vdots\\ \sum_{j = 1}^n a_{mj} s_j \end{pmatrix}} { , }
which is given \extrabracket {see
Example 2.8
} {} {} by an
$m \times n$-matrix
\mathrelationchain
{\relationchain
{M
}
{ = }{ (a_{ij})_{1 \leq i \leq m,\, 1 \leq j \leq n}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
is
linear.
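For instance, the matrix
\mathdisp {M = \begin{pmatrix} 1 & 2 & 0 \\ 3 & 1 & 4 \end{pmatrix}} { }
sends the vector \mathl{\begin{pmatrix} 1 \\1\\ 2 \end{pmatrix}}{} to
\mathdisp {M \begin{pmatrix} 1 \\1\\ 2 \end{pmatrix} = \begin{pmatrix} 1 \cdot 1 + 2 \cdot 1 + 0 \cdot 2 \\ 3 \cdot 1 + 1 \cdot 1 + 4 \cdot 2 \end{pmatrix} = \begin{pmatrix} 3 \\12 \end{pmatrix}} { . }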
}
\inputdefinition
{ }
{
Let $V$ be a
vector space
over a
field
$K$. For
\mathrelationchain
{\relationchain
{ a
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
the
linear mapping
\mathdisp {\varphi \colon V \longrightarrow V
, v \longmapsto av} { , }
is called \definitionword {homothety}{}
\extrabracket {or \definitionword {dilation}{}} {} {.}
}
For a homothety, the domain space and the target space are the same. The number $a$ is called the \keyword {scaling factor} {.} For
\mathrelationchain
{\relationchain
{a
}
{ = }{1
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
we get the identity, for
\mathrelationchain
{\relationchain
{a
}
{ = }{-1
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
we get the \keyword {point reflection} {} at the origin.
\inputexample{}
{
Let \mathl{C^0(\R,\R)}{} denote the space of continuous functions from $\R$ to $\R$, and let \mathl{C^1(\R,\R)}{} denote the space of continuously differentiable functions. Then the mapping
\mathdisp {D \colon C^1(\R,\R) \longrightarrow C^0(\R,\R)
, f \longmapsto f'} { , }
which assigns to a function its derivative, is
linear.
In analysis, we prove that
\mathrelationchaindisplay
{\relationchain
{ (af+bg)'
}
{ =} { af' + bg'
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
holds for
\mathrelationchain
{\relationchain
{ a,b
}
{ \in }{\R
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
and a further function
\mathrelationchain
{\relationchain
{ g
}
{ \in }{ C^1(\R,\R)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
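For instance, for the functions
\mathcor {} {x^2} {and} {\sin x } {,}
we have
\mathdisp {D { \left( 3x^2 + \sin x \right) } = 6x + \cos x = 3 D { \left( x^2 \right) } + D { \left( \sin x \right) }} { . }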
}
\inputfactproof
{Linear mapping/Composition/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ denote a
field,
and let \mathl{U,V,W}{} denote vector spaces
over $K$. Suppose that
\mathdisp {\varphi : U \longrightarrow V \text{ and } \psi : V \longrightarrow W} { }
are
linear mappings.}
\factconclusion {Then also the
composition
\mathdisp {\psi \circ \varphi \colon U \longrightarrow W} { }
is a linear mapping.}
\factextra {}
{See Exercise 10.12 .}
\inputfactproof
{Linear mapping/Bijective/Inverse mapping linear/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
Let
\mathdisp {\varphi \colon V \longrightarrow W} { }
be a
bijective
linear map.}
\factconclusion {Then also the
inverse mapping
\mathdisp {\varphi^{-1} \colon W \longrightarrow V} { }
is linear.}
\factextra {}
{See Exercise 10.13 .}
\subtitle {Determination on a basis}
Behind the following statement \extrabracket {the \keyword {determination theorem} {}} {} {} lies the important principle that, in linear algebra \extrabracket {of finite-dimensional vector spaces} {} {,} the objects are determined by finitely many data.
\inputfactproof
{Linear mapping/Determination on basis/Fact}
{Theorem}
{}
{
\factsituation {Let $K$ be a
field,
and let
\mathcor {} {V} {and} {W} {}
be
$K$-vector spaces.
Let
\mathcond {v_i} {}
{i \in I} {}
{} {} {} {,}
denote a
basis
of $V$, and let
\mathcond {w_i} {}
{i \in I} {}
{} {} {} {,}
denote elements in $W$.}
\factconclusion {Then there exists a unique
linear mapping
\mathdisp {f \colon V \longrightarrow W} { }
with
\mathdisp {f(v_i)= w_i \text { for all } i \in I} { . }
}
\factextra {}
}
{
Since we want
\mathrelationchain
{\relationchain
{f(v_i)
}
{ = }{ w_i
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and since a
linear mapping
respects all
linear combinations,
that is
\extrafootnote {If $I$ is an infinite index set, then, in all sums considered here, only finitely many coefficients are not $0$} {.} {}
\mathrelationchaindisplay
{\relationchain
{ f { \left( \sum_{i \in I} s_i v_i \right) }
}
{ =} { \sum_{i \in I} s_i f { \left( v_i \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
holds, and since every vector
\mathrelationchain
{\relationchain
{ v
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
is such a linear combination, there can exist at most one such linear mapping.
We define now a
mapping
\mathdisp {f \colon V \longrightarrow W} { }
in the following way: we write every vector
\mathrelationchain
{\relationchain
{ v
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with the given basis as
\mathrelationchaindisplay
{\relationchain
{v
}
{ =} {\sum_{i \in I} s_i v_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
\extrabracket {where
\mathrelationchain
{\relationchain
{ s_i
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
for almost all
\mathrelationchain
{\relationchain
{ i
}
{ \in }{ I
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}} {} {}
and define
\mathrelationchaindisplay
{\relationchain
{ f(v)
}
{ \defeq} { \sum_{i \in I} s_i w_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Since the representation of $v$ as such a
linear combination
is unique, this mapping is well-defined. Also,
\mathrelationchain
{\relationchain
{ f(v_i)
}
{ = }{ w_i
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
is clear.
Linearity. For two vectors
\mathcor {} {u= \sum_{i \in I} s_iv_i} {and} {v= \sum_{i \in I} t_iv_i} {,}
we have
\mathrelationchainalign
{\relationchainalign
{ f { \left( u+v \right) }
}
{ =} { f { \left( { \left( \sum_{i \in I} s_iv_i \right) } + { \left( \sum_{i \in I} t_iv_i \right) } \right) }
}
{ =} { f { \left( \sum_{i \in I} { \left( s_i + t_i \right) } v_i \right) }
}
{ =} { \sum_{i \in I} (s_i + t_i) f { \left( v_i \right) }
}
{ =} { \sum_{i \in I} s_i f { \left( v_i \right) } + \sum_{i \in I} t_i f(v_i)
}
}
{
\relationchainextensionalign
{ =} { f { \left( \sum_{i \in I} s_iv_i \right) } + f { \left( \sum_{i \in I} t_iv_i \right) }
}
{ =} { f(u) +f(v)
}
{ } {}
{ } {}
}
{}{.}
The compatibility with scalar multiplication is shown in a similar way, see
Exercise 10.21
.
In particular, a linear mapping
$\varphi \colon K^n \rightarrow K^m$
is uniquely determined by \mathl{\varphi(e_1) , \ldots , \varphi(e_n)}{.}
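For instance, the assignment
\mathdisp {e_1 \longmapsto \begin{pmatrix} 1 \\2 \end{pmatrix} , \, e_2 \longmapsto \begin{pmatrix} 3 \\4 \end{pmatrix}} { }
determines the unique linear mapping
\mathdisp {\R^2 \longrightarrow \R^2
, \begin{pmatrix} x \\y \end{pmatrix} \longmapsto x \begin{pmatrix} 1 \\2 \end{pmatrix} + y \begin{pmatrix} 3 \\4 \end{pmatrix} = \begin{pmatrix} x+3y \\ 2x+4y \end{pmatrix}} { . }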
\inputexample{}
{
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Schrägbild eines Würfels.svg} }
\end{center}
\imagetext {} }
\imagelicense { Schrägbild eines Würfels.svg } {} {WissensDürster} {Commons} {public domain} {}
In many situations, a certain object
\extrabracket {like a cube} {} {}
in the space $\R^3$ is to be drawn in the plane $\R^2$. One possibility is to use a \keyword {projection} {.} This is a
linear mapping
\mathdisp {\R^3 \longrightarrow \R^2} { }
which is given
\extrabracket {with respect to the standard bases \mathl{e_1,e_2,e_3}{} and \mathl{f_1,f_2}{}} {} {}
by
\mathdisp {e_1 \longmapsto f_1, \, e_2 \longmapsto a f_1 + b f_2, \, e_3 \longmapsto f_2} { , }
where the coefficients \mathl{a,b}{} are usually chosen in the range \mathl{[ { \frac{ 1 }{ 3 } }, { \frac{ 1 }{ 2 } } ]}{.} Linearity has the effect that parallel lines are mapped to parallel lines
\extrabracket {unless they are mapped to a point} {} {.}
The point \mathl{(x,y,z)}{} is mapped to \mathl{(x+ a y,b y+z)}{.} The
image
of the object under such a linear mapping is called a \keyword {projection image} {.}
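For instance, for
\mathl{a = b = { \frac{ 1 }{ 2 } }}{,}
the corner \mathl{(1,1,1)}{} of the unit cube is mapped to \mathl{{ \left( { \frac{ 3 }{ 2 } } , \, { \frac{ 3 }{ 2 } } \right) }}{,} and the corner \mathl{(1,1,0)}{} is mapped to \mathl{{ \left( { \frac{ 3 }{ 2 } } , \, { \frac{ 1 }{ 2 } } \right) }}{.}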
}
\subtitle {Linear mappings and matrices}
\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Some_linear_maps_kpv_without_eigenspaces.svg} }
\end{center}
\imagetext {The effect of several linear mappings from $\R^2$ to itself, represented on a brain cell.} }
\imagelicense { Some linear maps kpv without eigenspaces.svg } {} {Dividuum} {Commons} {CC-by-sa 3.0} {}
A linear mapping
\mathdisp {\varphi \colon K^n \longrightarrow K^m} { }
is determined uniquely by the images
\mathcond {\varphi(e_j)} {}
{j = 1 , \ldots , n} {}
{} {} {} {,}
of the standard vectors, and every \mathl{\varphi(e_j)}{} is a linear combination
\mathrelationchaindisplay
{\relationchain
{ \varphi(e_j)
}
{ =} { \sum_{i = 1}^m a_{ij} e_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
and hence determined by the elements \mathl{a_{ij}}{.} Altogether, this means that such a linear mapping is given by the $mn$ elements
\mathcond {a_{ij}} {}
{1 \leq i \leq m} {}
{1 \leq j \leq n} {} {} {.}
Such a set of data can be written as a matrix. Due to
Theorem 10.10
,
this observation holds for all finite-dimensional vector spaces, as long as bases are fixed on the source space and on the target space of the linear mapping.
\inputdefinition
{ }
{
Let $K$ denote a
field,
and let $V$ be an
$n$-dimensional vector space
with a
basis
\mathrelationchain
{\relationchain
{ \mathfrak{ v }
}
{ = }{ v_1 , \ldots , v_n
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let $W$ be an $m$-dimensional vector space with a basis
\mathrelationchain
{\relationchain
{ \mathfrak{ w }
}
{ = }{ w_1 , \ldots , w_m
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
For a
linear mapping
\mathdisp {\varphi \colon V \longrightarrow W} { , }
the
matrix
\mathrelationchaindisplay
{\relationchain
{ M
}
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)
}
{ =} { (a_{ij})_{ij}
}
{ } {
}
{ } {
}
}
{}{}{,}
where \mathl{a_{ij}}{} is the $i$-th
coordinate
of \mathl{\varphi(v_j )}{} with respect to the basis $\mathfrak{ w }$, is called the \definitionword {describing matrix for}{} $\varphi$ with respect to the bases.
For a matrix
\mathrelationchain
{\relationchain
{M
}
{ = }{ (a_{ij})_{ij}
}
{ \in }{ \operatorname{Mat}_{ m \times n } (K)
}
{ }{
}
{ }{
}
}
{}{}{,}
the linear mapping \mathl{\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)}{} determined by
\mathdisp {v_j \longmapsto \sum_{ i = 1 }^{ m } a_{ij} w_i} { }
in the sense of
Theorem 10.10
is called the \definitionword {linear mapping determined by the matrix}{} $M$ with respect to the given bases.
}
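For instance, consider the linear mapping
\mathdisp {\varphi \colon \R^2 \longrightarrow \R^2
, (x,y) \longmapsto (y,x)} { . }
With respect to the standard basis on both sides, its describing matrix is \mathl{\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}}{.} With respect to the basis
\mathcor {} {v_1 = (1,1)} {and} {v_2 = (1,-1)} {}
on both sides, we have
\mathcor {} {\varphi(v_1) = v_1} {and} {\varphi(v_2) = - v_2} {,}
so the describing matrix is \mathl{\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}}{.} The describing matrix depends, therefore, on the chosen bases.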
For a linear mapping $\varphi \colon K^n \rightarrow K^m$, we always assume that everything is with respect to the standard bases, unless otherwise stated. For a linear mapping $\varphi \colon V \rightarrow V$ from a vector space to itself \extrabracket {such a mapping is called an \keyword {endomorphism} {}} {} {,} one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix, with respect to every basis.
If
\mathrelationchain
{\relationchain
{V
}
{ = }{W
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
then we are usually interested in the describing matrix with respect to a single basis $\mathfrak{ v }$ of $V$, used on both sides.
\inputexample{}
{
Let $V$ denote a vector space with
bases
\mathcor {} {\mathfrak{ v }} {and} {\mathfrak{ w }} {.}
If we consider the identity
\mathdisp {\operatorname{Id} \colon V \longrightarrow V} { }
with respect to the basis $\mathfrak{ v }$ on the source and the basis $\mathfrak{ w }$ on the target, we get, because of
\mathrelationchaindisplay
{\relationchain
{
\operatorname{Id} (v_j)
}
{ =} { v_j
}
{ =} { \sum a_{ij} w_i
}
{ } {
}
{ } {
}
}
{}{}{,}
directly
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } (
\operatorname{Id} )
}
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
This means that the describing matrix of the identity is the
transformation matrix
for the base change from $\mathfrak{ v }$ to $\mathfrak{ w }$.
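For instance, for
\mathl{V = \R^2}{}
with the standard basis
\mathl{\mathfrak{ v } = e_1 ,\, e_2}{}
and the basis
\mathl{\mathfrak{ w } = (1,1) ,\, (1,-1)}{,}
we have
\mathcor {} {e_1 = { \frac{ 1 }{ 2 } } (1,1) + { \frac{ 1 }{ 2 } } (1,-1)} {and} {e_2 = { \frac{ 1 }{ 2 } } (1,1) - { \frac{ 1 }{ 2 } } (1,-1)} {,}
so that
\mathdisp {M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \operatorname{Id} ) = \begin{pmatrix} { \frac{ 1 }{ 2 } } & { \frac{ 1 }{ 2 } } \\ { \frac{ 1 }{ 2 } } & - { \frac{ 1 }{ 2 } } \end{pmatrix}} { . }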
}
\inputfactproof
{Linear mapping/Matrix/Commutative diagram/Fact}
{Lemma}
{}
{
\factsituation {Let $K$ denote a
field
and let $V$ denote an
$n$-dimensional
vector space
with a
basis
\mathrelationchain
{\relationchain
{ \mathfrak{ v }
}
{ = }{ v_1 , \ldots , v_n
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
Let $W$ be an $m$-dimensional vector space with a basis
\mathrelationchain
{\relationchain
{ \mathfrak{ w }
}
{ = }{ w_1 , \ldots , w_m
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let
\mathdisp {\Psi_ \mathfrak{ v } \colon K^n \longrightarrow V} { }
and
\mathdisp {\Psi_ \mathfrak{ w } \colon K^m \longrightarrow W} { }
be the corresponding mappings. Let
\mathdisp {\varphi \colon V \longrightarrow W} { }
denote a
linear mapping
with
describing matrix
\mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)}{.}}
\factconclusion {Then
\mathrelationchaindisplay
{\relationchain
{ \varphi \circ \Psi_ \mathfrak{ v }
}
{ =} { \Psi_ \mathfrak{ w } \circ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
holds, that is, the diagram
\mathdisp {\begin{matrix} K^n & \stackrel{ \Psi_ \mathfrak{ v } }{\longrightarrow} & V & \\ \!\!\!\!\! M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi) \downarrow & & \downarrow \varphi \!\!\!\!\! & \\ K^m & \stackrel{ \Psi_ \mathfrak{ w } }{\longrightarrow} & W & \!\!\!\!\! \\ \end{matrix}} { }
commutes.}
\factextra {For a vector
\mathrelationchain
{\relationchain
{v
}
{ \in }{V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
we can compute \mathl{\varphi(v)}{} by determining the coefficient tuple of $v$ with respect to the basis $\mathfrak{ v }$, applying the matrix \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)}{} and determining for the resulting $m$-tuple the corresponding vector with respect to $\mathfrak{ w }$.}
{See Exercise 10.27 .}
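For instance, consider again the mapping \mathl{(x,y) \mapsto (y,x)}{} on $\R^2$ with the basis \mathl{(1,1) ,\, (1,-1)}{} on both sides and describing matrix \mathl{\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}}{.} The coefficient tuple of \mathl{(3,5)}{} is \mathl{(4,-1)}{,} since \mathl{(3,5) = 4 (1,1) - (1,-1)}{.} The matrix sends \mathl{(4,-1)}{} to \mathl{(4,1)}{,} and
\mathdisp {4 (1,1) + (1,-1) = (5,3) = \varphi(3,5)} { , }
in accordance with the commutativity of the diagram.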
\inputfactproof
{Linear mapping/Matrix to bases/Correspondence/Fact}
{Theorem}
{}
{
\factsituation {Let $K$ be a field, and let $V$ be an
$n$-dimensional
vector space
with a
basis
\mathrelationchain
{\relationchain
{ \mathfrak{ v }
}
{ = }{ v_1 , \ldots , v_n
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
and let $W$ be an $m$-dimensional vector space with a basis
\mathrelationchain
{\relationchain
{ \mathfrak{ w }
}
{ = }{ w_1 , \ldots , w_m
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}}
\factconclusion {Then the mappings
\mathdisp {\varphi \longmapsto M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi) \text{ and } M \longmapsto \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } } (M)} { , }
defined in the definition above,
are
inverse
to each other.}
\factextra {}
}
{
We show that both compositions are the identity. We start with a matrix
\mathrelationchain
{\relationchain
{ M
}
{ = }{ { \left( a_{ij} \right) }_{ij}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
and consider the matrix
\mathdisp {M^{ \mathfrak{ v } }_{ \mathfrak{ w } }( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M) )} { . }
Two matrices are equal when their entries coincide for every index pair \mathl{(i,j)}{.} We have
\mathrelationchainalign
{\relationchainalign
{(M^{ \mathfrak{ v } }_{ \mathfrak{ w } }( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M) ))_{ij}
}
{ =} { i-\text{th coordinate of } ( \varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }(M)) (v_j)
}
{ =} { i-\text{th coordinate of } \sum_{ k = 1 }^{ m } a_{kj} w_k
}
{ =} { a_{ij}
}
{ } {
}
}
{}
{}{.}
Now, let $\varphi$ be a linear mapping; we consider
\mathdisp {\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }( M^{ \mathfrak{ v } }_{ \mathfrak{ w } }(\varphi) )} { . }
Two linear mappings coincide, due to
Theorem 10.10
,
when they have the same values on the basis \mathl{v_1 , \ldots , v_n}{.} We have
\mathrelationchaindisplay
{\relationchain
{ (\varphi^{ \mathfrak{ v } }_{ \mathfrak{ w } }( M^{ \mathfrak{ v } }_{ \mathfrak{ w } }(\varphi) ))(v_j)
}
{ =} { \sum_{ i = 1 }^{ m } (M^{ \mathfrak{ v } }_{ \mathfrak{ w } } (\varphi))_{ij} \, w_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Due to the definition, the coefficient \mathl{(M^{ \mathfrak{ v } }_{ \mathfrak{ w } } (\varphi))_{ij}}{} is the $i$-th coordinate of \mathl{\varphi(v_j)}{} with respect to the basis \mathl{w_1 , \ldots , w_m}{.} Hence, this sum equals \mathl{\varphi(v_j)}{.}
We denote the set of all linear mappings from $V$ to $W$ by \mathl{\operatorname{Hom}_{ K } { \left( V , W \right) }}{.}
Theorem 10.15
means that the mapping
\mathdisp {\operatorname{Hom}_{ K } { \left( V , W \right) } \longrightarrow \operatorname{Mat}_{ m \times n } (K)
, \varphi \longmapsto M^{ \mathfrak{ v } }_{ \mathfrak{ w } } ( \varphi)} { , }
is bijective with the given inverse mapping. A linear mapping
\mathdisp {\varphi \colon V \longrightarrow V} { }
is called an \keyword {endomorphism} {.} The set of all endomorphisms on $V$ is denoted by \mathl{\operatorname{End}_{ K } { \left( V \right) }}{.}
\subtitle {Isomorphic vector spaces}
\inputdefinition
{ }
{
Let $K$ denote a
field
and let
\mathcor {} {V} {and} {W} {}
denote
$K$-vector spaces.
A
bijective
linear mapping
\mathdisp {\varphi \colon V \longrightarrow W} { }
is called an \definitionword {isomorphism}{.}
}
An isomorphism from $V$ to $V$ is called an \keyword {automorphism} {.}
\inputdefinition
{ }
{
Let $K$ denote a field. Two $K$-vector spaces \mathcor {} {V} {and} {W} {} are called \definitionword {isomorphic}{} if there exists an isomorphism
from $V$ to $W$.}
\inputfactproof
{Vector space/Isomorphic iff equal dimension/Fact}
{Theorem}
{}
{
\factsituation {Let $K$ denote a
field
and let
\mathcor {} {V} {and} {W} {}
denote
finite-dimensional
$K$-vector spaces.}
\factconclusion {Then
\mathcor {} {V} {and} {W} {}
are
isomorphic
to each other if and only if their
dimensions
coincide.}
\factextra {In particular, an $n$-dimensional $K$-vector space is isomorphic to $K^n$.}
{See Exercise 10.32 .}
\inputremark {}
{
An
isomorphism
between an
$n$-dimensional
vector space
$V$ and the standard space $K^n$ is essentially equivalent to the choice of a
basis
of $V$. For a basis
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ v }
}
{ =} { v_1 , \ldots , v_n
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
we associate the linear mapping
\mathdisp {\Psi_ \mathfrak{ v } \colon K^n \longrightarrow V
, e_i \longmapsto v_i} { , }
which maps from the standard space to the vector space by sending the $i$-th
standard vector
to the $i$-th basis vector of the given basis. This defines a unique linear mapping due to
Theorem 10.10
.
Due to
Exercise 10.23
,
this mapping is
bijective.
It is just the mapping
\mathdisp {(a_1 , \ldots , a_{ n }) \longmapsto \sum_{i=1}^n a_iv_i} { . }
The
inverse mapping
\mathdisp {x = \Psi_ \mathfrak{ v }^{-1} \colon V \longrightarrow K^n} { }
is also linear, and it is called the \keyword {coordinate mapping} {} for this basis. The $i$-th component of this mapping, that is, the composed mapping
\mathdisp {x_i = p_i \circ x \colon V \longrightarrow K
, v \longmapsto (\Psi_ \mathfrak{ v }^{-1}(v))_i} { , }
is called the $i$-th \keyword {coordinate function} {.} It is denoted by \mathl{v_i^*}{.} It assigns to a vector
\mathrelationchain
{\relationchain
{ v
}
{ \in }{ V
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with the unique representation
\mathrelationchaindisplay
{\relationchain
{v
}
{ =} { \sum_{i = 1}^n \lambda_i v_i
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
the coordinate $\lambda_i$. Note that the linear mapping \mathl{v_i^*}{} depends on the basis, not only on the vector $v_i$.
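For instance, for the basis
\mathcor {} {v_1 = (1,1)} {and} {v_2 = (1,-1)} {}
of $\R^2$, we have
\mathdisp {(x,y) = { \frac{ x+y }{ 2 } } v_1 + { \frac{ x-y }{ 2 } } v_2} { , }
so the coordinate functions are
\mathcor {} {v_1^* (x,y) = { \frac{ x+y }{ 2 } }} {and} {v_2^* (x,y) = { \frac{ x-y }{ 2 } }} {,}
which differ from the coordinate functions for the standard basis.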
If an
isomorphism
\mathdisp {\Psi \colon K^n \longrightarrow V} { }
is given, then the images
\mathconddisplay {\Psi(e_i)} {}
{i=1 , \ldots , n} {}
{} {} {} {,}
form a basis of $V$.
}