[Figure: The effect of several linear mappings from $\mathbb{R}^2$ to itself, represented on a brain cell.]
A linear mapping $\varphi\colon K^n\longrightarrow K^m$ is determined uniquely by the images $\varphi(e_j)$, $j=1,\ldots,n$, of the standard vectors, and every $\varphi(e_j)$ is a linear combination
$$\varphi(e_j)=\sum_{i=1}^m a_{ij}e_i,$$
and hence determined by the elements $a_{ij}$. Altogether, this means that such a linear mapping is given by the $mn$ elements $a_{ij}$, $1\leq i\leq m$, $1\leq j\leq n$. Such a set of data can be written as a matrix. Due to the fact above, this observation holds for all finite-dimensional vector spaces, as long as bases are fixed on the source space and on the target space of the linear mapping.
Let $K$ denote a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v}=v_1,\ldots,v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w}=w_1,\ldots,w_m$. For a linear mapping $\varphi\colon V\longrightarrow W$, the matrix
$$M=M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)=(a_{ij})_{ij},$$
where $a_{ij}$ is the $i$-th coordinate of $\varphi(v_j)$ with respect to the basis $\mathfrak{w}$, is called the describing matrix for $\varphi$ with respect to the bases.
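As a small concrete illustration (the map $\varphi(x,y)=(x+2y,\,3x)$ is a hypothetical example, not taken from the text), the describing matrix with respect to the standard bases can be computed column by column: the $j$-th column consists of the coordinates of $\varphi(e_j)$.

```python
def phi(v):
    # a hypothetical example of a linear mapping R^2 -> R^2
    x, y = v
    return (x + 2 * y, 3 * x)

def describing_matrix(phi, n, m):
    # column j of the describing matrix is the image of the
    # j-th standard vector e_j (standard bases on both sides)
    cols = []
    for j in range(n):
        e_j = tuple(1 if i == j else 0 for i in range(n))
        cols.append(phi(e_j))
    # entry a_ij = i-th coordinate of phi(e_j)
    return [[cols[j][i] for j in range(n)] for i in range(m)]

M = describing_matrix(phi, 2, 2)
print(M)  # [[1, 2], [3, 0]]
```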
For a matrix $M=(a_{ij})_{ij}\in \operatorname{Mat}_{m\times n}(K)$, the linear mapping $\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M)$ determined by
$$v_j\longmapsto \sum_{i=1}^m a_{ij}w_i$$
in the sense of the fact above, is called the linear mapping determined by the matrix $M$.
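Conversely, in coordinates the mapping determined by a matrix is simply matrix–vector multiplication. A minimal sketch (standard bases; the matrix entries are hypothetical):

```python
def mapping_from_matrix(M):
    # returns the linear map K^n -> K^m given in coordinates by M,
    # i.e. plain matrix-vector multiplication
    def phi(v):
        return tuple(
            sum(row[j] * v[j] for j in range(len(v))) for row in M
        )
    return phi

phi = mapping_from_matrix([[1, 2], [3, 0]])
print(phi((1, 1)))  # (3, 3)
```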
For a linear mapping $\varphi\colon K^n\rightarrow K^m$, we always assume that everything refers to the standard bases, unless stated otherwise. For a linear mapping $\varphi\colon V\rightarrow V$ from a vector space to itself (a so-called endomorphism), one usually takes the same basis on both sides. The identity on a vector space of dimension $n$ is described by the identity matrix, with respect to every basis. If $V=W$, then we are usually interested in the describing matrix with respect to a single basis $\mathfrak{v}$ of $V$.
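The claim about the identity can be checked numerically: since $\operatorname{id}(v_j)=v_j$, the $\mathfrak{v}$-coordinates of each image are exactly $e_j$. The following sketch verifies this for one (hypothetical) non-standard basis of $\mathbb{R}^2$, solving for coordinates with Cramer's rule:

```python
def coords_2d(basis, v):
    # coordinates c with c0*b0 + c1*b1 = v, via Cramer's rule (2x2 case)
    (a, c), (b, d) = basis          # basis vectors as columns
    det = a * d - b * c
    x, y = v
    return ((x * d - b * y) / det, (a * y - c * x) / det)

basis = [(1, 1), (1, -1)]           # a hypothetical non-standard basis
# column j of the describing matrix of the identity:
# the coordinates of v_j itself with respect to the basis
M = [[coords_2d(basis, b)[i] for b in basis] for i in range(2)]
print(M == [[1, 0], [0, 1]])  # True
```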
Let $K$ denote a field, and let $V$ denote an $n$-dimensional vector space with a basis $\mathfrak{v}=v_1,\ldots,v_n$. Let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w}=w_1,\ldots,w_m$, and let
$$\Psi_{\mathfrak{v}}\colon K^n\longrightarrow V \quad\text{and}\quad \Psi_{\mathfrak{w}}\colon K^m\longrightarrow W$$
be the corresponding mappings. Let $\varphi\colon V\longrightarrow W$ denote a linear mapping with describing matrix $M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)$. Then
$$\varphi\circ\Psi_{\mathfrak{v}}=\Psi_{\mathfrak{w}}\circ M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)$$
holds; that is, the diagram
$$\begin{matrix}K^n&{\stackrel{\Psi_{\mathfrak{v}}}{\longrightarrow}}&V\\M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)\downarrow&&\downarrow\varphi\\K^m&{\stackrel{\Psi_{\mathfrak{w}}}{\longrightarrow}}&W\end{matrix}$$
commutes. For a vector $v\in V$, we can compute $\varphi(v)$ by determining the coefficient tuple of $v$ with respect to the basis $\mathfrak{v}$, applying the matrix $M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)$, and then determining for the resulting $m$-tuple the corresponding vector with respect to $\mathfrak{w}$.
Proof. $\Box$
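The commutativity $\varphi\circ\Psi_{\mathfrak{v}}=\Psi_{\mathfrak{w}}\circ M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)$ can be checked numerically. The following sketch uses a hypothetical example ($V=W=\mathbb{R}^2$, $\varphi(x,y)=(x+2y,\,3x)$, a non-standard basis on the source, the standard basis on the target):

```python
def phi(v):
    # a hypothetical example of a linear mapping R^2 -> R^2
    x, y = v
    return (x + 2 * y, 3 * x)

basis_v = [(1, 1), (1, -1)]     # non-standard basis on the source
basis_w = [(1, 0), (0, 1)]      # standard basis on the target

def Psi(basis):
    # Psi maps a coefficient tuple to the corresponding vector
    def psi(coeffs):
        dim = len(basis[0])
        return tuple(
            sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)
        )
    return psi

Psi_v, Psi_w = Psi(basis_v), Psi(basis_w)

# describing matrix: column j holds the w-coordinates of phi(v_j);
# since w is the standard basis, the coordinates are the entries themselves
M = [[phi(b)[i] for b in basis_v] for i in range(2)]

def matvec(M, x):
    return tuple(
        sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))
    )

c = (2, 5)  # an arbitrary coefficient tuple in K^n
print(phi(Psi_v(c)), Psi_w(matvec(M, c)))  # (1, 21) (1, 21)
```

Both sides of the diagram send the coefficient tuple to the same vector, as the theorem asserts.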
Let $K$ be a field, and let $V$ be an $n$-dimensional vector space with a basis $\mathfrak{v}=v_1,\ldots,v_n$, and let $W$ be an $m$-dimensional vector space with a basis $\mathfrak{w}=w_1,\ldots,w_m$. Then the mappings
$$\varphi\longmapsto M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)\quad\text{ and }\quad M\longmapsto \varphi_{\mathfrak{w}}^{\mathfrak{v}}(M),$$
defined as above, are inverse to each other.
We show that both compositions are the identity. We start with a matrix $M=(a_{ij})_{ij}$ and consider the matrix
$$M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M)).$$
Two matrices are equal when their entries coincide for every index pair $(i,j)$. We have
$$\begin{aligned}(M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M)))_{ij}&=i\text{-th coordinate of }(\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M))(v_{j})\\&=i\text{-th coordinate of }\sum_{i=1}^{m}a_{ij}w_{i}\\&=a_{ij}.\end{aligned}$$
Now, let $\varphi$ be a linear mapping; we consider
$$\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)).$$
Two linear mappings coincide, due to the fact above, when they have the same values on the basis $v_1,\ldots,v_n$. We have
$$(\varphi_{\mathfrak{w}}^{\mathfrak{v}}(M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi)))(v_{j})=\sum_{i=1}^{m}(M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi))_{ij}\,w_{i}.$$
By definition, the coefficient $(M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi))_{ij}$ is the $i$-th coordinate of $\varphi(v_j)$ with respect to the basis $w_1,\ldots,w_m$. Hence, this sum equals $\varphi(v_j)$. $\Box$
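The round trip in the first half of the proof (matrix $\to$ mapping $\to$ matrix) can be spot-checked numerically. A sketch with standard bases and hypothetical entries:

```python
def mapping_from_matrix(M):
    # the linear map determined by M (standard bases), i.e. v -> M v
    def phi(v):
        return tuple(
            sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))
        )
    return phi

def matrix_from_mapping(phi, n, m):
    # entry a_ij is the i-th coordinate of phi(e_j)
    images = [
        phi(tuple(1 if i == j else 0 for i in range(n))) for j in range(n)
    ]
    return [[images[j][i] for j in range(n)] for i in range(m)]

M = [[1, 2, 0], [3, -1, 4]]     # a 2x3 matrix, i.e. a map K^3 -> K^2
print(matrix_from_mapping(mapping_from_matrix(M), 3, 2) == M)  # True
```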
We denote the set of all linear mappings from $V$ to $W$ by $\operatorname{Hom}_K(V,W)$. The fact above means that the mapping
$$\operatorname{Hom}_K(V,W)\longrightarrow \operatorname{Mat}_{m\times n}(K),\;\varphi\longmapsto M_{\mathfrak{w}}^{\mathfrak{v}}(\varphi),$$
is bijective, with the given inverse mapping. A linear mapping $\varphi\colon V\longrightarrow V$ is called an endomorphism. The set of all endomorphisms on $V$ is denoted by $\operatorname{End}_K(V)$.