# Linear mapping/Matrix/Relation/Section

As established in an earlier fact, a linear mapping

${\displaystyle \varphi \colon K^{n}\longrightarrow K^{m}}$

is determined by the images ${\displaystyle {}\varphi (e_{j})}$, ${\displaystyle {}j=1,\ldots ,n}$, of the standard vectors. Every ${\displaystyle {}\varphi (e_{j})}$ is a linear combination

${\displaystyle {}\varphi (e_{j})=\sum _{i=1}^{m}a_{ij}e_{i}\,,}$

and therefore the linear mapping is determined by the ${\displaystyle {}mn}$ elements ${\displaystyle {}a_{ij}}$, ${\displaystyle {}1\leq i\leq m}$, ${\displaystyle {}1\leq j\leq n}$, from the field. We can write such a data set as a matrix. By the determination theorem, the same holds for linear maps between arbitrary vector spaces, as soon as bases are fixed in both of them.
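As a small sketch (the numeric entries and names are illustrative assumptions, not from the text), this determination can be made concrete: storing only the images ${\displaystyle {}\varphi (e_{j})}$ suffices to evaluate ${\displaystyle {}\varphi }$ on every vector.

```python
from fractions import Fraction as F

# Illustrative example over the field K = Q: a linear map phi : K^2 -> K^3
# is fixed by the images of the standard vectors e_1, e_2.
phi_e = [
    [F(1), F(0), F(2)],   # phi(e_1)
    [F(3), F(-1), F(5)],  # phi(e_2)
]

def phi(x):
    # By linearity, phi(x) = x_1 * phi(e_1) + x_2 * phi(e_2);
    # the entry a_ij is the i-th coordinate of phi(e_j).
    return [sum(x[j] * phi_e[j][i] for j in range(2)) for i in range(3)]

print(phi([F(2), F(1)]))  # 2*phi(e_1) + phi(e_2), i.e. the vector (5, -1, 9)
```

The two lists in `phi_e` are exactly the columns of the describing matrix.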

## Definition

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ be an ${\displaystyle {}n}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$, and let ${\displaystyle {}W}$ be an ${\displaystyle {}m}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{m}}$.

For a linear mapping

${\displaystyle \varphi \colon V\longrightarrow W,}$

the matrix

${\displaystyle {}M=M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi )=(a_{ij})_{ij}\,,}$

where ${\displaystyle {}a_{ij}}$ is the ${\displaystyle {}i}$-th coordinate of ${\displaystyle {}\varphi (v_{j})}$ with respect to the basis ${\displaystyle {}{\mathfrak {w}}}$, is called the describing matrix for ${\displaystyle {}\varphi }$ with respect to the bases.

For a matrix ${\displaystyle {}M=(a_{ij})_{ij}\in \operatorname {Mat} _{m\times n}(K)}$, the linear mapping ${\displaystyle {}\varphi _{\mathfrak {w}}^{\mathfrak {v}}(M)}$ determined by

${\displaystyle v_{j}\longmapsto \sum _{i=1}^{m}a_{ij}w_{i}}$

in the sense of the determination theorem, is called the linear mapping determined by the matrix ${\displaystyle {}M}$.

For a linear mapping ${\displaystyle {}\varphi \colon K^{n}\rightarrow K^{m}}$, we always work with respect to the standard bases, unless otherwise stated. For a linear mapping ${\displaystyle {}\varphi \colon V\rightarrow V}$ from a vector space to itself (a so-called endomorphism), one usually takes the same basis on both sides. The identity on a vector space of dimension ${\displaystyle {}n}$ is described by the identity matrix, with respect to every basis.
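The last claim can be checked directly from the definition: the identity sends every basis vector to itself, so

```latex
\operatorname{Id}(v_j) = v_j = \sum_{i=1}^{n} \delta_{ij}\, v_i ,
\qquad \text{where } \delta_{ij} =
\begin{cases} 1, & i = j, \\ 0, & i \neq j, \end{cases}
```

hence ${\displaystyle {}a_{ij}=\delta _{ij}}$, which is the identity matrix, independently of the chosen basis.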

## Theorem

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ be an ${\displaystyle {}n}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$, and let ${\displaystyle {}W}$ be an ${\displaystyle {}m}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{m}}$. Then the mappings

${\displaystyle \varphi \longmapsto M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi ){\text{ and }}M\longmapsto \varphi _{\mathfrak {w}}^{\mathfrak {v}}(M),}$

defined in the definition above, are inverse to each other.
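For the standard bases of ${\displaystyle {}K^{n}}$ and ${\displaystyle {}K^{m}}$, the round trip can be sketched as follows (function names and the sample matrix are illustrative assumptions): extracting the matrix of a map and rebuilding the map from a matrix undo each other.

```python
def matrix_of(phi, n):
    # Column j of the describing matrix is phi(e_j) (standard bases).
    cols = [phi([1 if k == j else 0 for k in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def map_of(M):
    # The linear map x -> Mx determined by the matrix M = (a_ij).
    return lambda x: [sum(a * xj for a, xj in zip(row, x)) for row in M]

M = [[1, 3], [0, -1], [2, 5]]        # a 3x2 matrix (integer entries for brevity)
assert matrix_of(map_of(M), 2) == M  # M -> phi -> M recovers M
```

The other direction, ${\displaystyle {}\varphi \mapsto M\mapsto \varphi }$, holds because both maps agree on the standard vectors and are linear.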

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$

## Example

A linear mapping

${\displaystyle \varphi \colon K^{n}\longrightarrow K^{m}}$

is usually described by the matrix ${\displaystyle {}M}$ with respect to the standard bases on the left and on the right. The result of the matrix multiplication

${\displaystyle {}{\begin{pmatrix}y_{1}\\\vdots \\y_{m}\end{pmatrix}}=M{\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}\,}$

can be interpreted directly as a point in ${\displaystyle {}K^{m}}$. The ${\displaystyle {}j}$-th column of ${\displaystyle {}M}$ is the image of the ${\displaystyle {}j}$-th standard vector ${\displaystyle {}e_{j}}$.
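A short sketch of this interpretation (the matrix entries are assumed for illustration): the product ${\displaystyle {}Mx}$ computed entrywise, with the columns of ${\displaystyle {}M}$ recovered as the images of the standard vectors.

```python
def matvec(M, x):
    # y_i = sum_j a_ij x_j: the i-th entry pairs row i of M with x.
    return [sum(a * xj for a, xj in zip(row, x)) for row in M]

M = [[2, 0],
     [1, 1],
     [0, 3]]  # describes a map K^2 -> K^3 w.r.t. the standard bases

print(matvec(M, [1, 0]))  # image of e_1 = first column of M: [2, 1, 0]
print(matvec(M, [4, 5]))  # a point in K^3: [8, 9, 15]
```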