# Linear mapping/Diagonalizable/Eigentheory/Section

The restriction of a linear mapping to an eigenspace is the homothety with the corresponding eigenvalue, hence a very simple linear mapping. If there are many eigenvalues with high-dimensional eigenspaces, then the linear mapping as a whole is simple in a certain sense. The extreme case is given by the so-called diagonalizable mappings.

For a diagonal matrix

${\displaystyle {\begin{pmatrix}d_{1}&0&\cdots &\cdots &0\\0&d_{2}&0&\cdots &0\\\vdots &\ddots &\ddots &\ddots &\vdots \\0&\cdots &0&d_{n-1}&0\\0&\cdots &\cdots &0&d_{n}\end{pmatrix}},}$

the characteristic polynomial is just

${\displaystyle (X-d_{1})(X-d_{2})\cdots (X-d_{n}).}$

If the number ${\displaystyle {}d}$ occurs ${\displaystyle {}k}$ times as a diagonal entry, then the linear factor ${\displaystyle {}X-d}$ occurs with exponent ${\displaystyle {}k}$ in the factorization of the characteristic polynomial. This is also true for an upper triangular matrix. But in the case of a diagonal matrix, we can also read off the eigenspaces immediately, see example. The eigenspace for ${\displaystyle {}d}$ consists of all linear combinations of the standard vectors ${\displaystyle {}e_{i}}$ for which ${\displaystyle {}d_{i}}$ equals ${\displaystyle {}d}$. In particular, the dimension of the eigenspace equals the number of times that ${\displaystyle {}d}$ occurs as a diagonal entry. Thus, for a diagonal matrix, the algebraic and the geometric multiplicities coincide.
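As a quick numerical sketch of these observations (the diagonal matrix below, with entry 2 occurring twice and entry 7 once, is a hypothetical choice, not taken from the text):

```python
import numpy as np

# Hypothetical diagonal matrix: the entry 2 occurs twice, the entry 7 once.
D = np.diag([2.0, 2.0, 7.0])

# The characteristic polynomial is (X - 2)(X - 2)(X - 7); its roots are
# the diagonal entries, counted with multiplicity.
char_poly = np.poly(D)
roots = np.sort(np.roots(char_poly))

# The eigenvalues of D are exactly the diagonal entries. The eigenspace
# for 2 is spanned by e_1 and e_2, so its dimension (geometric
# multiplicity) equals 2, the exponent of (X - 2) (algebraic multiplicity).
eigenvalues, eigenvectors = np.linalg.eig(D)
print(np.sort(eigenvalues))
print(roots)
```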

## Definition

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ denote a vector space, and let

${\displaystyle \varphi \colon V\longrightarrow V}$

denote a linear mapping. Then ${\displaystyle {}\varphi }$ is called diagonalizable if ${\displaystyle {}V}$ has a basis consisting of eigenvectors for ${\displaystyle {}\varphi }$.

## Theorem

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ denote a finite-dimensional vector space. Let

${\displaystyle \varphi \colon V\longrightarrow V}$

denote a linear mapping. Then the following statements are equivalent.
1. ${\displaystyle {}\varphi }$ is diagonalizable.
2. There exists a basis ${\displaystyle {}{\mathfrak {v}}}$ of ${\displaystyle {}V}$ such that the describing matrix ${\displaystyle {}M_{\mathfrak {v}}^{\mathfrak {v}}(\varphi )}$ is a diagonal matrix.
3. For every describing matrix ${\displaystyle {}M=M_{\mathfrak {w}}^{\mathfrak {w}}(\varphi )}$ with respect to a basis ${\displaystyle {}{\mathfrak {w}}}$, there exists an invertible matrix ${\displaystyle {}B}$ such that
${\displaystyle BMB^{-1}}$

is a diagonal matrix.

### Proof

The equivalence between (1) and (2) follows from the definition, from example, and from the correspondence between linear mappings and matrices. The equivalence between (2) and (3) follows from fact.

${\displaystyle \Box }$
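Statement (3) can be checked numerically: if the columns of a matrix ${\displaystyle {}V}$ are eigenvectors of ${\displaystyle {}M}$, then ${\displaystyle {}B=V^{-1}}$ makes ${\displaystyle {}BMB^{-1}}$ diagonal. A minimal NumPy sketch (the matrix below, with eigenvalues 1 and 3, is an arbitrary illustrative choice):

```python
import numpy as np

# An illustrative diagonalizable matrix M, with eigenvalues 1 and 3.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The columns of V are eigenvectors of M; taking B = V^{-1} realizes
# statement (3): B M B^{-1} is a diagonal matrix.
eigenvalues, V = np.linalg.eig(M)
B = np.linalg.inv(V)
D = B @ M @ np.linalg.inv(B)

print(np.round(D, 10))  # diagonal matrix carrying the eigenvalues
```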

## Corollary

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ denote a finite-dimensional vector space. Let

${\displaystyle \varphi \colon V\longrightarrow V}$

denote a linear mapping. Suppose that ${\displaystyle {}\varphi }$ has ${\displaystyle {}n}$ different eigenvalues, where ${\displaystyle {}n}$ denotes the dimension of ${\displaystyle {}V}$. Then ${\displaystyle {}\varphi }$ is diagonalizable.

### Proof

Because of fact, there exist ${\displaystyle {}n}$ linearly independent eigenvectors. These form, due to fact, a basis of the ${\displaystyle {}n}$-dimensional space ${\displaystyle {}V}$.

${\displaystyle \Box }$
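The corollary can be illustrated numerically (the upper triangular matrix below, chosen to have the three distinct eigenvalues 1, 2, 3, is a hypothetical example): since the eigenvalues are distinct, the eigenvectors are linearly independent and hence form a basis, so the matrix of eigenvectors is invertible.

```python
import numpy as np

# Hypothetical upper triangular matrix with distinct eigenvalues 1, 2, 3.
M = np.array([[1.0, 4.0, 5.0],
              [0.0, 2.0, 6.0],
              [0.0, 0.0, 3.0]])

eigenvalues, V = np.linalg.eig(M)
# Three distinct eigenvalues on a 3-dimensional space force the
# eigenvectors (columns of V) to be linearly independent, hence a basis:
# V is invertible, i.e. has nonzero determinant.
print(abs(np.linalg.det(V)) > 1e-9)
```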

## Example

We continue with example. There exist the two eigenvectors ${\displaystyle {}{\begin{pmatrix}{\sqrt {5}}\\1\end{pmatrix}}}$ and ${\displaystyle {}{\begin{pmatrix}-{\sqrt {5}}\\1\end{pmatrix}}}$ for the different eigenvalues ${\displaystyle {}{\sqrt {5}}}$ and ${\displaystyle {}-{\sqrt {5}}}$, so the mapping is diagonalizable, due to fact. With respect to the basis ${\displaystyle {}{\mathfrak {u}}}$ consisting of these eigenvectors, the linear mapping is described by the diagonal matrix

${\displaystyle {\begin{pmatrix}{\sqrt {5}}&0\\0&-{\sqrt {5}}\end{pmatrix}}.}$

The transformation matrix, from the basis ${\displaystyle {}{\mathfrak {u}}}$ to the standard basis ${\displaystyle {}{\mathfrak {v}}}$, consisting of ${\displaystyle {}e_{1}}$ and ${\displaystyle {}e_{2}}$, is simply

${\displaystyle {}M_{\mathfrak {v}}^{\mathfrak {u}}={\begin{pmatrix}{\sqrt {5}}&-{\sqrt {5}}\\1&1\end{pmatrix}}\,.}$

The inverse matrix is

${\displaystyle {}{\frac {1}{2{\sqrt {5}}}}{\begin{pmatrix}1&{\sqrt {5}}\\-1&{\sqrt {5}}\end{pmatrix}}={\begin{pmatrix}{\frac {1}{2{\sqrt {5}}}}&{\frac {1}{2}}\\{\frac {-1}{2{\sqrt {5}}}}&{\frac {1}{2}}\end{pmatrix}}\,.}$

Because of fact, we have the relation

${\displaystyle {}{\begin{aligned}{\begin{pmatrix}{\sqrt {5}}&0\\0&-{\sqrt {5}}\end{pmatrix}}&={\begin{pmatrix}{\frac {1}{2}}&{\frac {\sqrt {5}}{2}}\\{\frac {1}{2}}&{\frac {-{\sqrt {5}}}{2}}\end{pmatrix}}{\begin{pmatrix}{\sqrt {5}}&-{\sqrt {5}}\\1&1\end{pmatrix}}\\&={\begin{pmatrix}{\frac {1}{2{\sqrt {5}}}}&{\frac {1}{2}}\\{\frac {-1}{2{\sqrt {5}}}}&{\frac {1}{2}}\end{pmatrix}}{\begin{pmatrix}0&5\\1&0\end{pmatrix}}{\begin{pmatrix}{\sqrt {5}}&-{\sqrt {5}}\\1&1\end{pmatrix}}.\end{aligned}}}$
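The base-change computation above can be verified numerically; a small NumPy check (not part of the original text), conjugating the matrix ${\displaystyle {}{\begin{pmatrix}0&5\\1&0\end{pmatrix}}}$ by the transformation matrix:

```python
import numpy as np

# The matrix from the example, with eigenvalues +sqrt(5) and -sqrt(5).
M = np.array([[0.0, 5.0],
              [1.0, 0.0]])
s = np.sqrt(5.0)

# Transformation matrix from the eigenvector basis u to the standard
# basis v: its columns are the eigenvectors (sqrt(5), 1) and (-sqrt(5), 1).
Mvu = np.array([[s, -s],
                [1.0, 1.0]])
Muv = np.linalg.inv(Mvu)

# Conjugation yields the diagonal matrix diag(sqrt(5), -sqrt(5)).
D = Muv @ M @ Mvu
print(np.round(D, 10))
```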