Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 28/latex
\setcounter{section}{28}
\zwischenueberschrift{The characteristic polynomial}
We want to determine, for a given endomorphism $\varphi \colon V \rightarrow V$, the eigenvalues and the eigenspaces. For this, the characteristic polynomial is decisive.
\inputdefinition
{ }
{
For an
$n \times n$-matrix
$M$ with entries in a
field
$K$, the
polynomial
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ \defeq} {\det { \left( X \cdot E_{ n } - M \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
is called the \definitionswort {characteristic polynomial}{} of $M$.
}
For
\mavergleichskette
{\vergleichskette
{M
}
{ = }{ { \left( a_{ij} \right) }_{ij}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
this means
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ =} { \det \begin{pmatrix} X-a_{11} & -a_{12} & \ldots & -a_{1n} \\ -a_{21} & X-a_{22} & \ldots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \ldots & X-a_{nn} \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
In this definition, we use the determinant of a matrix, which we have defined only for matrices with entries in a field. Here, the entries are elements of the polynomial ring \mathl{K[X]}{.} However, since we can also consider these elements inside the
field of rational functions
\mathl{K(X)}{\zusatzfussnote {\mathlk{K(X)}{} is called the field of rational polynomials; it consists of all fractions \mathl{P/Q}{} for polynomials
\mavergleichskette
{\vergleichskette
{ P,Q
}
{ \in }{ K [X]
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with
\mavergleichskette
{\vergleichskette
{ Q
}
{ \neq }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
For
\mavergleichskette
{\vergleichskette
{ K
}
{ = }{ \R
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
or $\C$, this field can be identified with the field of rational functions} {.} {,}}
this is a useful definition. By definition, the determinant is an element of \mathl{K(X)}{;} but, because all entries of the matrix are polynomials, and because the recursive definition of the determinant uses only addition and multiplication, the characteristic polynomial is indeed a polynomial. The degree of the characteristic polynomial is $n$, and its leading coefficient is $1$, so it has the form
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ =} { X^n + c_{n-1}X^{n-1} + \cdots + c_1 X+c_0
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
We have the important relation
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M } (\lambda)
}
{ =} { \det { \left( \lambda E_{ n } - M \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
for every
\mavergleichskette
{\vergleichskette
{ \lambda
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
see
Exercise 28.4
.
Here, on the left-hand side, the number $\lambda$ is inserted into the polynomial, and on the right-hand side, we have the determinant of a matrix which depends on $\lambda$.
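This relation can be illustrated numerically; the following sketch (assuming NumPy is available, with a sample matrix chosen for illustration) compares both sides for one value of $\lambda$:

```python
import numpy as np

# Sample matrix (chosen for illustration).
M = np.array([[0.0, 5.0],
              [1.0, 0.0]])

# np.poly(M) returns the coefficients of det(X*E_n - M),
# leading coefficient first; here X^2 + 0*X - 5.
coeffs = np.poly(M)

# Evaluating the polynomial at lam agrees with det(lam*E_n - M).
lam = 2.0
lhs = np.polyval(coeffs, lam)
rhs = np.linalg.det(lam * np.eye(2) - M)
print(coeffs, lhs, rhs)
```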
For a linear mapping
\mathdisp {\varphi \colon V \longrightarrow V} { }
on a finite-dimensional vector space, the \stichwort {characteristic polynomial} {} is defined by
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ \varphi }
}
{ \defeq} { \chi_{ M }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
where $M$ is a describing matrix with respect to some basis. The
multiplication theorem for the determinant
shows that this definition is independent of the choice of the basis, see
Exercise 28.3
.
\inputfaktbeweis
{Endomorphism/Eigenvalue and characteristic polynomial/Fact}
{Theorem}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote an
$n$-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.}
\faktfolgerung {Then
\mavergleichskette
{\vergleichskette
{ \lambda
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
is an
eigenvalue
of $\varphi$ if and only if $\lambda$ is a zero of the
characteristic polynomial
$\chi_{ \varphi }$.}
\faktzusatz {}
}
{
Let $M$ denote a
describing matrix
for $\varphi$, and let
\mavergleichskette
{\vergleichskette
{ \lambda
}
{ \in }{K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
be given. We have
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }\, (\lambda)
}
{ =} { \det { \left( \lambda E_{ n } - M \right) }
}
{ =} { 0
}
{ } {
}
{ } {
}
}
{}{}{,}
if and only if the linear mapping
\mathdisp {\lambda
\operatorname{Id}_{ V } - \varphi} { }
is not
bijective
\zusatzklammer {and not
injective} {} {}
\zusatzklammer {due to
Theorem 26.11
and
Lemma 25.11
} {} {.}
This is, because of
Lemma 27.11
and
Lemma 24.14
,
equivalent to
\mavergleichskettedisp
{\vergleichskette
{ \operatorname{Eig}_{ \lambda } { \left( \varphi \right) }
}
{ =} { \operatorname{ker} { \left( \lambda \operatorname{Id}_{ V } - \varphi \right) }
}
{ \neq} { 0
}
{ } {
}
{ } {
}
}
{}{}{,}
and this means that the
eigenspace
for $\lambda$ is not the nullspace; thus, $\lambda$ is an eigenvalue of $\varphi$.
}
\inputbeispiel{}
{
We consider the real matrix
\mavergleichskette
{\vergleichskette
{M
}
{ = }{ \begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
The
characteristic polynomial
is
\mavergleichskettealign
{\vergleichskettealign
{ \chi_{ M }
}
{ =} { \det { \left( x E_2 -M \right) }
}
{ =} { \det { \left( x \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix} \right) }
}
{ =} { \det \begin{pmatrix} x & -5 \\ -1 & x \end{pmatrix}
}
{ =} { x^2-5
}
}
{}
{}{.}
The eigenvalues are therefore
\mavergleichskette
{\vergleichskette
{x
}
{ = }{ \pm \sqrt{5}
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
\zusatzklammer {we have found these eigenvalues already in
Example 27.9
,
without using the characteristic polynomial} {} {.}
}
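As a sketch (assuming NumPy), the eigenvalues of this example can also be recovered as the roots of the characteristic polynomial:

```python
import numpy as np

M = np.array([[0.0, 5.0],
              [1.0, 0.0]])

# Roots of the characteristic polynomial x^2 - 5 ...
roots = np.sort(np.roots(np.poly(M)))
# ... coincide with the eigenvalues computed directly.
eigs = np.sort(np.linalg.eigvals(M))
print(roots, eigs)
```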
\inputbeispiel{}
{
For the matrix
\mavergleichskettedisp
{\vergleichskette
{M
}
{ =} { \begin{pmatrix} 2 & 5 \\ -3 & 4 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
the
characteristic polynomial
is
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ =} { \det \begin{pmatrix} X-2 & -5 \\ 3 & X-4 \end{pmatrix}
}
{ =} { (X-2)(X-4) +15
}
{ =} { X^2 -6X +23
}
{ } {
}
}
{}{}{.}
Finding the zeroes of this polynomial leads to the condition
\mavergleichskettedisp
{\vergleichskette
{ (X-3)^2
}
{ =} { -23 +9
}
{ =} { -14
}
{ } {
}
{ } {
}
}
{}{}{,}
which has no solution over $\R$, so that the matrix has no
eigenvalues
over $\R$. However, considered over the complex numbers $\Complex$, we have the two eigenvalues
\mathkor {} {3+\sqrt{14} { \mathrm i}} {and} {3 - \sqrt{14} { \mathrm i}} {.}
For the
eigenspace
for \mathl{3+\sqrt{14} { \mathrm i}}{,} we have to determine
\mavergleichskettealign
{\vergleichskettealign
{ \operatorname{Eig}_{ 3+\sqrt{14} { \mathrm i} } { \left( M \right) }
}
{ =} { \operatorname{ker} { \left( { \left( { \left( 3+ \sqrt{14} { \mathrm i} \right) } E_2 - M \right) } \right) }
}
{ =} { \operatorname{ker} { \left( \begin{pmatrix} 1 + \sqrt{14} { \mathrm i} & -5 \\ 3 & -1 + \sqrt{14} { \mathrm i} \end{pmatrix} \right) }
}
{ } {
}
{ } {
}
}
{}
{}{,}
a basis vector
\zusatzklammer {hence an eigenvector} {} {}
of this kernel is \mathl{\begin{pmatrix} 5 \\1+ \sqrt{14} { \mathrm i} \end{pmatrix}}{.} Analogously, we get
\mavergleichskettedisp
{\vergleichskette
{ \operatorname{Eig}_{ 3 -\sqrt{14} { \mathrm i} } { \left( M \right) }
}
{ =} { \operatorname{ker} { \left( \begin{pmatrix} 1 - \sqrt{14} { \mathrm i} & -5 \\ 3 & -1 - \sqrt{14} { \mathrm i} \end{pmatrix} \right) }
}
{ =} { \langle \begin{pmatrix} 5 \\1 - \sqrt{14} { \mathrm i} \end{pmatrix} \rangle
}
{ } {
}
{ } {
}
}
{}{}{.}
}
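The eigenvalue and eigenvector found in this example can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

M = np.array([[2.0, 5.0],
              [-3.0, 4.0]])

# Eigenvalue and eigenvector from the example.
lam = 3 + np.sqrt(14) * 1j
v = np.array([5.0, 1 + np.sqrt(14) * 1j])

# M v = lam * v confirms that v is an eigenvector for lam.
print(M @ v, lam * v)
```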
\inputbeispiel{}
{
For an
upper triangular matrix
\mavergleichskettedisp
{\vergleichskette
{M
}
{ =} { \begin{pmatrix} d_1 & \ast & \cdots & \cdots & \ast \\ 0 & d_2 & \ast & \cdots & \ast \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & d_{ n-1} & \ast \\ 0 & \cdots & \cdots & 0 & d_{ n } \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
the
characteristic polynomial
is
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ =} { (X-d_1)(X-d_2) \cdots (X-d_n)
}
{ } {
}
{ } {
}
{ } {}
}
{}{}{,}
due to
Lemma 26.8
.
In this case, we directly have a factorization of the characteristic polynomial into linear factors, so that we can immediately see the zeroes and the
eigenvalues
of $M$, namely just the diagonal elements \mathl{d_1,d_2 , \ldots , d_n}{}
\zusatzklammer {which might not be all different} {} {.}
}
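This can be illustrated numerically; the following sketch (assuming NumPy, with a hypothetical triangular matrix) compares the characteristic polynomial of the matrix with the product of the linear factors built from the diagonal:

```python
import numpy as np

# Hypothetical upper triangular matrix; the entries above the
# diagonal do not influence the characteristic polynomial.
T = np.array([[2.0, 7.0, -1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 3.0]])

# Coefficients of (X-2)(X-3)(X-3), built from the diagonal entries ...
from_diagonal = np.poly(np.diag(T))
# ... agree with the characteristic polynomial of T itself.
from_matrix = np.poly(T)
print(from_matrix, from_diagonal)
```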
\zwischenueberschrift{Multiplicities}
For a more detailed investigation of eigenspaces, the following concepts are necessary. Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a linear mapping on a finite-dimensional vector space $V$, and
\mavergleichskette
{\vergleichskette
{ \lambda
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
Then the exponent of the linear polynomial \mathl{X - \lambda}{} inside the characteristic polynomial $\chi_{ \varphi }$ is called the \stichwort {algebraic multiplicity} {} of $\lambda$, symbolized as
\mavergleichskette
{\vergleichskette
{ \mu_\lambda
}
{ \defeq }{ \mu_\lambda(\varphi)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
The dimension of the corresponding eigenspace, that is
\mathdisp {\dim_{ } { \left( \operatorname{Eig}_{ \lambda } { \left( \varphi \right) } \right) }} { , }
is called the \stichwort {geometric multiplicity} {} of $\lambda$. Because of
Theorem 28.2
,
the algebraic multiplicity is positive if and only if the geometric multiplicity is positive. In general, these multiplicities may differ; however, we always have the following estimate.
\inputfaktbeweis
{Endomorphism/Geometric and algebraic multiplicity/Fact}
{Lemma}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote a
finite-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping
and
\mavergleichskette
{\vergleichskette
{ \lambda
}
{ \in }{ K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}}
\faktfolgerung {Then we have the estimate
\mavergleichskettedisp
{\vergleichskette
{ \dim_{ } { \left( \operatorname{Eig}_{ \lambda } { \left( \varphi \right) } \right) }
}
{ \leq} { \mu_\lambda(\varphi)
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
between the
geometric
and the
algebraic multiplicity.}
\faktzusatz {}
}
{
Let
\mavergleichskette
{\vergleichskette
{m
}
{ = }{ \dim_{ } { \left( \operatorname{Eig}_{ \lambda } { \left( \varphi \right) } \right) }
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
and let \mathl{v_1 , \ldots , v_m}{} be a
basis
of this
eigenspace.
We extend this basis with \mathl{w_1 , \ldots , w_{n-m}}{} to a basis of $V$, using
Theorem 23.23
.
With respect to this basis, the
describing matrix
has the form
\mathdisp {\begin{pmatrix} \lambda E_m & B \\ 0 & C \end{pmatrix}} { . }
The
characteristic polynomial
equals therefore
\zusatzklammer {using
Exercise 26.9
} {} {}
\mathl{(X- \lambda)^m \cdot \chi_{ C }}{,} so that the
algebraic multiplicity
is at least $m$.
}
\inputbeispiel{}
{
We consider the \mathl{2\times 2}{-}\stichwort {shearing matrix} {}
\mavergleichskettedisp
{\vergleichskette
{ M
}
{ =} { \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
with
\mavergleichskette
{\vergleichskette
{a
}
{ \in }{K
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{.}
The
characteristic polynomial
is
\mavergleichskettedisp
{\vergleichskette
{ \chi_{ M }
}
{ =} {(X-1)(X-1)
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
so that $1$ is the only
eigenvalue
of $M$. The corresponding
eigenspace
is
\mavergleichskettedisp
{\vergleichskette
{ \operatorname{Eig}_{ 1 } { \left( M \right) }
}
{ =} { \operatorname{ker} { \left( \begin{pmatrix} 0 & -a \\ 0 & 0 \end{pmatrix} \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
From
\mavergleichskettedisp
{\vergleichskette
{ \begin{pmatrix} 0 & -a \\ 0 & 0 \end{pmatrix} \begin{pmatrix} r \\s \end{pmatrix}
}
{ =} { \begin{pmatrix} -as \\0 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{,}
we get that \mathl{\begin{pmatrix} 1 \\0 \end{pmatrix}}{} is an
eigenvector,
and in case
\mavergleichskette
{\vergleichskette
{a
}
{ \neq }{0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
the eigenspace is one-dimensional
\zusatzklammer {in case
\mavergleichskette
{\vergleichskette
{ a
}
{ = }{ 0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
we have the identity and the eigenspace is two-dimensional} {} {.}
So in case
\mavergleichskette
{\vergleichskette
{a
}
{ \neq }{0
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{,}
the
algebraic multiplicity
of the eigenvalue $1$ equals $2$, and the
geometric multiplicity
equals $1$.
}
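The two multiplicities of this example can be recomputed numerically; a sketch assuming NumPy, with the hypothetical choice $a = 3$:

```python
import numpy as np

a = 3.0                       # hypothetical choice, any a != 0
M = np.array([[1.0, a],
              [0.0, 1.0]])

# Characteristic polynomial: X^2 - 2X + 1 = (X-1)^2, so the
# algebraic multiplicity of the eigenvalue 1 is 2.
coeffs = np.poly(M)

# Geometric multiplicity: dim ker(1*E_2 - M) = 2 - rank(E_2 - M) = 1.
geom = 2 - np.linalg.matrix_rank(np.eye(2) - M)
print(coeffs, geom)
```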
\zwischenueberschrift{Diagonalizable mappings}
The restriction of a linear mapping to an eigenspace is a homothety with the corresponding eigenvalue as its factor, hence a particularly simple linear mapping. If there are many eigenvalues with high-dimensional eigenspaces, then the mapping as a whole is simple in a certain sense. An extreme case is given by the so-called diagonalizable mappings.
For a diagonal matrix
\mathdisp {\begin{pmatrix} d_1 & 0 & \cdots & \cdots & 0 \\ 0 & d_2 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & d_{ n-1} & 0 \\ 0 & \cdots & \cdots & 0 & d_{ n } \end{pmatrix}} { , }
the characteristic polynomial is just
\mathdisp {(X-d_1) (X-d_2) \cdots (X-d_n)} { . }
If the number $d$ occurs $k$ times as a diagonal entry, then the linear factor \mathl{X-d}{} also occurs with exponent $k$ in the factorization of the characteristic polynomial. This also holds when we just have an upper triangular matrix. But in the case of a diagonal matrix, we can also read off the eigenspaces immediately, see
Example 27.7
.
The eigenspace for $d$ consists of all linear combinations of the standard vectors $e_i$ for which $d_i$ equals $d$. In particular, the dimension of the eigenspace equals the number of times $d$ occurs as a diagonal entry. Thus, for a diagonal matrix, the algebraic and the geometric multiplicities coincide.
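A small numerical sketch (assuming NumPy, with a hypothetical diagonal matrix) illustrates the coincidence of the two multiplicities:

```python
import numpy as np

# Hypothetical diagonal matrix in which d = 2 occurs twice.
D = np.diag([2.0, 5.0, 2.0])

# Geometric multiplicity of 2: dimension of ker(2*E_3 - D).
geom = 3 - np.linalg.matrix_rank(2.0 * np.eye(3) - D)

# Algebraic multiplicity of 2: multiplicity of 2 among the eigenvalues.
alg = int(np.sum(np.isclose(np.linalg.eigvals(D), 2.0)))
print(geom, alg)
```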
\inputdefinition
{ }
{
Let $K$ denote a
field,
let $V$ denote a
vector space,
and let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.
Then $\varphi$ is called \definitionswort {diagonalizable}{,} if $V$ has a
basis
consisting of
eigenvectors
}
\inputfaktbeweis
{Linear mapping/Diagonalizable/Characterizations/Fact}
{Theorem}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote a
finite-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.}
\faktuebergang {Then the following statements are equivalent.}
\faktfolgerung {\aufzaehlungdrei {$\varphi$ is
diagonalizable.
} {There exists a basis $\mathfrak{ v }$ of $V$ such that the
describing matrix
\mathl{M_ \mathfrak{ v }^ \mathfrak{ v }(\varphi)}{} is a
diagonal matrix.
} {For every describing matrix
\mavergleichskette
{\vergleichskette
{M
}
{ = }{ M_ \mathfrak{ w }^ \mathfrak{ w }(\varphi)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
with respect to a basis $\mathfrak{ w }$, there exists an
invertible matrix
$B$ such that
\mathdisp {B M B^{-1}} { }
is a diagonal matrix.
}}
\faktzusatz {}
}
{
The equivalence between (1) and (2) follows from the definition, from Example 27.7, and from the correspondence between linear mappings and matrices. The equivalence between (2) and (3) follows from Corollary 25.9.
}
\inputfaktbeweis
{Linear mapping/Different eigenvalues/Diagonalizable/Fact}
{Corollary}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote a
finite-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.}
\faktvoraussetzung {Suppose that there exist $n$ different
eigenvalues, where $n$ denotes the dimension of $V$.}
\faktfolgerung {Then $\varphi$ is
diagonalizable.}
\faktzusatz {}
}
{
Because of Lemma 27.14, there exist $n$ linearly independent eigenvectors. These, due to Corollary 23.21, form a basis.
}
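The corollary can be illustrated numerically; a sketch assuming NumPy, with a hypothetical matrix having three different eigenvalues:

```python
import numpy as np

# Hypothetical matrix with the three different eigenvalues 1, 3, 6.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 6.0]])

# The columns of V are eigenvectors; since the eigenvalues are
# different, V is invertible and V^{-1} M V is diagonal.
eigvals, V = np.linalg.eig(M)
D = np.linalg.inv(V) @ M @ V
print(np.round(D, 10))
```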
\inputbeispiel{}
{
We continue with
Example 27.9
.
There exist the two
eigenvectors
\mathkor {} {\begin{pmatrix} \sqrt{5} \\1 \end{pmatrix}} {and} {\begin{pmatrix} -\sqrt{5} \\1 \end{pmatrix}} {}
for the different
eigenvalues
\mathkor {} {\sqrt{5}} {and} {- \sqrt{5}} {,}
so that the mapping is
diagonalizable,
due to
Corollary 28.10
.
With respect to the
basis
$\mathfrak{ u }$, consisting of these eigenvectors, the linear mapping is described by the diagonal matrix
\mathdisp {\begin{pmatrix} \sqrt{5} & 0 \\ 0 & - \sqrt{5} \end{pmatrix}} { . }
The
transformation matrix,
from the basis $\mathfrak{ u }$ to the standard basis $\mathfrak{ v }$, consisting of
\mathkor {} {e_1} {and} {e_2} {,}
is simply
\mavergleichskettedisp
{\vergleichskette
{ M^{ \mathfrak{ u } }_{ \mathfrak{ v } }
}
{ =} { \begin{pmatrix} \sqrt{5} & - \sqrt{5} \\ 1 & 1 \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
The
inverse matrix
is
\mavergleichskettedisp
{\vergleichskette
{ \frac{1}{2 \sqrt{5} } \begin{pmatrix} 1 & \sqrt{5} \\ -1 & \sqrt{5} \end{pmatrix}
}
{ =} { \begin{pmatrix} \frac{1}{2 \sqrt{5} } & \frac{1}{2} \\ \frac{-1}{2 \sqrt{5} } & \frac{1}{2} \end{pmatrix}
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{.}
Because of
Corollary 25.9
,
we have the relation
\mavergleichskettealign
{\vergleichskettealign
{ \begin{pmatrix} \sqrt{5} & 0 \\ 0 & - \sqrt{5} \end{pmatrix}
}
{ =} { \begin{pmatrix} \frac{1}{2 } & \frac{ \sqrt{5} }{2} \\ \frac{1}{2 } & \frac{ -\sqrt{5} }{2} \end{pmatrix} \begin{pmatrix} \sqrt{5} & - \sqrt{5} \\ 1 & 1 \end{pmatrix}
}
{ =} { \begin{pmatrix} \frac{1}{2 \sqrt{5} } & \frac{1}{2} \\ \frac{-1}{2 \sqrt{5} } & \frac{1}{2} \end{pmatrix} \begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \sqrt{5} & - \sqrt{5} \\ 1 & 1 \end{pmatrix}
}
{ } {
}
{ } {}
}
{}
{}{.}
}
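The relation above can be confirmed numerically; a sketch assuming NumPy, written in the equivalent form $T^{-1} M T$, where $T$ denotes the transformation matrix built from the eigenvectors:

```python
import numpy as np

M = np.array([[0.0, 5.0],
              [1.0, 0.0]])

# Transformation matrix whose columns are the two eigenvectors.
T = np.array([[np.sqrt(5.0), -np.sqrt(5.0)],
              [1.0, 1.0]])

# T^{-1} M T is the diagonal matrix with entries sqrt(5), -sqrt(5).
D = np.linalg.inv(T) @ M @ T
print(np.round(D, 10))
```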
\zwischenueberschrift{Multiplicities and diagonalizable matrices}
\inputfaktbeweisnichtvorgefuehrt
{Endomorphism/Diagonalizable/Algebraic and geometric multiplicity/Fact}
{Theorem}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote a
finite-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.}
\faktfolgerung {Then $\varphi$ is
diagonalizable
if and only if the
characteristic polynomial
$\chi_{ \varphi }$ is a product of
linear factors
and if for every zero $\lambda$ with
algebraic multiplicity
$\mu_\lambda$, the identity
\mavergleichskettedisp
{\vergleichskette
{ \mu_\lambda
}
{ =} { \dim_{ } { \left( \operatorname{Eig}_{ \lambda } { \left( \varphi \right) } \right) }
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
holds.}
\faktzusatz {}
}
{Endomorphism/Diagonalizable/Algebraic and geometric multiplicity/Fact/Proof}
The product of two diagonal matrices is again a diagonal matrix. The following example shows that the product of two diagonalizable matrices is in general not diagonalizable.
\inputbeispiel{}
{
Let
\mathkor {} {G_1} {and} {G_2} {}
denote two lines in $\R^2$ through the origin, and let
\mathkor {} {\varphi_1} {and} {\varphi_2} {}
denote the reflections at these axes. A reflection at an axis is always
diagonalizable,
the axis and the line orthogonal to the axis are eigenlines
\zusatzklammer {with eigenvalues $1$ and $-1$} {} {.}
The
composition
\mavergleichskettedisp
{\vergleichskette
{ \psi
}
{ =} { \varphi_2 \circ \varphi_1
}
{ } {
}
{ } {
}
{ } {
}
}
{}{}{}
of the reflections is a
plane rotation,
the angle of rotation being twice the angle between the two lines. However, a rotation is only diagonalizable if the angle of rotation is
\mathkor {} {0} {or} {180} {}
degrees. If the angle between the axes differs from \mathl{0,90}{} degrees, then $\psi$ does not have any eigenvector.
}
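This can be illustrated numerically; a sketch assuming NumPy, with the hypothetical choice of axes at angles $0$ and $\pi/6$:

```python
import numpy as np

def reflection(theta):
    # Reflection at the line through the origin with angle theta.
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s],
                     [s, -c]])

# Each reflection is diagonalizable with eigenvalues 1 and -1.
# Axes at angles 0 and pi/6 (hypothetical choice): the composition
# is the rotation by pi/3 and has no real eigenvalue.
psi = reflection(np.pi / 6) @ reflection(0.0)
eigs = np.linalg.eigvals(psi)
print(psi, eigs)
```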
\zwischenueberschrift{Trigonalizable mappings}
\inputdefinition
{ }
{
Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. A linear mapping $\varphi \colon V \rightarrow V$ is called \definitionswort {trigonalizable}{,} if there exists a basis such that the describing matrix of $\varphi$ with respect to this basis is an
upper triangular matrix.}
Diagonalizable linear mappings are, in particular, trigonalizable. The converse is not true, as Example 28.7 shows.
\inputfaktbeweisnichtvorgefuehrt
{Linear mapping/Trigonalizable/Characterization with characteristic polynomial/Fact}
{Theorem}
{}
{
\faktsituation {Let $K$ denote a
field,
and let $V$ denote a
finite-dimensional
vector space.
Let
\mathdisp {\varphi \colon V \longrightarrow V} { }
denote a
linear mapping.}
\faktuebergang {Then the following statements are equivalent.}
\faktfolgerung {\aufzaehlungzwei {$\varphi$ is
trigonalizable.
} {The
characteristic polynomial
$\chi_{ \varphi }$ has a factorization into
linear factors.
}}
\faktzusatz {If $\varphi$ is trigonalizable and is described by the matrix $M$ with respect to some basis, then there exists an invertible matrix
\mavergleichskette
{\vergleichskette
{B
}
{ \in }{ \operatorname{Mat}_{ n \times n } (K)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
such that \mathl{BMB^{-1}}{} is an
upper triangular matrix.}
}
{Linear mapping/Trigonalizable/Characterization with characteristic polynomial/Fact/Proof}
\inputfaktbeweis
{Square matrix/C/Trigonalizable/Fact}
{Theorem}
{}
{
\faktsituation {Let
\mavergleichskette
{\vergleichskette
{M
}
{ \in }{\operatorname{Mat}_{ n \times n } (\Complex)
}
{ }{
}
{ }{
}
{ }{
}
}
{}{}{}
denote a square matrix with
complex
entries.}
\faktfolgerung {Then $M$ is
trigonalizable.}
\faktzusatz {}
}
{
This follows from Theorem 28.15 and the Fundamental Theorem of Algebra.
}
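For a concrete matrix, such a triangularization over $\Complex$ can be sketched by hand (assuming NumPy): take an eigenvector, extend it to a basis of $\Complex^2$, and change coordinates. We reuse the matrix from the earlier example:

```python
import numpy as np

# The real matrix from the earlier example; it has no real eigenvalues,
# but over C it can be brought to upper triangular form.
M = np.array([[2.0, 5.0],
              [-3.0, 4.0]])
lam = 3 + np.sqrt(14) * 1j
v = np.array([5.0, 1 + np.sqrt(14) * 1j])    # eigenvector for lam

# Extend the eigenvector to a basis of C^2; in these coordinates the
# first column of B^{-1} M B is (lam, 0)^T, so the matrix is triangular.
B = np.column_stack([v, np.array([1.0, 0.0])])
T = np.linalg.inv(B) @ M @ B
print(np.round(T, 10))
```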