
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 22




Relations between eigenspaces

We have seen in Lemma 21.7 that the eigenspace for the eigenvalue $0$ is the kernel of the endomorphism. More generally, the following characterization holds.


Let $K$ be a field, $V$ a $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Let $\lambda \in K$. Then
$$\operatorname{Eig}_{\lambda}(\varphi) = \ker\left(\lambda \cdot \operatorname{Id}_V - \varphi\right).$$

Let $v \in V$. Then $v \in \operatorname{Eig}_{\lambda}(\varphi)$ if and only if $\varphi(v) = \lambda v$, and this is the case if and only if $\lambda v - \varphi(v) = 0$ holds, which means $\left(\lambda \cdot \operatorname{Id}_V - \varphi\right)(v) = 0$.


In particular, $\lambda$ is an eigenvalue of $\varphi$ if and only if $\lambda \cdot \operatorname{Id}_V - \varphi$ is not injective. For a given $\lambda$, this property can be checked with the help of a linear system (or the determinant), and the eigenspace can be determined. However, it is not a linear problem to decide whether $\varphi$ has eigenvalues at all, and how those can be determined. We will continue to study a linear mapping $\varphi$ by considering the differences $\lambda \cdot \operatorname{Id}_V - \varphi$ to homotheties for various $\lambda$.

For an $n \times n$-matrix $M$, we have to determine the kernel of the matrix $\lambda E_n - M$. If, for example, we want to know whether a concrete matrix $M$ has a given number $\lambda$ as an eigenvalue, then computing the determinant of $\lambda E_n - M$ decides this question: $\lambda$ is an eigenvalue if and only if this determinant is $0$.
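As a quick numerical sketch of this criterion (using numpy; the matrix and the candidate values below are illustrative choices, not taken from the lecture), one can test whether a given $\lambda$ is an eigenvalue of $M$ by checking whether the determinant of $\lambda E_n - M$ vanishes:

```python
import numpy as np

# Sample matrix, chosen here only for illustration
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def is_eigenvalue(M, lam, tol=1e-9):
    # lam is an eigenvalue iff lam*E_n - M is not injective,
    # i.e. iff its determinant vanishes
    n = M.shape[0]
    return abs(np.linalg.det(lam * np.eye(n) - M)) < tol

print(is_eigenvalue(M, 2.0))  # True: 2 is an eigenvalue of this M
print(is_eigenvalue(M, 5.0))  # False
```

Since the determinant is computed in floating point, the comparison with $0$ uses a small tolerance.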


Let $K$ be a field, $V$ a $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Let $\lambda_1 \neq \lambda_2$ be elements in $K$. Then
$$\operatorname{Eig}_{\lambda_1}(\varphi) \cap \operatorname{Eig}_{\lambda_2}(\varphi) = 0.$$

Let $v \in \operatorname{Eig}_{\lambda_1}(\varphi) \cap \operatorname{Eig}_{\lambda_2}(\varphi)$. Then
$$\lambda_1 v = \varphi(v) = \lambda_2 v.$$
Therefore,
$$\left(\lambda_1 - \lambda_2\right) v = 0,$$
and this implies, because of $\lambda_1 \neq \lambda_2$, that $v = 0$.



Let $K$ be a field, $V$ a $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Let $v_1, \ldots, v_n$ be eigenvectors for (pairwise) different eigenvalues $\lambda_1, \ldots, \lambda_n \in K$. Then $v_1, \ldots, v_n$ are
linearly independent.

We prove the statement by induction on $n$. For $n = 1$, the statement is true. Suppose now that the statement is true for fewer than $n$ vectors. We consider a representation of $0$, say
$$0 = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n.$$
We apply $\varphi$ to this and get, on one hand,
$$0 = a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \cdots + a_n \lambda_n v_n.$$
On the other hand, we multiply the equation with $\lambda_n$ and get
$$0 = a_1 \lambda_n v_1 + a_2 \lambda_n v_2 + \cdots + a_n \lambda_n v_n.$$
We look at the difference of the two equations, and get
$$0 = a_1 \left(\lambda_n - \lambda_1\right) v_1 + \cdots + a_{n-1} \left(\lambda_n - \lambda_{n-1}\right) v_{n-1}.$$
By the induction hypothesis, we get for the coefficients $a_i \left(\lambda_n - \lambda_i\right) = 0$ for $i = 1, \ldots, n-1$. Because of $\lambda_n \neq \lambda_i$, we get $a_i = 0$ for $i = 1, \ldots, n-1$, and because of $v_n \neq 0$, we also get $a_n = 0$.
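A numerical illustration of the lemma (numpy; the triangular matrix is an illustrative choice): for a matrix with pairwise different eigenvalues, the matrix whose columns are the eigenvectors has full rank, i.e. the eigenvectors are linearly independent:

```python
import numpy as np

# Upper triangular matrix with three distinct diagonal entries,
# hence three distinct eigenvalues (illustrative choice)
M = np.diag([1.0, 2.0, 5.0]) + np.triu(np.ones((3, 3)), k=1)

eigenvalues, eigenvectors = np.linalg.eig(M)

# The eigenvectors are the columns; distinct eigenvalues force
# the column matrix to have full rank
rank = np.linalg.matrix_rank(eigenvectors)
print(rank)  # 3
```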



Let $K$ be a field, $V$ a finite-dimensional $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Then there exist at most $\dim_{K}(V)$ many eigenvalues
for $\varphi$.

Proof: This follows directly from Lemma 22.3, since there can be at most $\dim_{K}(V)$ linearly independent vectors in $V$.

In particular, an endomorphism on a finite-dimensional vector space has only finitely many eigenvalues.



Geometric multiplicity

The restriction of a linear mapping to an eigenspace is the homothety with the corresponding eigenvalue, thus a very simple linear mapping. For a diagonal matrix
$$\begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix},$$
the standard basis has the property that every basis vector is an eigenvector for the linear mapping given by the matrix. In this case, it is easy to describe the eigenspaces, see Example 21.4; the eigenspace for $d \in K$ consists of all linear combinations of the standard vectors $e_i$ for which $d_i$ equals $d$. In particular, the dimension of the eigenspace equals the number of times that $d$ occurs as a diagonal element. In general, the dimensions of the eigenspaces are important invariants of an endomorphism.


Let $K$ be a field, $V$ a $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. For $\lambda \in K$, we call
$$\dim_{K}\left(\operatorname{Eig}_{\lambda}(\varphi)\right)$$
the geometric multiplicity
of $\lambda$.

In particular, a number $\lambda \in K$ is an eigenvalue of $\varphi$ if and only if its geometric multiplicity is at least $1$. It is easy to give examples where the geometric multiplicity of an eigenvalue is any number between $1$ and the dimension of the space.
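The geometric multiplicity can be computed as the dimension of the kernel of $\lambda E_n - M$, which by the rank theorem equals $n - \operatorname{rank}\left(\lambda E_n - M\right)$. A small sketch in numpy (the diagonal matrix is an illustrative choice):

```python
import numpy as np

def geometric_multiplicity(M, lam, tol=1e-9):
    # dim Eig_lam(M) = dim ker(lam*E_n - M) = n - rank(lam*E_n - M)
    n = M.shape[0]
    return n - np.linalg.matrix_rank(lam * np.eye(n) - M, tol=tol)

# Diagonal matrix in which 4 occurs twice on the diagonal (illustrative)
D = np.diag([4.0, 4.0, 7.0])
print(geometric_multiplicity(D, 4.0))  # 2
print(geometric_multiplicity(D, 7.0))  # 1
print(geometric_multiplicity(D, 5.0))  # 0, since 5 is not an eigenvalue
```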


Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let
$$\varphi \colon V \longrightarrow V$$
be a linear mapping. Then the sum of the eigenspaces is direct, and we have
$$\sum_{\lambda \in K} \dim_{K}\left(\operatorname{Eig}_{\lambda}(\varphi)\right) \leq \dim_{K}(V).$$

This follows directly from Lemma 22.3.



Diagonalizability

Let $K$ denote a field, let $V$ denote a vector space, and let
$$\varphi \colon V \longrightarrow V$$
denote a linear mapping. Then $\varphi$ is called diagonalizable if $V$ has a basis consisting of eigenvectors
for $\varphi$.

Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let
$$\varphi \colon V \longrightarrow V$$
denote a
linear mapping. Then the following statements are equivalent.
  1. $\varphi$ is diagonalizable.
  2. There exists a basis $\mathfrak{v}$ of $V$ such that the describing matrix $M^{\mathfrak{v}}_{\mathfrak{v}}(\varphi)$ is a diagonal matrix.
  3. For every describing matrix $M = M^{\mathfrak{w}}_{\mathfrak{w}}(\varphi)$ with respect to a basis $\mathfrak{w}$, there exists an invertible matrix $B$ such that
$$B M B^{-1}$$
    is a diagonal matrix.

The equivalence between (1) and (2) follows from the definition, from Example 21.4, and the correspondence between linear mappings and matrices. The equivalence between (2) and (3) follows from Corollary 11.12.

If $\varphi$ is diagonalizable and if the eigenvalues together with their geometric multiplicities are known, then we can simply write down a corresponding diagonal matrix: just take the diagonal matrix in which each eigenvalue occurs on the diagonal as often as its geometric multiplicity indicates. In particular, the corresponding diagonal matrix of a diagonalizable mapping is, up to the ordering of the diagonal entries, uniquely determined.
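As a numerical sketch of statement (3) (numpy; the matrix below is an illustrative choice with two different eigenvalues): taking $B$ to be the inverse of the matrix whose columns are eigenvectors, the product $BMB^{-1}$ is diagonal, with the eigenvalues on the diagonal:

```python
import numpy as np

# Illustrative diagonalizable matrix with distinct eigenvalues 5 and 2
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, V = np.linalg.eig(M)   # columns of V are eigenvectors

# With B = V^{-1}, the matrix B M B^{-1} is diagonal
B = np.linalg.inv(V)
D = B @ M @ np.linalg.inv(B)
print(np.round(D, 6))               # diagonal, eigenvalues on the diagonal
```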


We continue with Example 21.5. There exist two eigenvectors $v_1$ and $v_2$ for the two different eigenvalues $\lambda_1$ and $\lambda_2$, so that the mapping is diagonalizable, due to Corollary 22.10. With respect to the basis $\mathfrak{v}$, consisting of these eigenvectors, the linear mapping is described by the diagonal matrix
$$\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$
The transformation matrix $B$, from the basis $\mathfrak{v}$ to the standard basis $\mathfrak{e}$, has simply the columns $v_1$ and $v_2$. Because of Corollary 11.12, we have the relation
$$\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = B^{-1} M B.$$

Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let
$$\varphi \colon V \longrightarrow V$$
denote a linear mapping. Suppose that there exist $n = \dim_{K}(V)$ different eigenvalues. Then $\varphi$ is
diagonalizable.

Because of Lemma 22.3, there exist $n$ linearly independent eigenvectors. These form, due to Corollary 8.10, a basis.



Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let
$$\varphi \colon V \longrightarrow V$$
be a linear mapping. Then $\varphi$ is diagonalizable if and only if $V$ is the direct sum of the
eigenspaces.

If $\varphi$ is diagonalizable, then there exists a basis $v_1, \ldots, v_n$ of $V$ consisting of eigenvectors. Hence,
$$V = \langle v_1, \ldots, v_n \rangle.$$
Therefore,
$$V = \sum_{\lambda \in K} \operatorname{Eig}_{\lambda}(\varphi).$$
That the sum is direct follows from Lemma 22.2. If, conversely,
$$V = \bigoplus_{\lambda \in K} \operatorname{Eig}_{\lambda}(\varphi)$$
holds, then we can choose in every eigenspace a basis. These bases consist of eigenvectors, and together they yield a basis of $V$.
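Numerically, this criterion can be checked by comparing the sum of the geometric multiplicities with the dimension of the space: the mapping is diagonalizable exactly when the eigenspaces fill up all of $V$. A sketch in numpy, assuming all eigenvalues are real (the matrices are illustrative choices):

```python
import numpy as np

def sum_of_geometric_multiplicities(M):
    # Sum of dim Eig_lam(M) over the distinct (real) eigenvalues lam
    n = M.shape[0]
    lams = set(np.round(np.linalg.eigvals(M).real, 6))
    return sum(n - np.linalg.matrix_rank(lam * np.eye(n) - M, tol=1e-6)
               for lam in lams)

# Diagonalizable: the eigenspaces together span the whole space
A = np.diag([2.0, 2.0, 5.0])
print(sum_of_geometric_multiplicities(A))  # 3 = dim of the space

# Not diagonalizable (shearing block): the eigenspaces are too small
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(sum_of_geometric_multiplicities(B))  # 1 < 2
```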



We consider $2 \times 2$-shearing matrices
$$M = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}$$
with $a \in K$. The condition for some $\lambda \in K$ to be an eigenvalue means
$$\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix}$$
for a vector $\begin{pmatrix} x \\ y \end{pmatrix} \neq 0$. This yields the equations
$$x + ay = \lambda x \quad \text{and} \quad y = \lambda y.$$
For $\lambda \neq 1$, we get $y = 0$ from the second equation, and hence also $x = 0$ from the first; that is, only $\lambda = 1$ can be an eigenvalue. In this case, the second equation is fulfilled, and the first equation becomes
$$x + ay = x, \quad \text{that is,} \quad ay = 0.$$
For $a \neq 0$, we get $y = 0$, and thus $K \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is the eigenspace for the eigenvalue $1$, and $e_1$ is an eigenvector which spans this eigenspace. For $a = 0$, we have the identity matrix, and the eigenspace for the eigenvalue $1$ is the total plane. For $a \neq 0$, there is only a one-dimensional eigenspace, and the mapping is not diagonalizable.
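A numerical check of this example (numpy; $a = 3$ is an illustrative choice): the only eigenvalue of the shearing matrix is $1$, and its eigenspace is one-dimensional, so no basis of eigenvectors exists:

```python
import numpy as np

a = 3.0   # any a != 0 gives the same picture
M = np.array([[1.0, a],
              [0.0, 1.0]])   # shearing matrix

eigenvalues = np.linalg.eigvals(M)
print(eigenvalues)           # both eigenvalues equal 1

# Geometric multiplicity of the eigenvalue 1: dim ker(E_2 - M)
mult = 2 - np.linalg.matrix_rank(np.eye(2) - M)
print(mult)                  # 1, so there is no basis of eigenvectors
```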

The product of two diagonal matrices is again a diagonal matrix. The following example shows that the product of two diagonalizable matrices is not necessarily diagonalizable.


Let $G$ and $H$ denote two lines in $\mathbb{R}^2$ through the origin, and let $\varphi$ and $\psi$ denote the reflections at these axes. A reflection at an axis is always diagonalizable; the axis and the line orthogonal to the axis are eigenlines (with eigenvalues $1$ and $-1$). The composition
$$\psi \circ \varphi$$
of the reflections is a plane rotation, the angle of rotation being twice the angle between the two lines. However, a rotation is only diagonalizable if the angle of rotation is $0$ or $180$ degrees. If the angle between the axes is different from $0$ and $90$ degrees, then $\psi \circ \varphi$ does not have any eigenvector.
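A numerical sketch of this example (numpy; the axis angles are illustrative choices): the reflection at the line through the origin with angle $\theta$ has the matrix $\begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}$, and the composition of two such reflections has no real eigenvalue when the angle between the axes is neither $0$ nor $90$ degrees:

```python
import numpy as np

def reflection(theta):
    # Reflection at the line through the origin with angle theta
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

phi = reflection(0.0)          # reflection at the x-axis
psi = reflection(np.pi / 6)    # axis at 30 degrees

R = psi @ phi                  # composition: rotation by 60 degrees
eigenvalues = np.linalg.eigvals(R)
print(eigenvalues)             # a complex conjugate pair, no real eigenvalue
```

Both reflections are diagonalizable, yet their product is a rotation by $60$ degrees, whose eigenvalues are not real.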

