
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 20/refcontrol




The interpolation theorem
A piecewise linear and a polynomial interpolation.

The following theorem is called the theorem about polynomial interpolation and describes the interpolation of given function values by a polynomial. If just one function value at one point is given, then this determines a constant polynomial, two values at two points determine a linear polynomial (its graph is a line), three values at three points determine a quadratic polynomial, etc.


Theorem 20.1

Let $K$ be a field, and let $n + 1$ different elements $a_0, a_1, \ldots, a_n \in K$ and $n + 1$ elements $b_0, b_1, \ldots, b_n \in K$ be given. Then there exists a unique polynomial $P \in K[X]$ of degree $\leq n$ such that
$$P(a_i) = b_i$$
holds for all $i$.

We prove the existence and consider first the situation where $b_j = 0$ for all $j \neq i$, for some fixed $i$. Then
$$(X - a_0)(X - a_1) \cdots (X - a_{i-1})(X - a_{i+1}) \cdots (X - a_n)$$
is a polynomial of degree $n$, which at the points $a_0, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n$ has value $0$. The polynomial
$$\frac{(X - a_0)(X - a_1) \cdots (X - a_{i-1})(X - a_{i+1}) \cdots (X - a_n)}{(a_i - a_0)(a_i - a_1) \cdots (a_i - a_{i-1})(a_i - a_{i+1}) \cdots (a_i - a_n)}$$
still has a zero at these points, but additionally, at $a_i$, its value is $1$. We denote this polynomial by $P_i$. Then
$$P = b_0 P_0 + b_1 P_1 + \cdots + b_n P_n$$
is the polynomial we are looking for, because for the point $a_k$, we have
$$P(a_k) = \sum_{j=0}^{n} b_j P_j(a_k) = b_k,$$
since $P_j(a_k) = 0$ for $j \neq k$ and $P_k(a_k) = 1$.

The uniqueness follows from Corollary 19.9.
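To illustrate the construction in the proof, consider, for instance, over $K = \mathbb{Q}$ the interpolation points $a_0 = 0$, $a_1 = 1$, $a_2 = 2$ with the prescribed values $b_0 = 1$, $b_1 = 2$, $b_2 = 5$. The auxiliary polynomials are
$$P_0 = \frac{(X-1)(X-2)}{(0-1)(0-2)} = \frac{(X-1)(X-2)}{2}, \quad P_1 = \frac{X(X-2)}{(1-0)(1-2)} = -X(X-2), \quad P_2 = \frac{X(X-1)}{(2-0)(2-1)} = \frac{X(X-1)}{2},$$
and therefore
$$P = 1 \cdot P_0 + 2 \cdot P_1 + 5 \cdot P_2 = X^2 + 1,$$
which indeed satisfies $P(0) = 1$, $P(1) = 2$, and $P(2) = 5$.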


A variant of the proof considers the mapping
$$K[X] \longrightarrow K^{n+1}, \quad P \longmapsto \left( P(a_0), P(a_1), \ldots, P(a_n) \right).$$
This mapping is $K$-linear, since its components are linear due to Remark 19.7. The interpolation theorem says that this mapping is surjective; this can be shown as in the proof above. Moreover, it claims that this mapping, when restricted to the linear subspace of all polynomials of degree $\leq n$, is an isomorphism.


If the data $a_0, a_1, \ldots, a_n$ and $b_0, b_1, \ldots, b_n$ are given, then one can find the interpolating polynomial $P$ of degree $\leq n$, which exists by Theorem 20.1, in the following way: We write
$$P = c_0 + c_1 X + c_2 X^2 + \cdots + c_n X^n$$
with unknown coefficients $c_0, c_1, \ldots, c_n$, and then determine these coefficients. Each interpolation point $(a_i, b_i)$ yields the linear equation
$$c_0 + c_1 a_i + c_2 a_i^2 + \cdots + c_n a_i^n = b_i$$
over $K$. The resulting system of linear equations has exactly one solution $(c_0, c_1, \ldots, c_n)$, which gives the polynomial $P$.
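For instance, for the interpolation data $a_0 = 0$, $a_1 = 1$, $a_2 = 2$ and $b_0 = 1$, $b_1 = 2$, $b_2 = 5$ considered above, the ansatz $P = c_0 + c_1 X + c_2 X^2$ leads to the linear system
$$c_0 = 1, \quad c_0 + c_1 + c_2 = 2, \quad c_0 + 2 c_1 + 4 c_2 = 5,$$
whose unique solution is $c_0 = 1$, $c_1 = 0$, $c_2 = 1$, giving again the interpolating polynomial $P = X^2 + 1$.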




Inserting an endomorphism

For a linear mapping
$$\varphi \colon V \longrightarrow V$$
on a $K$-vector space $V$, we can consider the iterations $\varphi^n$, that is, the $n$-fold composition of $\varphi$ with itself. Moreover, we can add linear mappings and multiply them with scalars from the field $K$. Therefore, expressions of the form
$$a_n \varphi^n + a_{n-1} \varphi^{n-1} + \cdots + a_2 \varphi^2 + a_1 \varphi + a_0$$
are linear mappings from $V$ to $V$. Here, we have to interpret
$$\varphi^0 = \operatorname{Id}_V,$$
and the constant term $a_0$ as $a_0 \operatorname{Id}_V$. At first glance, it is not clear why studying such polynomial expressions in $\varphi$ helps in understanding $\varphi$. The described expression can be understood in the sense that, in the polynomial $P = a_n X^n + a_{n-1} X^{n-1} + \cdots + a_1 X + a_0$, we substitute the variable $X$ by the linear mapping $\varphi$. This assignment fulfills the following structural properties.


Lemma 20.3

Let $K$ be a field, $V$ a $K$-vector space, and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Then the mapping
$$K[X] \longrightarrow \operatorname{Hom}_K(V, V), \quad P \longmapsto P(\varphi),$$
has the following properties.

  1. For a constant polynomial $P = c$, we have
     $$P(\varphi) = c \operatorname{Id}_V.$$
     In particular, the zero polynomial is sent to the zero mapping and the constant $1$-polynomial is sent to the identity.

  2. We have
     $$(P + Q)(\varphi) = P(\varphi) + Q(\varphi)$$
     for all polynomials $P, Q \in K[X]$.

  3. We have
     $$(P \cdot Q)(\varphi) = P(\varphi) \circ Q(\varphi)$$
     for all polynomials $P, Q \in K[X]$.

  4. We have
     $$(X^n)(\varphi) = \varphi^n$$
     for all $n \in \mathbb{N}$.

(1) and (4) are inherent in the definition of the substitution homomorphism. From these, also (2) and (3) follow.
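Property (3) means in particular that a factorization of a polynomial translates into a composition of the corresponding linear mappings. For instance, for $P = X - 1$ and $Q = X + 1$, we get
$$(X^2 - 1)(\varphi) = (P \cdot Q)(\varphi) = P(\varphi) \circ Q(\varphi) = (\varphi - \operatorname{Id}_V) \circ (\varphi + \operatorname{Id}_V).$$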


If $V$ is finite-dimensional, say of dimension $n$, then all powers $\varphi^i$, $i \in \mathbb{N}$, are elements in the $n^2$-dimensional vector space
$$\operatorname{Hom}_K(V, V)$$
of all linear mappings from $V$ to $V$. Because the space of homomorphisms also has finite dimension, these powers must be linearly dependent. That is, there exist some $m \in \mathbb{N}$ and coefficients $c_0, c_1, \ldots, c_m$, not all $0$, such that
$$c_m \varphi^m + c_{m-1} \varphi^{m-1} + \cdots + c_1 \varphi + c_0 \operatorname{Id}_V = 0$$
holds (here, $m \leq n^2$ is immediately clear; we will see later that even $m \leq n$ holds). The corresponding polynomial $P = c_m X^m + c_{m-1} X^{m-1} + \cdots + c_1 X + c_0$ has the property that it is not the zero polynomial and that, after replacing $X$ everywhere by $\varphi$, the zero mapping on $V$ arises (a concrete instance is given below, after the questions). We ask the following questions:


    • Does there exist some structure on the set of all polynomials $P$ with $P(\varphi) = 0$?
    • Does there exist an especially simple polynomial $P \neq 0$ with $P(\varphi) = 0$?
    • How can we find it?
    • Which properties of $\varphi$ can we deduce from the factor decomposition of this polynomial $P$?
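To illustrate the linear dependence of the powers described above, consider, for instance, the linear mapping on $K^2$ given by the matrix $M = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. Here, $M^2 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$, hence
$$M^2 - 2M + E_2 = 0,$$
where $E_2$ denotes the identity matrix. Therefore, the polynomial $P = X^2 - 2X + 1 = (X - 1)^2$ is not the zero polynomial, but $P(M)$ is the zero mapping.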

Let $K$ be a field, $V$ a finite-dimensional $K$-vector space, and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Let $v_1, \ldots, v_n$ be a basis of $V$, and let $M$ denote the corresponding matrix. Due to Lemma 11.10, we have a correspondence between compositions of linear mappings and matrix multiplication. In particular, $\varphi^n$ corresponds with $M^n$. In the same way, the scalar multiplication and the addition on the space of endomorphisms and on the space of matrices correspond to each other. Therefore, instead of the assignment $P \mapsto P(\varphi)$, we can also work with the assignment $P \mapsto P(M)$.
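To make the assignment $P \mapsto P(M)$ concrete, consider, for instance, the polynomial $P = X^2 + X + 1$ and the real matrix $M = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Then
$$P(M) = M^2 + M + E_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},$$
and this matrix describes the linear mapping $P(\varphi)$, where $\varphi$ denotes the linear mapping on $\mathbb{R}^2$ given by $M$ with respect to the standard basis.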



Ideals

A subset $\mathfrak{a}$ of a commutative ring $R$ is called an ideal, if the following conditions are fulfilled:

  1. $0 \in \mathfrak{a}$.
  2. For all $a, b \in \mathfrak{a}$, we have $a + b \in \mathfrak{a}$.
  3. For all $a \in \mathfrak{a}$ and $r \in R$, we have $r a \in \mathfrak{a}$.

The property $0 \in \mathfrak{a}$ can be replaced by the condition that $\mathfrak{a}$ is not empty. An ideal is a subgroup of the additive group of $R$ which, moreover, is also closed under scalar multiplication, that is, under multiplication by arbitrary elements of $R$.
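For instance, the set of even numbers $\{ 2k : k \in \mathbb{Z} \}$ is an ideal in $\mathbb{Z}$: it contains $0$, the sum of two even numbers is even, and every multiple of an even number is even. The set of odd numbers, in contrast, is not an ideal, since it does not contain $0$ and is not closed under addition.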


For a family of elements $a_1, a_2, \ldots, a_n$ in a commutative ring $R$, we denote by $(a_1, a_2, \ldots, a_n)$ the ideal generated by these elements. It consists of all linear combinations
$$r_1 a_1 + r_2 a_2 + \cdots + r_n a_n,$$
where
$$r_1, r_2, \ldots, r_n \in R.$$

An ideal in a commutative ring $R$ of the form
$$(a) = Ra = \{ ra : r \in R \}$$
is called a principal ideal.
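For instance, in $\mathbb{Z}$, the ideal generated by the two elements $4$ and $6$ is
$$(4, 6) = \{ 4r + 6s : r, s \in \mathbb{Z} \} = (2),$$
since $2 = 6 - 4$ belongs to it and, conversely, every element $4r + 6s$ is a multiple of $2$. So this generated ideal is in fact a principal ideal.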

The zero element $0$ forms, in every ring, the so-called zero ideal, which we write simply as $0 = (0)$. The $1$ and, moreover, every unit, generates as an ideal the total ring. A unit in a commutative ring is an invertible element, that is, an element $u$ such that there exists some $v$ with $uv = 1$. A commutative ring is a field if and only if all elements with the exception of $0$ are units.


The unit ideal in a commutative ring $R$ is the ring $R$ itself.

In a field, there exist exactly two ideals.


Let $R$ be a commutative ring. Then the following statements are equivalent.

  1. $R$ is a field.
  2. There exist exactly two ideals in $R$.

If $R$ is a field, then there exist the zero ideal and the unit ideal, and these are different ideals. Let $I$ be an ideal in $R$ different from $0$. Then $I$ contains some element $x \neq 0$, which is a unit. Therefore, $1 = x^{-1} x \in I$, and thus $I = R$.

Suppose now that $R$ is a commutative ring with exactly two ideals. Then $R$ is not the zero ring. Let now $x$ be an element in $R$ different from $0$. The principal ideal generated by $x$, that is, $(x) = Rx$, is different from $0$, and therefore it must be the other ideal, which is the unit ideal. In particular, this means $1 \in (x)$. Hence, $1 = rx$ for some $r \in R$, so that $x$ is a unit.
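For instance, $\mathbb{Q}$ is a field, and its only ideals are the zero ideal and $\mathbb{Q}$ itself, whereas $\mathbb{Z}$ is not a field, and accordingly it has many further ideals, such as $(2), (3), (4), \ldots$.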



Ideals in $K[X]$

Theorem 20.10

Every ideal in the polynomial ring $K[X]$ over a field $K$ is a principal ideal.

Let $\mathfrak{a}$ denote an ideal in $K[X]$ different from $0$. We consider the non-empty set of natural numbers
$$\{ \deg(P) : P \in \mathfrak{a}, \, P \neq 0 \}.$$
This set has a minimum $d$. This number arises from an element $F \in \mathfrak{a}$, $F \neq 0$, with $\deg(F) = d$. We claim that $\mathfrak{a} = (F)$.

The inclusion $(F) \subseteq \mathfrak{a}$ is clear. To prove the other inclusion $\mathfrak{a} \subseteq (F)$, let $P \in \mathfrak{a}$ be given. Due to Theorem 19.4, we have
$$P = F Q + R, \quad \text{where } R \neq 0 \text{ and } \deg(R) < \deg(F), \text{ or } R = 0.$$
Because of $R = P - FQ \in \mathfrak{a}$ and the minimality of $d = \deg(F)$, the first case can not happen. Therefore, $R = 0$, and this means that $P$ is a multiple of $F$.
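For instance, the ideal in $K[X]$ generated by the two polynomials $X^2 - 1$ and $X^2 + X$ is the principal ideal
$$(X^2 - 1, X^2 + X) = (X + 1),$$
since $X + 1 = (X^2 + X) - (X^2 - 1)$ belongs to the ideal, and conversely both generators are multiples of $X + 1$, namely $X^2 - 1 = (X + 1)(X - 1)$ and $X^2 + X = X(X + 1)$. Here, $X + 1$ realizes the minimal degree occurring in the ideal, in accordance with the proof.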




The minimal polynomial

Let $V$ be a finite-dimensional $K$-vector space and
$$\varphi \colon V \longrightarrow V$$
a linear mapping. Then the uniquely determined normed polynomial $P \in K[X]$ of minimal degree fulfilling
$$P(\varphi) = 0$$
is called the minimal polynomial of $\varphi$.

Corollary 20.12

Let $V$ be a finite-dimensional vector space over a field $K$, and let
$$\varphi \colon V \longrightarrow V$$
denote a linear mapping. Then the set
$$\{ P \in K[X] : P(\varphi) = 0 \}$$
is a principal ideal in the polynomial ring $K[X]$, which is generated by the minimal polynomial $\mu_\varphi$ of $\varphi$.

Proof



For the identity $\operatorname{Id}_V$ on a $K$-vector space $V \neq 0$, the minimal polynomial is just $X - 1$. This polynomial is sent under the evaluation homomorphism to
$$\operatorname{Id}_V - \operatorname{Id}_V = 0.$$
A constant polynomial $c$ is sent to $c \operatorname{Id}_V$, which is not the zero mapping, with the exception of $c = 0$ or $V = 0$.

For a homothety, that is, a mapping of the form $a \operatorname{Id}_V$, the minimal polynomial is $X - a$, under the condition $a \neq 0$ and $V \neq 0$. For the zero mapping on $V \neq 0$, the minimal polynomial is $X$; in case $V = 0$, it is the constant polynomial $1$.


For a diagonal matrix
$$M = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}$$
with different entries $d_1, d_2, \ldots, d_n$, the minimal polynomial is
$$P = (X - d_1)(X - d_2) \cdots (X - d_n).$$
This polynomial is sent under the substitution $X \mapsto M$ to
$$(M - d_1 E_n)(M - d_2 E_n) \cdots (M - d_n E_n).$$
We apply this to a standard vector $e_i$. The factor $M - d_j E_n$ sends $e_i$ to $(d_i - d_j) e_i$. Therefore, the $i$-th factor ensures that $e_i$ is annihilated. Since a basis is mapped under $P(M)$ to $0$, it must be the zero mapping.

Assume now that $P$ is not the minimal polynomial $\mu_M$. Then there exists, due to Corollary 20.12, a polynomial $F$ with
$$P = \mu_M F,$$
and, because of Lemma 19.8, $\mu_M$ is a partial product of the linear factors of $P$. But as soon as one factor of $P$ is removed, say we remove $X - d_j$, then $e_j$ is not annihilated by the corresponding mapping.
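As a concrete instance, for the diagonal matrix $M = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, the minimal polynomial is $(X - 2)(X - 3) = X^2 - 5X + 6$. Indeed,
$$M^2 - 5M + 6E_2 = \begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix} - \begin{pmatrix} 10 & 0 \\ 0 & 15 \end{pmatrix} + \begin{pmatrix} 6 & 0 \\ 0 & 6 \end{pmatrix} = 0,$$
while neither $X - 2$ nor $X - 3$ alone annihilates $M$, since $M - 2E_2 \neq 0$ and $M - 3E_2 \neq 0$.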


For the matrix
$$M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},$$
$X^2$ is the minimal polynomial. This polynomial becomes, after the substitution, the zero mapping, because of
$$M^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
The factors of $X^2$ of smaller degree are the constant polynomials and the polynomials $cX$ with $c \neq 0$, but these polynomials do not annihilate $M$.

