
Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 7




Approximation

A basic thought of mathematics is the idea of approximation, which occurs in many contexts and which is important both for mathematics as an auxiliary science for the empirical sciences and for the construction of mathematics as a pure science, in particular in analysis.

The first example for this is measuring, say the length of a line segment or the duration of a time interval. Depending on the context and the aim, there are quite different ideas of what an exact measurement is, and the desired accuracy has an impact on which measuring device to use.

The result of a measurement is given, with respect to a unit of measurement, by a decimal fraction, that is, as a number with finitely many digits after the point. The number of digits after the point indicates the claimed exactness of the measurement. To describe the results of a measurement, one needs neither irrational numbers nor rational numbers with a periodic decimal expansion.

Let's have a look at meteorology. From measurements at several different measurement stations, one tries to set up the weather forecast for the following days with mathematical models and computer simulation. In order to make better forecasts, one needs more measurement stations.

Let's have a look at approximations as they appear in mathematics. A certain line segment can (at least ideally) be divided into parts of equal length, and one may be interested in the length of these parts, or in the length of the diagonal of the unit square. These lengths could also be measured; however, mathematics offers better descriptions of these lengths by providing rational numbers and irrational numbers (like $\sqrt{2}$). The determination of a good approximation is then pursued within mathematics. Let us consider the fraction . An approximation of this number with an exactness of nine digits is given by

The decimal fractions on the left and on the right are both approximations (estimates) of the true fraction, with an error smaller than $10^{-9}$. This is a typical accuracy of a calculator, but depending on the aim, one sometimes wants a better accuracy (a smaller error). The computation in this example rests on the division algorithm, and one can continue it to achieve any desired error bound (here, an additional aspect is that, because of the periodicity, we can just read off the digits and repeat them and do not have to compute further). The approximation of a given number by a decimal fraction is also called rounding.
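To make the role of the division algorithm concrete, the following Python sketch produces any desired number of digits after the point of a fraction $p/q$, and hence an approximation whose error is below any prescribed bound; the function name decimal_digits and the sample fraction $1/7$ are chosen here purely for illustration.

def decimal_digits(p, q, k):
    """Return the first k digits after the decimal point of p/q
    (p, q positive integers), obtained by long division; the resulting
    truncation differs from p/q by less than 10 ** (-k)."""
    digits = []
    remainder = p % q
    for _ in range(k):
        remainder *= 10
        digits.append(remainder // q)
        remainder %= q
    return digits

# Example: the first nine digits of 1/7.
print(decimal_digits(1, 7, 9))   # [1, 4, 2, 8, 5, 7, 1, 4, 2]

Increasing $k$ increases the computational effort but yields a correspondingly smaller error, in line with the principle of approximation formulated below.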

In the empirical and in the mathematical situation, we have the following principle of approximation.

Principle of approximation: There does not exist a universal accuracy for an approximation. A good approximation method is not a single approximation, but rather a method to produce for any given wanted accuracy (error, level of exactness, deviation) an approximation within the given accuracy. To increase the accuracy (make the error smaller) one has to increase the effort.

With this principle at the back of one's mind, many difficult concepts like convergent sequence and continuity become comprehensible.

Approximations also appear in the sense that an empirical function, of which only a certain sample of values is known, shall be described by a mathematically simple function. An example for this is the interpolation theorem. Later we will also encounter the Taylor formula, which approximates a given function in a small neighborhood of a point by a polynomial. Here, too, the principle of approximation just mentioned occurs: in order to get a better approximation, one has to increase the degree of the polynomial. In integration theory, the graph of a function is enclosed between upper and lower staircase functions in order to approximate the area below the graph. With finer staircase functions (shorter steps), we get better approximations.

How good an approximation is often becomes clear only when we want to compute with it. For example, given certain estimates for the side lengths of a rectangle, what estimate holds for the area of the rectangle? Conversely, if we want to allow a certain error for the area of a rectangle, what error can we allow for the side lengths?
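A small worked estimate in this direction (the bound $M$ and the error $\delta$ are generic symbols introduced here for illustration): if the true side lengths $a, b$ and the estimated side lengths $a', b'$ all lie between $0$ and $M$, and if $\vert a - a' \vert \leq \delta$ and $\vert b - b' \vert \leq \delta$, then

$\vert ab - a'b' \vert = \vert a(b - b') + b'(a - a') \vert \leq \vert a \vert \cdot \vert b - b' \vert + \vert b' \vert \cdot \vert a - a' \vert \leq M\delta + M\delta = 2M\delta .$

So, to guarantee an error of at most $\epsilon$ for the area, it suffices to know both side lengths within $\delta = \frac{\epsilon}{2M}$.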

We are going to have a closer look at square roots and how they might be approximated. More precisely, we will describe square roots as limits of sequences. We have seen in the fourth lecture that there is no simpler description of the square root of a prime number, since such square roots are irrational numbers.



Real sequences
Heron of Alexandria (1st century AD)

We begin with a motivating example.


We would like to "compute" the square root of a natural number, say of $5$. Such a number $x$ with the property $x^2 = 5$ does not exist within the rational numbers (this follows from the unique prime factorization). If $x$ is such an element, then $-x$ also has this property. Due to Corollary 6.6, there cannot be more than two solutions.

Though there is no solution within the rational numbers for the equation $x^2 = 5$, there exist arbitrarily good approximations for it with rational numbers. Arbitrarily good means that the error (the deviation) can be made so small that it lies below any given positive bound. The classical method to approximate a square root is Heron's method. This is an iterative method, i.e., the next approximation is computed from the preceding approximation. Let us start with $a := 2$ as a first approximation. Because of

$2^2 = 4 < 5 ,$

we see that $a$ is too small, $a < \sqrt{5}$. From $a^2 < 5$ ($a$ being positive) we get $a < \frac{5}{a}$ and therefore $\left( \frac{5}{a} \right)^2 = \frac{25}{a^2} > 5$, so $\sqrt{5} < \frac{5}{a}$. Hence we have the estimates

$a < \sqrt{5} < \frac{5}{a} ,$

where we get a rational number on the right-hand side if $a$ is rational. Such an estimate provides a certain idea where $\sqrt{5}$ lies. The difference $\frac{5}{a} - a$ is a measure for how good the approximation is.

In particular, when we start with $a = 2$, we get that the square root of $5$ is between $2$ and $\frac{5}{2}$. Then we take the arithmetic mean of the interval bounds, so

$a_1 := \frac{2 + \frac{5}{2}}{2} = \frac{9}{4} .$

Due to $\left( \frac{9}{4} \right)^2 = \frac{81}{16} > 5$, this value is too large, and therefore $\sqrt{5}$ lies in the interval $\left[ \frac{5}{9/4} , \frac{9}{4} \right] = \left[ \frac{20}{9} , \frac{9}{4} \right]$. Then we take again the arithmetic mean of these interval bounds, and we set

$a_2 := \frac{\frac{20}{9} + \frac{9}{4}}{2} = \frac{161}{72}$

to be the next approximation. Continuing like that, we get better and better approximations for $\sqrt{5}$.

In this way, we always get a sequence of better and better approximations of the square root of a positive real number.


Let $c$ denote a positive real number. The Heron sequence with the positive initial value $x_0$ is defined recursively by

$x_{n+1} := \frac{x_n + \frac{c}{x_n}}{2} .$
Accordingly, this method is called Heron's method for the computation of square roots. In particular, this method produces for every natural number $n$ a real number $x_n$ which approximates a number defined by a certain algebraic property (namely $x^2 = c$) within an error which is arbitrarily small. In many technical applications, it is enough to know a certain number within a certain accuracy, but the accuracy aimed at might depend on the technical goal. In general, there is no accuracy which will work for all possible applications. Instead, it is important to know how to improve a good approximation by a better approximation, and to know how many (computational) steps one has to take in order to reach a certain desired approximation. This idea yields the concepts of sequence and convergence.
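As an illustrative sketch of this recursion (the function name heron and the use of exact fractions are choices made here for illustration), the following Python code computes the first members of the Heron sequence for $c = 5$ with initial value $x_0 = 2$.

from fractions import Fraction

def heron(c, x0, steps):
    """Return the members x_0, ..., x_steps of the Heron sequence for c,
    computed exactly with rational numbers."""
    x = Fraction(x0)
    members = [x]
    for _ in range(steps):
        x = (x + Fraction(c) / x) / 2   # arithmetic mean of x and c/x
        members.append(x)
    return members

# Approximating the square root of 5, starting with x_0 = 2.
for n, x in enumerate(heron(5, 2, 4)):
    print(n, x, float(x))
# The values begin with 2, 9/4, 161/72, ... and approach sqrt(5) = 2.2360679...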


A real sequence is a mapping

$\mathbb{N} \longrightarrow \mathbb{R} , \, n \longmapsto x_n .$

We usually write a sequence as $(x_n)_{n \in \mathbb{N}}$, or simply as $x_n$. For a given starting number $x_0$, the numbers defined recursively by Heron's method (for the computation of $\sqrt{c}$) form a sequence. Sometimes a sequence is not defined for all natural numbers, but just for all natural numbers $\geq N$ for some fixed $N$. But all concepts and statements apply also in this situation.


Let $(x_n)_{n \in \mathbb{N}}$ denote a real sequence, and let $x \in \mathbb{R}$. We say that the sequence converges to $x$ if the following property holds.

For every positive $\epsilon$, $\epsilon > 0$, there exists some $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$, the estimate

$\vert x_n - x \vert \leq \epsilon$

holds.

If this condition is fulfilled, then $x$ is called the limit of the sequence. For this, we write

$\lim_{n \rightarrow \infty} x_n = x .$
If the sequence converges to a limit, we just say that the sequence converges, otherwise, that the sequence diverges.

One should think of the given $\epsilon$ as a small but positive real number which expresses the desired accuracy (or the allowed error). The natural number $n_0$ represents the effort, namely how far one has to go in order to achieve the desired accuracy, and in fact in such a way that beyond this effort number $n_0$, all the following members $x_n$, $n \geq n_0$, stay within the allowed error. Thus, convergence means that every desired accuracy can be achieved with some suitable effort. The smaller the error is supposed to be (the better the approximation shall be), the higher the effort will be. Instead of arbitrary positive real numbers $\epsilon$, one can also work with the unit fractions (the rational numbers of the form $\frac{1}{k}$, $k \in \mathbb{N}_+$), see Exercise 7.7, or with the inverse powers of ten $10^{-k}$, $k \in \mathbb{N}$.
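The interplay between accuracy and effort can also be observed numerically. The following Python sketch (the function name and the use of floating-point arithmetic are illustrative choices) searches, for each inverse power of ten used as the allowed error, for the first index at which the Heron sequence for $c = 5$ comes within that error of $\sqrt{5}$; this is an experiment, not a proof of convergence.

import math

def first_index_within(eps, c=5, x0=2, max_steps=60):
    """Return the first index n (found by trial) with |x_n - sqrt(c)| <= eps
    for the Heron sequence with initial value x0, or None if not reached."""
    x = float(x0)
    for n in range(max_steps + 1):
        if abs(x - math.sqrt(c)) <= eps:
            return n
        x = (x + c / x) / 2
    return None

for k in range(1, 8):
    eps = 10 ** (-k)
    print(eps, first_index_within(eps))
# The required effort grows only slowly as the allowed error shrinks.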

For $\epsilon > 0$ and a real number $x$, the interval $]x - \epsilon, x + \epsilon[$ is also called the $\epsilon$-neighborhood of $x$. A sequence converging to $0$ is called a null sequence.




A constant sequence $x_n := c$ converges to the limit $c$. This follows immediately, since for every $\epsilon > 0$, we can take $n_0 = 0$. Then we have

$\vert x_n - c \vert = \vert c - c \vert = 0 \leq \epsilon$

for all $n \geq n_0$.


The sequence

$x_n := \frac{1}{n} , \, n \geq 1 ,$

converges to the limit $0$. To show this, let some positive $\epsilon$ be given. Due to the Archimedean axiom, there exists an $n_0$ such that $\frac{1}{n_0} \leq \epsilon$. Then for all $n \geq n_0$, the estimate

$\vert x_n - 0 \vert = \frac{1}{n} \leq \frac{1}{n_0} \leq \epsilon$

holds.
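For instance, for $\epsilon = \frac{1}{100}$ one can take $n_0 = 100$: for all $n \geq 100$ we have $\frac{1}{n} \leq \frac{1}{100} = \epsilon$.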


We consider the sequence

$x_n := 0.\underbrace{33 \ldots 3}_{n \text{ digits}} ,$

that is, with exactly $n$ digits after the point. We claim that this sequence converges to $\frac{1}{3}$. For this, we have to determine $n_0$ for a given $\epsilon$, and before we can do this, we have to recall the meaning of a decimal expansion. We have

$x_n = \sum_{i = 1}^{n} \frac{3}{10^{i}}$

and therefore

$\frac{1}{3} - x_n = \frac{1}{3} - \sum_{i = 1}^{n} \frac{3}{10^{i}} = \frac{1}{3 \cdot 10^{n}} .$

If now a positive $\epsilon$ is given, then for $n$ sufficiently large, this last term is $\leq \epsilon$.
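One can check the displayed identity for small $n$ with exact rational arithmetic; the following short Python sketch (variable and function names chosen for illustration) does this.

from fractions import Fraction

def x(n):
    """The decimal fraction 0.33...3 with exactly n digits after the point."""
    return sum(Fraction(3, 10 ** i) for i in range(1, n + 1))

for n in range(1, 6):
    print(n, x(n), Fraction(1, 3) - x(n))   # the difference equals 1/(3 * 10**n)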


A real sequence has at most one limit.

We assume that the sequence has two distinct limits $x$ and $y$, $x \neq y$. Then $d := \vert x - y \vert > 0$. We consider $\epsilon := \frac{d}{3} > 0$. Because of the convergence to $x$, there exists an $n_0$ such that

$\vert x_n - x \vert \leq \epsilon \text{ for all } n \geq n_0 ,$

and because of the convergence to $y$, there exists an $n_0'$ such that

$\vert x_n - y \vert \leq \epsilon \text{ for all } n \geq n_0' ,$

hence both conditions hold simultaneously for $n \geq \max(n_0, n_0')$. Suppose that $n$ is at least as large as this maximum. Then, due to the triangle inequality, we arrive at the contradiction

$d = \vert x - y \vert \leq \vert x - x_n \vert + \vert x_n - y \vert \leq \frac{d}{3} + \frac{d}{3} = \frac{2d}{3} < d .$


Boundedness

A subset $M \subseteq \mathbb{R}$ of the real numbers is called bounded if there exist real numbers $s \leq S$ such that

$M \subseteq [s, S] .$

In this situation, $S$ is also called an upper bound for $M$, and $s$ is called a lower bound for $M$. These concepts are also used for sequences, namely for the image set, the set of all members $\{ x_n \mid n \in \mathbb{N} \}$. For the sequence $x_n = \frac{1}{n}$, $n \geq 1$, the number $1$ is an upper bound and $0$ is a lower bound.


A convergent real sequence is bounded.

Let $(x_n)_{n \in \mathbb{N}}$ be the convergent sequence, with $x \in \mathbb{R}$ as its limit. Choose some $\epsilon > 0$. Due to convergence, there exists some $n_0$ such that

$\vert x_n - x \vert \leq \epsilon \text{ for all } n \geq n_0 .$

So in particular

$\vert x_n \vert \leq \vert x \vert + \epsilon \text{ for all } n \geq n_0 .$

Below $n_0$ there are only finitely many members, hence the maximum

$B := \max \{ \vert x_0 \vert, \vert x_1 \vert, \ldots , \vert x_{n_0} \vert, \vert x \vert + \epsilon \}$

is well-defined. Therefore, $B$ is an upper bound and $-B$ is a lower bound for $\{ x_n \mid n \in \mathbb{N} \}$.


It is easy to give a bounded but not convergent sequence.


The alternating sequence

$x_n := (-1)^{n}$

is bounded, but not convergent. The boundedness follows directly from $\vert x_n \vert \leq 1$ for all $n$. However, there is no convergence. For if $x \geq 0$ were the limit, then for positive $\epsilon < 1$ and every odd $n$, the relation

$\vert x_n - x \vert = \vert -1 - x \vert = 1 + x \geq 1 > \epsilon$

holds, so these members are outside of this $\epsilon$-neighborhood. In the same way, we can argue against a negative limit.



The squeeze criterion

Suppose that $(x_n)_{n \in \mathbb{N}}$ and $(y_n)_{n \in \mathbb{N}}$ are convergent sequences such that $x_n \leq y_n$ holds for all $n \in \mathbb{N}$. Then

$\lim_{n \rightarrow \infty} x_n \leq \lim_{n \rightarrow \infty} y_n .$

Proof


The following statement is called the squeeze criterion.


Let $(x_n)_{n \in \mathbb{N}}$, $(y_n)_{n \in \mathbb{N}}$, and $(z_n)_{n \in \mathbb{N}}$ denote real sequences. Suppose that

$x_n \leq y_n \leq z_n \text{ for all } n \in \mathbb{N} ,$

and that $(x_n)_{n \in \mathbb{N}}$ and $(z_n)_{n \in \mathbb{N}}$ converge

to the same limit $L$. Then $(y_n)_{n \in \mathbb{N}}$ also converges to $L$.

Proof
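A typical application: since $0 \leq \frac{n}{n^2 + 1} \leq \frac{1}{n}$ for all $n \geq 1$, and both the constant sequence $0$ and the sequence $\frac{1}{n}$ converge to $0$, the squeeze criterion shows that the sequence $\frac{n}{n^2 + 1}$ also converges to $0$.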



PDF-version of this lecture
Exercise sheet for this lecture (PDF)