
The bisection method



The bisection method is based on the theorem of existence of roots for continuous functions, which guarantees the existence of at least one root $\alpha$ of the function $f$ in the interval $[a, b]$ if $f(a)$ and $f(b)$ have opposite sign. If in $[a, b]$ the function is also monotone, that is $f'(x) > 0$ (or $f'(x) < 0$) for all $x \in [a, b]$, then the root of the function is unique. Once the existence of the solution has been established, the algorithm defines a sequence $x_k$: the sequence of the mid-points of intervals of decreasing width which satisfy the hypothesis of the roots theorem.

Roots Theorem


The theorem of existence of roots for continuous functions (or Bolzano's theorem) states:

Let $f : [a, b] \to \mathbb{R}$ be a continuous function such that $f(a) \cdot f(b) < 0$.

Then there exists at least one point $\alpha$ in the open interval $(a, b)$ such that $f(\alpha) = 0$.

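For example, $f(x) = x^2 - 2$ is continuous on $[1, 2]$ with $f(1) = -1 < 0$ and $f(2) = 2 > 0$, so the theorem guarantees at least one root in $(1, 2)$; indeed $f(\sqrt{2}) = 0$.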

Bisection algorithm


Given $[a_0, b_0] = [a, b]$ such that the hypotheses of the roots theorem are satisfied, and given a tolerance $\varepsilon > 0$:

  1. $x_k = \frac{a_k + b_k}{2}$;
  2. if $\frac{b_k - a_k}{2} < \varepsilon$ exit;
  3. if $f(x_k) = 0$ break;
    else if $f(a_k) \cdot f(x_k) < 0$ then $a_{k+1} = a_k$, $b_{k+1} = x_k$;
    else $a_{k+1} = x_k$, $b_{k+1} = b_k$;
  4. go to step 1;

In the first step we define the new value of the sequence: the new mid-point $x_k$. In the second step we perform a control on the tolerance: if the error is smaller than the given tolerance, we accept $x_k$ as a root of $f$. The third step consists in the evaluation of the function in $x_k$: if $f(x_k) = 0$ we have found the solution; otherwise, since we divided the interval in two, we need to find out on which side the root lies. To this aim we use the hypothesis of the roots theorem, that is, we seek the new interval $[a_{k+1}, b_{k+1}]$ such that the function has opposite signs at its boundaries, and we re-define the interval by moving $a_k$ or $b_k$ to $x_k$. Finally, if we have not yet found a good approximation of the solution, we go back to the starting point.
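
As an illustration, the algorithm above can be written in Python as follows. This is a minimal sketch: the function name bisect, the return value and the use of the half-width $\frac{b_k - a_k}{2}$ as error estimate are choices made here, not prescribed by the algorithm.

  def bisect(f, a, b, tol):
      """Approximate a root of f in [a, b] by bisection.

      Assumes f is continuous on [a, b] and f(a) * f(b) < 0,
      so that the roots theorem applies.
      """
      if f(a) * f(b) > 0:
          raise ValueError("f(a) and f(b) must have opposite signs")
      while True:
          x = (a + b) / 2            # step 1: new mid-point
          if (b - a) / 2 < tol:      # step 2: tolerance control
              return x
          fx = f(x)
          if fx == 0:                # step 3: exact root found
              return x
          if f(a) * fx < 0:          # the sign change is in [a, x]
              b = x
          else:                      # the sign change is in [x, b]
              a = x
          # step 4: go back to step 1

For example, bisect(lambda x: x**2 - 2, 1.0, 2.0, 1e-12) returns an approximation of $\sqrt{2}$.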

Convergence of the bisection method

At each iteration the interval $[a_k, b_k]$ is divided into halves, where with $a_k$ and $b_k$ we indicate the extrema of the interval at iteration $k$. Obviously $[a_0, b_0] = [a, b]$. We indicate with $|I_k| = b_k - a_k$ the length of the interval $[a_k, b_k]$. In particular we have

$|I_0| = b - a, \qquad |I_k| = \frac{|I_0|}{2^k} = \frac{b - a}{2^k}$.

Note that $\alpha \in [a_k, b_k]$ for every $k$ and that $x_k$ is the mid-point of $[a_k, b_k]$, that means

$|x_k - \alpha| \le \frac{|I_k|}{2} = \frac{b - a}{2^{k+1}}$.

From this we have that $\lim_{k \to \infty} |x_k - \alpha| = 0$, since $\lim_{k \to \infty} \frac{b - a}{2^{k+1}} = 0$. For this reason we obtain

$\lim_{k \to \infty} x_k = \alpha$,

which proves the global convergence of the method.

The convergence of the bisection method is very slow. Although the error, in general, does not decrease monotonically, the average rate of convergence is 1/2 and so, slightly relaxing the definition of order of convergence, it is possible to say that the method converges linearly with rate 1/2. Do not be confused by the fact that, in some books or other references, the error is sometimes written as $\frac{b - a}{2^k}$ instead of $\frac{b - a}{2^{k+1}}$: this is simply because the sequence is defined for $k \ge 1$ instead of $k \ge 0$.
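
To observe this behaviour numerically, one can monitor the half-width of the interval, which bounds the error, along the iterations. A minimal sketch in Python, using $f(x) = x^2 - 2$ on $[1, 2]$ as an illustrative test problem (its root in the interval is $\sqrt{2}$):

  import math

  # Bisection on f(x) = x**2 - 2 over [1, 2]; print the true error and its bound.
  f = lambda x: x**2 - 2
  a, b = 1.0, 2.0
  alpha = math.sqrt(2)

  for k in range(10):
      x = (a + b) / 2
      print(k, abs(x - alpha), (b - a) / 2)   # bound (b - a)/2 = (b0 - a0)/2**(k+1)
      if f(a) * f(x) < 0:
          b = x
      else:
          a = x

The printed bound is halved at every step, while the true error can occasionally increase before decreasing again, which is why the convergence is linear only on average.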

Example


Consider the function $f(x) = \sin(x)$ in the interval $\left[-\frac{\pi}{2}, \frac{5\pi}{2}\right]$. In this interval the function has 3 roots: $x = 0$, $x = \pi$ and $x = 2\pi$.

Theoretically the bisection method converges with only one iteration, since $x_0 = \pi$ is the mid-point of the starting interval. In practice, nonetheless, the method converges to $x = 0$ or to $x = 2\pi$. In fact, because of the finite representation of real numbers on the calculator, the computed value of $\sin(\pi)$ could be positive or negative, depending on the approximation of $\pi$, but never exactly zero. In this way the bisection algorithm, in this case, automatically excludes the root $x = \pi$ at the first iteration, since $f(x_0) \neq 0$ and the error is still large ($\frac{b - a}{2} = \frac{3\pi}{2} \approx 4.71$).
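
This effect is easy to verify in double-precision arithmetic. The following sketch assumes the $\sin(x)$ example above; it shows that the computed $\sin(\pi)$ is a tiny positive number rather than zero, so the mid-point $\pi$ is not recognised as a root and the iteration moves on to one of the other roots.

  import math

  # math.pi is only an approximation of pi, so sin(pi) is not exactly zero.
  print(math.sin(math.pi))          # about 1.2246e-16: tiny, but positive

  # Run the bisection on sin(x) over [-pi/2, 5*pi/2].
  a, b = -math.pi / 2, 5 * math.pi / 2
  for _ in range(60):
      x = (a + b) / 2
      if math.sin(a) * math.sin(x) < 0:
          b = x
      else:
          a = x
  print(x)   # here it converges towards the root at 0; with the opposite
             # rounding of sin(pi) it would converge towards 2*pi instead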

Suppose that the algorithm converges to $\alpha = 0$ and let us see how many iterations are required to satisfy the relation $|x_k - \alpha| \le \varepsilon$ for a given tolerance $\varepsilon$. In practice, we need to impose

$\frac{b - a}{2^{k+1}} \le \varepsilon$,

and so, solving this inequality, we have

$k \ge \log_2\left(\frac{b - a}{\varepsilon}\right) - 1$,

and, since $k$ is a natural number, we take the smallest integer satisfying this bound; for instance, with $b - a = 3\pi$ and $\varepsilon = 10^{-10}$ we find $k = 36$.
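
The same bound can be evaluated numerically. An illustrative computation in Python, assuming $b - a = 3\pi$ and the sample tolerance $\varepsilon = 10^{-10}$ used above:

  import math

  # Smallest natural k such that (b - a) / 2**(k + 1) <= eps.
  b_minus_a = 3 * math.pi
  eps = 1e-10

  k = math.ceil(math.log2(b_minus_a / eps) - 1)
  print(k)                                  # 36
  print(b_minus_a / 2 ** (k + 1) <= eps)    # True
  print(b_minus_a / 2 ** k <= eps)          # False: one iteration fewer is not enough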


Other resources on the bisection method


See the resources on the rootfinding for nonlinear equations page.