# The bisection method


The bisection method is based on the existence theorem for roots of continuous functions, which guarantees that the function ${\displaystyle f}$ has at least one root ${\displaystyle \alpha }$ in the interval ${\displaystyle [a,b]}$ if ${\displaystyle f(a)}$ and ${\displaystyle f(b)}$ have opposite signs. If ${\displaystyle f}$ is also strictly monotone on ${\displaystyle [a,b]}$, for instance ${\displaystyle f'(x)>0\;\forall x\in [a,b]}$, then the root is unique. Once the existence of a solution is established, the algorithm defines a sequence ${\displaystyle x_{k}}$ of mid-points of intervals of decreasing width, each of which satisfies the hypothesis of the roots theorem.

## Roots Theorem

The existence theorem for roots of continuous functions (or Bolzano's theorem) states:

Let ${\displaystyle f:[a,b]\to \mathbb {R} }$ be a continuous function such that ${\displaystyle f(a)\cdot f(b)<0}$.

Then there exists at least one point ${\displaystyle x}$ in the open interval ${\displaystyle \displaystyle (a,b)}$ such that ${\displaystyle \displaystyle f(x)=0}$.


## Bisection algorithm

Given ${\displaystyle f\in C^{0}([a,b])}$ such that the hypotheses of the roots theorem are satisfied, and given a tolerance ${\displaystyle \epsilon }$:

1. ${\displaystyle x_{k}={\frac {a+b}{2}},\qquad k\geq 1}$;
2. if ${\displaystyle \displaystyle e_{k}=|b-x_{k}|\leq \epsilon }$: stop;
3. if ${\displaystyle \displaystyle f(x_{k})=0}$: stop;
else if ${\displaystyle \displaystyle f(a)f(x_{k})>0}$: ${\displaystyle \displaystyle a=x_{k}}$;
else: ${\displaystyle \displaystyle b=x_{k}}$;
4. go to step 1.

In the first step we define the new value of the sequence: the new mid-point. In the second step we check the tolerance: if the error is smaller than the given tolerance, we accept ${\displaystyle x_{k}}$ as a root of ${\displaystyle f}$. The third step consists of evaluating the function at ${\displaystyle x_{k}}$: if ${\displaystyle f(x_{k})=0}$ we have found the solution; otherwise, since we divided the interval in two, we need to find out on which side the root lies. To this end we use the hypothesis of the roots theorem: we seek the new interval on which the function has opposite signs at the endpoints, and we redefine the interval by moving ${\displaystyle a}$ or ${\displaystyle b}$ to ${\displaystyle x_{k}}$. Finally, if we have not yet found a good approximation of the solution, we go back to the first step.
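The steps above can be sketched in Python as follows (the function name and the `max_iter` safeguard are illustrative additions, not part of the algorithm as stated):

```python
import math

def bisection(f, a, b, eps, max_iter=1000):
    """Find a root of f in [a, b] by bisection.

    Assumes f is continuous and f(a), f(b) have opposite signs
    (the hypothesis of Bolzano's theorem).
    """
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2          # step 1: the new mid-point
        if abs(b - x) <= eps:    # step 2: tolerance check
            return x
        fx = f(x)
        if fx == 0:              # step 3: exact root found
            return x
        elif f(a) * fx > 0:      # root lies in [x, b]
            a = x
        else:                    # root lies in [a, x]
            b = x
    return (a + b) / 2

# For example, the root of cos(x) in [0, 2] is pi/2:
root = bisection(math.cos, 0, 2, 1e-10)
```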

## Convergence of the bisection method

At each iteration the interval ${\displaystyle {\mathcal {I}}_{k}=[a_{k},b_{k}]}$ is halved, where ${\displaystyle a_{k}}$ and ${\displaystyle b_{k}}$ denote the endpoints of the interval at iteration ${\displaystyle k\geq 0}$. Obviously ${\displaystyle {\mathcal {I}}_{0}=[a,b]}$. We denote by ${\displaystyle |{\mathcal {I}}_{k}|=\mathrm {meas} ({\mathcal {I}}_{k})}$ the length of the interval ${\displaystyle {\mathcal {I}}_{k}}$. In particular we have

${\displaystyle |{\mathcal {I}}_{k}|={\frac {|{\mathcal {I}}_{k-1}|}{2}}={\frac {|{\mathcal {I}}_{k-2}|}{2^{2}}}=...={\frac {|{\mathcal {I}}_{0}|}{2^{k}}}={\frac {b-a}{2^{k}}}.}$

Note that ${\displaystyle \alpha \in {\mathcal {I}}_{k}\;\forall k\geq 0}$, which means

${\displaystyle e_{k}\leq |{\mathcal {I}}_{k}|}$.

Since ${\displaystyle \lim _{k\to \infty }{\frac {1}{2^{k}}}=0}$, it follows that ${\displaystyle \lim _{k\to \infty }e_{k}=0}$, and therefore

${\displaystyle \displaystyle \lim _{k\to \infty }x_{k}=\alpha }$,

which proves the global convergence of the method.

The convergence of the bisection method is very slow. Although the error does not, in general, decrease monotonically, the average rate of convergence is 1/2, and so, slightly relaxing the definition of order of convergence, one can say that the method converges linearly with rate 1/2. Do not be confused by the fact that, in some books or other references, the error is sometimes written as ${\displaystyle e_{k}={\frac {b-a}{2^{k+1}}}}$. This is because the sequence is defined for ${\displaystyle k\geq 0}$ instead of ${\displaystyle k\geq 1}$.
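The non-monotone behaviour of the error is easy to observe numerically. The following sketch runs ten bisection steps on ${\displaystyle f(x)=\cos x}$ over the (assumed) interval ${\displaystyle [0,2]}$, whose only root is ${\displaystyle \pi /2}$, and checks that the true error always stays below the halving bound ${\displaystyle (b-a)/2^{k}}$ even though it does not shrink at every step:

```python
import math

f = math.cos
a, b = 0.0, 2.0
alpha = math.pi / 2          # the exact root of cos in [0, 2]

errors = []
for k in range(1, 11):
    x = (a + b) / 2          # mid-point x_k
    errors.append(abs(x - alpha))
    if f(a) * f(x) > 0:      # root lies in [x, b]
        a = x
    else:                    # root lies in [a, x]
        b = x

# The true error |x_k - alpha| need not decrease at every step
# (here it grows from k = 2 to k = 3), but it is always bounded
# by (b0 - a0) / 2**k, which halves at each iteration.
for k, e in enumerate(errors, start=1):
    assert e <= 2.0 / 2**k
```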

## Example

Consider the function ${\displaystyle \displaystyle f(x)=\cos x}$ in ${\displaystyle \displaystyle [0,3\pi ]}$. In this interval the function has 3 roots: ${\displaystyle \alpha _{1}={\frac {\pi }{2}}}$, ${\displaystyle \alpha _{2}={\frac {3\pi }{2}}}$ and ${\displaystyle \alpha _{3}={\frac {5\pi }{2}}}$.

Theoretically the bisection method converges to ${\displaystyle \displaystyle \alpha _{2}}$ in a single iteration. In practice, however, the method converges to ${\displaystyle \displaystyle \alpha _{1}}$ or to ${\displaystyle \displaystyle \alpha _{3}}$: because of the finite representation of real numbers on the computer, ${\displaystyle x_{1}\neq {\frac {3\pi }{2}}}$, and depending on the rounding, ${\displaystyle \displaystyle f(x_{1})}$ may be positive or negative, but never zero. In this case the bisection algorithm therefore automatically excludes the root ${\displaystyle \displaystyle \alpha _{2}}$ at the first iteration, since the error is still large (${\displaystyle \displaystyle e_{1}=\alpha _{2}}$).

Suppose that the algorithm converges to ${\displaystyle \displaystyle \alpha _{1}}$, and let us see how many iterations are required to satisfy ${\displaystyle \displaystyle e_{k}\leq 10^{-10}}$. We need to impose

${\displaystyle e_{k}\leq {\frac {3\pi }{2^{k}}}\leq 10^{-10}}$,

and so, solving this inequality, we have

${\displaystyle k\geq \log _{2}(3\cdot 10^{10}\pi )\approx 36.46}$,

and, since ${\displaystyle k}$ is a natural number, we find ${\displaystyle k\geq 37}$.

## References

• Atkinson, Kendall E. (1989). "chapter 2.1". An Introduction To Numerical Analysis (2nd ed.).
• Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2007). "chapter 6.2". Numerical Mathematics (2nd ed.).
• Süli, Endre; Mayers, David F (2003). "chapter 1.6". An introduction to numerical analysis (1st ed.).

## Other resources on the bisection method

See the resources about rootfinding for nonlinear equations page.