# Numerical Analysis/Truncation Errors

## Definition

Truncation errors are defined as the errors that result from using an approximation in place of an exact mathematical procedure.

There are two ways to measure the errors:

1. Local Truncation Error (LTE): the error, ${\displaystyle \tau }$, introduced by the approximation method at each step.
2. Global Truncation Error (GTE): the accumulated error, ${\displaystyle e}$, which is the absolute difference between the true value and the approximate value at a given step.

Assume that our methods take the following form.

Let ${\displaystyle y_{n+1}}$ and ${\displaystyle y_{n}}$ be approximation values. Then ${\displaystyle y_{n+1}=y_{n}+h\cdot A(t_{n},y_{n},h,f)}$, where ${\displaystyle h}$ is the time step, equal to ${\displaystyle t_{n+1}-t_{n}}$, and ${\displaystyle A}$ is an increment function: some algorithm for approximating the average slope ${\displaystyle {\frac {y_{n+1}-y_{n}}{h}}}$. Three important examples of ${\displaystyle A}$ are:

Euler's method: ${\displaystyle A(t_{n},y_{n},h,f)=f(t_{n},y_{n})}$.

Modified Euler's method: ${\displaystyle A(t_{n},y_{n},h,f)={\frac {1}{2}}(A_{1}+A_{2})}$, where
${\displaystyle {\begin{aligned}A_{1}&=f(t_{n},y_{n}){\text{, and}}\\A_{2}&=f(t_{n}+h,y_{n}+h\cdot A_{1}){\text{.}}\end{aligned}}}$

Runge-Kutta method: ${\displaystyle A(t_{n},y_{n},h,f)={\frac {1}{6}}(A_{1}+2A_{2}+2A_{3}+A_{4})}$, where
${\displaystyle {\begin{aligned}A_{1}&=f(t_{n},y_{n}){\text{,}}\\A_{2}&=f(t_{n}+{\frac {1}{2}}h,y_{n}+{\frac {1}{2}}h\cdot A_{1}){\text{,}}\\A_{3}&=f(t_{n}+{\frac {1}{2}}h,y_{n}+{\frac {1}{2}}h\cdot A_{2}){\text{, and}}\\A_{4}&=f(t_{n}+h,y_{n}+h\cdot A_{3}){\text{.}}\end{aligned}}}$
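The three increment functions can be sketched directly from the formulas above. The following Python is a minimal illustration; the function and variable names are our own, not part of any standard library.

```python
import math

def euler(t, y, h, f):
    # Euler: A = f(t_n, y_n)
    return f(t, y)

def modified_euler(t, y, h, f):
    # Modified Euler: A = (A1 + A2) / 2
    a1 = f(t, y)
    a2 = f(t + h, y + h * a1)
    return 0.5 * (a1 + a2)

def runge_kutta4(t, y, h, f):
    # Classical Runge-Kutta: A = (A1 + 2*A2 + 2*A3 + A4) / 6
    a1 = f(t, y)
    a2 = f(t + 0.5 * h, y + 0.5 * h * a1)
    a3 = f(t + 0.5 * h, y + 0.5 * h * a2)
    a4 = f(t + h, y + h * a3)
    return (a1 + 2 * a2 + 2 * a3 + a4) / 6.0

def step(t, y, h, f, A):
    # One step of y_{n+1} = y_n + h * A(t_n, y_n, h, f).
    return y + h * A(t, y, h, f)
```

For the test problem ${\displaystyle y'=y}$, ${\displaystyle y(0)=1}$, a single step with ${\displaystyle h=0.1}$ already shows the quality ordering of the three methods against the exact value ${\displaystyle e^{0.1}}$.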

## Why do we care about truncation errors?

For one-step methods, the local truncation error measures how much the exact solution of the differential equation fails to satisfy the difference equation defining the method. The local truncation error for multistep methods is defined similarly.

A one-step method with local truncation error ${\displaystyle \tau _{n}(h)}$ at the nth step:

• This method is consistent with the differential equation it approximates if
${\displaystyle \lim _{h\to 0}\max _{1\leq n\leq N}|\tau _{n}(h)|=0.}$

Note that here we assume that the approximation values are exactly equal to the true solution at every step.

• The method is convergent with respect to the differential equation it approximates if
${\displaystyle \lim _{h\to 0}\max _{1\leq n\leq N}|y_{n}-y(t_{n})|=0,}$

where ${\displaystyle y_{n}}$ denotes the approximation obtained from the method at the nth step, and ${\displaystyle y(t_{n})}$ the exact value of the solution of the differential equation.
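Convergence can be illustrated concretely. The sketch below (our own example setup) applies Euler's method to the test problem ${\displaystyle y'=y}$, ${\displaystyle y(0)=1}$, and shows that the maximum error over ${\displaystyle [0,1]}$ shrinks as ${\displaystyle h\to 0}$.

```python
import math

def euler_max_error(N):
    # Euler's method for y' = y, y(0) = 1 on [0, 1] with N steps of size h = 1/N.
    h, t, y = 1.0 / N, 0.0, 1.0
    max_err = 0.0
    for _ in range(N):
        y += h * y          # f(t, y) = y
        t += h
        max_err = max(max_err, abs(y - math.exp(t)))
    return max_err

# The maximum error roughly halves each time h is halved (a first-order method).
errors = [euler_max_error(N) for N in (10, 20, 40, 80)]
```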

## How do we avoid truncation errors?

Truncation error cannot be avoided entirely, but it can be controlled. The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases. In practice, the step size is chosen to balance these two sources of error, or a higher-order method is used so that a larger step size achieves the same accuracy.
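The interplay between the two error sources can be seen with a forward-difference approximation of a derivative, a deliberately simple example of our own: as ${\displaystyle h}$ shrinks, the truncation error falls until roundoff takes over.

```python
import math

# Forward difference (sin(x+h) - sin(x)) / h approximating cos(x) at x = 1.
# Truncation error is O(h); roundoff error grows like machine-epsilon / h.
x, exact = 1.0, math.cos(1.0)
err = {}
for k in (1, 8, 16):
    h = 10.0 ** (-k)
    err[k] = abs((math.sin(x + h) - math.sin(x)) / h - exact)
# err[1]: truncation dominates; err[16]: roundoff dominates;
# err[8] sits near the sweet spot where the two are balanced.
```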

## Relationship Between Local Truncation Error and Global Truncation Error

The global truncation error (GTE) is one order lower than the local truncation error (LTE).
That is,

if ${\displaystyle \tau _{n}(h)=O(h^{p+1})}$, then ${\displaystyle e_{n}(h)=O(h^{p})}$.
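This relationship can be checked numerically. The sketch below (our own test setup) measures both errors for Euler's method, for which ${\displaystyle p=1}$, on the problem ${\displaystyle y'=y}$, ${\displaystyle y(0)=1}$.

```python
import math

def euler_errors(N):
    # Euler's method on y' = y, y(0) = 1 over [0, 1] with N steps of size h = 1/N.
    h, t, y = 1.0 / N, 0.0, 1.0
    lte = abs((1.0 + h) - math.exp(h))   # error of one step started on the exact solution
    for _ in range(N):
        y += h * y
        t += h
    gte = abs(y - math.exp(1.0))         # global error at t = 1
    return lte, gte

lte1, gte1 = euler_errors(100)
lte2, gte2 = euler_errors(200)
# Halving h cuts the LTE by ~4 (order h^2) but the GTE only by ~2 (order h^1).
```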

### Proof

We assume perfect knowledge of the true solution at the initial time step.
Let ${\displaystyle {\tilde {y}}(t)}$ be the exact solution of

${\displaystyle {\Big \{}{\begin{aligned}y'&=f(t,y){\text{, and}}\\y(t_{n})&=y_{n}\,.\end{aligned}}}$

The local truncation error at step ${\displaystyle n+1}$ is defined as ${\displaystyle \tau _{n+1}(h)={\tilde {y}}(t_{n+1})-y_{n+1}.}$ The global errors are defined as

${\displaystyle {\begin{aligned}e_{0}(h)&=0\\e_{n+1}(h)&=y(t_{n+1})-y_{n+1}\\&=[y(t_{n+1})-{\tilde {y}}(t_{n+1})]+[{\tilde {y}}(t_{n+1})-y_{n+1}]\,.\end{aligned}}}$

By the triangle inequality,

${\displaystyle |e_{n+1}(h)|\leq |y(t_{n+1})-{\tilde {y}}(t_{n+1})|+|{\tilde {y}}(t_{n+1})-y_{n+1}|.}$

(1)

The second term on the right-hand side of (1) is the truncation error ${\displaystyle \tau _{n+1}(h)}$. Here we assume

${\displaystyle \tau _{n+1}(h)={\tilde {y}}(t_{n+1})-y_{n+1}=O(h^{p+1}).}$

(2)

Thus,

${\displaystyle |{\tilde {y}}(t_{n+1})-y_{n+1}|\leq Ch^{p+1}\,.}$

(3)

The first term on the right-hand side of (1) is the difference between two exact solutions.

Both ${\displaystyle y(t)}$ and ${\displaystyle {\tilde {y}}(t)}$ satisfy ${\displaystyle y'=f(t,y)}$, so

${\displaystyle {\Big \{}{\begin{aligned}y'(t)&=f(t,y){\text{, and}}\\{\tilde {y}}'(t)&=f(t,{\tilde {y}})\,.\end{aligned}}}$

Subtracting one equation from the other gives

${\displaystyle {\begin{aligned}y'(t)-{\tilde {y}}'(t)&=f(t,y)-f(t,{\tilde {y}}),\quad {\text{so}}\\|y'(t)-{\tilde {y}}'(t)|&=|f(t,y)-f(t,{\tilde {y}})|.\end{aligned}}}$

Since ${\displaystyle f}$ is Lipschitz continuous in ${\displaystyle y}$ with Lipschitz constant ${\displaystyle L}$,

${\displaystyle |y'(t)-{\tilde {y}}'(t)|\leq L|y(t)-{\tilde {y}}(t)|}$ for ${\displaystyle t>t_{n}.}$

By Grönwall's inequality, it follows that

${\displaystyle {\begin{aligned}|y(t)-{\tilde {y}}(t)|&\leq |y(t_{n})-{\tilde {y}}(t_{n})|\exp \left(\int _{t_{n}}^{t}L\,ds\right)\\&=e^{L(t-t_{n})}|y(t_{n})-{\tilde {y}}(t_{n})|,\end{aligned}}}$

where ${\displaystyle t\in [t_{n},t_{n+1}].}$

Setting ${\displaystyle t=t_{n+1}}$, we have

${\displaystyle {\begin{aligned}|y(t_{n+1})-{\tilde {y}}(t_{n+1})|&\leq e^{L(t_{n+1}-t_{n})}|y(t_{n})-{\tilde {y}}(t_{n})|\\&=e^{Lh}|y(t_{n})-{\tilde {y}}(t_{n})|\\&=e^{Lh}|y(t_{n})-y_{n}|\\&=e^{Lh}|e_{n}(h)|\,.\end{aligned}}}$

(4)

Plugging equations (3) and (4) into (1), we get

${\displaystyle |e_{n+1}(h)|\leq e^{Lh}|e_{n}(h)|+Ch^{p+1}\,.}$

(5)

Note that equation (5) is a recursive inequality valid for all values of ${\displaystyle n}$.

Next, we use it to estimate ${\displaystyle |e_{N}(h)|,}$ where we assume ${\displaystyle Nh=T}$.

Let ${\displaystyle \alpha =e^{Lh}.}$ Dividing both sides of (5) by ${\displaystyle \alpha ^{n+1},}$ we get

${\displaystyle {\frac {|e_{n+1}(h)|}{\alpha ^{n+1}}}\leq {\frac {|e_{n}(h)|}{\alpha ^{n}}}+Ch^{p+1}{\frac {1}{\alpha ^{n+1}}}\,.}$

Writing out this inequality for ${\displaystyle n=0,1,2,\ldots ,N-1}$ gives

${\displaystyle {\frac {|e_{1}(h)|}{\alpha ^{1}}}\leq {\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}{\frac {1}{\alpha ^{1}}}}$,
${\displaystyle {\frac {|e_{2}(h)|}{\alpha ^{2}}}\leq {\frac {|e_{1}(h)|}{\alpha ^{1}}}+Ch^{p+1}{\frac {1}{\alpha ^{2}}}}$,
${\displaystyle \vdots }$

and

${\displaystyle {\frac {|e_{N}(h)|}{\alpha ^{N}}}\leq {\frac {|e_{N-1}(h)|}{\alpha ^{N-1}}}+Ch^{p+1}{\frac {1}{\alpha ^{N}}}\,.}$

Chaining these inequalities together, we obtain

${\displaystyle {\begin{aligned}{\frac {|e_{N}(h)|}{\alpha ^{N}}}&\leq {\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left({\frac {1}{\alpha ^{1}}}+{\frac {1}{\alpha ^{2}}}+\cdots +{\frac {1}{\alpha ^{N}}}\right)\\&={\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}(1+\alpha +\alpha ^{2}+\cdots +\alpha ^{N-1})\right]\\&={\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right)\right]\,.\end{aligned}}}$

Since ${\displaystyle e_{0}(h)=0,}$ we have

${\displaystyle {\begin{aligned}|e_{N}(h)|&\leq Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right)\right]\\&\leq Ch^{p+1}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right){\text{, since }}\alpha ^{N}>1\,.\end{aligned}}}$
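The geometric-series sum above can be sanity-checked by iterating recursion (5) directly in its worst case (equality); in this sketch the constants ${\displaystyle L}$, ${\displaystyle C}$, ${\displaystyle p}$, and ${\displaystyle T}$ are arbitrary choices of ours.

```python
import math

L, C, p, T, N = 2.0, 1.0, 1, 1.0, 1000   # arbitrary illustrative constants
h = T / N
alpha = math.exp(L * h)

# Iterate recursion (5) with equality, starting from e_0 = 0.
b = 0.0
for _ in range(N):
    b = alpha * b + C * h ** (p + 1)

# Closed form obtained from the geometric-series sum.
geometric = C * h ** (p + 1) * (alpha ** N - 1) / (alpha - 1)
```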

Using the inequality ${\displaystyle e^{x}-1\geq x,}$ we get

${\displaystyle {\begin{aligned}\alpha -1&=e^{Lh}-1\geq Lh\quad {\text{and}}\\\alpha ^{N}-1&=e^{LNh}-1=e^{LT}-1\,.\end{aligned}}}$

Therefore,

${\displaystyle |e_{N}(h)|\leq Ch^{p+1}({\frac {e^{LT}-1}{Lh}})=C({\frac {e^{LT}-1}{L}})h^{p}.}$

That is,

${\displaystyle |e_{N}(h)|=O(h^{p}).}$

(6)

From equations (2) and (6),

${\displaystyle \tau _{n+1}(h)=O(h^{p+1})\quad {\text{and}}}$
${\displaystyle |e_{N}(h)|=O(h^{p}),}$

so we can conclude that the global truncation error is one order lower than the local truncation error.
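The bound just proved can also be checked against an actual computation. The sketch below (our own test problem and constants) runs Euler's method, for which ${\displaystyle p=1}$, and confirms that the global error stays below ${\displaystyle C(e^{LT}-1)h^{p}/L}$.

```python
import math

# Euler's method (p = 1) on y' = y, y(0) = 1, t in [0, 1]; here L = 1, and since
# |y''| <= e on [0, 1], the one-step error satisfies |tau| <= (e/2) h^2, i.e. C = e/2.
def euler_global_error(N):
    h, y = 1.0 / N, 1.0
    for _ in range(N):
        y += h * y
    return h, abs(y - math.exp(1.0))

L, C = 1.0, math.e / 2
checks = []
for N in (10, 100, 1000):
    h, err = euler_global_error(N)
    bound = C * (math.exp(L) - 1) / L * h    # C (e^{LT} - 1)/L * h^p with T = 1, p = 1
    checks.append(err <= bound)
```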

## Graph

In this graph, ${\displaystyle c=a+{\frac {b-a}{2}}.}$ The red line is the true value, the green line is the first step, and the blue line is the second step.

${\displaystyle {\overline {AB}}}$ is the local truncation error at step 1, ${\displaystyle \tau _{1}=e_{1}}$, equal to ${\displaystyle {\overline {CD}}.}$
${\displaystyle {\overline {DE}}}$ is the separation that occurs because, after the first step, we are following the wrong solution of the ODE.
${\displaystyle {\overline {EF}}}$ is ${\displaystyle \tau _{2}.}$

Thus, ${\displaystyle {\overline {CF}}}$ is the global truncation error at step 2, ${\displaystyle e_{2}.}$

We can see from this,

${\displaystyle e_{n+1}=e_{n}+h[A(t_{n},y(t_{n}),h,f)-A(t_{n},y_{n},h,f)]+\tau _{n+1}.}$

Then,

${\displaystyle e_{2}=e_{1}+h[A(t_{1},y(t_{1}),h,f)-A(t_{1},y_{1},h,f)]+\tau _{2},}$
${\displaystyle e_{2}={\overline {AB}}+{\overline {DE}}+{\overline {EF}}.}$

## Exercise

Find the order of the two-step Adams-Bashforth method by determining the order of its local truncation error.
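The following sketch does not replace the derivation asked for, but it lets you check your answer empirically. It applies the two-step Adams-Bashforth formula ${\displaystyle y_{n+1}=y_{n}+h\left({\frac {3}{2}}f_{n}-{\frac {1}{2}}f_{n-1}\right)}$ to the test problem ${\displaystyle y'=y}$ (our own choice of example).

```python
import math

def ab2_error(N):
    # Two-step Adams-Bashforth on y' = y, y(0) = 1 over [0, 1]:
    #   y_{n+1} = y_n + h * (3/2 * f(t_n, y_n) - 1/2 * f(t_{n-1}, y_{n-1})).
    # The start-up value y_1 is taken from the exact solution, so only the
    # Adams-Bashforth error is measured.
    f = lambda t, y: y
    h = 1.0 / N
    y_prev, y = 1.0, math.exp(h)     # y_0 and (exact) y_1
    for n in range(1, N):
        t, t_prev = n * h, (n - 1) * h
        y_prev, y = y, y + h * (1.5 * f(t, y) - 0.5 * f(t_prev, y_prev))
    return abs(y - math.exp(1.0))

e1, e2 = ab2_error(100), ab2_error(200)
# Halving h divides the global error by ~4, consistent with a second-order method.
```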
