# Numerical Analysis/Truncation Errors

## Definition

Truncation errors are defined as the errors that result from using an approximation in place of an exact mathematical procedure.

There are two ways to measure the errors:

1. Local Truncation Error (LTE): the error, $\tau$ , introduced by the approximation method at each step.
2. Global Truncation Error (GTE): the error, $e$ , defined as the absolute difference between the true value and the approximate value; it accumulates the effect of the local errors committed at every step.

Assume that our methods take the form:

Let $y_{n+1}$ and $y_{n}$ be approximate values. Then $y_{n+1}=y_{n}+h\cdot A(t_{n},y_{n},h,f)$ , where $h$ is the time step, equal to $t_{n+1}-t_{n}$ , and $A$ is an increment function: some algorithm for approximating the average slope ${\frac {y_{n+1}-y_{n}}{h}}$ . Three important examples of $A$ are:

Euler's method:

$A(t_{n},y_{n},h,f)=f(t_{n},y_{n}).$

Modified Euler's method:

$A(t_{n},y_{n},h,f)={\frac {1}{2}}(A_{1}+A_{2}),$ where

${\begin{aligned}A_{1}&=f(t_{n},y_{n}){\text{, and}}\\A_{2}&=f(t_{n}+h,y_{n}+h\cdot A_{1}){\text{.}}\end{aligned}}$

Runge-Kutta method:

$A(t_{n},y_{n},h,f)={\frac {1}{6}}(A_{1}+2A_{2}+2A_{3}+A_{4}),$ where

${\begin{aligned}A_{1}&=f(t_{n},y_{n}){\text{,}}\\A_{2}&=f(t_{n}+{\tfrac {1}{2}}h,y_{n}+{\tfrac {1}{2}}h\cdot A_{1}){\text{,}}\\A_{3}&=f(t_{n}+{\tfrac {1}{2}}h,y_{n}+{\tfrac {1}{2}}h\cdot A_{2}){\text{, and}}\\A_{4}&=f(t_{n}+h,y_{n}+h\cdot A_{3}){\text{.}}\end{aligned}}$

## Why do we care about truncation errors?

In the case of one-step methods, the local truncation error gives us a measure of how the exact solution of the differential equation fails to satisfy the difference equation. The local truncation error for multistep methods is defined similarly.

A one-step method with local truncation error $\tau _{n}(h)$ at the nth step:

• This method is consistent with the differential equation it approximates if
$\lim _{h\to 0}\max _{1\leq n\leq N}|\tau _{n}(h)|=0.$ Note that here we assume that the approximation values are exactly equal to the true solution at every step.

• The method is convergent with respect to the differential equation it approximates if
$\lim _{h\to 0}\max _{1\leq n\leq N}|y_{n}-y(t_{n})|=0,$ where $y_{n}$ denotes the approximation obtained from the method at the nth step, and $y(t_{n})$ the exact value of the solution of the differential equation.
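To make the convergence definition concrete, the one-step framework can be sketched in a few lines of Python. The test problem $y'=y$ , $y(0)=1$ (exact solution $e^{t}$ ) and the function names are illustrative assumptions, not part of the text above:

```python
import math

def f(t, y):
    # Right-hand side of the illustrative test ODE y' = y.
    return y

def A_euler(t, y, h, f):
    # Euler's increment function: the slope at the left endpoint.
    return f(t, y)

def max_global_error(A, h, T=1.0):
    # Run y_{n+1} = y_n + h * A(t_n, y_n, h, f) on [0, T] and
    # return max_n |y_n - y(t_n)|, where y(t) = exp(t) is exact.
    t, y = 0.0, 1.0
    worst = 0.0
    for _ in range(round(T / h)):
        y += h * A(t, y, h, f)
        t += h
        worst = max(worst, abs(y - math.exp(t)))
    return worst

# Convergence: the worst error over the grid shrinks as h -> 0.
errors = [max_global_error(A_euler, 2.0 ** -k) for k in range(3, 9)]
```

Each halving of $h$ roughly halves the worst-case error for Euler's method, consistent with first-order convergence.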

## How do we avoid truncation errors?

The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases. Decreasing $h$ therefore reduces the truncation error only up to the point where accumulated roundoff begins to dominate; in practice one chooses a step size that balances the two, or uses a higher-order method so that a larger step size achieves the same accuracy.
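One way to keep the truncation error small without driving $h$ into the regime where roundoff dominates is to use a higher-order increment function. The sketch below (Python; the test problem $y'=y$ , $y(0)=1$ and all names are illustrative assumptions) compares Euler and classical Runge-Kutta at the same step size:

```python
import math

def f(t, y):
    # Illustrative test ODE y' = y with exact solution exp(t).
    return y

def solve(A, h, T=1.0):
    # One-step method y_{n+1} = y_n + h * A(t_n, y_n, h, f);
    # return the absolute error at t = T.
    t, y = 0.0, 1.0
    for _ in range(round(T / h)):
        y += h * A(t, y, h, f)
        t += h
    return abs(y - math.exp(T))

def A_euler(t, y, h, f):
    # First-order increment: slope at the left endpoint.
    return f(t, y)

def A_rk4(t, y, h, f):
    # Classical Runge-Kutta: weighted average of four slope samples.
    A1 = f(t, y)
    A2 = f(t + 0.5 * h, y + 0.5 * h * A1)
    A3 = f(t + 0.5 * h, y + 0.5 * h * A2)
    A4 = f(t + h, y + h * A3)
    return (A1 + 2 * A2 + 2 * A3 + A4) / 6.0

# At the same step size, the fourth-order method is far more accurate.
euler_err = solve(A_euler, 0.1)
rk4_err = solve(A_rk4, 0.1)
```

At $h=0.1$ the fourth-order method is already several orders of magnitude more accurate, so a much coarser grid suffices for a given error tolerance.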

## Relationship Between Local Truncation Error and Global Truncation Error

The global truncation error (GTE) is one order lower than the local truncation error (LTE).
That is,

if $\tau _{n}(h)=O(h^{p+1})$ , then $e_{n}(h)=O(h^{p})$ .
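For example, Euler's method has $p=1$ : its local truncation error is $O(h^{2})$ and its global error is $O(h)$ . This can be checked numerically with a short Python sketch (the test problem $y'=y$ , $y(0)=1$ is an illustrative assumption):

```python
import math

def lte(h):
    # Local truncation error of one Euler step for y' = y starting
    # from the exact value y(0) = 1: |exp(h) - (1 + h)| ~ h^2 / 2.
    return abs(math.exp(h) - (1.0 + h))

def gte(h, T=1.0):
    # Global error of Euler's method for y' = y at t = T.
    y = 1.0
    for _ in range(round(T / h)):
        y += h * y
    return abs(math.exp(T) - y)

# Halving h should divide the LTE by ~4 (O(h^2))
# and the GTE by ~2 (O(h)).
lte_ratio = lte(0.01) / lte(0.005)
gte_ratio = gte(0.01) / gte(0.005)
```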

### Proof

We assume perfect knowledge of the true solution at the initial time step.
Let ${\tilde {y}}(t)$ be the exact solution of

${\begin{cases}y'=f(t,y){\text{, and}}\\y(t_{n})=y_{n}\,.\end{cases}}$

The truncation error at step $n+1$ is defined as

$\tau _{n+1}(h)={\tilde {y}}(t_{n+1})-y_{n+1}.$

Also, the global errors are defined as

${\begin{aligned}e_{0}(h)&=0{\text{, and}}\\e_{n+1}(h)&=y(t_{n+1})-y_{n+1}\\&=[y(t_{n+1})-{\tilde {y}}(t_{n+1})]+[{\tilde {y}}(t_{n+1})-y_{n+1}]\,.\end{aligned}}$

By the triangle inequality, we obtain

$|e_{n+1}(h)|\leq |y(t_{n+1})-{\tilde {y}}(t_{n+1})|+|{\tilde {y}}(t_{n+1})-y_{n+1}|.$ (1)

The second term on the right-hand side of (1) is the truncation error $\tau _{n+1}(h)$ . Here we assume

$\tau _{n+1}(h)={\tilde {y}}(t_{n+1})-y_{n+1}=O(h^{p+1}).$ (2)

Thus,

$|{\tilde {y}}(t_{n+1})-y_{n+1}|\leq Ch^{p+1}\,.$ (3)

The first term on the right-hand side of (1) is the difference between two exact solutions.

Both $y(t)$ and ${\tilde {y}}(t)$ satisfy $y'=f(t,y)$ , so

${\begin{cases}y'(t)=f(t,y){\text{, and}}\\{\tilde {y}}'(t)=f(t,{\tilde {y}})\,.\end{cases}}$

Subtracting one equation from the other, we get

${\begin{aligned}y'(t)-{\tilde {y}}'(t)&=f(t,y)-f(t,{\tilde {y}})\,,\quad {\text{so}}\\|y'(t)-{\tilde {y}}'(t)|&=|f(t,y)-f(t,{\tilde {y}})|.\end{aligned}}$

Since $f$ is Lipschitz continuous in $y$ with Lipschitz constant $L$ , we have

$|y'(t)-{\tilde {y}}'(t)|\leq L|y(t)-{\tilde {y}}(t)|\,,\quad {\text{where }}t>t_{n}.$

By Grönwall's inequality, this differential inequality yields

${\begin{aligned}|y(t)-{\tilde {y}}(t)|&\leq |y(t_{n})-{\tilde {y}}(t_{n})|\exp \left(\int _{t_{n}}^{t}L\,ds\right)\\&=e^{L(t-t_{n})}|y(t_{n})-{\tilde {y}}(t_{n})|\,,\end{aligned}}$

where $t\in [t_{n},t_{n+1}]$ . Setting $t=t_{n+1}$ , we have

${\begin{aligned}|y(t_{n+1})-{\tilde {y}}(t_{n+1})|&\leq e^{L(t_{n+1}-t_{n})}|y(t_{n})-{\tilde {y}}(t_{n})|\\&=e^{Lh}|y(t_{n})-{\tilde {y}}(t_{n})|\\&=e^{Lh}|y(t_{n})-y_{n}|\qquad {\text{since }}{\tilde {y}}(t_{n})=y_{n}\\&=e^{Lh}|e_{n}(h)|\,.\end{aligned}}$ (4)

Plugging equations (3) and (4) into (1), we get

$|e_{n+1}(h)|\leq e^{Lh}|e_{n}(h)|+Ch^{p+1}\,.$ (5)

Note that equation (5) is a recursive inequality valid for all values of $n$ .

Next, we use it to estimate $|e_{N}(h)|,$ where we assume $Nh=T$ .

Let $\alpha =e^{Lh}.$ Dividing both sides of (5) by $\alpha ^{n+1},$ we get

${\frac {|e_{n+1}(h)|}{\alpha ^{n+1}}}\leq {\frac {|e_{n}(h)|}{\alpha ^{n}}}+Ch^{p+1}{\frac {1}{\alpha ^{n+1}}}\,.$

Writing this out for $n=0,1,2,\ldots ,N-1$ :

${\frac {|e_{1}(h)|}{\alpha ^{1}}}\leq {\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}{\frac {1}{\alpha ^{1}}}\,,$

${\frac {|e_{2}(h)|}{\alpha ^{2}}}\leq {\frac {|e_{1}(h)|}{\alpha ^{1}}}+Ch^{p+1}{\frac {1}{\alpha ^{2}}}\,,$

$\vdots$

and

${\frac {|e_{N}(h)|}{\alpha ^{N}}}\leq {\frac {|e_{N-1}(h)|}{\alpha ^{N-1}}}+Ch^{p+1}{\frac {1}{\alpha ^{N}}}\,.$

Adding these inequalities and cancelling the intermediate terms, we obtain

${\begin{aligned}{\frac {|e_{N}(h)|}{\alpha ^{N}}}&\leq {\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left({\frac {1}{\alpha ^{1}}}+{\frac {1}{\alpha ^{2}}}+\cdots +{\frac {1}{\alpha ^{N}}}\right)\\&={\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}(1+\alpha +\alpha ^{2}+\cdots +\alpha ^{N-1})\right]\\&={\frac {|e_{0}(h)|}{\alpha ^{0}}}+Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right)\right]\,.\end{aligned}}$

Since $e_{0}(h)=0,$ we have

${\begin{aligned}|e_{N}(h)|&\leq \alpha ^{N}\cdot Ch^{p+1}\left[{\frac {1}{\alpha ^{N}}}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right)\right]\\&=Ch^{p+1}\left({\frac {\alpha ^{N}-1}{\alpha -1}}\right)\,.\end{aligned}}$

Using the inequality $e^{x}-1\geq x$ for $x\geq 0,$ we get

${\begin{aligned}\alpha -1&=e^{Lh}-1\geq Lh\,,\quad {\text{and}}\\\alpha ^{N}-1&=e^{LNh}-1=e^{LT}-1\,.\end{aligned}}$

Therefore, we obtain

$|e_{N}(h)|\leq Ch^{p+1}\left({\frac {e^{LT}-1}{Lh}}\right)=C\left({\frac {e^{LT}-1}{L}}\right)h^{p}.$

That is,

$|e_{N}(h)|=O(h^{p}).$ (6)

From equations (2) and (6),

$\tau _{n+1}(h)=O(h^{p+1})\quad {\text{and}}\quad |e_{N}(h)|=O(h^{p}),$

so we can conclude that the global truncation error is one order lower than the local truncation error.
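The recursion (5) and the closed-form bound can also be checked numerically. The following Python sketch iterates $|e_{n+1}|\leq e^{Lh}|e_{n}|+Ch^{p+1}$ at equality and confirms that the result stays below $C\,(e^{LT}-1)h^{p}/L$ ; the values of $L$ , $C$ , $p$ , $T$ , and $h$ are illustrative assumptions:

```python
import math

# Illustrative constants: Lipschitz constant L, LTE constant C,
# method order p, final time T = N * h.
L, C, p = 1.0, 1.0, 1
T, h = 1.0, 0.01
N = round(T / h)

# Iterate the recursion (5) starting from e_0(h) = 0.
E = 0.0
for _ in range(N):
    E = math.exp(L * h) * E + C * h ** (p + 1)

# Closed-form bound derived in the proof: C * (e^{LT} - 1) / L * h^p.
bound = C * (math.exp(L * T) - 1) / L * h ** p
```

Since $e^{Lh}-1\geq Lh$ , the iterated value never exceeds the bound; with these constants the two agree to within about one percent.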

## Graph

In this graph, $c=a+{\frac {b-a}{2}}.$ The red line is the true value, the green line is the first step, and the blue line is the second step.

${\overline {AB}}$ is the local truncation error at step 1, $\tau _{1}=e_{1}$ , which equals ${\overline {CD}}.$ ${\overline {DE}}$ is the separation that arises because, after the first step, we are following the wrong solution of the ODE.
${\overline {EF}}$ is $\tau _{2}.$ Thus, ${\overline {CF}}$ is the global truncation error at step 2, $e_{2}.$ From this we can see that

$e_{n+1}=e_{n}+h[A(t_{n},y(t_{n}),h,f)-A(t_{n},y_{n},h,f)]+\tau _{n+1}.$

Then,

$e_{2}=e_{1}+h[A(t_{1},y(t_{1}),h,f)-A(t_{1},y_{1},h,f)]+\tau _{2}\,,$

so

$e_{2}={\overline {AB}}+{\overline {DE}}+{\overline {EF}}.$

## Exercise

Find the order of the two-step Adams–Bashforth method. You will need to determine the order of its truncation error.