This page is about the truncation error of numerical methods for ordinary differential equations (ODEs).
Truncation errors are the errors that result from using an approximation in place of an exact mathematical procedure.
There are two ways to measure these errors:
- Local Truncation Error (LTE): the error, $\tau_n(h)$, introduced by the approximation method at each step.
- Global Truncation Error (GTE): the error, $e_n(h)$, which is the absolute difference between the correct value and the approximate value.
Assume that our methods take the form:
Let $y_n$ and $y_{n+1}$ be approximate values. Then

$$y_{n+1} = y_n + h \cdot A(t_n, y_n, h, f),$$

where $h$ is the time step, equal to $t_{n+1} - t_n$, and $A$ is an increment function: some algorithm for approximating the average slope $\dfrac{y(t_{n+1}) - y(t_n)}{h}$.
Three important examples of $A$ are:
- Euler's method:

$$A(t_n, y_n, h, f) = f(t_n, y_n).$$

- Modified Euler's method:

$$A(t_n, y_n, h, f) = \frac{1}{2}(A_1 + A_2),$$

where

$$\begin{aligned} A_1 &= f(t_n, y_n)\text{, and} \\ A_2 &= f(t_n + h,\, y_n + h \cdot A_1). \end{aligned}$$

- Runge–Kutta method:

$$A(t_n, y_n, h, f) = \frac{1}{6}(A_1 + 2A_2 + 2A_3 + A_4),$$

where

$$\begin{aligned} A_1 &= f(t_n, y_n), \\ A_2 &= f(t_n + \tfrac{1}{2}h,\, y_n + \tfrac{1}{2}h \cdot A_1), \\ A_3 &= f(t_n + \tfrac{1}{2}h,\, y_n + \tfrac{1}{2}h \cdot A_2)\text{, and} \\ A_4 &= f(t_n + h,\, y_n + h \cdot A_3). \end{aligned}$$
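As a concrete illustration, the three increment functions above can be sketched in Python. The function names and the simple `solve` driver are illustrative choices, not part of any particular library:

```python
import math

# Sketch of the three increment functions A(t_n, y_n, h, f) defined above,
# plus the one-step update y_{n+1} = y_n + h * A(t_n, y_n, h, f).
# Function names are illustrative, not from any particular library.

def A_euler(t, y, h, f):
    return f(t, y)

def A_modified_euler(t, y, h, f):
    A1 = f(t, y)
    A2 = f(t + h, y + h * A1)
    return (A1 + A2) / 2

def A_rk4(t, y, h, f):
    A1 = f(t, y)
    A2 = f(t + h / 2, y + h / 2 * A1)
    A3 = f(t + h / 2, y + h / 2 * A2)
    A4 = f(t + h, y + h * A3)
    return (A1 + 2 * A2 + 2 * A3 + A4) / 6

def solve(A, f, t0, y0, h, n_steps):
    """Repeatedly apply y_{n+1} = y_n + h * A(t_n, y_n, h, f)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * A(t, y, h, f)
        t = t + h
    return y
```

For example, on $y' = y$, $y(0) = 1$, `solve(A_rk4, lambda t, y: y, 0.0, 1.0, 0.1, 10)` approximates $e$ far more accurately than Euler's method with the same step size.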
In the case of one-step methods, the local truncation error gives us a measure of how the exact solution of the differential equation fails to satisfy the difference equation of the method. The local truncation error of multistep methods is defined similarly to that of one-step methods.
Consider a one-step method with local truncation error $\tau_n(h)$ at the $n$th step:
- The method is consistent with the differential equation it approximates if

$$\lim_{h \to 0} \max_{1 \le n \le N} |\tau_n(h)| = 0.$$
Note that here we assume that the approximation values are exactly equal to the true solution at every step.
- The method is convergent with respect to the differential equation it approximates if

$$\lim_{h \to 0} \max_{1 \le n \le N} |y_n - y(t_n)| = 0,$$

where $y_n$ denotes the approximation obtained from the method at the $n$th step, and $y(t_n)$ the exact value of the solution of the differential equation.
The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases.
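The convergence definition can be checked empirically by shrinking the step size and watching the global error at a fixed final time. A minimal sketch for Euler's method, assuming the test problem $y' = y$, $y(0) = 1$ (an illustrative choice, not from the text):

```python
import math

# Empirical convergence check for Euler's method on y' = y, y(0) = 1,
# whose exact solution at t = 1 is e. The test problem is an illustrative
# choice; the scheme is the Euler method defined earlier.

def euler_solve(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

def global_error(h):
    n_steps = round(1.0 / h)
    return abs(euler_solve(lambda t, y: y, 0.0, 1.0, h, n_steps) - math.e)

# For a first-order method, halving h should roughly halve the error,
# so the observed order is close to 1.
order = math.log2(global_error(0.01) / global_error(0.005))
```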
Relationship Between Local Truncation Error and Global Truncation Error
The global truncation error (GTE) is one order lower than the local truncation error (LTE).
That is,
- if $\tau_{n+1}(h) = O(h^{p+1})$, then $|e_N(h)| = O(h^p)$.
We assume perfect knowledge of the true solution at the initial time step.
Let $\tilde{y}(t)$ be the exact solution of

$$\left\{ \begin{aligned} \tilde{y}' &= f(t, \tilde{y})\text{, and} \\ \tilde{y}(t_n) &= y_n. \end{aligned} \right.$$
The truncation error at step $n+1$ is defined as

$$\tau_{n+1}(h) = \tilde{y}(t_{n+1}) - y_{n+1}.$$
Also, the global errors are defined as

$$\begin{aligned} e_0(h) &= 0\text{, and} \\ e_{n+1}(h) &= y(t_{n+1}) - y_{n+1} \\ &= [y(t_{n+1}) - \tilde{y}(t_{n+1})] + [\tilde{y}(t_{n+1}) - y_{n+1}]. \end{aligned}$$
By the triangle inequality, we obtain

$$|e_{n+1}(h)| \le |y(t_{n+1}) - \tilde{y}(t_{n+1})| + |\tilde{y}(t_{n+1}) - y_{n+1}|. \qquad (1)$$
The second term on the right-hand side of (1) is the truncation error $\tau_{n+1}(h)$. Here we assume

$$\tau_{n+1}(h) = \tilde{y}(t_{n+1}) - y_{n+1} = O(h^{p+1}). \qquad (2)$$
Thus, for some constant $C > 0$,

$$|\tilde{y}(t_{n+1}) - y_{n+1}| \le C h^{p+1}. \qquad (3)$$
The first term on the right-hand side of (1) is the difference between two exact solutions. Both $y$ and $\tilde{y}$ satisfy the same ODE, so

$$\left\{ \begin{aligned} y'(t) &= f(t, y)\text{, and} \\ \tilde{y}'(t) &= f(t, \tilde{y}). \end{aligned} \right.$$

Subtracting one equation from the other gives

$$y'(t) - \tilde{y}'(t) = f(t, y) - f(t, \tilde{y})\text{, so} \quad |y'(t) - \tilde{y}'(t)| = |f(t, y) - f(t, \tilde{y})|.$$
Since $f$ is Lipschitz continuous in $y$ with Lipschitz constant $L$,

$$|f(t, y) - f(t, \tilde{y})| \le L\,|y(t) - \tilde{y}(t)|,$$

where $t > t_n$.
By Gronwall's inequality,

$$\begin{aligned} |y(t) - \tilde{y}(t)| &\le |y(t_n) - \tilde{y}(t_n)| \exp\left( \int_{t_n}^{t} L \, ds \right) \\ &= e^{L(t - t_n)} |y(t_n) - \tilde{y}(t_n)|, \end{aligned}$$

where $L$ is the Lipschitz constant of $f$. Setting $t = t_{n+1}$, we have

$$\begin{aligned} |y(t_{n+1}) - \tilde{y}(t_{n+1})| &\le e^{L(t_{n+1} - t_n)} |y(t_n) - \tilde{y}(t_n)| \\ &= e^{Lh} |y(t_n) - \tilde{y}(t_n)| \\ &= e^{Lh} |y(t_n) - y_n| \\ &= e^{Lh} |e_n(h)|. \end{aligned} \qquad (4)$$
Plugging equations (3) and (4) into (1), we get

$$|e_{n+1}(h)| \le e^{Lh} |e_n(h)| + C h^{p+1}. \qquad (5)$$
Note that equation (5) is a recursive inequality valid for all values of $n \ge 0$. Next, we use it to estimate $|e_N(h)|$, where we assume $Nh = T$, the length of the time interval. Let $\alpha = e^{Lh}$.
Dividing both sides of (5) by $\alpha^{n+1}$, we get

$$\frac{|e_{n+1}(h)|}{\alpha^{n+1}} \le \frac{|e_n(h)|}{\alpha^n} + C h^{p+1} \frac{1}{\alpha^{n+1}}.$$
Writing this out for $n = 0, 1, 2, \ldots, N-1$,

$$\frac{|e_1(h)|}{\alpha^1} \le \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \frac{1}{\alpha^1},$$

$$\frac{|e_2(h)|}{\alpha^2} \le \frac{|e_1(h)|}{\alpha^1} + C h^{p+1} \frac{1}{\alpha^2},$$

$$\vdots$$

and

$$\frac{|e_N(h)|}{\alpha^N} \le \frac{|e_{N-1}(h)|}{\alpha^{N-1}} + C h^{p+1} \frac{1}{\alpha^N}.$$
Summing these inequalities, we obtain

$$\begin{aligned} \frac{|e_N(h)|}{\alpha^N} &\le \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left( \frac{1}{\alpha^1} + \frac{1}{\alpha^2} + \cdots + \frac{1}{\alpha^N} \right) \\ &= \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left[ \frac{1}{\alpha^N} (1 + \alpha + \alpha^2 + \cdots + \alpha^{N-1}) \right] \\ &= \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left[ \frac{1}{\alpha^N} \left( \frac{\alpha^N - 1}{\alpha - 1} \right) \right]. \end{aligned}$$
Since $e_0(h) = 0$, we have

$$\begin{aligned} |e_N(h)| &\le C h^{p+1} \left[ \frac{1}{\alpha^N} \left( \frac{\alpha^N - 1}{\alpha - 1} \right) \right] \\ &\le C h^{p+1} \left( \frac{\alpha^N - 1}{\alpha - 1} \right), \text{ since } \alpha^N > 1. \end{aligned}$$
Using the inequality $e^x \ge 1 + x$, we get

$$\alpha - 1 = e^{Lh} - 1 \ge Lh \quad \text{and} \quad \alpha^N - 1 = e^{LNh} - 1 = e^{LT} - 1.$$
Therefore, we obtain

$$|e_N(h)| \le C h^{p+1} \left( \frac{e^{LT} - 1}{Lh} \right) = C \left( \frac{e^{LT} - 1}{L} \right) h^p.$$

That is,

$$|e_N(h)| = O(h^p). \qquad (6)$$
From equations (2) and (6),

$$\tau_{n+1}(h) = O(h^{p+1}) \quad \text{and} \quad |e_N(h)| = O(h^p),$$

so we can conclude that the global truncation error is one order lower than the local truncation error.
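This conclusion can be illustrated numerically. For Euler's method ($p = 1$), one step taken from the exact solution exhibits the local error $O(h^2)$, while many steps up to a fixed final time exhibit the global error $O(h)$. A sketch, assuming the test problem $y' = y$, $y(0) = 1$ (our illustrative choice):

```python
import math

# Contrast LTE and GTE orders for Euler's method (p = 1) on y' = y, y(0) = 1.
# The test problem is an illustrative choice, not from the derivation above.

def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: y

def lte(h):
    # One step starting from the exact solution: the error is the LTE.
    return abs(math.exp(h) - euler(f, 0.0, 1.0, h, 1))

def gte(h):
    # Many steps up to the fixed final time T = 1: the error is the GTE.
    return abs(math.e - euler(f, 0.0, 1.0, h, round(1.0 / h)))

# Observed orders from halving h: expect about p + 1 = 2 and p = 1.
lte_order = math.log2(lte(0.01) / lte(0.005))
gte_order = math.log2(gte(0.01) / gte(0.005))
```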
In this graph, the red line is the true solution, the green line is the first step, and the blue line is the second step.
- $\overline{AB}$ is the local truncation error at step 1, $\tau_1$, equal to $\overline{CD}$.
- $\overline{DE}$ is the separation that arises because, after the first step, we are on the wrong solution of the ODE.
- $\overline{CF}$ is $\tau_2$.
Thus, $e_2$, the global truncation error at step 2, is the sum of these segments. We can see this from

$$e_{n+1} = e_n + h\,[A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f)] + \tau_{n+1}.$$

Then

$$e_2 = e_1 + h\,[A(t_1, y(t_1), h, f) - A(t_1, y_1, h, f)] + \tau_2,$$

so

$$e_2 = \overline{AB} + \overline{DE} + \overline{CF}.$$
Exercise: Find the order of the two-step Adams–Bashforth method. You need to show the order of its truncation error.
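One way to sanity-check an answer to this exercise is to measure the observed global order numerically; if the truncation error is $O(h^{p+1})$, the global error should behave like $O(h^p)$. A sketch on the illustrative problem $y' = y$, $y(0) = 1$, seeding the second starting value with the exact solution (the exercise itself asks for the analytic order):

```python
import math

# Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2*f_n - 1/2*f_{n-1}).
# We estimate its observed global order on y' = y, y(0) = 1 (illustrative
# problem), using the exact solution for the second starting value.

def ab2(f, t0, y0, y1, h, n_steps):
    # y0 and y1 are the starting values at t0 and t0 + h.
    f_prev = f(t0, y0)
    t, y = t0 + h, y1
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t = t + h
    return y

def gte(h):
    n_steps = round(1.0 / h)
    approx = ab2(lambda t, y: y, 0.0, 1.0, math.exp(h), h, n_steps)
    return abs(math.e - approx)

# Halving h should roughly quarter the error for a second-order method.
order = math.log2(gte(0.01) / gte(0.005))
```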