# Numerical Analysis/Computing the order of numerical methods for ODE's

## Computing the order of numerical methods for ordinary differential equations

### Introduction

The subject of this analysis is the order of accuracy of numerical methods for solving ordinary differential equations. When we mention the "order of accuracy" of a particular numerical method, we usually mean the order of the global truncation error. The global error is the cumulative error in the numerical solution produced over the entire interval on which the ODE is solved. The local truncation error is the error in the numerical solution (excluding the round-off errors produced in computation) generated at a particular step, when the previous step solution is considered exact (although it is not exact, unless the previous step is the boundary or initial condition). The local truncation error is usually denoted as ${\displaystyle O(h^{(n+1)})\,}$, where n is the order of accuracy over the entire interval of the differential equation and h is the step size (for a fixed step size discretization). The importance of the order of a numerical method lies in the fact that a higher order method needs fewer steps to solve an ODE numerically within a prescribed error tolerance. This becomes very important when we try to solve an ODE that describes a relatively fast changing physical process, such as combustion in internal combustion engines, vibrations in structures, etc.
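To make the local versus global distinction concrete, the following sketch (assuming the test problem y' = -y, y(0) = 1, with exact solution e^{-t}, chosen here for illustration) measures both errors for the explicit first order Euler method: the one-step (local) error scales like h², while the error accumulated over [0, 1] (global) scales like h:

```python
import math

def euler_solve(f, y0, t0, t1, n):
    """Explicit Euler with n fixed steps on [t0, t1]; returns the approximation of y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y              # test problem y' = -y, y(0) = 1, exact y(t) = e^{-t}
exact = math.exp(-1.0)

for n in (100, 200, 400):
    h = 1.0 / n
    # local error of ONE Euler step started from the exact value: O(h^2)
    local = abs(math.exp(-h) - (1.0 - h))
    # global error after integrating to t = 1: O(h)
    glob = abs(exact - euler_solve(f, 1.0, 0.0, 1.0, n))
    print(f"h={h:.4f}  local/h^2={local / h**2:.3f}  global/h={glob / h:.3f}")
```

The printed ratios stay roughly constant as h shrinks, which is exactly what the orders O(h²) (local) and O(h) (global) predict.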

The following two examples show how to determine the order of a numerical method. The first example treats a family of second order Runge-Kutta methods. It shows how we can derive the conditions on the parameters introduced in the general form of the method, and the possible choices for those parameters. The second example can be used as an exercise, since some steps are hidden and can be revealed by selecting the marked button to show the missing steps that complete the derivation. It shows how the conditions are derived for a Runge-Kutta type general form with slope evaluations at three different locations within the current step. The main idea is the comparison, term by term, of the Taylor series expansion with the corresponding recurrence equation of the numerical method. Based on this comparison, we determine the order of the local truncation error, which is then used to make a statement about the order of the method. The examples are followed by a concept quiz, which lets the reader review the theoretical details used in the examples. Finally, the analysis ends with a conclusion.

### Example 1. Determination of the parameters to establish a second order Runge Kutta method

Let a single step numerical method for solving an ODE of the type

${\displaystyle y'(t)=f(t,y(t)),y(t_{0})=y_{0},\,}$

(1.1)

be given by the following recurrence formula [2,4]

${\displaystyle y_{n+1}=y_{n}+h(a_{1}k_{1}+a_{2}k_{2})\,}$

with

${\displaystyle k_{1}=f(t_{n},y_{n})\,}$

${\displaystyle k_{2}=f(t_{n}+ph,y_{n}+hqk_{1})\,}$

Using Taylor series expansion about ${\displaystyle t_{n}\,}$, the following can be obtained

${\displaystyle y(t_{n}+h)=y(t_{n})+y'(t_{n})h+y''(t_{n})h^{2}/2+y'''(t_{n})h^{3}/3!+O(h^{4})\,}$

(1.2)

Assuming that the previous point in (1.2) is exact (which we may because we are analyzing the local truncation error, i.e. the error generated at the last step), we will denote ${\displaystyle y(t_{n})=y_{n},y(t_{n}+h)={\overline {y_{n+1}}}\,}$. Therefore, both ${\displaystyle y_{n}\,}$ and ${\displaystyle {\overline {y_{n+1}}}\,}$ are considered exact values of ${\displaystyle y(t)\,}$ at ${\displaystyle t=t_{n}\,}$ and ${\displaystyle t=t_{n}+h\,}$, respectively.

Then,

${\displaystyle {\overline {y_{n+1}}}=y_{n}+y'(t_{n})h+y''(t_{n})h^{2}/2+y'''(t_{n})h^{3}/3!+O(h^{4})\,}$

(1.3)

Using the differential equation (1.1) and replacing the derivatives of y, (1.3) becomes

${\displaystyle {\overline {y_{n+1}}}=y_{n}+f(t_{n},y_{n})h+f'(t_{n},y_{n})h^{2}/2+f''(t_{n},y_{n})h^{3}/3!+O(h^{4}),\,}$

(1.4)

where

${\displaystyle f'(t_{n},y_{n})={\frac {\partial f}{\partial t}}(t_{n},y_{n})+{\frac {\partial f}{\partial y}}(t_{n},y_{n})f(t_{n},y_{n}),\,}$

(1.5)

or, in compact form, (1.5) is ${\displaystyle f'(t_{n},y_{n})=\left(f_{t}+f_{y}f\right)_{t=t_{n},y=y_{n}},\,}$

and

${\displaystyle f''(t_{n},y_{n})=\left({\frac {\partial ^{2}f}{\partial t^{2}}}+{\frac {\partial ^{2}f}{\partial y\partial t}}f+{\frac {\partial ^{2}f}{\partial y\partial t}}f+{\frac {\partial f}{\partial y}}{\frac {\partial f}{\partial t}}+{\frac {\partial ^{2}f}{\partial y^{2}}}f^{2}+{\frac {\partial f}{\partial y}}{\frac {\partial f}{\partial y}}f\right)_{t=t_{n},y=y_{n}}\,}$

(1.6)

or in the compact form (1.6) is ${\displaystyle f''(t_{n},y_{n})=\left(f_{tt}+2f_{ty}f+f_{t}f_{y}+f_{yy}f^{2}+f_{y}f_{y}f\right)_{t=t_{n},y=y_{n}}\,}$
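The total-derivative formulas (1.5) and (1.6) can be checked symbolically. The sketch below (using SymPy, with an arbitrarily chosen concrete right-hand side, not one from the text) compares the compact form (1.6) against a direct second total differentiation:

```python
import sympy as sp

t, u = sp.symbols('t u')             # u plays the role of y
f = sp.sin(t) * u**2 + t * u         # a concrete right-hand side f(t, y), chosen arbitrarily

ft, fu = sp.diff(f, t), sp.diff(f, u)

# (1.5): first total derivative of f along solutions of y' = f
fp = ft + fu * f

# (1.6): the claimed compact form of the second total derivative
fpp_formula = (sp.diff(f, t, 2) + 2 * sp.diff(f, t, u) * f
               + ft * fu + sp.diff(f, u, 2) * f**2 + fu**2 * f)

# Direct check: apply the total derivative operator (d/dt + f * d/dy) to fp once more
fpp_direct = sp.diff(fp, t) + sp.diff(fp, u) * f

assert sp.simplify(fpp_direct - fpp_formula) == 0
print("(1.5)/(1.6) verified for this f")
```

The same check with other right-hand sides (the chain rule argument does not depend on this particular f) gives the identical cancellation.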

After substituting the expressions for f′ and f″ into the Taylor series expansion (1.4), the following is obtained

${\displaystyle {\overline {y_{n+1}}}=y_{n}+hf(t_{n},y_{n})+{\frac {h^{2}}{2}}(f_{t}+f_{y}f)_{(t_{n},y_{n})}+{\frac {h^{3}}{6}}\left(f_{tt}+2f_{ty}f+f_{t}f_{y}+f_{yy}f^{2}+f_{y}f_{y}f\right)_{(t_{n},y_{n})}+O(h^{4})\,}$

(1.7)

Now, our objective is to adjust the method's recurrence equation such that it can be compared, term by term, with the equation obtained from the Taylor series expansion (1.7). This step requires the Taylor series expansion of the ${\displaystyle f(t_{n}+ph,y_{n}+hqk_{1})\,}$ term about the point ${\displaystyle (t_{n},y_{n})\,}$, as follows.

${\displaystyle f(t_{n}+ph,y_{n}+hqk_{1})=f(t_{n},y_{n})+{\frac {\partial f}{\partial t}}(t_{n},y_{n})(hp)+{\frac {\partial f}{\partial y}}(t_{n},y_{n})(qhf(t_{n},y_{n}))+{\frac {h^{2}}{2}}\left({\frac {\partial ^{2}f}{\partial t^{2}}}p^{2}+2{\frac {\partial ^{2}f}{\partial y\partial t}}qpf+{\frac {\partial ^{2}f}{\partial y^{2}}}q^{2}f^{2}\right)_{t=t_{n},y=y_{n}}+O(h^{3})\,}$

(1.8)

or in the compact symbolic form (1.8) is:

${\displaystyle f(t_{n}+ph,y_{n}+hqk_{1})=\left(f+f_{t}hp+f_{y}fhq+{\frac {h^{2}}{2}}(f_{tt}p^{2}+2f_{ty}qpf+f_{yy}q^{2}f^{2})\right)_{t=t_{n},y=y_{n}}+O(h^{3})\,}$

(1.9)

By substituting (1.9) into the method's recurrence equation, the following is obtained

${\displaystyle y_{n+1}=y_{n}+\left(hf(a_{1}+a_{2})+ha_{2}(hpf_{t}+hqf_{y}f)+{\frac {h^{3}a_{2}}{2}}(f_{tt}p^{2}+2f_{ty}qpf+f_{yy}q^{2}f^{2})\right)_{t=t_{n},y=y_{n}}+\underbrace {ha_{2}O_{2}(h^{3})} _{O_{3}(h^{4})}\,}$

(1.10)

Finally, the method's equation (1.10) and the exact one step solution (1.7) can be compared. By subtracting (1.10) from (1.7), the following error expression is obtained.

${\displaystyle {\overline {y_{n+1}}}-y_{n+1}=\left((1-a_{1}-a_{2})hf+(1/2-pa_{2})h^{2}f_{t}+(1/2-qa_{2})h^{2}f_{y}f+A(h)\right)_{t=t_{n},y=y_{n}}\,}$

(1.11)

where

${\displaystyle A(h)=\left(h^{3}\left({\frac {1}{6}}(f_{tt}+2f_{ty}f+f_{t}f_{y}+f_{yy}f^{2}+f_{y}f_{y}f)-{\frac {a_{2}}{2}}(f_{tt}p^{2}+2f_{ty}qpf+f_{yy}q^{2}f^{2})\right)\right)_{t=t_{n},y=y_{n}}+O_{1}(h^{4})-O_{3}(h^{4})\,}$

(1.12)

Notice that the multiplier of ${\displaystyle h^{3}\,}$ in (1.12) does not cancel out in (1.11) for any combination of the parameters, since some terms appear only once, without any parameter attached. This means that the dominant term in the error expression cannot be of higher order than ${\displaystyle O(h^{3})\,}$. In order to achieve a local truncation error of third order, all terms in the error expression containing ${\displaystyle h\,}$ and ${\displaystyle h^{2}\,}$ must be eliminated by adjusting the four parameters. When the equations (1.13-1.15) are satisfied, the local error is of order ${\displaystyle O(h^{3})\,}$ and, consequently, the global error is of order ${\displaystyle O(h^{2})\,}$. Therefore, the resulting method is a second order method.

The set of equations that must be satisfied is the following

${\displaystyle a_{1}+a_{2}=1,\,}$

(1.13)

${\displaystyle pa_{2}=1/2,\,}$

(1.14)

${\displaystyle qa_{2}=1/2.\,}$

(1.15)

There are three equations (1.13, 1.14, 1.15) but four unknown parameters. This means that one parameter must be chosen freely. It is tempting to ask whether the extra parameter (additional degree of freedom) could be used to cancel the next term in the error expression. However, we have already seen that this is not possible here, because of the terms that are not multiplied by any parameter.

Since one parameter can be chosen freely, the second order method does not have a unique form.

• (case 1) Choose ${\displaystyle a_{1}=a_{2}\,}$, then ${\displaystyle a_{1}=a_{2}=1/2,p=q=1\,}$

The method's recurrence equation for these parameters becomes

${\displaystyle y_{n+1}=y_{n}+{\frac {h}{2}}(k_{1}+k_{2})\,}$

with

${\displaystyle k_{1}=f(t_{n},y_{n})\,}$

${\displaystyle k_{2}=f(t_{n}+h,y_{n}+hk_{1})\,}$

or in a compact form

${\displaystyle y_{n+1}=y_{n}+{\frac {h}{2}}(f(t_{n},y_{n})+f(t_{n}+h,y_{n}+hf(t_{n},y_{n})))\,}$
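A minimal sketch of case 1 (often called Heun's method), with an empirical order check on the test problem y' = -y (a problem chosen here for illustration, not taken from the text):

```python
import math

def heun_step(f, t, y, h):
    """One step of case 1 (a1 = a2 = 1/2, p = q = 1), often called Heun's method."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h / 2 * (k1 + k2)

f = lambda t, y: -y              # test problem y' = -y, y(0) = 1, exact y(t) = e^{-t}
errors = []
for n in (50, 100):
    h, y = 1.0 / n, 1.0
    for i in range(n):
        y = heun_step(f, i * h, y, h)
    errors.append(abs(y - math.exp(-1.0)))

# Halving h should divide the global error by about 2^2 for a second order method.
order = math.log2(errors[0] / errors[1])
print(f"observed order ≈ {order:.2f}")
```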

• (case 2) If ${\displaystyle a_{1}=0\,}$ is chosen (notice that ${\displaystyle a_{2}=0\,}$ cannot be selected, because of the last two equations), then

${\displaystyle y_{n+1}=y_{n}+hk_{2}\,}$

${\displaystyle k_{1}=f(t_{n},y_{n})\,}$

${\displaystyle k_{2}=f(t_{n}+1/2h,y_{n}+1/2hk_{1})\,}$

or in a compact form

${\displaystyle y_{n+1}=y_{n}+hf(t_{n}+1/2h,y_{n}+1/2hf(t_{n},y_{n}))\,}$
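Case 2, the explicit midpoint method, can be sketched the same way; here on the nonlinear test problem y' = -y², y(0) = 1, with exact solution 1/(1+t) (again a problem chosen for illustration):

```python
import math

def midpoint_step(f, t, y, h):
    """One step of case 2 (a1 = 0, a2 = 1, p = q = 1/2): the explicit midpoint method."""
    k1 = f(t, y)
    return y + h * f(t + h / 2, y + h / 2 * k1)

f = lambda t, y: -y * y          # y' = -y^2, y(0) = 1, exact y(t) = 1/(1+t)
errs = []
for n in (50, 100):
    h, y = 1.0 / n, 1.0
    for i in range(n):
        y = midpoint_step(f, i * h, y, h)
    errs.append(abs(y - 0.5))    # exact value at t = 1 is 1/2

print(f"observed order ≈ {math.log2(errs[0] / errs[1]):.2f}")
```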

### Example/Exercise 2. Computing the order of a single step ODE method with three slope evaluations (third order Runge-Kutta)

Let the recurrence equation of a method of Runge-Kutta type, with three slope evaluations at each step, be given by

${\displaystyle y_{n+1}=y_{n}+h(a_{1}k_{1}+a_{2}k_{2}+a_{3}k_{3})\,}$

(2.1)

with

${\displaystyle k_{1}=f(t_{n},y_{n}),\,}$

${\displaystyle k_{2}=f(t_{n}+p_{1}h,y_{n}+hq_{11}k_{1}),\,}$

${\displaystyle k_{3}=f(t_{n}+p_{2}h,y_{n}+h(q_{21}k_{1}+q_{22}k_{2})),\,}$

Taylor series expansion of ${\displaystyle y(t_{n}+h)\,}$ about ${\displaystyle t_{n}\,}$ is the same as in Example 1. Therefore, we will just use the final expression (1.7), since the derivation is the same. For convenience, that expression is repeated below; it will serve as the reference equation for the comparison with the method's recurrence equation. Since the formulas for the given form of the recurrence equation get complicated, we will use the compact symbolic notation for the derivatives introduced in Example 1.

${\displaystyle {\overline {y_{n+1}}}=y_{n}+hf(t_{n},y_{n})+{\frac {h^{2}}{2}}(f_{t}+f_{y}f)_{(t_{n},y_{n})}+{\frac {h^{3}}{6}}\left(f_{tt}+2f_{ty}f+f_{t}f_{y}+f_{yy}f^{2}+f_{y}f_{y}f\right)_{(t_{n},y_{n})}+O(h^{4})\,}$

(2.2)

The Taylor expansion of the terms in (2.2) is shown up to ${\displaystyle O(h^{4})\,}$, rather than up to ${\displaystyle O(h^{5})\,}$ as we should in order to check whether the next higher order terms eventually cancel out; instead, we will assume that the method cannot achieve better local accuracy than fourth order or, equivalently, a global error better than third order. This saves us from the third level expansion of the two variable function f, which has 18 terms and would be inappropriate due to its length (even in the compact symbolic notation).

After preparing the Taylor series expansion, we need to adjust the method's recurrence equation so that it can be compared with the Taylor series (2.2).

Now, we need to group the terms in a similar way to how they are grouped in the Taylor series (2.2), so that we can establish the conditions on the parameters that yield the same terms as in the Taylor expansion, up to the terms containing ${\displaystyle h^{4}\,}$.

${\displaystyle y_{n+1}=y_{n}+h(a_{1}+a_{2}+a_{3})f(t_{n},y_{n})+h^{2}(a_{2}(p_{1}f_{t}+q_{11}f_{y}f)+a_{3}(p_{2}f_{t}+q_{21}ff_{y}+q_{22}ff_{y}))_{(t_{n},y_{n})}+h^{3}({\frac {a_{2}}{2}}(f_{tt}p_{1}^{2}+f_{yy}q_{11}^{2}f^{2}+2f_{ty}p_{1}q_{11}f)+\,}$

${\displaystyle +a_{3}(q_{22}(p_{1}f_{t}f_{y}+q_{11}f_{y}^{2}f)+{\frac {1}{2}}(f_{tt}p_{2}^{2}+f_{yy}(q_{21}+q_{22})^{2}f^{2}+2f_{ty}p_{2}f(q_{21}+q_{22}))))_{(t_{n},y_{n})}+\,}$

${\displaystyle +\underbrace {[hO_{2}(h^{3})+h^{2}O_{3}(h^{2})(f_{y}q_{22}a_{3})+f_{yy}q_{22}a_{3}fh^{3}O_{5}(h)+a_{3}f_{ty}p_{2}q_{22}h^{3}O_{5}(h)]_{(t_{n},y_{n})}} _{O_{6}(h^{4})}\,}$

(2.4)

By comparing the two expressions (2.4) and (2.2), the following system of equations is obtained.

${\displaystyle a_{1}+a_{2}+a_{3}=1\,}$

(2.5)

${\displaystyle p_{1}a_{2}+p_{2}a_{3}=1/2\,}$

(2.6)

${\displaystyle q_{11}a_{2}+a_{3}(q_{21}+q_{22})=1/2\,}$

(2.7)

${\displaystyle a_{2}p_{1}^{2}+a_{3}p_{2}^{2}=1/3\,}$

(2.8)

${\displaystyle a_{2}p_{1}q_{11}+a_{3}p_{2}(q_{21}+q_{22})=1/3\,}$

(2.9)

${\displaystyle a_{3}q_{22}p_{1}=1/6\,}$

(2.10)

${\displaystyle a_{2}q_{11}^{2}+a_{3}(q_{21}+q_{22})^{2}=1/3\,}$

(2.11)

${\displaystyle a_{3}q_{22}q_{11}=1/6\,}$

(2.12)

At first glance the system is closed: the number of equations, (2.5) through (2.12), matches the number of undetermined parameters. However, only six equations are independent; the remaining two can be obtained from those six. By dividing (2.10) by (2.12), we obtain ${\displaystyle q_{11}=p_{1}\,}$. Similarly, by subtracting (2.11) from (2.9) and using ${\displaystyle q_{11}=p_{1}\,}$, we see that ${\displaystyle p_{2}=q_{21}+q_{22}\,}$. When we substitute these two results into the remaining equations, it is evident that (2.6) and (2.7) become the same equation, as do (2.8) and (2.9). Therefore, two equations follow from the other six, and we have to choose two parameters freely in order to solve for the rest.

For example, we can choose that ${\displaystyle p_{2}=1,q_{11}=1/2\,}$, then we obtain the following recurrence equation.

${\displaystyle y_{n+1}=y_{n}+{\frac {h}{6}}(k_{1}+4k_{2}+k_{3})\,}$

(2.13)

where

${\displaystyle k_{1}=f(t_{n},y_{n})\,}$

${\displaystyle k_{2}=f(t_{n}+1/2h,y_{n}+1/2hk_{1})\,}$

${\displaystyle k_{3}=f(t_{n}+h,y_{n}-hk_{1}+2hk_{2})\,}$

The recurrence equation (2.13) is the well-known third order Runge-Kutta method [1,3] (see the List of Runge–Kutta methods), which indicates that our approach was correct.
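A sketch of the method (2.13), with an empirical order estimate on the test problem y' = -y (a problem chosen here for illustration, not taken from the text); halving h should divide the global error by about 2³:

```python
import math

def rk3_step(f, t, y, h):
    """One step of the recurrence (2.13), the third order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

f = lambda t, y: -y              # test problem y' = -y, y(0) = 1, exact y(t) = e^{-t}
errs = []
for n in (20, 40):
    h, y = 1.0 / n, 1.0
    for i in range(n):
        y = rk3_step(f, i * h, y, h)
    errs.append(abs(y - math.exp(-1.0)))

print(f"observed order ≈ {math.log2(errs[0] / errs[1]):.2f}")
```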

### Quiz - Method order computations, local and global truncation error

1 What is the main benefit of a higher order ODE method?

• It is always better to increase the accuracy of the solution
• It requires fewer steps for the same accuracy restriction
• We need the method to be more consistent
• We need the method to be more stable
• It provides a better margin of stability

2 The number of slope predictions (number of k's) in an RK method is related to the order of the method as follows. The order of the method is:

• exactly equal to the number of slope predictions
• equal to the number of k's (slope evaluations) + 1
• upper bounded by the number of k's
• not related to the number of k's

3 An n-th order of the method means that

• the global truncation error is of the order ${\displaystyle O(h^{n+1})\,}$
• the local truncation error is of the order ${\displaystyle O(h^{n})\,}$
• the global truncation error is of the order ${\displaystyle O(h^{n})\,}$
• the maximum of the global and local error is n-th order

4 The order of the local truncation error is related to the global truncation error in the following way

• they are about equal
• they are not related
• the global is one order higher than the local
• the local is one order higher than the global

5 Is the following true? The global truncation error is always less than the local one, since the errors partially cancel out over the steps.

• true
• false

6 What is the highest order that we can achieve with a Runge Kutta type method by using 5 k's?

• We can achieve 5-th order
• That method is not used, since it is not consistent.
• We can achieve 4-th order
• We can achieve even higher order; it is sixth order accurate.

7 How is the order of single step methods related to consistency of the method?

• At least first order of the method indicates that the method is consistent
• To be consistent, a method must be second order or higher
• Not related at all; a single step method can be inconsistent and be of any order.

8 What is the main "tool" to prove the order of a numerical method for solving ODE's of type ${\displaystyle y'=f(t,y(t))\,}$?

• Solving a specific "stiff" problem ${\displaystyle y'=f(t,y(t))=-Ky\,}$ and showing that it converges for a specific real positive K
• Taylor series expansion for one variable
• Taylor series expansion for one and two variables
• Solving the ODE analytically and then numerically

9 If we show that all terms in a numerical method for ODE recurrence equation match the terms in the Taylor series expansion up to the terms with ${\displaystyle h^{n}\,}$, but we do not analyze whether further cancellation of the terms exist, the following is true

• The method is of the order n
• The method is of the order n, since the global error is of the order ${\displaystyle O(h^{n})\,}$
• The order of the method could be higher than n-1, but it is at least n-1

10 If we use an explicit n-th (n>1) order accurate multistep method with s steps (s>2) to solve an ODE, but we use an n-1 order method to calculate the missing initial points that we need to start using the multistep method, what is the global order of accuracy of our calculation?

• There is no such combined method; we cannot do that
• The overall method is n-th order, since we just use the n-1 order method for a few initial points
• The overall method is of the order n-1, since the initial error from the less accurate method remains in the cumulative (global) error
• The order of the method could be either n or n-1
• The order of the method is something between n-1 and n, just like the RK 45 method

11 What do we get as an order of error from the following expression

${\displaystyle {\frac {1}{h}}O(h^{2})+10O(h^{3})\,}$?

• Something between ${\displaystyle O(h^{2})\,}$ and ${\displaystyle O(h^{3})\,}$
• ${\displaystyle O(h^{2})\,}$
• ${\displaystyle O(h^{3})\,}$
• ${\displaystyle O(h)\,}$

12 What do we get as an order of error from the following expression

${\displaystyle sin(h^{2}){\frac {1}{h}}O(h^{2})\,}$?

• Something between ${\displaystyle O(h^{2})\,}$ and ${\displaystyle O(h)\,}$
• ${\displaystyle O(h^{2})\,}$
• ${\displaystyle O(h^{3})\,}$
• ${\displaystyle O(h)\,}$

13 What do we get as an order of error from the following expression

${\displaystyle e^{h}cos(h){\frac {1}{h}}O(h^{2})\,}$?

• It must be of order ${\displaystyle O(h^{2})\,}$
• ${\displaystyle O(h^{3})\,}$
• ${\displaystyle O(h)\,}$

### Conclusion

The main reason we need to consider the order of a numerical method for solving ODE's is that it directly determines the step size, i.e. the number of steps for which we need to evaluate the recurrence equation. This directly influences the amount of computational work needed to obtain a prescribed accuracy.

The main approach for computing the order of a numerical method is to first use a Taylor series expansion of the exact differential equation to obtain the next value, namely ${\displaystyle y_{n+1}\,}$, using the previous value ${\displaystyle y_{n}\,}$ and the right hand side function ${\displaystyle f(t,y(t))\,}$ evaluated at ${\displaystyle t_{n}\,}$. We cannot say in advance up to which order we need to expand the terms in the Taylor series, since we are solving for the order of the method. The general rule is to expand the terms one order higher than we expect the order of the method to be.

The second part of computing the order of a method is to write all terms in the given recurrence equation in terms of the functions f and y evaluated at ${\displaystyle t_{n}\,}$. Hence, any evaluation of f in the recurrence equation at a time other than ${\displaystyle t_{n}\,}$ needs to be replaced with a truncated Taylor series expansion about ${\displaystyle (t_{n},y_{n})\,}$.

The third part is to compare the two expressions obtained in the aforementioned ways. The power of h up to which the terms from the adjusted recurrence equation match the terms in the Taylor series expansion determines the order of the local truncation error. Once the order of the local truncation error is computed, the global truncation error is one order less.

Since the order of a numerical method for an ODE is, by definition, equal to the order of the global error, we only need to compute the order of the local truncation error and subtract one from it.
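The local-to-global relationship also suggests a simple empirical check that complements the symbolic derivation: if the global error behaves like C h^p, halving h divides the error by about 2^p. A small helper (a name chosen here, not from the text) that extracts p from two measured errors:

```python
import math

def observed_order(err_h, err_half_h):
    """Empirical order estimate: if err(h) ~ C h^p, then err(h) / err(h/2) ~ 2^p,
    so p ~ log2 of the error ratio when the step size is halved."""
    return math.log2(err_h / err_half_h)

# Errors shrinking by a factor of 4 when h is halved indicate a second order method.
print(observed_order(4.0e-4, 1.0e-4))   # ≈ 2.0
```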

There are some methods that do not have a precisely determined order. This is typically the case with embedded or predictor-corrector schemes, such as the Runge-Kutta-Fehlberg 4(5) method.

### References

[1] List of Runge–Kutta methods, Wikipedia, accessed November 5, 2010

[2] Ian Jacques and Colin Judd, "Numerical Analysis," Chapman and Hall, New York, 1987

[3] Runge–Kutta methods, Wikipedia, accessed November 5, 2010

[4] John H. Mathews and Kurtis K. Fink, "Numerical Methods Using Matlab," 4th Edition, Prentice-Hall Inc., 2004