User:Egm6321.f11.team4/HW5


Problem R*5.1 - Equivalence of two forms of the 2nd exactness condition

From Mtg 22-6

Given

\displaystyle 
\begin{align}


g_{0}-\frac{dg_1}{dx}+ \frac{d^2g_2}{dx^2}=0

\end{align}

(1.1)

\displaystyle 
\begin{align}

g_0=\frac{ \partial G}{\partial y} 

\end{align}

\displaystyle 
\begin{align}

g_1=\frac{\partial G}{\partial y'}

\end{align}

\displaystyle 
\begin{align}

g_2=\frac{ \partial G}{\partial y''} 

\end{align}

Find

Show the equivalence of Eq. (1.1) with the following form of the 2nd exactness condition:

\displaystyle 
\begin{align}

f_{xx}+2pf_{xy}+p^2f_{yy}-g_{px}-pg_{py}+g_y+( f_{xp}+pf_{yp}+2f_y-g_{pp} )q=0

\end{align}

Solution

- Solved on our own

Let \displaystyle p:=y' and \displaystyle q:=y'', and recall that for an exact form \displaystyle G(x,y,y',y'')=g(x,y,p)+f(x,y,p)\,y''.

\displaystyle 
\begin{align}

g_0&=\frac{ \partial G}{\partial y} \\
   &=\frac{\partial( g+fy'' )}{\partial y} \\
   &=g_y+f_yq

\end{align}


\displaystyle 
\begin{align}

g_1&=\frac{ \partial G}{\partial y'} \\
   &=\frac{\partial( g+fy'' )}{\partial p} \\
   &=g_p+f_pq

\end{align}

\displaystyle 
\begin{align}

\frac{dg_1}{dx}&=\frac{d( g_p+f_pq )}{ dx } \\
   &=\frac{\partial(g_p+f_pq)}{\partial x} +\frac{\partial(g_p+f_pq)}{\partial y}\frac{dy}{dx} +\frac{\partial(g_p+f_pq)}{\partial p}\frac{dp}{dx} \\
   &=g_{px}+f_{px}q+(g_{py}+f_{py}q)p+(g_{pp}+f_{pp}q)q \\
   &=g_{px}+g_{py}p+g_{pp}q + f_{px}q +f_{py}qp+f_{pp}q^2



\end{align}

(1.2)

\displaystyle 
\begin{align}

g_2&=\frac{ \partial G}{\partial y''} \\
   &=\frac{\partial( g+fy'' )}{\partial q} \\
   &=f

\end{align}

\displaystyle 
\begin{align}

\frac{d^2g_2}{dx^2}&=\frac{d}{dx} \left(\frac{df}{ dx }\right) \\
               &=\frac{d}{dx} (f_x+f_yp+f_pq) \\

   &=\frac{\partial(f_x+f_yp+f_pq)}{\partial x} +\frac{\partial(f_x+f_yp+f_pq)}{\partial y}\frac{dy}{dx} +\frac{\partial(f_x+f_yp+f_pq)}{\partial p}\frac{dp}{dx} \\
   &=f_{xx}+f_{xy}p+f_{xp}q+f_{yx}p+f_{yy}p^2+f_{yp}pq+f_yq+f_{px}q+f_{py}pq+f_{pp}q^2\\
   &=f_{xx}+2f_{xy}p+f_{yy}p^2+2f_{xp}q+2f_{yp}pq+f_yq+f_{pp}q^2   


\end{align}

(1.3)

Note that the terms \displaystyle f_p\frac{dq}{dx} arising from the explicit q-dependence of \displaystyle g_1 and of \displaystyle \frac{dg_2}{dx} cancel between \displaystyle -\frac{dg_1}{dx} and \displaystyle \frac{d^2g_2}{dx^2}, so they have been omitted above. As a result,


\displaystyle 
\begin{align}

g_{0}-\frac{dg_1}{dx}+ \frac{d^2g_2}{dx^2}&=g_y+f_yq - (g_{px}+g_{py}p+g_{pp}q + f_{px}q +f_{py}qp+f_{pp}q^2) + f_{xx}+2f_{xy}p+f_{yy}p^2+2f_{xp}q+2f_{yp}pq+f_yq+f_{pp}q^2   \\
                                          &=f_{xx}+2pf_{xy}+p^2f_{yy}-g_{px}-pg_{py}+g_y+( f_{xp}+pf_{yp}+2f_y-g_{pp} )q=0




\end{align}
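The identity above can be spot-checked symbolically (a minimal sketch using MATLAB's Symbolic Math Toolbox, with a hypothetical sample choice of \displaystyle f(x,y,p) and \displaystyle g(x,y,p) ; the total derivative is taken as \displaystyle \frac{d}{dx}=\partial_x+p\,\partial_y+q\,\partial_p , consistent with the cancellation noted above):

% spot check of the equivalence for one sample (hypothetical) choice of f and g
syms x y p q
f = x^2*y + y^2*p;                                 % sample f(x,y,p)
g = x*p^2 + y^3;                                   % sample g(x,y,p)
Dx = @(F) diff(F,x) + p*diff(F,y) + q*diff(F,p);   % total d/dx with y' = p, y'' = q
g0 = diff(g,y) + diff(f,y)*q;
g1 = diff(g,p) + diff(f,p)*q;
lhs = g0 - Dx(g1) + Dx(Dx(f));
rhs = diff(f,x,2) + 2*p*diff(f,x,y) + p^2*diff(f,y,2) ...
    - diff(g,p,x) - p*diff(g,p,y) + diff(g,y) ...
    + (diff(f,x,p) + p*diff(f,y,p) + 2*diff(f,y) - diff(g,p,2))*q;
simplify(lhs - rhs)                                % returns 0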

Author

Contributed by Chung

Problem R*5.2 - Verification of the exactness of Legendre and Hermite equations

From Mtg 27-1

Given

Legendre equation:

 \displaystyle
     G=(1-x^2) y^{''} - 2x y^{\prime} + n(n+1)y=0

(2.1)

Hermite equation:

 \displaystyle
     y^{''}-2xy^{\prime}+2ny = 0

(2.2)

Find

1. Verify the exactness of the designated L2-ODE-VC (Eq. 2.1, 2.2).
2. If the Hermite equation is not exact, check whether it is in power form, and see whether it can be made exact using the IFM with \displaystyle h(x,y)=x^m y^n
3. The first few Hermite polynomials are given as below.

 \displaystyle
     \begin{align}
     H_0(x) &=1 \\
     H_1(x) &= 2x \\
     H_2(x) &= 4x^2-2
     \end{align}

(2.3)

Verify that the equations in (2.3) are homogeneous solutions of the Hermite differential equation (Eq. 2.2).

Solution

- Solved on our own

Part 1

Legendre equation:
The first exactness condition is satisfied since the equation has the form of Eq. (2.4).

 \displaystyle
  \begin{align}
  G(x,y,y^{\prime}, y^{''}) &= g(x,y,p)+f(x,y,p)y^{''} \\
   &= \underbrace{(1-x^2)}_{f(x,y,p)} y^{''} \underbrace{- 2x y^{\prime} + n(n+1)y}_{g(x,y,p)}
  \end{align}

(2.4)

Method 1 : the 2nd exactness condition
In order to verify the second exactness condition, the following terms are computed.

 \displaystyle
  \begin{align}
  &f_{xx} = -2 \\
  &f_{xy} = 0 \\
  &f_{yy} = 0 \\
  &g_{xp} = -2 \\
  &g_{yp} = 0 \\
  &g_y = n(n+1) \\
  &f_{xp} = 0 \\
  &f_y = 0 \\
  &f_{yp} = 0 \\
  &g_{pp} = 0
  \end{align}

(2.5)

 \displaystyle
  \begin{align}
  f_{xx}+2p f_{xy} + p^2 f_{yy} &= g_{xp} + p g_{yp} - g_y \\
  -2 + 2p \cdot 0 + p^2 \cdot 0 &= -2 + p \cdot 0 - n(n+1) \\
  \end{align}

(2.6)

 \displaystyle
  \begin{align}
  f_{xp}+p f_{yp} + 2 f_y &= g_{pp} \\
  0 + p \cdot 0 + 2 \cdot 0 &= 0
  \end{align}

(2.7)

- The second exactness condition is satisfied when \displaystyle n=0  or \displaystyle n=-1 .


Method 2 : the 2nd exactness condition
The following is another form of the second exactness condition.

 \displaystyle
  g_0-\frac{dg_1}{dx}+\frac{d^2 g_2}{dx^2}=0

(2.8)

Using the definition of \displaystyle g_i := \frac{\partial G}{\partial y ^{(i)}}, the following are computed.

 \displaystyle
  \begin{align}
  g_0&=\frac{\partial G}{\partial y^{(0)}} = n(n+1) \\
  g_1&=\frac{\partial G}{\partial y^{(1)}} = -2x \\
  g_2&=\frac{\partial G}{\partial y^{(2)}} = 1-x^2 \\
  & \frac{dg_1}{dx}=-2 \\
  & \frac{d^2 g_2}{dx^2} = -2
  \end{align}

(2.9)

After substituting the terms in Eq. (2.9), Eq. (2.8) becomes the following.

 \displaystyle
  \begin{align}
  n(n+1) - (-2) + (-2) &= 0 \\
  n(n+1) &= 0
  \end{align}

(2.10)

- The second exactness condition is satisfied when \displaystyle n=0  or \displaystyle n=-1 .

Hermite equation:
The first exactness condition is satisfied since the equation has the form of Eq. (2.11).

 \displaystyle
  \begin{align}
  G(x,y,y^{\prime}, y^{''}) &= g(x,y,p)+f(x,y,p)y^{''} \\
   &= \underbrace{1}_{f(x,y,p)} \cdot y^{''} \underbrace{- 2x y^{\prime} + 2ny}_{g(x,y,p)}
  \end{align}

(2.11)

Method 1 : the 2nd exactness condition
In order to verify the second exactness condition, the following terms are computed.

 \displaystyle
  \begin{align}
  &f_{xx} = 0 \\
  &f_{xy} = 0 \\
  &f_{yy} = 0 \\
  &g_{xp} = -2 \\
  &g_{yp} = 0 \\
  &g_y = 2n \\
  &f_{xp} = 0 \\
  &f_y = 0 \\
  &f_{yp} = 0 \\
  &g_{pp} = 0
  \end{align}

(2.12)

 \displaystyle
  \begin{align}
  f_{xx}+2p f_{xy} + p^2 f_{yy} &= g_{xp} + p g_{yp} - g_y \\
  0 + 2p \cdot 0 + p^2 \cdot 0 &= -2 + p \cdot 0 - 2n \\
  \end{align}

(2.13)

 \displaystyle
  \begin{align}
  f_{xp}+p f_{yp} + 2 f_y &= g_{pp} \\
  0 + p \cdot 0 + 2 \cdot 0 &= 0
  \end{align}

(2.14)

- The second exactness condition is satisfied only when \displaystyle n=-1 ; for any other value of n it is not satisfied.

Method 2 : the 2nd exactness condition
The following is another form of the second exactness condition.

 \displaystyle
  g_0-\frac{dg_1}{dx}+\frac{d^2 g_2}{dx^2}=0

(2.15)

Using the definition of \displaystyle g_i := \frac{\partial G}{\partial y ^{(i)}}, the following are computed.

 \displaystyle
  \begin{align}
  g_0&=\frac{\partial G}{\partial y^{(0)}} = 2n \\
  g_1&=\frac{\partial G}{\partial y^{(1)}} = -2x \\
  g_2&=\frac{\partial G}{\partial y^{(2)}} = 1 \\
  & \frac{dg_1}{dx}=-2 \\
  & \frac{d^2 g_2}{dx^2} = 0
  \end{align}

(2.16)

After substituting the terms in Eq. (2.16), Eq. (2.15) becomes the following.

 \displaystyle
  \begin{align}
  2n - (-2) +0&= 0 \\
  2(n+1) &= 0
  \end{align}

(2.17)

- The second exactness condition is satisfied only when \displaystyle n=-1 .
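As a supplementary check (a minimal sketch using MATLAB's Symbolic Math Toolbox), Method 2 can be applied to both equations directly from the coefficients of y, y', y'':

syms x n
% Legendre: G = (1-x^2) y'' - 2 x y' + n (n+1) y
g0_L = n*(n+1);  g1_L = -2*x;  g2_L = 1 - x^2;
simplify(g0_L - diff(g1_L,x) + diff(g2_L,x,2))   % n*(n+1): vanishes only for n = 0 or n = -1
% Hermite: G = y'' - 2 x y' + 2 n y
g0_H = 2*n;  g1_H = -2*x;  g2_H = sym(1);
simplify(g0_H - diff(g1_H,x) + diff(g2_H,x,2))   % 2*n + 2: vanishes only for n = -1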

Part 2

As shown in Eq. (2.17), the Hermite equation does not satisfy the second exactness condition unless \displaystyle n=-1 (in particular, not for the nonnegative integer values of n of interest). To see if the equation can be made exact using the IFM with \displaystyle h(x,y) = x^m y^n , multiply Eq. (2.2) by \displaystyle h(x,y) . Also, the term n in Eq. (2.2) is replaced with \displaystyle \lambda to avoid confusion with the exponent n of \displaystyle h(x,y) .

 \displaystyle
  x^m y^n ( y^{''} - 2x y^{\prime} + 2\lambda y ) = 0

(2.18)

 \displaystyle
  \underbrace{x^m y^n}_{f(x,y,p)} y ^{''} \underbrace{-2x^{m+1}y^n y^{\prime}+2\lambda x^{m}y^{n+1}}_{g(x,y,p)} = 0

(2.19)

It is assumed that Eq. (2.19) is exact. Then, the integrating factor \displaystyle h(x,y) can be obtained by utilizing the second exactness condition as follows.

 \displaystyle
  \begin{align}
  &f_{xx} = m(m-1)x^{m-2}y^n \\
  &f_{xy} = mnx^{m-1}y^{n-1} \\
  &f_{yy} = n(n-1)x^my^{n-2} \\
  &g_{xp} = -2(m+1)x^my^n \\
  &g_{yp} = -2nx^{m+1} y^{n-1} \\
  &g_y = -2nx^{m+1} y^{n-1}p + 2(n+1)\lambda x^m y^n \\
  &f_{xp} = 0 \\
  &f_y = nx^m y^{n-1} \\
  &f_{yp} = 0 \\
  &g_{pp} = 0
  \end{align}

(2.20)

With the terms computed above, the two equations below which represent the second exactness condition are exploited.

 \displaystyle
  f_{xx}+2p f_{xy} + p^2 f_{yy} = g_{xp} + p g_{yp} - g_y

(2.21)

 \displaystyle
  f_{xp}+p f_{yp} + 2 f_y = g_{pp}

(2.22)

First, substitute the terms in Eq. (2.20) into Eq. (2.22).

 \displaystyle
  \begin{align}
  f_{xp}+p f_{yp} + 2 f_y &= g_{pp} \\
  0 + p \cdot 0 + 2nx^m y^{n-1} &= 0
  \end{align}

(2.23)

The only way Eq. (2.23) can be satisfied is to set n equal to zero.

 \displaystyle
  \therefore n=0

(2.24)

Knowing n=0, substitute the terms in Eq. (2.20) into Eq. (2.21).

 \displaystyle
  \begin{align}
  f_{xx}+2p f_{xy} + p^2 f_{yy} &= g_{xp} + p g_{yp} - g_y \\
  m(m-1)x^{m-2} &= -2(m+1)x^m - 2 \lambda x^m \\
                &= -2 \left(m+1+\lambda \right) x^m
  \end{align}

(2.25)

Both sides of Eq. (2.25) are equal only when the coefficients of \displaystyle x^{m-2} and \displaystyle x^m are zero. Then the following is concluded.

 \displaystyle
  \begin{align}
  m(m-1) &= 0 \\
  m+1+ \lambda &= 0
  \end{align}

(2.26)

The first equation in Eq. (2.26) implies that \displaystyle m=0 or \displaystyle m=1 . Then \displaystyle \lambda is determined using the second equation in Eq. (2.26).

 \displaystyle
  \begin{align}
  &\lambda = -1 \mbox{ when } m = 0 \\
  &\lambda = -2 \mbox{ when } m = 1
  \end{align}

(2.27)

Since n has already been determined to be zero, m cannot also be zero (that would give the trivial integrating factor \displaystyle h(x,y)=1 ). Hence, m is chosen to be one and \displaystyle \lambda is -2.

 \displaystyle
  \begin{align}
  n &= 0 \\
  m &= 1 \\
  \lambda &= -2
  \end{align}

(2.28)

 \displaystyle
  \begin{align}
  \therefore h(x,y) &= x^1 y^0 \\
         &= x
  \end{align}

(2.29)

Here, the exactness of the Hermite equation with \displaystyle h(x,y) is justified.

 \displaystyle
  \begin{align}
  G(x,y,y^{\prime}, y^{''}) &= g(x,y,p)+f(x,y,p)y^{''} \\
   &= xy^{''} - 2x^2 y^{\prime} -4xy
  \end{align}

(2.30)

In order to prove the second exactness of the Hermite equation multiplied by \displaystyle h(x,y) , Method 2 is used.

 \displaystyle
  g_0-\frac{dg_1}{dx}+\frac{d^2 g_2}{dx^2}=0

(2.31)

Using the definition of \displaystyle g_i := \frac{\partial G}{\partial y ^{(i)}}, the following are computed.

 \displaystyle
  \begin{align}
  g_0&=\frac{\partial G}{\partial y^{(0)}} = -4x \\
  g_1&=\frac{\partial G}{\partial y^{(1)}} = -2x^2 \\
  g_2&=\frac{\partial G}{\partial y^{(2)}} = x \\
  & \frac{dg_1}{dx}=-4x \\
  & \frac{d^2 g_2}{dx^2} = 0
  \end{align}

(2.32)

After substituting the terms in Eq. (2.32), Eq. (2.31) becomes the following.

 \displaystyle
  -4x-(-4x)+0=0

(2.33)

- The Hermite equation (with \displaystyle \lambda=-2 ) multiplied by the integrating factor \displaystyle h(x,y) = x is exact.
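The same conclusion can be reached for general \displaystyle \lambda (a minimal sketch using MATLAB's Symbolic Math Toolbox): the condition of Eq. (2.31) fails unless \displaystyle \lambda=-2 .

syms x lambda
% coefficients of x*( y'' - 2*x*y' + 2*lambda*y )
g0 = 2*lambda*x;  g1 = -2*x^2;  g2 = x;
simplify(g0 - diff(g1,x) + diff(g2,x,2))   % 2*x*(lambda + 2): vanishes only for lambda = -2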

Part 3

The Hermite differential equation is known to have power-series solutions of the following form.

 \displaystyle
  y(x) = \sum_{n=0}^{\infty} a_n x^n

(2.34)

Compute the first and the second derivative of Eq. (2.34) with respect to x,

 \displaystyle
  \begin{align}
  y^{\prime}(x) &= \sum_{n=1}^{\infty}n a_n x^{n-1} \\
  y^{''}(x) &= \sum_{n=2}^{\infty}n(n-1) a_n x^{n-2} \\
  \end{align}

(2.35)

Substitute Eq. (2.34), (2.35) into Eq. (2.2) (with n replaced by \displaystyle \lambda ),

 \displaystyle
  \begin{align}
  &\sum_{n=2}^{\infty}n(n-1)a_n x^{n-2} - 2x \sum_{n=1}^{\infty} n a_n x^{n-1} + 2 \lambda \sum_{n=0}^{\infty} a_n x^n = 0 \\
  &\sum_{n=2}^{\infty}n(n-1)a_n x^{n-2} - \sum_{n=1}^{\infty} 2n a_n x^{n} + \sum_{n=0}^{\infty} 2 \lambda a_n x^n = 0 \\
  \end{align}

(2.36)

Shift the index of the first summation down by two (replace n with n+2),

 \displaystyle
  \sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2} x^{n} - \sum_{n=1}^{\infty} 2n a_n x^{n} + \sum_{n=0}^{\infty} 2 \lambda a_n x^n = 0

(2.37)

The second summation starts at n=1, while the other summations start at n=0. The following shows that the second summation may also be started at n=0, since its n=0 term vanishes.

 \displaystyle
  \begin{align}
  \sum_{n=1}^{\infty} 2n a_n x^n &= 2 \cdot 0 \cdot a_0 x^0 + \sum_{n=1}^{\infty}2n a_n x^n \\
  &= \sum_{n=0}^{\infty}2n a_n x^n
  \end{align}

(2.38)

Combine all summations in Eq. (2.37),

 \displaystyle
  \sum_{n=0}^{\infty} \left[ (n+2)(n+1)a_{n+2} - 2n a_n + 2 \lambda a_n \right] x^n = 0

(2.39)

Hence, the following recurrence relation is obtained from Eq. (2.39).

 \displaystyle
  (n+2)(n+1)a_{n+2} - 2n a_n + 2 \lambda a_n = 0 \mbox{ for all } n=0,1,2,3, \cdots

(2.40)

Simplify Eq. (2.40),

 \displaystyle
  a_{n+2} = \frac{2(n-\lambda)}{(n+2)(n+1)} a_n \mbox{ for all } n=0,1,2,3, \cdots

(2.41)

Using the recurrence relation in Eq. (2.41), the Hermite polynomials given in the problem are obtained.
Let the initial conditions and \displaystyle \lambda be as follows.

 \displaystyle
  y(0)=a_0=1, \mbox{ } y^{\prime}(0)=a_1=0, \mbox{ } \lambda = 0

(2.42)

Then,

 \displaystyle
  \begin{align}
  a_2 &= \frac{2(0-0)}{2 \cdot 1} a_0 = 0 \\
  a_3 &= \frac{2(1-0)}{3 \cdot 2} a_1 = 0 \\
  a_4 &= a_5 = a_6 = a_7 = \ldots = 0
  \end{align}

(2.43)

- Hence, the Hermite polynomial in this case is \displaystyle H_0(x) = 1 .

Let the initial conditions and \displaystyle \lambda be as follows.

 \displaystyle
  y(0)=a_0=0, \mbox{ } y^{\prime}(0)=a_1=2, \mbox{ } \lambda = 1

(2.44)

Then,

 \displaystyle
  \begin{align}
  a_2 &= \frac{2(0-1)}{2 \cdot 1} a_0 = 0 \\
  a_3 &= \frac{2(1-1)}{3 \cdot 2} a_1 = 0 \\
  a_4 &= a_5 = a_6 = a_7 = \ldots = 0
  \end{align}

(2.45)

- Hence, the Hermite polynomial in this case is \displaystyle H_1(x) = 2x .

Let the initial conditions and \displaystyle \lambda be as follows.

 \displaystyle
  y(0)=a_0=-2, \mbox{ } y^{\prime}(0)=a_1=0, \mbox{ } \lambda = 2

(2.46)

Then,

 \displaystyle
  \begin{align}
  a_2 &= \frac{2(0-2)}{2 \cdot 1} a_0 = 4 \\
  a_3 &= \frac{2(1-2)}{3 \cdot 2} a_1 = 0 \\
  a_4 &= \frac{2(2-2)}{4 \cdot 3} a_2 = 0 \\
  a_5 &= a_6 = a_7 = a_8 = \ldots = 0
  \end{align}

(2.47)

- Hence, the Hermite polynomial in this case is \displaystyle H_2(x) = 4x^2-2 .
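One can also check directly (a minimal sketch using MATLAB's Symbolic Math Toolbox) that \displaystyle H_0, H_1, H_2 from Eq. (2.3) satisfy Eq. (2.2) with n = 0, 1, 2 respectively:

syms x
H = {sym(1), 2*x, 4*x^2 - 2};   % H0, H1, H2 from Eq. (2.3)
for k = 0:2
    res = simplify(diff(H{k+1},x,2) - 2*x*diff(H{k+1},x) + 2*k*H{k+1});
    fprintf('n = %d: residual = %s\n', k, char(res));   % each residual is 0
end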

References

  • Hermite Polynomial, Wolfram MathWorld

Author

Contributed by Shin

Problem R*5.3 Method of undetermined coefficients

From Mtg 30-3

Given

Given L4-ODE-CC,

\displaystyle X^{(4)} - K^4 X = 0

(3.1)

Assuming,

\displaystyle X(x) = e^{rx}

(3.2)

and substituting this in Eq. (3.1), we get 4 solutions for r,

\displaystyle r_{1,2} = \pm K

\displaystyle r_{3,4} = \pm i \, K

(3.3)

Find

Expressions for X(x) in terms of  \displaystyle \cos K x, \ \sin K x, \ \cosh K x, \ \sinh K x


Solution

- Solved on my own

We can write final solution X(x) as,

\displaystyle X(x) = \sum_{i=1}^4 C_i e^{r_i x}

(3.4)

Substituting all the \displaystyle r_i values from Eq. (3.3),

\displaystyle X(x) = C_1 e^{K x} + C_2 e^{-K x} + C_3 e^{iK x} + C_4 e^{-iK x}

(3.5)

Expanding the exponential terms into sin, cos, sinh and cosh terms, we get,

\displaystyle X(x) = C_1 \cosh K x \ + C_1 \sinh K x \ + C_2 \cosh K x \ - C_2 \sinh K x \ + C_3 \cos K x \ + i C_3 \sin K x \ + C_4 \cos K x - \ i C_4 \sin K x

(3.6)

For X(x) to be real, the coefficient of \displaystyle \sin K x in Eq. (3.6), namely \displaystyle i(C_3 - C_4) , must be real, i.e. \displaystyle (C_3 - C_4) must be purely imaginary,

\displaystyle C_3 - C_4 = ib_4

and \displaystyle (C_3 + C_4) must be real,

\displaystyle C_3 + C_4 = b_3

Therefore, \displaystyle C_3 and \displaystyle C_4 will be given by,

\displaystyle C_3 = \frac{1}{2}(b_3 + ib_4)

\displaystyle C_4 = \frac{1}{2}(b_3 - ib_4)

This means \displaystyle C_3 and \displaystyle C_4 given above are complex conjugates.



Therefore, we get the final solution,

\displaystyle X(x) = (C_1 + C_2) \cosh K x + (C_1 - C_2) \sinh K x + b_3 \cos K x - b_4 \sin K x

(3.7)
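As a quick check (a minimal sketch using MATLAB's Symbolic Math Toolbox, writing \displaystyle b_1 = C_1+C_2 and \displaystyle b_2 = C_1-C_2 for the real cosh/sinh coefficients), the form in Eq. (3.7) satisfies Eq. (3.1):

syms x K b1 b2 b3 b4
X = b1*cosh(K*x) + b2*sinh(K*x) + b3*cos(K*x) - b4*sin(K*x);
simplify(diff(X,x,4) - K^4*X)   % returns 0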

Author

Contributed by Ankush

Problem R*5.4 - Find \displaystyle y_{xxxxx} in terms of the derivatives of y with respect to t

From Mtg 31-1

Given

In Euler L2-ODE-VC, suppose: \displaystyle x=e^t .

Find

\displaystyle y_{xxxxx}

Solution

- Solved on my own

Since:

\displaystyle 
\frac{dt}{dx} = \left( \frac{dx}{dt} \right)^{-1} = \left( \frac{d(e^t)}{dt} \right)^{-1} = e^{-t}

(4.1)

So:

\displaystyle 
y_x = \frac{dy}{dx} = \frac{dy}{dt} \frac{dt}{dx} = e^{-t} y_t

(4.2)

\displaystyle \begin{align}
y_{xx} &= \frac{dy_x}{dx} \\
&= \frac{dy_x}{dt} \frac{dt}{dx}\\ 
&= e^{-2t} (y_{tt}-y_t)\\
\end{align}

(4.3)

\displaystyle \begin{align}
y_{xxx} &= \frac{dy_{xx}}{dx} \\
&= \frac{dy_{xx}}{dt} \frac{dt}{dx}\\ 
&= e^{-3t} (y_{ttt}-3y_{tt}+2y_t)\\
\end{align}

(4.4)

\displaystyle \begin{align}
y_{xxxx} &= \frac{dy_{xxx}}{dx} \\
&= \frac{dy_{xxx}}{dt} \frac{dt}{dx}\\ 
&= e^{-4t} (y_{tttt}-6y_{ttt}+11y_{tt}-6y_t)\\
\end{align}

(4.5)

\displaystyle \begin{align}
y_{xxxxx} &= \frac{dy_{xxxx}}{dx} \\
&= \frac{dy_{xxxx}}{dt} \frac{dt}{dx}\\ 
&= e^{-t} \frac{d \left( e^{-4t}(y_{tttt}-6y_{ttt}+11y_{tt}-6y_t) \right)}{dt}\\
&= e^{-5t} (y_{ttttt}-10y_{tttt}+35y_{ttt}-50y_{tt}+24y_t)
\end{align}

(4.6)
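The coefficients above can be checked symbolically (a minimal sketch using MATLAB's Symbolic Math Toolbox) by applying \displaystyle \frac{d}{dx}=e^{-t}\frac{d}{dt} five times:

syms t y(t)
yx = y(t);
for k = 1:5
    yx = exp(-t)*diff(yx, t);   % d/dx = e^(-t) d/dt, since x = e^t
end
simplify(exp(5*t)*yx)           % gives y^(5) - 10 y^(4) + 35 y''' - 50 y'' + 24 y'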

Author

Contributed by YuChen

Problem R*5.5 - Solve y and plot the solution

From Mtg 31-3

Given

Euler L2-ODE-VC:

\displaystyle
x^2 y''-2xy'+2y=0

(5.1)

and trial solution:

\displaystyle
y=x^r

(5.2)

with boundary conditions:

\displaystyle
\begin{cases} y(1)=-4\\ y(2)=7\\ \end{cases}

(5.3)

Find

The solution \displaystyle y(x) and plot it.

Solution

- Solved on my own

Substitute \displaystyle x^r for \displaystyle y in equation (5.1):

\displaystyle
\underbrace{x^r}_{\displaystyle \ne 0} (r^2-3r+2)=0

(5.4)

So the characteristic equation:

\displaystyle
r^2-3r+2=0 \Rightarrow \begin{cases} r_1=1\\ r_2=2\\ \end{cases}

(5.5)

So the solution is:

\displaystyle
y(x)=c_1 x + c_2 x^2

(5.6)

Use the boundary conditions:

\displaystyle
\begin{cases} y(1)=c_1+c_2=-4\\ y(2)=2c_1+4c_2=7\\ \end{cases} \Rightarrow \begin{cases} c_1=- \frac{23}{2}\\ c_2= \frac{15}{2}\\ \end{cases}

(5.7)

So the solution is:

\displaystyle
y(x)= \frac{15}{2} x^2 - \frac{23}{2} x

(5.8)
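Before plotting, the solution can be checked against the ODE (5.1) and the boundary conditions (5.3) (a minimal sketch using MATLAB's Symbolic Math Toolbox):

syms x
y = (15/2)*x^2 - (23/2)*x;
simplify(x^2*diff(y,x,2) - 2*x*diff(y,x) + 2*y)   % returns 0
[subs(y,x,1), subs(y,x,2)]                        % returns [-4, 7]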

Plot \displaystyle y(x) .

Matlab Code:

clear all
close all
clc
%solution satisfying the boundary conditions
y=@(x) (15/2)*(x.^2)-(23/2)*x;   % element-wise .^ so fplot can evaluate vector inputs
%plotting
hold on
fplot(y,[-7 7],'r');
title('Plotting');
xlabel('x');
ylabel('y(x)');
legend('y=(15/2)*(x^2)-(23/2)*x')

Plot of y(x).jpg

Author

Contributed by YuChen

Problem R5.6: Equivalence of Two methods to solve Euler Ln-ODE-VC

From Mtg 31-5

Given

Given Euler Ln-ODE-VC,

 \displaystyle \sum_{i=0}^n a_i x^i y^{(i)} = 0

(6.1)

where  \displaystyle y^{(i)} is the ith derivative of y with respect to x,

Two methods of solving the above equation:

Method 1:

Stage 1: Transformation of variables

 \displaystyle x = e^t

(6.2)

Stage 2: Trial Solution

 \displaystyle y = e^{rt} ,\ {\color{red} r = \text{constant}}

(6.3)

Method 2: Trial Solution

 \displaystyle y = x^{r} ,\ {\color{red} r = \text{constant}}

(6.4)

Find

Show equivalence of methods 1 and 2 shown above in Eq. (6.2), (6.3) and Eq. (6.4) respectively


Solution

- Solved on our own

Let us take method 2, shown in Eq. (6.4), first. Differentiating Eq. (6.4) w.r.t. x, we get

 \displaystyle y_x = y' = r x^{r-1}

(6.5)

For the ith derivative of y w.r.t. x, i.e. \displaystyle y^{(i)} , we have,

 \displaystyle y^{(i)} = r(r-1)...(r-i+1)x^{(r-i)}

(6.6)

Substituting Eq. (6.6), i.e. the definition of \displaystyle y^{(i)} , in Eq. (6.1), we get,

 \displaystyle \sum_{i=0}^n a_i x^i r(r-1)...(r-i+1) x^{(r-i)} = 0

(6.7a)

 \displaystyle \sum_{i=0}^n a_i x^r r(r-1)...(r-i+1) = 0

(6.7b)

Eliminating \displaystyle x^r , which is a common factor in all the terms of the above summation, we get,

 \displaystyle \sum_{i=0}^n a_i r(r-1)...(r-i+1) = 0

(6.7c)

So, we solve the above equation for r, and that provides the solution of Eq. (6.1).

Now, for method 1, we will try to obtain the same form as in Eq. (6.7c).

Stage 1: Substituting \displaystyle x = e^t in Eq. (6.1), we write \displaystyle y_x as,

 \displaystyle y_x = y_t \frac {dt}{dx} = e^{-t} y_t

(6.8)

Stage 2: Substitute \displaystyle y = e^{rt} ,

Using the above we can write \displaystyle y_t as \displaystyle re^{rt} . Substitute this into Eq. (6.8), and we get,

 \displaystyle y^{(1)} = y_x = re^{rt}e^{-t} = r e^{(r-1)t}

(6.9)

Performing another differentiation of the above equation to get \displaystyle y_{xx} , we get,

 \displaystyle y^{(2)} = y_{xx} = \frac {d}{dx} (y_x) = \frac {d}{dx} (r e^{(r-1)t}) = \frac {d}{dt} (r e^{(r-1)t}) \frac {dt}{dx} = r(r-1) e^{(r-1)t} e^{-t} = r(r-1)e^{(r-2)t}

(6.10a)

Proceeding as above for \displaystyle y^{(2)} , we can write in general for \displaystyle y^{(i)} ,

 \displaystyle y^{(i)} = r(r-1)...(r-i+1) e^{(r-i)t}

(6.10b)

Substituting the definitions of both x and \displaystyle y^{(i)} , as seen in Eq. (6.2) and Eq. (6.10b) respectively, into Eq. (6.1) we get,


 \displaystyle \sum_{i=0}^n a_i e^{it} r(r-1)...(r-i+1) e^{(r-i)t} = 0

(6.11a)

The above can be simplified by combining the two exponential terms,

 \displaystyle \sum_{i=0}^n a_i e^{rt} r(r-1)...(r-i+1) = 0

(6.11b)

Since \displaystyle e^{rt} does not depend on i, we can take this (nonzero) term out of the summation as a common factor, and write the rest of the terms as,

 \displaystyle \sum_{i=0}^n a_i r(r-1)...(r-i+1) = 0

(6.11c)


Eq. (6.11c) above is exactly the same as Eq. (6.7c); therefore both methods 1 and 2 yield the same values of r, and hence the two methods are equivalent.
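As an illustrative spot check (a minimal sketch using MATLAB's Symbolic Math Toolbox), the third derivative \displaystyle y^{(3)} computed by the two methods yields the same factor \displaystyle r(r-1)(r-2) :

syms x t r
% Method 2: y = x^r, third derivative w.r.t. x
c2 = simplify(diff(x^r, x, 3) / x^(r-3));        % r*(r-1)*(r-2)
% Method 1: x = e^t, y = e^(r*t); d/dx = e^(-t)*d/dt applied three times
D  = @(F) exp(-t)*diff(F, t);
c1 = simplify(D(D(D(exp(r*t)))) / exp((r-3)*t)); % r*(r-1)*(r-2)
simplify(c1 - c2)                                % returns 0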

Author

Contributed by Ankush

Problem R*5.7 - Euler L2-ODEs

From Mtg 32-1

Given

Characteristic equation

\displaystyle 
\begin{align}

(r - \lambda)^2=0

\end{align}

(7.1)

\displaystyle 
\begin{align}

a_2x^2y''+a_1xy'+a_0y=0
\end{align}

(7.2)

\displaystyle 
\begin{align}

b_2y''+b_1y'+b_0y=0
\end{align}

(7.3)

Find

1.1) Find a_2, a_1, a_0 such that (7.1) is characteristic equation of (7.2)

1.2) 1st homogeneous solution : y_1(x)=x^\lambda

1.3) Complete solution : Find c(x) such that y(x)=c(x)y_1(x)

1.4) Find the 2nd homogeneous solution y_2(x)


2.1) Find b_2, b_1, b_0 such that (7.1) is characteristic equation of (7.3)

2.2) 1st homogeneous solution

2.3) Complete solution : Find c(x) such that y(x)=c(x)y_1(x)

2.4) Find the 2nd homogeneous solution y_2(x)

Solution

- Solved on our own

Part 1.1

Assume that the trial solution is the following.

\displaystyle 
\begin{align}
y=x^r
\end{align}

(7.4)

Then, compute the first and the second derivatives of the trial solution,

\displaystyle 
\begin{align}

y'=rx^{r-1}
\end{align}

(7.5)

\displaystyle 
\begin{align}

y''=r(r-1)x^{r-2}
\end{align}

(7.6)

Substitute the equation (7.4), (7.5), (7.6) into (7.2) and simplify the equation,

\displaystyle 
\begin{align}
a_2x^2r(r-1)x^{r-2}+a_1xrx^{r-1}+a_0x^r=0 \\
a_2r(r-1)x^r+a_1rx^r+a_0x^r=0 \\
\left[ a_2r(r-1)+a_1r+a_0 \right] x^r=0 


\end{align}

(7.7)


Then, we get

\displaystyle 
\begin{align}

a_2r^2+(a_1-a_2)r+a_0=0

\end{align}

(7.8)

Compare (7.8) to (7.1),

\displaystyle 
\begin{align}
a_2r^2+(a_1-a_2)r+a_0 &= (r-\lambda)^2 \\
                      &= r^2-2r\lambda+\lambda^2 \\
                      
\end{align}

(7.9)

Comparing the coefficients of LHS and RHS,

\displaystyle 
\begin{align}
  a_2&=1 \\
  a_1&=1-2\lambda \\
  a_0&=\lambda^2
\end{align}

(7.10)

Part 1.2

From the characteristic equation in Eq. (7.1), the solution of the characteristic equation is determined as follows.

\displaystyle 
  r = \lambda

(7.11)

Hence, the 1st homogeneous solution can be

\displaystyle 
\begin{align}

y_1=x^\lambda

\end{align}

(7.12)

Part 1.3

\displaystyle 
\begin{align}
  y=c(x)y_1(x)
\end{align}

(7.13)

Compute the first and the second derivatives of Eq. (7.13),

\displaystyle 
\begin{align}
  y^{\prime}(x)&=c^{\prime}(x)y_1(x)+c(x)y_1^{\prime}(x) \\
  y^{''}(x)    &=c^{''}(x)y_1(x)+c^{\prime}(x)y_1^{\prime}(x)+c^{\prime}(x)y_1^{\prime}(x)+c(x)y_1^{''}(x)
\end{align}

(7.14)

Substitute (7.13) and (7.14) into (7.2) and simplify the equation,

\displaystyle 
\begin{align}

a_2x^2\left[c^{''}(x)y_1(x)+c^{\prime}(x)y_1^{\prime}(x)+c^{\prime}(x)y_1^{\prime}(x)+c(x)y_1^{''}(x)\right] + a_1x\left[ c^{\prime}(x)y_1(x)+c(x)y_1^{\prime}(x) \right] +a_0c(x)y_1(x)&=0 \\
c(x) \cancelto{0}{\left[ a_2x^2y_1^{''}(x) + a_1xy_1^{\prime}(x)+a_0y_1(x) \right]} + c^{\prime}(x) \left[ 2a_2x^2y_1^{\prime}(x) + a_1xy_1(x) \right] + c^{''}(x)\left[ a_2x^2y_1(x) \right] &= 0 \\

\end{align}

(7.15)

The first term in Eq. (7.15) cancels out because \displaystyle y_1 is a homogeneous solution of Eq. (7.2). Then,

\displaystyle 
\begin{align}

c^{\prime}(x) \left[ 2a_2x^2y_1^{\prime}(x) + a_1xy_1(x) \right] + c^{''}(x)\left[ a_2x^2y_1(x) \right] = 0 \\

\end{align}

(7.16)

Substitute coefficients and homogeneous solution,

\displaystyle 
\begin{align}
  &c^{\prime}(x) \left[ 2a_2x^2y_1^{\prime}(x) + a_1xy_1(x) \right] + c^{''}(x)\left[ a_2x^2y_1(x) \right]   \\
 &= c^{\prime}(x) \left[ 2x^2 \lambda x^{ \lambda-1 } + (1-2\lambda) x x^\lambda \right] + c^{''}(x) \left[ x^2 x^\lambda \right]   \\
 &= c^{\prime}(x) \left[ 2 \lambda x^{ \lambda+1 } + (1-2\lambda) x^{\lambda+1} \right] + c^{''}(x) \left[  x^{\lambda+2} \right]   \\
 &= c^{\prime}(x)  x^{\lambda+1} + c^{''}(x)  x^{\lambda+2}    \\
 &= \left[ c^{\prime}(x)   + c^{''}(x)  x \right] x^{\lambda+1} =0 \\
\end{align}

\displaystyle 
\begin{align}
 \therefore  \left[ c^{\prime}(x) + c^{''}(x)x \right] = 0
\end{align}

(7.17)


Let \displaystyle c^{\prime}(x)=z(x) ,

\displaystyle 
\begin{align}
z(x)+z'(x)x &=0 \\
z+x\frac{dz}{dx}&=0 \\
z&=-x\frac{dz}{dx} \\
\frac{1}{z}dz &= -\frac{1}{x}dx \\
\int \frac{1}{z}dz &= -\int \frac{1}{x}dx \\
\end{align}

(7.18)

\displaystyle 
  \log (z)= -\left( \log (x) + C_1 \right)

(7.19)

\displaystyle 
\begin{align}
  z&= e^{-(\log (x)+C_1)} \\
   &= e^ {\log (x {e^{C_1}})^{-1}} \\
   &= \frac{1}{e^{C_1}x} \\
   &=C_2\frac{1}{x} \mbox{ } (C_2=e^{-C_1}) 
\end{align}

(7.20)

Then, compute the function \displaystyle c(x) ,

\displaystyle 
\begin{align}
c(x)&=\int z dx \\
    &=\int C_2 \frac{1}{x} dx  \\
    &=C_2(\log(x)+C_3)
\end{align}

(7.21)

As a result,

\displaystyle 
\begin{align}

y(x)&=C_2(\log(x)+C_3)y_1(x) \\
    &=C_2x^\lambda \log(x)+ C_4x^\lambda  \\
    &=C_1^* x^\lambda \log(x) + C_2^*x^\lambda


\end{align}

(7.22)

Part 1.4

From 1.3, the general homogeneous solution is

\displaystyle 
\begin{align}

y(x)&=C_1^* x^\lambda \log(x) + C_2^*x^\lambda

\end{align}

From (7.12), \displaystyle y_1(x) is \displaystyle x^\lambda.


So, \displaystyle y_2(x) is

\displaystyle 
\begin{align}

y_2(x)&= x^\lambda \log(x)

\end{align}
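This second homogeneous solution can be verified against Eq. (7.2) with the coefficients of Eq. (7.10) (a minimal sketch using MATLAB's Symbolic Math Toolbox):

syms x lambda
y2 = x^lambda*log(x);
simplify(x^2*diff(y2,x,2) + (1-2*lambda)*x*diff(y2,x) + lambda^2*y2)   % returns 0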

Part 2.1

The trial solution for the constant-coefficient equation (7.3) is:

\displaystyle 
\begin{align}

y=e^{xr}

\end{align}

(7.23)

Compute the first and the second derivatives

\displaystyle 
\begin{align}

y^{\prime}(x)=re^{xr}
\end{align}

(7.24)

\displaystyle 
\begin{align}

y^{''}(x)=r^2e^{rx}
\end{align}

(7.25)

Substitute (7.23),(7.24),(7.25) into (7.3) and simplify it,

\displaystyle 
  \begin{align}
  b_2r^2e^{rx}+b_1re^{rx}+b_0e^{rx} &=0 \\
  \left[ b_2r^2+b_1r+b_0 \right] e^{rx} &=0 \\

\end{align}

Then,

\displaystyle 
\begin{align}

b_2r^2+b_1r+b_0=0 \\

\end{align}

\displaystyle 
\begin{align}

b_2r^2+b_1r+b_0&=(r-\lambda)^2 \\
               &=r^2-2r\lambda+\lambda^2

\end{align}

Compare LHS and RHS,

\displaystyle 
\begin{align}

b_2=1

\end{align}

(7.26)

\displaystyle 
\begin{align}

b_1=-2\lambda

\end{align}

(7.27)

\displaystyle 
\begin{align}

b_0=\lambda^2

\end{align}

(7.28)

Part 2.2

From (7.1), \displaystyle r=\lambda . As a result,

\displaystyle 
\begin{align}

y_1=e^{x\lambda}

\end{align}

(7.29)

Part 2.3

\displaystyle 
\begin{align}

y=c(x)y_1(x)

\end{align}

(7.30)

Compute the first and the second derivatives

\displaystyle 
\begin{align}

y^{'}(x)=c^{'}(x)y_1(x)+c(x)y_1^{'}(x)

\end{align}

(7.31)

\displaystyle 
\begin{align}

y^{''}(x)=c^{''}(x)y_1(x)+c^{'}(x)y_1^{'}(x)+c^{'}(x)y_1^{'}(x)+c(x)y_1^{''}(x)

\end{align}

(7.32)

Substituting (7.30), (7.31) and (7.32) into (7.3),

\displaystyle 
\begin{align}

b_2\left[c^{''}(x)y_1(x)+c^{'}(x)y_1^{'}(x)+c^{'}(x)y_1^{'}(x)+c(x)y_1^{''}(x)\right] + b_1\left[ c^{'}(x)y_1(x)+c(x)y_1^{'}(x) \right] +b_0c(x)y_1(x) \\
c(x) \cancelto{0} {\left [ b_2y_1^{''}(x) + b_1y_1^{'}(x)+b_0y_1(x) \right]} + c^{'}(x) \left[ 2b_2y_1^{'}(x) + b_1y_1(x) \right] + c^{''}(x)\left[ b_2y_1(x) \right] = 0 \\

\end{align}


Since \displaystyle y_1 is a homogeneous solution of (7.3),

\displaystyle 
\begin{align}

b_2y_1^{''}(x) + b_1y_1^{'}(x)+b_0y_1(x)=0
\end{align}

Then,

\displaystyle 
\begin{align}

c^{'}(x) \left[ 2b_2y_1^{'}(x) + b_1y_1(x) \right] + c^{''}(x)\left[ b_2y_1(x) \right] = 0 \\

\end{align}

Substituting coefficients and homogeneous solution,

\displaystyle 
\begin{align}
&c^{'}(x) \left[ 2b_2y_1^{'}(x) + b_1y_1(x) \right] + c^{''}(x)\left[ b_2y_1(x) \right]  \\
&= c^{'}(x) \cancelto{0} {\left[ 2\lambda e^{ \lambda x } -2\lambda e^{\lambda x} \right]} + c^{''}(x) \left[ e^{\lambda x} \right]  \\
&= c^{''}(x)  e^{\lambda x}  = 0 \\

\end{align}

Then,

\displaystyle 
  \begin{align}
  \therefore c^{''}(x)=0
  \end{align}

(7.33)

Solve (7.33),

\displaystyle 
\begin{align}
c^{'}(x) &= \int c^{''}(x) dx \\ 
      &= \int 0 dx \\
      &= C_1    \\
c^{'}(x)&=C_1 \\
 c(x)&=\int c^{'}(x) dx\\
     &=\int C_1 dx\\
     &=C_1x+C_2

\end{align}

(7.34)

As a result,

\displaystyle 
\begin{align}

y(x)&= c(x)y_1(x) \\
    &= (C_1x+C_2)e^{\lambda x} \\
    &= C_1xe^{\lambda x}+ C_2e^{\lambda x}

\end{align}

(7.35)

Part 2.4

From 2.3, the general homogeneous solution is

\displaystyle 
\begin{align}

y(x)&=(C_1x+C_2)e^{\lambda x} \\
    &= C_1xe^{\lambda x} + C_2e^{\lambda x} \\

\end{align}

From (7.29), \displaystyle y_1(x) is \displaystyle e^{\lambda x}.


So, \displaystyle y_2(x) is

\displaystyle 
\begin{align}

y_2(x)&= xe^{\lambda x}

\end{align}
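Similarly, this second homogeneous solution can be verified against Eq. (7.3) with the coefficients of Eqs. (7.26)-(7.28) (a minimal sketch using MATLAB's Symbolic Math Toolbox):

syms x lambda
y2 = x*exp(lambda*x);
simplify(diff(y2,x,2) - 2*lambda*diff(y2,x) + lambda^2*y2)   % returns 0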

Author

Contributed by Chung

Problem R*5.8 - Variation of parameters

From Mtg 32-2

Given

A general non-homogeneous L1-ODE-VC (linear, 1st-order ordinary differential equation with variable coefficients),

 \displaystyle
  P(x)\,y' + Q(x)\,y = \underbrace{R(x)}_{\color{blue}{\neq 0}}

(8.1)

which can be written in the form given below,

 \displaystyle
  \underbrace{\color{blue}{1}}_{\color{blue}{a_1(x)}} \cdot \underbrace{y'}_{\color{blue}{y^{(1)}}} + \underbrace{\frac{Q(x)}{P(x)}}_{\color{blue}{a_0(x)}}y^{\color{blue}{(0)}} = \underbrace{\frac{R(x)}{P(x)}}_{\color{blue}{b(x)}}

(8.2)

The form can be written as,

 \displaystyle
  y' + a_0(x)y^{(0)} = b(x)

(8.3)

Find

Find the particular solution, \displaystyle y_P(x) by the use of the method of variation of parameters after knowing the homogeneous solution \displaystyle y_H(x), i.e., let \displaystyle y(x) = A(x)y_H(x), with \displaystyle A(x) being the unknown to be found.

Solution

- Solved on our own

For the particular solution \displaystyle y_P (x) , the method of variation of parameters requires \displaystyle y_P(x) to have the following form,

 \displaystyle
  y_p(x) = \sum_{i=1}^{n} c_i(x) y_i(x)

(8.4)

for an \displaystyle n^{th} order linear non-homogeneous ODE, where the \displaystyle y_i(x) are the homogeneous solutions. For our case, n = 1, therefore we get,

 \displaystyle
  y_p(x) = A(x) y_H(x)

(8.5)

It is important to note that, in the method of variation of parameters, the coefficient \displaystyle c_1 above is a function of \displaystyle x , and is determined by substituting the particular solution into the ODE. Writing \displaystyle y(x) = y_H(x) + y_P(x) , Eq. (8.3) becomes

 \displaystyle
  y_H^{'}(x) + y_P^{'}(x) + a_0(x)[y_H(x) + y_P(x)] = b(x)

(8.6)

Substitute Eq. 8.5 into Eq. 8.6 to get,

 \displaystyle
  y_H^{'}(x) + [A^{'}(x)y_H(x) + A(x)y_H^{'}(x)] + a_0(x)[y_H(x) + A(x)y_H(x)] = b(x)

(8.7)

and, rearrange the terms,

 \displaystyle
  \underbrace{[y_H^{'}(x) + a_0(x)y_H(x)]}_{=0} + A^{'}(x)y_H(x) + A(x)\underbrace{[y_H^{'}(x) + a_0(x)y_H(x)]}_{=0} = b(x)

(8.8)

Therefore \displaystyle A(x) is given by,

 \displaystyle
  A^{'}(x) = \frac{b(x)}{y_H(x)}

(8.9)

Integrate both sides w.r.t. x,

 \displaystyle
  A(x) = \int^x \frac{b(s)}{y_H(s)} ds

(8.10)

Substituting the expression for \displaystyle y_H(x) from homework problem 2.17 into the above, we get,

 \displaystyle
  A(x) = \int^x \underbrace{\exp [ \int^s a_0(t)dt ]}_{=h(s)} b(s) ds

(8.11)

Therefore the particular solution can finally be written as,

 \displaystyle
  y_p(x) = \frac{1}{h(x)} \int^x h(s) b(s)\, ds

(8.12)
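As a spot check of Eqs. (8.9)-(8.12) (a minimal sketch using MATLAB's Symbolic Math Toolbox, with hypothetical sample data \displaystyle a_0(x)=x , \displaystyle b(x)=x , the lower integration limit taken as 0, and \displaystyle y_H(x)=1/h(x) as in the text):

syms x s
a0 = x;  b = x;                               % sample coefficients (hypothetical)
h  = exp(int(a0, x));                         % h(x) = exp( int a0 dx ), so y_H = 1/h
yp = (1/h) * int(subs(h*b, x, s), s, 0, x);   % particular solution per Eq. (8.12)
simplify(diff(yp, x) + a0*yp - b)             % returns 0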

Author

Contributed by Shin

Problem R*5.9 Special IFM Solution

From Mtg 32-7

Given

\displaystyle 
\begin{align}

a_2y''+a_1y'+a_0y=f(t)
\end{align}

(9.1)

Find

Special IFM to solve for general \displaystyle f(t)

Solution

- Solved on our own

First, find integrating factor \displaystyle h(t,y)


  \displaystyle
h(t,y)[{a_2}y''+{a_1}y'+{a_0}y]=h(t,y)f(t)

(Eq 9.2)

Suppose \displaystyle h(t,y) is a function of \displaystyle t only, so that \displaystyle h(t,y)=h(t) .

Rearranging Eq 9.2, we have


  \displaystyle
h(t)[{a_2}y''+{a_1}y'+{a_0}y-f(t)]=0

(Eq 9.3)


If Eq 9.3 is exact, it can be written as a total derivative,

  \displaystyle
\frac{d\phi }{dt}=0

(Eq 9.4)

Thus

\displaystyle F(t,y,p,p')=h(t){a_2}p'+h(t){a_1}p+h(t){a_0}y-h(t)f(t)=f(t,y,p)p'+g(t,y,p)

(Eq 9.5)

where \displaystyle p:=y' , \displaystyle f(t,y,p):=h(t)a_2 (not to be confused with the forcing term f(t)), and \displaystyle g(t,y,p):=h(t)a_1p+h(t)a_0y-h(t)f(t) .

According to the 2nd Exactness Condition

\displaystyle {f}_{tt}+2p{f}_{ty}+{p}^{2}{f}_{yy}={g}_{tp}+p{g}_{yp}-{g}_{y}

(Eq 9.6)

\displaystyle {f}_{tp}+p{f}_{yp}+2{f}_{y}={g}_{pp}

(Eq 9.7)

We have

\displaystyle f_{tt}={h_{tt}}{a_2}

(Eq 9.8)

\displaystyle f_{ty}=0

(Eq 9.9)

\displaystyle f_{yy}=0

(Eq 9.10)

\displaystyle g_{tp}={h_{t}}{a_1}

(Eq 9.11)

\displaystyle g_{yp}=0

(Eq 9.12)

\displaystyle g_{y}={h}{a_0}

(Eq 9.13)

Substituting Eq 9.8~9.13 in Eq 9.6, we get

\displaystyle h_{tt}a_2={h_{t}}{a_1}-h{a_0}

(Eq 9.14)

Now we assume \displaystyle h(t)=e^{\alpha t}, where \displaystyle {\alpha} is a constant

So Eq 9.14 can be expressed as

\displaystyle {\alpha}^2{a_2}-{\alpha}{a_1}+{a_0}=0

(Eq 9.15)

Then, using the integrating factor \displaystyle h(t)=e^{\alpha t} , we can integrate Eq 9.2 once (reducing the order) and express it as


  \displaystyle
{{e}^{\alpha t}}\left( {{{\bar{a}}}_{1}}{y}'+{{{\bar{a}}}_{0}}y \right)=\int{{{e}^{\alpha t}}f(t)dt}

(Eq 9.16)

Differentiating Eq 9.16 with respect to t, we get


  \displaystyle
{{e}^{\alpha t}}\left[ {{{\bar{a}}}_{1}}{y}''+(\alpha {{{\bar{a}}}_{1}}+{{{\bar{a}}}_{0}}){y}'+\alpha {{{\bar{a}}}_{0}}y \right]={{e}^{\alpha t}}f(t)

(Eq 9.17)

Comparing the coefficients of Eq 9.17 with those of Eq 9.2, we have


  \displaystyle
\begin{align}
  & {{{\bar{a}}}_{1}}={{a}_{2}} \\ 
 & {{{\bar{a}}}_{0}}={{a}_{1}}-\alpha {{a}_{2}} \\ 
 & {{{\bar{a}}}_{0}}={{a}_{0}}/\alpha  \\ 
\end{align}

(Eq 9.18)

The two expressions for \displaystyle \bar{a}_0 in Eq 9.18 are consistent because \displaystyle \alpha satisfies Eq 9.15. Rearranging Eq 9.16, we have


  \displaystyle
\left( {y}'+\frac{{{\bar{a}}}_{0}}{{{\bar{a}}}_{1}}y \right)=\frac{{e}^{-\alpha t}}{{{\bar{a}}}_{1}}\int{{{e}^{\alpha t}}f(t)dt}

(Eq 9.19)

Using a second integrating factor \displaystyle h(t)=e^{{\beta}t}, where \displaystyle {\beta}=\frac{\bar{a_0}}{\bar{a_1}} ,

Thus, we can get


  \displaystyle
\begin{align}
 y(t) & =\frac{{{e}^{-\beta t}}}{{{{\bar{a}}}_{1}}}\int_{{}}^{t}{{{e}^{(\beta -\alpha )\tau }}(\int_{{}}^{\tau }{{{e}^{\alpha s}}f(s)ds}})d\tau  \\ 
 & =\frac{{{e}^{-\beta t}}}{{{{\bar{a}}}_{1}}}\int_{{}}^{t}{{{e}^{(\beta -\alpha )\tau }}\left( \int{{{e}^{\alpha \tau }}f(\tau )d\tau }+{{k}_{1}} \right)d\tau } \\ 
 & =\frac{{{e}^{-\beta t}}}{{{{\bar{a}}}_{1}}}\left[ \int_{{}}^{t}{{{e}^{(\beta -\alpha )\tau }}\left( \int{{{e}^{\alpha \tau }}f(\tau )d\tau } \right)d\tau }+\int_{{}}^{t}{{{e}^{(\beta -\alpha )\tau }}{{k}_{1}}d\tau } \right] \\ 
 & =\frac{{{e}^{-\beta t}}}{{{{\bar{a}}}_{1}}}\left[ \int{{{e}^{(\beta -\alpha )t}}\left( \int{{{e}^{\alpha t}}f(t)dt} \right)dt}+{{k}_{2}}+\frac{{{k}_{1}}}{\beta -\alpha }{{e}^{(\beta -\alpha )t}} \right] \\ 
 & =\frac{{{k}_{1}}}{(\beta -\alpha ){{{\bar{a}}}_{1}}}{{e}^{-\alpha t}}+\frac{{{k}_{2}}}{{{{\bar{a}}}_{1}}}{{e}^{-\beta t}}+\frac{{{e}^{-\beta t}}}{{{{\bar{a}}}_{1}}}\int{{{e}^{(\beta -\alpha )t}}\left( \int{{{e}^{\alpha t}}f(t)dt} \right)dt}  
\end{align}

(Eq 9.20)

Therefore, we have


  \displaystyle
y(t)={{C}_{1}}{{e}^{-\alpha t}}+{{C}_{2}}{{e}^{-\beta t}}+\frac{{{e}^{-\beta t}}}{{{a}_{2}}}\int{{{e}^{(\beta -\alpha )t}}\left( \int{{{e}^{\alpha t}}f(t)dt} \right)dt}

(Eq 9.21)

where \displaystyle {C_1} and \displaystyle {C_2} are constants.
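As a spot check of Eq 9.21 (a minimal sketch using MATLAB's Symbolic Math Toolbox, with a hypothetical sample equation \displaystyle y''+3y'+2y=t , for which \displaystyle \alpha=1 satisfies Eq 9.15 and \displaystyle \beta=2 follows from Eq 9.18):

syms t
a2 = 1;  a1 = 3;  a0 = 2;  f = t;      % sample constant-coefficient ODE (hypothetical)
alpha = 1;                             % a root of a2*alpha^2 - a1*alpha + a0 = 0 (Eq 9.15)
beta  = (a1 - alpha*a2)/a2;            % beta = abar0/abar1 (Eq 9.18)
yp = exp(-beta*t)/a2 * int(exp((beta-alpha)*t) * int(exp(alpha*t)*f, t), t);
simplify(a2*diff(yp,t,2) + a1*diff(yp,t) + a0*yp - f)   % returns 0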

Author

Contributed by Kexin Ren

Contributing Members

Team Contribution Table
Problem Number Lecture(Mtg) Assigned To Solved By Typed By Signature(Author)
R*5.1 Mtg 22-6 Chung Chung Chung Chung 17 Oct. 2011 at 1:30 (UTC)
R*5.2 Mtg 27-1 Shin Shin Shin Shin 31 Oct. 2011 at 14:38 (UTC)
R*5.3 Mtg 30-3 Ankush Ankush Ankush Ankush 1 Nov. 2011 at 18:00 (UTC)
R*5.4 Mtg 31-1 YuChen YuChen YuChen YuChen 30 Oct. 2011 at 08:47 (UTC)
R*5.5 Mtg 31-3 YuChen YuChen YuChen YuChen 30 Oct. 2011 at 10:01 (UTC)
R5.6 Mtg 31-5 Ankush Ankush Ankush Ankush 1 Nov. 2011 at 20:00 (UTC)
R*5.7 Mtg 32-1 Chung Chung Chung Chung 29 Oct. 2011 at 0:19 (UTC)
R*5.8 Mtg 32-2 Shin Shin Shin Shin 2 Nov. 2011 at 15:43 (UTC)
R*5.9 Mtg 32-7 Ren Ren Ren Ren 1 Nov. 2011 at 23:50 (UTC)