User:Egm4313.s12.team17/Report 5

From Wikiversity

Problem R5.1 - Radius of Convergence

Part 1 Given

Find R_c for the following series


   \displaystyle
  r(x) = \sum_{k=0}^\infty (k+1)k x^k

(Eq.1.1.1)

Part 1 Problem Statement

Find  R_c\, for the following series.

Part 1 Solution

Using the following formula:


   \displaystyle
  R_{c}=\left[\lim_{k \to \infty}\left|\frac{d_{k+1}}{d_{k}}\right|\right ]^{-1}

(Eq.1.1.2)

Where for a power series:


   \displaystyle
  r(x)=\sum_{k=0}^\infty d_k x^k

(Eq.1.1.3)


   \displaystyle
  r(x) = \sum_{k=0}^\infty (k+1)k x^k

(Eq.1.1.4)

Where:


   \displaystyle
  d_k=(k+1)k

(Eq.1.1.5)


   \displaystyle
  d_{k+1}=(k+2)(k+1)

(Eq.1.1.6)

Plugging equations 1.1.5 and 1.1.6 into equation 1.1.2:


   \displaystyle
  R_{c}=\left[\lim_{k \to \infty}\left|\frac{(k+2)(k+1)}{(k+1)k}\right|\right ]^{-1}=\left[\lim_{k \to \infty}\left|\frac{(k+2)}{k}\right|\right ]^{-1}=1

(Eq.1.1.7)
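As a quick numerical cross-check (sketched in Python for brevity; the report's own code is Matlab), the coefficient ratio in Eq.1.1.2 can be evaluated for increasing k and seen to approach 1:

```python
# Numerical cross-check of Eq.1.1.7: for d_k = (k+1)k the coefficient ratio
# |d_{k+1}/d_k| tends to 1, so R_c = 1/1 = 1.
def d(k):
    return (k + 1) * k

ratios = [d(k + 1) / d(k) for k in (10, 100, 1000, 10000)]
R_c = 1 / ratios[-1]
print(ratios, R_c)
```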

Part 2 Given


   \displaystyle
  r(x) = \sum_{k=0}^\infty \frac{(-1)^k}{\gamma^k}x^{2k}

(Eq.1.2.1)


   \displaystyle
  \gamma = constant

Part 2 Problem Statement

Find  R_c\, for the following series.

Part 2 Solution

Substituting  t=x^2\, turns Eq.1.2.1 into an ordinary power series in  t\,:


   \displaystyle
  r = \sum_{k=0}^\infty \frac{(-1)^k}{\gamma^k}t^k, \qquad t=x^2

(Eq.1.2.2)

Where:


   \displaystyle
  d_k= \frac{(-1)^k}{\gamma^k}

(Eq.1.2.3)


   \displaystyle
  d_{k+1}=\frac{(-1)^{k+1}}{\gamma^{k+1}}

(Eq.1.2.4)

Using the same formula as in Part 1, plug Eq.1.2.3 and Eq.1.2.4 into Eq.1.1.2 to get the radius of convergence in  t\,:


   \displaystyle
  R_{t}=\left[\lim_{k \to \infty}\left|\frac{(-1)^{k+1}}{\gamma^{k+1}}\cdot\frac{\gamma^{k}}{(-1)^{k}}\right|\right ]^{-1}=\left[\frac{1}{|\gamma|}\right ]^{-1}=|\gamma|

(Eq.1.2.5)

Since  t=x^2\,, the series converges for  |x|^2<|\gamma|\,, so:


   \displaystyle
  R_{c}=\sqrt{|\gamma|}

(Eq.1.2.6)
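Since the series is geometric in x^2, the result can be cross-checked numerically. The sample value gamma = 4 and the sample points below are assumptions chosen only for illustration; inside |x| < sqrt(4) = 2 the partial sums converge to 1/(1 + x^2/gamma), outside they blow up:

```python
# Cross-check of R_c = sqrt(|gamma|) with the illustrative value gamma = 4.
gamma = 4.0

def partial_sum(x, terms):
    return sum((-1) ** k / gamma ** k * x ** (2 * k) for k in range(terms))

inside = partial_sum(1.5, 200)          # |x| = 1.5 < 2: converges
exact = 1 / (1 + 1.5 ** 2 / gamma)      # geometric-series limit
outside = abs(partial_sum(2.5, 200))    # |x| = 2.5 > 2: partial sums diverge
print(inside, exact, outside)
```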

Part 3 Given


   \displaystyle
  sin x

(Eq.1.3.1)


   \displaystyle
  \hat x = 0

Part 3 Problem Statement

Find  R_c\, for the given Taylor series.

Part 3 Solution

Performing a Maclaurin series expansion:


   \displaystyle
  f(x)=sin(x)=\sum_{k=0}^{\infty }\frac{[(-1)^k]x^{1+2k}}{(1+2k)!}

(Eq.1.3.2)


Since only odd powers of  x\, appear, apply the ratio test directly to successive terms of Eq.1.3.2 rather than to the power series coefficients. The magnitude of the ratio of consecutive terms is:


   \displaystyle
  \left|\frac{a_{k+1}}{a_k}\right|= \left|\frac{(-1)^{k+1}x^{3+2k}}{(3+2k)!}\cdot\frac{(1+2k)!}{(-1)^{k}x^{1+2k}}\right|=\frac{x^2}{(2k+2)(2k+3)}

(Eq.1.3.3)

Taking the limit:


   \displaystyle
  \lim_{k \to \infty}\frac{x^2}{(2k+2)(2k+3)}=0

(Eq.1.3.4)

The limit is less than 1 for every  x\,, so the series converges everywhere:


   \displaystyle
  R_{c}=\infty

(Eq.1.3.5)
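An infinite radius means the Maclaurin series reproduces sin(x) arbitrarily far from the expansion point. A short Python sketch (the sample point x = 10 and the number of terms are illustrative choices):

```python
import math

# Cross-check that the Maclaurin series of sin(x) converges even far from
# x_hat = 0, consistent with an infinite radius of convergence.
def sin_series(x, terms):
    return sum((-1) ** k * x ** (1 + 2 * k) / math.factorial(1 + 2 * k)
               for k in range(terms))

approx = sin_series(10.0, 40)
print(approx, math.sin(10.0))
```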

Part 4 Given


   \displaystyle
  \log(1+x)

(Eq.1.4.1)


   \displaystyle
  \hat x = 0

Part 4 Problem Statement

Find  R_c\, for the given Taylor series.

Part 4 Solution

Performing a Maclaurin series expansion:


   \displaystyle
  f(x)=log(1+x)=\sum_{k=1}^{\infty }\frac{(-1)^{k-1}x^{k}}{k}

(Eq.1.4.2)

Where:


   \displaystyle
  d_{k+1} = \frac{(-1)^{k+1-1}}{k+1}=\frac{(-1)^k}{k+1}

(Eq.1.4.3)


   \displaystyle
  d_{k}=\frac{(-1)^{k-1}}{k}

(Eq.1.4.4)

Plugging Eq.1.4.3 and Eq.1.4.4 into Eq.1.1.2:


   \displaystyle
  R_{c}=\left[\lim_{k \to \infty}\left|\frac{(-1)^{k}}{k+1}\cdot\frac{k}{(-1)^{k-1}}\right|\right ]^{-1}=\left[\lim_{k \to \infty}\frac{k}{k+1}\right ]^{-1}=1

(Eq.1.4.5)
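Inside the computed radius the series should reproduce log(1+x); a quick Python sketch at the illustrative sample point x = 0.5:

```python
import math

# Cross-check of R_c = 1: for |x| < 1 the series sums to log(1+x)
# (natural log, as used throughout this report).
def log_series(x, terms):
    return sum((-1) ** (k - 1) * x ** k / k for k in range(1, terms + 1))

approx = log_series(0.5, 60)
print(approx, math.log(1.5))
```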

Part 5 Given


   \displaystyle
  \log(1+x)

(Eq.1.5.1)


   \displaystyle
  \hat x = 1

Part 5 Problem Statement

Find  R_c\, for the given Taylor series.

Part 5 Solution

Performing a Taylor series expansion about  \hat x = 1\, results in the following series:


   \displaystyle
  f(x)=\log(2)+\frac{1}{2}(x-1)-\frac{1}{8}(x-1)^{2}+\frac{1}{24}(x-1)^3-\frac{1}{64}(x-1)^4...

(Eq.1.5.2)

This simplifies to:


   \displaystyle
  f(x)=\log(2)+\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{2^kk}(x-1)^k

(Eq.1.5.3)

Therefore  d_k\, and  d_{k+1}\, are:


   \displaystyle
  d_{k+1} = \frac{(-1)^{k+1-1}}{2^{k+1}(k+1)}=\frac{(-1)^k}{2^{k+1}(k+1)}

(Eq.1.5.4)


   \displaystyle
  d_{k}=\frac{(-1)^{k-1}}{2^k k}

(Eq.1.5.5)

Plugging Eq.1.5.4 and Eq.1.5.5 into Eq.1.1.2:


   \displaystyle
  R_{c}=\left[\lim_{k \to \infty}\left|\frac{(-1)^{k}}{2^{k+1}(k+1)}\cdot\frac{2^k k}{(-1)^{k-1}}\right|\right ]^{-1}=\left[\lim_{k \to \infty}\frac{k}{2(k+1)}\right ]^{-1}=2

(Eq.1.5.6)
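The radius 2 about the center x_hat = 1 means the series converges for |x - 1| < 2. A Python sketch at the illustrative sample point x = 2.5 (inside the interval of convergence):

```python
import math

# Cross-check of R_c = 2 for the expansion of log(1+x) about x_hat = 1.
def shifted_series(x, terms):
    return math.log(2) + sum((-1) ** (k - 1) / (2 ** k * k) * (x - 1) ** k
                             for k in range(1, terms + 1))

approx = shifted_series(2.5, 200)   # |x - 1| = 1.5 < 2
print(approx, math.log(3.5))
```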

Author & Proofreaders

Author:Egm4313.s12.team17.wheeler.tw 05:06, 23 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.ying 02:32, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 02:32, 30 March 2012 (UTC)


Problem R5.2 - Linear Independent Testing

Part 1 Given


   \displaystyle
  f(x) = x^2

(Eq.2.1.1)


   \displaystyle
  g(x) = x^4

(Eq.2.1.2)


   \displaystyle
  f(x) = cos x

(Eq.2.1.3)


   \displaystyle
  g(x) = sin 3x

(Eq.2.1.4)

Part 1 Problem Statement

Determine whether each of the following pairs of functions is linearly independent using the Wronskian.

Part 1 Solution

The Wronskian of the two functions is written as


   \displaystyle
  W(f,g) = f(x)g'(x) - g(x)f'(x)

(Eq.2.1.5)

which is the determinant of the 2x2 matrix with rows  (f,g)\, and  (f',g')\,. If  W(f,g)\, is identically zero, the two functions are linearly dependent; if  W(f,g) \ne 0\, at some point, they are linearly independent.

Using the first set of functions, Eq.2.1.1 and Eq.2.1.2, the corresponding derivatives are


   \displaystyle
  f'(x) = 2x

(Eq.2.1.6)


   \displaystyle
    g'(x) = 4x^3

(Eq.2.1.7)

and placing Eq.2.1.1, Eq.2.1.2, Eq.2.1.6 and Eq.2.1.7 into Eq.2.1.5 we get


   \displaystyle
 (x^2)(4x^3) - (x^4)(2x) = 4x^5 - 2x^5 = 2x^5 \ne 0

(Eq.2.1.8)

Therefore Eq.2.1.1 and Eq.2.1.2 are linearly independent.

Using the second set of functions, Eq.2.1.3 and Eq.2.1.4, the corresponding derivatives are


   \displaystyle
 f'(x) = -sin(x)

(Eq.2.1.9)

and


   \displaystyle
 g'(x) = 3cos(3x)

(Eq.2.1.10)

Plugging Eq.2.1.3, Eq.2.1.4, Eq.2.1.9, and Eq.2.1.10 into Eq.2.1.5 gives:


   \displaystyle
 W(f,g)=[\cos(x)][3\cos(3x)] - [\sin(3x)][-\sin(x)]=3\cos(x)\cos(3x)+\sin(x)\sin(3x)

(Eq.2.1.11)

Evaluating at  x=0\,:


   \displaystyle
 W(f,g)\big|_{x=0}=3(1)(1)+(0)(0)=3 \ne 0

(Eq.2.1.12)

Since the Wronskian is nonzero at  x=0\,, the two functions are linearly independent.
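Both Wronskians can be evaluated directly from Eq.2.1.5 at sample points (a single nonzero value is enough for linear independence). The sample points below are illustrative choices:

```python
import math

# Direct evaluation of W = f*g' - g*f' for the two pairs of functions.
def W_poly(x):           # f = x^2, g = x^4, so W = 2x^5
    return x ** 2 * 4 * x ** 3 - x ** 4 * 2 * x

def W_trig(x):           # f = cos(x), g = sin(3x)
    return math.cos(x) * 3 * math.cos(3 * x) - math.sin(3 * x) * (-math.sin(x))

print(W_poly(1.0), W_trig(0.0))
```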

Part 2 Given

Same as part 1 given

Part 2 Problem Statement

Verify Eq.2.1.1, Eq.2.1.2, Eq.2.1.3, and Eq.2.1.4 are linearly independent using the Gramian over the interval [a,b] = [-1,1].

Part 2 Solution

The Gramian is given by


   \displaystyle
 \Gamma (f,g) = \begin{bmatrix} \left \langle f,f \right \rangle & \left \langle f,g \right \rangle \\
                   \left \langle g,f \right \rangle & \left \langle g,g \right \rangle 
                \end{bmatrix}

(Eq.2.2.1)

Where


   \displaystyle

 \left \langle f,g \right \rangle = \int\limits_{a}^{b} f(x)g(x) dx

(Eq.2.2.2)

If the determinant of the Gramian matrix, Eq.2.2.1, is nonzero,


   \displaystyle

\det \Gamma (f,g)  \ne 0

(Eq.2.2.3)

then the two functions are linearly independent.

Now to calculate the matrix values:


   \displaystyle
  \left \langle f,f \right \rangle  = \int\limits_{-1}^{1} x^4 dx

(Eq.2.2.4)

Which then equals


   \displaystyle
\frac{1}{5}[(1)^5 -  (-1)^5] = \frac{2}{5}

(Eq.2.2.5)

The other terms are as follows:


   \displaystyle
 \left \langle f,g \right \rangle = \int\limits_{-1}^{1} x^6 dx = \frac{2}{7}

(Eq.2.2.6)


   \displaystyle
 \left \langle g,f \right \rangle = \int\limits_{-1}^{1} x^6 dx = \frac{2}{7}

(Eq.2.2.7)


   \displaystyle
 \left \langle g,g \right \rangle = \int\limits_{-1}^{1} x^8 dx = \frac{2}{9}

(Eq.2.2.8)

Entering Eq.2.2.5, Eq.2.2.6, Eq.2.2.7, Eq.2.2.8 into the Gramian Matrix, Eq.2.2.1 results in


   \displaystyle
 \Gamma (f,g) = \begin{bmatrix} \left (\frac{2}{5} \right) & 	\left (\frac{2}{7} \right) \\
                  \left (\frac{2}{7} \right) & \left (\frac{2}{9} \right) 
                \end{bmatrix}

(Eq.2.2.9)

Eq.2.2.9 then results in:


   \displaystyle
 \frac{2}{5}*\frac{2}{9} - \frac{2}{7}*\frac{2}{7} = \frac{4}{45} - \frac{4}{49} \ne 0

(Eq.2.2.10)

Thus the two functions are linearly independent.
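The scalar products above can be cross-checked by numerical integration. The sketch below uses a plain trapezoidal rule (step count chosen only for accuracy of the check):

```python
# Numerical cross-check of Eq.2.2.10 for f = x^2, g = x^4 on [-1, 1].
def inner(f, g, a=-1.0, b=1.0, n=20000):
    # trapezoidal rule for the scalar product <f,g> = integral of f*g
    h = (b - a) / n
    total = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, n):
        x = a + i * h
        total += f(x) * g(x)
    return total * h

f = lambda x: x ** 2
g = lambda x: x ** 4
det = inner(f, f) * inner(g, g) - inner(f, g) * inner(g, f)
print(det, 4 / 45 - 4 / 49)
```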

For the second pair of functions, Eq.2.1.3 and Eq.2.1.4, use Eq.2.2.2 to calculate the following scalar product values (all integrals in radians).


   \displaystyle
  \left \langle f,f \right \rangle  = \int\limits_{-1}^{1} \cos(x)\cos(x)dx = 1+\frac{\sin(2)}{2} = 1.4546

(Eq.2.2.11)


   \displaystyle
 \left \langle f,g \right \rangle = \int\limits_{-1}^{1} \cos(x)\sin(3x) dx = 0

(Eq.2.2.12)


   \displaystyle
 \left \langle g,f \right \rangle = \int\limits_{-1}^{1} \sin(3x)\cos(x)dx = 0

(Eq.2.2.13)

(The integrands in Eq.2.2.12 and Eq.2.2.13 are odd functions, so their integrals over the symmetric interval vanish.)


   \displaystyle
 \left \langle g,g \right \rangle = \int\limits_{-1}^{1} \sin^2(3x) dx = 1-\frac{\sin(6)}{6} = 1.0466

(Eq.2.2.14)


Entering Eq.2.2.11, Eq.2.2.12, Eq.2.2.13, and Eq.2.2.14 into Eq.2.2.1 results in


   \displaystyle
 \Gamma (f,g) = \begin{bmatrix} 1.4546 & 0 \\ 0 & 1.0466 \end{bmatrix}

(Eq.2.2.15)

Which finally becomes


   \displaystyle
 (1.4546)(1.0466) - (0)(0) = 1.522 \ne 0

(Eq.2.2.16)

Thus the two functions are linearly independent.

Author & Proofreaders

Author:Egm4313.s12.team17.hintz 19:09, 23 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 06:32, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 03:58, 30 March 2012 (UTC)


Problem R5.3 - Linear Independent Test using Gramian

Given


   \displaystyle
   \mathbf b_1 = 2\mathbf e_1 + 7\mathbf e_2

(Eq.3.1)


   \displaystyle
\mathbf b_2 = 1.5\mathbf e_1 + 3\mathbf e_2

(Eq.3.2)

Problem Statement

Verify that  b_1\, and  b_2\, are linearly independent using the Gramian.

Solution

The Gramian is given as:


   \displaystyle
 \Gamma (b_1,b_2) = \begin{bmatrix} \left \langle b_1,b_1 \right \rangle & \left \langle b_1,b_2 \right \rangle \\
                   \left \langle b_2,b_1 \right \rangle & \left \langle b_2,b_2 \right \rangle 
                \end{bmatrix}

(Eq.3.3)

According to (3) on p.8-9, the scalar product of two vectors is their dot product:


   \displaystyle
  \left \langle b_i,b_j \right \rangle = (b_i \cdot b_j)

(Eq.3.4)

Calculating the dot products using Eq.3.1 and Eq.3.2:


   \displaystyle
  \left \langle b_1,b_1 \right \rangle = 4 + 49 = 53

(Eq.3.5)


   \displaystyle
  \left \langle b_1,b_2 \right \rangle = 3 + 21 = 24

(Eq.3.6)


   \displaystyle
  \left \langle b_2,b_1 \right \rangle = 3 + 21 = 24

(Eq.3.7)


   \displaystyle
  \left \langle b_2,b_2 \right \rangle = 2.25 + 9 = 11.25

(Eq.3.8)

Plugging Eq.3.5, Eq.3.6, Eq.3.7, and Eq.3.8 into Eq.3.3 and finding the determinant gives:


   \displaystyle

\det \Gamma = (53)(11.25) - (24)(24) = 596.25 - 576 = 20.25 \ne 0

(Eq.3.9)

Thus the two vectors are linearly independent.
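The Gramian determinant above can be rebuilt from the dot products in a few lines of Python:

```python
# Cross-check of Eq.3.9: Gramian determinant of b1 = (2,7), b2 = (1.5,3).
b1 = (2.0, 7.0)
b2 = (1.5, 3.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

det = dot(b1, b1) * dot(b2, b2) - dot(b1, b2) * dot(b2, b1)
print(det)
```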

Author & Proofreaders

Author:Egm4313.s12.team17.hintz 19:48, 21 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.Li 17:21, 25 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 04:46, 30 March 2012 (UTC)


Problem R5.4 - Particular Solution Verification

Given


   \displaystyle
  y_{p}(x)= \sum_{i=0}^n y_{p,i}(x)

(Eq.4.1)


   \displaystyle
  y^{\prime\prime}+p(x)y'+q(x)y=r(x)

(Eq.4.2)


   \displaystyle
  r(x)=\sum_{i=0}^n r_{i}(x)

(Eq.4.3)

Problem Statement

Show that Eq.4.1 is indeed the overall particular solution of the L2-ODE-CC, Eq.4.2.

Discuss the choice of  y_p(x)\, for  r(x) = k\cos(\omega x)\, and  r(x) = \sin(\omega x)\,.

Solution

Let  y_{p,i}(x)\, be a particular solution corresponding to the simple excitation  r_i(x)\,, so that each  y_{p,i}\, satisfies:


   \displaystyle
  y_{p,i}''+p(x)y'_{p,i}+q(x)y_{p,i}=r_i(x)

(Eq.4.4)

Substitute the sum Eq.4.1 into the left side of Eq.4.2:


   \displaystyle
  \left(\sum_{i=0}^n y_{p,i}\right)''+p(x)\left(\sum_{i=0}^n y_{p,i}\right)'+q(x)\left(\sum_{i=0}^n y_{p,i}\right)

(Eq.4.5)

Because differentiation and multiplication by  p(x)\, and  q(x)\, are linear operations, the operator can be applied term by term:


   \displaystyle
  \sum_{i=0}^n \left[y_{p,i}''+p(x)y'_{p,i}+q(x)y_{p,i}\right]=\sum_{i=0}^n r_{i}(x)=r(x)

(Eq.4.6)

So  y_{p}(x)= \sum_{i=0}^n y_{p,i}(x)\, satisfies Eq.4.2 with the excitation Eq.4.3. Therefore, Eq.4.1 is indeed the overall particular solution to Eq.4.2.


Explanation

When the excitation to the equation  y^{\prime\prime}+p(x)y'+q(x)y=r(x)\, is:


   \displaystyle
  r(x)=k\cos(\omega x)

we might first try a particular solution with a single term:


   \displaystyle
  y_{p}(x)= A\cos(\omega x)

However, taking the derivatives of this guess and substituting back into the equation gives:


   \displaystyle
  -\omega^2A\cos(\omega x)-\omega A\sin(\omega x)p(x)+A\cos(\omega x)q(x)= k\cos(\omega x)

Notice that  \sin(\omega x)\, has appeared, with the same coefficient  A\,. Equating the coefficients of the sine terms forces  A=0\, (whenever  p(x) \ne 0\,), but then the cosine terms cannot match  k\,. The single-term guess fails because the first-derivative term always generates the other trigonometric function. To counter this, a sine term must be added to  y_p(x)\,, so the particular solution would be:


   \displaystyle
  y_{p}(x)= A\cos(\omega x)+B\sin(\omega x)

In conclusion, if the excitation  r(x)\, includes a sine or cosine, the particular solution must be in the form of:


   \displaystyle
  y_{p}(x)= M\cos(\omega x)+N\sin(\omega x)

where 'N' and 'M' are constants.

Author & Proofreaders

Author:Egm4313.s12.team17.Li 21:32, 26 March 2012 (UTC)
Proofreader 1:Egm4313.s12.team17.wheeler.tw 12:15, 28 March 2012 (UTC)
Proofreader 2:Egm4313.s12.team17.axelrod.a 17:23, 29 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 05:42, 30 March 2012 (UTC)


Problem R5.5 - L2-ODE-CC with Linear Independent Testing

Part 1 Given


   \displaystyle
  cos(7x)

(Eq.5.1.1)


   \displaystyle
  sin(7x)

(Eq.5.1.2)

Part 1 Problem Statement

Show that  cos(7x)\, and  sin(7x)\, are linearly independent using the Wronskian and the Gramian (integrate over 1 period).

Part 1 Solution

Wronskian Method
By Definition


   \displaystyle
  W(f,g):= \det \begin{bmatrix} f &g \\  f'&g' \end{bmatrix}=fg'-gf'

(Eq.5.1.3)

In this problem:


   \displaystyle
  f(x)=\cos7x

(Eq.5.1.4)


   \displaystyle
 g(x)=\sin7x

(Eq.5.1.5)

Taking the derivative of Eq.5.1.4 and Eq.5.1.5 respectively:


   \displaystyle
  f'(x)=-7\sin7x

(Eq.5.1.6)


   \displaystyle
  g'(x)=7\cos7x

(Eq.5.1.7)

Plugging equations Eq.5.1.4 through Eq.5.1.7 into Eq.5.1.3


   \displaystyle
  W(f,g)=(\cos7x)(7\cos7x)-(\sin7x)(-7\sin7x)=7\cos^27x+7\sin^27x=7 \ne 0

(Eq.5.1.8)

Thus the functions are linearly independent.

Gramian Method
By definition:


   \displaystyle
  \Gamma (f,g) := \det \begin{bmatrix}\langle f,f \rangle & \langle f,g \rangle\\ \langle g,f \rangle & \langle g,g \rangle\end{bmatrix}

(Eq.5.1.9)

Where


   \displaystyle
  \langle f,g \rangle := \int_a^b f(x)g(x)\,dx

(Eq.5.1.10)

In this problem:


   \displaystyle
  f(x)=\cos7x

(Eq.5.1.11)


   \displaystyle
 g(x)=\sin7x

(Eq.5.1.12)


   \displaystyle
 T=\frac{2 \pi}{7}=0.8976

(Eq.5.1.13)

Applying Eqs.5.1.11 through Eq.5.1.13 to Eq.5.1.10:


   \displaystyle
  \langle f,f \rangle := \int_0^{0.8976} f(x)f(x)\,dx = \int_0^{0.8976} \cos7x \cos7x \,dx = \int_0^{0.8976} \cos^27x\,dx = 0.449 =\frac{\pi}{7}

(Eq.5.1.14)


   \displaystyle
  \langle f,g \rangle := \int_0^{0.8976} f(x)g(x)\,dx = \int_0^{0.8976} \cos7x \sin7x \,dx = 0

(Eq.5.1.15)


   \displaystyle
  \langle g,f \rangle := \int_0^{0.8976} f(x)g(x)\,dx = \int_0^{0.8976} \cos7x \sin7x \,dx = 0

(Eq.5.1.16)


   \displaystyle
  \langle g,g \rangle := \int_0^{0.8976} g(x)g(x)\,dx = \int_0^{0.8976} \sin7x \sin7x \,dx = \int_0^{0.8976} \sin^27x\,dx = 0.449 = \frac{\pi}{7}

(Eq.5.1.17)

Plugging the values from Eq.5.1.14 through Eq.5.1.17 into Eq.5.1.9:


   \displaystyle
  \Gamma (f,g) := \det \begin{bmatrix}\langle f,f \rangle & \langle f,g \rangle\\ \langle g,f \rangle & \langle g,g \rangle\end{bmatrix} = \det \begin{bmatrix} \frac{\pi}{7}  &  0 \\  0  &  \frac{\pi}{7} \end{bmatrix} = \frac{\pi^2}{49} \ne 0

(Eq.5.1.18)

Thus the functions are linearly independent.
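The one-period scalar products above can be cross-checked numerically; the sketch below uses a trapezoidal rule over T = 2*pi/7 (step count is an illustrative choice):

```python
import math

# Cross-check of Eq.5.1.18: Gramian of cos(7x), sin(7x) over one period.
T = 2 * math.pi / 7

def inner(f, g, n=20000):
    # trapezoidal rule for <f,g> on [0, T]
    h = T / n
    total = 0.5 * (f(0) * g(0) + f(T) * g(T))
    for i in range(1, n):
        x = i * h
        total += f(x) * g(x)
    return total * h

f = lambda x: math.cos(7 * x)
g = lambda x: math.sin(7 * x)
gram_det = inner(f, f) * inner(g, g) - inner(f, g) ** 2
print(gram_det, math.pi ** 2 / 49)
```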

Part 2 Given

Same as Part 1 Given

Part 2 Problem Statement

Find 2 equations for the two unknowns M,N, and solve for M,N.

Part 2 Solution

The ODE to be solved is:


   \displaystyle
  y''-3y'-10y=3\cos7x

(Eq.5.2.1)

The particular solution will take the following form:


   \displaystyle
  y_p(x)=M\cos7x + N\sin7x

(Eq.5.2.2)

Take the first and second derivatives of Eq.5.2.2:


   \displaystyle
  y'_p(x)=-7M\sin7x + 7N\cos7x

(Eq.5.2.3)


   \displaystyle
 y''_p(x)=-49M\cos7x - 49N\sin7x

(Eq.5.2.4)

Plugging Eq.5.2.2, Eq.5.2.3, and Eq.5.2.4 into Eq.5.2.1:


   \displaystyle
  -49M\cos7x - 49N\sin7x-3(-7M\sin7x + 7N\cos7x)-10(M\cos7x + N\sin7x)=3\cos7x

(Eq.5.2.5)


   \displaystyle
  -49M\cos7x-49N\sin7x+21M\sin7x-21N\cos7x-10M\cos7x-10N\sin7x=3\cos7x

(Eq.5.2.6)

By equating like terms, the two equations for solving for M and N are:


   \displaystyle
  -49M-21N-10M=3

(Eq.5.2.7)


   \displaystyle
  -49N+21M-10N=0

(Eq.5.2.8)

Solving the system of equations created by Eq.5.2.7 and Eq.5.2.8:


   \displaystyle
  M=-0.045

(Eq.5.2.9)


   \displaystyle
  N=-0.016

(Eq.5.2.10)
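The system Eq.5.2.7 and Eq.5.2.8 can be solved by Cramer's rule and the resulting particular solution checked against the excitation; a short Python sketch (the sample point in the residual check is an illustrative choice):

```python
import math

# Solve -59M - 21N = 3 (cosine terms) and 21M - 59N = 0 (sine terms),
# then verify y_p = M*cos(7x) + N*sin(7x) in y'' - 3y' - 10y = 3cos(7x).
det = (-59) * (-59) - (-21) * 21            # 3481 + 441 = 3922
M = (3 * (-59) - (-21) * 0) / det           # -177/3922
N = ((-59) * 0 - 21 * 3) / det              # -63/3922

def residual(x):
    yp = M * math.cos(7 * x) + N * math.sin(7 * x)
    dyp = -7 * M * math.sin(7 * x) + 7 * N * math.cos(7 * x)
    ddyp = -49 * M * math.cos(7 * x) - 49 * N * math.sin(7 * x)
    return ddyp - 3 * dyp - 10 * yp - 3 * math.cos(7 * x)

print(M, N, residual(0.3))
```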

Part 3 Given

Initial conditions


   \displaystyle
  y(0) = 1


   \displaystyle
  y'(0) = 0

Part 3 Problem Statement

Find the overall solution y(x) and plot the solution over 3 periods

Part 3 Solution

For the overall solution:


   \displaystyle
  y(x)=y_p(x)+y_h(x)

(Eq.5.3.1)

To find the homogeneous solution, solve:


   \displaystyle
  y''-3y'-10y=0

(Eq.5.3.2)

The characteristic equation of Eq.5.3.2 is:


   \displaystyle
  \lambda^2-3\lambda-10=0

(Eq.5.3.3)

To solve the characteristic equation, apply the quadratic formula:


   \displaystyle
  x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}

(Eq.5.3.4)


   \displaystyle
  \lambda_{1,2}=\frac{-(-3)\pm\sqrt{(-3)^2-4(-10)}}{2}

(Eq.5.3.5)


   \displaystyle
  \lambda_1=5

(Eq.5.3.6)


   \displaystyle
  \lambda_2=-2

(Eq.5.3.7)
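The roots can be checked by plugging them back into the characteristic polynomial:

```python
# Check that lambda_1 = 5 and lambda_2 = -2 satisfy lambda^2 - 3*lambda - 10 = 0
# (Eq.5.3.3).
def p(lam):
    return lam ** 2 - 3 * lam - 10

print(p(5), p(-2))
```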

Therefore the homogeneous solution takes the following form:


   \displaystyle
  y_h(x)=c_1e^{5x}+c_2e^{-2x}

(Eq.5.3.8)

Taking the first derivative of Eq.5.3.8:


   \displaystyle
  y'_h(x)=5c_1e^{5x}-2c_2e^{-2x}

(Eq.5.3.9)

Apply the initial conditions y(0)=1 and y'(0)=0 to the overall solution Eq.5.3.1, using  y_p\, from Eq.5.2.2 with  M=-0.045\, and  N=-0.016\,:


   \displaystyle
  y(0)=M+c_1+c_2=1 \Rightarrow c_1+c_2=1.045

(Eq.5.3.10)


   \displaystyle
  y'(0)=7N+5c_1-2c_2=0 \Rightarrow 5c_1-2c_2=0.112

(Eq.5.3.11)

Solving Eq.5.3.10 and Eq.5.3.11:


   \displaystyle
  c_1=0.3147

(Eq.5.3.12)


   \displaystyle
  c_2=0.7305

(Eq.5.3.13)

The homogeneous solution becomes:


   \displaystyle
  y_h(x)=0.3147e^{5x}+0.7305e^{-2x}

(Eq.5.3.14)

Therefore from Eq.5.3.1 the overall solution is:


   \displaystyle
  y(x)=y_p(x)+y_h(x)=-0.045\cos7x-0.016\sin7x+0.3147e^{5x}+0.7305e^{-2x}

(Eq.5.3.15)

Graph

R5 5.jpg

Matlab Code

clear all;
%R5.5: graph the complete solution over 3 periods
%y(x) = -0.045cos(7x) - 0.016sin(7x) + 0.3147e^(5x) + 0.7305e^(-2x)
x = 0:0.01:(3*0.8976);  %domain over 3 periods
y = -0.045*cos(7*x) - 0.016*sin(7*x) + 0.3147*exp(5*x) + 0.7305*exp(-2*x);
plot(x,y);  %plot graph
%Add labels
title('R5.5 Graph');
xlabel('x');
ylabel('y(x)');

Author & Proofreaders

Author:Egm4313.s12.team17.wheeler.tw 05:10, 23 February 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 09:25, 27 February 2012 (UTC)
Editor:Egm4313.s12.team17.ying 06:25, 30 February 2012 (UTC)


Problem R5.6 - Solving for the Unknown Coefficients

Given


   \displaystyle
  y_p=xe^{-2x}(Mcos(3x)+Nsin(3x))

(Eq.6.1)


   \displaystyle
  y_h=e^{-2x}(Acos(3x)+Bsin(3x))

(Eq.6.2)


   \displaystyle
  y''+4y'+13y=2e^{-2x}cos(3x)

(Eq.6.3)

Initial conditions


   \displaystyle
  y(0) = 1


   \displaystyle
  y'(0) = 0

Problem Statement

Determine the overall solution y(x) that corresponds to the initial conditions.

Plot the general solution over 3 periods.

Solution

Combining the given homogeneous and particular solutions gives us the overall form:


   \displaystyle
  y(x)=xe^{-2x}(Mcos(3x)+Nsin(3x))+ e^{-2x}(Acos(3x)+Bsin(3x))

(Eq.6.4)

Applying the initial condition y(0)=1 to Eq.6.4 gives us:


   \displaystyle
  y(0)=1=0e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))+ e^{-2(0)}(Acos(3(0))+Bsin(3(0)))

(Eq.6.5)

Which is simplified below:


   \displaystyle
  1=Acos(0)+Bsin(0)

(Eq.6.6)


   \displaystyle
1=A(1)+B(0)

(Eq.6.7)


   \displaystyle
  1=A

(Eq.6.8)

Taking the derivative of Eq.6.4 gives us


   \displaystyle
  y'(x)=e^{-2x}(Mcos(3x)+Nsin(3x))-2xe^{-2x}(Mcos(3x)+Nsin(3x))-3xe^{-2x}(Msin(3x))

   \displaystyle
    +3xe^{-2x}(Ncos(3x))-2e^{-2x}(Acos(3x)+Bsin(3x))-3e^{-2x}(Asin(3x))+3e^{-2x}(Bcos(3x))

(Eq.6.9)

Inserting the initial condition y'(0)=0 into Eq.6.9 we get:


   \displaystyle
  y'(0)=0=e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))-2(0)e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))-3(0)e^{-2(0)}(Msin(3(0)))

   \displaystyle
    +3(0)e^{-2(0)}(Ncos(3(0)))-2e^{-2(0)}(Acos(3(0))+Bsin(3(0)))-3e^{-2(0)}(Asin(3(0)))+3e^{-2(0)}(Bcos(3(0)))

(Eq.6.10)

Which is further simplified to:


   \displaystyle
  0=(M)-2(A)+3(B)

(Eq.6.11)

Since A=1:


   \displaystyle
  0=(M)-2(1)+3(B)

(Eq.6.12)

We then get the relation:


   \displaystyle
  2=(M)+3(B)

(Eq.6.13)

Rather than expanding  y''\, term by term, note that  y_h\, (Eq.6.2) already satisfies the homogeneous equation  y''+4y'+13y=0\,, so substituting Eq.6.4 into Eq.6.3 leaves only the contribution of  y_p\,. Writing  u(x)=M\cos(3x)+N\sin(3x)\,, so that  y_p=xe^{-2x}u\, and  u''=-9u\,:


   \displaystyle
  y_p''+4y_p'+13y_p=e^{-2x}(2u'-4u)+xe^{-2x}(u''-4u'+4u)+4\left[e^{-2x}u+xe^{-2x}(u'-2u)\right]+13xe^{-2x}u

(Eq.6.14)


   \displaystyle
  y_p''+4y_p'+13y_p=e^{-2x}\left[2u'+x(u''+9u)\right]=2e^{-2x}u'=2e^{-2x}(-3M\sin(3x)+3N\cos(3x))

(Eq.6.15)

Setting Eq.6.15 equal to the excitation  2e^{-2x}\cos(3x)\, from Eq.6.3 and distributing:


   \displaystyle
  -6e^{-2x}(Msin(3x))+6e^{-2x}(Ncos(3x))=2e^{-2x}cos(3x)

(Eq.6.16)

Dividing Eq.6.16 by e^{-2x} yields:


   \displaystyle
  -6(Msin(3x))+6(Ncos(3x))=2cos(3x)

(Eq.6.17)

We then split Eq.6.17 into a system of equations to determine the value of the coefficients:


   \displaystyle
  -6(Msin(3x))=0sin(3x)

(Eq.6.18)


   \displaystyle
  6(Ncos(3x))=2cos(3x)

(Eq.6.19)

From Eq.6.18 and Eq.6.19 we can deduce that M=0 and N=\frac{1}{3}.
Inserting the value of M into Eq.6.13 yields:


   \displaystyle
    2=(0)+3(B)

(Eq.6.20)


   \displaystyle
    \frac{2}{3}=(B)

(Eq.6.21)

Inserting the determined values for A,B,M, and N from Eq.6.8, Eq.6.18, Eq.6.21, and Eq.6.19 we can deduce that the overall solution is:


   \displaystyle
y(x)=xe^{-2x}(0cos(3x)+\frac{1}{3}sin(3x))+ e^{-2x}(1cos(3x)+\frac{2}{3}sin(3x))

(Eq.6.22)

Making the overall solution:


   \displaystyle
  y(x)=xe^{-2x}(\frac{1}{3}sin(3x))+ e^{-2x}(1cos(3x)+\frac{2}{3}sin(3x))

(Eq.6.23)
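The overall solution Eq.6.23 can be checked numerically against the ODE Eq.6.3 and the initial conditions; a Python sketch using central finite differences (the step size h and the sample point are illustrative choices):

```python
import math

# Finite-difference check that Eq.6.23 satisfies y'' + 4y' + 13y = 2e^{-2x}cos(3x)
# with y(0) = 1 and y'(0) = 0.
def y(x):
    return (x * math.exp(-2 * x) * (1 / 3) * math.sin(3 * x)
            + math.exp(-2 * x) * (math.cos(3 * x) + (2 / 3) * math.sin(3 * x)))

h = 1e-4
def dy(x):  return (y(x + h) - y(x - h)) / (2 * h)
def ddy(x): return (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2

x0 = 0.7
residual = ddy(x0) + 4 * dy(x0) + 13 * y(x0) - 2 * math.exp(-2 * x0) * math.cos(3 * x0)
print(y(0), dy(0), residual)
```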

Below is a plot of the solution:
Graph

File:StupidStupidGraph.jpg

Matlab Code

clear all;
%R5.6: graph the complete solution over 3 periods
%y(x) = x*e^(-2x)*(1/3)*sin(3x) + e^(-2x)*(cos(3x) + (2/3)*sin(3x))
x = 0:0.1:6.3;  %domain over 3 periods, from 0 to 6.3
y = x.*exp(-2*x).*(1/3).*sin(3*x) + exp(-2*x).*(cos(3*x) + (2/3)*sin(3*x));
plot(x,y,'r:+');  %plot graph
%Add labels
title('R5.6 Graph');
xlabel('x');
ylabel('y(x)');

Author & Proofreaders

Author:Egm4313.s12.team17.axelrod.a 04:46, 30 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 07:05, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 07:14, 30 March 2012 (UTC)


Problem R5.7 - Projection on a Basis for Vectors

Given


   \displaystyle
 V = 4e_1 + 2e_2 = c_1 b_1 + c_2 b_2

(Eq.7.1)


   \displaystyle
 b_1 = 2e_1 + 7e_2

(Eq.7.2)


   \displaystyle
 b_2 = 1.5e_1 + 3e_2

(Eq.7.3)

Part 1 Problem Statement

Find  c_1,c_2 using the Gram Matrix

Part 1 Solution

The Gramian is given as:


   \displaystyle
 \Gamma (b_1,b_2) = \begin{bmatrix} \left \langle b_1,b_1 \right \rangle & \left \langle b_1,b_2 \right \rangle \\
                   \left \langle b_2,b_1 \right \rangle & \left \langle b_2,b_2 \right \rangle 
                \end{bmatrix}

(Eq.7.1.1)

According to (3) on p.8-9, the scalar product of two vectors is their dot product:


   \displaystyle
  \left \langle b_i,b_j \right \rangle = (b_i \cdot b_j)

(Eq.7.1.2)


Calculating the dot products:


   \displaystyle
  \left \langle b_1,b_1 \right \rangle = 4 + 49 = 53


   \displaystyle
  \left \langle b_1,b_2 \right \rangle = 3 + 21 = 24


   \displaystyle
  \left \langle b_2,b_1 \right \rangle = 3 + 21 = 24


   \displaystyle
  \left \langle b_2,b_2 \right \rangle = 2.25 + 9 = 11.25

(Eq.7.1.3)

Plugging Equations 7.1.3 into 7.1.1 gives:


   \displaystyle

\Gamma =
\begin{bmatrix}
53 & 24\\
24 & 11.25\\
\end{bmatrix}

(Eq.7.1.4)

Next, form the vector  d\,, as shown in (5) on p.8-10 (note that  v\, is given in Eq.7.1):


   \displaystyle

d = \{\left \langle b_1,v \right \rangle, \left \langle b_2,v \right \rangle\}^T

(Eq.7.1.5)

Evaluating the scalar products as in Eq.7.1.2 gives:


   \displaystyle

d = \{22,12\}^T

(Eq.7.1.6)

From the notes (1) on pg.8-11


   \displaystyle

c = \Gamma ^{-1}d

(Eq.7.1.7)

This gives:


   \displaystyle

c = \bigg\{\frac{11.25(22)-24(12)}{20.25},\frac{-24(22)+53(12)}{20.25}\bigg\}^T

(Eq.7.1.8)

So  c_1 and  c_2 are:


   \displaystyle

c_1 = -2



   \displaystyle

c_2 = 5.33

(Eq.7.1.9)
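The 2x2 system Gamma*c = d can be solved directly with the adjugate formula for the inverse; a short Python sketch using the Gramian Eq.7.1.4 and the vector d from Eq.7.1.6:

```python
# Solve Gamma * c = d for the coefficients c1, c2.
G = [[53.0, 24.0], [24.0, 11.25]]
d = [22.0, 12.0]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]      # 20.25
c1 = (G[1][1] * d[0] - G[0][1] * d[1]) / det
c2 = (-G[1][0] * d[0] + G[0][0] * d[1]) / det
print(c1, c2)   # c2 = 16/3, i.e. 5.33 rounded
```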

Part 2 Problem Statement

Verify the result by using (1)-(2) on p.7c-34, relying on the non-zero determinant of the matrix of components of  b_1, b_2 relative to the basis  e_1,e_2 , as discussed on p.7c-34.

Part 2 Solution

The equations to be entered into the A matrix are as follows:


   \displaystyle
  b_1 = 2 e_1 + 7 e_2


   \displaystyle
  b_2 = 1.5 e_1 + 3 e_2

(Eq.7.2.1)

According to the notes if these two equations are put into a matrix and the determinant is  \ne 0 then they are linearly independent. Entering them into a matrix gives:


   \displaystyle
 A = 
\begin{bmatrix} 
2 & 7 \\
1.5 & 3\\
\end{bmatrix}

(Eq.7.2.2)

Solving:


   \displaystyle
 6 - 10.5 = -4.5 \ne 0

(Eq.7.2.3)

Thus  b_1 and  b_2 are linearly independent.

Use the following equation to check the answer from the previous part.


   \displaystyle
 V=4e_1+2e_2\equiv c_1b_1+c_2b_2

(Eq.7.2.4)


   \displaystyle
 V\equiv (-2)(2e_1+7e_2)+(5.33)(1.5e_1+3e_2)

(Eq.7.2.5)


   \displaystyle
 V\equiv 4e_1+2e_2

(Eq.7.2.6)
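The reconstruction of V can be checked componentwise (using the exact value c2 = 16/3 rather than the rounded 5.33):

```python
# Verify that c1*b1 + c2*b2 reproduces V = 4e1 + 2e2.
b1 = (2.0, 7.0)
b2 = (1.5, 3.0)
c1, c2 = -2.0, 16 / 3
V = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
print(V)
```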

Author & Proofreaders

Author:Egm4313.s12.team17.hintz 23:46, 24 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.axelrod.a 14:35, 29 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 07:51, 30 March 2012 (UTC)


Problem R5.8 - Integration

Given


   \displaystyle
  \int x^nlog(1+x)dx

(Eq. 8.1)

With


   \displaystyle
  n = 0,1

Integration by Parts


   \displaystyle
  \int udv= uv- \int vdu

(Eq.8.2)

General Binomial Theorem


   \displaystyle
  (x+y)^n=\sum_{k=0}^n \binom {n}{k}x^{n-k}y^k

(Eq.8.3)

Where


   \displaystyle
  \binom {n}{k}=\dfrac {n!}{k!(n-k)!}=\dfrac {n(n-1) \cdots (n-k+1)}{k!}

Problem Statement

Find the indefinite integral Eq.8.1 with  n=0,1 using integration by parts and the General Binomial Theorem.

Solution

Using the integration by parts formula, Eq.8.2, we set  u=log(1+x) and  dv=x^n dx, so we have the following:


   \displaystyle
  u=log(1+x)


   \displaystyle
  du = \dfrac{1}{x+1}dx


   \displaystyle
dv=x^n dx


   \displaystyle
  v=\dfrac {x^{n+1}}{n+1}

Substituting the above into Eq.8.2 we have:


   \displaystyle
  \int x^nlog(1+x)dx= \dfrac{log(1+x)x^{n+1}}{n+1}-\int \dfrac {x^{n+1}}{(n+1)(1+x)}dx

(Eq.8.4)

Now we can set n=0 and Eq. 8.4 becomes:


   \displaystyle
  \int x^nlog(1+x)dx= \dfrac{log(1+x)x^{1}}{1}-\int \dfrac {x^{1}}{(1)(1+x)}dx

(Eq.8.5)

Where


   \displaystyle
  \int \dfrac {x^{1}}{(1)(1+x)}dx

Can be solved by first using long division on the integrand:


   \displaystyle
  \int \dfrac {x^{1}}{(1)(1+x)}dx= \int (1-\dfrac {1}{1+x})dx

Next integrate term by term


   \displaystyle
  \int 1dx-\int \dfrac {1}{1+x}dx = x-log(1+x)

So when  n=0 we have:


   \displaystyle
  \int x^nlog(1+x)dx= log(1+x)x-x+log(x+1)

(Eq.8.6)
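The n = 0 antiderivative can be checked by differentiating it numerically: the derivative should return the integrand log(1+x) (natural log, as used throughout). The step size and sample point below are illustrative choices:

```python
import math

# Check Eq.8.6: d/dx [x*log(1+x) - x + log(1+x)] should equal log(1+x).
def F(x):
    return math.log(1 + x) * x - x + math.log(1 + x)

h = 1e-6
x0 = 0.7
dF = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(dF, math.log(1 + x0))
```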

Now we substitute n=1 into Eq. 8.4 which gives us:


   \displaystyle
  \int x^nlog(1+x)dx= \dfrac{log(1+x)x^{1+1}}{1+1}-\int \dfrac {x^{1+1}}{(1+1)(1+x)}dx=\dfrac{log(1+x)x^{2}}{2}-\int \dfrac {x^{2}}{(2)(1+x)}dx

(Eq.8.7)

And we have the integral:


   \displaystyle
  \int \dfrac {x^{2}}{(2)(1+x)}dx

Which can also be solved by first using long division on the integrand \dfrac {x^2}{1+x} , so:


   \displaystyle
  \int \dfrac {x^{2}}{(2)(1+x)}dx= \dfrac{1}{2} \int \dfrac {x^2}{1+x}dx= \dfrac{1}{2} \int(x+\dfrac{1}{1+x}-1)dx

Integrating term by term we have that:


   \displaystyle
  \dfrac{1}{2} \int(x+\dfrac{1}{1+x}-1)dx=\dfrac{x^2}{4}-\dfrac{x}{2}+\dfrac{1}{2}log(1+x)=\dfrac{1}{4}((x-2)x+2log(1+x))

So Eq. 8.7 becomes:


   \displaystyle
  \int x^nlog(1+x)dx=\dfrac{log(1+x)x^{2}}{2}-\int \dfrac {x^{2}}{(2)(1+x)}dx=\dfrac{log(1+x)x^{2}}{2}-(\dfrac{1}{4}((x-2)x+2log(1+x)))

(Eq.8.8)

So when n=1:


   \displaystyle
  \int x^nlog(1+x)dx=\dfrac{\log(1+x)x^{2}}{2}-\dfrac{1}{4}[(x-2)x+2log(1+x)]

(Eq.8.9)
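The n = 1 antiderivative admits the same numerical check: its derivative should return the integrand x*log(1+x). Step size and sample point are again illustrative choices:

```python
import math

# Check Eq.8.9: the derivative of the n = 1 antiderivative should equal
# x*log(1+x).
def F(x):
    return (math.log(1 + x) * x ** 2 / 2
            - 0.25 * ((x - 2) * x + 2 * math.log(1 + x)))

h = 1e-6
x0 = 0.7
dF = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(dF, x0 * math.log(1 + x0))
```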

Author & Proofreaders

Author:Egm4313.s12.team17.Li 23:47, 23 March 2012 (UTC)
Proofreader 1:Egm4313.s12.team17.axelrod.a 20:45, 25 March 2012 (UTC)
Proofreader 2:Egm4313.s12.team17.ying 08:22, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 08:22, 30 March 2012 (UTC)


Problem R5.9 - Projection on a Basis

Part 1 Given


   \displaystyle
   y'' - 3y' + 2y = r(x)

(Eq.9.1.1)


   \displaystyle
   r(x) = log(1+x)

(Eq.9.1.2)


   \displaystyle
   y(-3/4) = 1, y'(-3/4) = 0

(Eq.9.1.3)


   \displaystyle
  r(x) = \log(1+x) = \sum_{k=0}^\infty \frac{x^{k+1}}{k+1}(-1)^{k}

(Eq.9.1.4)


   \displaystyle
  \bold{b}=\{b_j(x)=x^j,j=0,1,...,n\}

(Eq.9.1.5)

Part 1 Problem Statement

Project equation 9.1.2 on the polynomial basis such that it takes the form shown below for  x=[-3/4,3]\, and  n=0,1,2  \,:


   \displaystyle
  r(x)\approx r_n(x)=\sum_{j=0}^n d_jx^j

(Eq.9.1.6)

Plot equations 9.1.2 and 9.1.6 to show the uniform approximation and convergence.

Then, plot equations 9.1.4 and 9.1.6, in order, to compare the pros and cons of the projection on polynomial and Taylor series expansion methods for approximating equation 9.1.2.

Part 1 Solution

Let  n=0 \,.

Therefore, equation 9.1.5 becomes:


   \displaystyle
  \bold{b}=\{b_0\}=\{x^0\}

(Eq.9.1.7)

Set up a Gram matrix for the basis functions (eq. 9.1.7):


   \displaystyle
  \bold{\Gamma}(\bold{b})=\{\langle b_0,b_0 \rangle\}

(Eq.9.1.8)


   \displaystyle
  \langle b_0,b_0 \rangle = \int\limits_{a}^{b}b_0b_0dx

(Eq.9.1.9)

Let  a=-3/4 \, and  b=3 \,.


   \displaystyle
  \langle b_0,b_0 \rangle = \int\limits_{-\frac{3}{4}}^{3}x^0(x^0)dx

(Eq.9.1.10)


   \displaystyle
  \langle b_0,b_0 \rangle = \frac{15}{4} = 3.75

(Eq.9.1.11)

Therefore,


   \displaystyle
  \bold{\Gamma}=\{3.75\}

(Eq.9.1.12)

The right-hand-side (rhs) matrix will take the form shown below:


   \displaystyle
  \bold{e}=\{\langle b_0,r(x) \rangle, ...,\langle b_n,r(x) \rangle\}^T

(Eq.9.1.13)

Therefore,


   \displaystyle
  \bold{e}=\{\langle b_0,r(x) \rangle\}^T

(Eq.9.1.14)


   \displaystyle
  \langle b_0,r(x) \rangle = \int\limits_{a}^{b}b_0r(x)dx

(Eq.9.1.15)


   \displaystyle
  \langle b_0,r(x) \rangle = \int\limits_{-\frac{3}{4}}^{3}x^0(log(1+x))dx

(Eq.9.1.16)


   \displaystyle
  \langle b_0,r(x) \rangle = \bigg[(x+1)log(x+1)-x\bigg]_{-\frac{3}{4}}^3

(Eq.9.1.17)


   \displaystyle
  \langle b_0,r(x) \rangle = \frac{17}{2}log(2)-\frac{15}{4}

(Eq.9.1.18)


   \displaystyle
  \langle b_0,r(x) \rangle = 2.1418

(Eq.9.1.19)


   \displaystyle
  \bold{e}=\{2.1418\}^T

(Eq.9.1.20)

Using the equation shown below, solve for the d values:


   \displaystyle
  \bold{\Gamma}\bold{d}=\bold{e}

(Eq.9.1.21)


   \displaystyle
  \bold{d}=\bold{\Gamma}^{-1}\bold{e}

(Eq.9.1.22)

Therefore,


   \displaystyle
  \bold{d}=\{3.75\}^{-1}\{2.1418\}^T

(Eq.9.1.23)


   \displaystyle
  \bold{d}=0.5711

(Eq.9.1.24)

This result will make equation 9.1.6 the following:


   \displaystyle
  r_0(x)=\sum^0_{j=0}d_jx^j= d_0x^0

(Eq.9.1.25)


   \displaystyle
  r_0(x)=0.5711

(Eq.9.1.26)
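The n = 0 projection can be cross-checked numerically. The sketch below (illustrative helper names; midpoint-rule quadrature in place of the closed-form antiderivatives) reproduces Eq.9.1.11, Eq.9.1.18, and Eq.9.1.24:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b = -0.75, 3.0
gram = midpoint_integral(lambda x: 1.0, a, b)            # <b0,b0>, Eq.9.1.11
e0 = midpoint_integral(lambda x: math.log(1 + x), a, b)  # <b0,r>,  Eq.9.1.18
d0 = e0 / gram                                           # Gamma d = e, 1x1 case

print(round(gram, 4), round(e0, 4), round(d0, 4))
```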


Let  n=1 \,.

Therefore, equation 9.1.5 becomes:


   \displaystyle
  \bold{b}=\{b_0,b_1\}=\{x^0,x^1\}

(Eq.9.1.27)

Set up a Gram matrix for the basis functions (eq. 9.1.27):


   \displaystyle
  \bold{\Gamma}(\bold{b})=
\begin{bmatrix}
\langle b_0,b_0 \rangle & \langle b_0,b_1 \rangle \\
\langle b_1,b_0 \rangle & \langle b_1,b_1 \rangle \\
\end{bmatrix}

(Eq.9.1.28)


   \displaystyle
  \langle b_0,b_1 \rangle = \langle b_1,b_0 \rangle = \int\limits_{-\frac{3}{4}}^{3}x^0(x^1)dx

(Eq.9.1.29)


   \displaystyle
  \langle b_0,b_1 \rangle = \langle b_1,b_0 \rangle = \frac{1}{2}\bigg[x^2\bigg]^3_{-\frac{3}{4}}

(Eq.9.1.30)


   \displaystyle
  \langle b_0,b_1 \rangle = \langle b_1,b_0 \rangle = \frac{135}{32} = 4.2188

(Eq.9.1.31)


   \displaystyle
  \langle b_1,b_1 \rangle = \int\limits_{-\frac{3}{4}}^{3}x^1(x^1)dx

(Eq.9.1.32)


   \displaystyle
  \langle b_1,b_1 \rangle = \frac{1}{3}\bigg[x^3\bigg]^3_{-\frac{3}{4}}

(Eq.9.1.33)


   \displaystyle
  \langle b_1,b_1 \rangle = \frac{585}{64} = 9.1406

(Eq.9.1.34)

Therefore,


   \displaystyle
  \bold{\Gamma}=
\begin{bmatrix}
3.7500 & 4.2188 \\
4.2188 & 9.1406 \\
\end{bmatrix}

(Eq.9.1.35)

Generate a rhs matrix:


   \displaystyle
  \bold{e}=\{\langle b_0,r(x) \rangle,\langle b_1,r(x) \rangle\}^T

(Eq.9.1.36)


   \displaystyle
  \langle b_1,r(x) \rangle = \int\limits_{a}^{b}b_1r(x)dx

(Eq.9.1.37)


   \displaystyle
  \langle b_1,r(x) \rangle = \int\limits_{-\frac{3}{4}}^{3}x^1(log(1+x))dx

(Eq.9.1.38)


   \displaystyle
  \langle b_1,r(x) \rangle = \bigg[\bigg(\frac{x^2}{2}-\frac{1}{2}\bigg)log(x+1)-\frac{x(x-2)}{4}\bigg]_{-\frac{3}{4}}^3

(Eq.9.1.39)


   \displaystyle
  \langle b_1,r(x) \rangle = \frac{121}{16}log(2)-\frac{15}{64}

(Eq.9.1.40)


   \displaystyle
  \langle b_1,r(x) \rangle = 5.0076

(Eq.9.1.41)


   \displaystyle
  \bold{e}=\{2.1418,5.0076\}^T

(Eq.9.1.42)

Use equation 9.1.21 to solve for the d values:


   \displaystyle
  \bold{d}=\frac{1}{\det\bold{\Gamma}}
\begin{bmatrix}
9.1406 & -4.2188 \\
-4.2188 & 3.7500 \\
\end{bmatrix}
\begin{bmatrix}
2.1418 \\
5.0076 \\
\end{bmatrix}

(Eq.9.1.43)


   \displaystyle
  \bold{d}=\frac{1}{16.479}
\begin{bmatrix}
9.1406 & -4.2188 \\
-4.2188 & 3.7500 \\
\end{bmatrix}
\begin{bmatrix}
2.1418 \\
5.0076 \\
\end{bmatrix}

(Eq.9.1.44)


   \displaystyle
  \bold{d}=
\begin{bmatrix}
-0.0940 \\
0.5912 \\
\end{bmatrix}

(Eq.9.1.45)

This result will make equation 9.1.6 the following:


   \displaystyle
  r_1(x)=\sum^1_{j=0}d_jx^j= d_0x^0 + d_1x^1

(Eq.9.1.46)


   \displaystyle
  r_1(x)=-0.0940+0.5912x

(Eq.9.1.47)
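As with the n = 0 case, the coefficients in Eq.9.1.47 can be recovered numerically. The sketch below (assumed helper names) rebuilds the Gram matrix of Eq.9.1.35 and the rhs of Eq.9.1.42 by quadrature, then solves the 2x2 system with Cramer's rule:

```python
import math

def integral(f, a=-0.75, b=3.0, n=100_000):
    """Midpoint-rule quadrature over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

g00 = integral(lambda x: 1.0)                 # <b0,b0> = 15/4
g01 = integral(lambda x: x)                   # <b0,b1> = 135/32
g11 = integral(lambda x: x * x)               # <b1,b1> = 585/64
e0 = integral(lambda x: math.log(1 + x))      # Eq.9.1.18
e1 = integral(lambda x: x * math.log(1 + x))  # Eq.9.1.40

det = g00 * g11 - g01 * g01                   # det(Gamma), cf. Eq.9.1.44
d0 = (g11 * e0 - g01 * e1) / det              # Cramer's rule
d1 = (g00 * e1 - g01 * e0) / det
print(round(d0, 4), round(d1, 4))
```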

The plot below shows the projected and actual excitation.

(Figure: the excitation r(x)=log(1+x) plotted with the projections r_0(x) and r_1(x) over [-3/4, 3]; original image: 5 9 1.jpg)

Below is the MATLAB code used to generate the above graph:

x=[-.75:0.01:3];
r=log(x+1);
r0=0.5711*x.^0;  % constant projection r_0(x), Eq.9.1.26
r1=-0.0940+0.5912*x;
plot(x,r,x,r0,x,r1)
xlabel('x-axis');
ylabel('r-axis');


The truncated Taylor series expansions of equation 9.1.4 are shown below:


   \displaystyle
  t_0=x

(Eq.9.1.48)


   \displaystyle
  t_1=x-\frac{x^2}{2}

(Eq.9.1.49)

The plots below show the comparison of the two methods for approximating equation 9.1.2.

(Figure: two panels, 'Approximation when n=0' and 'Approximation when n=1', comparing r(x) with the projection and Taylor approximations; original image: 5 9 2.jpg)

Below is the MATLAB code used to generate the above graph:

x=[-.75:0.01:3];
r=log(x+1);
r0=0.5711*x.^0;  % constant projection r_0(x), Eq.9.1.26
r1=-0.0940+0.5912*x;
t0=x;
t1=x-x.^2/2;
subplot(211)
plot(x,r,x,r0,x,t0)
xlabel('x-axis');
ylabel('r-axis');
title('Approximation when n=0')
subplot(212)
plot(x,r,x,r1,x,t1)
xlabel('x-axis');
ylabel('r-axis');
title('Approximation when n=1')


The Taylor series expansion is well suited to approximating a complicated function near a specific point. As x moves away from the expansion point, however, the truncated series becomes increasingly inaccurate, and more terms are required to keep the error small over a larger x range.

Projection on a polynomial basis instead produces an approximation that distributes the error evenly over the given x range. To obtain a highly accurate approximation with this method, the number of basis terms must be increased, which introduces more unknowns; more unknowns mean a larger system to solve, and therefore more computation time and cost.
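This trade-off can be quantified. Below is a minimal sketch (the sampling grid and the max-error metric are arbitrary choices, not from the report) comparing the worst-case error of the n = 1 projection (Eq.9.1.47) and the two-term Taylor truncation (Eq.9.1.49) over [-3/4, 3]:

```python
import math

xs = [-0.75 + 3.75 * i / 1000 for i in range(1001)]
# Max absolute error of each approximation against r(x) = log(1+x)
err_proj = max(abs(math.log(1 + x) - (-0.0940 + 0.5912 * x)) for x in xs)
err_taylor = max(abs(math.log(1 + x) - (x - x**2 / 2)) for x in xs)

# The Taylor error is concentrated at the right end of the interval,
# while the projection spreads a smaller worst-case error across it.
print(round(err_proj, 3), round(err_taylor, 3))
```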

Part 2 Given[edit]

Information from part 1.

Part 2 Problem Statement[edit]

Find a  y_n(x) \, that solves the following equation for  n=0,1 \,:


   \displaystyle
  y_n''+ay_n'+by_n= r_n(x)

(Eq.9.2.1)

Then plot the solutions, truncated Taylor series, and numerical solution.

Part 2 Solution[edit]

For  n=0 \,.


   \displaystyle
  y_0''-3y_0'+2y_0= r_0(x)

(Eq.9.2.2)


   \displaystyle
  y_0=y_{0,h}+y_{0,p}

(Eq.9.2.3)

The homogeneous solution must satisfy the following:


   \displaystyle
  y_{0,h}''-3y_{0,h}'+2y_{0,h}= 0

(Eq.9.2.4)

Therefore, the characteristic equation is:


   \displaystyle
  \lambda^2-3\lambda+2= 0

(Eq.9.2.5)


   \displaystyle
  (\lambda-2)(\lambda-1)=0

(Eq.9.2.6)


   \displaystyle
  \lambda_1=2,\lambda_2=1

(Eq.9.2.7)

Since the roots are real and distinct, the general solution of the homogeneous equation takes the following form:


   \displaystyle
  y_h=d_1e^{\lambda_1 x}+d_2e^{\lambda_2 x}

(Eq.9.2.8)

Therefore,


   \displaystyle
  y_{0,h}=d_1e^{2 x}+d_2e^{ x}

(Eq.9.2.9)

The particular solution must satisfy the following:


   \displaystyle
  y_{0,p}''-3y_{0,p}'+2y_{0,p}= 0.5711

(Eq.9.2.10)

Using the method of undetermined coefficients, the particular solution will have the form:


   \displaystyle
  y_p=\sum^n_{j=0}c_jx^j

(Eq.9.2.11)

Therefore,


   \displaystyle
  y_{0,p}=c_0x^0

(Eq.9.2.12)

Take the first and second derivative of equation 9.2.12:


   \displaystyle
  y_{0,p}^\prime=0

(Eq.9.2.13)


   \displaystyle
  y_{0,p}^{\prime\prime}=0

(Eq.9.2.14)

Substitute equations 9.2.12, 9.2.13, and 9.2.14 into equation 9.2.10:


   \displaystyle
  2c_0= 0.5711

(Eq.9.2.15)

Therefore,


   \displaystyle
  c_0= 0.2856

(Eq.9.2.16)

The particular solution will be:


   \displaystyle
  y_{0,p}=0.2856

(Eq.9.2.17)

Superimpose equation 9.2.9 and 9.2.17:


   \displaystyle
  y_{0}=d_1e^{2 x}+d_2e^{ x}+0.2856

(Eq.9.2.18)

Take the first derivative of equation 9.2.18:


   \displaystyle
  y_{0}^\prime=2d_1e^{2 x}+d_2e^{ x}

(Eq.9.2.19)

Use equation 9.1.3 to solve for the unknowns:


   \displaystyle
  1=d_1e^{-1.5}+d_2e^{-0.75}+0.2856

(Eq.9.2.20)


   \displaystyle
  0=2d_1e^{-1.5}+d_2e^{-0.75}

(Eq.9.2.21)

Therefore,


   \displaystyle
  d_1=-3.2019

(Eq.9.2.22)


   \displaystyle
  d_2=3.0249

(Eq.9.2.23)

The final solution for  n=0 \,:


   \displaystyle
  y_{0}=-3.2019e^{2 x}+3.0249e^{ x}+0.2856

(Eq.9.2.24)
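Since d_1, d_2, and the particular constant were rounded, Eq.9.2.24 should reproduce the initial conditions of Eq.9.1.3 only to within rounding error. A quick check (a sketch, using the rounded constants from the text):

```python
import math

# Eq.9.2.24 and its derivative, with the rounded constants above
y0 = lambda x: -3.2019 * math.exp(2 * x) + 3.0249 * math.exp(x) + 0.2856
dy0 = lambda x: 2 * (-3.2019) * math.exp(2 * x) + 3.0249 * math.exp(x)

# Both should match Eq.9.1.3 up to the rounding of the constants
assert abs(y0(-0.75) - 1) < 1e-3
assert abs(dy0(-0.75)) < 1e-3
print("Eq.9.2.24 matches the initial conditions to rounding error")
```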

For  n=1 \,.


   \displaystyle
  y_1''-3y_1'+2y_1= r_1(x)

(Eq.9.2.25)


   \displaystyle
  y_1=y_{1,h}+y_{1,p}

(Eq.9.2.26)

The homogeneous solution will be the same as equation 9.2.9:


   \displaystyle
  y_{1,h}=d_1e^{2 x}+d_2e^{ x}

(Eq.9.2.27)

The particular solution must satisfy the following:


   \displaystyle
  y_{1,p}''-3y_{1,p}'+2y_{1,p}= 0.5912x-0.0940

(Eq.9.2.28)

Using the method of undetermined coefficients, the particular solution will take the form of equation 9.2.11:


   \displaystyle
  y_{1,p}=c_0x^0+c_1x^1

(Eq.9.2.29)

Take the first and second derivative of equation 9.2.29:


   \displaystyle
  y_{1,p}^\prime=c_1

(Eq.9.2.30)


   \displaystyle
  y_{1,p}^{\prime\prime}=0

(Eq.9.2.31)

Substitute equations 9.2.29, 9.2.30, and 9.2.31 into equation 9.2.28:


   \displaystyle
  -3c_1+2c_0+2c_1x=0.5912x-0.0940

(Eq.9.2.32)

Solve the coefficients by setting like terms equal to one another.

For  x^0 \,:


   \displaystyle
  -3c_1+2c_0=-0.0940

(Eq.9.2.33)

For  x^1 \,:


   \displaystyle
  2c_1=0.5912

(Eq.9.2.34)

Therefore,


   \displaystyle
  c_1=0.2956

(Eq.9.2.35)


   \displaystyle
  c_0=0.3964

(Eq.9.2.36)

The particular solution will be:


   \displaystyle
  y_{1,p}=0.2956x+0.3964

(Eq.9.2.37)

Superimpose equation 9.2.27 and 9.2.37:


   \displaystyle
  y_{1}=d_1e^{2 x}+d_2e^{ x}+0.2956x+0.3964

(Eq.9.2.38)

Take the first derivative of equation 9.2.38:


   \displaystyle
  y_{1}^\prime=2d_1e^{2 x}+d_2e^{ x}+0.2956

(Eq.9.2.39)

Use equation 9.1.3 to solve for the unknowns:


   \displaystyle
  1=d_1e^{-1.5}+d_2e^{-0.75}-0.2217+0.3964

(Eq.9.2.40)


   \displaystyle
  0=2d_1e^{-1.5}+d_2e^{-0.75}+0.2956

(Eq.9.2.41)

Therefore,


   \displaystyle
  d_1=-5.0235

(Eq.9.2.42)


   \displaystyle
  d_2=4.1201

(Eq.9.2.43)

The final solution for  n=1 \,:


   \displaystyle
  y_{1}=-5.0235e^{2 x}+4.1201e^{ x}+0.2956x+0.3964

(Eq.9.2.44)
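The same rounding-aware check applies to Eq.9.2.44 (a sketch, not part of the original report):

```python
import math

# Eq.9.2.44 and its derivative, with the rounded constants above
y1 = lambda x: -5.0235 * math.exp(2 * x) + 4.1201 * math.exp(x) + 0.2956 * x + 0.3964
dy1 = lambda x: 2 * (-5.0235) * math.exp(2 * x) + 4.1201 * math.exp(x) + 0.2956

# Eq.9.1.3, up to the rounding of the constants
assert abs(y1(-0.75) - 1) < 5e-3
assert abs(dy1(-0.75)) < 5e-3
print("Eq.9.2.44 matches the initial conditions to rounding error")
```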

For the Taylor series expansion with  n=0 \,.


   \displaystyle
  y_{t_0}''-3y_{t_0}'+2y_{t_0}= r_{t_0}(x)

(Eq.9.2.45)


   \displaystyle
  y_{t_0}=y_{{t_0},h}+y_{{t_0},p}

(Eq.9.2.46)

The homogeneous solution will be the same as equation 9.2.9:


   \displaystyle
  y_{{t_0},h}=d_1e^{2 x}+d_2e^{ x}

(Eq.9.2.47)

The particular solution must satisfy the following:


   \displaystyle
  y_{{t_0},p}''-3y_{{t_0},p}'+2y_{{t_0},p}=x

(Eq.9.2.48)

Using the method of undetermined coefficients, the particular solution will take the form of equation 9.2.11:


   \displaystyle
  y_{{t_0},p}=c_0x^0+c_1x^1

(Eq.9.2.49)

Take the first and second derivative of equation 9.2.49:


   \displaystyle
  y_{{t_0},p}^\prime=c_1

(Eq.9.2.50)


   \displaystyle
  y_{{t_0},p}^{\prime\prime}=0

(Eq.9.2.51)

Substitute equations 9.2.49, 9.2.50, and 9.2.51 into equation 9.2.48:


   \displaystyle
  -3c_1+2c_0+2c_1x=x

(Eq.9.2.52)

Solve the coefficients by setting like terms equal to one another.

For  x^0 \,:


   \displaystyle
  -3c_1+2c_0=0

(Eq.9.2.53)

For  x^1 \,:


   \displaystyle
  2c_1=1

(Eq.9.2.54)

Therefore,


   \displaystyle
  c_1=0.5

(Eq.9.2.55)


   \displaystyle
  c_0=0.75

(Eq.9.2.56)

The particular solution will be:


   \displaystyle
  y_{{t_0},p}=0.5x+0.75

(Eq.9.2.57)

Superimpose equation 9.2.47 and 9.2.57:


   \displaystyle
  y_{{t_0}}=d_1e^{2 x}+d_2e^{ x}+0.5x+0.75

(Eq.9.2.58)

Take the first derivative of equation 9.2.58:


   \displaystyle
  y_{{t_0}}^\prime=2d_1e^{2 x}+d_2e^{ x}+0.5

(Eq.9.2.59)

Use equation 9.1.3 to solve for the unknowns:


   \displaystyle
  1=d_1e^{-1.5}+d_2e^{-0.75}-0.375+0.75

(Eq.9.2.60)


   \displaystyle
  0=2d_1e^{-1.5}+d_2e^{-0.75}+0.5

(Eq.9.2.61)

Therefore,


   \displaystyle
  d_1=-5.0419

(Eq.9.2.62)


   \displaystyle
  d_2=3.7048

(Eq.9.2.63)

The final solution for Taylor series expansion with  n=0 \,:


   \displaystyle
  y_{{t_0}}=-5.0419e^{2 x}+3.7048e^{ x}+0.5x+0.75

(Eq.9.2.64)
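Eq.9.2.64 can be checked the same way; in addition, because the derivatives of the exponentials are available in closed form, the ODE residual y'' - 3y' + 2y - x can be evaluated directly (a sketch with illustrative names):

```python
import math

d1, d2 = -5.0419, 3.7048  # rounded constants from Eq.9.2.62 and Eq.9.2.63

def y(x):   return d1 * math.exp(2 * x) + d2 * math.exp(x) + 0.5 * x + 0.75
def dy(x):  return 2 * d1 * math.exp(2 * x) + d2 * math.exp(x) + 0.5
def d2y(x): return 4 * d1 * math.exp(2 * x) + d2 * math.exp(x)

# Initial conditions (Eq.9.1.3), up to rounding in d1 and d2
assert abs(y(-0.75) - 1) < 1e-3
assert abs(dy(-0.75)) < 1e-3
# The residual for the right-hand side r = x is identically zero in exact
# arithmetic; in floating point it stays at roundoff level
for x in (-0.75, 0.0, 1.5, 3.0):
    assert abs(d2y(x) - 3 * dy(x) + 2 * y(x) - x) < 1e-6
print("Eq.9.2.64 satisfies the ODE and the initial conditions")
```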

For the Taylor series expansion with  n=1 \,.


   \displaystyle
  y_{t_1}''-3y_{t_1}'+2y_{t_1}= r_{t_1}(x)

(Eq.9.2.65)


   \displaystyle
  y_{t_1}=y_{{t_1},h}+y_{{t_1},p}

(Eq.9.2.66)

The homogeneous solution will be the same as equation 9.2.9:


   \displaystyle
  y_{{t_1},h}=d_1e^{2 x}+d_2e^{ x}

(Eq.9.2.67)

The particular solution must satisfy the following:


   \displaystyle
  y_{{t_1},p}''-3y_{{t_1},p}'+2y_{{t_1},p}=x-\frac{x^2}{2}

(Eq.9.2.68)

Using the method of undetermined coefficients, the particular solution will take the form of equation 9.2.11:


   \displaystyle
  y_{{t_1},p}=c_0x^0+c_1x^1+c_2x^2

(Eq.9.2.69)

Take the first and second derivative of equation 9.2.69:


   \displaystyle
  y_{{t_1},p}^\prime=c_1+2c_2x

(Eq.9.2.70)


   \displaystyle
  y_{{t_1},p}^{\prime\prime}=2c_2

(Eq.9.2.71)

Substitute equations 9.2.69, 9.2.70, and 9.2.71 into equation 9.2.68:


   \displaystyle
  2c_2-3c_1-6c_2x+2c_0+2c_1x+2c_2x^2=x-\frac{x^2}{2}

(Eq.9.2.72)

Solve the coefficients by setting like terms equal to one another.

For  x^0 \,:


   \displaystyle
  2c_2-3c_1+2c_0=0

(Eq.9.2.73)

For  x^1 \,:


   \displaystyle
  -6c_2+2c_1=1

(Eq.9.2.74)

For  x^2 \,:


   \displaystyle
  2c_2=-\frac{1}{2}

(Eq.9.2.75)

Therefore,


   \displaystyle
  c_2=-0.25

(Eq.9.2.76)


   \displaystyle
  c_1=-0.25

(Eq.9.2.77)


   \displaystyle
  c_0=-0.125

(Eq.9.2.78)

The particular solution will be:


   \displaystyle
  y_{{t_1},p}=-0.25x^2-0.25x-0.125

(Eq.9.2.79)

Superimpose equation 9.2.67 and 9.2.79:


   \displaystyle
  y_{{t_1}}=d_1e^{2 x}+d_2e^{ x}-0.25x^2-0.25x-0.125

(Eq.9.2.80)

Take the first derivative of equation 9.2.80:


   \displaystyle
  y_{{t_1}}^\prime=2d_1e^{2 x}+d_2e^{ x}-0.5x-0.25

(Eq.9.2.81)

Use equation 9.1.3 to solve for the unknowns:


   \displaystyle
  1=d_1e^{-1.5}+d_2e^{-0.75}-0.1406+0.1875-0.125

(Eq.9.2.82)


   \displaystyle
  0=2d_1e^{-1.5}+d_2e^{-0.75}+0.375-0.25

(Eq.9.2.83)

Therefore,


   \displaystyle
  d_1=-5.3920

(Eq.9.2.84)


   \displaystyle
  d_2=4.8294

(Eq.9.2.85)

The final solution for Taylor series expansion with  n=1 \,:


   \displaystyle
  y_{{t_1}}=-5.3920e^{2 x}+4.8294e^{ x}-0.25x^2-0.25x-0.125

(Eq.9.2.86)


(Figure: two panels, 'Solution when n=0' and 'Solution when n=1', comparing the numerical solution with the projection-based and Taylor-based solutions; original image: 5 9 3.jpg)

Below is the MATLAB code used to define the ODE of equation 9.1.1 with the excitation of equation 9.1.2:

function pdot = ODE45(t,p)
pdot = zeros(2,1);
pdot(1) = p(2);
pdot(2) = 3*p(2) - 2*p(1) + log(1+t);  % natural log, matching r(x) = log(1+x)
end
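For readers without MATLAB, a pure-Python equivalent is sketched below (a fixed-step RK4 integrator in place of the adaptive ode45; all names are illustrative). As a self-test, it integrates the n = 0 surrogate problem y'' - 3y' + 2y = 0.5711, whose closed form is Eq.9.2.24, and compares the two at x = 0:

```python
import math

def f(x, p, r):
    """First-order system for y'' = 3y' - 2y + r(x), with p = (y, y')."""
    return (p[1], 3 * p[1] - 2 * p[0] + r(x))

def rk4(r, x0, p0, x1, steps=2000):
    """Classic fixed-step fourth-order Runge-Kutta integrator."""
    h = (x1 - x0) / steps
    x, (y, v) = x0, p0
    for _ in range(steps):
        k1 = f(x, (y, v), r)
        k2 = f(x + h / 2, (y + h / 2 * k1[0], v + h / 2 * k1[1]), r)
        k3 = f(x + h / 2, (y + h / 2 * k2[0], v + h / 2 * k2[1]), r)
        k4 = f(x + h, (y + h * k3[0], v + h * k3[1]), r)
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        x += h
    return y, v

y_num, _ = rk4(lambda x: 0.5711, -0.75, (1.0, 0.0), 0.0)
y_closed = -3.2019 + 3.0249 + 0.2856  # Eq.9.2.24 evaluated at x = 0
assert abs(y_num - y_closed) < 1e-3   # agreement up to rounded constants
print("RK4 agrees with Eq.9.2.24 at x = 0")
```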


Below is the MATLAB code used to generate the above graph:

[t,p] = ode45('ODE45',[-0.75,3],[1 0]);
x=[-.75:0.01:3];
y0=-3.2019*exp(2.*x)+3.0249*exp(x)+0.2856;
y1=-5.0235*exp(2.*x)+4.1201*exp(x)+0.2956.*x+0.3964;
yt0=-5.0419*exp(2.*x)+3.7048*exp(x)+0.5.*x+0.75;
yt1=-5.3920*exp(2.*x)+4.8294*exp(x)-0.25.*x.^2-0.25.*x-0.125;
subplot(211)
plot(t,p(:,1),x,y0,x,yt0)
xlabel('x-axis');
ylabel('y-axis');
title('Solution when n=0')
subplot(212)
plot(t,p(:,1),x,y1,x,yt1)
xlabel('x-axis');
ylabel('y-axis');
title('Solution when n=1')


The solutions when  n=0 \,:

  • The Taylor series solution stays very close to the numerical solution, while the projection solution is noticeably less accurate.
  • The projection solution begins to diverge from the numerical solution at about  x=0.5 \,.

The solutions when  n=1 \,:

  • The projection solution stays very close to the numerical solution, while the Taylor series solution is noticeably less accurate.
  • The Taylor-series-based solution starts diverging after  x=1 \,, since the series has  R_c=1 \,.

Conclusion:

  • As the number of terms increases, the Taylor series solution becomes increasingly limited by the radius of convergence of the series.
  • The projection solution becomes more accurate as the number of terms increases, since the extra terms allow it to follow the numerical solution over the whole given x range.

Author & Proofreaders[edit]

Author: Egm4313.s12.team17.deaver.md 05:36, 30 March 2012 (UTC)
Proofreader: Egm4313.s12.team17.ying 09:41, 30 March 2012 (UTC)
Editor: Egm4313.s12.team17.deaver.md 05:36, 30 March 2012 (UTC)


Contributing Team Members[edit]

Team Member: Contributed / Proofread
Allan Axelrod: Problem 6 / Problems 4, 7, and 8
Michael Deaver: Problem 9 and formatting / Problems 2, 5, and 6
Max Hintz: Problems 2, 3, and 7 / (none listed)
Kelvin Li: Problems 4 and 8 / Problem 3
Thomas Wheeler: Problems 1 and 5 / Problem 4
Chen Ying: Formatting / Problems 1, 8, and 9