# Problem R5.1 - Radius of Convergence

## Part 1 Given

The following series:

 ${\displaystyle \displaystyle r(x)=\sum _{k=0}^{\infty }(k+1)kx^{k}}$ (Eq.1.1.1)

## Part 1 Problem Statement

Find ${\displaystyle R_{c}\,}$ for the following series.

## Part 1 Solution

Using the following formula:

 ${\displaystyle \displaystyle R_{c}=\left[\lim _{k\to \infty }\left|{\frac {d_{k+1}}{d_{k}}}\right|\right]^{-1}}$ (Eq.1.1.2)

Where for a power series:

 ${\displaystyle \displaystyle r(x)=\sum _{k=0}^{\infty }d_{k}x^{k}}$ (Eq.1.1.3)
 ${\displaystyle \displaystyle r(x)=\sum _{k=0}^{\infty }(k+1)kx^{k}}$ (Eq.1.1.4)

Where:

 ${\displaystyle \displaystyle d_{k}=(k+1)k}$ (Eq.1.1.5)
 ${\displaystyle \displaystyle d_{k+1}=(k+2)(k+1)}$ (Eq.1.1.6)

Plugging equations 1.1.5 and 1.1.6 into equation 1.1.2:

 ${\displaystyle \displaystyle R_{c}=\left[\lim _{k\to \infty }\left|{\frac {(k+2)(k+1)}{(k+1)k}}\right|\right]^{-1}=\left[\lim _{k\to \infty }\left|{\frac {(k+2)}{k}}\right|\right]^{-1}=1}$ (Eq.1.1.7)
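As a sanity check, the coefficient ratio can be evaluated numerically (a short Python sketch; the cutoff k = 10^6 is an arbitrary choice):

```python
# Quick numeric check (not part of the original solution): the coefficient
# ratio d_{k+1}/d_k = (k+2)(k+1) / ((k+1)k) should approach 1 as k grows,
# giving R_c = 1.
def d(k):
    return (k + 1) * k

ratio = d(10**6 + 1) / d(10**6)
R_c = 1 / ratio
print(ratio, R_c)  # both close to 1
```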

## Part 2 Given

 ${\displaystyle \displaystyle r(x)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{\gamma ^{k}}}x^{2k}}$ (Eq.1.2.1)
 ${\displaystyle \displaystyle \gamma =constant}$

## Part 2 Problem Statement

Find ${\displaystyle R_{c}\,}$ for the given series.

## Part 2 Solution

Since Eq.1.2.1 contains only even powers of ${\displaystyle x}$, substitute ${\displaystyle t=x^{2}}$ to obtain a standard power series in ${\displaystyle t}$:

 ${\displaystyle \displaystyle r=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{\gamma ^{k}}}t^{k}}$ (Eq.1.2.2)

Where:

 ${\displaystyle \displaystyle d_{k}={\frac {(-1)^{k}}{\gamma ^{k}}}}$ (Eq.1.2.3)
 ${\displaystyle \displaystyle d_{k+1}={\frac {(-1)^{k+1}}{\gamma ^{k+1}}}}$ (Eq.1.2.4)

Using the formula from Part 1, Eq.1.1.2, the radius of convergence in ${\displaystyle t}$ is:

 ${\displaystyle \displaystyle \left[\lim _{k\to \infty }\left|{\frac {d_{k+1}}{d_{k}}}\right|\right]^{-1}=\left[\lim _{k\to \infty }\left|{\frac {(-1)^{k+1}\gamma ^{k}}{(-1)^{k}\gamma ^{k+1}}}\right|\right]^{-1}=\left[{\frac {1}{|\gamma |}}\right]^{-1}=|\gamma |}$ (Eq.1.2.5)

The series therefore converges when ${\displaystyle |x^{2}|<|\gamma |}$, i.e. ${\displaystyle |x|<{\sqrt {|\gamma |}}}$, so:

 ${\displaystyle \displaystyle R_{c}={\sqrt {|\gamma |}}}$ (Eq.1.2.6)
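A numeric illustration with the assumed sample value γ = 4 (so √|γ| = 2): the magnitude of the term ratio is x²/|γ|, below 1 inside the radius and above 1 outside it.

```python
# Term-ratio check for gamma = 4 (an assumed sample value): the series is
# geometric with ratio -x^2/gamma, so it converges exactly for |x| < 2.
gamma = 4.0

def term(k, x):
    return (-1) ** k / gamma ** k * x ** (2 * k)

inside = abs(term(200, 1.9) / term(199, 1.9))   # 1.9^2 / 4 = 0.9025 < 1
outside = abs(term(200, 2.1) / term(199, 2.1))  # 2.1^2 / 4 = 1.1025 > 1
print(inside, outside)
```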

## Part 3 Given

 ${\displaystyle \displaystyle sinx}$ (Eq.1.3.1)
 ${\displaystyle \displaystyle {\hat {x}}=0}$

## Part 3 Problem Statement

Find ${\displaystyle R_{c}\,}$ for the given Taylor series.

## Part 3 Solution

Performing a Maclaurin series expansion:

 ${\displaystyle \displaystyle f(x)=sin(x)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}x^{1+2k}}{(1+2k)!}}}$ (Eq.1.3.2)

Since only odd powers of ${\displaystyle x}$ appear, apply the ratio test directly to successive terms of the series:

 ${\displaystyle \displaystyle \lim _{k\to \infty }\left|{\frac {(-1)^{k+1}x^{3+2k}/(3+2k)!}{(-1)^{k}x^{1+2k}/(1+2k)!}}\right|=\lim _{k\to \infty }{\frac {x^{2}}{(2k+2)(2k+3)}}=0}$ (Eq.1.3.3)

The ratio tends to ${\displaystyle 0<1}$ for every fixed ${\displaystyle x}$, so the series converges for all ${\displaystyle x}$:

 ${\displaystyle \displaystyle R_{c}=\infty }$ (Eq.1.3.4)
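The vanishing term ratio can be observed numerically (Python sketch; the sample point x = 100 is an arbitrary assumption):

```python
# The ratio of successive sin-series terms is x^2 / ((2k+2)(2k+3)); even for
# a large sample x it shrinks toward 0, which is why R_c is infinite.
x = 100.0  # arbitrary sample point
ratios = [x ** 2 / ((2 * k + 2) * (2 * k + 3)) for k in (10, 100, 1000)]
print(ratios)  # strictly decreasing toward 0
```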

## Part 4 Given

 ${\displaystyle \displaystyle \log(1+x)}$ (Eq.1.4.1)
 ${\displaystyle \displaystyle {\hat {x}}=0}$

## Part 4 Problem Statement

Find ${\displaystyle R_{c}\,}$ for the given Taylor series.

## Part 4 Solution

Performing a Maclaurin series expansion:

 ${\displaystyle \displaystyle f(x)=log(1+x)=\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}x^{k}}{k}}}$ (Eq.1.4.2)

Where:

 ${\displaystyle \displaystyle d_{k+1}={\frac {(-1)^{k}}{k+1}}}$ (Eq.1.4.3)
 ${\displaystyle \displaystyle d_{k}={\frac {(-1)^{k-1}}{k}}}$ (Eq.1.4.4)

Plugging Eq.1.4.3 and Eq.1.4.4 into Eq.1.1.2:

 ${\displaystyle \displaystyle R_{c}=\left[\lim _{k\to \infty }\left|{\frac {(-1)^{k}}{k+1}}\cdot {\frac {k}{(-1)^{k-1}}}\right|\right]^{-1}=\left[\lim _{k\to \infty }{\frac {k}{k+1}}\right]^{-1}=1}$ (Eq.1.4.5)
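Inside the radius R_c = 1 the partial sums reproduce log(1+x); a short Python check at the assumed sample point x = 0.5 (natural logarithm):

```python
import math

# Partial sum of sum_{k>=1} (-1)^(k-1) x^k / k at x = 0.5, compared with
# the natural log it should converge to inside |x| < 1.
x = 0.5
s = sum((-1) ** (k - 1) * x ** k / k for k in range(1, 60))
print(s, math.log(1 + x))  # the two values agree closely
```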

## Part 5 Given

 ${\displaystyle \displaystyle \log(1+x)}$ (Eq.1.5.1)
 ${\displaystyle \displaystyle {\hat {x}}=1}$

## Part 5 Problem Statement

Find ${\displaystyle R_{c}\,}$ for the given Taylor series.

## Part 5 Solution

Performing a Taylor series expansion about ${\displaystyle {\hat {x}}}$, results in the following series:

 ${\displaystyle \displaystyle f(x)=\log(2)+{\frac {1}{2}}(x-1)-{\frac {1}{8}}(x-1)^{2}+{\frac {1}{24}}(x-1)^{3}-{\frac {1}{64}}(x-1)^{4}...}$ (Eq.1.5.2)

This simplifies to:

 ${\displaystyle \displaystyle f(x)=\log(2)+\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}}{2^{k}k}}(x-1)^{k}}$ (Eq.1.5.3)

Therefore ${\displaystyle d_{k}}$ and ${\displaystyle d_{k+1}}$ are:

 ${\displaystyle \displaystyle d_{k+1}={\frac {(-1)^{k}}{2^{k+1}(k+1)}}}$ (Eq.1.5.4)
 ${\displaystyle \displaystyle d_{k}={\frac {(-1)^{k-1}}{2^{k}k}}}$ (Eq.1.5.5)

Plugging Eq.1.5.4 and Eq.1.5.5 into Eq.1.1.2:

 ${\displaystyle \displaystyle R_{c}=\left[\lim _{k\to \infty }\left|{\frac {(-1)^{k}}{2^{k+1}(k+1)}}\cdot {\frac {2^{k}k}{(-1)^{k-1}}}\right|\right]^{-1}=\left[\lim _{k\to \infty }{\frac {k}{2(k+1)}}\right]^{-1}=2}$ (Eq.1.5.6)
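Inside the radius R_c = 2 about x̂ = 1 the series reproduces log(1+x); a Python check at the assumed sample point x = 2, where the sum should equal log 3 (natural logarithm):

```python
import math

# log(2) + sum_{k>=1} (-1)^(k-1) (x-1)^k / (2^k k) evaluated at x = 2,
# which lies inside |x - 1| < 2; the result should match log(1 + x) = log 3.
x = 2.0
s = math.log(2) + sum((-1) ** (k - 1) * (x - 1) ** k / (2 ** k * k)
                      for k in range(1, 80))
print(s, math.log(3))  # the two values agree closely
```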

## Author & Proofreaders

Author:Egm4313.s12.team17.wheeler.tw 05:06, 23 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.ying 02:32, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 02:32, 30 March 2012 (UTC)

# Problem R5.2 - Linear Independent Testing

## Part 1 Given

 ${\displaystyle \displaystyle f(x)=x^{2}}$ (Eq.2.1.1)
 ${\displaystyle \displaystyle g(x)=x^{4}}$ (Eq.2.1.2)
 ${\displaystyle \displaystyle f(x)=cosx}$ (Eq.2.1.3)
 ${\displaystyle \displaystyle g(x)=sin3x}$ (Eq.2.1.4)

## Part 1 Problem Statement

Determine whether the following pairs of functions are linearly independent using the Wronskian.

## Part 1 solution

The Wronskian of the two functions is written as

 ${\displaystyle \displaystyle W(f,g)=f(x)g'(x)-g(x)f'(x)}$ (Eq.2.1.5)

which is the determinant of the 2x2 matrix whose first row holds the functions and whose second row holds their first derivatives. If W(f,g) is identically zero, the two functions are linearly dependent; otherwise they are linearly independent.

Using the first set of functions, Eq.2.1.1 and Eq.2.1.2, the corresponding derivatives are

 ${\displaystyle \displaystyle f'(x)=2x}$ (Eq.2.1.6)
 ${\displaystyle \displaystyle g'(x)=4x^{3}}$ (Eq.2.1.7)

and placing Eq.2.1.1, Eq.2.1.2, Eq.2.1.6 and Eq.2.1.7 into Eq.2.1.5 we get

 ${\displaystyle \displaystyle (x^{2})(4x^{3})-(x^{4})(2x)=4x^{5}-2x^{5}=2x^{5}\neq 0}$ (Eq.2.1.8)
 Therefore Eq.2.1.1 and Eq.2.1.2 are Linearly Independent.

Using the second set of functions, Eq.2.1.3 and Eq.2.1.4, the corresponding derivatives are

 ${\displaystyle \displaystyle f'(x)=-sin(x)}$ (Eq.2.1.9)

and

 ${\displaystyle \displaystyle g'(x)=3cos(3x)}$ (Eq.2.1.10)

Plugging Eq.2.1.3,Eq.2.1.4,Eq.2.1.9, and Eq.2.1.10 into Eq.2.1.5 gives:

 ${\displaystyle \displaystyle [cos(x)][3cos(3x)]-[sin(3x)][-sin(x)]=3cos(x)cos(3x)+sin(x)sin(3x)}$ (Eq.2.1.11)

This expression is not identically zero; for example, at ${\displaystyle x=0}$ it equals 3. So

 ${\displaystyle \displaystyle 3cos(x)cos(3x)+sin(x)sin(3x)\neq 0}$ (Eq.2.1.12)
 Thus the two functions are linearly independent.
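The Wronskian of the second pair can also be checked numerically (Python sketch); at x = 0 it equals 3, confirming it is not identically zero:

```python
import math

# W(f, g) = f g' - g f' for f = cos x, g = sin 3x, using f' = -sin x and
# g' = 3 cos 3x from Eq.2.1.9 and Eq.2.1.10.
def W(x):
    return math.cos(x) * 3 * math.cos(3 * x) - math.sin(3 * x) * (-math.sin(x))

print(W(0.0))  # 3.0
```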

## Part 2 Given

Same as part 1 given

## Part 2 Problem Statement

Verify Eq.2.1.1, Eq.2.1.2, Eq.2.1.3, and Eq.2.1.4 are linearly independent using the Gramian over the interval [a,b] = [-1,1].

## Part 2 Solution

The gramian is given by

 ${\displaystyle \displaystyle \Gamma (f,g)={\begin{bmatrix}\left\langle f,f\right\rangle \left\langle f,g\right\rangle \\\left\langle g,f\right\rangle \left\langle g,g\right\rangle \end{bmatrix}}}$ (Eq.2.2.1)

Where

 ${\displaystyle \displaystyle \left\langle f,g\right\rangle =\int \limits _{a}^{b}f(x)g(x)dx}$ (Eq.2.2.2)

If the Gramian, Eq.2.2.1

 ${\displaystyle \displaystyle \Gamma (f,g)\neq 0}$ (Eq.2.2.3)

Then the two functions are linearly independent

Now to calculate the matrix values:

 ${\displaystyle \displaystyle \left\langle f,f\right\rangle =\int \limits _{-1}^{1}x^{4}dx}$ (Eq.2.2.4)

Which then equals

 ${\displaystyle \displaystyle {\frac {1}{5}}[(1)^{5}-(-1)^{5}]={\frac {2}{5}}}$ (Eq.2.2.5)

The other terms are as follows:

 ${\displaystyle \displaystyle \left\langle f,g\right\rangle =\int \limits _{-1}^{1}x^{6}dx={\frac {2}{7}}}$ (Eq.2.2.6)
 ${\displaystyle \displaystyle \left\langle g,f\right\rangle =\int \limits _{-1}^{1}x^{6}dx={\frac {2}{7}}}$ (Eq.2.2.7)
 ${\displaystyle \displaystyle \left\langle g,g\right\rangle =\int \limits _{-1}^{1}x^{8}dx={\frac {2}{9}}}$ (Eq.2.2.8)

Entering Eq.2.2.5, Eq.2.2.6, Eq.2.2.7, Eq.2.2.8 into the Gramian Matrix, Eq.2.2.1 results in

 ${\displaystyle \displaystyle \Gamma (f,g)={\begin{bmatrix}\left({\frac {2}{5}}\right)&\left({\frac {2}{7}}\right)\\\left({\frac {2}{7}}\right)&\left({\frac {2}{9}}\right)\end{bmatrix}}}$ (Eq.2.2.9)

Eq.2.2.9 then results in:

 ${\displaystyle \displaystyle {\frac {2}{5}}\cdot {\frac {2}{9}}-{\frac {2}{7}}\cdot {\frac {2}{7}}={\frac {4}{45}}-{\frac {4}{49}}={\frac {16}{2205}}\neq 0}$ (Eq.2.2.10)
 Thus the two functions are linearly independent.
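The polynomial Gramian determinant can be confirmed with exact rational arithmetic (Python sketch using the standard fractions module):

```python
from fractions import Fraction

# Entries of the Gramian for f = x^2, g = x^4 over [-1, 1]:
# the integral of x^n over [-1, 1] is 2/(n+1) for even n.
ff, fg, gg = Fraction(2, 5), Fraction(2, 7), Fraction(2, 9)
det = ff * gg - fg * fg
print(det)  # 16/2205
```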

For the second pair of functions, Eq.2.1.3 and Eq.2.1.4, the following scalar product values are calculated using Eq.2.2.2.

 ${\displaystyle \displaystyle \left\langle f,f\right\rangle =\int \limits _{-1}^{1}cos(x)*cos(x)dx=1+{\frac {\sin 2}{2}}\approx 1.4546}$ (Eq.2.2.11)
 ${\displaystyle \displaystyle \left\langle f,g\right\rangle =\int \limits _{-1}^{1}cos(x)*sin(3x)dx=0}$ (Eq.2.2.12)
 ${\displaystyle \displaystyle \left\langle g,f\right\rangle =\int \limits _{-1}^{1}sin(3x)*cos(x)dx=0}$ (Eq.2.2.13)
 ${\displaystyle \displaystyle \left\langle g,g\right\rangle =\int \limits _{-1}^{1}sin^{2}(3x)dx=1-{\frac {\sin 6}{6}}\approx 1.0466}$ (Eq.2.2.14)

Entering Eq.2.2.11, Eq.2.2.12, Eq.2.2.13, and Eq.2.2.14 into Eq.2.2.1 results in

 ${\displaystyle \displaystyle \Gamma (f,g)={\begin{bmatrix}1.4546&0\\0&1.0466\end{bmatrix}}}$ (Eq.2.2.15)

Which finally becomes

 ${\displaystyle \displaystyle (1.4546)(1.0466)-(0)(0)\approx 1.522\neq 0}$ (Eq.2.2.16)
 Thus the two functions are linearly independent.
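The trigonometric entries can be re-evaluated numerically (Python midpoint-rule sketch; the number of subintervals is an arbitrary choice). In closed form, ⟨f,f⟩ = 1 + sin(2)/2 and ⟨g,g⟩ = 1 − sin(6)/6.

```python
import math

# Midpoint-rule evaluation of the Gramian entries over [-1, 1].
def integral(f, a=-1.0, b=1.0, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

ff = integral(lambda x: math.cos(x) ** 2)               # 1 + sin(2)/2
fg = integral(lambda x: math.cos(x) * math.sin(3 * x))  # odd integrand: 0
gg = integral(lambda x: math.sin(3 * x) ** 2)           # 1 - sin(6)/6
print(ff, fg, gg, ff * gg - fg * fg)
```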

## Author & Proofreaders

Author:Egm4313.s12.team17.hintz 19:09, 23 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 06:32, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 03:58, 30 March 2012 (UTC)

# Problem R5.3 - Linear Independent Test using Gramian

## Given

 ${\displaystyle \displaystyle \mathbf {b} _{1}=2\mathbf {e} _{1}+7\mathbf {e} _{2}}$ (Eq.3.1)
 ${\displaystyle \displaystyle \mathbf {b} _{2}=1.5\mathbf {e} _{1}+3\mathbf {e} _{2}}$ (Eq.3.2)

## Problem Statement

Verify that ${\displaystyle b_{1}\,}$ and ${\displaystyle b_{2}\,}$ are linearly independent using the Gramian

## Solution

The Gramian is given as:

 ${\displaystyle \displaystyle \Gamma (b_{1},b_{2})={\begin{bmatrix}\left\langle b_{1},b_{1}\right\rangle \left\langle b_{1},b_{2}\right\rangle \\\left\langle b_{2},b_{1}\right\rangle \left\langle b_{2},b_{2}\right\rangle \end{bmatrix}}}$ (Eq.3.3)

According to (3) on p.8-9, the scalar product of two vectors is their dot product:

 ${\displaystyle \displaystyle \left\langle b_{i},b_{j}\right\rangle =(b_{i}\cdot b_{j})}$ (Eq.3.4)

Calculating the dot products using Eq.3.1 and Eq.3.2:

 ${\displaystyle \displaystyle \left\langle b_{1},b_{1}\right\rangle =4+49=53}$ (Eq.3.5)
 ${\displaystyle \displaystyle \left\langle b_{1},b_{2}\right\rangle =3+21=24}$ (Eq.3.6)
 ${\displaystyle \displaystyle \left\langle b_{2},b_{1}\right\rangle =3+21=24}$ (Eq.3.7)
 ${\displaystyle \displaystyle \left\langle b_{2},b_{2}\right\rangle =2.25+9=11.25}$ (Eq.3.8)

Plugging Eq.3.5, Eq.3.6, Eq.3.7, and Eq.3.8 into Eq.3.3 and finding the determinant gives:

 ${\displaystyle \displaystyle 596.25-576=20.25=\Gamma \neq 0}$ (Eq.3.9)
 Thus the two vectors are linearly independent.
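The determinant can be double-checked from the e1, e2 components (Python sketch):

```python
# Gramian determinant for b1 = (2, 7), b2 = (1.5, 3) in e1, e2 components.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

b1, b2 = (2.0, 7.0), (1.5, 3.0)
det = dot(b1, b1) * dot(b2, b2) - dot(b1, b2) * dot(b2, b1)
print(det)  # 53 * 11.25 - 24 * 24 = 20.25
```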

## Author & Proofreaders

Author:Egm4313.s12.team17.hintz 19:48, 21 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.Li 17:21, 25 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 04:46, 30 March 2012 (UTC)

# Problem R5.4 - Particular Solution Verification

## Given

 ${\displaystyle \displaystyle y_{p}(x)=\sum _{i=0}^{n}y_{p,i}(x)}$ (Eq.4.1)
 ${\displaystyle \displaystyle y^{\prime \prime }+p(x)y'+q(x)y=r(x)}$ (Eq.4.2)
 ${\displaystyle \displaystyle r(x)=\sum _{i=0}^{n}r_{i}(x)}$ (Eq.4.3)

## Problem Statement

Show that Eq. 4.1 is indeed the overall particular solution of the L2-ODE-CC, Eq. 4.2.

Discuss the choice of ${\displaystyle y_{p}(x)\,}$ when ${\displaystyle r(x)=kcos(\omega x)\,}$ or ${\displaystyle r(x)=sin(\omega x)\,}$.

## Solution

Suppose that for each simple excitation ${\displaystyle r_{i}(x)}$ in Eq. 4.3 a particular solution ${\displaystyle y_{p,i}(x)}$ is known; that is, each ${\displaystyle y_{p,i}}$ satisfies

 ${\displaystyle \displaystyle y_{p,i}''+p(x)y'_{p,i}+q(x)y_{p,i}=r_{i}(x)}$ (Eq.4.4)

Expanding Eq.4.1:

 ${\displaystyle \displaystyle y_{p}(x)=\sum _{i=0}^{n}y_{p,i}(x)=y_{p,0}(x)+y_{p,1}(x)+\cdots +y_{p,n}(x)}$ (Eq.4.5)

Because differentiation is linear, substituting Eq.4.5 into the left-hand side of Eq.4.2 and using Eq.4.4 term by term gives:

 ${\displaystyle \displaystyle y_{p}''+p(x)y_{p}'+q(x)y_{p}=\sum _{i=0}^{n}\left(y_{p,i}''+p(x)y'_{p,i}+q(x)y_{p,i}\right)=\sum _{i=0}^{n}r_{i}(x)=r(x)}$ (Eq.4.6)

 Therefore, Eq.4.1 is indeed the overall particular solution to Eq.4.2 with the excitation Eq.4.3.
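The superposition argument can be illustrated with an assumed concrete example (not from the assignment): for y'' + y' + y = r, the excitation r1 = e^x has particular solution y_p1 = e^x/3 and r2 = 1 has y_p2 = 1, so their sum solves the summed excitation.

```python
import math

# Evaluate the left-hand side y'' + y' + y on y = e^x/3 + 1 and compare
# with r1 + r2 = e^x + 1 at a sample point (x = 0.7, an arbitrary choice).
x = 0.7
y = math.exp(x) / 3 + 1        # y_p1 + y_p2
yp = math.exp(x) / 3           # first derivative
ypp = math.exp(x) / 3          # second derivative
residual = (ypp + yp + y) - (math.exp(x) + 1)
print(residual)  # 0 up to rounding
```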

Explanation

When the excitation to the equation ${\displaystyle y^{\prime \prime }+p(x)y'+q(x)y=r(x)}$ is:

 ${\displaystyle \displaystyle r(x)=kcoswx}$

We might first try a particular solution of the form:

 ${\displaystyle \displaystyle y_{p}(x)=Acoswx}$

However, if we take the derivatives of this guess and substitute back into the equation, we have:

 ${\displaystyle \displaystyle -w^{2}Acos(wx)-wAsin(wx)p(x)+Acos(wx)q(x)=kcoswx}$

Notice that a ${\displaystyle sinwx}$ term has appeared, carrying the same coefficient ${\displaystyle A}$. Equating the sine coefficients on both sides forces ${\displaystyle A=0}$, but then the cosine terms drop out as well and the equation cannot be satisfied. The single-term guess is therefore too restrictive: a sine term must be added to ${\displaystyle y_{p}(x)}$. So, the particular solution would be:

 ${\displaystyle \displaystyle y_{p}(x)=Acoswx+Bsinwx}$

In conclusion, if the excitation ${\displaystyle r(x)}$ includes a sine or cosine term, the particular solution must be in the form of:

 ${\displaystyle \displaystyle y_{p}(x)=Mcoswx+Nsinwx}$

where 'N' and 'M' are constants.

## Author & Proofreaders

Author:Egm4313.s12.team17.Li 21:32, 26 March 2012 (UTC)
Proofreader 1:Egm4313.s12.team17.wheeler.tw 12:15, 28 March 2012 (UTC)
Proofreader 2:Egm4313.s12.team17.axelrod.a 17:23, 29 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 05:42, 30 March 2012 (UTC)

# Problem R5.5 - L2-ODE-CC with Linear Independent Testing

## Part 1 Given

 ${\displaystyle \displaystyle cos(7x)}$ (Eq.5.1.1)
 ${\displaystyle \displaystyle sin(7x)}$ (Eq.5.1.2)

## Part 1 Problem Statement

Show that ${\displaystyle cos(7x)\,}$ and ${\displaystyle sin(7x)\,}$ are linearly independent using the Wronskian and the Gramian (integrate over 1 period).

## Part 1 Solution

Wronskian Method
By Definition

 ${\displaystyle \displaystyle W(f,g):=\det {\begin{bmatrix}f&g\\f'&g'\end{bmatrix}}=fg'-gf'}$ (Eq.5.1.3)

In this problem:

 ${\displaystyle \displaystyle f(x)=\cos 7x}$ (Eq.5.1.4)
 ${\displaystyle \displaystyle g(x)=\sin 7x}$ (Eq.5.1.5)

Taking the derivative of Eq.5.1.4 and Eq.5.1.5 respectively:

 ${\displaystyle \displaystyle f'(x)=-7\sin 7x}$ (Eq.5.1.6)
 ${\displaystyle \displaystyle g'(x)=7\cos 7x}$ (Eq.5.1.7)

Plugging equations Eq.5.1.4 through Eq.5.1.7 into Eq.5.1.3

 ${\displaystyle \displaystyle W(f,g)=(\cos 7x)(7\cos 7x)-(\sin 7x)(-7\sin 7x)=7cos^{2}7x+7sin^{2}7x=7\neq 0}$ (Eq.5.1.8)
 Thus the functions are linearly independent.

Gramian Method
By definition:

 ${\displaystyle \displaystyle \Gamma (f,g):=\det {\begin{bmatrix}\langle f,f\rangle &\langle f,g\rangle \\\langle g,f\rangle &\langle g,g\rangle \end{bmatrix}}}$ (Eq.5.1.9)

Where

 ${\displaystyle \displaystyle \langle f,g\rangle :=\int _{a}^{b}f(x)g(x)\,dx}$ (Eq.5.1.10)

In this problem:

 ${\displaystyle \displaystyle f(x)=\cos 7x}$ (Eq.5.1.11)
 ${\displaystyle \displaystyle g(x)=\sin 7x}$ (Eq.5.1.12)
 ${\displaystyle \displaystyle T={\frac {2\pi }{7}}\approx 0.8976}$ (Eq.5.1.13)

Applying Eqs.5.1.11 through Eq.5.1.13 to Eq.5.1.10:

 ${\displaystyle \displaystyle \langle f,f\rangle :=\int _{0}^{0.8976}f(x)f(x)\,dx=\int _{0}^{0.8976}\cos ^{2}7x\,dx={\frac {\pi }{7}}\approx 0.449}$ (Eq.5.1.14)
 ${\displaystyle \displaystyle \langle f,g\rangle :=\int _{0}^{0.8976}f(x)g(x)\,dx=\int _{0}^{0.8976}\cos 7x\sin 7x\,dx=0}$ (Eq.5.1.15)
 ${\displaystyle \displaystyle \langle g,f\rangle :=\int _{0}^{0.8976}f(x)g(x)\,dx=\int _{0}^{0.8976}\cos 7x\sin 7x\,dx=0}$ (Eq.5.1.16)
 ${\displaystyle \displaystyle \langle g,g\rangle :=\int _{0}^{0.8976}g(x)g(x)\,dx=\int _{0}^{0.8976}\sin ^{2}7x\,dx={\frac {\pi }{7}}\approx 0.449}$ (Eq.5.1.17)

Plugging in values from Eq.5.1.14 to Eq.5.1.17 into Eq.5.1.9:

 ${\displaystyle \displaystyle \Gamma (f,g):=\det {\begin{bmatrix}\langle f,f\rangle &\langle f,g\rangle \\\langle g,f\rangle &\langle g,g\rangle \end{bmatrix}}=\det {\begin{bmatrix}{\frac {\pi }{7}}&0\\0&{\frac {\pi }{7}}\end{bmatrix}}={\frac {\pi ^{2}}{49}}\neq 0}$ (Eq.5.1.18)
 Thus the functions are linearly independent.

## Part 2 Given

Same as Part 1 Given

## Part 2 Problem Statement

Find 2 equations for the two unknowns M,N, and solve for M,N.

## Part 2 Solution

The ODE to be solved is:

 ${\displaystyle \displaystyle y''-3y'-10y=3\cos 7x}$ (Eq.5.2.1)

The particular solution will take the following form:

 ${\displaystyle \displaystyle y_{p}(x)=M\cos 7x+N\sin 7x}$ (Eq.5.2.2)

Take the first and second derivatives of Eq.5.2.2:

 ${\displaystyle \displaystyle y'_{p}(x)=-7M\sin 7x+7N\cos 7x}$ (Eq.5.2.3)
 ${\displaystyle \displaystyle y''_{p}(x)=-49M\cos 7x-49N\sin 7x}$ (Eq.5.2.4)

Plugging Eq.5.2.2 and Eq.5.2.3 into Eq.5.2.1:

 ${\displaystyle \displaystyle -49M\cos 7x-49N\sin 7x-3(-7M\sin 7x+7N\cos 7x)-10(M\cos 7x+N\sin 7x)=3\cos 7x}$ (Eq.5.2.5)
 ${\displaystyle \displaystyle -49M\cos 7x-49N\sin 7x+21M\sin 7x-21N\cos 7x-10M\cos 7x-10N\sin 7x=3\cos 7x}$ (Eq.5.2.6)

By equating like terms, the two equations for solving for M and N are:

 ${\displaystyle \displaystyle -49M-21N-10M=3}$ (Eq.5.2.7)
 ${\displaystyle \displaystyle -49N+21M-10N=0}$ (Eq.5.2.8)

Solving the system of equations created by Eq.5.2.7 and Eq.5.2.8:

 ${\displaystyle \displaystyle M=-0.045}$ (Eq.5.2.9)
 ${\displaystyle \displaystyle N=-0.016}$ (Eq.5.2.10)
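Solving the system Eq.5.2.7 and Eq.5.2.8 can be reproduced with Cramer's rule (Python sketch):

```python
# Coefficient system from equating cos and sin terms:
# -59 M - 21 N = 3  and  21 M - 59 N = 0.
a11, a12, r1 = -59.0, -21.0, 3.0
a21, a22, r2 = 21.0, -59.0, 0.0
det = a11 * a22 - a12 * a21          # 3481 + 441 = 3922
M = (r1 * a22 - a12 * r2) / det      # -177/3922
N = (a11 * r2 - r1 * a21) / det      # -63/3922
print(M, N)  # approximately -0.0451 and -0.0161
```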

## Part 3 Given

Initial conditions

 ${\displaystyle \displaystyle y(0)=1}$
 ${\displaystyle \displaystyle y'(0)=0}$

## Part 3 Problem Statement

Find the overall solution ${\displaystyle y(x)}$ and plot the solution over 3 periods

## Part 3 Solution

For the overall solution:

 ${\displaystyle \displaystyle y(x)=y_{p}(x)+y_{h}(x)}$ (Eq.5.3.1)

To find the homogeneous solution, solve:

 ${\displaystyle \displaystyle y''-3y'-10y=0}$ (Eq.5.3.2)

The characteristic equation of Eq.5.3.2 is:

 ${\displaystyle \displaystyle \lambda ^{2}-3\lambda -10=0}$ (Eq.5.3.3)

To solve the characteristic equation, apply the quadratic formula:

 ${\displaystyle \displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}}$ (Eq.5.3.4)
 ${\displaystyle \displaystyle \lambda _{1,2}={\frac {-(-3)\pm {\sqrt {(-3)^{2}-4(1)(-10)}}}{2(1)}}={\frac {3\pm 7}{2}}}$ (Eq.5.3.5)
 ${\displaystyle \displaystyle \lambda _{1}=5}$ (Eq.5.3.6)
 ${\displaystyle \displaystyle \lambda _{2}=-2}$ (Eq.5.3.7)

Therefore the homogeneous solution takes the following form:

 ${\displaystyle \displaystyle y_{h}(x)=c_{1}e^{5x}+c_{2}e^{-2x}}$ (Eq.5.3.8)

Taking the first derivative of Eq.5.3.8:

 ${\displaystyle \displaystyle y'_{h}(x)=5c_{1}e^{5x}-2c_{2}e^{-2x}}$ (Eq.5.3.9)

The initial conditions apply to the overall solution ${\displaystyle y(x)=y_{p}(x)+y_{h}(x)}$. With the particular solution from Part 2, the overall solution and its derivative are:

 ${\displaystyle \displaystyle y(x)=-0.045\cos 7x-0.016\sin 7x+c_{1}e^{5x}+c_{2}e^{-2x}}$ (Eq.5.3.10)
 ${\displaystyle \displaystyle y'(x)=0.315\sin 7x-0.112\cos 7x+5c_{1}e^{5x}-2c_{2}e^{-2x}}$ (Eq.5.3.11)

Applying ${\displaystyle y(0)=1}$ and ${\displaystyle y'(0)=0}$:

 ${\displaystyle \displaystyle 1=-0.045+c_{1}+c_{2}}$ (Eq.5.3.12)
 ${\displaystyle \displaystyle 0=-0.112+5c_{1}-2c_{2}}$ (Eq.5.3.13)

Solving Eq.5.3.12 and Eq.5.3.13:

 ${\displaystyle \displaystyle c_{1}\approx 0.3147}$ (Eq.5.3.14)
 ${\displaystyle \displaystyle c_{2}\approx 0.7305}$ (Eq.5.3.15)

Therefore from Eq.5.3.1 the overall solution is:

 ${\displaystyle \displaystyle y(x)=-0.045\cos 7x-0.016\sin 7x+0.3147e^{5x}+0.7305e^{-2x}}$ (Eq.5.3.16)

Graph

Matlab Code

clear all;
%R5.5 graph the complete solution over 3 periods
%y(x) = -0.045cos(7x) - 0.016sin(7x) + 0.3147e^(5x) + 0.7305e^(-2x)
x = 0:0.002:(3*2*pi/7); %Domain over 3 periods
y = -0.045*cos(7*x) - 0.016*sin(7*x) + 0.3147*exp(5*x) + 0.7305*exp(-2*x);
plot(x,y); %Plot the solution
title('R5.5 Graph');
xlabel('x');
ylabel('y(x)');
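A short Python sketch of the constants: imposing y(0) = 1 and y'(0) = 0 on the full solution y = y_p + y_h (so y_p contributes M at x = 0 and 7N to y'(0)) gives a 2x2 system for c1, c2.

```python
# c1 + c2 = 1 - M and 5 c1 - 2 c2 = -7 N, using the exact M, N from Part 2.
M, N = -177 / 3922, -63 / 3922
rhs1, rhs2 = 1 - M, -7 * N
c1 = (2 * rhs1 + rhs2) / 7   # add 2x the first equation to the second
c2 = rhs1 - c1
print(c1, c2)  # roughly 0.3147 and 0.7305
```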


## Author & Proofreaders

Author:Egm4313.s12.team17.wheeler.tw 05:10, 23 February 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 09:25, 27 February 2012 (UTC)
Editor:Egm4313.s12.team17.ying 06:25, 30 February 2012 (UTC)

# Problem R5.6 - Solving for the Unknown Coefficients

## Given

 ${\displaystyle \displaystyle y_{p}=xe^{-2x}(Mcos(3x)+Nsin(3x))}$ (Eq.6.1)
 ${\displaystyle \displaystyle y_{h}=e^{-2x}(Acos(3x)+Bsin(3x))}$ (Eq.6.2)
 ${\displaystyle \displaystyle y''+4y'+13y=2e^{-2x}cos(3x)}$ (Eq.6.3)

Initial conditions

 ${\displaystyle \displaystyle y(0)=1}$
 ${\displaystyle \displaystyle y'(0)=0}$

## Problem Statement

Determine the overall solution ${\displaystyle y(x)}$ that corresponds to the initial conditions.

Plot the general solution over 3 periods.

## Solution

Combining the given homogeneous and particular solutions gives us the overall form:

 ${\displaystyle \displaystyle y(x)=xe^{-2x}(Mcos(3x)+Nsin(3x))+e^{-2x}(Acos(3x)+Bsin(3x))}$ (Eq.6.4)

Applying the initial condition ${\displaystyle y(0)=1}$ to Eq.6.4 gives us:

 ${\displaystyle \displaystyle y(0)=1=0e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))+e^{-2(0)}(Acos(3(0))+Bsin(3(0)))}$ (Eq.6.5)

Which is simplified below:

 ${\displaystyle \displaystyle 1=Acos(0)+Bsin(0)}$ (Eq.6.6)
 ${\displaystyle \displaystyle 1=A(1)+B(0)}$ (Eq.6.7)
 ${\displaystyle \displaystyle 1=A}$ (Eq.6.8)

Taking the derivative of Eq.6.4 gives us

 ${\displaystyle \displaystyle y'(x)=e^{-2x}(Mcos(3x)+Nsin(3x))-2xe^{-2x}(Mcos(3x)+Nsin(3x))-3xe^{-2x}(Msin(3x))}$ ${\displaystyle \displaystyle +3xe^{-2x}(Ncos(3x))-2e^{-2x}(Acos(3x)+Bsin(3x))-3e^{-2x}(Asin(3x))+3e^{-2x}(Bcos(3x))}$ (Eq.6.9)

Inserting the initial condition y'(0)=0 into Eq.6.9 we get:

 ${\displaystyle \displaystyle y'(0)=0=e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))-2(0)e^{-2(0)}(Mcos(3(0))+Nsin(3(0)))-3(0)e^{-2(0)}(Msin(3(0)))}$ ${\displaystyle \displaystyle +3(0)e^{-2(0)}(Ncos(3(0)))-2e^{-2(0)}(Acos(3(0))+Bsin(3(0)))-3e^{-2(0)}(Asin(3(0)))+3e^{-2(0)}(Bcos(3(0)))}$ (Eq.6.10)

Which is further simplified to:

 ${\displaystyle \displaystyle 0=(M)-2(A)+3(B)}$ (Eq.6.11)

Since ${\displaystyle A=1}$:

 ${\displaystyle \displaystyle 0=(M)-2(1)+3(B)}$ (Eq.6.12)

We then get the relation:

 ${\displaystyle \displaystyle 2=(M)+3(B)}$ (Eq.6.13)

Taking the derivative of Eq.6.9 gives us.

 ${\displaystyle \displaystyle y''(x)=-2e^{-2x}(Mcos(3x)+Nsin(3x))-e^{-2x}(Msin(3x))+e^{-2x}(Ncos(3x))}$ ${\displaystyle \displaystyle -2e^{-2x}(Mcos(3x)+Nsin(3x))+4xe^{-2x}(Mcos(3x)+Nsin(3x))+6xe^{-2x}(Msin(3x))-6xe^{-2x}(Ncos(3x))}$ ${\displaystyle \displaystyle -3e^{-2x}(Msin(3x))+6xe^{-2x}(Msin(3x))-9xe^{-2x}(Mcos(3x))+3e^{-2x}(Ncos(3x))-6xe^{-2x}(Ncos(3x))-9xe^{-2x}(Nsin(3x))}$ ${\displaystyle \displaystyle +4e^{-2x}(Acos(3x)+Bsin(3x))+6e^{-2x}(Asin(3x))-6e^{-2x}(Bcos(3x))}$ ${\displaystyle \displaystyle +6e^{-2x}(Asin(3x))-9e^{-2x}(Acos(3x))-6e^{-2x}(Bcos(3x))-9e^{-2x}(Bsin(3x))}$ (Eq.6.14)

We then substitute Eq.6.4, Eq.6.9, and Eq.6.14 into the equation ${\displaystyle y''+4y'+13y=2e^{-2x}cos(3x)}$ (Eq.6.3):

 ${\displaystyle \displaystyle -4e^{-2x}(Mcos(3x)+Nsin(3x))-4e^{-2x}(Msin(3x))-2e^{-2x}(Ncos(3x))}$ ${\displaystyle \displaystyle +4xe^{-2x}(Mcos(3x)+Nsin(3x))+12xe^{-2x}(Msin(3x))}$ ${\displaystyle \displaystyle -9xe^{-2x}(Mcos(3x))-6xe^{-2x}(Ncos(3x))-9xe^{-2x}(Nsin(3x))}$ ${\displaystyle \displaystyle -2e^{-2x}(Acos(3x)+Bsin(3x))+4e^{-2x}(Acos(3x)+Bsin(3x))+6e^{-2x}(Asin(3x))-6e^{-2x}(Bcos(3x))}$ ${\displaystyle \displaystyle +6e^{-2x}(Asin(3x))-9e^{-2x}(Acos(3x))-6e^{-2x}(Bcos(3x))-9e^{-2x}(Bsin(3x))}$ ${\displaystyle \displaystyle +4(e^{-2x}(Mcos(3x)+Nsin(3x))-2xe^{-2x}(Mcos(3x)+Nsin(3x))-3xe^{-2x}(Msin(3x))}$ ${\displaystyle \displaystyle +3xe^{-2x}(Ncos(3x))-2e^{-2x}(Acos(3x)+Bsin(3x))-3e^{-2x}(Asin(3x))+3e^{-2x}(Bcos(3x)))}$ ${\displaystyle \displaystyle +13(xe^{-2x}(Mcos(3x)+Nsin(3x))+e^{-2x}(Acos(3x)+Bsin(3x)))=2e^{-2x}cos(3x)}$ (Eq.6.15)

Which is then simplified:

 ${\displaystyle \displaystyle -6e^{-2x}(Msin(3x))+6e^{-2x}(Ncos(3x))=2e^{-2x}cos(3x)}$ (Eq.6.16)

Dividing Eq.6.16 by ${\displaystyle e^{-2x}}$ yields:

 ${\displaystyle \displaystyle -6(Msin(3x))+6(Ncos(3x))=2cos(3x)}$ (Eq.6.17)

We then equate the coefficients of ${\displaystyle sin(3x)}$ and ${\displaystyle cos(3x)}$ on both sides of Eq.6.17:

 ${\displaystyle \displaystyle -6M=0}$ (Eq.6.18)
 ${\displaystyle \displaystyle 6N=2}$ (Eq.6.19)

From Eq.6.18 and Eq.6.19 we can deduce that ${\displaystyle M=0}$ and ${\displaystyle N={\frac {1}{3}}}$.
Inserting the value of M into Eq.6.13 yields:

 ${\displaystyle \displaystyle 2=(0)+3(B)}$ (Eq.6.20)
 ${\displaystyle \displaystyle {\frac {2}{3}}=(B)}$ (Eq.6.21)

Inserting the determined values for A, B, M, and N from Eq.6.8, Eq.6.18, Eq.6.19, and Eq.6.21, the overall solution is:

 ${\displaystyle \displaystyle y(x)=xe^{-2x}(0cos(3x)+{\frac {1}{3}}sin(3x))+e^{-2x}(1cos(3x)+{\frac {2}{3}}sin(3x))}$ (Eq.6.22)

Making the overall solution:

 ${\displaystyle \displaystyle y(x)=xe^{-2x}({\frac {1}{3}}sin(3x))+e^{-2x}(1cos(3x)+{\frac {2}{3}}sin(3x))}$ (Eq.6.23)
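Eq.6.23 can be verified numerically with central finite differences (Python sketch; step size and sample point are arbitrary choices):

```python
import math

# y from Eq.6.23; check the ODE residual y'' + 4y' + 13y - 2 e^(-2x) cos(3x)
# at a sample point, plus the initial condition y(0) = 1.
def y(x):
    return (x * math.exp(-2 * x) * math.sin(3 * x) / 3
            + math.exp(-2 * x) * (math.cos(3 * x) + 2 * math.sin(3 * x) / 3))

h, x0 = 1e-5, 0.8
d1 = (y(x0 + h) - y(x0 - h)) / (2 * h)
d2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h ** 2
residual = d2 + 4 * d1 + 13 * y(x0) - 2 * math.exp(-2 * x0) * math.cos(3 * x0)
print(residual, y(0.0))  # residual ~ 0, y(0) = 1
```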

Below is a plot of the solution:
Graph

Matlab Code

clear all;
%R5.6 graph the complete solution over 3 periods
%y(x) = x*e^(-2x)*(1/3)*sin(3x) + e^(-2x)*(cos(3x) + (2/3)*sin(3x))
x = 0:0.01:6.3; %Domain over 3 periods from 0 to 6.3
y = x.*exp(-2*x).*(1/3).*sin(3*x) + exp(-2*x).*(cos(3*x) + (2/3)*sin(3*x)); %Vectorized evaluation of Eq.6.23
plot(x,y,'r'); %Plot the solution
title('R5.6 Graph');
xlabel('x');
ylabel('y(x)');


## Author & Proofreaders

Author:Egm4313.s12.team17.axelrod.a 04:46, 30 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.deaver.md 07:05, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 07:14, 30 March 2012 (UTC)

# Problem R5.7 - Projection on a Basis for Vectors

## Given

 ${\displaystyle \displaystyle V=4e_{1}+2e_{2}=c_{1}b_{1}+c_{2}b_{2}}$ (Eq.7.1)
 ${\displaystyle \displaystyle b_{1}=2e_{1}+7e_{2}}$ (Eq.7.2)
 ${\displaystyle \displaystyle b_{2}=1.5e_{1}+3e_{2}}$ (Eq.7.3)

## Part 1 Problem Statement

Find ${\displaystyle c_{1},c_{2}}$ using the Gram Matrix

## Part 1 Solution

The Gramian is given as:

 ${\displaystyle \displaystyle \Gamma (b_{1},b_{2})={\begin{bmatrix}\left\langle b_{1},b_{1}\right\rangle \left\langle b_{1},b_{2}\right\rangle \\\left\langle b_{2},b_{1}\right\rangle \left\langle b_{2},b_{2}\right\rangle \end{bmatrix}}}$ (Eq.7.1.1)

According to (3) on p.8-9, the scalar product of two vectors is their dot product:

 ${\displaystyle \displaystyle \left\langle b_{i},b_{j}\right\rangle =(b_{i}\cdot b_{j})}$ (Eq.7.1.2)

Calculating the dot products:

 ${\displaystyle \displaystyle \left\langle b_{1},b_{1}\right\rangle =4+49=53}$ ${\displaystyle \displaystyle \left\langle b_{1},b_{2}\right\rangle =3+21=24}$ ${\displaystyle \displaystyle \left\langle b_{2},b_{1}\right\rangle =3+21=24}$ ${\displaystyle \displaystyle \left\langle b_{2},b_{2}\right\rangle =2.25+9=11.25}$ (Eq.7.1.3)

Plugging Equations 7.1.3 into 7.1.1 gives:

 ${\displaystyle \displaystyle \Gamma ={\begin{bmatrix}53&24\\24&11.25\\\end{bmatrix}}}$ (Eq.7.1.4)

Next we need an equation for ${\displaystyle d}$, as shown in (5) on pg.8-10. Note that the vector ${\displaystyle v}$ is given in Eq.7.1.

 ${\displaystyle \displaystyle d=\{\left\langle b_{1},v\right\rangle ,\left\langle b_{2},v\right\rangle \}^{T}}$ (Eq.7.1.5)

Evaluating the scalar products as in Eq.7.1.2 gives:

 ${\displaystyle \displaystyle d=\{22,12\}^{T}}$ (Eq.7.1.6)

From the notes (1) on pg.8-11

 ${\displaystyle \displaystyle c=\Gamma ^{-1}d}$ (Eq.7.1.7)

This gives:

 ${\displaystyle \displaystyle c={\bigg \{}{\frac {11.25(22)-24(12)}{20.25}},{\frac {-24(22)+53(12)}{20.25}}{\bigg \}}}$ (Eq.7.1.8)

So ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$ are:

 ${\displaystyle \displaystyle c_{1}=-2}$ ${\displaystyle \displaystyle c_{2}=5.33}$ (Eq.7.1.9)
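The coefficients can be reproduced by solving Γc = d directly (Python sketch), and checked by reconstructing v:

```python
# Solve Gamma c = d with Gamma = [[53, 24], [24, 11.25]], d = (22, 12).
g11, g12, g21, g22 = 53.0, 24.0, 24.0, 11.25
d1, d2 = 22.0, 12.0
det = g11 * g22 - g12 * g21              # 20.25
c1 = (g22 * d1 - g12 * d2) / det         # -2
c2 = (-g21 * d1 + g11 * d2) / det        # 16/3
v = (c1 * 2 + c2 * 1.5, c1 * 7 + c2 * 3)  # should reconstruct (4, 2)
print(c1, c2, v)
```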

## Part 2 Problem Statement

Verify the result by using (1)-(2) on p.7c-34, relying on the non-zero determinant of the matrix of components of ${\displaystyle b_{1},b_{2}}$ relative to the basis ${\displaystyle e_{1},e_{2}}$, as discussed on p.7c-34.

## Part 2 Solution

The equations to be entered into the A matrix are as follows:

 ${\displaystyle \displaystyle b_{1}=2e_{1}+7e_{2}}$ ${\displaystyle \displaystyle b_{2}=1.5e_{1}+3e_{2}}$ (Eq.7.2.1)

According to the notes, if the component rows of these two vectors are put into a matrix and the determinant is ${\displaystyle \neq 0}$, then the vectors are linearly independent. Entering them into a matrix gives:

 ${\displaystyle \displaystyle A={\begin{bmatrix}2&7\\1.5&3\\\end{bmatrix}}}$ (Eq.7.2.2)

Solving:

 ${\displaystyle \displaystyle 6-10.5=-4.5\neq 0}$ (Eq.7.2.3)

Thus ${\displaystyle b_{1}}$ and ${\displaystyle b_{2}}$ are linearly independent.

Use the following equation to check the answer from the previous part.

 ${\displaystyle \displaystyle V=4e_{1}+2e_{2}\equiv c_{1}b_{1}+c_{2}b_{2}}$ (Eq.7.2.4)
 ${\displaystyle \displaystyle V\equiv (-2)(2e_{1}+7e_{2})+(5.33)(1.5e_{1}+3e_{2})}$ (Eq.7.2.5)
 ${\displaystyle \displaystyle V\equiv 4e_{1}+2e_{2}}$ (Eq.7.2.6)

## Author & Proofreaders

Author:Egm4313.s12.team17.hintz 23:46, 24 March 2012 (UTC)
Proofreader:Egm4313.s12.team17.axelrod.a 14:35, 29 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 07:51, 30 March 2012 (UTC)

# Problem R5.8 - Integration

## Given

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx}$ (Eq. 8.1)

With

 ${\displaystyle \displaystyle n=0,1}$

Integration by Parts

 ${\displaystyle \displaystyle \int udv=uv-\int vdu}$ (Eq.8.2)

General Binomial Theorem

 ${\displaystyle \displaystyle (x+y)^{n}=\sum _{k=0}^{n}{\binom {n}{k}}x^{n-k}y^{k}}$ (Eq.8.3)

Where

 ${\displaystyle \displaystyle {\binom {n}{k}}={\dfrac {n!}{k!(n-k)!}}={\dfrac {n(n-1)\cdots (n-k+1)}{k!}}}$

## Problem Statement

Find the indefinite integral Eq.8.1 with ${\displaystyle n=0,1}$ using integration by parts, Eq.8.2, and the General Binomial Theorem, Eq.8.3.

## Solution

Using the integration by parts formula, Eq.8.2, we set ${\displaystyle u=log(1+x)}$ and ${\displaystyle dv=x^{n}dx}$, so we have the following:

 ${\displaystyle \displaystyle u=log(1+x)}$
 ${\displaystyle \displaystyle du={\dfrac {1}{1+x}}dx}$
 ${\displaystyle \displaystyle dv=x^{n}dx}$
 ${\displaystyle \displaystyle v={\dfrac {x^{n+1}}{n+1}}}$

Substituting the above into Eq. 8.2 we have:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx={\dfrac {log(1+x)x^{n+1}}{n+1}}-\int {\dfrac {x^{n+1}}{(n+1)(1+x)}}dx}$ (Eq.8.4)

Now we can set ${\displaystyle n=0}$ and Eq. 8.4 becomes:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx={\dfrac {log(1+x)x^{1}}{1}}-\int {\dfrac {x^{1}}{(1)(1+x)}}dx}$ (Eq.8.5)

Where

 ${\displaystyle \displaystyle \int {\dfrac {x^{1}}{(1)(1+x)}}dx}$

Can be solved by first using long division on the integrand:

 ${\displaystyle \displaystyle \int {\dfrac {x^{1}}{(1)(1+x)}}dx=\int (1-{\dfrac {1}{1+x}})dx}$

Next integrate term by term

 ${\displaystyle \displaystyle \int 1dx-\int {\dfrac {1}{1+x}}dx=x-log(1+x)}$

So when ${\displaystyle n=0}$ we have:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx=log(1+x)x-x+log(x+1)}$ (Eq.8.6)

Now we substitute ${\displaystyle n=1}$ into Eq. 8.4 which gives us:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx={\dfrac {log(1+x)x^{1+1}}{1+1}}-\int {\dfrac {x^{1+1}}{(1+1)(1+x)}}dx={\dfrac {log(1+x)x^{2}}{2}}-\int {\dfrac {x^{2}}{(2)(1+x)}}dx}$ (Eq.8.7)

And we have the integral:

 ${\displaystyle \displaystyle \int {\dfrac {x^{2}}{(2)(1+x)}}dx}$

Which can also be solved by first using long division on the integrand ${\displaystyle {\dfrac {x^{2}}{1+x}}}$ , so:

 ${\displaystyle \displaystyle \int {\dfrac {x^{2}}{(2)(1+x)}}dx={\dfrac {1}{2}}\int {\dfrac {x^{2}}{1+x}}dx={\dfrac {1}{2}}\int (x+{\dfrac {1}{1+x}}-1)dx}$

Integrating term by term we have that:

 ${\displaystyle \displaystyle {\dfrac {1}{2}}\int (x+{\dfrac {1}{1+x}}-1)dx={\dfrac {x^{2}}{4}}-{\dfrac {x}{2}}+{\dfrac {1}{2}}log(1+x)={\dfrac {1}{4}}((x-2)x+2log(1+x))}$

So Eq. 8.7 becomes:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx={\dfrac {log(1+x)x^{2}}{2}}-\int {\dfrac {x^{2}}{(2)(1+x)}}dx={\dfrac {log(1+x)x^{2}}{2}}-({\dfrac {1}{4}}((x-2)x+2log(1+x)))}$ (Eq.8.8)

So when ${\displaystyle n=1}$:

 ${\displaystyle \displaystyle \int x^{n}log(1+x)dx={\dfrac {\log(1+x)x^{2}}{2}}-{\dfrac {1}{4}}[(x-2)x+2log(1+x)]}$ (Eq.8.9)
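Eq. 8.9 can be checked the same way, by numerically differentiating the result and comparing with the integrand ${\displaystyle x\,log(1+x)}$ (a Python spot-check, not part of the original solution):

```python
import math

def F1(x):
    # Antiderivative from Eq. 8.9 (constant of integration omitted).
    return math.log(1 + x) * x**2 / 2 - ((x - 2) * x + 2 * math.log(1 + x)) / 4

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx F1(x) should recover the integrand x*log(1+x).
err1 = max(abs(num_deriv(F1, x) - x * math.log(1 + x)) for x in (0.1, 0.5, 1.0, 2.0))
```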

## Author & Proofreaders

Author:Egm4313.s12.team17.Li 23:47, 23 March 2012 (UTC)
Proofreader 1:Egm4313.s12.team17.axelrod.a 20:45, 25 March 2012 (UTC)
Proofreader 2:Egm4313.s12.team17.ying 08:22, 30 March 2012 (UTC)
Editor:Egm4313.s12.team17.ying 08:22, 30 March 2012 (UTC)

# Problem R5.9 - Projection on a Basis

## Part 1 Given

 ${\displaystyle \displaystyle y''-3y'+2y=r(x)}$ (Eq.9.1.1)
 ${\displaystyle \displaystyle r(x)=log(1+x)}$ (Eq.9.1.2)
 ${\displaystyle \displaystyle y(-3/4)=1,y'(-3/4)=0}$ (Eq.9.1.3)
 ${\displaystyle \displaystyle r(x)=\log(1+x)=\sum _{k=0}^{\infty }(-1)^{k}{\frac {x^{k+1}}{k+1}}}$ (Eq.9.1.4)
 ${\displaystyle \displaystyle {\mathbf {b}}=\{b_{j}(x)=x^{j},j=0,1,...,n\}}$ (Eq.9.1.5)

## Part 1 Problem Statement

Project equation 9.1.2 on the polynomial basis such that it takes the form shown below for ${\displaystyle x=[-3/4,3]\,}$ and ${\displaystyle n=0,1,2\,}$:

 ${\displaystyle \displaystyle r(x)\approx r_{n}(x)=\sum _{j=0}^{n}d_{j}x^{j}}$ (Eq.9.1.6)

Plot equations 9.1.2 and 9.1.6 to show the uniform approximation and convergence.

Then plot the truncated Taylor series from equation 9.1.4 against equation 9.1.6 to compare the pros and cons of the polynomial-projection and Taylor series expansion methods for approximating equation 9.1.2.

## Part 1 Solution

Let ${\displaystyle n=0\,}$.

Therefore, equation 9.1.5 becomes:

 ${\displaystyle \displaystyle {\mathbf {b}}=\{b_{0}\}=\{x^{0}\}}$ (Eq.9.1.7)

Set up a Gram matrix for the basis functions (eq. 9.1.7):

 ${\displaystyle \displaystyle {\mathbf {\Gamma }}({\mathbf {b}})=\{\langle b_{0},b_{0}\rangle \}}$ (Eq.9.1.8)
 ${\displaystyle \displaystyle \langle b_{0},b_{0}\rangle =\int \limits _{a}^{b}b_{0}b_{0}dx}$ (Eq.9.1.9)

Let ${\displaystyle a=-3/4\,}$ and ${\displaystyle b=3\,}$.

 ${\displaystyle \displaystyle \langle b_{0},b_{0}\rangle =\int \limits _{-{\frac {3}{4}}}^{3}x^{0}(x^{0})dx}$ (Eq.9.1.10)
 ${\displaystyle \displaystyle \langle b_{0},b_{0}\rangle ={\frac {15}{4}}=3.75}$ (Eq.9.1.11)

Therefore,

 ${\displaystyle \displaystyle {\mathbf {\Gamma }}=\{3.75\}}$ (Eq.9.1.12)

The right-hand side (rhs) vector will take the form shown below:

 ${\displaystyle \displaystyle {\mathbf {e}}=\{\langle b_{0},r(x)\rangle ,...,\langle b_{n},r(x)\rangle \}^{T}}$ (Eq.9.1.13)

Therefore,

 ${\displaystyle \displaystyle {\mathbf {e}}=\{\langle b_{0},r(x)\rangle \}^{T}}$ (Eq.9.1.14)
 ${\displaystyle \displaystyle \langle b_{0},r(x)\rangle =\int \limits _{a}^{b}b_{0}r(x)dx}$ (Eq.9.1.15)
 ${\displaystyle \displaystyle \langle b_{0},r(x)\rangle =\int \limits _{-{\frac {3}{4}}}^{3}x^{0}(log(1+x))dx}$ (Eq.9.1.16)
 ${\displaystyle \displaystyle \langle b_{0},r(x)\rangle ={\bigg [}(x+1)log(x+1)-x{\bigg ]}_{-{\frac {3}{4}}}^{3}}$ (Eq.9.1.17)
 ${\displaystyle \displaystyle \langle b_{0},r(x)\rangle ={\frac {17}{2}}log(2)-{\frac {15}{4}}}$ (Eq.9.1.18)
 ${\displaystyle \displaystyle \langle b_{0},r(x)\rangle =2.1418}$ (Eq.9.1.19)
 ${\displaystyle \displaystyle {\mathbf {e}}=\{2.1418\}^{T}}$ (Eq.9.1.20)

Using the equation shown below, solve for the d values:

 ${\displaystyle \displaystyle {\mathbf {\Gamma }}{\mathbf {d}}={\mathbf {e}}}$ (Eq.9.1.21)
 ${\displaystyle \displaystyle {\mathbf {d}}={\mathbf {\Gamma }}^{-1}{\mathbf {e}}}$ (Eq.9.1.22)

Therefore,

 ${\displaystyle \displaystyle {\mathbf {d}}=\{3.75\}^{-1}\{2.1418\}^{T}}$ (Eq.9.1.23)
 ${\displaystyle \displaystyle {\mathbf {d}}=0.5711}$ (Eq.9.1.24)

This result will make equation 9.1.6 the following:

 ${\displaystyle \displaystyle r_{0}(x)=\sum _{j=0}^{0}d_{j}x^{j}=d_{0}x^{0}}$ (Eq.9.1.25)
 ${\displaystyle \displaystyle r_{0}(x)=0.5711}$ (Eq.9.1.26)
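The value of ${\displaystyle d_{0}}$ can be reproduced with a short Python check of Eqs. 9.1.11 through 9.1.24 (not part of the original solution; names are illustrative):

```python
import math

a, b = -0.75, 3.0
g00 = b - a                                  # <b0,b0> = 3.75 (Eq. 9.1.11)

# <b0,r> from the antiderivative in Eq. 9.1.17.
G = lambda x: (x + 1) * math.log(x + 1) - x
e0 = G(b) - G(a)                             # expected ≈ 2.1418 (Eq. 9.1.19)

d0 = e0 / g00                                # expected ≈ 0.5711 (Eq. 9.1.24)
```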

Let ${\displaystyle n=1\,}$.

Therefore, equation 9.1.5 becomes:

 ${\displaystyle \displaystyle {\mathbf {b}}=\{b_{0},b_{1}\}=\{x^{0},x^{1}\}}$ (Eq.9.1.27)

Set up a Gram matrix for the basis functions (eq. 9.1.27):

 ${\displaystyle \displaystyle {\mathbf {\Gamma }}({\mathbf {b}})={\begin{bmatrix}\langle b_{0},b_{0}\rangle &\langle b_{0},b_{1}\rangle \\\langle b_{1},b_{0}\rangle &\langle b_{1},b_{1}\rangle \\\end{bmatrix}}}$ (Eq.9.1.28)
 ${\displaystyle \displaystyle \langle b_{0},b_{1}\rangle =\langle b_{1},b_{0}\rangle =\int \limits _{-{\frac {3}{4}}}^{3}x^{0}(x^{1})dx}$ (Eq.9.1.29)
 ${\displaystyle \displaystyle \langle b_{0},b_{1}\rangle =\langle b_{1},b_{0}\rangle ={\frac {1}{2}}{\bigg [}x^{2}{\bigg ]}_{-{\frac {3}{4}}}^{3}}$ (Eq.9.1.30)
 ${\displaystyle \displaystyle \langle b_{0},b_{1}\rangle =\langle b_{1},b_{0}\rangle ={\frac {135}{32}}=4.2188}$ (Eq.9.1.31)
 ${\displaystyle \displaystyle \langle b_{1},b_{1}\rangle =\int \limits _{-{\frac {3}{4}}}^{3}x^{1}(x^{1})dx}$ (Eq.9.1.32)
 ${\displaystyle \displaystyle \langle b_{1},b_{1}\rangle ={\frac {1}{3}}{\bigg [}x^{3}{\bigg ]}_{-{\frac {3}{4}}}^{3}}$ (Eq.9.1.33)
 ${\displaystyle \displaystyle \langle b_{1},b_{1}\rangle ={\frac {585}{64}}=9.1406}$ (Eq.9.1.34)

Therefore,

 ${\displaystyle \displaystyle {\mathbf {\Gamma }}={\begin{bmatrix}3.7500&4.2188\\4.2188&9.1406\\\end{bmatrix}}}$ (Eq.9.1.35)

Generate the rhs vector:

 ${\displaystyle \displaystyle {\mathbf {e}}=\{\langle b_{0},r(x)\rangle \,\langle b_{1},r(x)\rangle \}^{T}}$ (Eq.9.1.36)
 ${\displaystyle \displaystyle \langle b_{1},r(x)\rangle =\int \limits _{a}^{b}b_{1}r(x)dx}$ (Eq.9.1.37)
 ${\displaystyle \displaystyle \langle b_{1},r(x)\rangle =\int \limits _{-{\frac {3}{4}}}^{3}x^{1}(log(1+x))dx}$ (Eq.9.1.38)
 ${\displaystyle \displaystyle \langle b_{1},r(x)\rangle ={\bigg [}{\bigg (}{\frac {x^{2}}{2}}-{\frac {1}{2}}{\bigg )}log(x+1)-{\frac {x(x-2)}{4}}{\bigg ]}_{-{\frac {3}{4}}}^{3}}$ (Eq.9.1.39)
 ${\displaystyle \displaystyle \langle b_{1},r(x)\rangle ={\frac {121}{16}}log(2)-{\frac {15}{64}}}$ (Eq.9.1.40)
 ${\displaystyle \displaystyle \langle b_{1},r(x)\rangle =5.0076}$ (Eq.9.1.41)
 ${\displaystyle \displaystyle {\mathbf {e}}=\{2.1418,5.0076\}^{T}}$ (Eq.9.1.42)

Using equation 9.1.21, solve for the d values:

 ${\displaystyle \displaystyle {\mathbf {d}}={\frac {1}{det{\mathbf {\Gamma }}}}{\begin{bmatrix}9.1406&-4.2188\\-4.2188&3.7500\\\end{bmatrix}}{\begin{bmatrix}2.1418\\5.0076\\\end{bmatrix}}}$ (Eq.9.1.43)
 ${\displaystyle \displaystyle {\mathbf {d}}={\frac {1}{16.479}}{\begin{bmatrix}9.1406&-4.2188\\-4.2188&3.7500\\\end{bmatrix}}{\begin{bmatrix}2.1418\\5.0076\\\end{bmatrix}}}$ (Eq.9.1.44)
 ${\displaystyle \displaystyle {\mathbf {d}}={\begin{bmatrix}-0.0940\\0.5912\\\end{bmatrix}}}$ (Eq.9.1.45)

This result will make equation 9.1.6 the following:

 ${\displaystyle \displaystyle r_{1}(x)=\sum _{j=0}^{1}d_{j}x^{j}=d_{0}x^{0}+d_{1}x^{1}}$ (Eq.9.1.46)
 ${\displaystyle \displaystyle r_{1}(x)=-0.0940+0.5912x}$ (Eq.9.1.47)
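The ${\displaystyle n=1}$ coefficients can likewise be reproduced by assembling and solving the 2×2 Gram system in Python (not part of the original solution; names are illustrative):

```python
import math

a, b = -0.75, 3.0
mom = lambda k: (b**(k + 1) - a**(k + 1)) / (k + 1)   # integral of x^k over [a,b]

g00, g01, g11 = mom(0), mom(1), mom(2)       # Gram entries (Eqs. 9.1.11/31/34)

# rhs entries from the antiderivatives in Eqs. 9.1.17 and 9.1.39.
G0 = lambda x: (x + 1) * math.log(x + 1) - x
G1 = lambda x: (x**2 / 2 - 0.5) * math.log(x + 1) - x * (x - 2) / 4
e0, e1 = G0(b) - G0(a), G1(b) - G1(a)

# Solve the 2x2 system Gamma*d = e by Cramer's rule.
det = g00 * g11 - g01 * g01
d0 = (e0 * g11 - g01 * e1) / det             # expected ≈ -0.0940
d1 = (g00 * e1 - e0 * g01) / det             # expected ≈  0.5912
```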

A plot of the projected excitations ${\displaystyle r_{0}(x)}$ and ${\displaystyle r_{1}(x)}$ against the actual excitation ${\displaystyle r(x)}$ (figure not reproduced here) shows the uniform approximation and convergence.

Below is the MATLAB code used to generate that graph:

x=[-.75:0.01:3];        % x range
r=log(x+1);             % actual excitation r(x) = log(1+x)
r0=0.5711*x.^0;         % constant projection r0(x), Eq.9.1.26
r1=-0.0940+0.5912*x;    % linear projection r1(x), Eq.9.1.47
plot(x,r,x,r0,x,r1)
xlabel('x-axis');
ylabel('r-axis');


The truncated Taylor series expansions of equation 9.1.4 are shown below:

 ${\displaystyle \displaystyle t_{0}=x}$ (Eq.9.1.48)
 ${\displaystyle \displaystyle t_{1}=x-{\frac {x^{2}}{2}}}$ (Eq.9.1.49)

The plots comparing the two approximation methods against equation 9.1.2 for ${\displaystyle n=0}$ and ${\displaystyle n=1}$ (figures not reproduced here) were generated with the MATLAB code below:

x=[-.75:0.01:3];        % x range
r=log(x+1);             % actual excitation
r0=0.5711*x.^0;         % constant projection r0(x)
r1=-0.0940+0.5912*x;    % linear projection r1(x)
t0=x;                   % truncated Taylor series, n=0
t1=x-x.^2/2;            % truncated Taylor series, n=1
subplot(211)
plot(x,r,x,r0,x,t0)
xlabel('x-axis');
ylabel('r-axis');
title('Approximation when n=0')
subplot(212)
plot(x,r,x,r1,x,t1)
xlabel('x-axis');
ylabel('r-axis');
title('Approximation when n=1')


The Taylor series expansion is well suited to approximating a complicated function near a specific point. As ${\displaystyle x}$ moves farther from the expansion point, however, the truncated series becomes increasingly inaccurate compared to the actual function, and more terms are required to keep the error small over a larger ${\displaystyle x}$ range.

Projection on a polynomial basis generates an approximation that distributes the error evenly over a given ${\displaystyle x}$ range. To obtain a highly accurate approximation with this method, more basis terms must be used, which introduces more unknowns. More unknowns require more computation time, and therefore a higher cost.

## Part 2 Given

Information from part 1.

## Part 2 Problem Statement

Find a ${\displaystyle y_{n}(x)\,}$ that solves the following equation for ${\displaystyle n=0,1\,}$:

 ${\displaystyle \displaystyle y_{n}''+ay_{n}'+by_{n}=r_{n}(x)}$ (Eq.9.2.1)

Then plot the solutions, truncated Taylor series, and numerical solution.

## Part 2 Solution

For ${\displaystyle n=0\,}$.

 ${\displaystyle \displaystyle y_{0}''-3y_{0}'+2y_{0}=r_{0}(x)}$ (Eq.9.2.2)
 ${\displaystyle \displaystyle y_{0}=y_{0,h}+y_{0,p}}$ (Eq.9.2.3)

The homogeneous solution must satisfy the following:

 ${\displaystyle \displaystyle y_{0,h}''-3y_{0,h}'+2y_{0,h}=0}$ (Eq.9.2.4)

Therefore, the characteristic equation is:

 ${\displaystyle \displaystyle \lambda ^{2}-3\lambda +2=0}$ (Eq.9.2.5)
 ${\displaystyle \displaystyle (\lambda -2)(\lambda -1)=0}$ (Eq.9.2.6)
 ${\displaystyle \displaystyle \lambda _{1}=2,\lambda _{2}=1}$ (Eq.9.2.7)

Distinct real roots make the general solution of the homogeneous equation take the following form:

 ${\displaystyle \displaystyle y_{h}=d_{1}e^{\lambda _{1}x}+d_{2}e^{\lambda _{2}x}}$ (Eq.9.2.8)

Therefore,

 ${\displaystyle \displaystyle y_{0,h}=d_{1}e^{2x}+d_{2}e^{x}}$ (Eq.9.2.9)

The particular solution must satisfy the following:

 ${\displaystyle \displaystyle y_{0,p}''-3y_{0,p}'+2y_{0,p}=0.5711}$ (Eq.9.2.10)

Using the Method of Undetermined Coefficients, the particular solution will have the form:

 ${\displaystyle \displaystyle y_{p}=\sum _{j=0}^{n}c_{j}x^{j}}$ (Eq.9.2.11)

Therefore,

 ${\displaystyle \displaystyle y_{0,p}=c_{0}x^{0}}$ (Eq.9.2.12)

Take the first and second derivative of equation 9.2.12:

 ${\displaystyle \displaystyle y_{0,p}^{\prime }=0}$ (Eq.9.2.13)
 ${\displaystyle \displaystyle y_{0,p}^{\prime \prime }=0}$ (Eq.9.2.14)

Substitute equations 9.2.12, 9.2.13, and 9.2.14 into equation 9.2.10:

 ${\displaystyle \displaystyle 2c_{0}=0.5711}$ (Eq.9.2.15)

Therefore,

 ${\displaystyle \displaystyle c_{0}=0.2856}$ (Eq.9.2.16)

The particular solution will be:

 ${\displaystyle \displaystyle y_{0,p}=0.2856}$ (Eq.9.2.17)

Superimpose equation 9.2.9 and 9.2.17:

 ${\displaystyle \displaystyle y_{0}=d_{1}e^{2x}+d_{2}e^{x}+0.2856}$ (Eq.9.2.18)

Take the first derivative of equation 9.2.18:

 ${\displaystyle \displaystyle y_{0}^{\prime }=2d_{1}e^{2x}+d_{2}e^{x}}$ (Eq.9.2.19)

Use equation 9.1.3 to solve for the unknowns:

 ${\displaystyle \displaystyle 1=d_{1}e^{-1.5}+d_{2}e^{-0.75}+0.2856}$ (Eq.9.2.20)
 ${\displaystyle \displaystyle 0=2d_{1}e^{-1.5}+d_{2}e^{-0.75}}$ (Eq.9.2.21)

Therefore,

 ${\displaystyle \displaystyle d_{1}=-3.2019}$ (Eq.9.2.22)
 ${\displaystyle \displaystyle d_{2}=3.0249}$ (Eq.9.2.23)

The final solution for ${\displaystyle n=0\,}$:

 ${\displaystyle \displaystyle y_{0}=-3.2019e^{2x}+3.0249e^{x}+0.2856}$ (Eq.9.2.24)
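The ${\displaystyle n=0}$ solution can be spot-checked against the initial conditions and the ODE in Python (not part of the original solution; the small residual reflects rounding in the reported coefficients):

```python
import math

# Candidate solution from Eq. 9.2.24 and its derivatives.
y0   = lambda x: -3.2019 * math.exp(2*x) + 3.0249 * math.exp(x) + 0.2856
dy0  = lambda x: -6.4038 * math.exp(2*x) + 3.0249 * math.exp(x)
d2y0 = lambda x: -12.8076 * math.exp(2*x) + 3.0249 * math.exp(x)

ic_y, ic_dy = y0(-0.75), dy0(-0.75)          # should be ≈ 1 and ≈ 0

# Residual of y'' - 3y' + 2y - 0.5711 at a sample point.
res = d2y0(0.5) - 3 * dy0(0.5) + 2 * y0(0.5) - 0.5711
```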

For ${\displaystyle n=1\,}$.

 ${\displaystyle \displaystyle y_{1}''-3y_{1}'+2y_{1}=r_{1}(x)}$ (Eq.9.2.25)
 ${\displaystyle \displaystyle y_{1}=y_{1,h}+y_{1,p}}$ (Eq.9.2.26)

The homogeneous solution will be the same as equation 9.2.9:

 ${\displaystyle \displaystyle y_{1,h}=d_{1}e^{2x}+d_{2}e^{x}}$ (Eq.9.2.27)

The particular solution must satisfy the following:

 ${\displaystyle \displaystyle y_{1,p}''-3y_{1,p}'+2y_{1,p}=0.5912x-0.0940}$ (Eq.9.2.28)

Using the Method of Undetermined Coefficients, the particular solution will take the form of equation 9.2.11:

 ${\displaystyle \displaystyle y_{1,p}=c_{0}x^{0}+c_{1}x^{1}}$ (Eq.9.2.29)

Take the first and second derivative of equation 9.2.29:

 ${\displaystyle \displaystyle y_{1,p}^{\prime }=c_{1}}$ (Eq.9.2.30)
 ${\displaystyle \displaystyle y_{1,p}^{\prime \prime }=0}$ (Eq.9.2.31)

Substitute equations 9.2.29, 9.2.30, and 9.2.31 into equation 9.2.28:

 ${\displaystyle \displaystyle -3c_{1}+2c_{0}+2c_{1}x=0.5912x-0.0940}$ (Eq.9.2.32)

Solve for the coefficients by matching like powers of ${\displaystyle x}$.

For ${\displaystyle x^{0}\,}$:

 ${\displaystyle \displaystyle -3c_{1}+2c_{0}=-0.0940}$ (Eq.9.2.33)

For ${\displaystyle x^{1}\,}$:

 ${\displaystyle \displaystyle 2c_{1}=0.5912}$ (Eq.9.2.34)

Therefore,

 ${\displaystyle \displaystyle c_{1}=0.2956}$ (Eq.9.2.35)
 ${\displaystyle \displaystyle c_{0}=0.3964}$ (Eq.9.2.36)

The particular solution will be:

 ${\displaystyle \displaystyle y_{1,p}=0.2956x+0.3964}$ (Eq.9.2.37)

Superimpose equation 9.2.27 and 9.2.37:

 ${\displaystyle \displaystyle y_{1}=d_{1}e^{2x}+d_{2}e^{x}+0.2956x+0.3964}$ (Eq.9.2.38)

Take the first derivative of equation 9.2.38:

 ${\displaystyle \displaystyle y_{1}^{\prime }=2d_{1}e^{2x}+d_{2}e^{x}+0.2956}$ (Eq.9.2.39)

Use equation 9.1.3 to solve for the unknowns:

 ${\displaystyle \displaystyle 1=d_{1}e^{-1.5}+d_{2}e^{-0.75}-0.2217+0.3964}$ (Eq.9.2.40)
 ${\displaystyle \displaystyle 0=2d_{1}e^{-1.5}+d_{2}e^{-0.75}+0.2956}$ (Eq.9.2.41)

Therefore,

 ${\displaystyle \displaystyle d_{1}=-5.0235}$ (Eq.9.2.42)
 ${\displaystyle \displaystyle d_{2}=4.1201}$ (Eq.9.2.43)

The final solution for ${\displaystyle n=1\,}$:

 ${\displaystyle \displaystyle y_{1}=-5.0235e^{2x}+4.1201e^{x}+0.2956x+0.3964}$ (Eq.9.2.44)
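The ${\displaystyle n=1}$ solution can be spot-checked the same way (a Python sketch, not part of the original solution):

```python
import math

# Candidate solution from Eq. 9.2.44 and its derivatives.
y1   = lambda x: -5.0235 * math.exp(2*x) + 4.1201 * math.exp(x) + 0.2956 * x + 0.3964
dy1  = lambda x: -10.0470 * math.exp(2*x) + 4.1201 * math.exp(x) + 0.2956
d2y1 = lambda x: -20.0940 * math.exp(2*x) + 4.1201 * math.exp(x)

ic_y, ic_dy = y1(-0.75), dy1(-0.75)          # should be ≈ 1 and ≈ 0

# Residual of y'' - 3y' + 2y - (0.5912x - 0.0940) at a sample point.
res = d2y1(0.5) - 3 * dy1(0.5) + 2 * y1(0.5) - (0.5912 * 0.5 - 0.0940)
```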

For the Taylor series expansion with ${\displaystyle n=0\,}$.

 ${\displaystyle \displaystyle y_{t_{0}}''-3y_{t_{0}}'+2y_{t_{0}}=r_{t_{0}}(x)}$ (Eq.9.2.45)
 ${\displaystyle \displaystyle y_{t_{0}}=y_{{t_{0}},h}+y_{{t_{0}},p}}$ (Eq.9.2.46)

The homogeneous solution will be the same as equation 9.2.9:

 ${\displaystyle \displaystyle y_{{t_{0}},h}=d_{1}e^{2x}+d_{2}e^{x}}$ (Eq.9.2.47)

The particular solution must satisfy the following:

 ${\displaystyle \displaystyle y_{{t_{0}},p}''-3y_{{t_{0}},p}'+2y_{{t_{0}},p}=x}$ (Eq.9.2.48)

Using the Method of Undetermined Coefficients, the particular solution will take the form of equation 9.2.11:

 ${\displaystyle \displaystyle y_{{t_{0}},p}=c_{0}x^{0}+c_{1}x^{1}}$ (Eq.9.2.49)

Take the first and second derivative of equation 9.2.49:

 ${\displaystyle \displaystyle y_{{t_{0}},p}^{\prime }=c_{1}}$ (Eq.9.2.50)
 ${\displaystyle \displaystyle y_{{t_{0}},p}^{\prime \prime }=0}$ (Eq.9.2.51)

Substitute equations 9.2.49, 9.2.50, and 9.2.51 into equation 9.2.48:

 ${\displaystyle \displaystyle -3c_{1}+2c_{0}+2c_{1}x=x}$ (Eq.9.2.52)

Solve for the coefficients by matching like powers of ${\displaystyle x}$.

For ${\displaystyle x^{0}\,}$:

 ${\displaystyle \displaystyle -3c_{1}+2c_{0}=0}$ (Eq.9.2.53)

For ${\displaystyle x^{1}\,}$:

 ${\displaystyle \displaystyle 2c_{1}=1}$ (Eq.9.2.54)

Therefore,

 ${\displaystyle \displaystyle c_{1}=0.5}$ (Eq.9.2.55)
 ${\displaystyle \displaystyle c_{0}=0.75}$ (Eq.9.2.56)

The particular solution will be:

 ${\displaystyle \displaystyle y_{{t_{0}},p}=0.5x+0.75}$ (Eq.9.2.57)

Superimpose equation 9.2.47 and 9.2.57:

 ${\displaystyle \displaystyle y_{t_{0}}=d_{1}e^{2x}+d_{2}e^{x}+0.5x+0.75}$ (Eq.9.2.58)

Take the first derivative of equation 9.2.58:

 ${\displaystyle \displaystyle y_{t_{0}}^{\prime }=2d_{1}e^{2x}+d_{2}e^{x}+0.5}$ (Eq.9.2.59)

Use equation 9.1.3 to solve for the unknowns:

 ${\displaystyle \displaystyle 1=d_{1}e^{-1.5}+d_{2}e^{-0.75}-0.375+0.75}$ (Eq.9.2.60)
 ${\displaystyle \displaystyle 0=2d_{1}e^{-1.5}+d_{2}e^{-0.75}+0.5}$ (Eq.9.2.61)

Therefore,

 ${\displaystyle \displaystyle d_{1}=-5.0419}$ (Eq.9.2.62)
 ${\displaystyle \displaystyle d_{2}=3.7048}$ (Eq.9.2.63)

The final solution for Taylor series expansion with ${\displaystyle n=0\,}$:

 ${\displaystyle \displaystyle y_{t_{0}}=-5.0419e^{2x}+3.7048e^{x}+0.5x+0.75}$ (Eq.9.2.64)
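This solution can also be spot-checked numerically (a Python sketch, not part of the original solution):

```python
import math

# Candidate solution from Eq. 9.2.64 and its derivatives.
yt0   = lambda x: -5.0419 * math.exp(2*x) + 3.7048 * math.exp(x) + 0.5 * x + 0.75
dyt0  = lambda x: -10.0838 * math.exp(2*x) + 3.7048 * math.exp(x) + 0.5
d2yt0 = lambda x: -20.1676 * math.exp(2*x) + 3.7048 * math.exp(x)

ic_y, ic_dy = yt0(-0.75), dyt0(-0.75)        # should be ≈ 1 and ≈ 0

# Residual of y'' - 3y' + 2y - x at a sample point (exponentials cancel).
res = d2yt0(0.5) - 3 * dyt0(0.5) + 2 * yt0(0.5) - 0.5
```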

For the Taylor series expansion with ${\displaystyle n=1\,}$.

 ${\displaystyle \displaystyle y_{t_{1}}''-3y_{t_{1}}'+2y_{t_{1}}=r_{t_{1}}(x)}$ (Eq.9.2.65)
 ${\displaystyle \displaystyle y_{t_{1}}=y_{{t_{1}},h}+y_{{t_{1}},p}}$ (Eq.9.2.66)

The homogeneous solution will be the same as equation 9.2.9:

 ${\displaystyle \displaystyle y_{{t_{1}},h}=d_{1}e^{2x}+d_{2}e^{x}}$ (Eq.9.2.67)

The particular solution must satisfy the following:

 ${\displaystyle \displaystyle y_{{t_{1}},p}''-3y_{{t_{1}},p}'+2y_{{t_{1}},p}=x-{\frac {x^{2}}{2}}}$ (Eq.9.2.68)

Using the Method of Undetermined Coefficients, the particular solution will take the form of equation 9.2.11:

 ${\displaystyle \displaystyle y_{{t_{1}},p}=c_{0}x^{0}+c_{1}x^{1}+c_{2}x^{2}}$ (Eq.9.2.69)

Take the first and second derivative of equation 9.2.69:

 ${\displaystyle \displaystyle y_{{t_{1}},p}^{\prime }=c_{1}+2c_{2}x}$ (Eq.9.2.70)
 ${\displaystyle \displaystyle y_{{t_{1}},p}^{\prime \prime }=2c_{2}}$ (Eq.9.2.71)

Substitute equations 9.2.69, 9.2.70, and 9.2.71 into equation 9.2.68:

 ${\displaystyle \displaystyle 2c_{2}-3c_{1}+2c_{0}+(-6c_{2}+2c_{1})x+2c_{2}x^{2}=x-{\frac {x^{2}}{2}}}$ (Eq.9.2.72)

Solve for the coefficients by matching like powers of ${\displaystyle x}$.

For ${\displaystyle x^{0}\,}$:

 ${\displaystyle \displaystyle 2c_{2}-3c_{1}+2c_{0}=0}$ (Eq.9.2.73)

For ${\displaystyle x^{1}\,}$:

 ${\displaystyle \displaystyle -6c_{2}+2c_{1}=1}$ (Eq.9.2.74)

For ${\displaystyle x^{2}\,}$:

 ${\displaystyle \displaystyle 2c_{2}=-{\frac {1}{2}}}$ (Eq.9.2.75)

Therefore,

 ${\displaystyle \displaystyle c_{2}=-0.25}$ (Eq.9.2.76)
 ${\displaystyle \displaystyle c_{1}=-0.25}$ (Eq.9.2.77)
 ${\displaystyle \displaystyle c_{0}=-0.125}$ (Eq.9.2.78)

The particular solution will be:

 ${\displaystyle \displaystyle y_{{t_{1}},p}=-0.25x^{2}-0.25x-0.125}$ (Eq.9.2.79)
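That this particular solution satisfies Eq. 9.2.68 exactly can be confirmed with a short Python check (not part of the original solution):

```python
# Particular solution from Eq. 9.2.79 and its derivatives.
yp   = lambda x: -0.25 * x**2 - 0.25 * x - 0.125
dyp  = lambda x: -0.5 * x - 0.25
d2yp = -0.5                                  # constant second derivative

# Residual of y'' - 3y' + 2y - (x - x^2/2) at several sample points.
res = max(abs(d2yp - 3 * dyp(x) + 2 * yp(x) - (x - x**2 / 2))
          for x in (-0.5, 0.0, 1.0, 2.5))
```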

Superimpose equation 9.2.67 and 9.2.79:

 ${\displaystyle \displaystyle y_{t_{1}}=d_{1}e^{2x}+d_{2}e^{x}-0.25x^{2}-0.25x-0.125}$ (Eq.9.2.80)

Take the first derivative of equation 9.2.80:

 ${\displaystyle \displaystyle y_{t_{1}}^{\prime }=2d_{1}e^{2x}+d_{2}e^{x}-0.5x-0.25}$ (Eq.9.2.81)

Use equation 9.1.3 to solve for the unknowns:

 ${\displaystyle \displaystyle 1=d_{1}e^{-1.5}+d_{2}e^{-0.75}-0.1406+0.1875-0.125}$ (Eq.9.2.82)
 ${\displaystyle \displaystyle 0=2d_{1}e^{-1.5}+d_{2}e^{-0.75}+0.375-0.25}$ (Eq.9.2.83)

Therefore,

 ${\displaystyle \displaystyle d_{1}=-5.3920}$ (Eq.9.2.84)
 ${\displaystyle \displaystyle d_{2}=4.8294}$ (Eq.9.2.85)

The final solution for Taylor series expansion with ${\displaystyle n=1\,}$:

 ${\displaystyle \displaystyle y_{t_{1}}=-5.3920e^{2x}+4.8294e^{x}-0.25x^{2}-0.25x-0.125}$ (Eq.9.2.86)

Below is the MATLAB code defining the right-hand side of equation 9.1.1, with excitation equation 9.1.2, for MATLAB's ode45 solver:

function pdot = ODE45(t,p)
% State vector: p(1) = y, p(2) = y'.
pdot = zeros(2,1);
pdot(1) = p(2);
pdot(2) = 3*p(2) - 2*p(1) + log(1+t);   % natural log, matching r(x) = log(1+x)
end


Below is the MATLAB code used to compute the numerical solution and plot it against the analytical solutions (figures not reproduced here):

[t,p] = ode45('ODE45',[-0.75,3],[1 0]);
x=[-.75:0.01:3];
y0=-3.2019*exp(2.*x)+3.0249*exp(x)+0.2856;
y1=-5.0235*exp(2.*x)+4.1201*exp(x)+0.2956.*x+0.3964;
yt0=-5.0419*exp(2.*x)+3.7048*exp(x)+0.5.*x+0.75;
yt1=-5.3920*exp(2.*x)+4.8294*exp(x)-0.25.*x.^2-0.25.*x-0.125;
subplot(211)
plot(t,p(:,1),x,y0,x,yt0)
xlabel('x-axis');
ylabel('y-axis');
title('Solution when n=0')
subplot(212)
plot(t,p(:,1),x,y1,x,yt1)
xlabel('x-axis');
ylabel('y-axis');
title('Solution when n=1')


The solutions when ${\displaystyle n=0\,}$:

• The Taylor series solution is very close to the numerical solution, while the projection solution is noticeably less accurate.
• The projection solution begins to diverge from the numerical solution at about ${\displaystyle x=0.5\,}$.

The solutions when ${\displaystyle n=1\,}$:

• The projection solution is very close to the numerical solution, while the Taylor series solution is noticeably less accurate.
• The Taylor series solution starts diverging after ${\displaystyle x=1\,}$, since the series for ${\displaystyle log(1+x)}$ has ${\displaystyle R_{c}=1\,}$.

Conclusion:

• As the number of terms increases, the Taylor series solution becomes limited by the radius of convergence of the underlying series.
• The projection solution becomes more accurate as the number of terms increases, since the extra terms allow it to follow the numerical solution over the whole ${\displaystyle x}$ range.

## Author & Proofreaders

Author: Egm4313.s12.team17.deaver.md 05:36, 30 March 2012 (UTC)
Proofreader: Egm4313.s12.team17.ying 09:41, 30 March 2012 (UTC)
Editor: Egm4313.s12.team17.deaver.md 05:36, 30 March 2012 (UTC)

# Contributing Team Members

Team Member | Contributed | Proofread
--- | --- | ---
Allan Axelrod | Problem 6 | Problems 4, 7, and 8
Michael Deaver | Problem 9 and formatting | Problems 2, 5, and 6
Max Hintz | Problems 2, 3, and 7 |
Kelvin Li | Problems 4 and 8 | Problem 3
Thomas Wheeler | Problems 1 and 5 | Problem 4
Chen Ying | Formatting | Problems 1, 8, and 9