Find the radius of convergence for:

1) $r(x)=\sum_{k=0}^{\infty}(k+1)k\,x^{k}$  (1.0)

2) $r(x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{\gamma^{k}}x^{2k}$  (1.1)

Also find the radius of convergence for the Taylor series of 3) sin(x) about x = 0, 4) log(1+x) about x = 0, and 5) log(1+x) about x = 1.
1)---------------------------------------
First, we establish what $d_{k}$ is. As found in the notes (section 7-c), we consider an infinite power series of the form:

$r(x)=\sum_{k=0}^{\infty}d_{k}x^{k}$  (1.2)

The radius of convergence is then calculated using the formula below:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{d_{k+1}}{d_{k}}\right|\right]^{-1}$  (1.3)
In the case of equation (1.0), $d_{k}$ equals:

$d_{k}=(k+1)(k)$  (1.4)

This means, from equation (1.3):

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(k+2)(k+1)}{(k+1)(k)}\right|\right]^{-1}$  (1.5)

This simplifies to:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{k+2}{k}\right|\right]^{-1}$  (1.6)

This limit has the indeterminate form $\frac{\infty}{\infty}$, so L'Hôpital's rule applies. The radius of convergence is then represented by:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{1}{1}\right|\right]^{-1}$  (1.7)

This limit is 1, so the radius of convergence is 1.
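As a quick sanity check (a minimal SymPy sketch, not part of the original solution), the ratio-test limit of equation (1.3) can be computed symbolically:

```python
import sympy as sp

k = sp.symbols('k', positive=True)
d = (k + 1) * k  # coefficient d_k from equation (1.4)

# Ratio-test limit; the radius of convergence is its reciprocal.
L = sp.limit(sp.Abs(d.subs(k, k + 1) / d), k, sp.oo)
print(sp.simplify(1 / L))  # -> 1
```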
2)---------------------------------------
In the case of equation (1.1), after the change of index j = 2k (and then relabeling j as k), $d_{k}$ is equal to:

$d_{k}=\frac{(-1)^{k/2}}{\gamma^{k/2}}$  (1.8)

This means, from equation (1.3):

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)^{k/2+1/2}\,\gamma^{k/2}}{\gamma^{k/2+1/2}\,(-1)^{k/2}}\right|\right]^{-1}$  (1.9)

$\gamma$ is a constant in this case, and the expression simplifies to:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)^{1/2}}{\gamma^{1/2}}\right|\right]^{-1}$  (1.10)

Taking the limit and the absolute value (the factor $(-1)^{1/2}$ has modulus 1, so it drops out), the radius of convergence becomes $R_{c}=\sqrt{|\gamma|}$, i.e. $\sqrt{\gamma}$ for $\gamma>0$.
3)---------------------------------------
The Taylor series that represents sin(x) about x = 0 is:

$\sin x=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}x^{2n+1}$  (1.11)

We first put this in the form of equation (1.2) so that we can read off $d_{k}$. Letting k = 2n+1, so the sum runs over odd k:

$r(x)=\sum_{k=1,3,5,\ldots}\frac{(-1)^{\frac{k-1}{2}}}{k!}x^{k}$  (1.12)

This means $d_{k}$ equals:

$d_{k}=\frac{(-1)^{\frac{k-1}{2}}}{k!}$  (1.13)

This means, from equation (1.3):

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)^{\frac{k+1}{2}}\,k!}{(k+1)!\,(-1)^{\frac{k-1}{2}}}\right|\right]^{-1}$  (1.14)

This simplifies to:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{-1}{k+1}\right|\right]^{-1}$  (1.15)

The limit inside the brackets is 0, so the inverted limit makes $R_{c}=\infty$. This is the best possible case: the series converges for all values of x.
4)---------------------------------------
The Taylor series that represents log(1+x) about x = 0 is:

$\log(1+x)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}x^{k}$  (1.16)

So $d_{k}$ is defined as shown below:

$d_{k}=\frac{(-1)^{k+1}}{k}$  (1.17)

This means, from equation (1.3):

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)^{k+2}\,k}{(k+1)\,(-1)^{k+1}}\right|\right]^{-1}$  (1.18)

This simplifies to:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)\,k}{k+1}\right|\right]^{-1}$  (1.19)

This limit has the indeterminate form $\frac{\infty}{\infty}$, so L'Hôpital's rule applies. The radius of convergence is then represented by:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{-1}{1}\right|\right]^{-1}$  (1.20)

This absolute, inverted limit is 1, so the radius of convergence is 1.
5)---------------------------------------
The Taylor series that represents log(1+x) about x = 1 is an expansion in powers of (x-1):

$\log(1+x)=\log 2+\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k\,2^{k}}(x-1)^{k}$  (1.21)

So $d_{k}$ is defined as shown below:

$d_{k}=\frac{(-1)^{k+1}}{k\,2^{k}}$  (1.22)

This means, from equation (1.3):

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{(-1)^{k+2}\,k\,2^{k}}{(k+1)\,2^{k+1}\,(-1)^{k+1}}\right|\right]^{-1}$  (1.23)

This simplifies to:

$R_{c}=\left[\lim_{k\rightarrow\infty}\left|\frac{-k}{2(k+1)}\right|\right]^{-1}$  (1.24)

The absolute limit is 1/2, so the inverted limit gives a radius of convergence of 2. This agrees with the distance from the expansion point x = 1 to the singularity of log(1+x) at x = -1.
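As a cross-check on parts 3)-5) (a minimal SymPy sketch, not part of the original solution; the helper name `radius` is ours), the ratio-test computation can be automated:

```python
import sympy as sp

k = sp.symbols('k', integer=True, positive=True)

def radius(dk):
    # Ratio test: R = 1 / lim_{k->oo} |d_{k+1}/d_k|
    ratio = sp.simplify(sp.Abs(dk.subs(k, k + 1) / dk))
    L = sp.limit(ratio, k, sp.oo)
    return sp.oo if L == 0 else 1 / L

print(radius(1 / sp.factorial(k)))            # sin-type magnitudes 1/k! -> oo
print(radius((-1)**(k + 1) / k))              # log(1+x) about x = 0 -> 1
print(radius((-1)**(k + 1) / (k * 2**k)))     # log(1+x) about x = 1 -> 2
```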
Solved and Typed By -Egm4313.s12.team1.silvestri (talk ) 17:26, 24 March 2012 (UTC)
Reviewed By - --Egm4313.s12.team1.durrance (talk ) 17:33, 30 March 2012 (UTC)
Problem R5.2: Determining linear independence using Wronskian and Gramian
Determine whether the following functions are linearly independent using the Wronskian.

$f(x)=x^{2},\quad g(x)=x^{4}$  (2.0)

$f(x)=\cos(x),\quad g(x)=\sin(3x)$  (2.1)

Then use the Gramian over the interval [a,b] = [-1,1] to come to the same conclusion.
The Wronskian is defined as:

$W(f,g):=\det\begin{bmatrix}f&g\\f'&g'\end{bmatrix}=fg'-gf'$  (2.2)

If the Wronskian does not equal zero, then the functions are linearly independent.
The values needed are:

$f'=2x$  (2.3)

$g'=4x^{3}$  (2.4)

Substituting into Eq. (2.2):

$W(f,g)=x^{2}(4x^{3})-2x(x^{4})=2x^{5}\neq 0$  (2.5)

Because the Wronskian does not equal zero (except at the isolated point x = 0), the functions in Eqs. (2.0) are linearly independent.
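A minimal SymPy sketch (ours, not part of the original report) confirming the Wronskian in (2.5):

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, x**4

# Wronskian W(f, g) = f*g' - g*f', per equation (2.2)
W = sp.simplify(f * sp.diff(g, x) - g * sp.diff(f, x))
print(W)  # -> 2*x**5, nonzero away from x = 0
```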
The same Wronskian definition (2.2) applies to Eqs. (2.1). The values needed are:

$f'=-\sin(x)$  (2.6)

$g'=3\cos(3x)$  (2.7)

Substituting into Eq. (2.2):

$W(f,g)=3\cos(x)\cos(3x)-(-\sin(x))\sin(3x)=3\cos(x)\cos(3x)+\sin(x)\sin(3x)\neq 0$  (2.8)

(At x = 0, for example, W = 3.) Because the Wronskian does not equal zero, the functions in Eqs. (2.1) are linearly independent.
The Gramian is defined as:

$\Gamma(f,g):=\det\begin{bmatrix}\langle f,f\rangle&\langle f,g\rangle\\\langle g,f\rangle&\langle g,g\rangle\end{bmatrix}$  (2.9)

where the scalar product is defined as:

$\langle f,g\rangle:=\int_{a}^{b}f(x)g(x)\,dx$  (2.10)

When the Gramian does not equal zero, the functions are linearly independent.
Calculating the scalar products:

$\langle f,f\rangle=\int_{-1}^{1}(x^{2})(x^{2})\,dx=2/5$
$\langle f,g\rangle=\int_{-1}^{1}(x^{2})(x^{4})\,dx=2/7$
$\langle g,f\rangle=\int_{-1}^{1}(x^{4})(x^{2})\,dx=2/7$
$\langle g,g\rangle=\int_{-1}^{1}(x^{4})(x^{4})\,dx=2/9$  (2.11)

Substituting those values into Eq. (2.9) and calculating the determinant:

$\Gamma(f,g)=(2/5)(2/9)-(2/7)(2/7)=16/2205\neq 0$  (2.12)

Because the determinant does not equal zero, the functions in Eqs. (2.0) are linearly independent.
The same Gramian definition (2.9) and scalar product (2.10) apply to Eqs. (2.1). Calculating the scalar products (with x in radians):

$\langle f,f\rangle=\int_{-1}^{1}\cos^{2}(x)\,dx=1+\tfrac{\sin 2}{2}\approx 1.4546$
$\langle f,g\rangle=\int_{-1}^{1}\cos(x)\sin(3x)\,dx=0$
$\langle g,f\rangle=\int_{-1}^{1}\sin(3x)\cos(x)\,dx=0$
$\langle g,g\rangle=\int_{-1}^{1}\sin^{2}(3x)\,dx=1-\tfrac{\sin 6}{6}\approx 1.0466$  (2.13)

(The off-diagonal integrals vanish because cos(x)sin(3x) is an odd function integrated over a symmetric interval.)
Substituting those values into Eq. (2.9) and calculating the determinant:

$\Gamma(f,g)\approx(1.4546)(1.0466)-0\approx 1.5224\neq 0$  (2.14)

Because the determinant does not equal zero, the functions in Eqs. (2.1) are linearly independent.
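A minimal SymPy sketch (ours, not from the original report) confirming the radian-valued integrals in (2.13)-(2.14):

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.cos(x), sp.sin(3 * x)

# Gram matrix entries over [-1, 1] (x in radians), per equation (2.10)
ff = sp.integrate(f * f, (x, -1, 1))
fg = sp.integrate(f * g, (x, -1, 1))
gg = sp.integrate(g * g, (x, -1, 1))

gram = ff * gg - fg**2
print(sp.N(ff), sp.N(gg), sp.N(gram))  # ~1.4546, ~1.0466, ~1.5224
```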
Solved and Typed By - Egm4313.s12.team1.durrance (talk ) 18:35, 26 March 2012 (UTC)
Reviewed By ---Egm4313.s12.team1.stewart (talk ) 19:03, 26 March 2012 (UTC)
Problem R5.3 Linear Independence of Vectors Using the Gramian
From Lect. 7c Pg. 38: Verify that $\mathbf{b}_{1},\mathbf{b}_{2}$ are linearly independent using the Gramian.

$\mathbf{b}_{1}=2\mathbf{e}_{1}+7\mathbf{e}_{2}$  (3.1)

$\mathbf{b}_{2}=1.5\mathbf{e}_{1}+3\mathbf{e}_{2}$  (3.2)
The Gramian:

$\Gamma(f,g):=\det\begin{bmatrix}\langle f,f\rangle&\langle f,g\rangle\\\langle g,f\rangle&\langle g,g\rangle\end{bmatrix}$  (3.3)

For the vectors given, the Gramian looks like this:

$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2}):=\det\begin{bmatrix}\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle\\\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle\end{bmatrix}$  (3.4)

where, for vectors, the scalar product is the dot product:

$\langle\mathbf{b}_{i},\mathbf{b}_{j}\rangle\equiv\mathbf{b}_{i}\cdot\mathbf{b}_{j}$  (3.5)

For the orthonormal basis vectors:

$\langle\mathbf{e}_{1},\mathbf{e}_{2}\rangle=0,\quad\langle\mathbf{e}_{1},\mathbf{e}_{1}\rangle=\langle\mathbf{e}_{2},\mathbf{e}_{2}\rangle=1$  (3.6)
Therefore (the cross terms vanish by (3.6)):

$\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle=(2\mathbf{e}_{1}+7\mathbf{e}_{2})\cdot(2\mathbf{e}_{1}+7\mathbf{e}_{2})=4+49=53$  (3.7)

$\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle=(2\mathbf{e}_{1}+7\mathbf{e}_{2})\cdot(1.5\mathbf{e}_{1}+3\mathbf{e}_{2})=3+21=24$  (3.8)

$\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle=(1.5\mathbf{e}_{1}+3\mathbf{e}_{2})\cdot(2\mathbf{e}_{1}+7\mathbf{e}_{2})=3+21=24$  (3.9)

$\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle=(1.5\mathbf{e}_{1}+3\mathbf{e}_{2})\cdot(1.5\mathbf{e}_{1}+3\mathbf{e}_{2})=2.25+9=11.25$  (3.10)
The determinant to solve for the Gramian is:

$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2}):=\det\begin{bmatrix}\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle\\\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle\end{bmatrix}=\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle-\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle$  (3.11)

Plugging in values:

$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2})=(53)(11.25)-(24)(24)$  (3.12)
$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2})=596.25-576$  (3.13)
$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2})=20.25$  (3.14)

Qualification for a linearly independent system of vectors:

$\Gamma\neq 0\Rightarrow\Gamma^{-1}\ \text{exists}\Rightarrow\mathbf{c}=\Gamma^{-1}\mathbf{d}$  (3.15)

So the vectors $\mathbf{b}_{1},\mathbf{b}_{2}$ are linearly independent because:

$\Gamma=20.25\neq 0$  (3.16)
Solved and Typed By - Egm4313.s12.team1.stewart (talk ) 18:20, 25 March 2012 (UTC)
Reviewed By - Egm4313.s12.team1.rosenberg (talk ) 05:20, 28 March 2012 (UTC)
Problem R5.4 Showing that a summation of particular solutions is the overall particular solution to an L2-ODE-VC with a summation of excitations
Show that

$y_{p}(x)=\sum_{i=1}^{n}y_{p,i}(x)$

is indeed the overall particular solution of the L2-ODE-VC

$y''_{p,i}+p(x)y'_{p,i}+q(x)y_{p,i}=r_{i}(x)$

with the excitation

$r(x)=r_{1}(x)+r_{2}(x)+\ldots+r_{n}(x)=\sum_{i=1}^{n}r_{i}(x)$

Discuss the choice of $y_{p}(x)$, for example for $r(x)=K\cos(wx)$. Why would you need both $\cos(wx)$ and $\sin(wx)$ in $y_{p}(x)$?
The particular solutions and their derivatives sum as follows:

$y_{p}(x)=y_{p,1}(x)+y_{p,2}(x)+\ldots+y_{p,n}(x)\ \rightarrow\ y_{p}(x)=\sum_{i=1}^{n}y_{p,i}(x)$  (4.0)

$y'_{p}(x)=y'_{p,1}(x)+y'_{p,2}(x)+\ldots+y'_{p,n}(x)\ \rightarrow\ y'_{p}(x)=\sum_{i=1}^{n}y'_{p,i}(x)$  (4.1)

$y''_{p}(x)=y''_{p,1}(x)+y''_{p,2}(x)+\ldots+y''_{p,n}(x)\ \rightarrow\ y''_{p}(x)=\sum_{i=1}^{n}y''_{p,i}(x)$  (4.2)

Because the given ODE is linear (L2-ODE-VC), superposition applies: each $y_{p,i}$ is a solution for its excitation $r_{i}$, and when there are multiple excitations, the solution for the sum of the excitations is the sum of the particular solutions. Each $y_{p,i}$ solves the following L2-ODE-VC with its own excitation:

$y''_{p,i}+p(x)y'_{p,i}+q(x)y_{p,i}=r_{i}(x)$  (4.3)

Summing (4.3) over i and using (4.0)-(4.2) shows that $y_{p}$ solves the equation with the summed excitation:

$\sum_{i=1}^{n}y''_{p,i}(x)+p(x)\sum_{i=1}^{n}y'_{p,i}(x)+q(x)\sum_{i=1}^{n}y_{p,i}(x)=\sum_{i=1}^{n}r_{i}(x)$  (4.4)
For $r(x)=K\cos(wx)$, you need both $\cos(wx)$ and $\sin(wx)$ in $y_{p}(x)$ because solving for the particular solution requires taking derivatives of it, and differentiating $\cos(wx)$ produces $\sin(wx)$ terms (and vice versa). With only a cosine in $y_{p}$, the derivative terms of the ODE would introduce sine terms on the left that nothing on the right could balance. Having both $\cos(wx)$ and $\sin(wx)$ is necessary so that these extra terms can cancel when coefficients are matched.
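To see this concretely, here is a minimal SymPy sketch (the ODE y'' + y' = cos(x) is an assumed example, not from the original problem): with a cosine-only ansatz the sine terms could not be balanced, but with both terms the coefficient matching succeeds.

```python
import sympy as sp

x, M, N = sp.symbols('x M N')
yp = M * sp.cos(x) + N * sp.sin(x)  # ansatz with both cos and sin terms

# Assumed example ODE: y'' + y' = cos(x)
residual = sp.expand(sp.diff(yp, x, 2) + sp.diff(yp, x) - sp.cos(x))
eqs = [residual.coeff(sp.cos(x)), residual.coeff(sp.sin(x))]
print(sp.solve(eqs, [M, N]))  # -> {M: -1/2, N: 1/2}; both terms are needed
```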
Solved and Typed By - --Egm4313.s12.team1.wyattling (talk ) 19:17, 26 March 2012 (UTC)
Reviewed By - Egm4313.s12.team1.armanious (talk ) 05:25, 30 March 2012 (UTC)
Problem R5.5: Linear independence of cos 7x and sin 7x, and the overall solution with a cosine excitation
1. Show that $\cos 7x$ and $\sin 7x$ are linearly independent using the Wronskian and the Gramian (one period).
2. Find 2 equations for the two unknowns M, N, and solve for M, N.
3. Find the overall solution $y(x)$ that corresponds to the initial conditions (3b) p.3-7:

$y(0)=1,\quad y'(0)=0$

Plot the solution over 3 periods.
Part 1a
When using the Wronskian, if the result does not equal 0, then the two functions are linearly independent of each other.
The Wronskian can be defined as:

$W(f,g):=\det\begin{bmatrix}f&g\\f'&g'\end{bmatrix}=fg'-gf'$  (5.1)
Let $f=\cos(7x)$ and $g=\sin(7x)$, so that:

$f'=-7\sin(7x),\quad g'=7\cos(7x)$  (5.2)

Plugging these values into the Wronskian equation yields:

$W(f,g)=\det\begin{bmatrix}\cos(7x)&\sin(7x)\\-7\sin(7x)&7\cos(7x)\end{bmatrix}=7\cos^{2}(7x)+7\sin^{2}(7x)$  (5.3)

$7\cos^{2}(7x)+7\sin^{2}(7x)=7\neq 0$  (5.4)

Thus f and g are in fact linearly independent of each other.
Part 1b
Now we solve the same problem using the Gramian, defined as:

$\Gamma(f,g):=\det\begin{bmatrix}\langle f,f\rangle&\langle f,g\rangle\\\langle g,f\rangle&\langle g,g\rangle\end{bmatrix}$  (5.5)

As with the Wronskian, f and g are linearly independent of each other when $\Gamma(f,g)\neq 0$. Integrating over one period means our bounds are $(0,\frac{2\pi}{7})$.
$\langle f,f\rangle=\int_{0}^{2\pi/7}\cos^{2}(7x)\,dx$  (5.6)

Setting $u=7x$ gives $du=7\,dx$, which changes the limits of integration to $(0,2\pi)$:

$\langle f,f\rangle=\frac{1}{7}\int_{0}^{2\pi}\cos^{2}(u)\,du$  (5.7)

Integrating gives:

$\frac{1}{7}\left[\frac{u}{2}+\frac{1}{4}\sin 2u\right]_{0}^{2\pi}$  (5.8)

which equals:

$\frac{\pi}{7}$  (5.9)
$\langle g,g\rangle=\int_{0}^{2\pi/7}\sin^{2}(7x)\,dx$  (5.10)

The same substitution $u=7x$ gives:

$\langle g,g\rangle=\frac{1}{7}\int_{0}^{2\pi}\sin^{2}(u)\,du$  (5.11)

Integrating gives:

$\frac{1}{7}\left[\frac{u}{2}-\frac{1}{4}\sin 2u\right]_{0}^{2\pi}$  (5.12)

which equals:

$\frac{\pi}{7}$  (5.13)
$\langle f,g\rangle=\langle g,f\rangle=\int_{0}^{2\pi/7}\cos(7x)\sin(7x)\,dx$  (5.14)

Because cos and sin are orthogonal to each other over a full period, we can say without carrying out the integration that $\langle f,g\rangle=\langle g,f\rangle=0$.
Plugging these values into our Gramian equation gives:

$\Gamma(f,g)=\det\begin{bmatrix}\frac{\pi}{7}&0\\0&\frac{\pi}{7}\end{bmatrix}=\frac{\pi^{2}}{49}\neq 0$  (5.15)

Thus, we again see that f and g are linearly independent of each other.
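A minimal SymPy sketch (ours, not part of the original report) confirming the one-period Gram matrix entries:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.cos(7 * x), sp.sin(7 * x)
T = 2 * sp.pi / 7  # one period of cos(7x) and sin(7x)

ff = sp.integrate(f * f, (x, 0, T))
fg = sp.integrate(f * g, (x, 0, T))
gg = sp.integrate(g * g, (x, 0, T))
print(ff, fg, gg, sp.simplify(ff * gg - fg**2))  # pi/7, 0, pi/7, pi**2/49
```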
Part 2
From the notes, we are given the following information:

$y''-3y'-10y=3\cos(7x)$  (5.16)

$y_{p}(x)=M\cos(7x)+N\sin(7x)$  (5.17)

$y'_{p}(x)=-7M\sin(7x)+7N\cos(7x)$  (5.18)

$y''_{p}(x)=-49M\cos(7x)-49N\sin(7x)$  (5.19)

Plugging (5.17)-(5.19) back into the original equation (5.16) yields:

$-49M\cos(7x)-49N\sin(7x)+21M\sin(7x)-21N\cos(7x)-10M\cos(7x)-10N\sin(7x)=3\cos(7x)$  (5.20)

Collecting like terms leaves us with:

$(-59M-21N)\cos(7x)+(21M-59N)\sin(7x)=3\cos(7x)$  (5.21)

Equating the cosine and sine coefficients gives a coupled pair of equations (the M and N terms mix, so they cannot be solved independently):

$-59M-21N=3$  (5.22)

$21M-59N=0$  (5.23)

Solving simultaneously gives $M=-\frac{177}{3922}\approx-0.04513$ and $N=-\frac{63}{3922}\approx-0.01606$.
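A quick NumPy check of the coupled system (a sketch of ours, not part of the original report):

```python
import numpy as np

# Coefficient-matching system (5.22)-(5.23):
# -59M - 21N = 3 (cos terms), 21M - 59N = 0 (sin terms)
A = np.array([[-59.0, -21.0],
              [ 21.0, -59.0]])
b = np.array([3.0, 0.0])
M, N = np.linalg.solve(A, b)
print(M, N)  # -> approximately -0.04513 and -0.01606
```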
Part 3
The overall solution to the equation is expressed as $y(x)=y_{h}(x)+y_{p}(x)$. We must now find the homogeneous solution.
Writing the given equation in homogeneous form gives:

$y''-3y'-10y=0$  (5.24)

Rewriting in characteristic form:

$\lambda^{2}-3\lambda-10=0$  (5.25)

We can solve for the roots by simple factoring:

$(\lambda-5)(\lambda+2)=0$  (5.26)

Thus:

$\lambda_{1,2}=(5,-2)$  (5.27)

yielding:

$y_{h}(x)=c_{1}e^{5x}+c_{2}e^{-2x}$  (5.28)

The initial conditions apply to the overall solution $y=y_{h}+y_{p}$, not to $y_{h}$ alone, so we first write:

$y'(x)=5c_{1}e^{5x}-2c_{2}e^{-2x}-7M\sin(7x)+7N\cos(7x)$  (5.29)

Plugging in the initial conditions $y(0)=1$ and $y'(0)=0$:

$y(0)=c_{1}+c_{2}+M=1,\quad y'(0)=5c_{1}-2c_{2}+7N=0$  (5.30)

With $M=-\frac{177}{3922}$ and $N=-\frac{63}{3922}$, solving each gives:

$c_{1}=\frac{8639}{27454}\approx 0.3147,\quad c_{2}=\frac{20054}{27454}\approx 0.7305$  (5.31)
Therefore:

$y_{h}(x)\approx 0.3147\,e^{5x}+0.7305\,e^{-2x}$  (5.32)

Plugging our answers from Part 2 into the general particular solution, we get:

$y_{p}(x)\approx-0.04513\cos(7x)-0.01606\sin(7x)$  (5.33)

so our final solution is:

$y(x)\approx 0.3147\,e^{5x}+0.7305\,e^{-2x}-0.04513\cos(7x)-0.01606\sin(7x)$  (5.34)

To plot over 3 periods, we plot from $(0,\frac{6\pi}{7})$.
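A minimal matplotlib sketch (ours) of (5.34) over the three periods named above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Overall solution (5.34) with the constants from (5.31) and Part 2
c1, c2 = 8639 / 27454, 20054 / 27454
M, N = -177 / 3922, -63 / 3922

x = np.linspace(0, 6 * np.pi / 7, 500)  # 3 periods of cos(7x)
y = c1 * np.exp(5 * x) + c2 * np.exp(-2 * x) + M * np.cos(7 * x) + N * np.sin(7 * x)

plt.plot(x, y)
plt.xlabel('x'); plt.ylabel('y(x)')
plt.show()
```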
Solved and Typed By - Egm4313.s12.team1.rosenberg (talk ) 18:45, 30 March 2012 (UTC)
Reviewed By -Egm4313.s12.team1.silvestri (talk ) 18:44, 30 March 2012 (UTC)
Problem R5.6: Completing the solution of an L2-ODE-CC with a resonant excitation
Consider the following L2-ODE-CC; see p.6-6 (the +4y' sign is the one used consistently in the work below):

$y''+4y'+13y=2e^{-2x}\cos(3x)$  (6.0)

Homogeneous solution:

$y_{h}(x)=e^{-2x}[A\cos 3x+B\sin 3x]$  (6.1)

Particular solution:

$y_{p}(x)=xe^{-2x}[M\cos 3x+N\sin 3x]$  (6.2)

Complete the solution for this problem. Find the overall solution $y(x)$ that corresponds to the initial conditions (3b) p.3-7:

$y(0)=1,\quad y'(0)=0$  (6.3)
Start by finding $y_{p}'$ and $y_{p}''$:

$y_{p}'=e^{-2x}[(-3Mx-2Nx+N)\sin 3x+(-2Mx+3Nx+M)\cos 3x]$  (6.4)

$y_{p}''=e^{-2x}[(12Mx-6M-5Nx-4N)\sin 3x+(-5Mx-4M-12Nx+6N)\cos 3x]$  (6.5)

Substitute $y_{p}$ and its derivatives into (6.0) to find $M$ and $N$. The x-dependent terms cancel, as they must, since $e^{-2x}\cos 3x$ and $e^{-2x}\sin 3x$ solve the homogeneous equation:

$y_{p}''+4y_{p}'+13y_{p}=e^{-2x}[(-6M)\sin 3x+(6N)\cos 3x]$  (6.6)

Setting this equal to the excitation from (6.0):

$-6Me^{-2x}\sin 3x+6Ne^{-2x}\cos 3x=2e^{-2x}\cos 3x$  (6.7)

From (6.7), matching coefficients gives:

$-6M=0,\quad 6N=2$  (6.8)

Solving for $M$ and $N$:

$M=0,\quad N=\frac{1}{3}$  (6.9)
This gives us the particular solution:

$y_{p}=\frac{1}{3}xe^{-2x}\sin 3x$  (6.10)

For the general solution,

$y=y_{h}+y_{p}$  (6.11)

$y=e^{-2x}[A\cos 3x+B\sin 3x]+\frac{1}{3}xe^{-2x}\sin 3x$  (6.12)

$y=e^{-2x}\left[A\cos 3x+\left(B+\tfrac{1}{3}x\right)\sin 3x\right]$  (6.13)
To solve for $A$ and $B$, we use the initial conditions from (6.3):

$y(0)=e^{-2(0)}\left[A\cos 3(0)+\left(B+\tfrac{1}{3}(0)\right)\sin 3(0)\right]=1$  (6.14)

which simplifies to:

$A=1$  (6.15)

For the second initial condition from (6.3), differentiate (6.13) (with A = 1 already substituted in the cosine coefficient):

$y'=e^{-2x}\left[\left(-3A-2B+\tfrac{1}{3}-\tfrac{2}{3}x\right)\sin 3x+(3B+x-2)\cos 3x\right]$  (6.16)

$y'(0)=e^{-2(0)}\left[\left(-3A-2B+\tfrac{1}{3}-\tfrac{2}{3}(0)\right)\sin 3(0)+(3B+(0)-2)\cos 3(0)\right]=0$  (6.17)

$3B-2=0$  (6.18)

$B=\frac{2}{3}$  (6.19)
We can now write $y$ with all coefficients known:

$y=e^{-2x}\left[(1)\cos 3x+\left(\tfrac{2}{3}+\tfrac{1}{3}x\right)\sin 3x\right]$  (6.20)

Final equation:

$y=e^{-2x}\left[\cos 3x+\left(\tfrac{2}{3}+\tfrac{1}{3}x\right)\sin 3x\right]$  (6.21)
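A minimal SymPy sketch (ours, not part of the original report) verifying that (6.21) satisfies the ODE, with the +4y' sign used throughout the work, and the initial conditions:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(-2 * x) * (sp.cos(3 * x) + (sp.Rational(2, 3) + x / 3) * sp.sin(3 * x))

lhs = sp.diff(y, x, 2) + 4 * sp.diff(y, x) + 13 * y
print(sp.simplify(lhs - 2 * sp.exp(-2 * x) * sp.cos(3 * x)))  # -> 0
print(y.subs(x, 0), sp.diff(y, x).subs(x, 0))                 # -> 1, 0
```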
Solved and Typed By - Egm4313.s12.team1.essenwein (talk ) 00:28, 20 March 2012 (UTC)
Reviewed By - --Egm4313.s12.team1.stewart (talk ) 18:29, 25 March 2012 (UTC)
Problem R5.7: Finding vector components in an oblique basis using the Gramian matrix
See R5.7 Lect. 8b pg. 11:

$\mathbf{v}=4\mathbf{e}_{1}+2\mathbf{e}_{2}=c_{1}\mathbf{b}_{1}+c_{2}\mathbf{b}_{2}$

The oblique basis vectors b1, b2 are:

$\mathbf{b}_{1}=2\mathbf{e}_{1}+7\mathbf{e}_{2}$
$\mathbf{b}_{2}=1.5\mathbf{e}_{1}+3\mathbf{e}_{2}$

1. Find the components c1, c2 using the Gramian matrix.
2. Verify the result found above.
To find the components in the oblique basis, the Gram matrix must be used. The Gram matrix is defined as:

$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2}):=\begin{bmatrix}\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle\\\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle\end{bmatrix}$  (7.0)

This matrix requires several scalar products to be found. One important feature of scalar products will be stated here and used implicitly throughout these calculations:

$\langle\mathbf{e}_{1},\mathbf{e}_{2}\rangle=0$  (7.1)

Equation (7.1) states that the scalar product of two perpendicular vectors is zero.
To find the Gram matrix, three scalar products must be found:

$\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle=(2)(2)+(7)(7)=53$  (7.2)

$\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle=\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle=(2)(1.5)+(7)(3)=24$  (7.3)

$\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle=(1.5)(1.5)+(3)(3)=11.25$  (7.4)

Using the above values, the Gram matrix is found to be:

$\Gamma(\mathbf{b}_{1},\mathbf{b}_{2})=\begin{bmatrix}53&24\\24&11.25\end{bmatrix}$  (7.5)
To find the components, the Gram matrix must be used to solve the following equation:

$\begin{bmatrix}\langle\mathbf{b}_{1},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{1},\mathbf{b}_{2}\rangle\\\langle\mathbf{b}_{2},\mathbf{b}_{1}\rangle&\langle\mathbf{b}_{2},\mathbf{b}_{2}\rangle\end{bmatrix}\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}=\begin{Bmatrix}\langle\mathbf{b}_{1},\mathbf{v}\rangle\\\langle\mathbf{b}_{2},\mathbf{v}\rangle\end{Bmatrix}$  (7.6)

The right-hand side of the equation can be found using:

$\langle\mathbf{b}_{1},\mathbf{v}\rangle=(2)(4)+(7)(2)=22$  (7.7)

$\langle\mathbf{b}_{2},\mathbf{v}\rangle=(1.5)(4)+(3)(2)=12$  (7.8)

Using these known values, (7.6) becomes:

$\begin{bmatrix}53&24\\24&11.25\end{bmatrix}\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}=\begin{Bmatrix}22\\12\end{Bmatrix}$  (7.9)
To find the components, the Gramian (the determinant of the Gram matrix) and the inverse of the Gram matrix must be found:

$\Gamma=\begin{vmatrix}53&24\\24&11.25\end{vmatrix}=(53)(11.25)-(24)(24)=20.25$  (7.10)

$\Gamma^{-1}=\frac{1}{20.25}\begin{bmatrix}11.25&-24\\-24&53\end{bmatrix}=\begin{bmatrix}\frac{5}{9}&-\frac{32}{27}\\-\frac{32}{27}&\frac{212}{81}\end{bmatrix}$  (7.11)

This can be used in the matrix equation to solve for the components:

$\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}=\begin{bmatrix}\frac{5}{9}&-\frac{32}{27}\\-\frac{32}{27}&\frac{212}{81}\end{bmatrix}\begin{Bmatrix}22\\12\end{Bmatrix}=\begin{Bmatrix}-2\\\frac{16}{3}\end{Bmatrix}$  (7.12)

Therefore the vector v with respect to the oblique basis is:

$\mathbf{v}=-2\mathbf{b}_{1}+\frac{16}{3}\mathbf{b}_{2}$  (7.13)
As a check, substitute the definitions of each vector with respect to the basis e1 and e2:

$4\mathbf{e}_{1}+2\mathbf{e}_{2}=-2(2\mathbf{e}_{1}+7\mathbf{e}_{2})+\frac{16}{3}(1.5\mathbf{e}_{1}+3\mathbf{e}_{2})=-4\mathbf{e}_{1}-14\mathbf{e}_{2}+8\mathbf{e}_{1}+16\mathbf{e}_{2}$  (7.14)

which reduces to:

$4\mathbf{e}_{1}+2\mathbf{e}_{2}=4\mathbf{e}_{1}+2\mathbf{e}_{2}$  (7.15)
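A minimal NumPy sketch (ours, not part of the original report) solving the Gram system (7.9) and reproducing the check (7.14)-(7.15):

```python
import numpy as np

b1, b2, v = np.array([2.0, 7.0]), np.array([1.5, 3.0]), np.array([4.0, 2.0])

G = np.array([[b1 @ b1, b1 @ b2],
              [b2 @ b1, b2 @ b2]])   # Gram matrix (7.5)
rhs = np.array([b1 @ v, b2 @ v])     # right-hand side (7.7)-(7.8)

c = np.linalg.solve(G, rhs)
print(c)                             # -> [-2.0, 5.333...]
print(c[0] * b1 + c[1] * b2)         # -> [4.0, 2.0], recovering v
```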
Solved and Typed By - Egm4313.s12.team1.armanious (talk ) 01:40, 25 March 2012 (UTC)
Reviewed By - --Egm4313.s12.team1.stewart (talk ) 18:31, 25 March 2012 (UTC)
Problem R5.8 Finding the Integral of a Logarithmic Function
See R5.8 Lect. 8b pg. 16:
Find the integral:

$\int x^{n}\log(1+x)\,dx$  (8.0)
To find this integral, integration by parts must be used. The formula for integration by parts is:

$\int u\,dv=uv-\int v\,du+C$  (8.1)

Choose:

$u=\log(1+x)$  (8.2)
$du=\frac{dx}{1+x}$  (8.3)
$dv=x^{n}\,dx$  (8.4)
$v=\frac{x^{n+1}}{n+1}$  (8.5)
Using these expressions in the formula yields:

$\int x^{n}\log(1+x)\,dx=\frac{x^{n+1}}{n+1}\log(1+x)-\frac{1}{n+1}\int\frac{x^{n+1}}{1+x}\,dx+C$  (8.6)
Now the integral $\int\frac{x^{n+1}}{1+x}\,dx$ must be found. To start, the fraction is expanded using long division:

$\frac{x^{n+1}}{1+x}=x^{n}-x^{n-1}+x^{n-2}-x^{n-3}+\ldots+\frac{(-1)^{n+1}}{x+1}=\frac{(-1)^{n+1}}{x+1}+\sum_{k=0}^{n}(-1)^{k}x^{n-k}$  (8.7)
This expression can now be integrated term by term:

$\int\frac{x^{n+1}}{1+x}\,dx=\int\frac{(-1)^{n+1}}{x+1}\,dx+\int\sum_{k=0}^{n}(-1)^{k}x^{n-k}\,dx$  (8.8)

$\int\frac{(-1)^{n+1}}{x+1}\,dx=(-1)^{n+1}\log(1+x)$  (8.9)

$\int\sum_{k=0}^{n}(-1)^{k}x^{n-k}\,dx=\sum_{k=0}^{n}\int(-1)^{k}x^{n-k}\,dx=\sum_{k=0}^{n}\frac{(-1)^{k}}{n-k+1}x^{n-k+1}$  (8.10)

Therefore:

$\int\frac{x^{n+1}}{1+x}\,dx=(-1)^{n+1}\log(1+x)+\sum_{k=0}^{n}\frac{(-1)^{k}}{n-k+1}x^{n-k+1}$  (8.11)
Substituting into the original equation:

$\int x^{n}\log(1+x)\,dx=\frac{x^{n+1}}{n+1}\log(1+x)-\frac{1}{n+1}\left((-1)^{n+1}\log(1+x)+\sum_{k=0}^{n}\frac{(-1)^{k}}{n-k+1}x^{n-k+1}\right)+C$  (8.12)

Simplifying this yields:

$\int x^{n}\log(1+x)\,dx=\frac{1}{n+1}\left[\left(x^{n+1}-(-1)^{n+1}\right)\log(1+x)-\sum_{k=0}^{n}\frac{(-1)^{k}}{n-k+1}x^{n-k+1}\right]+C$  (8.13)
To illustrate this, two test cases with n = 0 and n = 1 will be used.
For n = 0:

$\int x^{0}\log(1+x)\,dx=\frac{1}{0+1}\left[\left(x^{0+1}-(-1)^{0+1}\right)\log(1+x)-\frac{(-1)^{0}}{0-0+1}x^{0-0+1}\right]+C$  (8.14)

This simplifies to:

$\int\log(1+x)\,dx=(x+1)\log(1+x)-x+C$  (8.15)
For n = 1:

$\int x^{1}\log(1+x)\,dx=\frac{1}{1+1}\left[\left(x^{1+1}-(-1)^{1+1}\right)\log(1+x)-\frac{(-1)^{0}}{1-0+1}x^{1-0+1}-\frac{(-1)^{1}}{1-1+1}x^{1-1+1}\right]+C$  (8.16)

This simplifies to:

$\int x\log(1+x)\,dx=\frac{1}{2}\left[(x^{2}-1)\log(1+x)-\frac{1}{2}x^{2}+x\right]+C$  (8.17)
In fact, this formula can be further generalized to $\int x^{n}\log(r+x)\,dx$ where r is any real number. The most notable step that changes is the long-division expansion: the k-th term picks up a factor of $r^{k}$. The result is:

$\frac{x^{n+1}}{r+x}=x^{n}-rx^{n-1}+r^{2}x^{n-2}-r^{3}x^{n-3}+\ldots+\frac{(-r)^{n+1}}{x+r}=\frac{(-r)^{n+1}}{x+r}+\sum_{k=0}^{n}(-r)^{k}x^{n-k}$  (8.18)
It is important to note that this particular step fails when r = 0, because $0^{0}$ is undefined; a special case with r = 0 will be shown separately for full generality. The rest of the process is identical to that shown above, with every $\log(1+x)$ term replaced by $\log(r+x)$. The final result of the integration yields:

$\int x^{n}\log(r+x)\,dx=\frac{1}{n+1}\left[\left(x^{n+1}-(-r)^{n+1}\right)\log(r+x)-\sum_{k=0}^{n}\frac{(-r)^{k}}{n-k+1}x^{n-k+1}\right]+C;\quad r\neq 0$  (8.19)
When r = 0, the integral is of the form $\int x^{n}\log(x)\,dx$, which can again be integrated by parts:

$u=\log(x)$  (8.20)
$du=\frac{dx}{x}$  (8.21)
$dv=x^{n}\,dx$  (8.22)
$v=\frac{x^{n+1}}{n+1}$  (8.23)

Using these in (8.1) yields:

$\int x^{n}\log(x)\,dx=\frac{x^{n+1}}{n+1}\log(x)-\frac{1}{n+1}\int\frac{x^{n+1}}{x}\,dx+C$  (8.24)
Now the integral $\int\frac{x^{n+1}}{x}\,dx$ must be found. This is simply:

$\int\frac{x^{n+1}}{x}\,dx=\int x^{n}\,dx=\frac{x^{n+1}}{n+1}+C$  (8.25)

Substituting (8.25) into (8.24) yields:

$\int x^{n}\log(x)\,dx=\frac{x^{n+1}}{n+1}\log(x)-\frac{x^{n+1}}{(n+1)^{2}}+C$  (8.26)
Thus, the overall solution for any real value of r is:

$\int x^{n}\log(r+x)\,dx=\begin{cases}\frac{1}{n+1}\left[\left(x^{n+1}-(-r)^{n+1}\right)\log(r+x)-\sum_{k=0}^{n}\frac{(-r)^{k}}{n-k+1}x^{n-k+1}\right]+C&\text{if }r\neq 0\\[4pt]\frac{x^{n+1}}{n+1}\log(x)-\frac{x^{n+1}}{(n+1)^{2}}+C&\text{if }r=0\end{cases}$  (8.27)
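A minimal SymPy sketch (ours; the helper name `rhs_819` is an assumption) spot-checking (8.19) by differentiation, which avoids antiderivatives differing by a constant:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def rhs_819(n, r):
    # Right-hand side of (8.19) (natural log), constant of integration omitted
    s = sum((-r)**k * x**(n - k + 1) / (n - k + 1) for k in range(n + 1))
    return ((x**(n + 1) - (-r)**(n + 1)) * sp.log(r + x) - s) / (n + 1)

for n, r in [(0, 1), (1, 1), (2, 3)]:
    # Differentiating the formula should recover the integrand exactly
    print(sp.simplify(sp.diff(rhs_819(n, r), x) - x**n * sp.log(r + x)))  # -> 0
```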
Solved and Typed By - Egm4313.s12.team1.armanious (talk ) 02:51, 25 March 2012 (UTC)
Reviewed By - Egm4313.s12.team1.silvestri (talk ) 18:29, 30 March 2012 (UTC)
Problem R5.9: Projection of the excitation log(1+x) on a polynomial basis
Consider the following L2-ODE-CC with log(1+x) as the excitation:

$y''-3y'+2y=r(x)$  (9.0)

$r(x)=\log(1+x)$  (9.1)

Also, consider the initial conditions:

$y\!\left(-\tfrac{3}{4}\right)=1,\quad y'\!\left(-\tfrac{3}{4}\right)=0$  (9.2)
1) Project the excitation r(x) on the polynomial basis

$\left\{b_{j}(x)=x^{j},\ j=0,1,\ldots,n\right\}$  (9.3)

i.e., find $d_{j}$ such that:

$r(x)\approx r_{n}(x)=\sum_{j=0}^{n}d_{j}x^{j}$  (9.4)

for x in [-.75, 3] and for n = 0, 1.
Plot $r(x)$ and $r_{n}(x)$ to show uniform approximation and convergence. Note that

$\langle x^{i},r\rangle=\int_{a}^{b}x^{i}\log(1+x)\,dx$

In a separate series of plots, compare the approximation of the function log(1+x) by 2 methods:
A. Projection on polynomial basis (1) p8-17
B. Taylor series expansion about $\hat{x}=0$
Observe and discuss the pros and cons of each method.
2) Find $y_{n}(x)$ such that:

$y''_{n}+ay'_{n}+by_{n}=r_{n}(x)$  (9.5)

with the same initial conditions (9.2). Plot $y_{n}(x)$ for n = 0, 1 for x in [-.75, 3].
In a series of separate plots, compare the results obtained with the projected excitation on the polynomial basis to those with the truncated Taylor series of the excitation. Plot also the numerical solution as a baseline for comparison.
Part 1
First we project the excitation r(x) = log(1+x) onto the polynomial basis. We know that $\langle b_{i},b_{j}\rangle\cdot d_{j}=\langle b_{i},r\rangle$. For n = 0, $\langle b_{0},b_{0}\rangle$ is the only entry of the $\gamma$ matrix:

$\langle b_{0},b_{0}\rangle=\int_{-3/4}^{3}x^{0}x^{0}\,dx=3+3/4=3.75$  (9.6)

$\langle b_{0},r\rangle$ for n = 0 is calculated below (using the natural logarithm):
$\langle b_{0},r\rangle=\int_{-3/4}^{3}x^{0}\log(1+x)\,dx=2.141751035$  (9.7)

This means that, as stated in the opening sentence:

$\langle b_{0},b_{0}\rangle\cdot d_{0}=\langle b_{0},r\rangle$
$3.75\,d_{0}=2.141751035$
$d_{0}=.571136093$  (9.8)
With that in mind, $r_{n}(x)$ is developed from equation (9.4) as shown below:

$r(x)\approx r_{0}(x)=\sum_{j=0}^{0}d_{j}x^{j}=.571136093$  (9.9)
Now projecting the excitation onto the polynomial basis with n = 1: in matrix form, $\gamma\cdot d=c$, where $c_{i}=\langle b_{i},r\rangle$. Beginning with the $\gamma$ matrix:

$\gamma=\begin{bmatrix}\langle b_{0},b_{0}\rangle&\langle b_{0},b_{1}\rangle\\\langle b_{1},b_{0}\rangle&\langle b_{1},b_{1}\rangle\end{bmatrix}$  (9.10)

This matrix becomes:

$\gamma=\begin{bmatrix}3.75&4.21875\\4.21875&9.140625\end{bmatrix}$  (9.11)
The matrix containing $\langle b_{i},r\rangle$ is shown below, defined as the matrix c:

$c=\begin{Bmatrix}\langle b_{0},r\rangle\\\langle b_{1},r\rangle\end{Bmatrix}=\begin{Bmatrix}\int_{-3/4}^{3}x^{0}\log(1+x)\,dx\\\int_{-3/4}^{3}x^{1}\log(1+x)\,dx\end{Bmatrix}=\begin{Bmatrix}2.141751035\\5.007550553\end{Bmatrix}$  (9.12)
We then find the $d_{j}$ matrix in the following way:

$\gamma^{-1}\cdot c=d=\begin{Bmatrix}-.0939750342\\.5912076831\end{Bmatrix}$  (9.13)

With that in mind, $r_{n}(x)$ is developed from equation (9.4) as shown below:

$r(x)\approx r_{1}(x)=\sum_{j=0}^{1}d_{j}x^{j}=-.0939750342+.5912076831x$  (9.14)
Compared to the actual excitation log(1+x) [in red], the projections with n = 0 [in blue] and n = 1 [in green] are graphed below:
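A minimal SymPy sketch (ours, not part of the original report) reproducing the n = 1 projection coefficients:

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.Rational(-3, 4), 3
r = sp.log(1 + x)

# Least-squares projection of r onto {1, x}: solve gamma * d = c
gamma = sp.Matrix(2, 2, lambda i, j: sp.integrate(x**i * x**j, (x, a, b)))
c = sp.Matrix([sp.integrate(x**i * r, (x, a, b)) for i in range(2)])
d = gamma.solve(c)
print([sp.N(v, 10) for v in d])  # ~ -0.0939750342 and 0.5912076831
```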
Part 2
First we form the characteristic equation of (9.0) in standard form:

$\lambda^{2}-3\lambda+2=0$  (9.15)

Factoring, we can find the values of $\lambda$:

$(\lambda-2)(\lambda-1)=0$  (9.16)

$\lambda=2,\quad\lambda=1$  (9.17)

Given two distinct, real roots, the homogeneous solution looks like this:

$y_{h}(x)=C_{1}e^{2x}+C_{2}e^{x}$  (9.18)
By the method of undetermined coefficients, for n = 0 the excitation $r_{0}(x)=.571136093$ is analyzed to yield a particular solution. For a polynomial excitation of degree 0, the form of the particular solution is:

$y_{p}(x)=A_{0}$  (9.19)

The first and second derivatives of the constant $A_{0}$ are 0, so plugging $y_{p}(x)$ into (9.0) determines $A_{0}$:

$.571136093=2A_{0}$
$A_{0}=.2855680465$  (9.20)
The general solution, after adding $y_{h}$ and $y_{p}$, then becomes:

$y_{g}(x)=C_{1}e^{2x}+C_{2}e^{x}+.2855680465$  (9.21)
We apply the initial conditions using the first derivative of the general solution:

$y'_{g}(x)=2C_{1}e^{2x}+C_{2}e^{x}$  (9.22)

Plugging in -3/4 for x, 1 for y, and 0 for y', we can solve for the constants $C_{1},C_{2}$:

$y_{g}(-3/4)=1=C_{1}e^{2(-3/4)}+C_{2}e^{-3/4}+.2855680465$  (9.23)

$y'_{g}(-3/4)=0=2C_{1}e^{2(-3/4)}+C_{2}e^{-3/4}$  (9.24)

Solving the equations gives $C_{1}=-3.201861878,\ C_{2}=3.024904915$.
The resulting complete solution, with consideration for the initial conditions, then becomes:

$y_{g}(x)=(-3.201861878)e^{2x}+(3.024904915)e^{x}+.2855680465$  (9.25)
By the method of undetermined coefficients, for n = 1 the excitation $r_{1}(x)=-.0939750342+.5912076831x$ is analyzed to yield a particular solution. For a polynomial excitation of degree 1, the form of the particular solution is:

$y_{p}(x)=A_{1}x+A_{0}$  (9.26)

The first derivative of $y_{p}$ is simply $A_{1}$ and the second derivative is 0. Plugging $y_{p}(x)$ into (9.0) and matching coefficients ($2A_{1}=.5912076831$ and $2A_{0}-3A_{1}=-.0939750342$), $A_{0}$ and $A_{1}$ are determined to be:

$A_{1}=.2956038416,\quad A_{0}=.3964182452$  (9.27)

The general solution, after adding $y_{h}$ and $y_{p}$, then becomes:

$y_{g}(x)=C_{1}e^{2x}+C_{2}e^{x}+.2956038416x+.3964182452$  (9.28)
We apply the initial conditions using the first derivative of the general solution:

$y'_{g}(x)=2C_{1}e^{2x}+C_{2}e^{x}+.2956038416$  (9.29)

Plugging in -3/4 for x, 1 for y, and 0 for y', we can solve for the constants $C_{1},C_{2}$:

$y_{g}(-3/4)=1=C_{1}e^{2(-3/4)}+C_{2}e^{-3/4}+.2956038416(-3/4)+.3964182452$  (9.30)

$y'_{g}(-3/4)=0=2C_{1}e^{2(-3/4)}+C_{2}e^{-3/4}+.2956038416$  (9.31)

Solving the equations gives $C_{1}\approx-5.023474,\ C_{2}\approx 4.120049$.
The resulting complete solution, with consideration for the initial conditions, then becomes:

$y_{g}(x)\approx(-5.023474)e^{2x}+(4.120049)e^{x}+.2956038416x+.3964182452$  (9.32)
Graphed below, the approximations are shown: in green the approximation with n = 1, in blue the approximation with n = 0, and in red the truncated Taylor series with n = 1, over the interval [-3/4, 3].
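A minimal SymPy sketch (ours) cross-checking the n = 1 constants via dsolve with the stated initial conditions:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Projected excitation for n = 1, from (9.14)
r1 = -0.0939750342 + 0.5912076831 * x
ode = sp.Eq(y(x).diff(x, 2) - 3 * y(x).diff(x) + 2 * y(x), r1)
ics = {y(sp.Rational(-3, 4)): 1,
       y(x).diff(x).subs(x, sp.Rational(-3, 4)): 0}
sol = sp.dsolve(ode, y(x), ics=ics)
print(sp.N(sol.rhs, 8))  # exponential coefficients ~ -5.023474 and 4.120049
```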
Solved and Typed By - Egm4313.s12.team1.silvestri (talk ) 21:26, 29 March 2012 (UTC)
Reviewed By - Egm4313.s12.team1.durrance (talk ) 19:02, 30 March 2012 (UTC)
Team Contribution Table

Problem Number | Lecture                  | Assigned To       | Solved By         | Typed By          | Proofread By
5.1            | R5.8 Lect. 7c pg. 33     | Emotion Silvestri | Emotion Silvestri | Emotion Silvestri | Jesse Durrance
5.2            | R5.3 Lect. 7c pgs. 36-37 | Jesse Durrance    | Jesse Durrance    | Jesse Durrance    | Chris Stewart
5.3            | Lect. 7c Pg. 38          | Chris Stewart     | Chris Stewart     | Chris Stewart     | Steven Rosenberg
5.4            | R5.4 Lect. 8a pg. 3      | Wyatt Ling        | Wyatt Ling        | Wyatt Ling        | George Armanious
5.5            | R5.5 Lect. 8b pg. 7      | Steven Rosenberg  | Steven Rosenberg  | Steven Rosenberg  | Emotion Silvestri
5.6            | Lect. 8 pgs. 6-7         | Eric Essenwein    | Eric Essenwein    | Eric Essenwein    | Chris Stewart
5.7            | R5.6 Lect. 8b pg. 11     | George Armanious  | George Armanious  | George Armanious  | Chris Stewart
5.8            | R5.8 Lect. 8b pg. 16     | George Armanious  | George Armanious  | George Armanious  | Emotion Silvestri
5.9            | R5.8 Lect. 8b pg. 18     | Emotion Silvestri | Emotion Silvestri | Emotion Silvestri | Jesse Durrance