# User:YangOu

Yang's user page

## Final Project Topic

I'd like to add a detailed example deriving the global error of Euler's method and showing its convergence. The Wikipedia page only states the formula for the method's global error; I will add the derivation of that formula.

Your topic is contained within that of User:Hh687711#Claims of Final Project Topic and they claimed it first, so you need to pick a new one. The topic you tried to claim was a single proof, so it was too small anyway. Mjmohio (talk) 18:26, 7 November 2012 (UTC)

## Homework 7

Rootfinding_for_nonlinear_equations#Convergence I made some changes to the definition of convergence, which was not accurate, and added the definitions of superlinear and sublinear convergence.

After that I clarified the solution of the given example. I applied the definition of convergence to approximate the root, given the rate of convergence, the order of convergence, the initial error, and the tolerance. When the error becomes less than the tolerance, we can stop the iteration and conclude that the value at that step is the approximation of the root.

We'll find the interpolating polynomial passing through the points $(1,-2)$ , $(3,3)$ , $(4,8)$ , using the Vandermonde matrix.

For our polynomial, we'll take $(1,-2)=(x_{0},y_{0})$ , $(3,3)=(x_{1},y_{1})$ , and $(4,8)=(x_{2},y_{2})$ .

Define our interpolating polynomial as:

$p(x)=a_{2}x^{2}+a_{1}x+a_{0}$ .

So, to find the coefficients of our polynomial, we solve the system $p(x_{i})=y_{i}$ , $i\in \{0,1,2\}$ .

In order to solve the system, we will use an augmented matrix based on the Vandermonde matrix, and solve for the coefficients using Gaussian elimination. Substituting in our $x$ and $y$ values, our augmented matrix is:

$\left({\begin{array}{cccc}1&1&1&-2\\9&3&1&3\\16&4&1&8\end{array}}\right)$

Then, using Gaussian elimination,

$\left({\begin{array}{cccc}1&1&1&-2\\0&-6&-8&21\\0&-12&-15&40\end{array}}\right)\rightarrow \left({\begin{array}{cccc}1&1&1&-2\\0&1&4/3&-7/2\\0&-12&-15&40\end{array}}\right)\rightarrow \left({\begin{array}{cccc}1&1&1&-2\\0&1&4/3&-7/2\\0&0&1&-2\end{array}}\right)\rightarrow \left({\begin{array}{cccc}1&0&-1/3&3/2\\0&1&4/3&-7/2\\0&0&1&-2\end{array}}\right)\rightarrow \left({\begin{array}{cccc}1&0&0&5/6\\0&1&0&-5/6\\0&0&1&-2\end{array}}\right)$

Our coefficients are $a_{2}=5/6$ , $a_{1}=-5/6$ , and $a_{0}=-2$ . So, the interpolating polynomial is

$p(x)={\frac {5}{6}}x^{2}-{\frac {5}{6}}x-2$ .
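The same system can be solved numerically. A short sketch (in Python/NumPy, not part of the original worked example) that builds the Vandermonde matrix for the three points above and checks the interpolation conditions:

```python
import numpy as np

# Vandermonde system for the points (1,-2), (3,3), (4,8):
# each row is [x^2, x, 1], matching the augmented matrix above.
xs = np.array([1.0, 3.0, 4.0])
ys = np.array([-2.0, 3.0, 8.0])
V = np.vander(xs, 3)              # columns x^2, x, 1 (highest power first)
a2, a1, a0 = np.linalg.solve(V, ys)
print(a2, a1, a0)                 # 5/6, -5/6, -2

# Check the interpolation conditions p(x_i) = y_i.
p = lambda x: a2 * x**2 + a1 * x + a0
assert np.allclose(p(xs), ys)
```

Solving with `np.linalg.solve` reproduces the coefficients found by Gaussian elimination.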

#### Quizzes

1. Is there any possibility that the interpolating polynomial isn't unique? (Yes / No)

2. The bisection method is linearly convergent. (True / False)

3. Find the root of $x^{2}-3=0$ using Newton's method with $x=2$ as the initial point. (1.316624 / 3.667891 / Method fails)
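A minimal Newton iteration for quiz question 3 can be sketched as follows (in Python; the function, derivative, and tolerance are standard choices, not taken from the quiz):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/df(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# f(x) = x^2 - 3 with starting point x0 = 2, as in question 3.
root = newton(lambda x: x**2 - 3, lambda x: 2 * x, 2.0)
print(root)   # converges to sqrt(3) ~ 1.7320508
```

Each step halves roughly doubles the number of correct digits, illustrating the quadratic convergence of Newton's method near a simple root.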

## Project Report for User:YangOu

For Introduction to Numerical Analysis, Fall 2012.

### Introduction

My final project is about the power method and the shifted inverse power method. Finding the eigenpairs of a given matrix with these methods is an important topic. It is difficult to understand using only Wikipedia, which does not cover the exceptional cases where the power method may not converge to the numerically largest eigenvalue, nor how convergence depends on the initial guess. Furthermore, the Wikipedia page does not present the shifted inverse power method, which can be used to find a precise solution by iteration.

To facilitate learning of this topic, I am going to add the following to Wikiversity:

- examples illustrating some exceptional cases when finding the eigenpairs of a given matrix using the power method;
- exercises to develop skill at applying the shifted inverse power method to find the exact eigenpairs of a given matrix from a good starting approximation of an eigenvalue;
- definitions and theorems which are not covered on Wikipedia but are necessary.

### Contribution

I created an example showing an application of the power method. If we approximate the eigenvalues of the given matrix by hand using the w:power method, as someone did in the exercise on the Wikiversity page, the power method does not always converge to the dominant eigenvalue and eigenvector. Using Matlab and the w:Rayleigh quotient, we find that the power method first appears to converge to the second largest eigenvalue and only eventually converges to the dominant one. I applied the power method to the matrix given in the example both by hand and in Matlab. After two iterations by hand, the method seemed to converge to the second largest eigenvalue. I then wrote code for the power iteration and plotted the convergence of the sequence $\left(c_{k}\right)$ , concluding that the power iteration eventually converges to the dominant eigenvalue.
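The basic iteration can be sketched as below. This is a Python/NumPy illustration with a hypothetical diagonal matrix (eigenvalues 1, 2, 3), not the matrix from my Wikiversity example:

```python
import numpy as np

def power_method(A, x0, iters=100):
    """Power iteration; returns the final vector and the Rayleigh-quotient
    estimates c_k of the dominant eigenvalue."""
    x = x0 / np.linalg.norm(x0)
    estimates = []
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)          # normalize each iterate
        estimates.append(x @ A @ x)        # Rayleigh quotient (x has unit norm)
    return x, estimates

# Hypothetical matrix with eigenvalues 1, 2, 3 and a generic starting guess.
A = np.diag([1.0, 2.0, 3.0])
x, c = power_method(A, np.array([1.0, 1.0, 1.0]))
print(c[-1])   # the estimates c_k converge to the dominant eigenvalue 3
```

Plotting the sequence $c_{k}$ over the iterations, as I did in Matlab, shows how quickly the estimates settle on the dominant eigenvalue.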

I chose this particular example because the given matrix has three different eigenvalues and the starting guess happens to be the eigenvector for the second largest eigenvalue, which causes this behavior. Changing the starting guess fixes the problem.

I also presented another example demonstrating the behavior of the power method for a matrix whose dominant eigenvalues are a complex conjugate pair. The method does not work in this situation: even if the iteration converges to a number, that number has nothing to do with the complex conjugate dominant eigenvalues. This can be seen from the graph I plotted using Matlab. Switching the starting guess to a vector with complex entries removes the problem. In our example the matrix has complex dominant eigenvalues while the starting guess has all real entries, which is why I chose this kind of example.
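The failure mode can be reproduced with a small sketch. The 2x2 matrix below is hypothetical (its eigenvalues are the conjugate pair $1\pm 2i$), not the matrix from my project:

```python
import numpy as np

# Hypothetical matrix whose dominant eigenvalues are the complex
# conjugate pair 1 +/- 2i.
A = np.array([[3.0, -2.0],
              [4.0, -1.0]])
print(np.linalg.eigvals(A))          # the pair 1 +/- 2i

x = np.array([1.0, 0.0])             # real starting guess
estimates = []
for _ in range(50):
    y = A @ x
    x = y / np.linalg.norm(y)
    estimates.append(x @ A @ x)      # Rayleigh quotient with a real vector

# The estimates keep oscillating instead of converging: a real starting
# vector cannot single out either member of the conjugate pair.
print(max(estimates[-10:]) - min(estimates[-10:]))
```

With a real starting vector, the iterates keep rotating in the plane, so the Rayleigh-quotient estimates never settle near either eigenvalue or their common modulus.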

I also edited the shifted inverse power method section to show how to find the middle eigenvalue of a given matrix. First, I introduced two relevant theorems before explaining the method. Second, I created a long, detailed example showing how to use the method to find one of the eigenvalues. Finally, I gave the readers two exercises: one is to find the middle eigenvalue of the same matrix as in the example; the other asks the readers to write Matlab code to approximate the middle eigenvalue of the matrix given in the inverse power method exercise. I chose this exercise because it connects to our existing exercises, and I think it should be helpful to learners.
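The method itself can be sketched briefly. This is a Python/NumPy illustration with a hypothetical matrix (eigenvalues 1, 2, 4) and a hypothetical shift near the middle eigenvalue; it is not the matrix or shift from the Wikiversity exercise:

```python
import numpy as np

def shifted_inverse_power(A, shift, x0, iters=50):
    """Shifted inverse iteration: converges to the eigenvalue of A
    closest to `shift` (assuming that eigenvalue is unique)."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    M = A - shift * np.eye(n)
    for _ in range(iters):
        y = np.linalg.solve(M, x)    # apply (A - shift*I)^(-1) to x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x              # Rayleigh quotient and eigenvector

# Hypothetical matrix with eigenvalues 1, 2, 4; a shift near 2 picks out
# the middle eigenvalue, as in the exercise described above.
A = np.diag([1.0, 2.0, 4.0])
lam, v = shifted_inverse_power(A, shift=1.8, x0=np.array([1.0, 1.0, 1.0]))
print(lam)   # converges to the middle eigenvalue 2.0
```

Because the iteration amplifies the eigenvector whose eigenvalue is closest to the shift, choosing a shift near the middle of the spectrum isolates the middle eigenvalue.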

### Future Work

I decided that although adding a way to find the shift for the shifted inverse power method would be good, it was too much for this project, so I just made an outline that others can fill in.

It may be beneficial to introduce an approach for finding an appropriate shift when using the shifted inverse power method. We can choose the shift by trial and error. We can usually obtain the actual eigenvalues of the matrix, but sometimes we cannot, so choosing a reasonable shift, and a way to find it, seems important. My suggestion is to choose the shift by trial and compare the errors. Some examples explaining this topic would be helpful.

It may also be beneficial to add an example with a starting guess containing complex entries, showing that the power method can still be applied when the matrix has complex dominant eigenvalues. This is related to my contribution on the power method.

### Conclusions

In this project I tried to add material that was not presented on the Wikipedia and Wikiversity pages. I provided examples, Matlab code, graphs, and exercises for the power method and the shifted inverse power method.

I think this is a valuable contribution because the examples and graphs I created for the power method will give readers a deeper understanding of the method; for example, they will learn when the power method is applicable. The work I did on the shifted inverse power method should also help readers learn the method by reading the detailed example and doing the exercises. The second exercise I added connects to our existing exercise on the Wikiversity page. I believe this will be valuable for learners.