# Numerical Analysis/Power iteration examples

The w:power method is an eigenvalue algorithm that can be used to find the w:eigenvalue with the largest absolute value, but in some exceptional cases it may fail to converge numerically to the dominant eigenvalue and the dominant eigenvector. Before looking at such exceptional examples, we need the definitions of the dominant eigenvalue and eigenvector.

### Definitions

Let ${\displaystyle \lambda _{1}}$, ${\displaystyle \lambda _{2}}$, ..., ${\displaystyle \lambda _{n}}$ be the eigenvalues of an ${\displaystyle n\times n}$ matrix ${\displaystyle A}$. Then ${\displaystyle \lambda _{1}}$ is called the dominant eigenvalue of ${\displaystyle A}$ if

${\displaystyle |\lambda _{1}|>|\lambda _{i}|\quad {\text{for}}\quad i=2,3,\ldots ,n.}$

The eigenvectors corresponding to ${\displaystyle \lambda _{1}}$ are called the dominant eigenvectors of ${\displaystyle A}$. Note that dominance is measured by absolute value: a matrix with eigenvalues 3, −5, and 1 has dominant eigenvalue −5. Next we show some examples in which the power method does not converge directly to the dominant eigenpair of a given matrix.

### Example: Unlucky starting vector

Use the power method to find an eigenvalue and its corresponding eigenvector of the matrix

${\displaystyle A=\left[{\begin{array}{c c c}1&2&1\\-4&7&1\\-1&-2&-1\end{array}}\right]}$

starting with the vector

${\displaystyle X_{0}=\left[{\begin{array}{c}1\\1\\1\\\end{array}}\right]}$.

We apply the power method and get

${\displaystyle Y_{1}=AX_{0}=\left[{\begin{array}{c}4\\4\\-4\\\end{array}}\right]}$,

with ${\displaystyle c_{1}=4}$, and it implies

${\displaystyle X_{1}=\left[{\begin{array}{c}1\\1\\-1\\\end{array}}\right]}$.

In this way,

${\displaystyle Y_{2}=AX_{1}=\left[{\begin{array}{c}2\\2\\-2\\\end{array}}\right]}$,

so ${\displaystyle c_{2}=2}$, and it implies

${\displaystyle X_{2}=\left[{\begin{array}{c}1\\1\\-1\\\end{array}}\right]}$.

As we can see, the sequence ${\displaystyle \left(c_{k}\right)}$ converges to 2.
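The hand computation can be checked with a short loop. The following pure-Python sketch (an illustrative reimplementation, not code from the text) scales each iterate by its largest-magnitude entry, which is exactly the ${\displaystyle c_{k}}$ used above.

```python
# Power method for the example matrix, scaling by the largest-magnitude
# entry of each iterate (the c_k of the text). Pure Python, no libraries.
A = [[1, 2, 1],
     [-4, 7, 1],
     [-1, -2, -1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

x = [1.0, 1.0, 1.0]            # starting vector X_0
cs = []
for _ in range(10):
    y = matvec(A, x)           # Y_k = A X_{k-1}
    c = max(y, key=abs)        # scaling factor c_k
    x = [yi / c for yi in y]   # X_k = Y_k / c_k
    cs.append(c)

print(cs[:4])  # [4.0, 2.0, 2.0, 2.0] -- stuck at 2 in exact arithmetic
```

Because ${\displaystyle X_{1}}$ is exactly an eigenvector for ${\displaystyle \lambda =2}$, the loop reproduces the stalled sequence from the text rather than the dominant eigenvalue 5.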

Using the w:characteristic polynomial, the eigenvalues of matrix ${\displaystyle A}$ are ${\displaystyle \lambda _{1}=0}$, ${\displaystyle \lambda _{2}=2}$, and ${\displaystyle \lambda _{3}=5}$, so the dominant eigenvalue is 5, yet our hand calculation converged to the second-largest eigenvalue, 2.

The problem is that our initial guess led us exactly to the eigenvector corresponding to ${\displaystyle \lambda _{2}=2}$: in exact arithmetic the iteration never acquires a component along the dominant eigenvector. If we run the iteration long enough on a computer, however, roundoff error eventually introduces a component of the eigenvector for ${\displaystyle \lambda _{3}=5}$, and that component comes to dominate. This can be demonstrated in w:MATLAB; the code below plots the convergence of the sequence ${\displaystyle \left(c_{k}\right)}$.

```matlab
A = [1 2 1; -4 7 1; -1 -2 -1];
v = [1; 1; 1];              % starting vector X_0
eest = zeros(100,1);
for i = 1:100
    x = A*v;
    e = (x'*v)/(v'*v);      % Rayleigh quotient estimate of the eigenvalue
    v = x/norm(x);
    eest(i) = e;
end
plot(eest);
```


We can see the convergence of the sequence ${\displaystyle \left(c_{k}\right)}$ in the following figure:

File:Powermethod.jpg
Graph of the convergence of the sequence ${\displaystyle \left(c_{k}\right)}$ by w:Rayleigh quotient

We can draw the conclusion that if the starting vector has no component (or only a negligible one) along the dominant eigenvector, the eigenvalue estimates first converge to the second-largest eigenvalue and only later, once roundoff has reintroduced a dominant component, to the dominant eigenvalue. Changing the starting vector avoids the problem.
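To watch the estimates move from 2 to the dominant eigenvalue 5 without waiting for genuine roundoff, we can mimic the roundoff by a tiny explicit perturbation of the starting vector. The pure-Python sketch below is an illustrative stand-in for the MATLAB loop above; the size (1e-10) and placement of the perturbation are our choices, not the text's.

```python
import math

# Power iteration with Rayleigh quotient estimates, as in the MATLAB loop.
# The perturbed first entry stands in for the roundoff error discussed above.
A = [[1, 2, 1],
     [-4, 7, 1],
     [-1, -2, -1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

v = [1.0 + 1e-10, 1.0, 1.0]     # perturbed starting vector (our assumption)
estimates = []
for _ in range(100):
    x = matvec(A, v)
    e = dot(x, v) / dot(v, v)   # Rayleigh quotient estimate
    n = math.sqrt(dot(x, x))
    v = [xi / n for xi in x]    # normalize, as v = x/norm(x) in MATLAB
    estimates.append(e)

print(round(estimates[5], 6), round(estimates[-1], 6))
```

The early estimates sit near 2; after a few dozen iterations the component along the dominant eigenvector takes over and the estimates settle at 5.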

### Example: Oscillating iterates

Consider the matrix

${\displaystyle A=\left[{\begin{array}{c c c}3&2&-2\\-1&1&4\\3&2&-5\end{array}}\right]}$.

Apply the power method to find an eigenvalue of this matrix, starting with

${\displaystyle X_{0}=\left[{\begin{array}{c}1\\1\\1\\\end{array}}\right]}$.

We compute

${\displaystyle Y_{1}=AX_{0}=\left[{\begin{array}{c}3\\4\\0\\\end{array}}\right]}$,

thus ${\displaystyle c_{1}=4}$, and it implies

${\displaystyle X_{1}=\left[{\begin{array}{c}0.75\\1\\0\\\end{array}}\right]}$.

We continue doing some iterations:

${\displaystyle AX_{1}=\left[{\begin{array}{c}4.25\\0.25\\4.25\\\end{array}}\right]}$

so ${\displaystyle c_{2}=4.25}$, and it implies

${\displaystyle X_{2}=\left[{\begin{array}{c}1\\0.0588\\1\\\end{array}}\right]}$.
${\displaystyle AX_{2}=\left[{\begin{array}{c}1.1176\\3.0588\\-1.8824\\\end{array}}\right]}$

so ${\displaystyle c_{3}=3.0588}$, and it implies

${\displaystyle X_{3}=\left[{\begin{array}{c}0.36538\\1\\-0.61538\\\end{array}}\right]}$.

At first sight the sequences ${\displaystyle \left(c_{k}\right)}$ and ${\displaystyle \left(X_{k}\right)}$ appear to oscillate. Using the w:characteristic polynomial, however, the eigenvalues of this matrix are ${\displaystyle \lambda _{1}=-5}$, ${\displaystyle \lambda _{2}=3}$, and ${\displaystyle \lambda _{3}=1}$, so the dominant eigenvalue is ${\displaystyle -5}$: the oscillation is a transient caused by the negative sign of the dominant eigenvalue, and the iteration does eventually converge to the dominant eigenpair. We can see the convergence of the sequence ${\displaystyle \left(c_{k}\right)}$ in the figure below:

File:Conjugate.jpg
Graph of the sequence ${\displaystyle \left(c_{k}\right)}$ by w:Rayleigh quotient

As the figure shows, the sequence converges to ${\displaystyle -5}$, the dominant eigenvalue. A genuine failure of this kind occurs when the two largest eigenvalues of a real matrix form a w:complex conjugate pair: they then have equal absolute value, the iterates of the power method with a real starting vector keep rotating instead of settling, and neither ${\displaystyle \left(c_{k}\right)}$ nor ${\displaystyle \left(X_{k}\right)}$ converges to a dominant eigenpair. In that situation the basic power method must be modified, for example by using a complex shift or by extracting the conjugate pair from successive iterates.
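The transient oscillation and the eventual limit can both be checked with the same kind of loop. This pure-Python sketch (an illustrative reimplementation, not code from the text) uses the largest-magnitude entry of each iterate as the scaling factor ${\displaystyle c_{k}}$, as in the hand computation above.

```python
# Power iteration for the second example matrix, scaling by the
# largest-magnitude entry c_k, as in the hand computation.
A = [[3, 2, -2],
     [-1, 1, 4],
     [3, 2, -5]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

x = [1.0, 1.0, 1.0]            # starting vector X_0
cs = []
for _ in range(200):
    y = matvec(A, x)           # Y_k = A X_{k-1}
    c = max(y, key=abs)        # scaling factor c_k
    x = [yi / c for yi in y]   # X_k = Y_k / c_k
    cs.append(c)

print([round(c, 4) for c in cs[:3]])   # [4.0, 4.25, 3.0588], as by hand
print(round(cs[-1], 4))                # settles at -5.0
```

After the transient, the scaling factors settle at −5, matching the limit seen in the figure.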

### Reference

Deri Prasad, *An Introduction to Numerical Analysis*, Third Edition, pp. 7.2–7.18.