# Numerical Analysis/Topics/Power iteration examples

The power method is an eigenvalue algorithm that can be used to find the eigenvalue of largest absolute value of a matrix, together with a corresponding eigenvector. In some exceptional cases, however, it may fail to converge numerically to the dominant eigenvalue and dominant eigenvector. Before looking at such examples, we recall the definitions of dominant eigenvalue and dominant eigenvector.

### Definitions

Let ${\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}}$ be the eigenvalues of an n×n matrix A. ${\displaystyle \lambda _{1}}$ is called the dominant eigenvalue of A if ${\displaystyle |\lambda _{1}|>|\lambda _{i}|}$ for ${\displaystyle i=2,3,\ldots ,n}$. The eigenvectors corresponding to ${\displaystyle \lambda _{1}}$ are called dominant eigenvectors of A.

Next we show some examples in which the power method does not converge to the dominant eigenpair of the given matrix.
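The scaled iteration used in the examples below can be sketched in pure Python. This is a minimal illustration, not library code; the names `mat_vec` and `power_method` are ours. At each step we compute ${\displaystyle Y=AX}$, take the scale factor ${\displaystyle c}$ to be the entry of ${\displaystyle Y}$ of largest magnitude (keeping its sign), and rescale.

```python
def mat_vec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def power_method(A, x0, iterations=50):
    """Scaled power iteration.

    At each step set Y = A X, take c as the entry of Y with the
    largest absolute value (keeping its sign), and rescale X = Y / c.
    Returns the final scale factor c and scaled vector X.
    """
    x = x0[:]
    c = 0.0
    for _ in range(iterations):
        y = mat_vec(A, x)
        c = max(y, key=abs)      # signed entry of largest magnitude
        x = [v / c for v in y]
    return c, x
```

When A has a unique eigenvalue of largest magnitude and the starting vector has a nonzero component along its eigenvector, the scale factors ${\displaystyle c_{k}}$ converge to that eigenvalue.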

### Example 1

Use the power method to find an eigenvalue and a corresponding eigenvector of the matrix A.

${\displaystyle A=\left[{\begin{array}{c c c}1&2&1\\-4&7&1\\-1&-2&-1\end{array}}\right]}$.

Solving the characteristic polynomial, we obtain the eigenvalues of the matrix A: ${\displaystyle \lambda _{1}=0}$, ${\displaystyle \lambda _{2}=2}$, ${\displaystyle \lambda _{3}=5}$. Here the dominant eigenvalue is 5. We choose the initial guess vector ${\displaystyle X_{0}=\left[{\begin{array}{c}1\\1\\1\\\end{array}}\right]}$ and apply the power method.

${\displaystyle Y_{1}}$ = ${\displaystyle A}$${\displaystyle X_{0}}$ = ${\displaystyle \left[{\begin{array}{c}4\\4\\-4\\\end{array}}\right]}$, ${\displaystyle c_{1}=4}$, and it implies ${\displaystyle X_{1}}$ = ${\displaystyle \left[{\begin{array}{c}1\\1\\-1\\\end{array}}\right]}$. In this way,

${\displaystyle Y_{2}}$ = ${\displaystyle A}$${\displaystyle X_{1}}$ = ${\displaystyle \left[{\begin{array}{c}2\\2\\-2\\\end{array}}\right]}$, so ${\displaystyle c_{2}=2}$, and it implies ${\displaystyle X_{2}}$ = ${\displaystyle \left[{\begin{array}{c}1\\1\\-1\\\end{array}}\right]}$. As we can see, the sequence ${\displaystyle \left(c_{k}\right)}$ converges to 2, which is not the dominant eigenvalue: ${\displaystyle X_{1}}$ is already an exact eigenvector for the eigenvalue 2. The reason is that the initial guess ${\displaystyle X_{0}}$ has no component along the dominant eigenvector, so the iteration can never pick up the dominant eigenvalue 5.
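The stall can be reproduced numerically. Below is a minimal, self-contained Python sketch of the same scaled iteration (the helper `iterate` is ours, purely for illustration): starting from ${\displaystyle X_{0}=(1,1,1)}$ the scale factors lock onto 2, while a starting vector with a nonzero component along the dominant eigenvector does reach 5.

```python
A = [[1.0, 2.0, 1.0], [-4.0, 7.0, 1.0], [-1.0, -2.0, -1.0]]

def iterate(x, steps):
    """Scaled power iteration: rescale by the entry of largest magnitude."""
    c = 0.0
    for _ in range(steps):
        y = [sum(a * v for a, v in zip(row, x)) for row in A]
        c = max(y, key=abs)       # signed scale factor c_k
        x = [v / c for v in y]
    return c, x

c, x = iterate([1.0, 1.0, 1.0], 30)    # c stays at 2, x at (1, 1, -1)
c5, x5 = iterate([1.0, 0.0, 0.0], 60)  # this start does reach c -> 5
```

The second start, (1, 0, 0), is just one convenient choice with a nonzero component along the dominant eigenvector; almost any randomly chosen start would behave the same way.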

### Example 2

Consider the matrix ${\displaystyle A=\left[{\begin{array}{c c c}3&2&-2\\-1&1&4\\3&2&-5\end{array}}\right]}$.

Apply the power method to find an eigenvalue of the matrix, with the starting guess

${\displaystyle X_{0}}$ = ${\displaystyle \left[{\begin{array}{c}1\\1\\1\\\end{array}}\right]}$.

${\displaystyle Y_{1}}$ = ${\displaystyle A}$${\displaystyle X_{0}}$ = ${\displaystyle \left[{\begin{array}{c}3\\4\\0\\\end{array}}\right]}$,

thus ${\displaystyle c_{1}=4}$, and it implies

${\displaystyle X_{1}}$ = ${\displaystyle \left[{\begin{array}{c}0.75\\1\\0\\\end{array}}\right]}$.

We continue with a few more iterations:

${\displaystyle A}$${\displaystyle X_{1}}$ = ${\displaystyle \left[{\begin{array}{c}4.25\\0.25\\4.25\\\end{array}}\right]}$

so ${\displaystyle c_{2}=4.25}$, and it implies ${\displaystyle X_{2}}$ = ${\displaystyle \left[{\begin{array}{c}1\\0.0588\\1\\\end{array}}\right]}$.

${\displaystyle A}$${\displaystyle X_{2}}$ = ${\displaystyle \left[{\begin{array}{c}1.1176\\3.0588\\-1.8824\\\end{array}}\right]}$

so ${\displaystyle c_{3}=3.0588}$, and it implies

${\displaystyle X_{3}}$ = ${\displaystyle \left[{\begin{array}{c}0.36538\\1\\-0.61538\\\end{array}}\right]}$.

We can see that the sequences ${\displaystyle \left(c_{k}\right)}$ and ${\displaystyle \left(X_{k}\right)}$ do not appear to settle down. Here the eigenvalues of A are 1, 3 and -5, so the dominant eigenvalue -5 is negative: the scaled iterates fluctuate in sign and magnitude, and many further iterations are needed before they slowly approach the dominant eigenpair. Oscillating behaviour of this kind is a warning sign; it also occurs when the two dominant eigenvalues are complex conjugates of each other, in which case the power method does not converge at all.
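To see what happens over a longer run, here is a minimal, self-contained Python sketch of the same scaled iteration (illustrative code, not from any library). The scale factors begin 4, 4.25, 3.0588, ... and wander in size and sign, but eventually settle near the dominant eigenvalue -5; convergence is slow because the ratio of the two largest eigenvalue magnitudes, 3/5 = 0.6, is close to 1.

```python
A = [[3.0, 2.0, -2.0], [-1.0, 1.0, 4.0], [3.0, 2.0, -5.0]]

x = [1.0, 1.0, 1.0]
cs = []                               # the sequence of scale factors c_k
for _ in range(100):
    y = [sum(a * v for a, v in zip(row, x)) for row in A]
    c = max(y, key=abs)               # signed entry of largest magnitude
    x = [v / c for v in y]
    cs.append(c)

# cs starts 4, 4.25, 3.0588..., fluctuates for a while, and only
# slowly approaches -5 at the rate (3/5)^k.
```

Printing `cs` makes the early oscillation visible, which is easy to mistake for divergence after only three or four iterations.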