Inverse iteration is an iterative method for approximating the eigenvectors of a matrix $A\in\mathbb R^{m\times m}$. It is based on the observation that if $v$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda$, then $A-\mu I$ has the same eigenvector $v$, with corresponding eigenvalue $\lambda-\mu$. Consequently, $(A-\mu I)^{-1}$ has eigenvector $v$ with eigenvalue $(\lambda-\mu)^{-1}$, which has very large magnitude when $\mu$ is close to $\lambda$. Recall that the convergence of power iteration is determined by the ratio between the eigenvalue of largest magnitude and the eigenvalue of next-largest magnitude. Therefore, by choosing $\mu$ to be an estimate close to $\lambda$ and applying power iteration to $(A-\mu I)^{-1}$, we can artificially increase this ratio between eigenvalues. The algorithm is as follows:
1. Let $v^{(0)}$ be an arbitrary unit vector
2. For $k=1,2,\cdots$
    1. Solve $(A-\mu I)w = v^{(k-1)}$ for $w$
2. Let $v^{(k)}=w/\lVert w\rVert$
3. Let $\lambda^{(k)}=r(v^{(k)})$ where $r$ is the Rayleigh quotient
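The steps above can be sketched in NumPy as follows; the function name, the starting vector, and the fixed iteration count are illustrative assumptions (in practice one would factor $A-\mu I$ once and check a convergence criterion):

```python
import numpy as np

def inverse_iteration(A, mu, iters=50):
    """Inverse iteration with a fixed shift mu.

    Returns an eigenvector estimate v and the Rayleigh-quotient
    eigenvalue estimate lam (assumes A is real symmetric)."""
    m = A.shape[0]
    v = np.ones(m) / np.sqrt(m)      # arbitrary unit starting vector
    B = A - mu * np.eye(m)           # shifted matrix
    for _ in range(iters):
        w = np.linalg.solve(B, v)    # solve (A - mu I) w = v
        v = w / np.linalg.norm(w)    # normalize to get v^(k)
    lam = v @ A @ v                  # Rayleigh quotient (v is a unit vector)
    return v, lam

# Example: a diagonal matrix with known eigenvalues 1, 3, 7;
# the shift mu = 2.9 targets the eigenvalue 3
A = np.diag([1.0, 3.0, 7.0])
v, lam = inverse_iteration(A, mu=2.9)
# lam converges to 3.0, and v to the corresponding eigenvector e_2
```

Note that $A-\mu I$ is nearly singular when $\mu$ is close to $\lambda$; perhaps surprisingly, this is not a problem in practice, since the solve's error is concentrated in the direction of $v$ and is removed by the normalization.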
If $\lambda$ is the closest eigenvalue to the estimate $\mu$ and $\lambda'$ is the second-closest, then the two largest-magnitude eigenvalues of $(A-\mu I)^{-1}$ are $(\lambda-\mu)^{-1}$ and $(\lambda'-\mu)^{-1}$, so applying the error bound for power iteration, we have that
$$\lVert v^{(k)}-(\pm v)\rVert = O\left(\left|\frac{\mu-\lambda}{\mu-\lambda'}\right|^{k}\right),$$
where $v$ is the eigenvector corresponding to $\lambda$. This makes inverse iteration very useful for computing eigenvector estimates when the eigenvalues (or close approximations thereof) are already known.
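The shift-invert observation from the opening paragraph can be checked numerically; the matrix and shift below are illustrative assumptions:

```python
import numpy as np

# A has eigenvalues 2 and 5; choose a shift mu close to 5
A = np.diag([2.0, 5.0])
mu = 4.9

# Eigenvalues of (A - mu I)^{-1} should be 1/(2 - 4.9) ~ -0.345
# and 1/(5 - 4.9) = 10: the eigenvalue nearest the shift now
# dominates by a factor of ~29, so power iteration on this
# matrix converges quickly to its eigenvector
evals = np.linalg.eigvals(np.linalg.inv(A - mu * np.eye(2)))
```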