Quantum Simulation/Exact diagonalization
The many-body problem
A common theme in all branches of quantum physics is to identify the eigenstates $|\psi_n\rangle$ of a Hamiltonian $H$ and the respective eigenenergies $E_n$. Once equipped with the eigenstates, we can relatively easily calculate many interesting properties such as the time evolution of the system or its thermal behavior, the latter according to the relation
- $\rho = \frac{1}{Z} \sum_n e^{-\beta E_n}\, |\psi_n\rangle\langle\psi_n|$,
where $Z = \sum_n e^{-\beta E_n}$ is the partition function, which gives us the density operator $\rho$ at the inverse temperature $\beta = 1/k_B T$. A special case occurs at (or close to) zero temperature, as then only the ground state provides the dominant contribution. Nevertheless, the system can still undergo phase transitions if the ground state energy itself exhibits non-analytic behavior. Consequently, finding the ground state is often a crucial step towards establishing a thorough understanding of a quantum system.
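To make this relation concrete, the following is a minimal sketch (assuming NumPy, a Hamiltonian given as a dense Hermitian array, and an illustrative helper name `thermal_density_matrix` that is not part of these notes) of how the thermal density operator can be assembled from the eigen-decomposition:

```python
import numpy as np

def thermal_density_matrix(H, beta):
    """Build rho = exp(-beta*H)/Z from the eigenstates and eigenenergies of H."""
    energies, states = np.linalg.eigh(H)          # E_n and |psi_n> (columns of `states`)
    weights = np.exp(-beta * (energies - energies.min()))  # shift for numerical stability
    weights /= weights.sum()                      # divide by the partition function Z
    # rho = sum_n w_n |psi_n><psi_n|
    return (states * weights) @ states.conj().T

# Example: thermal expectation value <sigma_x> for a single spin in a transverse field
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = thermal_density_matrix(-sigma_x, beta=2.0)
print(np.trace(rho @ sigma_x))                    # approaches 1 as beta -> infinity
```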
In the following, we will mainly focus on ground state properties of many-body systems containing only a few local degrees of freedom. A prominent class of such many-body models are spin models, which consist of two-level systems localized at the sites of some lattice. As a specific example, let us turn to the one-dimensional transverse field Ising model, whose Hamiltonian is given by
- $H = -\sum_i \sigma_i^z \sigma_{i+1}^z - g \sum_i \sigma_i^x$.
Here, we have expressed the spins with the help of Pauli matrices, while $g$ can be interpreted as the strength of a field transverse to the quantization axis of the spins, and all energies are measured in units of the interaction strength between two neighboring spins. While this model is actually exactly solvable in the limit $N \to \infty$ [1], we will use it to illustrate various approaches to many-body problems.
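As a concrete starting point for the numerical methods discussed below, here is a sketch of how this Hamiltonian can be built as a sparse matrix with SciPy; the helper names `site_operator` and `tfim_hamiltonian` and the choice of open boundary conditions are our own and not part of the original notes:

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron

sx = csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
sz = csr_matrix(np.array([[1.0, 0.0], [0.0, -1.0]]))

def site_operator(op, i, N):
    """Embed the single-site operator `op` acting on site i into the 2^N-dimensional space."""
    out = identity(1, format='csr')
    for j in range(N):
        out = kron(out, op if j == i else identity(2, format='csr'), format='csr')
    return out

def tfim_hamiltonian(N, g):
    """H = -sum_i sz_i sz_{i+1} - g sum_i sx_i with open boundary conditions."""
    H = csr_matrix((2**N, 2**N))
    for i in range(N - 1):
        H = H - site_operator(sz, i, N) @ site_operator(sz, i + 1, N)
    for i in range(N):
        H = H - g * site_operator(sx, i, N)
    return H
```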
Without going into specific details, let us consider the two opposite limits of the transverse Ising model. For $g \gg 1$, we can ignore the interaction, and the problem reduces to that of a single spin in a magnetic field. Consequently, the ground state is given by
- $|\psi_0\rangle = \prod_i |\rightarrow\rangle_i$, with $|\rightarrow\rangle = \left(|\uparrow\rangle + |\downarrow\rangle\right)/\sqrt{2}$.
In the limit of zero external field, $g = 0$, we just have to minimize the interaction energy, which leads to two distinct fully polarized ground states, $|\uparrow\uparrow\cdots\uparrow\rangle$ and $|\downarrow\downarrow\cdots\downarrow\rangle$. Note that the Hamiltonian does not distinguish between up and down spins (a so-called $\mathbb{Z}_2$ symmetry), but the two possible ground states do. This symmetry breaking is a manifestation of the system being in two different quantum phases for $g < 1$ (ferromagnet) and $g > 1$ (paramagnet) [2]. From the exact solution, it is known that the quantum phase transition occurs at a critical transverse field of $g_c = 1$.
Exact and not-so-exact diagonalization
The term "exact diagonalization" is often used in a slightly misleading manner. In general, to find the eigenvalues of a $d$-dimensional Hamiltonian, one has to find the roots of the characteristic polynomial of degree $d$, for which in general no exact solution can be found for $d \geq 5$. Of course, we can still hope to numerically approximate the eigenvalues to an arbitrary degree, but the fact that we have to work with computers operating on finite-precision floating-point numbers makes this endeavour substantially more complicated.
Setting that aside, finding the eigenvalues by solving the characteristic polynomial is a bad idea anyway, as finding the roots of high-degree polynomials is a numerically tricky task [3]. In fact, one of the most powerful methods for finding the roots of such a polynomial is to generate a matrix that has the same characteristic polynomial and compute its eigenvalues using a different algorithm! A much better strategy is to find a unitary (or orthogonal, if all matrix elements of the Hamiltonian are real) transformation $U$ that makes the Hamiltonian diagonal, i.e.,
- $U^\dagger H U = D$, with $D$ being a diagonal matrix containing the eigenenergies.
The general strategy is to construct the matrix $U$ in an iterative way,
- $U = U_1 U_2 U_3 \cdots$,
until the matrix $U^\dagger H U$ becomes diagonal. The columns of $U$ then contain the eigenvectors of $H$. There are many different algorithms for actually performing the diagonalization, and it is usually a good idea to resort to existing libraries (such as LAPACK) for this task. However, if we are only interested in finding the low-energy eigenvalues of a particular Hamiltonian, we can make a few simplifications that allow for much faster computations.
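For small systems, the library route might look like the following sketch, which reuses the hypothetical `tfim_hamiltonian` helper from above and calls NumPy's LAPACK-backed routine for Hermitian matrices:

```python
import numpy as np

N, g = 10, 1.0
H = tfim_hamiltonian(N, g).toarray()   # dense 2^N x 2^N matrix; only feasible for small N

energies, U = np.linalg.eigh(H)        # full diagonalization via LAPACK
print("ground-state energy:", energies[0])

# Sanity check: U^dagger H U should be diagonal up to numerical precision
D = U.conj().T @ H @ U
print("largest off-diagonal element:", np.max(np.abs(D - np.diag(energies))))
```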
The power method
Initially, we pick a random state $|\psi_r\rangle$ which has a very small but finite overlap with the ground state, i.e., $\langle \psi_0 | \psi_r \rangle \neq 0$. Then, we repeatedly multiply the Hamiltonian onto this state and normalize the result,
- $|\psi^{(k+1)}\rangle = \mathcal{N}\left[H\,|\psi^{(k)}\rangle\right]$,
where $\mathcal{N}$ denotes the normalization operation and $|\psi^{(0)}\rangle = |\psi_r\rangle$. This method will eventually converge to the eigenvector with the largest absolute eigenvalue, so by subtracting a suitable constant energy from the Hamiltonian, we can always ensure that this will be the ground state. The key advantage is that the matrix-vector multiplications occurring at each iteration can be implemented very efficiently: in most cases (as for the transverse Ising model), the number of nonzero entries per column of such a sparse Hamiltonian is much smaller than the dimension of the Hilbert space (here: $N+1$ vs. $2^N$).
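A minimal sketch of the power method, assuming a real symmetric (sparse) Hamiltonian such as the transverse Ising model; the crude Gershgorin-type shift, the tolerance, and the function name `power_method` are illustrative choices rather than part of the original notes:

```python
import numpy as np

def power_method(H, tol=1e-10, max_iter=100000):
    """Iterate |psi> -> N[(H - shift)|psi>] until the ground-state energy converges."""
    # Shift by an upper bound on the spectrum so that the ground state has the
    # largest absolute eigenvalue of the shifted Hamiltonian.
    shift = abs(H).sum(axis=1).max()
    psi = np.random.rand(H.shape[0])         # random start with finite ground-state overlap
    psi /= np.linalg.norm(psi)
    energy = psi @ (H @ psi)
    for _ in range(max_iter):
        psi = H @ psi - shift * psi          # apply the shifted Hamiltonian
        psi /= np.linalg.norm(psi)           # normalization step
        new_energy = psi @ (H @ psi)
        if abs(new_energy - energy) < tol:
            break
        energy = new_energy
    return energy, psi
```

For the sparse Hamiltonian sketched earlier, each application of $H$ costs only $\mathcal{O}(N 2^N)$ operations, compared to $\mathcal{O}(4^N)$ for a dense matrix-vector product.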
Lanczos algorithm
The power method can be readily improved by using not only a single state during each iteration, but employing a larger set of states which is extended until convergence is reached. This procedure is known as the Lanczos algorithm and is implemented as follows [4]:
- Pick a random state $|\phi_0\rangle$ as in the power method.
- Construct a second state according to $|\phi_1\rangle = H|\phi_0\rangle - \frac{\langle\phi_0|H|\phi_0\rangle}{\langle\phi_0|\phi_0\rangle}\,|\phi_0\rangle$.
- Starting with $n = 1$, recursively construct an orthogonal set of states given by $|\phi_{n+1}\rangle = H|\phi_n\rangle - a_n|\phi_n\rangle - b_n^2|\phi_{n-1}\rangle$,
- where the coefficients $a_n$ and $b_n^2$ are given by $a_n = \frac{\langle\phi_n|H|\phi_n\rangle}{\langle\phi_n|\phi_n\rangle}$ and $b_n^2 = \frac{\langle\phi_n|\phi_n\rangle}{\langle\phi_{n-1}|\phi_{n-1}\rangle}$.
- Diagonalize the tridiagonal matrix given by $T_n = \begin{pmatrix} a_0 & b_1 & 0 & \cdots \\ b_1 & a_1 & b_2 & \cdots \\ 0 & b_2 & a_2 & \ddots \\ \vdots & \vdots & \ddots & \ddots \end{pmatrix}$.
- If the ground state energy has not converged to the desired accuracy, proceed at step 3 with $n$ increased by one.
While the exact ground state is only reached when the number of iterations $n$ equals the dimension of the Hilbert space, the remarkable feature of the Lanczos algorithm is that typically only a few hundred iterations are necessary. However, there is one important caveat: due to finite-precision arithmetic, the orthogonality between the states $|\phi_n\rangle$ is quickly lost. For practical applications, it is therefore advisable to use an improved algorithm that re-orthogonalizes the states.
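In practice, one would usually call a library routine such as scipy.sparse.linalg.eigsh, which wraps an implicitly restarted Lanczos method. Nevertheless, the bare recursion from the steps above can be sketched in a few lines; the sketch uses the equivalent normalized-vector form of the recursion, and the function name `lanczos_ground_state` and the simple convergence check are our own illustrative choices:

```python
import numpy as np

def lanczos_ground_state(H, max_steps=200, tol=1e-12):
    """Bare Lanczos recursion (no re-orthogonalization), for a real symmetric H."""
    phi_prev = np.zeros(H.shape[0])
    phi = np.random.rand(H.shape[0])
    phi /= np.linalg.norm(phi)
    a, b = [], [0.0]                     # diagonal and off-diagonal entries of T_n
    energy = np.inf
    for n in range(max_steps):
        w = H @ phi
        a.append(phi @ w)                # a_n for the normalized state |phi_n>
        w = w - a[-1] * phi - b[-1] * phi_prev
        # Diagonalize the current tridiagonal matrix T_n
        T = np.diag(a) + np.diag(b[1:], 1) + np.diag(b[1:], -1)
        new_energy = np.linalg.eigvalsh(T)[0]
        if abs(new_energy - energy) < tol:
            break
        energy = new_energy
        b.append(np.linalg.norm(w))      # b_{n+1}
        if b[-1] < 1e-14:                # the Krylov space is exhausted
            break
        phi_prev, phi = phi, w / b[-1]
    return energy
```

A call like scipy.sparse.linalg.eigsh(H, k=1, which='SA') returns the same ground-state energy while taking care of restarts and re-orthogonalization internally.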
Some remarks on complexity
Let us briefly consider the difficulty inherent in exact diagonalization studies. On each lattice site, we have 2 degrees of freedom, referring to a spin pointing either up or down. However, for $N$ spins, we have $2^N$ possible spin configurations. The consequences of this exponential scaling cannot be overstated: if we want to store the vector corresponding to the ground state in a computer using double precision, we would need 8 TB of memory for $N = 40$, and for a few hundred spins, the number of basis states already exceeds the number of atoms in the universe! From these considerations, we see that exact diagonalization can only work in the limit of small system sizes.
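These estimates follow from simple arithmetic on the Hilbert space dimension, as the following back-of-the-envelope check illustrates (assuming 8 bytes per real double-precision amplitude and the commonly quoted figure of roughly $10^{80}$ atoms in the observable universe):

```python
import math

# Memory needed to store one double-precision amplitude per basis state
for N in (10, 20, 30, 40):
    print(f"N = {N:2d}: {2**N * 8:>20,d} bytes")

# Smallest N for which 2^N exceeds ~1e80
print(math.ceil(80 / math.log10(2)))
```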
References
1. Sachdev, Subir (2001). Quantum Phase Transitions. Cambridge; New York: Cambridge University Press. ISBN 9780521004541.
2. http://iopscience.iop.org/article/10.1088/0022-3719/4/15/024/pdf
3. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1992). Numerical Recipes in C. Cambridge: Cambridge University Press. ISBN 9780521431088.
4. Dagotto, Elbio (1994). "Correlated electrons in high-temperature superconductors". Reviews of Modern Physics 66 (3): 763–840. doi:10.1103/RevModPhys.66.763.