Materials Science and Engineering/List of Topics/Quantum Mechanics/Mathematical Tools of Quantum Mechanics
The Hilbert Space and Wave Functions[edit | edit source]
The Linear Space[edit | edit source]
In mathematics, a vector space (or linear space) is a collection of objects (called vectors) that, informally speaking, may be scaled and added. More formally, a vector space is a set on which two operations, called (vector) addition and (scalar) multiplication, are defined and satisfy certain natural axioms, such as associativity and commutativity of addition, the existence of a zero vector and additive inverses, and distributivity of scalar multiplication over addition. Vector spaces are the basic objects of study in linear algebra, and are used throughout mathematics, science, and engineering.
The most familiar vector spaces are two- and three-dimensional Euclidean spaces. Vectors in these spaces are ordered pairs or triples of real numbers, and are often represented as geometric vectors which are quantities with a magnitude and a direction, usually depicted as arrows. These vectors may be added together using the parallelogram rule (vector addition) or multiplied by real numbers (scalar multiplication). The behavior of geometric vectors under these operations provides a good intuitive model for the behavior of vectors in more abstract vector spaces, which need not have a geometric interpretation. For example, the set of (real) polynomials forms a vector space.
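For concreteness, here is a small numerical sketch (assuming NumPy) of the polynomial example: representing polynomials by arrays of coefficients, addition and scalar multiplication stay within the set, which is the essence of a vector space.

```python
import numpy as np

# p(x) = 1 + 2x + 3x^2 and q(x) = 4 - x, as coefficient arrays
# (ascending powers); these play the role of abstract vectors.
p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, -1.0, 0.0])

# Vector addition: (p + q)(x) = 5 + x + 3x^2
s = p + q
print(s)  # [5. 1. 3.]

# Scalar multiplication: (2p)(x) = 2 + 4x + 6x^2
t = 2.0 * p
print(t)  # [2. 4. 6.]
```

The sum and the scaled polynomial are again polynomials (of degree at most 2), so the set is closed under both operations.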
The Hilbert Space[edit | edit source]
The mathematical concept of a Hilbert space, named after the German mathematician David Hilbert, generalizes the notion of Euclidean space in a way that extends methods of vector algebra from the two-dimensional plane and three-dimensional space to infinite-dimensional spaces. In more formal terms, a Hilbert space is an inner product space — an abstract vector space in which distances and angles can be measured — which is "complete", meaning that if a sequence of vectors approaches a limit, then that limit is guaranteed to be in the space as well.
Hilbert spaces arise naturally and frequently in mathematics, physics, and engineering, typically as infinite-dimensional function spaces. They are indispensable tools in the theories of partial differential equations, quantum mechanics, and signal processing. The recognition of a common algebraic structure within these diverse fields generated a greater conceptual understanding, and the success of Hilbert space methods ushered in a very fruitful era for functional analysis.
Geometric intuition plays an important role in many aspects of Hilbert space theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in the plane. This means that Hilbert space can also usefully be thought of in terms of infinite sequences that are square-summable. Linear operators on a Hilbert space are likewise fairly concrete objects: in good cases, they are simply transformations that stretch the space by different factors in mutually perpendicular directions.
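The coordinate description above can be sketched numerically (assuming NumPy); the example uses a finite-dimensional space, R³, but the same inner-product recipe gives the coordinates of a Hilbert-space element with respect to an orthonormal basis.

```python
import numpy as np

# An orthonormal basis of R^3 (the standard basis rotated 45 degrees
# in the xy-plane)
e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
e3 = np.array([0.0, 0.0, 1.0])

v = np.array([3.0, 1.0, 2.0])

# The coordinates of v are its inner products with the basis vectors
c = np.array([e1 @ v, e2 @ v, e3 @ v])

# v is recovered as the linear combination of basis vectors with
# those coordinates, in analogy with Cartesian coordinates
reconstructed = c[0] * e1 + c[1] * e2 + c[2] * e3
assert np.allclose(reconstructed, v)
```

In an infinite-dimensional Hilbert space the coordinate sequence is infinite, and square-summability of that sequence is exactly what the norm of the element requires.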
Dimension and Basis of a Vector Space[edit | edit source]
In linear algebra, a basis is a set of vectors that, in a linear combination, can represent every vector in a given vector space, and such that no element of the set can be represented as a linear combination of the others. In other words, a basis is a linearly independent spanning set.
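A minimal numerical check of this definition (assuming NumPy): stacking candidate basis vectors as rows of a matrix, full rank means the set is linearly independent and spanning.

```python
import numpy as np

# Candidate basis vectors of R^3, stacked as rows
vectors = np.array([
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
])

# Rank 3 = number of vectors = dimension of the space: the set is
# linearly independent and spanning, hence a basis
rank = np.linalg.matrix_rank(vectors)
print(rank)  # 3

# Replacing the last vector by a linear combination of the others
# destroys independence and drops the rank
dependent = vectors.copy()
dependent[2] = vectors[0] + vectors[1]
print(np.linalg.matrix_rank(dependent))  # 2
```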
Square-Integrable: Wave Functions[edit | edit source]
A real- or complex-valued function of a real or complex variable is square-integrable on an interval if the integral of the square of its absolute value, over that interval, is finite. The set of all measurable functions that are square-integrable forms a Hilbert space, the so-called L² space.
This is especially important in quantum mechanics, as wave functions must be square-integrable over all space if the theory is to yield a physically meaningful solution.
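The square-integrability requirement can be checked numerically for a concrete trial wave function (a Gaussian, in this illustrative sketch assuming NumPy): the integral of |ψ|² is finite, so ψ can be rescaled to unit norm.

```python
import numpy as np

# Discretize the real line; the Gaussian decays fast enough that
# truncating at |x| = 10 loses a negligible amount of the integral
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2.0)  # a Gaussian trial wave function

# The squared norm, integral of |psi|^2 dx, is finite: it equals sqrt(pi)
norm_sq = np.sum(np.abs(psi)**2) * dx
print(norm_sq)  # ~ 1.7725, i.e. sqrt(pi)

# Dividing by the norm yields a normalized, physically admissible state
psi_normalized = psi / np.sqrt(norm_sq)
assert np.isclose(np.sum(np.abs(psi_normalized)**2) * dx, 1.0)
```

A function such as ψ(x) = 1, by contrast, has infinite squared norm over all space and cannot be normalized, so it does not belong to L².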
Dirac Notation[edit | edit source]
Bra-ket notation is the standard notation for describing quantum states in the theory of quantum mechanics. It can also be used to denote abstract vectors and linear functionals in pure mathematics. It is so called because the inner product (or dot product) of two states is denoted by a bracket, ⟨φ|ψ⟩, consisting of a left part, ⟨φ|, called the bra, and a right part, |ψ⟩, called the ket. The notation was invented by Paul Dirac, and is also known as Dirac notation.
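In a finite-dimensional sketch (assuming NumPy), a ket is a column of complex numbers, the corresponding bra is its conjugate transpose, and the bracket ⟨φ|ψ⟩ is the complex inner product; `np.vdot` conjugates its first argument, which matches this convention.

```python
import numpy as np

# Two (unnormalized) state vectors in a 2-dimensional Hilbert space
ket_phi = np.array([1.0 + 1.0j, 0.0 + 0.0j])
ket_psi = np.array([1.0 + 0.0j, 0.0 + 1.0j])

# The bra <phi| is the conjugate transpose of the ket |phi>;
# np.vdot conjugates its first argument, so this is <phi|psi>
bracket = np.vdot(ket_phi, ket_psi)
print(bracket)  # (1-1j): conj(1+1j)*1 + conj(0)*1j

# The squared norm of a state is the bracket of the state with itself
norm_sq = np.vdot(ket_phi, ket_phi).real
print(norm_sq)  # 2.0
```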
Operators[edit | edit source]
General Definitions[edit | edit source]
In mathematics, an operator is a function, that operates on (or modifies) another function. Often, an "operator" is a function that acts on functions to produce other functions (the sense in which Oliver Heaviside used the term); or it may be a generalization of such a function, as in linear algebra, where some of the terminology reflects the origin of the subject in operations on the functions that are solutions of differential equations. An operator can perform a function on any number of operands (inputs) though most often there is only one operand.
An operator might also be called an operation, but the point of view is different. For instance, one can say "the operation of addition" (but not the "operator of addition") when focusing on the operands and result. One says "addition operator" when focusing on the process of addition, or from the more abstract viewpoint, the function +: S×S → S.
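The Heaviside sense of "operator", a function acting on functions to produce other functions, can be sketched directly in code; the finite-difference derivative operator below is a hypothetical example chosen for this illustration.

```python
# An operator as a higher-order function: it takes a function and
# returns a new function (here, an approximate derivative).
def derivative(f, h=1e-6):
    """Return a function approximating the derivative of f
    by a central finite difference with step h."""
    def df(x):
        return (f(x + h) - f(x - h)) / (2.0 * h)
    return df

square = lambda x: x * x
d_square = derivative(square)  # the operator maps x^2 to (approximately) 2x
print(d_square(3.0))           # ~ 6.0
```

Note that the operator has a single operand (the function `square`) and its output is itself a function, which can be evaluated or passed to further operators.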
Hermitian Adjoint[edit | edit source]
Using the Riesz representation theorem, one can show that there exists a unique continuous linear operator A* : H → H with the following property:

⟨Ax, y⟩ = ⟨x, A*y⟩ for all x, y ∈ H.

This operator A* is the adjoint of A.
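On the finite-dimensional Hilbert space Cⁿ the adjoint is simply the conjugate transpose of the matrix, and the defining property ⟨Ax, y⟩ = ⟨x, A*y⟩ can be verified numerically (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix on C^3; its adjoint is the conjugate transpose
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Defining property of the adjoint: <Ax, y> = <x, A* y>
# (np.vdot conjugates its first argument)
lhs = np.vdot(A @ x, y)
rhs = np.vdot(x, A_star @ y)
assert np.isclose(lhs, rhs)
```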
Projection Operators[edit | edit source]
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P² = P. Projections map the whole vector space to a subspace and leave the points in that subspace unchanged.
Though abstract, this definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
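A concrete sketch (assuming NumPy): the matrix below projects R³ orthogonally onto the xy-plane; it is idempotent, and points already in the plane are left fixed.

```python
import numpy as np

# Orthogonal projection of R^3 onto the xy-plane
P = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
])

# Idempotence, the defining property: P^2 = P
assert np.allclose(P @ P, P)

v = np.array([3.0, 4.0, 5.0])
print(P @ v)        # [3. 4. 0.]: v is mapped into the subspace
print(P @ (P @ v))  # [3. 4. 0.]: projecting again changes nothing
```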
Commutator Algebra[edit | edit source]
Uncertainty Relation between Two Operators[edit | edit source]
Inverse and Unitary Operators[edit | edit source]
Inverse Operator[edit | edit source]
In mathematics, if ƒ is a function from A to B, then an inverse function for ƒ is a function in the opposite direction, from B to A, with the property that a round trip (a composition) returns each element to itself. Not every function has an inverse; those that do are called invertible.
For example, if ƒ is the function that converts a temperature in degrees Celsius to degrees Fahrenheit, ƒ(C) = (9/5)C + 32, then its inverse function converts degrees Fahrenheit to degrees Celsius: ƒ⁻¹(F) = (5/9)(F − 32).
Or, suppose ƒ assigns each child in a family of three the year of its birth. An inverse function would tell us which child was born in a given year. However, if the family has twins (or triplets), then we cannot know which child to name for their common birth year. Likewise, if we are given a year in which no child was born, we cannot name a child. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function.
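The Celsius/Fahrenheit pair from the text makes a compact worked example: each function undoes the other, so the round-trip composition returns its input.

```python
# Temperature conversion and its inverse
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return 9.0 / 5.0 * c + 32.0

def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius (inverse of c_to_f)."""
    return 5.0 / 9.0 * (f - 32.0)

print(c_to_f(100.0))         # 212.0
print(f_to_c(212.0))         # 100.0
print(f_to_c(c_to_f(37.0)))  # 37.0: the round trip returns the input
```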
Unitary Operator[edit | edit source]
A bounded linear operator U : H → H on a Hilbert space H is called a unitary operator if:
- The range of U is dense in H, and
- U preserves the inner product ⟨ , ⟩ on the Hilbert space, i.e. for all vectors x and y in the Hilbert space, ⟨Ux, Uy⟩ = ⟨x, y⟩.
Notice that since U preserves the inner product, U is an isometry (and thus a bounded linear operator). The fact that U has dense range ensures that it has a bounded inverse U⁻¹, and it follows that U⁻¹ = U∗.
Thus, unitary operators are just automorphisms of Hilbert spaces, i.e., they preserve the structure (in this case, the linear space structure, the inner product, and hence the topology) of the space on which they act. The group of all unitary operators from a given Hilbert space H to itself is sometimes referred to as the Hilbert group of H, denoted Hilb(H).
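Both defining properties can be verified numerically for a concrete unitary operator, a plane rotation in this sketch (assuming NumPy): it preserves inner products, and its inverse equals its adjoint.

```python
import numpy as np

theta = np.pi / 4.0

# A rotation of the plane is a unitary operator (here with real entries,
# so the adjoint is just the transpose)
U = np.array([
    [np.cos(theta), -np.sin(theta)],
    [np.sin(theta),  np.cos(theta)],
])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# U preserves the inner product: <Ux, Uy> = <x, y>
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))

# The inverse of U is its adjoint (conjugate transpose)
assert np.allclose(np.linalg.inv(U), U.conj().T)
```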
Eigenvalues and Eigenvectors[edit | edit source]
In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a vector that is multiplied by a constant factor, called the eigenvalue, during that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).
For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector stays the same, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation is the set of all eigenvectors of that transformation that have the same eigenvalue, together with the zero vector (which has no direction). An eigenspace is an example of a subspace of a vector space.
In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard numerical and algebraic methods exist for finding the eigenvalues, eigenvectors, and eigenspaces of a given matrix.
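As a sketch of such a computation (assuming NumPy), the diagonal matrix below realizes exactly the eigenvalues +2 and −1 described above: one eigenvector is doubled, the other is reversed.

```python
import numpy as np

# A transformation that doubles the x-direction and reverses the y-direction
A = np.array([
    [2.0, 0.0],
    [0.0, -1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [ 2. -1.]

# Each column of `eigenvectors` satisfies the defining relation A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```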
These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.
Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.
Matrix and Wave Mechanics[edit | edit source]
Matrix Mechanics[edit | edit source]
Wave Mechanics[edit | edit source]
Reference[edit | edit source]
Nouredine Zettili, "Quantum Mechanics: Concepts and Applications". John Wiley & Sons, Ltd., New York, 2001