Linear algebra/Orthogonal matrix


A real square matrix is orthogonal (or orthonormal[1]) if and only if its columns form an orthonormal basis of a Euclidean space, i.e., a space in which all numbers are real-valued and the dot product is defined in the usual fashion.[2][3] An orthonormal basis in an N-dimensional space is one where (1) all the basis vectors have unit magnitude,[4] and (2) the dot product of any two distinct basis vectors is zero.
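As a concrete illustration (a minimal sketch assuming Python with numpy; the rotation matrix below is just one familiar example), the columns of an orthogonal matrix can be checked against both conditions directly:

    import numpy as np

    # One familiar orthogonal matrix: a rotation by 30 degrees about the z axis.
    theta = np.pi / 6
    Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])

    # Condition (1): every column has unit magnitude.
    # Condition (2): distinct columns have zero dot product.
    for i in range(3):
        for j in range(3):
            dot = np.dot(Q[:, i], Q[:, j])
            assert np.isclose(dot, 1.0 if i == j else 0.0)
    print("The columns of Q form an orthonormal basis.")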

Fundamental properties

Visual understanding of multiplication by the transpose of a matrix. If $A$ is an orthogonal matrix and $B$ is its transpose, the $ij$-th element of the product $AA^T$ vanishes for $i \ne j$ because the $i$-th row of $A$ is orthogonal to the $j$-th row of $A$.

Three important results that are easy to prove

Among the first things a novice should learn are those that are easy to prove.

Orthonormal basis vectors are hiding in plain sight

Theorem:

  • If the rows of a square matrix $A$ form an orthonormal set of (basis) vectors, then the transpose of that matrix is its inverse: $A^T A = A A^T = I$.
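To make the claim concrete, here is a minimal numerical sketch (assuming Python with numpy; the particular permutation matrix is just an illustrative choice). The rows of a permutation matrix are distinct standard unit vectors, hence orthonormal, so its transpose must be its inverse:

    import numpy as np

    # A permutation matrix: each row is a distinct standard unit vector,
    # so the rows form an orthonormal set.
    P = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])

    # The theorem then says the transpose must be the inverse.
    assert np.allclose(P @ P.T, np.eye(3))
    assert np.allclose(P.T @ P, np.eye(3))
    assert np.allclose(np.linalg.inv(P), P.T)
    print("P^T is the inverse of P.")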

Visual understanding

Suppose the rows of a matrix form an orthonormal set of basis vectors, as shown for the i-th row of matrix A in the figure to the right. The ij-th element of the product AB is the dot product of the i-th row of A with the j-th column of matrix B, as shown in the upper part of the diagram, where the j-th column is highlighted in yellow. In the diagram's lower part, matrix B is replaced by its transpose, which shifts the elements in column j to a row (highlighted in cyan). This establishes that the product of A with the transpose of B creates elements that are the dot products of rows of A with rows of B.

If A is an orthogonal matrix and B is its transpose, this procedure creates matrix elements that are dot products among the rows of the orthogonal matrix.
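This row-by-row picture is easy to verify numerically. The sketch below (assuming Python with numpy, with matrices chosen at random) confirms that the $ij$-th element of $AB^T$ is the dot product of the $i$-th row of $A$ with the $j$-th row of $B$:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # (A B^T)_ij equals the dot product of row i of A with row j of B.
    rows_dot_rows = np.array([[np.dot(A[i], B[j]) for j in range(3)]
                              for i in range(3)])
    assert np.allclose(A @ B.T, rows_dot_rows)
    print("A @ B.T takes dot products among the rows of A and B.")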

Rigorous proof

This proof illustrates how subscripts are used to manipulate and understand tensors.

1. Suppose

$$\hat e_i' = \sum_k A_{ik}\, \hat e_k$$

is the i-th element of an orthonormal set of basis vectors. Here the $\hat e_k$ are the original unit vectors used to define the new set of unit vectors that we extract from the rows of matrix $A$.

2. Now we relabel how we write the sums for $\hat e_i'$ and $\hat e_j'$ as follows:

$$\hat e_i' = \sum_m A_{im}\, \hat e_m \qquad \hat e_j' = \sum_n A_{jn}\, \hat e_n$$

Hint: In the first of these two equations, I replaced $k$ by $m$ because summed variables can be changed at will. They are sometimes called "dummy variables" because they "do not speak" after the sum is done. For example, summing $n$ from 1 to 3 equals 1+2+3, which is the same as summing $m$ from 1 to 3. In the second equation I relabeled my dummy variable as $n$ because the same dummy variable cannot serve two purposes in a single expression.

3. This yields the following expression for the dot product between our two vectors:

$$\hat e_i' \cdot \hat e_j' = \sum_m \sum_n A_{im} A_{jn}\, (\hat e_m \cdot \hat e_n)$$

4. This last term introduces the Kronecker delta symbol, defined by $\hat e_m \cdot \hat e_n = \delta_{mn}$, which equals 1 when $m = n$ and 0 otherwise:

$$\hat e_i' \cdot \hat e_j' = \sum_m \sum_n A_{im} A_{jn}\, \delta_{mn} = \sum_n A_{in} A_{jn}$$

The last term almost looks like the product of the matrix with itself. It can be turned into a product by taking the transpose of the second factor, using $A_{jn} = (A^T)_{nj}$:

$$\hat e_i' \cdot \hat e_j' = \sum_n A_{in} (A^T)_{nj} = (A A^T)_{ij}$$

5. If $A$ is orthogonal, then $A A^T = I$, and we conclude that the rows of $A$ (i.e., the vectors $\hat e_i'$) form an orthonormal collection of vectors (i.e., a "rotated" basis for the vector space).
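The index manipulations above are easy to spot-check numerically. Below is a minimal sketch (assuming Python with numpy; the QR factorization is just a convenient way to manufacture an orthogonal matrix) confirming that $\sum_n A_{in} A_{jn} = (AA^T)_{ij} = \delta_{ij}$ when $A$ is orthogonal:

    import numpy as np

    rng = np.random.default_rng(1)
    # The QR factorization of a random matrix yields an orthogonal Q.
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

    # sum_n Q_in Q_jn, written with explicit indices via einsum...
    lhs = np.einsum('in,jn->ij', Q, Q)
    # ...equals (Q Q^T)_ij, which is the Kronecker delta.
    assert np.allclose(lhs, Q @ Q.T)
    assert np.allclose(lhs, np.eye(4))
    print("The rows of Q are orthonormal: sum_n Q_in Q_jn = delta_ij.")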

Change of basis for tensors

If a matrix is used to rotate vectors, then use it twice to rotate tensors

A common use of the orthogonal matrix is to express a vector given in one reference frame in a "rotated"[8] frame.

Here, we let $M$ denote any matrix (i.e., "tensor"), while $R$ is any orthogonal matrix (typically a rotation). Let $\vec v$ and $\vec w$ be two vectors, and let $\vec v\,'$ and $\vec w\,'$ represent the same vectors in a rotated reference frame.

Theorem
  • If   $\vec w = M \vec v$,   then:   $\vec w\,' = M' \vec v\,'$, where $M' = R M R^T$
Proof
  1. Define $M' = R M R^T$.
  2. Assume $\vec v\,' = R \vec v$ and $\vec w\,' = R \vec w$.
  3. Do some tensor algebra and express $\vec w\,'$ in terms of $\vec v\,'$:

$$\vec w\,' = R \vec w = R M \vec v = R M (R^T R) \vec v = (R M R^T)(R \vec v) = M' \vec v\,'$$

In this context, the only difference between the tensor and scalar algebras is that with tensors, matrices do not always commute: $RM - MR$ does not always vanish.
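Here is a minimal numerical sketch of the theorem (assuming Python with numpy; the angle and the matrix $M$ are arbitrary choices): rotating both the vectors and the tensor gives the same relation in the new frame.

    import numpy as np

    theta = 0.7  # an arbitrary rotation angle, in radians
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    rng = np.random.default_rng(2)
    M = rng.standard_normal((2, 2))     # an arbitrary "tensor"
    v = rng.standard_normal(2)
    w = M @ v                           # the relation in the original frame

    v_prime = R @ v                     # the same vectors in the rotated frame
    w_prime = R @ w
    M_prime = R @ M @ R.T               # the tensor transforms with R twice

    assert np.allclose(w_prime, M_prime @ v_prime)
    print("w' = M' v' holds with M' = R M R^T.")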

Derivation of the rotation tensor

Rotation of basis vectors. Since it is an active transformation, the sign on $\sin\theta$ is opposite to the case for rotating a point.
This image illustrates a proof for a passive transformation, based on the rules for the sine and cosine of the sum of two angles.

The rotation matrix is usually the first orthogonal matrix students encounter. While it is conceptually easier to rotate vectors than to rotate a coordinate system, it is algebraically easier to rotate a coordinate system. From the figure, the unit vectors in a rotated reference frame obey:

$$\hat x' = \cos\theta\, \hat x + \sin\theta\, \hat y \qquad \hat y' = -\sin\theta\, \hat x + \cos\theta\, \hat y$$

Students will quickly see the sine and cosine components in this equation, but the minus sign might seem confusing. It comes from the fact that $\hat y'$ has a negative component when projected along the $\hat x$ direction. Now express the vector $\vec r$, first in the unprimed coordinate system, then in the primed one:

$$\vec r = x\, \hat x + y\, \hat y = x'\, \hat x' + y'\, \hat y'$$

To complete the proof, substitute the expressions for the primed unit vectors in terms of the unprimed unit vectors:

$$\vec r = x'(\cos\theta\, \hat x + \sin\theta\, \hat y) + y'(-\sin\theta\, \hat x + \cos\theta\, \hat y) = (x'\cos\theta - y'\sin\theta)\, \hat x + (x'\sin\theta + y'\cos\theta)\, \hat y$$

This latter expression solves our problem, as we were seeking an expression of the form $\vec r = x\, \hat x + y\, \hat y$. Matching coefficients of $\hat x$ and $\hat y$ gives $x = x'\cos\theta - y'\sin\theta$ and $y = x'\sin\theta + y'\cos\theta$, which can be inverted to express the primed coordinates in terms of the unprimed ones.

Note how in this formalism there is no distinction between the primed and unprimed vector $\vec r$. This tends to confuse everyone, including the author. Such confusion can be avoided when writing a textbook or article, but in the free-wheeling world of scientific literature and wikis alike, such chaos cannot be avoided. That is why it is good to read books carefully.

Going back to the notation of many WMF pages, we have the following formula for the components of a vector if the coordinate system is rotated by $\theta$ about the z axis:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$
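As a final check on the signs (a sketch assuming Python with numpy; the point and the angle are arbitrary), the passive-rotation matrix above reproduces $x' = x\cos\theta + y\sin\theta$ and $y' = -x\sin\theta + y\cos\theta$:

    import numpy as np

    theta = np.pi / 3          # rotate the coordinate system by 60 degrees
    x, y = 2.0, 1.0            # an arbitrary point, unprimed coordinates

    # Passive rotation about the z axis: note the +sin in the upper right.
    R_passive = np.array([[ np.cos(theta), np.sin(theta)],
                          [-np.sin(theta), np.cos(theta)]])
    x_prime, y_prime = R_passive @ np.array([x, y])

    assert np.isclose(x_prime,  x*np.cos(theta) + y*np.sin(theta))
    assert np.isclose(y_prime, -x*np.sin(theta) + y*np.cos(theta))
    print(f"Primed coordinates: ({x_prime:.3f}, {y_prime:.3f})")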


Notes

  1. The term "orthogonal" is confusing. A better word in this context would be orthonormal. See the lede sentence in w:special:Permalink/1181197344
  2. w:Special:Permalink/1181197344#Matrix_properties
  3. The physics student's first alternative to the "usual fashion" is the dot product in special relativity, where the time component enters with the opposite sign: $\vec a \cdot \vec b = a_x b_x + a_y b_y + a_z b_z - a_t b_t$
  4. "Unit magnitude" means the dot product of the vector with itself equals 1
  5. Most of this page is based on https://en.wikipedia.org/w/index.php?title=Orthogonal_matrix&oldid=1028769520
  6. https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
  7. https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
  8. The quotation marks on "rotated" are intended to include orthogonal matrices that are also reflections of an axis through the origin.