Tensors/Transformation rule under a change of basis

Subject classification: this is a mathematics resource.
Subject classification: this is a physics resource.
Educational level: this is a secondary education resource.
Educational level: this is a tertiary (university) resource.
This article presumes that the reader has read the earlier articles in the series, starting with Tensors/Definitions.
In this article, all vector spaces are real and finite-dimensional.

An important theoretical question about tensors is: how do the components of a tensor change when one switches to a different basis (coordinate system)? This matters because this series of articles has given a general definition of tensors of various ranks, based directly on the vectors involved, along with detailed formulas for calculating the values of the tensors in terms of the components of the tensors and of the argument vectors and forms. Since the components of a given vector change when the basis changes, the components of all tensors must change so that the formulas give the same answers. How do the components of a tensor change?

The starting point for this is the way the components of a simple vector change. That change is best given by a transformation matrix. The 3 (or N) components of a vector in the "new" basis are obtained by multiplying the transformation matrix A by the components (written out as a column) of that vector in the "old" basis. Just where that transformation matrix comes from is beyond the scope of this article, though it typically arises from some kind of rotation of the coordinate system, or perhaps a change between Cartesian and polar or spherical coordinates.
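For instance, here is a minimal numeric sketch in Python (the 30-degree rotation and the 2D setting are illustrative assumptions, not anything specified in this series):

    import numpy as np

    # Hypothetical change of basis: rotate the coordinate axes by 30 degrees.
    # The components of a fixed vector then transform by this matrix A.
    theta = np.radians(30)
    A = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])

    V_old = np.array([1.0, 2.0])    # components in the "old" basis
    V_new = A @ V_old               # components in the "new" basis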

For the purposes of this article, the "new" components will be written with a bar over them, thusly:

$\bar{V}^1, \bar{V}^2, \bar{V}^3$

while the "old" components are written without a bar:

$V^1, V^2, V^3$
The convention of using "barred" and "unbarred" components when discussing coordinate system changes is a common convention when dealing with tensors. It is unwieldy, but probably better than any other convention.

A change of basis could be written out like this:

$\bar{V}^i = \sum_j A_{ij} V^j$
We have used ordinary matrix notation for the transformation matrix A (writing both indices as subscripts) because it is not a tensor, and this is not a tensor equation. Einstein summation will not be used in this article.
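Since Einstein summation is not being used, the sum over j can be written out as explicit loops. Here is a sketch of the rule as literal code (the function name and the example values are ours, chosen for illustration):

    import numpy as np

    def transform_vector(A, V):
        # V_bar[i] = sum over j of A[i, j] * V[j], written as explicit loops
        N = len(V)
        V_bar = np.zeros(N)
        for i in range(N):
            for j in range(N):
                V_bar[i] += A[i, j] * V[j]
        return V_bar

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])                 # any invertible matrix will do
    V = np.array([3.0, -1.0])
    print(transform_vector(A, V))              # same result as A @ V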

That covers first-rank contravariant tensors. How about forms, that is, first-rank covariant tensors? Assume that they are transformed by some other matrix B:

$\bar{\omega}_i = \sum_j B_{ij}\, \omega_j$
The task is to figure out how B relates to A.

Now we know that the action of a form on a vector is given in terms of their components:

$\omega(V) = \sum_i \omega_i V^i$
This must give the same answer in both bases, so

$\sum_j \bar{\omega}_j \bar{V}^j = \sum_i \omega_i V^i$
but

$\bar{V}^j = \sum_k A_{jk} V^k$
and

$\bar{\omega}_j = \sum_i B_{ji}\, \omega_i$
so, substituting these in and regrouping the sums,

$\sum_i \omega_i \sum_k \left( \sum_j B_{ji} A_{jk} \right) V^k = \sum_i \omega_i V^i$
Since this is true for all forms $\omega$, we must have

$\sum_k \left( \sum_j B_{ji} A_{jk} \right) V^k = V^i$

    for all $i$.

Now the transpose of the matrix B is given by

$(B^\mathsf{T})_{ij} = B_{ji}$
and, by the rule for matrix multiplication:

$\sum_j (B^\mathsf{T})_{ij} A_{jk} = (B^\mathsf{T} A)_{ik}$
so

$\sum_k (B^\mathsf{T} A)_{ik} V^k = V^i$

    for all $i$.
Since this is true for all vectors V, the matrix $B^\mathsf{T} A$ must be the identity matrix, or

$B^\mathsf{T} A = I$
or

$B^\mathsf{T} = A^{-1}$
The transpose of B is the inverse of A, or, equivalently, B is the inverse transpose of A. This gives the rule for handling a change of basis for forms:

$\bar{\omega}_i = \sum_j \left( (A^\mathsf{T})^{-1} \right)_{ij} \omega_j$
Whatever matrix takes care of vectors, its inverse transpose takes care of forms.
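A quick numeric check of this rule, as a sketch (the matrix A and the component values are made up for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])        # any invertible matrix (made-up values)
    B = np.linalg.inv(A).T            # the inverse transpose of A

    V     = np.array([3.0, -1.0])     # vector components in the old basis
    omega = np.array([0.5,  2.0])     # form components in the old basis

    V_bar     = A @ V                 # vectors transform with A
    omega_bar = B @ omega             # forms transform with B

    # The value omega(V) = sum_i omega_i V^i is the same in both bases:
    assert np.isclose(omega @ V, omega_bar @ V_bar)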

How about higher-rank tensors? Just apply one or the other of the matrices to each index: A for contravariant indices and its inverse transpose B for covariant indices. For example, a second-rank covariant tensor takes two applications of the matrix B, which is the inverse transpose of A:

$\bar{T}_{ij} = \sum_k \sum_l B_{ik}\, B_{jl}\, T_{kl}$
The reader can check that this gets the right answer when applied to two vectors.
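Here is one way to carry out that check numerically, as a sketch with made-up values (note that the double sum above is $\bar{T} = B T B^\mathsf{T}$ in matrix form):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])        # made-up invertible matrix
    B = np.linalg.inv(A).T

    T = np.array([[1.0, 4.0],
                  [0.0, 3.0]])        # second-rank covariant tensor, old basis
    T_bar = B @ T @ B.T               # T_bar[i,j] = sum_{k,l} B[i,k] B[j,l] T[k,l]

    V = np.array([ 3.0, -1.0])
    W = np.array([-2.0,  5.0])

    # T(V, W) = sum_{i,j} T[i,j] V^i W^j must not depend on the basis:
    assert np.isclose(V @ T @ W, (A @ V) @ T_bar @ (A @ W))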

For each index of the tensor, there is a summation and a matrix, A or B, according to whether that index is contravariant or covariant. For a mixed second-rank tensor, for example:

$\bar{T}^i{}_j = \sum_k \sum_l A_{ik}\, B_{jl}\, T^k{}_l$
Many treatments of tensors take this transformation rule as the definition of a tensor. That is, they define a tensor as "a pile of numbers that transform according to ...", giving the rule that we have derived.

Readers who are familiar with the theory of matrices may know that a matrix is orthogonal if and only if its inverse and its transpose are the same. This means that, if the transformation matrix is orthogonal, vectors and forms transform the same way. So, if one deals only with bases that transform orthogonally, one can get away with not worrying about the distinction between covariant and contravariant tensors. The main example of this is Euclidean space: the transformations between orthonormal (Cartesian) bases are always orthogonal.
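A minimal sketch of this fact, using a rotation matrix as the illustrative orthogonal transformation:

    import numpy as np

    # A rotation matrix is the classic orthogonal transformation.
    theta = np.radians(30)
    A = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])

    assert np.allclose(A.T @ A, np.eye(2))      # A is orthogonal
    assert np.allclose(np.linalg.inv(A).T, A)   # inverse transpose of A is A itself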
