# Tensors/Transformation rule under a change of basis

 Subject classification: this is a mathematics resource.
 Subject classification: this is a physics resource.
 Educational level: this is a secondary education resource.
 Educational level: this is a tertiary (university) resource.

An important theoretical question about tensors is: how do the components of a tensor change when one switches to a different basis (coordinate system)? The question matters because this series of articles has defined the tensors of various rank directly in terms of the vectors involved, and has given detailed formulas for calculating the values of the tensors from the components of the tensors and of the argument vectors and forms. Since the components of a given vector change when the basis changes, the components of every tensor must change in a compensating way so that those formulas give the same answers. How, exactly, do the components of a tensor change?

The starting point for this is the manner in which the components of a simple vector change. That change is best described by a transformation matrix: the 3 (or, in general, N) components of a vector in the "new" basis are obtained by multiplying the transformation matrix A by the components (written out as a column) of that vector in the "old" basis. Where that transformation matrix comes from is beyond the scope of this article; it typically arises from some kind of rotation of the coordinate system, or perhaps from a change between Cartesian and polar or spherical coordinates.

For the purposes of this article, the "new" components will be written with a bar over them:

${\displaystyle {\overline {V^{i}}}\,}$

while the "old" components are written without a bar:

${\displaystyle V^{i}\,}$

The convention of using "barred" and "unbarred" components when discussing coordinate system changes is a common convention when dealing with tensors. It is unwieldy, but probably better than any other convention.

A change of basis could be written out like this:

${\displaystyle {\overline {V^{i}}}=\sum _{j=1}^{N}A_{i,j}\ V^{j}}$

We have used ordinary notation for the transformation matrix A because it is not a tensor, and this is not a tensor equation. Einstein summation will not be used in this article.
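This transformation is just an ordinary matrix-vector product, and can be checked numerically. The following is a small sketch using NumPy; the particular matrix A (a rotation of the x-y plane by 30 degrees) and the vector V are made-up examples, not taken from the text above.

```python
import numpy as np

# Hypothetical transformation matrix A: a rotation of the x-y plane
# by 30 degrees, leaving the z axis fixed.
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

V = np.array([1.0, 2.0, 3.0])   # "old" components V^j

# New components: V-bar^i = sum_j A_{i,j} V^j, i.e. a matrix-vector product.
V_bar = A @ V
print(V_bar)
```

Since this A is a rotation, the length of the vector is unchanged, which gives a quick sanity check on the computation.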

That covers first-rank contravariant tensors. How about forms, that is, first-rank covariant tensors? Assume that they are transformed by some other matrix B:

${\displaystyle {\overline {\Phi _{i}}}=\sum _{j=1}^{N}B_{i,j}\ \Phi _{j}}$

The task is to figure out how B relates to A.

Now we know that the value of a form applied to a vector is given in terms of their components:

${\displaystyle \Phi (V)=\sum _{i=1}^{N}\Phi _{i}\ V^{i}}$

This must be the same in both bases, so

${\displaystyle \sum _{k=1}^{N}{\overline {\Phi _{k}}}\ {\overline {V^{k}}}=\sum _{i=1}^{N}\Phi _{i}\ V^{i}}$

but

${\displaystyle {\overline {V^{k}}}=\sum _{j=1}^{N}A_{k,j}\ V^{j}}$

and

${\displaystyle {\overline {\Phi _{k}}}=\sum _{i=1}^{N}B_{k,i}\ \Phi _{i}}$

so

${\displaystyle \sum _{i,j,k=1}^{N}B_{k,i}\ \Phi _{i}A_{k,j}\ V^{j}=\sum _{i=1}^{N}\Phi _{i}\ V^{i}}$

Since this is true for all forms ${\displaystyle \Phi \,}$, we must have

${\displaystyle \sum _{j,k=1}^{N}B_{k,i}\ A_{k,j}\ V^{j}=V^{i}}$    for all i.

Now the transpose of the matrix B is given by

${\displaystyle B_{i,k}^{t}=B_{k,i}\,}$

and, by matrix multiplication:

${\displaystyle \sum _{k=1}^{N}B_{k,i}\ A_{k,j}=\sum _{k=1}^{N}B_{i,k}^{t}\ A_{k,j}=[B^{t}A]_{i,j}}$

so

${\displaystyle V^{i}=\sum _{j=1}^{N}[B^{t}A]_{i,j}\ V^{j}}$

Since this is true for all vectors V, the matrix ${\displaystyle [B^{t}A]\,}$ must be the identity matrix, or

${\displaystyle B^{t}=A^{-1}\,}$

or

${\displaystyle B=(A^{-1})^{t}=(A^{t})^{-1}\,}$

The transpose of B is the inverse of A, or, equivalently, B is the inverse transpose of A. This gives the rule for handling a change of basis for forms:

 Whatever matrix takes care of vectors, its inverse transpose takes care of forms.
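This rule can be verified numerically: with B chosen as the inverse transpose of A, the scalar value of a form applied to a vector comes out the same in both bases. The matrix and components below are random stand-ins, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary (almost surely invertible) basis-change matrix for vectors.
A = rng.normal(size=(3, 3))

# Forms transform with B, the inverse transpose of A.
B = np.linalg.inv(A).T

V   = rng.normal(size=3)    # vector components V^j
Phi = rng.normal(size=3)    # form components Phi_j

V_bar   = A @ V             # V-bar^i   = sum_j A_{i,j} V^j
Phi_bar = B @ Phi           # Phi-bar_i = sum_j B_{i,j} Phi_j

# The scalar Phi(V) = sum_i Phi_i V^i is the same in both bases.
print(Phi @ V, Phi_bar @ V_bar)
```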

How about higher-rank tensors? Just apply one matrix or the other to each index: A for contravariant indices and its inverse transpose B for covariant indices. For example, the second-rank covariant tensor ${\displaystyle \Omega \,}$ takes two applications of the matrix B, the inverse transpose of A.

${\displaystyle {\overline {\Omega _{ij}}}=\sum _{k,l=1}^{N}B_{i,k}B_{j,l}\ \Omega _{kl}}$

The reader can check that this gets the right answer when applied to two vectors.
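That check can also be done numerically. The double sum over k and l is the matrix product B Ω Bᵗ, and the value Ω(V, W) must be the same in both bases. Again, the matrix and components below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))     # basis-change matrix for vectors
B = np.linalg.inv(A).T          # inverse transpose, used for covariant indices

Omega = rng.normal(size=(3, 3)) # components Omega_{kl}
V = rng.normal(size=3)
W = rng.normal(size=3)

# Omega-bar_{ij} = sum_{k,l} B_{i,k} B_{j,l} Omega_{kl}  =  [B Omega B^t]_{ij}
Omega_bar = B @ Omega @ B.T

# Omega(V, W) = sum_{i,j} Omega_{ij} V^i W^j is basis-independent.
print(V @ Omega @ W, (A @ V) @ Omega_bar @ (A @ W))
```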

For each index of the tensor, there is a summation and a matrix A or B, according to whether the index is contravariant or covariant.

Many treatments of tensors take this transformation rule as the definition of a tensor. That is, they define a tensor as "a pile of numbers that transform according to ...", giving the rule that we have derived.

Readers who are familiar with the theory of matrices may know that a matrix is orthogonal if and only if its inverse and its transpose are the same. This means that, if the transformation matrix is orthogonal, then B = A, so vectors and forms transform the same way. So, if one deals only with bases related by orthogonal transformations, one can get away with not worrying about the distinction between covariant and contravariant tensors. The main example of this is Euclidean space: the transformations between orthonormal (Cartesian) bases are always orthogonal.
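The collapse of the distinction in the orthogonal case is easy to see numerically: for an orthogonal A, the inverse transpose of A is A itself. The rotation below is an assumed example.

```python
import numpy as np

# An orthogonal transformation: rotation of the plane by 45 degrees.
theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# B is the inverse transpose of A, the matrix that transforms forms.
B = np.linalg.inv(A).T

# For orthogonal A we have A^{-1} = A^t, hence B = A:
# vectors and forms transform identically.
print(np.allclose(B, A))
```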