# Tensors/Calculations with index notation

Tensors allow a certain level of abstraction to help apply what mathematicians have learned about linear algebra. Tensors afford a cleaner notation to represent complex linear relationships at a more abstract level. This article explains the index notation, thereby giving the reader a feeling for the power of using tensors.

## Definition of the components

We begin with the observation, from the preceding article in this series, that, once a basis ${\hat {x}}$, ${\hat {y}}$, ${\hat {z}}$ for the vector space has been chosen (giving rise to the natural basis ${\hat {e}}$, ${\hat {f}}$, ${\hat {g}}$ for the form space), the components of a vector or form are just the results of operating on the appropriate basis items. For vectors:

$V^{1}={\hat {e}}(V)\qquad V^{2}={\hat {f}}(V)\qquad V^{3}={\hat {g}}(V)$

For forms:

$\Phi _{1}=\Phi ({\hat {x}})\qquad \Phi _{2}=\Phi ({\hat {y}})\qquad \Phi _{3}=\Phi ({\hat {z}})$

This covers all 1st rank tensors, covariant or contravariant. We can extend this to higher-rank tensors. If $\Omega \,$ is a 2nd rank covariant tensor (that is, a bilinear form), it is completely determined by the results of applying it to all combinations of basis vectors. (Why? Because it is determined by the result of its application to pairs of vectors, those vectors can be broken down into combinations of basis vectors, and $\Omega \,$ is linear.)

So we can call those results the components of the tensor:

$\Omega _{11}=\Omega ({\hat {x}},{\hat {x}})\qquad \Omega _{12}=\Omega ({\hat {x}},{\hat {y}})\qquad \Omega _{13}=\Omega ({\hat {x}},{\hat {z}})$

$\Omega _{21}=\Omega ({\hat {y}},{\hat {x}})\qquad \Omega _{22}=\Omega ({\hat {y}},{\hat {y}})\qquad \Omega _{23}=\Omega ({\hat {y}},{\hat {z}})$

$\Omega _{31}=\Omega ({\hat {z}},{\hat {x}})\qquad \Omega _{32}=\Omega ({\hat {z}},{\hat {y}})\qquad \Omega _{33}=\Omega ({\hat {z}},{\hat {z}})$

The numbers $\Omega _{11}\dots \Omega _{33}\,$ are the components of $\Omega \,$. It follows that the space of 2nd rank tensors is 9-dimensional. In the general case, the space of Kth rank tensors on an underlying N-dimensional space is of dimension $N^{K}$.
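To make this concrete, here is a small Python sketch (NumPy assumed; the particular bilinear form `omega` is an invented example) that recovers the nine components by applying the form to all pairs of standard basis vectors, and then checks that those components determine the form's value on arbitrary vectors:

```python
import numpy as np

# A sample bilinear form on R^3, given as an explicit function of two
# vectors (this particular omega is just an invented example).
def omega(v, w):
    return 2*v[0]*w[0] + v[0]*w[1] - 3*v[1]*w[2] + v[2]*w[0]

# Standard basis vectors x-hat, y-hat, z-hat as the rows of the identity.
basis = np.eye(3)

# The components Omega_ij are the results of applying the form to all
# nine ordered pairs of basis vectors.
Omega = np.array([[omega(basis[i], basis[j]) for j in range(3)]
                  for i in range(3)])
print(Omega)   # the 3x3 array of components

# The components determine the form completely: its value on any pair of
# vectors is recovered from them, by linearity in each slot.
V = np.array([1., 2., 3.])
W = np.array([4., 0., -1.])
from_components = sum(Omega[i, j] * V[i] * W[j]
                      for i in range(3) for j in range(3))
print(from_components == omega(V, W))   # -> True
```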

We can work out the formula for evaluating tensor operations in terms of the components. We already have, from the preceding article, that, if $\Phi \,$ is a form and V is a vector:

$\Phi (V)=\sum _{i=1}^{3}\Phi _{i}\ V^{i}$

In the case of a bilinear form $\Omega \,$,

$\Omega (V,W)=\Omega (V^{1}\ {\hat {x}}+V^{2}\ {\hat {y}}+V^{3}\ {\hat {z}},\ W^{1}\ {\hat {x}}+W^{2}\ {\hat {y}}+W^{3}\ {\hat {z}})$

$=V^{1}W^{1}\ \Omega ({\hat {x}},{\hat {x}})+V^{1}W^{2}\ \Omega ({\hat {x}},{\hat {y}})+V^{1}W^{3}\ \Omega ({\hat {x}},{\hat {z}})$
$+\ V^{2}W^{1}\ \Omega ({\hat {y}},{\hat {x}})+V^{2}W^{2}\ \Omega ({\hat {y}},{\hat {y}})+V^{2}W^{3}\ \Omega ({\hat {y}},{\hat {z}})$
$+\ V^{3}W^{1}\ \Omega ({\hat {z}},{\hat {x}})+V^{3}W^{2}\ \Omega ({\hat {z}},{\hat {y}})+V^{3}W^{3}\ \Omega ({\hat {z}},{\hat {z}})$

$=V^{1}W^{1}\ \Omega _{11}+V^{1}W^{2}\ \Omega _{12}+V^{1}W^{3}\ \Omega _{13}$
$+\ V^{2}W^{1}\ \Omega _{21}+V^{2}W^{2}\ \Omega _{22}+V^{2}W^{3}\ \Omega _{23}$
$+\ V^{3}W^{1}\ \Omega _{31}+V^{3}W^{2}\ \Omega _{32}+V^{3}W^{3}\ \Omega _{33}$

$=\sum _{i,j=1}^{3}\Omega _{ij}\ V^{i}W^{j}$

We can similarly work out the formula for the operation of a 2nd rank mixed tensor $T\,$ on a form and a vector. The components of $T\,$ are, as always, defined by the action on basis forms and basis vectors:

$T_{\ \ 1}^{1}=T({\hat {e}},{\hat {x}})\qquad {}T_{\ \ 2}^{1}=T({\hat {e}},{\hat {y}})\qquad {}T_{\ \ 3}^{1}=T({\hat {e}},{\hat {z}})$

$T_{\ \ 1}^{2}=T({\hat {f}},{\hat {x}})\qquad {}T_{\ \ 2}^{2}=T({\hat {f}},{\hat {y}})\qquad {}T_{\ \ 3}^{2}=T({\hat {f}},{\hat {z}})$

$T_{\ \ 1}^{3}=T({\hat {g}},{\hat {x}})\qquad {}T_{\ \ 2}^{3}=T({\hat {g}},{\hat {y}})\qquad {}T_{\ \ 3}^{3}=T({\hat {g}},{\hat {z}})$

The numbers $T_{\ \ 1}^{1}\dots T_{\ \ 3}^{3}\,$ are the components of $T\,$.

As before:

$T(\Phi ,V)=T(\Phi _{1}\ {\hat {e}}+\Phi _{2}\ {\hat {f}}+\Phi _{3}\ {\hat {g}},\ V^{1}\ {\hat {x}}+V^{2}\ {\hat {y}}+V^{3}\ {\hat {z}})$

$=\Phi _{1}V^{1}\ T({\hat {e}},{\hat {x}})+\Phi _{1}V^{2}\ T({\hat {e}},{\hat {y}})+\Phi _{1}V^{3}\ T({\hat {e}},{\hat {z}})$
$+\ \Phi _{2}V^{1}\ T({\hat {f}},{\hat {x}})+\Phi _{2}V^{2}\ T({\hat {f}},{\hat {y}})+\Phi _{2}V^{3}\ T({\hat {f}},{\hat {z}})$
$+\ \Phi _{3}V^{1}\ T({\hat {g}},{\hat {x}})+\Phi _{3}V^{2}\ T({\hat {g}},{\hat {y}})+\Phi _{3}V^{3}\ T({\hat {g}},{\hat {z}})$

$=\Phi _{1}V^{1}\ T_{\ \ 1}^{1}+\Phi _{1}V^{2}\ T_{\ \ 2}^{1}+\Phi _{1}V^{3}\ T_{\ \ 3}^{1}$
$+\ \Phi _{2}V^{1}\ T_{\ \ 1}^{2}+\Phi _{2}V^{2}\ T_{\ \ 2}^{2}+\Phi _{2}V^{3}\ T_{\ \ 3}^{2}$
$+\ \Phi _{3}V^{1}\ T_{\ \ 1}^{3}+\Phi _{3}V^{2}\ T_{\ \ 2}^{3}+\Phi _{3}V^{3}\ T_{\ \ 3}^{3}$

$=\sum _{i,j=1}^{3}T_{\ j}^{i}\ \Phi _{i}V^{j}$

## Linear transformations (mixed tensors)

A linear transformation A, that is, a linear function that maps vectors into vectors, is essentially the same as a 2nd rank mixed tensor. Identifying A with the tensor T that was analyzed above, we have

$\Phi (A(V))=T(\Phi ,V)=\sum _{i,j=1}^{3}T_{\ j}^{i}\ \Phi _{i}V^{j}=\sum _{i,j=1}^{3}\Phi _{i}\ A_{\ j}^{i}\ V^{j}$

The converse is also true: if T is a mixed tensor, then there is a linear transformation A, having the same components, such that

$\Phi (A(V))=T(\Phi ,V)\,$ for all linear forms $\Phi \,$ .

The proof is just like the proof of the "double dual" theorem of the preceding article.

Now since

$\Phi (A(V))=\sum _{i=1}^{3}\Phi _{i}\left(A(V)\right)^{i}=\sum _{i=1}^{3}\Phi _{i}\left(\sum _{j=1}^{3}A_{\ j}^{i}\ V^{j}\right)$

for all values of $\Phi _{i}\,$, it must be true that

$\left(A(V)\right)^{i}=\sum _{j=1}^{3}A_{\ j}^{i}\ V^{j}$

for all i.

This tells us how to calculate the result of a linear transformation A on a vector V, using the components of the equivalent tensor T.
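As a sketch in Python (NumPy assumed; the components of A are arbitrary illustrative values), the formula $\left(A(V)\right)^{i}=\sum _{j}A_{\ j}^{i}\ V^{j}$ is just an ordinary matrix-vector product:

```python
import numpy as np

# Components A^i_j of a linear transformation, stored with the
# superscript as the row index and the subscript as the column index.
A = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [3., 0., 1.]])
V = np.array([1., 1., 2.])

# (A(V))^i = sum_j A^i_j V^j, written out componentwise ...
W = np.array([sum(A[i, j] * V[j] for j in range(3)) for i in range(3)])
print(W)           # -> [3. 1. 5.]

# ... which is exactly the ordinary matrix-vector product.
print(A @ V)       # -> [3. 1. 5.]
```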

## Generalization to arbitrary rank

In the general case, if we have, say, a 4th rank tensor (1 covariant, 3 contravariant), we define the components (all 81 of them in 3 dimensions) as

$\Theta _{2}^{\ \ 231}=\Theta ({\hat {y}},{\hat {f}},{\hat {g}},{\hat {e}})$

etc., 81 equations in all.

And we can evaluate the tensor on 4 arguments by

$\Theta (V,\Phi ,\Psi ,\Lambda )=\sum _{i,j,k,l=1}^{3}\Theta _{i}^{\ jkl}\ V^{i}\Phi _{j}\Psi _{k}\Lambda _{l}$

The reader has no doubt noticed that we are following the convention that indices are written as subscripts for covariant and superscripts for contravariant.
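The four-index evaluation formula can be checked numerically. The sketch below (NumPy, with random illustrative components) compares the explicit quadruple sum against `numpy.einsum`, which performs the same contraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4th rank tensor (1 covariant, 3 contravariant slots): in component
# form it is just a 3 x 3 x 3 x 3 array of numbers.
Theta = rng.standard_normal((3, 3, 3, 3))
V   = rng.standard_normal(3)   # contravariant components V^i of a vector
Phi = rng.standard_normal(3)   # covariant components of three forms
Psi = rng.standard_normal(3)
Lam = rng.standard_normal(3)

# Theta(V, Phi, Psi, Lambda) = sum_{i,j,k,l} Theta_i^jkl V^i Phi_j Psi_k Lam_l
value = np.einsum('ijkl,i,j,k,l->', Theta, V, Phi, Psi, Lam)

# Cross-check against the fully explicit quadruple sum (81 terms).
check = sum(Theta[i, j, k, l] * V[i] * Phi[j] * Psi[k] * Lam[l]
            for i in range(3) for j in range(3)
            for k in range(3) for l in range(3))

print(abs(value - check) < 1e-12)   # -> True
```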

## Pure index notation

At this point, we can stop dealing with the basis vectors and forms. We used them to define the components, but we now do all calculations directly from the components. This means that we can do everything with numerical indices, without worrying about running out of letters. We can also let the dimensionality of the space be anything, not just 3. (In relativity, it is 4.)

The formulas we have gathered so far look like these:

$\Phi (V)=\sum _{i=1}^{N}\Phi _{i}\ V^{i}$

$\Omega (V,W)=\sum _{i,j=1}^{N}\Omega _{ij}\ V^{i}W^{j}$

$T(\Phi ,V)=\sum _{i,j=1}^{N}T_{\ j}^{i}\ \Phi _{i}V^{j}$

$\left(A(V)\right)^{i}=\sum _{j=1}^{N}A_{\ j}^{i}\ V^{j}$

$\Theta (V,\Phi ,\Psi ,\Lambda )=\sum _{i,j,k,l=1}^{N}\Theta _{i}^{\ jkl}\ V^{i}\Phi _{j}\Psi _{k}\Lambda _{l}$

These formulas are examples of the famous index notation for tensors.
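As a quick numerical check of, say, the mixed-tensor formula in N = 4 dimensions (the components below are arbitrary illustrative values), the explicit double sum and the equivalent `einsum` contraction agree:

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)

T   = rng.standard_normal((N, N))  # components T^i_j of a mixed tensor
Phi = rng.standard_normal(N)       # components Phi_i of a form
V   = rng.standard_normal(N)       # components V^j of a vector

# T(Phi, V) = sum_{i,j} T^i_j Phi_i V^j, written out explicitly ...
explicit = sum(T[i, j] * Phi[i] * V[j]
               for i in range(N) for j in range(N))

# ... and as a single contraction.
compact = np.einsum('ij,i,j->', T, Phi, V)

print(abs(explicit - compact) < 1e-12)   # -> True
```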

## Einstein summation

Summations such as these, with a summation index appearing once in a superscript and once in a subscript, are extremely common in tensor calculations. The Einstein summation convention, named after Albert Einstein, who struggled with tensors when he was developing General Relativity, says that one can leave out the summation sign; the summation is implicit.

$\Phi (V)=\Phi _{i}\ V^{i}$

$\Omega (V,W)=\Omega _{ij}\ V^{i}W^{j}$

$T(\Phi ,V)=T_{\ j}^{i}\ \Phi _{i}V^{j}$

$\left(A(V)\right)^{i}=A_{\ j}^{i}\ V^{j}$

$\Theta (V,\Phi ,\Psi ,\Lambda )=\Theta _{i}^{\ jkl}\ V^{i}\Phi _{j}\Psi _{k}\Lambda _{l}$

People doing serious tensor work make one other simplification: they don't bother with the left-hand sides of the above equations. They treat the right-hand sides as though they are the actual operations. That is,

$\Phi _{i}\ V^{i}$ literally means the application of the form $\Phi \,$ to the vector $V\,$ . It's not just a shorthand formula telling how to calculate $\Phi (V)\,$ , it actually denotes the operation.

$\Omega _{ij}\ V^{i}W^{j}$ literally means the application of the bilinear form $\Omega \,$ to the vectors $V\,$ and $W\,$.

How can we tell that $\Omega _{ij}\,$ is a 2nd rank covariant tensor? Because it has two subscripts. Similarly, $V^{i}\,$ is a vector because it has one superscript. The number and position of superscripts and subscripts on a symbol indicate its rank and covariance. By being careful with the positions of superscripts and subscripts, the notation is expressive and unambiguous.

The rules for tensor equations are these: within any term, an index that appears once as a superscript and once as a subscript is summed over, following the Einstein summation convention. When terms are added, or terms appear on both sides of an equation, the unsummed (free) indices must match, and there is an implicit "for all" applying to them.
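These rules map directly onto `numpy.einsum`, whose index strings are read the same way: a repeated index is summed, and the remaining free indices index the result. A short sketch with illustrative values:

```python
import numpy as np

T = np.arange(9.).reshape(3, 3)   # components T^i_j of a mixed tensor
V = np.array([1., 2., 3.])

# T^i_j V^j: j is repeated (once up, once down), so it is summed;
# i is free, so the result is a vector indexed by i.
print(np.einsum('ij,j->i', T, V))   # -> [ 8. 26. 44.]

# T^i_i: the repeated index is summed and no free index remains,
# leaving a scalar (the trace of the transformation).
print(np.einsum('ii->', T))         # -> 12.0
```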

Another example of a tensor equation would be

$V^{i}=3\ W^{i}+X^{i}\,$

meaning that $V=3\ W+X\,$ as vectors.

## Conventional notation for linear transformations

There is one more thing to notice about linear transformations: the conventional notation for describing them, in terms of matrices. The standard formula in conventional notation is that if

$W=A(V)\,$

then

$W_{i}=\sum _{j=1}^{N}A_{i,j}\ V_{j}$

where $A_{i,j}\,$ is the matrix of the transformation A. In the tensor formalism,

$W^{i}=A_{\ j}^{i}\ V^{j}=\sum _{j=1}^{N}A_{\ j}^{i}\ V^{j}$

so, aside from the fact that the superscript/subscript distinction only applies in tensor notation,

$A_{i,j}\,$ (conventional notation)  $=A_{\ j}^{i}\,$ (tensor notation)

The next article in this series is Tensors/Transformation rule under a change of basis.