In this article, all vector spaces are real and finite-dimensional.
Tensors provide a level of abstraction that helps us apply what mathematicians have learned about linear algebra, and they afford a cleaner notation for representing complex linear relationships. This article explains the index notation, giving the reader a feeling for the power of tensors.
We begin with the observation, from the preceding article in this series, that, once a basis $e_1, e_2, e_3$ for the vector space has been chosen (giving rise to the natural basis $\omega^1, \omega^2, \omega^3$ for the form space), the components of a vector or form are just the results of operating on the appropriate basis items.
For vectors:
$$V^i = \omega^i(V)$$
For forms:
$$\alpha_i = \alpha(e_i)$$
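For example, if $V = 3e_1 + 5e_2 - 2e_3$, then, since the natural basis satisfies $\omega^i(e_j) = 1$ when $i = j$ and $0$ otherwise, we get $\omega^1(V) = 3$, $\omega^2(V) = 5$, and $\omega^3(V) = -2$; applying the basis forms simply reads off the components.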
This covers all 1st rank tensors, covariant or contravariant. We can extend this to higher-rank tensors. If $T$ is a 2nd rank covariant tensor (that is, a bilinear form), it is completely determined by the results of applying it to all combinations of basis vectors. (Why? Because it is determined by the result of its application to pairs of vectors, those vectors can be broken down into combinations of basis vectors, and $T$ is linear.)
So we can call those results the components of the tensor:
$$T_{ij} = T(e_i, e_j)$$
The numbers $T_{ij}$ are the components of $T$. It follows that the space of 2nd rank tensors is 9-dimensional. In the general case, the space of Kth rank tensors on an underlying N-dimensional space is of dimension $N^K$.
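For example, the bilinear forms on a 3-dimensional space form a $3^2 = 9$ dimensional space, while in the 4-dimensional spacetime of relativity a 2nd rank tensor has $4^2 = 16$ components.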
We can work out the formula for evaluating tensor operations in terms of the components. We already have, from the preceding article, that, if $\alpha$ is a form and $V$ is a vector:
$$\alpha(V) = \sum_i \alpha_i V^i$$
In the case of a bilinear form $T$,
$$T(U, V) = \sum_{ij} T_{ij}\, U^i V^j$$
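Writing the sum out in full in a hypothetical 2-dimensional case makes the structure visible:
$$T(U, V) = T_{11}U^1V^1 + T_{12}U^1V^2 + T_{21}U^2V^1 + T_{22}U^2V^2$$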
We can similarly work out the formula for the operation of a 2nd rank mixed tensor on a form and a vector. The components of $T$ are, as always, defined by the action on basis forms and basis vectors:
$$T^i{}_j = T(\omega^i, e_j)$$
and the evaluation formula is
$$T(\alpha, V) = \sum_{ij} T^i{}_j\, \alpha_i V^j$$
A linear transformation $A$, that is, a linear function that maps vectors into vectors, is essentially the same as a 2nd rank mixed tensor. $A$ gives rise to a tensor $T$ of the kind that was analyzed above, with
$$T(\alpha, V) = \alpha(A(V))$$
The converse is also true: if $T$ is a mixed tensor, then there is a linear transformation $A$, having the same components, such that
$$\alpha(A(V)) = T(\alpha, V)$$
for all linear forms $\alpha$ and all vectors $V$.
The proof is just like the proof of the "double dual" theorem of the preceding article.
Now since
$$\sum_i \alpha_i\, (A(V))^i = \sum_{ij} T^i{}_j\, \alpha_i V^j$$
for all values of $\alpha_i$, it must be true that
$$(A(V))^i = \sum_j T^i{}_j\, V^j$$
for all $i$.
This tells us how to calculate the result of a linear transformation A on a vector V, using the components of the equivalent tensor T.
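As a concrete sketch (assuming Python with numpy; the component values here are made up), this calculation is just a matrix-vector product:

```python
import numpy as np

# Components T^i_j of the mixed tensor, stored as t[i][j],
# and components V^j of a vector. (Values are made up.)
t = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])

# (A(V))^i = sum over j of T^i_j V^j, written out explicitly...
w = np.array([sum(t[i, j] * v[j] for j in range(3)) for i in range(3)])

# ...which agrees with the usual matrix-vector product.
assert np.allclose(w, t @ v)
print(w)
```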
In the general case, if we have, say, a 4th rank tensor (1 covariant, 3 contravariant), we define the components (all 81 of them in 3 dimensions) as
$$T^{ijk}{}_l = T(\omega^i, \omega^j, \omega^k, e_l)$$
one such equation for each choice of the indices, 81 equations in all.
And we can evaluate the tensor on 4 arguments by
$$T(\alpha, \beta, \gamma, V) = \sum_{ijkl} T^{ijk}{}_l\, \alpha_i \beta_j \gamma_k V^l$$
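A sketch of this evaluation in Python (assuming numpy, with made-up random components) is just four nested sums:

```python
import numpy as np

rng = np.random.default_rng(0)

# Components T^ijk_l of a 4th rank tensor, stored as t[i, j, k, l],
# plus three forms and a vector, all with made-up components.
t = rng.standard_normal((3, 3, 3, 3))
alpha = rng.standard_normal(3)
beta = rng.standard_normal(3)
gamma = rng.standard_normal(3)
v = rng.standard_normal(3)

# T(alpha, beta, gamma, V) = sum over i, j, k, l of
#     T^ijk_l * alpha_i * beta_j * gamma_k * V^l
total = 0.0
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                total += t[i, j, k, l] * alpha[i] * beta[j] * gamma[k] * v[l]
print(total)
```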
The reader has no doubt noticed that we are following the convention that indices are written as subscripts for covariant and superscripts for contravariant.
At this point, we can stop dealing with the basis vectors and forms. We used them to define the components, but we now do all calculations directly from the components. This means that we can do everything with numerical indices, without worrying about running out of letters. We can also let the dimensionality of the space be anything, not just 3. (In relativity, it is 4.)
The formulas we have gathered so far look like these:
$$\alpha(V) = \sum_i \alpha_i V^i$$
$$T(U, V) = \sum_{ij} T_{ij}\, U^i V^j$$
$$T(\alpha, V) = \sum_{ij} T^i{}_j\, \alpha_i V^j$$
$$(A(V))^i = \sum_j T^i{}_j\, V^j$$
$$T(\alpha, \beta, \gamma, V) = \sum_{ijkl} T^{ijk}{}_l\, \alpha_i \beta_j \gamma_k V^l$$
These formulas are examples of the famous index notation for tensors.
Summations such as these, with a summation index appearing once in a superscript and once in a subscript, are extremely common in tensor calculations. The Einstein summation convention, named after Albert Einstein, who struggled with tensors when he was developing General Relativity, says that one can leave out the summation sign; it is implicit. Under the convention, $\alpha_i V^i$ means $\sum_i \alpha_i V^i$.
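Incidentally, the convention is so useful that numerical libraries implement the same idea of implicit summation over repeated indices; numpy's einsum function, for instance, takes the index pattern as a string (it does not distinguish upper from lower indices). A minimal sketch with made-up components:

```python
import numpy as np

alpha = np.array([1.0, 2.0, 3.0])   # components alpha_i of a form
u = np.array([1.0, 0.0, 2.0])       # components U^i of a vector
v = np.array([4.0, 5.0, 6.0])       # components V^j of a vector
t = np.arange(9.0).reshape(3, 3)    # components T_ij of a bilinear form

# alpha_i V^i: the repeated index i is summed implicitly.
print(np.einsum('i,i->', alpha, v))

# T_ij U^i V^j: both i and j are summed.
print(np.einsum('ij,i,j->', t, u, v))
```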
People doing serious tensor work make one other simplification: they don't bother with the left-hand sides of the above equations. They treat the right-hand sides as though they are the actual operations. That is,
$$\alpha_i V^i$$
literally means the application of the form $\alpha$ to the vector $V$. It's not just a shorthand formula telling how to calculate $\alpha(V)$; it actually denotes the operation.
$$T_{ij}\, U^i V^j$$
literally means the application of the bilinear form $T$ to the vectors $U$ and $V$.
How can we tell that $T_{ij}$ is a 2nd rank covariant tensor? Because it has two subscripts. Similarly, $V^i$ is a vector because it has one superscript. The number and position of superscripts and subscripts on a symbol indicate its rank and covariance. Careful placement of superscripts and subscripts makes the notation expressive and unambiguous.
The rules for tensor equations are these: in any term, an index that appears once as a superscript and once as a subscript is summed over, per the Einstein summation convention. When terms are added, or terms appear on both sides of an equation, the unsummed (free) indices must match in name and position, and there is an implicit "for all" applying to them.
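For example, in the (made-up) equation
$$X^i{}_j = Y^i{}_k\, Z^k{}_j + W^i{}_j$$
the index $k$ appears once up and once down within a single term, so it is summed over; $i$ and $j$ are free, appear in the same positions in every term, and the equation implicitly holds for all values of $i$ and $j$.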
There is one more thing to notice about linear transformations. There is a conventional notation for describing them, in terms of matrices. The standard formula in conventional notation is that if
$$W = A(V)$$
then
$$W_i = \sum_j M_{ij} V_j$$
where $M$ is the matrix of the transformation $A$. In the tensor formalism,
$$W^i = T^i{}_j\, V^j$$
so, aside from the fact that the superscript/subscript distinction only applies in tensor notation, the components of the tensor are just the entries of the matrix:
$$T^i{}_j = M_{ij}$$
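Concretely, in a hypothetical 2-dimensional case the tensor equation $W^i = T^i{}_j V^j$ unpacks to
$$W^1 = T^1{}_1 V^1 + T^1{}_2 V^2, \qquad W^2 = T^2{}_1 V^1 + T^2{}_2 V^2,$$
which is exactly the row-by-column recipe for multiplying the matrix $M$ by the column of components of $V$.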