A sound understanding of tensors and tensor operations is essential if you want to read and understand modern papers on solid mechanics and finite element modeling of complex material behavior. This brief introduction gives you an overview of tensors and tensor notation. For more details you can read A Brief on Tensor Analysis by J. G. Simmonds, the appendix on vector and tensor notation from Dynamics of Polymeric Liquids - Volume 1 by R. B. Bird, R. C. Armstrong, and O. Hassager, and the monograph by R. M. Brannon. An introduction to tensors in continuum mechanics can be found in An Introduction to Continuum Mechanics by M. E. Gurtin. Most of the material on this page is based on these sources.
The following notation is usually used in the literature: scalars are written in lightface italic type ($\alpha$, $\lambda$), vectors in boldface lowercase type ($\mathbf{u}$, $\mathbf{v}$), second order tensors in boldface type ($\boldsymbol{S}$, $\boldsymbol{A}$), and fourth order tensors in boldface sans-serif type ($\boldsymbol{\mathsf{I}}$, $\boldsymbol{\mathsf{C}}$).
A force $\mathbf{f}$ has a magnitude and a direction, can be added to another force, can be multiplied by a scalar, and so on. These properties make the force $\mathbf{f}$ a vector. Similarly, the displacement $\mathbf{u}$ is a vector because it can be added to other displacements and satisfies the other properties of a vector.
However, a force cannot be added to a displacement to yield a physically meaningful quantity, so the physical spaces that these two quantities lie in must be different.
Recall that a constant force $\mathbf{f}$ moving through a displacement $\mathbf{u}$ does $W = \mathbf{f}\cdot\mathbf{u}$ units of work. How do we compute this product when the spaces of $\mathbf{f}$ and $\mathbf{u}$ are different? If you try to compute the product on a graph, you will have to convert both quantities to a single basis and then compute the scalar product.
An alternative way of thinking about the operation $\mathbf{f}\cdot\mathbf{u}$ is to think of $\mathbf{f}$ as a linear operator that acts on $\mathbf{u}$ to produce a scalar quantity (work). In the notation of sets we can write

$$\mathbf{f} : \mathbf{u} \mapsto \mathbf{f}\cdot\mathbf{u} = W \in \mathbb{R}~.$$
A first order tensor is a linear operator that sends vectors to scalars.
Next, assume that the force $\mathbf{f}$ acts at a point $\mathbf{x}$. The moment of the force about the origin is given by $\mathbf{m} = \mathbf{x}\times\mathbf{f}$, which is a vector. The vector product can be thought of as a linear operation too. In this case the effect of the operator is to convert a vector into another vector.
A second order tensor is a linear operator that sends vectors to vectors.
According to Simmonds, "the name tensor comes from elasticity theory where in a loaded elastic body the stress tensor acting on a unit vector normal to a plane through a point delivers the tension (i.e., the force per unit area) acting across the plane at that point."
Examples of second order tensors are the stress tensor, the deformation gradient tensor, the velocity gradient tensor, and so on.
Another type of tensor that we encounter frequently in mechanics is the fourth order tensor that takes strains to stresses. In elasticity, this is the stiffness tensor.
A fourth order tensor is a linear operator that sends second order tensors to second order tensors.
A tensor $\boldsymbol{S}$ is a linear transformation from a vector space $\mathcal{V}$ to $\mathcal{V}$. Thus, we can write

$$\boldsymbol{S} : \mathcal{V} \to \mathcal{V}~, \qquad \mathbf{v} = \boldsymbol{S}(\mathbf{u})~.$$

More often, we use the following notation:

$$\mathbf{v} = \boldsymbol{S}\,\mathbf{u} \qquad \text{or} \qquad \mathbf{v} = \boldsymbol{S}\cdot\mathbf{u}~.$$

I have used the "dot" notation in this handout. None of the above notations is obviously superior to the others and each is used widely.
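Numerically, a vector can be represented by a 1-D array and a second order tensor by a 3×3 array. The following minimal NumPy sketch (all values are arbitrary illustrations) shows both kinds of linear action:

```python
import numpy as np

# A first order tensor (here a force f) acts on a vector (a displacement u)
# and returns a scalar: the work W = f . u
f = np.array([1.0, 2.0, 0.5])
u = np.array([0.2, -1.0, 3.0])
W = np.dot(f, u)                     # scalar

# A second order tensor S acts on a vector and returns a vector: v = S . u
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])
v = S @ u                            # vector

# Linearity: S . (a u1 + b u2) = a (S . u1) + b (S . u2)
u1, u2, a, b = np.array([1., 0., 0.]), np.array([0., 1., 2.]), 2.0, -3.0
assert np.allclose(S @ (a*u1 + b*u2), a*(S @ u1) + b*(S @ u2))
```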
Let $\boldsymbol{S}$ and $\boldsymbol{T}$ be two tensors. Then the sum $\boldsymbol{S} + \boldsymbol{T}$ is another tensor defined by

$$(\boldsymbol{S} + \boldsymbol{T})\cdot\mathbf{u} = \boldsymbol{S}\cdot\mathbf{u} + \boldsymbol{T}\cdot\mathbf{u}~.$$
Let $\boldsymbol{S}$ be a tensor and let $\alpha$ be a scalar. Then the product $\alpha\,\boldsymbol{S}$ is a tensor defined by

$$(\alpha\,\boldsymbol{S})\cdot\mathbf{u} = \alpha\,(\boldsymbol{S}\cdot\mathbf{u})~.$$
The zero tensor $\boldsymbol{\mathit{0}}$ is the tensor which maps every vector $\mathbf{u}$ into the zero vector:

$$\boldsymbol{\mathit{0}}\cdot\mathbf{u} = \mathbf{0}~.$$
The identity tensor $\boldsymbol{\mathit{1}}$ takes every vector $\mathbf{u}$ into itself:

$$\boldsymbol{\mathit{1}}\cdot\mathbf{u} = \mathbf{u}~.$$

The identity tensor is also often written as $\boldsymbol{I}$.
Let $\boldsymbol{S}$ and $\boldsymbol{T}$ be two tensors. Then the product $\boldsymbol{S}\cdot\boldsymbol{T}$ is the tensor that is defined by

$$(\boldsymbol{S}\cdot\boldsymbol{T})\cdot\mathbf{u} = \boldsymbol{S}\cdot(\boldsymbol{T}\cdot\mathbf{u})~.$$

In general $\boldsymbol{S}\cdot\boldsymbol{T} \ne \boldsymbol{T}\cdot\boldsymbol{S}$.
The transpose of a tensor $\boldsymbol{S}$ is the unique tensor $\boldsymbol{S}^T$ defined by

$$(\boldsymbol{S}\cdot\mathbf{u})\cdot\mathbf{v} = \mathbf{u}\cdot(\boldsymbol{S}^T\cdot\mathbf{v}) \qquad \text{for all vectors } \mathbf{u}, \mathbf{v}~.$$

The following identities follow from the above definition:

$$(\boldsymbol{S} + \boldsymbol{T})^T = \boldsymbol{S}^T + \boldsymbol{T}^T~, \qquad (\boldsymbol{S}\cdot\boldsymbol{T})^T = \boldsymbol{T}^T\cdot\boldsymbol{S}^T~, \qquad (\boldsymbol{S}^T)^T = \boldsymbol{S}~.$$
A tensor $\boldsymbol{S}$ is symmetric if

$$\boldsymbol{S} = \boldsymbol{S}^T~.$$

A tensor $\boldsymbol{S}$ is skew if

$$\boldsymbol{S} = -\boldsymbol{S}^T~.$$
Every tensor $\boldsymbol{S}$ can be expressed uniquely as the sum of a symmetric tensor $\boldsymbol{E}$ (the symmetric part of $\boldsymbol{S}$) and a skew tensor $\boldsymbol{W}$ (the skew part of $\boldsymbol{S}$):

$$\boldsymbol{S} = \boldsymbol{E} + \boldsymbol{W}~, \qquad \boldsymbol{E} = \tfrac{1}{2}\,(\boldsymbol{S} + \boldsymbol{S}^T)~, \qquad \boldsymbol{W} = \tfrac{1}{2}\,(\boldsymbol{S} - \boldsymbol{S}^T)~.$$
The tensor (or dyadic) product $\mathbf{a}\otimes\mathbf{b}$ (also written $\mathbf{a}\,\mathbf{b}$) of two vectors $\mathbf{a}$ and $\mathbf{b}$ is the tensor that assigns to each vector $\mathbf{u}$ the vector $(\mathbf{b}\cdot\mathbf{u})\,\mathbf{a}$:

$$(\mathbf{a}\otimes\mathbf{b})\cdot\mathbf{u} = (\mathbf{b}\cdot\mathbf{u})\,\mathbf{a}~.$$
Notice that all the above operations on tensors are remarkably similar to matrix operations.
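Because of this similarity, all of the definitions above can be checked directly with 3×3 NumPy arrays; a short sketch with arbitrary values:

```python
import numpy as np

S = np.array([[1., 2., 0.], [0., 3., 1.], [2., 0., 1.]])
T = np.array([[0., 1., 1.], [1., 0., 2.], [3., 1., 0.]])
u = np.array([1., -2., 0.5])

# Sum and scalar multiple act exactly like matrix operations
assert np.allclose((S + T) @ u, S @ u + T @ u)
assert np.allclose((2.5 * S) @ u, 2.5 * (S @ u))

# Tensor product (composition) and its non-commutativity
assert np.allclose((S @ T) @ u, S @ (T @ u))
print(np.allclose(S @ T, T @ S))   # False in general

# Symmetric/skew decomposition: S = E + W
E = 0.5 * (S + S.T)
W = 0.5 * (S - S.T)
assert np.allclose(S, E + W)

# Dyadic product: (a ⊗ b) . u = (b . u) a
a, b = np.array([1., 0., 2.]), np.array([0., 1., 1.])
assert np.allclose(np.outer(a, b) @ u, np.dot(b, u) * a)
```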
The spectral theorem for tensors is widely used in mechanics. We will start off by defining eigenvalues and eigenvectors.
Let $\boldsymbol{S}$ be a second order tensor. Let $\lambda$ be a scalar and $\mathbf{v}$ be a nonzero vector such that

$$\boldsymbol{S}\cdot\mathbf{v} = \lambda\,\mathbf{v}~.$$

Then $\lambda$ is called an eigenvalue of $\boldsymbol{S}$ and $\mathbf{v}$ is an eigenvector.
A second order tensor has three eigenvalues and three eigenvectors, since the space is three-dimensional. Some of the eigenvalues might be repeated; the number of times an eigenvalue is repeated is called its multiplicity.
In mechanics, many second order tensors are symmetric and positive definite. Note the following important properties of such tensors:
- If $\boldsymbol{S}$ is positive definite, then the eigenvalues satisfy $\lambda_i > 0$.
- If $\boldsymbol{S}$ is symmetric, the eigenvectors $\mathbf{v}_i$ are mutually orthogonal.
For more on eigenvalues and eigenvectors see Applied linear operators and spectral methods.
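For a symmetric positive definite tensor, these properties and the spectral representation can be verified with NumPy's symmetric eigensolver; in the sketch below the matrix S is an arbitrary example constructed to be symmetric positive definite:

```python
import numpy as np

# Build an arbitrary symmetric positive definite tensor S = B^T B + 1
B = np.array([[1., 2., 0.], [0., 1., 1.], [1., 0., 3.]])
S = B.T @ B + np.eye(3)

# eigh is specialized for symmetric matrices
lam, V = np.linalg.eigh(S)     # columns of V are eigenvectors

print(np.all(lam > 0))                      # True: positive definite
print(np.allclose(V.T @ V, np.eye(3)))      # True: orthonormal eigenvectors

# Spectral representation: S = sum_i lam_i (v_i ⊗ v_i)
S_rebuilt = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)
```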
Let $\boldsymbol{F}$ be a second order tensor with $\det\boldsymbol{F} > 0$. Then
- there exist positive definite, symmetric tensors $\boldsymbol{U}$, $\boldsymbol{V}$ and a rotation (orthogonal) tensor $\boldsymbol{R}$ such that $\boldsymbol{F} = \boldsymbol{R}\cdot\boldsymbol{U} = \boldsymbol{V}\cdot\boldsymbol{R}$ (see the numerical sketch after this list);
- also, each of these decompositions is unique.
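This is the polar decomposition. A numerical sketch using scipy.linalg.polar (the tensor F below is an arbitrary example with positive determinant):

```python
import numpy as np
from scipy.linalg import polar

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.4, 1.1]])
assert np.linalg.det(F) > 0

R, U = polar(F, side='right')    # F = R . U, U symmetric positive definite
R2, V = polar(F, side='left')    # F = V . R, V symmetric positive definite

assert np.allclose(F, R @ U)
assert np.allclose(F, V @ R2)
assert np.allclose(R, R2)                  # the rotation is the same in both
assert np.allclose(R.T @ R, np.eye(3))     # R is orthogonal
assert np.all(np.linalg.eigvalsh(U) > 0)   # U is positive definite
```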
Let $\boldsymbol{S}$ be a second order tensor. Then the determinant of $\lambda\,\boldsymbol{\mathit{1}} + \boldsymbol{S}$ can be expressed as

$$\det(\lambda\,\boldsymbol{\mathit{1}} + \boldsymbol{S}) = \lambda^3 + I_1\,\lambda^2 + I_2\,\lambda + I_3~.$$

The quantities $I_1$, $I_2$, $I_3$ are called the principal invariants of $\boldsymbol{S}$. In terms of the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $\boldsymbol{S}$, the principal invariants are given by

$$\begin{aligned} I_1 &= \text{tr}~\boldsymbol{S} = \lambda_1 + \lambda_2 + \lambda_3 \\ I_2 &= \cfrac{1}{2}\left[(\text{tr}~\boldsymbol{S})^2 - \text{tr}(\boldsymbol{S}^2)\right] = \lambda_1\,\lambda_2 + \lambda_2\,\lambda_3 + \lambda_3\,\lambda_1 \\ I_3 &= \det\boldsymbol{S} = \lambda_1\,\lambda_2\,\lambda_3 \end{aligned}$$
Note that $\lambda$ is an eigenvalue of $\boldsymbol{S}$ if and only if

$$\det(\boldsymbol{S} - \lambda\,\boldsymbol{\mathit{1}}) = 0~.$$

The resulting equation is called the characteristic equation and is usually written in expanded form as

$$\lambda^3 - I_1\,\lambda^2 + I_2\,\lambda - I_3 = 0~.$$
The Cayley-Hamilton theorem is a very useful result in continuum mechanics. It states that if $\boldsymbol{S}$ is a second order tensor then it satisfies its own characteristic equation:

$$\boldsymbol{S}^3 - I_1\,\boldsymbol{S}^2 + I_2\,\boldsymbol{S} - I_3\,\boldsymbol{\mathit{1}} = \boldsymbol{\mathit{0}}~.$$
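Both the characteristic equation and the Cayley-Hamilton theorem are easy to verify numerically; a sketch with an arbitrary symmetric tensor:

```python
import numpy as np

S = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# Cayley-Hamilton: S^3 - I1 S^2 + I2 S - I3 1 = 0
assert np.allclose(S @ S @ S - I1*(S @ S) + I2*S - I3*np.eye(3), 0.0)

# The eigenvalues of S are the roots of the characteristic equation
roots = np.sort(np.roots([1.0, -I1, I2, -I3]).real)
assert np.allclose(roots, np.sort(np.linalg.eigvals(S).real))
```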
All the equations so far have made no mention of the coordinate system. When we use vectors and tensors in computations we have to express them in some coordinate system (basis) and use the components of the object in that basis for our computations.
Commonly used bases are the Cartesian coordinate frame, the cylindrical coordinate frame, and the spherical coordinate frame.
A Cartesian coordinate frame consists of an orthonormal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ together with a point $O$ called the origin. Since these vectors are mutually perpendicular unit vectors, we have the following relations:

$$\mathbf{e}_1\cdot\mathbf{e}_1 = \mathbf{e}_2\cdot\mathbf{e}_2 = \mathbf{e}_3\cdot\mathbf{e}_3 = 1~, \qquad \mathbf{e}_1\cdot\mathbf{e}_2 = \mathbf{e}_2\cdot\mathbf{e}_3 = \mathbf{e}_3\cdot\mathbf{e}_1 = \mathbf{e}_2\cdot\mathbf{e}_1 = \mathbf{e}_3\cdot\mathbf{e}_2 = \mathbf{e}_1\cdot\mathbf{e}_3 = 0~. \tag{1}$$

To make the above relations more compact, we introduce the Kronecker delta symbol

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$

Then, instead of the nine equations in (1) we can write (in index notation)

$$\mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}~.$$
Recall that the vector $\mathbf{v}$ can be written as

$$\mathbf{v} = v_1\,\mathbf{e}_1 + v_2\,\mathbf{e}_2 + v_3\,\mathbf{e}_3 = \sum_{i=1}^3 v_i\,\mathbf{e}_i~. \tag{2}$$

In index notation, equation (2) can be written as

$$\mathbf{v} = v_i\,\mathbf{e}_i~.$$

This convention is called the Einstein summation convention: if an index is repeated in a product, a sum over that index is implied.
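NumPy's einsum function uses essentially the same convention, so index expressions can be transcribed almost verbatim; a sketch with arbitrary values:

```python
import numpy as np

v = np.array([1., 2., 3.])
S = np.array([[1., 0., 2.], [3., 1., 0.], [0., 2., 1.]])
u = np.array([0.5, -1., 2.])

# Repeated indices are summed, exactly as in einsum's subscript notation:
w  = np.einsum('i,i->', v, u)        # v_i u_i   (dot product)
Su = np.einsum('ij,j->i', S, u)      # S_ij u_j  (tensor acting on a vector)
AB = np.einsum('ik,kj->ij', S, S)    # S_ik S_kj (tensor product)

assert np.isclose(w, np.dot(v, u))
assert np.allclose(Su, S @ u)
assert np.allclose(AB, S @ S)
```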
We can write the Cartesian components of a vector $\mathbf{v}$ in the basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ as

$$v_i = \mathbf{v}\cdot\mathbf{e}_i~.$$
Similarly, the components $S_{ij}$ of a tensor $\boldsymbol{S}$ are defined by

$$S_{ij} = \mathbf{e}_i\cdot(\boldsymbol{S}\cdot\mathbf{e}_j)~.$$

Using the definition of the tensor product, we can also write

$$\boldsymbol{S} = \sum_{i=1}^3\sum_{j=1}^3 S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j~.$$

Using the summation convention,

$$\boldsymbol{S} = S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j~.$$

In this case, the bases of the tensor are $\mathbf{e}_i\otimes\mathbf{e}_j$ and the components are $S_{ij}$.
From the definition of the components of the tensor $\boldsymbol{S}$, we can also see that (using the summation convention)

$$\boldsymbol{S}\cdot\mathbf{u} = S_{ij}\,u_j\,\mathbf{e}_i~.$$

Similarly, the dyadic product can be expressed as

$$\mathbf{a}\otimes\mathbf{b} = a_i\,b_j\,\mathbf{e}_i\otimes\mathbf{e}_j~, \qquad (\mathbf{a}\otimes\mathbf{b})_{ij} = a_i\,b_j~.$$
We can also write a tensor $\boldsymbol{S}$ in matrix notation as

$$\boldsymbol{S} \equiv [S] = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix}~.$$

Note that the Kronecker delta represents the components of the identity tensor in a Cartesian basis. Therefore, we can write

$$\boldsymbol{\mathit{1}} = \delta_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j~.$$
The inner product $\boldsymbol{A}:\boldsymbol{B}$ of two tensors $\boldsymbol{A}$ and $\boldsymbol{B}$ is an operation that generates a scalar. We define (summation implied)

$$\boldsymbol{A}:\boldsymbol{B} = A_{ij}~B_{ij}~.$$

The inner product can also be expressed using the trace:

$$\boldsymbol{A}:\boldsymbol{B} = \text{tr}(\boldsymbol{A}^T\cdot\boldsymbol{B})~.$$

Proof, using the definition of the trace below:

$$\text{tr}(\boldsymbol{A}^T\cdot\boldsymbol{B}) = (\boldsymbol{A}^T\cdot\boldsymbol{B})_{ii} = (\boldsymbol{A}^T)_{ik}~B_{ki} = A_{ki}~B_{ki} = \boldsymbol{A}:\boldsymbol{B}~.$$

The trace of a tensor is the scalar given by

$$\text{tr}~\boldsymbol{A} = A_{ii}~.$$

The trace of an $N\times N$ matrix is the sum of the entries on its main diagonal.
The magnitude of a tensor $\boldsymbol{A}$ is defined by

$$\lVert\boldsymbol{A}\rVert = \sqrt{\boldsymbol{A}:\boldsymbol{A}}~.$$
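A sketch verifying the equivalence of these expressions with NumPy (the inner product of second order tensors is the Frobenius inner product of the component matrices, and the magnitude is the Frobenius norm):

```python
import numpy as np

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
B = np.array([[2., 0., 1.], [1., 3., 0.], [0., 1., 2.]])

# Inner product A : B = A_ij B_ij, three equivalent ways
ip1 = np.einsum('ij,ij->', A, B)
ip2 = np.trace(A.T @ B)
ip3 = np.sum(A * B)
assert np.isclose(ip1, ip2) and np.isclose(ip1, ip3)

# Magnitude |A| = sqrt(A : A) is the Frobenius norm
assert np.isclose(np.sqrt(np.einsum('ij,ij->', A, A)),
                  np.linalg.norm(A, 'fro'))
```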
Another tensor operation that is often seen is the cross product of a tensor with a vector. Let $\boldsymbol{A}$ be a tensor and let $\mathbf{v}$ be a vector. Then one common definition (conventions vary between authors) of the tensor cross product is the tensor $\boldsymbol{A}\times\mathbf{v}$ defined by

$$(\boldsymbol{A}\times\mathbf{v})\cdot\mathbf{u} = \boldsymbol{A}\cdot(\mathbf{v}\times\mathbf{u}) \qquad \text{for all vectors } \mathbf{u}~,$$

or, in index notation, $(\boldsymbol{A}\times\mathbf{v})_{ij} = e_{pqj}\,A_{ip}\,v_q$. The permutation symbol $e_{ijk}$ is defined as

$$e_{ijk} = \begin{cases} 1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3) \\ -1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3) \\ 0 & \text{if any two of } i, j, k \text{ are equal} \end{cases}$$
Let $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ be three second order tensors. Then

$$\boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) = (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C} = (\boldsymbol{A}\cdot\boldsymbol{C}^T):\boldsymbol{B}~.$$

Proof:
It is easiest to show these relations by using index notation with respect to an orthonormal basis. Then we can write

$$\boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) = A_{ij}\,(\boldsymbol{B}\cdot\boldsymbol{C})_{ij} = A_{ij}\,B_{ik}\,C_{kj} = (\boldsymbol{B}^T\cdot\boldsymbol{A})_{kj}\,C_{kj} = (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C}~.$$

Similarly,

$$A_{ij}\,B_{ik}\,C_{kj} = (\boldsymbol{A}\cdot\boldsymbol{C}^T)_{ik}\,B_{ik} = (\boldsymbol{A}\cdot\boldsymbol{C}^T):\boldsymbol{B}~.$$
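A quick numerical check of these identities with random components:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))

ddot = lambda X, Y: np.einsum('ij,ij->', X, Y)   # inner product X : Y

lhs = ddot(A, B @ C)
assert np.isclose(lhs, ddot(B.T @ A, C))
assert np.isclose(lhs, ddot(A @ C.T, B))
```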
Recall that the vector differential operator (with respect to a Cartesian basis) is defined as

$$\boldsymbol{\nabla} = \mathbf{e}_i\,\frac{\partial}{\partial x_i}~.$$

In this section we summarize some operations of $\boldsymbol{\nabla}$ on vectors and tensors.
The dyadic product $\boldsymbol{\nabla}\otimes\mathbf{v}$ (or $\boldsymbol{\nabla}\mathbf{v}$) is called the gradient of the vector field $\mathbf{v}$. Therefore, the quantity $\boldsymbol{\nabla}\mathbf{v}$ is a tensor given by

$$\boldsymbol{\nabla}\mathbf{v} = \frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\otimes\mathbf{e}_j = v_{j,i}\,\mathbf{e}_i\otimes\mathbf{e}_j~.$$

In the alternative dyadic notation,

$$\boldsymbol{\nabla}\otimes\mathbf{v} = \left(\mathbf{e}_i\,\frac{\partial}{\partial x_i}\right)\otimes(v_j\,\mathbf{e}_j) = \frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\otimes\mathbf{e}_j~.$$

Warning: Some authors define the $(i,j)$ component of $\boldsymbol{\nabla}\mathbf{v}$ as $\partial v_i/\partial x_j$.
Let $\boldsymbol{A}$ be a tensor field. Then the divergence of the tensor field is a vector $\boldsymbol{\nabla}\bullet\boldsymbol{A}$ given by

$$\boldsymbol{\nabla}\bullet\boldsymbol{A} = \sum_j\left[\sum_i \cfrac{\partial A_{ij}}{\partial x_i}\right]\mathbf{e}_j \equiv \cfrac{\partial A_{ij}}{\partial x_i}\,\mathbf{e}_j = A_{ij,i}\,\mathbf{e}_j~.$$

To fix the definition of the divergence of a general tensor field (possibly of higher order than 2), we use the relation

$$(\boldsymbol{\nabla}\bullet\boldsymbol{A})\cdot\mathbf{c} = \boldsymbol{\nabla}\bullet(\boldsymbol{A}\cdot\mathbf{c})$$

where $\mathbf{c}$ is an arbitrary constant vector.
The Laplacian of a vector field is given by

$$\nabla^2\mathbf{v} = \boldsymbol{\nabla}\bullet\boldsymbol{\nabla}\mathbf{v} = \sum_j\left[\sum_i \cfrac{\partial^2 v_j}{\partial x_i^2}\right]\mathbf{e}_j \equiv v_{j,ii}\,\mathbf{e}_j~.$$
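These component conventions, in particular $(\boldsymbol{\nabla}\mathbf{v})_{ij} = \partial v_j/\partial x_i$, can be checked by finite differences; in the sketch below the vector field v and the evaluation point are arbitrary choices:

```python
import numpy as np

# Central-difference check of the gradient convention, using the arbitrary
# field v(x) = (x1*x2, x2*x3, x1*x3) at the point x0.
def v(x):
    return np.array([x[0]*x[1], x[1]*x[2], x[0]*x[2]])

x0, h = np.array([1.0, 2.0, 3.0]), 1e-6

grad = np.zeros((3, 3))          # (grad v)_ij = dv_j / dx_i
for i in range(3):
    dx = np.zeros(3)
    dx[i] = h
    grad[i, :] = (v(x0 + dx) - v(x0 - dx)) / (2*h)

# Analytical gradient with this convention
exact = np.array([[x0[1], 0.0,   x0[2]],
                  [x0[0], x0[2], 0.0  ],
                  [0.0,   x0[1], x0[0]]])
assert np.allclose(grad, exact, atol=1e-5)

# The divergence of v is the trace of the gradient: v_i,i
assert np.isclose(np.trace(grad), x0[0] + x0[1] + x0[2], atol=1e-5)
```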
Some important identities involving the gradient and divergence (with $\varphi$ a scalar field, $\mathbf{v}$, $\mathbf{w}$ vector fields, and $\boldsymbol{A}$ a tensor field, using the conventions above) are:
- $\boldsymbol{\nabla}(\varphi\,\mathbf{v}) = \boldsymbol{\nabla}\varphi\otimes\mathbf{v} + \varphi\,\boldsymbol{\nabla}\mathbf{v}$.
- $\boldsymbol{\nabla}\bullet(\varphi\,\boldsymbol{A}) = \boldsymbol{\nabla}\varphi\cdot\boldsymbol{A} + \varphi\,\boldsymbol{\nabla}\bullet\boldsymbol{A}$.
- $\boldsymbol{\nabla}\bullet(\mathbf{v}\otimes\mathbf{w}) = (\boldsymbol{\nabla}\bullet\mathbf{v})\,\mathbf{w} + \mathbf{v}\cdot\boldsymbol{\nabla}\mathbf{w}$.
- $\boldsymbol{\nabla}\bullet(\boldsymbol{A}\cdot\mathbf{v}) = (\boldsymbol{\nabla}\bullet\boldsymbol{A})\cdot\mathbf{v} + \boldsymbol{A}:\boldsymbol{\nabla}\mathbf{v}$.
- $\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\varphi) = \mathbf{0}$.
- $\boldsymbol{\nabla}\bullet(\boldsymbol{\nabla}\times\mathbf{v}) = 0$.
The following integral theorems are useful in continuum mechanics and finite elements.
If $V$ is a region in space enclosed by a surface $\partial V$ and $\boldsymbol{A}$ is a tensor field, then

$$\int_V \boldsymbol{\nabla}\bullet\boldsymbol{A}~dV = \int_{\partial V} \mathbf{n}\cdot\boldsymbol{A}~dA$$

where $\mathbf{n}$ is the unit outward normal to the surface.
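A numerical sketch verifying the divergence theorem on the unit cube, for the arbitrary linear tensor field $A_{ij} = x_i\,c_j$ (for which the divergence is the constant vector $3\,\mathbf{c}$):

```python
import numpy as np

c = np.array([1.0, -2.0, 0.5])     # arbitrary constant vector

# div A = d(x_i c_j)/dx_i e_j = 3 c, so the volume integral is 3 c
volume_integral = 3.0 * c

# Surface integral of n . A by midpoint quadrature over the six faces
n_pts = 20
t = (np.arange(n_pts) + 0.5) / n_pts
surface_integral = np.zeros(3)
for k in range(3):                       # pairs of faces x_k = 0 and x_k = 1
    for xk, sign in [(0.0, -1.0), (1.0, 1.0)]:
        n = np.zeros(3)
        n[k] = sign                      # outward unit normal
        for a in t:
            for b in t:
                x = np.zeros(3)
                x[k], x[(k+1) % 3], x[(k+2) % 3] = xk, a, b
                A = np.outer(x, c)       # A_ij = x_i c_j
                surface_integral += (n @ A) / n_pts**2

assert np.allclose(surface_integral, volume_integral)
```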
If $\mathcal{S}$ is a surface bounded by a closed curve $\mathcal{C}$, then

$$\oint_{\mathcal{C}} \boldsymbol{\lambda}\cdot\boldsymbol{A}~ds = \int_{\mathcal{S}} \mathbf{n}\cdot(\boldsymbol{\nabla}\times\boldsymbol{A})~dA$$

where $\boldsymbol{A}$ is a tensor field, $\mathbf{n}$ is the unit normal vector to $\mathcal{S}$ in the direction of a right-handed screw motion along $\mathcal{C}$, and $\boldsymbol{\lambda}$ is a unit tangential vector in the direction of integration along $\mathcal{C}$.
Let $V(t)$ be a closed moving region of space enclosed by a surface $\partial V(t)$. Let the velocity of any surface element be $\mathbf{v}_s$. Then if $\boldsymbol{A}(\mathbf{x}, t)$ is a tensor function of position and time,

$$\cfrac{d}{dt}\left[\int_{V} \boldsymbol{A}~dV\right] = \int_{V} \frac{\partial \boldsymbol{A}}{\partial t}~dV + \int_{\partial V} (\mathbf{v}_s\cdot\mathbf{n})\,\boldsymbol{A}~dA$$

where $\mathbf{n}$ is the outward unit normal to the surface $\partial V(t)$.
We often have to find the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.
The definitions of directional derivatives for various situations are
given below. It is assumed that the functions are sufficiently smooth
that derivatives can be taken.
Let $f(\mathbf{v})$ be a real valued function of the vector $\mathbf{v}$. Then the derivative of $f(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the vector defined as

$$\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial}{\partial \alpha}~f(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}$$

for all vectors $\mathbf{u}$.
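The definition translates directly into a finite-difference check; in the sketch below, $f(\mathbf{v}) = \mathbf{v}\cdot\mathbf{v}$ is an arbitrary example whose derivative $\partial f/\partial\mathbf{v} = 2\,\mathbf{v}$ is known in closed form:

```python
import numpy as np

# Directional derivative of f(v) = v . v in the direction u.
# Analytically, df/dv = 2 v, so (df/dv) . u = 2 v . u.
f = lambda v: np.dot(v, v)

v = np.array([1.0, 2.0, -1.0])
u = np.array([0.5, 0.0, 2.0])

# [d/da f(v + a u)] at a = 0, by central differences
h = 1e-6
Df = (f(v + h*u) - f(v - h*u)) / (2*h)

assert np.isclose(Df, 2 * np.dot(v, u))
```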
Properties:
1) If $f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v})$ then $\dfrac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \dfrac{\partial f_1}{\partial \mathbf{v}}\cdot\mathbf{u} + \dfrac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}$
2) If $f(\mathbf{v}) = f_1(\mathbf{v})~f_2(\mathbf{v})$ then $\dfrac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\dfrac{\partial f_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right) f_2(\mathbf{v}) + f_1(\mathbf{v}) \left(\dfrac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)$
3) If $f(\mathbf{v}) = f_1(f_2(\mathbf{v}))$ then $\dfrac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \dfrac{\partial f_1}{\partial f_2}~\left(\dfrac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)$
Let $\mathbf{f}(\mathbf{v})$ be a vector valued function of the vector $\mathbf{v}$. Then the derivative of $\mathbf{f}(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the second order tensor defined as

$$\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial}{\partial \alpha}~\mathbf{f}(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}$$

for all vectors $\mathbf{u}$.
Properties:
1) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v})$ then $\dfrac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \dfrac{\partial \mathbf{f}_1}{\partial \mathbf{v}}\cdot\mathbf{u} + \dfrac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u}$
2) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v})$ then $\dfrac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\dfrac{\partial \mathbf{f}_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\dfrac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)$
3) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v}))$ then $\dfrac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \dfrac{\partial \mathbf{f}_1}{\partial \mathbf{f}_2}\cdot\left(\dfrac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)$
Let $f(\boldsymbol{S})$ be a real valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $f(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the second order tensor defined as

$$\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = Df(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{\partial}{\partial \alpha}~f(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}$$

for all second order tensors $\boldsymbol{T}$.
Properties:
1) If $f(\boldsymbol{S}) = f_1(\boldsymbol{S}) + f_2(\boldsymbol{S})$ then $\dfrac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \dfrac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T} + \dfrac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T}$
2) If $f(\boldsymbol{S}) = f_1(\boldsymbol{S})~f_2(\boldsymbol{S})$ then $\dfrac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\dfrac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right) f_2(\boldsymbol{S}) + f_1(\boldsymbol{S})\left(\dfrac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
3) If $f(\boldsymbol{S}) = f_1(f_2(\boldsymbol{S}))$ then $\dfrac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \dfrac{\partial f_1}{\partial f_2}~\left(\dfrac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
Let $\boldsymbol{F}(\boldsymbol{S})$ be a second order tensor valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $\boldsymbol{F}(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the fourth order tensor defined as

$$\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{\partial}{\partial \alpha}~\boldsymbol{F}(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}$$

for all second order tensors $\boldsymbol{T}$.
Properties:
1) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S})$ then $\dfrac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \dfrac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T} + \dfrac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}$
2) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})$ then $\dfrac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\dfrac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\cdot\left(\dfrac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
3) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then $\dfrac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \dfrac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{F}_2}:\left(\dfrac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
4) If $\boldsymbol{F}(\boldsymbol{S}) = f_1(\boldsymbol{S})~\boldsymbol{F}_2(\boldsymbol{S})$ then $\dfrac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\dfrac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\boldsymbol{F}_2(\boldsymbol{S}) + f_1(\boldsymbol{S})\left(\dfrac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$
Derivative of the determinant of a tensor:

$$\frac{\partial}{\partial \boldsymbol{A}}\left[\det(\boldsymbol{A})\right] = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T~.$$
Proof:
Let $\boldsymbol{A}$ be a second order tensor and let $f(\boldsymbol{A}) = \det(\boldsymbol{A})$. Then, from the definition of the derivative of a scalar valued function of a tensor, we have

$$\begin{aligned}\frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} &= \left.\cfrac{d}{d\alpha}\det(\boldsymbol{A} + \alpha~\boldsymbol{T})\right|_{\alpha=0}\\&=\left.\cfrac{d}{d\alpha}\det\left[\alpha~\boldsymbol{A}\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right)\right]\right|_{\alpha=0}\\&=\left.\cfrac{d}{d\alpha}\left[\alpha^3~\det(\boldsymbol{A})~\det\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right)\right]\right|_{\alpha=0}~.\end{aligned}$$
Recall that we can expand the determinant of a tensor in the form of a characteristic equation in terms of the invariants $I_1$, $I_2$, $I_3$ using (note the sign of $\lambda$)

$$\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A})~.$$
Using this expansion we can write

$$\begin{aligned}\frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} &= \left.\cfrac{d}{d\alpha}\left[\alpha^3~\det(\boldsymbol{A})~\left(\cfrac{1}{\alpha^3} + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha^2} + I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha} + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})\right)\right]\right|_{\alpha=0}\\&=\left.\det(\boldsymbol{A})~\cfrac{d}{d\alpha}\left[1 + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2 + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^3\right]\right|_{\alpha=0}\\&=\left.\det(\boldsymbol{A})~\left[I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) + 2~I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + 3~I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2\right]\right|_{\alpha=0}\\&=\det(\boldsymbol{A})~I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~.\end{aligned}$$
Recall that the invariant $I_1$ is given by

$$I_1(\boldsymbol{A}) = \text{tr}~\boldsymbol{A}~.$$

Hence,

$$\frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} = \det(\boldsymbol{A})~\text{tr}(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T:\boldsymbol{T}~.$$
Invoking the arbitrariness of $\boldsymbol{T}$ we then have

$$\frac{\partial f}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T~.$$
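A finite-difference sketch of this result (the tensors A and T are arbitrary, with A invertible):

```python
import numpy as np

A = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 4.]])
T = np.array([[1., 0., 2.], [3., 1., 0.], [0., 1., 1.]])

# Directional derivative of det at A in the direction T, by central
# differences in alpha
h = 1e-6
dd = (np.linalg.det(A + h*T) - np.linalg.det(A - h*T)) / (2*h)

# Closed form: det(A) tr(A^{-1} . T) = det(A) [A^{-1}]^T : T
Ainv = np.linalg.inv(A)
exact = np.linalg.det(A) * np.trace(Ainv @ T)
assert np.isclose(dd, exact)
assert np.isclose(exact,
                  np.einsum('ij,ij->', np.linalg.det(A) * Ainv.T, T))
```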
Derivatives of the principal invariants of a tensor:
The principal invariants of a second order tensor are

$$\begin{aligned} I_1(\boldsymbol{A}) &= \text{tr}~\boldsymbol{A} \\ I_2(\boldsymbol{A}) &= \frac{1}{2}\left[(\text{tr}~\boldsymbol{A})^2 - \text{tr}~\boldsymbol{A}^2\right] \\ I_3(\boldsymbol{A}) &= \det(\boldsymbol{A}) \end{aligned}$$

The derivatives of these three invariants with respect to $\boldsymbol{A}$ are

$$\begin{aligned} \frac{\partial I_1}{\partial \boldsymbol{A}} &= \boldsymbol{\mathit{1}} \\ \frac{\partial I_2}{\partial \boldsymbol{A}} &= I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T \\ \frac{\partial I_3}{\partial \boldsymbol{A}} &= \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T) = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T \end{aligned}$$
Proof:
From the derivative of the determinant we know that

$$\frac{\partial I_3}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T~.$$

For the derivatives of the other two invariants, let us go back to the characteristic equation

$$\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A})~.$$

Using the same approach as for the determinant of a tensor, we can show that

$$\frac{\partial}{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})^{-1}]^T~.$$

Now the left hand side can be expanded as

$$\begin{aligned}\frac{\partial}{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) &= \frac{\partial}{\partial \boldsymbol{A}}\left[\lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A})\right]\\&= \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}~.\end{aligned}$$

Hence

$$\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}} = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})^{-1}]^T$$
or,

$$(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})^T\cdot\left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~\boldsymbol{\mathit{1}}~.$$

Expanding the right hand side and separating terms on the left hand side gives

$$(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}^T)\cdot\left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]\boldsymbol{\mathit{1}}$$

or,

$$\begin{aligned}\left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3\right.&\left. + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda\right]\boldsymbol{\mathit{1}} + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}}\\&= \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]\boldsymbol{\mathit{1}}~.\end{aligned}$$
If we define $I_0 := 1$ and $I_4 := 0$, we can write the above as

$$\begin{aligned}\left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3\right.&\left. + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_4}{\partial \boldsymbol{A}}\right]\boldsymbol{\mathit{1}} + \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}}~\lambda^3 + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}}\\&= \left[I_0~\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]\boldsymbol{\mathit{1}}~.\end{aligned}$$
Collecting terms containing various powers of $\lambda$, we get

$$\left[\frac{\partial I_1}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}} - I_0~\boldsymbol{\mathit{1}}\right]\lambda^3 + \left[\frac{\partial I_2}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}} - I_1~\boldsymbol{\mathit{1}}\right]\lambda^2 + \left[\frac{\partial I_3}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}} - I_2~\boldsymbol{\mathit{1}}\right]\lambda + \frac{\partial I_4}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} - I_3~\boldsymbol{\mathit{1}} = \boldsymbol{\mathit{0}}~.$$
Then, invoking the arbitrariness of $\lambda$, we have

$$\frac{\partial I_1}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}} = I_0~\boldsymbol{\mathit{1}}~, \qquad \frac{\partial I_2}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}} = I_1~\boldsymbol{\mathit{1}}~, \qquad \frac{\partial I_3}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}} = I_2~\boldsymbol{\mathit{1}}~, \qquad \frac{\partial I_4}{\partial \boldsymbol{A}} + \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} = I_3~\boldsymbol{\mathit{1}}~.$$
Since $I_0$ and $I_4$ are constants, $\partial I_0/\partial \boldsymbol{A} = \partial I_4/\partial \boldsymbol{A} = \boldsymbol{\mathit{0}}$. This implies that

$$\frac{\partial I_1}{\partial \boldsymbol{A}} = \boldsymbol{\mathit{1}}~, \qquad \frac{\partial I_2}{\partial \boldsymbol{A}} = I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~, \qquad \frac{\partial I_3}{\partial \boldsymbol{A}} = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T) = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T~.$$
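A finite-difference sketch of the first two results (arbitrary A and T):

```python
import numpy as np

A = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 4.]])
T = np.array([[0., 1., 0.], [2., 0., 1.], [0., 0., 3.]])

I1 = lambda X: np.trace(X)
I2 = lambda X: 0.5 * (np.trace(X)**2 - np.trace(X @ X))

# Directional derivatives by central differences in alpha
h = 1e-6
dI1 = (I1(A + h*T) - I1(A - h*T)) / (2*h)
dI2 = (I2(A + h*T) - I2(A - h*T)) / (2*h)

ddot = lambda X, Y: np.einsum('ij,ij->', X, Y)

# dI1/dA = 1  and  dI2/dA = I1 1 - A^T
assert np.isclose(dI1, ddot(np.eye(3), T))
assert np.isclose(dI2, ddot(I1(A)*np.eye(3) - A.T, T))
```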
Let $\boldsymbol{\mathit{1}}$ be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor $\boldsymbol{A}$ is given by

$$\frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \left[\frac{\partial}{\partial \alpha}~\boldsymbol{\mathit{1}}\right]_{\alpha = 0} = \boldsymbol{\mathit{0}}~.$$

This is because $\boldsymbol{\mathit{1}}$ is independent of $\boldsymbol{A}$.
Let $\boldsymbol{A}$ be a second order tensor. Then

$$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \left[\frac{\partial}{\partial \alpha}(\boldsymbol{A} + \alpha~\boldsymbol{T})\right]_{\alpha = 0} = \boldsymbol{T} = \boldsymbol{\mathsf{I}}:\boldsymbol{T}$$

Therefore,

$$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}~.$$

Here $\boldsymbol{\mathsf{I}}$ is the fourth order identity tensor. In index notation with respect to an orthonormal basis

$$\boldsymbol{\mathsf{I}} = \delta_{ik}~\delta_{jl}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l~.$$

This result implies that

$$\frac{\partial \boldsymbol{A}^T}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathsf{I}}^T:\boldsymbol{T} = \boldsymbol{T}^T$$

where

$$\boldsymbol{\mathsf{I}}^T = \delta_{jk}~\delta_{il}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l~.$$
Therefore, if the tensor $\boldsymbol{A}$ is symmetric, then the derivative is also symmetric and we get

$$\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}^{(s)} = \frac{1}{2}~(\boldsymbol{\mathsf{I}} + \boldsymbol{\mathsf{I}}^T)$$

where the symmetric fourth order identity tensor is

$$\boldsymbol{\mathsf{I}}^{(s)} = \frac{1}{2}~(\delta_{ik}~\delta_{jl} + \delta_{il}~\delta_{jk})~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l~.$$
Derivative of the inverse of a tensor:
Let $\boldsymbol{A}$ be an invertible second order tensor. Then

$$\frac{\partial}{\partial \boldsymbol{A}}\left(\boldsymbol{A}^{-1}\right):\boldsymbol{T} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}~.$$
Proof:
Recall that

$$\frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathit{0}}~.$$

Since $\boldsymbol{A}^{-1}\cdot\boldsymbol{A} = \boldsymbol{\mathit{1}}$, we can write

$$\frac{\partial}{\partial \boldsymbol{A}}\left(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}\right):\boldsymbol{T} = \boldsymbol{\mathit{0}}~.$$

Using the product rule for second order tensors

$$\frac{\partial}{\partial \boldsymbol{S}}\left[\boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})\right]:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2 + \boldsymbol{F}_1\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)$$

we get

$$\frac{\partial}{\partial \boldsymbol{A}}\left(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}\right):\boldsymbol{T} = \left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} + \boldsymbol{A}^{-1}\cdot\left(\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right) = \boldsymbol{\mathit{0}}$$

or,

$$\left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}~.$$

Therefore,

$$\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}~.$$
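A finite-difference sketch of this result (arbitrary invertible A and arbitrary direction T):

```python
import numpy as np

A = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 4.]])
T = np.array([[1., 0., 1.], [0., 2., 0.], [1., 1., 0.]])
Ainv = np.linalg.inv(A)

# Directional derivative of A -> A^{-1} in the direction T,
# by central differences in alpha
h = 1e-6
dd = (np.linalg.inv(A + h*T) - np.linalg.inv(A - h*T)) / (2*h)

assert np.allclose(dd, -Ainv @ T @ Ainv, atol=1e-6)
```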
The boldface notation that I've used is called the Gibbs notation. The index notation that I have used is also called Cartesian tensor notation.