Elasticity/Tensors


Tensors in Solid Mechanics

A sound understanding of tensors and tensor operations is essential if you want to read and understand modern papers on solid mechanics and finite element modeling of complex material behavior. This brief introduction gives you an overview of tensors and tensor notation. For more details you can read A Brief on Tensor Analysis by J. G. Simmonds, the appendix on vector and tensor notation from Dynamics of Polymeric Liquids - Volume 1 by R. B. Bird, R. C. Armstrong, and O. Hassager, and the monograph by R. M. Brannon. An introduction to tensors in continuum mechanics can be found in An Introduction to Continuum Mechanics by M. E. Gurtin. Most of the material on this page is based on these sources.

Notation

The following notation is usually used in the literature:

  * scalars are written in lightface italic type: $\lambda$, $\varphi$
  * vectors (first order tensors) are written in boldface lowercase type: $\mathbf{u}$, $\mathbf{v}$
  * second order tensors are written in boldface uppercase type: $\boldsymbol{A}$, $\boldsymbol{S}$, $\boldsymbol{\sigma}$
  * fourth order tensors are written in sans-serif type: $\mathsf{C}$, $\mathsf{I}$

Motivation

A force $\mathbf{f}$ has a magnitude and a direction, can be added to another force, can be multiplied by a scalar, and so on. These properties make the force a vector.

Similarly, the displacement $\mathbf{u}$ is a vector because it can be added to other displacements and satisfies the other properties of a vector.

However, a force cannot be added to a displacement to yield a physically meaningful quantity. So the physical spaces that these two quantities lie in must be different.

Recall that a constant force $\mathbf{f}$ moving through a displacement $\mathbf{u}$ does $W = \mathbf{f}\cdot\mathbf{u}$ units of work. How do we compute this product when the spaces of $\mathbf{f}$ and $\mathbf{u}$ are different? If you try to compute the product on a graph, you will have to convert both quantities to a single basis and then compute the scalar product.

An alternative way of thinking about the operation $\mathbf{f}\cdot\mathbf{u}$ is to think of $\mathbf{f}$ as a linear operator that acts on $\mathbf{u}$ to produce a scalar quantity (work). In the notation of sets we can write

$$\mathbf{f} : \mathcal{U} \rightarrow \mathbb{R},$$

where $\mathcal{U}$ is the space of displacements and $\mathbb{R}$ is the set of real numbers.

A first order tensor is a linear operator that sends vectors to scalars.

Next, assume that the force $\mathbf{f}$ acts at a point $\mathbf{x}$. The moment of the force about the origin is given by $\mathbf{m} = \mathbf{x}\times\mathbf{f}$, which is a vector. The vector product $\mathbf{x}\times(\bullet)$ can be thought of as a linear operation too. In this case the effect of the operator is to convert a vector into another vector.

A second order tensor is a linear operator that sends vectors to vectors.

According to Simmonds, "the name tensor comes from elasticity theory where in a loaded elastic body the stress tensor acting on a unit vector normal to a plane through a point delivers the tension (i.e., the force per unit area) acting across the plane at that point."

Examples of second order tensors are the stress tensor, the deformation gradient tensor, the velocity gradient tensor, and so on.

Another type of tensor that we encounter frequently in mechanics is the fourth order tensor that takes strains to stresses. In elasticity, this is the stiffness tensor $\mathsf{C}$, which appears in the relation $\boldsymbol{\sigma} = \mathsf{C} : \boldsymbol{\varepsilon}$.

A fourth order tensor is a linear operator that sends second order tensors to second order tensors.
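As an illustration of how a fourth order tensor acts on a second order tensor, here is a minimal numpy sketch (not part of the original text) that builds the isotropic stiffness tensor with assumed Lamé parameters and applies it to a small strain; the contraction $\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}$ is carried out with np.einsum.

```python
import numpy as np

# Illustrative sketch: isotropic stiffness C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
lam, mu = 50.0e9, 30.0e9          # assumed Lame parameters (Pa), for illustration only
d = np.eye(3)                     # Kronecker delta

C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                [2.0e-4, -5.0e-4, 0.0],
                [0.0,    0.0,     0.0]])   # a symmetric small strain tensor

sigma = np.einsum('ijkl,kl->ij', C, eps)   # the 4th order tensor maps a 2nd order tensor to a 2nd order tensor
print(sigma)
```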

Tensor algebra

A second order tensor $\boldsymbol{S}$ is a linear transformation from a vector space $\mathcal{V}$ to $\mathcal{V}$. Thus, we can write

$$\mathbf{v} = \boldsymbol{S}(\mathbf{u})\,; \qquad \mathbf{u}, \mathbf{v} \in \mathcal{V}.$$

More often, we use the following notation:

$$\mathbf{v} = \boldsymbol{S}\,\mathbf{u} \quad \text{or} \quad \mathbf{v} = \boldsymbol{S}\cdot\mathbf{u}.$$

I have used the "dot" notation in this handout. None of the above notations is obviously superior to the others and each is used widely.

Addition of tensors

Let $\boldsymbol{S}$ and $\boldsymbol{T}$ be two tensors. Then the sum $\boldsymbol{S} + \boldsymbol{T}$ is another tensor defined by

$$(\boldsymbol{S} + \boldsymbol{T})\cdot\mathbf{u} = \boldsymbol{S}\cdot\mathbf{u} + \boldsymbol{T}\cdot\mathbf{u}.$$

Multiplication of a tensor by a scalar

Let $\boldsymbol{S}$ be a tensor and let $\lambda$ be a scalar. Then the product $\lambda\,\boldsymbol{S}$ is a tensor defined by

$$(\lambda\,\boldsymbol{S})\cdot\mathbf{u} = \lambda\,(\boldsymbol{S}\cdot\mathbf{u}).$$

Zero tensor

The zero tensor $\boldsymbol{0}$ is the tensor which maps every vector $\mathbf{u}$ into the zero vector:

$$\boldsymbol{0}\cdot\mathbf{u} = \mathbf{0}.$$

Identity tensor

The identity tensor $\boldsymbol{1}$ takes every vector $\mathbf{u}$ into itself:

$$\boldsymbol{1}\cdot\mathbf{u} = \mathbf{u}.$$

The identity tensor is also often written as $\boldsymbol{I}$.

Product of two tensors

Let $\boldsymbol{S}$ and $\boldsymbol{T}$ be two tensors. Then the product $\boldsymbol{S}\,\boldsymbol{T}$ is the tensor that is defined by

$$(\boldsymbol{S}\,\boldsymbol{T})\cdot\mathbf{u} = \boldsymbol{S}\cdot(\boldsymbol{T}\cdot\mathbf{u}).$$

In general $\boldsymbol{S}\,\boldsymbol{T} \ne \boldsymbol{T}\,\boldsymbol{S}$.

Transpose of a tensor

The transpose of a tensor $\boldsymbol{S}$ is the unique tensor $\boldsymbol{S}^T$ defined by

$$(\boldsymbol{S}\cdot\mathbf{u})\cdot\mathbf{v} = \mathbf{u}\cdot(\boldsymbol{S}^T\cdot\mathbf{v}) \quad \text{for all vectors } \mathbf{u}, \mathbf{v}.$$

The following identities follow from the above definition:

$$(\boldsymbol{S} + \boldsymbol{T})^T = \boldsymbol{S}^T + \boldsymbol{T}^T, \qquad (\boldsymbol{S}\,\boldsymbol{T})^T = \boldsymbol{T}^T\,\boldsymbol{S}^T, \qquad (\boldsymbol{S}^T)^T = \boldsymbol{S}.$$

Symmetric and skew tensors

A tensor $\boldsymbol{S}$ is symmetric if

$$\boldsymbol{S} = \boldsymbol{S}^T.$$

A tensor $\boldsymbol{S}$ is skew if

$$\boldsymbol{S} = -\boldsymbol{S}^T.$$

Every tensor $\boldsymbol{S}$ can be expressed uniquely as the sum of a symmetric tensor $\boldsymbol{E}$ (the symmetric part of $\boldsymbol{S}$) and a skew tensor $\boldsymbol{W}$ (the skew part of $\boldsymbol{S}$):

$$\boldsymbol{S} = \boldsymbol{E} + \boldsymbol{W}, \qquad \boldsymbol{E} = \tfrac{1}{2}(\boldsymbol{S} + \boldsymbol{S}^T), \qquad \boldsymbol{W} = \tfrac{1}{2}(\boldsymbol{S} - \boldsymbol{S}^T).$$
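A quick numpy check of the symmetric/skew decomposition, treating the components of a tensor in an orthonormal basis as a 3×3 array (an added sketch, not part of the original page):

```python
import numpy as np

S = np.random.rand(3, 3)          # components of an arbitrary second order tensor
E = 0.5 * (S + S.T)               # symmetric part
W = 0.5 * (S - S.T)               # skew part

assert np.allclose(E, E.T)        # E = E^T
assert np.allclose(W, -W.T)       # W = -W^T
assert np.allclose(E + W, S)      # the decomposition recovers S
```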

Tensor product of two vectors

The tensor (or dyadic) product $\mathbf{a}\otimes\mathbf{b}$ (also written $\mathbf{a}\,\mathbf{b}$) of two vectors $\mathbf{a}$ and $\mathbf{b}$ is a tensor that assigns to each vector $\mathbf{u}$ the vector $(\mathbf{b}\cdot\mathbf{u})\,\mathbf{a}$:

$$(\mathbf{a}\otimes\mathbf{b})\cdot\mathbf{u} = (\mathbf{b}\cdot\mathbf{u})\,\mathbf{a}.$$

Notice that all the above operations on tensors are remarkably similar to matrix operations.
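The defining property of the dyadic product can be verified numerically; the sketch below (an addition, with arbitrary vectors) uses np.outer for $\mathbf{a}\otimes\mathbf{b}$:

```python
import numpy as np

a, b, u = np.random.rand(3), np.random.rand(3), np.random.rand(3)

A = np.outer(a, b)                            # components (a ⊗ b)_ij = a_i b_j
assert np.allclose(A @ u, np.dot(b, u) * a)   # (a ⊗ b)·u = (b·u) a
```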

Spectral theorem

The spectral theorem for tensors is widely used in mechanics. We will start off by defining eigenvalues and eigenvectors.

Eigenvalues and eigenvectors

Let $\boldsymbol{S}$ be a second order tensor. Let $\lambda$ be a scalar and $\mathbf{n}$ ($\ne \mathbf{0}$) be a vector such that

$$\boldsymbol{S}\cdot\mathbf{n} = \lambda\,\mathbf{n}.$$

Then $\lambda$ is called an eigenvalue of $\boldsymbol{S}$ and $\mathbf{n}$ is an eigenvector of $\boldsymbol{S}$.

A second order tensor has three eigenvalues and three eigenvectors, since the space is three-dimensional. Some of the eigenvalues might be repeated. The number of times an eigenvalue is repeated is called its multiplicity.

In mechanics, many second order tensors are symmetric and positive definite. Note the following important properties of such tensors:

  1. If $\boldsymbol{S}$ is positive definite, then all the eigenvalues of $\boldsymbol{S}$ are positive, i.e., $\lambda_i > 0$.
  2. If $\boldsymbol{S}$ is symmetric, the eigenvectors can be chosen to be mutually orthogonal.

For more on eigenvalues and eigenvectors see Applied linear operators and spectral methods.

Spectral theorem

Let $\boldsymbol{S}$ be a symmetric second-order tensor. Then

  1. the normalized eigenvectors $\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3$ form an orthonormal basis.
  2. if $\lambda_1, \lambda_2, \lambda_3$ are the corresponding eigenvalues, then $\boldsymbol{S} = \sum_{i=1}^{3} \lambda_i\, \mathbf{n}_i \otimes \mathbf{n}_i$.

This relation is called the spectral decomposition of $\boldsymbol{S}$.
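A numerical illustration of the spectral decomposition (an addition, not from the original page), using np.linalg.eigh on a symmetric 3×3 matrix of components:

```python
import numpy as np

# Build a symmetric tensor and recover it from its spectral decomposition.
B = np.random.rand(3, 3)
S = 0.5 * (B + B.T)

lam, n = np.linalg.eigh(S)              # eigenvalues and orthonormal eigenvectors (columns of n)
assert np.allclose(n.T @ n, np.eye(3))  # the normalized eigenvectors form an orthonormal basis

S_rebuilt = sum(lam[i] * np.outer(n[:, i], n[:, i]) for i in range(3))
assert np.allclose(S_rebuilt, S)        # S = sum_i lambda_i n_i ⊗ n_i
```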

Polar decomposition theorem

Let $\boldsymbol{F}$ be a second order tensor with $\det\boldsymbol{F} > 0$. Then

  1. there exist positive definite, symmetric tensors $\boldsymbol{U}$, $\boldsymbol{V}$ and a rotation (orthogonal) tensor $\boldsymbol{R}$ such that $\boldsymbol{F} = \boldsymbol{R}\,\boldsymbol{U} = \boldsymbol{V}\,\boldsymbol{R}$ (see the numerical sketch below).
  2. each of these decompositions is unique.
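The following sketch (an addition, assuming a deformation gradient of the form $\boldsymbol{F} = \boldsymbol{1} + 0.1\,\mathrm{rand}$ so that $\det\boldsymbol{F} > 0$) computes the polar decomposition from the singular value decomposition $\boldsymbol{F} = \boldsymbol{W}\,\boldsymbol{\Sigma}\,\boldsymbol{X}^T$, with $\boldsymbol{R} = \boldsymbol{W}\,\boldsymbol{X}^T$, $\boldsymbol{U} = \boldsymbol{X}\,\boldsymbol{\Sigma}\,\boldsymbol{X}^T$ and $\boldsymbol{V} = \boldsymbol{W}\,\boldsymbol{\Sigma}\,\boldsymbol{W}^T$:

```python
import numpy as np

F = np.eye(3) + 0.1 * np.random.rand(3, 3)     # an illustrative tensor with det F > 0
assert np.linalg.det(F) > 0

W, s, Xt = np.linalg.svd(F)                    # F = W diag(s) Xt
R = W @ Xt                                     # rotation
U = Xt.T @ np.diag(s) @ Xt                     # right stretch (symmetric, positive definite)
V = W @ np.diag(s) @ W.T                       # left stretch (symmetric, positive definite)

assert np.allclose(R @ R.T, np.eye(3))         # R is orthogonal
assert np.allclose(F, R @ U)                   # right polar decomposition F = R U
assert np.allclose(F, V @ R)                   # left polar decomposition F = V R
```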

Principal invariants of a tensor

Let $\boldsymbol{A}$ be a second order tensor. Then the determinant of $\lambda\,\boldsymbol{1} + \boldsymbol{A}$ can be expressed as

$$\det(\lambda\,\boldsymbol{1} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})\,\lambda^2 + I_2(\boldsymbol{A})\,\lambda + I_3(\boldsymbol{A}).$$

The quantities $I_1$, $I_2$, $I_3$ are called the principal invariants of $\boldsymbol{A}$. Expressions for the principal invariants are given below.

Principal invariants of $\boldsymbol{A}$:

$$I_1 = \operatorname{tr}\boldsymbol{A}, \qquad I_2 = \tfrac{1}{2}\left[(\operatorname{tr}\boldsymbol{A})^2 - \operatorname{tr}(\boldsymbol{A}^2)\right], \qquad I_3 = \det\boldsymbol{A}.$$

Note that $\lambda$ is an eigenvalue of $\boldsymbol{A}$ if and only if

$$\det(\boldsymbol{A} - \lambda\,\boldsymbol{1}) = 0.$$

The resulting equation is called the characteristic equation and is usually written in expanded form as

$$\lambda^3 - I_1\,\lambda^2 + I_2\,\lambda - I_3 = 0.$$

Cayley-Hamilton theorem

The Cayley-Hamilton theorem is a very useful result in continuum mechanics. It states that:

Cayley-Hamilton theorem

If $\boldsymbol{A}$ is a second order tensor then it satisfies its own characteristic equation

$$\boldsymbol{A}^3 - I_1\,\boldsymbol{A}^2 + I_2\,\boldsymbol{A} - I_3\,\boldsymbol{1} = \boldsymbol{0}.$$
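A numerical verification of the Cayley-Hamilton theorem for a random 3×3 tensor (an added sketch, not part of the original page):

```python
import numpy as np

A = np.random.rand(3, 3)

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# A satisfies its own characteristic equation:  A^3 - I1 A^2 + I2 A - I3 1 = 0
residual = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * np.eye(3)
assert np.allclose(residual, 0.0)
```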

Index notation

All the equations so far have made no mention of the coordinate system. When we use vectors and tensors in computations we have to express them in some coordinate system (basis) and use the components of the object in that basis for our computations.

Commonly used bases are the Cartesian coordinate frame, the cylindrical coordinate frame, and the spherical coordinate frame.

A Cartesian coordinate frame consists of an orthonormal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ together with a point $O$ called the origin. Since these vectors are mutually perpendicular and of unit length, we have the following relations:

$$\mathbf{e}_1\cdot\mathbf{e}_1 = 1, \quad \mathbf{e}_2\cdot\mathbf{e}_2 = 1, \quad \mathbf{e}_3\cdot\mathbf{e}_3 = 1, \quad \mathbf{e}_1\cdot\mathbf{e}_2 = 0, \quad \mathbf{e}_2\cdot\mathbf{e}_3 = 0, \quad \mathbf{e}_3\cdot\mathbf{e}_1 = 0, \quad \ldots \tag{1}$$

Kronecker delta

To make the above relations more compact, we introduce the Kronecker delta symbol

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j. \end{cases}$$

Then, instead of the nine equations in (1) we can write (in index notation)

$$\mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}.$$

Einstein summation convention

Recall that the vector $\mathbf{u}$ can be written as

$$\mathbf{u} = u_1\,\mathbf{e}_1 + u_2\,\mathbf{e}_2 + u_3\,\mathbf{e}_3 = \sum_{i=1}^{3} u_i\,\mathbf{e}_i. \tag{2}$$

In index notation, equation (2) can be written as

$$\mathbf{u} = u_i\,\mathbf{e}_i.$$

This convention is called the Einstein summation convention. If an index is repeated in a term, we understand that to mean that there is a sum over that index.
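In numerical work the summation convention maps directly onto np.einsum, where repeated subscripts in the index string are summed. A small illustrative sketch (an addition, not part of the original page):

```python
import numpy as np

u = np.random.rand(3)
S = np.random.rand(3, 3)

# Repeated indices are summed: u_i u_i, S_ij u_j, S_ij S_ij
print(np.einsum('i,i', u, u))        # u · u
print(np.einsum('ij,j->i', S, u))    # (S·u)_i = S_ij u_j
print(np.einsum('ij,ij', S, S))      # S : S
```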

Components of a vector

We can write the Cartesian components of a vector $\mathbf{u}$ in the basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ as

$$u_i = \mathbf{u}\cdot\mathbf{e}_i.$$

Components of a tensor

Similarly, the components of a tensor $\boldsymbol{S}$ are defined by

$$S_{ij} = \mathbf{e}_i\cdot(\boldsymbol{S}\cdot\mathbf{e}_j).$$

Using the definition of the tensor product, we can also write

$$\boldsymbol{S} = \sum_{i=1}^{3}\sum_{j=1}^{3} S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$

Using the summation convention,

$$\boldsymbol{S} = S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$

In this case, the basis tensors are $\mathbf{e}_i\otimes\mathbf{e}_j$ and the components are $S_{ij}$.

Operation of a tensor on a vector

From the definition of the components of the tensor $\boldsymbol{S}$, we can also see that (using the summation convention)

$$\boldsymbol{S}\cdot\mathbf{u} = S_{ij}\,u_j\,\mathbf{e}_i.$$

Dyadic product

Similarly, the dyadic product can be expressed as

$$\mathbf{a}\otimes\mathbf{b} = a_i\,b_j\,\mathbf{e}_i\otimes\mathbf{e}_j, \qquad \text{i.e., } (\mathbf{a}\otimes\mathbf{b})_{ij} = a_i\,b_j.$$

Matrix notation

We can also write a tensor in matrix notation as

$$\boldsymbol{S} \equiv \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix}.$$

Note that the Kronecker delta represents the components of the identity tensor in a Cartesian basis. Therefore, we can write

$$\boldsymbol{1} = \delta_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$

Tensor inner product

The inner product of two tensors $\boldsymbol{A}$ and $\boldsymbol{B}$ is an operation that generates a scalar. We define (summation implied)

$$\boldsymbol{A} : \boldsymbol{B} = A_{ij}\,B_{ij}.$$

The inner product can also be expressed using the trace:

$$\boldsymbol{A} : \boldsymbol{B} = \operatorname{tr}(\boldsymbol{A}^T\,\boldsymbol{B}) = \operatorname{tr}(\boldsymbol{A}\,\boldsymbol{B}^T).$$

Proof (using the definition of the trace given below):

$$\operatorname{tr}(\boldsymbol{A}^T\,\boldsymbol{B}) = (\boldsymbol{A}^T\,\boldsymbol{B})_{ii} = (\boldsymbol{A}^T)_{ik}\,B_{ki} = A_{ki}\,B_{ki} = \boldsymbol{A} : \boldsymbol{B}.$$

Trace of a tensor

The trace of a tensor $\boldsymbol{A}$ is the scalar given by

$$\operatorname{tr}\boldsymbol{A} = A_{ii} = A_{11} + A_{22} + A_{33}.$$

The trace of an N x N matrix is the sum of the components on the main (downward-sloping) diagonal.

Magnitude of a tensor

The magnitude of a tensor $\boldsymbol{A}$ is defined by

$$\lVert\boldsymbol{A}\rVert = \sqrt{\boldsymbol{A} : \boldsymbol{A}} = \sqrt{A_{ij}\,A_{ij}}.$$
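The inner product, the trace identity, and the magnitude can all be checked numerically (an added sketch; the magnitude coincides with the Frobenius norm of the component matrix):

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)

inner = np.einsum('ij,ij', A, B)               # A:B = A_ij B_ij
assert np.isclose(inner, np.trace(A.T @ B))    # A:B = tr(A^T B)
assert np.isclose(inner, np.trace(A @ B.T))    # A:B = tr(A B^T)

mag = np.sqrt(np.einsum('ij,ij', A, A))        # |A| = sqrt(A:A)
assert np.isclose(mag, np.linalg.norm(A))      # equals the Frobenius norm of the component matrix
```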

Tensor product of a tensor with a vector

Another tensor operation that is often seen is the cross product of a tensor with a vector. Let $\boldsymbol{A}$ be a tensor and let $\mathbf{v}$ be a vector. Then the tensor cross product $\boldsymbol{A}\times\mathbf{v}$ gives a tensor defined by

$$\boldsymbol{A}\times\mathbf{v} = A_{ij}\,v_k\,\mathbf{e}_i\otimes(\mathbf{e}_j\times\mathbf{e}_k) = \varepsilon_{jkl}\,A_{ij}\,v_k\,\mathbf{e}_i\otimes\mathbf{e}_l,$$

where $\varepsilon_{jkl}$ is the permutation symbol defined below.

Permutation symbol

The permutation symbol $\varepsilon_{ijk}$ (also called the alternating or Levi-Civita symbol) is defined as

$$\varepsilon_{ijk} = \begin{cases} 1 & \text{if } (i,j,k) \text{ is an even permutation of } (1,2,3) \\ -1 & \text{if } (i,j,k) \text{ is an odd permutation of } (1,2,3) \\ 0 & \text{if any two of } i, j, k \text{ are equal.} \end{cases}$$

Identities in tensor algebra

Let $\boldsymbol{A}$, $\boldsymbol{B}$, and $\boldsymbol{C}$ be three second order tensors. Then

$$\boldsymbol{A} : (\boldsymbol{B}\,\boldsymbol{C}) = (\boldsymbol{B}^T\,\boldsymbol{A}) : \boldsymbol{C} = (\boldsymbol{A}\,\boldsymbol{C}^T) : \boldsymbol{B}.$$

Proof:

It is easiest to show these relations by using index notation with respect to an orthonormal basis. Then we can write

$$\boldsymbol{A} : (\boldsymbol{B}\,\boldsymbol{C}) = A_{ij}\,(B\,C)_{ij} = A_{ij}\,B_{ik}\,C_{kj} = (B^T A)_{kj}\,C_{kj} = (\boldsymbol{B}^T\,\boldsymbol{A}) : \boldsymbol{C}.$$

Similarly,

$$A_{ij}\,B_{ik}\,C_{kj} = (A\,C^T)_{ik}\,B_{ik} = (\boldsymbol{A}\,\boldsymbol{C}^T) : \boldsymbol{B}.$$

Tensor calculus

Recall that the vector differential operator $\boldsymbol{\nabla}$ (with respect to a Cartesian basis) is defined as

$$\boldsymbol{\nabla} = \mathbf{e}_1\,\frac{\partial}{\partial x_1} + \mathbf{e}_2\,\frac{\partial}{\partial x_2} + \mathbf{e}_3\,\frac{\partial}{\partial x_3} = \mathbf{e}_i\,\frac{\partial}{\partial x_i}.$$

In this section we summarize some operations of $\boldsymbol{\nabla}$ on vectors and tensors.

The gradient of a vector field

The dyadic product $\boldsymbol{\nabla}\otimes\mathbf{v}$ (or $\boldsymbol{\nabla}\mathbf{v}$) is called the gradient of the vector field $\mathbf{v}$. Therefore, the quantity $\boldsymbol{\nabla}\mathbf{v}$ is a tensor given by

$$\boldsymbol{\nabla}\mathbf{v} = \frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$

In the alternative dyadic notation,

$$\boldsymbol{\nabla}\mathbf{v} = \frac{\partial v_j}{\partial x_i}\,\mathbf{e}_i\,\mathbf{e}_j.$$

Warning: Some authors define the $(i,j)$ component of $\boldsymbol{\nabla}\mathbf{v}$ as $\partial v_i/\partial x_j$, i.e., the transpose of the definition used here.

The divergence of a tensor field

Let $\boldsymbol{S}$ be a tensor field. Then the divergence of the tensor field is a vector given by

$$\boldsymbol{\nabla}\cdot\boldsymbol{S} = \frac{\partial S_{ij}}{\partial x_i}\,\mathbf{e}_j.$$

To fix the definition of the divergence of a general tensor field (possibly of higher order than 2), we use the relation

$$(\boldsymbol{\nabla}\cdot\boldsymbol{S})\cdot\mathbf{a} = \boldsymbol{\nabla}\cdot(\boldsymbol{S}\cdot\mathbf{a})$$

where $\mathbf{a}$ is an arbitrary constant vector.

The Laplacian of a vector field

The Laplacian of a vector field $\mathbf{v}$ is given by

$$\nabla^2\mathbf{v} = \boldsymbol{\nabla}\cdot(\boldsymbol{\nabla}\mathbf{v}) = \frac{\partial^2 v_j}{\partial x_i\,\partial x_i}\,\mathbf{e}_j.$$

Tensor Identities

Some important identities involving tensors are (with $\varphi$, $\psi$ scalar fields, $\mathbf{u}$, $\mathbf{v}$ vector fields, and $\boldsymbol{S}$ a tensor field):

  1. $\boldsymbol{\nabla}(\varphi\,\psi) = \varphi\,\boldsymbol{\nabla}\psi + \psi\,\boldsymbol{\nabla}\varphi$.
  2. $\boldsymbol{\nabla}\cdot(\varphi\,\mathbf{v}) = \varphi\,(\boldsymbol{\nabla}\cdot\mathbf{v}) + \mathbf{v}\cdot\boldsymbol{\nabla}\varphi$.
  3. $\boldsymbol{\nabla}(\varphi\,\mathbf{v}) = \boldsymbol{\nabla}\varphi\otimes\mathbf{v} + \varphi\,\boldsymbol{\nabla}\mathbf{v}$.
  4. $\boldsymbol{\nabla}\cdot(\varphi\,\boldsymbol{S}) = \varphi\,(\boldsymbol{\nabla}\cdot\boldsymbol{S}) + \boldsymbol{S}^T\cdot\boldsymbol{\nabla}\varphi$.
  5. $\boldsymbol{\nabla}\cdot(\mathbf{u}\otimes\mathbf{v}) = (\boldsymbol{\nabla}\cdot\mathbf{u})\,\mathbf{v} + \mathbf{u}\cdot\boldsymbol{\nabla}\mathbf{v}$.
  6. $\boldsymbol{\nabla}\cdot(\boldsymbol{S}\cdot\mathbf{v}) = \mathbf{v}\cdot(\boldsymbol{\nabla}\cdot\boldsymbol{S}) + \boldsymbol{S} : \boldsymbol{\nabla}\mathbf{v}$.

Integral theorems

The following integral theorems are useful in continuum mechanics and finite elements.

The Gauss divergence theorem

If $V$ is a region in space enclosed by a surface $\partial V$ and $\boldsymbol{S}$ is a tensor field, then

$$\int_V \boldsymbol{\nabla}\cdot\boldsymbol{S}\;dV = \int_{\partial V} \mathbf{n}\cdot\boldsymbol{S}\;dA$$

where $\mathbf{n}$ is the unit outward normal to the surface.

The Stokes curl theorem

If $A$ is a surface bounded by a closed curve $C$, then

$$\int_A \mathbf{n}\cdot(\boldsymbol{\nabla}\times\boldsymbol{S})\;dA = \oint_C \mathbf{t}\cdot\boldsymbol{S}\;ds$$

where $\boldsymbol{S}$ is a tensor field, $\mathbf{n}$ is the unit normal vector to $A$ in the direction of a right-handed screw motion along $C$, and $\mathbf{t}$ is a unit tangential vector in the direction of integration along $C$.

The Leibniz formula

Let $V(t)$ be a closed moving region of space enclosed by a surface $\partial V(t)$. Let the velocity of any surface element be $\mathbf{v}_s$. Then if $\boldsymbol{S}(\mathbf{x}, t)$ is a tensor function of position and time,

$$\frac{d}{dt}\left[\int_{V} \boldsymbol{S}\;dV\right] = \int_{V} \frac{\partial\boldsymbol{S}}{\partial t}\;dV + \int_{\partial V} (\mathbf{v}_s\cdot\mathbf{n})\,\boldsymbol{S}\;dA$$

where $\mathbf{n}$ is the outward unit normal to the surface $\partial V(t)$.

Directional derivatives

We often have to find the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.

The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.

Derivatives of scalar valued functions of vectors

Let $\varphi(\mathbf{v})$ be a real valued function of the vector $\mathbf{v}$. Then the derivative of $\varphi(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the vector $\partial\varphi/\partial\mathbf{v}$ defined as

$$\frac{\partial\varphi}{\partial\mathbf{v}}\cdot\mathbf{u} = D\varphi(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\,\varphi(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$

for all vectors $\mathbf{u}$.

Properties:

1) If $\varphi(\mathbf{v}) = \varphi_1(\mathbf{v}) + \varphi_2(\mathbf{v})$ then

$$\frac{\partial\varphi}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\varphi_1}{\partial\mathbf{v}} + \frac{\partial\varphi_2}{\partial\mathbf{v}}\right)\cdot\mathbf{u}.$$

2) If $\varphi(\mathbf{v}) = \varphi_1(\mathbf{v})\,\varphi_2(\mathbf{v})$ then

$$\frac{\partial\varphi}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\varphi_1}{\partial\mathbf{v}}\cdot\mathbf{u}\right)\varphi_2(\mathbf{v}) + \varphi_1(\mathbf{v})\left(\frac{\partial\varphi_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right).$$

3) If $\varphi(\mathbf{v}) = \varphi_1(\varphi_2(\mathbf{v}))$ then

$$\frac{\partial\varphi}{\partial\mathbf{v}}\cdot\mathbf{u} = \frac{\partial\varphi_1}{\partial\varphi_2}\,\frac{\partial\varphi_2}{\partial\mathbf{v}}\cdot\mathbf{u}.$$

A numerical check of the definition is sketched below.
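As a concrete illustration (an addition, using the assumed function $\varphi(\mathbf{v}) = \mathbf{v}\cdot\mathbf{v}$, for which $\partial\varphi/\partial\mathbf{v} = 2\,\mathbf{v}$), the definition can be checked with a central difference in $\alpha$:

```python
import numpy as np

def phi(v):
    return np.dot(v, v)                 # phi(v) = v · v  (an assumed example function)

v = np.random.rand(3)
u = np.random.rand(3)
alpha = 1.0e-6

# [d/da phi(v + a u)]_{a=0} by central differences
D_numeric = (phi(v + alpha * u) - phi(v - alpha * u)) / (2.0 * alpha)
D_exact = np.dot(2.0 * v, u)            # (dphi/dv)·u with dphi/dv = 2 v

assert np.isclose(D_numeric, D_exact, rtol=1.0e-6)
```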

Derivatives of vector valued functions of vectors

Let $\mathbf{f}(\mathbf{v})$ be a vector valued function of the vector $\mathbf{v}$. Then the derivative of $\mathbf{f}(\mathbf{v})$ with respect to $\mathbf{v}$ (or at $\mathbf{v}$) in the direction $\mathbf{u}$ is the second order tensor $\partial\mathbf{f}/\partial\mathbf{v}$ defined as

$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\,\mathbf{f}(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$

for all vectors $\mathbf{u}$.

Properties:

1) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v})$ then

$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\mathbf{f}_1}{\partial\mathbf{v}} + \frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\right)\cdot\mathbf{u}.$$

2) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v})$ then

$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\mathbf{f}_1}{\partial\mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right).$$

3) If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v}))$ then

$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \frac{\partial\mathbf{f}_1}{\partial\mathbf{f}_2}\cdot\left(\frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right).$$

Derivatives of scalar valued functions of tensors

Let $\varphi(\boldsymbol{S})$ be a real valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $\varphi(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the second order tensor $\partial\varphi/\partial\boldsymbol{S}$ defined as

$$\frac{\partial\varphi}{\partial\boldsymbol{S}} : \boldsymbol{T} = D\varphi(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}\,\varphi(\boldsymbol{S} + \alpha\,\boldsymbol{T})\right]_{\alpha=0}$$

for all second order tensors $\boldsymbol{T}$.

Properties:

1) If $\varphi(\boldsymbol{S}) = \varphi_1(\boldsymbol{S}) + \varphi_2(\boldsymbol{S})$ then

$$\frac{\partial\varphi}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\varphi_1}{\partial\boldsymbol{S}} + \frac{\partial\varphi_2}{\partial\boldsymbol{S}}\right) : \boldsymbol{T}.$$

2) If $\varphi(\boldsymbol{S}) = \varphi_1(\boldsymbol{S})\,\varphi_2(\boldsymbol{S})$ then

$$\frac{\partial\varphi}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\varphi_1}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)\varphi_2(\boldsymbol{S}) + \varphi_1(\boldsymbol{S})\left(\frac{\partial\varphi_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right).$$

3) If $\varphi(\boldsymbol{S}) = \varphi_1(\varphi_2(\boldsymbol{S}))$ then

$$\frac{\partial\varphi}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial\varphi_1}{\partial\varphi_2}\left(\frac{\partial\varphi_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right).$$

Derivatives of tensor valued functions of tensors

Let $\boldsymbol{F}(\boldsymbol{S})$ be a second order tensor valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $\boldsymbol{F}(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the fourth order tensor $\partial\boldsymbol{F}/\partial\boldsymbol{S}$ defined as

$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}\,\boldsymbol{F}(\boldsymbol{S} + \alpha\,\boldsymbol{T})\right]_{\alpha=0}$$

for all second order tensors $\boldsymbol{T}$.

Properties:

1) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S})$ then

$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{S}} + \frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}}\right) : \boldsymbol{T}.$$

2) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\,\boldsymbol{F}_2(\boldsymbol{S})$ then

$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right).$$

3) If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then

$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{F}_2} : \left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right).$$

4) If $\varphi(\boldsymbol{S}) = \varphi_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then

$$\frac{\partial\varphi}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial\varphi_1}{\partial\boldsymbol{F}_2} : \left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right).$$

Derivative of the determinant of a tensor

Derivative of the determinant of a tensor

The derivative of the determinant of a second order tensor $\boldsymbol{A}$ is given by

$$\frac{\partial}{\partial\boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T.$$

In an orthonormal basis the components of $\boldsymbol{A}$ can be written as a matrix $\mathbf{A}$. In that case, the right hand side corresponds to the matrix of cofactors of $\mathbf{A}$.

Proof:

Let $\boldsymbol{A}$ be a second order tensor and let $f(\boldsymbol{A}) = \det(\boldsymbol{A})$. Then, from the definition of the derivative of a scalar valued function of a tensor, we have

$$\frac{\partial f}{\partial\boldsymbol{A}} : \boldsymbol{T} = \left[\frac{d}{d\alpha}\,\det(\boldsymbol{A} + \alpha\,\boldsymbol{T})\right]_{\alpha=0} = \left[\frac{d}{d\alpha}\,\det(\boldsymbol{A})\,\det\left(\boldsymbol{1} + \alpha\,\boldsymbol{A}^{-1}\boldsymbol{T}\right)\right]_{\alpha=0}.$$

Recall that we can expand the determinant of a tensor in the form of a characteristic equation in terms of the invariants $I_1, I_2, I_3$ using (note the sign of $\lambda$)

$$\det(\lambda\,\boldsymbol{1} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})\,\lambda^2 + I_2(\boldsymbol{A})\,\lambda + I_3(\boldsymbol{A}).$$

Using this expansion (with $\lambda = 1$ and $\boldsymbol{A}$ replaced by $\alpha\,\boldsymbol{A}^{-1}\boldsymbol{T}$) we can write

$$\frac{\partial f}{\partial\boldsymbol{A}} : \boldsymbol{T} = \det(\boldsymbol{A})\left[\frac{d}{d\alpha}\left(1 + I_1(\boldsymbol{A}^{-1}\boldsymbol{T})\,\alpha + I_2(\boldsymbol{A}^{-1}\boldsymbol{T})\,\alpha^2 + I_3(\boldsymbol{A}^{-1}\boldsymbol{T})\,\alpha^3\right)\right]_{\alpha=0} = \det(\boldsymbol{A})\,I_1(\boldsymbol{A}^{-1}\boldsymbol{T}).$$

Recall that the invariant $I_1$ is given by

$$I_1(\boldsymbol{A}) = \operatorname{tr}\boldsymbol{A}.$$

Hence,

$$\frac{\partial f}{\partial\boldsymbol{A}} : \boldsymbol{T} = \det(\boldsymbol{A})\,\operatorname{tr}(\boldsymbol{A}^{-1}\boldsymbol{T}) = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T : \boldsymbol{T}.$$

Invoking the arbitrariness of $\boldsymbol{T}$ we then have

$$\frac{\partial}{\partial\boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T.$$
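The result can be checked numerically by comparing a central difference of $\det(\boldsymbol{A} + \alpha\,\boldsymbol{T})$ with $\det(\boldsymbol{A})\,[\boldsymbol{A}^{-1}]^T : \boldsymbol{T}$ (an added sketch with an arbitrary well-conditioned $\boldsymbol{A}$):

```python
import numpy as np

A = np.eye(3) + 0.1 * np.random.rand(3, 3)   # comfortably invertible
T = np.random.rand(3, 3)
alpha = 1.0e-6

# [d/da det(A + a T)]_{a=0} by central differences
D_numeric = (np.linalg.det(A + alpha * T) - np.linalg.det(A - alpha * T)) / (2.0 * alpha)

dDet_dA = np.linalg.det(A) * np.linalg.inv(A).T
D_exact = np.einsum('ij,ij', dDet_dA, T)     # (d det/dA) : T

assert np.isclose(D_numeric, D_exact, rtol=1.0e-6)
```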

Derivatives of the invariants of a tensor

Derivatives of the principal invariants of a tensor

The principal invariants of a second order tensor $\boldsymbol{A}$ are

$$I_1 = \operatorname{tr}\boldsymbol{A}, \qquad I_2 = \tfrac{1}{2}\left[(\operatorname{tr}\boldsymbol{A})^2 - \operatorname{tr}(\boldsymbol{A}^2)\right], \qquad I_3 = \det\boldsymbol{A}.$$

The derivatives of these three invariants with respect to $\boldsymbol{A}$ are

$$\frac{\partial I_1}{\partial\boldsymbol{A}} = \boldsymbol{1}, \qquad \frac{\partial I_2}{\partial\boldsymbol{A}} = I_1\,\boldsymbol{1} - \boldsymbol{A}^T, \qquad \frac{\partial I_3}{\partial\boldsymbol{A}} = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T = I_2\,\boldsymbol{1} - \boldsymbol{A}^T\left(I_1\,\boldsymbol{1} - \boldsymbol{A}^T\right).$$

Proof:

From the derivative of the determinant we know that

$$\frac{\partial I_3}{\partial\boldsymbol{A}} = \frac{\partial}{\partial\boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T.$$

For the derivatives of the other two invariants, let us go back to the characteristic equation

$$\det(\lambda\,\boldsymbol{1} + \boldsymbol{A}) = \lambda^3 + I_1(\boldsymbol{A})\,\lambda^2 + I_2(\boldsymbol{A})\,\lambda + I_3(\boldsymbol{A}).$$

Using the same approach as for the determinant of a tensor, we can show that

$$\frac{\partial}{\partial\boldsymbol{A}}\det(\lambda\,\boldsymbol{1} + \boldsymbol{A}) = \det(\lambda\,\boldsymbol{1} + \boldsymbol{A})\,\left[(\lambda\,\boldsymbol{1} + \boldsymbol{A})^{-1}\right]^T.$$

Now the left hand side can be expanded as

$$\frac{\partial}{\partial\boldsymbol{A}}\det(\lambda\,\boldsymbol{1} + \boldsymbol{A}) = \frac{\partial}{\partial\boldsymbol{A}}\left[\lambda^3 + I_1\,\lambda^2 + I_2\,\lambda + I_3\right] = \frac{\partial I_1}{\partial\boldsymbol{A}}\,\lambda^2 + \frac{\partial I_2}{\partial\boldsymbol{A}}\,\lambda + \frac{\partial I_3}{\partial\boldsymbol{A}}.$$

Hence

$$\frac{\partial I_1}{\partial\boldsymbol{A}}\,\lambda^2 + \frac{\partial I_2}{\partial\boldsymbol{A}}\,\lambda + \frac{\partial I_3}{\partial\boldsymbol{A}} = \det(\lambda\,\boldsymbol{1} + \boldsymbol{A})\,\left[(\lambda\,\boldsymbol{1} + \boldsymbol{A})^{-1}\right]^T$$

or, multiplying both sides on the right by $(\lambda\,\boldsymbol{1} + \boldsymbol{A})^T$,

$$\left[\frac{\partial I_1}{\partial\boldsymbol{A}}\,\lambda^2 + \frac{\partial I_2}{\partial\boldsymbol{A}}\,\lambda + \frac{\partial I_3}{\partial\boldsymbol{A}}\right](\lambda\,\boldsymbol{1} + \boldsymbol{A}^T) = \left[\lambda^3 + I_1\,\lambda^2 + I_2\,\lambda + I_3\right]\boldsymbol{1}.$$

Expanding the right hand side and separating terms on the left hand side gives

$$\frac{\partial I_1}{\partial\boldsymbol{A}}\,\lambda^3 + \left[\frac{\partial I_1}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_2}{\partial\boldsymbol{A}}\right]\lambda^2 + \left[\frac{\partial I_2}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_3}{\partial\boldsymbol{A}}\right]\lambda + \frac{\partial I_3}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T = \lambda^3\,\boldsymbol{1} + I_1\,\lambda^2\,\boldsymbol{1} + I_2\,\lambda\,\boldsymbol{1} + I_3\,\boldsymbol{1}$$

or,

$$\left[\frac{\partial I_1}{\partial\boldsymbol{A}} - \boldsymbol{1}\right]\lambda^3 + \left[\frac{\partial I_1}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_2}{\partial\boldsymbol{A}} - I_1\,\boldsymbol{1}\right]\lambda^2 + \left[\frac{\partial I_2}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_3}{\partial\boldsymbol{A}} - I_2\,\boldsymbol{1}\right]\lambda + \left[\frac{\partial I_3}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T - I_3\,\boldsymbol{1}\right] = \boldsymbol{0}.$$

If we define $I_0 := 1$ and $I_4 := 0$ (so that $\partial I_0/\partial\boldsymbol{A} = \partial I_4/\partial\boldsymbol{A} = \boldsymbol{0}$), we can write the above as

$$\sum_{n=0}^{3}\left[\frac{\partial I_{3-n}}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_{4-n}}{\partial\boldsymbol{A}} - I_{3-n}\,\boldsymbol{1}\right]\lambda^n = \boldsymbol{0}.$$

Collecting terms containing various powers of $\lambda$, we get one coefficient tensor for each power of $\lambda$. Then, invoking the arbitrariness of $\lambda$, we have

$$\frac{\partial I_{3-n}}{\partial\boldsymbol{A}}\,\boldsymbol{A}^T + \frac{\partial I_{4-n}}{\partial\boldsymbol{A}} = I_{3-n}\,\boldsymbol{1}, \qquad n = 0, 1, 2, 3.$$

This implies that

$$\frac{\partial I_1}{\partial\boldsymbol{A}} = \boldsymbol{1}, \qquad \frac{\partial I_2}{\partial\boldsymbol{A}} = I_1\,\boldsymbol{1} - \boldsymbol{A}^T, \qquad \frac{\partial I_3}{\partial\boldsymbol{A}} = I_2\,\boldsymbol{1} - \boldsymbol{A}^T\left(I_1\,\boldsymbol{1} - \boldsymbol{A}^T\right) = \det(\boldsymbol{A})\,\left[\boldsymbol{A}^{-1}\right]^T.$$
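These derivatives can also be checked by finite differences (an added sketch; the tensor is chosen close to the identity only so that it is comfortably invertible):

```python
import numpy as np

def invariants(A):
    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    return np.array([I1, I2, I3])

A = np.eye(3) + 0.1 * np.random.rand(3, 3)
T = np.random.rand(3, 3)
alpha = 1.0e-6

# Central-difference directional derivatives of I1, I2, I3 in the direction T
D_numeric = (invariants(A + alpha * T) - invariants(A - alpha * T)) / (2.0 * alpha)

I1, I2, I3 = invariants(A)
dI1 = np.eye(3)
dI2 = I1 * np.eye(3) - A.T
dI3 = I3 * np.linalg.inv(A).T
D_exact = np.array([np.einsum('ij,ij', dI, T) for dI in (dI1, dI2, dI3)])

assert np.allclose(D_numeric, D_exact, rtol=1.0e-5)
```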

Derivative of the identity tensor

Let $\boldsymbol{1}$ be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor $\boldsymbol{A}$ is given by

$$\frac{\partial\boldsymbol{1}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \boldsymbol{0} \quad \Longrightarrow \quad \frac{\partial\boldsymbol{1}}{\partial\boldsymbol{A}} = \mathsf{0}.$$

This is because $\boldsymbol{1}$ is independent of $\boldsymbol{A}$.

Derivative of a tensor with respect to itself

Let $\boldsymbol{A}$ be a second order tensor. Then

$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \left[\frac{d}{d\alpha}\,(\boldsymbol{A} + \alpha\,\boldsymbol{T})\right]_{\alpha=0} = \boldsymbol{T} = \mathsf{I} : \boldsymbol{T}.$$

Therefore,

$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} = \mathsf{I}.$$

Here $\mathsf{I}$ is the fourth order identity tensor. In index notation with respect to an orthonormal basis

$$\mathsf{I} = \delta_{ik}\,\delta_{jl}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l.$$

This result implies that

$$\frac{\partial\boldsymbol{A}^T}{\partial\boldsymbol{A}} = \mathsf{I}^T$$

where

$$\mathsf{I}^T = \delta_{jk}\,\delta_{il}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l.$$

Therefore, if the tensor $\boldsymbol{A}$ is symmetric, then the derivative is also symmetric and we get

$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} = \mathsf{I}^{(s)} = \tfrac{1}{2}\,\left(\mathsf{I} + \mathsf{I}^T\right)$$

where the symmetric fourth order identity tensor is

$$\mathsf{I}^{(s)} = \tfrac{1}{2}\,\left(\delta_{ik}\,\delta_{jl} + \delta_{il}\,\delta_{jk}\right)\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l.$$

Derivative of the inverse of a tensor

Derivative of the inverse of a tensor

Let $\boldsymbol{A}$ and $\boldsymbol{T}$ be two second order tensors, then

$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}.$$

In index notation with respect to an orthonormal basis

$$\frac{\partial A^{-1}_{ij}}{\partial A_{kl}}\,T_{kl} = -A^{-1}_{ik}\,T_{kl}\,A^{-1}_{lj} \quad \Longrightarrow \quad \frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = -A^{-1}_{ik}\,A^{-1}_{lj}.$$

We also have

$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-T}\right) : \boldsymbol{T} = -\boldsymbol{A}^{-T}\cdot\boldsymbol{T}^T\cdot\boldsymbol{A}^{-T}.$$

In index notation

$$\frac{\partial A^{-1}_{ji}}{\partial A_{kl}}\,T_{kl} = -A^{-1}_{jk}\,T_{kl}\,A^{-1}_{li} \quad \Longrightarrow \quad \frac{\partial A^{-1}_{ji}}{\partial A_{kl}} = -A^{-1}_{jk}\,A^{-1}_{li}.$$

If the tensor $\boldsymbol{A}$ is symmetric then

$$\frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = -\tfrac{1}{2}\left(A^{-1}_{ik}\,A^{-1}_{jl} + A^{-1}_{il}\,A^{-1}_{jk}\right).$$

Proof:

Recall that

$$\frac{\partial\boldsymbol{1}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \boldsymbol{0}.$$

Since $\boldsymbol{A}^{-1}\cdot\boldsymbol{A} = \boldsymbol{1}$, we can write

$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}\right) : \boldsymbol{T} = \boldsymbol{0}.$$

Using the product rule for second order tensors

$$\frac{\partial}{\partial\boldsymbol{S}}\left[\boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})\right] : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)\cdot\boldsymbol{F}_2 + \boldsymbol{F}_1\cdot\left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$

we get

$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}\right) : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{A}^{-1}}{\partial\boldsymbol{A}} : \boldsymbol{T}\right)\cdot\boldsymbol{A} + \boldsymbol{A}^{-1}\cdot\left(\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} : \boldsymbol{T}\right) = \boldsymbol{0}$$

or,

$$\left(\frac{\partial\boldsymbol{A}^{-1}}{\partial\boldsymbol{A}} : \boldsymbol{T}\right)\cdot\boldsymbol{A} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}.$$

Therefore,

$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}.$$
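A finite-difference check of this result (an added numpy sketch, not part of the original page):

```python
import numpy as np

A = np.eye(3) + 0.1 * np.random.rand(3, 3)   # comfortably invertible
T = np.random.rand(3, 3)
alpha = 1.0e-6

# [d/da (A + a T)^{-1}]_{a=0} by central differences
D_numeric = (np.linalg.inv(A + alpha * T) - np.linalg.inv(A - alpha * T)) / (2.0 * alpha)
D_exact = -np.linalg.inv(A) @ T @ np.linalg.inv(A)

assert np.allclose(D_numeric, D_exact, rtol=1.0e-5)
```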

Remarks

The boldface notation that I've used is called the Gibbs notation. The index notation that I have used is also called Cartesian tensor notation.