# Continuum mechanics/Calculus of variations

Ideas from the calculus of variations are commonly found in papers dealing with the finite element method. This handout discusses some of the basic notations and concepts of variational calculus. Most of the examples are from Variational Methods in Mechanics by T. Mura and T. Koya, Oxford University Press, 1992.

The calculus of variations is a generalization of the calculus that you already know. The goal of variational calculus is to find the curve or surface that minimizes a given quantity. This quantity is a function of other functions and is therefore called a functional.

## Maxima and minima of functions

The calculus of variations extends the ideas of maxima and minima of functions to functionals.

For a function of one variable ${\displaystyle f(x)}$, the minimum occurs at some point ${\displaystyle x_{\text{min}}}$. For a functional, instead of a point minimum, we think in terms of a function that minimizes the functional. Thus, for a functional ${\displaystyle I[f(x)]}$ we can have a minimizing function ${\displaystyle f_{\text{min}}(x)}$.

The problem of finding extrema (minima and maxima) or points of inflection (saddle points) can either be constrained or unconstrained.

### The unconstrained problem.

Suppose ${\displaystyle f(x)}$ is a function of one variable. We want to find the maxima, minima, and points of inflection for this function. No additional constraints are imposed on the function. Then, from elementary calculus, the function ${\displaystyle f(x)}$ has

- a minimum if ${\displaystyle {\cfrac {df}{dx}}=0}$ and ${\displaystyle {\cfrac {d^{2}f}{dx^{2}}}>0}$.
- a maximum if ${\displaystyle {\cfrac {df}{dx}}=0}$ and ${\displaystyle {\cfrac {d^{2}f}{dx^{2}}}<0}$.
- a stationary point of inflection if ${\displaystyle {\cfrac {df}{dx}}=0}$ and ${\displaystyle {\cfrac {d^{2}f}{dx^{2}}}=0}$, provided the second derivative changes sign at the point.

Any point where the condition ${\displaystyle {\cfrac {df}{dx}}=0}$ is satisfied is called a stationary point and we say that the function is stationary at that point.

A similar concept is used when the function is of the form ${\displaystyle f(x_{1},x_{2},x_{3},t)}$. Then, the function ${\displaystyle f}$ is stationary if

${\displaystyle df={\frac {\partial f}{\partial x_{1}}}dx_{1}+{\frac {\partial f}{\partial x_{2}}}dx_{2}+{\frac {\partial f}{\partial x_{3}}}dx_{3}+{\frac {\partial f}{\partial t}}dt=0~.}$

Since ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$, ${\displaystyle x_{3}}$, and ${\displaystyle t}$ are independent variables, we can write the stationarity condition as

${\displaystyle {\frac {\partial f}{\partial x_{1}}}=0~;~~{\frac {\partial f}{\partial x_{2}}}=0~;~~{\frac {\partial f}{\partial x_{3}}}=0~;~~{\frac {\partial f}{\partial t}}=0~.}$
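The stationarity conditions above are easy to check numerically. A minimal sketch (the quadratic ${\displaystyle f}$ below is a hypothetical example chosen so the minimum is at ${\displaystyle (1,-2)}$ by inspection) estimates each partial derivative by central differences and confirms that both vanish at the stationary point:

```python
# At a stationary point every partial derivative of f vanishes.
# The quadratic f below is a hypothetical example with its minimum
# at (1, -2) by inspection.

def f(x1, x2):
    return (x1 - 1.0)**2 + (x2 + 2.0)**2

def partial(func, point, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative at `point`."""
    p_plus = list(point)
    p_plus[i] += h
    p_minus = list(point)
    p_minus[i] -= h
    return (func(*p_plus) - func(*p_minus)) / (2.0 * h)

minimum = (1.0, -2.0)
grads = [partial(f, minimum, i) for i in range(2)]
print(grads)   # both partials are ~0 at the stationary point
```

Away from ${\displaystyle (1,-2)}$ the same estimates are nonzero, which is exactly what distinguishes a stationary point.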

### The constrained problem - Lagrange multipliers.

Suppose we have a function ${\displaystyle f(x_{1},x_{2},x_{3})}$. We want to find the minimum (or maximum) of the function ${\displaystyle f}$ with the added constraint that

${\displaystyle {\text{(1)}}\qquad g(x_{1},x_{2},x_{3})=0~.}$

The added constraint is equivalent to saying that the variables ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$, and ${\displaystyle x_{3}}$ are not independent and we can write one of the variables in terms of the other two.

The stationarity condition for ${\displaystyle f}$ is

${\displaystyle {\text{(2)}}\qquad df={\frac {\partial f}{\partial x_{1}}}dx_{1}+{\frac {\partial f}{\partial x_{2}}}dx_{2}+{\frac {\partial f}{\partial x_{3}}}dx_{3}=0~.}$

Since the variables ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$, and ${\displaystyle x_{3}}$ are not independent, we cannot conclude that the coefficients of ${\displaystyle dx_{1}}$, ${\displaystyle dx_{2}}$, and ${\displaystyle dx_{3}}$ vanish individually.

At this stage we could express ${\displaystyle x_{3}}$ in terms of ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ using the constraint equation (1), form another stationarity condition involving only ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$, and set the coefficients of ${\displaystyle dx_{1}}$ and ${\displaystyle dx_{2}}$ to zero. However, it is usually impossible to solve equation (1) analytically for ${\displaystyle x_{3}}$. Hence, we use a more convenient approach called the Lagrange multiplier method.

##### Lagrange multiplier method.

From equation (1) we have

${\displaystyle dg={\frac {\partial g}{\partial x_{1}}}dx_{1}+{\frac {\partial g}{\partial x_{2}}}dx_{2}+{\frac {\partial g}{\partial x_{3}}}dx_{3}=0~.}$

We introduce a parameter ${\displaystyle \lambda }$ called the Lagrange multiplier. Since ${\displaystyle dg=0}$, adding ${\displaystyle \lambda \,dg}$ to equation (2) gives

${\displaystyle df+\lambda dg=0~.}$

Then we have,

${\displaystyle \left({\frac {\partial f}{\partial x_{1}}}+\lambda {\frac {\partial g}{\partial x_{1}}}\right)dx_{1}+\left({\frac {\partial f}{\partial x_{2}}}+\lambda {\frac {\partial g}{\partial x_{2}}}\right)dx_{2}+\left({\frac {\partial f}{\partial x_{3}}}+\lambda {\frac {\partial g}{\partial x_{3}}}\right)dx_{3}=0~.}$

We choose the parameter ${\displaystyle \lambda }$ such that

${\displaystyle {\text{(3)}}\qquad {\frac {\partial f}{\partial x_{3}}}+\lambda {\frac {\partial g}{\partial x_{3}}}=0~.}$

Then, because ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ are independent, we must have

${\displaystyle {\text{(4)}}\qquad {\frac {\partial f}{\partial x_{1}}}+\lambda {\frac {\partial g}{\partial x_{1}}}=0~~{\text{and}}~~~{\frac {\partial f}{\partial x_{2}}}+\lambda {\frac {\partial g}{\partial x_{2}}}=0~.}$

We can now use equations (1), (3), and (4) to solve for the extremum point and the Lagrange multiplier. The constraint is satisfied in the process.

Notice that equations (1), (3) and (4) can also be written as

${\displaystyle {{\frac {\partial h}{\partial \lambda }}=0~;~~{\frac {\partial h}{\partial x_{1}}}=0~;~~{\frac {\partial h}{\partial x_{2}}}=0~;~~{\frac {\partial h}{\partial x_{3}}}=0}}$

where

${\displaystyle {h(x_{1},x_{2},x_{3},\lambda ):=f(x_{1},x_{2},x_{3})+\lambda g(x_{1},x_{2},x_{3})~.}}$
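As a concrete (hypothetical) example of the method, take ${\displaystyle f=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}$ with the constraint ${\displaystyle g=x_{1}+x_{2}+x_{3}-1=0}$. Stationarity of ${\displaystyle h=f+\lambda g}$ gives ${\displaystyle 2x_{i}+\lambda =0}$ for each ${\displaystyle i}$. The sketch below solves this small system by substitution and then verifies that conditions (1), (3), and (4) all hold:

```python
# Lagrange multiplier method for the hypothetical problem
#   minimize f = x1^2 + x2^2 + x3^2  subject to  g = x1 + x2 + x3 - 1 = 0.
# Stationarity of h = f + lam*g gives 2*xi + lam = 0 for each i.

def solve_lagrange():
    # xi = -lam/2 from stationarity; substituting into the constraint
    # gives 3*(-lam/2) - 1 = 0, i.e. lam = -2/3 and each xi = 1/3.
    lam = -2.0 / 3.0
    x = [-lam / 2.0] * 3
    return x, lam

x, lam = solve_lagrange()
g_residual = sum(x) - 1.0                      # constraint (1)
stationarity = [2.0 * xi + lam for xi in x]    # conditions (3) and (4)
print(x, lam, g_residual, stationarity)
```

Note that the multiplier ${\displaystyle \lambda }$ comes out of the solution along with the extremum point, just as in the general method.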

## Minima of functionals

Consider the functional

${\displaystyle {\text{(5)}}\qquad I[y(x)]=\int _{x_{0}}^{x_{1}}\left[f(x)\left({\cfrac {dy(x)}{dx}}\right)^{2}+g(x)y(x)^{2}+2h(x)y(x)\right]~dx~.}$

We wish to minimize the functional ${\displaystyle I}$ with the constraints (prescribed boundary conditions)

${\displaystyle y(x_{0})=y_{0}~,~~~~y(x_{1})=y_{1}~.}$

Let the function ${\displaystyle y=y(x)}$ minimize ${\displaystyle I}$. Let us also choose a trial function ${\displaystyle {\bar {y}}(x)}$ (close to, but not equal to, the solution ${\displaystyle y(x)}$)

${\displaystyle {\text{(6)}}\qquad {\bar {y}}(x)=y(x)+\lambda v(x)}$

where ${\displaystyle \lambda }$ is a parameter, and ${\displaystyle v(x)}$ is an arbitrary continuous function that has the property that

${\displaystyle v(x_{0})=0~~{\text{and}}~~v(x_{1})=0~.}$

(See Figure 1 for a geometric interpretation.)

 Figure 1. Minimizing function ${\displaystyle y(x)}$ and trial functions.

Plug (6) into (5) to get

${\displaystyle {\text{(8)}}\qquad I[y(x)+\lambda v(x)]=\int _{x_{0}}^{x_{1}}\left[f(x)\left({\cfrac {dy(x)}{dx}}+\lambda {\cfrac {dv}{dx}}\right)^{2}+g(x)\left[y(x)+\lambda v(x)\right]^{2}+2h(x)\left[y(x)+\lambda v(x)\right]\right]~dx~.}$

You can show that equation (8) can be written as

${\displaystyle I[y(x)+\lambda v(x)]=I[y(x)]+\delta I+\delta ^{2}I~~~~{\text{or,}}~~~~I[y(x)+\lambda v(x)]-I[y(x)]=\delta I+\delta ^{2}I}$

where

${\displaystyle {\text{(9)}}\qquad \delta I=2\lambda \int _{x_{0}}^{x_{1}}\left[f(x)\left({\cfrac {dy(x)}{dx}}\right)\left({\cfrac {dv(x)}{dx}}\right)+g(x)y(x)v(x)+h(x)v(x)\right]~dx}$

and

${\displaystyle {\text{(10)}}\qquad \delta ^{2}I=\lambda ^{2}\int _{x_{0}}^{x_{1}}\left[f(x)\left({\cfrac {dv(x)}{dx}}\right)^{2}+g(x)[v(x)]^{2}\right]~dx~.}$

The quantity ${\displaystyle \delta I}$ is called the first variation of ${\displaystyle I}$ and the quantity ${\displaystyle \delta ^{2}I}$ is called the second variation of ${\displaystyle I}$. Notice that ${\displaystyle \delta I}$ consists only of terms containing ${\displaystyle \lambda }$ while ${\displaystyle \delta ^{2}I}$ consists only of terms containing ${\displaystyle \lambda ^{2}}$.
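Because the integrand in (5) is quadratic in ${\displaystyle y}$ and ${\displaystyle y'}$, the decomposition ${\displaystyle I[y+\lambda v]-I[y]=\delta I+\delta ^{2}I}$ holds with no higher-order remainder. A sketch of a numerical check (the particular ${\displaystyle f}$, ${\displaystyle g}$, ${\displaystyle h}$, ${\displaystyle y}$, and ${\displaystyle v}$ below are hypothetical choices made only for illustration):

```python
# Numerical check of I[y + lam*v] - I[y] = deltaI + delta2I
# (equations (9) and (10)).  All functions below are hypothetical
# choices made only for illustration.

def integrate(func, a, b, n=2000):
    """Composite trapezoid rule on [a, b]."""
    step = (b - a) / n
    total = 0.5 * (func(a) + func(b))
    for k in range(1, n):
        total += func(a + k * step)
    return total * step

f = lambda x: 1.0 + x
g = lambda x: 1.0
h = lambda x: x
y, yp = (lambda x: x), (lambda x: 1.0)                        # y(x) = x
v, vp = (lambda x: x * (1.0 - x)), (lambda x: 1.0 - 2.0 * x)  # v(0) = v(1) = 0
lam = 0.1

def I(w, wp):
    return integrate(lambda x: f(x) * wp(x)**2 + g(x) * w(x)**2
                     + 2.0 * h(x) * w(x), 0.0, 1.0)

lhs = I(lambda x: y(x) + lam * v(x),
        lambda x: yp(x) + lam * vp(x)) - I(y, yp)
dI = 2.0 * lam * integrate(lambda x: f(x) * yp(x) * vp(x)
                           + g(x) * y(x) * v(x) + h(x) * v(x), 0.0, 1.0)
d2I = lam**2 * integrate(lambda x: f(x) * vp(x)**2 + g(x) * v(x)**2,
                         0.0, 1.0)
print(lhs, dI + d2I)   # the two sides agree
```

Since the trapezoid rule is linear in the integrand and the algebraic identity holds pointwise, the two sides agree to machine precision, not just to quadrature accuracy.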

The necessary condition for ${\displaystyle I[y(x)]}$ to be a minimum is

${\displaystyle {\text{(11)}}\qquad {\delta I=0~.}}$
##### Remark.

The first variation of the functional ${\displaystyle I[y]}$ in the direction ${\displaystyle v}$ is defined as

${\displaystyle {\delta I(y;v)=\lim _{\epsilon \rightarrow 0}{\cfrac {I[y+\epsilon v]-I[y]}{\epsilon }}\equiv \left.{\cfrac {d}{d\epsilon }}I[y+\epsilon v]\right|_{\epsilon =0}~.}}$
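The limit definition above can be illustrated numerically. For the hypothetical functional ${\displaystyle I[y]=\int _{0}^{1}(y')^{2}~dx}$, the straight line ${\displaystyle y(x)=x}$ minimizes ${\displaystyle I}$ for fixed endpoints, so the difference quotient should tend to zero for any admissible ${\displaystyle v}$:

```python
# Difference-quotient check of the first variation
#   delta I(y; v) = lim_{eps -> 0} (I[y + eps*v] - I[y]) / eps
# for the hypothetical functional I[y] = int_0^1 (y')^2 dx.
# y(x) = x minimizes I with fixed endpoints, so the quotient -> 0.

def integrate(func, a, b, n=1000):
    """Composite trapezoid rule on [a, b]."""
    step = (b - a) / n
    total = 0.5 * (func(a) + func(b))
    for k in range(1, n):
        total += func(a + k * step)
    return total * step

def I(yprime):
    return integrate(lambda x: yprime(x)**2, 0.0, 1.0)

yp = lambda x: 1.0               # derivative of y(x) = x
vp = lambda x: 1.0 - 2.0 * x     # derivative of v(x) = x*(1 - x), v(0) = v(1) = 0

quotients = []
for eps in (1e-2, 1e-3, 1e-4):
    diff = I(lambda x, e=eps: yp(x) + e * vp(x)) - I(yp)
    quotients.append(diff / eps)
print(quotients)   # shrinks toward 0 as eps -> 0
```

The quotient shrinks in proportion to ${\displaystyle \epsilon }$ because only the second variation survives at a minimizer.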

To find which function makes ${\displaystyle \delta I}$ zero, we first integrate the first term of equation (9) by parts. We have,

${\displaystyle \int _{x_{0}}^{x_{1}}\left(f{\cfrac {dy}{dx}}\right){\cfrac {dv}{dx}}~dx=\left[\left(f{\cfrac {dy}{dx}}\right)v\right]_{x_{0}}^{x_{1}}-\int _{x_{0}}^{x_{1}}{\cfrac {d}{dx}}\left(f{\cfrac {dy}{dx}}\right)v~dx~.}$

Since ${\displaystyle v=0}$ at ${\displaystyle x_{0}}$ and ${\displaystyle x_{1}}$, we have

${\displaystyle {\text{(12)}}\qquad \int _{x_{0}}^{x_{1}}\left(f{\cfrac {dy}{dx}}\right){\cfrac {dv}{dx}}~dx=-\int _{x_{0}}^{x_{1}}{\cfrac {d}{dx}}\left(f{\cfrac {dy}{dx}}\right)v~dx}$

Plugging equation (12) into (9) and applying the minimizing condition (11), we get

${\displaystyle 0=\int _{x_{0}}^{x_{1}}\left[-{\cfrac {d}{dx}}\left(f(x){\cfrac {dy(x)}{dx}}\right)v(x)+g(x)y(x)v(x)+h(x)v(x)\right]~dx}$

or,

${\displaystyle {\text{(13)}}\qquad \int _{x_{0}}^{x_{1}}\left[-{\cfrac {d}{dx}}\left(f(x){\cfrac {dy(x)}{dx}}\right)+g(x)y(x)+h(x)\right]v(x)~dx=0~.}$

The fundamental lemma of variational calculus states that if ${\displaystyle u(x)}$ is a piecewise continuous function of ${\displaystyle x}$ and ${\displaystyle v(x)}$ is an arbitrary continuous function that vanishes on the boundary, then

${\displaystyle {\text{(14)}}\qquad {\int _{x_{0}}^{x_{1}}u(x)v(x)~dx=0\implies u(x)=0~.}}$

Since ${\displaystyle v(x)}$ in equation (13) is arbitrary, applying (14) to (13) we get

${\displaystyle {\text{(15)}}\qquad -{\cfrac {d}{dx}}\left(f(x){\cfrac {dy(x)}{dx}}\right)+g(x)y(x)+h(x)=0~.}$

Equation (15) is called the Euler equation of the functional ${\displaystyle I}$. The solution of the Euler equation is the minimizing function that we seek.

Of course, we cannot be sure that the solution represents a minimum unless we check the second variation ${\displaystyle \delta ^{2}I}$. From equation (10) we can see that ${\displaystyle \delta ^{2}I>0}$ if ${\displaystyle f(x)>0}$ and ${\displaystyle g(x)>0}$, and in that case the problem is guaranteed to be a minimization problem.

We often define

${\displaystyle \delta y:=\lambda v(x)~~~{\text{and}}~~~\delta y^{'}:=\lambda {\cfrac {dv(x)}{dx}}}$

where ${\displaystyle \delta y}$ is called a variation of ${\displaystyle y(x)}$.

In this notation, equation (9) can be written as

${\displaystyle \delta I=2\int _{x_{0}}^{x_{1}}\left[f\left({\cfrac {dy}{dx}}\right)\delta y^{'}+gy\delta y+h\delta y\right]~dx}$

You see this notation in the principle of virtual work in the mechanics of materials.

## An example

Consider a string of length ${\displaystyle l}$ under a tension ${\displaystyle T}$ (see Figure 2). When a vertical load ${\displaystyle f}$ (per unit length) is applied, the string deflects by an amount ${\displaystyle u(x)}$ in the ${\displaystyle y}$-direction. The deformed length of an element ${\displaystyle dx}$ of the string is

${\displaystyle ds={\sqrt {1+\left({\cfrac {du}{dx}}\right)^{2}}}dx~.}$

If the deformation is small, we can expand the relation into a Taylor series and ignore the higher order terms to get

${\displaystyle ds=\left[1+{\frac {1}{2}}\left({\cfrac {du}{dx}}\right)^{2}\right]dx~.}$

Figure 2. An elastic string under a transverse load.

The tension ${\displaystyle T}$ in the string moves through a distance

${\displaystyle ds-dx={\frac {1}{2}}\left({\cfrac {du}{dx}}\right)^{2}~dx.}$

Therefore, the work done by the tension ${\displaystyle T}$ per unit original length of the string (i.e., the stored elastic energy) is

${\displaystyle {\frac {1}{2}}T\left({\cfrac {du}{dx}}\right)^{2}~.}$

The work done by the load ${\displaystyle f}$ (per unit original length of string) is

${\displaystyle fu~.}$

We want to minimize the total potential energy (the stored elastic energy minus the work done by the load). Therefore, the functional to be minimized is

${\displaystyle I[u]={\cfrac {T}{2}}\int _{0}^{l}\left({\cfrac {du}{dx}}\right)^{2}~dx-\int _{0}^{l}fu~dx~.}$

The Euler equation is

${\displaystyle T{\cfrac {d^{2}u}{dx^{2}}}+f=0~.}$

With the boundary conditions ${\displaystyle u(0)=u(l)=0}$, the solution is

${\displaystyle u={\cfrac {f}{2T}}(l-x)x~.}$
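A quick numerical sanity check (the values of ${\displaystyle T}$, ${\displaystyle f}$, and ${\displaystyle l}$ below are hypothetical) that this ${\displaystyle u(x)}$ satisfies the Euler equation ${\displaystyle T\,u''+f=0}$ and the boundary conditions ${\displaystyle u(0)=u(l)=0}$:

```python
# Check that u(x) = f/(2T) * (l - x) * x solves T*u'' + f = 0
# with u(0) = u(l) = 0.  T, f, l are hypothetical values.

T, f, l = 2.0, 3.0, 1.5

def u(x):
    return f / (2.0 * T) * (l - x) * x

# Second derivative by a central difference; u is quadratic, so the
# difference formula is essentially exact.
h = 1e-4
residuals = []
for x in (0.3, 0.75, 1.2):
    upp = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    residuals.append(T * upp + f)
print(u(0.0), u(l), residuals)   # boundary values 0, residuals ~0
```

The residual ${\displaystyle T\,u''+f}$ vanishes at every interior point, confirming that the parabola is the minimizer of the energy functional.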