Waves in composites and metamaterials/Willis equations for elastodynamics

From Wikiversity

The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.


In the previous lecture we introduced the Willis equations (Willis81, Willis81a, Willis83, Willis97, Milton07). In this lecture we discuss how those equations are derived.

Recall that by ensemble averaging the governing equations of elastodynamics we get

    & \boldsymbol{\nabla} \cdot \left\langle \boldsymbol{\sigma} \right\rangle + \mathbf{f} = \left\langle \dot{\mathbf{p}} \right\rangle \\
    & \left\langle \boldsymbol{\varepsilon} \right\rangle = \frac{1}{2}~[\boldsymbol{\nabla} \left\langle \mathbf{u} \right\rangle + (\boldsymbol{\nabla} \left\langle \mathbf{u} \right\rangle)^T] 

where \left\langle (\bullet) \right\rangle is the ensemble average over realizations and not a volume average.

We need to derive the effective constitutive relations

  \left\langle \boldsymbol{\sigma} \right\rangle & = \boldsymbol{\mathsf{C}}_\text{eff}\star\left\langle \boldsymbol{\varepsilon} \right\rangle +\boldsymbol{\mathcal{S}}_\text{eff}\star\left\langle \dot{\mathbf{u}} \right\rangle\\
  \left\langle \mathbf{p} \right\rangle & = \boldsymbol{\mathcal{S}}_\text{eff}^\dagger \star\left\langle \boldsymbol{\varepsilon} \right\rangle + 
     \boldsymbol{\rho}_\text{eff}\star\left\langle \dot{\mathbf{u}} \right\rangle

where the operator \star represents a convolution over time, i.e.,

  &\boldsymbol{\mathsf{C}}\star\boldsymbol{\varepsilon} \equiv 
    \int_{-\infty}^t \boldsymbol{\mathsf{C}}(t-\tau) : \boldsymbol{\varepsilon}(\tau)~\text{d}\tau ~;~~
  \boldsymbol{\mathcal{S}}\star\dot{\mathbf{u}} \equiv
    \int_{-\infty}^t \boldsymbol{\mathcal{S}}(t-\tau) \cdot \dot{\mathbf{u}}(\tau)~\text{d}\tau \\
  &\boldsymbol{\mathcal{S}}^\dagger\star\boldsymbol{\varepsilon} \equiv 
    \int_{-\infty}^t \boldsymbol{\mathcal{S}}^\dagger(t-\tau) : \boldsymbol{\varepsilon}(\tau)~\text{d}\tau ~;~~
  \boldsymbol{\rho}\star\dot{\mathbf{u}} \equiv
    \int_{-\infty}^t \boldsymbol{\rho}(t-\tau)\cdot\dot{\mathbf{u}}(\tau)~\text{d}\tau ~.

and the adjoint operator (represented by the superscript \dagger) is defined via

  \int \mathbf{a} \star (\boldsymbol{\mathcal{S}}^\dagger_\text{eff} \star \boldsymbol{A})~\text{d}\mathbf{x} = 
  \int \boldsymbol{A} \star (\boldsymbol{\mathcal{S}}_\text{eff} \star \mathbf{a})~\text{d}\mathbf{x}

for all vector fields \mathbf{a} and second-order tensor fields \boldsymbol{A}, at each time t. Note that the quantities \boldsymbol{\mathcal{S}}_\text{eff} and \boldsymbol{\mathcal{S}}^\dagger_\text{eff} are third-order tensors. In the above definition the convolutions are defined as

  \mathbf{a} \star \mathbf{b} = \int_{-\infty}^t \mathbf{a}(t-\tau) \cdot \mathbf{b}(\tau)~\text{d}\tau ~;~~
  \boldsymbol{A} \star \boldsymbol{B} = \int_{-\infty}^t \boldsymbol{A}(t-\tau) : \boldsymbol{B}(\tau)~\text{d}\tau

where \mathbf{a}, \mathbf{b} are vectors and \boldsymbol{A}, \boldsymbol{B} are second-order tensors.
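To make the \star operation concrete, here is a minimal numerical sketch (assuming scalar fields, a uniform time grid, and causal kernels supported on t \ge 0; all names are illustrative, not part of Willis' formulation) of the causal time convolution defined above, checked against NumPy's built-in discrete convolution:

```python
import numpy as np

# Sketch of the causal time convolution
# (C * e)(t) = int_{-inf}^t C(t - tau) e(tau) dtau
# for scalar stand-ins on a uniform grid.
def causal_convolve(kernel, signal, dt):
    """Left-endpoint discretization of the causal convolution."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        # sum over tau_j <= t_i of kernel(t_i - tau_j) * signal(tau_j)
        out[i] = np.sum(kernel[: i + 1][::-1] * signal[: i + 1]) * dt
    return out

dt = 0.01
t = np.arange(0.0, 2.0, dt)
C = np.exp(-t)                 # a relaxation-type stiffness kernel
e = np.sin(2 * np.pi * t)      # a strain history
sigma = causal_convolve(C, e, dt)

# NumPy's discrete convolution yields the same causal partial sums
ref = dt * np.convolve(C, e)[: len(t)]
assert np.allclose(sigma, ref)
```

The dependence on the entire past history of the argument (not just its instantaneous value) is exactly what the \star notation encodes.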

Derivation of Willis' equations

Let us introduce a homogeneous reference medium with properties \boldsymbol{\mathsf{C}}_0 and \rho_0 (constant). The polarization fields are defined as

 \text{(1)} \qquad 
  \boldsymbol{\tau} & := (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0) \star \boldsymbol{\varepsilon} & = \boldsymbol{\sigma} - \boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon} \\
  \boldsymbol{\pi} & := (\rho - \rho_0)~\dot{\mathbf{u}} & = \mathbf{p} - \rho_0~\dot{\mathbf{u}} ~.

These definitions can be rearranged to give

 \text{(2)} \qquad 
    \boldsymbol{\sigma} & = \boldsymbol{\tau} + \boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon} \\
    \mathbf{p} & = \boldsymbol{\pi} + \rho_0~\dot{\mathbf{u}} ~.

Taking the divergence of equation (2)_1, we get

 \text{(3)} \qquad 
  \boldsymbol{\nabla} \cdot \boldsymbol{\sigma} = \boldsymbol{\nabla} \cdot \boldsymbol{\tau} + \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}) ~.

Also, taking the time derivative of equation (2)_2, we have

 \text{(4)} \qquad 
  \dot{\mathbf{p}} = \dot{\boldsymbol{\pi}} + \rho_0~\ddot{\mathbf{u}} ~.

Recall that the equation of motion is

 \text{(5)} \qquad 
  \boldsymbol{\nabla} \cdot \boldsymbol{\sigma} + \mathbf{f} = \dot{\mathbf{p}} ~.

Plugging (3) and (4) into (5) gives

  \boldsymbol{\nabla} \cdot \boldsymbol{\tau} + \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}) + \mathbf{f} = 
      \dot{\boldsymbol{\pi}} + \rho_0~\ddot{\mathbf{u}}

or

 \text{(6)} \qquad 
  \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}) + \mathbf{f} + \boldsymbol{\nabla} \cdot \boldsymbol{\tau} - \dot{\boldsymbol{\pi}} =
      \rho_0~\ddot{\mathbf{u}} ~.

In the reference medium, \boldsymbol{\tau} = \boldsymbol{0} and \boldsymbol{\pi} = \boldsymbol{0}. Let \mathbf{u}_0 be the solution in the reference medium in the presence of the body force \mathbf{f} and with the same boundary conditions and initial conditions. For example, if the actual body has \mathbf{u} \rightarrow \boldsymbol{0} as t \rightarrow -\infty, then \mathbf{u}_0 \rightarrow \boldsymbol{0} as t \rightarrow -\infty. Then, in the reference medium, we have

 \text{(7)} \qquad 
   & \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}_0) + \mathbf{f} = \rho_0~\ddot{\mathbf{u}}_0\\
   & \boldsymbol{\varepsilon}_0 = \frac{1}{2} [\boldsymbol{\nabla} \mathbf{u}_0 + (\boldsymbol{\nabla} \mathbf{u}_0)^T] ~.

Remember that we want our effective stress-strain relations to be independent of the body force \mathbf{f}. So all we have to do is subtract (7)_1 from (6). Then we get

  \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}) - \boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}_0) + \boldsymbol{\nabla} \cdot \boldsymbol{\tau} 
    - \dot{\boldsymbol{\pi}} =
      \rho_0~\left[\ddot{\mathbf{u}} - \ddot{\mathbf{u}}_0\right]

or

  \boldsymbol{\nabla} \cdot [\boldsymbol{\mathsf{C}}_0\star(\boldsymbol{\varepsilon}-\boldsymbol{\varepsilon}_0)] + \boldsymbol{\nabla} \cdot \boldsymbol{\tau} - \dot{\boldsymbol{\pi}} =
      \rho_0~\left[\ddot{\mathbf{u}} - \ddot{\mathbf{u}}_0\right] ~.

Now define

  \mathbf{u}' := \mathbf{u} - \mathbf{u}_0 ~;~~ \boldsymbol{\varepsilon}' := \frac{1}{2}[\boldsymbol{\nabla} \mathbf{u}' + (\boldsymbol{\nabla} \mathbf{u}')^T]
    = \boldsymbol{\varepsilon} - \boldsymbol{\varepsilon}_0 ~;~~ \mathbf{h} := \boldsymbol{\nabla} \cdot \boldsymbol{\tau} - \dot{\boldsymbol{\pi}} ~.

With these definitions, the above equation becomes

 \text{(8)} \qquad 
  -\boldsymbol{\nabla} \cdot (\boldsymbol{\mathsf{C}}_0\star\boldsymbol{\varepsilon}') + \rho_0~\ddot{\mathbf{u}}'  = \mathbf{h} ~.

If we regard \mathbf{h} as a known source term, then (8) can be written as

  \mathcal{L}~\mathbf{u}' = \mathbf{h}

where \mathcal{L} is a linear operator. The solution of this equation is

  \mathbf{u}' = \boldsymbol{G} \star \mathbf{h}

where \boldsymbol{G} is the Green's function associated with the operator \mathcal{L}. Plugging back our definitions of \mathbf{u}' and \mathbf{h}, we get

 \text{(9)} \qquad 
  \mathbf{u} = \mathbf{u}_0 + \boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau} - \dot{\boldsymbol{\pi}}) 
      = \mathbf{u}_0 + \boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau}) - \boldsymbol{G}\star\dot{\boldsymbol{\pi}} ~.
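As a sanity check on the representation \mathbf{u}' = \boldsymbol{G} \star \mathbf{h}, the following sketch replaces the tensorial operator \mathcal{L} by a scalar stand-in (a single oscillator \rho_0 \ddot{u} + k u = h, whose causal Green's function is G(t) = \sin(\omega t)/(\rho_0 \omega) with \omega = \sqrt{k/\rho_0}) and verifies numerically that the Green's-function convolution matches direct time stepping. This is only an analogue of the full problem; all numerical values are illustrative:

```python
import numpy as np

# Scalar analogue of L u' = h  with  L = rho0 d^2/dt^2 + k,
# standing in for  -div(C0 * grad) + rho0 d^2/dt^2.
rho0, k = 1.0, 4.0
w = np.sqrt(k / rho0)
dt = 1e-3
t = np.arange(0.0, 3.0, dt)
h = np.exp(-((t - 1.0) / 0.1) ** 2)   # smooth forcing, vanishing at t = 0
G = np.sin(w * t) / (rho0 * w)        # causal Green's function of L

# u' by the convolution u' = G * h (endpoint terms vanish here)
u_conv = dt * np.convolve(G, h)[: len(t)]

# u' by direct time stepping of rho0 u'' + k u = h (Stoermer/Verlet)
u_num = np.zeros_like(t)
for i in range(1, len(t) - 1):
    u_num[i + 1] = 2 * u_num[i] - u_num[i - 1] + dt**2 * (h[i] - k * u_num[i]) / rho0

assert np.allclose(u_conv, u_num, atol=1e-4)
```

Both solutions start from rest (u' \rightarrow 0 as t \rightarrow -\infty), matching the quiescent initial conditions assumed for the reference problem.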

The strain-displacement relation is

  \boldsymbol{\varepsilon} = \frac{1}{2}[\boldsymbol{\nabla}\mathbf{u} + (\boldsymbol{\nabla}\mathbf{u})^T] ~.

Plugging the solution (9) into the strain-displacement relation gives

 \text{(10)} \qquad 
  \boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_0 + \frac{1}{2}~\boldsymbol{\nabla} [\boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau})] + 
     \frac{1}{2}~[\boldsymbol{\nabla} [\boldsymbol{G}\star(\boldsymbol{\nabla} \cdot \boldsymbol{\tau})]]^T - 
     \frac{1}{2}~\boldsymbol{\nabla} (\boldsymbol{G}\star\dot{\boldsymbol{\pi}}) - 
     \frac{1}{2}~[\boldsymbol{\nabla} (\boldsymbol{G}\star\dot{\boldsymbol{\pi}})]^T ~.

Define \boldsymbol{\mathsf{S}}_x and \boldsymbol{\mathcal{M}}_x via

  \boldsymbol{\mathsf{S}}_x\star\boldsymbol{\tau} & = -\frac{1}{2}~\left\{\boldsymbol{\nabla} [\boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau})] + 
     [\boldsymbol{\nabla} [\boldsymbol{G}\star(\boldsymbol{\nabla} \cdot \boldsymbol{\tau})]]^T\right\} \\
  \boldsymbol{\mathcal{M}}_x\star\boldsymbol{\pi} & = \frac{1}{2}~\left\{\boldsymbol{\nabla} (\boldsymbol{G}\star\dot{\boldsymbol{\pi}}) + 
     [\boldsymbol{\nabla} (\boldsymbol{G}\star\dot{\boldsymbol{\pi}})]^T\right\} ~.

Then we can write (10) as

 \text{(11)} \qquad 
  \boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_0 - \boldsymbol{\mathsf{S}}_x\star\boldsymbol{\tau} - \boldsymbol{\mathcal{M}}_x\star\boldsymbol{\pi} ~.

Also, taking the time derivative of (9), we get

 \text{(12)} \qquad 
  \dot{\mathbf{u}} = \dot{\mathbf{u}}_0 + \cfrac{d }{d t}\left[\boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau})\right] -
     \cfrac{d }{d t}\left[\boldsymbol{G}\star\dot{\boldsymbol{\pi}}\right] ~.

Define \boldsymbol{\mathcal{S}}_t and \boldsymbol{M}_t via

    \boldsymbol{\mathcal{S}}_t\star\boldsymbol{\tau} & = -\cfrac{d }{d t}\left[\boldsymbol{G} \star (\boldsymbol{\nabla} \cdot \boldsymbol{\tau})\right] \\
    \boldsymbol{M}_t\star\boldsymbol{\pi} & = \cfrac{d }{d t}\left[\boldsymbol{G}\star\dot{\boldsymbol{\pi}}\right] ~.

Then we can write (12) as

 \text{(13)} \qquad 
  \dot{\mathbf{u}} = \dot{\mathbf{u}}_0 - \boldsymbol{\mathcal{S}}_t\star\boldsymbol{\tau} - \boldsymbol{M}_t\star\boldsymbol{\pi} ~.

Willis (Willis81a) has shown that \boldsymbol{\mathcal{S}}_t and \boldsymbol{\mathcal{M}}_x are formal adjoints, i.e., \boldsymbol{\mathcal{S}}_t = \boldsymbol{\mathcal{M}}_x^\dagger, in the sense that

  \int \boldsymbol{\pi} \star (\boldsymbol{\mathcal{S}}_t \star \boldsymbol{\tau})~\text{d}\mathbf{x} = 
  \int \boldsymbol{\tau} \star (\boldsymbol{\mathcal{M}}_x \star \boldsymbol{\pi})~\text{d}\mathbf{x} ~\qquad \forall~\boldsymbol{\pi},\boldsymbol{\tau},t ~.

From (11) and (13), eliminating \boldsymbol{\varepsilon} and \dot{\mathbf{u}} via equations (1), we have

 \text{(14)} \qquad 
    (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0)^{-1}\star\boldsymbol{\tau} + \boldsymbol{\mathsf{S}}_x\star\boldsymbol{\tau} + \boldsymbol{\mathcal{M}}_x\star\boldsymbol{\pi} 
       & = \boldsymbol{\varepsilon}_0 \\
    (\rho - \rho_0)^{-1}~\boldsymbol{\pi} + \boldsymbol{\mathcal{S}}_t\star\boldsymbol{\tau} + \boldsymbol{M}_t\star\boldsymbol{\pi} 
       & = \dot{\mathbf{u}}_0 ~.

Also, ensemble averaging equations (11) and (13), we have

 \text{(15)} \qquad 
    \left\langle \boldsymbol{\varepsilon} \right\rangle &= \boldsymbol{\varepsilon}_0 - \boldsymbol{\mathsf{S}}_x\star\left\langle \boldsymbol{\tau} \right\rangle - \boldsymbol{\mathcal{M}}_x\star\left\langle \boldsymbol{\pi} \right\rangle\\
    \left\langle \dot{\mathbf{u}} \right\rangle  &= \dot{\mathbf{u}}_0 - \boldsymbol{\mathcal{S}}_t\star\left\langle \boldsymbol{\tau} \right\rangle - 
       \boldsymbol{M}_t\star\left\langle \boldsymbol{\pi} \right\rangle~.

From (14) and (15), eliminating \boldsymbol{\varepsilon}_0 and \dot{\mathbf{u}}_0, we get

   (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0)^{-1}\star\boldsymbol{\tau} + \boldsymbol{\mathsf{S}}_x\star\boldsymbol{\tau} + \boldsymbol{\mathcal{M}}_x\star\boldsymbol{\pi} 
       & = \left\langle \boldsymbol{\varepsilon} \right\rangle + \boldsymbol{\mathsf{S}}_x\star\left\langle \boldsymbol{\tau} \right\rangle + \boldsymbol{\mathcal{M}}_x\star\left\langle \boldsymbol{\pi} \right\rangle\\
    (\rho - \rho_0)^{-1}~\boldsymbol{\pi} + \boldsymbol{\mathcal{S}}_t\star\boldsymbol{\tau} + \boldsymbol{M}_t\star\boldsymbol{\pi} 
       & = \left\langle \dot{\mathbf{u}} \right\rangle  + \boldsymbol{\mathcal{S}}_t\star\left\langle \boldsymbol{\tau} \right\rangle + \boldsymbol{M}_t\star\left\langle \boldsymbol{\pi} \right\rangle

or

 \text{(16)} \qquad 
   (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0)^{-1}\star\boldsymbol{\tau} + \boldsymbol{\mathsf{S}}_x\star(\boldsymbol{\tau}-\left\langle \boldsymbol{\tau} \right\rangle) + 
     \boldsymbol{\mathcal{M}}_x\star(\boldsymbol{\pi} -\left\langle \boldsymbol{\pi} \right\rangle) & = \left\langle \boldsymbol{\varepsilon} \right\rangle\\
    (\rho - \rho_0)^{-1}~\boldsymbol{\pi} + \boldsymbol{\mathcal{S}}_t\star(\boldsymbol{\tau}-\left\langle \boldsymbol{\tau} \right\rangle) + 
     \boldsymbol{M}_t\star(\boldsymbol{\pi} -\left\langle \boldsymbol{\pi} \right\rangle) & = \left\langle \dot{\mathbf{u}} \right\rangle  ~.

Equations (16) are linear in \boldsymbol{\tau} and \boldsymbol{\pi}. Therefore, formally, their solution has the form

 \text{(17)} \qquad 
  \begin{bmatrix} \boldsymbol{\tau} \\ \boldsymbol{\pi} \end{bmatrix} = 
    \mathcal{T} \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\  \left\langle \dot{\mathbf{u}} \right\rangle 
    \end{bmatrix} ~.

The existence of such a solution operator has been shown rigorously for low-contrast media, but not for high-contrast media. Strictly speaking, these arguments therefore apply to composites that are close to homogeneous.

From the definitions of \boldsymbol{\tau} and \boldsymbol{\pi}, taking the ensemble average gives

 \text{(18)} \qquad 
  \left\langle \boldsymbol{\tau} \right\rangle = \left\langle \boldsymbol{\sigma} \right\rangle - \boldsymbol{\mathsf{C}}_0\star\left\langle \boldsymbol{\varepsilon} \right\rangle ~;~~
  \left\langle \boldsymbol{\pi} \right\rangle = \left\langle \mathbf{p} \right\rangle - \rho_0~\left\langle \dot{\mathbf{u}} \right\rangle  ~.

Also, from (17), taking the ensemble average leads to

 \text{(19)} \qquad 
  \begin{bmatrix} \left\langle \boldsymbol{\tau} \right\rangle \\ \left\langle \boldsymbol{\pi} \right\rangle \end{bmatrix} = 
    \left\langle \mathcal{T} \right\rangle \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\  \left\langle \dot{\mathbf{u}} \right\rangle 
    \end{bmatrix} ~.

Substituting the relations (18) into (19) gives

  \begin{bmatrix}
    \left\langle \boldsymbol{\sigma} \right\rangle - \boldsymbol{\mathsf{C}}_0\star\left\langle \boldsymbol{\varepsilon} \right\rangle\\
    \left\langle \mathbf{p} \right\rangle - \rho_0~\left\langle \dot{\mathbf{u}} \right\rangle  
  \end{bmatrix} = 
  \left\langle \mathcal{T} \right\rangle \star 
  \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\  \left\langle \dot{\mathbf{u}} \right\rangle \end{bmatrix}

or

  \begin{bmatrix} \left\langle \boldsymbol{\sigma} \right\rangle \\ \left\langle \mathbf{p} \right\rangle \end{bmatrix} = 
  \left\langle \mathcal{T} \right\rangle \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle\\\left\langle \dot{\mathbf{u}} \right\rangle \end{bmatrix} +
  \begin{bmatrix} \boldsymbol{\mathsf{C}}_0 & 0 \\ 0 & \rho_0~\boldsymbol{\mathit{1}}\end{bmatrix} 
  \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle\\\left\langle \dot{\mathbf{u}} \right\rangle \end{bmatrix}

Absorbing the reference-medium term into the definition of the effective operators, we obtain

 \text{(20)} \qquad 
  \begin{bmatrix} \left\langle \boldsymbol{\sigma} \right\rangle \\ \left\langle \mathbf{p} \right\rangle \end{bmatrix} = 
  \begin{bmatrix} \boldsymbol{\mathsf{C}}_\text{eff} & \boldsymbol{\mathcal{S}}_\text{eff} \\ \boldsymbol{\mathcal{S}}^\dagger_\text{eff} & 
    \boldsymbol{\rho}_\text{eff} \end{bmatrix} 
  \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle\\\left\langle \dot{\mathbf{u}} \right\rangle \end{bmatrix} ~.

These are the Willis equations.
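In the time-harmonic, local approximation the convolutions in (20) reduce to matrix multiplications at each frequency. The following sketch uses made-up scalar values (1x1 blocks, real-valued, so the adjoint of the coupling is just its transpose), purely to illustrate the block structure and the strain-momentum coupling; it is not data for any real composite:

```python
import numpy as np

# Illustrative time-harmonic, local version of the Willis block form (20):
# [<sigma>; <p>] = [[C_eff, S_eff], [S_eff^dagger, rho_eff]] [<eps>; <udot>]
C_eff = np.array([[5.0]])       # effective stiffness (placeholder value)
S_eff = np.array([[0.3]])       # coupling term (placeholder value)
rho_eff = np.array([[2.0]])     # effective density (placeholder value)

willis = np.block([[C_eff, S_eff],
                   [S_eff.T, rho_eff]])

eps, udot = 0.01, 0.5           # macroscopic strain and velocity
sigma, p = willis @ np.array([eps, udot])

# The coupling makes the momentum depend on the strain:
p_no_coupling = rho_eff[0, 0] * udot
assert np.isclose(p - p_no_coupling, S_eff[0, 0] * eps)
```

The off-diagonal blocks are what distinguish the Willis form from ordinary elastodynamics: stress responds to velocity, and momentum responds to strain.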

Willis equations for electromagnetism

For electromagnetism, we can use similar arguments to obtain

    \left\langle \mathbf{D} \right\rangle & = \boldsymbol{\epsilon}_\text{eff} \star \left\langle \mathbf{E} \right\rangle + 
        \boldsymbol{\alpha}_\text{eff} \star \left\langle \mathbf{B} \right\rangle\\
    \left\langle \mathbf{H} \right\rangle & = \boldsymbol{\alpha}_\text{eff} \star \left\langle \mathbf{E} \right\rangle + 
        (\boldsymbol{\mu}_\text{eff})^{-1} \star \left\langle \mathbf{B} \right\rangle

where \boldsymbol{\alpha}_\text{eff} is a coupling term.

In particular, if the fields are time harmonic with non-local operators being approximated by local ones, then

    \left\langle \widehat{\mathbf{D}} \right\rangle & = \boldsymbol{\epsilon}_\text{eff} \cdot \left\langle \widehat{\mathbf{E}} \right\rangle + 
        \boldsymbol{\lambda}_\text{eff} \cdot \left\langle \widehat{\mathbf{H}} \right\rangle\\
    \left\langle \widehat{\mathbf{B}} \right\rangle & = \overline{\boldsymbol{\lambda}_\text{eff}} \cdot \left\langle \widehat{\mathbf{E}} \right\rangle + 
        \boldsymbol{\mu}_\text{eff} \cdot \left\langle \widehat{\mathbf{H}} \right\rangle ~.

If the operators are local, then \boldsymbol{\epsilon}_\text{eff}, \boldsymbol{\lambda}_\text{eff}, \boldsymbol{\mu}_\text{eff} will just be matrices that depend on the frequency \omega.

If the composite material is isotropic, then

   \boldsymbol{\epsilon}_\text{eff} = \epsilon_\text{eff}~\boldsymbol{\mathit{1}} ~;~~
   \boldsymbol{\lambda}_\text{eff} = \lambda_\text{eff}~\boldsymbol{\mathit{1}} ~;~~
   \boldsymbol{\mu}_\text{eff} = \mu_\text{eff}~\boldsymbol{\mathit{1}} ~.

Under reflection, \left\langle \widehat{\mathbf{E}} \right\rangle transforms like an ordinary (polar) vector. However, \left\langle \widehat{\mathbf{H}} \right\rangle transforms like an axial vector (i.e., it acquires an extra sign change). Hence \lambda_\text{eff} would have to change sign under a reflection, and with \lambda_\text{eff} held fixed the constitutive relations are not invariant with respect to reflections! This means that if \lambda_\text{eff} \ne 0 the medium has a handedness and is called a chiral medium.

Extension of the Willis approach to composites with voids

Sometimes the quantity \left\langle \mathbf{u} \right\rangle is not an appropriate macroscopic variable. For example, in materials with voids, \mathbf{u} is undefined inside the voids. Even if the voids are filled with an elastic material whose modulus tends to zero, the value of \left\langle \mathbf{u} \right\rangle will depend on how this limit is taken. Also, for materials such as a rigid matrix containing rubber and lead inclusions (see Figure 1), it makes sense to average \mathbf{u} only over the deformable phases.

Figure 1. A composite consisting of a rigid matrix and deformable phases.

Therefore it makes sense to look for equations for \left\langle \mathbf{u}_w \right\rangle where

 \text{(21)} \qquad 
  \mathbf{u}_w(\mathbf{x}, t) = w(\mathbf{x})~\mathbf{u}(\mathbf{x}, t)

where w(\mathbf{x}) is a weight which could be zero in the region where there are voids. Also, the weights could vary from realization to realization.

Also, if we have \dot{\mathbf{u}} we can recover \mathbf{u} by integrating over time, i.e.,

  \mathbf{u}(t) = \int_{-\infty}^t \dot{\mathbf{u}}(\tau)~\text{d}\tau
         = \int_{-\infty}^{\infty} H(t-\tau)~\dot{\mathbf{u}}(\tau)~\text{d}\tau


where H(v) is the Heaviside step function

   H(v) = \begin{cases}
          1 & \text{for}~~v > 0 \\
          0 & \text{for}~~v < 0 
          \end{cases}

Hence we can write

 \text{(22)} \qquad 
   \mathbf{u} = H \star \dot{\mathbf{u}} ~.
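A quick numerical check of (22), for a scalar velocity on a uniform grid (the function names and values are illustrative): convolving with the Heaviside function reproduces the cumulative time integral of \dot{\mathbf{u}}.

```python
import numpy as np

# Check that u = H * udot is just cumulative time integration.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
udot = np.cos(3.0 * t)               # a velocity history
H = np.ones_like(t)                  # Heaviside function on the causal range t >= 0

u_conv = dt * np.convolve(H, udot)[: len(t)]   # (H * udot)(t)
u_int = np.cumsum(udot) * dt                   # direct integral of udot
u_exact = np.sin(3.0 * t) / 3.0                # closed-form antiderivative

assert np.allclose(u_conv, u_int)              # identical partial sums
assert np.allclose(u_conv, u_exact, atol=5e-3) # quadrature error O(dt)
```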

So, from the definitions of \boldsymbol{\tau} and \boldsymbol{\pi} and using the relation (22), we have

  \begin{bmatrix} \boldsymbol{\varepsilon} \\ \mathbf{u} \end{bmatrix} = 
  \begin{bmatrix} (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0)^{-1} & 0 \\ 0 & H\star(\rho-\rho_0)^{-1}
  \end{bmatrix} \star \begin{bmatrix} \boldsymbol{\tau} \\ \boldsymbol{\pi} \end{bmatrix} ~.

From the Willis equations (17) we have

  \begin{bmatrix} \boldsymbol{\tau} \\ \boldsymbol{\pi} \end{bmatrix} = 
    \mathcal{T} \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\  \left\langle \dot{\mathbf{u}} \right\rangle
    \end{bmatrix} ~.

Therefore,

 \text{(23)} \qquad 
  \begin{bmatrix} \boldsymbol{\varepsilon} \\ \mathbf{u} \end{bmatrix} = 
  \begin{bmatrix} (\boldsymbol{\mathsf{C}} - \boldsymbol{\mathsf{C}}_0)^{-1} & 0 \\ 0 & H\star(\rho-\rho_0)^{-1}
  \end{bmatrix} \star 
    \mathcal{T} \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\  \left\langle \dot{\mathbf{u}} \right\rangle
    \end{bmatrix} ~.

Now, if the weighted strain is defined as

  \boldsymbol{\varepsilon}_w = \frac{1}{2}~[\boldsymbol{\nabla}\mathbf{u}_w + (\boldsymbol{\nabla}\mathbf{u}_w)^T]

then, taking the ensemble average, we have

  \left\langle \boldsymbol{\varepsilon}_w \right\rangle = \frac{1}{2}~\left\langle [\boldsymbol{\nabla}\mathbf{u}_w + (\boldsymbol{\nabla}\mathbf{u}_w)^T] \right\rangle ~.

Using equation (21) and the product rule \boldsymbol{\nabla}\mathbf{u}_w = w~\boldsymbol{\nabla}\mathbf{u} + \boldsymbol{\nabla} w \otimes \mathbf{u}, we can show that

 \text{(24)} \qquad 
  \left\langle \boldsymbol{\varepsilon}_w \right\rangle = \left\langle w~\boldsymbol{\varepsilon} \right\rangle + \frac{1}{2}~\left\langle \boldsymbol{\nabla} w\otimes\mathbf{u}+ 
      \mathbf{u}\otimes\boldsymbol{\nabla} w\right\rangle  ~.

Using (23) we can express (24) in terms of \left\langle \boldsymbol{\varepsilon} \right\rangle and \left\langle \dot{\mathbf{u}} \right\rangle, and hence also in terms of \left\langle \dot{\mathbf{u}} \right\rangle_w. After some algebra (see Milton07 for details), we can show that

  \begin{bmatrix} \left\langle \boldsymbol{\varepsilon}_w \right\rangle \\ \left\langle \dot{\mathbf{u}} \right\rangle_w \end{bmatrix}
  = \boldsymbol{R}_w \star
  \begin{bmatrix} \left\langle \boldsymbol{\varepsilon} \right\rangle \\ \left\langle \dot{\mathbf{u}} \right\rangle \end{bmatrix}

where \boldsymbol{R}_w = \boldsymbol{\mathit{1}} when w(\mathbf{x}) = 1.

Taking the inverse, we can express the Willis equations (20) in terms of \left\langle \boldsymbol{\varepsilon}_w \right\rangle and \left\langle \dot{\mathbf{u}} \right\rangle_w as

  \begin{bmatrix} \left\langle \boldsymbol{\sigma} \right\rangle \\ \left\langle \mathbf{p} \right\rangle \end{bmatrix} = 
  \begin{bmatrix} \boldsymbol{\mathsf{C}}_\text{eff} & \boldsymbol{\mathcal{S}}_\text{eff} \\ \boldsymbol{\mathcal{S}}^\dagger_\text{eff} & 
    \boldsymbol{\rho}_\text{eff} \end{bmatrix} 
  \star \boldsymbol{R}_w^{-1} \star 
  \begin{bmatrix} \left\langle \boldsymbol{\varepsilon}_w \right\rangle \\ \left\langle \dot{\mathbf{u}} \right\rangle_w \end{bmatrix}

or

  \begin{bmatrix} \left\langle \boldsymbol{\sigma} \right\rangle \\ \left\langle \mathbf{p} \right\rangle \end{bmatrix} = 
  \begin{bmatrix} \boldsymbol{\mathsf{C}}^w_\text{eff} & \boldsymbol{\mathcal{S}}^w_\text{eff} \\ \boldsymbol{\mathcal{D}}^w_\text{eff} & 
    \boldsymbol{\rho}^w_\text{eff} \end{bmatrix} 
  \star \begin{bmatrix} \left\langle \boldsymbol{\varepsilon}_w \right\rangle \\ \left\langle \dot{\mathbf{u}} \right\rangle_w \end{bmatrix} ~.
These equations have the same form as the Willis equations. However, \boldsymbol{\mathcal{D}}^w_\text{eff} \ne (\boldsymbol{\mathcal{S}}^w_\text{eff})^\dagger. We now have a means of using the Willis equations even in the case where there are voids.


References

  • [Milton07]    G. W. Milton and J. R. Willis. On modifications of Newton's second law and linear continuum elastodynamics. Proc. R. Soc. London A, 463:855--880, 2007.
  • [Willis81]    J. R. Willis. Variational and related methods for the overall properties of composites. Advances in Applied Mechanics, 21:1--78, 1981.
  • [Willis81a]    J. R. Willis. Variational principles for dynamics problems in inhomogeneous elastic media. Wave Motion, 3:1--11, 1981.
  • [Willis83]    J. R. Willis. The overall elastic response of composite materials. J. Appl. Mech., 50:1202--1209, 1983.
  • [Willis97]    J. R. Willis. Dynamics of composites. In P. Suquet, editor, Continuum Micromechanics: CISM Courses and Lectures No. 377, pages 265--290. Springer-Verlag Wien, New York, 1997.