# Continuum mechanics/Matrices

Much of finite elements revolves around forming matrices and solving systems of linear equations using matrices. This learning resource gives you a brief review of matrices.

## Matrices

Suppose that you have a linear system of equations

${\displaystyle {\begin{aligned}a_{11}x_{1}+a_{12}x_{2}+a_{13}x_{3}+a_{14}x_{4}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+a_{23}x_{3}+a_{24}x_{4}&=b_{2}\\a_{31}x_{1}+a_{32}x_{2}+a_{33}x_{3}+a_{34}x_{4}&=b_{3}\\a_{41}x_{1}+a_{42}x_{2}+a_{43}x_{3}+a_{44}x_{4}&=b_{4}\end{aligned}}~.}$

Matrices provide a simple way of expressing these equations. Thus, we can instead write

${\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}&a_{14}\\a_{21}&a_{22}&a_{23}&a_{24}\\a_{31}&a_{32}&a_{33}&a_{34}\\a_{41}&a_{42}&a_{43}&a_{44}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\b_{3}\\b_{4}\end{bmatrix}}~.}$

An even more compact notation is

${\displaystyle \left[{\mathsf {A}}\right]\left[{\mathsf {x}}\right]=\left[{\mathsf {b}}\right]~~~~{\text{or}}~~~~\mathbf {A} \mathbf {x} =\mathbf {b} ~.}$

Here ${\displaystyle \mathbf {A} }$ is a ${\displaystyle 4\times 4}$ matrix while ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {b} }$ are ${\displaystyle 4\times 1}$ matrices. In general, an ${\displaystyle m\times n}$ matrix ${\displaystyle \mathbf {A} }$ is a set of numbers arranged in ${\displaystyle m}$ rows and ${\displaystyle n}$ columns.

${\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&a_{m3}&\dots &a_{mn}\end{bmatrix}}~.}$
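Concretely, the compact form ${\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }$ is just bookkeeping for the row-by-row sums written out above. A minimal sketch in plain Python (nested lists stand in for matrices; the helper name `matvec` is my own, not from any library):

```python
# An m x n matrix is stored as m rows of n numbers each.
A = [[2.0, 1.0],
     [1.0, 3.0]]
x = [1.0, 2.0]

def matvec(A, x):
    """Return b = A x, where b_i = sum_j a_ij * x_j."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

b = matvec(A, x)  # [2*1 + 1*2, 1*1 + 3*2] = [4.0, 7.0]
```

Each entry of `b` is one equation of the original system evaluated at `x`.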

## Practice Exercises

Practice: Expressing Linear Equations As Matrices

## Types of Matrices

Common types of matrices that we encounter in finite elements are:

• a row vector that has one row and ${\displaystyle n}$ columns.
${\displaystyle \mathbf {v} ={\begin{bmatrix}v_{1}&v_{2}&v_{3}&\dots &v_{n}\end{bmatrix}}}$
• a column vector that has ${\displaystyle n}$ rows and one column.
${\displaystyle \mathbf {v} ={\begin{bmatrix}v_{1}\\v_{2}\\v_{3}\\\vdots \\v_{n}\end{bmatrix}}}$
• a square matrix that has an equal number of rows and columns.
• a diagonal matrix which is a square matrix with only the diagonal elements (${\displaystyle a_{ii}}$) nonzero.

${\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&0&0&\dots &0\\0&a_{22}&0&\dots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &a_{nn}\end{bmatrix}}~.}$
• the identity matrix (${\displaystyle \mathbf {I} }$) which is a diagonal matrix with each diagonal element equal to 1.

${\displaystyle \mathbf {I} ={\begin{bmatrix}1&0&0&\dots &0\\0&1&0&\dots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &1\end{bmatrix}}~.}$
• a symmetric matrix which is a square matrix with elements such that ${\displaystyle a_{ij}=a_{ji}}$.

${\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{12}&a_{22}&a_{23}&\dots &a_{2n}\\a_{13}&a_{23}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{1n}&a_{2n}&a_{3n}&\dots &a_{nn}\end{bmatrix}}~.}$
• a skew-symmetric matrix which is a square matrix with elements such that ${\displaystyle a_{ij}=-a_{ji}}$.

${\displaystyle \mathbf {A} ={\begin{bmatrix}0&a_{12}&a_{13}&\dots &a_{1n}\\-a_{12}&0&a_{23}&\dots &a_{2n}\\-a_{13}&-a_{23}&0&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\-a_{1n}&-a_{2n}&-a_{3n}&\dots &0\end{bmatrix}}~.}$

Note that the diagonal elements of a skew-symmetric matrix have to be zero: ${\displaystyle a_{ii}=-a_{ii}\Rightarrow a_{ii}=0}$.
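These definitions reduce to simple element-wise tests. A small sketch using the same nested-list representation (the function names are mine, not from any library):

```python
def is_symmetric(A):
    """True if a_ij == a_ji for every pair (i, j)."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(A):
    """True if a_ij == -a_ji; this forces every diagonal entry to be 0."""
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

def identity(n):
    """The n x n identity matrix: ones on the diagonal, zeros elsewhere."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

S = [[0.0, 2.0],
     [-2.0, 0.0]]  # skew-symmetric; its diagonal is necessarily zero
```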

## Addition of matrices

Let ${\displaystyle \mathbf {A} }$ and ${\displaystyle \mathbf {B} }$ be two ${\displaystyle m\times n}$ matrices with components ${\displaystyle a_{ij}}$ and ${\displaystyle b_{ij}}$, respectively. Then

${\displaystyle \mathbf {C} =\mathbf {A} +\mathbf {B} \implies c_{ij}=a_{ij}+b_{ij}}$

## Multiplication by a scalar

Let ${\displaystyle \mathbf {A} }$ be an ${\displaystyle m\times n}$ matrix with components ${\displaystyle a_{ij}}$ and let ${\displaystyle \lambda }$ be a scalar quantity. Then,

${\displaystyle \mathbf {C} =\lambda \mathbf {A} \implies c_{ij}=\lambda a_{ij}}$
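Both matrix addition and scalar multiplication act entry by entry, so each reduces to a loop over components. A sketch with nested lists (the helper names `add` and `scale` are illustrative):

```python
def add(A, B):
    """C = A + B: c_ij = a_ij + b_ij (shapes must match)."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(lam, A):
    """C = lam * A: c_ij = lam * a_ij."""
    return [[lam * a for a in row] for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = add(A, B)      # [[6.0, 8.0], [10.0, 12.0]]
D = scale(2.0, A)  # [[2.0, 4.0], [6.0, 8.0]]
```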

## Multiplication of matrices

Let ${\displaystyle \mathbf {A} }$ be an ${\displaystyle m\times n}$ matrix with components ${\displaystyle a_{ij}}$. Let ${\displaystyle \mathbf {B} }$ be a ${\displaystyle p\times q}$ matrix with components ${\displaystyle b_{ij}}$.

The product ${\displaystyle \mathbf {C} =\mathbf {A} \mathbf {B} }$ is defined only if ${\displaystyle n=p}$. The matrix ${\displaystyle \mathbf {C} }$ is an ${\displaystyle m\times q}$ matrix with components ${\displaystyle c_{ij}}$. Thus,

${\displaystyle \mathbf {C} =\mathbf {A} \mathbf {B} \implies c_{ij}=\sum _{k=1}^{n}a_{ik}b_{kj}}$

Similarly, the product ${\displaystyle \mathbf {D} =\mathbf {B} \mathbf {A} }$ is defined only if ${\displaystyle q=m}$. The matrix ${\displaystyle \mathbf {D} }$ is a ${\displaystyle p\times n}$ matrix with components ${\displaystyle d_{ij}}$. We have

${\displaystyle \mathbf {D} =\mathbf {B} \mathbf {A} \implies d_{ij}=\sum _{k=1}^{m}b_{ik}a_{kj}}$

Clearly, ${\displaystyle \mathbf {C} \neq \mathbf {D} }$ in general, i.e., the matrix product is not commutative.

However, matrix multiplication is distributive. That means

${\displaystyle \mathbf {A} (\mathbf {B} +\mathbf {C} )=\mathbf {A} \mathbf {B} +\mathbf {A} \mathbf {C} ~.}$

The product is also associative. That means

${\displaystyle \mathbf {A} (\mathbf {B} \mathbf {C} )=(\mathbf {A} \mathbf {B} )\mathbf {C} ~.}$
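The component formula ${\displaystyle c_{ij}=\sum _{k}a_{ik}b_{kj}}$ translates directly into a triple loop. A sketch (the helper name `matmul` is mine) that also exhibits the non-commutativity noted above:

```python
def matmul(A, B):
    """C = A B with c_ij = sum_k a_ik * b_kj; requires cols(A) == rows(B)."""
    m, n, q = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(q)]
            for i in range(m)]

A = [[1.0, 2.0],
     [0.0, 1.0]]
B = [[0.0, 1.0],
     [1.0, 0.0]]
C = matmul(A, B)  # [[2.0, 1.0], [1.0, 0.0]]
D = matmul(B, A)  # [[0.0, 1.0], [1.0, 2.0]]
# C != D: the matrix product is not commutative.
```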

## Transpose of a matrix

Let ${\displaystyle \mathbf {A} }$ be an ${\displaystyle m\times n}$ matrix with components ${\displaystyle a_{ij}}$. Then the transpose of the matrix is defined as the ${\displaystyle n\times m}$ matrix ${\displaystyle \mathbf {B} =\mathbf {A} ^{T}}$ with components ${\displaystyle b_{ij}=a_{ji}}$. That is,

${\displaystyle \mathbf {B} =\mathbf {A} ^{T}={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\a_{31}&a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&a_{m3}&\dots &a_{mn}\end{bmatrix}}^{T}={\begin{bmatrix}a_{11}&a_{21}&a_{31}&\dots &a_{m1}\\a_{12}&a_{22}&a_{32}&\dots &a_{m2}\\a_{13}&a_{23}&a_{33}&\dots &a_{m3}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{1n}&a_{2n}&a_{3n}&\dots &a_{mn}\end{bmatrix}}}$

An important identity involving the transpose of matrices is

${\displaystyle {(\mathbf {A} \mathbf {B} )^{T}=\mathbf {B} ^{T}\mathbf {A} ^{T}}~.}$
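The transpose simply swaps rows and columns, which in Python is a single `zip`. A sketch that also checks the identity ${\displaystyle (\mathbf {A} \mathbf {B} )^{T}=\mathbf {B} ^{T}\mathbf {A} ^{T}}$ on a small example (helper names are mine):

```python
def transpose(A):
    """B = A^T with b_ij = a_ji: an n x m matrix from an m x n one."""
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    """C = A B via the usual sum over the inner index."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]   # 2 x 3
B = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]        # 3 x 2
lhs = transpose(matmul(A, B))             # (A B)^T
rhs = matmul(transpose(B), transpose(A))  # B^T A^T
# lhs == rhs == [[4.0, 10.0], [5.0, 11.0]]
```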

## Determinant of a matrix

The determinant of a matrix is defined only for square matrices.

For a ${\displaystyle 2\times 2}$ matrix ${\displaystyle \mathbf {A} }$, we have

${\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}}\implies \det(\mathbf {A} )={\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}}=a_{11}a_{22}-a_{12}a_{21}~.}$

For an ${\displaystyle n\times n}$ matrix, the determinant is calculated by expanding into minors as

${\displaystyle {\begin{aligned}&\det(\mathbf {A} )={\begin{vmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\a_{31}&a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\dots &a_{nn}\end{vmatrix}}\\&=a_{11}{\begin{vmatrix}a_{22}&a_{23}&\dots &a_{2n}\\a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\ddots &\vdots \\a_{n2}&a_{n3}&\dots &a_{nn}\end{vmatrix}}-a_{12}{\begin{vmatrix}a_{21}&a_{23}&\dots &a_{2n}\\a_{31}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n3}&\dots &a_{nn}\end{vmatrix}}+\dots \pm a_{1n}{\begin{vmatrix}a_{21}&a_{22}&\dots &a_{2(n-1)}\\a_{31}&a_{32}&\dots &a_{3(n-1)}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\dots &a_{n(n-1)}\end{vmatrix}}\end{aligned}}}$

In short, the determinant of a matrix ${\displaystyle \mathbf {A} }$ has the value

${\displaystyle {\det(\mathbf {A} )=\sum _{j=1}^{n}(-1)^{1+j}a_{1j}M_{1j}}}$

where ${\displaystyle M_{ij}}$ is the determinant of the submatrix of ${\displaystyle \mathbf {A} }$ formed by eliminating row ${\displaystyle i}$ and column ${\displaystyle j}$ from ${\displaystyle \mathbf {A} }$.
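The cofactor formula above is directly recursive: the determinant of an ${\displaystyle n\times n}$ matrix reduces to ${\displaystyle n}$ determinants of ${\displaystyle (n-1)\times (n-1)}$ minors. A sketch (fine for small matrices; for large ones Gaussian elimination is far cheaper):

```python
def det(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = sum_j (-1)**(1+j) * a_1j * M_1j (1-based indices)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: drop row 0 and column j, then recurse.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

Here `(-1) ** j` matches the sign ${\displaystyle (-1)^{1+j}}$ of the 1-based formula because Python indices start at 0.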

Some useful identities involving the determinant are given below.

• If ${\displaystyle \mathbf {A} }$ is an ${\displaystyle n\times n}$ matrix, then
${\displaystyle \det(\mathbf {A} )=\det(\mathbf {A} ^{T})~.}$
• If ${\displaystyle \lambda }$ is a constant and ${\displaystyle \mathbf {A} }$ is an ${\displaystyle n\times n}$ matrix, then
${\displaystyle \det(\lambda \mathbf {A} )=\lambda ^{n}\det(\mathbf {A} )\implies \det(-\mathbf {A} )=(-1)^{n}\det(\mathbf {A} )~.}$
• If ${\displaystyle \mathbf {A} }$ and ${\displaystyle \mathbf {B} }$ are two ${\displaystyle n\times n}$ matrices, then
${\displaystyle \det(\mathbf {A} \mathbf {B} )=\det(\mathbf {A} )\det(\mathbf {B} )~.}$

If you think you understand determinants, take the quiz.

## Inverse of a matrix

Let ${\displaystyle \mathbf {A} }$ be an ${\displaystyle n\times n}$ matrix. The inverse of ${\displaystyle \mathbf {A} }$ is denoted by ${\displaystyle \mathbf {A} ^{-1}}$ and is defined such that

${\displaystyle {\mathbf {A} \mathbf {A} ^{-1}=\mathbf {A} ^{-1}\mathbf {A} =\mathbf {I} }}$

where ${\displaystyle \mathbf {I} }$ is the ${\displaystyle n\times n}$ identity matrix.

The inverse exists only if ${\displaystyle \det(\mathbf {A} )\neq 0}$. A matrix whose determinant is zero is called singular and does not have an inverse.

An important identity involving the inverse is

${\displaystyle {(\mathbf {A} \mathbf {B} )^{-1}=\mathbf {B} ^{-1}\mathbf {A} ^{-1},}}$

This holds because ${\displaystyle {(\mathbf {B} ^{-1}\mathbf {A} ^{-1})(\mathbf {A} \mathbf {B} )=\mathbf {B} ^{-1}(\mathbf {A} ^{-1}\mathbf {A} )\mathbf {B} =\mathbf {B} ^{-1}\mathbf {I} \mathbf {B} =\mathbf {B} ^{-1}\mathbf {B} =\mathbf {I} }}$, so ${\displaystyle \mathbf {B} ^{-1}\mathbf {A} ^{-1}}$ is indeed the inverse of ${\displaystyle \mathbf {A} \mathbf {B} }$.

Some other identities involving the inverse of a matrix are given below.

• The determinant of a matrix is equal to the multiplicative inverse of the determinant of its inverse.
${\displaystyle \det(\mathbf {A} )={\cfrac {1}{\det(\mathbf {A} ^{-1})}}~.}$
• The determinant of a similarity transformation of a matrix is equal to the determinant of the original matrix.
${\displaystyle \det(\mathbf {B} \mathbf {A} \mathbf {B} ^{-1})=\det(\mathbf {A} )~.}$

We usually use numerical methods such as Gaussian elimination to compute the inverse of a matrix.
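One way to carry out such an elimination is Gauss-Jordan reduction of the augmented matrix ${\displaystyle [\mathbf {A} \,|\,\mathbf {I} ]}$: once the left block has been reduced to ${\displaystyle \mathbf {I} }$, the right block holds ${\displaystyle \mathbf {A} ^{-1}}$. A minimal sketch with partial pivoting (not production code; practical solvers rarely form the inverse explicitly):

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I],
    with partial pivoting; raises ValueError if A is singular."""
    n = len(A)
    # Augment A with the identity: [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[piv][col]) < 1e-12:
            raise ValueError("matrix is singular (det(A) = 0)")
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]  # scale pivot row to 1
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * pv for v, pv in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]  # right block is now A^{-1}

A = [[4.0, 7.0],
     [2.0, 6.0]]
Ainv = inverse(A)  # close to [[0.6, -0.7], [-0.2, 0.4]]
```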

## Eigenvalues and eigenvectors

A thorough explanation of this material can be found at Eigenvalue, eigenvector and eigenspace. For now, let us consider the following examples:

• Let ${\displaystyle \mathbf {A} ={\begin{bmatrix}1&6\\5&2\end{bmatrix}},\mathbf {v} ={\begin{bmatrix}6\\-5\end{bmatrix}},\mathbf {t} ={\begin{bmatrix}7\\4\end{bmatrix}}~.}$

Which vector is an eigenvector of ${\displaystyle \mathbf {A} }$?

We have ${\displaystyle \mathbf {A} \mathbf {v} ={\begin{bmatrix}1&6\\5&2\end{bmatrix}}{\begin{bmatrix}6\\-5\end{bmatrix}}={\begin{bmatrix}-24\\20\end{bmatrix}}=-4{\begin{bmatrix}6\\-5\end{bmatrix}}}$ , and ${\displaystyle \mathbf {A} \mathbf {t} ={\begin{bmatrix}1&6\\5&2\end{bmatrix}}{\begin{bmatrix}7\\4\end{bmatrix}}={\begin{bmatrix}31\\43\end{bmatrix}}~.}$

Thus, ${\displaystyle \mathbf {v} }$ is an eigenvector of ${\displaystyle \mathbf {A} }$ with eigenvalue ${\displaystyle -4}$, while ${\displaystyle \mathbf {t} }$ is not an eigenvector.

• Is ${\displaystyle \mathbf {u} ={\begin{bmatrix}1\\4\end{bmatrix}}}$ an eigenvector for ${\displaystyle \mathbf {A} ={\begin{bmatrix}-3&-3\\1&8\end{bmatrix}}}$ ?

Since ${\displaystyle \mathbf {A} \mathbf {u} ={\begin{bmatrix}-3&-3\\1&8\end{bmatrix}}{\begin{bmatrix}1\\4\end{bmatrix}}={\begin{bmatrix}-15\\33\end{bmatrix}}}$ is not a scalar multiple of ${\displaystyle \mathbf {u} }$, we conclude that ${\displaystyle \mathbf {u} }$ is not an eigenvector of ${\displaystyle \mathbf {A} }$.
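The checks in both examples follow the same recipe: compute ${\displaystyle \mathbf {A} \mathbf {v} }$ and test whether it is a scalar multiple of ${\displaystyle \mathbf {v} }$. A sketch (the function name is mine; `v` is assumed nonzero):

```python
def is_eigenvector(A, v, tol=1e-9):
    """Return (True, lam) if A v == lam v for some scalar lam,
    else (False, None). The vector v must be nonzero."""
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    k = next(i for i, x in enumerate(v) if x != 0)  # read lam off one entry
    lam = Av[k] / v[k]
    ok = all(abs(av - lam * x) <= tol for av, x in zip(Av, v))
    return (ok, lam) if ok else (False, None)

A = [[1.0, 6.0],
     [5.0, 2.0]]
check_v = is_eigenvector(A, [6.0, -5.0])  # (True, -4.0)
check_t = is_eigenvector(A, [7.0, 4.0])   # (False, None)
```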