Much of finite element analysis revolves around forming matrices and
solving the resulting systems of linear equations. This learning resource
gives you a brief review of matrices.
Suppose that you have a linear system of equations
![{\displaystyle {\begin{aligned}a_{11}x_{1}+a_{12}x_{2}+a_{13}x_{3}+a_{14}x_{4}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+a_{23}x_{3}+a_{24}x_{4}&=b_{2}\\a_{31}x_{1}+a_{32}x_{2}+a_{33}x_{3}+a_{34}x_{4}&=b_{3}\\a_{41}x_{1}+a_{42}x_{2}+a_{43}x_{3}+a_{44}x_{4}&=b_{4}\end{aligned}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/992c48f6fc6e6224067a6fd6ebead8a262d989b8)
Matrices provide a simple way of expressing these equations. Thus,
we can instead write
![{\displaystyle {\begin{bmatrix}a_{11}&a_{12}&a_{13}&a_{14}\\a_{21}&a_{22}&a_{23}&a_{24}\\a_{31}&a_{32}&a_{33}&a_{34}\\a_{41}&a_{42}&a_{43}&a_{44}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\b_{3}\\b_{4}\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c0b5008c12261a1898bbc408c736623892d07e76)
An even more compact notation is
![{\displaystyle \left[{\mathsf {A}}\right]\left[{\mathsf {x}}\right]=\left[{\mathsf {b}}\right]~~~~{\text{or}}~~~~\mathbf {A} \mathbf {x} =\mathbf {b} ~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ef632986c10373b1cfd2995247b5ddc3a1e92b9a)
Here $\mathbf{A}$ is a $4 \times 4$ matrix while $\mathbf{x}$ and $\mathbf{b}$ are $4 \times 1$ matrices. In general, an $m \times n$ matrix $\mathbf{A}$ is a set of numbers $a_{ij}$ arranged in $m$ rows and $n$ columns.
![{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&a_{m3}&\dots &a_{mn}\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f5602a17bd6b5468d5456737dab521a1b36e1aa2)
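As a concrete illustration of the compact notation $\mathbf{A}\mathbf{x} = \mathbf{b}$, here is a minimal NumPy sketch (not part of the original resource; the numerical values are arbitrary) that sets up a small system and solves it:

```python
import numpy as np

# An arbitrary 4x4 coefficient matrix A and right-hand side b
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# Solve A x = b for x
x = np.linalg.solve(A, b)
print(x)                      # solution vector
print(np.allclose(A @ x, b))  # True: x satisfies all four equations
```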
Common types of matrices that we encounter in finite elements are:
- a row vector, which has one row and $n$ columns.
![{\displaystyle \mathbf {v} ={\begin{bmatrix}v_{1}&v_{2}&v_{3}&\dots &v_{n}\end{bmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/390fc35c1328e05c3dc31ff7809700281e02b5da)
- a column vector, which has $n$ rows and one column.
![{\displaystyle \mathbf {v} ={\begin{bmatrix}v_{1}\\v_{2}\\v_{3}\\\vdots \\v_{n}\end{bmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/aaa3f3c1d3b1761f2245626ca8b03f90207dfe3b)
- a square matrix that has an equal number of rows and columns.
- a diagonal matrix, which is a square matrix in which only the diagonal elements ($a_{ii}$) are nonzero.
![{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&0&0&\dots &0\\0&a_{22}&0&\dots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &a_{nn}\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c5b97e8fc9d80f20ee15e50cb66e85b78a181d65)
- the identity matrix ($\mathbf{I}$), which is a diagonal matrix with each of its nonzero elements ($a_{ii}$) equal to 1.
![{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0&0&\dots &0\\0&1&0&\dots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &1\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a1fe9cd74a59ce106a0b12ed8bbb69ae09f7f2ed)
- a symmetric matrix, which is a square matrix with elements such that $a_{ij} = a_{ji}$.
![{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{12}&a_{22}&a_{23}&\dots &a_{2n}\\a_{13}&a_{23}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{1n}&a_{2n}&a_{3n}&\dots &a_{nn}\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/39110fc7d71381d45b5c6b89b3bcd111e461840b)
- a skew-symmetric matrix, which is a square matrix with elements such that $a_{ij} = -a_{ji}$.
![{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\-a_{12}&a_{22}&a_{23}&\dots &a_{2n}\\-a_{13}&-a_{23}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\-a_{1n}&-a_{2n}&-a_{3n}&\dots &a_{nn}\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f80b52fab7d24e1e3044a1c8336cb8a6594eb5e3)
Note that the diagonal elements of a skew-symmetric matrix have to be zero: since $a_{ii} = -a_{ii}$, we must have $a_{ii} = 0$.
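All of these types are easy to construct in NumPy; a minimal sketch (the values are arbitrary):

```python
import numpy as np

n = 4
row_vector = np.array([[1.0, 2.0, 3.0, 4.0]])  # shape (1, n)
col_vector = row_vector.T                       # shape (n, 1)
D = np.diag([1.0, 2.0, 3.0, 4.0])               # diagonal matrix
I = np.eye(n)                                   # identity matrix

# A symmetric and a skew-symmetric matrix built from a random square matrix
M = np.random.rand(n, n)
S = 0.5 * (M + M.T)   # symmetric:      S == S.T
K = 0.5 * (M - M.T)   # skew-symmetric: K == -K.T, zero diagonal

print(np.allclose(S, S.T))           # True
print(np.allclose(K, -K.T))          # True
print(np.allclose(np.diag(K), 0.0))  # True: diagonal is zero
```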
Let $\mathbf{A}$ and $\mathbf{B}$ be two $m \times n$ matrices with components $a_{ij}$ and $b_{ij}$, respectively. Then
![{\displaystyle \mathbf {C} =\mathbf {A} +\mathbf {B} \implies c_{ij}=a_{ij}+b_{ij}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c879d7606a034d5c1dc524643908b71d10806978)
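In NumPy, this componentwise addition is simply the `+` operator; a small sketch with made-up values:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

C = A + B   # c_ij = a_ij + b_ij, componentwise
print(C)
```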
Let $\mathbf{A}$ be an $m \times n$ matrix with components $a_{ij}$ and let $\lambda$ be a scalar quantity. Then,
![{\displaystyle \mathbf {C} =\lambda \mathbf {A} \implies c_{ij}=\lambda a_{ij}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f0458deb570083e16e3dc3eda54562b2b522fa76)
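Scalar multiplication is equally direct; again a sketch with arbitrary values:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
lam = 2.5

C = lam * A   # c_ij = lambda * a_ij
print(C)
```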
Let $\mathbf{A}$ be an $m \times n$ matrix with components $a_{ij}$. Let $\mathbf{B}$ be a $p \times q$ matrix with components $b_{ij}$.
The product $\mathbf{C} = \mathbf{A}\mathbf{B}$ is defined only if $p = n$. The matrix $\mathbf{C}$ is an $m \times q$ matrix with components $c_{ij}$. Thus,
![{\displaystyle \mathbf {C} =\mathbf {A} \mathbf {B} \implies c_{ij}=\sum _{k=1}^{n}a_{ik}b_{kj}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dcc9db605f9f14b1c2beced8463090e14878225e)
Similarly, the product $\mathbf{D} = \mathbf{B}\mathbf{A}$ is defined only if $q = m$. The matrix $\mathbf{D}$ is a $p \times n$ matrix with components $d_{ij}$. We have
![{\displaystyle \mathbf {D} =\mathbf {B} \mathbf {A} \implies d_{ij}=\sum _{k=1}^{m}b_{ik}a_{kj}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fc124b88a05186f37b89289ed8dfd7e572756cb0)
Clearly, $\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$ in general, i.e., the matrix product is not commutative.
However, matrix multiplication is distributive. That means
![{\displaystyle \mathbf {A} (\mathbf {B} +\mathbf {C} )=\mathbf {A} \mathbf {B} +\mathbf {A} \mathbf {C} ~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a54cb69481c2b899d9081030846a394a70896ca0)
The product is also associative. That means
![{\displaystyle \mathbf {A} (\mathbf {B} \mathbf {C} )=(\mathbf {A} \mathbf {B} )\mathbf {C} ~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f1e4cfb3772d679e8b9f6c787ab853e8f98e271d)
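The sketch below (with arbitrary small matrices) forms both products with NumPy's `@` operator and spot-checks the properties just stated: $\mathbf{A}\mathbf{B}$ and $\mathbf{B}\mathbf{A}$ differ even in shape when $\mathbf{A}$ is $2 \times 3$ and $\mathbf{B}$ is $3 \times 2$, while distributivity and associativity hold up to round-off.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2x3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 1.0]])        # 3x2

C = A @ B   # 2x2: c_ij = sum_k a_ik b_kj
D = B @ A   # 3x3: a different shape entirely, so AB != BA here
print(C.shape, D.shape)   # (2, 2) (3, 3)

# Distributivity and associativity, checked with random square matrices
P = np.random.rand(3, 3)
Q = np.random.rand(3, 3)
R = np.random.rand(3, 3)
print(np.allclose(P @ (Q + R), P @ Q + P @ R))   # True
print(np.allclose(P @ (Q @ R), (P @ Q) @ R))     # True
```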
Let $\mathbf{A}$ be an $m \times n$ matrix with components $a_{ij}$. Then the transpose of the matrix is defined as the $n \times m$ matrix $\mathbf{B} = \mathbf{A}^T$ with components $b_{ij} = a_{ji}$. That is,
![{\displaystyle \mathbf {B} =\mathbf {A} ^{T}={\begin{bmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\a_{31}&a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&a_{m3}&\dots &a_{mn}\end{bmatrix}}^{T}={\begin{bmatrix}a_{11}&a_{21}&a_{31}&\dots &a_{m1}\\a_{12}&a_{22}&a_{32}&\dots &a_{m2}\\a_{13}&a_{23}&a_{33}&\dots &a_{m3}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{1n}&a_{2n}&a_{3n}&\dots &a_{mn}\end{bmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b87342107127128b9f8d7019fb13eb021d1c0e45)
An important identity involving the transpose of matrices is
![{\displaystyle {(\mathbf {A} \mathbf {B} )^{T}=\mathbf {B} ^{T}\mathbf {A} ^{T}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f9e4a506b466445e8411c1ebb72209ebef8fa611)
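In NumPy the transpose is the `.T` attribute; a quick check of the identity with random rectangular matrices:

```python
import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)

print(A.T.shape)                          # (3, 2): b_ij = a_ji
print(np.allclose((A @ B).T, B.T @ A.T))  # True: (AB)^T = B^T A^T
```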
The determinant of a matrix is defined only for square matrices.
For a $2 \times 2$ matrix $\mathbf{A}$, we have
![{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}}\implies \det(\mathbf {A} )={\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}}=a_{11}a_{22}-a_{12}a_{21}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4ee69d46c818809d5efbca460cef4a67d462ed96)
For an $n \times n$ matrix, the determinant is calculated by expanding into minors as
![{\displaystyle {\begin{aligned}&\det(\mathbf {A} )={\begin{vmatrix}a_{11}&a_{12}&a_{13}&\dots &a_{1n}\\a_{21}&a_{22}&a_{23}&\dots &a_{2n}\\a_{31}&a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&a_{n3}&\dots &a_{nn}\end{vmatrix}}\\&=a_{11}{\begin{vmatrix}a_{22}&a_{23}&\dots &a_{2n}\\a_{32}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\ddots &\vdots \\a_{n2}&a_{n3}&\dots &a_{nn}\end{vmatrix}}-a_{12}{\begin{vmatrix}a_{21}&a_{23}&\dots &a_{2n}\\a_{31}&a_{33}&\dots &a_{3n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n3}&\dots &a_{nn}\end{vmatrix}}+\dots \pm a_{1n}{\begin{vmatrix}a_{21}&a_{22}&\dots &a_{2(n-1)}\\a_{31}&a_{32}&\dots &a_{3(n-1)}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\dots &a_{n(n-1)}\end{vmatrix}}\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3e882c4d74d02e397506c917c2a76e312bb3d48f)
In short, the determinant of a matrix $\mathbf{A}$ has the value
![{\displaystyle {\det(\mathbf {A} )=\sum _{j=1}^{n}(-1)^{1+j}a_{1j}M_{1j}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f3781616e549fdab2a7627fbec21ca9f0db7f54d)
where $M_{1j}$ is the determinant of the submatrix of $\mathbf{A}$ formed by eliminating row $1$ and column $j$ from $\mathbf{A}$.
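A direct, if inefficient, recursive implementation of this cofactor expansion along the first row is sketched below; `np.linalg.det`, which uses an LU factorization instead, serves as a check.

```python
import numpy as np

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # M_1j: the minor obtained by deleting row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det(minor)   # (-1)^(1+j) with 1-based j
    return total

A = np.random.rand(4, 4)
print(np.isclose(det(A), np.linalg.det(A)))   # True
```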
Some useful identities involving the determinant are given below.
- If $\mathbf{A}$ is an $n \times n$ matrix, then
![{\displaystyle \det(\mathbf {A} )=\det(\mathbf {A} ^{T})~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4b2ad1ca3a5b78b83a7cd5e39f9674989a82feb6)
- If $\lambda$ is a constant and $\mathbf{A}$ is an $n \times n$ matrix, then
![{\displaystyle \det(\lambda \mathbf {A} )=\lambda ^{n}\det(\mathbf {A} )\implies \det(-\mathbf {A} )=(-1)^{n}\det(\mathbf {A} )~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1200cbebc63cad51a52e237477ab7db5609851ef)
- If $\mathbf{A}$ and $\mathbf{B}$ are two $n \times n$ matrices, then
![{\displaystyle \det(\mathbf {A} \mathbf {B} )=\det(\mathbf {A} )\det(\mathbf {B} )~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/83971e8330bb82a731c02583e0a39d36cde3829a)
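These identities are easy to spot-check numerically; a short sketch with random $3 \times 3$ matrices:

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
lam = 2.0
n = 3

det = np.linalg.det
print(np.isclose(det(A), det(A.T)))               # det(A) = det(A^T)
print(np.isclose(det(lam * A), lam**n * det(A)))  # det(lam A) = lam^n det(A)
print(np.isclose(det(A @ B), det(A) * det(B)))    # det(AB) = det(A) det(B)
```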
Let $\mathbf{A}$ be an $n \times n$ matrix. The inverse of $\mathbf{A}$ is denoted by $\mathbf{A}^{-1}$ and is defined such that
![{\displaystyle {\mathbf {A} \mathbf {A} ^{-1}=\mathbf {I} }}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4d4878a696102c443f1838262588dd9bf7c1f65d)
where $\mathbf{I}$ is the $n \times n$ identity matrix.
The inverse exists only if $\det(\mathbf{A}) \neq 0$. A singular matrix ($\det(\mathbf{A}) = 0$) does not have an inverse.
An important identity involving the inverse is
![{\displaystyle {(\mathbf {A} \mathbf {B} )^{-1}=\mathbf {B} ^{-1}\mathbf {A} ^{-1},}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2321fe8e731accc4ad582260609a00e1859e5806)
since this leads to
$$(\mathbf{A}\mathbf{B})(\mathbf{B}^{-1}\mathbf{A}^{-1}) = \mathbf{A}(\mathbf{B}\mathbf{B}^{-1})\mathbf{A}^{-1} = \mathbf{A}\mathbf{I}\mathbf{A}^{-1} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}~.$$
Some other identities involving the inverse of a matrix are given below.
- The determinant of a matrix is equal to the multiplicative inverse of the
determinant of its inverse.
![{\displaystyle \det(\mathbf {A} )={\cfrac {1}{\det(\mathbf {A} ^{-1})}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/627295bbf9a2e117f2edc4bcc1050d6b2709a089)
- The determinant of a similarity transformation of a matrix is equal to the
determinant of the original matrix.
![{\displaystyle \det(\mathbf {B} \mathbf {A} \mathbf {B} ^{-1})=\det(\mathbf {A} )~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e2c1d80c3ba2ec0ec799e69dbbf297d75db66603)
We usually use numerical methods such as Gaussian elimination to compute
the inverse of a matrix.
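In NumPy the inverse is computed with `np.linalg.inv`, which internally uses an LU factorization (essentially the Gaussian elimination mentioned above). The sketch below also spot-checks the identities; note that to solve $\mathbf{A}\mathbf{x} = \mathbf{b}$ one should prefer `np.linalg.solve` over forming the inverse explicitly.

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)

Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(3)))              # A A^{-1} = I
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ Ainv))          # (AB)^{-1} = B^{-1} A^{-1}
print(np.isclose(np.linalg.det(A),
                 1.0 / np.linalg.det(Ainv)))         # det(A) = 1 / det(A^{-1})
print(np.isclose(np.linalg.det(B @ A @ np.linalg.inv(B)),
                 np.linalg.det(A)))                  # similarity transformation
```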
A thorough explanation of this material can be found at Eigenvalue, eigenvector and eigenspace. Recall that a nonzero vector $\mathbf{v}$ is an eigenvector of a square matrix $\mathbf{A}$ if $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$ (the eigenvalue). Let us consider the following examples:
- Let
![{\displaystyle \mathbf {A} ={\begin{bmatrix}1&6\\5&2\end{bmatrix}},\mathbf {v} ={\begin{bmatrix}6\\-5\end{bmatrix}},\mathbf {t} ={\begin{bmatrix}7\\4\end{bmatrix}}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/009152c537be48aa7cd2be32120cee963ec062e9)
Which vector is an eigenvector for $\mathbf{A}$? We have
$$\mathbf{A}\mathbf{v} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 6 \\ -5 \end{bmatrix} = \begin{bmatrix} -24 \\ 20 \end{bmatrix} = -4\,\mathbf{v}~,$$
and so $\mathbf{A}\mathbf{v}$ is a scalar multiple of $\mathbf{v}$. Thus,
$\mathbf{v}$ is an eigenvector of $\mathbf{A}$, with eigenvalue $-4$.
- Is $\mathbf{t}$ an eigenvector for $\mathbf{A}$? We have
$$\mathbf{A}\mathbf{t} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 7 \\ 4 \end{bmatrix} = \begin{bmatrix} 31 \\ 43 \end{bmatrix}~,$$
which is not a scalar multiple of $\mathbf{t}$. Thus,
$\mathbf{t}$ is not an eigenvector for $\mathbf{A}$.
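The same check in NumPy: a vector is an eigenvector when $\mathbf{A}\mathbf{v}$ is a scalar multiple of $\mathbf{v}$, and `np.linalg.eig` recovers the eigenvalue $-4$ found above.

```python
import numpy as np

A = np.array([[1.0, 6.0],
              [5.0, 2.0]])
v = np.array([6.0, -5.0])
t = np.array([7.0, 4.0])

print(A @ v)   # [-24.  20.] = -4 * v, so v is an eigenvector (eigenvalue -4)
print(A @ t)   # [31. 43.], not a multiple of t, so t is not an eigenvector

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # contains -4 (and 7)
```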