Linear operators can be thought of as infinite dimensional matrices. Hence we
can use well known results from matrix theory when dealing with linear operators.
However, we have to be careful. A finite dimensional matrix has an inverse if
none of its eigenvalues are zero. For an infinite dimensional matrix, even
though all the eigenvalues may be nonzero, we might have a sequence of
eigenvalues that tend to zero. There are several other subtleties that we will
discuss in the course of this series of lectures.
Let us start off with the basics, i.e., linear vector spaces.
Let $\mathcal{S}$ be a linear vector space. Let us first define addition and scalar
multiplication in this space. The addition operation acts completely in $\mathcal{S}$
while the scalar multiplication operation may involve multiplication either
by a real number (in $\mathbb{R}$) or by a complex number (in $\mathbb{C}$). These
operations must have the following closure properties:
- If $\mathbf{x}, \mathbf{y} \in \mathcal{S}$ then $\mathbf{x} + \mathbf{y} \in \mathcal{S}$.
- If $\alpha \in \mathbb{R}$ (or $\alpha \in \mathbb{C}$) and $\mathbf{x} \in \mathcal{S}$ then $\alpha\,\mathbf{x} \in \mathcal{S}$.
And the following laws must hold for addition:
- $\mathbf{x} + \mathbf{y} = \mathbf{y} + \mathbf{x}$ (Commutative law).
- $(\mathbf{x} + \mathbf{y}) + \mathbf{z} = \mathbf{x} + (\mathbf{y} + \mathbf{z})$ (Associative law).
- $\exists\,\mathbf{0} \in \mathcal{S}$ such that $\mathbf{x} + \mathbf{0} = \mathbf{x}$ (Additive identity).
- $\forall\,\mathbf{x} \in \mathcal{S}$ $\exists\,(-\mathbf{x}) \in \mathcal{S}$ such that $\mathbf{x} + (-\mathbf{x}) = \mathbf{0}$ (Additive inverse).
For scalar multiplication we have the properties:
- $\alpha\,(\beta\,\mathbf{x}) = (\alpha\,\beta)\,\mathbf{x}$.
- $(\alpha + \beta)\,\mathbf{x} = \alpha\,\mathbf{x} + \beta\,\mathbf{x}$.
- $\alpha\,(\mathbf{x} + \mathbf{y}) = \alpha\,\mathbf{x} + \alpha\,\mathbf{y}$.
- $1\,\mathbf{x} = \mathbf{x}$.
- $0\,\mathbf{x} = \mathbf{0}$.
The $n$-tuples $(x_1, x_2, \dots, x_n)$ with

$$\begin{aligned}
(x_1, x_2, \dots, x_n) + (y_1, y_2, \dots, y_n) &= (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n)\\
\alpha\,(x_1, x_2, \dots, x_n) &= (\alpha\,x_1, \alpha\,x_2, \dots, \alpha\,x_n)
\end{aligned}$$

form a linear vector space.
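As a quick illustration (a minimal numpy sketch added here, not part of the original notes), these componentwise rules and a couple of the axioms can be checked numerically:

```python
import numpy as np

# n-tuples represented as numpy arrays; addition and scalar
# multiplication act componentwise, matching the rules above.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
alpha = 2.5

print(x + y)      # (x1+y1, x2+y2, x3+y3) -> [5. 7. 9.]
print(alpha * x)  # (a*x1, a*x2, a*x3)    -> [2.5 5.  7.5]

# Spot-check two of the axioms listed above.
assert np.allclose(x + y, y + x)                         # commutative law
assert np.allclose(alpha * (x + y), alpha*x + alpha*y)   # distributivity
```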
Another example of a linear vector space is the set of $2 \times 2$ matrices with
addition and scalar multiplication defined as usual, or more generally the set
of $m \times n$ matrices.
$$\alpha\begin{bmatrix}x_{11}&x_{12}\\x_{21}&x_{22}\end{bmatrix} = \begin{bmatrix}\alpha\,x_{11}&\alpha\,x_{12}\\\alpha\,x_{21}&\alpha\,x_{22}\end{bmatrix}$$
The space of $n$-th order polynomials forms a linear vector space, where

$$p_n = \sum_{j=1}^{n} \alpha_j\,x^j$$
The space of continuous functions, say in $[0, 1]$, also forms a linear vector
space with addition and scalar multiplication defined as usual.
A set of vectors $\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\}$ is said to be linearly
dependent if there exist constants $\alpha_1, \alpha_2, \dots, \alpha_n$, not all zero, such that

$$\alpha_1\,\mathbf{x}_1 + \alpha_2\,\mathbf{x}_2 + \dots + \alpha_n\,\mathbf{x}_n = \mathbf{0}$$

If such a set of constants does not exist
then the vectors are said to be linearly independent.
Consider the matrices

$$\boldsymbol{M}_1 = \begin{bmatrix}1&0\\0&2\end{bmatrix}, \quad \boldsymbol{M}_2 = \begin{bmatrix}1&0\\0&0\end{bmatrix}, \quad \boldsymbol{M}_3 = \begin{bmatrix}0&0\\0&-1\end{bmatrix}$$

These are linearly dependent since $\boldsymbol{M}_1 - \boldsymbol{M}_2 + 2\,\boldsymbol{M}_3 = \boldsymbol{0}$.
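A quick numerical check (a sketch added here, not from the original notes): verify the dependence relation directly, or flatten the matrices into vectors and look at the rank.

```python
import numpy as np

M1 = np.array([[1.0, 0.0], [0.0, 2.0]])
M2 = np.array([[1.0, 0.0], [0.0, 0.0]])
M3 = np.array([[0.0, 0.0], [0.0, -1.0]])

# Verify the dependence relation M1 - M2 + 2*M3 = 0 directly.
assert np.allclose(M1 - M2 + 2.0 * M3, 0.0)

# Equivalently: stack the flattened matrices as rows; a rank
# smaller than 3 means the set is linearly dependent.
A = np.vstack([M1.ravel(), M2.ravel(), M3.ravel()])
print(np.linalg.matrix_rank(A))  # prints 2, so the set is dependent
```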
The span of a set of vectors $\mathcal{T} = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\}$ is the set of all vectors that are
linear combinations of the vectors $\mathbf{x}_i$. Thus

$$\text{span}(\mathcal{T}) = \{\alpha_1\,\mathbf{x}_1 + \alpha_2\,\mathbf{x}_2 + \dots + \alpha_n\,\mathbf{x}_n\}$$

as the scalars $\alpha_1, \alpha_2, \dots, \alpha_n$ vary.
If $\text{span}(\mathcal{T}) = \mathcal{S}$ then $\mathcal{T}$ is said to be a spanning set.
If $\mathcal{T}$ is a spanning set and its elements are linearly independent then we call
it a basis for $\mathcal{S}$. A vector in $\mathcal{S}$ then has a unique representation as a
linear combination of the basis elements. (Why unique? If a vector had two
different representations, subtracting one from the other would give a nontrivial
linear combination of the basis elements equal to $\mathbf{0}$, contradicting their
linear independence.)
The dimension of a space $\mathcal{S}$ is the number of elements in a basis. This is
independent of the actual elements that form the basis and is a property of $\mathcal{S}$.
Any two non-collinear vectors in $\mathbb{R}^2$ form a basis for $\mathbb{R}^2$
because any other vector in $\mathbb{R}^2$ can be expressed as a linear
combination of the two vectors.
A basis for the linear space of $2 \times 2$ matrices is

$$\begin{bmatrix}1&0\\0&0\end{bmatrix}, \quad \begin{bmatrix}1&1\\0&0\end{bmatrix}, \quad \begin{bmatrix}1&1\\0&1\end{bmatrix}, \quad \begin{bmatrix}1&3\\1&1\end{bmatrix}$$
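To see that these four matrices are indeed a basis, one can check that they are linearly independent; since the space of $2 \times 2$ matrices has dimension 4, four independent elements must span it. A small numpy sketch (not from the original notes):

```python
import numpy as np

B = [np.array([[1, 0], [0, 0]]),
     np.array([[1, 1], [0, 0]]),
     np.array([[1, 1], [0, 1]]),
     np.array([[1, 3], [1, 1]])]

# Flatten each 2x2 matrix into a length-4 vector and stack as rows.
A = np.vstack([M.ravel() for M in B])

# Rank 4 means the four matrices are linearly independent, and hence
# form a basis of the 4-dimensional space of 2x2 matrices.
print(np.linalg.matrix_rank(A))  # prints 4
```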
Note that there is a lot of nonuniqueness in the choice of bases. One
important skill that you should develop is to choose the right basis to solve
a particular problem.
The set $\{1, x, x^2, \dots, x^n\}$ is a basis for polynomials of degree $n$.
A natural basis for $\mathbb{R}^n$ is the set $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$ where the $k$th
entry of $\mathbf{e}_j$ is

$$\delta_{jk} = \begin{cases}1 & \text{for}~j = k\\0 & \text{for}~j \neq k\end{cases}$$

The quantity $\delta_{jk}$ is also called the Kronecker delta.
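In numpy the natural basis appears as the rows (or columns) of the identity matrix; a brief sketch, added here for illustration:

```python
import numpy as np

n = 4
I = np.eye(n)                  # I[j, k] is exactly the Kronecker delta
e = [I[j] for j in range(n)]   # natural basis vectors e_1, ..., e_n

# Any vector is recovered from its components in this basis.
x = np.array([3.0, -1.0, 0.5, 2.0])
assert np.allclose(sum(x[j] * e[j] for j in range(n)), x)
```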
To give more structure to the idea of a vector space we need concepts such as
magnitude and angle. The inner product provides that structure.
The inner product generalizes the concept of an angle and is defined as a
function

$$\langle \bullet, \bullet \rangle : \mathcal{S} \times \mathcal{S} \rightarrow \mathbb{R} \quad (\text{or}~\mathbb{C}~\text{for a complex vector space})$$
with the properties:
- $\langle \mathbf{x}, \mathbf{y} \rangle = \overline{\langle \mathbf{y}, \mathbf{x} \rangle}$ where the overbar indicates complex conjugation.
- $\langle \alpha\,\mathbf{x}, \mathbf{y} \rangle = \alpha\,\langle \mathbf{x}, \mathbf{y} \rangle$. Linearity with respect to scalar multiplication.
- $\langle \mathbf{x} + \mathbf{y}, \mathbf{z} \rangle = \langle \mathbf{x}, \mathbf{z} \rangle + \langle \mathbf{y}, \mathbf{z} \rangle$. Linearity with respect to addition.
- $\langle \mathbf{x}, \mathbf{x} \rangle > 0$ if $\mathbf{x} \neq \mathbf{0}$ and $\langle \mathbf{x}, \mathbf{x} \rangle = 0$ if and only if $\mathbf{x} = \mathbf{0}$.
A vector space with an inner product is called an inner product space. The
first two properties imply, for example, that

$$\langle \mathbf{x}, \beta\,\mathbf{y} \rangle = \overline{\langle \beta\,\mathbf{y}, \mathbf{x} \rangle} = \overline{\beta}~\overline{\langle \mathbf{y}, \mathbf{x} \rangle} = \overline{\beta}\,\langle \mathbf{x}, \mathbf{y} \rangle$$
In $\mathbb{R}^n$, with $\mathbf{x} = (x_1, x_2, \dots, x_n)$ and
$\mathbf{y} = (y_1, y_2, \dots, y_n)$, the Euclidean inner product is given by

$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{k=1}^{n} x_k\,y_k$$
With $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$ the standard inner product is

$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{k=1}^{n} x_k\,\overline{y_k}$$
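A small numerical sketch of this complex inner product (added here, with randomly chosen vectors), checking conjugate symmetry and the $\langle \mathbf{x}, \beta\,\mathbf{y} \rangle = \overline{\beta}\,\langle \mathbf{x}, \mathbf{y} \rangle$ identity derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
beta = 2.0 - 3.0j

def inner(u, v):
    # Standard inner product on C^n: sum_k u_k * conj(v_k)
    return np.sum(u * np.conj(v))

# Conjugate symmetry: <x, y> = conj(<y, x>)
assert np.isclose(inner(x, y), np.conj(inner(y, x)))

# Conjugation pulls out of the second slot: <x, beta y> = conj(beta) <x, y>
assert np.isclose(inner(x, beta * y), np.conj(beta) * inner(x, y))

# <x, x> is real and positive for nonzero x
assert np.isclose(inner(x, x).imag, 0.0) and inner(x, x).real > 0
```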
For two complex valued continuous functions $f$ and $g$ in $[0, 1]$ we could
approximately represent them by their function values at $n$ equally spaced
points. Approximate $f$ and $g$ by

$$\begin{aligned}
F &= \{f(x_1), f(x_2), \dots, f(x_n)\} \qquad \text{with}~x_k = \frac{k}{n}\\
G &= \{g(x_1), g(x_2), \dots, g(x_n)\} \qquad \text{with}~x_k = \frac{k}{n}
\end{aligned}$$
With that approximation, a natural inner product is

$$\langle F, G \rangle = \frac{1}{n}\,\sum_{k=1}^{n} f(x_k)\,\overline{g(x_k)}$$
Taking the limit as $n \rightarrow \infty$ we get (show this)

$$\langle f, g \rangle = \int_0^1 f(x)\,\overline{g(x)}\,dx$$
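The limit can be seen numerically: the discrete inner product above is a Riemann sum for this integral. A small sketch (the functions $f$ and $g$ below are example choices, not from the notes):

```python
import numpy as np

f = lambda x: np.exp(2j * np.pi * x)  # a complex valued function on [0, 1]
g = lambda x: x                       # a real valued function on [0, 1]

# Exact value of \int_0^1 f(x) conj(g(x)) dx (by integration by parts).
exact = 1.0 / (2j * np.pi)

for n in (10, 100, 1000, 10000):
    xk = np.arange(1, n + 1) / n      # equally spaced points x_k = k/n
    discrete = np.sum(f(xk) * np.conj(g(xk))) / n
    print(n, abs(discrete - exact))   # error shrinks as n grows
```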
If we took non-equally spaced yet smoothly distributed points we would get

$$\langle f, g \rangle = \int_0^1 f(x)\,\overline{g(x)}\,w(x)\,dx$$

where $w(x)$ is a smooth weighting function (show this).
There are many other inner products possible. For functions that are not only
continuous but also differentiable, a useful inner product is

$$\langle f, g \rangle = \int_0^1 \left[f(x)\,\overline{g(x)} + f'(x)\,\overline{g'(x)}\right] dx$$
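A short sketch evaluating this derivative-weighted inner product symbolically (sympy, with example functions chosen here for illustration; since both are real valued, the conjugations drop out):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2      # example differentiable functions on [0, 1]
g = 1 - x

# <f, g> = \int_0^1 [ f g + f' g' ] dx
ip = sp.integrate(f * g + sp.diff(f, x) * sp.diff(g, x), (x, 0, 1))
print(ip)     # -> -11/12
```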
We will continue further explorations into linear vector spaces in the next
lecture.