The term *norm* is often used without additional qualification to refer to a particular type of norm, such as a matrix norm or a vector norm. Most commonly, the unqualified term refers to the flavor of vector norm technically known as the L2 norm. This norm is variously denoted $|x|$, $\|x\|$, or $\|x\|_2$, and gives the length of an $n$-vector $x = (x_1, x_2, \ldots, x_n)$ as

$$\|x\|_2 = \sqrt{\sum_{k=1}^{n} x_k^2}.$$
Norms provide vector spaces and their linear operators with measures of size, length, and distance that are more general than those we already use routinely in everyday life.
If vector norms on $K^m$ and $K^n$ are given (where $K$ is the field of real or complex numbers), then one defines the corresponding induced norm or operator norm on the space of $m$-by-$n$ matrices as the following maximum:
![{\displaystyle {\begin{aligned}\|A\|&=\max\{\|Ax\|:x\in K^{n}{\mbox{ with }}\|x\|=1\}\\&=\max \left\{{\frac {\|Ax\|}{\|x\|}}:x\in K^{n}{\mbox{ with }}x\neq 0\right\}.\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a8a52ec1fe69c8c5f19f81a0a9d4bfc8653a86a8)
If m = n and one uses the same norm on the domain and the range, then the induced operator norm is a sub-multiplicative matrix norm.
The operator norm corresponding to the p-norm for vectors is:
![{\displaystyle \left\|A\right\|_{p}=\max \limits _{x\neq 0}{\frac {\left\|Ax\right\|_{p}}{\left\|x\right\|_{p}}}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/29f8885f68c5b9dd97c6b36826c26482d71518fd)
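To build intuition for this maximization, the sketch below estimates the induced norm by evaluating the ratio $\|Ax\|_p / \|x\|_p$ on random vectors (the function name `induced_norm_estimate` is ours, not a standard API); for $p = 2$ the exact value is the largest singular value, which serves as a check.

```python
import numpy as np

# The induced p-norm is defined as the maximum of ||Ax||_p / ||x||_p over
# nonzero x. Sampling random vectors gives a Monte Carlo lower bound.
def induced_norm_estimate(A, p, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        best = max(best, np.linalg.norm(A @ x, p) / np.linalg.norm(x, p))
    return best

A = np.array([[2.0, 0.0], [0.0, 1.0]])
est = induced_norm_estimate(A, 2)
exact = np.linalg.svd(A, compute_uv=False)[0]  # induced 2-norm = top singular value
print(est, exact)  # est is slightly below exact = 2.0
```

The estimate can only approach the true norm from below, since every sample ratio is bounded by the maximum.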
In the case of $p = 1$ and $p = \infty$, the norms can be computed as:

$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|,$$

which is simply the maximum absolute column sum of the matrix, and

$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|,$$

which is simply the maximum absolute row sum of the matrix.
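These two formulas are easy to check numerically; a minimal sketch using NumPy (the matrix is a made-up example):

```python
import numpy as np

# Made-up example matrix.
A = np.array([[1, -2],
              [3, 4]])

col_sums = np.abs(A).sum(axis=0)   # absolute column sums: [4, 6]
row_sums = np.abs(A).sum(axis=1)   # absolute row sums:    [3, 7]

# NumPy's induced matrix norms agree with the max column / row sums.
assert np.linalg.norm(A, 1) == col_sums.max() == 6
assert np.linalg.norm(A, np.inf) == row_sums.max() == 7
```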
If $\|\cdot\|$ is a vector norm on $K^n$, then $\|A\| = \max_{\|x\|=1} \|Ax\|$ is a matrix norm.
To show $\|\cdot\|$ is a matrix norm we need to show several things.

First, $\|A\| = 0$ if and only if $A = 0$. If $A = 0$, then $Ax = 0$ for all vectors $x$ with $\|x\| = 1$, and so $\|A\| = 0$. If $\|A\| = 0$, then $\|Ax\| = 0$ for all unit vectors $x$, and by scaling $Ax = 0$ for all $x$. Using $x = e_1, e_2, \ldots, e_n$ successively implies that each column of $A$ is zero. Thus $A = 0$, and we conclude that $\|A\| = 0$ if and only if $A = 0$.

Next, $\|\alpha A\| = |\alpha|\,\|A\|$ for scalars $\alpha$.
Using the definition of induced norms and the properties of the vector norm, we have
$$\|\alpha A\| = \max_{\|x\| = 1} \|\alpha Ax\| = |\alpha| \max_{\|x\| = 1} \|Ax\| = |\alpha|\,\|A\|.$$
Again using the definition of induced norms and the triangle inequality for the vector norm, we have
![{\displaystyle \|A+B\|=\max \limits _{\|x\|=1}\|(A+B)x\|\leq \max \limits _{\|x\|=1}(\|Ax\|+\|Bx\|)\leq \max \limits _{\|x\|=1}\|Ax\|+\max \limits _{\|x\|=1}\|Bx\|=\|A\|+\|B\|\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/9f45060136df05b3eb10d4dfbeaf195d8ae84418)
All induced norms are sub-multiplicative. We want to show $\|AB\| \le \|A\|\,\|B\|$. For any $x$ with $\|x\| = 1$ we have

$$\|ABx\| = \|A(Bx)\| \le \|A\|\,\|Bx\| \le \|A\|\,\|B\|\,\|x\| = \|A\|\,\|B\|,$$

and taking the maximum over all such $x$ gives $\|AB\| \le \|A\|\,\|B\|$.
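Sub-multiplicativity can be spot-checked on random matrices; this is a numerical sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    for p in (1, 2, np.inf):
        # ||AB|| <= ||A|| ||B|| for every induced norm (small tolerance
        # guards against floating-point round-off).
        assert np.linalg.norm(A @ B, p) <= np.linalg.norm(A, p) * np.linalg.norm(B, p) + 1e-9
```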
If $A$ is an $n \times n$ matrix, then

$$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|.$$
First we show that

$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty \le \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|. \qquad (1)$$
Let $x$ be an $n$-dimensional vector with $\|x\|_\infty = 1$. Since $Ax$ is also an $n$-dimensional vector,
![{\displaystyle \left\|Ax\right\|_{\infty }=\max \limits _{1\leq i\leq n}|{Ax}_{i}|=\max \limits _{1\leq i\leq n}\left|\sum _{j=1}^{n}a_{ij}x_{j}\right|\leq \max \limits _{1\leq i\leq n}\sum _{j=1}^{n}|a_{ij}|\max \limits _{1\leq i\leq n}|{x_{i}}|.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/37c10216ee86aea819a3a3a3b0d81f7cee2dc4d2)
But $\max_{1 \le i \le n} |x_i| = \|x\|_\infty = 1$, so
![{\displaystyle \left\|Ax\right\|_{\infty }\leq \max \limits _{1\leq i\leq n}\sum _{j=1}^{n}|a_{ij}|}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2860666043af070536179b1de3d795503c28f1ae)
and we have shown (1).
Now we will show the opposite inequality, that
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty \ge \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|. \qquad (2)$$
Let p be an integer with
![{\displaystyle \sum _{j=1}^{n}|a_{pj}|=\max \limits _{1\leq i\leq n}\sum _{j=1}^{n}|a_{ij}|,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/aadd45b0c177abbb665e5dc78f50eb1e9d7ab454)
and let $x$ be the vector with components
![{\displaystyle x_{j}={\begin{cases}1,&{\text{if}}\qquad a_{pj}\geq 0,\\-1,&{\text{if}}\qquad a_{pj}<0.\end{cases}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0102ce4d3a5d88a4c95a89b3720fd653b3e12e30)
Then $\|x\|_\infty = 1$ and $a_{pj} x_j = |a_{pj}|$ for all $j = 1, \ldots, n$, so
![{\displaystyle \left\|Ax\right\|_{\infty }=\max \limits _{1\leq i\leq n}\left|\sum _{j=1}^{n}a_{ij}x_{j}\right|\geq \left|\sum _{j=1}^{n}a_{pj}x_{j}\right|=\left|\sum _{j=1}^{n}|a_{pj}|\right|=\max \limits _{1\leq i\leq n}\sum _{j=1}^{n}|a_{ij}|}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f38e0b536660f9a6b996fc2472bcf2f2f6453647)
and we have shown (2). Together, (1) and (2) yield
![{\displaystyle \left\|A\right\|_{\infty }=\max \limits _{1\leq i\leq n}\sum _{j=1}^{n}|a_{ij}|\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4e6120ee682b47b58f68c3459998614256348de5)
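The proof's sign-vector construction can be replayed numerically. This sketch checks on random matrices that the vector with entries $x_j = \pm 1$ attains the maximum absolute row sum:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(50):
    A = rng.standard_normal((5, 5))
    row_sums = np.abs(A).sum(axis=1)
    p = int(np.argmax(row_sums))          # row p with the largest absolute sum
    x = np.where(A[p] >= 0, 1.0, -1.0)    # the proof's extremal vector, ||x||_inf = 1
    # ||Ax||_inf attains the max row sum, which is exactly ||A||_inf.
    assert np.isclose(np.linalg.norm(A @ x, np.inf), row_sums.max())
    assert np.isclose(np.linalg.norm(A, np.inf), row_sums.max())
```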
If
![{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2&-1\\0&3&-1\\5&-1&1\\\end{bmatrix}},}](https://wikimedia.org/api/rest_v1/media/math/render/svg/23df3746fa744ef15a61be78facb7d3626058e4f)
![{\displaystyle \sum _{j=1}^{3}|a_{1j}|=|1|+|2|+|-1|=4,\qquad \sum _{j=1}^{3}|a_{2j}|=|0|+|3|+|-1|=4,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cf1773d99d8569fa777e05aa9a78c3df88f405c4)
and
![{\displaystyle \sum _{j=1}^{3}|a_{3j}|=|5|+|-1|+|1|=7}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bd3981c3a2b2fbcac503fbaf1f2dddf95f3c76f5)
so $\|A\|_\infty = \max\{4, 4, 7\} = 7$.
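The same computation can be confirmed with NumPy:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [0, 3, -1],
              [5, -1, 1]])

row_sums = np.abs(A).sum(axis=1)
print(row_sums)                                   # [4 4 7]
assert np.linalg.norm(A, np.inf) == row_sums.max() == 7
```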
Equivalence of norms is defined as follows:
For any two matrix norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$, there exist positive numbers $r$ and $s$ such that

$$r\,\|A\|_\alpha \le \|A\|_\beta \le s\,\|A\|_\alpha$$

for all matrices $A$ in $K^{m \times n}$. This is true because the vector space $K^{m \times n}$ has the finite dimension $m \times n$.
For a matrix $A \in \mathbb{R}^{m \times n}$ the following inequalities hold:

$\|A\|_2 \le \|A\|_F \le \sqrt{r}\,\|A\|_2$, where $r$ is the rank of $A$,

$\|A\|_F \le \|A\|_* \le \sqrt{r}\,\|A\|_F$, where $r$ is the rank of $A$ and $\|A\|_*$ is the nuclear norm (the sum of the singular values),
![{\displaystyle \|A\|_{\text{max}}\leq \|A\|_{2}\leq {\sqrt {mn}}\|A\|_{\text{max}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e81cf5520f2f7610d73ca4043b68eca834afe7d3)
![{\displaystyle {\frac {1}{\sqrt {n}}}\|A\|_{\infty }\leq \|A\|_{2}\leq {\sqrt {m}}\|A\|_{\infty }}](https://wikimedia.org/api/rest_v1/media/math/render/svg/981f98060717da1bf4bd835caf28dbe58b7efdee)
![{\displaystyle {\frac {1}{\sqrt {m}}}\|A\|_{1}\leq \|A\|_{2}\leq {\sqrt {n}}\|A\|_{1}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/920106eda1615e979656e6544e3ec344ccf8ddab)
Here, ||·||p refers to the matrix norm induced by the vector p-norm.
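These bounds can be spot-checked on random rectangular matrices; a sketch, with small tolerances to guard against floating-point round-off:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, eps = 4, 6, 1e-9
for _ in range(100):
    A = rng.standard_normal((m, n))
    n1, n2 = np.linalg.norm(A, 1), np.linalg.norm(A, 2)
    ninf, nmax = np.linalg.norm(A, np.inf), np.abs(A).max()
    # ||A||_max <= ||A||_2 <= sqrt(mn) ||A||_max
    assert nmax <= n2 + eps and n2 <= np.sqrt(m * n) * nmax + eps
    # (1/sqrt(n)) ||A||_inf <= ||A||_2 <= sqrt(m) ||A||_inf
    assert ninf / np.sqrt(n) <= n2 + eps and n2 <= np.sqrt(m) * ninf + eps
    # (1/sqrt(m)) ||A||_1 <= ||A||_2 <= sqrt(n) ||A||_1
    assert n1 / np.sqrt(m) <= n2 + eps and n2 <= np.sqrt(n) * n1 + eps
```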
We will show some of these norm equivalences for the matrix

$$A = \begin{bmatrix} 1 & -2 & 3 \\ -4 & 5 & -6 \\ 7 & -8 & 9 \end{bmatrix}.$$

First we compute several norms:
$$\rho(A) \approx 16.1168,$$
![{\displaystyle \|A\|_{\infty }=\max\{|1|+|-2|+|3|,|-4|+|5|+|-6|,|7|+|-8|+|9|\}=24,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4335ed36358b0bf43e8a7e0d650f4058d6d77c91)
![{\displaystyle \|A\|_{1}=\max\{|1|+|-4|+|7|,|-2|+|5|+|-8|,|3|+|-6|+|9|\}=18,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/063154d682196d307620c7871d0b53b5ae81d8dd)
![{\displaystyle \|A\|_{2}={\sqrt {\rho (AA^{*})}}\approx {\sqrt {283.8585}}\approx 16.8481,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cf4a10ab32b53ada75f5ee7a40a59bea39e32146)
$$\|A\|_F = \sqrt{|1|^2 + |-2|^2 + |3|^2 + |-4|^2 + |5|^2 + |-6|^2 + |7|^2 + |-8|^2 + |9|^2} = \sqrt{285} \approx 16.8819,$$

and

$$\|A\|_{\max} = \max\{|1|, |-2|, |3|, |-4|, |5|, |-6|, |7|, |-8|, |9|\} = 9.$$

The rank of $A$ is $r = 2$, since row 1 + row 3 $= -2\,\times$ row 2, so $\sqrt{r}\,\|A\|_2 \approx \sqrt{2} \times 16.8481 \approx 23.8267$.

We can then verify the norm equivalence

$$\|A\|_{\max} < \rho(A) < \|A\|_2 < \|A\|_F < \|A\|_1 < \sqrt{r}\,\|A\|_2 < \|A\|_\infty,$$

with our numbers

$$9 < 16.1168 < 16.8481 < 16.8819 < 18 < 23.8267 < 24,$$

and

$$\|A\|_2 \le \|A\|_F \le \sqrt{r}\,\|A\|_2,$$

with our numbers

$$16.8481 \le 16.8819 \le 23.8267.$$
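All of these numbers can be reproduced with NumPy; `matrix_rank` gives $r = 2$ for this matrix (row 1 + row 3 $= -2\,\times$ row 2):

```python
import numpy as np

A = np.array([[1, -2, 3],
              [-4, 5, -6],
              [7, -8, 9]])

rho  = max(abs(np.linalg.eigvals(A)))   # spectral radius, ~16.1168
n2   = np.linalg.norm(A, 2)             # induced 2-norm, ~16.8481
nF   = np.linalg.norm(A, 'fro')         # Frobenius norm, sqrt(285) ~ 16.8819
n1   = np.linalg.norm(A, 1)             # max absolute column sum, 18
ninf = np.linalg.norm(A, np.inf)        # max absolute row sum, 24
nmax = np.abs(A).max()                  # max-norm, 9
r    = np.linalg.matrix_rank(A)         # 2

assert nmax < rho < n2 < nF < n1 < np.sqrt(r) * n2 < ninf
assert n2 <= nF <= np.sqrt(r) * n2
```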