Talk:PlanetPhysics/Matrix


Original TeX Content from PlanetPhysics Archive

%%% This file is part of PlanetPhysics snapshot of 2011-09-01 %%%
%%% Primary Title: matrix
%%% Primary Category Code: 02.
%%% Filename: Matrix.tex
%%% Version: 1
%%% Owner: bloftin
%%% Author(s): bloftin
%%% PlanetPhysics is released under the GNU Free Documentation License.
%%% You should have received a file called fdl.txt along with this file.
%%% If not, please write to gnu@gnu.org.

\documentclass[12pt]{article}
\pagestyle{empty}
\setlength{\paperwidth}{8.5in}
\setlength{\paperheight}{11in}

\setlength{\topmargin}{0.00in}
\setlength{\headsep}{0.00in}
\setlength{\headheight}{0.00in}
\setlength{\evensidemargin}{0.00in}
\setlength{\oddsidemargin}{0.00in}
\setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.00in}
\setlength{\voffset}{0.00in}
\setlength{\hoffset}{0.00in}
\setlength{\marginparwidth}{0.00in}
\setlength{\marginparsep}{0.00in}
\setlength{\parindent}{0.00in}
\setlength{\parskip}{0.15in}

\usepackage{html}

% this is the default PlanetMath preamble. as your knowledge
% of TeX increases, you will probably want to edit this, but
% it should be fine as is for beginners.

% almost certainly you want these
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsfonts}

% used for TeXing text within eps files
%\usepackage{psfrag}
% need this for including graphics (\includegraphics)
%\usepackage{graphicx}
% for neatly defining theorems and propositions
%\usepackage{amsthm}
% making logically defined graphics
%\usepackage{xypic}

% there are many more packages, add them here as you need them

% define commands here

\begin{document}

A matrix is defined as a rectangular array of elements (usually the elements are real or complex numbers). An algebra of matrices is developed by defining addition of matrices, multiplication of matrices, multiplication of a matrix by a \htmladdnormallink{scalar}{http://planetphysics.us/encyclopedia/Vectors.html} (real or complex number), differentiation of matrices, etc. The definitions chosen for the above-mentioned \htmladdnormallink{operations}{http://planetphysics.us/encyclopedia/Cod.html} will be such as to make the calculus of matrices highly applicable. A matrix $\mathbf{A}$ may be denoted as follows:

\begin{equation} \mathbf{A} = \left ( \begin{array}{cccc} a_1^1 & a_2^1 & \dots & a_n^1 \\ a_1^2 & a_2^2 & \dots & a_n^2 \\ \dots & \dots & \dots & \dots \\ a_1^m & a_2^m & \dots & a_n^m \end{array} \right ) = \left \| a_j^i \right \| \end{equation}
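For example, with $m = 2$ and $n = 3$ (the superscript indicating the row and the subscript the column), we have the $2 \times 3$ matrix

$$\mathbf{A} = \left ( \begin{array}{ccc} 1 & 0 & -2 \\ 3 & 5 & 7 \end{array} \right )$$

so that, e.g., $a_1^2 = 3$ and $a_3^1 = -2$.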

If $m = n$, we say that $\mathbf{A}$ is a square matrix of order $n$. If $\mathbf{B}$ is the matrix of elements $\left \| b_j^i \right \|, i = 1,2, \dots, m, j = 1, 2, \dots, n$, then $\mathbf{B}$ is said to be equal to $\mathbf{A}$, written $\mathbf{B} = \mathbf{A}$ or $\mathbf{A} = \mathbf{B}$, if and only if $a_j^i = b_j^i$ for the complete range of values of $i$ and $j$.

Two matrices can be compared for equality if and only if they are comparable in the sense that they have the same number of rows and the same number of columns.

The sum of two comparable matrices $\mathbf{A}$, $\mathbf{B}$ is defined as a new matrix $\mathbf{C}$ whose elements $c_j^i$ are obtained by adding the corresponding elements of $\mathbf{A}$ and $\mathbf{B}$. Thus

\begin{equation} \left \| c_j^i \right \| = \mathbf{C} = \mathbf{A} + \mathbf{B} = \left \| a_j^i \right \| + \left \| b_j^i \right \| = \left \| a_j^i + b_j^i \right \| \end{equation}

We note that $\mathbf{A} + \mathbf{B} = \mathbf{B} +\mathbf{A}$.
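For example,

$$\left ( \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array} \right ) + \left ( \begin{array}{cc} 5 & -1 \\ 0 & 2 \end{array} \right ) = \left ( \begin{array}{cc} 6 & 1 \\ 3 & 6 \end{array} \right )$$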

We call $\mathbf{A}$ a zero matrix if and only if each element of $\mathbf{A}$ is equal to the real number zero.

The product of a matrix $\mathbf{A}$ by a number $k$ (real or complex) is defined as the matrix whose elements are each $k$ times those of $\mathbf{A}$, that is

$$k \mathbf{A} = k \left \| a_j^i \right \| = \left \| k a_j^i \right \|$$

Every matrix $\mathbf{A}$ can be associated with a negative matrix $\mathbf{B} = - \mathbf{A}$ such that $\mathbf{A}+ (-\mathbf{A}) = (-\mathbf{A}) +\mathbf{A} = 0$ (zero matrix).
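For instance, taking $k = 3$ (the negative matrix being the case $k = -1$, i.e. $-\mathbf{A} = (-1)\mathbf{A}$),

$$3 \left ( \begin{array}{cc} 1 & -2 \\ 0 & 4 \end{array} \right ) = \left ( \begin{array}{cc} 3 & -6 \\ 0 & 12 \end{array} \right )$$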

The rule for multiplying a matrix $\mathbf{A}$ by a scalar $k$ should not be confused with the rule for multiplying a \htmladdnormallink{determinant}{http://planetphysics.us/encyclopedia/Determinant.html} by $k$, for in this latter case the elements of only one row or only one column are multiplied by $k$.

Before defining the product of two matrices, let us consider the following sets of linear transformations (a repeated index implies summation over its range):

$$ \begin{array}{ccl} A: & z^i = a_j^i y^j & i = 1, 2, \dots, m; \ j = 1, 2, \dots, n \\ B: & y^j = b_k^j x^k & k = 1, 2, \dots, p \end{array} $$

Since the $z$'s depend on the $y$'s, which in turn depend on the $x$'s, we can solve for the $z$'s in terms of the $x$'s. We write this transformation as follows:

$$ \begin{array}{ccc} AB: & z^i = a_j^i b_k^j x^k = c_k^i x^k, & c_k^i = a_j^i b_k^j \end{array} $$

This suggests a method for defining multiplication of the matrices $\mathbf{A}$, $\mathbf{B}$.

If $\mathbf{A} = \left \| a_j^i \right \|, i = 1, 2, \dots, m, j = 1, 2, \dots, n, \mathbf{B} = \left \| b_j^i \right \|, i = 1, 2, \dots, n, j = 1, 2, \dots, p$, then $\mathbf{A} \mathbf{B}$ is defined as the matrix $\mathbf{C}$ such that

\begin{equation} \mathbf{C} = \mathbf{A} \mathbf{B} = \left \| a_j^i \right \| \cdot \left \| b_j^i \right \| = \left \| a_{\alpha}^i b_j^{\alpha} \right \| = \left \| c_j^i \right \| \end{equation}

Let us note that the number of columns of the matrix $\mathbf{A}$ must equal the number of rows of $\mathbf{B}$. The matrix $\mathbf{C}$ of (3) is an $m \times p$ matrix. In the case of square matrices the definition for multiplication of matrices corresponds to that for multiplication of determinants. This implies that $\left | C \right | = \left | A \right | \cdot \left | B \right |$, where $\left | C \right |$ denotes the determinant of the set of elements comprising the square matrix $\mathbf{C} = \mathbf{A}\mathbf{B}$.
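For example, the product of a $2 \times 3$ matrix and a $3 \times 2$ matrix (three columns matching three rows) is the $2 \times 2$ matrix

$$\left ( \begin{array}{ccc} 1 & 2 & 0 \\ 3 & -1 & 4 \end{array} \right ) \left ( \begin{array}{cc} 2 & 1 \\ 0 & 3 \\ 1 & -1 \end{array} \right ) = \left ( \begin{array}{cc} 2 & 7 \\ 10 & -4 \end{array} \right )$$

where, e.g., $c_1^2 = a_{\alpha}^2 b_1^{\alpha} = 3 \cdot 2 + (-1) \cdot 0 + 4 \cdot 1 = 10$.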

A square matrix $\mathbf{A}$ is said to be a symmetric matrix if and only if $\mathbf{A} = \mathbf{A}^T$, where the transpose $\mathbf{A}^T$ is obtained from $\mathbf{A}$ by interchanging its rows and columns. If $\mathbf{A} = - \mathbf{A}^T$, we say that $\mathbf{A}$ is a skew-symmetric matrix. We now exhibit a symmetric matrix $\mathbf{A}$ and a skew-symmetric matrix $\mathbf{B}$.

$$ \begin{array}{cc} \mathbf{A} = \mathbf{A}^T = \left ( \begin{array}{rrrr} 2 & -1 & 4 & -2 \\ -1 & 0 & 3 & 5 \\ 4 & 3 & 1 & -1 \\ -2 & 5 & -1 & 3 \end{array} \right )& \mathbf{B} = -\mathbf{B}^T = \left ( \begin{array}{rrr} 0 & -1 & 3 \\ 1 & 0 & -2 \\ -3 & 2 & 0 \end{array} \right ) \end{array} $$

We let the reader verify that $\frac{1}{2} \left( \mathbf{A}+\mathbf{A}^T \right )$ is a symmetric matrix whenever $\mathbf{A}$ is a square matrix. One first proves that $\left( \mathbf{A}^T \right )^T = \mathbf{A}$ and that

$$\left( \mathbf{A} + \mathbf{B} \right )^T = \mathbf{A}^T +\mathbf{B}^T$$
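From these two rules, together with the fact that a scalar factor passes through the transpose, the verification is immediate:

$$\left( \frac{1}{2} \left( \mathbf{A}+\mathbf{A}^T \right ) \right )^T = \frac{1}{2} \left( \mathbf{A}^T + \left( \mathbf{A}^T \right )^T \right ) = \frac{1}{2} \left( \mathbf{A}^T + \mathbf{A} \right ) = \frac{1}{2} \left( \mathbf{A}+\mathbf{A}^T \right )$$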

In the same way one sees that $\frac{1}{2} \left( \mathbf{A}-\mathbf{A}^T \right )$ is a skew-symmetric matrix. Any square matrix $\mathbf{A}$ can obviously be written as

$$\mathbf{A} = \frac{1}{2} \left( \mathbf{A}+\mathbf{A}^T \right ) + \frac{1}{2} \left( \mathbf{A}-\mathbf{A}^T \right )$$

Hence every square matrix can be written as the sum of a symmetric and a skew-symmetric matrix.
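For example,

$$\left ( \begin{array}{cc} 1 & 4 \\ 2 & 3 \end{array} \right ) = \left ( \begin{array}{cc} 1 & 3 \\ 3 & 3 \end{array} \right ) + \left ( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right ),$$

where the first summand is the symmetric part $\frac{1}{2} \left( \mathbf{A}+\mathbf{A}^T \right )$ and the second is the skew-symmetric part $\frac{1}{2} \left( \mathbf{A}-\mathbf{A}^T \right )$.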

\section{References}

[1] Lass, Harry. \emph{Elements of Pure and Applied Mathematics}. New York: McGraw-Hill, 1957.

This entry is a derivative of the public domain work [1].

\end{document}