Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 4/latex


\setcounter{section}{4}

In \keyword {linear algebra} {,} everything is worked out over a field $K$; the reader may simply think of the real numbers $\R$. At the moment, however, only the algebraic properties of $\R$ are relevant, so one may just as well think of the rational numbers $\Q$. Starting with the theory of eigenvalues, more specific properties of the field \extrabracket {like the existence of roots} {} {} also become important.

The \quotationshort{mother of all systems of linear equations}{} is just one linear equation in one variable of the form
\mathrelationchaindisplay
{\relationchain
{ax }
{ =} {b }
{ } { }
{ } { }
{ } { }
} {}{}{,} with given elements \mathl{a,b}{} from a field $K$, where $x$ is the unknown. There are three possibilities for the solution behavior. For
\mathrelationchain
{\relationchain
{a }
{ \neq }{0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we can multiply the equation with the inverse $a^{-1}$ of $a$, yielding the unique solution
\mathrelationchaindisplay
{\relationchain
{ x }
{ =} { b a^{-1} }
{ =} { { \frac{ b }{ a } } }
{ } { }
{ } { }
} {}{}{.} Computationally, one can find the solution as long as one can determine the inverse element and perform the multiplication in the field. For
\mathrelationchain
{\relationchain
{ a }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the solution behavior depends on $b$. If
\mathrelationchain
{\relationchain
{ b }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then every
\mathrelationchain
{\relationchain
{ x }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is a solution; if
\mathrelationchain
{\relationchain
{b }
{ \neq }{0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then there is no solution.
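
The three cases can also be distinguished by a small computation. The following is a minimal sketch in Python \extrabracket {our own illustration, not part of the lecture; the function name is an arbitrary choice} {} {,} working over the rational numbers $\Q$ with exact fractions:

\begin{verbatim}
from fractions import Fraction

def solve_single(a, b):
    # Solve a*x = b over the rationals Q.
    if a != 0:
        return ("unique", Fraction(b, a))  # x = b * a^(-1)
    if b == 0:
        return ("all of K", None)          # every x is a solution
    return ("no solution", None)

print(solve_single(3, 2))  # ('unique', Fraction(2, 3))
print(solve_single(0, 0))  # ('all of K', None)
print(solve_single(0, 5))  # ('no solution', None)
\end{verbatim}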






\subtitle {Linear systems}

First, we give three further introductory examples: one from everyday life, one from geometry, and one from physics. They all lead to systems of linear equations.




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Mulled-wine-3.jpg} }
\end{center}
\imagetext {} }

\imagelicense { Mulled-wine-3.jpg } {} {Loyna} {Commons} {CC-by-sa 2.5} {}

At a booth on the Christmas market, there are three different pots of mulled wine. All three contain the ingredients cinnamon, cloves, red wine, and sugar, but the compositions differ. The mixtures of the mulled wines are
\mathdisp {G_1 = \begin{pmatrix} 1 \\2\\ 11\\2 \end{pmatrix} , \, G_2 = \begin{pmatrix} 2 \\2\\ 12\\3 \end{pmatrix} , \, G_3 = \begin{pmatrix} 3 \\1\\ 20\\7 \end{pmatrix}} { . }
Every mulled wine is represented by a four-tuple, where the entries represent the respective shares of the ingredients. The set of all \extrabracket {possible} {} {} mulled wines forms a vector space  \extrabracket {we will introduce this concept in the next lecture} {} {} and the three concrete mulled wines are vectors in this space.

Now suppose that none of the three mulled wines meets exactly our taste; in fact, the wanted mulled wine has the mixture
\mathrelationchaindisplay
{\relationchain
{ W }
{ =} { \begin{pmatrix} 1 \\2\\ 20\\5 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} Can we obtain the desired mulled wine by pouring together the given mulled wines in suitable amounts? Are there numbers\extrafootnote {In this example, only positive numbers have a practical interpretation. In linear algebra, however, everything takes place over a field, so we also allow negative numbers} {.} {}
\mathrelationchain
{\relationchain
{ a,b,c }
{ \in }{ \Q }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} such that
\mathrelationchaindisplay
{\relationchain
{ a \begin{pmatrix} 1 \\2\\ 11\\2 \end{pmatrix} + b \begin{pmatrix} 2 \\2\\ 12\\3 \end{pmatrix} + c \begin{pmatrix} 3 \\1\\ 20\\7 \end{pmatrix} }
{ =} { \begin{pmatrix} 1 \\2\\ 20\\5 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} holds? This vector equation can be expressed by four equations in the \quotationshort{variables}{} $a,b,c$, one equation for each row. When does a solution exist, when is there none, when are there many? These are typical questions of linear algebra.

}
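
Whether such numbers $a,b,c$ exist can also be checked by machine. The following minimal sketch \extrabracket {our own illustration, assuming the SymPy library} {} {} sets up the four row equations and asks for exact solutions over $\Q$; for these particular data, the solution set turns out to be empty, so this desired mixture cannot be obtained from the three pots.

\begin{verbatim}
from sympy import Matrix, symbols, linsolve

a, b, c = symbols('a b c')
# The columns are the three given mulled wines G1, G2, G3.
A = Matrix([[ 1,  2,  3],
            [ 2,  2,  1],
            [11, 12, 20],
            [ 2,  3,  7]])
W = Matrix([1, 2, 20, 5])          # the desired mixture
print(linsolve((A, W), a, b, c))   # EmptySet: no solution exists
\end{verbatim}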




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {IntersectingPlanes.png} }
\end{center}
\imagetext {Two planes in space intersecting in a line.} }

\imagelicense { IntersectingPlanes.png } {} {ShahabELS} {Commons} {CC-by-sa 3.0} {}

Suppose that two planes are given in $\R^3$\extrafootnote {At this point, we do not discuss why such equations define a plane. The solution sets are \quotationshort{shifted linear subspaces of dimension two}{}} {.} {,}
\mathrelationchaindisplay
{\relationchain
{E }
{ =} { { \left\{ (x,y,z) \in \R^3 \mid 4x-2y-3z = 5 \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mathrelationchaindisplay
{\relationchain
{F }
{ =} { { \left\{ (x,y,z) \in \R^3 \mid 3x-5y+2z = 1 \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{.} How can we describe the intersection line
\mathrelationchain
{\relationchain
{G }
{ = }{E \cap F }
{ }{ }
{ }{ }
{ }{ }
} {}{}{?} A point
\mathrelationchain
{\relationchain
{P }
{ = }{ (x,y,z) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} belongs to the intersection line if and only if it satisfies both \keyword {plane equations} {.} Therefore, both equations,
\mathdisp {4x-2y-3z = 5 \text{ and } 3x-5y+2z = 1} { , }
must hold. We multiply the first equation by $3$, subtract from it four times the second equation, and get
\mathrelationchaindisplay
{\relationchain
{ 14 y - 17 z }
{ =} { 11 }
{ } { }
{ } { }
{ } { }
} {}{}{.} If we set
\mathrelationchain
{\relationchain
{y }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then
\mathrelationchain
{\relationchain
{z }
{ = }{- { \frac{ 11 }{ 17 } } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} and
\mathrelationchain
{\relationchain
{x }
{ = }{ { \frac{ 13 }{ 17 } } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} must hold. This means that the point
\mathrelationchain
{\relationchain
{ P }
{ = }{ \left( { \frac{ 13 }{ 17 } } , \, 0 , \, - { \frac{ 11 }{ 17 } } \right) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} belongs to $G$. In the same way, setting
\mathrelationchain
{\relationchain
{z }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we find the point
\mathrelationchain
{\relationchain
{Q }
{ = }{ \left( { \frac{ 23 }{ 14 } } , \, { \frac{ 11 }{ 14 } } , \, 0 \right) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Therefore, the intersection line is the line through these two points, so
\mathrelationchaindisplay
{\relationchain
{G }
{ =} { { \left\{ \left( { \frac{ 13 }{ 17 } } , \, 0 , \, - { \frac{ 11 }{ 17 } } \right) + t \left( { \frac{ 209 }{ 238 } } , \, { \frac{ 11 }{ 14 } } , \, { \frac{ 11 }{ 17 } } \right) \mid t \in \R \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}
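
The computation of this example can be double-checked by machine. A minimal sketch \extrabracket {our own illustration, assuming SymPy} {} {} solves the two plane equations exactly; the solution comes out as a one-parameter family, with $z$ as the free variable.

\begin{verbatim}
from sympy import Rational, symbols, linsolve

x, y, z = symbols('x y z')
sols = linsolve([4*x - 2*y - 3*z - 5,
                 3*x - 5*y + 2*z - 1], x, y, z)
(sol,) = sols                          # one-parameter family in z
print(sol.subs(z, 0))                  # (23/14, 11/14, 0) = Q
print(sol.subs(z, Rational(-11, 17)))  # (13/17, 0, -11/17) = P
\end{verbatim}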




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Wbridge2.svg} }
\end{center}
\imagetext {} }

\imagelicense { Wbridge2.svg } {} {Rhdv} {Commons} {CC-by-sa 3.0} {}

An electrical network consists of several connected wires, which we call the edges of the network in this context. In every edge $K_j$, there is a certain \extrabracket {depending on the material and the length of the edge} {} {} resistance $R_j$. The points $P_n$, where the edges meet, are called the vertices of the network. If we apply a certain electric tension \extrabracket {voltage} {} {} to some edges of the network, then there will be a certain current $I_j$ in every edge. The goal is to determine the currents from the data of the network and the applied voltages.

It is helpful to assign to each edge a fixed direction in order to distinguish the direction of the current in this edge \extrabracket {if the current is in the opposite direction, it gets a minus sign} {} {.} We call these directed edges. In every vertex of the network, the currents of the adjacent edges come together; therefore, their sum must be $0$. In an edge $K_j$, there is a voltage drop $U_j$, determined by Ohm's law to be
\mathrelationchaindisplay
{\relationchain
{ U_j }
{ =} { R_j \cdot I_j }
{ } { }
{ } { }
{ } {}
} {}{}{.} We call a closed directed sequence of edges in the network a mesh. For such a mesh, the sum of the voltages is $0$, unless a certain voltage is enforced from \quotationshort{outside}{.}

We state these \keyword {Kirchhoff's laws} {} again. \enumerationthree {In every vertex, the sum of the currents equals $0$. } {In every mesh, the sum of the voltages equals $0$. } {If in a mesh, a voltage $V$ is enforced, then the sum of the voltages equals $V$. } For \quotationshort{physical reasons}{,} we expect that, given the applied voltages, there should be a well-defined current in every edge. In fact, these currents can be computed by translating the stated laws into a system of linear equations and solving that system.

In the example given by the picture, suppose that the edges \mathl{K_1 , \ldots , K_5}{} \extrabracket {with the resistances \mathl{R_1 , \ldots , R_5}{}} {} {} are directed from left to right and that the connecting edge $K_0$ from $A$ to $C$ \extrabracket {where the voltage $V$ is applied} {} {} is directed upwards. The four vertices and the three meshes $(A,D,B),\, (D,B,C)$ and $(A,D,C)$ yield the system of linear equations
\mathdisp {\begin{matrix} I_0 & + I_1 & & -I_3 & & & = & 0 \\ & & & I_3 & +I_4 & +I_5 & = & 0 \\ - I_0 & & +I_2 & & -I_4 & & = & 0 \\ & -I_1 & -I_2 & & & -I_5 & = & 0 \\ & R_1 I_1 & & +R_3 I_3 & & -R_5 I_5 & = & 0 \\ & & -R_2 I_2 & & -R_4I_4 & +R_5I_5 & = & 0 \\ & -R_1I_1 & +R_2I_2 & & & & = & -V \, . \end{matrix}} { }
Here the $R_j$ and $V$ are given numbers, and the $I_j$ are the unknowns we are looking for.

}
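
For concrete values of the resistances and the voltage, this system \extrabracket {seven equations, six unknowns; one vertex equation is redundant, since the four vertex equations sum to zero} {} {} can be solved numerically. A minimal sketch \extrabracket {our own illustration, assuming NumPy; the values $R_j = 1$ and $V = 1$ are arbitrary sample choices} {} {:}

\begin{verbatim}
import numpy as np

R1 = R2 = R3 = R4 = R5 = 1.0   # arbitrary sample resistances
V = 1.0                        # the enforced voltage
# Rows: four vertex equations, then three mesh equations, as in the text.
A = np.array([
    [ 1,   1,   0,  -1,   0,   0],
    [ 0,   0,   0,   1,   1,   1],
    [-1,   0,   1,   0,  -1,   0],
    [ 0,  -1,  -1,   0,   0,  -1],
    [ 0,  R1,   0,  R3,   0, -R5],
    [ 0,   0, -R2,   0, -R4,  R5],
    [ 0, -R1,  R2,   0,   0,   0]], dtype=float)
c = np.array([0, 0, 0, 0, 0, 0, -V], dtype=float)
# Least squares; since the system is consistent, this solves it
# exactly (up to rounding).
I, *_ = np.linalg.lstsq(A, c, rcond=None)
print(I)   # the currents I_0, ..., I_5
\end{verbatim}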

We now give the definition of a homogeneous and of an inhomogeneous system of linear equations over a field, for a given set of variables.


\inputdefinition
{ }
{

Let $K$ denote a field, and let
\mathrelationchain
{\relationchain
{ a_{ij} }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for \mathcor {} {1 \leq i \leq m} {and} {1 \leq j \leq n} {.} We call
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & 0 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & 0 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & 0 \end{matrix}} { }
a \extrabracket {homogeneous} {} {} \definitionword {system of linear equations}{} in the variables \mathl{x_1 , \ldots , x_n}{.} A tuple
\mathrelationchain
{\relationchain
{ ( \xi_1 , \ldots , \xi_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \definitionword {solution of the linear system}{} if
\mathrelationchain
{\relationchain
{ \sum_{j = 1}^n a_{ij } \xi_j }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds for all
\mathrelationchain
{\relationchain
{i }
{ = }{ 1 , \ldots , m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

If
\mathrelationchain
{\relationchain
{ (c_1 , \ldots , c_m) }
{ \in }{ K^m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is given\extrafootnote {Such a vector is sometimes called a \keyword {disturbance vector} {} of the system} {.} {,} then
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m \end{matrix}} { }
is called an \definitionword {inhomogeneous system of linear equations}{.} A tuple
\mathrelationchain
{\relationchain
{ ( \zeta_1 , \ldots , \zeta_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \definitionword {solution to the inhomogeneous linear system}{} if
\mathrelationchain
{\relationchain
{ \sum_{j = 1}^n a_{ij} \zeta_j }
{ = }{ c_i }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}

holds for all $i$.
}
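
The defining condition translates directly into a small program. The following minimal sketch \extrabracket {our own illustration, plain Python} {} {} checks whether a given tuple is a solution; the homogeneous case corresponds to the right-hand side $(0 , \ldots , 0)$.

\begin{verbatim}
def is_solution(a, xi, c):
    # a: m rows of n coefficients a_ij; xi: candidate tuple;
    # c: right-hand side. Checks sum_j a_ij * xi_j = c_i for all i.
    return all(sum(aij * xj for aij, xj in zip(row, xi)) == ci
               for row, ci in zip(a, c))

A = [[2, 3],
     [1, -1]]
print(is_solution(A, (0, 0), (0, 0)))  # True: the trivial solution
print(is_solution(A, (1, 1), (5, 0)))  # True: 2+3 = 5 and 1-1 = 0
print(is_solution(A, (1, 1), (5, 1)))  # False
\end{verbatim}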

The set of all solutions of the system is called the \keyword {solution set} {.} In the homogeneous case, this is also called the \keyword {solution space} {,} as it is indeed, by Lemma 6.11, a vector space.

A homogeneous system of linear equations always has the so-called \keyword {trivial solution} {}
\mathrelationchain
{\relationchain
{0 }
{ = }{ (0 , \ldots , 0) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} An inhomogeneous system does not necessarily have a solution. For a given inhomogeneous linear system of equations, the homogeneous system that arises when we replace the tuple on the right-hand side by the null vector $0$ is called the \keyword {corresponding homogeneous system} {.}

The following situation describes a more abstract version of Example 4.1.


\inputexample{}
{

Let $K$ denote a field, and
\mathrelationchain
{\relationchain
{ m }
{ \in }{ \N }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Suppose that in $K^m$, there are $n$ vectors \extrabracket {or $m$-tuples} {} {}
\mathdisp {v_1 = \begin{pmatrix} a_{1 1 } \\ a_{2 1 }\\ \vdots\\ a_{ m 1 } \end{pmatrix},\, v_2= \begin{pmatrix} a_{1 2 } \\ a_{2 2 }\\ \vdots\\ a_{ m 2 } \end{pmatrix} , \ldots , v_n = \begin{pmatrix} a_{1 n } \\ a_{2 n }\\ \vdots\\ a_{ m n } \end{pmatrix}} { }
given. Let
\mathrelationchaindisplay
{\relationchain
{ w }
{ =} { \begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} be another vector. We want to know whether $w$ can be written as a linear combination of the $v_j$. Thus, we are dealing with the question of whether there exist $n$ elements
\mathrelationchain
{\relationchain
{ s_1 , \ldots , s_n }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} such that
\mathrelationchaindisplay
{\relationchain
{ s_1 \begin{pmatrix} a_{1 1 } \\ a_{2 1 }\\ \vdots\\ a_{ m 1 } \end{pmatrix} + s_2 \begin{pmatrix} a_{1 2 } \\ a_{2 2 }\\ \vdots\\ a_{ m 2 } \end{pmatrix} + \cdots + s_n \begin{pmatrix} a_{1 n } \\ a_{2 n }\\ \vdots\\ a_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} holds. This equality of vectors means equality in every component, so this condition yields the system of linear equations
\mathdisp {\begin{matrix} a _{ 1 1 } s _1 + a _{ 1 2 } s _2 + \cdots + a _{ 1 n } s _{ n } & = & c_1 \\ a _{ 2 1 } s _1 + a _{ 2 2 } s _2 + \cdots + a _{ 2 n } s _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } s _1 + a _{ m 2 } s _2 + \cdots + a _{ m n } s _{ n } & = & c_m .\end{matrix}} { }

}
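
In matrix language, asking whether $w$ is a linear combination of the $v_j$ therefore means solving the linear system whose coefficient matrix has the vectors $v_j$ as columns. A minimal numeric sketch \extrabracket {our own illustration, assuming NumPy; the sample vectors are arbitrary} {} {:}

\begin{verbatim}
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
w  = np.array([3.0, 4.0, 10.0])

A = np.column_stack([v1, v2])   # the v_j become the columns of A
s, *_ = np.linalg.lstsq(A, w, rcond=None)
print(s)       # here: [3. 4.], so w = 3*v1 + 4*v2
print(A @ s)   # equals w exactly when w lies in the span of the v_j
\end{verbatim}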




\inputremark {}
{

It might happen that a system of linear equations is given in such a way that there are variables on both sides of the equations, as in
\mathrelationchaindisplay
{\relationchain
{3x-4+5y }
{ =} {8z+7x }
{ } { }
{ } { }
{ } { }
} {}{}{,}
\mathrelationchaindisplay
{\relationchain
{2-4x+z }
{ =} { 2y+3x+6 }
{ } { }
{ } { }
{ } { }
} {}{}{,}
\mathrelationchaindisplay
{\relationchain
{4z-3x +2x +3 }
{ =} { 5x-11y+2z-8 }
{ } { }
{ } { }
{ } { }
} {}{}{.} In this case, one first brings the system into standard form by simple additions and by collecting the coefficients in each equation.

}
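
This normalization can also be carried out mechanically. SymPy, for instance \extrabracket {an assumption for illustration, not part of the lecture} {} {,} brings the three equations above into the standard form:

\begin{verbatim}
from sympy import symbols, Eq, linear_eq_to_matrix

x, y, z = symbols('x y z')
eqs = [Eq(3*x - 4 + 5*y, 8*z + 7*x),
       Eq(2 - 4*x + z, 2*y + 3*x + 6),
       Eq(4*z - 3*x + 2*x + 3, 5*x - 11*y + 2*z - 8)]
A, c = linear_eq_to_matrix(eqs, [x, y, z])
print(A)   # the coefficient matrix of the standard form
print(c)   # the right-hand side
\end{verbatim}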






\subtitle {Matrices}

A system of linear equations can easily be written down with a matrix. This allows us to perform the manipulations that lead to the solution of such a system without writing down the variables. Matrices are quite simple objects; however, they can represent very different mathematical objects \extrabracket {e.g., a family of column vectors, a family of row vectors, a linear mapping, a table of physical interactions, a relation, a linear vector field, etc.} {} {,} which one has to keep in mind in order to avoid wrong conclusions.




\inputdefinition
{ }
{

Let $K$ denote a field, and let \mathcor {} {I} {and} {J} {} denote index sets. An \mathl{I\times J}{-}\definitionword {matrix}{} is a mapping
\mathdisp {I \times J \longrightarrow K , (i,j) \longmapsto a_{ij}} { . }
If
\mathrelationchain
{\relationchain
{I }
{ = }{\{1 , \ldots , m\} }
{ }{}
{ }{}
{ }{}
} {}{}{} and
\mathrelationchain
{\relationchain
{ J }
{ = }{\{1 , \ldots , n\} }
{ }{}
{ }{}
{ }{}
} {}{}{,} then we talk about an \mathl{m \times n}{-}\definitionword {matrix}{.} In this case, the matrix is usually written as
\mathdisp {\begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix}} { . }

}

We will usually restrict ourselves to this last situation.


For every
\mathrelationchain
{\relationchain
{ i }
{ \in }{ I }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the family
\mathcond {a_{ij}} {,}
{j \in J} {}
{} {} {} {,} is called the $i$-th \keyword {row} {} of the matrix, which is usually written as a \keyword {row tuple} {} \extrabracket {or \keyword {row vector} {}} {} {}
\mathdisp {(a_{i1}, a_{i2} , \ldots , a_{in})} { . }
For every
\mathrelationchain
{\relationchain
{ j }
{ \in }{ J }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the family
\mathcond {a_{ij}} {,}
{i \in I} {}
{} {} {} {,} is called the $j$-th \keyword {column} {} of the matrix, usually written as a column tuple \extrabracket {or column vector} {} {}
\mathdisp {\begin{pmatrix} a_{1j} \\a_{2j}\\ \vdots\\a_{mj} \end{pmatrix}} { . }
The elements \mathl{a_{ij}}{} are called the \keyword {entries} {} of the matrix. For \mathl{a_{ij}}{,} the number $i$ is called the \keyword {row index} {,} and $j$ is called the \keyword {column index} {} of the entry. The position of the entry \mathl{a_{ij}}{} is where the $i$-th row meets the $j$-th column. A matrix with
\mathrelationchain
{\relationchain
{m }
{ = }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \keyword {square matrix} {.} An \mathl{m \times 1}{-}matrix is simply a column tuple \extrabracket {or column vector} {} {} of length $m$, and a \mathl{1 \times n}{-}matrix is simply a row tuple \extrabracket {or row vector} {} {} of length $n$. The set of all matrices with $m$ rows and $n$ columns \extrabracket {and with entries in $K$} {} {} is denoted by \mathl{\operatorname{Mat}_{ m \times n } (K)}{;} in case
\mathrelationchain
{\relationchain
{m }
{ = }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} we also write \mathl{\operatorname{Mat}_{ n } (K)}{.}
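
If a matrix is stored as a list of its rows, then rows, columns, and entries are read off as follows \extrabracket {a minimal sketch in plain Python, our own illustration; note the $0$-based indices in the code versus the $1$-based indices in the text} {} {:}

\begin{verbatim}
A = [[1, 2, 3],
     [4, 5, 6]]   # a 2 x 3 matrix, stored row by row

i, j = 1, 2                      # row index i, column index j (0-based)
row_i = A[i]                     # the i-th row: [4, 5, 6]
col_j = [row[j] for row in A]    # the j-th column: [3, 6]
entry = A[i][j]                  # a_ij, where row i meets column j
print(row_i, col_j, entry)
\end{verbatim}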


Two matrices
\mathrelationchain
{\relationchain
{A,B }
{ \in }{ \operatorname{Mat}_{ m \times n } (K) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} are added by adding corresponding entries. The multiplication of a matrix $A$ with an element
\mathrelationchain
{\relationchain
{ r }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} \extrabracket {a \keyword {scalar} {}} {} {} is also defined entrywise, so
\mathrelationchaindisplayhandleft
{\relationchaindisplayhandleft
{ \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} + \begin{pmatrix} b_{11 } & b_{1 2} & \ldots & b_{1 n } \\ b_{21 } & b_{2 2} & \ldots & b_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ b_{ m 1 } & b_{ m 2 } & \ldots & b_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} a_{11 } +b_{11} & a_{1 2} +b_{12} & \ldots & a_{1 n } +b_{1n} \\ a_{21 } +b_{21} & a_{2 2} +b_{22} & \ldots & a_{2 n } +b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } +b_{m1} & a_{ m 2 } +b_{m2} & \ldots & a_{ m n } +b_{mn} \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mathrelationchaindisplay
{\relationchain
{ r \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} ra_{11 } & ra_{1 2} & \ldots & ra_{1 n } \\ ra_{21 } & ra_{2 2} & \ldots & ra_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ ra_{ m 1 } & ra_{ m 2 } & \ldots & ra_{ m n } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

The multiplication of matrices is defined in the following way:


\inputdefinition
{ }
{

Let $K$ denote a field, and let $A$ denote an $m \times n$-matrix and $B$ an $n\times p$-matrix over $K$. Then the \definitionword {matrix product}{}
\mathdisp {AB} { }
is the \mathl{m\times p}{-}matrix, whose entries are given by
\mathrelationchaindisplay
{\relationchain
{ c_{ik} }
{ =} { \sum_{j = 1}^n a_{ij} b_{jk} }
{ } { }
{ } { }
{ } { }
}

{}{}{.}

}


Matrix multiplication is only possible when the number of columns of the left-hand matrix equals the number of rows of the right-hand matrix. Just think of the scheme
\mathrelationchaindisplay
{\relationchain
{ (R O W R O W ) \begin{pmatrix} C \\O\\ L\\U\\ M\\ N \end{pmatrix} }
{ =} { (RC+O^2+WL+RU+OM+WN) }
{ } { }
{ } { }
{ } { }
} {}{}{,} the result being a $1 \times 1$-matrix. In particular, one can multiply an \mathl{m \times n}{-}matrix $A$ with a column vector of length $n$ \extrabracket {the vector on the right} {} {,} and the result is a column vector of length $m$. The two matrices can also be multiplied with the roles interchanged,
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} C \\O\\ L\\U\\ M\\ N \end{pmatrix} (R O W R O W) }
{ =} { \begin{pmatrix} CR & CO & CW & CR & CO & CW \\ OR & O^2 & OW & OR & O^2 & OW \\ LR & LO & LW & LR & LO & LW \\ UR & UO & UW & UR & UO & UW \\ MR & MO & MW & MR & MO & MW \\ NR & NO & NW & NR & NO & NW \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}
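
The defining formula \mathl{c_{ik} = \sum_{j} a_{ij} b_{jk}}{} translates directly into three nested loops. A minimal sketch \extrabracket {our own illustration, plain Python} {} {,} including the dimension check just discussed:

\begin{verbatim}
def matmul(a, b):
    # a: m x n matrix, b: n x p matrix (lists of rows); result: m x p.
    m, n, p = len(a), len(b), len(b[0])
    if len(a[0]) != n:
        raise ValueError("columns of a must equal rows of b")
    return [[sum(a[i][j] * b[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

# A 1 x 2 matrix times a 2 x 1 matrix yields a 1 x 1 matrix:
print(matmul([[1, 2]], [[3], [4]]))   # [[11]]
\end{verbatim}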




\inputdefinition
{ }
{

The $n \times n$-matrix
\mathrelationchaindisplay
{\relationchain
{ E_{ n } }
{ \defeq} { \begin{pmatrix} 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{}

is called the \definitionword {identity matrix}{.}

}

The identity matrix $E_n$ has the property
\mathrelationchain
{\relationchain
{ E_n M }
{ = }{ M }
{ = }{ M E_n }
{ }{ }
{ }{ }
} {}{}{,} for an arbitrary \mathl{n\times n}{-}matrix $M$. Hence, the identity matrix is the neutral element with respect to matrix multiplication.
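
This neutrality is easy to observe on an example \extrabracket {a minimal sketch, assuming NumPy} {} {:}

\begin{verbatim}
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
E = np.eye(2, dtype=int)          # the identity matrix E_2
print(np.array_equal(E @ M, M))   # True: E_n M = M
print(np.array_equal(M @ E, M))   # True: M E_n = M
\end{verbatim}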




\inputremark {}
{

If we multiply an $m\times n$-matrix
\mathrelationchain
{\relationchain
{A }
{ = }{(a_{ij})_{ij} }
{ }{ }
{ }{ }
{ }{}
} {}{}{} with a column vector
\mathrelationchain
{\relationchain
{x }
{ = }{\begin{pmatrix} x_{1 } \\ x_{2 }\\ \vdots\\ x_{ n } \end{pmatrix} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then we get
\mathrelationchaindisplay
{\relationchain
{ A x }
{ =} { \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} \begin{pmatrix} x_{1 } \\ x_{2 }\\ \vdots\\ x_{ n } \end{pmatrix} }
{ =} { \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n} x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n} x_n\\ \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn} x_n \end{pmatrix} }
{ } { }
{ } {}
} {}{}{.} Hence, an inhomogeneous system of linear equations with \keyword {disturbance vector} {} $\begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix}$ can be written briefly as
\mathrelationchaindisplay
{\relationchain
{Ax }
{ =} {c }
{ } { }
{ } { }
{ } { }
} {}{}{.} Then the manipulations of the equations that do not change the solution set can be replaced by corresponding manipulations of the rows of the matrix. It is not necessary to write down the variables.

}




\inputdefinition
{ }
{

An $n \times n$-matrix of the form
\mathdisp {\begin{pmatrix} d_{11} & 0 & \cdots & \cdots & 0 \\ 0 & d_{22} & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & d_{ n-1\, n-1} & 0 \\ 0 & \cdots & \cdots & 0 & d_{ n n} \end{pmatrix}} { }

is called a \definitionword {diagonal matrix}{.}

}




\inputdefinition
{ }
{

Let $K$ be a field, and let
\mathrelationchain
{\relationchain
{ M }
{ = }{( a_{ i j } )_{ i j } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} be an $m \times n$-matrix over $K$. Then the \mathl{n \times m}{-}matrix
\mathdisp {{ M^{ \text{tr} } } ={ \left( b_{ij} \right) }_{ij} \text{ with } b_{ij} := a_{ji}} { }

is called the \definitionword {transposed matrix}{} of $M$.

}

The transposed matrix arises by interchanging the roles of the rows and the columns. For example, we have
\mathrelationchaindisplay
{\relationchain
{ { \begin{pmatrix} t & n & o & d \\ r & s & s & x \\ a & p & e & y \end{pmatrix} ^{ \text{tr} } } }
{ =} { \begin{pmatrix} t & r & a \\ n & s & p \\ o & s & e \\ d & x & y \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}
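
In code, transposing a matrix stored as a list of rows amounts to interchanging the two indices; in Python, this is the classic zip idiom \extrabracket {a minimal sketch, our own illustration} {} {:}

\begin{verbatim}
M = [['t', 'n', 'o', 'd'],
     ['r', 's', 's', 'x'],
     ['a', 'p', 'e', 'y']]

M_tr = [list(col) for col in zip(*M)]   # b_ij = a_ji
for row in M_tr:
    print(row)
# ['t', 'r', 'a'], ['n', 's', 'p'], ['o', 's', 'e'], ['d', 'x', 'y']
\end{verbatim}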