Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 21/latex


\setcounter{section}{21}

The lectures of the next few weeks deal with \keyword {linear algebra} {.} We fix a field $K$; one may think of the real numbers $\R$. However, since at first we are only concerned with algebraic properties of $\R$, one may just as well think of the rational numbers $\Q$. Starting with the theory of eigenspaces, analytic properties such as the existence of roots will also become important.






\subtitle {Systems of linear equations}

In the context of polynomial interpolation, we have already encountered systems of linear equations.

First, we give three further introductory examples, one from everyday life, one from geometry, and one from physics. All of them lead to systems of linear equations.




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Mulled-wine-3.jpg} }
\end{center}
\imagetext {} }

\imagelicense { Mulled-wine-3.jpg } {} {Loyna} {Commons} {CC-by-sa 2.5} {}

At a booth on the Christmas market, there are three different pots of mulled wine. All three contain the ingredients cinnamon, cloves, red wine, and sugar, but the compositions differ. The mixtures of the mulled wines are
\mathdisp {G_1 = \begin{pmatrix} 1 \\2\\ 11\\2 \end{pmatrix} , \, G_2 = \begin{pmatrix} 2 \\2\\ 12\\3 \end{pmatrix} , \, G_3 = \begin{pmatrix} 3 \\1\\ 20\\7 \end{pmatrix}} { . }
Every mulled wine is represented by a four-tuple, where the entries represent the respective shares of the ingredients. The set of all \extrabracket {possible} {} {} mulled wines forms a vector space \extrabracket {we will introduce this concept in the next lecture} {} {} and the three concrete mulled wines are vectors in this space.

Now suppose that none of the three mulled wines meets exactly our taste; in fact, the wanted mulled wine has the mixture
\mathrelationchaindisplay
{\relationchain
{ W }
{ =} { \begin{pmatrix} 1 \\2\\ 20\\5 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} Is there a possibility to get the wanted mulled wine by pouring together the given mulled wines in some way? Are there numbers\extrafootnote {In this example, only positive numbers have a practical interpretation. In linear algebra, everything is over a field, so we also allow negative numbers} {.} {}
\mathrelationchain
{\relationchain
{ a,b,c }
{ \in }{ \Q }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} such that
\mathrelationchaindisplay
{\relationchain
{ a \begin{pmatrix} 1 \\2\\ 11\\2 \end{pmatrix} + b \begin{pmatrix} 2 \\2\\ 12\\3 \end{pmatrix} + c \begin{pmatrix} 3 \\1\\ 20\\7 \end{pmatrix} }
{ =} { \begin{pmatrix} 1 \\2\\ 20\\5 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} holds? This vector equation can be expressed as four equations in the \quotationshort{variables}{} $a,b,c$, one equation for each row. When does a solution exist, when is there none, and when are there many? These are typical questions of linear algebra.

}
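The question raised in this example can be settled mechanically. The following Python sketch (not part of the lecture; the helper name `solve_exact` is our own) runs Gaussian elimination with exact rational arithmetic on the system $a G_1 + b G_2 + c G_3 = W$ and reports a solution, or `None` if the system is inconsistent:

```python
from fractions import Fraction as F

# The three given mulled wines as columns; the rows are the ingredients
# cinnamon, cloves, red wine, sugar. w is the desired mixture.
G = [[F(1),  F(2),  F(3)],
     [F(2),  F(2),  F(1)],
     [F(11), F(12), F(20)],
     [F(2),  F(3),  F(7)]]
w = [F(1), F(2), F(20), F(5)]

def solve_exact(A, b):
    """Gaussian elimination over the rationals; returns a solution of
    A x = b with free variables set to 0, or None if none exists."""
    rows = [row[:] + [rhs] for row, rhs in zip(A, b)]
    n = len(A[0])
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [e / rows[r][col] for e in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [e - f * p for e, p in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    if any(row[-1] != 0 for row in rows[r:]):   # row 0 = d with d != 0: no solution
        return None
    sol = [F(0)] * n
    for i, col in enumerate(pivots):
        sol[col] = rows[i][-1]
    return sol

print(solve_exact(G, w))
```

For these data the function returns `None`: the first three equations already force $(a,b,c) = (5/2,\, -15/8,\, 3/4)$, which violates the fourth equation, so the desired mulled wine cannot be mixed exactly from the given three.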




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {IntersectingPlanes.png} }
\end{center}
\imagetext {Two planes in space intersecting in a line.} }

\imagelicense { IntersectingPlanes.png } {} {ShahabELS} {Commons} {CC-by-sa 3.0} {}

Suppose that two planes are given in $\R^3$\extrafootnote {Right here, we do not discuss that such equations define a plane. The solution sets are \quotationshort{shifted linear subspaces of dimension two}{}} {.} {,}
\mathrelationchaindisplay
{\relationchain
{E }
{ =} { { \left\{ (x,y,z) \in \R^3 \mid 4x-2y-3z = 5 \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mathrelationchaindisplay
{\relationchain
{F }
{ =} { { \left\{ (x,y,z) \in \R^3 \mid 3x-5y+2z = 1 \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{.} How can we describe the line of intersection
\mathrelationchain
{\relationchain
{G }
{ = }{E \cap F }
{ }{ }
{ }{ }
{ }{ }
} {}{}{?} A point
\mathrelationchain
{\relationchain
{P }
{ = }{ (x,y,z) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} belongs to the intersection line if and only if it satisfies both \keyword {plane equations} {.} Therefore, both equations,
\mathdisp {4x-2y-3z = 5 \text{ and } 3x-5y+2z = 1} { , }
must hold. We multiply the first equation by $3$, and subtract from that four times the second equation, and get
\mathrelationchaindisplay
{\relationchain
{ 14 y - 17 z }
{ =} { 11 }
{ } { }
{ } { }
{ } { }
} {}{}{.} If we set
\mathrelationchain
{\relationchain
{y }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then
\mathrelationchain
{\relationchain
{z }
{ = }{- { \frac{ 11 }{ 17 } } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} and
\mathrelationchain
{\relationchain
{x }
{ = }{ { \frac{ 13 }{ 17 } } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} must hold. This means that the point
\mathrelationchain
{\relationchain
{ P }
{ = }{ \left( { \frac{ 13 }{ 17 } } , \, 0 , \, - { \frac{ 11 }{ 17 } } \right) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} belongs to $G$. In the same way, setting
\mathrelationchain
{\relationchain
{z }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we find the point
\mathrelationchain
{\relationchain
{Q }
{ = }{ \left( { \frac{ 23 }{ 14 } } , \, { \frac{ 11 }{ 14 } } , \, 0 \right) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Therefore, the line of intersection is the line connecting these two points, so
\mathrelationchaindisplay
{\relationchain
{G }
{ =} { { \left\{ \left( { \frac{ 13 }{ 17 } } , \, 0 , \, - { \frac{ 11 }{ 17 } } \right) + t \left( { \frac{ 209 }{ 238 } } , \, { \frac{ 11 }{ 14 } } , \, { \frac{ 11 }{ 17 } } \right) \mid t \in \R \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}
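The two points and the resulting parametrization can be double-checked with exact rational arithmetic; a small Python sketch (the helper names `on_E` and `on_F` are ours):

```python
from fractions import Fraction as F

# membership tests for the two planes of the example
def on_E(x, y, z): return 4*x - 2*y - 3*z == 5
def on_F(x, y, z): return 3*x - 5*y + 2*z == 1

P = (F(13, 17), F(0), F(-11, 17))        # obtained by setting y = 0
Q = (F(23, 14), F(11, 14), F(0))         # obtained by setting z = 0
d = tuple(q - p for p, q in zip(P, Q))   # direction vector Q - P

assert on_E(*P) and on_F(*P) and on_E(*Q) and on_F(*Q)
# every point P + t*d of the parametrized line lies on both planes
for t in (F(0), F(1), F(-2), F(1, 2)):
    pt = tuple(p + t * di for p, di in zip(P, d))
    assert on_E(*pt) and on_F(*pt)
print(d)   # equals (209/238, 11/14, 11/17), the direction used in the text
```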




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Wbridge2.svg} }
\end{center}
\imagetext {} }

\imagelicense { Wbridge2.svg } {} {Rhdv} {Commons} {CC-by-sa 3.0} {}

An electrical network consists of several connected wires, which, in this context, we call the edges of the network. In every edge $K_j$, there is a certain resistance $R_j$ \extrabracket {depending on the material and the length of the edge} {} {.} The points $P_n$ where the edges meet are called the vertices of the network. If we apply a certain electric tension \extrabracket {voltage} {} {} to some edges of the network, then there will be a certain current $I_j$ in every edge. The goal is to determine these currents from the data of the network and the applied voltages.

It is helpful to assign a fixed direction to each edge, in order to distinguish the direction of the current in this edge \extrabracket {if the current flows in the opposite direction, it gets a minus sign} {} {.} We call these directed edges. In every vertex of the network, the currents of the adjacent edges come together; therefore, their sum must be $0$. In an edge $K_j$, there is a voltage drop $U_j$, determined by Ohm's law to be
\mathrelationchaindisplay
{\relationchain
{ U_j }
{ =} { R_j \cdot I_j }
{ } { }
{ } { }
{ } {}
} {}{}{.} We call a closed, directed sequence of edges in the network a mesh. For such a mesh, the sum of the voltages is $0$, unless a voltage is enforced from \quotationshort{outside}{.}

We summarize these \keyword {Kirchhoff's laws} {}. \enumerationthree {In every vertex, the sum of the currents equals $0$. } {In every mesh, the sum of the voltages equals $0$. } {If a voltage $V$ is enforced in a mesh, then the sum of the voltages equals $V$. } For \quotationshort{physical reasons}{,} we expect that, for given applied voltages, there is a well-defined current in every edge. In fact, these currents can be computed by translating the stated laws into a system of linear equations and solving this system.

In the example given by the picture, suppose that the edges \mathl{K_1 , \ldots , K_5}{} \extrabracket {with the resistances \mathl{R_1 , \ldots , R_5}{}} {} {} are directed from left to right and that the connecting edge $K_0$ from $A$ to $C$ \extrabracket {where the voltage $V$ is applied} {} {} is directed upwards. The four vertices and the three meshes $(A,D,B),\, (D,B,C)$ and $(A,D,C)$ yield the system of linear equations
\mathdisp {\begin{matrix} I_0 & + I_1 & & -I_3 & & & = & 0 \\ & & & I_3 & +I_4 & +I_5 & = & 0 \\ - I_0 & & +I_2 & & -I_4 & & = & 0 \\ & -I_1 & -I_2 & & & -I_5 & = & 0 \\ & R_1 I_1 & & +R_3 I_3 & & -R_5 I_5 & = & 0 \\ & & -R_2 I_2 & & -R_4I_4 & +R_5I_5 & = & 0 \\ & -R_1I_1 & +R_2I_2 & & & & = & -V \, . \end{matrix}} { }
Here the $R_j$ and $V$ are given numbers, and the $I_j$ are the unknowns we are looking for.

}
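For concrete numbers, this system can be handed to a numerical solver. The following Python sketch assumes, as an illustration not taken from the text, that all resistances equal $1$ and that $V = 1$; since the four vertex equations are linearly dependent (they sum to zero), we solve the overdetermined system by least squares:

```python
import numpy as np

# Coefficient matrix of the system above; columns correspond to I_0, ..., I_5.
# All R_j = 1 and V = 1 are assumed example values.
A = np.array([
    [ 1,  1,  0, -1,  0,  0],   # four vertex equations ...
    [ 0,  0,  0,  1,  1,  1],
    [-1,  0,  1,  0, -1,  0],
    [ 0, -1, -1,  0,  0, -1],
    [ 0,  1,  0,  1,  0, -1],   # ... and three mesh equations
    [ 0,  0, -1,  0, -1,  1],
    [ 0, -1,  1,  0,  0,  0],
], dtype=float)
b = np.array([0, 0, 0, 0, 0, 0, -1], dtype=float)

# One vertex equation is redundant, but the system is consistent, so least
# squares recovers its unique exact solution.
I, *_ = np.linalg.lstsq(A, b, rcond=None)
print(I.round(6))
```

With these symmetric values, the network is a balanced bridge: the currents come out as $(I_0, \ldots, I_5) = (-1,\, 1/2,\, -1/2,\, -1/2,\, 1/2,\, 0)$; in particular, no current flows through the middle edge $K_5$.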

We now give the definition of a homogeneous and of an inhomogeneous system of linear equations over a field, for a given set of variables.


\inputdefinition
{ }
{

Let $K$ denote a field, and let
\mathrelationchain
{\relationchain
{ a_{ij} }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for \mathcor {} {1 \leq i \leq m} {and} {1 \leq j \leq n} {.} We call
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & 0 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & 0 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & 0 \end{matrix}} { }
a \extrabracket {homogeneous} {} {} \definitionword {system of linear equations}{} in the variables \mathl{x_1 , \ldots , x_n}{.} A tuple
\mathrelationchain
{\relationchain
{ ( \xi_1 , \ldots , \xi_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \definitionword {solution of the linear system}{} if
\mathrelationchain
{\relationchain
{ \sum_{j = 1}^n a_{ij } \xi_j }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds for all
\mathrelationchain
{\relationchain
{i }
{ = }{ 1 , \ldots , m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

If
\mathrelationchain
{\relationchain
{ (c_1 , \ldots , c_m) }
{ \in }{ K^m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is given\extrafootnote {Such a vector is sometimes called a \keyword {disturbance vector} {} of the system} {.} {,} then
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m \end{matrix}} { }
is called an \definitionword {inhomogeneous system of linear equations}{.} A tuple
\mathrelationchain
{\relationchain
{ ( \zeta_1 , \ldots , \zeta_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \definitionword {solution to the inhomogeneous linear system}{} if
\mathrelationchain
{\relationchain
{ \sum_{j = 1}^n a_{ij} \zeta_j }
{ = }{ c_i }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}

holds for all $i$.
}

The set of all solutions of the system is called the \keyword {solution set} {.} In the homogeneous case, this is also called the \keyword {solution space} {,} as it is indeed, by Lemma 22.14 , a vector space.

A homogeneous system of linear equations always has the so-called \keyword {trivial solution} {}
\mathrelationchain
{\relationchain
{0 }
{ = }{ (0 , \ldots , 0) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} An inhomogeneous system does not necessarily have a solution. For a given inhomogeneous linear system of equations, the homogeneous system that arises when we replace the tuple on the right-hand side by the null vector $0$ is called the \keyword {corresponding homogeneous system} {.}
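The definitions translate directly into a row-by-row check. A minimal Python sketch (the function name `is_solution` is ours), using the two plane equations from the geometric example above as data:

```python
from fractions import Fraction as F

def is_solution(A, c, xi):
    # xi is a solution iff sum_j a_ij * xi_j = c_i holds for every row i
    return all(sum(a * x for a, x in zip(row, xi)) == ci
               for row, ci in zip(A, c))

# coefficients and right-hand side of the two plane equations
A = [[4, -2, -3],
     [3, -5, 2]]
c = [5, 1]

# the point found in the geometric example solves the inhomogeneous system
assert is_solution(A, c, (F(13, 17), 0, F(-11, 17)))
# the trivial tuple solves the corresponding homogeneous system ...
assert is_solution(A, [0, 0], (0, 0, 0))
# ... but not the inhomogeneous one
assert not is_solution(A, c, (0, 0, 0))
```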

The following situation describes a more abstract version of Example 21.1 .


\inputexample{}
{

Let $K$ denote a field, and
\mathrelationchain
{\relationchain
{ m }
{ \in }{ \N }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Suppose that in $K^m$, there are $n$ vectors \extrabracket {or $m$-tuples} {} {}
\mathdisp {v_1 = \begin{pmatrix} a_{1 1 } \\ a_{2 1 }\\ \vdots\\ a_{ m 1 } \end{pmatrix},\, v_2= \begin{pmatrix} a_{1 2 } \\ a_{2 2 }\\ \vdots\\ a_{ m 2 } \end{pmatrix} , \ldots , v_n = \begin{pmatrix} a_{1 n } \\ a_{2 n }\\ \vdots\\ a_{ m n } \end{pmatrix}} { }
given. Let
\mathrelationchaindisplay
{\relationchain
{ w }
{ =} { \begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} be another vector. We want to know whether $w$ can be written as a linear combination of the $v_j$. Thus, we are dealing with the question of whether there exist $n$ elements
\mathrelationchain
{\relationchain
{ s_1 , \ldots , s_n }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} such that
\mathrelationchaindisplay
{\relationchain
{ s_1 \begin{pmatrix} a_{1 1 } \\ a_{2 1 }\\ \vdots\\ a_{ m 1 } \end{pmatrix} + s_2 \begin{pmatrix} a_{1 2 } \\ a_{2 2 }\\ \vdots\\ a_{ m 2 } \end{pmatrix} + \cdots + s_n \begin{pmatrix} a_{1 n } \\ a_{2 n }\\ \vdots\\ a_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} holds. This equality of vectors means identity in every component, so that this condition yields a system of linear equations
\mathdisp {\begin{matrix} a _{ 1 1 } s _1 + a _{ 1 2 } s _2 + \cdots + a _{ 1 n } s _{ n } & = & c_1 \\ a _{ 2 1 } s _1 + a _{ 2 2 } s _2 + \cdots + a _{ 2 n } s _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } s _1 + a _{ m 2 } s _2 + \cdots + a _{ m n } s _{ n } & = & c_m .\end{matrix}} { }

}






\subtitle {Solving linear systems}

Systems of linear equations are best solved with the \keyword {elimination method} {,} in which variables are successively eliminated until we arrive at an equivalent simple system that can be solved directly \extrabracket {or from which one can read off that there is no solution} {} {.} For small systems, the substitution method or the equating method can also be useful.




\inputdefinition
{ }
{

Let $K$ denote a field, and let two \extrabracket {inhomogeneous} {} {} systems of linear equations,

with respect to the same set of variables, be given. The systems are called \definitionword {equivalent}{,} if their solution sets are identical.

}




\inputfactproof
{System of linear equations/Set of variables/Equivalent system/Manipulations/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m \end{matrix}} { }
be an inhomogeneous system of linear equations over $K$.}
\factconclusion {Then the following manipulations on this system yield an equivalent system. \enumerationsix {Swapping two equations. } {The multiplication of an equation by a scalar
\mathrelationchain
{\relationchain
{ s }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {The omitting of an equation, if it occurs twice. } {The duplication of an equation \extrabracket {in the sense to write down the equation again} {} {.} } {The omitting or adding of a zero row \extrabracket {zero equation} {} {.} } {The replacement of an equation $H$ by the equation that arises if we add to $H$ another equation $G$ of the system. }}
\factextra {}
}
{

Most statements are immediately clear. (2) follows from the fact that if
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n a_i \xi_i }
{ =} {c }
{ } { }
{ } { }
{ } { }
} {}{}{} holds, then also
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n (s a_i) \xi_i }
{ =} { s c }
{ } { }
{ } { }
{ } { }
} {}{}{} holds for every
\mathrelationchain
{\relationchain
{ s }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} If
\mathrelationchain
{\relationchain
{ s }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then this implication can be reversed by multiplication with $s^{-1}$.

(6). Let $G$ be the equation
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n a_ix_i }
{ =} { c }
{ } { }
{ } { }
{ } { }
} {}{}{,} and $H$ be the equation
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n b_ix_i }
{ =} { d }
{ } { }
{ } { }
{ } { }
} {}{}{.} If a tuple
\mathrelationchain
{\relationchain
{ (\xi_1 , \ldots , \xi_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} satisfies both equations, then it also satisfies the equation
\mathrelationchain
{\relationchain
{H' }
{ = }{G+H }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} And if the tuple satisfies the equations \mathcor {} {G} {and} {H'} {,} then it also satisfies the equations \mathcor {} {G} {and} {H=H'-G} {.}

}


For finding the solution of a linear system, the manipulations (2) and (6) are most important, where in general these two steps are combined, and the equation $H$ is replaced by an equation of the form \mathl{H + \lambda G}{} \extrabracket {with
\mathrelationchain
{\relationchain
{ G }
{ \neq }{ H }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {.} Here,
\mathrelationchain
{\relationchain
{ \lambda }
{ \in }{K }
{ }{ }
{ }{ }
{ }{}
} {}{}{} has to be chosen in such a way that the new equation contains one variable fewer than the old one. This process is called \keyword {elimination of a variable} {.} The elimination is applied not just to one equation, but to all equations except one \extrabracket {suitably chosen} {} {} \quotationshort{working row}{} $G$, always with a fixed \quotationshort{working variable}{.} The following \keyword {elimination lemma} {} describes this step.




\inputfactproof
{Linear system/Elimination lemma/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ denote a field, and let $S$ denote an \extrabracket {inhomogeneous} {} {} system of linear equations over $K$ in the variables \mathl{x_1 , \ldots , x_n}{.}}
\factcondition {Suppose that $x$ is a variable which occurs in at least one equation $G$ with a coefficient
\mathrelationchain
{\relationchain
{ a }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}}
\factconclusion {Then every equation $H$, different from $G$\extrafootnote {It is enough that these equations have a different index in the system} {.} {,} can be replaced by an equation $H'$, in which $x$ does not occur any more, and such that the new system of equations $S'$ that consists of $G$ and the equations $H'$, is equivalent with the system $S$.}
\factextra {}


}
{

Changing the numbering, we may assume
\mathrelationchain
{\relationchain
{x }
{ = }{x_1 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Let $G$ be the equation
\mathrelationchaindisplay
{\relationchain
{ ax_1 + \sum_{i = 2}^n a_ix_i }
{ =} {b }
{ } { }
{ } { }
{ } { }
} {}{}{} \extrabracket {with
\mathrelationchain
{\relationchain
{ a }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {,} and let $H$ be the equation
\mathrelationchaindisplay
{\relationchain
{ cx_1 + \sum_{i = 2}^n c_ix_i }
{ =} {d }
{ } { }
{ } { }
{ } { }
} {}{}{.} Then the equation
\mathrelationchaindisplay
{\relationchain
{H' }
{ =} {H - { \frac{ c }{ a } } G }
{ } { }
{ } { }
{ } { }
} {}{}{} has the form
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 2}^n { \left(c_i- { \frac{ c }{ a } } a_i\right) } x_i }
{ =} { d -{ \frac{ c }{ a } } b }
{ } { }
{ } { }
{ } { }
} {}{}{,} and $x_1$ does not occur in it. Because of
\mathrelationchain
{\relationchain
{H }
{ = }{H' + { \frac{ c }{ a } } G }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the systems are equivalent.

}
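The replacement step $H \mapsto H' = H - \frac{c}{a} G$ used in this proof can be written as a small function. A Python sketch with exact rational arithmetic (the name `eliminate` is ours); as sample data we take the first two equations of the worked example further below:

```python
from fractions import Fraction as F

def eliminate(G, H, j):
    """Return the equation H' = H - (c/a)*G, in which variable x_j no longer
    occurs. An equation is a pair (coefficients, right-hand side); G must
    have a nonzero coefficient a at position j."""
    (g, b), (h, d) = G, H
    f = F(h[j]) / F(g[j])           # the factor c/a
    return ([hc - f * gc for hc, gc in zip(h, g)], d - f * b)

# variables x, y, z, u, v
G = ([2, 5, 2, 0, -1], 3)           # 2x + 5y + 2z      -  v = 3
H = ([3, -4, 0, 1, 2], 1)           # 3x - 4y      +  u + 2v = 1
Hp = eliminate(G, H, 0)             # eliminate x, i.e. form H - (3/2) G
print(Hp)
```

The result is the equation $- \frac{23}{2} y - 3z + u + \frac{7}{2} v = - \frac{7}{2}$, exactly the second row of the eliminated system in the worked example below.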





\inputfactproof
{Linear inhomogeneous system/Elimination/Echelon form/Fact}
{Theorem}
{}
{

\factsituation {Every \extrabracket {inhomogeneous} {} {} system of linear equations over a field $K$}
\factconclusion {can be transformed, by the manipulations described in Lemma 21.7 , to an equivalent linear system of the form
\mathdisp {\begin{matrix} b_{1s_1} x_{s_1} & + b_{1 s_1 +1} x_{s_1+1} & \ldots & \ldots & \ldots & \ldots & \ldots & +b_{1 n} x_{n} & = & d_1 \\ 0 & \ldots & 0 & b_{2 s_2} x_{s_2} & \ldots & \ldots & \ldots & + b_{2 n} x_{n} & = & d_2 \\ \vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & = & \vdots \\ 0 & \ldots & \ldots & \ldots & 0 & b_{m {s_m} } x_{s_m} & \ldots & +b_{m n} x_n & = & d_m \\ ( 0 & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & 0 & = & d_{m+1} ) , \end{matrix}} { }
where the leading coefficients \mathl{b_{1s_1}, b_{2 s_2} , \ldots , b_{m s_m}}{} of the rows are different from $0$.}
\factextra {Here, either
\mathrelationchain
{\relationchain
{ d_{m+1} }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and the last row can be omitted, or
\mathrelationchain
{\relationchain
{ d_{m+1} }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and then the system has no solution at all.}
}
{

This follows directly from the elimination lemma, by successively eliminating variables. Elimination is applied first to the first variable \extrabracket {in the given ordering} {} {,} say \mathl{x_{s_1}}{,} that occurs in at least one equation with a coefficient $\neq 0$ \extrabracket {if it occurs in only one equation, then this elimination step is already done} {} {.} This elimination process is repeated as long as the new subsystem \extrabracket {without the working equation used in the preceding elimination step} {} {} still has at least one equation with a coefficient different from $0$ for some variable. In the end, only equations without variables remain; they are either all zero equations, or there is no solution.

}





\inputfactproof
{Linear inhomogeneous system/Strictly triangular/Solution/Fact}
{Lemma}
{}
{

\factsituation {Let an inhomogeneous system of linear equations in triangular form
\mathdisp {\begin{matrix} a_{11} x_1 & + a_{12} x_2 & \ldots & +a_{1m} x_m & \ldots & + a_{1 n} x_{n} & = & c_1 \\ 0 & a_{22} x_2 & \ldots & \ldots & \ldots & + a_{2 n} x_{n} & = & c_2 \\ \vdots & \ddots & \ddots & \vdots & \vdots & \vdots & = & \vdots \\ 0 & \ldots & 0 & a_{mm} x_m & \ldots & +a_{m n} x_n & = & c_m \\ \end{matrix}} { }
with
\mathrelationchain
{\relationchain
{m }
{ \leq }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} over a field $K$ be given, where the diagonal elements are all not $0$.}
\factconclusion {Then the solutions \mathl{(x_1 , \ldots , x_m, x_{m+1} , \ldots , x_n)}{} are in bijection with the tuples
\mathrelationchain
{\relationchain
{ ( x_{m+1} , \ldots , x_n) }
{ \in }{ K^{n-m} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} The \mathl{n-m}{} entries \mathl{x_{m+1} , \ldots , x_n}{} can be chosen arbitrarily; they determine a unique solution, and every solution has this form.}
\factextra {}
}
{

This is clear: when the tuple \mathl{(x_{m+1} , \ldots , x_n)}{} is given, the rows determine the remaining variables successively from bottom to top.

}
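The bijection of the lemma is effectively an algorithm: pick values for the free variables, then work upwards through the rows. A Python sketch (the name `back_substitute` and the small sample system are our own illustrations):

```python
from fractions import Fraction as F

def back_substitute(A, c, free):
    """Solve a triangular system with m rows and n variables (m <= n) whose
    diagonal entries are nonzero, given values for the free variables
    x_{m+1}, ..., x_n; each row determines one variable, bottom to top."""
    m, n = len(A), len(A[0])
    x = [None] * m + list(free)
    for i in reversed(range(m)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = F(c[i] - s) / F(A[i][i])
    return x

# sample triangular system: 2x + y + z = 5 and 3y + z = 6, so z is free
A = [[2, 1, 1],
     [0, 3, 1]]
c = [5, 6]
print(back_substitute(A, c, free=[0]))   # choose z = 0: gives (3/2, 2, 0)
```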


For
\mathrelationchain
{\relationchain
{m }
{ = }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} there are no free variables, and the linear system has exactly one solution.




\inputexample{}
{

We want to solve the inhomogeneous linear system
\mathdisp {\begin{matrix} 2x & +5y & +2z & & -v & = & 3 \\ 3x & -4y & & +u & +2v & = & 1 \\ 4x & & -2z & +2u & & = & 7 \, \end{matrix}} { }
over $\R$ \extrabracket {or over $\Q$} {} {.} First, we eliminate $x$ by keeping the first row $I$, replacing the second row $II$ by \mathl{II - { \frac{ 3 }{ 2 } }I}{,} and replacing the third row $III$ by \mathl{III-2I}{.} This yields
\mathdisp {\begin{matrix} 2x & +5y & +2z & & -v & = & 3 \\ & - { \frac{ 23 }{ 2 } } y & -3z & +u & + { \frac{ 7 }{ 2 } } v & = & { \frac{ -7 }{ 2 } } \\ & -10y & -6z & +2u & +2v & = & 1 \, . \end{matrix}} { }
Now we could eliminate $y$ from the \extrabracket {new} {} {} third row with the help of the second row. But, because of the fractions, we rather eliminate $z$ \extrabracket {which also eliminates $u$} {} {.} We leave the first and the second row as they are, and we replace the third row $III$ by \mathl{III-2II}{.} This yields the system, in a new ordering of the variables\extrafootnote {Such a reordering is harmless as long as we keep the names of the variables. But if we write down the system in matrix notation without the variables, then we have to be careful to remember the reordering of the columns} {.} {,}
\mathdisp {\begin{matrix} 2x & +2z & & +5y & -v & = & 3 \\ & -3z & +u & - { \frac{ 23 }{ 2 } } y & + { \frac{ 7 }{ 2 } } v & = & { \frac{ -7 }{ 2 } } \\ & & & 13y & -5v & = & 8 \, . \end{matrix}} { }
} {}{}{.} Now we can choose an arbitrary \extrabracket {free} {} {} value for $v$. The third row then determines $y$ uniquely; we must have
\mathrelationchaindisplay
{\relationchain
{ y }
{ =} { { \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v }
{ } { }
{ } { }
{ } { }
} {}{}{.} In the second equation, we can choose $u$ arbitrarily; this determines $z$ via
\mathrelationchainalign
{\relationchainalign
{z }
{ =} { - { \frac{ 1 }{ 3 } } { \left(- { \frac{ 7 }{ 2 } } -u - { \frac{ 7 }{ 2 } } v + { \frac{ 23 }{ 2 } } { \left({ \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v\right) } \right) } }
{ =} { - { \frac{ 1 }{ 3 } } { \left(- { \frac{ 7 }{ 2 } } -u - { \frac{ 7 }{ 2 } } v + { \frac{ 92 }{ 13 } } + { \frac{ 115 }{ 26 } } v\right) } }
{ =} { - { \frac{ 1 }{ 3 } } { \left({ \frac{ 93 }{ 26 } } -u + { \frac{ 12 }{ 13 } } v\right) } }
{ =} { -{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v }
} {} {}{.} The first row determines $x$, namely
\mathrelationchainalign
{\relationchainalign
{x }
{ =} { { \frac{ 1 }{ 2 } } { \left(3 -2z -5y +v\right) } }
{ =} { { \frac{ 1 }{ 2 } } { \left(3 -2 { \left(-{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v\right) } - 5 { \left({ \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v\right) } + v\right) } }
{ =} { { \frac{ 1 }{ 2 } } { \left({ \frac{ 30 }{ 13 } } - { \frac{ 2 }{ 3 } } u - { \frac{ 4 }{ 13 } } v\right) } }
{ =} { { \frac{ 15 }{ 13 } } - { \frac{ 1 }{ 3 } } u - { \frac{ 2 }{ 13 } } v }
} {} {}{.} Hence, the solution set is
\mathdisp {{ \left\{ { \left({ \frac{ 15 }{ 13 } } - { \frac{ 1 }{ 3 } } u - { \frac{ 2 }{ 13 } } v, { \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v ,-{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v ,u,v\right) } \mid u,v \in \R \right\} }} { . }
A particularly simple solution is obtained by setting the free variables \mathcor {} {u} {and} {v} {} to $0$. This yields the special solution
\mathrelationchaindisplay
{\relationchain
{ (x,y,z,u,v) }
{ =} { \left( { \frac{ 15 }{ 13 } } , \, { \frac{ 8 }{ 13 } } , \, - { \frac{ 31 }{ 26 } } , \, 0 , \, 0 \right) }
{ } { }
{ } { }
{ } { }
} {}{}{.} The general solution set can also be written as
\mathdisp {{ \left\{ { \left({ \frac{ 15 }{ 13 } } , { \frac{ 8 }{ 13 } } , - { \frac{ 31 }{ 26 } } ,0,0\right) } + u { \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) } + v { \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, - { \frac{ 4 }{ 13 } },0,1\right) } \mid u, v \in \R \right\} }} { . }
Here,
\mathdisp {{ \left\{ u { \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) } +v { \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, -{ \frac{ 4 }{ 13 } },0,1\right) } \mid u,v \in \R \right\} }} { }
is a description of the general solution of the corresponding homogeneous linear system.

}
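As a final check, the general solution can be substituted back into the original three equations. The following Python sketch (the name `solves` is ours) verifies, with exact rational arithmetic, that the formulas solve all three equations for a grid of values of the free variables:

```python
from fractions import Fraction as F

def solves(u, v):
    # the general solution from above, as functions of the free variables u, v
    x = F(15, 13) - F(1, 3) * u - F(2, 13) * v
    y = F(8, 13) + F(5, 13) * v
    z = -F(31, 26) + F(1, 3) * u - F(4, 13) * v
    return (2*x + 5*y + 2*z - v == 3 and
            3*x - 4*y + u + 2*v == 1 and
            4*x - 2*z + 2*u == 7)

# check a grid of rational values for the free variables
assert all(solves(F(u), F(v)) for u in range(-3, 4) for v in range(-3, 4))
```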




\inputremark {}
{

A \keyword {system of linear inequalities} {} over the rational numbers or over the real numbers is a system of the form
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & \star & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & \star & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & \star & c_m \, , \end{matrix}} { }
where \mathl{\star}{} might be \mathl{\leq}{} or \mathl{\geq}{.} It is considerably more difficult to find the solution set of such a system than in the case of equations. In general, it is not possible to eliminate the variables.

}