Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 22
A healthy breakfast starts with a fruit salad. The following table shows how much vitamin C, calcium and magnesium various fruits contain (in milligrams per 100 grams of the fruit).
| | apple | orange | grapes | banana |
|---|---|---|---|---|
| vitamin C | | | | |
| calcium | | | | |
| magnesium | | | | |
My fruit salad today consists of the mentioned fruits in certain portions (meaning so and so many grams of apple, and so on). From that, one can calculate the total vitamin C amount, the calcium amount and the magnesium amount of the fruit salad, by simply multiplying, for each fruit, its portion with its specific amount, and summing up everything. If the portions (in units of 100 grams) are $x_1, x_2, x_3, x_4$ and the vitamin C contents per 100 grams are $a_{11}, a_{12}, a_{13}, a_{14}$, the vitamin C amount of the complete fruit salad is thus
$$a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4.$$
This operation is an example of how a matrix operates. The table immediately yields a $3 \times 4$-matrix $A = (a_{ij})$, whose rows correspond to the nutrients and whose columns correspond to the fruits, and the above calculation is realized by the matrix multiplication
$$A \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.$$
One can also ask for a fruit salad which has certain prescribed amounts of vitamin C, calcium and magnesium, say $b_1$, $b_2$ and $b_3$. This leads to a system of linear equations in matrix form,
$$A \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.$$
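A minimal computational sketch of this calculation in Python; all nutrient values and portion sizes below are made-up placeholders, since the concrete numbers of the table are not fixed here:

```python
nutrients = ["vitamin C", "calcium", "magnesium"]

# rows: nutrients, columns: apple, orange, grapes, banana
# (mg per 100 g) -- made-up placeholder values
A = [
    [12.0, 53.0,  4.0,  9.0],   # vitamin C
    [ 7.0, 40.0, 12.0,  5.0],   # calcium
    [ 6.0, 10.0,  8.0, 27.0],   # magnesium
]

# portions x_1, ..., x_4 of the fruit salad, in units of 100 g (also made up)
x = [1.0, 2.0, 0.5, 1.5]

# matrix times column vector: for each nutrient, multiply every fruit's
# portion by its content per 100 g and sum over all fruits
totals = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

for name, value in zip(nutrients, totals):
    print(f"{name}: {value} mg")
```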
- Matrices
A system of linear equations can easily be written with a matrix. This allows us to make the manipulations that lead to the solution of such a system without writing down the variables. Matrices are quite simple objects; however, they can represent quite different mathematical objects (e.g., a family of column vectors, a family of row vectors, a linear mapping, a table of physical interactions, a relation, a linear vector field, etc.), which one has to keep in mind in order to prevent wrong conclusions.
Let $K$ denote a field, and let $I$ and $J$ denote index sets. An $I \times J$-matrix is a mapping
$$I \times J \longrightarrow K, \quad (i, j) \longmapsto a_{ij}.$$
If $I = \{1, \dots, m\}$ and $J = \{1, \dots, n\}$, then we talk about an $m \times n$-matrix. In this case, the matrix is usually written as
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.$$
We will usually restrict to this last situation.
For every $i \in I$, the family $a_{ij}$, $j \in J$, is called the $i$-th row of the matrix, which is usually written as a row tuple (or row vector)
$$(a_{i1}, a_{i2}, \dots, a_{in}).$$
For every $j \in J$, the family $a_{ij}$, $i \in I$, is called the $j$-th column of the matrix, usually written as a column tuple (or column vector)
$$\begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix}.$$
The elements $a_{ij}$ are called the entries of the matrix. For $a_{ij}$, the number $i$ is called the row index, and $j$ is called the column index of the entry. The position of the entry $a_{ij}$ is where the $i$-th row meets the $j$-th column. A matrix with $m = n$ is called a square matrix. An $m \times 1$-matrix is simply a column tuple (or column vector) of length $m$, and a $1 \times n$-matrix is simply a row tuple (or row vector) of length $n$. The set of all matrices with $m$ rows and $n$ columns (and with entries in $K$) is denoted by $\operatorname{Mat}_{m \times n}(K)$; in case $m = n$, we also write $\operatorname{Mat}_n(K)$.
Two matrices $A, B \in \operatorname{Mat}_{m \times n}(K)$ are added by adding corresponding entries. The multiplication of a matrix $A$ with an element $s \in K$ (a scalar) is also defined entrywise, so
$$\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} + \begin{pmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{pmatrix} = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{pmatrix}$$
and
$$s \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} s a_{11} & \cdots & s a_{1n} \\ \vdots & \ddots & \vdots \\ s a_{m1} & \cdots & s a_{mn} \end{pmatrix}.$$
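As a small illustration, these entrywise operations can be written directly from the definitions; the helper names `mat_add` and `scalar_mul` are ad-hoc names for this sketch, not part of any library:

```python
def mat_add(A, B):
    """Add two matrices of the same format entry by entry."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def scalar_mul(s, A):
    """Multiply every entry of the matrix A by the scalar s."""
    return [[s * a for a in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]
B = [[0, 1, 0], [2, 0, 2]]
print(mat_add(A, B))      # [[1, 3, 3], [6, 5, 8]]
print(scalar_mul(3, A))   # [[3, 6, 9], [12, 15, 18]]
```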
The multiplication of matrices is defined in the following way:
Let $K$ denote a field, and let $A = (a_{ij})_{ij}$ denote an $m \times n$-matrix and $B = (b_{jk})_{jk}$ an $n \times p$-matrix over $K$. Then the matrix product
$$A B$$
is the $m \times p$-matrix whose entries are given by
$$c_{ik} = \sum_{j = 1}^{n} a_{ij} b_{jk}.$$
A matrix multiplication is only possible when the number of columns of the left-hand matrix equals the number of rows of the right-hand matrix. Just think of the scheme
$$(\mathrm{R}\;\mathrm{O}\;\mathrm{W}) \begin{pmatrix} \mathrm{C} \\ \mathrm{O} \\ \mathrm{L} \\ \mathrm{U} \\ \mathrm{M} \\ \mathrm{N} \end{pmatrix};$$
the result is an $m \times p$-matrix. In particular, one can multiply an $m \times n$-matrix $A$ with a column vector of length $n$ (the vector on the right), and the result is a column vector of length $m$. The two factors can also be multiplied with the roles interchanged; for instance, a row vector of length $m$ (on the left) times an $m \times n$-matrix yields a row vector of length $n$.
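A sketch of the multiplication rule in code; the helper name `mat_mul` is ours, not a library function. It checks the compatibility of the formats and computes the entries $c_{ik} = \sum_j a_{ij} b_{jk}$:

```python
def mat_mul(A, B):
    """Product of an m x n-matrix A and an n x p-matrix B, following the definition."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    if n != n2:
        raise ValueError("columns of the left matrix must match rows of the right matrix")
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4], [5, 6]]      # a 3x2-matrix
B = [[7, 8, 9], [10, 11, 12]]     # a 2x3-matrix
print(mat_mul(A, B))              # a 3x3-matrix
print(mat_mul(B, A))              # roles interchanged: a 2x2-matrix
```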
An $n \times n$-matrix of the form
$$\begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}$$
is called a diagonal matrix.
The $n \times n$-matrix
$$E_n := \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
is called the identity matrix.
The identity matrix $E_n$ has the property $E_n M = M = M E_n$ for an arbitrary $n \times n$-matrix $M$.
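A quick numerical check of this property, using NumPy and an arbitrarily chosen $3 \times 3$-matrix:

```python
import numpy as np

# check E_n M = M = M E_n for an arbitrarily chosen 3x3-matrix M
M = np.array([[2, -1, 0],
              [4,  3, 7],
              [1,  0, 5]])
E = np.eye(3, dtype=int)   # the identity matrix E_3

print(np.array_equal(E @ M, M))   # True
print(np.array_equal(M @ E, M))   # True
```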
If we multiply an $m \times n$-matrix $A = (a_{ij})_{ij}$ with a column vector $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$, then we get
$$A x = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \end{pmatrix}.$$
Hence, an inhomogeneous system of linear equations with disturbance vector $c = \begin{pmatrix} c_1 \\ \vdots \\ c_m \end{pmatrix}$ can be written briefly as
$$A x = c.$$
Then, the manipulations on the equations that do not change the solution set can be replaced by corresponding manipulations on the rows of the matrix. It is not necessary to write down the variables.
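The following sketch, with hypothetical numbers and NumPy only for the solving step, illustrates that a row manipulation applied simultaneously to the coefficient matrix and the disturbance vector does not change the solution:

```python
import numpy as np

# a hypothetical system Ax = c with three equations and three variables
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 0.0,  1.0]])
c = np.array([1.0, 12.0, 7.0])
print(np.linalg.solve(A, c))

# row manipulation: add twice the first row to the second row, applied to the
# coefficient matrix and to the disturbance vector simultaneously
A[1] += 2 * A[0]
c[1] += 2 * c[0]
print(np.linalg.solve(A, c))   # the same solution as before
```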
- Vector spaces
The central concept of linear algebra is a vector space.
Let $K$ denote a field, and $V$ a set with a distinguished element $0 \in V$, and with two mappings
$$+ \colon V \times V \longrightarrow V, \quad (u, v) \longmapsto u + v,$$
and
$$\cdot \colon K \times V \longrightarrow V, \quad (s, v) \longmapsto s v.$$
Then $V$ is called a $K$-vector space (or a vector space over $K$), if the following axioms hold[1] (where $r, s \in K$ and $u, v, w \in V$ are arbitrary).[2]
- $u + v = v + u$,
- $(u + v) + w = u + (v + w)$,
- $v + 0 = v$,
- For every $v \in V$, there exists a $z \in V$ such that $v + z = 0$,
- $1 \cdot u = u$,
- $r (s u) = (r s) u$,
- $r (u + v) = r u + r v$,
- $(r + s) u = r u + s u$.
The binary operation in $V$ is called (vector) addition, and the operation $K \times V \to V$ is called scalar multiplication. The elements in a vector space are called vectors, and the elements $s \in K$ are called scalars. The null element $0 \in V$ is called the null vector, and for $v \in V$, the inverse element is called the negative of $v$, denoted by $-v$. The field $K$ which occurs in the definition of a vector space is called the base field. All the concepts of linear algebra refer to such a base field. In case $K = \mathbb{R}$, we talk about a real vector space, and in case $K = \mathbb{C}$, we talk about a complex vector space. For real and complex vector spaces, there exist further structures like length, angle, and inner product. But first, we develop the algebraic theory of vector spaces over an arbitrary field.
Let $K$ denote a field, and let $n \in \mathbb{N}_+$. Then the product set
$$K^n = \underbrace{K \times \cdots \times K}_{n\text{-times}} = \{ (x_1, \dots, x_n) \mid x_i \in K \},$$
with componentwise addition and with scalar multiplication given by
$$s (x_1, \dots, x_n) = (s x_1, \dots, s x_n),$$
is a vector space. This space is called the $n$-dimensional standard space. In particular, $K^1 = K$ is a vector space.
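For illustration, the componentwise operations of the standard space can be carried out for $K = \mathbb{Q}$ with exact fractions; `vec_add` and `vec_scale` are ad-hoc helper names for this sketch:

```python
from fractions import Fraction

def vec_add(v, w):
    """Componentwise addition in K^n."""
    return [a + b for a, b in zip(v, w)]

def vec_scale(s, v):
    """Scalar multiplication in K^n, applied componentwise."""
    return [s * a for a in v]

v = [Fraction(1, 2), Fraction(3), Fraction(-2, 5)]
w = [Fraction(1), Fraction(-1, 3), Fraction(0)]
print([str(a) for a in vec_add(v, w)])                 # ['3/2', '8/3', '-2/5']
print([str(a) for a in vec_scale(Fraction(2, 7), v)])  # ['1/7', '6/7', '-4/35']
```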
The null space $0$, consisting of just the single element $0$, is a vector space. It might be considered as $K^0$.
The vectors in the standard space $K^n$ can be written as row vectors
$$(a_1, a_2, \dots, a_n)$$
or as column vectors
$$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
The vector
$$e_i := \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$
where the $1$ is at the $i$-th position, is called the $i$-th standard vector.
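A one-line construction of the standard vectors, for illustration (the function name is ours):

```python
def standard_vector(n, i):
    """Return the i-th standard vector e_i of length n: 1 at position i, 0 elsewhere."""
    return [1 if j == i else 0 for j in range(1, n + 1)]

print(standard_vector(4, 2))   # [0, 1, 0, 0]
```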
The complex numbers $\mathbb{C}$ form a field, and therefore they also form a vector space over the field $\mathbb{C}$ itself. However, the set of complex numbers equals $\mathbb{R}^2$ as an additive group. The multiplication of a complex number $a + b \mathrm{i}$ with a real number $s$ is componentwise, so this multiplication coincides with the scalar multiplication on $\mathbb{R}^2$. Hence, the set of complex numbers $\mathbb{C}$ is also a real vector space.
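This can be seen concretely: multiplying a complex number by a real scalar acts componentwise on the pair (real part, imaginary part), for example:

```python
z = 3 + 4j      # the complex number 3 + 4i, i.e. the pair (3, 4) in R^2
s = 2.5         # a real scalar

print(s * z)                       # (7.5+10j)
print((s * z.real, s * z.imag))    # (7.5, 10.0) -- the same result, componentwise
```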
For a field $K$, and given natural numbers $m, n$, the set
$$\operatorname{Mat}_{m \times n}(K)$$
of all $m \times n$-matrices, endowed with componentwise addition and componentwise scalar multiplication, is a $K$-vector space. The null element in this vector space is the null matrix
$$0 = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix}.$$
Let $R = K[X]$ be the polynomial ring in one variable over the field $K$, consisting of all polynomials, that is, expressions of the form
$$a_n X^n + a_{n-1} X^{n-1} + \cdots + a_2 X^2 + a_1 X + a_0,$$
with $a_i \in K$. Using componentwise addition and componentwise multiplication with a scalar $s \in K$ (this is also multiplication with the constant polynomial $s$), the polynomial ring $K[X]$ is a $K$-vector space.
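For illustration, representing a polynomial by its coefficient tuple makes the componentwise operations explicit (a sketch; `poly_add` and `poly_scale` are ad-hoc helper names):

```python
# a polynomial is stored as the list of its coefficients, index i <-> X^i
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad the shorter coefficient list with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(s, p):
    return [s * a for a in p]

p = [1, 0, 2]        # 1 + 2 X^2
q = [0, 3, -1, 5]    # 3 X - X^2 + 5 X^3
print(poly_add(p, q))     # [1, 3, 1, 5], i.e. 1 + 3 X + X^2 + 5 X^3
print(poly_scale(4, p))   # [4, 0, 8]
```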
Let $K$ be a field, and let $V$ be a $K$-vector space. Then the following properties hold (for $v \in V$ and $s \in K$).
- We have $0 v = 0$.
- We have $s 0 = 0$.
- We have $(-1) v = -v$.
- If $s \neq 0$ and $v \neq 0$, then $s v \neq 0$.
Proof
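For instance, the first property follows already from the distributive law $(r + s) v = r v + s v$ together with cancellation in the group $(V, +, 0)$:
$$0 \cdot v = (0 + 0) \cdot v = 0 \cdot v + 0 \cdot v,$$
and adding the negative of $0 \cdot v$ on both sides yields $0 \cdot v = 0$.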
- Linear subspaces
Let $K$ be a field, and let $V$ be a $K$-vector space. A subset $U \subseteq V$ is called a linear subspace if the following properties hold.
- $0 \in U$.
- If $u, v \in U$, then also $u + v \in U$.
- If $u \in U$ and $s \in K$, then also $s u \in U$ holds.
Addition and scalar multiplication can be restricted to such a linear subspace. Hence, the linear subspace is itself a vector space, see Exercise 22.20. The simplest linear subspaces in a vector space $V$ are the null space $0$ and the whole vector space $V$.
Let $K$ be a field, and let
$$\begin{matrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n & = & 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n & = & 0 \\ \vdots & & \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n & = & 0 \end{matrix}$$
be a homogeneous system of linear equations over $K$. Then the set of all solutions to the system is a linear subspace of the standard space $K^n$.
Proof
Therefore, we talk about the solution space of the linear system. In particular, the sum of two solutions of a homogeneous system of linear equations is again a solution. The solution set of an inhomogeneous linear system is not a vector space. However, one can add, to a solution of an inhomogeneous system, a solution of the corresponding homogeneous system, and obtain again a solution of the inhomogeneous system.
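A numerical illustration of these two observations, with a hypothetical coefficient matrix and NumPy only for the matrix-vector products:

```python
import numpy as np

# a hypothetical pair of systems Ax = 0 and Ax = c with the same matrix A
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 1.0,  1.0]])
c = np.array([3.0, 3.0])

u  = np.array([1.0,  1.0,  0.0])   # solves Ax = c
h1 = np.array([1.0, -1.0, -1.0])   # solves Ax = 0
h2 = np.array([-2.0, 2.0,  2.0])   # another solution of Ax = 0

print(A @ (h1 + h2))   # [0. 0.]  sum of homogeneous solutions is again homogeneous
print(A @ (u + h1))    # [3. 3.]  inhomogeneous solution plus homogeneous solution
```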
We take a look at the homogeneous version of Example 21.11, so we consider the corresponding homogeneous linear system over $K$. Due to Lemma 22.14, the solution set $L$ is a linear subspace of the standard space. We have described it explicitly in Example 21.11 by means of basic solutions.
This description also shows that the solution set is a vector space. Moreover, with this description, it is clear that $L$ is in bijection with $K^2$, and this bijection respects the addition and also the scalar multiplication (the solution set of the inhomogeneous system is also in bijection with $K^2$, but there is no reasonable addition nor scalar multiplication on it). However, this bijection depends heavily on the two chosen "basic solutions", which depend on the order of elimination. There are several equally good pairs of basic solutions for $L$.
This example also shows the following: the solution space of a linear system over $K$ is "in a natural way", that is, independent of any choice, a linear subspace of $K^n$ (where $n$ is the number of variables). For this solution space, there always exists a "linear bijection" (an "isomorphism") to some $K^d$ ($d \leq n$), but there is no natural choice for such a bijection. This is one of the main reasons to work with abstract vector spaces, instead of just $K^n$.
- Footnotes
- ↑ The first four axioms, which are independent of $K$, mean that $(V, +, 0)$ is a commutative group.
- ↑ Also for vector spaces, there is the convention that multiplication binds more strongly than addition.