Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 17




Universal property of the determinant

The determinant fulfills the characteristic properties that it is multilinear and alternating. This, together with the property that the determinant of the identity matrix is $1$, already determines the determinant in a unique way.


Let $V$ be a vector space over a field $K$ of dimension $n$. A mapping

$\triangle \colon V^n = V \times \cdots \times V \longrightarrow K$

is called a determinant function if the following two conditions are fulfilled.

  1. $\triangle$ is multilinear.
  2. $\triangle$ is alternating.

Let $K$ be a field and $n \in \mathbb{N}_+$. Let

$\triangle \colon \operatorname{Mat}_n(K) = (K^n)^n \longrightarrow K$

be a determinant function. Then $\triangle$ fulfills the following properties.

  1. If a row of $M$ is multiplied with $s \in K$, then $\triangle(M)$ is multiplied with $s$.
  2. If $M$ contains a zero row, then $\triangle(M) = 0$.
  3. If in $M$ two rows are swapped, then $\triangle(M)$ is multiplied with the factor $-1$.
  4. If a multiple of a row is added to another row, then $\triangle(M)$ does not change.
  5. If $\triangle(E_n) = 1$, then, for an upper triangular matrix $M = (a_{ij})_{ij}$, we have $\triangle(M) = a_{11} a_{22} \cdots a_{nn}$.

(1) and (2) follow directly from multilinearity.
(3) follows from Lemma 16.8 .
To prove (4), we consider the situation where we add to the $k$-th row the $s$-multiple of the $i$-th row, $i \neq k$. Due to the parts already proven, we have

$\triangle(v_1, \ldots, v_{k-1}, v_k + s v_i, v_{k+1}, \ldots, v_n) = \triangle(v_1, \ldots, v_k, \ldots, v_n) + s \cdot \triangle(v_1, \ldots, v_{k-1}, v_i, v_{k+1}, \ldots, v_n) = \triangle(v_1, \ldots, v_k, \ldots, v_n),$

where the middle summand vanishes, since in it the row $v_i$ occurs twice (in position $i$ and in position $k$) and $\triangle$ is alternating.

(5). If a diagonal element is $0$, then set $i := \max\{ k \mid a_{kk} = 0 \}$. We can add to the $i$-th row suitable multiples of the $j$-th rows, $j > i$, in order to achieve that the new $i$-th row is a zero row, without changing the value of the determinant function. Due to (2), this value is $0$.

In case no diagonal element is $0$, we may obtain, by several scalings, that all diagonal elements are $1$. By adding suitable multiples of rows, we furthermore obtain the identity matrix. Therefore,

$\triangle(M) = a_{11} a_{22} \cdots a_{nn} \cdot \triangle(E_n) = a_{11} a_{22} \cdots a_{nn}.$
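These rules can be illustrated numerically for the determinant itself. The following small Python sketch (our own illustration, using NumPy and floating-point arithmetic, so comparisons hold only up to rounding) checks all five properties on a random matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.integers(-5, 6, size=(4, 4)).astype(float)

    # (1) multiplying a row with s multiplies the determinant with s
    M1 = M.copy(); M1[2] *= 7.0
    assert np.isclose(np.linalg.det(M1), 7.0 * np.linalg.det(M))

    # (2) a zero row forces determinant 0
    M2 = M.copy(); M2[1] = 0.0
    assert np.isclose(np.linalg.det(M2), 0.0)

    # (3) swapping two rows multiplies the determinant with -1
    M3 = M.copy(); M3[[0, 2]] = M3[[2, 0]]
    assert np.isclose(np.linalg.det(M3), -np.linalg.det(M))

    # (4) adding a multiple of one row to another row changes nothing
    M4 = M.copy(); M4[1] += 5.0 * M4[3]
    assert np.isclose(np.linalg.det(M4), np.linalg.det(M))

    # (5) upper triangular: determinant = product of the diagonal entries
    T = np.triu(M)
    assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))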


Let $K$ be a field and $n \in \mathbb{N}_+$. Then there exists exactly one determinant function

$\triangle \colon \operatorname{Mat}_n(K) = (K^n)^n \longrightarrow K$

fulfilling

$\triangle(e_1, e_2, \ldots, e_n) = 1,$

where $e_1, e_2, \ldots, e_n$ denote the standard vectors, namely the determinant.

The determinant fulfills, due to Theorem 16.9 , Theorem 16.10 and Lemma 16.4 , all the given properties.
Uniqueness. For every matrix $M$, there exists a sequence of elementary row operations such that, in the end, we get an upper triangular matrix. Hence, due to Lemma 17.2 , the value of the determinant function on $M$ is determined by its values on the upper triangular matrices. Therefore, after scaling and row addition, it is even determined by its value on the identity matrix.
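The uniqueness argument is in fact an algorithm: bring the matrix into upper triangular form by row swaps and row additions, keep track of the sign, and take the product of the diagonal. A minimal Python sketch of this procedure (the function name det_by_elimination is ours; exact arithmetic over $\mathbb{Q}$ via Fraction):

    from fractions import Fraction

    def det_by_elimination(rows):
        # Row swaps contribute a factor -1; row additions change nothing;
        # the value on the triangular result is the diagonal product.
        M = [[Fraction(x) for x in row] for row in rows]
        n = len(M)
        sign = 1
        for j in range(n):
            # find a row with a nonzero entry in column j
            pivot = next((i for i in range(j, n) if M[i][j] != 0), None)
            if pivot is None:
                return Fraction(0)      # no pivot: the matrix is not invertible
            if pivot != j:
                M[j], M[pivot] = M[pivot], M[j]
                sign = -sign            # property (3)
            for i in range(j + 1, n):
                factor = M[i][j] / M[j][j]
                M[i] = [a - factor * b for a, b in zip(M[i], M[j])]  # property (4)
        result = Fraction(sign)
        for j in range(n):
            result *= M[j][j]           # property (5)
        return result

    print(det_by_elimination([[0, 1, 2], [3, 4, 5], [6, 7, 1]]))  # 21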



The multiplication theorem for determinants

We discuss several important theorems about the determinant.


Let $K$ denote a field, and $n \in \mathbb{N}_+$. Then for matrices $A, B \in \operatorname{Mat}_n(K)$, the relation

$\det(A \circ B) = \det A \cdot \det B$

holds.

We fix the matrix $B$.

Suppose first that $\det B = 0$. Then, due to Theorem 16.11 , the matrix $B$ is not invertible and therefore, also $A \circ B$ is not invertible. Hence, $\det(A \circ B) = 0 = \det A \cdot \det B$.

Suppose now that $B$ is invertible. In this case, we consider the well-defined mapping

$\delta \colon \operatorname{Mat}_n(K) \longrightarrow K, \, A \longmapsto (\det B)^{-1} \cdot \det(A \circ B).$

We want to show that this mapping equals the mapping $A \mapsto \det A$, by showing that it fulfills all the properties which, according to Theorem 17.3 , characterize the determinant. If $z_1, \ldots, z_n$ denote the rows of $A$, then $\delta(A)$ is computed by applying the determinant to the rows $z_1 B, \ldots, z_n B$, and then by multiplying with $(\det B)^{-1}$. Hence the multilinearity and the alternating property follow from Exercise 16.29 . If we start with $A = E_n$, then $A \circ B = B$ and thus

$\delta(E_n) = (\det B)^{-1} \cdot \det(E_n \circ B) = (\det B)^{-1} \cdot \det B = 1.$
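A quick numerical illustration of the theorem (our own NumPy sketch; the matrix product $A \circ B$ is written A @ B, and floating-point equality holds up to rounding):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    # det(A o B) = det(A) * det(B)
    assert np.isclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B))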


Let $K$ denote a field, and let $M$ denote an $n \times n$-matrix over $K$. Then

$\det M = \det M^{\operatorname{tr}}.$
If $M$ is not invertible, then, due to Theorem 16.11 , the determinant is $0$ and the rank is smaller than $n$. This does also hold for the transposed matrix, so that its determinant is again $0$. So suppose that $M$ is invertible. We reduce the statement in this case to the corresponding statement for the elementary matrices, which can be verified directly, see Exercise 16.13 . Because of Lemma 12.18 , there exist elementary matrices $E_1, \ldots, E_s$ such that

$D := E_s \circ \cdots \circ E_1 \circ M$

is a diagonal matrix. Due to Exercise 4.20 , we have

$M = E_1^{-1} \circ \cdots \circ E_s^{-1} \circ D$

and

$M^{\operatorname{tr}} = D^{\operatorname{tr}} \circ (E_s^{-1})^{\operatorname{tr}} \circ \cdots \circ (E_1^{-1})^{\operatorname{tr}}.$

The diagonal matrix $D$ is not changed under transposing it. Since the determinants of the elementary matrices are also not changed under transposition, we get, using Theorem 17.4 ,

$\det M^{\operatorname{tr}} = \det D \cdot \det (E_s^{-1})^{\operatorname{tr}} \cdots \det (E_1^{-1})^{\operatorname{tr}} = \det D \cdot \det E_s^{-1} \cdots \det E_1^{-1} = \det M.$
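Numerically, this is again easy to test (continuing the NumPy sketch above):

    M = rng.standard_normal((5, 5))
    # det(M) = det(M^tr)
    assert np.isclose(np.linalg.det(M), np.linalg.det(M.T))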

Let $K$ be a field, and let $M = (a_{ij})_{ij}$ be an $n \times n$-matrix over $K$. For $i, j \in \{1, \ldots, n\}$, let $M_{ij}$ be the matrix which arises from $M$, by leaving out the $i$-th row and the $j$-th column. Then (for $n \geq 2$ and for every fixed $i$ and $j$)

$\det M = \sum_{i = 1}^n (-1)^{i+j} a_{ij} \det M_{ij} = \sum_{j = 1}^n (-1)^{i+j} a_{ij} \det M_{ij}.$

For $j = 1$, the first equation is the recursive definition of the determinant. From that statement, the case $i = 1$ follows, due to Fact ***** (the invariance of the determinant under transposition). By exchanging columns and rows, the statement follows in full generality, see Exercise 17.11 .
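The expansion yields a direct recursive computation of the determinant. A minimal Python sketch (the name laplace_det is ours; exact arithmetic via Fraction), expanding with respect to the first row, that is, $i = 1$:

    from fractions import Fraction

    def laplace_det(M):
        # det M = sum_j (-1)^(1+j) * a_{1j} * det M_{1j}
        n = len(M)
        if n == 1:
            return Fraction(M[0][0])
        total = Fraction(0)
        for j in range(n):
            # M_{1j}: delete the first row and the j-th column
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * Fraction(M[0][j]) * laplace_det(minor)
        return total

    print(laplace_det([[1, 2], [3, 4]]))  # -2

Here (-1) ** j equals $(-1)^{1+j}$ once the 0-based index j is shifted to the 1-based index $j + 1$.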



The determinant of a linear mapping

Let

$\varphi \colon V \longrightarrow V$

be a linear mapping from a $K$-vector space $V$ of dimension $n$ into itself. This is described by a matrix $M \in \operatorname{Mat}_n(K)$ with respect to a given basis. We would like to define the determinant of the linear mapping, by the determinant of the matrix. However, we have here the problem whether this is well-defined, since a linear mapping is described by quite different matrices, with respect to different bases. But, because of Corollary 11.12 , when we have two describing matrices $M$ and $N$, and the matrix $B$ describing the change of bases, we have the relation $N = B M B^{-1}$. The multiplication theorem for determinants yields then

$\det N = \det \left( B M B^{-1} \right) = \det B \cdot \det M \cdot (\det B)^{-1} = \det M,$

so that the following definition is in fact independent of the basis chosen.


Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let

$\varphi \colon V \longrightarrow V$

be a linear mapping, which is described by the matrix $M$ with respect to a basis. Then

$\det \varphi := \det M$

is called the determinant of the linear mapping $\varphi$.
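The basis independence can be illustrated numerically (our own NumPy sketch; $B$ plays the role of the change-of-basis matrix):

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 4))             # phi with respect to one basis
    B = rng.standard_normal((4, 4))             # change of bases
    assert not np.isclose(np.linalg.det(B), 0)  # B must be invertible
    N = B @ M @ np.linalg.inv(B)                # phi with respect to another basis
    assert np.isclose(np.linalg.det(N), np.linalg.det(M))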



Cramer's rule

For a square matrix $M = (a_{ij})_{ij}$, we call

$M^{\operatorname{adj}} := \left( (-1)^{i+j} \det M_{ji} \right)_{ij}$

the adjugate matrix of $M$, where $M_{ji}$ arises from $M$ by deleting the $j$-th row and the $i$-th column.

Note that in this definition, for the entries of the adjugate, the rows and the columns are swapped.


Let $K$ be a field, and let $M$ denote an $n \times n$-matrix over $K$. Then

$M \circ M^{\operatorname{adj}} = M^{\operatorname{adj}} \circ M = (\det M) E_n.$

If $M$ is

invertible, then

$M^{-1} = \frac{1}{\det M} \cdot M^{\operatorname{adj}}.$
Let $M = (a_{ij})_{ij}$. Let the coefficients of the adjugate matrix be denoted by

$b_{ik} := (-1)^{i+k} \det M_{ki}.$

The coefficients $c_{ij}$ of the product $M^{\operatorname{adj}} \circ M$ are

$c_{ij} = \sum_{k = 1}^n b_{ik} a_{kj} = \sum_{k = 1}^n (-1)^{i+k} ( \det M_{ki} ) \, a_{kj}.$

In case $i = j$, this is $\det M$, as this sum is the expansion of the determinant with respect to the $i$-th column. So let $i \neq j$, and let $M'$ denote the matrix that arises from $M$ by replacing in $M$ the $i$-th column by the $j$-th column. If we expand $\det M'$ with respect to the $i$-th column (the minors agree, since the $i$-th column is deleted in them), then we get

$0 = \det M' = \sum_{k = 1}^n (-1)^{i+k} ( \det M_{ki} ) \, a_{kj} = c_{ij},$

where $\det M' = 0$ because $M'$ contains the same column twice. Therefore, these coefficients are $0$, and the first equation holds.
The second equation is proved similarly, where we use now the expansion of the determinant with respect to the rows.
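A small Python sketch of the first equation (the name adjugate is ours; laplace_det is the expansion sketch from above):

    from fractions import Fraction

    def adjugate(M):
        # Entry (i, j) of the adjugate is (-1)^(i+j) * det M_{ji};
        # note the swap of rows and columns in the minor.
        n = len(M)
        def minor(A, r, c):  # delete the r-th row and the c-th column
            return [row[:c] + row[c + 1:] for k, row in enumerate(A) if k != r]
        return [[(-1) ** (i + j) * laplace_det(minor(M, j, i))
                 for j in range(n)] for i in range(n)]

    M = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
    A = adjugate(M)
    d = laplace_det(M)
    n = len(M)
    # Adj(M) o M should equal det(M) times the identity matrix
    P = [[sum(A[i][k] * M[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    assert P == [[d if i == j else Fraction(0) for j in range(n)]
                 for i in range(n)]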


The following statement is called Cramer's rule.


Let $K$ be a field, and let

$\begin{matrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n & = & c_1 \\ \vdots & & \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n & = & c_n \end{matrix}$

be an inhomogeneous linear system over $K$. Suppose that the describing matrix $A = (a_{ij})_{ij}$ is invertible. Then the unique solution for $x_j$ is given by

$x_j = \frac{ \det \begin{pmatrix} a_{11} & \cdots & a_{1,j-1} & c_1 & a_{1,j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,j-1} & c_n & a_{n,j+1} & \cdots & a_{nn} \end{pmatrix} }{ \det A }.$

For an invertible matrix $A$, the solution of the linear system $A x = c$ can be found by applying $A^{-1}$, that is, $x = A^{-1} c$. Using Theorem 17.9 , this means $x = \frac{1}{\det A} \cdot A^{\operatorname{adj}} c$. For the $j$-th component, this means

$x_j = \frac{1}{\det A} \cdot \left( \sum_{k = 1}^n (-1)^{j+k} ( \det A_{kj} ) \, c_k \right).$

The right-hand factor is the expansion of the determinant of the matrix shown in the numerator with respect to the $j$-th column.
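Cramer's rule translates directly into code. A minimal Python sketch (the name cramer is ours; laplace_det as above):

    def cramer(A, c):
        # Solve A x = c for invertible A:
        # x_j = det(A with the j-th column replaced by c) / det(A)
        n = len(A)
        d = laplace_det(A)
        assert d != 0, "the describing matrix must be invertible"
        sol = []
        for j in range(n):
            Aj = [row[:j] + [c[i]] + row[j + 1:] for i, row in enumerate(A)]
            sol.append(laplace_det(Aj) / d)
        return sol

Note that this computes $n + 1$ determinants, so for larger systems elimination is cheaper; the rule is mainly of theoretical interest.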



We solve the linear system

$x + 2y = 5$

$3x + 4y = 6$

using Cramer's rule. This yields

$x = \frac{ \det \begin{pmatrix} 5 & 2 \\ 6 & 4 \end{pmatrix} }{ \det \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} } = \frac{8}{-2} = -4$

and

$y = \frac{ \det \begin{pmatrix} 1 & 5 \\ 3 & 6 \end{pmatrix} }{ \det \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} } = \frac{-9}{-2} = \frac{9}{2}.$
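With the sketches from above, this example can be checked by

    print(cramer([[1, 2], [3, 4]], [5, 6]))  # [Fraction(-4, 1), Fraction(9, 2)]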

<< | Linear algebra (Osnabrück 2024-2025)/Part I | >>
PDF-version of this lecture
Exercise sheet for this lecture (PDF)