We want to show that the recursively defined determinant is a multilinear and alternating mapping. To make sense of this, we identify
$$\operatorname{Mat}_{n}(K)\cong (K^{n})^{n},$$
that is, we identify a matrix with the $n$-tuple of its rows. Thus, in the following, we consider a matrix as a column tuple
$$\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix},$$
where the entries $v_{i}$ are row vectors of length $n$.
Let $K$ be a field, and $n\in \mathbb{N}_{+}$. Then the determinant
$$\operatorname{Mat}_{n}(K)=(K^{n})^{n}\longrightarrow K,\quad M\longmapsto \det M,$$
is multilinear. This means that for every $k\in \{1,\ldots ,n\}$, for every choice of $n-1$ vectors $v_{1},\ldots ,v_{k-1},v_{k+1},\ldots ,v_{n}\in K^{n}$, and for any $u,w\in K^{n}$, the identity
$$\det \begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u+w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}=\det \begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}+\det \begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}$$
holds, and for $s\in K$, the identity
$$\det \begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\su\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}=s\det \begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}$$
holds.
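For instance, for $n=2$ and $k=1$, additivity in the first row reads, with $u=(1,2)$, $w=(3,4)$ and $v_{2}=(5,6)$,
$$\det \begin{pmatrix}4&6\\5&6\end{pmatrix}=-6=(-4)+(-2)=\det \begin{pmatrix}1&2\\5&6\end{pmatrix}+\det \begin{pmatrix}3&4\\5&6\end{pmatrix}.$$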
Let
$$M:=\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix},\qquad M':=\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}\qquad\text{and}\qquad \tilde{M}:=\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u+w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix},$$
where we denote the entries, and the matrices arising by deleting a row (and the first column), in an analogous way, that is, by $a_{ij},a'_{ij},\tilde{a}_{ij}$ and $M_{i},M'_{i},\tilde{M}_{i}$, respectively. In particular, $u=\left(a_{k1},\ldots ,a_{kn}\right)$ and $w=\left(a_{k1}',\ldots ,a_{kn}'\right)$. We prove the statement by induction over $n$; the case $n=1$ is clear, since then the determinant is just the single entry. For $i\neq k$, we have $\tilde{a}_{i1}=a_{i1}=a'_{i1}$ and
$$\det \tilde{M}_{i}=\det M_{i}+\det M'_{i}$$
due to the induction hypothesis. For $i=k$, we have $M_{k}=M_{k}'=\tilde{M}_{k}$ and $\tilde{a}_{k1}=a_{k1}+a'_{k1}$.
Altogether, we get
$$\begin{aligned}\det \tilde{M}&=\sum _{i=1}^{n}(-1)^{i+1}\tilde{a}_{i1}\det \tilde{M}_{i}\\&=\sum _{i=1,\,i\neq k}^{n}(-1)^{i+1}a_{i1}\left(\det M_{i}+\det M'_{i}\right)+(-1)^{k+1}\left(a_{k1}+a'_{k1}\right)\det \tilde{M}_{k}\\&=\sum _{i=1,\,i\neq k}^{n}(-1)^{i+1}a_{i1}\det M_{i}+\sum _{i=1,\,i\neq k}^{n}(-1)^{i+1}a_{i1}\det M'_{i}+(-1)^{k+1}a_{k1}\det M_{k}+(-1)^{k+1}a'_{k1}\det M_{k}\\&=\sum _{i=1}^{n}(-1)^{i+1}a_{i1}\det M_{i}+\sum _{i=1,\,i\neq k}^{n}(-1)^{i+1}a_{i1}\det M'_{i}+(-1)^{k+1}a'_{k1}\det M_{k}\\&=\sum _{i=1}^{n}(-1)^{i+1}a_{i1}\det M_{i}+\sum _{i=1}^{n}(-1)^{i+1}a'_{i1}\det M'_{i}\\&=\det M+\det M'.\end{aligned}$$
Here, we used that $M_{k}=M'_{k}=\tilde{M}_{k}$ and that $a_{i1}=a'_{i1}$ for $i\neq k$. The compatibility with scalar multiplication is proved in a similar way, see the exercise.
$\Box$
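The following small Python sketch (our own illustration; the names det, random_vector and with_row are not part of the text) implements the recursive determinant via expansion along the first column and checks both identities of the fact above with exact rational arithmetic:

```python
from fractions import Fraction
import random

def det(M):
    """Recursive determinant: Laplace expansion along the first column,
    det M = sum_i (-1)^(i+1) * a_i1 * det M_i (here with 0-based indices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([row[1:] for j, row in enumerate(M) if j != i])
               for i in range(len(M)))

def random_vector(n):
    return [Fraction(random.randint(-5, 5)) for _ in range(n)]

n, k = 4, 2
fixed = [random_vector(n) for _ in range(n)]   # rows v_1, ..., v_n; row k gets replaced
u, w, s = random_vector(n), random_vector(n), Fraction(3)

def with_row(x):
    # the matrix whose k-th row is x and whose other rows are the fixed ones
    return fixed[:k] + [x] + fixed[k + 1:]

# additivity in row k
assert det(with_row([a + b for a, b in zip(u, w)])) == det(with_row(u)) + det(with_row(w))
# compatibility with scalar multiplication in row k
assert det(with_row([s * a for a in u])) == s * det(with_row(u))
```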
Let $K$ be a field, and $n\in \mathbb{N}_{+}$. Then the determinant
$$\operatorname{Mat}_{n}(K)=(K^{n})^{n}\longrightarrow K,\quad M\longmapsto \det M,$$
is alternating, that is, $\det M=0$ whenever two rows of $M$ coincide.
We prove the statement by induction over $n$; for $n=1$, there is nothing to show. So suppose that $n\geq 2$, and set
$$M=\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}=\left(a_{ij}\right)_{ij}.$$
Let $v_{r}$ and $v_{s}$ with $r<s$ be the coinciding rows. By definition, we have
$$\det M=\sum _{i=1}^{n}(-1)^{i+1}a_{i1}\det M_{i}.$$
Due to the induction hypothesis, we have $\det M_{i}=0$ for $i\neq r,s$, because two rows coincide in these cases. Therefore,
$$\det M=(-1)^{r+1}a_{r1}\det M_{r}+(-1)^{s+1}a_{s1}\det M_{s},$$
where $a_{r1}=a_{s1}$.
The matrices $M_{r}$ and $M_{s}$ consist of the same rows; however, the row $z=v_{r}=v_{s}$ is the $(s-1)$-th row of $M_{r}$ and the $r$-th row of $M_{s}$. All other rows occur in both matrices in the same order. By altogether $s-r-1$ swaps of adjacent rows, we can transform $M_{r}$ into $M_{s}$. Due to the induction hypothesis and the fact that an alternating multilinear mapping changes its sign when two of its arguments are swapped, their determinants are related by the factor $(-1)^{s-r-1}$, thus $\det M_{s}=(-1)^{s-r-1}\det M_{r}$.
Using this, we obtain
$$\begin{aligned}\det M&=(-1)^{r+1}a_{r1}\det M_{r}+(-1)^{s+1}a_{s1}\det M_{s}\\&=a_{r1}\left((-1)^{r+1}\det M_{r}+(-1)^{s+1}(-1)^{s-r-1}\det M_{r}\right)\\&=a_{r1}\left((-1)^{r+1}+(-1)^{2s-r}\right)\det M_{r}\\&=a_{r1}\left((-1)^{r+1}+(-1)^{r}\right)\det M_{r}\\&=0.\end{aligned}$$
$\Box$
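Analogously, the alternating behaviour can be checked numerically. The following sketch (again our own illustration, repeating the recursive first-column expansion so that it runs on its own) verifies that the determinant vanishes when two rows coincide and that swapping two rows flips the sign:

```python
from fractions import Fraction
import random

def det(M):
    # Recursive determinant: Laplace expansion along the first column.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([row[1:] for j, row in enumerate(M) if j != i])
               for i in range(len(M)))

n, r, s = 4, 1, 3
rows = [[Fraction(random.randint(-5, 5)) for _ in range(n)] for _ in range(n)]

two_equal = [row[:] for row in rows]
two_equal[s] = two_equal[r][:]                 # rows r and s coincide
assert det(two_equal) == 0                     # alternating: determinant vanishes

swapped = [row[:] for row in rows]
swapped[r], swapped[s] = swapped[s], swapped[r]
assert det(swapped) == -det(rows)              # swapping two rows flips the sign
```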
The properties of the determinant of being multilinear and alternating simplify its computation. In particular, it is clear how the determinant behaves under elementary row operations. If a row is multiplied by a number $s$, then the determinant is multiplied by $s$ as well. If two rows are swapped, then the sign of the determinant changes. If a row (or a multiple of a row) is added to another row, then the determinant does not change.
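As an illustration (our own sketch; the concrete matrix A and the helper det are not part of the text), the following Python snippet checks these three rules for a specific $3\times 3$ matrix, again using the recursive first-column expansion:

```python
from fractions import Fraction

def det(M):
    # Recursive determinant: Laplace expansion along the first column.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([row[1:] for j, row in enumerate(M) if j != i])
               for i in range(len(M)))

A = [[Fraction(x) for x in row] for row in ([2, 1, 3], [4, 1, 7], [0, 5, 2])]

scaled = [row[:] for row in A]
scaled[1] = [Fraction(5) * x for x in scaled[1]]   # multiply a row by 5
assert det(scaled) == 5 * det(A)                   # determinant is multiplied by 5

swapped = [row[:] for row in A]
swapped[0], swapped[2] = swapped[2], swapped[0]    # swap two rows
assert det(swapped) == -det(A)                     # sign changes

sheared = [row[:] for row in A]
sheared[1] = [b - 2 * a for a, b in zip(sheared[0], sheared[1])]  # add (-2) * row 0 to row 1
assert det(sheared) == det(A)                      # determinant unchanged
```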