The cross product
A special feature of $\mathbb{R}^3$ is the so-called cross product. This assigns, to two given vectors, a vector that is orthogonal to both of them.
Let $K$ be a field. The operation on $K^3$ defined by
$$x \times y = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \times \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} := \begin{pmatrix} x_2 y_3 - x_3 y_2 \\ -x_1 y_3 + x_3 y_1 \\ x_1 y_2 - x_2 y_1 \end{pmatrix}$$
is called the cross product.
The cross product is also called the vector product. To remember this formula, one may think of
$$x \times y = \det \begin{pmatrix} e_1 & x_1 & y_1 \\ e_2 & x_2 & y_2 \\ e_3 & x_3 & y_3 \end{pmatrix},$$
where $e_1, e_2, e_3$ are the standard vectors, and where we expand formally with respect to the first column. In this way, the cross product is defined with respect to the standard basis.
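The componentwise definition translates directly into code. Here is a minimal sketch (assuming NumPy is available; the function name cross_product is ours) that implements the formula above and compares it with numpy.cross:

```python
import numpy as np

def cross_product(x, y):
    # Componentwise definition of the cross product on K^3.
    return np.array([
        x[1] * y[2] - x[2] * y[1],
        -x[0] * y[2] + x[2] * y[0],
        x[0] * y[1] - x[1] * y[0],
    ])

x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.0, 4.0])
assert np.allclose(cross_product(x, y), np.cross(x, y))
```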
The cross product on $K^3$ fulfills the following properties (where $x, y, z \in K^3$ and $a, b \in K$).
1. We have $x \times y = -(y \times x)$.
2. We have $(ax + by) \times z = a(x \times z) + b(y \times z)$ and $z \times (ax + by) = a(z \times x) + b(z \times y)$.
3. We have $x \times y = 0$ if and only if $x$ and $y$ are linearly dependent.
4. We have $x \times (y \times z) + y \times (z \times x) + z \times (x \times y) = 0$.
5. We have $\left\langle x \times y, z \right\rangle = \det(x, y, z)$, where $\left\langle -, - \right\rangle$ denotes the formal evaluation[1] in the sense of the standard inner product.
6. We have $\left\langle x, x \times y \right\rangle = 0 = \left\langle y, x \times y \right\rangle$, where $\left\langle -, - \right\rangle$ denotes the formal evaluation in the sense of the standard inner product.
(1) is clear from the definition.
(2). We have
$$\begin{aligned}
\left( a\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + b\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} \right) \times \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
&= \begin{pmatrix} ax_1 + by_1 \\ ax_2 + by_2 \\ ax_3 + by_3 \end{pmatrix} \times \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \\
&= \begin{pmatrix} (ax_2 + by_2)z_3 - (ax_3 + by_3)z_2 \\ -(ax_1 + by_1)z_3 + (ax_3 + by_3)z_1 \\ (ax_1 + by_1)z_2 - (ax_2 + by_2)z_1 \end{pmatrix} \\
&= \begin{pmatrix} ax_2 z_3 - ax_3 z_2 \\ -ax_1 z_3 + ax_3 z_1 \\ ax_1 z_2 - ax_2 z_1 \end{pmatrix} + \begin{pmatrix} by_2 z_3 - by_3 z_2 \\ -by_1 z_3 + by_3 z_1 \\ by_1 z_2 - by_2 z_1 \end{pmatrix} \\
&= a\begin{pmatrix} x_2 z_3 - x_3 z_2 \\ -x_1 z_3 + x_3 z_1 \\ x_1 z_2 - x_2 z_1 \end{pmatrix} + b\begin{pmatrix} y_2 z_3 - y_3 z_2 \\ -y_1 z_3 + y_3 z_1 \\ y_1 z_2 - y_2 z_1 \end{pmatrix} \\
&= a(x \times z) + b(y \times z).
\end{aligned}$$
The second equation follows from this and from (1).
(3). If $x$ and $y$ are linearly dependent, then we can write $x = cy$ (or the other way round). In this case,
$$\begin{pmatrix} cy_1 \\ cy_2 \\ cy_3 \end{pmatrix} \times \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} cy_2 y_3 - cy_3 y_2 \\ -cy_1 y_3 + cy_3 y_1 \\ cy_1 y_2 - cy_2 y_1 \end{pmatrix} = 0.$$
If the cross product is $0$, then all entries of the vector
$$\begin{pmatrix} x_2 y_3 - x_3 y_2 \\ -x_1 y_3 + x_3 y_1 \\ x_1 y_2 - x_2 y_1 \end{pmatrix}$$
equal $0$. If $y = 0$, then $x$ and $y$ are linearly dependent anyway; so suppose, for example, that $y_1 \neq 0$ (the other cases are analogous).
If $x_1 = 0$, then the second and third entries yield $x_3 y_1 = 0$ and $x_2 y_1 = 0$; hence
$$x_2 = x_3 = 0,$$
and $x$ is the zero vector. So suppose that $x_1 \neq 0$.
Then $y_2 = \frac{y_1}{x_1} x_2$ and $y_3 = \frac{y_1}{x_1} x_3$; therefore, we get
$$y = \frac{y_1}{x_1} x.$$
(4). See
exercise ***** .
(5). We have
$$\begin{aligned}
\left\langle x \times y, z \right\rangle &= \left\langle \begin{pmatrix} x_2 y_3 - x_3 y_2 \\ -x_1 y_3 + x_3 y_1 \\ x_1 y_2 - x_2 y_1 \end{pmatrix}, \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \right\rangle \\
&= z_1 x_2 y_3 - z_1 x_3 y_2 - z_2 x_1 y_3 + z_2 x_3 y_1 + z_3 x_1 y_2 - z_3 x_2 y_1.
\end{aligned}$$
This coincides with the determinant $\det(x, y, z)$, by the rule of Sarrus.
(6) follows from (5).
◻
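As a numeric sanity check (a sketch assuming NumPy and real entries; not a substitute for the proof), properties (1), (4), and (6) can be verified on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))

# (1) Antisymmetry: x × y = -(y × x).
assert np.allclose(np.cross(x, y), -np.cross(y, x))

# (4) Jacobi identity: x × (y × z) + y × (z × x) + z × (x × y) = 0.
jacobi = (np.cross(x, np.cross(y, z))
          + np.cross(y, np.cross(z, x))
          + np.cross(z, np.cross(x, y)))
assert np.allclose(jacobi, 0.0)

# (6) x × y is orthogonal to both factors.
assert np.isclose(np.dot(x, np.cross(x, y)), 0.0)
assert np.isclose(np.dot(y, np.cross(x, y)), 0.0)
```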
The expression $\left\langle x \times y, z \right\rangle$ from (5), that is, the determinant of the three vectors, considered as column vectors, is also called the triple product.
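In coordinates, the triple product is simply a $3 \times 3$ determinant; a short sketch (again assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 3))

# Triple product <x × y, z> equals the determinant of the matrix
# with columns x, y, z.
triple = np.dot(np.cross(x, y), z)
assert np.isclose(triple, np.linalg.det(np.column_stack([x, y, z])))
```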
Let $u_1, u_2, u_3$ be an orthonormal basis of $\mathbb{R}^3$ with[2]
$$\det(u_1, u_2, u_3) = 1.$$
Then the cross product $x \times y$ can be computed with the coordinates of $x$ and $y$ with respect to this basis (and the formula from the definition).
Let
$$x = c_1 u_1 + c_2 u_2 + c_3 u_3$$
and
$$y = d_1 u_1 + d_2 u_2 + d_3 u_3.$$
Due to Fact (2) ***** , we have
$$x \times y = (c_1 u_1 + c_2 u_2 + c_3 u_3) \times (d_1 u_1 + d_2 u_2 + d_3 u_3) = \sum_{1 \leq i, j \leq 3} c_i d_j \, (u_i \times u_j).$$
Due to Fact (3) ***** , we have $u_i \times u_i = 0$, and, because of Fact (1) ***** , we have $u_i \times u_j = -u_j \times u_i$.
According to Fact (6) ***** , $u_1 \times u_2$ is perpendicular to $u_1$ and to $u_2$; therefore,
$$u_1 \times u_2 = \lambda u_3$$
with some $\lambda \in \mathbb{R}$, as this orthogonality condition defines a line. Because of Fact (5) ***** and the condition $\det(u_1, u_2, u_3) = 1$, we get
$$\lambda = \left\langle \lambda u_3, u_3 \right\rangle = \left\langle u_1 \times u_2, u_3 \right\rangle = \det(u_1, u_2, u_3) = 1;$$
hence,
$$u_1 \times u_2 = u_3.$$
Using Lemma 17.2 (3), we obtain $u_1 \times u_3 = -u_2$ and $u_2 \times u_3 = u_1$.
Altogether, we get
$$\begin{aligned}
x \times y &= \sum_{1 \leq i, j \leq 3} c_i d_j \, (u_i \times u_j) \\
&= \sum_{i < j} (c_i d_j - c_j d_i)(u_i \times u_j) \\
&= (c_1 d_2 - c_2 d_1) u_3 - (c_1 d_3 - c_3 d_1) u_2 + (c_2 d_3 - c_3 d_2) u_1,
\end{aligned}$$
and this is the claim.
◻
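This basis invariance can be tested numerically: build a positively oriented orthonormal basis, apply the coordinate formula, and map back (a sketch assuming NumPy; the QR factorization serves as a stand-in for Gram-Schmidt):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random orthonormal basis u_1, u_2, u_3 (the columns of U) via QR;
# flip one column if necessary so that det(U) = 1 (positive orientation).
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(U) < 0:
    U[:, 0] = -U[:, 0]

x, y = rng.standard_normal((2, 3))

# Coordinates with respect to the basis: c = U^T x, d = U^T y.
c, d = U.T @ x, U.T @ y

# Apply the cross product formula to the coordinates, then map back via U.
result = U @ np.cross(c, d)
assert np.allclose(result, np.cross(x, y))
```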
Isometries
Let $V, W$ be vector spaces over $\mathbb{K}$, endowed with inner products, and let $\varphi \colon V \longrightarrow W$ be a linear mapping. Then $\varphi$ is called an isometry if
$$\left\langle \varphi(v), \varphi(w) \right\rangle = \left\langle v, w \right\rangle$$
holds for all $v, w \in V$.
An isometry is always injective. For $\mathbb{K} = \mathbb{C}$, we also talk about a unitary mapping. As there are also affine isometries, we talk about a linear isometry.
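A small illustration of the defining condition (a sketch assuming NumPy): a plane rotation preserves the standard inner product:

```python
import numpy as np

theta = 0.7
# Rotation of R^2: an isometry with respect to the standard inner product.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(3)
v, w = rng.standard_normal((2, 2))

# <φ(v), φ(w)> = <v, w> for the rotation φ.
assert np.isclose(np.dot(R @ v, R @ w), np.dot(v, w))
```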
Let $V$ and $W$ be vector spaces over $\mathbb{K}$, both endowed with an inner product, and let $\varphi \colon V \rightarrow W$ be a linear mapping. Then the following statements are equivalent.
1. $\varphi$ is an isometry.
2. For all $u, v \in V$, we have $d(\varphi(u), \varphi(v)) = d(u, v)$.
3. For all $v \in V$, we have $\Vert \varphi(v) \Vert = \Vert v \Vert$.
4. For all $v \in V$ fulfilling $\Vert v \Vert = 1$, we have $\Vert \varphi(v) \Vert = 1$.
◻
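The step from (3) back to (1) can be seen via the polarization identity, which recovers the inner product from norms; a numeric sketch (assuming NumPy, real scalars):

```python
import numpy as np

rng = np.random.default_rng(4)
u, v = rng.standard_normal((2, 3))

# Polarization identity over R: <u, v> = (||u + v||^2 - ||u - v||^2) / 4.
# Hence any linear map preserving norms also preserves inner products.
lhs = np.dot(u, v)
rhs = (np.linalg.norm(u + v) ** 2 - np.linalg.norm(u - v) ** 2) / 4
assert np.isclose(lhs, rhs)
```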
Therefore, an isometry is just a (linear) mapping that preserves distances. The set of all vectors with norm $1$ in a Euclidean vector space is also called the sphere. Hence, an isometry is characterized by the property that it maps the sphere to the sphere.
Let $V$ and $W$ be Euclidean vector spaces, and let $\varphi \colon V \longrightarrow W$ denote a linear mapping. Then the following statements are equivalent.
1. $\varphi$ is an isometry.
2. For every orthonormal basis $u_i$, $i = 1, \ldots, n$, of $V$, the family $\varphi(u_i)$, $i = 1, \ldots, n$, is part of an orthonormal basis of $W$.
3. There exists an orthonormal basis $u_i$, $i = 1, \ldots, n$, of $V$ such that $\varphi(u_i)$, $i = 1, \ldots, n$, is part of an orthonormal basis of $W$.
Proof
◻
For every Euclidean vector space $V$, there exists a bijective isometry
$$\varphi \colon \mathbb{R}^n \longrightarrow V,$$
where $\mathbb{R}^n$ carries the standard inner product.
Let $u_1, \ldots, u_n$ be an orthonormal basis of $V$, and let
$$\varphi \colon \mathbb{R}^n \longrightarrow V$$
be the linear mapping given by $\varphi(e_i) = u_i$. Because of Fact (3) ***** , this is an isometry.
◻
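Concretely, if we model $V$ as $\mathbb{R}^n$ with the standard inner product, the mapping $\varphi$ is given by the matrix whose columns are the chosen orthonormal basis vectors; a sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)

# Model V as R^3 and obtain an orthonormal basis u_1, ..., u_n
# as the columns of a QR factor.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# φ(e_i) = u_i means φ is given by the matrix U itself;
# it preserves the standard inner product.
v, w = rng.standard_normal((2, 3))
assert np.isclose(np.dot(U @ v, U @ w), np.dot(v, w))
```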
Isometries on a Euclidean vector space
We now discuss isometries from a Euclidean vector space to itself. These are always bijective. With respect to any orthonormal basis of $V$, they are described in the following way.
A linear mapping $\varphi \colon V \longrightarrow V$ on a Euclidean vector space $V$ is an isometry if and only if its describing matrix $M$ with respect to an orthonormal basis fulfills $M^{\text{tr}} M = E_n$.
◻
The isometries on a Euclidean vector space form a group; in fact, a subgroup of the group of all bijective linear mappings. We briefly recall the corresponding definitions.
For a field $K$ and $n \in \mathbb{N}_+$, the set of all invertible $n \times n$-matrices with entries in $K$ is called the general linear group over $K$. It is denoted by $\operatorname{GL}_n(K)$.
For a field $K$ and $n \in \mathbb{N}_+$, the set of all invertible $n \times n$-matrices over $K$ with
$$\det M = 1$$
is called the special linear group over $K$. It is denoted by $\operatorname{SL}_n(K)$.
A matrix $M \in \operatorname{GL}_n(\mathbb{C})$ fulfilling
$$\overline{M}^{\text{tr}} M = E_n$$
is called a unitary matrix. The set of all unitary matrices is called the unitary group; it is denoted by
$$\operatorname{U}_n = \left\{ M \in \operatorname{GL}_n(\mathbb{C}) \mid \overline{M}^{\text{tr}} M = E_n \right\}.$$
Eigenvalues of an isometry
Let $\varphi \colon V \longrightarrow V$ be an isometry on a Euclidean vector space $V$. Then every eigenvalue of $\varphi$ equals $1$ or $-1$.
◻
In general, an isometry does not necessarily have an eigenvalue; however, if the dimension is odd, then there is an eigenvalue (see the following lecture).
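A numeric illustration (a sketch assuming NumPy): a plane rotation has no real eigenvalue, all its complex eigenvalues have absolute value $1$, and embedding it into odd dimension produces the real eigenvalue $1$:

```python
import numpy as np

theta = 0.9
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])

# The plane rotation has no real eigenvalue (theta not a multiple of pi)...
eigs2 = np.linalg.eigvals(R2)
assert np.all(np.abs(eigs2.imag) > 1e-12)

# ...but every eigenvalue of an isometry has absolute value 1.
assert np.allclose(np.abs(eigs2), 1.0)

# In odd dimension, a real eigenvalue exists: embed the rotation
# into R^3 as a rotation about an axis; the axis gives eigenvalue 1.
R3 = np.block([[R2, np.zeros((2, 1))],
               [np.zeros((1, 2)), np.eye(1)]])
eigs3 = np.linalg.eigvals(R3)
assert np.any(np.isclose(eigs3, 1.0))
```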
The determinant of a linear isometry
$$\varphi \colon V \longrightarrow V$$
on a Euclidean vector space $V$ is $1$ or $-1$.
Let $M$ be the matrix describing $\varphi$ with respect to an orthonormal basis. Due to Fact ***** , we have
$$M^{\text{tr}} M = \begin{pmatrix} 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix}.$$
Therefore, the statement follows from the multiplication theorem for the determinant and from Fact ***** .
◻
Proper isometries
An isometry on a Euclidean vector space is called proper if its determinant is $1$.
An isometry that is not proper, that is, whose determinant is $-1$, is also called an improper isometry.
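For instance (a sketch assuming NumPy), a plane rotation is proper, while a reflection is improper:

```python
import numpy as np

theta = 0.5
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])  # reflection in the x-axis

assert np.isclose(np.linalg.det(rotation), 1.0)     # proper isometry
assert np.isclose(np.linalg.det(reflection), -1.0)  # improper isometry
```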
Let $K$ be a field, and $n \in \mathbb{N}_+$. An orthogonal $n \times n$-matrix $M$ fulfilling
$$\det M = 1$$
is called a special orthogonal matrix. The set of all special orthogonal matrices is called the special orthogonal group; it is denoted by $\operatorname{SO}_n(K)$.
A unitary $n \times n$-matrix $M \in \operatorname{GL}_n(\mathbb{C})$ fulfilling
$$\det M = 1$$
is called a special unitary matrix. The set of all special unitary matrices is called the special unitary group; it is denoted by $\operatorname{SU}_n$.
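A standard example (a sketch assuming NumPy): matrices of the form $\begin{pmatrix} a & -\overline{b} \\ b & \overline{a} \end{pmatrix}$ with $|a|^2 + |b|^2 = 1$ lie in $\operatorname{SU}_2$:

```python
import numpy as np

a, b = (0.6 + 0.0j), (0.0 + 0.8j)   # |a|^2 + |b|^2 = 1
M = np.array([[a, -np.conj(b)],
              [b,  np.conj(a)]])

# Unitarity (conj(M)^tr M = E_2) together with det M = 1 puts M in SU_2.
assert np.allclose(M.conj().T @ M, np.eye(2))
assert np.isclose(np.linalg.det(M), 1.0)
```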
Footnotes
↑ This formulation is due to the fact that an inner product is defined only over $\mathbb{R}$ and $\mathbb{C}$. However, the formula that describes the standard inner product in the real case works over every field.
↑ We say that such an orthonormal basis represents the orientation.