Each scientific field needs an optimized language; without one, the field cannot develop. For example, calculus could not have developed as easily with Roman numerals:
$$XXVIII \times XXIV = \,?$$
Without Arabic numerals and the decimal system, even basic operations would not be practical.
Fluid mechanics deals with the changes of flow and fluid variables in three space dimensions and in time. To manage this complexity, an optimized language is needed in fluid mechanics as well. Unfortunately, there are multiple mathematical notations for the same vector operation:
$$\mathrm{div}\,\mathbf{a}=\nabla\cdot\vec{a} \quad\text{or}\quad \mathrm{grad}\,a=\nabla a, \quad\text{and for a vector:}\quad \mathbf{a},\ \vec{a},\ \bar{a},\ \bar{\vec{a}}\ \text{or}\ \tilde{\vec{a}}$$
Symbols are not unique and can be confusing; a unified language therefore accelerates the development of the field.
In this chapter we introduce the tensor, a unifying mathematical concept that is very important in computational fluid dynamics and other scientific fields. This chapter gives only a gentle introduction; readers are encouraged to read more from the references. Because the treatment is brief, tensors will not be developed in a separate chapter.
The concept of a tensor is easily understood if we imagine a function that takes a vector as input, operates on it, and returns another vector as output. For example, assume a vector defined in a Euclidean reference coordinate system, and suppose the coordinate system is then changed by a rotation or translation. To find our vector in the new coordinate system, we can imagine an operator that takes the components of the vector in the old coordinate system as input and returns the components in the new coordinate system as output. In other words: new position vector = f(old position vector). It turns out that if this operator is linear, it is a second-rank tensor.
A second-rank tensor looks like a $3 \times 3$ matrix. For example,
$$\begin{vmatrix}1&5&3\\2&9&4\\6&1&8\end{vmatrix}$$
is a $3 \times 3$ matrix.
If we now multiply this matrix by a $3 \times 1$ column vector, we get another $3 \times 1$ column vector as output, where the elements of these column vectors can be imagined as the components of the initial and the new vector.
So we can say that a second-rank tensor keeps the map, or information, of how a vector changes between the old and the new coordinate system.
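As a minimal numerical sketch of this idea (assuming NumPy and an arbitrarily chosen rotation about the z-axis), the $3 \times 3$ matrix below acts as the linear operator mapping the old components of a vector to the new ones:

```python
import numpy as np

# A hypothetical linear operator: rotation by 30 degrees about the z-axis.
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

u_old = np.array([1.0, 0.0, 0.0])   # components in the old coordinate system
u_new = R @ u_old                   # components in the new coordinate system

print(u_new)                                          # [0.866..., 0.5, 0.0]
print(np.linalg.norm(u_old), np.linalg.norm(u_new))   # the length is unchanged
```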
Now that we have a small introduction to tensors, we can present the definition. A tensor $\boldsymbol{A}$ is a linear transformation from a vector space $\mathcal{V}$ to $\mathcal{V}$.[1] Thus, we can write:
$$\boldsymbol{A}:\ \mathbf{u}\in\mathcal{V} \rightarrow \mathbf{v}\in\mathcal{V}$$
which means that the tensor $\boldsymbol{A}$ operates on the vector $\mathbf{u}$ to give the new vector $\mathbf{v}$.
This concept is extremely useful in computation, where all calculations have to be performed in matrix form. A very nice illustration of the use of tensors is shown here[2].
Very common examples of tensors are the deformation gradient tensor, the stress tensor, and Einstein's famous stress-energy tensor. Even the dot and cross products of two vectors are tensor operations.
Although the concept of a linear operation is very useful for understanding tensors at first, it is not the whole story. Consider a cylinder that is stretched along its length: due to Poisson contraction, it also contracts in the radial direction. So the deformation along different directions is related to the stress applied along different directions. This relationship is itself a tensor, written below in the compliance form: [strain tensor (written as a reduced column vector)] = [compliance matrix][stress tensor (written the same way)]; its inverse is the stiffness tensor $C$. For an isotropic material the relationship looks like this:
$$\begin{bmatrix}\varepsilon_{11}\\\varepsilon_{22}\\\varepsilon_{33}\\2\varepsilon_{23}\\2\varepsilon_{31}\\2\varepsilon_{12}\end{bmatrix}=\begin{bmatrix}\varepsilon_{11}\\\varepsilon_{22}\\\varepsilon_{33}\\\gamma_{23}\\\gamma_{31}\\\gamma_{12}\end{bmatrix}=\frac{1}{E}\begin{bmatrix}1&-\nu&-\nu&0&0&0\\-\nu&1&-\nu&0&0&0\\-\nu&-\nu&1&0&0&0\\0&0&0&2(1+\nu)&0&0\\0&0&0&0&2(1+\nu)&0\\0&0&0&0&0&2(1+\nu)\end{bmatrix}\begin{bmatrix}\sigma_{11}\\\sigma_{22}\\\sigma_{33}\\\sigma_{23}\\\sigma_{31}\\\sigma_{12}\end{bmatrix}$$
This matrix looks different for an anisotropic material[3]. In other words, this particular tensor expresses how the stress tensor influences the strain tensor, and this 'how' is a signature of the type of material, which is the point of interest.
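A small sketch of this relation in NumPy (the values of $E$ and $\nu$ are illustrative assumptions, roughly those of steel):

```python
import numpy as np

# Isotropic compliance matrix from the relation above.
E, nu = 200e9, 0.3   # illustrative values: E = 200 GPa, nu = 0.3
C = (1.0 / E) * np.array([
    [1.0, -nu, -nu, 0.0,        0.0,        0.0],
    [-nu, 1.0, -nu, 0.0,        0.0,        0.0],
    [-nu, -nu, 1.0, 0.0,        0.0,        0.0],
    [0.0, 0.0, 0.0, 2*(1+nu),   0.0,        0.0],
    [0.0, 0.0, 0.0, 0.0,        2*(1+nu),   0.0],
    [0.0, 0.0, 0.0, 0.0,        0.0,        2*(1+nu)],
])

# Uniaxial stress of 100 MPa along direction 1 (column/Voigt form).
sigma = np.array([100e6, 0.0, 0.0, 0.0, 0.0, 0.0])
eps = C @ sigma
print(eps)  # eps_11 = 5e-4, eps_22 = eps_33 = -1.5e-4 (Poisson contraction)
```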
Tensors can be envisioned in an even simpler way. Consider a vector with components $a_{x1}, a_{y1}, a_{z1}$. These components are essential to express the vector precisely in a given coordinate system, and together they give the magnitude of the vector:
$$\sqrt{a_{x1}^2 + a_{y1}^2 + a_{z1}^2}$$
If the coordinate system changes, the vector is represented by three components with different values, $a_{x2}, a_{y2}, a_{z2}$, but
$$\sqrt{a_{x2}^2 + a_{y2}^2 + a_{z2}^2}$$
gives the same value as before, which means that something remains as the identity of the vector: the components change while keeping something constant. Now consider a tensor. Imagine an entity with three vectors as components (meaning it actually has nine, or $3 \times 3$, components) that are somehow coupled. When the coordinate system changes, the individual components of the entity transform while keeping 'some other value' constant, just as $a_x, a_y, a_z$ did in the case of a vector. Of course, this 'other value' has to be calculated using all nine components of the entity, and after the transformation the nine new components give the same 'other value'. It will in fact be shown later that a vector itself is a first-order tensor.
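A quick numerical illustration of this invariance (a sketch assuming NumPy; the vector and the rotation are randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)          # components a_x1, a_y1, a_z1

# An arbitrary rotation: an orthonormal basis from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
a_new = Q @ a                       # components a_x2, a_y2, a_z2 in the new frame

print(np.sqrt(np.sum(a**2)))        # magnitude from the old components
print(np.sqrt(np.sum(a_new**2)))    # the same value: the invariant identity of the vector
```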
We may associate some properties of a fluid or flow with a point, without any preference of direction:
$$P=\underbrace{1.01\times 10^{5}}_{\text{magnitude}}\ \underbrace{\left[\frac{N}{m^{2}}\right]}_{\text{unit}},\quad T=293\,[K] \quad\text{and}\quad \rho=1.01\times 10^{3}\left[\frac{kg}{m^{3}}\right]$$
These are some examples of such fluid variables.
Summation can only be done between variables having the same unit, i.e.
$$a\,[Pa] + b\,[Pa] = (a+b)\,[Pa]$$
$$a\,[Pa] + b\,[K] = \,?$$
Multiplication is valid for variables having different units.
$$a\,[Pa]\ b\,[K] = a\,b\,[Pa\,K]$$
Products of variables having different units have a new physical meaning:
$$m = \rho\,V$$
$$[kg] = \left[\frac{kg}{m^{3}}\right][m^{3}]$$
Power loss (Energy loss per unit time):
[Figure: Pressure drop measurement in a pipe]
$$\Delta P\,\dot{V} = \left|\Delta P\right|\left|\dot{V}\right|\left[\frac{N}{m^{2}}\right]\left[\frac{m^{3}}{s}\right]$$
$$\Delta P\,\dot{V} = \left|\Delta P\right|\left|\dot{V}\right|\left[\frac{N\,m}{s}\right]$$
$$\Delta P\,\dot{V} = \left|\Delta P\right|\left|\dot{V}\right|\,[\mathrm{Watt}]$$
The watt is the unit of power.
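Unit bookkeeping of this kind can also be checked in software. A minimal sketch, assuming the third-party pint library and illustrative values for the pressure drop and flow rate:

```python
import pint  # third-party unit library, assumed available (pip install pint)

ureg = pint.UnitRegistry()

dP = 2.0e4 * ureg.pascal                    # pressure drop, illustrative value
Vdot = 0.05 * ureg.meter**3 / ureg.second   # volume flow rate, illustrative value

power_loss = (dP * Vdot).to(ureg.watt)      # [Pa][m^3/s] -> [N m / s] -> [W]
print(power_loss)                           # 1000.0 watt

# Summation with mismatched units fails, just like a[Pa] + b[K] = ? above:
# dP + 293 * ureg.kelvin   # raises pint.DimensionalityError
```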
[Figure: A vector]
Variables of flow that are associated with a point and a direction, like force and velocity, are vectors.
The direction of a vector should be specified with respect to a given frame of reference. This frame of reference is as arbitrary as the units. Let us use the Cartesian coordinate system, which has three mutually orthogonal axes.
$U_1, U_2, U_3$ are the components of this vector; they differ with the selected coordinate system, but the amplitude does not. Components allow us to reconstruct the vector in a particular frame of reference. One should distinguish a vector as an entity from its components.
[Figure: Components of a vector]
Summation of variables identified with indices can be written compactly with the summation symbol:
$$a_1 x_1 + a_2 x_2 + a_3 x_3 + \dots + a_n x_n = \sum_{i=1}^{n} a_i x_i$$
In short, we will write $a_i x_i$.
In other words, a repeated index implies summation over that index. Thus,
$$\vec{U} = U_i\,\vec{e}_i$$
$$\vec{U} = \left|U\right|\cos\alpha_i\,\vec{e}_i$$
Now we can introduce the component representation of a vector $\vec{U}$:
$$U_i = \left|U\right|\cos\alpha_i$$
Let's look at some examples:
$$a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3 \quad (i = 1,2,3)$$
whereas $a_i b_j$ indicates no summation over $i$ and $j$, and $a_{ij} b_k$ likewise indicates no summation. Other examples of summation are:
$$a_{ii} b_j = a_{11} b_j + a_{22} b_j + a_{33} b_j$$
$$a_{ij} b_j = a_{i1} b_1 + a_{i2} b_2 + a_{i3} b_3$$
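These summation rules map directly onto NumPy's einsum, where a repeated index letter means summation. A small sketch with assumed values for $a_{ij}$ and $b_j$:

```python
import numpy as np

a = np.arange(1.0, 10.0).reshape(3, 3)   # a_ij, illustrative values
b = np.array([1.0, 2.0, 3.0])            # b_j, illustrative values

# b_i b_i : repeated index i -> summation (a scalar)
print(np.einsum('i,i->', b, b))          # b_1 b_1 + b_2 b_2 + b_3 b_3 = 14.0

# a_ij b_j : summation over j, free index i -> a vector
print(np.einsum('ij,j->i', a, b))        # same result as a @ b
```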
In the following expression
$$a_{ij} b_j = a_{ik} b_k \quad (i,j,k = 1 \dots n)$$
the repeated index $j$ is a dummy index, i.e. it can be replaced by $k$.
The following equality
$$a_{ij} b_j = a_{kj} b_j$$
is valid only when $i = k$. Then $i$ is called a free index.
$$y_i = a_{ir} x_r \quad\text{for } i,r = 1,2,3$$
can be expanded as
$$y_1 = a_{1r} x_r = a_{11} x_1 + a_{12} x_2 + a_{13} x_3$$
$$y_2 = a_{2r} x_r = a_{21} x_1 + a_{22} x_2 + a_{23} x_3$$
$$y_3 = a_{3r} x_r = a_{31} x_1 + a_{32} x_2 + a_{33} x_3$$
Any expression involving a repeated index (sub- or superscript) shall automatically be considered summed over the index range $1, 2, 3, \dots, n$.
Remark:
- All indices have the same range, $n$, unless stated otherwise.
- No index may appear more than twice in any given expression.
Summed over $i$:
$$a_{ij} x_i y_j = a_{1j} x_1 y_j + a_{2j} x_2 y_j + a_{3j} x_3 y_j$$
Summed over $j$:
$$a_{ij} x_i y_j = (a_{11} x_1 y_1 + a_{12} x_1 y_2 + a_{13} x_1 y_3) + (a_{21} x_2 y_1 + a_{22} x_2 y_2 + a_{23} x_2 y_3) + (a_{31} x_3 y_1 + a_{32} x_3 y_2 + a_{33} x_3 y_3)$$
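A numerical check of this double sum (a sketch assuming NumPy and random values):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# a_ij x_i y_j : both i and j are repeated -> a scalar (the nine-term sum above)
q = np.einsum('ij,i,j->', a, x, y)
print(np.isclose(q, x @ a @ y))   # True: identical to the expanded double sum
```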
Let
$$Q = b_{ij}\,y_i x_j \quad\text{and}\quad y_i = a_{ij} x_j.$$
Then, naively substituting, $Q = b_{ij} a_{ij} x_j x_j$. However, this has an index repeated more than twice, which is not correct. To make a correct substitution:
1. Find the duplicated dummy index: $j$.
2. Change the dummy index: $y_i = a_{ir} x_r$.
3. Substitute: $Q = b_{ij} a_{ir} x_r x_j$.
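The renaming step can be verified numerically. In the sketch below (NumPy assumed, random values), both expressions for $Q$ agree:

```python
import numpy as np

rng = np.random.default_rng(2)
b = rng.standard_normal((3, 3))   # b_ij
a = rng.standard_normal((3, 3))   # a_ir
x = rng.standard_normal(3)        # x_r

y = np.einsum('ir,r->i', a, x)             # y_i = a_ir x_r
q1 = np.einsum('ij,i,j->', b, y, x)        # Q = b_ij y_i x_j
q2 = np.einsum('ij,ir,r,j->', b, a, x, x)  # Q = b_ij a_ir x_r x_j after renaming
print(np.isclose(q1, q2))                  # True
```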
The Kronecker delta is defined as:
$$\delta_{ij} = \begin{cases}1 & i = j\\0 & i \neq j\end{cases}$$
$$\delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 3$$
$$\delta_{ij} x_i x_j = 1\,x_1 x_1 + 0\,x_1 x_2 + 0\,x_1 x_3 + 0\,x_2 x_1 + 1\,x_2 x_2 + 0\,x_2 x_3 + 0\,x_3 x_1 + 0\,x_3 x_2 + 1\,x_3 x_3$$
$$\delta_{ij} x_i x_j = x_1 x_1 + x_2 x_2 + x_3 x_3$$
$$\delta_{ij} x_i x_j = x_i x_i$$
Thus, $\delta_{ij}$ replaces one of the dummy indices with the other one.
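Since the components of $\delta_{ij}$ form the identity matrix, these properties are easy to confirm numerically (a sketch, NumPy assumed):

```python
import numpy as np

delta = np.eye(3)                          # delta_ij as a 3x3 identity matrix
x = np.array([2.0, -1.0, 3.0])             # illustrative components

print(np.einsum('ii->', delta))            # delta_ii = 3.0
print(np.einsum('ij,i,j->', delta, x, x))  # delta_ij x_i x_j
print(np.einsum('i,i->', x, x))            # x_i x_i -> the same number
```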
Remember that,
$$\vec{U} = U_i\,\vec{e}_i$$
$$\vec{U} = \left|U\right|\cos\alpha_i\,\vec{e}_i$$
$$\alpha\,\vec{U} = \alpha\,U_i\,\vec{e}_i \quad\rightarrow\quad \text{change only in amplitude}$$
$$\vec{W} = \vec{U} + \vec{V} = U_i\,\vec{e}_i + V_i\,\vec{e}_i = (U_i + V_i)\,\vec{e}_i$$
$$W_i = U_i + V_i$$
$$\vec{U} + \vec{V} = \vec{V} + \vec{U}$$
$$\vec{U} + (\vec{V} + \vec{W}) = (\vec{U} + \vec{V}) + \vec{W}$$
Scalar product of two vectors
$$\vec{U} \cdot \vec{V} = U_i V_i$$
If $\vec{m}$ and $\vec{n}$ are unit vectors in the directions of $\vec{U}$ and $\vec{V}$:
$$\vec{m} = \frac{\vec{U}}{\left|U\right|} = \cos\alpha_i\,\vec{e}_i$$
$$\vec{n} = \frac{\vec{V}}{\left|V\right|} = \cos\beta_i\,\vec{e}_i$$
$$\vec{m} \cdot \vec{n} = \cos\alpha_i \cos\beta_i = \cos\theta$$
$$\text{for } \theta = 90^{\circ} \rightarrow \cos\theta = 0, \quad\text{i.e.}\quad \vec{m}\cdot\vec{n} = 0 \rightarrow \vec{U}\cdot\vec{V} = \left|U\right|\left|V\right|\,\vec{m}\cdot\vec{n} = 0 \rightarrow \vec{U} \perp \vec{V}$$
Vector product of two vectors
Consider vector products of unit vectors.
$$\vec{e}_2 \times \vec{e}_3 = -\vec{e}_3 \times \vec{e}_2 = \vec{e}_1$$
$$\vec{e}_3 \times \vec{e}_1 = -\vec{e}_1 \times \vec{e}_3 = \vec{e}_2$$
$$\vec{e}_1 \times \vec{e}_2 = -\vec{e}_2 \times \vec{e}_1 = \vec{e}_3$$
$$(a_1\vec{e}_1 + a_2\vec{e}_2 + a_3\vec{e}_3) \times (b_1\vec{e}_1 + b_2\vec{e}_2 + b_3\vec{e}_3) = (a_2 b_3 - a_3 b_2)\vec{e}_1 + (a_3 b_1 - a_1 b_3)\vec{e}_2 + (a_1 b_2 - a_2 b_1)\vec{e}_3$$
This result can be written compactly as the determinant:
$$\left|\begin{array}{ccc}\vec{e}_1 & \vec{e}_2 & \vec{e}_3\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{array}\right|$$
The permutation symbol is defined as:
$$\epsilon_{ijk} = \begin{cases}0 & \text{if any of } i,j,k \text{ are the same}\\ 1 & \text{if } i,j,k \text{ is an even permutation}\\ -1 & \text{if } i,j,k \text{ is an odd permutation}\end{cases}$$
Hence, the vector product can be written as:
$$\vec{a} \times \vec{b} = \epsilon_{ijk}\,a_i b_j\,\vec{e}_k = \epsilon_{ijk}\,a_j b_k\,\vec{e}_i$$
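A sketch (NumPy assumed, arbitrary vectors) that builds $\epsilon_{ijk}$ directly from this definition and checks the index form of the cross product against NumPy's built-in one:

```python
import numpy as np

# Build epsilon_ijk explicitly from its definition.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[i, k, j] = -1.0   # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 4.0])

# (a x b)_k = epsilon_ijk a_i b_j
print(np.einsum('ijk,i,j->k', eps, a, b))
print(np.cross(a, b))   # agrees with NumPy's built-in cross product
```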
$$\vec{a} \cdot (\vec{b} \times \vec{c}) = \vec{a} \cdot \left[\epsilon_{ijk}\,b_i c_j\,\vec{e}_k\right] = a_k\,\epsilon_{ijk}\,b_i c_j = \epsilon_{ijk}\,a_k b_i c_j = \epsilon_{ijk}\,a_i b_j c_k$$
Product of two tensors
$$a_i b_j = c_{ij} \quad\text{for } i,j = 1,2,3$$
where
$$c_{ij} = \left[\begin{array}{lll}c_{11} & c_{12} & c_{13}\\ c_{21} & c_{22} & c_{23}\\ c_{31} & c_{32} & c_{33}\end{array}\right]$$
Similarly,
$$c_{ij}\,D_{klm} = E_{ijklm} \quad (2 + 3 = 5^{th}\text{-order tensor})$$
Thus, generally, the product of a tensor of order $m$ and a tensor of order $n$ is a tensor of order $m + n$.
Therefore,
$$\delta_{ij}\,c_{kl} = D_{ijkl}$$
The order is increased by two.
Substitution
$$\delta_{ij}\,c_{jk} = D_{ijjk}$$
$$D_{ijjk} = D_{i11k} + D_{i22k} + D_{i33k} = E_{ik}$$
Consequently, if a tensor has a pair of repeated indices, its order is reduced by two. It can be shown that:
$$E_{ik} = c_{ik}, \quad\text{i.e.}$$
$$\delta_{ij}\,c_{jk} = c_{ik}$$
Index $j$ is replaced by index $i$. This operation is called substitution. Generally:
$$\delta_{ij}\,A_{kjrs} = A_{kirs}$$
Contraction
$$\delta_{ij}\,A_{ijrs} = B_{ijijrs} = D_{rs}$$
The total order is reduced by four.
Or equally:
$$\delta_{ij}\,A_{ijrs} = A_{iirs} = D_{rs}$$
The order of $A_{ijrs}$ is reduced by two. This operation is called contraction with respect to $i$ and $j$.
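Both substitution and contraction can be demonstrated with a randomly generated fourth-order tensor (a sketch, NumPy assumed; the tensor $A$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
delta = np.eye(3)
A = rng.standard_normal((3, 3, 3, 3))   # an arbitrary 4th-order tensor A_kjrs

# Substitution: delta_ij A_kjrs = A_kirs (index j replaced by i, order unchanged)
sub = np.einsum('ij,kjrs->kirs', delta, A)
print(np.allclose(sub, A))              # True

# Contraction: delta_ij A_ijrs = A_iirs (order reduced by two)
con = np.einsum('ij,ijrs->rs', delta, A)
print(np.allclose(con, np.einsum('iirs->rs', A)))  # True
```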
Scalar product of tensors
Remember:
$$\vec{a} \cdot \vec{b} = a_i b_i = \delta_{ij}\,a_i b_j = \delta_{ij}\,c_{ij} = c_{ii} = a_i b_i$$
Contraction of a tensor product of two $1^{st}$-order tensors results in the scalar product of two vectors.
Hence, the scalar product of two tensors is:
$$\delta_{ip}\,A_{ij}\,B_{pqr} = A_{ij}\,B_{iqr} = A_{pj}\,B_{pqr} = c_{jqr}$$
Remember:
$$\vec{A} \times \vec{B} = \vec{C}$$
$$\epsilon_{ijk}\,A_j B_k = D_{ijjkk} = C_i$$
Consider:
$$\vec{A} \cdot (\vec{B} \times \vec{C}) = A_i\,(\epsilon_{ijk}\,B_j C_k) = D_{iijjkk} \quad (\text{scalar})$$
In conclusion, using component notation and basic rules, the form of the resulting tensor can be deduced.
Important Relations
$$\delta_{ii} = 3$$
$$\delta_{ij}\,\delta_{jk} = \delta_{ik}$$
$$\delta_{ik}\,\epsilon_{ikm} = 0 \quad (\text{repeated index})$$
$$\epsilon_{ijk}\,\epsilon_{lmn} = \left|\begin{array}{lll}\delta_{il} & \delta_{im} & \delta_{in}\\ \delta_{jl} & \delta_{jm} & \delta_{jn}\\ \delta_{kl} & \delta_{km} & \delta_{kn}\end{array}\right|$$
$$\epsilon_{ijk}\,\epsilon_{lmn} = \delta_{il}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{kl} + \delta_{in}\delta_{jl}\delta_{km} - (\delta_{il}\delta_{jn}\delta_{km} + \delta_{im}\delta_{jl}\delta_{kn} + \delta_{in}\delta_{jm}\delta_{kl})$$
Contraction with respect to $k$ and $n$:
$$\delta_{kn}\,\epsilon_{ijk}\,\epsilon_{lmn} = \epsilon_{ijk}\,\epsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$$
Contracting twice,
$$\delta_{jm}\,\delta_{kn}\,\epsilon_{ijk}\,\epsilon_{lmn} = \delta_{jm}\left[\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}\right]$$
$$\delta_{jm}\,\delta_{kn}\,\epsilon_{ijk}\,\epsilon_{lmn} = \left[\delta_{il}\delta_{jj} - \delta_{ij}\delta_{jl}\right] = \left[3\delta_{il} - \delta_{il}\right] = 2\delta_{il}$$
$$\delta_{jm}\left[\delta_{kn}\,\epsilon_{ijk}\,\epsilon_{lmn}\right] = \delta_{jm}\,\epsilon_{ijk}\,\epsilon_{lmk} = \epsilon_{ijk}\,\epsilon_{ljk} = 2\delta_{il}$$
and
$$\epsilon_{ijk}\,\epsilon_{ijk} = 6$$
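These epsilon-delta relations can be verified by brute force over all index values (a sketch, NumPy assumed):

```python
import numpy as np

# epsilon_ijk built from its definition, as before.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
delta = np.eye(3)

# epsilon_ijk epsilon_lmk = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,lmk->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
print(np.allclose(lhs, rhs))               # True

print(np.einsum('ijk,ljk->il', eps, eps))  # 2 delta_il
print(np.einsum('ijk,ijk->', eps, eps))    # 6.0
```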
Triple vector product
The triple vector product $\vec{a} \times (\vec{b} \times \vec{c})$ lies in the plane spanned by $\vec{b}$ and $\vec{c}$; therefore, it can be written as a linear sum of these two vectors:
$$\vec{a} \times (\vec{b} \times \vec{c}) = \beta\,\vec{b} + \gamma\,\vec{c} = \vec{T}$$
Using component notation:
$$T_i = \epsilon_{ijk}\,a_j\,(\epsilon_{klm}\,b_l c_m) = \beta b_i + \gamma c_i$$
$$T_i = \epsilon_{ijk}\,\epsilon_{klm}\,a_j b_l c_m = \beta b_i + \gamma c_i$$
$$T_i = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\,a_j b_l c_m$$
$$T_i = \delta_{il}\delta_{jm}\,a_j b_l c_m - \delta_{im}\delta_{jl}\,a_j b_l c_m$$
$$T_i = a_m b_i c_m - a_j b_j c_i = b_i\,(a_m c_m) - c_i\,(a_j b_j) = (\vec{a}\cdot\vec{c})\,b_i - (\vec{a}\cdot\vec{b})\,c_i$$
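A numerical spot check of this identity with random vectors (a sketch, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

lhs = np.cross(a, np.cross(b, c))    # a x (b x c)
rhs = (a @ c) * b - (a @ b) * c      # (a . c) b - (a . b) c
print(np.allclose(lhs, rhs))         # True
```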
Let $A$ be a scalar. Its derivative with respect to $x_i$ is:
$$\frac{\partial A}{\partial x_i} = B_i$$
Let $A$ be a vector:
$$\frac{\partial A_i}{\partial x_j} = B_{ij}$$
Generally, the $m^{th}$ derivative of a tensor of $n^{th}$ order is a tensor of $(n+m)^{th}$ order.
As a special case:
$$\frac{\partial x_j}{\partial x_i} = 1 \quad (\text{if } i = j)$$
$$\frac{\partial x_j}{\partial x_i} = 0 \quad (\text{if } i \neq j)$$
$$\frac{\partial x_j}{\partial x_i} = \delta_{ij} \quad\rightarrow\quad \frac{\partial x_i}{\partial x_i} = \delta_{ii} = 3$$
Let $A(x_j)$:
$$\frac{\partial A(x_j)}{\partial x_i} = \frac{\partial A}{\partial x_j}\frac{\partial x_j}{\partial x_i} = \delta_{ij}\frac{\partial A}{\partial x_j} = \delta_{ij}\,C_j = C_i$$
Let $A(n)$ with $n(x_j)$:
$$\frac{\partial A}{\partial x_i} = \frac{\partial A}{\partial n}\frac{\partial n}{\partial x_j}\frac{\partial x_j}{\partial x_i} = \underbrace{\delta_{ij}\underbrace{\frac{\partial A}{\partial n}\underbrace{\frac{\partial n}{\partial x_j}}_{\text{vector}}}_{\text{vector}}}_{\text{vector}}$$