Back to Information theory/Permutations and combinations
Lifted from Multinomial Theorem of 2ⁿᵈ kind by w:User:Seeker220
Theorem: The number of non-negative integral solutions of the equation

<math>x_{1}+x_{2}+x_{3}+\cdots +x_{k}=n</math>

is <math>{n+k-1 \choose k-1}</math>.
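As a quick sanity check (not part of the proof), the theorem can be verified by brute force for small values. This Python sketch is illustrative only; the helper name `count_solutions` is ours:

```python
# Brute-force verification of the theorem for small n and k.
from itertools import product
from math import comb

def count_solutions(n, k):
    """Count non-negative integer solutions of x_1 + ... + x_k = n."""
    return sum(1 for xs in product(range(n + 1), repeat=k) if sum(xs) == n)

for n in range(6):
    for k in range(1, 5):
        # The theorem predicts C(n+k-1, k-1) solutions.
        assert count_solutions(n, k) == comb(n + k - 1, k - 1)
```

The enumeration visits <math>(n+1)^{k}</math> tuples, so it is only practical for small <math>n</math> and <math>k</math>.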
Proof: Let

<math>\text{Equation-1}:\quad x_{1}+x_{2}+x_{3}+\cdots +x_{k}=n.</math>
Let

<math>f(y_{1},y_{2},y_{3},\ldots ,y_{k})=(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}.</math>
By the multinomial theorem,

<math>f(y_{1},y_{2},y_{3},\ldots ,y_{k})=\sum _{x_{1}+x_{2}+x_{3}+\cdots +x_{k}=n}{\frac {n!}{x_{1}!\,x_{2}!\,x_{3}!\cdots x_{k}!}}\,y_{1}^{x_{1}}y_{2}^{x_{2}}y_{3}^{x_{3}}\cdots y_{k}^{x_{k}}=\sum _{\text{Equation-1}}{\frac {n!}{x_{1}!\,x_{2}!\,x_{3}!\cdots x_{k}!}}\,y_{1}^{x_{1}}y_{2}^{x_{2}}y_{3}^{x_{3}}\cdots y_{k}^{x_{k}},</math>

where <math>x_{i}\geq 0</math> for all <math>i\in \{1,2,\ldots ,k\}</math>. Thus each term of <math>f(y_{1},y_{2},y_{3},\ldots ,y_{k})</math> is of the form <math>{\frac {n!}{x_{1}!\,x_{2}!\,x_{3}!\cdots x_{k}!}}\,y_{1}^{x_{1}}y_{2}^{x_{2}}y_{3}^{x_{3}}\cdots y_{k}^{x_{k}}</math>.
For each unique non-negative solution of Equation-1, there exists a unique monomial term of degree <math>n</math> of <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math>.
For each unique monomial term of <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math>, there exists a unique non-negative solution of Equation-1.
No non-negative solution of Equation-1 yields two different monomial terms of <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math>.
No monomial term of <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math> yields two different non-negative solutions of Equation-1.
Thus, by bijection,

<math>\#\ \text{terms of }f(y_{1},\ldots ,y_{k})=\#\ \text{non-negative integral solutions of Equation-1}.</math>
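The bijection can also be checked computationally. The sketch below (our own helper, `monomials`, not part of the original text) expands the power by choosing one variable from each of the <math>n</math> factors and collecting exponent tuples, so every distinct monomial corresponds to one solution of Equation-1:

```python
# Count the distinct monomials of (y_1 + ... + y_k)^n directly: pick one
# variable from each of the n factors and tally the resulting exponent tuples.
from collections import Counter
from itertools import product
from math import comb, factorial, prod

def monomials(k, n):
    terms = Counter()
    for choice in product(range(k), repeat=n):  # which y_j each factor contributes
        expo = [0] * k
        for j in choice:
            expo[j] += 1
        terms[tuple(expo)] += 1
    return terms

k, n = 3, 4
terms = monomials(k, n)
# One monomial per non-negative solution of x_1 + ... + x_k = n:
assert len(terms) == comb(n + k - 1, k - 1)
# Each coefficient is the multinomial n!/(x_1! x_2! ... x_k!):
assert all(c == factorial(n) // prod(map(factorial, e)) for e, c in terms.items())
```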
Thus, to prove the theorem, it is sufficient to prove that the number of terms of <math>f(y_{1},\ldots ,y_{k})</math> is <math>{n+k-1 \choose k-1}</math>.
Now we proceed by mathematical induction on <math>k</math>.
Base Case: For <math>k=1</math>, <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math> reduces to <math>(y_{1})^{n}=y_{1}^{n}</math>, which has <math>1={n+1-1 \choose 1-1}</math> term.
Also, for <math>k=2</math>, <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{k})^{n}</math> is <math>(y_{1}+y_{2})^{n}=\sum _{i=0}^{n}{n \choose i}y_{1}^{i}y_{2}^{n-i}</math>, which has <math>\sum _{i=0}^{n}1</math> terms of the form <math>{n \choose i}y_{1}^{i}y_{2}^{n-i}</math>, and

<math>\sum _{i=0}^{n}1=n+1={n+2-1 \choose 2-1}.</math>
Induction Hypothesis: Suppose that for some <math>k=r</math>, <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{r})^{n}</math> has <math>{n+r-1 \choose r-1}</math> terms (for every exponent <math>n</math>).
Inductive Step: We must prove that for <math>k=r+1</math> too, <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{r+1})^{n}</math> has <math>{n+(r+1)-1 \choose (r+1)-1}={n+r \choose r}</math> terms.
<math>(y_{1}+y_{2}+y_{3}+\cdots +y_{r+1})^{n}=\left[(y_{1}+y_{2}+y_{3}+\cdots +y_{r})+y_{r+1}\right]^{n}=\sum _{i=0}^{n}{n \choose i}(y_{1}+y_{2}+y_{3}+\cdots +y_{r})^{i}\,y_{r+1}^{n-i}</math>
For each unique <math>i</math>, the term <math>{n \choose i}(y_{1}+y_{2}+y_{3}+\cdots +y_{r})^{i}\,y_{r+1}^{n-i}</math> has <math>{i+r-1 \choose r-1}</math> unique terms, by the Induction Hypothesis applied with exponent <math>i</math>.
Thus, since terms with different <math>i</math> carry different powers of <math>y_{r+1}</math> and cannot combine, the number of terms of <math>(y_{1}+y_{2}+y_{3}+\cdots +y_{r+1})^{n}=\sum _{i=0}^{n}{n \choose i}(y_{1}+y_{2}+y_{3}+\cdots +y_{r})^{i}\,y_{r+1}^{n-i}</math> is <math>\sum _{i=0}^{n}{i+r-1 \choose r-1}</math>.
Repeatedly applying Pascal's rule, <math>{m \choose j}+{m \choose j-1}={m+1 \choose j}</math>:

<math>{\begin{aligned}\sum _{i=0}^{n}{i+r-1 \choose r-1}&={r-1 \choose r-1}+{r \choose r-1}+{r+1 \choose r-1}+{r+2 \choose r-1}+\cdots +{n+r-2 \choose r-1}+{n+r-1 \choose r-1}\\&=\left[{r \choose r}+{r \choose r-1}\right]+{r+1 \choose r-1}+{r+2 \choose r-1}+\cdots +{n+r-2 \choose r-1}+{n+r-1 \choose r-1}\\&=\left[{r+1 \choose r}+{r+1 \choose r-1}\right]+{r+2 \choose r-1}+\cdots +{n+r-2 \choose r-1}+{n+r-1 \choose r-1}\\&=\cdots \\&=\left[{n+r-2 \choose r}+{n+r-2 \choose r-1}\right]+{n+r-1 \choose r-1}\\&={n+r-1 \choose r}+{n+r-1 \choose r-1}={n+r \choose r}.\end{aligned}}</math>

Thus, the theorem is proved. <math>\blacksquare</math>
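The telescoping sum in the last step (a form of the hockey-stick identity) is easy to confirm numerically; this check is illustrative, not part of the proof:

```python
# Numeric check of sum_{i=0}^{n} C(i+r-1, r-1) = C(n+r, r)
# for a grid of small n and r.
from math import comb

for n in range(10):
    for r in range(1, 8):
        lhs = sum(comb(i + r - 1, r - 1) for i in range(n + 1))
        assert lhs == comb(n + r, r)
```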
[Figure: Fascii, the 95 printable ASCII characters, including the invisible space.]
This problem is inspired by the ASCII code (see Fascii), as well as the fact that there are an estimated <math>10^{80}</math> protons in the visible universe.
To illustrate how much information even a modest number of characters can transmit, consider strings made from permutations of the 94 ASCII characters that are both printable and visible. One benefit of performing this calculation is that it highlights the superior accuracy of the second order approximation,
<math>\ln n!\sim n\ln n-n,</math>

over the first-order approximation favored by those who are only comfortable with base-10 logarithms:

<math>\log _{10}n!\sim n\log _{10}n.</math>
A calculation like this should be left as an exercise for students, who are encouraged to post their calculations on Wikiversity. What follows is this Wikiversarian's work resulting from performing the calculation using two different tools available to anybody with internet access: First, Google Search was used as an online calculator. Second, Google Documents was used to generate a spreadsheet. Both calculations give the same result.
<math>94!=1.087366\times 10^{146}</math>
<math>\log _{10}(94!)=146.036375812</math>
<math>94\times \log _{10}(94)=185.474018238\sim \log _{10}(94!)</math>
<math>\log _{10}(94)=1.9731278536\approx 2</math>
Since <math>\log _{10}(94)\approx 2</math>, we get <math>94\times \log _{10}(94)\approx 94\times 2=188\sim \log _{10}(94!)</math>.
Now use <math>\log _{10}X={\frac {\ln X}{\ln 10}}</math>:
<math>\ln(94!)\sim 94\times \ln(94)-94=94\times (4.54329478227-1)=333.069709533</math>
<math>\log _{10}(94!)={\frac {\ln(94!)}{\ln 10}},</math>
and since <math>\ln 10=2.30258509299</math>, we have <math>\log _{10}(94!)\approx 144.65033694</math> under the second-order approximation.
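The figures above can be reproduced with Python's standard library; `math.lgamma(n + 1)` returns <math>\ln n!</math>, so no big-integer arithmetic is needed. A minimal sketch (variable names are ours):

```python
# Reproduce the 94! logarithms; math.lgamma(n + 1) computes ln(n!) in floating point.
import math

n = 94
exact_log10 = math.lgamma(n + 1) / math.log(10)      # log10(94!), about 146.0364
first_order = n * math.log10(n)                      # n log10 n, about 185.4740
second_order = (n * math.log(n) - n) / math.log(10)  # (n ln n - n)/ln 10, about 144.6503
print(exact_log10, first_order, second_order)
```

Note how much closer the second-order value (about 144.65) lies to the exact 146.04 than the first-order 185.47 does.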
{| class="wikitable"
|-
! Formula !! log10(n!) !! ln(n!) !! log10(n!) !! !! n! estimate !! Normalize !! inverse
|-
| fact(94) || 146.0 || 336.3 || 146.0 || exact || 1.1E+146 || 1 || 1
|-
| 94 log10 94 || 185.5 || 427.1 || 185.5 || n log10(n) || 3.0E+185 || 1.2701 || 0.7874
|-
| 94ln94-94 || 144.7 || 333.1 || 144.7 || n ln n - n || 4.5E+144 || 0.9905 || 1.0096
|}
{| class="wikitable"
|+ Base-2 check
|-
| "log(94,2)" || 6.5546
|-
| "94*log*(94,2)" || 616.1314
|-
| 2^{above} || 2.98E+185
|-
| log10(above) || 185.4740
|}

{| class="wikitable"
|+ Compare with spreadsheet
|-
| fact(94)= || 146.0 || 1
|-
| 94 log10 94 || 185.5 || 1
|-
| ln(94!) ~ || 144.7 || 1
|}