# Convergence acceleration by the Ford-Sidi W(m) algorithm

## Computation of infinite integrals involving non-oscillating or oscillating functions (trigonometric or Bessel) multiplied by a Bessel function of arbitrary order

Although we deal with infinite integrals, we use the ${\displaystyle {d}^{(m)}}$ transformation for infinite series [2], because these integrals are evaluated by accelerating the convergence of a sequence of partial sums. Using the ${\displaystyle {D}^{(m)}}$ transformation for infinite integrals [2] also produces excellent results. The integrals are of the form ${\displaystyle \int _{0}^{\infty }c(x)w(x)dx}$, where c(x) is a non-oscillating or an oscillating (trigonometric or Bessel) function and w(x) is a Bessel function of arbitrary order.

As in the Sidi modified W algorithm (mW algorithm, see [7]), the only asymptotic information required is the eventual distance between consecutive zeros of the Bessel function. The partial sums are computed by integrating between consecutive zeros of the Bessel function (see [6]) with a low-order Gaussian quadrature (see [7]), although other numerical quadrature methods can be employed.
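The construction of such partial sums can be sketched in a few lines. The following is a minimal illustration, not the article's code: it uses a trigonometric model problem, ${\displaystyle \int _{0}^{\infty }{\frac {\sin x}{x}}dx={\frac {\pi }{2}}}$, whose integrand has exactly known zeros at x = lπ with the same eventual spacing π as a Bessel kernel, and a hard-coded 5-point Gauss-Legendre rule. The names `gauss5` and `partial_sums` are illustrative.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def gauss5(f, a, b):
    """Integrate f over [a, b] with a 5-point Gauss-Legendre rule."""
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * t) for w, t in zip(GL_WEIGHTS, GL_NODES))

def partial_sums(f, breakpoints, n):
    """S_0..S_n, where S_l integrates f from breakpoints[0] to
    breakpoints[l+1], one low-order quadrature per subinterval."""
    out, s = [], 0.0
    for l in range(n + 1):
        s += gauss5(f, breakpoints[l], breakpoints[l + 1])
        out.append(s)
    return out

# Model problem: int_0^inf sin(x)/x dx = pi/2; the integrand's zeros
# x_l = l*pi have the same eventual spacing (pi) as a Bessel kernel.
sinc = lambda x: math.sin(x) / x if x != 0.0 else 1.0
S = partial_sums(sinc, [l * math.pi for l in range(22)], 20)
```

These partial sums converge slowly (the error after 21 subintervals is still of order 10^-2), which is exactly why a convergence-acceleration step is applied to them.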

The Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm routine is then applied to these partial sums for convergence acceleration.

The ${\displaystyle {d}^{(m)}}$ transformation is implemented very efficiently by the Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm [1]. The special case m=1 (the ${\displaystyle {W}^{(1)}}$ algorithm) is the Sidi W algorithm [4].

The Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm [1] is cheap mainly because of a set of sophisticated recursions that make it possible to compute the g function, and the rest of the algorithm, only up to the value m rather than up to the full length LMAX+1 needed by direct application of the Ford-Sidi algorithm.

Even so, the development of the Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm depends heavily on the FS algorithm.

We propose here a Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm computed by direct application of the Ford-Sidi algorithm, even though it costs more than the Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm of [1], because:

1. It is simple and short, as it does not include those sophisticated recursions.

2. The g function has no particular structure at or below m, so the FS algorithm is applied directly there. For values of the g function above m, up to LMAX+1, the structured part of the g function is also handled by applying the FS algorithm directly, using g(k)=t*g(k-m) (see 3.10, p. 1219 in [1]).

3. Most of the array dimensions are reduced to 1-D.

In computing the g function, the simplest sampling, r(l)=l, is chosen. It makes the code efficient in a simple way, by saving previous values of the partial sums. Previous values of the g function could also be saved, but that would require a two-dimensional g array instead of the one-dimensional array used in this algorithm. This sampling is also very useful for alternating series.

The proposed Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm produces accurate results even with the simplest sampling r(l)=l and the small value m=2.

This algorithm copes successfully with:

1. Integrals of a non-oscillating function multiplied by a Bessel function of arbitrary order.
2. Integrals of an oscillating function, such as a Bessel function, multiplied by a Bessel function of arbitrary order.
3. Integrals that involve a Bessel function of very high order (see also [6] and the ${\displaystyle {\overline {D}}^{(m)}}$ transformation of Sidi [3]).
4. Integrals such as (5) and (17), whose integrands are not integrable at infinity and need to be defined in the sense of "Abel sums" (see [5]).
5. Integrals that involve products of Bessel functions of different arguments and different orders.

22 integrals are tested successfully.

The recursive scheme of the proposed Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm is:

FSA, FSI, and G correspond to PSIA, PSII, and PSIG in the FS algorithm, respectively (see [1]).

${\displaystyle FSA_{0}^{(n)}={\frac {S^{(n)}}{g_{1}^{(n)}}}\qquad n=0...Nmax}$

${\displaystyle FSI_{0}^{(n)}={\frac {1}{g_{1}^{(n)}}}\qquad n=0...Nmax}$

${\displaystyle G_{0,i}^{(n)}={\frac {g_{i}^{(n)}}{g_{1}^{(n)}}}\qquad n=0...Nmax\qquad i=2...Nmax+1}$

${\displaystyle FSA_{k+1}^{(n)}={\frac {FSA_{k}^{(n+1)}-FSA_{k}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1}$

${\displaystyle FSI_{k+1}^{(n)}={\frac {FSI_{k}^{(n+1)}-FSI_{k}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1}$

${\displaystyle G_{k+1,i}^{(n)}={\frac {G_{k,i}^{(n+1)}-G_{k,i}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1\qquad i=k+3...Nmax+1}$

${\displaystyle APPROX={\frac {FSA_{N_{max}}^{(0)}}{FSI_{N_{max}}^{(0)}}}}$
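As a sketch, the scheme above can be implemented exactly as written. The demo below uses m=1 with an assumed Levin-type choice of g function, ${\displaystyle g_{1}^{(n)}=a_{n+1}}$ (the first neglected term as remainder estimate) and ${\displaystyle g_{i}^{(n)}=g_{1}^{(n)}/(n+1)^{i-1}}$, which has the structure g(k)=t*g(k-m) with t=1/(n+1); it accelerates the alternating series ${\displaystyle \sum (-1)^{l}/(l+1)=\ln 2}$. The function name and the remainder-estimate choice are illustrative assumptions, not taken from [1].

```python
import math

def ford_sidi_w(S, g):
    """Direct Ford-Sidi recursion: S[n] are the partial sums S^(n),
    n = 0..Nmax, and g[i][n] holds g_i^(n) for i = 1..Nmax+1."""
    N = len(S) - 1
    # Level 0 initialization.
    FSA = [S[n] / g[1][n] for n in range(N + 1)]
    FSI = [1.0 / g[1][n] for n in range(N + 1)]
    G = {i: [g[i][n] / g[1][n] for n in range(N + 1)] for i in range(2, N + 2)}
    for k in range(N):                    # k = 0 .. Nmax-1
        # Common denominator G_{k,k+2}^(n+1) - G_{k,k+2}^(n).
        D = [G[k + 2][n + 1] - G[k + 2][n] for n in range(N - k)]
        FSA = [(FSA[n + 1] - FSA[n]) / D[n] for n in range(N - k)]
        FSI = [(FSI[n + 1] - FSI[n]) / D[n] for n in range(N - k)]
        G = {i: [(G[i][n + 1] - G[i][n]) / D[n] for n in range(N - k)]
             for i in range(k + 3, N + 2)}
    return FSA[0] / FSI[0]                # FSA_Nmax^(0) / FSI_Nmax^(0)

# Demo with m = 1 on the alternating series 1 - 1/2 + 1/3 - ... = ln 2.
Nmax = 12
a = [(-1) ** l / (l + 1) for l in range(Nmax + 2)]
S = [sum(a[:n + 1]) for n in range(Nmax + 1)]
# Assumed remainder estimate g_1(n) = a_{n+1}; then g_i = t * g_{i-m}
# with t = 1/(n+1) gives g_i(n) = a_{n+1} / (n+1)^(i-1).
g = {i: [a[n + 1] / (n + 1) ** (i - 1) for n in range(Nmax + 1)]
     for i in range(1, Nmax + 2)}
approx = ford_sidi_w(S, g)
```

With only 13 partial sums the accelerated value agrees with ln 2 to many digits, while the plain partial sum is still wrong in the second decimal place.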

Rewriting with mm = n-(k+1):

${\displaystyle FSA_{0}^{(n)}={\frac {S^{(n)}}{g_{1}}}\qquad n=0...Nmax}$

${\displaystyle FSI_{0}^{(n)}={\frac {1}{g_{1}}}\qquad n=0...Nmax}$

${\displaystyle G_{i}^{(n)}={\frac {g_{i}}{g_{1}}}\qquad n=0...Nmax\qquad i=2...Nmax+1}$

${\displaystyle FSA_{k+1}^{(mm)}={\frac {FSA_{k}^{(mm+1)}-FSA_{k}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0}$

${\displaystyle FSI_{k+1}^{(mm)}={\frac {FSI_{k}^{(mm+1)}-FSI_{k}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0}$

${\displaystyle G_{i}^{(mm)}={\frac {G_{i}^{(mm+1)}-G_{i}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0\qquad i=k+3...Nmax+1}$

${\displaystyle APPROX={\frac {FSA_{N_{max}}^{(0)}}{FSI_{N_{max}}^{(0)}}}}$

Formulating the arrays in 1-D with mm = n-(k+1):

${\displaystyle FSA^{(n)}={\frac {S^{(n)}}{g_{1}}}\qquad n=0...Nmax}$

${\displaystyle FSI^{(n)}={\frac {1}{g_{1}}}\qquad n=0...Nmax}$

${\displaystyle G_{i}^{(n)}={\frac {g_{i}}{g_{1}}}\qquad n=0...Nmax\qquad i=2...Nmax+1}$

${\displaystyle FSA^{(mm)}={\frac {FSA^{(mm+1)}-FSA^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0}$

${\displaystyle FSI^{(mm)}={\frac {FSI^{(mm+1)}-FSI^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0}$

${\displaystyle G_{i}^{(mm)}={\frac {G_{i}^{(mm+1)}-G_{i}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0\qquad i=k+3...Nmax+1}$

${\displaystyle APPROX={\frac {FSA^{(0)}}{FSI^{(0)}}}}$

with S = partial sum and Nmax+1 = number of partial sums (n = 0...Nmax),

the simple and short semi-formal code for the Ford-Sidi ${\displaystyle {W}^{(m)}}$ algorithm is:

// MAIN PROGRAM

S = 0

// calling FUNCTION WMALG, which computes the Ford-Sidi W(m) algorithm for convergence acceleration,
// and calling FUNCTION MLTAG, which computes the g function and the partial sums

END PROGRAM

//Routine for computing Ford-Sidi W(m) algorithm
FUNCTION WMALG
// prevent zero denominator - not present in [1].
IF (ABS(G(1)) >= 1D-77) THEN
FSA(N) = S / G(1)
FSI(N) = 1 / G(1)
FOR I=2 TO Nmax + 1 DO
FSG(I,N) = G(I) / G(1)
ENDFOR
ELSE
FSA(N) = S
FSI(N) = 1
FOR I=2 TO Nmax + 1 DO
FSG(I,N) = G(I)
ENDFOR
ENDIF
FOR K=0 TO N-1 DO
MM=N-(K+1)
D = FSG(K+2,MM+1) - FSG(K+2,MM)
FOR I=K+3 TO Nmax+1 DO
FSG(I,MM) = (FSG(I,MM+1) - FSG(I,MM)) / D
ENDFOR
FSA(MM) = (FSA(MM+1) - FSA(MM)) / D
FSI(MM) = (FSI(MM+1) - FSI(MM)) / D
ENDFOR
// prevent zero denominator
IF (ABS(FSI(0)) >= 1D-77) THEN
APPROX = FSA(0) / FSI(0)
ELSE
APPROX = 1D-77
ENDIF

END FUNCTION
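A runnable transcription of the routine above might look as follows. It is a sketch, not the article's code: the guard constant, the driver, and the m=1, t-type choice of g function (accelerating the Leibniz series ${\displaystyle 1-1/3+1/5-...=\pi /4}$) are illustrative assumptions.

```python
import math

TINY = 1e-77  # zero-denominator guard, as in the pseudocode

def wmalg(n, s, g, FSA, FSI, FSG):
    """One call of WMALG: absorb the n-th partial sum s and its g values
    g[1..Nmax+1], update the 1-D tables in place, and return the current
    approximation FSA(0)/FSI(0).  FSG[i] is the column FSG(I, .)."""
    if abs(g[1]) >= TINY:
        FSA[n] = s / g[1]
        FSI[n] = 1.0 / g[1]
        for i in range(2, len(g)):
            FSG[i][n] = g[i] / g[1]
    else:
        FSA[n], FSI[n] = s, 1.0
        for i in range(2, len(g)):
            FSG[i][n] = g[i]
    for k in range(n):                     # K = 0 .. N-1
        mm = n - (k + 1)
        d = FSG[k + 2][mm + 1] - FSG[k + 2][mm]
        for i in range(k + 3, len(g)):
            FSG[i][mm] = (FSG[i][mm + 1] - FSG[i][mm]) / d
        FSA[mm] = (FSA[mm + 1] - FSA[mm]) / d
        FSI[mm] = (FSI[mm + 1] - FSI[mm]) / d
    return FSA[0] / FSI[0] if abs(FSI[0]) >= TINY else TINY

# Driver: m = 1 on the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4.
Nmax = 12
FSA = [0.0] * (Nmax + 1)
FSI = [0.0] * (Nmax + 1)
FSG = [[0.0] * (Nmax + 1) for _ in range(Nmax + 2)]
s = 0.0
for n in range(Nmax + 1):
    s += (-1) ** n / (2 * n + 1)
    an1 = (-1) ** (n + 1) / (2 * n + 3)    # assumed remainder estimate g_1
    g = [0.0] + [an1 / (n + 1) ** (i - 1) for i in range(1, Nmax + 2)]
    approx = wmalg(n, s, g, FSA, FSI, FSG)
```

After each new partial sum, position j of the 1-D arrays holds the level n-j transformed value at superscript (j), so the best available approximation is always FSA(0)/FSI(0).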


With AN = sequence element and S = partial sum, the simple and short semi-formal code for computing the partial sums and the g function is:

//routine for computing partial sums and g function
FUNCTION MLTAG
// making partial sums S
// The simplicity of this part of the code is enabled by the choice of the simplest sampling
// R(L)=L (see B.5, p. 1228 in [1]). It also makes the code efficient by saving previous values
// of the partial sums (S).
AN=
S =S+AN

// making g function up to M
FOR K=1 TO M DO
// computing the sequence elements AN (needed for the g function)

// Efficiency of this part of the code can also be achieved by saving previous values of the
// sequence elements AN that were already computed for the partial sums S.
AN =
G(K) = AN
ENDFOR

// forward difference
FOR I=2 TO M DO
FOR J=M TO I STEP=-1 DO
G(J)=G(J) - G(J-1)
ENDFOR
ENDFOR

T=1/(N +1)

// making g function from M to Nmax+1 (see 3.10 p. 1219 in [1]).
FOR K=1 TO Nmax+1 DO
IF K <= M THEN
G(K)=G(K) * ((N+1) ^ K)
ELSE
G(K)=T * G(K-M)
ENDIF
ENDFOR

END FUNCTION
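The g-function construction in MLTAG can be sketched for a general m as follows. Since the pseudocode leaves the sequence element AN unspecified, the sketch assumes (as the ${\displaystyle {d}^{(m)}}$ transformation suggests) that the raw entries are the terms a_l themselves, so that g_k(n) = (n+1)^k Δ^(k-1) a_n for k ≤ m; the helper name `make_g` and that choice of AN are assumptions.

```python
def make_g(an, n, m, nmax):
    """Build g(1..nmax+1) at index n following the MLTAG pseudocode:
    raw sequence elements, forward differences in place, scaling by
    (n+1)^k for k <= m, then g(k) = t * g(k-m) with t = 1/(n+1)
    (see 3.10, p. 1219 in [1]).  `an(l)` returns the sequence element a_l."""
    g = [0.0] * (nmax + 2)                 # g[1..nmax+1] are used
    for k in range(1, m + 1):              # assumed AN: the terms a_{n+k-1}
        g[k] = an(n + k - 1)
    for i in range(2, m + 1):              # forward differences in place
        for j in range(m, i - 1, -1):
            g[j] = g[j] - g[j - 1]
    t = 1.0 / (n + 1)
    for k in range(1, nmax + 2):
        if k <= m:
            g[k] *= (n + 1) ** k           # g_k = (n+1)^k * Delta^(k-1) a_n
        else:
            g[k] = t * g[k - m]            # structured part above m
    return g

# Example: m = 2, n = 3, a_l = 1/(l+1) gives
# g_1 = 4*a_3 = 1, g_2 = 16*(a_4 - a_3) = -0.8, g_3 = g_1/4, g_4 = g_2/4.
g = make_g(lambda l: 1.0 / (l + 1), 3, 2, 5)
```

Note that only the first m entries ever require new sequence elements; everything above m is obtained by the cheap scaling g(k) = t*g(k-m).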


## The 22 tested integrals, with exact values, are:

(1)${\displaystyle \int _{0}^{\infty }e^{-k}J_{2}(kr)dk={\frac {({{\sqrt {r^{2}+1}}-1})^{2}}{r^{2}{\sqrt {r^{2}+1}}}}}$ Bessel function order greater than 1

(2)${\displaystyle \int _{0}^{\infty }J_{1}(kr)\ J_{0}(ka)\ dk={\begin{cases}{\frac {1}{r}}&{\text{if }}r>a\\{\frac {1}{2a}}&{\text{if }}r=a\\0&{\text{if }}r<a\end{cases}}}$ products of Bessel functions - different argument, different order

(3) ${\displaystyle \int _{0}^{\infty }J_{4}(kr)J_{3}(ka)dk={\begin{cases}{\frac {a^{3}}{r^{4}}}&{\text{if }}r>a\\{\frac {1}{2a}}&{\text{if }}r=a\\0&{\text{if }}r<a\end{cases}}}$ products of Bessel functions - different argument, different order

(4)${\displaystyle \int _{0}^{\infty }{\frac {k}{\sqrt {k^{2}+\alpha ^{2}}}}J_{0}(kr)dk={\frac {e^{-\alpha r}}{r}}}$ with α = 1 (see[10])

(5) ${\displaystyle \int _{0}^{\infty }k^{2}J_{0}(k)dk=-1}$ divergent oscillatory (see[5][7])

(6)${\displaystyle \int _{0}^{\infty }{\frac {1}{2}}\ln(1+k^{2})J_{1}(k)\,dk=0.421024438240708333}$ (see[7])

(7)${\displaystyle \int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{0}(k)dk=0.421024438240708333}$ (see[2][11])

(8)${\displaystyle \int _{0}^{\infty }{\frac {1-e^{-k}}{k\ln(1+{\sqrt {2}})}}J_{0}(k)\,dk=1}$ oscillatory kernel (see[10])

(9)${\displaystyle \int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{10}(k)dk=0.098970545308402}$ (see[11]) big order of Bessel function

(10)${\displaystyle \int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{100}(k)dk=0.0099989997000302}$ (see[11]) very big order of Bessel function

(11)${\displaystyle \int _{0}^{\infty }k^{2}J_{1}(k)[J_{0}(k)]^{2}dk={\frac {4}{3\pi {\sqrt {3}}}}}$ (see[9]) oscillatory kernel - product of Bessel functions - same argument

(12)${\displaystyle \int _{0}^{\infty }{\frac {1}{k}}J_{0}(k)J_{1}(k)dk={\frac {2}{\pi }}}$ oscillatory kernel - product of Bessel functions (see[2] [3]) - same argument

(13)${\displaystyle \int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{0}(k)dk=I_{0}(2.0)K_{0}(2.0)}$ see [6]

(14)${\displaystyle \int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{10}(k)dk=I_{5}(2.0)K_{5}(2.0)}$ big order of Bessel function (see [6])

(15)${\displaystyle \int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{100}(k)dk=I_{50}(2.0)K_{50}(2.0)}$ very big order of Bessel function (see [6])

(16)${\displaystyle \int _{0}^{\infty }J_{0}(k)dk=1}$ (see[2][3])

(17)${\displaystyle \int _{0}^{\infty }k^{4}J_{0}(k)dk=9}$ divergent oscillatory (see[5][7])

(18)${\displaystyle \int _{0}^{\infty }J_{0}({\frac {k^{4}+2k^{2}+5}{k^{2}+4}}){\sqrt {k^{2}+9k+20}}\,dk=2.627160401844}$ very oscillatory (see[8])

The quoted value of this integral (18) is the last iteration of the algorithm, since no closed-form solution is available.

(19)${\displaystyle \int _{0}^{\infty }J_{0}(2k){\frac {k{\sqrt {k^{2}+{\frac {1}{3}}}}(2k^{2}\cdot e^{-0.2{\sqrt {k^{2}+1}}}-(2k^{2}+1)e^{-0.2{\sqrt {k^{2}+{\frac {1}{3}}}}})}{(2k^{2}+1)^{2}-4k^{2}{\sqrt {k^{2}+{\frac {1}{3}}}}{\sqrt {k^{2}+1}}}}dk=0.02660899812797}$ (see [3])

The quoted value of this integral (19) is the last iteration of the algorithm, since no closed-form solution is available.

(20) ${\displaystyle \int _{0}^{\infty }J_{0}(k)J_{1}(1.5k)dk={\frac {2}{3}}}$ product of Bessel functions (see[12]) - different argument different order

(21) ${\displaystyle \int _{0}^{\infty }J_{0}(k)k^{-4}J_{5}(2k)dk={\frac {27}{4096}}}$ product of Bessel functions (see[12]) - different argument different order

(22) ${\displaystyle \int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{0}(k)J_{14}(1.1k)dk=-0.007612589703}$ product of Bessel functions - different argument different order

In the case of a product of Bessel functions the algorithm cannot cope with a Bessel function of order as high as J20 (see[12]), so it has been changed to J14 in integral (22). The quoted value of this integral (22) is the last iteration of the algorithm.

The computed results of the integrals, compared with the exact values, show excellent accuracy.
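Putting the pieces together, a self-contained sketch of the whole pipeline on integral (16), ${\displaystyle \int _{0}^{\infty }J_{0}(k)dk=1}$, might look as below. Several choices are assumptions for the sake of a stdlib-only illustration: J₀ is evaluated by the trapezoidal rule on its integral representation, the breakpoints are the asymptotic zero positions (l − 1/4)π rather than exact zeros, the quadrature is a small composite Gauss-Legendre rule, and the acceleration uses the m = 1 (W algorithm) g functions with the last interval integral as remainder estimate.

```python
import math

def besselj0(x, P=128):
    """J_0(x) by the P-point trapezoidal rule on the periodic
    representation J_0(x) = (1/2pi) int_0^{2pi} cos(x sin t) dt;
    spectrally accurate for |x| well below P."""
    return sum(math.cos(x * math.sin(2.0 * math.pi * j / P))
               for j in range(P)) / P

GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def gauss5(f, a, b, nsub=3):
    """Composite 5-point Gauss-Legendre rule over [a, b] with nsub panels."""
    total, h = 0.0, (b - a) / nsub
    for p in range(nsub):
        c, r = a + (p + 0.5) * h, 0.5 * h
        total += r * sum(w * f(c + r * t) for w, t in zip(GL_WEIGHTS, GL_NODES))
    return total

def ford_sidi_w(S, g):
    """Direct Ford-Sidi recursion on partial sums S[0..Nmax] with
    g[i][n] = g_i^(n), i = 1..Nmax+1."""
    N = len(S) - 1
    FSA = [S[n] / g[1][n] for n in range(N + 1)]
    FSI = [1.0 / g[1][n] for n in range(N + 1)]
    G = {i: [g[i][n] / g[1][n] for n in range(N + 1)] for i in range(2, N + 2)}
    for k in range(N):
        D = [G[k + 2][n + 1] - G[k + 2][n] for n in range(N - k)]
        FSA = [(FSA[n + 1] - FSA[n]) / D[n] for n in range(N - k)]
        FSI = [(FSI[n + 1] - FSI[n]) / D[n] for n in range(N - k)]
        G = {i: [(G[i][n + 1] - G[i][n]) / D[n] for n in range(N - k)]
             for i in range(k + 3, N + 2)}
    return FSA[0] / FSI[0]

# Breakpoints: 0, then the asymptotic zero positions (l - 1/4)*pi of J_0.
Nmax = 12
bp = [0.0] + [(l - 0.25) * math.pi for l in range(1, Nmax + 3)]
terms = [gauss5(besselj0, bp[l], bp[l + 1]) for l in range(Nmax + 2)]
S = [sum(terms[:n + 1]) for n in range(Nmax + 1)]
# Assumed m = 1 (W algorithm) g functions: g_1(n) = next interval integral,
# g_i(n) = g_1(n) / (n+1)^(i-1), matching g_i = t * g_{i-m}, t = 1/(n+1).
g = {i: [terms[n + 1] / (n + 1) ** (i - 1) for n in range(Nmax + 1)]
     for i in range(1, Nmax + 2)}
approx = ford_sidi_w(S, g)
```

With only 13 partial sums the accelerated value is far more accurate than the raw partial sum, whose error is still of order 10^-1 at the cut-off near 13π.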

## References

[1] W.F. Ford and A. Sidi. An algorithm for a generalization of the Richardson extrapolation process. SIAM J. Numer. Anal., 24:1212–1232, 1987.

[2] D. Levin and A. Sidi. Two new classes of nonlinear transformations for accelerating the convergence of infinite integrals and series. Appl. Math. Comp., 9:175–215, 1981. Originally appeared as a Tel Aviv University preprint in 1975.

[3] A. Sidi. Extrapolation methods for oscillatory infinite integrals. J. Inst. Maths. Applics., 26:1–20, 1980.

[4] A. Sidi. An algorithm for a special case of a generalization of the Richardson extrapolation process. Numer. Math., 38:299–307, 1982.

[5] A. Sidi. Extrapolation methods for divergent oscillatory infinite integrals that are defined in the sense of summability. J. Comp. Appl. Math., 17:105–114, 1987.

[6] A. Sidi. Computation of infinite integrals involving Bessel functions of arbitrary order by the ${\displaystyle {\overline {D}}}$-transformation. J. Comp. Appl. Math., 78:125–130, 1997.

[7] A. Sidi. A user-friendly extrapolation method for oscillatory infinite integrals. Math. Comp., 51:249–266, 1988.

[8] A. Sidi. The numerical evaluation of very oscillatory infinite integrals by extrapolation. Math. Comp., 38:517–529, 1982.

[9] A. Sidi. A user-friendly extrapolation method for computing infinite integrals of products of oscillatory functions. IMA J. Numer. Anal., 32:602–631, 2012.

[10] T. Hasegawa and A. Sidi. An automatic integration procedure for infinite range integrals involving oscillatory kernels. Numer. Algorithms, 13:1–19, 1996.

[11] S.K. Lucas and H.A. Stone. Evaluating infinite integrals involving Bessel functions of arbitrary order. J. Comput. Appl. Math., 64:217–231, 1995.

[12] S.K. Lucas. Evaluating infinite integrals involving products of Bessel functions of arbitrary order. J. Comput. Appl. Math., 64:269–282, 1995.