# Convergence acceleration by the Ford-Sidi W(m) algorithm

## Computation of infinite integrals of non-oscillating or oscillating functions (trigonometric or Bessel) multiplied by a Bessel function of arbitrary order

Although we deal with infinite integrals, we use the ${d}^{(m)}$ transformation for infinite series, because these integrals are evaluated by accelerating the convergence of the sequence of partial sums. (Using the ${D}^{(m)}$ transformation for infinite integrals also produces excellent results.) The integrals are of the form $\int _{0}^{\infty }c(x)w(x)dx$, where $c(x)$ is a non-oscillating function or an oscillating function (trigonometric or Bessel) and $w(x)$ is a Bessel function of arbitrary order.

As in Sidi's modified W algorithm (the mW algorithm) (see), the only asymptotic information required is the eventual distance between the zeros of the Bessel function. The partial sums are computed by integrating between consecutive zeros of the Bessel function (see) with a low-order Gaussian quadrature (see), although other numerical quadrature methods can be employed.
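As an illustrative sketch (not the authors' code) of how these partial sums are formed, the snippet below integrates between consecutive breakpoints with a 5-point Gauss-Legendre rule. Here $\sin(x)/x$ stands in for a Bessel kernel: its zeros are exactly $\pi$ apart, which is also the asymptotic spacing of the Bessel-function zeros, and $\int _{0}^{\infty }\sin(x)/x\,dx=\pi /2$ gives a known limit to check against.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def gauss5(f, a, b):
    """Low-order (5-point) Gauss-Legendre quadrature on [a, b]."""
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * x) for x, w in zip(GL_NODES, GL_WEIGHTS))

def partial_sums(f, breakpoints):
    """Partial sums S_n = integral of f from 0 to breakpoints[n],
    accumulated interval by interval between consecutive breakpoints."""
    sums, total, a = [], 0.0, 0.0
    for b in breakpoints:
        total += gauss5(f, a, b)
        sums.append(total)
        a = b
    return sums

# sin(x)/x as a stand-in oscillatory kernel; its zeros sit at n*pi.
f = lambda x: math.sin(x) / x if x != 0.0 else 1.0
S = partial_sums(f, [(n + 1) * math.pi for n in range(12)])
# S oscillates around the limit Si(inf) = pi/2 = 1.5707963...
```

The resulting sequence S oscillates around $\pi /2$ with slowly decaying amplitude; this is exactly the kind of sequence to which the ${W}^{(m)}$ acceleration is then applied.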

The Ford-Sidi ${W}^{(m)}$ algorithm routine is then applied to these partial sums for convergence acceleration.

The ${d}^{(m)}$ transformation is implemented very efficiently by the Ford-Sidi ${W}^{(m)}$ algorithm. The special case m=1 (the ${W}^{(1)}$ algorithm) is the Sidi W algorithm.

This Ford-Sidi ${W}^{(m)}$ algorithm is cheap mainly because of a set of sophisticated recursions that make it possible to compute the g function, and the rest of the algorithm, only up to the value m rather than up to the full length LMAX+1, as direct application of the Ford-Sidi algorithm would require.

In any case, the development of the Ford-Sidi ${W}^{(m)}$ algorithm depends heavily on the FS algorithm.

We propose here a Ford-Sidi ${W}^{(m)}$ algorithm computed by direct application of the Ford-Sidi algorithm. Even though it costs more than the Ford-Sidi ${W}^{(m)}$ algorithm proper, it is preferred because:

1. It is simple and short, as it does not include those sophisticated recursions.

2. The g function has no particular structure for indices up to and including m, so the FS algorithm is applied directly. For indices of the g function above m, up to LMAX+1, the structured part of the g function is also handled by applying the FS algorithm directly, using g(k)=t*g(k-m) (see 3.10 p. 1219 in ).

3. Most of the array dimensions are reduced to 1-D.

In computing the g function, the simplest sampling, r(l)=l, is chosen. It makes the code more efficient by allowing previous values of the partial sums to be saved. Previous values of the g function could also be saved, but that would require a two-dimensional g array instead of the one-dimensional one used in this algorithm. This sampling is also very useful for alternating series.

The proposed Ford-Sidi ${W}^{(m)}$ algorithm produces accurate results even with the simplest sampling r(l)=l and the small value m=2.

This algorithm copes successfully with:

1. Integrals of a non-oscillating function multiplied by a Bessel function of arbitrary order.
2. Integrals of an oscillating function, such as a Bessel function, multiplied by a Bessel function of arbitrary order.
3. Integrals that involve a Bessel function of very high order (see also  and the ${\overline {D}}^{(m)}$ transformation of Sidi ).
4. Integrals such as (27) and (28), whose integrands are not integrable at infinity and need to be defined in the sense of "Abel sums" (see).
5. Integrals that involve products of Bessel functions of different arguments and different orders.
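For the divergent integrals just mentioned, a worked check of the Abel-sum definition (ours, not from the text) may be helpful. The Abel sum regularizes a non-integrable integrand by an exponential damping factor:

$\int _{0}^{\infty }f(x)dx:=\lim _{\varepsilon \to 0^{+}}\int _{0}^{\infty }e^{-\varepsilon x}f(x)dx$

For integral (5) in the list below, for instance, the Laplace transform $\int _{0}^{\infty }e^{-\varepsilon k}J_{0}(k)dk=(1+\varepsilon ^{2})^{-1/2}$ gives

$\int _{0}^{\infty }e^{-\varepsilon k}k^{2}J_{0}(k)dk={\frac {d^{2}}{d\varepsilon ^{2}}}{\frac {1}{\sqrt {1+\varepsilon ^{2}}}}={\frac {3\varepsilon ^{2}}{(1+\varepsilon ^{2})^{5/2}}}-{\frac {1}{(1+\varepsilon ^{2})^{3/2}}}\;\to \;-1\quad (\varepsilon \to 0^{+}),$

in agreement with the listed value.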

22 integrals are tested successfully.

The recursive scheme of the proposed Ford-Sidi ${W}^{(m)}$ algorithm is:

FSA, FSI, and G correspond to PSIA, PSII, and PSIG in the FS algorithm, respectively (see).

$FSA_{0}^{(n)}={\frac {S^{(n)}}{g_{1}^{(n)}}}\qquad n=0...Nmax$

$FSI_{0}^{(n)}={\frac {1}{g_{1}^{(n)}}}\qquad n=0...Nmax$

$G_{0,i}^{(n)}={\frac {g_{i}^{(n)}}{g_{1}^{(n)}}}\qquad n=0...Nmax\qquad i=2...Nmax+1$

$FSA_{k+1}^{(n)}={\frac {FSA_{k}^{(n+1)}-FSA_{k}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1$

$FSI_{k+1}^{(n)}={\frac {FSI_{k}^{(n+1)}-FSI_{k}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1$

$G_{k+1,i}^{(n)}={\frac {G_{k,i}^{(n+1)}-G_{k,i}^{(n)}}{G_{k,k+2}^{(n+1)}-G_{k,k+2}^{(n)}}}\qquad n\geq 0\qquad k=0...Nmax-1\qquad i=k+3...Nmax+1$

$APPROX={\frac {FSA_{N_{max}}^{(0)}}{FSI_{N_{max}}^{(0)}}}$

Rewriting with mm=n-(k+1):

$FSA_{0}^{(n)}={\frac {S^{(n)}}{g_{1}}}\qquad n=0...Nmax$

$FSI_{0}^{(n)}={\frac {1}{g_{1}}}\qquad n=0...Nmax$

$G_{i}^{(n)}={\frac {g_{i}}{g_{1}}}\qquad n=0...Nmax\qquad i=2...Nmax+1$

$FSA_{k+1}^{(mm)}={\frac {FSA_{k}^{(mm+1)}-FSA_{k}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0$

$FSI_{k+1}^{(mm)}={\frac {FSI_{k}^{(mm+1)}-FSI_{k}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0$

$G_{i}^{(mm)}={\frac {G_{i}^{(mm+1)}-G_{i}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0\qquad i=k+3...Nmax+1$

$APPROX={\frac {FSA_{N_{max}}^{(0)}}{FSI_{N_{max}}^{(0)}}}$

Formulating the arrays in 1-D with mm=n-(k+1):

$FSA^{(n)}={\frac {S^{(n)}}{g_{1}}}\qquad n=0...Nmax$

$FSI^{(n)}={\frac {1}{g_{1}}}\qquad n=0...Nmax$

$G_{i}^{(n)}={\frac {g_{i}}{g_{1}}}\qquad n=0...Nmax\qquad i=2...Nmax+1$

$FSA^{(mm)}={\frac {FSA^{(mm+1)}-FSA^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0$

$FSI^{(mm)}={\frac {FSI^{(mm+1)}-FSI^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0$

$G_{i}^{(mm)}={\frac {G_{i}^{(mm+1)}-G_{i}^{(mm)}}{G_{k+2}^{(mm+1)}-G_{k+2}^{(mm)}}}\qquad n\geq 0\qquad k\geq 0\qquad i=k+3...Nmax+1$

$APPROX={\frac {FSA^{(0)}}{FSI^{(0)}}}$

with S = partial sum and Nmax = number of partial sums.
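The scheme above can be sketched in code. The following minimal Python implementation follows the first (two-dimensional) formulation verbatim. As a demonstration it uses Levin-type choices $g_{i}^{(n)}=a_{n+1}/(n+1)^{i-1}$, an assumption made only for this demo (the ${d}^{(m)}$ transformation builds its g function differently, as in MLTAG below), to accelerate the alternating series for ln 2:

```python
import math

def ford_sidi(S, g):
    """Ford-Sidi recursion exactly as in the scheme above.
    S[n] = partial sum S^(n), n = 0..Nmax;
    g[n][i-1] = g_i^(n), i = 1..Nmax+1."""
    Nmax = len(S) - 1
    FSA = [S[n] / g[n][0] for n in range(Nmax + 1)]
    FSI = [1.0 / g[n][0] for n in range(Nmax + 1)]
    # G[n][j] stores G_{k,i}^(n) with i = j + 2
    G = [[g[n][j + 1] / g[n][0] for j in range(Nmax)] for n in range(Nmax + 1)]
    for k in range(Nmax):
        # ascending n: entries at index n+1 still hold level-k values when read
        for n in range(Nmax - k):
            D = G[n + 1][k] - G[n][k]      # G_{k,k+2}^(n+1) - G_{k,k+2}^(n)
            FSA[n] = (FSA[n + 1] - FSA[n]) / D
            FSI[n] = (FSI[n + 1] - FSI[n]) / D
            for j in range(k + 1, Nmax):   # i = k+3 ... Nmax+1
                G[n][j] = (G[n + 1][j] - G[n][j]) / D
    return FSA[0] / FSI[0]                 # APPROX

# Demo: accelerate ln 2 = 1 - 1/2 + 1/3 - ... with Levin-type g values
# (a hypothetical choice for illustration only).
Nmax = 10
a = [(-1) ** n / (n + 1) for n in range(Nmax + 2)]
S = [sum(a[: n + 1]) for n in range(Nmax + 1)]
g = [[a[n + 1] / (n + 1) ** i for i in range(Nmax + 1)] for n in range(Nmax + 1)]
approx = ford_sidi(S, g)   # close to ln 2 = 0.6931471805...
```

Eleven partial sums, which by themselves carry only about one correct digit, yield the limit to roughly machine precision once the recursion is applied.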

The simple and short semi-formal code for the Ford-Sidi ${W}^{(m)}$ algorithm is:

    // MAIN PROGRAM
    S = 0
    // calls FUNCTION MLTAG, which computes the g function and the partial sums,
    // and FUNCTION WMALG, which computes the Ford-Sidi W(m) algorithm
    // for convergence acceleration
    END PROGRAM

    // Routine for computing the Ford-Sidi W(m) algorithm
    FUNCTION WMALG
      // prevent a zero denominator - this guard does not exist in .
      IF (ABS(G(1)) >= 1D-77) THEN
        FSA(N) = S / G(1)
        FSI(N) = 1 / G(1)
        FOR I=2 TO Nmax+1 DO
          FSG(I,N) = G(I) / G(1)
        ENDFOR
      ELSE
        FSA(N) = S
        FSI(N) = 1
        FOR I=2 TO Nmax+1 DO
          FSG(I,N) = G(I)
        ENDFOR
      ENDIF
      FOR K=0 TO N-1 DO
        MM = N-(K+1)
        D = FSG(K+2,MM+1) - FSG(K+2,MM)
        FOR I=K+3 TO Nmax+1 DO
          FSG(I,MM) = (FSG(I,MM+1) - FSG(I,MM)) / D
        ENDFOR
        FSA(MM) = (FSA(MM+1) - FSA(MM)) / D
        FSI(MM) = (FSI(MM+1) - FSI(MM)) / D
      ENDFOR
      // prevent a zero denominator (the divisor is FSI(0), so it is the value tested)
      IF (ABS(FSI(0)) >= 1D-77) THEN
        APPROX = FSA(0) / FSI(0)
      ELSE
        APPROX = 1D-77
      ENDIF
    END FUNCTION
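A direct Python transcription of WMALG may make the one-dimensional in-place updates concrete. The driver below feeds it partial sums of the alternating series for ln 2 with Levin-type g values; this choice of g is illustrative only (in the proposed method the g values come from MLTAG):

```python
import math

def wmalg_step(N, S, G, FSA, FSI, FSG, Nmax, tiny=1e-77):
    """One call of WMALG above, in Python: absorb partial sum S (index N)
    together with its g values G[1..Nmax+1] (G[0] unused), update FSA, FSI
    and FSG in place, and return the current approximation FSA(0)/FSI(0)."""
    if abs(G[1]) >= tiny:
        FSA[N] = S / G[1]
        FSI[N] = 1.0 / G[1]
        for i in range(2, Nmax + 2):
            FSG[i][N] = G[i] / G[1]
    else:
        FSA[N] = S
        FSI[N] = 1.0
        for i in range(2, Nmax + 2):
            FSG[i][N] = G[i]
    for k in range(N):               # K = 0 ... N-1
        mm = N - (k + 1)
        D = FSG[k + 2][mm + 1] - FSG[k + 2][mm]
        for i in range(k + 3, Nmax + 2):
            FSG[i][mm] = (FSG[i][mm + 1] - FSG[i][mm]) / D
        FSA[mm] = (FSA[mm + 1] - FSA[mm]) / D
        FSI[mm] = (FSI[mm + 1] - FSI[mm]) / D
    return FSA[0] / FSI[0] if abs(FSI[0]) >= tiny else tiny

# Demo on ln 2 = 1 - 1/2 + 1/3 - ..., with Levin-type g values
# (an illustrative choice; the d^(m) g function is built by MLTAG below).
Nmax = 10
FSA = [0.0] * (Nmax + 1)
FSI = [0.0] * (Nmax + 1)
FSG = [[0.0] * (Nmax + 1) for _ in range(Nmax + 2)]
S = 0.0
for n in range(Nmax + 1):
    S += (-1) ** n / (n + 1)
    an1 = (-1) ** (n + 1) / (n + 2)            # remainder estimate a_{n+1}
    G = [0.0] + [an1 / (n + 1) ** (i - 1) for i in range(1, Nmax + 2)]
    approx = wmalg_step(n, S, G, FSA, FSI, FSG, Nmax)
# approx is now close to ln 2
```

Note how each new index N touches only the anti-diagonal positions MM = N-1, ..., 0, so FSA and FSI stay one-dimensional, exactly as in the pseudocode.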


With AN = sequence element and S = partial sum, the simple and short semi-formal code for computing the partial sums and the g function is:

    // Routine for computing the partial sums and the g function
    FUNCTION MLTAG
      // making the partial sums S
      // the simplicity of this part of the code is enabled by the choice of the
      // simplest sampling R(L)=L (see B.5 p. 1228 in ). It also makes the code
      // efficient by saving previous values of the partial sums (S).
      AN =
      S = S + AN

      // making the g function up to M
      FOR K=1 TO M DO
        // computing the sequence elements AN (needed for the g function);
        // this part can also be made efficient by saving previous values of the
        // sequence elements AN that were already computed for the partial sums S
        AN =
        G(K) = AN
      ENDFOR

      // forward differences
      FOR I=2 TO M DO
        FOR J=M TO I STEP -1 DO
          G(J) = G(J) - G(J-1)
        ENDFOR
      ENDFOR

      T = 1/(N+1)

      // making the g function from M to Nmax+1 (see 3.10 p. 1219 in ).
      FOR K=1 TO Nmax+1 DO
        IF K <= M THEN
          G(K) = G(K) * ((N+1) ^ K)
        ELSE
          G(K) = T * G(K-M)
        ENDIF
      ENDFOR
    END FUNCTION
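The g-function construction in MLTAG can be sketched as follows. Which sequence elements are loaded into G(1..M) is left open in the pseudocode above (the elided AN assignments), so the `elements` argument here is an assumption made for illustration:

```python
def mltag_g(elements, n, m, nmax):
    """Sketch of the g-function construction in MLTAG.  elements = the m
    consecutive sequence elements loaded into G(1..m) (which elements these
    are is an assumption here; MLTAG leaves the AN assignments open).
    Returns g[1..nmax+1] with index 0 unused."""
    g = [0.0] * (nmax + 2)
    for k in range(1, m + 1):               # load G(1..M)
        g[k] = elements[k - 1]
    for i in range(2, m + 1):               # in-place forward differences:
        for j in range(m, i - 1, -1):       # g[k] becomes the (k-1)-th
            g[j] = g[j] - g[j - 1]          # forward difference
    t = 1.0 / (n + 1)
    for k in range(1, nmax + 2):
        if k <= m:
            g[k] *= (n + 1) ** k            # structured scaling up to m
        else:
            g[k] = t * g[k - m]             # g(k) = t * g(k-m), see 3.10
    return g

# With elements [1, 4, 9] (m = 3) the in-place differences give 1, 3, 2,
# so g(1..3) = (n+1)*1, (n+1)^2*3, (n+1)^3*2, and g(4) = g(1)/(n+1), etc.
g = mltag_g([1.0, 4.0, 9.0], n=4, m=3, nmax=6)
```

The last loop is the point made in item 2 above: beyond index m the g function repeats its first m entries scaled by t = 1/(n+1), so no new information is needed there.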


## The 22 tested integrals, with exact values, are:

(1)$\int _{0}^{\infty }e^{-k}J_{2}(kr)dk={\frac {({{\sqrt {r^{2}+1}}-1})^{2}}{r^{2}{\sqrt {r^{2}+1}}}}$ Bessel function of order greater than 1

(2)$\int _{0}^{\infty }J_{1}(kr)\ J_{0}(ka)\ dk={\begin{cases}{\frac {1}{r}}&{\text{if }}r>a\\{\frac {1}{2a}}&{\text{if }}r=a\\0&{\text{if }}r<a\end{cases}}$ product of Bessel functions - different argument, different order

(3) $\int _{0}^{\infty }J_{4}(kr)J_{3}(ka)dk={\begin{cases}{\frac {a^{3}}{r^{4}}}&{\text{if }}r>a\\{\frac {1}{2a}}&{\text{if }}r=a\\0&{\text{if }}r<a\end{cases}}$ product of Bessel functions - different argument, different order

(4)$\int _{0}^{\infty }{\frac {k}{\sqrt {k^{2}+\alpha ^{2}}}}J_{0}(kr)dk={\frac {e^{-\alpha r}}{r}}$ with $\alpha =1$ (see)

(5) $\int _{0}^{\infty }k^{2}J_{0}(k)dk=-1$ divergent oscillatory (see)

(6)$\int _{0}^{\infty }{\frac {1}{2}}\ln(1+k^{2})J_{1}(k)\,dk=0.421024438240708333$ (see)

(7)$\int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{0}(k)dk=0.421024438240708333$ (see)

(8)$\int _{0}^{\infty }{\frac {1-e^{-k}}{k\ln(1+{\sqrt {2}})}}J_{0}(k)\,dk=1$ oscillatory kernel (see)

(9)$\int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{10}(k)dk=0.098970545308402$ (see) high order of the Bessel function

(10)$\int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{100}(k)dk=0.0099989997000302$ (see) very high order of the Bessel function

(11)$\int _{0}^{\infty }k^{2}J_{1}(k)[J_{0}(k)]^{2}dk={\frac {4}{3\pi {\sqrt {3}}}}$ oscillatory kernel - product of Bessel functions, same argument (see)

(12)$\int _{0}^{\infty }{\frac {1}{k}}J_{0}(k)J_{1}(k)dk={\frac {2}{\pi }}$ oscillatory kernel - product of Bessel functions, same argument (see)

(13)$\int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{0}(k)dk=I_{0}(2.0)K_{0}(2.0)$ (see)

(14)$\int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{10}(k)dk=I_{5}(2.0)K_{5}(2.0)$ high order of the Bessel function (see)

(15)$\int _{0}^{\infty }{\frac {1}{\sqrt {16+k^{2}}}}J_{100}(k)dk=I_{50}(2.0)K_{50}(2.0)$ very high order of the Bessel function (see)

(16)$\int _{0}^{\infty }J_{0}(k)dk=1$ (see)

(17)$\int _{0}^{\infty }k^{4}J_{0}(k)dk=9$ divergent oscillatory (see)

(18)$\int _{0}^{\infty }J_{0}({\frac {k^{4}+2k^{2}+5}{k^{2}+4}}){\sqrt {k^{2}+9k+20}}\,\ dk=2.627160401844$ very oscillatory (see)

The "exact" value of this integral is taken to be the result of the last iteration.

(19)$\int _{0}^{\infty }J_{0}(2k){\frac {k{\sqrt {k^{2}+{\frac {1}{3}}}}(2k^{2}\cdot e^{-0.2{\sqrt {k^{2}+1}}}-(2k^{2}+1)e^{-0.2{\sqrt {k^{2}+{\frac {1}{3}}}}})}{(2k^{2}+1)^{2}-4k^{2}{\sqrt {k^{2}+{\frac {1}{3}}}}{\sqrt {k^{2}+1}}}}dk=0.02660899812797$ (see)

The "exact" value of this integral is taken to be the result of the last iteration.

(20) $\int _{0}^{\infty }J_{0}(k)J_{1}(1.5k)dk={\frac {2}{3}}$ product of Bessel functions (see) - different argument different order

(21) $\int _{0}^{\infty }J_{0}(k)k^{-4}J_{5}(2k)dk={\frac {27}{4096}}$ product of Bessel functions (see) - different argument, different order

(22) $\int _{0}^{\infty }{\frac {k}{1+k^{2}}}J_{0}(k)J_{14}(1.1k)dk=-0.007612589703$ product of Bessel functions - different argument different order

In the case of a product of Bessel functions, the algorithm cannot cope with a Bessel function of order as high as $J_{20}$ (see), so the order has been changed to $J_{14}$ in integral (22). The "exact" value of this integral is taken to be the result of the last iteration.

The computed results of the integrals, compared with the exact values, show excellent accuracy.
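As a self-contained spot check (ours, not the authors' code) of integral (16), the sketch below combines the pieces described above: $J_{0}$ evaluated from its integral representation, partial sums between breakpoints $\pi$ apart (only the asymptotic zero spacing is needed, as in the mW algorithm), and a Levin-type acceleration with mW-style remainder estimates $\omega _{j}=S_{j+1}-S_{j}$, used here as a compact stand-in closely related to the ${W}^{(1)}$ step:

```python
import math

def j0(x, terms=200):
    """Bessel J_0 via its integral representation
    J_0(x) = (1/pi) * int_0^pi cos(x sin t) dt, evaluated with the
    trapezoidal rule (spectrally accurate here, since the integrand
    is periodic in t with period pi)."""
    return sum(math.cos(x * math.sin(j * math.pi / terms))
               for j in range(terms)) / terms

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def gauss5(f, a, b):
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * x) for x, w in zip(GL_NODES, GL_WEIGHTS))

# Partial sums S_s = int_0^{(s+1)pi} J_0(k) dk, interval by interval.
npts = 14
S, total = [], 0.0
for s in range(npts):
    total += gauss5(j0, s * math.pi, (s + 1) * math.pi)
    S.append(total)

# Levin-type acceleration with remainder estimates w_j = S_{j+1} - S_j.
n = npts - 2
num = den = 0.0
for j in range(n + 1):
    c = (-1) ** j * math.comb(n, j) * (j + 1) ** (n - 1) / (S[j + 1] - S[j])
    num += c * S[j]
    den += c
approx = num / den   # should be close to the exact value 1
```

The raw partial sums still oscillate at the percent level after fourteen intervals, while the accelerated value reproduces the exact result 1 to many digits, which is the behavior the tables of computed results illustrate for all 22 integrals.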