Convergence acceleration by the Ford-Sidi W(m) algorithm
Computation of infinite integrals involving non-oscillating or oscillating functions (trigonometric or Bessel) multiplied by a Bessel function of arbitrary order
Although we deal with infinite integrals, we use the transformation for infinite series, because these integrals are evaluated by accelerating the convergence of the sequence of partial sums. Still, using the transformation for infinite integrals also produces excellent results. The integrals are of the form: where c(x) is a non-oscillating or oscillating (trigonometric or Bessel) function and w(x) is a Bessel function of arbitrary order.
As in the Sidi modified W algorithm (mW algorithm) (see), the only asymptotic information required is the eventual distance between the zeros of the Bessel function. The partial sums are computed by integrating between consecutive zeros of the Bessel function (see) with a low-order Gaussian quadrature (see), although other numerical quadrature methods can be employed.
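As an illustration of this step (a sketch of ours, not the paper's code), the partial sums for the test case ∫_0^∞ J0(x) dx = 1 can be generated by integrating between consecutive zeros of J0 with a fixed 5-point Gauss-Legendre rule. The McMahon approximation of the zeros and the integral representation of J0 are assumptions of this sketch:

```python
import math

def j0(x, n=200):
    # Bessel J0 via its integral representation J0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
    # evaluated with the midpoint rule (very accurate here for n well above |x|)
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) / n

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL = [(-0.9061798459386640, 0.2369268850561891),
      (-0.5384693101056831, 0.4786286704993665),
      ( 0.0,                0.5688888888888889),
      ( 0.5384693101056831, 0.4786286704993665),
      ( 0.9061798459386640, 0.2369268850561891)]

def gauss(f, a, b):
    # low-order Gaussian quadrature on [a, b]
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * x) for x, w in GL)

def partial_sums(f, nterms):
    # break points: 0, then McMahon's approximation j_{0,k} ~ b + 1/(8b), b = (k - 1/4)*pi,
    # of the consecutive zeros of J0; only their eventual spacing (pi) really matters
    def zero(k):
        b = (k - 0.25) * math.pi
        return b + 1.0 / (8.0 * b)
    xs = [0.0] + [zero(k) for k in range(1, nterms + 1)]
    out, S = [], 0.0
    for a, b in zip(xs, xs[1:]):
        S += gauss(f, a, b)   # one "term" = integral between consecutive zeros
        out.append(S)
    return out
```

The resulting partial sums oscillate around the exact value 1 and converge slowly, which is exactly the situation the extrapolation is designed for.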
The Ford-Sidi algorithm routine is applied to these partial sums for convergence acceleration.
The transformation is implemented very efficiently by the Ford-Sidi W(m) algorithm. The special case m=1 of the algorithm is the Sidi W algorithm.
The W(m) algorithm is cheap mainly because of a set of sophisticated recursions that make it possible to compute the g function and the rest of the algorithm only up to the value m, and not up to the full length LMAX+1 needed by direct application of the Ford-Sidi (FS) algorithm.
In any case, the development of the W(m) algorithm depends heavily on the FS algorithm.
We propose here a W(m) algorithm computed by direct application of the FS algorithm, even though it costs more than the recursive W(m) algorithm, because:
1. It is simple and short, as it does not include those sophisticated recursions.
2. The g function has no particular structure for indices up to and including m, so the FS algorithm is applied directly. For values of the g function above m, up to LMAX+1, the structured part of the g function is also handled by applying the FS algorithm directly, using g(k)=t*g(k-m) (see 3.10, p. 1219 in ).
3. Most of the array dimensions are reduced to 1-D.
In computing the g function, the simplest sampling, r(l)=l, is chosen. It allows the code to be made efficient by saving previous values of the partial sums. Previous values of the g function could also be saved, but that would require computing a two-dimensional g instead of the one-dimensional g used in this algorithm. This sampling is also very useful for alternating series.
The proposed W(m) algorithm produces accurate results even with the simplest sampling r(l)=l and the small value m=2.
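A minimal illustration of why the sampling r(l)=l is cheap (the helper below is ours, not the paper's): each new partial sum extends the previous one by a single term, so earlier sums can be reused instead of being recomputed from scratch.

```python
def running_partial_sums(term, n):
    # with r(l) = l the algorithm consumes S_0, S_1, ..., S_{n-1};
    # since S_l = S_{l-1} + a_l, each step adds exactly one new term
    S, out = 0.0, []
    for l in range(n):
        S += term(l)
        out.append(S)
    return out
```

For the alternating series with terms (-1)^l/(l+1) this yields 1, 0.5, 0.8333..., and so on.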
This algorithm copes successfully with:
- Integrals of a non-oscillating function multiplied by a Bessel function of arbitrary order.
- Integrals of an oscillating function, such as a Bessel function, multiplied by a Bessel function of arbitrary order.
- Integrals that involve a Bessel function of very high order (see also  and the transformation of Sidi ).
- Integrals such as (27) and (28), whose integrands are not integrable at infinity and need to be defined in the sense of "Abel sums" (see).
- Integrals that involve products of Bessel functions of different arguments and different orders.
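As a hedged illustration of Abel summation (a textbook example, not one of the paper's integrals): the divergent integral ∫_0^∞ sin(x) dx equals 1 in the Abel sense, because ∫_0^∞ e^(-εx) sin(x) dx = 1/(1+ε²) → 1 as ε → 0+. A short numerical check:

```python
import math

def abel_regularized(eps, h=0.001, xmax=200.0):
    # trapezoidal approximation of int_0^xmax e^(-eps*x) sin(x) dx;
    # the exponential damping makes truncation at xmax harmless for eps around 0.1
    n = int(xmax / h)
    s = 0.5 * math.exp(-eps * xmax) * math.sin(xmax)  # endpoint terms (value at x=0 is 0)
    s += sum(math.exp(-eps * i * h) * math.sin(i * h) for i in range(1, n))
    return s * h
```

For ε = 0.1 the regularized value is 1/1.01 ≈ 0.990, and it approaches 1 as ε shrinks.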
Twenty-two integrals are tested successfully.
The recursive scheme of the proposed Ford-Sidi W(m) algorithm is:
FSA, FSI, and FSG correspond to PSIA, PSII, and PSIG in the FS algorithm, respectively (see).
Rewriting with mm=n-(k+1):
Formulating the arrays in 1-D with mm=n-(k+1):
With S = partial sum and Nmax = number of partial sums, the simple and short semi-formal code for the Ford-Sidi W(m) algorithm is:
// MAIN PROGRAM
S = 0
// calling FUNCTION WMALG, which computes the Ford-Sidi W(m) algorithm for convergence acceleration,
// and calling FUNCTION MLTAG, which computes the g function and the partial sums
END PROGRAM

// Routine for computing the Ford-Sidi W(m) algorithm
FUNCTION WMALG
    // prevent zero denominator - this check does not exist in .
    IF (ABS(G(1)) >= 1D-77) THEN
        FSA(N) = S / G(1)
        FSI(N) = 1 / G(1)
        FOR I = 2 TO Nmax+1 DO
            FSG(I,N) = G(I) / G(1)
        ENDFOR
    ELSE
        FSA(N) = S
        FSI(N) = 1
        FOR I = 2 TO Nmax+1 DO
            FSG(I,N) = G(I)
        ENDFOR
    ENDIF
    FOR K = 0 TO N-1 DO
        MM = N - (K+1)
        D = FSG(K+2,MM+1) - FSG(K+2,MM)
        FOR I = K+3 TO Nmax+1 DO
            FSG(I,MM) = (FSG(I,MM+1) - FSG(I,MM)) / D
        ENDFOR
        FSA(MM) = (FSA(MM+1) - FSA(MM)) / D
        FSI(MM) = (FSI(MM+1) - FSI(MM)) / D
    ENDFOR
    // prevent zero denominator (the denominator of APPROX is FSI(0))
    IF (ABS(FSI(0)) >= 1D-77) THEN
        APPROX = FSA(0) / FSI(0)
    ELSE
        APPROX = 1D-77
    ENDIF
END FUNCTION
With AN = sequence element and S = partial sum, the simple and short semi-formal code for computing the partial sums and the g function is:
// Routine for computing the partial sums and the g function
FUNCTION MLTAG
    // making the partial sums S
    // the simplicity of this part of the code is enabled by the choice of the simplest
    // sampling R(L)=L (see B.5, p. 1228 in ). It also allows the code to be made efficient
    // by saving previous values of the partial sums (S).
    AN =
    S = S + AN
    // making the g function up to M
    FOR K = 1 TO M DO
        // computing the sequence elements AN (needed for the g function);
        // this part of the code can also be made efficient by saving previous values of the
        // sequence elements AN that were already computed for the partial sums S
        AN =
        G(K) =
    ENDFOR
    // forward difference
    FOR I = 2 TO M DO
        FOR J = M TO I STEP -1 DO
            G(J) = G(J) - G(J-1)
        ENDFOR
    ENDFOR
    T = 1 / (N+1)
    // making the g function from M to Nmax+1 (see 3.10, p. 1219 in ).
    FOR K = 1 TO Nmax+1 DO
        IF K <= M THEN
            G(K) = G(K) * ((N+1) ^ K)
        ELSE
            G(K) = T * G(K-M)
        ENDIF
    ENDFOR
END FUNCTION
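The two routines above can be combined into a short, direct Python sketch. The elided sequence-element formulas are taken here to be the terms a_n of the series, and the g-function construction follows the forward-difference scheme of MLTAG; both are our reading of the semi-formal code, not guaranteed to match the authors' implementation. The sketch is exercised on the alternating harmonic series with m=1 (the Sidi W case).

```python
import math

def fs_wm(term, m=1, nmax=12):
    # Direct FS recursion applied to a W(m)-style g function.
    # term(n) returns the n-th series element a_n; returns the accelerated sum.
    size = nmax + 2
    FSA = [0.0] * size                          # numerator column
    FSI = [0.0] * size                          # denominator column
    FSG = [[0.0] * size for _ in range(size)]   # g-function columns
    S, approx = 0.0, 0.0
    for n in range(nmax + 1):
        S += term(n)
        # --- MLTAG part: build the g function for this n ---
        G = [0.0] * size
        seq = [term(n + j) for j in range(m)]   # a_n, ..., a_{n+m-1}
        for i in range(1, m):                   # forward differences
            for j in range(m - 1, i - 1, -1):
                seq[j] -= seq[j - 1]
        t = 1.0 / (n + 1)
        for k in range(1, size):
            G[k] = seq[k - 1] * (n + 1) ** k if k <= m else t * G[k - m]
        # --- WMALG part: the FS recursion ---
        if abs(G[1]) >= 1e-77:                  # prevent zero denominator
            FSA[n], FSI[n] = S / G[1], 1.0 / G[1]
            for i in range(2, size):
                FSG[i][n] = G[i] / G[1]
        else:
            FSA[n], FSI[n] = S, 1.0
            for i in range(2, size):
                FSG[i][n] = G[i]
        for k in range(n):
            mm = n - (k + 1)
            D = FSG[k + 2][mm + 1] - FSG[k + 2][mm]
            for i in range(k + 3, size):
                FSG[i][mm] = (FSG[i][mm + 1] - FSG[i][mm]) / D
            FSA[mm] = (FSA[mm + 1] - FSA[mm]) / D
            FSI[mm] = (FSI[mm + 1] - FSI[mm]) / D
        if abs(FSI[0]) >= 1e-77:                # prevent zero denominator
            approx = FSA[0] / FSI[0]
    return approx
```

For the alternating harmonic series Σ(-1)^n/(n+1) = ln 2, thirteen partial sums already reproduce the limit to high accuracy, while the raw partial sums are still off by several percent.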
The 22 tested integrals, with exact values, are:
(1) Bessel function order greater than 1
(2) products of Bessel functions-different argument different order
(3) products of Bessel functions - different argument different order
(4) alpha=1 (see)
(5) divergent oscillatory (see)
(8) oscillatory kernel (see)
(9) big order of Bessel function (see)
(10) very big order of Bessel function (see)
(11) oscillatory kernel - product of Bessel functions (see) - same argument
(12) oscillatory kernel - product of Bessel functions (see ) - same argument
(13) (see)
(14) big order of Bessel function (see )
(15) very big order of Bessel function (see )
(17) divergent oscillatory (see)
(18) very oscillatory (see)
The exact solution of this integral (29) is considered as the last iteration.
The exact solution of this integral (30) is considered as the last iteration.
(20) product of Bessel functions (see) - different argument different order
(21) product of Bessel functions (see) - different argument different order
(22) product of Bessel functions - different argument different order
In the case of a product of Bessel functions the algorithm cannot cope with a Bessel function of order as high as J20 (see),
so it has been changed to J14 in integral 33. The exact solution of this integral (33) is considered as the last iteration.
The computed results of the integrals, compared with the exact values, show excellent accuracy.
References
 W.F. Ford and A. Sidi. An algorithm for a generalization of the Richardson extrapolation process. SIAM J. Numer. Anal., 24:1212–1232, 1987.
 D. Levin and A. Sidi. Two new classes of nonlinear transformations for accelerating the convergence of infinite integrals and series. Appl. Math. Comp., 9:175–215, 1981. Originally appeared as a Tel Aviv University preprint in 1975.
 A. Sidi. Extrapolation methods for oscillatory infinite integrals. J. Inst. Maths. Applics., 26:1–20, 1980.
 A. Sidi. An algorithm for a special case of a generalization of the Richardson extrapolation process. Numer. Math., 38:299–307, 1982.
 A. Sidi. Extrapolation methods for divergent oscillatory infinite integrals that are defined in the sense of summability. J. Comp. Appl. Math., 17:105–114, 1987.
 A. Sidi. Computation of infinite integrals involving Bessel functions of arbitrary order by the D¯-transformation. J. Comp. Appl. Math., 78:125–130, 1997.
A. Sidi. A user-friendly extrapolation method for oscillatory infinite integrals. Math. Comp., 51:249–266, 1988.
A. Sidi. The numerical evaluation of very oscillatory infinite integrals by extrapolation. Math. Comp., 38:517–529, 1982.
A. Sidi. A user-friendly extrapolation method for computing infinite integrals of products of oscillatory functions. IMA J. Numer. Anal., 32:602–631, 2012.
T. Hasegawa and A. Sidi. An automatic integration procedure for infinite range integrals involving oscillatory kernels. Numer. Algo., 13:1–19, 1996.
S.K. Lucas and H.A. Stone. Evaluating infinite integrals involving Bessel functions of arbitrary order. J. Comput. Appl. Math., 64:217–231, 1995.
S.K. Lucas. Evaluating infinite integrals involving products of Bessel functions of arbitrary order. J. Comput. Appl. Math., 64:269–282, 1995.