Convergence acceleration by the Ford-Sidi W(m) algorithm

Computation of infinite integrals involving non-oscillating or oscillating functions (trigonometric or Bessel) multiplied by a Bessel function of arbitrary order

Although we deal with infinite integrals, we use the transformation for infinite series [2], because these integrals are evaluated by accelerating the convergence of the sequence of partial sums. Using the transformation for infinite integrals [2] also produces excellent results. The integrals are of the form ∫ c(x) w(x) dx over a semi-infinite interval, where c(x) is a non-oscillating function or an oscillating function (trigonometric or Bessel) and w(x) is a Bessel function of arbitrary order.

As in the Sidi modified W algorithm (mW algorithm, see [7]), the only asymptotic information required is the eventual distance between consecutive zeros of the Bessel function. The partial sums are computed by integrating between consecutive zeros of the Bessel function (see [6]) with a low-order Gaussian quadrature (see [7]), although other numerical quadrature methods can be employed.
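
As an illustration of this step, the short Python sketch below (assuming NumPy and SciPy are available; the helper name partial_sums and the test integral are illustrative and not taken from the original text, and jn_zeros requires an integer order) builds the partial sums of an integral of the form ∫_0^∞ c(x) J_ν(x) dx by integrating between consecutive zeros of J_ν with a fixed-order Gauss-Legendre rule:

import numpy as np
from scipy.special import jv, jn_zeros

def partial_sums(c, nu, n_sums, quad_order=12):
    # Partial sums of integral_0^inf c(x)*J_nu(x) dx, obtained by integrating
    # between consecutive zeros of J_nu with fixed-order Gauss-Legendre quadrature.
    nodes, weights = np.polynomial.legendre.leggauss(quad_order)
    zeros = np.concatenate(([0.0], jn_zeros(nu, n_sums)))   # 0, j_{nu,1}, ..., j_{nu,n_sums}
    total, sums = 0.0, []
    for a, b in zip(zeros[:-1], zeros[1:]):
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)            # map nodes from [-1, 1] to [a, b]
        total += 0.5 * (b - a) * np.sum(weights * c(x) * jv(nu, x))
        sums.append(total)
    return np.array(sums)

# Simple check against a known value: integral_0^inf exp(-x)*J_0(x) dx = 1/sqrt(2)
S = partial_sums(lambda x: np.exp(-x), 0, 20)
print(S[-1], 1.0 / np.sqrt(2))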

The Ford-Sidi algorithm routine is applied to these partial sums for convergence acceleration.

The transformation is implemented very efficiently by the Ford-Sidi algorithm [1]. The special case m = 1 is the Sidi W algorithm [4].

The Ford-Sidi W(m) algorithm of [1] is cheap mainly because of a set of sophisticated recursions that make it possible to compute the g function, and the rest of the algorithm, only up to the value m rather than up to the full length LMAX+1 required by direct application of the FS algorithm.

In any case, the development of the Ford-Sidi W(m) algorithm depends heavily on the FS algorithm.

We propose here a Ford-Sidi W(m) algorithm computed by direct application of the FS algorithm, even though it costs more than the Ford-Sidi W(m) algorithm of [1], because:

1. It is simple and short, as it does not include those sophisticated recursions.

2. The g function has no particular structure for indices up to and including m, so the FS algorithm is applied directly. For indices of the g function above m and up to LMAX+1, the structured part of the g function is also handled by applying the FS algorithm directly, using g(k) = t*g(k-m) (see (3.10), p. 1219 in [1]).

3. Most of the array dimensions are reduced to 1-D.

In computing the g function, the simplest sampling, r(l) = l, is chosen. It makes the code efficient by allowing previous values of the partial sums to be reused. Previous values of the g function could also be saved, but that would require a two-dimensional g array instead of the one-dimensional array used in this algorithm. This sampling is also very useful for alternating series.
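
Written out for this sampling, the g function that the code below constructs has the following structure (a sketch inferred from the pseudocode, with $a_n$ denoting the $n$-th sequence element and $\Delta$ the forward-difference operator):

$$g_k(n) = (n+1)^k\,\Delta^{k-1}a_n,\quad 1\le k\le m,\qquad g_k(n)=\frac{g_{k-m}(n)}{n+1},\quad k>m.$$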

The proposed Ford-Sidi algorithm produces accurate results even with the simplest sampling r(l) = l and the small value m = 2.

This algorithm copes successfully with:

  1. Integrals of a non-oscillating function multiplied by a Bessel function of arbitrary order.
  2. Integrals of an oscillating function, such as a Bessel function, multiplied by a Bessel function of arbitrary order.
  3. Integrals involving a Bessel function of very high order (see also [6] and the transformation of Sidi [3]).
  4. Integrals such as (27) and (28), whose integrands are not integrable at infinity and need to be defined in the sense of Abel sums (see [5]).
  5. Integrals involving products of Bessel functions of different arguments and different orders.

22 integrals are tested successfully.

The recursive scheme of the proposed Ford-Sidi W(m) algorithm is that of the FS algorithm [1].
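
For orientation, the recursions of the FS algorithm can be written roughly as follows (a sketch in a $\psi$ notation: $a$ denotes the sequence of partial sums, $I$ the sequence that is identically 1, and $g_k$ the shape functions):

$$\psi_0^{(n)}(u)=\frac{u(n)}{g_1(n)},\qquad \psi_k^{(n)}(u)=\frac{\psi_{k-1}^{(n+1)}(u)-\psi_{k-1}^{(n)}(u)}{\psi_{k-1}^{(n+1)}(g_{k+1})-\psi_{k-1}^{(n)}(g_{k+1})},\quad k\ge 1,$$

and the approximation to the limit is

$$A_k^{(n)}=\frac{\psi_k^{(n)}(a)}{\psi_k^{(n)}(I)}.$$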

FSA, FSI, and FSG below correspond to PSIA, PSII, and PSIG in the FS algorithm, respectively (see [1]); they hold $\psi$ applied to the partial sums, to the constant sequence 1, and to the g function.

Rewriting with mm = n-(k+1) and formulating the arrays in 1-D gives the scheme implemented below.

With S = the partial sum and Nmax = the number of partial sums, the simple and short semi-formal code for the Ford-Sidi W(m) algorithm is:


// MAIN PROGRAM
S = 0
FOR N=0 TO Nmax DO
 // FUNCTION MLTAG computes the g function and the partial sums
 CALL MLTAG
 // FUNCTION WMALG computes the Ford-Sidi W(m) algorithm for convergence acceleration
 CALL WMALG
ENDFOR
END PROGRAM

//Routine for computing Ford-Sidi W(m) algorithm
FUNCTION WMALG
// prevent zero denominator - this guard does not appear in [1]
IF (ABS(G(1)) >= 1D-77) THEN
 FSA(N) = S / G(1)
 FSI(N) = 1 / G(1)
 FOR I=2 TO Nmax + 1 DO
  FSG(I,N) = G(I) / G(1)
 ENDFOR 
 ELSE
 FSA(N) = S 
 FSI(N) = 1 
 FOR I=2 TO Nmax + 1 DO
  FSG(I,N) = G(I) 
 ENDFOR 
 ENDIF 
 FOR K=0 TO N-1 DO
  MM=N-(K+1)
  D = FSG(K+2,MM+1) - FSG(K+2,MM)
  FOR I=K+3 TO Nmax+1 DO
   FSG(I,MM) = (FSG(I,MM+1) - FSG(I,MM)) / D
  ENDFOR 
  FSA(MM) = (FSA(MM+1) - FSA(MM)) / D
  FSI(MM) = (FSI(MM+1) - FSI(MM)) / D 
 ENDFOR 
// prevent zero denominator
IF (ABS(FSI(0)) >= 1D-77) THEN
APPROX = FSA(0) / FSI(0)
ELSE
APPROX = 1D-77
ENDIF

END FUNCTION

With AN = a sequence element and S = the partial sum, the simple and short semi-formal code for computing the partial sums and the g function is:

//routine for computing partial sums and g function
FUNCTION MLTAG
 // making the partial sum S
 // The simplicity of this part of the code is enabled by the choice of the simplest sampling
 // R(L)=L (see B.5, p. 1228 in [1]). It also makes the code efficient by saving previous values
 // of the partial sums (S).
 AN = ...   // sequence element (problem dependent, e.g. the integral over the current interval)
 S = S + AN
 
 // making the g function up to M
 FOR K=1 TO M DO
  // computing the sequence elements AN needed for the g function.
  // Efficiency can also be achieved here by reusing sequence elements AN
  // that were already computed for the partial sums S.
  AN = ...
  G(K) = AN
 ENDFOR

 // forward difference: after this loop G(K) holds the (K-1)-th forward difference
 // of the sequence elements
 FOR I=2 TO M DO
  FOR J=M TO I STEP=-1 DO
   G(J)=G(J) - G(J-1)
  ENDFOR
 ENDFOR

 T = 1/(N+1)

 // scaling the g function up to M and extending it from M+1 to Nmax+1 (see (3.10), p. 1219 in [1])
 FOR K=1 TO Nmax+1 DO
  IF K <= M THEN
   G(K) = G(K) * ((N+1) ^ K)
  ELSE
   G(K) = T * G(K-M)
  ENDIF
 ENDFOR

END FUNCTION
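
To make the scheme above concrete, here is a self-contained Python sketch that follows the two routines directly (the function name ford_sidi_wm, the 0-based indexing, and the alternating-series test case are illustrative choices, not part of the original text; the guard constant mirrors the 1D-77 used above). It accelerates the partial sums of a sequence a_0, a_1, ... with the g function built exactly as in MLTAG:

import math
import numpy as np

def ford_sidi_wm(a, m, nmax):
    # Direct Ford-Sidi W(m) extrapolation of the partial sums of a[0], a[1], ...
    # `a` must supply at least nmax + m elements; returns the successive
    # approximations APPROX for n = 0, ..., nmax (illustrative sketch).
    tiny = 1e-77
    kmax = nmax + 1                            # g functions g_1 .. g_{Nmax+1}
    FSA = np.zeros(nmax + 1)                   # psi applied to the partial sums
    FSI = np.zeros(nmax + 1)                   # psi applied to the sequence 1, 1, 1, ...
    FSG = np.zeros((kmax + 1, nmax + 1))       # psi applied to g_2 .. g_{Nmax+1}
    approx = []
    S = 0.0
    for n in range(nmax + 1):
        # ---- MLTAG: partial sum and g function at index n ----
        S += a[n]
        d = [float(a[n + k]) for k in range(m)]            # a_n, ..., a_{n+m-1}
        for i in range(1, m):                              # forward differences
            for j in range(m - 1, i - 1, -1):
                d[j] -= d[j - 1]                           # d[k] = Delta^k a_n
        g = np.zeros(kmax + 1)                             # 1-based: g[1..kmax]
        for k in range(1, kmax + 1):
            g[k] = d[k - 1] * (n + 1) ** k if k <= m else g[k - m] / (n + 1)
        # ---- WMALG: start column n, then update the table ----
        if abs(g[1]) >= tiny:
            FSA[n], FSI[n] = S / g[1], 1.0 / g[1]
            FSG[2:, n] = g[2:] / g[1]
        else:
            FSA[n], FSI[n] = S, 1.0
            FSG[2:, n] = g[2:]
        for k in range(n):                                 # K = 0 .. n-1
            mm = n - (k + 1)
            D = FSG[k + 2, mm + 1] - FSG[k + 2, mm]
            FSG[k + 3:, mm] = (FSG[k + 3:, mm + 1] - FSG[k + 3:, mm]) / D
            FSA[mm] = (FSA[mm + 1] - FSA[mm]) / D
            FSI[mm] = (FSI[mm + 1] - FSI[mm]) / D
        approx.append(FSA[0] / FSI[0] if abs(FSI[0]) >= tiny else tiny)
    return approx

# Example: accelerate the alternating series sum_{l>=0} (-1)^l/(l+1) = ln 2 with m = 2.
nmax, m = 12, 2
a = [(-1) ** l / (l + 1) for l in range(nmax + m)]
for n, val in enumerate(ford_sidi_wm(a, m, nmax)):
    print(n, val, abs(val - math.log(2)))

The extrapolated values should approach ln 2 much faster than the raw partial sums; the same mechanism is applied above to the partial sums obtained by integrating between consecutive Bessel zeros.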

The 22 tested integrals, with exact values, are:

(1) Bessel function of order greater than 1

(2) product of Bessel functions - different arguments, different orders

(3) product of Bessel functions - different arguments, different orders

(4) alpha = 1 (see [10])

(5) divergent oscillatory (see [5][7])

(6) (see [7])

(7) (see [2][11])

(8) oscillatory kernel (see [10])

(9) big order of Bessel function (see [11])

(10) very big order of Bessel function (see [11])

(11) oscillatory kernel - product of Bessel functions with the same argument (see [9])

(12) oscillatory kernel - product of Bessel functions with the same argument (see [2][3])

(13) (see [6])

(14) big order of Bessel function (see [6])

(15) very big order of Bessel function (see [6])

(16) (see [2][3])

(17) divergent oscillatory (see [5][7])

(18) very oscillatory (see [8])

The exact solution of this integral (29) is taken as the last iteration.

(19) (see [3])

The exact solution of this integral (30) is taken as the last iteration.

(20) product of Bessel functions - different arguments, different orders (see [12])

(21) product of Bessel functions - different arguments, different orders (see [12])

(22) product of Bessel functions - different arguments, different orders

In the case of a product of Bessel functions the algorithm cannot cope with a Bessel function of order as high as J20 (see [12]), so the order has been changed to J14 in integral (33). The exact solution of this integral (33) is taken as the last iteration.

The computed results of the integrals, compared with the exact values, show excellent accuracy.

References

[1] W.F. Ford and A. Sidi. An algorithm for a generalization of the Richardson extrapolation process. SIAM J. Numer. Anal., 24:1212–1232, 1987.

[2] D. Levin and A. Sidi. Two new classes of nonlinear transformations for accelerating the convergence of infinite integrals and series. Appl. Math. Comp., 9:175–215, 1981. Originally appeared as a Tel Aviv University preprint in 1975.

[3] A. Sidi. Extrapolation methods for oscillatory infinite integrals. J. Inst. Maths. Applics., 26:1–20, 1980.

[4] A. Sidi. An algorithm for a special case of a generalization of the Richardson extrapolation process. Numer. Math., 38:299–307, 1982.

[5] A. Sidi. Extrapolation methods for divergent oscillatory infinite integrals that are defined in the sense of summability. J. Comp. Appl. Math., 17:105–114, 1987.

[6] A. Sidi. Computation of infinite integrals involving Bessel functions of arbitrary order by the D̄-transformation. J. Comp. Appl. Math., 78:125–130, 1997.

[7] A. Sidi. A user-friendly extrapolation method for oscillatory infinite integrals. Math. Comp., 51:249–266, 1988.

[8] A. Sidi. The numerical evaluation of very oscillatory infinite integrals by extrapolation. Math. Comp., 38:517–529, 1982.

[9] A. Sidi. A user-friendly extrapolation method for computing infinite integrals of products of oscillatory functions. IMA J. Numer. Anal., 32:602–631, 2012.

[10] T. Hasegawa and A. Sidi. An automatic integration procedure for infinite range integrals involving oscillatory kernels. Numer. Algor., 13:1–19, 1996.

[11] S.K. Lucas and H.A. Stone. Evaluating infinite integrals involving Bessel functions of arbitrary order. J. Comp. Appl. Math., 64:217–231, 1995.

[12] S.K. Lucas. Evaluating infinite integrals involving products of Bessel functions of arbitrary order. J. Comp. Appl. Math., 64:269–282, 1995.