Sums, products, integrals and similar functions

Although the gp calculator is programmable, it is useful to have a number of preprogrammed loops, including sums, products, and a certain number of recursions. Also, a number of functions from numerical analysis like numerical integration and summation of series will be described here.

One of the parameters in these loops must be the control variable, hence a simple variable name. In the descriptions, the letter X will always denote any simple variable name, and represents the formal parameter used in the function. The expression to be summed, integrated, etc. is any legal PARI expression, including of course expressions using loops.

Library mode. Since it is easier to program such loops directly in library mode, these functions are mainly useful for GP programming. On the other hand, numerical routines code a function (to be integrated, summed, etc.) with two parameters named

    GEN (*eval)(void*,GEN)
    void *E;  \\ context: eval(E, x) must evaluate your function at x.

see the Libpari manual for details.

Numerical integration. The "double exponential" (DE) univariate integration method is implemented in intnum and its variants. Romberg integration is still available under the name intnumromb, but is superseded. It is possible to compute numerically integrals to thousands of decimal places in reasonable time, as long as the integrand is regular. It is also reasonable to compute numerically integrals in several variables, although more than two becomes lengthy. The integration domain may be noncompact, and the integrand may have reasonable singularities at endpoints. To use intnum, you must split the integral into a sum of subintegrals where the function has no singularities except at the endpoints. Polynomials in logarithms are not considered singular, and neglecting these logs, singularities are assumed to be algebraic (asymptotic to C(x-a)^α for some α > -1 when x is close to a), or to correspond to simple discontinuities of some (higher) derivative of the function. For instance, the point 0 is a singularity of abs(x).
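
For instance, here is a minimal sketch of the splitting rule: |x-1|^(-1/2) has an algebraic singularity at x = 1 (exponent α = -1/2), so we split [0,2] there and declare the exponent at the singular endpoint; each piece integrates to 2 exactly:

  f(x) = 1/sqrt(abs(x - 1));  \\ algebraic singularity at x = 1
  intnum(x = 0, [1, -1/2], f(x)) + intnum(x = [1, -1/2], 2, f(x)) - 4  \\ tiny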

Assume the bit precision is b, so we try to achieve an absolute error less than 2^(-b). DE methods use O(b log b) function evaluations and should work for both compact and noncompact intervals as long as the integrand is the restriction of an analytic function to a suitable domain and its behaviour at infinity is correctly described. When integrating regular functions on a compact interval, away from poles of the integrand, Gauss-Legendre integration (intnumgauss) is the best choice, using O(b) function evaluations. To integrate oscillating functions on a noncompact interval, the slower but robust intnumosc is available, performing Gaussian integration on intervals of length the half-period (or quasi-period) and using Sidi's mW algorithm to extrapolate their sum. If poles are close to the integration interval, Gaussian integration may run into difficulties and it is then advisable to split the integral using intnum to get away from the poles, then intnumosc for the remainder.

For maximal efficiency, abscissas and integration weights can be precomputed, respectively using intnuminit (O(b^2)) or intnumgaussinit (O(b^3)).

Numerical summation.

Many numerical summation methods are available to approximate ∑_{n ≥ n0} f(n) at accuracy 2^(-b): the overall best choice should be sumnum, which uses Euler-MacLaurin (and O(b log b) function evaluations); initialization time (sumnuminit) is O(b^3). Also available are the following (a short comparative sketch appears after the list):

* Abel-Plana summation (sumnumap), also O(b log b) function evaluations and O(b^3) initialization (sumnumapinit), with a larger implied constant;

* Lagrange summation (sumnumlagrange) uses O(b) evaluations but is more brittle, and the asymptotic behaviour of f must be correctly indicated. Initialization (sumnumlagrangeinit) can vary from O(b^2) to O(b^3) depending on the asymptotic behaviour.

* Sidi summation (sumnumsidi) uses O(b) evaluations and should be more robust than Lagrange summation. No initialization is needed.

* Monien summation (sumnummonien) uses O(b/log b) evaluations but is even more brittle than Lagrange and also has a O(b^3) initialization (sumnummonieninit).

* To sum rational functions, use sumnumrat.

All the functions so far require f to be the restriction to the integers of a regular function on the reals, and even on the complex numbers for Monien summation. The following algorithms allow functions defined only on the integers, under assumptions that are hard to verify. They are best used heuristically, since they are in fact often valid when those assumptions do not hold, and for instance often yield a result for divergent series (e.g., Borel resummation).

* To sum alternating series, use sumalt, which requires O(b) function evaluations.

* To sum functions of a fixed sign, sumpos uses van Wijngaarden's trick to reduce to an alternating series, for a cost of O(b log b) function evaluations; but beware that f must be evaluated at large integers, of the order of 2^(b/α), if we assume that f(n) = O(1 / n^(α+1)) for some α > 0.
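
As a quick comparative sketch (exact values: ∑ n^(-2) = π^2/6 and ∑ (-1)^n n^(-2) = -π^2/12; all four differences should be tiny at the chosen accuracy):

  \p38
  S = Pi^2/6;
  sumnum(n = 1, 1/n^2) - S         \\ Euler-MacLaurin
  sumnumsidi(n = 1, 1/n^2) - S     \\ Sidi, no initialization needed
  sumnumrat(1/n^2, 1) - S          \\ rational summand
  sumalt(n = 1, (-1)^n/n^2) + S/2  \\ alternating series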


asympnum(expr, {alpha = 1})

Asymptotic expansion of expr, corresponding to a sequence u(n), assuming it has the shape u(n) ~ ∑_{i ≥ 0} a_i n^(-iα) with rational coefficients a_i of reasonable height; the algorithm is heuristic and performs repeated calls to limitnum, with alpha as in limitnum. As in limitnum, u(n) may be given either by a closure n -> u(n) or as a closure N -> [u(1),...,u(N)], the latter being often more efficient.

  ? f(n) = n! / (n^n*exp(-n)*sqrt(n));
  ? asympnum(f)
  %2 = []   \\ failure !
  ? localprec(57); l = limitnum(f)
  %3 = 2.5066282746310005024157652848110452530
  ? asympnum(n->f(n)/l) \\ normalize
  %4 =  [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880,
         5246819/75246796800]

and we indeed get a few terms of Stirling's expansion. Note that it definitely helps to normalize with a limit computed to higher accuracy (as a rule of thumb, multiply the bit accuracy by 1.612):

  ? l = limitnum(f)
  ? asympnum(n->f(n) / l) \\ failure again !!!
  %6 = []

We treat again the example of the Motzkin numbers M_n given in limitnum:

  \\ [M_k, M_{2k}, ..., M_{kN}] / (3^n / n^(3/2))
  ? vM(N, k = 1) =
  { my(q = k*N, V);
     if (q == 1, return ([1/3]));
     V = vector(q); V[1] = V[2] = 1;
     for(n = 2, q - 1,
       V[n+1] = ((2*n + 1)*V[n] + 3*(n - 1)*V[n-1]) / (n + 2));
     f = (n -> 3^n / n^(3/2));
     return (vector(N, n, V[n*k] / f(n*k)));
  }
  ? localprec(100); l = limitnum(n->vM(n,10)); \\ 3/sqrt(12*Pi)
  ? \p38
  ? asympnum(n->vM(n,10)/l)
  %2 = [1, -3/32, 101/10240, -1617/1638400, 505659/5242880000, ...]

If alpha is not a rational number, loss of accuracy is expected, so it should be precomputed to double accuracy, say:

  ? \p38
  ? asympnum(n->log(1+1/n^Pi),Pi)
  %1 = [0, 1, -1/2, 1/3, -1/4, 1/5]
  ? localprec(76); a = Pi;
  ? asympnum(n->log(1+1/n^Pi), a) \\ more terms
  %3 = [0, 1, -1/2, 1/3, -1/4, 1/5, -1/6, 1/7, -1/8, 1/9, -1/10, 1/11, -1/12]
  ? #asympnum(n->log(1+1/sqrt(n)),1/2) \\ many more terms
  %4 = 49

The expression is evaluated for n = 1, 2,..., N for an N = O(B) if the current bit accuracy is B. If it is not defined for one of these values, translate or rescale accordingly:

  ? asympnum(n->log(1-1/n))  \\ can't evaluate at n = 1 !
   ***   at top-level: asympnum(n->log(1-1/n))
   ***                 ^----------------------
   ***   in function asympnum: log(1-1/n)
   ***                         ^----------
   *** log: domain error in log: argument = 0
  ? asympnum(n->-log(1-1/(2*n)))
  %5 = [0, 1/2, 1/8, 1/24, ...]
  ? asympnum(n->-log(1-1/(n+1)))
  %6 = [0, 1, -1/2, 1/3, -1/4, ...]

The library syntax is asympnum(void *E, GEN (*u)(void *,GEN,long), GEN alpha, long prec), where u(E, n, prec) must return either u(n) or [u(1),...,u(n)] in precision prec. Also available is GEN asympnum0(GEN u, GEN alpha, long prec), where u is a closure as above or a vector of sufficient length.


asympnumraw(expr, N, {alpha = 1})

Return the first N+1 terms of the asymptotic expansion of expr, corresponding to a sequence u(n), as floating point numbers. Assume that the expansion has the shape u(n) ~ ∑_{i ≥ 0} a_i n^(-iα) and return an approximation of [a_0, a_1,..., a_N]. The algorithm is heuristic and performs repeated calls to limitnum, with alpha as in limitnum. As in limitnum, u(n) may be given either by a closure n -> u(n) or as a closure N -> [u(1),...,u(N)], the latter being often more efficient. This function is related to, but more flexible than, asympnum, which requires rational asymptotic expansions.

  ? f(n) = n! / (n^n*exp(-n)*sqrt(n));
  ? asympnum(f)
  %2 = []   \\ failure !
  ? v = asympnumraw(f, 10);
  ? v[1] - sqrt(2*Pi)
  %4 = 0.E-37
  ? bestappr(v / v[1], 2^60)
  %5 =  [1, 1/12, 1/288, -139/51840, -571/2488320, 163879/209018880,...]

and we indeed get a few terms of Stirling's expansion (the first 9 terms are correct). If u(n) has an asymptotic expansion in n^(-α) with α not an integer, the default alpha = 1 is inaccurate:

  ? f(n) = (1+1/n^(7/2))^(n^(7/2));
  ? v1 = asympnumraw(f,10);
  ? v1[1] - exp(1)
  %8 = 4.62... E-12
  ? v2 = asympnumraw(f,10,7/2);
  ? v2[1] - exp(1)
  %7 = 0.E-37

As in asympnum, if alpha is not a rational number, loss of accuracy is expected, so it should be precomputed to double accuracy, say.

The library syntax is asympnumraw(void *E, GEN (*u)(void *,GEN,long), long N, GEN alpha, long prec), where u(E, n, prec) must return either u(n) or [u(1),...,u(n)] in precision prec. Also available is GEN asympnumraw0(GEN u, GEN alpha, long prec) where u is either a closure as above or a vector of sufficient length.


contfraceval(CF, t, {lim = -1})

Given a continued fraction CF output by contfracinit, evaluate the first lim terms of the continued fraction at t (all terms if lim is negative or omitted; if positive, lim must be less than or equal to the length of CF).

The library syntax is GEN contfraceval(GEN CF, GEN t, long lim).


contfracinit(M, {lim = -1})

Given M representing the power series S = ∑_{n ≥ 0} M[n+1] z^n, transform it into a continued fraction in Euler form, using the quotient-difference algorithm; restrict to n ≤ lim if the latter is nonnegative. M can be a vector, a power series or a polynomial; if the limiting parameter lim is present, a rational function is also allowed (and converted to a power series of that accuracy).

The result is a 2-component vector [A,B] such that S = M[1] / (1 + A[1]z + B[1]z^2 / (1 + A[2]z + B[2]z^2 / (1 + ... 1/(1 + A[lim/2]z)))). Does not work if any coefficient of M vanishes, nor for series for which certain partial denominators vanish.
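
For instance (a small sketch): all coefficients of -log(1-z)/z = ∑_{n ≥ 0} z^n/(n+1) are nonzero, so the quotient-difference algorithm applies and the truncated continued fraction converges quickly:

  M = vector(12, n, 1/n);   \\ S = sum_{n >= 0} z^n / (n+1) = -log(1-z)/z
  CF = contfracinit(M);
  contfraceval(CF, 0.5) - 2*log(2)  \\ small: truncation error of the CF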

The library syntax is GEN contfracinit(GEN M, long lim). Also available is GEN quodif(GEN M, long n) which returns the standard continued fraction, as a vector C such that S = C[1] / (1 + C[2]z / (1 + C[3]z / (1 + ... C[lim]z))).


derivnum(X = a, expr, {ind = 1})

Numerical derivation of expr with respect to X at X = a. The order of derivation is 1 by default.

  ? derivnum(x=0, sin(exp(x))) - cos(1)
  %1 = 0.E-38

A clumsier approach, which would not work in library mode, is

  ? f(x) = sin(exp(x))
  ? f'(0) - cos(1)
  %2 = 0.E-38

* When a is a numerical type (integer, rational number, real number or t_COMPLEX of such), performs numerical derivation.

* When a is a (polynomial, rational function or) power series, compute derivnum(t = a,f) as f'(a) = (f(a))'/a':

  ? derivnum(x = 1 + t, sqrt(x))
  %1 = 1/2 - 1/4*t + 3/16*t^2 - 5/32*t^3 + ... + O(t^16)
  ? derivnum(x = 1/(1 + t), sqrt(x))
  %2 = 1/2 + 1/4*t - 1/16*t^2 + 1/32*t^3 + ... + O(t^16)
  ? derivnum(x = 1 + t + O(t^17), sqrt(x))
  %3 = 1/2 - 1/4*t + 3/16*t^2 - 5/32*t^3 + ... + O(t^16)

If the parameter ind is present, it can be

* a nonnegative integer m, in which case we return the m-th derivative f^(m)(x);

* or a vector of orders, in which case we return the vector of derivatives.

  ? derivnum(x = 0, exp(sin(x)), 16) \\ 16-th derivative
  %1 = -52635599.000000000000000000000000000000
  
  ? round( derivnum(x = 0, exp(sin(x)), [0..13]) )  \\ 0-13-th derivatives
  %2 = [1, 1, 1, 0, -3, -8, -3, 56, 217, 64, -2951, -12672, 5973, 309376]

The library syntax is derivfunk(void *E, GEN (*eval)(void*,GEN), GEN a, GEN ind, long prec). Also available is GEN derivfun(void *E, GEN (*eval)(void *, GEN), GEN a, long prec). If a is a numerical type (t_INT, t_FRAC, t_REAL or t_COMPLEX of such), we have GEN derivnumk(void *E, GEN (*eval)(void *, GEN, long), GEN a, GEN ind, long prec) and GEN derivnum(void *E, GEN (*eval)(void *, GEN, long prec), GEN a, long prec).


intcirc(X = a, R, expr, {tab})

Numerical integration of (2iπ)^(-1) expr with respect to X on the circle |X-a| = R. In other words, when expr is a meromorphic function, sum of the residues in the corresponding disk; tab is as in intnum, except that if computed with intnuminit it should be with the endpoints [-1, 1].

  ? \p105
  ? intcirc(s=1, 0.5, zeta(s)) - 1
  time = 496 ms.
  %1 = 1.2883911040127271720 E-101 + 0.E-118*I

The library syntax is intcirc(void *E, GEN (*eval)(void*,GEN), GEN a,GEN R,GEN tab, long prec).


intfuncinit(t = a, b, f, {m = 0})

Initialize tables for use with integral transforms (such as Fourier, Laplace or Mellin transforms) in order to compute ∫_a^b f(t) k(t,z) dt for some kernel k(t,z). The endpoints a and b are coded as in intnum, f is the function to which the integral transform is to be applied and the nonnegative integer m is as in intnum: multiply the number of sampling points roughly by 2^m, hopefully increasing the accuracy. This function is particularly useful when the function f is hard to compute, such as a gamma product.

Limitation. The endpoints a and b must be at infinity, with the same asymptotic behavior. Oscillating types are not supported. This is easily overcome by integrating vectors of functions, see example below.

Examples.

* numerical Fourier transform F(z) = ∫_{-oo}^{+oo} f(t) e^(-2iπzt) dt. First the easy case, assume that f decreases exponentially:

     f(t) = exp(-t^2);
     A = [-oo,1];
     B = [+oo,1];
     \p200
     T = intfuncinit(t = A,B , f(t));
     F(z) =
     { my(a = -2*I*Pi*z);
       intnum(t = A,B, exp(a*t), T);
     }
     ? F(1) - sqrt(Pi)*exp(-Pi^2)
     %1 = -1.3... E-212

Now the harder case, f decreases slowly: we must specify the oscillating behavior. Thus, we cannot usefully precompute anything, since everything depends on the point we evaluate at:

     f(t) = 1 / (1+ abs(t));
     \p200
     \\ Fourier cosine transform
     FC(z) =
     { my(a = 2*Pi*z);
       intnum(t = [-oo, a*I], [+oo, a*I], cos(a*t)*f(t));
     }
     FC(1)

* Fourier coefficients: we must integrate over a period, but intfuncinit does not support finite endpoints. The solution is to integrate a vector of functions !

  FourierSin(f, T, k) =  \\ first k sine Fourier coeffs
  {
    my (w = 2*Pi/T);
    my (v = vector(k+1));
    intnum(t = -T/2, T/2,
       my (z = exp(I*w*t));
       v[1] = z;
       for (j = 2, k, v[j] = v[j-1]*z);
       f(t) * imag(v)) * 2/T;
  }
  FourierSin(t->sin(2*t), 2*Pi, 10)

The same technique can be used instead of intfuncinit to integrate f(t) k(t,z) whenever the list of z-values is known beforehand.

Note that the above code includes an unrelated optimization: the sin(j w t) are computed as imaginary parts of exp(i j w t) and the latter by successive multiplications.

* numerical Mellin inversion F(z) = (2iπ)^(-1) ∫_{c-i oo}^{c+i oo} f(s) z^(-s) ds = (2π)^(-1) ∫_{-oo}^{+oo} f(c + it) e^(-log z (c + it)) dt. We take c = 2 in the program below:

     f(s) = gamma(s)^3;  \\ f(c+it) decreases as exp(-3Pi|t|/2)
     c = 2; \\ arbitrary
     A = [-oo,3*Pi/2];
     B = [+oo,3*Pi/2];
     T = intfuncinit(t=A,B, f(c + I*t));
     F(z) =
     { my (a = -log(z));
       intnum(t=A,B, exp(a*I*t), T)*exp(a*c) / (2*Pi);
     }

The library syntax is intfuncinit(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b,long m, long prec).


intnum(X = a, b, expr, {tab})

Numerical integration of expr on ]a,b[ with respect to X, using the double-exponential method, and thus O(D log D) evaluations of the integrand in precision D. The integrand may take values in a vector space over the real numbers; in particular, it can be complex-valued or vector-valued. But it is assumed that the function is regular on ]a,b[. If the endpoints a and b are finite and the function is regular there, the situation is simple:

  ? intnum(x = 0,1, x^2)
  %1 = 0.3333333333333333333333333333
  ? intnum(x = 0,Pi/2, [cos(x), sin(x)])
  %2 = [1.000000000000000000000000000, 1.000000000000000000000000000]

An endpoint equal to ± oo is coded as +oo or -oo, as expected:

  ? intnum(x = 1,+oo, 1/x^2)
  %3 = 1.000000000000000000000000000

In basic usage, it is assumed that the function does not decrease exponentially fast at infinity:

  ? intnum(x=0,+oo, exp(-x))
    ***   at top-level: intnum(x=0,+oo,exp(-
    ***                 ^--------------------
    *** exp: overflow in expo().

We shall see in a moment how to avoid that last problem, after describing the last optional argument tab.

The tab argument. The routine uses weights w_i, which are mostly independent of the function being integrated, evaluated at many sampling points x_i and approximates the integral by ∑ w_i f(x_i). If tab is

* a nonnegative integer m, we multiply the number of sampling points by 2^m, hopefully increasing accuracy. Note that the running time increases roughly by a factor 2^m. One may try consecutive values of m until they give the same value up to an accepted error.

* a set of integration tables containing precomputed x_i and w_i as output by intnuminit. This is useful if several integrations of the same type are performed (on the same kind of interval and functions, for a given accuracy): we skip a precomputation of O(D log D) elementary functions in accuracy D, whose running time has the same order of magnitude as the evaluation of the integrand. This is in particular useful for multivariate integrals.
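
For instance (a sketch), tables computed once for a given type of interval may be reused for all integrals of that type:

  T = intnuminit(0, [oo, 1]);             \\ exponential decrease at +oo
  intnum(x = 0, [oo, 1], x^2*exp(-x), T)  \\ Gamma(3) = 2
  intnum(x = 0, [oo, 1], x^4*exp(-x), T)  \\ Gamma(5) = 24, same tables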

Specifying the behavior at endpoints. This is done as follows. An endpoint a is either given as such (a scalar, real or complex, oo or -oo for ± oo ), or as a two component vector [a,α], to indicate the behavior of the integrand in a neighborhood of a.

If a is finite, the code [a,α] means the function has a singularity of the form (x-a)^α, up to logarithms. (If α ≥ 0, we only assume the function is regular, which is the default assumption.) If a wrong singularity exponent is used, the result will lose decimals:

  ? c = -9/10;
  ? intnum(x=0, 1, x^c)         \\  assume x^(-9/10) is regular at 0
  %1 = 9.9999839078827082322596783301939063944
  ? intnum(x=[0,c], 1, x^c)  \\  no, it's not
  %2 = 10.000000000000000000000000000000000000
  ? intnum(x=[0,c/2], 1, x^c) \\  using a wrong exponent is bad
  %3 = 9.9999999997122749095442279375719919769

If a is ± oo , which is coded as +oo or -oo, the situation is more complicated, and [±oo,α] means:

* α = 0 (or no α at all, i.e. simply ±oo) assumes that the integrand tends to zero moderately quickly, at least as O(x^(-2)) but not exponentially fast.

* α > 0 assumes that the function tends to zero exponentially fast approximately as exp(-α|x|). This includes oscillating but quickly decreasing functions such as exp(-x)sin(x).

  ? intnum(x=0, +oo, exp(-2*x))
    ***   at top-level: intnum(x=0,+oo,exp(-
    ***                 ^--------------------
    *** exp: exponent (expo) overflow
  ? intnum(x=0, [+oo, 2], exp(-2*x))  \\  OK!
  %1 = 0.50000000000000000000000000000000000000
  ? intnum(x=0, [+oo, 3], exp(-2*x))  \\  imprecise exponent, still OK !
  %2 = 0.50000000000000000000000000000000000000
  ? intnum(x=0, [+oo, 10], exp(-2*x)) \\  wrong exponent  ==>  disaster
  %3 = 0.49999999999952372962457451698256707393

As the last example shows, the exponential decrease rate must be indicated to avoid overflow, but the method is robust enough for a rough guess to be acceptable.

* α < -1 assumes that the function tends to 0 slowly, like x^α. Here the algorithm is less robust and it is essential to give a sharp α, unless α ≤ -2 in which case we use the default algorithm as if α were missing (or equal to 0).

  ? intnum(x=1, +oo, x^(-3/2))         \\ default
  %1 = 1.9999999999999999999999999999646391207
  ? intnum(x=1, [+oo,-3/2], x^(-3/2))  \\ precise decrease rate
  %2 = 2.0000000000000000000000000000000000000
  ? intnum(x=1, [+oo,-11/10], x^(-3/2)) \\ worse than default
  %3 = 2.0000000000000000000000000089298011973

The last two codes are reserved for oscillating functions. Let k > 0 be real and g(x) a nonoscillating function tending slowly to 0 (e.g. like a negative power of x); then

* α = k * I assumes that the function behaves like cos(kx)g(x).

* α = -k*I assumes that the function behaves like sin(kx)g(x).

Here it is critical to give the exact value of k. If the oscillating part is not a pure sine or cosine, one must expand it into a Fourier series, use the above codings, and sum the resulting contributions. Otherwise you will get nonsense. Note that cos(kx), and similarly sin(kx), means that very function, and not a translated version such as cos(kx+a). Note that the (slower) function intnumosc is more robust and should be able to integrate much more general quasi-periodic functions such as fractional parts or Bessel J and Y functions.

  ? \pb1664
  ? exponent(intnum(x=0,+oo, sinc(x)) - Pi/2)
  time = 308 ms.
  %1 = 5 \\ junk
  ? exponent(intnum(x=0,[+oo,-I], sinc(x)) - Pi/2)
  time = 493 ms.
  %2 = -1663 \\ perfect when k is given
  ? exponent(intnum(x=0,[+oo,-0.999*I], sinc(x)) - Pi/2)
  time = 604 ms.
  %3 = -14 \\ junk when k is off
  
  \\ intnumosc requires the half-period
  ? exponent(intnumosc(x=0, sinc(x), Pi) - Pi/2)
  time = 20,570 ms.
  %4 = -1663 \\ slower but perfect
  ? exponent(intnumosc(x=0, sinc(x), Pi, 1) - Pi/2)
  time = 7,976 ms.
  %4 = -1663 \\ also perfect in fast unsafe mode
  ? exponent(intnumosc(x=0, sinc(x), Pi+0.001, 1) - Pi/2)
  time = 23,115 ms.
  %5 = -1278 \\ loses some accuracy when period is off, but much less

Note. If f(x) = cos(kx)g(x) where g(x) tends to zero exponentially fast as exp(-αx), it is up to the user to choose between [±oo,α] and [±oo,k*I], but a good rule of thumb is that if the oscillations are weaker than the exponential decrease, choose [±oo,α], otherwise choose [±oo,k*I], although the latter can reasonably be used in all cases, while the former cannot. To take a specific example, in most inverse Mellin transforms, the integrand is a product of an exponentially decreasing and an oscillating factor. If we choose the oscillating type of integral we perhaps obtain the best results, at the expense of having to recompute our functions for a different value of the variable z giving the transform, preventing us from using a function such as intfuncinit. On the other hand, using the exponential type of integral, we obtain less accurate results, but we skip expensive recomputations. See intfuncinit for more explanations.

Power series limits. The limits a and b can be power series of nonnegative valuation, giving a power series expansion for the integral, provided it exists.

  ? intnum(t=0,X + O(X^3), exp(t))
  %4 = 1.000...*X - 0.5000...*X^2 + O(X^3)
  ? bestappr( intnum(t=0,X + O(X^17), exp(t)) )- exp(X) + 1
  %5 = O(X^17)

The valuation of the limit cannot be negative since ∫_0^{1/X} (1+t^2)^(-1) dt = sign(X)π/2 - X + O(X^2).

Polynomials and rational functions are also allowed and converted to power series using the current seriesprecision:

  ? bestappr( intnum(t=1,1+X, 1/t) )
  %6 = X - 1/2*X^2 + 1/3*X^3 - 1/4*X^4 + [...] + 1/15*X^15 + O(X^16)

The function does not work if the integral is singular with the constant coefficient of the series as limit:

  ? intnum(t=X^2+O(X^4),1, 1/sqrt(t))
  %8 = 2.000... - 6.236608109630992528 E28*X^2 + O(X^4)

however you can use

  ? intnum(t=[X^2+O(X^4),-1/2],1, 1/sqrt(t))
  %10 = 2.000000000000000000000000000-2.000000000000000000000000000*X^2+O(X^4)

which is translated internally to

  ? intnum(t=[0,-1/2],1, 1/sqrt(t))-intnum(t=[0,-1/2],X^2+O(X^4), 1/sqrt(t))

For this form the argument tab can be used only as an integer, not a table precomputed by intnuminit.

We shall now see many examples to get a feeling for what the various parameters achieve. All examples below assume precision is set to 115 decimal digits. We first type

  ? \p 115

Apparent singularities. In many cases, apparent singularities can be ignored. For instance, if f(x) = 1/(exp(x)-1) - exp(-x)/x, then ∫_0^oo f(x) dx = γ, Euler's constant Euler. But

  ? f(x) = 1/(exp(x)-1) - exp(-x)/x
  ? intnum(x = 0, [oo,1],  f(x)) - Euler
  %1 = 0.E-115

But close to 0 the function f is computed with an enormous loss of accuracy, and we are in fact lucky that it gets multiplied by weights which are sufficiently close to 0 to hide this:

  ? f(1e-200)
  %2 = -3.885337784451458142 E84

A more robust solution is to define the function differently near special points, e.g. by a Taylor expansion

  ? F = truncate( f(t + O(t^10)) ); \\  expansion around t = 0
  ? poldegree(F)
  %4 = 7
  ? g(x) = if (x > 1e-18, f(x), subst(F,t,x)); \\  note that 7·18 > 115
  ? intnum(x = 0, [oo,1],  g(x)) - Euler
  %2 = 0.E-115

It is up to the user to determine constants such as the 10^(-18) and 10 used above.

True singularities. With true singularities the result is worse. For instance

  ? intnum(x = 0, 1,  x^(-1/2)) - 2
  %1 = -3.5... E-68 \\  only 68 correct decimals
  
  ? intnum(x = [0,-1/2], 1,  x^(-1/2)) - 2
  %2 = 0.E-114 \\  better

Oscillating functions.

  ? intnum(x = 0, oo, sin(x) / x) - Pi/2
  %1 = 16.19.. \\  nonsense
  ? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
  %2 = -0.006.. \\  bad
  ? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
  %3 = 0.E-115 \\  perfect
  ? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\  oops, wrong k
  %4 = 0.06...
  ? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
  %5 = 0.E-115 \\  perfect
  
  ? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
  %6 = -0.0008... \\  bad
  ? sin(x)^3 - (3*sin(x)-sin(3*x))/4
  %7 = O(x^17)

We may use the above linearization and compute two oscillating integrals with endpoints [oo, -I] and [oo, -3*I] respectively (see the sketch below), or notice the obvious change of variable and reduce to the single integral (1/2)∫_0^oo sin(x)/x dx.
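
Explicitly, the linearization route reads as follows (a sketch; each piece uses the coding for its own frequency):

  A = intnum(x = 0, [oo, -I], 3*sin(x)/(4*x));     \\ k = 1 component
  B = intnum(x = 0, [oo, -3*I], -sin(3*x)/(4*x));  \\ k = 3 component
  A + B - Pi/4   \\ tiny

We finish with some more complicated examples: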

  ? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
  %1 = -0.0003... \\  bad
  ? intnum(x = 0, 1, (1-cos(x))/x^2) \
  + intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
  %2 = 0.E-115 \\  perfect
  
  ? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
  %3 = -7.34... E-55 \\  bad
  ? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
  %4 = 8.9... E-103 \\  better. Try higher m
  ? tab = intnuminit(0,[oo,-I], 1); \\  double number of sampling points
  ? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
  %6 = 0.E-115 \\  perfect

Warning. Like sumalt, intnum often assigns a reasonable value to diverging integrals. Use these values at your own risk! For example:

  ? intnum(x = 0, [oo, -I], x^2*sin(x))
  %1 = -2.0000000000...

Note the formula ∫_0^oo sin(x) x^(-s) dx = cos(πs/2) Γ(1-s), a priori valid only for 0 < Re(s) < 2, but the right hand side provides an analytic continuation which may be evaluated at s = -2...

Multivariate integration. Using successive univariate integration with respect to different formal parameters, it is immediate to do naive multivariate integration. But it is important to use a suitable intnuminit to precompute data for the internal integrations at least!

For example, to compute the double integral on the unit disc x^2+y^2 ≤ 1 of the function x^2+y^2, we can write

  ? tab = intnuminit(-1,1);
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab),tab) - Pi/2
  %2 = -7.1... E-115 \\  OK
  

The first tab is essential, the second optional. Compare:

  ? tab = intnuminit(-1,1);
  time = 4 ms.
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
  time = 3,092 ms. \\  slow
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
  time = 252 ms.  \\  faster
  ? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab));
  time = 261 ms.  \\  the internal integral matters most

The library syntax is intnum(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b,GEN tab, long prec), where an omitted tab is coded as NULL.


intnumgauss(X = a, b, expr, {tab})

Numerical integration of expr on the compact interval [a,b] with respect to X using Gauss-Legendre quadrature; tab is either omitted or precomputed with intnumgaussinit. As a convenience, it can be an integer n in which case we call intnumgaussinit(n) and use n-point quadrature.

  ? test(n, b = 1) = T=intnumgaussinit(n);\
      intnumgauss(x=-b,b, 1/(1+x^2),T) - 2*atan(b);
  ? test(0) \\ default
  %1 = -9.490148553624725335 E-22
  ? test(40)
  %2 = -6.186629001816965717 E-31
  ? test(50)
  %3 = -1.1754943508222875080 E-38
  ? test(50, 2) \\ double interval length
  %4 = -4.891779568527713636 E-21
  ? test(90, 2) \\ n must almost be doubled as well!
  %5 = -9.403954806578300064 E-38

On the other hand, we recommend splitting the integral and changing variables rather than increasing n too much:

  ? f(x) = 1/(1+x^2);
  ? b = 100;
  ? intnumgauss(x=0,1, f(x)) + intnumgauss(x=1,1/b, f(1/x)*(-1/x^2)) - atan(b)
  %3 = -1.0579449157400587572 E-37

The library syntax is GEN intnumgauss0(GEN X, GEN b, GEN expr, GEN tab = NULL, long prec).


intnumgaussinit({n})

Initialize tables for n-point Gauss-Legendre integration of a smooth function f on a compact interval [a,b]. If n is omitted, make a default choice n ~ B / 4, where B is realbitprecision, suitable for analytic functions on [-1,1]. The error is bounded by

((b-a)^(2n+1) (n!)^4) / ((2n+1)!(2n)!) · f^(2n)(ξ) / (2n)!, for some a < ξ < b.

If r denotes the distance of the nearest pole to the interval [a,b], then this is of the order of ((b-a) / (4r))^(2n). In particular, the integral must be subdivided if the interval length b - a becomes close to 4r. The default choice n ~ B / 4 makes this quantity of order 2^(-B) when b - a = r, as is the case when integrating 1/(1+t) on [0,1] for instance. If the interval length increases, n should be increased as well.

Specifically, the function returns a pair of vectors [x,w], where x contains the nonnegative roots of the n-th Legendre polynomial P_n and w the corresponding Gaussian integration weights Q_n(x_j)/P'_n(x_j) = 2 / ((1-x_j^2) P'_n(x_j)^2) such that ∫_{-1}^{1} f(t) dt ~ ∑_j w_j f(x_j).

  ? T = intnumgaussinit();
  ? intnumgauss(t=-1,1,exp(t), T) - exp(1)+exp(-1)
  %1 = -5.877471754111437540 E-39
  ? intnumgauss(t=-10,10,exp(t), T) - exp(10)+exp(-10)
  %2 = -8.358367809712546836 E-35
  ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2 \\ b - a = 2r
  %3 = -9.490148553624725335 E-22 \\ ... loses half the accuracy
  
  ? T = intnumgaussinit(50);
  ? intnumgauss(t=-1,1,1/(1+t^2), T) - Pi/2
  %5 = -1.1754943508222875080 E-38
  ? intnumgauss(t=-5,5,1/(1+t^2), T) - 2*atan(5)
  %6 = -1.2[...]E-8

On the other hand, we recommend splitting the integral and changing variables rather than increasing n too much, see intnumgauss.

The library syntax is GEN intnumgaussinit(long n, long prec).


intnuminit(a, b, {m = 0})

Initialize tables for integration from a to b, where a and b are coded as in intnum. Only the compactness, the possible existence of singularities, the speed of decrease or the oscillations at infinity are taken into account, and not the values. For instance intnuminit(-1,1) is equivalent to intnuminit(0,Pi), and intnuminit([0,-1/2],oo) is equivalent to intnuminit([-1,-1/2], -oo); on the other hand, the order matters and intnuminit([0,-1/2], [1,-1/3]) is not equivalent to intnuminit([0,-1/3], [1,-1/2]) !
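
For instance (a sketch): tables computed for one compact interval with regular endpoints apply to any other:

  T = intnuminit(-1, 1);            \\ tables for a compact interval
  intnum(x = 0, Pi, sin(x), T) - 2  \\ tiny: same type of interval as [-1,1]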

If m is present, it must be nonnegative and we multiply the default number of sampling points by 2^m (increasing the running time by a similar factor).

The result is technical and liable to change in the future, but we document it here for completeness. Let x = φ(t), t ∈ ]-oo, oo[ be an internally chosen change of variable, achieving double exponential decrease of the integrand at infinity. The integrator intnum will compute h ∑_{|n| < N} φ'(nh) F(φ(nh)) for some integration step h and truncation parameter N. In basic use, let

  [h, x0, w0, xp, wp, xm, wm] = intnuminit(a,b);

* h is the integration step

* x0 = φ(0) and w0 = φ'(0),

* xp contains the φ(nh), 0 < n < N,

* xm contains the φ(nh), 0 < -n < N, or is empty.

* wp contains the φ'(nh), 0 < n < N,

* wm contains the φ'(nh), 0 < -n < N, or is empty.

The arrays xm and wm are left empty when φ is an odd function. In complicated situations, intnuminit may return up to 3 such arrays, corresponding to a splitting of up to 3 integrals of basic type.

If the functions to be integrated later are of the form F = f(t) k(t,z) for some kernel k (e.g. Fourier, Laplace, Mellin,...), it is useful to also precompute the values of f(φ(nh)), which is accomplished by intfuncinit. The hard part is to determine the behavior of F at endpoints, depending on z.

The library syntax is GEN intnuminit(GEN a, GEN b, long m, long prec).


intnumosc(x = a, expr, H, {flag = 0}, {tab})

Numerical integration from a to oo of an oscillating quasi-periodic function expr of half-period H, meaning that we at least expect the distance between the function's consecutive zeros to be close to H: the sine or cosine functions (H = π) are paradigmatic examples, but the Bessel J_ν or Y_ν functions (H = π) can also be handled. The integral from a to oo is computed by summing the integral between two consecutive multiples of H; flag determines the summation algorithm used: either 0 (Sidi extrapolation, safe mode), 1 (Sidi extrapolation, unsafe mode), 2 (sumalt), 3 (sumnumlagrange) or 4 (sumpos). For the last two modes (Lagrange and Sumpos), one should input the period 2H instead of the half-period H.

The default is flag = 0; Sidi summation should be the most robust algorithm; you can try it in unsafe mode when the integrals between two consecutive multiples of H form an alternating series: this should be about twice as fast as the default without losing accuracy. Sumpos should be by far the slowest method, but also very robust and may be able to handle integrals where Sidi fails. Sumalt should be fast but often wrong, especially when the integrals between two consecutive multiples of H do not form an alternating series; and Lagrange should be as fast as Sumalt but wrong even more often.

When one of the Sidi modes runs into difficulties, it will return the result to the accuracy believed to be correct (the other modes do not perform extrapolation and do not have this property):

  ? f(x)=besselj(0,x)^4*log(x+1);
  ? \pb384
  ? intnumosc(x = 0, f(x), Pi)
  %1 = 0.4549032054850867417 \\ fewer digits than expected !
  ? bitprecision(%)
  %2 = 64
  ? \g1 \\ increase debug level to see diagnostics
  ? intnumosc(x = 0, f(x), Pi)
  sumsidi: reached accuracy of 23 bits.
  %2 = 0.4549032054850867417

The algorithm could extrapolate the series to 23 bits of accuracy, then diverged. So the absolute error is likely to be around 2^(-23) instead of the possible 2^(-64) (or the requested 2^(-384)). We'll come back to this example at the end.

In case of difficulties, you may try to replace the half-(quasi)-period H by a multiple, such as the quasi-period 2H: since we do not expect alternating behaviour, sumalt mode will almost surely be broken, but others may improve, in particular Lagrange or Sumpos.

tab is either omitted or precomputed with intnumgaussinit; if you use Sidi summation in safe mode (flag = 0) and precompute tab, you should use a precision roughly 50% larger than the target (this is not necessary for any of the other summations).

First an alternating example:

  ? \pb384
  \\ Sidi, safe mode
  ? exponent(intnumosc(x=0,sinc(x),Pi) - Pi/2)
  time = 183 ms.
  %1 = -383
  ? exponent(intnumosc(x=0,sinc(x),2*Pi) - Pi/2)
  time = 224 ms.
  %2 = -383 \\ also works with 2H, a little slower
  
  \\ Sidi, unsafe mode
  ? exponent(intnumosc(x=0,sinc(x),Pi,1) - Pi/2)
  time = 79 ms.
  %3 = -383  \\ alternating: unsafe mode is fine and almost twice faster
  ? exponent(intnumosc(x=0,sinc(x),2*Pi,1) - Pi/2)
  time = 86 ms.
  %4 = -285 \\ but this time 2H loses accuracy
  
  \\ Sumalt
  ? exponent(intnumosc(x=0,sinc(x),Pi,2) - Pi/2)
  time = 115 ms. \\ sumalt is just as accurate and fast
  %5 = -383
  ? exponent(intnumosc(x=0,sinc(x),2*Pi,2) - Pi/2)
  time = 115 ms.
  %6 = -10 \\ ...but breaks completely with 2H
  
  \\ Lagrange
  ? exponent(intnumosc(x=0,sinc(x),Pi,3) - Pi/2)
  time = 100 ms. \\ junk
  %7 = 224
  ? exponent(intnumosc(x=0,sinc(x),2*Pi,3) - Pi/2)
  time = 100 ms.
  %8 = -238 \\ ...a little better with 2H
  
  \\ Sumpos
  ? exponent(intnumosc(x=0,sinc(x),Pi,4) - Pi/2)
  time = 17,961 ms.
  %9 = 7 \\ junk; slow
  ? exponent(intnumosc(x=0,sinc(x),2*Pi,4) - Pi/2)
  time = 19,105 ms.
  %10 = -4 \\ still junk

Now a non-alternating one:

  ? exponent(intnumosc(x=0,sinc(x)^2,Pi) - Pi/2)
  time = 277 ms.
  %1 = -383 \\ safe mode is still perfect
  ? exponent(intnumosc(x=0,sinc(x)^2,Pi,1) - Pi/2)
  time = 97 ms.
  %2 = -284 \\ non-alternating; this time, Sidi's unsafe mode loses accuracy
  ? exponent(intnumosc(x=0,sinc(x)^2,Pi,2) - Pi/2)
  time = 113 ms.
  %3 = -10 \\ this time sumalt fails completely
  ? exponent(intnumosc(x=0,sinc(x)^2,Pi,3) - Pi/2)
  time = 103 ms.
  %4 = -237 \\ Lagrange loses accuracy (same with 2H = 2*Pi)
  ? exponent(intnumosc(x=0,sinc(x)^2,Pi,4) - Pi/2)
  time = 17,681 ms.
  %4 = -381 \\ and Sumpos is good but slow (perfect with 2H)

Examples of a different flavour:

  ? exponent(intnumosc(x = 0, besselj(0,x)*sin(3*x), Pi) - 1/sqrt(8))
  time = 4,615 ms.
  %1 = -385 \\ more expensive but correct
  ? exponent(intnumosc(x = 0, besselj(0,x)*sin(3*x), Pi, 1) - 1/sqrt(8))
  time = 1,424 ms.
  %2 = -279 \\ unsafe mode loses some accuracy (other modes return junk)
  
  ? S = log(2*Pi)- Euler - 1;
  ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1/2) - S)
  time = 21 ms.
  %4 = -6 \\ junk
  ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1) - S)
  time = 66ms.
  %5 = -384 \\ perfect with 2H
  ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 1) - S)
  time = 20 ms.
  %6 = -286 \\ unsafe mode loses accuracy
  ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 3) - S)
  time = 30 ms.
  %7 = -236  \\ and so does Lagrange (Sumalt fails)
  ? exponent(intnumosc(t=1, (frac(t)/t)^2, 1, 4) - S)
  time = 2,315 ms.
  %8 = -382 \\ Sumpos is perfect but slow

Again, Sidi extrapolation behaves well, especially in safe mode, but 2H is required here.

If the integrand has singularities close to the interval of integration, it is advisable to split the integral in two: use the more robust intnum to handle the singularities, then intnumosc for the remainder:

  ? \p38
  ? f(x) = besselj(0,x)^3 * log(x); \\ mild singularity at 0
  ? g() = intnumosc(x = 0, f(x), Pi); \\ direct
  ? h() = intnum(x = 0, Pi, f(x)) + intnumosc(x = Pi, f(x), Pi); \\ split at Pi
  ? G = g();
  time = 293 ms.
  ? H = h();
  time = 320 ms. \\ about as fast
  ? exponent(G-H)
  %6 = -12 \\ at least one of them is junk
  ? \p77 \\ increase accuracy
  ? G2=g(); H2=h();
  ? exponent(G - G2)
  %8 = -13  \\ g() is not consistent
  ? exponent(H - H2)
  %9 = -128  \\ not a proof, but h() looks good

Finally, here is an example where all methods fail, even when splitting the integral, except Sumpos:

  ? \p38
  ? f(x)=besselj(0,x)^4*log(x+1);
  ? F = intnumosc(x=0,f(x), Pi, 4)
  time = 2,437 ms.
  %2 = 0.45489838778971732178155161172638343214
  ? \p76 \\ double accuracy to check
  ? exponent(F - intnumosc(x = 0,f(x), Pi, 4))
  time = 18,817 ms.
  %3 = -122 \\ F was almost perfect

The library syntax is GEN intnumosc0(GEN x, GEN expr, GEN H, long flag, GEN tab = NULL, long prec).


intnumromb(X = a, b, expr, {flag = 0})

Numerical integration of expr (smooth in ]a,b[), with respect to X. Suitable for low accuracy; if expr is very regular (e.g. analytic in a large region) and high accuracy is desired, try intnum first.

Set flag = 0 (or omit it altogether) when a and b are not too large, the function is smooth, and can be evaluated exactly everywhere on the interval [a,b].

If flag = 1, uses a general driver routine for doing numerical integration, making no particular assumption (slow).

flag = 2 is tailored for use when a or b is infinite, using the change of variable t = 1/X. One must have ab > 0, and in fact if for example b = +oo, then it is preferable to have a as large as possible, at least a ≥ 1.

If flag = 3, the function is allowed to be undefined at a (but right continuous) or b (left continuous), for example the function sin(x)/x between x = 0 and 1.
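
For instance, a minimal sketch at moderate accuracy, using the function just mentioned; the exact value is the sine integral Si(1) = 0.946083...:

  \p28
  intnumromb(x = 0, 1, sin(x)/x, 3)  \\ ~ 0.946083070367183015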

The user should not require too much accuracy: realprecision about 30 decimal digits (realbitprecision about 100 bits) is OK, but not much more. In addition, analytical cleanup of the integral must have been done: there must be no singularities in the interval or at the boundaries. In practice this can be accomplished with a change of variable. Furthermore, for improper integrals, where one or both of the limits of integration are plus or minus infinity, the function must decrease sufficiently rapidly at infinity, which can often be accomplished through integration by parts. Finally, the function to be integrated should not be very small (compared to the current precision) on the entire interval. This can of course be accomplished by just multiplying by an appropriate constant.

Note that infinity can be represented with essentially no loss of accuracy by an appropriate huge number. However beware of real underflow when dealing with rapidly decreasing functions. For example, in order to compute ∫_0^oo e^(-x^2) dx to 38 decimal digits, one can set infinity equal to 10, and certainly not to 1e1000.

The library syntax is GEN intnumromb(void *E, GEN (*eval)(void*,GEN), GEN a, GEN b, long flag, long bitprec), where eval(x, E) returns the value of the function at x. You may store any additional information required by eval in E, or set it to NULL.


laurentseries(f, {M = seriesprecision}, {x = 'x})

Expand f as a Laurent series around x = 0 to order M. This function computes f(x + O(xn)) until n is large enough: it must be possible to evaluate f on a power series with 0 constant term.

  ? laurentseries(t->sin(t)/(1-cos(t)), 5)
  %1 = 2*x^-1 - 1/6*x - 1/360*x^3 - 1/15120*x^5 + O(x^6)
  ? laurentseries(log)
    ***   at top-level: laurentseries(log)
    ***                 ^------------------
    ***   in function laurentseries: log
    ***                              ^---
    *** log: domain error in log: series valuation != 0

Note that individual Laurent coefficients of order ≤ M can be retrieved from s = laurentseries(f,M) via polcoef(s,i) for any i ≤ M. The series s may occasionally be more precise than the required O(x^(M+1)).

With respect to successive calls to derivnum, laurentseries is both faster and more precise:

  ? laurentseries(t->log(3+t),1)
  %1 = 1.0986122886681096913952452369225257047 + 1/3*x - 1/18*x^2 + O(x^3)
  ? derivnum(t=0,log(3+t),1)
  %2 = 0.33333333333333333333333333333333333333
  ? derivnum(t=0,log(3+t),2)
  %3 = -0.11111111111111111111111111111111111111
  
  ? f = x->sin(exp(x));
  ? polcoef(laurentseries(x->f(x+2), 1), 1)
  %5 = 3.3129294231043339804683687620360224365
  ? exp(2) * cos(exp(2))
  %6 = 3.3129294231043339804683687620360224365
  ? derivnum(x = 2, f(x))
  %7 = 3.3129294231043339804683687620360224364 \\ 1 ulp off
  
  ? default(realprecision,115);
  ? for(i=1,10^4, laurentseries(x->f(x+2),1))
  time = 279 ms.
  ? for(i=1,10^4, derivnum(x=2,f(x)))  \\ ... and slower
  time = 1,134 ms.

The library syntax is laurentseries(void *E, GEN (*f)(void*,GEN,long), long M, long v, long prec).


limitnum(expr, {alpha = 1})

Lagrange-Zagier numerical extrapolation of expr, corresponding to a sequence u_n, either given by a closure n -> u(n) or by a vector of values, as explained below. I.e., assuming that u_n tends to a finite limit ℓ, try to determine ℓ.

The routine assumes that u_n has an asymptotic expansion of the shape u_n = ℓ + ∑_{i ≥ 1} a_i n^(-iα) for some coefficients a_i. It is purely numerical and heuristic, thus may or may not work on your examples. The expression will be evaluated for n = 1, 2, ..., N for an N = O(B) at a bit accuracy bounded by 1.612 B.

  ? limitnum(n -> n*sin(1/n))
  %1 = 1.0000000000000000000000000000000000000
  
  ? limitnum(n -> (1+1/n)^n) - exp(1)
  %2 = 0.E-37
  
  ? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! ) - Pi
  %3 = 0.E-37

It is not mandatory to specify α when the u_n have an asymptotic expansion in n^(-1). However, if the series in n^(-1) is lacunary, specifying α allows faster computation:

  ? \p1000
  ? limitnum(n->(1+1/n^2)^(n^2)) - exp(1)
  time = 1min, 44,681 ms.
  %4 = 0.E-1001
  ? limitnum(n->(1+1/n^2)^(n^2), 2) - exp(1)
  time = 27,271 ms.
  %5 = 0.E-1001 \\ still perfect, 4 times faster

When u_n has an asymptotic expansion in n^(-α) with α not an integer, leaving α unspecified yields an inexact limit. Giving a satisfying optional argument improves precision; the program runs faster when the optional argument gives a non-lacunary series.

  ? \p50
  ? limitnum(n->(1+1/n^(7/2))^(n^(7/2))) - exp(1)
  time = 982 ms.
  %6 = 4.13[...] E-12
  ? limitnum(n->(1+1/n^(7/2))^(n^(7/2)), 1/2) - exp(1)
  time = 16,745 ms.
  %7 = 0.E-57
  ? limitnum(n->(1+1/n^(7/2))^(n^(7/2)), 7/2) - exp(1)
  time = 105 ms.
  %8 = 0.E-57

Alternatively, u_n may be given by a closure N -> [u_1,..., u_N], which can often be programmed in a more efficient way, for instance when u_{n+1} is a simple function of the preceding terms:

  ? \p2000
  ? limitnum(n -> 2^(4*n+1)*(n!)^4 / (2*n)! /(2*n+1)! ) - Pi
  time = 1,755 ms.
  %9 = 0.E-2003
  ? vu(N) = \\ exploit hypergeometric property
    { my(v = vector(N)); v[1] = 8./3;\
      for (n=2, N, my(q = 4*n^2); v[n] = v[n-1]*q/(q-1));\
      return(v);
    }
  ? limitnum(vu) - Pi \\ much faster
  time = 106 ms.
  %11 = 0.E-2003

All sums and recursions can be handled in the same way. In the above it is essential that u_n be defined as a closure because it must be evaluated at a higher precision than the one expected for the limit. Make sure that the closure does not depend on a global variable which would be computed at an a priori fixed accuracy. For instance, precomputing v1 = 8.0/3 first and using v1 in vu above would be wrong because the resulting vector of values would use the accuracy of v1 instead of the ambient accuracy at which limitnum will call it.

Alternatively, and more clumsily, u_n may be given by a vector of values: it must be long and precise enough for the extrapolation to make sense. Let B be the current realbitprecision; the vector length must be at least 1.102 B and the values computed with bit accuracy 1.612 B.

  ? limitnum(vector(10,n,(1+1/n)^n))
   ***                 ^--------------------
   *** limitnum: nonexistent component in limitnum: index < 43
  \\ at this accuracy, we must have at least 43 values
  ? limitnum(vector(43,n,(1+1/n)^n)) - exp(1)
  %12 = 0.E-37
  
  ? v = vector(43);
  ? s = 0; for(i=1,#v, s += 1/i; v[i]= s - log(i));
  ? limitnum(v) - Euler
  %15 = -1.57[...] E-16
  
  ? v = vector(43);
  \\ ~ 128 bit * 1.612
  ? localbitprec(207);\
    s = 0; for(i=1,#v, s += 1/i; v[i]= s - log(i));
  ? limitnum(v) - Euler
  %18 = 0.E-38

Because of the above problems, the preferred format is thus a closure, giving either a single value or the vector of values [u_1,...,u_N]. The function distinguishes between the two formats by evaluating the closure at some N != 1 and at 1, and checking whether it yields vectors of respective lengths N and 1 or not.

Warning. The expression is evaluated for n = 1, 2,..., N for an N = O(B) if the current bit accuracy is B. If it is not defined for one of these values, translate or rescale accordingly:

  ? limitnum(n->log(1-1/n))  \\ can't evaluate at n = 1 !
   ***   at top-level: limitnum(n->log(1-1/n))
   ***                 ^----------------------
   ***   in function limitnum: log(1-1/n)
   ***                         ^----------
   *** log: domain error in log: argument = 0
  ? limitnum(n->-log(1-1/(2*n)))
  %19 = -6.11[...] E-58

We conclude with a complicated example. Since the function is heuristic, it is advisable to check whether it produces the same limit for u_n, u_{2n}, ..., u_{kn} for a suitable small multiplier k. The following function implements the recursion for the Motzkin numbers M_n, which count the number of ways to draw nonintersecting chords between n points on a circle: M_n = M_{n-1} + ∑_{i < n-1} M_i M_{n-2-i} = ((2n+1)M_{n-1} + (3n-3)M_{n-2}) / (n+2). It is known that M_n^2 ~ 9^(n+1)/(12π n^3).

  \\ [M_k, M_{2k}, ..., M_{kN}] / (3^n / n^(3/2))
  vM(N, k = 1) =
  { my(q = k*N, V);
     if (q == 1, return ([1/3]));
     V = vector(q); V[1] = V[2] = 1;
     for(n = 2, q - 1,
       V[n+1] = ((2*n + 1)*V[n] + 3*(n - 1)*V[n-1]) / (n + 2));
     f = (n -> 3^n / n^(3/2));
     return (vector(N, n, V[n*k] / f(n*k)));
  }
  ? limitnum(vM) - 3/sqrt(12*Pi) \\ complete junk
  %1 = 35540390.753542730306762369615276452646
  ? limitnum(N->vM(N,5)) - 3/sqrt(12*Pi) \\ M5n: better
  %2 = 4.130710262178469860 E-25
  ? limitnum(N->vM(N,10)) - 3/sqrt(12*Pi) \\ M10n: perfect
  %3 = 0.E-38
  ? \p2000
  ? limitnum(N->vM(N,10)) - 3/sqrt(12*Pi) \\ also at high accuracy
  time = 409 ms.
  %4 = 1.1048895470044788191 E-2004

In difficult cases such as the above a multiplier of 5 to 10 is usually sufficient. The above example is typical: a good multiplier usually remains sufficient when the requested precision increases!

The library syntax is limitnum(void *E, GEN (*u)(void *,GEN,long), GEN alpha, long prec), where u(E, n, prec) must return u(n) in precision prec. Also available is GEN limitnum0(GEN u, GEN alpha, long prec), where u must be a vector of sufficient length as above.


prod(X = a, b, expr, {x = 1})

Product of expression expr, initialized at x, the formal parameter X going from a to b. As for sum, the main purpose of the initialization parameter x is to force the type of the operations being performed. For example if it is set equal to the integer 1, operations will start being done exactly. If it is set equal to the real 1., they will be done using real numbers having the default precision. If it is set equal to the power series 1+O(X^k) for a certain k, they will be done using power series of precision at most k. These are the three most common initializations.

As an extreme example, compare

  ? prod(i=1, 100, 1 - X^i);  \\  this has degree 5050 !!
  time = 128 ms.
  ? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
  time = 8 ms.
  %2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
  X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)

Of course, in this specific case, it is faster to use eta, which is computed using Euler's formula.

  ? prod(i=1, 1000, 1 - X^i, 1 + O(X^1001));
  time = 589 ms.
  ? \ps1000
  seriesprecision = 1000 significant terms
  ? eta(X) - %
  time = 8ms.
  %4 = O(X^1001)

The library syntax is produit(GEN a, GEN b, char *expr, GEN x).


prodeuler(p = a, b, expr)

Product of expression expr, initialized at 1.0 (i.e. to a floating point number equal to 1 to the current realprecision), the formal parameter p ranging over the prime numbers between a and b.

  ? prodeuler(p = 2, 10^4, 1 - p^-2)
  %1 = 0.60793306911405513018380499671124428015
  ? P = 1; forprime(p = 2, 10^4, P *= (1 - p^-2))
  ? exponent(numerator(P))
  %3 = 22953

The function returns a floating point number because, as the second computation shows, such products are usually intractably large rational numbers when computed symbolically. If the expression is a rational function, prodeulerrat computes the product over all primes:

  ? prodeulerrat(1 - p^-2)
  %4 = 0.60792710185402662866327677925836583343
  ? 6/Pi^2
  %3 = 0.60792710185402662866327677925836583343

The library syntax is prodeuler(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b, long prec).


prodeulerrat(F, {s = 1}, {a = 2})

∏_{p ≥ a} F(p^s), where the product is taken over prime numbers and F is a rational function.

  ? prodeulerrat(1+1/q^3,1)
  %1 = 1.1815649490102569125693997341604542605
  ? zeta(3)/zeta(6)
  %2 = 1.1815649490102569125693997341604542606

The library syntax is GEN prodeulerrat(GEN F, GEN s = NULL, long a, long prec).


prodinf(X = a, expr, {flag = 0})

Infinite product of expression expr, the formal parameter X starting at a. The evaluation stops when the relative error of the expression minus 1 is less than the default precision. In particular, divergent products result in infinite loops. The expression must always evaluate to an element of ℂ.

If flag = 1, do the product of the (1+expr) instead.
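
For instance (a sketch; the factors must tend to 1 geometrically fast, otherwise the stopping criterion is reached far too slowly to be usable):

  \p38
  prodinf(n = 1, 1 - 2^-n)  \\ converges geometrically, ~ 0.2887880951
  prodinf(n = 1, -2^-n, 1)  \\ the same product, using flag = 1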

The library syntax is prodinf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec) (flag = 0), or prodinf1 with the same arguments (flag = 1).


prodnumrat(F, a)

∏_{n ≥ a} F(n), where F - 1 is a rational function of degree less than or equal to -2.

  ? prodnumrat(1+1/x^2,1)
  %1 = 3.6760779103749777206956974920282606665

The library syntax is GEN prodnumrat(GEN F, long a, long prec).


solve(X = a, b, expr)

Find a real root of expression expr between a and b. If both a and b are finite, the condition is that expr(X = a) * expr(X = b) ≤ 0. (You will get an error message roots must be bracketed in solve if this does not hold.)

If only one of a and b is finite, say a, then b = ±oo. The routine will test all b = a ± 2^r, with r ≥ log_2(|a|), until it finds a bracket for the root which satisfies the abovementioned condition.

If both a and b are infinite, the routine will test 0 and all ±2^r, r ≥ 0, until it finds a bracket for the root which satisfies the condition.

This routine uses Brent's method and can fail miserably if expr is not defined in the whole of [a,b] (try solve(x = 1, 2, tan(x))).
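
For instance:

  ? solve(x = 1, 2, x^2 - 2)  \\ (1^2-2)*(2^2-2) < 0: the root is bracketed
  %1 = 1.4142135623730950488016887242096980786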

The library syntax is zbrent(void *E,GEN (*eval)(void*,GEN),GEN a,GEN b,long prec).


solvestep(X = a, b, step, expr, {flag = 0})

Find zeros of a continuous function in the real interval [a,b] by naive interval splitting. This function is heuristic and may or may not find the intended zeros. Binary digits of flag mean

* 1: return as soon as one zero is found, otherwise return all zeros found;

* 2: refine the splitting until at least one zero is found (may loop indefinitely if there are no zeros);

* 4: do a multiplicative search (we must have a > 0 and step > 1), otherwise an additive search; step is the multiplicative or additive step.

* 8: refine the splitting until at least one zero is very close to an integer.

  ? solvestep(X=0,10,1,sin(X^2),1)
  %1 = 1.7724538509055160272981674833411451828
  ? solvestep(X=1,12,2,besselj(4,X),4)
  %2 = [7.588342434..., 11.064709488...]

The library syntax is solvestep(void *E, GEN (*eval)(void*,GEN), GEN a,GEN b, GEN step,long flag,long prec).


sum(X = a, b, expr, {x = 0})

Sum of expression expr, initialized at x, the formal parameter going from a to b. As for prod, the initialization parameter x may be given to force the type of the operations being performed.

As an extreme example, compare

  ? sum(i=1, 10^4, 1/i); \\  rational number: denominator has 4345 digits.
  time = 236 ms.
  ? sum(i=1, 10^4, 1/i, 0.)
  time = 8 ms.
  %2 = 9.787606036044382264178477904


sumalt(X = a, expr, {flag = 0})

Numerical summation of the series expr, which should be an alternating series ∑_k (-1)^k a_k, the formal variable X starting at a. Use an algorithm of Cohen, Villegas and Zagier (Experiment. Math. 9 (2000), no. 1, 3-12).

If flag = 0, assuming that the a_k are the moments of a positive measure on [0,1], the relative error is O((3+√8)^(-n)) after using a_k for k ≤ n. If realprecision is p, we thus set n = log(10)p/log(3+√8) ~ 1.3 p; besides the time needed to compute the a_k, k ≤ n, the algorithm overhead is negligible: time O(p^2) and space O(p).

If flag = 1, use a variant with more complicated polynomials, see polzagier. If the a_k are the moments of w(x)dx where w (or only x·w(x^2)) is a smooth function extending analytically to the whole complex plane, convergence is in O(14.4^(-n)). If x·w(x^2) extends analytically to a smaller region, we still have exponential convergence, with worse constants. Usually faster when the computation of a_k is expensive. If realprecision is p, we thus set n = log(10)p/log(14.4) ~ 0.86 p; besides the time needed to compute the a_k, k ≤ n, the algorithm overhead is not negligible: time O(p^3) and space O(p^2). Thus, even if the analytic conditions for rigorous use are met, this variant is only worthwhile if the a_k are hard to compute, at least O(p^2) individually on average: otherwise we gain a small constant factor (1.5, say) in the number of needed a_k at the expense of a large overhead.

The conditions for rigorous use are hard to check but the routine is best used heuristically: even divergent alternating series can sometimes be summed by this method, as well as series which are not exactly alternating (see for example Section se:user_defined). It should be used to try and guess the value of an infinite sum. (However, see the example at the end of Section se:userfundef.)

If the series already converges geometrically, suminf is often a better choice:

  ? \p38
  ? sumalt(i = 1, -(-1)^i / i)  - log(2)
  time = 0 ms.
  %1 = 0.E-38
  ? suminf(i = 1, -(-1)^i / i)   \\  Had to hit Ctrl-C
    ***   at top-level: suminf(i=1,-(-1)^i/i)
    ***                                ^--------
    *** suminf: user interrupt after 10min, 20,100 ms.
  ? \p1000
  ? sumalt(i = 1, -(-1)^i / i)  - log(2)
  time = 90 ms.
  %2 = 4.459597722 E-1002
  
  ? sumalt(i = 0, (-1)^i / i!) - exp(-1)
  time = 670 ms.
  %3 = -4.03698781490633483156497361352190615794353338591897830587 E-944
  ? suminf(i = 0, (-1)^i / i!) - exp(-1)
  time = 110 ms.
  %4 = -8.39147638 E-1000   \\   faster and more accurate

The library syntax is sumalt(void *E, GEN (*eval)(void*,GEN),GEN a,long prec). Also available is sumalt2 with the same arguments (flag = 1).


sumdiv(n, X, expr)

Sum of expression expr over the positive divisors of n. This function is a trivial wrapper essentially equivalent to

    D = divisors(n);
    sum (i = 1, #D, my(X = D[i]); eval(expr))

If expr is a multiplicative function, use sumdivmult.
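
For instance (a small sketch), with the multiplicative function f(d) = d^2 both routines recover σ_2(96) = sigma(96, 2):

  ? sumdiv(96, d, d^2)
  %1 = 13650
  ? sumdivmult(96, d, d^2)  \\ same value, exploits multiplicativity
  %2 = 13650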


sumdivmult(n, d, expr)

Sum of the multiplicative expression expr over the positive divisors d of n. Assume that expr evaluates to f(d) where f is multiplicative: f(1) = 1 and f(ab) = f(a)f(b) for coprime a and b.

The library syntax is sumdivmultexpr(void *E, GEN (*eval)(void*,GEN), GEN d).


sumeulerrat(F, {s = 1}, {a = 2})

∑_{p ≥ a} F(p^s), where the sum is taken over prime numbers p and F is a rational function.

  ? sumeulerrat(1/p^2)
  %1 = 0.45224742004106549850654336483224793417
  ? sumeulerrat(1/p, 2)
  %2 = 0.45224742004106549850654336483224793417
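
The optional argument a restricts the sum to primes p ≥ a; as an illustrative check, removing the p = 2 term from %1 above:

  ? sumeulerrat(1/p^2, 1, 3)  \\ only primes p >= 3
  %3 = 0.20224742004106549850654336483224793417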

The library syntax is GEN sumeulerrat(GEN F, GEN s = NULL, long a, long prec).


suminf(X = a, expr)

Naive summation of expression expr, the formal parameter X going from a to infinity. The evaluation stops when the relative error of the expression is less than the default bit precision (i.e. less than 2^-b for b = realbitprecision) for 3 consecutive evaluations. The expression must evaluate to a complex number.

If the expression tends slowly to 0, like n^-a for some a > 1, make sure b = realbitprecision is low: indeed, the algorithm will require O(2^(b/a)) function evaluations and we expect only about b(1 - 1/a) correct bits in the answer. If the series is alternating, we can expect b correct bits, but the sumalt function should be used instead since its complexity is polynomial in b, instead of exponential. More generally, sumpos should be used if the terms have a constant sign and sumnum if the function is C^∞.

  ? \pb25
    realbitprecision = 25 significant bits (7 decimal digits displayed)
  ? exponent(suminf(i = 1, (-1)^i / i) + log(2))
  time = 2min, 2,602 ms.
  %1 = -29
  ? \pb45
    realbitprecision = 45 significant bits (13 decimal digits displayed)
  ? exponent(suminf(i = 1, 1 / i^2) - zeta(2))
  time = 2,186 ms.
  %2 = -23
  
  \\ alternatives are much faster
  ? \pb 10000
    realbitprecision = 10000 significant bits (3010 decimal digits displayed)
  ? exponent(sumalt(i = 1, (-1)^i / i) + log(2))
  time = 25 ms.
  %3 = -10043
  
  ? \pb 4000
    realbitprecision = 4000 significant bits (1204 decimal digits displayed)
  ? exponent(sumpos(i = 1, 1 / i^2) - zeta(2))
  time = 22,593 ms.
  %4 = -4030
  
  ? exponent(sumnum(i = 1, 1 / i^2) - zeta(2))
  time = 7,032 ms.
  %5 = -4031
  
  \\ but suminf is perfect for geometrically converging series
  ? exponent(suminf(i = 1, 2^-i) - 1)
  time = 25 ms.
  %6 = -4003

The library syntax is suminf(void *E, GEN (*eval)(void*,GEN), GEN a, long prec).


sumnum(n = a, f, {tab})

Numerical summation of f(n) at high accuracy using Euler-MacLaurin, the variable n taking values from a to +oo, where f is assumed to have positive values and to be a C^∞ function; a must be an integer and tab, if given, is the output of sumnuminit. The latter precomputes abscissas and weights, speeding up the computation; it also allows one to specify the behavior at infinity via sumnuminit([+oo, asymp]).

  ? \p500
  ? z3 = zeta(3);
  ? sumpos(n = 1, n^-3) - z3
  time = 2,332 ms.
  %2 = 2.438468843 E-501
  ? sumnum(n = 1, n^-3) - z3 \\ here slower than sumpos
  time = 2,752 ms.
  %3 = 0.E-500

Complexity. The function f will be evaluated at O(D log D) real arguments, where D ~ realprecision · log(10). The routine is geared towards slowly decreasing functions: if f decreases exponentially fast, then one of suminf or sumpos should be preferred. If f satisfies the stronger hypotheses required for Monien summation, i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1], then sumnummonien will be faster since it only requires O(D/log D) evaluations:

  ? sumnummonien(n = 1, 1/n^3) - z3
  time = 1,985 ms.
  %4 = 0.E-500

The tab argument precomputes technical data not depending on the expression being summed and valid for a given accuracy, speeding up later calls immensely:

  ? tab = sumnuminit();
  time = 2,709 ms.
  ? sumnum(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos
  time = 40 ms.
  %6 = 0.E-500
  
  ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too
  time = 1,781 ms.
  ? sumnummonien(n = 1, 1/n^3, tabmon) - z3
  time = 2 ms.
  %8 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:

  ? sumnum(n = 1, lngamma(1+1/n)/n, tab);
  time = 14,180 ms.
  
  ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations
  time = 717 ms.

Behaviour at infinity. By default, sumnum assumes that expr decreases slowly at infinity, but at least like O(n^-2). If the function decreases like n^α for some -2 < α < -1, then it must be indicated via

    tab = sumnuminit([+oo, alpha]); /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via

    tab = sumnuminit([+oo, alpha]); /* alpha  > 0 exponential decrease */

otherwise exponent overflow will occur.

  ? sumnum(n=1,2^-n)
   ***   at top-level: sumnum(n=1,2^-n)
   ***                             ^----
   *** _^_: overflow in expo().
  ? tab = sumnuminit([+oo,log(2)]); sumnum(n=1,2^-n, tab)
  %1 = 1.000[...]

As a shortcut, one can also input

    sumnum(n = [a, asymp], f)

instead of

    tab = sumnuminit(asymp);
    sumnum(n = a, f, tab)

Further examples.

  ? \p200
  ? sumnum(n = 1, n^(-2)) - zeta(2) \\ accurate, fast
  time = 200 ms.
  %1 = -2.376364457868949779 E-212
  ? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
  time = 96 ms.
  %2 = 0.E-211
  ? sumpos(n=1,n^(-4/3)) - zeta(4/3)   \\ now much slower
  time = 13,045 ms.
  %3 = -9.980730723049589073 E-210
  ? sumnum(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
  time = 365 ms.
  %4 = -9.85[...]E-85
  ? sumnum(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ with decrease rate, now accurate
  time = 416 ms.
  %5 = -4.134874156691972616 E-210
  
  ? tab = sumnuminit([+oo,-4/3]);
  time = 196 ms.
  ? sumnum(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations
  time = 216 ms.
  %7 = -4.134874156691972616 E-210
  ? sumnum(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
  time = 321 ms.
  %8 = 7.224147951921607329 E-210

Note that in the case of slow decrease (α < 0), the exact decrease rate must be indicated, while in the case of exponential decrease, a rough value will do. In fact, for exponentially decreasing functions, sumnum is given for completeness and comparison purposes only: one of suminf or sumpos should always be preferred.

  ? sumnum(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n)
  time = 240 ms.
  %8 = 1.000[...] \\ perfect
  ? sumpos(n=1, 2^-n)
  %9 = 1.000[...] \\ perfect and instantaneous

Beware cancellation. The function f(n) is evaluated for huge values of n, so beware of cancellation in the evaluation:

  ? f(n) = 2 - 1/n - 2*n*log(1+1/n); \\ result is O(1/n^2)
  ? z = -2 + log(2*Pi) - Euler;
  ? sumnummonien(n=1, f(n)) - z
  time = 149 ms.
  %12 = 0.E-212  \\ perfect
  ? sumnum(n=1, f(n)) - z
  time = 116 ms.
  %13 = -948.216[...] \\ junk

As sumnum(n = 1, print(n)) shows, we evaluate f(n) for n > 1e233 and our implementation of f suffers from massive cancellation since we are summing two terms of the order of O(1) for a result in O(1/n^2). You can either rewrite your sum so that individual terms are evaluated without cancellation or locally replace f(n) by an accurate asymptotic expansion:

  ? F = truncate( f(1/x + O(x^30)) );
  ? sumnum(n=1, if(n > 1e7, subst(F,x,1/n), f(n))) - z
  %15 = 1.1 E-212 \\ now perfect

The library syntax is sumnum(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) where an omitted tab is coded as NULL.


sumnumap(n = a, f, {tab})

Numerical summation of f(n) at high accuracy using Abel-Plana, the variable n taking values from a to +oo, where f is holomorphic in the right half-plane Re(z) > a; a must be an integer and tab, if given, is the output of sumnumapinit. The latter precomputes abscissas and weights, speeding up the computation; it also allows one to specify the behavior at infinity via sumnumapinit([+oo, asymp]).

  ? \p500
  ? z3 = zeta(3);
  ? sumpos(n = 1, n^-3) - z3
  time = 2,332 ms.
  %2 = 2.438468843 E-501
  ? sumnumap(n = 1, n^-3) - z3 \\ here slower than sumpos
  time = 2,565 ms.
  %3 = 0.E-500

Complexity. The function f will be evaluated at O(D log D) real arguments and O(D) complex arguments, where D ~ realprecision · log(10). The routine is geared towards slowly decreasing functions: if f decreases exponentially fast, then one of suminf or sumpos should be preferred. The default algorithm sumnum is usually a little slower than sumnumap but its initialization function sumnuminit becomes much faster as realprecision increases.

If f satisfies the stronger hypotheses required for Monien summation, i.e. if f(1/z) is holomorphic in a complex neighbourhood of [0,1], then sumnummonien will be faster since it only requires O(D/log D) evaluations:

  ? sumnummonien(n = 1, 1/n^3) - z3
  time = 1,128 ms.
  %4 = 0.E-500

The tab argument precomputes technical data not depending on the expression being summed and valid for a given accuracy, speeding up later calls immensely:

  ? tab = sumnumapinit();
  time = 2,567 ms.
  ? sumnumap(n = 1, 1/n^3, tab) - z3 \\ now much faster than sumpos
  time = 39 ms.
  %6 = 0.E-500
  
  ? tabmon = sumnummonieninit(); \\ Monien summation allows precomputations too
  time = 1,125 ms.
  ? sumnummonien(n = 1, 1/n^3, tabmon) - z3
  time = 2 ms.
  %8 = 0.E-500

The speedup due to precomputations becomes less impressive when the function f is expensive to evaluate, though:

  ? sumnumap(n = 1, lngamma(1+1/n)/n, tab);
  time = 10,762 ms.
  
  ? sumnummonien(n = 1, lngamma(1+1/n)/n, tabmon); \\ fewer evaluations
  time = 205 ms.

Behaviour at infinity. By default, sumnumap assumes that expr decreases slowly at infinity, but at least like O(n^-2). If the function decreases like n^α for some -2 < α < -1, then it must be indicated via

    tab = sumnumapinit([+oo, alpha]); /* alpha < 0 slow decrease */

otherwise loss of accuracy is expected. If the function decreases quickly, like exp(-α n) for some α > 0, then it must be indicated via

    tab = sumnumapinit([+oo, alpha]); /* alpha  > 0 exponential decrease */

otherwise exponent overflow will occur.

  ? sumnumap(n=1,2^-n)
   ***   at top-level: sumnumap(n=1,2^-n)
   ***                             ^----
   *** _^_: overflow in expo().
  ? tab = sumnumapinit([+oo,log(2)]); sumnumap(n=1,2^-n, tab)
  %1 = 1.000[...]

As a shortcut, one can also input

    sumnumap(n = [a, asymp], f)

instead of

    tab = sumnumapinit(asymp);
    sumnumap(n = a, f, tab)

Further examples.

  ? \p200
  ? sumnumap(n = 1, n^(-2)) - zeta(2) \\ accurate, fast
  time = 169 ms.
  %1 = -4.752728915737899559 E-212
  ? sumpos(n = 1, n^(-2)) - zeta(2)  \\ even faster
  time = 79 ms.
  %2 = 0.E-211
  ? sumpos(n=1,n^(-4/3)) - zeta(4/3)   \\ now much slower
  time = 10,518 ms.
  %3 = -9.980730723049589073 E-210
  ? sumnumap(n=1,n^(-4/3)) - zeta(4/3)  \\ fast but inaccurate
  time = 309 ms.
  %4 = -2.57[...]E-78
  ? sumnumap(n=[1,-4/3],n^(-4/3)) - zeta(4/3) \\ decrease rate: now accurate
  time = 329 ms.
  %5 = -5.418110963941205497 E-210
  
  ? tab = sumnumapinit([+oo,-4/3]);
  time = 160 ms.
  ? sumnumap(n=1, n^(-4/3), tab) - zeta(4/3) \\ faster with precomputations
  time = 175 ms.
  %7 = -5.418110963941205497 E-210
  ? sumnumap(n=1,-log(n)*n^(-4/3), tab) - zeta'(4/3)
  time = 258 ms.
  %8 = 9.125239518216767153 E-210

Note that in the case of slow decrease (α < 0), the exact decrease rate must be indicated, while in the case of exponential decrease, a rough value will do. In fact, for exponentially decreasing functions, sumnumap is given for completeness and comparison purposes only: one of suminf or sumpos should always be preferred.

  ? sumnumap(n=[1, 1], 2^-n) \\ pretend we decrease as exp(-n)
  time = 240 ms.
  %8 = 1.000[...] \\ perfect
  ? sumpos(n=1, 2^-n)
  %9 = 1.000[...] \\ perfect and instantaneous

The library syntax is sumnumap(void *E, GEN (*eval)(void*,GEN), GEN a, GEN tab, long prec) where an omitted tab is coded as NULL.


sumnumapinit({asymp})

Initialize tables for Abel-Plana summation of a series ∑ f(n), where f is holomorphic in a right half-plane. If given, asymp is of the form [+oo, α], as in intnum, and indicates the decrease rate at infinity of the functions to be summed: α > 0 encodes an exponential decrease of type exp(-α n), while -2 < α < -1 encodes a slow polynomial decrease of type n^α.

  ? \p200
  ? sumnumap(n=1, n^-2);
  time = 163 ms.
  ? tab = sumnumapinit();
  time = 160 ms.
  ? sumnumap(n=1, n^-2, tab); \\ faster
  time = 7 ms.
  
  ? tab = sumnumapinit([+oo, log(2)]); \\ decrease like 2^-n
  time = 164 ms.
  ? sumnumap(n=1, 2^-n, tab) - 1
  time = 36 ms.
  %5 = 3.0127431466707723218 E-282
  
  ? tab = sumnumapinit([+oo, -4/3]); \\ decrease like n^(-4/3)
  time = 166 ms.
  ? sumnumap(n=1, n^(-4/3), tab);
  time = 181 ms.

The library syntax is GEN sumnumapinit(GEN asymp = NULL, long prec).


sumnuminit({asymp})

Initialize tables for Euler-MacLaurin delta summation of a series with positive terms. If given, asymp is of the form [+oo, α], as in intnum, and indicates the decrease rate at infinity of the functions to be summed: α > 0 encodes an exponential decrease of type exp(-α n), while -2 < α < -1 encodes a slow polynomial decrease of type n^α.

  ? \p200
  ? sumnum(n=1, n^-2);
  time = 200 ms.
  ? tab = sumnuminit();
  time = 188 ms.
  ? sumnum(n=1, n^-2, tab); \\ faster
  time = 8 ms.
  
  ? tab = sumnuminit([+oo, log(2)]); \\ decrease like 2^-n
  time = 200 ms.
  ? sumnum(n=1, 2^-n, tab)
  time = 44 ms.
  
  ? tab = sumnuminit([+oo, -4/3]); \\ decrease like n^(-4/3)
  time = 200 ms.
  ? sumnum(n=1, n^(-4/3), tab);
  time = 221 ms.

The library syntax is GEN sumnuminit(GEN asymp = NULL, long prec).


sumnumlagrange(n = a, f, {tab})

Numerical summation of f(n) from n = a to +oo using Lagrange summation; a must be an integer, and the optional argument tab is the output of sumnumlagrangeinit. By default, the program assumes that the Nth remainder has an asymptotic expansion in integral powers of 1/N. If not, initialize tab using sumnumlagrangeinit(al), where the asymptotic expansion of the remainder is in integral powers of 1/N^al; al can be equal to 1 (default), 1/2, 1/3, or 1/4, and also to 2, but in this latter case it is the Nth remainder minus one half of the last summand which has an asymptotic expansion in integral powers of 1/N^2.

  ? \p1000
  ? z3 = zeta(3);
  ? sumpos(n = 1, n^-3) - z3
  time = 4,440 ms.
  %2 = -2.08[...] E-1001
  ? sumnumlagrange(n = 1, n^-3) - z3 \\ much faster than sumpos
  time = 25 ms.
  %3 = 0.E-1001
  ? tab = sumnumlagrangeinit();
  time = 21 ms.
  ? sumnumlagrange(n = 1, n^-3, tab) - z3
  time = 2 ms. /* even faster */
  %5 = 0.E-1001
  
  ? \p115
  ? tab = sumnumlagrangeinit([1/3,1/3]);
  time = 316 ms.
  ? sumnumlagrange(n = 1, n^-(7/3), tab) - zeta(7/3)
  time = 24 ms.
  %7 = 0.E-115
  ? sumnumlagrange(n = 1, n^(-2/3) - 3*(n^(1/3)-(n-1)^(1/3)), tab) - zeta(2/3)
  time = 32 ms.
  %8 = 1.0151767349262596893 E-115

Complexity. The function f is evaluated at O(D) integer arguments, where D ~ realprecision · log(10).

The library syntax is sumnumlagrange(void *E, GEN (*eval)(void*, GEN), GEN a, GEN tab, long prec) where an omitted tab is coded as NULL.


sumnumlagrangeinit({asymp}, {c1})

Initialize tables for Lagrange summation of a series. By default, assume that the remainder R(n) = ∑_{m ≥ n} f(m) has an asymptotic expansion R(n) ~ ∑_{i ≥ 1} a_i / n^i at infinity. The argument asymp allows one to specify different expansions:

* a real number β means R(n) = n^-β ∑_{i ≥ 1} a_i / n^i;

* a t_CLOSURE g means R(n) = g(n) ∑_{i ≥ 1} a_i / n^i (the preceding case corresponds to g(n) = n^-β);

* a pair [α,β], where β is as above and α ∈ {2, 1, 1/2, 1/3, 1/4}. We let R_2(n) = R(n) - f(n)/2 and R_α(n) = R(n) for α ≠ 2. Then R_α(n) = g(n) ∑_{i ≥ 1} a_i / n^{iα}, with g attached to β as above. Note that the initialization times increase considerably for the α in this list (1/4 being the slowest).

The constant c1 is technical and computed by the program, but can be set by the user: the number of interpolation steps will be chosen close to c1·B, where B is the bit accuracy.

  ? \p2000
  ? sumnumlagrange(n=1, n^-2);
  time = 173 ms.
  ? tab = sumnumlagrangeinit();
  time = 172 ms.
  ? sumnumlagrange(n=1, n^-2, tab);
  time = 4 ms.
  
  ? \p115
  ? sumnumlagrange(n=1, n^(-4/3)) - zeta(4/3)
  time = 84 ms.
  %1 = -0.1093[...] \\ junk: expansion in n^(1/3)
  ? tab = sumnumlagrangeinit([1/3,0]); \\ alpha = 1/3
  time = 336 ms.
  ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3)
  time = 84 ms.
  %3 = 1.0151767349262596893 E-115 \\ now OK
  
  ? tab = sumnumlagrangeinit(1/3); \\ alpha = 1, beta = 1/3: much faster
  time = 3 ms.
  ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3) \\ ... but wrong
  %5 = -0.273825[...]   \\ junk !
  ? tab = sumnumlagrangeinit(-2/3); \\ alpha = 1, beta = -2/3
  time = 3 ms.
  ? sumnumlagrange(n=1, n^(-4/3), tab) - zeta(4/3)
  %7 = 2.030353469852519379 E-115 \\ now OK

In the final example with ζ(4/3), the remainder R_1(n) is of the form n^{-1/3} ∑_{i ≥ 0} a_i / n^i, i.e. n^{2/3} ∑_{i ≥ 1} a_i / n^i. This explains the wrong result for β = 1/3 and the correction with β = -2/3.

The library syntax is GEN sumnumlagrangeinit(GEN asymp = NULL, GEN c1 = NULL, long prec).


sumnummonien(n = a, f, {tab})

Numerical summation ∑_{n ≥ a} f(n) at high accuracy, the variable n taking values from the integer a to +oo, using Monien summation, which assumes that f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].

The function f is evaluated at O(D / log D) real arguments, where D ~ realprecision · log(10). By default, assume that f(n) = O(n^-2) and has a nonzero asymptotic expansion f(n) = ∑_{i ≥ 2} a_i n^-i at infinity. To handle more complicated behaviors and allow time-saving precomputations (for a given realprecision), see sumnummonieninit.
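
For instance (a quick sketch at default accuracy; f(n) = 1/n^2 satisfies the above hypotheses):

  ? sumnummonien(n = 1, 1/n^2) - zeta(2)
  %1 = 0.E-38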

The library syntax is GEN sumnummonien0(GEN n, GEN f, GEN tab = NULL, long prec).


sumnummonieninit({asymp}, {w}, {n0 = 1})

Initialize tables for Monien summation of a series ∑_{n ≥ n0} f(n) where f(1/z) has a complex analytic continuation in a (complex) neighbourhood of the segment [0,1].

By default, assume that f(n) = O(n^-2) and has a nonzero asymptotic expansion f(n) = ∑_{i ≥ 2} a_i / n^i at infinity. Note that the sum starts at i = 2! The argument asymp allows one to specify different expansions:

* a real number β > 0 means f(n) = ∑_{i ≥ 1} a_i / n^{i + β} (now the summation starts at i = 1);

* a vector [α,β] of reals, where we must have α > 0 and α + β > 1 to ensure convergence, means that f(n) = ∑_{i ≥ 1} a_i / n^{α i + β}. Note that asymp = [1, β] is equivalent to asymp = β.

  ? \p57
  ? s = sumnum(n = 1, sin(1/sqrt(n)) / n); \\ reference point
  
  ? \p38
  ? sumnummonien(n = 1, sin(1/sqrt(n)) / n) - s
  %2 = -0.001[...] \\ completely wrong
  
  ? t = sumnummonieninit(1/2);  \\ f(n) = sum_i a_i / n^(i + 1/2)
  ? sumnummonien(n = 1, sin(1/sqrt(n)) / n, t) - s
  %4 = 0.E-37 \\ now correct

(As a matter of fact, in the above summation, the result given by sumnum at \p38 is slightly incorrect, so we had to increase the accuracy to \p57.)

The argument w is used to sum expressions of the form ∑_{n ≥ n0} f(n) w(n), for varying f as above and a fixed weight function w, where we further assume that the auxiliary sums g_w(m) = ∑_{n ≥ n0} w(n) / n^{α m + β} converge for all m ≥ 1. Note that for nonnegative integers k and weight w(n) = (log n)^k, the function g_w(m) has a simple expression in terms of the derivative ζ^(k)(α m + β); for general weights, g_w is computed using sumnum. The following variants are available:

* an integer k ≥ 0, to code w(n) = (log n)^k;

* a t_CLOSURE computing the values w(n), where we assume that w(n) = O(n^ε) for all ε > 0;

* a vector [w, fast], where w is a closure as above and fast is a scalar; we assume that w(n) = O(n^fast); note that w = [w, 0] is equivalent to w = w. Note that if w decreases exponentially, suminf should be used instead.

The subsequent calls to sumnummonien must use the same value of n0 as was used here.

  ? \p300
  ? sumnummonien(n = 1, n^-2*log(n)) + zeta'(2)
  time = 328 ms.
  %1 = -1.323[...]E-6 \\ completely wrong, f does not satisfy hypotheses !
  ? tab = sumnummonieninit(, 1); \\ codes w(n) = log(n)
  time = 3,993 ms.
  ? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
  time = 41 ms.
  %3 = -5.562684646268003458 E-309  \\ now perfect
  
  ? tab = sumnummonieninit(, n->log(n)); \\ generic, slower
  time = 9,808 ms.
  ? sumnummonien(n = 1, n^-2, tab) + zeta'(2)
  time = 40 ms.
  %5 = -5.562684646268003458 E-309  \\ identical result

The library syntax is GEN sumnummonieninit(GEN asymp = NULL, GEN w = NULL, GEN n0 = NULL, long prec).


sumnumrat(F, a)

∑_{n ≥ a} F(n), where F is a rational function of degree less than or equal to -2 and where the poles of F at integers ≥ a are omitted from the summation. The argument a must be a t_INT or -oo.

  ? sumnumrat(1/(x^2+1)^2,0)
  %1 = 1.3068369754229086939178621382829073480
  ? sumnumrat(1/x^2, -oo) \\ value at x=0 is discarded
  %2 = 3.2898681336964528729448303332920503784
  ? 2*zeta(2)
  %3 = 3.2898681336964528729448303332920503784

When deg F = -1, we define ∑_{n = -oo}^{+oo} F(n) := ∑_{n ≥ 0} (F(n) + F(-1-n)):

  ? sumnumrat(1/x, -oo)
  %4 = 0.E-38

The library syntax is GEN sumnumrat(GEN F, GEN a, long prec).


sumnumsidi(n = a, f, {safe = 1})

Numerical summation of f(n) from n = a to + oo using Sidi summation; a must be an integer. The optional argument safe (set by default to 1) can be set to 0 for a faster but much less robust program; this is likely to lose accuracy when the sum is non-alternating.

  ? \pb3328
  ? z = zeta(2);
  ? exponent(sumnumsidi(n = 1, 1/n^2) - z)
  time = 1,507 ms.
  %2 = -3261 \\ already loses some decimals
  ? exponent(sumnumsidi(n = 1, 1/n^2, 0) - z)
  time = 442 ms. \\ unsafe is much faster
  %3 = -2108     \\ ... but very wrong
  
  ? l2 = log(2);
  ? exponent(sumnumsidi(n = 1, (-1)^(n-1)/n) - l2)
  time = 718 ms.
  %5 = -3328 \\ not so slow and perfect
  ? exponent(sumnumsidi(n = 1, (-1)^(n-1)/n, 0) - l2)
  time = 504 ms.
  %6 = -3328 \\ still perfect in unsafe mode, not so much faster

Complexity. If the bitprecision is b, we try to achieve an absolute error less than 2^-b. The function f is evaluated at O(b) consecutive integer arguments at bit accuracy 1.56 b (resp. b) in safe (resp. unsafe) mode.

The library syntax is GEN sumnumsidi0(GEN n, GEN f, long safe, long prec).


sumpos(X = a, expr, {flag = 0})

Numerical summation of the series expr, which must be a series of terms having the same sign, the formal variable X starting at a. The algorithm uses Van Wijngaarden's trick for converting such a series into an alternating one, then sumalt. For regular functions, the function sumnum is in general much faster once the initializations have been made using sumnuminit. Contrary to sumnum, sumpos allows functions defined only at integers:

  ? sumnum(n = 0, 1/n!)
   ***   at top-level: sumnum(n=0,1/n!)
   ***                              ^---
   ***   incorrect type in gtos [integer expected] (t_FRAC).
  ? sumpos(n = 0, 1/n!) - exp(1)
  %2 = -1.0862155548773347717 E-33

On the other hand, when the function accepts general real numbers, it is usually advantageous to replace n by n * 1.0 in the sumpos call, in particular when rational functions are involved:

  ? \p500
  ? sumpos(n = 0, n^7 / (n^9+n+1));
  time = 6,108 ms.
  ? sumpos(n = 0, n *= 1.; n^7 / (n^9+n+1));
  time = 2,788 ms.
  ? sumnumrat(n^7 / (n^9+n+1), 0);
  time = 4 ms.

In the last example, sumnumrat is of course much faster but it only applies to rational functions.

The routine is heuristic and assumes that expr is more or less a decreasing function of X. In particular, the result will be completely wrong if expr is 0 too often. We do not check either that all terms have the same sign: as with sumalt, this function should be used to try and guess the value of an infinite sum.
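
As an illustration of this warning (a hedged sketch), a summand vanishing on even integers should first be rewritten so that no zero terms occur:

  ? sumpos(n = 1, if (n%2, 1/n^2, 0));   \\ half the terms vanish: do not trust this
  ? sumpos(n = 1, 1/(2*n-1)^2) - Pi^2/8  \\ same sum, no zero terms
  %2 = 0.E-38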

If flag = 1, use sumalt(,1) instead of sumalt(,0), see Section se:sumalt. This variant requires more stringent analytic properties for rigorous use, but allows computing fewer series terms.

To reach accuracy 10^-p, both algorithms require O(p^2) space; furthermore, assuming the terms decrease polynomially (in O(n^-C)), both need to compute O(p^2) terms. The sumpos(,1) variant has a smaller implied constant (roughly 1.5 times smaller). Since the sumalt(,1) overhead is now small compared to the time needed to compute series terms, this last variant should be about 1.5 times faster. On the other hand, the achieved accuracy may be much worse: as for sumalt, since conditions for rigorous use are hard to check, the routine is best used heuristically.

The library syntax is sumpos(void *E, GEN (*eval)(void*,GEN),GEN a,long prec). Also available is sumpos2 with the same arguments (flag = 1).