Vectors, matrices, linear algebra and sets

Note that most linear algebra functions operating on subspaces defined by generating sets (such as mathnf, qflll, etc.) take matrices as arguments. As usual, the generating vectors are taken to be the columns of the given matrix.

Since PARI does not have a strong typing system, scalars live in unspecified commutative base rings. It is very difficult to write robust linear algebra routines in such a general setting. We thus assume that the base ring is a domain and work over its field of fractions. If the base ring is not a domain, one gets an error as soon as a nonzero pivot turns out to be noninvertible. Some functions, e.g. mathnf or mathnfmod, specifically assume that the base ring is ℤ.


algdep(z, k, {flag = 0})

z being real/complex, or p-adic, finds a polynomial (in the variable 'x) of degree at most k, with integer coefficients, having z as an approximate root. Note that the polynomial which is obtained is not necessarily the "correct" one. In fact it is not even guaranteed to be irreducible. One can check the closeness either by a polynomial evaluation (use subst), or by computing the roots of the polynomial given by algdep (use polroots or polrootspadic).

Internally, lindep([1, z, ..., z^k], flag) is used. A nonzero value of flag may improve on the default behavior if the input number is known to a huge accuracy, and you suspect the last bits are incorrect: if flag > 0 the computation is done with an accuracy of flag decimal digits; to get meaningful results, the parameter flag should be smaller than the number of correct decimal digits in the input. But default values are usually sufficient, so try without flag first:

  ? \p200
  ? z = 2^(1/6)+3^(1/5);
  ? algdep(z, 30);      \\ right in 63ms
  ? algdep(z, 30, 100); \\ wrong in 39ms
  ? algdep(z, 30, 170); \\ right in 61ms
  ? algdep(z, 30, 200); \\ wrong in 146ms
  ? \p250
  ? z = 2^(1/6)+3^(1/5); \\ recompute to new, higher, accuracy !
  ? algdep(z, 30);      \\ right in 68ms
  ? algdep(z, 30, 200); \\ right in 68ms
  ? \p500
  ? algdep(2^(1/6)+3^(1/5), 30); \\ right in 138ms
  ? \p1000
  ? algdep(2^(1/6)+3^(1/5), 30); \\ right in 276ms

The changes in realprecision only affect the quality of the initial approximation to 2^(1/6) + 3^(1/5); algdep itself uses exact operations. The size of its operands depends on the accuracy of the input of course: a more accurate input means slower operations.

Proceeding by increments of 5 digits of accuracy, algdep with default flag produces its first correct result at 195 digits, and from then on a steady stream of correct results:

    \\ assume T contains the correct result, for comparison
    forstep(d=100, 250, 5, \
      localprec(d);        \
      print(d, " ", algdep(2^(1/6)+3^(1/5),30) == T))

This example is the test case studied in a 2000 paper by Borwein and Lisonek: Applications of integer relation algorithms, Discrete Math., 217, p. 65–82. The version of PARI tested there was 1.39, which succeeded reliably from precision 265 on, in about 1000 times as much time as the current version (on slower hardware of course).

Note that this function does not work if z is a power series. The function seralgdep can be used in this case to find linear relations with polynomial coefficients between powers of z.

The library syntax is GEN algdep0(GEN z, long k, long flag). Also available is GEN algdep(GEN z, long k) (flag = 0).


bestapprnf(V, T, {rootT})

T being an integral polynomial and V being a scalar, vector, or matrix with complex coefficients, return a reasonable approximation of V with polmods modulo T. T can also be any number field structure, in which case the minimal polynomial attached to the structure (T.pol) is used. The rootT argument, if present, must be an element of polroots(T) (or T.pol), i.e. a complex root of T fixing an embedding of ℚ[x]/(T) into ℂ.

  ? bestapprnf(sqrt(5), polcyclo(5))
  %1 = Mod(-2*x^3 - 2*x^2 - 1, x^4 + x^3 + x^2 + x + 1)
  ? bestapprnf(sqrt(5), polcyclo(5), exp(4*I*Pi/5))
  %2 = Mod(2*x^3 + 2*x^2 + 1, x^4 + x^3 + x^2 + x + 1)

When the output has huge rational coefficients, try to increase the working realbitprecision: if the answer does not stabilize, consider that the reconstruction failed. Beware that if T is not Galois over ℚ, some embeddings may not allow one to reconstruct V:

  ? T = x^3-2; vT = polroots(T); z = 3*2^(1/3)+1;
  ? bestapprnf(z, T, vT[1])
  %2 = Mod(3*x + 1, x^3 - 2)
  ? bestapprnf(z, T, vT[2])
  %3 = 4213714286230872/186454048314072  \\ close to 3*2^(1/3) + 1

The library syntax is GEN bestapprnf(GEN V, GEN T, GEN rootT = NULL, long prec).


charpoly(A, {v = 'x}, {flag = 5})

characteristic polynomial of A with respect to the variable v, i.e. determinant of v*I-A if A is a square matrix.

  ? charpoly([1,2;3,4])
  %1 = x^2 - 5*x - 2
  ? charpoly([1,2;3,4],, 't)
  %2 = t^2 - 5*t - 2

If A is not a square matrix but a scalar, the function returns the characteristic polynomial of the map "multiplication by A":

  ? charpoly(Mod(x+2, x^3-2))
  %1 = x^3 - 6*x^2 + 12*x - 10
  ? charpoly(I)
  %2 = x^2 + 1
  ? charpoly(quadgen(5))
  %3 = x^2 - x - 1
  ? charpoly(ffgen(ffinit(2,4)))
  %4 = Mod(1, 2)*x^4 + Mod(1, 2)*x^3 + Mod(1, 2)*x^2 + Mod(1, 2)*x + Mod(1, 2)

The value of flag is only significant for matrices, and we advise to stick to the default value. Let n be the dimension of A.

If flag = 0, same method (Le Verrier's) as for computing the adjoint matrix, i.e. using the traces of the powers of A. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 1, uses Lagrange interpolation which is usually the slowest method. Assumes that n! is invertible; uses O(n^4) scalar operations.

If flag = 2, uses the Hessenberg form. Assumes that the base ring is a field. Uses O(n^3) scalar operations, but suffers from coefficient explosion unless the base field is finite or ℝ.

If flag = 3, uses Berkowitz's division free algorithm, valid over any ring (commutative, with unit). Uses O(n^4) scalar operations.

If flag = 4, A must be integral. Uses a modular algorithm: Hessenberg form for various small primes, then Chinese remainders.

If flag = 5 (default), uses the "best" method given A. This means we use Berkowitz unless the base ring is ℤ (use flag = 4) or a field where coefficient explosion does not occur, e.g. a finite field or the reals (use flag = 2).
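
All these methods compute the same characteristic polynomial, so a quick way to exercise the flags is to compare one of them against another on a small example:

  ? M = [1,2;3,4];
  ? charpoly(M,, 2) == charpoly(M,, 3)  \\ Hessenberg vs. Berkowitz
  %2 = 1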

The library syntax is GEN charpoly0(GEN A, long v = -1, long flag) where v is a variable number. Also available are GEN charpoly(GEN x, long v) (flag = 5), GEN caract(GEN A, long v) (flag = 1), GEN carhess(GEN A, long v) (flag = 2), GEN carberkowitz(GEN A, long v) (flag = 3) and GEN caradj(GEN A, long v, GEN *pt). In this last case, if pt is not NULL, *pt receives the address of the adjoint matrix of A (see matadjoint), so both can be obtained at once.


concat(x, {y})

Concatenation of x and y. If x or y is not a vector or matrix, it is considered as a one-dimensional vector. All types are allowed for x and y, but the sizes must be compatible. Note that matrices are concatenated horizontally, i.e. the number of rows stays the same. Using transpositions, one can concatenate them vertically, but it is often simpler to use matconcat.

  ? x = matid(2); y = 2*matid(2);
  ? concat(x,y)
  %2 =
  [1 0 2 0]
  
  [0 1 0 2]
  ? concat(x~,y~)~
  %3 =
  [1 0]
  
  [0 1]
  
  [2 0]
  
  [0 2]
  ? matconcat([x;y])
  %4 =
  [1 0]
  
  [0 1]
  
  [2 0]
  
  [0 2]

To concatenate vectors sideways (i.e. to obtain a two-row or two-column matrix), use Mat instead, or matconcat:

  ? x = [1,2];
  ? y = [3,4];
  ? concat(x,y)
  %3 = [1, 2, 3, 4]
  
  ? Mat([x,y]~)
  %4 =
  [1 2]
  
  [3 4]
  ? matconcat([x;y])
  %5 =
  [1 2]
  
  [3 4]

Concatenating a row vector to a matrix having the same number of columns will add the row to the matrix (top row if the vector is x, i.e. comes first, and bottom row otherwise).

The empty matrix [;] is considered to have a number of rows compatible with any operation, in particular concatenation. (Note that this is not the case for empty vectors [ ] or [ ]~.)

If y is omitted, x has to be a row vector or a list, in which case its elements are concatenated, from left to right, using the above rules.

  ? concat([1,2], [3,4])
  %1 = [1, 2, 3, 4]
  ? a = [[1,2]~, [3,4]~]; concat(a)
  %2 =
  [1 3]
  
  [2 4]
  
  ? concat([1,2; 3,4], [5,6]~)
  %3 =
  [1 2 5]
  
  [3 4 6]
  ? concat([%, [7,8]~, [1,2,3,4]])
  %5 =
  [1 2 5 7]
  
  [3 4 6 8]
  
  [1 2 3 4]

The library syntax is GEN gconcat(GEN x, GEN y = NULL). GEN gconcat1(GEN x) is a shortcut for gconcat(x,NULL).


dirpowers(n, x)

For nonnegative n and complex number x, return the vector with n components [1^x, 2^x, ..., n^x].

  ? dirpowers(5, 2)
  %1 = [1, 4, 9, 16, 25]
  ? dirpowers(5, 1/2)
  %2 = [1, 1.414..., 1.732..., 2.000..., 2.236...]

When n ≤ 0, the function returns the empty vector [].

The library syntax is GEN dirpowers(long n, GEN x, long prec).


forqfvec(v, q, b, expr)

q being a square and symmetric integral matrix representing a positive definite quadratic form, evaluate expr for all pairs of nonzero vectors (-v,v) such that q(v) ≤ b. The formal variable v runs through representatives of all such pairs in turn.

  ? forqfvec(v, [3,2;2,3], 3, print(v))
  [0, 1]~
  [1, 0]~
  [-1, 1]~

The library syntax is void forqfvec0(GEN v, GEN q = NULL, GEN b). The following functions are also available: void forqfvec(void *E, long (*fun)(void *, GEN, GEN, double), GEN q, GEN b): Evaluate fun(E,U,v,m) on all v such that q(U v) < b, where U is a t_MAT, v is a t_VECSMALL and m = q(v) is a C double. The function fun must return 0, unless forqfvec should stop, in which case, it should return 1.

void forqfvec1(void *E, long (*fun)(void *, GEN), GEN q, GEN b): Evaluate fun(E,v) on all v such that q(v) < b, where v is a t_COL. The function fun must return 0, unless forqfvec should stop, in which case, it should return 1.


lindep(v, {flag = 0})

finds a small nontrivial integral linear combination between components of v. If none can be found return an empty vector.

If v is a vector with real/complex entries we use a floating point (variable precision) LLL algorithm. If flag = 0 the accuracy is chosen internally using a crude heuristic. If flag > 0 the computation is done with an accuracy of flag decimal digits. To get meaningful results in the latter case, the parameter flag should be smaller than the number of correct decimal digits in the input.

  ? lindep([sqrt(2), sqrt(3), sqrt(2)+sqrt(3)])
  %1 = [-1, -1, 1]~

If v is p-adic, flag is ignored and the algorithm LLL-reduces a suitable (dual) lattice.

  ? lindep([1, 2 + 3 + 3^2 + 3^3 + 3^4 + O(3^5)])
  %2 = [1, -2]~

If v is a matrix (or a vector of column vectors, or a vector of row vectors), flag is ignored and the function returns a non trivial kernel vector if one exists, else an empty vector.

  ? lindep([1,2,3;4,5,6;7,8,9])
  %3 = [1, -2, 1]~
  ? lindep([[1,0], [2,0]])
  %4 = [2, -1]~
  ? lindep([[1,0], [0,1]])
  %5 = []~

If v contains polynomials or power series over some base field, finds a linear relation with coefficients in the field.

  ? lindep([x*y, x^2 + y, x^2*y + x*y^2, 1])
  %4 = [y, y, -1, -y^2]~

For better control, it is preferable to use t_POL rather than t_SER in the input, otherwise one gets a linear combination which is t-adically small, but not necessarily 0. Indeed, power series are first converted to the minimal absolute accuracy occurring among the entries of v (which can cause some coefficients to be ignored), then truncated to polynomials:

  ? v = [t^2+O(t^4), 1+O(t^2)]; L=lindep(v)
  %1 = [1, 0]~
  ? v*L
  %2 = t^2+O(t^4)  \\ small but not 0

The library syntax is GEN lindep0(GEN v, long flag).


matadjoint(M, {flag = 0})

adjoint matrix of M, i.e. a matrix N of cofactors of M, satisfying M*N = det(M)*Id. M must be a (not necessarily invertible) square matrix of dimension n. If flag is 0 or omitted, we try to use Leverrier-Faddeev's algorithm, which assumes that n! is invertible. If it fails or flag = 1, compute T = charpoly(M) independently first and return (-1)^(n-1) (T(x)-T(0))/x evaluated at M.
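
For instance, a quick check of the defining identity on a small integer matrix:

  ? M = [1,2;3,4]; N = matadjoint(M);
  ? M*N == matdet(M) * matid(2)
  %2 = 1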

  ? a = [1,2,3;3,4,5;6,7,8] * Mod(1,4);
  ? matadjoint(a)
  %2 =
  [Mod(1, 4) Mod(1, 4) Mod(2, 4)]
  
  [Mod(2, 4) Mod(2, 4) Mod(0, 4)]
  
  [Mod(1, 4) Mod(1, 4) Mod(2, 4)]

Both algorithms use O(n^4) operations in the base ring. Over a field, they are usually slower than computing the characteristic polynomial or the inverse of M directly.

The library syntax is GEN matadjoint0(GEN M, long flag). Also available are GEN adj(GEN x) (flag = 0) and GEN adjsafe(GEN x) (flag = 1).


matcompanion(x)

The left companion matrix to the nonzero polynomial x.
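
For instance, the companion matrix of a monic polynomial T has characteristic polynomial T:

  ? T = x^3 + 2*x + 1;
  ? charpoly(matcompanion(T)) == T
  %2 = 1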

The library syntax is GEN matcompanion(GEN x).


matconcat(v)

Returns a t_MAT built from the entries of v, which may be a t_VEC (concatenate horizontally), a t_COL (concatenate vertically), or a t_MAT (concatenate vertically each column, and concatenate vertically the resulting matrices). The entries of v are always considered as matrices: they can themselves be a t_VEC (seen as a row matrix), a t_COL (seen as a column matrix), a t_MAT, or a scalar (seen as a 1 x 1 matrix).

  ? A=[1,2;3,4]; B=[5,6]~; C=[7,8]; D=9;
  ? matconcat([A, B]) \\ horizontal
  %1 =
  [1 2 5]
  
  [3 4 6]
  ? matconcat([A, C]~) \\ vertical
  %2 =
  [1 2]
  
  [3 4]
  
  [7 8]
  ? matconcat([A, B; C, D]) \\ block matrix
  %3 =
  [1 2 5]
  
  [3 4 6]
  
  [7 8 9]

If the dimensions of the entries to concatenate do not match up, the above rules are extended as follows:

* each entry v[i,j] of v has a natural length and height: 1 x 1 for a scalar, 1 x n for a t_VEC of length n, n x 1 for a t_COL, m x n for an m x n t_MAT

* let H_i be the maximum over j of the heights of the v[i,j], let L_j be the maximum over i of the lengths of the v[i,j]. The dimensions of the (i,j)-th block in the concatenated matrix are H_i x L_j.

* a scalar s = v[i,j] is considered as s times an identity matrix of the block dimension min(H_i, L_j)

* blocks are extended by 0 columns on the right and 0 rows at the bottom, as needed.

  ? matconcat([1, [2,3]~, [4,5,6]~]) \\ horizontal
  %4 =
  [1 2 4]
  
  [0 3 5]
  
  [0 0 6]
  ? matconcat([1, [2,3], [4,5,6]]~) \\ vertical
  %5 =
  [1 0 0]
  
  [2 3 0]
  
  [4 5 6]
  ? matconcat([B, C; A, D]) \\ block matrix
  %6 =
  [5 0 7 8]
  
  [6 0 0 0]
  
  [1 2 9 0]
  
  [3 4 0 9]
  ? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
  ? matconcat(matdiagonal([U, V])) \\ block diagonal
  %7 =
  [1 2 0 0 0]
  
  [3 4 0 0 0]
  
  [0 0 1 2 3]
  
  [0 0 4 5 6]
  
  [0 0 7 8 9]

The library syntax is GEN matconcat(GEN v).


matdet(x, {flag = 0})

Determinant of the square matrix x.

If flag = 0, uses an appropriate algorithm depending on the coefficients:

* integer entries: modular method due to Dixon, Pernet and Stein.

* real or p-adic entries: classical Gaussian elimination using maximal pivot.

* intmod entries: classical Gaussian elimination using first nonzero pivot.

* other cases: Gauss-Bareiss.

If flag = 1, uses classical Gaussian elimination with appropriate pivoting strategy (maximal pivot for real or p-adic coefficients). This is usually worse than the default.
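
Two trivial examples (both strategies agree, of course):

  ? matdet([1,2;3,4])
  %1 = -2
  ? matdet([1,2;3,4], 1)
  %2 = -2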

The library syntax is GEN det0(GEN x, long flag). Also available are GEN det(GEN x) (flag = 0), GEN det2(GEN x) (flag = 1) and GEN ZM_det(GEN x) for integer entries.


matdetint(B)

Let B be an m x n matrix with integer coefficients. The determinant D of the lattice generated by the columns of B is the square root of det(B^T B) if B has maximal rank m, and 0 otherwise.

This function uses the Gauss-Bareiss algorithm to compute a positive multiple of D. When B is square, the function actually returns D = |det B|.

This function is useful in conjunction with mathnfmod, which needs to know such a multiple. If the rank is maximal but the matrix is nonsquare, you can obtain D exactly using

    matdet( mathnfmod(B, matdetint(B)) )

Note that as soon as one of the dimensions gets large (m or n is larger than 20, say), it will often be much faster to use mathnf(B, 1) or mathnf(B, 4) directly.
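
A small illustration; in the nonsquare case below, the columns generate the lattice ℤ·[1,4]~ + ℤ·[0,3]~, of determinant 3:

  ? matdetint([2,1; 1,2])   \\ square case: returns |det| = 3
  %1 = 3
  ? B = [1,2,3; 4,5,6];
  ? matdet( mathnfmod(B, matdetint(B)) )
  %3 = 3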

The library syntax is GEN detint(GEN B).


matdetmod(x, d)

Given a matrix x with t_INT entries and d an arbitrary positive integer, return the determinant of x modulo d.

  ? A = [4,2,3; 4,5,6; 7,8,9]
  
  ? matdetmod(A,27)
  %2 = 9

Note that using the generic function matdet on a matrix with t_INTMOD entries uses Gaussian reduction and will fail in general when the modulus is not prime.

  ? matdet(A * Mod(1,27))
   ***   at top-level: matdet(A*Mod(1,27))
   ***                 ^ —  —  —  —  —  — 
   *** matdet: impossible inverse in Fl_inv: Mod(3, 27).

The library syntax is GEN matdetmod(GEN x, GEN d).


matdiagonal(x)

x being a vector, creates the diagonal matrix whose diagonal entries are those of x.

  ? matdiagonal([1,2,3])
  %1 =
  [1 0 0]
  
  [0 2 0]
  
  [0 0 3]

Block diagonal matrices are easily created using matconcat:

  ? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
  ? matconcat(matdiagonal([U, V]))
  %1 =
  [1 2 0 0 0]
  
  [3 4 0 0 0]
  
  [0 0 1 2 3]
  
  [0 0 4 5 6]
  
  [0 0 7 8 9]

The library syntax is GEN diagonal(GEN x).


mateigen(x, {flag = 0})

Returns the (complex) eigenvectors of x as columns of a matrix. If flag = 1, return [L,H], where L contains the eigenvalues and H the corresponding eigenvectors; multiple eigenvalues are repeated according to the eigenspace dimension (which may be less than the eigenvalue multiplicity in the characteristic polynomial).

This function first computes the characteristic polynomial of x and approximates its complex roots (λi), then tries to compute the eigenspaces as kernels of the x - λi. This algorithm is ill-conditioned and is likely to miss kernel vectors if some roots of the characteristic polynomial are close, in particular if it has multiple roots.

  ? A = [13,2; 10,14]; mateigen(A)
  %1 =
  [-1/2 2/5]
  
  [   1   1]
  ? [L,H] = mateigen(A, 1);
  ? L
  %3 = [9, 18]
  ? H
  %4 =
  [-1/2 2/5]
  
  [   1   1]
  ? A * H == H * matdiagonal(L)
  %5 = 1

For symmetric matrices, use qfjacobi instead; for Hermitian matrices, compute

   A = real(x);
   B = imag(x);
   y = matconcat([A, -B; B, A]);

and apply qfjacobi to y.

The library syntax is GEN mateigen(GEN x, long flag, long prec). Also available is GEN eigen(GEN x, long prec) (flag = 0)


matfrobenius(M, {flag}, {v = 'x})

Returns the Frobenius form of the square matrix M. If flag = 1, returns only the elementary divisors as a vector of polynomials in the variable v. If flag = 2, returns a two-component vector [F,B] where F is the Frobenius form and B is the basis change so that M = B^(-1) F B.
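
A quick check of the flag = 2 output against its defining relation, on an arbitrary matrix:

  ? M = [1,2;3,4];
  ? [F,B] = matfrobenius(M, 2);
  ? M == B^(-1)*F*B
  %3 = 1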

The library syntax is GEN matfrobenius(GEN M, long flag, long v = -1) where v is a variable number.


mathess(x)

Returns a matrix similar to the square matrix x, which is in upper Hessenberg form (zero entries below the first subdiagonal).
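
Since the output is similar to x, it has for instance the same characteristic polynomial:

  ? M = [1,2,3; 4,5,6; 7,8,9];
  ? charpoly(mathess(M)) == charpoly(M)
  %2 = 1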

The library syntax is GEN hess(GEN x).


mathilbert(n)

Creates the Hilbert matrix of order n ≥ 0, i.e. the square matrix H whose coefficient H[i,j] is 1 / (i+j-1). This matrix is ill-conditioned but its inverse has integer entries.
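
For instance:

  ? H = mathilbert(3)
  %1 =
  [  1 1/2 1/3]
  
  [1/2 1/3 1/4]
  
  [1/3 1/4 1/5]
  
  ? H^(-1)   \\ integral entries
  %2 =
  [  9  -36   30]
  
  [-36  192 -180]
  
  [ 30 -180  180]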

The library syntax is GEN mathilbert(long n).


mathnf(M, {flag = 0})

Let R be a Euclidean ring, equal to ℤ or to K[X] for some field K. If M is a (not necessarily square) matrix with entries in R, this routine finds the upper triangular Hermite normal form of M. If the rank of M is equal to its number of rows, this is a square matrix. In general, the columns of the result form a basis of the R-module spanned by the columns of M.

The values of flag are:

* 0 (default): only return the Hermite normal form H

* 1 (complete output): return [H,U], where H is the Hermite normal form of M, and U is a transformation matrix such that MU = [0|H]. The matrix U belongs to GL(R). When M has a large kernel, the entries of U are in general huge.

For these two values, we use a naive algorithm, which behaves well in small dimension only. Larger values correspond to different algorithms, are restricted to integer matrices, and all output the unimodular matrix U. From now on all matrices have integral entries.

* flag = 4, returns [H,U] as in "complete output" above, using a variant of LLL reduction along the way. The matrix U is provably small in the L^2 sense, and often close to optimal; but the reduction is in general slow, although provably polynomial-time.

* flag = 5, uses Batut's algorithm and outputs [H,U,P], such that H and U are as before and P is a permutation of the rows such that P applied to MU gives H. This is in general faster than flag = 4 but the matrix U is usually worse; it is heuristically smaller than with the default algorithm.

When the matrix is dense and the dimension is large (bigger than 100, say), flag = 4 will be fastest. When M has maximal rank, then

    H = mathnfmod(M, matdetint(M))

will be even faster. You can then recover U as M^(-1) H.

  ? M = matrix(3,4,i,j,random([-5,5]))
  %1 =
  [ 0 2  3  0]
  
  [-5 3 -5 -5]
  
  [ 4 3 -5  4]
  
  ? [H,U] = mathnf(M, 1);
  ? U
  %3 =
  [-1 0 -1 0]
  
  [ 0 5  3 2]
  
  [ 0 3  1 1]
  
  [ 1 0  0 0]
  
  ? H
  %5 =
  [19 9 7]
  
  [ 0 9 1]
  
  [ 0 0 1]
  
  ? M*U
  %6 =
  [0 19 9 7]
  
  [0  0 9 1]
  
  [0  0 0 1]

For convenience, M is allowed to be a t_VEC, which is then automatically converted to a t_MAT, as per the Mat function. For instance to solve the generalized extended gcd problem, one may use

  ? v = [116085838, 181081878, 314252913,10346840];
  ? [H,U] = mathnf(v, 1);
  ? U
  %2 =
  [ 103 -603    15  -88]
  
  [-146   13 -1208  352]
  
  [  58  220   678 -167]
  
  [-362 -144   381 -101]
  ? v*U
  %3 = [0, 0, 0, 1]

This also allows one to input a matrix as a t_VEC of t_COLs of the same length (which Mat would concatenate to the t_MAT having those columns):

  ? v = [[1,0,4]~, [3,3,4]~, [0,-4,-5]~]; mathnf(v)
  %1 =
  [47 32 12]
  
  [ 0  1  0]
  
  [ 0  0  1]

The library syntax is GEN mathnf0(GEN M, long flag). Also available are GEN hnf(GEN M) (flag = 0) and GEN hnfall(GEN M) (flag = 1). To reduce huge relation matrices (sparse with small entries, say dimension 400 or more), you can use the pair hnfspec / hnfadd. Since this is quite technical and the calling interface may change, they are not documented yet. Look at the code in basemath/hnf_snf.c.


mathnfmod(x, d)

If x is a (not necessarily square) matrix of maximal rank with integer entries, and d is a multiple of the (nonzero) determinant of the lattice spanned by the columns of x, finds the upper triangular Hermite normal form of x.

If the rank of x is equal to its number of rows, the result is a square matrix. In general, the columns of the result form a basis of the lattice spanned by the columns of x. Even when d is known, this is in general slower than mathnf but uses much less memory.
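
Since the Hermite normal form is unique, the result coincides with that of mathnf; a toy example:

  ? x = [3,1; 1,3];   \\ determinant 8
  ? mathnfmod(x, 8) == mathnf(x)
  %2 = 1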

The library syntax is GEN hnfmod(GEN x, GEN d).


mathnfmodid(x, d)

Outputs the (upper triangular) Hermite normal form of x concatenated with the diagonal matrix with diagonal d. Assumes that x has integer entries. Variant: if d is an integer instead of a vector, concatenate d times the identity matrix.

  ? m=[0,7;-1,0;-1,-1]
  %1 =
  [ 0  7]
  
  [-1  0]
  
  [-1 -1]
  ? mathnfmodid(m, [6,2,2])
  %2 =
  [2 1 1]
  
  [0 1 0]
  
  [0 0 1]
  ? mathnfmodid(m, 10)
  %3 =
  [10 7 3]
  
  [ 0 1 0]
  
  [ 0 0 1]

The library syntax is GEN hnfmodid(GEN x, GEN d).


mathouseholder(Q, v)

applies a sequence Q of Householder transforms, as returned by matqr(M,1), to the vector or matrix v.

  ? m = [2,1; 3,2]; \\ some random matrix
  ? [Q,R] = matqr(m);
  ? Q
  %3 =
  [-0.554... -0.832...]
  
  [-0.832... 0.554...]
  
  ? R
  %4 =
  [-3.605... -2.218...]
  
  [0         0.277...]
  
  ? v = [1, 2]~; \\ some random vector
  ? Q * v
  %6 = [-2.218..., 0.277...]~
  
  ? [q,r] = matqr(m, 1);
  ? exponent(r - R) \\ r is the same as R
  %8 = -128
  ? q \\ but q has a different structure
  %9 = [[0.0494..., [5.605..., 3]]]
  ? mathouseholder(q, v) \\ applied to v
  %10 = [-2.218..., 0.277...]~

The point of the Householder structure is that it efficiently represents the linear operator v ↦ Q v in a more stable way than expanding the matrix Q:

  ? m = mathilbert(20); v = vectorv(20,i,i^2+1);
  ? [Q,R] = matqr(m);
  ? [q,r] = matqr(m, 1);
  ? \p100
  ? [q2,r2] = matqr(m, 1); \\ recompute at higher accuracy
  ? exponent(R - r)
  %5 = -127
  ? exponent(R - r2)
  %6 = -127
  ? exponent(mathouseholder(q,v) - mathouseholder(q2,v))
  %7 = -119
  ? exponent(Q*v - mathouseholder(q2,v))
  %8 = 9

We see that R is OK with or without a flag to matqr but that multiplying by Q is considerably less precise than applying the sequence of Householder transforms encoded by q.

The library syntax is GEN mathouseholder(GEN Q, GEN v).


matid(n)

Creates the n x n identity matrix.

The library syntax is GEN matid(long n).


matimage(x, {flag = 0})

Gives a basis for the image of the matrix x as columns of a matrix. A priori the matrix can have entries of any type. If flag = 0, use standard Gauss pivot. If flag = 1, use matsupplement (much slower: keep the default flag!).
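
For instance, on a rank 2 matrix:

  ? M = [1,2,3; 4,5,6; 7,8,9];
  ? matsize(matimage(M))   \\ a basis of the image: 2 columns of length 3
  %2 = [3, 2]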

The library syntax is GEN matimage0(GEN x, long flag). Also available is GEN image(GEN x) (flag = 0).


matimagecompl(x)

Gives the vector of the column indices which are not extracted by the function matimage, as a permutation (t_VECSMALL). Hence the number of components of matimagecompl(x) plus the number of columns of matimage(x) is equal to the number of columns of the matrix x.
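
A quick illustration of this relation:

  ? M = [1,2,3; 4,5,6; 7,8,9];
  ? #matimagecompl(M) + #matimage(M) == #M
  %2 = 1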

The library syntax is GEN imagecompl(GEN x).


matimagemod(x, d, &U)

Gives a Howell basis (unique representation for submodules of (ℤ/dℤ)n) for the image of the matrix x modulo d as columns of a matrix H. The matrix x must have t_INT entries, and d can be an arbitrary positive integer. If U is present, set it to a matrix such that AU = H.

  ? A = [2,1;0,2];
  ? matimagemod(A,6,&U)
  %2 =
  [1 0]
  
  [0 2]
  
  ? U
  %3 =
  [5 1]
  
  [3 4]
  
  ? (A*U)%6
  %4 =
  [1 0]
  
  [0 2]

Caveat. In general the number of columns of the Howell form is not the minimal number of generators of the submodule. Example:

  ? matimagemod([1;2],4)
  %5 =
  [2 1]
  
  [0 2]

Caveat 2. In general the matrix U is not invertible, even if A and H have the same size. Example:

  ? matimagemod([4,1;0,4],8,&U)
  %6 =
  [2 1]
  
  [0 4]
  
  ? U
  %7 =
  [0 0]
  
  [2 1]

The library syntax is GEN matimagemod(GEN x, GEN d, GEN *U = NULL).


matindexrank(M)

M being a matrix of rank r, returns a vector with two t_VECSMALL components y and z of length r giving a list of rows and columns respectively (starting from 1) such that the extracted matrix obtained from these two vectors using vecextract(M,y,z) is invertible. The vectors y and z are sorted in increasing order.
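
For instance, checking that the extracted submatrix is indeed invertible:

  ? M = [1,2,3; 4,5,6; 7,8,9];   \\ rank 2
  ? [y,z] = matindexrank(M);
  ? matdet(vecextract(M, y, z)) != 0
  %3 = 1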

The library syntax is GEN indexrank(GEN M).


matintersect(x, y)

x and y being two matrices with the same number of rows, finds a basis of the vector space equal to the intersection of the spaces spanned by the columns of x and y respectively. For efficiency, the columns of x (resp. y) should be independent.

The faster function idealintersect can be used to intersect fractional ideals (projective ℤK modules of rank 1); the slower but more general function nfhnf can be used to intersect general ℤK-modules.
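
A toy example, intersecting two coordinate planes in ℚ^3:

  ? x = [1,0; 0,1; 0,0];   \\ spanned by e1, e2
  ? y = [0,0; 1,0; 0,1];   \\ spanned by e2, e3
  ? matsize(matintersect(x, y))   \\ the intersection is the line through e2
  %3 = [3, 1]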

The library syntax is GEN intersect(GEN x, GEN y).


matinverseimage(x, y)

Given a matrix x and a column vector or matrix y, returns a preimage z of y by x if one exists (i.e. such that x z = y), an empty vector or matrix otherwise. The complete inverse image is z + Ker x, where a basis of the kernel of x may be obtained by matker.

  ? M = [1,2;2,4];
  ? matinverseimage(M, [1,2]~)
  %2 = [1, 0]~
  ? matinverseimage(M, [3,4]~)
  %3 = []~    \\  no solution
  ? matinverseimage(M, [1,3,6;2,6,12])
  %4 =
  [1 3 6]
  
  [0 0 0]
  ? matinverseimage(M, [1,2;3,4])
  %5 = [;]    \\  no solution
  ? K = matker(M)
  %6 =
  [-2]
  
  [1]

The library syntax is GEN inverseimage(GEN x, GEN y).


matinvmod(x, d)

Computes a left inverse of the matrix x modulo d. The matrix x must have t_INT entries, and d can be an arbitrary positive integer.

  ? A = [3,1,2;1,2,1;3,1,1];
  ? U = matinvmod(A,6)
  %2 =
  [1 1 3]
  
  [2 3 5]
  
  [1 0 5]
  
  ? (U*A)%6
  %3 =
  [1 0 0]
  
  [0 1 0]
  
  [0 0 1]
  ? matinvmod(A,5)
   ***   at top-level: matinvmod(A,5)
   ***                 ^ —  —  —  — --
   *** matinvmod: impossible inverse in gen_inv: 0.

The library syntax is GEN matinvmod(GEN x, GEN d).


matisdiagonal(x)

Returns true (1) if x is a diagonal matrix, false (0) if not.

The library syntax is GEN isdiagonal(GEN x).


matker(x, {flag = 0})

Gives a basis for the kernel of the matrix x as columns of a matrix. The matrix can have entries of any type, provided they are compatible with the generic arithmetic operations (+, x and /).

If x is known to have integral entries, set flag = 1.
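
For instance:

  ? M = [1,2,3; 4,5,6; 7,8,9];
  ? K = matker(M);
  ? matsize(K)    \\ the kernel has dimension 1
  %3 = [3, 1]
  ? norml2(M*K)   \\ M*K is the zero matrix
  %4 = 0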

The library syntax is GEN matker0(GEN x, long flag). Also available are GEN ker(GEN x) (flag = 0), GEN ZM_ker(GEN x) (flag = 1).


matkerint(x, {flag = 0})

Gives an LLL-reduced ℤ-basis for the lattice equal to the kernel of the matrix x with rational entries. flag is deprecated, kept for backward compatibility. The function matsolvemod can be used to solve more general linear systems over ℤ.

The library syntax is GEN matkerint0(GEN x, long flag). Use directly GEN kerint(GEN x) if x is known to have integer entries, and Q_primpart first otherwise.


matkermod(x, d, &im)

Gives a Howell basis (unique representation for submodules of (ℤ/dℤ)n, cf. matimagemod) for the kernel of the matrix x modulo d as columns of a matrix. The matrix x must have t_INT entries, and d can be an arbitrary positive integer. If im is present, set it to a basis of the image of x (which is computed on the way).

  ? A = [1,2,3;5,1,4]
  %1 =
  [1 2 3]
  
  [5 1 4]
  
  ? K = matkermod(A,6)
  %2 =
  [2 1]
  
  [2 1]
  
  [0 3]
  
  ? (A*K)%6
  %3 =
  [0 0]
  
  [0 0]

The library syntax is GEN matkermod(GEN x, GEN d, GEN *im = NULL).


matmuldiagonal(x, d)

Product of the matrix x by the diagonal matrix whose diagonal entries are those of the vector d. Equivalent to, but much faster than x*matdiagonal(d).
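
For instance:

  ? x = [1,2; 3,4]; d = [10, 100];
  ? matmuldiagonal(x, d)
  %2 =
  [10 200]
  
  [30 400]
  ? x * matdiagonal(d) == %
  %3 = 1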

The library syntax is GEN matmuldiagonal(GEN x, GEN d).


matmultodiagonal(x, y)

Product of the matrices x and y assuming that the result is a diagonal matrix. Much faster than x*y in that case. The result is undefined if x*y is not diagonal.

The library syntax is GEN matmultodiagonal(GEN x, GEN y).


matpascal(n, {q})

Creates as a matrix the lower triangular Pascal triangle of order n+1 (i.e. with binomial coefficients up to n). If q is given, compute the q-Pascal triangle (i.e. using q-binomial coefficients).
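
For instance:

  ? matpascal(4)
  %1 =
  [1 0 0 0 0]
  
  [1 1 0 0 0]
  
  [1 2 1 0 0]
  
  [1 3 3 1 0]
  
  [1 4 6 4 1]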

The library syntax is GEN matqpascal(long n, GEN q = NULL). Also available is GEN matpascal(GEN x).


matpermanent(x)

Permanent of the square matrix x using Ryser's formula in Gray code order.

  ? n = 20; m = matrix(n,n,i,j, i!=j);
  ? matpermanent(m)
  %2 = 895014631192902121
  ? n! * sum(i=0,n, (-1)^i/i!)
  %3 = 895014631192902121

This function runs in time O(2^n n) for a matrix of size n and is not implemented for n large.

The library syntax is GEN matpermanent(GEN x).


matqr(M, {flag = 0})

Returns [Q,R], the QR-decomposition of the square invertible matrix M with real entries: Q is orthogonal and R upper triangular. If flag = 1, the orthogonal matrix is returned as a sequence of Householder transforms: applying such a sequence is stabler and faster than multiplication by the corresponding Q matrix. More precisely, if

    [Q,R] = matqr(M);
    [q,r] = matqr(M, 1);

then r = R and mathouseholder(q, M) is (close to) R; furthermore

    mathouseholder(q, matid(#M)) == Q~

the inverse of Q. This function raises an error if the precision is too low or M is singular.

The library syntax is GEN matqr(GEN M, long flag, long prec).


matrank(x)

Rank of the matrix x.

The library syntax is long rank(GEN x).


matreduce(m)

Let m be a factorization matrix, i.e., a 2-column matrix whose columns contain arbitrary "generators" and integer "exponents" respectively. Returns the canonical form of m: the first column is sorted with unique elements and the second one contains the merged "exponents" (exponents of identical entries in the first column of m are added, rows attached to 0 exponents are deleted). The generators are sorted with respect to the universal cmp routine; in particular, this function is the identity on true integer factorization matrices, but not on other factorizations (in products of polynomials or maximal ideals, say). It is idempotent.

For convenience, this function also allows a vector m, which is handled as a factorization with all exponents equal to 1, as in factorback.

  ? A=[x,2;y,4]; B=[x,-2; y,3; 3, 4]; C=matconcat([A,B]~)
  %1 =
  [x  2]
  
  [y  4]
  
  [x -2]
  
  [y  3]
  
  [3  4]
  
  ? matreduce(C)
  %2 =
  [3 4]
  
  [y 7]
  
  ? matreduce([x,x,y,x,z,x,y]) \\ vector argument
  %3 =
  [x 4]
  
  [y 2]
  
  [z 1]

The following one-line functions will list elements occurring exactly once (resp. more than once) in the vector or list v:

  unique(v) = [ x[1] | x <- matreduce(v)~, x[2] == 1 ];
  duplicates(v) = [ x[1] | x <- matreduce(v)~, x[2] > 1 ];
  
  ? v = [0,1,2,3,1,2];
  ? unique(v)
  %2 = [0, 3]
  
  ? duplicates(v)
  %3 = [1, 2]

The library syntax is GEN matreduce(GEN m).


matrix(m, {n = m}, {X}, {Y}, {expr = 0})

Creation of the m x n matrix whose coefficients are given by the expression expr. There are two formal parameters in expr, the first one (X) corresponding to the rows, the second (Y) to the columns, and X goes from 1 to m, Y goes from 1 to n. If one of the last 3 parameters is omitted, fill the matrix with zeroes. If n is omitted, return a square m x m matrix.
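
For instance:

  ? matrix(2, 3, i, j, i+j)
  %1 =
  [2 3 4]
  
  [3 4 5]
  ? matrix(2, 2)   \\ expr omitted: zero matrix
  %2 =
  [0 0]
  
  [0 0]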


matrixqz(A, {p = 0})

A being an m x n matrix in M_{m,n}(ℚ), let Im_ℚ A (resp. Im_ℤ A) be the ℚ-vector space (resp. the ℤ-module) spanned by the columns of A. This function has varying behavior depending on the sign of p:

If p ≥ 0, A is assumed to have maximal rank n ≤ m. The function returns a matrix B ∈ M_{m,n}(ℤ), with Im_ℚ B = Im_ℚ A, such that the GCD of all its n x n minors is coprime to p; in particular, if p = 0 (default), this GCD is 1.

If p = -1, returns a basis of the lattice ℤ^m ∩ Im_ℤ A.

If p = -2, returns a basis of the lattice ℤ^m ∩ Im_ℚ A.

Caveat. (p = -1 or -2) For efficiency reasons, we do not compute the HNF of the resulting basis.

  ? minors(x) = vector(#x[,1], i, matdet(x[^i,]));
  ? A = [3,1/7; 5,3/7; 7,5/7]; minors(A)
  %1 = [4/7, 8/7, 4/7]   \\ determinants of all 2x2 minors
  ? B = matrixqz(A)
  %2 =
  [3 1]
  
  [5 2]
  
  [7 3]
  ? minors(%)
  %3 = [1, 2, 1]   \\ B integral with coprime minors
  ? matrixqz(A,-1)
  %4 =
  [3 1]
  
  [5 3]
  
  [7 5]
  
  ? matrixqz(A,-2)
  %5 =
  [3 1]
  
  [5 2]
  
  [7 3]
  

The library syntax is GEN matrixqz0(GEN A, GEN p = NULL).


matsize(x)

x being a vector or matrix, returns a row vector with two components, the first being the number of rows (1 for a row vector), the second the number of columns (1 for a column vector).
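
For instance:

  ? matsize([1,2;3,4;5,6])
  %1 = [3, 2]
  ? matsize([1,2,3])
  %2 = [1, 3]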

The library syntax is GEN matsize(GEN x).


matsnf(X, {flag = 0})

If X is a (singular or nonsingular) matrix, outputs the vector of elementary divisors of X, i.e. the diagonal of the Smith normal form of X, normalized so that d_n | d_{n-1} | ... | d_1. X must have integer or polynomial entries; in the latter case, X must be a square matrix.

The binary digits of flag mean:

1 (complete output): if set, outputs [U,V,D], where U and V are two unimodular matrices such that UXV is the diagonal matrix D. Otherwise output only the diagonal of D. If X is not a square matrix, then D will be a square diagonal matrix padded with zeros on the left or the top.

4 (cleanup): if set, cleans up the output. This means that elementary divisors equal to 1 will be deleted, i.e. outputs a shortened vector D' instead of D. If complete output was required, returns [U',V',D'] so that U'XV' = D' holds. If this flag is set, X is allowed to be of the form `vector of elementary divisors' or [U,V,D] as would normally be output with the cleanup flag unset.

If v is an output from matsnf and p is a power of an irreducible element, then snfrank(v, p) returns the p-rank of the attached module.

  ? X = [27,0; 0,3; 1,1; 0,0]; matsnf(X)
  %1 = [0, 0, 3, 1]
  ? [U,V,D] = v = matsnf(X, 1); U*X*V == D
  %2 = 1
  ? U
  %3 =
  [0 0   0 1]
  
  [1 9 -27 0]
  
  [0 1   0 0]
  
  [0 0   1 0]
  
  ? V
  %4 =
  [-1 1]
  
  [ 1 0]
  
  ? snfrank(v, 3)
  %5 = 3

Continuing the same example after cleanup:

  ? [U,V,D] = v = matsnf(X, 1+4); U*X*V == D
  %6 = 1
  
  ? D
  %7 =
  [0]
  
  [0]
  
  [3]
  
  ? snfrank(v, 3)
  %8 = 3
  
  ? snfrank(v, 2)
  %9 = 2

The library syntax is GEN matsnf0(GEN X, long flag).


matsolve(M, B)

Let M be a left-invertible matrix and B a column vector such that there exists a solution X to the system of linear equations MX = B; return the (unique) solution X. This has the same effect as, but is faster than, M^(-1)*B. Uses the Dixon p-adic lifting method if M and B are integral and Gaussian elimination otherwise. When there is no solution, the function returns an X such that MX - B is nonzero although it has at least #M zero entries:

  ? M = [1,2;3,4;5,6];
  ? B = [4,6,8]~; X = matsolve(M, B)
  %2 = [-2, 3]~
  ? M*X == B
  %3 = 1
  ? B = [1,2,4]~; X = matsolve(M, [1,2,4]~)
  %4 = [0, 1/2]~
  ? M*X - B
  %5 = [0, 0, -1]~

Raises an exception if M is not left-invertible, even if there is a solution:

  ? M = [1,1;1,1]; matsolve(M, [1,1]~)
   ***   at top-level: matsolve(M,[1,1]~)
   ***                 ^ —  —  —  —  —  — 
   *** matsolve: impossible inverse in gauss: [1, 1; 1, 1].

The function also works when B is a matrix and we return the unique matrix solution X provided it exists. Again, if there is no solution, the function returns an X such that MX - B is nonzero although it has at least #M zero rows.

The library syntax is GEN gauss(GEN M, GEN B).


matsolvemod(M, D, B, {flag = 0})

M being any integral matrix, D a column vector of nonnegative integer moduli, and B an integral column vector, gives an integer solution to the system of congruences ∑_j m_{i,j} x_j = b_i (mod d_i) if one exists, otherwise returns the integer zero. Note that we explicitly allow d_i = 0 corresponding to an equality in ℤ. Shorthand notation: B (resp. D) can be given as a single integer, in which case all the b_i (resp. d_i) above are taken to be equal to B (resp. D). Again, D = 0 solves the linear system of equations over ℤ.

  ? M = [1,2;3,4];
  ? matsolvemod(M, [3,4]~, [1,2]~)
  %2 = [10, 0]~
  ? matsolvemod(M, 3, 1) \\ M X = [1,1]~ over F3
  %3 = [2, 1]~
  ? matsolvemod(M, [3,0]~, [1,2]~) \\ x + 2y = 1 (mod 3), 3x + 4y = 2 (in Z)
  %4 = [6, -4]~
  ? matsolvemod(M, 0, [1,2]~) \\ no solution in Z for x + 2y = 1, 3x + 4y = 2

If flag = 1, all solutions are returned in the form of a two-component row vector [x,u], where x is an integer solution to the system of congruences and u is a matrix whose columns give a basis of the homogeneous system (so that all solutions can be obtained by adding x to any linear combination of columns of u). If no solution exists, returns zero.
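
For instance, with flag = 1, checking that the particular solution indeed satisfies the system modulo 3:

  ? M = [1,2; 3,4];
  ? [x, u] = matsolvemod(M, 3, 1, 1);
  ? (M*x - [1,1]~) % 3
  %3 = [0, 0]~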

The library syntax is GEN matsolvemod(GEN M, GEN D, GEN B, long flag). Also available are GEN gaussmodulo(GEN M, GEN D, GEN B) (flag = 0) and GEN gaussmodulo2(GEN M, GEN D, GEN B) (flag = 1).


matsupplement(x)

Assuming that the columns of the matrix x are linearly independent (if they are not, an error message is issued), finds a square invertible matrix whose first columns are the columns of x, i.e. supplement the columns of x to a basis of the whole space.

  ? matsupplement([1;2])
  %1 =
  [1 0]
  
  [2 1]

Raises an error if x has 0 columns, since (due to a long standing design bug), the dimension of the ambient space (the number of rows) is unknown in this case:

  ? matsupplement(matrix(2,0))
    ***   at top-level: matsupplement(matrix
    ***                 ^ —  —  —  —  —  — --
    *** matsupplement: sorry, suppl [empty matrix] is not yet implemented.

The library syntax is GEN suppl(GEN x).


mattranspose(x)

Transpose of x (also x~). This has an effect only on vectors and matrices.

The library syntax is GEN gtrans(GEN x).


minpoly(A, {v = 'x})

minimal polynomial of A with respect to the variable v, i.e. the monic polynomial P of minimal degree (in the variable v) such that P(A) = 0.
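
For instance (compare with charpoly, which returns the characteristic polynomial):

  ? minpoly(matid(2))
  %1 = x - 1
  ? charpoly(matid(2))
  %2 = x^2 - 2*x + 1
  ? minpoly([1,1; 0,1])   \\ not diagonalizable: minimal = characteristic
  %3 = x^2 - 2*x + 1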

The library syntax is GEN minpoly(GEN A, long v = -1) where v is a variable number.


norml2(x)

Square of the L^2-norm of x. More precisely, if x is a scalar, norml2(x) is defined to be the square of the complex modulus of x (real t_QUADs are not supported). If x is a polynomial, a (row or column) vector or a matrix, norml2(x) is defined recursively as ∑_i norml2(x_i), where (x_i) run through the components of x. In particular, this yields the usual ∑_i |x_i|^2 (resp. ∑_{i,j} |x_{i,j}|^2) if x is a polynomial or vector (resp. matrix) with complex components.

  ? norml2( [ 1, 2, 3 ] )      \\ vector
  %1 = 14
  ? norml2( [ 1, 2; 3, 4] )   \\ matrix
  %2 = 30
  ? norml2( 2*I + x )
  %3 = 5
  ? norml2( [ [1,2], [3,4], 5, 6 ] )   \\ recursively defined
  %4 = 91

The library syntax is GEN gnorml2(GEN x).


normlp(x, {p = oo})

L^p-norm of x; sup norm if p is omitted or +oo. More precisely, if x is a scalar, normlp(x, p) is defined to be abs(x). If x is a polynomial, a (row or column) vector or a matrix:

* if p is omitted or +oo, then normlp(x) is defined recursively as max_i normlp(x_i), where x_i runs through the components of x. In particular, this yields the usual sup norm if x is a polynomial or vector with complex components.

* otherwise, normlp(x, p) is defined recursively as (∑_i normlp(x_i, p)^p)^(1/p). In particular, this yields the usual (∑_i |x_i|^p)^(1/p) if x is a polynomial or vector with complex components.

  ? v = [1,-2,3]; normlp(v)      \\ vector
  %1 = 3
  ? normlp(v, +oo)               \\ same, more explicit
  %2 = 3
  ? M = [1,-2;-3,4]; normlp(M)   \\ matrix
  %3 = 4
  ? T = (1+I) + I*x^2; normlp(T)
  %4 = 1.4142135623730950488016887242096980786
  ? normlp([[1,2], [3,4], 5, 6])   \\ recursively defined
  %5 = 6
  
  ? normlp(v, 1)
  %6 = 6
  ? normlp(M, 1)
  %7 = 10
  ? normlp(T, 1)
  %8 = 2.4142135623730950488016887242096980786

The library syntax is GEN gnormlp(GEN x, GEN p = NULL, long prec).


powers(x, n, {x0})

For nonnegative n, return the vector with n+1 components [1, x, ..., x^n] if x0 is omitted, and [x0, x0*x, ..., x0*x^n] otherwise.

  ? powers(Mod(3,17), 4)
  %1 = [Mod(1, 17), Mod(3, 17), Mod(9, 17), Mod(10, 17), Mod(13, 17)]
  ? powers(Mat([1,2;3,4]), 3)
  %2 = [[1, 0; 0, 1], [1, 2; 3, 4], [7, 10; 15, 22], [37, 54; 81, 118]]
  ? powers(3, 5, 2)
  %3 = [2, 6, 18, 54, 162, 486]

When n < 0, the function returns the empty vector [].

The library syntax is GEN gpowers0(GEN x, long n, GEN x0 = NULL). Also available is GEN gpowers(GEN x, long n) when x0 is NULL.


qfauto(G, {fl})

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, outputs the automorphism group of the associated lattice. Since this requires computing the minimal vectors, the computations can become very lengthy as the dimension grows. G can also be given by a qfisominit structure. See qfisominit for the meaning of fl.

The output is a two-component vector [o,g] where o is the group order and g is the list of generators (as a vector). For each generator H, the equality G = H~ * G * H holds.

The interface of this function is experimental and will likely change in the future.

This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.

  ? K = matkerint(Mat(concat([vector(23,i,2*i+1), 51, 145])));
  ? M = matdiagonal(vector(25,i,if(i==25,-1,1)));
  ? L24 = K~ * M * K; \\ the Leech lattice
  ? [o,g] = qfauto(L24); o
  %4 = 8315553613086720000
  ? #g
  %5 = 2

The library syntax is GEN qfauto0(GEN G, GEN fl = NULL). The function GEN qfauto(GEN G, GEN fl) is also available where G is a vector of zm matrices.


qfautoexport(qfa, {flag})

qfa being an automorphism group as output by qfauto, export the underlying matrix group as a string suitable for (no flags or flag = 0) GAP or (flag = 1) Magma. The following example computes the size of the matrix group using GAP:

  ? G = qfauto([2,1;1,2])
  %1 = [12, [[-1, 0; 0, -1], [0, -1; 1, 1], [1, 1; 0, -1]]]
  ? s = qfautoexport(G)
  %2 = "Group([[-1, 0], [0, -1]], [[0, -1], [1, 1]], [[1, 1], [0, -1]])"
  ? extern("echo \"Order("s");\" | gap -q")
  %3 = 12

The library syntax is GEN qfautoexport(GEN qfa, long flag).


qfbil(x, y, {q})

This function is obsolete, use qfeval.

The library syntax is GEN qfbil(GEN x, GEN y, GEN q = NULL).


qfeval({q}, x, {y})

Evaluate the quadratic form q (given by a symmetric matrix) at the vector x; if y is present, evaluate the polar form at (x,y); if q omitted, use the standard Euclidean scalar product, corresponding to the identity matrix.

Roughly equivalent to x~ * q * y, but a little faster and more convenient (does not distinguish between column and row vectors):

  ? x = [1,2,3]~; y = [-1,3,1]~; q = [1,2,3;2,2,-1;3,-1,9];
  ? qfeval(q,x,y)
  %2 = 23
  ? for(i=1,10^6, qfeval(q,x,y))
  time = 661ms
  ? for(i=1,10^6, x~*q*y)
  time = 697ms

The speedup is noticeable for the quadratic form, compared to x~ * q * x, since we save almost half the operations:

  ? for(i=1,10^6, qfeval(q,x))
  time = 487ms

The special case q = Id is handled faster if we omit q altogether:

  ? qfeval(,x,y)
  %6 = 8
  ? q = matid(#x);
  ? for(i=1,10^6, qfeval(q,x,y))
  time = 529 ms.
  ? for(i=1,10^6, qfeval(,x,y))
  time = 228 ms.
  ? for(i=1,10^6, x~*y)
  time = 274 ms.

We also allow t_MATs of compatible dimensions for x, and return x~ * q * x in this case as well:

  ? M = [1,2,3;4,5,6;7,8,9]; qfeval(,M) \\ Gram matrix
  %5 =
  [66  78  90]
  
  [78  93 108]
  
  [90 108 126]
  
  ? q = [1,2,3;2,2,-1;3,-1,9];
  ? for(i=1,10^6, qfeval(q,M))
  time = 2,008 ms.
  ? for(i=1,10^6, M~*q*M)
  time = 2,368 ms.
  
  ? for(i=1,10^6, qfeval(,M))
  time = 1,053 ms.
  ? for(i=1,10^6, M~*M)
  time = 1,171 ms.

If q is a t_QFB, it is implicitly converted to the attached symmetric t_MAT. This is done more efficiently than by direct conversion, since we avoid introducing a denominator 2 and rational arithmetic:

  ? q = Qfb(2,3,4); x = [2,3];
  ? qfeval(q, x)
  %2 = 62
  ? Q = Mat(q)
  %3 =
   [  2 3/2]
  
   [3/2   4]
  ? qfeval(Q, x)
  %4 = 62
  ? for (i=1, 10^6, qfeval(q,x))
  time = 758 ms.
  ? for (i=1, 10^6, qfeval(Q,x))
  time = 1,110 ms.

Finally, when x is a t_MAT with integral coefficients, we allow a t_QFB for q and return the binary quadratic form q o x. Again, the conversion to t_MAT is less efficient in this case:

  ? q = Qfb(2,3,4); Q = Mat(q); x = [1,2;3,4];
  ? qfeval(q, x)
  %2 = Qfb(47, 134, 96)
  ? qfeval(Q,x)
  %3 =
  [47 67]
  
  [67 96]
  ? for (i=1, 10^6, qfeval(q,x))
  time = 701 ms.
  ? for (i=1, 10^6, qfeval(Q,x))
  time = 1,639 ms.

The library syntax is GEN qfeval0(GEN q = NULL, GEN x, GEN y = NULL).


qfgaussred(q, {flag = 0})

decomposition into squares of the quadratic form represented by the symmetric matrix q. If flag = 0 (default), the result is a matrix M whose diagonal entries are the coefficients of the squares, and the off-diagonal entries on each line represent the bilinear forms. More precisely, if (a_{i,j}) denotes the output, one has q(x) = ∑_i a_{i,i} (x_i + ∑_{j != i} a_{i,j} x_j)^2

  ? qfgaussred([0,1;1,0])
  %1 =
  [1/2 1]
  
  [-1 -1/2]

This means that 2xy = (1/2)(x+y)^2 - (1/2)(x-y)^2. Singular matrices are supported, in which case some diagonal coefficients vanish:

  ? qfgaussred([1,1;1,1])
  %2 =
  [1 1]
  
  [1 0]

This means that x^2 + 2xy + y^2 = (x+y)^2.

If flag = 1, return [U,V] where U is a square matrix and V a vector, such that if D = matdiagonal(V), q = U~ * D * U. More precisely q(x) = ∑_i D_i (∑_j U_{i,j} x_j)^2 and the matrix M is recovered as M = U + D - 1.

  ? q = [0,1;1,0];
  ? [U,V] = qfgaussred(q,1); D = matdiagonal(V);
  ? U~*D*U
  %5 =
  [0 1]
  
  [1 0]
  ? U+D-1
  %6 =
  [1/2    1]
  
  [ -1 -1/2]

The library syntax is GEN qfgaussred0(GEN q, long flag). See also the functions GEN qfgaussred(GEN a) (for qfgaussred(a,0)), GEN qfgaussred2(GEN a) (for qfgaussred0(a,1)). Finally, the function GEN qfgaussred_positive(GEN q) assumes that q is positive definite and is a little faster; returns NULL if a vector with negative norm occurs (non positive matrix or too many rounding errors).


qfisom(G, H, {fl}, {grp})

G, H being square and symmetric matrices with integer entries representing positive definite quadratic forms, return an invertible matrix S such that G = S~ * H * S. This defines an isomorphism between the corresponding lattices. Since this requires computing the minimal vectors, the computations can become very lengthy as the dimension grows. See qfisominit for the meaning of fl. If grp is given it must be the automorphism group of H. It will be used to speed up the computation.

G can also be given by a qfisominit structure, which is preferable if several forms H need to be compared to G.

This function implements an algorithm of Plesken and Souvignier, following Souvignier's implementation.
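
A small example with two equivalent binary forms, checked against the defining relation above:

  ? G = [2,1; 1,2]; H = [2,-1; -1,2];
  ? S = qfisom(G, H);
  ? G == S~ * H * S
  %3 = 1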

The library syntax is GEN qfisom0(GEN G, GEN H, GEN fl = NULL, GEN grp = NULL). Also available is GEN qfisom(GEN G, GEN H, GEN fl, GEN grp) where G is a vector of zm, and H is a zm, and grp is either NULL or a vector of zm.


qfisominit(G, {fl}, {m})

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, return an isom structure allowing to compute isomorphisms between G and other quadratic forms faster.

The interface of this function is experimental and will likely change in a future release.

If present, the optional parameter fl must be a t_VEC with two components. It allows one to specify the invariants used, which can make the computation faster or slower. The components are

* fl[1] Depth of scalar product combination to use.

* fl[2] Maximum level of Bacher polynomials to use.

If present, m must be the set of vectors of norm up to the maximum of the diagonal entries of G, either as a matrix or as given by qfminim. Otherwise this function computes the minimal vectors, so it can become very lengthy as the dimension of G grows.

The library syntax is GEN qfisominit0(GEN G, GEN fl = NULL, GEN m = NULL). Also available is GEN qfisominit(GEN F, GEN fl) where F is a vector of zm.


qfjacobi(A)

Apply Jacobi's eigenvalue algorithm to the real symmetric matrix A. This returns [L, V], where

* L is the vector of (real) eigenvalues of A, sorted in increasing order,

* V is the corresponding orthogonal matrix of eigenvectors of A.

  ? \p19
  ? A = [1,2;2,1]; mateigen(A)
  %1 =
  [-1 1]
  
  [ 1 1]
  ? [L, H] = qfjacobi(A);
  ? L
  %3 = [-1.000000000000000000, 3.000000000000000000]~
  ? H
  %4 =
  [ 0.7071067811865475245 0.7071067811865475244]
  
  [-0.7071067811865475244 0.7071067811865475245]
  ? norml2( (A-L[1])*H[,1] )       \\ approximate eigenvector
  %5 = 9.403954806578300064 E-38
  ? norml2(H*H~ - 1)
  %6 = 2.350988701644575016 E-38   \\ close to orthogonal

The library syntax is GEN jacobi(GEN A, long prec).


qflll(x, {flag = 0})

LLL algorithm applied to the columns of the matrix x. The columns of x may be linearly dependent. The result is by default a unimodular transformation matrix T such that x.T is an LLL-reduced basis of the lattice generated by the column vectors of x. Note that if x is not of maximal rank, T will not be square. The LLL parameters are (0.51,0.99), meaning that the Gram-Schmidt coefficients for the final basis satisfy |μ_{i,j}| ≤ 0.51, and Lovász's constant is 0.99.

If flag = 0 (default), assume that x has either exact (integral or rational) or real floating point entries. The matrix is rescaled, converted to integers and the behavior is then as in flag = 1.

If flag = 1, assume that x is integral. Computations involving Gram-Schmidt vectors are approximate, with precision varying as needed (Lehmer's trick, as generalized by Schnorr). Adapted from Nguyen and Stehlé's algorithm and Stehlé's code (fplll-1.3).

If flag = 2, x should be an integer matrix whose columns are linearly independent. Returns a partially reduced basis for x, using an unpublished algorithm by Peter Montgomery: a basis is said to be partially reduced if |v_i ± v_j| ≥ |v_i| for any two distinct basis vectors v_i, v_j. This is faster than flag = 1, esp. when one row is huge compared to the other rows (knapsack-style), and should quickly produce relatively short vectors. The resulting basis is not LLL-reduced in general. If LLL reduction is eventually desired, avoid this partial reduction: applying LLL to the partially reduced matrix is significantly slower than starting from a knapsack-type lattice.

If flag = 3, as flag = 1, but the reduction is performed in place: the routine returns x.T. This is usually faster for knapsack-type lattices.

If flag = 4, as flag = 1, returning a vector [K, T] of matrices: the columns of K represent a basis of the integer kernel of x (not LLL-reduced in general) and T is the transformation matrix such that x.T is an LLL-reduced ℤ-basis of the image of the matrix x.

If flag = 5, as in case 4, but x may have polynomial coefficients.

If flag = 8, same as case 0, but x may have polynomial coefficients.

  ? \p500
    realprecision = 500 significant digits
  ? a = 2*cos(2*Pi/97);
  ? C = 10^450;
  ? v = powers(a,48); b = round(matconcat([matid(48),C*v]~));
  ? p = b * qflll(b)[,1]; \\ tiny linear combination of powers of 'a'
     time = 4,470 ms.
  ? exponent(v * p / C)
  %5 = -1418
  ? p3 = qflll(b,3)[,1]; \\ compute in place, faster
     time = 3,790 ms.
  ? p3 == p \\ same result
  %7 = 1
  ? p2 = b * qflll(b,2)[,1]; \\ partial reduction: faster, not as good
     time = 343 ms.
  ? exponent(v * p2 / C)
  %9 = -1190

The library syntax is GEN qflll0(GEN x, long flag). Also available are GEN lll(GEN x) (flag = 0), GEN lllint(GEN x) (flag = 1), and GEN lllkerim(GEN x) (flag = 4).


qflllgram(G, {flag = 0})

Same as qflll, except that the matrix G = x~ * x is the Gram matrix of some lattice vectors x, and not the coordinates of the vectors themselves. In particular, G must now be a square symmetric real matrix, corresponding to a positive quadratic form (not necessarily definite: x needs not have maximal rank). The result is a unimodular transformation matrix T such that x.T is an LLL-reduced basis of the lattice generated by the column vectors of x. See qflll for further details about the LLL implementation.

If flag = 0 (default), assume that G has either exact (integral or rational) or real floating point entries. The matrix is rescaled, converted to integers and the behavior is then as in flag = 1.

If flag = 1, assume that G is integral. Computations involving Gram-Schmidt vectors are approximate, with precision varying as needed (Lehmer's trick, as generalized by Schnorr). Adapted from Nguyen and Stehlé's algorithm and Stehlé's code (fplll-1.3).

flag = 4: G has integer entries, gives the kernel and reduced image of x.

flag = 5: same as 4, but G may have polynomial coefficients.
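
A minimal illustration of the default usage: the basis x below is arbitrary, its Gram matrix is x~ * x, and x*T is then an LLL-reduced basis of the lattice spanned by the columns of x:

  ? x = [1,0; 1,2; 0,1];   \\ lattice basis, given by columns
  ? G = x~ * x;            \\ its Gram matrix
  ? T = qflllgram(G);
  ? matdet(T)^2            \\ T is unimodular
  %4 = 1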

The library syntax is GEN qflllgram0(GEN G, long flag). Also available are GEN lllgram(GEN G) (flag = 0), GEN lllgramint(GEN G) (flag = 1), and GEN lllgramkerim(GEN G) (flag = 4).


qfminim(x, {B}, {m}, {flag = 0})

x being a square and symmetric matrix of dimension d representing a positive definite quadratic form, this function deals with the vectors of x whose norm is less than or equal to B, enumerated using the Fincke-Pohst algorithm, storing at most m pairs of vectors: only one vector is given for each pair ± v. There is no limit if m is omitted: beware that this may be a huge vector! The vectors are returned in no particular order.

The function searches for the minimal nonzero vectors if B is omitted. The behavior is undefined if x is not positive definite (a "precision too low" error is most likely, although more precise error messages are possible). The precise behavior depends on flag.

* If flag = 0 (default), return [N, M, V], where N is the number of vectors enumerated (an even number, possibly larger than 2m), M ≤ B is the maximum norm found, and V is a matrix whose columns are found vectors.

* If flag = 1, ignore m and return [M,v], where v is a nonzero vector of length M ≤ B. If no nonzero vector has length ≤ B, return []. If no explicit B is provided, return a vector of smallish norm, namely the vector of smallest length (usually the first one but not always) in an LLL-reduced basis for x.

In these two cases, x must have integral small entries: more precisely, we definitely must have d·|x|_oo^2 < 2^53 but even that may not be enough. The implementation uses low precision floating point computations for maximal speed and gives incorrect results when x has large entries. That condition is checked in the code and the routine raises an error if large rounding errors occur. A more robust, but much slower, implementation is chosen if the following flag is used:

* If flag = 2, x can have nonintegral real entries, but this is also useful when x has large integral entries. Return [N, M, V] as in case flag = 0, where M is returned as a floating point number. If x is inexact and B is omitted, the "minimal" vectors in V only have approximately the same norm (up to the internal working accuracy). This version is very robust but still offers no hard and fast guarantee about the result: it involves floating point operations performed at a high floating point precision depending on your input, but done without rigorous tracking of roundoff errors (as would be provided by interval arithmetic for instance). No example is known where the input is exact but the function returns a wrong result.

  ? x = matid(2);
  ? qfminim(x)  \\  4 minimal vectors of norm 1: ±[0,1], ±[1,0]
  %2 = [4, 1, [0, 1; 1, 0]]
  ? { x = \\ The Leech lattice
  [4, 2, 0, 0, 0,-2, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1, 0,-1, 0, 0, 0,-2;
   2, 4,-2,-2, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1, 0, 1,-1,-1;
   0,-2, 4, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 0, 1,-1,-1, 0, 0;
   0,-2, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1,-1, 0, 1,-1, 1, 0;
   0, 0,-2, 0, 4, 0, 0, 0, 1,-1, 0, 0, 1, 0, 0, 0,-2, 0, 0,-1, 1, 1, 0, 0;
  -2,-2, 0, 0, 0, 4,-2, 0,-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,-1, 1, 1;
   0, 0, 0, 0, 0,-2, 4,-2, 0, 0, 0, 0, 0, 1, 0, 0, 0,-1, 0, 0, 0, 1,-1, 0;
   0, 0, 0, 0, 0, 0,-2, 4, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1,-1,-1, 0, 1, 0;
   0, 0, 0, 0, 1,-1, 0, 0, 4, 0,-2, 0, 1, 1, 0,-1, 0, 1, 0, 0, 0, 0, 0, 0;
   0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 0, 0, 1, 1,-1, 1, 0, 0, 0, 1, 0, 0, 1, 0;
   0, 0, 0, 0, 0, 0, 0, 0,-2, 0, 4,-2, 0,-1, 0, 0, 0,-1, 0,-1, 0, 0, 0, 0;
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0,-2, 4,-1, 1, 0, 0,-1, 1, 0, 1, 1, 1,-1, 0;
   1, 0,-1, 1, 1, 0, 0,-1, 1, 1, 0,-1, 4, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1,-1;
  -1,-1, 1,-1, 0, 0, 1, 0, 1, 1,-1, 1, 0, 4, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1;
   0, 0, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0;
   0, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 1, 0, 4, 0, 0, 0, 0, 1, 1, 0, 0;
   0, 0, 1, 0,-2, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 1, 1, 1, 0, 0, 1, 1;
   1, 0, 0, 1, 0, 0,-1, 0, 1, 0,-1, 1, 1, 0, 0, 0, 1, 4, 0, 1, 1, 0, 1, 0;
   0, 0, 0,-1, 0, 1, 0,-1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 4, 0, 1, 1, 0, 1;
  -1,-1, 1, 0,-1, 1, 0,-1, 0, 1,-1, 1, 0, 1, 0, 0, 1, 1, 0, 4, 0, 0, 1, 1;
   0, 0,-1, 1, 1, 0, 0,-1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 4, 1, 0, 1;
   0, 1,-1,-1, 1,-1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 4, 0, 1;
   0,-1, 0, 1, 0, 1,-1, 1, 0, 1, 0,-1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 4, 1;
  -2,-1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 4]; }
  ? qfminim(x,,0)  \\ 0: don't store minimal vectors
  time = 121 ms.
  %4 = [196560, 4, [;]] \\ 196560 minimal vectors of norm 4
  ? qfminim(x)  \\ store all minimal vectors !
  time = 821 ms.
  ? qfminim(x,,0,2); \\ safe algorithm. Slower and unnecessary here.
  time = 5,540 ms.
  %6 = [196560, 4.000061035156250000, [;]]
  ? qfminim(x,,,2); \\ safe algorithm; store all minimal vectors
  time = 6,602 ms.

In this example, storing 0 vectors limits memory use; storing all of them requires a parisize of about 50MB. All minimal vectors are nevertheless enumerated in both cases of course, which means the speedup from not storing them is likely to be marginal.

The library syntax is GEN qfminim0(GEN x, GEN B = NULL, GEN m = NULL, long flag, long prec). Also available are GEN minim(GEN x, GEN B = NULL, GEN m = NULL) (flag = 0), GEN minim2(GEN x, GEN B = NULL, GEN m = NULL) (flag = 1), GEN minim_raw(GEN x, GEN B = NULL, GEN m = NULL) (do not perform LLL reduction on x and return NULL on accuracy error), and GEN minim_zm(GEN x, GEN B = NULL, GEN m = NULL) (flag = 0, return vectors as t_VECSMALL to save memory).


qfminimize(G)

Given a square symmetric matrix G of dimension n ≥ 1, with rational coefficients and nonzero determinant, return [H,U,c] such that H = c*U~*G*U for some rational c, where H is integral with minimal determinant. The coefficients of U are usually nonintegral.

  ? G = matdiagonal([650, -104329, -104329]);
  ? [H,U,c]=qfminimize(G); H
  %2 = [-1,0,0;0,-1,0;0,0,1]
  ? U
  %3 = [0,0,1/5;5/323,-1/323,0;-1/323,-5/323,0]
  ? c
  %4 = 1/26
  ? c * U~ * G * U
  %5 = [-1,0,0;0,-1,0;0,0,1]

The library syntax is GEN qfminimize(GEN G).


qfnorm(x, {q})

This function is obsolete, use qfeval.

The library syntax is GEN qfnorm(GEN x, GEN q = NULL).


qforbits(G, V)

Return the orbits of V under the action of the group of linear transformations generated by the set G. It is assumed that G contains minus identity, and that only one vector in each pair {v, -v} is given. If G does not stabilize V, the function returns 0.

In the example below, we compute representatives and lengths of the orbits of the vectors of norm ≤ 3 under the automorphisms of the lattice ℤ^6.

  ?  Q=matid(6); G=qfauto(Q); V=qfminim(Q,3);
  ?  apply(x->[x[1],#x],qforbits(G,V))
  %2 = [[[0,0,0,0,0,1]~,6],[[0,0,0,0,1,-1]~,30],[[0,0,0,1,-1,-1]~,80]]

The library syntax is GEN qforbits(GEN G, GEN V).


qfparam(G, sol, {flag = 0})

Coefficients of binary quadratic forms that parametrize the solutions of the ternary quadratic form G, using the particular solution sol. flag is optional and can be 1, 2, or 3, in which case the flag-th form is reduced. The default is flag = 0 (no reduction).

  ? G = [1,0,0;0,1,0;0,0,-34];
  ? M = qfparam(G, qfsolve(G))
  %2 =
  [ 3 -10 -3]
  
  [-5  -6  5]
  
  [ 1   0  1]

Indeed, the solutions can be parametrized as (3x^2-10xy-3y^2)^2 + (-5x^2-6xy+5y^2)^2 - 34(x^2+y^2)^2 = 0.

  ? v = y^2 * M*[1,x/y,(x/y)^2]~
  %3 = [3*x^2 - 10*y*x - 3*y^2, -5*x^2 - 6*y*x + 5*y^2, -x^2 - y^2]~
  ? v~*G*v
  %4 = 0

The library syntax is GEN qfparam(GEN G, GEN sol, long flag).


qfperfection(G)

G being a square and symmetric matrix with integer entries representing a positive definite quadratic form, outputs the perfection rank of the form. That is, gives the rank of the family of the s symmetric matrices v*v~, where v runs through the s minimal vectors (one vector per pair ±v).

A form is perfect if and only if its perfection rank is d(d+1)/2 where d is the dimension of G.

The algorithm computes the minimal vectors and its runtime is exponential in d.
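
For example (values as expected from the definition: the hexagonal form A2 is perfect in dimension 2, while the identity form is not):

  ? qfperfection([2, 1; 1, 2])   \\ minimal vectors ±[1,0]~, ±[0,1]~, ±[1,-1]~
  %1 = 3                         \\ d(d+1)/2 = 3: perfect
  ? qfperfection(matid(2))       \\ minimal vectors ±[1,0]~, ±[0,1]~ only
  %2 = 2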

The library syntax is GEN qfperfection(GEN G).


qfrep(q, B, {flag = 0})

q being a square and symmetric matrix with integer entries representing a positive definite quadratic form, count the vectors representing successive integers.

* If flag = 0, count all vectors. Outputs the vector whose i-th entry, 1 ≤ i ≤ B, is half the number of vectors v such that q(v) = i.

* If flag = 1, count vectors of even norm. Outputs the vector whose i-th entry, 1 ≤ i ≤ B, is half the number of vectors v such that q(v) = 2i.

  ? q = [2, 1; 1, 3];
  ? qfrep(q, 5)
  %2 = Vecsmall([0, 1, 2, 0, 0]) \\ 1 vector of norm 2, 2 of norm 3, etc.
  ? qfrep(q, 5, 1)
  %3 = Vecsmall([1, 0, 0, 1, 0]) \\ 1 vector of norm 2, 0 of norm 4, etc.

This routine uses a naive algorithm based on qfminim, and will fail if any entry becomes larger than 2^31 (or 2^63).

The library syntax is GEN qfrep0(GEN q, GEN B, long flag).


qfsign(x)

Returns [p,m] the signature of the quadratic form represented by the symmetric matrix x. Namely, p (resp. m) is the number of positive (resp. negative) eigenvalues of x. The result is computed using Gaussian reduction.
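
For example (expected values):

  ? qfsign(matdiagonal([1, -1, 2]))
  %1 = [2, 1]
  ? qfsign([0, 1; 1, 0])    \\ hyperbolic plane
  %2 = [1, 1]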

The library syntax is GEN qfsign(GEN x).


qfsolve(G)

Given a square symmetric matrix G of dimension n ≥ 1, solve over ℚ the quadratic equation X~ * G * X = 0. The matrix G must have rational coefficients. When G is integral, the argument can also be a vector [G,F] where F is the factorization matrix of the absolute value of the determinant of G.

The solution might be a single nonzero column vector (t_COL) or a matrix (whose columns generate a totally isotropic subspace).

If no solution exists, returns an integer: a prime p such that there is no local solution at p, or -1 if there is no real solution, or -2 if n = 2 and -det G is not a square (which implies there is a real solution, but no local solution at some p dividing det G).

  ? G = [1,0,0;0,1,0;0,0,-34];
  ? qfsolve(G)
  %1 = [-3, -5, 1]~
  ? qfsolve([1,0; 0,2])
  %2 = -1   \\ no real solution
  ? qfsolve([1,0,0;0,3,0; 0,0,-2])
  %3 = 3    \\ no solution in Q3
  ? qfsolve([1,0; 0,-2])
  %4 = -2   \\ no solution, n = 2

The library syntax is GEN qfsolve(GEN G).


setbinop(f, X, {Y})

The set whose elements are the f(x,y), where x and y run through X and Y respectively. If Y is omitted, assume that X = Y and that f is symmetric: f(x,y) = f(y,x) for all x,y in X.

  ? X = [1,2,3]; Y = [2,3,4];
  ? setbinop((x,y)->x+y, X,Y) \\ set X + Y
  %2 = [3, 4, 5, 6, 7]
  ? setbinop((x,y)->x-y, X,Y) \\ set X - Y
  %3 = [-3, -2, -1, 0, 1]
  ? setbinop((x,y)->x+y, X)   \\ set 2X = X + X
  %4 = [2, 3, 4, 5, 6]

The library syntax is GEN setbinop(GEN f, GEN X, GEN Y = NULL).


setdelta(x, y)

Symmetric difference of the two sets x and y (see setisset). If x or y is not a set, the result is undefined.

  ? a=[1,2,2,3];b=[4,2,3,4];
  ? setdelta(Set(a), Set(b))
  %2 = [1, 4]      \\ the symmetric difference of the two sets
  ? setdelta(a,b)
  %3 = [1, 2, 2, 3, 4, 2, 3, 4] \\ undefined result

The library syntax is GEN setdelta(GEN x, GEN y).


setintersect(x, y)

Intersection of the two sets x and y (see setisset). If x or y is not a set, the result is undefined.
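
For instance:

  ? setintersect(Set([1,2,3]), Set([2,3,4]))
  %1 = [2, 3]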

The library syntax is GEN setintersect(GEN x, GEN y).


setisset(x)

Returns true (1) if x is a set, false (0) if not. In PARI, a set is a row vector whose entries are strictly increasing with respect to a (somewhat arbitrary) universal comparison function. To convert any object into a set (this is most useful for vectors, of course), use the function Set.

  ? a = [3, 1, 1, 2];
  ? setisset(a)
  %2 = 0
  ? Set(a)
  %3 = [1, 2, 3]

The library syntax is long setisset(GEN x).


setminus(x, y)

Difference of the two sets x and y (see setisset), i.e. set of elements of x which do not belong to y. If x or y is not a set, the result is undefined.
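
For instance:

  ? setminus(Set([1,2,3]), Set([2,3,4]))
  %1 = [1]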

The library syntax is GEN setminus(GEN x, GEN y).


setsearch(S, x, {flag = 0})

Determines whether x belongs to the set S (see setisset).

We first describe the default behavior, when flag is zero or omitted. If x belongs to the set S, returns the index j such that S[j] = x, otherwise returns 0.

  ? T = [7,2,3,5]; S = Set(T);
  ? setsearch(S, 2)
  %2 = 1
  ? setsearch(S, 4)      \\ not found
  %3 = 0
  ? setsearch(T, 7)      \\ search in an unsorted vector
  %4 = 0 \\ WRONG !

If S is not a set, we also allow sorted lists with respect to the cmp sorting function, without repeated entries, as per listsort(L,1); otherwise the result is undefined.

  ? L = List([1,4,2,3,2]); setsearch(L, 4)
  %1 = 0 \\ WRONG !
  ? listsort(L, 1); L    \\ sort L first
  %2 = List([1, 2, 3, 4])
  ? setsearch(L, 4)
  %3 = 4                 \\ now correct

If flag is nonzero, this function returns the index j where x should be inserted, and 0 if it already belongs to S. This is meant to be used for dynamically growing (sorted) lists, in conjunction with listinsert.

  ? L = List([1,5,2,3,2]); listsort(L,1); L
  %1 = List([1,2,3,5])
  ? j = setsearch(L, 4, 1)  \\ 4 should have been inserted at index j
  %2 = 4
  ? listinsert(L, 4, j); L
  %3 = List([1, 2, 3, 4, 5])

The library syntax is long setsearch(GEN S, GEN x, long flag).


setunion(x, y)

Union of the two sets x and y (see setisset). If x or y is not a set, the result is undefined.
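
For instance:

  ? setunion(Set([1,2,3]), Set([2,3,4]))
  %1 = [1, 2, 3, 4]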

The library syntax is GEN setunion(GEN x, GEN y).


snfrank(D, {q = 0})

Assuming that D is a Smith normal form (i.e. vector of elementary divisors) for some module and q a power of an irreducible element or 0, return the minimal number of generators for D/qD. For instance, if q = p^n where p is a prime number, this is the dimension of (p^(n-1) D)/(p^n D) as an 𝔽_p-vector space. An argument q = 0 may be omitted.

  ? snfrank([4,4,2], 2)
  %1 = 3
  ? snfrank([4,4,2], 4)
  %2 = 2
  ? snfrank([4,4,2], 8)
  %3 = 0
  ? snfrank([4,4,2])   \\ or snfrank([4,4,2], 0)
  %4 = 3

The function also works for K[x]-modules:

  ? D=matsnf([-x-5,-1,-1,0; 0,x^2+10*x+26,-1,-x-5; 1,-x-5,-x-5,1; -1,0,0,1]);
  ? snfrank(D, x^2 + 10*x + 27)
  %6 = 2
  ? A=matdiagonal([x-1,x^2+1,x-1,(x^2+1)^2,x,(x-1)^2]); D=matsnf(A);
  ? snfrank(D,x-1)
  %8 = 3
  ? snfrank(D,(x-1)^2)
  %9 = 1
  ? snfrank(D,(x-1)^3)
  %10 = 0
  ? snfrank(D,x^2+1)
  %11 = 2

Finally this function supports any output from matsnf (e.g., with transformation matrices included, with or without cleanup).

The library syntax is long snfrank(GEN D, GEN q = NULL).


trace(x)

This applies to quite general x. If x is not a matrix, it is equal to the sum of x and its conjugate, except for polmods where it is the trace as an algebraic number.

For x a square matrix, it is the ordinary trace. If x is a nonsquare matrix (but not a vector), an error occurs.
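
For instance (outputs as expected from these definitions):

  ? trace(3 + 4*I)              \\ x + conj(x)
  %1 = 6
  ? trace(Mod(x, x^2 - x - 1))  \\ sum of the two conjugate roots
  %2 = 1
  ? trace([1, 2; 3, 4])         \\ ordinary matrix trace
  %3 = 5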

The library syntax is GEN gtrace(GEN x).


vecextract(x, y, {z})

Extraction of components of the vector or matrix x according to y. In case x is a matrix, its components are the columns of x. The parameter y is a component specifier, which is either an integer, a string describing a range, or a vector.

If y is an integer, it is considered as a mask: the binary bits of y are read from right to left, but correspond to taking the components from left to right. For example, if y = 13 = (1101)_2 then the components 1, 3 and 4 are extracted.

If y is a vector (t_VEC, t_COL or t_VECSMALL), which must have integer entries, these entries correspond to the component numbers to be extracted, in the order specified.

If y is a string, it can be

* a single (nonzero) index giving a component number (a negative index means we start counting from the end).

* a range of the form "a..b", where a and b are indexes as above. Any of a and b can be omitted; in this case, we take as default values a = 1 and b = -1, i.e. the first and last components respectively. We then extract all components in the interval [a,b], in reverse order if b < a.

In addition, if the first character in the string is ^, the complement of the given set of indices is taken.

If z is not omitted, x must be a matrix. y is then the row specifier, and z the column specifier, where the component specifier is as explained above.

  ? v = [a, b, c, d, e];
  ? vecextract(v, 5)         \\  mask
  %1 = [a, c]
  ? vecextract(v, [4, 2, 1]) \\  component list
  %2 = [d, b, a]
  ? vecextract(v, "2..4")    \\  interval
  %3 = [b, c, d]
  ? vecextract(v, "-1..-3")  \\  interval + reverse order
  %4 = [e, d, c]
  ? vecextract(v, "^2")      \\  complement
  %5 = [a, c, d, e]
  ? vecextract(matid(3), "2..", "..")
  %6 =
  [0 1 0]
  
  [0 0 1]

The range notations v[i..j] and v[^i] (for t_VEC or t_COL) and M[i..j, k..l] and friends (for t_MAT) implement a subset of the above, in a simpler and faster way, hence should be preferred in most common situations. The following features are not implemented in the range notation:

* reverse order,

* omitting either a or b in a..b.
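
For instance, with v as above, the range notation gives the same results as the string forms:

  ? v = [a, b, c, d, e];
  ? v[2..4]     \\ same as vecextract(v, "2..4")
  %2 = [b, c, d]
  ? v[^2]       \\ same as vecextract(v, "^2")
  %3 = [a, c, d, e]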

The library syntax is GEN extract0(GEN x, GEN y, GEN z = NULL).


vecprod(v)

Return the product of the components of the vector v. Return 1 on an empty vector.

  ? vecprod([1,2,3])
  %1 = 6
  ? vecprod([])
  %2 = 1

The library syntax is GEN vecprod(GEN v).


vecsearch(v, x, {cmpf})

Determines whether x belongs to the sorted vector or list v: return the (positive) index where x was found, or 0 if it does not belong to v.

If the comparison function cmpf is omitted, we assume that v is sorted in increasing order, according to the standard comparison function lex, thereby restricting the possible types for x and the elements of v (integers, fractions, reals, and vectors of such). We also transparently allow a t_VECSMALL x in this case, for the natural ordering of the integers.

If cmpf is present, it is understood as a comparison function and we assume that v is sorted according to it, see vecsort for how to encode comparison functions.

  ? v = [1,3,4,5,7];
  ? vecsearch(v, 3)
  %2 = 2
  ? vecsearch(v, 6)
  %3 = 0 \\ not in the list
  ? vecsearch([7,6,5], 5) \\ unsorted vector: result undefined
  %4 = 0

Note that if we are sorting with respect to a key which is expensive to compute (e.g. a discriminant), one should rather precompute all keys, sort that vector and search in the vector of keys, rather than searching in the original vector with respect to a comparison function.
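
A minimal sketch of this pattern, using abs as a stand-in for an expensive key:

  ? v = [-5, 2, -1, 7];
  ? T = vecsort(apply(abs, v))   \\ precomputed, sorted keys
  %2 = [1, 2, 5, 7]
  ? vecsearch(T, abs(-5))        \\ search among the keys, no cmpf needed
  %3 = 3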

By abuse of notation, v is also allowed to be a matrix, seen as a vector of its columns; again by abuse of notation, a t_VEC x is considered as part of the matrix if its transpose is one of the matrix columns.

  ? v = vecsort([3,0,2; 1,0,2]) \\ sort matrix columns according to lex order
  %1 =
  [0 2 3]
  
  [0 2 1]
  ? vecsearch(v, [3,1]~)
  %2 = 3
  ? vecsearch(v, [3,1])  \\ can search for x or x~
  %3 = 3
  ? vecsearch(v, [1,2])
  %4 = 0 \\ not in the list

The library syntax is long vecsearch(GEN v, GEN x, GEN cmpf = NULL).


vecsort(x, {cmpf}, {flag = 0})

Sorts the vector x in ascending order, using a mergesort method. x must be a list, vector or matrix (seen as a vector of its columns). Note that mergesort is stable, hence the initial ordering of "equal" entries (with respect to the sorting criterion) is not changed.

If cmpf is omitted, we use the standard comparison function lex, thereby restricting the possible types for the elements of x (integers, fractions or reals and vectors of those). We also transparently allow a t_VECSMALL x in this case, for the standard ordering on the integers.

If cmpf is present, it is understood as a comparison function and we sort according to it. The following possibilities exist:

* an integer k: sort according to the value of the k-th subcomponent of the components of x.

* a vector: sort lexicographically according to the components listed in the vector. For example, if cmpf = [2,1,3], sort with respect to the second component, and when these are equal, with respect to the first, and when these are equal, with respect to the third.

* a comparison function: t_CLOSURE with two arguments x and y, and returning a real number which is < 0, > 0 or = 0 if x < y, x > y or x = y respectively.

* a key: t_CLOSURE with one argument x and returning the value f(x) with respect to which we sort.

  ? vecsort([3,0,2; 1,0,2]) \\ sort columns according to lex order
  %1 =
  [0 2 3]
  
  [0 2 1]
  ? vecsort(v, (x,y)->y-x)            \\  reverse sort
  ? vecsort(v, (x,y)->abs(x)-abs(y))  \\  sort by increasing absolute value
  ? vecsort(v, abs)  \\  sort by increasing absolute value, using key
  ? cmpf(x,y) = my(dx = poldisc(x), dy = poldisc(y)); abs(dx) - abs(dy);
  ? v = [x^2+1, x^3-2, x^4+5*x+1]; vecsort(v, cmpf) \\  comparison function
  ? vecsort(v, x->abs(poldisc(x)))  \\  key

The abs and cmpf examples show how to use a named function instead of an anonymous function. It is preferable to use a key whenever possible rather than include it in the comparison function as above, since the key is evaluated O(n) times instead of O(n log n), where n is the number of entries.

A direct approach is also possible and equivalent to using a sorting key:

  ? T = [abs(poldisc(x)) | x<-v];
  ? perm = vecsort(T,,1); \\  indirect sort
  ? vecextract(v, perm)

This also provides the vector T of all keys, which is interesting for instance in later vecsearch calls: it is more efficient to sort T (T = vecextract(T, perm)) and then search for a key in T rather than to search in v using a comparison function or a key. Note also that mapisdefined is often easier to use and faster than vecsearch.

The binary digits of flag mean:

* 1: indirect sorting of the vector x, i.e. if x is an n-component vector, returns a permutation of [1,2,...,n] which applied to the components of x sorts x in increasing order. For example, vecextract(x, vecsort(x,,1)) is equivalent to vecsort(x).

* 4: use descending instead of ascending order.

* 8: remove "duplicate" entries with respect to the sorting function (keep the first occurring entry). For example:

    ? vecsort([Pi,Mod(1,2),z], (x,y)->0, 8)   \\  make everything compare equal
    %1 = [3.141592653589793238462643383]
    ? vecsort([[2,3],[0,1],[0,3]], 2, 8)
    %2 = [[0, 1], [2, 3]]

The library syntax is GEN vecsort0(GEN x, GEN cmpf = NULL, long flag).


vecsum(v)

Return the sum of the components of the vector v. Return 0 on an empty vector.

  ? vecsum([1,2,3])
  %1 = 6
  ? vecsum([])
  %2 = 0

The library syntax is GEN vecsum(GEN v).


vector(n, {X}, {expr = 0})

Creates a row vector (type t_VEC) with n components: the i-th component is the expression expr evaluated at X = i, for i ranging over the integers from 1 to n. If the last two arguments are omitted, fills the vector with zeroes.

  ? vector(3,i, 5*i)
  %1 = [5, 10, 15]
  ? vector(3)
  %2 = [0, 0, 0]

The variable X is lexically scoped to each evaluation of expr. Any change to X within expr does not affect subsequent evaluations: X still runs from 1 to n. A local change allows for example different indexing:

  vector(10, i, i=i-1; f(i)) \\ i = 0, ..., 9
  vector(10, i, i=2*i; f(i)) \\ i = 2, 4, ..., 20

This per-element scope for X differs from for loop evaluations, as the following example shows:

  n = 3
  v = vector(n); vector(n, i, i++)             \\ --> [2, 3, 4]
  v = vector(n); for (i = 1, n, v[i] = i++)    \\ --> [2, 0, 4]


vectorsmall(n, {X}, {expr = 0})

Creates a row vector of small integers (type t_VECSMALL) with n components: the i-th component is the expression expr evaluated at X = i, for i ranging over the integers from 1 to n.
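
For instance:

  ? vectorsmall(5, i, i^2)
  %1 = Vecsmall([1, 4, 9, 16, 25])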


vectorv(n, {X}, {expr = 0})

As vector, but returns a column vector (type t_COL).
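
For instance:

  ? vectorv(3, i, 5*i)
  %1 = [5, 10, 15]~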