Matfunc.py was written by Raymond Hettinger in 2001 and placed in the Public Domain. Send comments and suggestions to python@rcn.com and get updates from users.rcn.com/python/download/matfunc.py.

Matfunc.py is a 100% pure Python module for vector, matrix, and table math operations which runs on Python 2.1 or later. It neither conflicts with nor requires NumPy.

For matrices, it includes matrix multiplication, division, least squares solutions, the LU and QR factorizations, inverses, determinants, and eigenvalue determination (for real matrices). Vector operations include dot product, outer product, cross product, L^{2} norm, and polynomial evaluation. Table operations provide implicit looping and convenient expression of functions applied to whole tables. For ease of use, these operations provide broadcasting for convenient expression of vector/scalar, matrix/scalar, and matrix/vector operations. In addition, several helper functions are provided to create elementary matrices and for polynomial and rational curve-fitting.

The Matfunc module is organized into a class Table, two sub-classes Matrix and Vec, and assorted helper functions.

Table extends UserList to include math operations which apply to every element with implicit looping while retaining other useful list features such as slices. It is best described by example:

```
>>> from matfunc import Table
>>> inventoryStart=Table( [100, 80, 90, 200, 105] )  # Creates object from a list
>>> purchases=Table( [75, 35, 23, 41, 30] )
>>> sales=Table( [19, 56, 22, 60, 65] )
>>> inventoryEnd=inventoryStart + purchases - sales  # Note implicit looping
>>> inventoryEnd
[156, 59, 91, 181, 70]
>>> unitCost=Table( [1.10, 1.05, 1.18, 1.60, 0.92] )
>>> markup=1.25
>>> unitPrice=unitCost * markup                      # Note scalar broadcast to list
>>> unitPrice
[1.375, 1.3125, 1.4749999999999999, 2.0, 1.1500000000000001]
>>> totalPurchases=(unitCost * purchases).sum()      # Sum of unit costs times quantities
>>> totalPurchases
239.59
>>> totalSales=(unitPrice * sales).sum()
>>> totalSales
326.82499999999999
```

**CONSTRUCTING TABLE INSTANCES**

Call the Table constructor with a list argument. For two-dimensional arrays, call the constructor with a list of one-dimensional Table objects.

```
>>> a=Table( [1,2,3,4] )
>>> b=Table( [2,3,5,7] )
>>> c=a * 3
>>> arr=Table( [a,b,c] )     # 2-D array consisting of three 1-D arrays
>>> arr
[[1, 2, 3, 4], [2, 3, 5, 7], [3, 6, 9, 12]]
>>> print a.dim, arr.dim     # Show array dimensions
1 2
>>> arr[2][3]                # Access elements using [row][col] references
12
>>> arr.prod()               # Compute the product of all elements
9797760.0
>>> arr - Table( [c,b,a] )   # Subtract one 2-D array from another
[[-2, -4, -6, -8], [0, 0, 0, 0], [2, 4, 6, 8]]
>>> arr * 3                  # Multiply all elements by a scalar
[[3, 6, 9, 12], [6, 9, 15, 21], [9, 18, 27, 36]]
>>> arr + b                  # Broadcast a row to every row of a 2-D array
[[3, 5, 8, 11], [4, 6, 10, 14], [5, 9, 14, 19]]
```

METHODS OF THE TABLE CLASS

**map**( unaryOperator ) -- Applies a function of one variable to every element.

```
>>> import math
>>> b.map( math.cos )
[-0.4161468365471, -0.989992496600, 0.283662185463, 0.75390225434]
>>> arr.map( lambda x: x%2 and 3*x+1 or x/2 )
[[4, 1, 10, 2], [1, 10, 16, 22], [10, 3, 28, 6]]
```

**map**( binaryOperator, secondArgument ) -- Applies a function of two variables to corresponding elements in two arrays. If the arrays are not of the same dimension, broadcasting occurs (rows or scalars are repeated to match the array of higher dimension).

```
>>> a.map( lambda x,y: 10*x+y, b )     # Pair elements of two 1-D arrays
[12, 23, 35, 47]
>>> a.map( lambda x,y: 10*x+y, 5 )     # Broadcast the 5 to every element of a 1-D array
[15, 25, 35, 45]
>>> arr.map( lambda x,y: 10*x+y, b )   # Broadcast a 1-D row to every row of a 2-D array
[[12, 23, 35, 47], [22, 33, 55, 77], [32, 63, 95, 127]]
```
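The broadcasting behavior can be sketched in a few lines of ordinary Python. The recursive helper `bmap` below is an invented name for illustration, not matfunc's actual implementation; it compares nesting depths and repeats the shallower argument to match the deeper one before applying the operator pairwise.

```python
def depth(v):
    """Nesting depth: 0 for a scalar, 1 for a row, 2 for a 2-D array."""
    d = 0
    while isinstance(v, list):
        d, v = d + 1, v[0]
    return d

def bmap(op, x, y):
    """Apply a binary op element-wise, broadcasting scalars and rows."""
    dx, dy = depth(x), depth(y)
    if dx == dy == 0:
        return op(x, y)                          # two scalars: apply directly
    if dx > dy:
        return [bmap(op, row, y) for row in x]   # repeat y over rows of x
    if dy > dx:
        return [bmap(op, x, row) for row in y]   # repeat x over rows of y
    return [bmap(op, xi, yi) for xi, yi in zip(x, y)]  # same depth: pairwise

arr = [[1, 2, 3, 4], [2, 3, 5, 7], [3, 6, 9, 12]]
b = [2, 3, 5, 7]
print(bmap(lambda u, v: 10*u + v, [1, 2, 3, 4], b))   # [12, 23, 35, 47]
print(bmap(lambda u, v: 10*u + v, [1, 2, 3, 4], 5))   # [15, 25, 35, 45]
print(bmap(lambda u, v: 10*u + v, arr, b))
# [[12, 23, 35, 47], [22, 33, 55, 77], [32, 63, 95, 127]]
```

The same three cases from the session above fall out of one recursion: scalar broadcast, pairwise combination, and row broadcast over a 2-D array.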

**flatten**() -- Collapses a table object to a 1-D list.

```
>>> arr.flatten()
[3, 6, 9, 12, 2, 3, 5, 7, 1, 2, 3, 4]
```

**exists**( predicate ) -- Checks every element to see if even one instance is true. The predicate is a boolean function of one variable.

```
>>> def iseven(x): return x%2==0
...
>>> b.exists(iseven)
1
>>> (b*2+1).exists(iseven)
0
```

**forall**(predicate) -- Checks every element to see if all instances are true. The predicate is a boolean function of one variable.

```
>>> def isprime(x): return x==2 or 2**(x-1)%x==1
...
>>> a.forall( isprime )
0
>>> b.forall( isprime )
1
```

**concat**( element ) -- Makes a new table object with the new element added to the end. For lists, this behavior is normally provided by the plus operator; however, a substitute method is needed for table objects because plus has been overridden to mean table addition.

```
>>> [1,2,3,4] + [5]                  # With lists, the plus operator appends to a list
[1, 2, 3, 4, 5]
>>> Table([1,2,3,4]) + 5             # Table objects use plus for addition
[6, 7, 8, 9]
>>> Table([1,2,3,4]).concat( [5] )   # So a concat method is needed for appending
[1, 2, 3, 4, 5]
```

The Vec class extends Table and is intended for 1-D arrays. It adds pretty printing and standard math vector operations. It is best described by example.

```
>>> from matfunc import Vec
>>> a=Vec( [4,-2,5] )         # Constructed just like the Table class
>>> b=Vec( [3,10,-6] )
>>> a                         # Normal table representation
[4, -2, 5]
>>> print a                   # Pretty printed with the built-in __str__() method
4.000 -2.000 5.000
>>> a.dot(b)                  # Computes the dot or inner product of 'a' and 'b'
-38.0
>>> a.norm()                  # Computes the length (hypotenuse or L^2 norm) of 'a'
6.7082039324993694
>>> print a.normalize()       # Vector of length 1 in the same direction as 'a'
0.596 -0.298 0.745
>>> print a.outer(b)          # Constructs a multiplication table from 'a' and 'b'
12.000 40.000 -24.000
-6.000 -20.000 12.000
15.000 50.000 -30.000
>>> print a.cross(b)          # The vector cross product is perpendicular to 'a' and 'b'
-38.000 39.000 46.000
>>> Vec([6,3,4]).polyval(5)   # Evaluates 6*x**2 + 3*x + 4 at x=5
169.0
```
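These vector results are easy to verify with ordinary Python. The sketch below (stand-alone, independent of matfunc) reproduces the dot product, L2 norm, and cross product for the vectors above, and checks that the cross product really is perpendicular to both inputs.

```python
import math

a, b = [4, -2, 5], [3, 10, -6]

def dot(u, v):
    """Inner product: sum of pairwise products."""
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    """L2 norm (length, hypotenuse)."""
    return math.sqrt(dot(u, u))

def cross(u, v):
    """3-D cross product, perpendicular to both u and v."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

print(dot(a, b))       # -38
print(norm(a))         # 6.708203932499369
print(cross(a, b))     # [-38, 39, 46]
print(dot(cross(a, b), a), dot(cross(a, b), b))   # 0 0 -- perpendicular to both
```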

The .**house**(index) method is a helper for the Matrix class; it computes Householder reflection vectors, which are used to zero out parts of a vector.
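To illustrate what a Householder reflection does, here is a sketch of the standard construction (from [Golub], not matfunc's own house code): the vector v = x + sign(x[0])*||x||*e0 defines a reflection H = I - 2*v*v^{t}/(v^{t}v) that sends x to a multiple of the first basis vector, zeroing every other component.

```python
import math

def house(x):
    """Householder vector v: reflecting x across the hyperplane orthogonal
    to v zeroes every component of x except the first."""
    normx = math.sqrt(sum(t * t for t in x))
    v = list(x)
    v[0] += math.copysign(normx, x[0])   # sign chosen to avoid cancellation
    return v

def reflect(v, x):
    """Apply the reflection H = I - 2*v*v^t/(v^t*v) to x."""
    s = 2.0 * sum(vi * xi for vi, xi in zip(v, x)) / sum(vi * vi for vi in v)
    return [xi - s * vi for vi, xi in zip(v, x)]

x = [4.0, -2.0, 5.0]
print(reflect(house(x), x))   # approx [-6.708..., 0, 0]: only the first component survives
```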

Matrix routines are organized into a class hierarchy starting with Matrix, its subclass Square which has a Triangular subclass which splits into UpperTri and LowerTri. This organization assigns efficient algorithms to each structure (i.e. solving upper triangular matrices with back-substitution) and adds structure specific methods (i.e. eigenvalues and inverses only have meaning for square matrices).

**Mat**(listOflists, type) is a factory method for creating new matrices from a list of lists or a list of Vecs. It automatically selects the Matrix class for rectangular matrices, Square when the row and column lengths agree, and LowerTri or UpperTri if there are blocks of zeroes.

The methods **tr** (for transpose), **star** (for Hermitian adjoints), **diag**, **trace**, and **augment** all have their usual mathematical meanings. Matrix multiplication is accomplished by **mmul** and matrix division by **solve**(b). When the systems are inconsistent or overdetermined, the least squares solution is computed.

**qr** returns the QR decomposition, where Q is orthonormal (a rotation in space with Q^{t}Q=I) and R is upper triangular. If the original matrix A is m-by-n, then A=Q*R, Q is m-by-n, and R is n-by-n.
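The shapes and post-conditions can be seen in a compact pure-Python sketch of QR by Householder reflections (the same family of algorithm the module describes, but illustrative code rather than matfunc's implementation; the name `house_qr` is invented here):

```python
import math

def house_qr(A):
    """Thin QR by Householder reflections: returns (Q, R) with A = Q*R,
    Q m-by-n with orthonormal columns, R n-by-n upper triangular."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(m)] for i in range(m)]
    for k in range(n):
        x = [R[i][k] for i in range(k, m)]       # column k at and below the diagonal
        normx = math.sqrt(sum(t * t for t in x))
        if normx == 0.0:
            continue
        v = x[:]
        v[0] += math.copysign(normx, x[0])       # sign chosen to avoid cancellation
        vv = sum(t * t for t in v)
        for j in range(k, n):                    # R = H*R  (apply reflection on the left)
            s = 2.0 * sum(v[i - k] * R[i][j] for i in range(k, m)) / vv
            for i in range(k, m):
                R[i][j] -= s * v[i - k]
        for i in range(m):                       # Q = Q*H  (accumulate the product)
            s = 2.0 * sum(v[j - k] * Q[i][j] for j in range(k, m)) / vv
            for j in range(k, m):
                Q[i][j] -= s * v[j - k]
    return [row[:n] for row in Q], [R[i][:n] for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # a 3-by-2 example
Q, R = house_qr(A)                          # Q is 3-by-2, R is 2-by-2
```

Multiplying Q by R reconstructs A, and the columns of Q are orthonormal, matching the post-conditions stated above.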

Square matrices add several other methods. **det** and **inverse** have their normal mathematical meanings. **hessenberg** returns a similar matrix (having the same eigenvalues) where all elements below the first sub-diagonal are zero. **eigs** estimates the eigenvalues of real matrices.

**lu** returns the LU decomposition of square matrices where L is lower triangular, U is upper triangular, and L*U is the original matrix.
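The LU idea can be sketched with the Doolittle scheme: unit diagonal on L, no pivoting (matching the limitation noted later). This is an illustration under that assumed convention, not matfunc's actual lu code.

```python
def lu(A):
    """Doolittle LU without pivoting: returns (L, U) with L unit lower
    triangular, U upper triangular, and L*U equal to the original A."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L (fails if U[i][i] is zero)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu([[4.0, 3.0], [6.0, 3.0]])
print(L)   # [[1.0, 0.0], [1.5, 1.0]]
print(U)   # [[4.0, 3.0], [0.0, -1.5]]
```

The division by U[i][i] is exactly where a zero diagonal element breaks the factorization, which is why the limitations section recommends pivoting-aware alternatives for such matrices.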

**__pow__** raises matrices to positive integer powers; for example, A**3 gives A*A*A. Note that this may be unexpected because all other math operators are applied element-by-element; for example, A+3 adds 3 to every element individually.

HELPER FUNCTIONS

**iszero**(x) -- This predicate evaluates to true when x is nearly zero. It is used for equality testing in the post-condition assertion checks and for determining when a row has been zeroed in the eigenvalue routine.

**zeroes**(m,n), **eye**(m,n), **hilb**(m,n), and **rand**(m,n) are factory functions that create elementary matrices. If one argument is given, a square matrix is created; with two arguments, a rectangular m-by-n matrix is created. zeroes sets all elements to zero. eye creates an identity matrix with the diagonal elements set to one. hilb creates a Hilbert matrix where element[i][j] is 1/(i+j+1) -- these matrices are very close to being singular. rand creates a matrix filled with random numbers.
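The element rules for two of these factories can be sketched directly as stand-alone list-of-lists versions (illustrative re-implementations, not matfunc's own functions):

```python
def eye(m, n=None):
    """Identity-like matrix: ones on the diagonal, zeroes elsewhere."""
    n = m if n is None else n              # one argument means square
    return [[float(i == j) for j in range(n)] for i in range(m)]

def hilb(m, n=None):
    """Hilbert matrix: element [i][j] is 1/(i+j+1)."""
    n = m if n is None else n
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(m)]

print(eye(2, 3))   # [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(hilb(2))     # [[1.0, 0.5], [0.5, 0.3333333333333333]]
```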

**polyfit**(points, degree) and **ratfit**(points, degree) create polynomial and rational polynomial approximations (curve-fits) of the specified degree (defaults to 2) for a given set of x,y values. There need to be more points (preferably many more) than the specified degree (e.g. four points exactly determine a cubic equation). The points variable is a 2-by-n matrix with the x values in the first row and the y values in the second row. The answer is returned in a format suitable for evaluation by Vec.polyval and Vec.ratval. The coefficient of the highest power term is stored first. For ratval of degree two, the first three terms belong to the numerator and the last two belong to the denominator, with an implied extra denominator term which is always 1. For example: [10,20,30,40,50] translates to (10*x**2 + 20*x + 30) / (40*x**2 + 50*x + 1).
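The coefficient layout can be checked with a short Horner-based evaluator. This is a sketch of the described format, not the module's own Vec.ratval; the names `horner` and `ratval` here are illustrative.

```python
def horner(coeffs, x):
    """Evaluate a polynomial by nested multiplication; highest power first."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def ratval(coeffs, x, degree=2):
    """Rational layout described above: for degree 2, the first three
    coefficients form the numerator and the last two the denominator,
    with an implied trailing denominator term of 1."""
    num = coeffs[:degree + 1]
    den = coeffs[degree + 1:] + [1.0]
    return horner(num, x) / horner(den, x)

# [10,20,30,40,50] means (10*x**2 + 20*x + 30) / (40*x**2 + 50*x + 1)
print(ratval([10, 20, 30, 40, 50], 0))   # 30.0  (numerator 30 over denominator 1)
```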

**funToVec**( tgtfun, low, high, steps, EqualSpacing ) computes the 2-by-n matrix used by polyfit and ratfit. steps (default 30) is the number of points. If EqualSpacing is set to true, the abscissas are evenly spaced within the interval (low,high); otherwise, Chebyshev nodes are used. The tgtfun is a function of one variable that will be evaluated at those abscissas.

```
>>> print funToVec( lambda x: math.sin(x), steps=5, EqualSpacing=1 )
-0.800 -0.400 0.000 0.400 0.800        # Spacing of 0.4 within the interval (-1..1)
-0.717 -0.389 0.000 0.389 0.717        # sin(x) at those points
>>> print polyfit( funToVec( lambda x: math.sin(x), steps=5, EqualSpacing=1 ) )
0.000 0.912 0.000
>>> print polyfit( funToVec( lambda x: math.sin(x) ), degree=5 )
0.008 0.000 -0.166 0.000 1.000 0.000   # Polynomial approximation to sin(x)
```

**funfit**(points, basisfunctions) complements polyfit and ratfit by computing the least squares best fit for coefficients of arbitrary basis functions.

```
>>> print funfit( funToVec( lambda x: math.sin(x) ), [math.exp, lambda x: x**3] )
0.029 1.000   # sin(x) is approx. 0.029*exp(x) + x**3 over (-1..1)
```

REFERENCES

**[Golub]** Gene H. Golub and Charles F. Van Loan (1996) Matrix Computations, 3rd Edition.

**[Kincaid]** David Kincaid and Ward Cheney (1996) Numerical Analysis: Mathematics of Scientific Computing, 2nd Edition.

**[Mathews]** John H. Mathews and Kurtis D. Fink (1999) Numerical Methods Using MATLAB, 3rd Edition.

**[Press]** William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery (1993) Numerical Recipes in C: The Art of Scientific Computing, 2nd Edition.

ALGORITHMS

- Determinants are computed as the product of diagonal elements on the upper-triangular matrix in a QR decomposition. A=QR, so det A=det Q * det R, and det Q=1, so det A=det R. When R is upper-triangular, det R=prod( diag( R ) ).
- Solve applies a custom solver to a matrix and then performs iterative improvement [Press section 2.5] up to 10 times or until the solution stops improving: when solving Ax=b, improve x to x + solve(A, b-Ax). The custom solver is back-substitution for an upper triangular matrix, forward-substitution for a lower triangular matrix, and QR decomposition for general matrices. Ax=b is transformed to QRx=b and then to Rx=Q^{t}b, which is solved by back-substitution. This method works for non-square matrices and returns the least squares solution when Ax=b is inconsistent or over-determined. Note that [Golub 3.5.3] comments that this implementation of iterative improvement is unlikely to increase the accuracy of the solution since there are few if any significant digits in b-Ax; however, my experiments do show improvement. To guard against loss of precision, the last step that does not produce an improvement (as measured by the norm squared) is thrown away.
- The QR decomposition uses Householder reflections [Golub algorithm 5.1.1] applied in vector form rather than matrix form [Golub section 5.1.4].
- The eigs routine first makes a similarity transformation (keeping eigenvalues constant) to upper Hessenberg form [Golub algorithm 7.4.2] and then applies the QR method with shifts to force an eigenvalue into the lower right corner. The matrix is then deflated by one row and column and the method is repeated until all eigenvalues are found [Kincaid section 5.5].
- Function approximation defaults to Chebyshev nodes rather than equally spaced points in order to avoid the Runge phenomenon [Mathews section 4.5], where approximation accuracy decreases when the number of sample points is increased. The nodes are located at t(k)=cos( (2N+1-2k)*Pi/(2N+2) ) for k=0, 1, …, N. The nodes are then rescaled from [-1,1] to the specified interval.
- Polynomial evaluation uses Horner's method of nested multiplication [Kincaid section 1.2].
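The Chebyshev node formula above can be sketched directly (an illustrative stand-alone version with an invented name, not matfunc's funToVec):

```python
import math

def cheb_nodes(low, high, steps):
    """Chebyshev nodes t(k)=cos((2N+1-2k)*pi/(2N+2)) for k=0..N, where
    N=steps-1, rescaled from [-1,1] to the interval (low, high)."""
    N = steps - 1
    ts = [math.cos((2*N + 1 - 2*k) * math.pi / (2*N + 2)) for k in range(steps)]
    return [low + (high - low) * (t + 1.0) / 2.0 for t in ts]

nodes = cheb_nodes(-1.0, 1.0, 5)
print([round(t, 3) for t in nodes])   # [-0.951, -0.588, 0.0, 0.588, 0.951]
```

Note how the nodes cluster toward the ends of the interval; that clustering is what tames the Runge phenomenon relative to equal spacing.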

LIMITATIONS

- The determinant has a 50% chance of having the wrong sign because the Q in the QR decomposition may be a reflection (det Q = -1) rather than a pure rotation.
- Matrix sizes are pre-computed when the matrix is constructed. That size does not update if an element is deleted. Recommend building a new matrix from the desired vectors.
- The LU decomposition does not use pivoting or scaling, which reduces numerical stability and makes it unable to bypass zero elements on the diagonal.
- The QR decomposition results in a Q that is slightly off from being orthonormal because it prioritizes the other post-conditions (that A=Q*R and R is upper-triangular). As a result, the post-condition assertion for Q^{t}Q=I sometimes fails within a tight tolerance. Either lower the tolerance or disable post-condition assertion checking.
- The QR decomposition has not been tested for matrices with complex values.
- The .eigs() method handles only real matrices with real eigenvalues.
- Matrix exponentiation is only defined for positive integer powers.