There are many ways to perform arithmetic operations on IVSparse matrices. This page shows how to perform the basic ones, which fall into a few groups: BLAS routines, vector operations, and matrix operations. Because IVSparse has no dense storage format, the Eigen Linear Algebra Library is used whenever a dense matrix or vector needs to be returned.
First, to set up the examples we need to create some matrices and vectors. Here are some examples of how to do that:
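Below is a minimal sketch of one possible setup. The header name, the template parameter order (value type, index type, compression level), and the Eigen-based constructor are assumptions drawn from the descriptions on this page; check the API reference for the exact signatures.

```cpp
#include <cstdint>
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <IVSparse/SparseMatrix>   // assumed header name

int main() {
    // A small 4x4 Eigen sparse matrix to construct the IVSparse matrices from
    Eigen::SparseMatrix<double> eigenMat(4, 4);
    eigenMat.insert(0, 0) = 1.0;
    eigenMat.insert(1, 1) = 2.0;
    eigenMat.insert(2, 1) = 2.0;
    eigenMat.insert(3, 3) = 4.0;
    eigenMat.makeCompressed();

    // IVSparse matrices at the VCSC (level 2) and IVCSC (level 3) compression levels
    IVSparse::SparseMatrix<double, uint64_t, 2> vcscMat(eigenMat);
    IVSparse::SparseMatrix<double, uint64_t, 3> ivcscMat(eigenMat);

    // A dense Eigen vector reused by the later examples
    Eigen::VectorXd denseVec = Eigen::VectorXd::Ones(4);

    return 0;
}
```

The snippets in the rest of this page are written as if they continue inside this `main`, reusing `vcscMat` and `denseVec`.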
For BLAS routines, IVSparse supports scalar multiplication, SpMV (sparse matrix-vector multiplication), and SpMM (sparse matrix-dense matrix multiplication).
Scalar Multiplication
Starting with scalar multiplication, there is both an in-place and a new-return version of this method. The in-place version uses the *= operator and the new-return version uses the * operator. This operation goes through the entire matrix and multiplies each value by the scalar. In VCSC this is very fast because only the unique values of each column are kept, in contiguous memory. Here are some examples:
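A hedged sketch, continuing from the setup above (the operators come from the description; the return type of the new-return version is assumed to be another IVSparse matrix):

```cpp
// Scalar multiplication, reusing vcscMat from the setup sketch
auto scaled = vcscMat * 2.0;   // new-return version: vcscMat is left unchanged
vcscMat *= 2.0;                // in-place version: every stored value is doubled
```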
SpMV
For multiplying an IVSparse matrix by a vector there are two supported ways of doing this: using Eigen vectors or IVSparse vectors. This is done in O(n^2) and is slower than the Eigen implementation of this routine. Since the multiplication process is the same for both, the example only uses Eigen vectors for simplicity. Here are some examples:
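A minimal sketch of the Eigen-vector path, continuing from the setup above (the result is assumed to come back as a dense Eigen object, per the note about dense returns):

```cpp
// SpMV: multiply the VCSC matrix by a dense Eigen vector
Eigen::VectorXd x = Eigen::VectorXd::Ones(4);
Eigen::VectorXd y = vcscMat * x;   // assumed to yield a dense Eigen vector
```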
SpMM
For multiplying an IVSparse matrix by a dense matrix, the only supported way is to multiply by an Eigen matrix. This algorithm is done in O(n^3) and returns an Eigen::Matrix. Currently there is only support for Sparse * Dense operations. Here are some examples:
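A hedged sketch, continuing from the setup above (the dense operand and the result are Eigen matrices, as described; the exact return type is an assumption):

```cpp
// SpMM: Sparse * Dense, with the dense operand and the result as Eigen matrices
Eigen::MatrixXd B = Eigen::MatrixXd::Random(4, 3);
Eigen::MatrixXd C = vcscMat * B;   // 4x4 sparse times 4x3 dense gives a 4x3 dense result
```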
The vector operations that IVSparse supports are norm, sum, and dot products with Eigen vectors. Note that because dot products only support Eigen vectors, it is currently impossible to take the dot product of an IVSparse vector with itself, but this is something we intend to add in future work. Here are some examples:
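A sketch under stated assumptions: the nested Vector type, its (matrix, column index) constructor, and the norm/sum/dot method names are guesses based on the description above, so check the API reference before relying on them.

```cpp
// Pull out column 1 of the VCSC matrix as an IVSparse vector (assumed constructor)
IVSparse::SparseMatrix<double, uint64_t, 2>::Vector col(vcscMat, 1);

double n  = col.norm();          // Euclidean norm of the vector (assumed method name)
double s  = col.sum();           // sum of the vector's values (assumed method name)
double dp = col.dot(denseVec);   // dot product with an Eigen vector (assumed method name)
```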
There are a number of matrix operations supported for IVSparse compression formats. These are: outer/inner sums, max/min column/row coefficients, trace, sum, norm, and vector length. They are supported for all compression formats and are fairly self-explanatory.
Outer/Inner Sums
Outer sum returns a vector where each element is the sum of the corresponding outer dimension of the matrix. For column-major matrices, the outer sum returns a vector holding the sum of each column. Inner sum is the same but for the inner dimension. This method is also very efficient for VCSC due to its value compression techniques.
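A hedged sketch, continuing from the setup above (the method names outerSum and innerSum and their return types are assumptions):

```cpp
// Per-column and per-row sums for the column-major VCSC matrix
auto colSums = vcscMat.outerSum();   // one sum per column (outer dimension)
auto rowSums = vcscMat.innerSum();   // one sum per row (inner dimension)
```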
Max/Min Col/Row Coefficients
This is a set of four methods that return a vector of the minimum or maximum coefficients of either each column or each row. They are useful for finding out information about the matrix, such as the range of values across rows or columns. These are also very efficient for VCSC due to its value compression techniques.
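A sketch of the four methods, continuing from the setup above (the method names are assumptions based on the description):

```cpp
auto colMax = vcscMat.maxColCoeff();   // largest coefficient in each column
auto colMin = vcscMat.minColCoeff();   // smallest coefficient in each column
auto rowMax = vcscMat.maxRowCoeff();   // largest coefficient in each row
auto rowMin = vcscMat.minRowCoeff();   // smallest coefficient in each row
```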
Trace
Trace returns the sum of the diagonal of the matrix. This is a very simple method that is supported for all compression formats.
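A one-line sketch, continuing from the setup above (the method name trace() is an assumption):

```cpp
double t = vcscMat.trace();   // sum of the diagonal entries
```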
Sum
Sum returns the sum of all values in the matrix. This is a very simple method that is supported for all compression formats.
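A one-line sketch, continuing from the setup above (the method name sum() is an assumption):

```cpp
double total = vcscMat.sum();   // sum of every value in the matrix
```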
Norm
Norm returns the Frobenius norm of the matrix, that is, the square root of the sum of the squares of all the matrix coefficients. This is a very simple method that is supported for all compression formats.
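A one-line sketch, continuing from the setup above (the method name norm() is an assumption):

```cpp
double fnorm = vcscMat.norm();   // Frobenius norm: sqrt of the sum of squared coefficients
```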
Vector Length
Vector length returns the length of the indicated vector. This is calculated by taking the square root of the sum of squares of the vector coefficients. This is a very simple method that is supported for all compression formats.
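A one-line sketch, continuing from the setup above (the method name vectorLength() and its column-index argument are assumptions):

```cpp
double len = vcscMat.vectorLength(2);   // Euclidean length of column 2
```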