In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero.
There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., ''m'' × ''n'' for an ''m'' × ''n'' matrix) is sometimes referred to as the sparsity of the matrix.
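For illustration, sparsity can be computed directly from this definition. A minimal Python sketch (assuming NumPy; the example matrix is hypothetical):

import numpy as np

# Hypothetical 3 x 3 example: 7 of its 9 entries are zero.
A = np.array([[0, 0, 3],
              [4, 0, 0],
              [0, 0, 0]])

# Sparsity = number of zero-valued elements / total number of elements.
sparsity = np.count_nonzero(A == 0) / A.size
print(sparsity)  # 0.777...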
Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices, as they are common in the machine learning field. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
Special cases
Banded
An important special type of sparse matrix is a band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry a_{i,j} vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number q such that a_{i,j} = 0 whenever j > i + q. For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3 (zeros are represented with dots for clarity):

X X · X · · ·
X X X · X · ·
· X X X · X ·
X · X X X · X
· X · X X X ·
· · X · X X X
· · · X · X X
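As a sketch of the definition above (the helper below is hypothetical, assuming NumPy), the two bandwidths can be computed by scanning the nonzero positions of a matrix:

import numpy as np

def bandwidths(A):
    # Return the (lower, upper) bandwidth of a dense NumPy array.
    i, j = np.nonzero(A)                   # row/column indices of the nonzero entries
    lower = int(np.max(i - j, initial=0))  # smallest p with a[i, j] = 0 whenever i > j + p
    upper = int(np.max(j - i, initial=0))  # smallest q with a[i, j] = 0 whenever j > i + q
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
T = np.diag([1, 2, 3]) + np.diag([4, 5], k=1) + np.diag([6, 7], k=-1)
print(bandwidths(T))  # (1, 1)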
Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
By rearranging the rows and columns of a matrix it may be possible to obtain a matrix with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
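One such heuristic, the reverse Cuthill-McKee ordering, is available in SciPy; a minimal sketch (the matrix is hypothetical):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = csr_matrix(np.array([[1, 0, 0, 5],
                         [0, 2, 0, 0],
                         [0, 0, 3, 0],
                         [5, 0, 0, 4]]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_reordered = A[perm, :][:, perm]  # apply the same permutation to rows and columns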
Diagonal
A very efficient structure for an extreme case of band matrices, the ''diagonal matrix'', is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.
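A minimal sketch of this scheme (hypothetical arrays, assuming NumPy): the matrix is represented by its main diagonal alone, and a matrix-vector product reduces to an elementwise product:

import numpy as np

d = np.array([2.0, 5.0, 7.0])  # the 3 x 3 diagonal matrix diag(2, 5, 7), stored in 3 entries
x = np.array([1.0, 2.0, 3.0])

y = d * x  # same result as np.diag(d) @ x, without materializing the zeros
print(y)   # [ 2. 10. 21.]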
Symmetric
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
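A sketch of the adjacency-list form (the graph is hypothetical): each vertex maps to the set of its neighbors, and entry (i, j) of the adjacency matrix is nonzero exactly when j is a neighbor of i:

# Undirected graph on vertices 0..3; symmetry of the matrix means
# each edge appears in the neighbor set of both endpoints.
adjacency = {
    0: {1, 2},
    1: {0},
    2: {0, 3},
    3: {2},
}

print(3 in adjacency[2])  # True, so entry (2, 3) of the adjacency matrix is 1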
Block diagonal
A block-diagonal matrix consists of sub-matrices along its diagonal blocks. A block-diagonal matrix A has the form

A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix},

where A_k is a square matrix for all k = 1, ..., n.
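SciPy offers scipy.sparse.block_diag for assembling such matrices; a minimal sketch with two small hypothetical blocks:

import numpy as np
from scipy.sparse import block_diag

A1 = np.array([[1, 2],
               [3, 4]])
A2 = np.array([[5]])

A = block_diag((A1, A2))  # 3 x 3 sparse matrix with A1 and A2 on its diagonal
print(A.toarray())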
Use
Reducing fill-in
The ''fill-in'' of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
There are methods other than the Cholesky decomposition in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.
Solving sparse matrix equations
Both iterative and direct methods exist for sparse matrix solving.
Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products Ax, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
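As an illustration, SciPy's conjugate gradient solver accepts a sparse matrix directly; a minimal sketch with a hypothetical symmetric positive-definite system:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 0.0],
                         [0.0, 0.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

x, info = cg(A, b)  # info == 0 signals successful convergence
print(x)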
Storage
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element a_{i,j} of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to be stored).
In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups:
* Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.
* Those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column).
Dictionary of keys (DOK)
DOK consists of a dictionary that maps (row, column) pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.
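A minimal dictionary-of-keys sketch in plain Python (hypothetical helper, not any particular library's API):

# Map (row, column) pairs to values; absent keys are implicitly zero.
dok = {}
dok[(0, 0)] = 5.0
dok[(2, 1)] = 6.0  # entries can be inserted in any order

def get(dok, i, j):
    return dok.get((i, j), 0.0)  # elements missing from the dictionary are zero

print(get(dok, 2, 1), get(dok, 1, 1))  # 6.0 0.0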
List of lists (LIL)
LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.
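A sketch of the layout in plain Python (hypothetical helper): one list of (column, value) pairs per row, kept sorted by column index:

import bisect

rows = [[] for _ in range(3)]  # rows[i] holds sorted (column_index, value) pairs for row i

def insert(rows, i, j, v):
    bisect.insort(rows[i], (j, v))  # keeps each row sorted by column index

insert(rows, 0, 2, 3.0)
insert(rows, 0, 0, 5.0)
print(rows[0])  # [(0, 5.0), (2, 3.0)]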
Coordinate list (COO)
COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction.
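For example, SciPy's coo_matrix is built directly from such triples (the values below are hypothetical):

import numpy as np
from scipy.sparse import coo_matrix

row = np.array([0, 1, 2])
col = np.array([0, 2, 1])
data = np.array([5.0, 3.0, 6.0])

A = coo_matrix((data, (row, col)), shape=(3, 3))
print(A.toarray())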
Compressed sparse row (CSR, CRS or Yale format)
The ''compressed sparse row'' (CSR) or ''compressed row storage'' (CRS) or Yale format represents a matrix M by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.
The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.)
* The arrays V and COL_INDEX are of length NNZ, and contain the non-zero values and the column indices of those values respectively;
* COL_INDEX contains the column in which the corresponding entry of V is located.
* The array ROW_INDEX is of length m + 1 and encodes the index in V and COL_INDEX where the given row starts. This is equivalent to encoding the total number of nonzeros above row i. The last element is NNZ, i.e., the fictitious index in V immediately after the last valid index NNZ − 1.
For example, the matrix

( 5 0 0 0 )
( 0 8 0 0 )
( 0 0 3 0 )
( 0 6 0 0 )

is a 4 × 4 matrix with 4 nonzero elements, hence

V         = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 1 2 3 4 ]

assuming a zero-indexed language.
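These three arrays are what, for example, SciPy's csr_matrix stores internally (as its data, indices and indptr attributes); a sketch reproducing the example above:

import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[5, 0, 0, 0],
                         [0, 8, 0, 0],
                         [0, 0, 3, 0],
                         [0, 6, 0, 0]]))

print(A.data)     # [5 8 3 6]    -> V
print(A.indices)  # [0 1 2 1]    -> COL_INDEX
print(A.indptr)   # [0 1 2 3 4]  -> ROW_INDEX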
To extract a row, we first define:

row_start = ROW_INDEX[row]
row_end   = ROW_INDEX[row + 1]

Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.

To extract row 1 (the second row) of this matrix, we set row_start = 1 and row_end = 2. Then we make the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that in row 1 we have one element at column 1 with value 8.
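The same slicing generalizes to a full matrix-vector product; a minimal sketch of CSR multiplication over the three arrays (hypothetical helper):

def csr_matvec(V, COL_INDEX, ROW_INDEX, x):
    # Compute y = A x for the CSR matrix (V, COL_INDEX, ROW_INDEX).
    m = len(ROW_INDEX) - 1
    y = [0.0] * m
    for row in range(m):
        for k in range(ROW_INDEX[row], ROW_INDEX[row + 1]):  # slice bounds as above
            y[row] += V[k] * x[COL_INDEX[k]]
    return y

V, COL_INDEX, ROW_INDEX = [5, 8, 3, 6], [0, 1, 2, 1], [0, 1, 2, 3, 4]
print(csr_matvec(V, COL_INDEX, ROW_INDEX, [1, 1, 1, 1]))  # [5.0, 8.0, 3.0, 6.0]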
In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m(n − 1) − 1)/2.
As another example, the matrix

( 10 20  0  0  0  0 )
(  0 30  0 40  0  0 )
(  0  0 50 60 70  0 )
(  0  0  0  0  0 80 )

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

V         = [ 10 20 30 40 50 60 70 80 ]
COL_INDEX = [  0  1  1  3  2  3  4  5 ]
ROW_INDEX = [  0  2  4  7  8 ]
The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX.
* ROW_INDEX splits the array V into rows: (10, 20) (30, 40) (50, 60, 70) (80), indicating the index of V (and COL_INDEX) where each row starts and ends;
* COL_INDEX aligns values in columns: (10, 20, ...) (0, 30, 0, 40, ...) (0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80).
Note that in this format, the first value of ROW_INDEX is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula ROW_INDEX[i + 1] − ROW_INDEX[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.
The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.
For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.
Compressed sparse column (CSC or CCS)
CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of indexes into val where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
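SciPy's csc_matrix stores these same three arrays (as its data, indices and indptr attributes); a sketch reusing the earlier 4 × 4 example:

import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([[5, 0, 0, 0],
                         [0, 8, 0, 0],
                         [0, 0, 3, 0],
                         [0, 6, 0, 0]]))

print(A.data)     # [5 8 6 3]    -> val: top-to-bottom, then left-to-right
print(A.indices)  # [0 1 3 2]    -> row_ind
print(A.indptr)   # [0 1 3 4 4]  -> col_ptr: where each column starts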
Software
Many software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are open-source:
* PETSc, a large C library, containing many different matrix solvers for a variety of matrix storage formats.
* Trilinos, a large C++ library, with sub-libraries dedicated to the storage of dense and sparse matrices and solution of corresponding linear systems.
* Eigen3 is a C++ library that contains several sparse matrix solvers. However, none of them are parallelized.
* MUMPS (MUltifrontal Massively Parallel sparse direct Solver), written in Fortran90, is a frontal solver.
* deal.II, a finite element library that also has a sub-library for sparse linear systems and their solution.
* DUNE, another finite element library that also has a sub-library for sparse linear systems and their solution.
* Armadillo provides a user-friendly C++ wrapper for BLAS and LAPACK.
* SciPy provides support for several sparse matrix formats, linear algebra, and solvers.
* ALGLIB is a C++ and C# library with sparse linear algebra support.
* ARPACK, a Fortran 77 library for sparse matrix diagonalization and manipulation, using the Arnoldi algorithm.
* SLEPc, a library for the solution of large-scale linear systems and sparse matrices.
* scikit-learn, a Python library for machine learning, provides support for sparse matrices and solvers.
* SparseArrays is a Julia standard library.
* PSBLAS, a software toolkit to solve sparse linear systems, supporting multiple formats, also on GPU.
History
The term ''sparse matrix'' was possibly coined by Harry Markowitz, who initiated some pioneering work but then left the field. (Oral history interview with Harry M. Markowitz, pp. 9–10.)
Further reading
* Sparse matrix algorithms research at the Texas A&M University.
* SuiteSparse Matrix Collection.
* SMALL project, an EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.