GraphBLAS Resources

Why Bother Using Linear Algebra Packages?

So why go through the effort of installing the library, figuring out how to link it, and dealing with C/Fortran interoperability issues? Because the libraries are tuned and optimized. Users can focus on the details of their own code: there is less code to produce and debug, and it is easier to switch methods if necessary. In the case of matrix multiplication, for example, the best naive triple-loop implementation does not compare with the best parallel and vectorized implementation from these libraries; the library version is faster by a factor of about 4-5.

What is BLAS?

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; many highly tuned implementations exist (ATLAS, FLAME, LAPACK, etc.) for the wide range of problems that can be cast in terms of these routines.
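To make the routines concrete, here is a pure-Python sketch of the semantics of two BLAS level-1 operations. Real BLAS implementations are heavily tuned vector/assembly code; this only illustrates what the routines compute, and the function names mirror the conventional BLAS names.

```python
# Pure-Python sketch of what two BLAS level-1 routines compute.
# Real implementations (ATLAS, OpenBLAS, ...) are heavily optimized;
# this is only a semantic illustration.

def axpy(alpha, x, y):
    """BLAS 'axpy': y <- alpha*x + y (returned as a new list here)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """BLAS 'dot': the inner product of x and y."""
    return sum(xi * yi for xi, yi in zip(x, y))

print(axpy(2.0, [1.0, 2.0], [10.0, 20.0]))   # [12.0, 24.0]
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])) # 32.0
```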

Why do we need a BLAS for Graphs?

Graph problems are challenging to program due to irregular access patterns, poor locality, and difficulty in caching and parallelization. Further, contemporary computer architectures are designed for linear and hierarchical data structures such as lists, stacks, and trees; graph processing instead requires massive amounts of random data access, causes frequent CPU cache misses, and is difficult to parallelize.

Graph algorithms have a high communication-to-computation ratio. This means that standard latency-hiding techniques break down; for example, prefetching and branch prediction provide little benefit.

What is GraphBLAS?

GraphBLAS is a graph-oriented version of the BLAS: an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. GraphBLAS is built upon the notion that a sparse matrix can be used to represent graphs as either an adjacency matrix or an incidence matrix. The GraphBLAS specification describes how graph operations (e.g. traversing and transforming graphs) can be efficiently implemented via linear algebraic methods (e.g. matrix multiplication) over different semirings.
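The adjacency-matrix encoding is easy to sketch. The snippet below is a hypothetical illustration in plain Python (not the GraphBLAS C API): a directed graph stored as a sparse matrix A, keeping only the nonzero entries, where A[i][j] being present means there is an edge from i to j.

```python
# Hypothetical sketch (plain Python, not the GraphBLAS C API):
# a directed graph as a sparse adjacency matrix, storing only
# the nonzero entries as a dict of rows.

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]

A = {}
for src, dst in edges:
    A.setdefault(src, {})[dst] = True

# Row i of A lists the out-neighbors of vertex i:
print(sorted(A[0]))  # [1, 2]
```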

It is believed that the state of the art in constructing a large collection of graph algorithms in terms of linear algebraic operations is mature enough to support the emergence of a standard set of primitive building blocks.

A key insight behind this work is that when a graph is represented by a sparse incidence or adjacency matrix, sparse matrix-vector multiplication is a step of breadth-first search. By generalizing the pair of scalar operations involved in the linear algebra computations to define a semiring, we can extend the range of these primitives to support a wide range of parallel graph algorithms.
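The BFS connection can be sketched in a few lines of plain Python (a hedged illustration, not the GraphBLAS API). One BFS step is a sparse matrix-vector product over the boolean (OR, AND) semiring: a vertex is in the next frontier if any current-frontier vertex has an edge to it. Repeating the step while masking out already-visited vertices yields the full traversal.

```python
# Sketch, assuming A is a sparse adjacency matrix stored as a
# dict of rows (A[i][j] == True means an edge i -> j).

def bfs_step(A, frontier):
    """One BFS step as a matvec over the (OR, AND) semiring:
    vertex j enters the next frontier iff OR over i in frontier of A[i][j]."""
    nxt = set()
    for i in frontier:
        for j in A.get(i, {}):
            nxt.add(j)
    return nxt

def bfs_levels(A, source):
    """Full BFS: repeat the semiring matvec, masking out visited vertices."""
    level = {source: 0}
    frontier = {source}
    d = 0
    while frontier:
        d += 1
        frontier = {j for j in bfs_step(A, frontier) if j not in level}
        for j in frontier:
            level[j] = d
    return level

A = {0: {1: True, 2: True}, 1: {3: True}, 2: {3: True}}
print(bfs_levels(A, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```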

Implementations of GraphBLAS encode graphs as sparse adjacency matrices and use vector/matrix operations to express graph algorithms.
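Swapping the semiring changes the algorithm the same matrix iteration computes. As a hedged plain-Python sketch: over the (min, +) semiring, with edge weights as the matrix entries, repeated matrix-vector products perform Bellman-Ford-style relaxation and yield single-source shortest paths.

```python
# Sketch: the same matvec iteration over the (min, +) semiring
# computes single-source shortest paths (Bellman-Ford relaxation).
# A[i][j] holds the weight of edge i -> j; absent entries mean "no edge".

INF = float("inf")

def min_plus_step(A, dist):
    """One relaxation: dist'[j] = min(dist[j], min over i of dist[i] + A[i][j])."""
    new = dict(dist)
    for i, row in A.items():
        for j, w in row.items():
            if dist[i] + w < new[j]:
                new[j] = dist[i] + w
    return new

A = {0: {1: 1.0, 2: 4.0}, 1: {2: 2.0, 3: 6.0}, 2: {3: 3.0}}
n = 4
dist = {v: INF for v in range(n)}
dist[0] = 0.0
for _ in range(n - 1):  # at most n-1 relaxation rounds suffice
    dist = min_plus_step(A, dist)
print(dist)  # {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}
```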

How should I get started learning about GraphBLAS?

There is a wide variety of resources in the GraphBLAS-Pointers repo. The resources listed below (with stars) are likely the best place to start. We suggest the videos followed by Gábor’s tutorial.


Other Suggested Papers/Talks

What should I do now that I know more about GraphBLAS?

We suggest that you use this new GraphBLAS expertise with the Lucata Pathfinder system. Please see this page for specific information on running GraphBLAS with the Pathfinder.