The ACO Seminar (2020–2021)

November 19, 3:30pm, https://cmu.zoom.us/j/98951280265
Santosh Vempala, Georgia Tech
Solving Sparse Linear Systems Faster than Matrix Multiplication

Abstract:

Can linear systems be solved faster than matrix multiplication?

A celebrated line of work shows that the important special case of Laplacian linear systems can be solved in nearly linear time. In the general setting, however, the bit complexity of solving an $n$-by-$n$ linear system $Ax=b$ is $n^\omega$, where $\omega < 2.372864$ is the matrix multiplication exponent. Even for sparse linear systems with $\mathrm{poly}(n)$ condition number, which arise throughout scientific computing, the famous Conjugate Gradient/Lanczos methods require $\Omega(n)$ bits for intermediate numbers; improving the resulting $n^2 \cdot \mathrm{nnz}(A)$ complexity has been an open problem.
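For context, here is a minimal NumPy sketch of the classical Conjugate Gradient iteration referred to above (this is the baseline, not the new algorithm; the function name and parameters are our illustrative choices, and we assume $A$ is symmetric positive definite). Each iteration costs one matrix-vector product, i.e. $O(\mathrm{nnz}(A))$ arithmetic operations; up to $n$ such iterations on intermediate numbers of $\Omega(n)$ bits is what yields the $n^2 \cdot \mathrm{nnz}(A)$ bit complexity quoted above.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
        # Classical CG for a symmetric positive definite A.
        # Each iteration performs one product with A, costing
        # O(nnz(A)) arithmetic operations when A is sparse.
        n = b.shape[0]
        if max_iter is None:
            max_iter = n   # exact termination after n steps, in exact arithmetic
        x = np.zeros(n)
        r = b - A @ x      # residual b - A x
        p = r.copy()       # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Example: a diagonally dominant tridiagonal system (condition number <= 3),
    # so CG converges in far fewer than n iterations.
    n = 100
    A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))   # residual norm, ~1e-10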

We present an algorithm that solves linear systems with sparse $A$ asymptotically faster than matrix multiplication for any $\omega > 2$. This speedup holds for any input matrix $A$ with $o(n^{\omega-1}/\log \kappa(A))$ non-zeros, where $\kappa(A)$ is the condition number of $A$. For $\mathrm{poly}(n)$-conditioned matrices with $O(n)$ non-zeros, and the current value of $\omega$, the bit complexity of our algorithm to solve to within any $1/\mathrm{poly}(n)$ error is $O(n^{2.331645})$.
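(As a quick check that the $O(n)$-nonzero case falls under the threshold: if $\kappa(A) = \mathrm{poly}(n)$ then $\log \kappa(A) = \Theta(\log n)$, so $n^{\omega-1}/\log \kappa(A) = \Theta(n^{\omega-1}/\log n)$, and $n^{\omega-2}/\log n \to \infty$ for any fixed $\omega > 2$; hence $\mathrm{nnz}(A) = O(n)$ is indeed $o(n^{\omega-1}/\log \kappa(A))$.)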

Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by the algebraic algorithm of [Eberly-Giesbrecht-Giorgi-Storjohann-Villard ISSAC 06, 07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to lower bound the smallest singular value and the smallest gap in singular values of semi-random matrices.
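To make "block Krylov method" concrete, below is a minimal dense NumPy sketch of the classical block Krylov projection: build the space spanned by $[B, AB, A^2B, \dots]$ for a random block $B$ containing $b$, and solve the projected least-squares problem. The function name and parameter choices are ours, and the sketch deliberately omits what the talk is about: the recursive low displacement rank factorizations and the numerical stability analysis that make this approach fast and robust.

    import numpy as np

    def block_krylov_solve(A, b, block_size=4, num_blocks=6):
        # Naive block Krylov projection: form K = [B, A B, ..., A^{m-1} B]
        # with B = [b | random columns], then find the best solution to
        # A x = b within the column span of K via least squares.
        n = b.shape[0]
        rng = np.random.default_rng(0)
        B = np.column_stack([b, rng.standard_normal((n, block_size - 1))])
        blocks = [B]
        for _ in range(num_blocks - 1):
            blocks.append(A @ blocks[-1])     # extend with the next power A^k B
        K = np.column_stack(blocks)           # n x (block_size * num_blocks) basis
        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
        return K @ y

    # Example: a well-conditioned SPD matrix, where a low-degree block
    # Krylov space already captures the solution to high accuracy.
    n = 200
    rng = np.random.default_rng(1)
    M = rng.standard_normal((n, n))
    A = np.eye(n) + 0.05 * (M @ M.T) / n      # eigenvalues clustered near 1
    b = rng.standard_normal(n)
    x = block_krylov_solve(A, b)
    print(np.linalg.norm(A @ x - b))          # small residual, ~1e-7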

Joint work with Richard Peng; manuscript at https://arxiv.org/abs/2007.10254.

Please email Boris Bukh (bbukh ~at~ math) for a password.
