Eigenvalue computations for large sparse matrices are commonly based on Krylov subspace techniques such as the Lanczos method. One of the dominant operations in such algorithms is the repeated computation of inner products with the same vector in order to preserve orthogonality of the Krylov basis. These operations can be accelerated on GPUs using existing BLAS functionality, but this is not fully efficient due to unnecessary memory transfers. We present improved implementations in CUDA and OpenCL, which are now available in ViennaCL, PETSc, and SLEPc, and demonstrate an up to two-fold performance gain over existing GPU vendor libraries.
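The core observation can be illustrated with a small NumPy sketch (a hypothetical example for exposition, not the paper's CUDA/OpenCL code): orthogonalizing a new candidate vector against a Krylov basis requires many inner products with that same vector, and these can be fused into a single matrix-vector product that reads the vector only once — the kind of kernel fusion that avoids redundant memory transfers on GPUs.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the paper): orthogonalize a
# new candidate vector w against an orthonormal Krylov basis V of k vectors.
rng = np.random.default_rng(0)
n, k = 1000, 20
V, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis columns
w = rng.standard_normal(n)

# Naive approach: k separate inner products with the same vector w,
# i.e. k BLAS-1 dot-product calls, each re-reading w from memory.
coeffs_naive = np.array([V[:, i] @ w for i in range(k)])

# Fused approach: a single matrix-vector product V^T w (one BLAS-2 call),
# which reads w only once -- analogous to the fused GPU kernels the
# abstract describes.
coeffs_fused = V.T @ w
assert np.allclose(coeffs_naive, coeffs_fused)

# Subtract the projections; the result is orthogonal to the basis.
w_orth = w - V @ coeffs_fused
print(np.abs(V.T @ w_orth).max() < 1e-10)
```

On a GPU, replacing many small dot-product kernels with one fused kernel also reduces kernel-launch overhead, not just memory traffic.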

Published in: ICNAAM, 11th International Conference of Numerical Analysis and Applied Mathematics, Rhodes, Greece. Preprint: http://www.mcs.anl.gov/papers/P4098-0713_1.pdf