Date of Award

12-2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mathematical Sciences

Committee Chair/Advisor

Fei Xue

Committee Member

Eleanor Jenkins

Committee Member

Leo Rebholz

Committee Member

Vincent Ervin

Abstract

This thesis studies two classes of numerical linear algebra problems: approximating the product of a matrix function with a vector, and solving the linear eigenvalue problem $Av=\lambda Bv$ for a small number of eigenvalues. These problems are solved by rational Krylov subspace methods (RKSM). We present several improvements in two directions: pole selection and the application of inexact methods.

In Chapter 3, a flexible extended Krylov subspace method ($\mathcal{F}$-EKSM) is considered for numerical approximation of the action of a matrix function $f(A)$ to a vector $b$, where the function $f$ is of Markov type. $\mathcal{F}$-EKSM has the same framework as the extended Krylov subspace method (EKSM), but replaces the zero pole of EKSM with a properly chosen fixed nonzero pole. For symmetric positive definite matrices, the optimal fixed pole is derived for $\mathcal{F}$-EKSM to achieve the lowest possible upper bound on the asymptotic convergence factor, which is lower than that of EKSM. The analysis is based on properties of Faber polynomials of $A$ and $(I-A/s)^{-1}$. For large, sparse matrices that can be handled efficiently by LU factorizations, numerical experiments show that $\mathcal{F}$-EKSM and a variant of RKSM based on a small number of fixed poles outperform EKSM in both storage and runtime, and they usually have an advantage over adaptive RKSM in runtime.
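As a rough illustration of the fixed-pole construction described above, the sketch below is hypothetical code (assuming a sparse matrix $A$, SciPy's sparse LU, and the standard projection $f(A)b \approx \|b\|\, V_k f(V_k^T A V_k) e_1$; it is not the thesis implementation). It alternates products with $A$ and solves with $A - sI$, reusing a single factorization for every step:

```python
# Hypothetical sketch of a fixed-pole extended Krylov method (illustration only):
# alternate products with A and solves with (A - s*I), reusing one LU factorization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import scipy.linalg as la

def feksm_fAb(A, b, f, s, m):
    """Approximate f(A) @ b for sparse A with a fixed nonzero pole s and m cycles."""
    n = A.shape[0]
    lu = spla.splu(sp.csc_matrix(A - s * sp.identity(n)))   # factor (A - sI) once
    V = np.zeros((n, 2 * m + 1))
    V[:, 0] = b / np.linalg.norm(b)
    k = 1
    for _ in range(m):
        # two new directions per cycle: one from A, one from (A - sI)^{-1}
        for w in (A @ V[:, k - 1], lu.solve(V[:, k - 1])):
            for i in range(k):                               # modified Gram-Schmidt
                w = w - (V[:, i] @ w) * V[:, i]
            V[:, k] = w / np.linalg.norm(w)
            k += 1
    Vk = V[:, :k]
    H = Vk.T @ (A @ Vk)                                      # small Rayleigh quotient
    return np.linalg.norm(b) * (Vk @ la.funm(H, f)[:, 0])
```

For a Markov function such as $f(z)=z^{-1/2}$, one would call, e.g., `feksm_fAb(A, b, lambda x: 1.0/np.sqrt(x), s, 20)`, with the fixed pole $s$ chosen as analyzed in Chapter 3.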

Chapter 4 concerns the theory and development of inexact RKSM for approximating the action of a matrix function $f(A)$ to a vector $b$. At each step of RKSM, a shifted linear system of equations needs to be solved to enlarge the subspace. For large-scale problems arising from discretizations of PDEs in 3D domains, such a linear system is usually solved approximately by an iterative method. The main question is how far the accuracy of these linear solves can be relaxed without negatively affecting the convergence of the approximation to $f(A)b$. Our insight into this issue is obtained by exploring the residual bounds on the rational Krylov subspace approximations to $f(A)b$, based on the decaying behavior of the entries in the first column of the matrix function of the block Rayleigh quotient of $A$ with respect to the rational Krylov subspaces. The decay bounds on these entries for both analytic functions and Markov functions can be efficiently and accurately evaluated by appropriate quadrature rules. A heuristic based on these bounds is proposed to relax the tolerances of the linear solves arising at each step of RKSM. As the algorithm progresses toward convergence, the linear solves can be performed with progressively lower accuracy, and hence lower computational cost. Numerical experiments for large nonsymmetric matrices show the effectiveness of the tolerance relaxation strategy for the inexact linear solves of RKSM.
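As a purely illustrative sketch (the rule and names below are assumptions in the spirit of generic inexact Krylov relaxation, not the quadrature-based bounds developed in Chapter 4), such a heuristic might set the tolerance of the next shifted solve inversely proportional to the current outer residual estimate:

```python
# Hypothetical relaxation rule: the allowable inner-solve error grows as the
# outer RKSM residual for f(A)b shrinks (illustration, not the thesis bound).
def relaxed_solve_tol(eps_target, outer_residual_est, cap=1e-2):
    """Tolerance for the next shifted linear solve at the current RKSM step."""
    return min(cap, eps_target / max(outer_residual_est, eps_target))
```

With `eps_target = 1e-10`, for example, early solves (outer residual near $1$) are carried out to roughly $10^{-10}$, while solves near convergence (outer residual near $10^{-8}$) only require a tolerance of about $10^{-2}$.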

In Chapter 5, inexact RKSM are studied for solving large-scale nonsymmetric eigenvalue problems. Similar to the setting in Chapter 4, each iteration (outer step) of RKSM requires the solution of a shifted linear system to enlarge the subspace, but solving these systems by direct methods is prohibitively expensive at this problem scale. Errors are introduced at each outer step when these linear systems are solved approximately by iterative methods (inner steps), and these errors accumulate in the rational Krylov subspace. In this thesis, we derive an upper bound on the error that can be introduced at each outer step while maintaining the same convergence as exact RKSM for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the desired invariant subspace, the tolerance of the iterative linear solves at each outer step can be relaxed as the outer iteration progresses. A restarted variant of the inexact RKSM is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
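For illustration only (assuming the standard problem with $B=I$ and an orthonormal basis $V$; the helper names and the specific rule are assumptions, not the bound derived in Chapter 5), the eigenresidual-driven relaxation could be sketched as:

```python
# Hypothetical sketch: monitor the eigenresidual of the approximate invariant
# subspace and relax the inner solve tolerance as it decreases.
import numpy as np

def eigresidual_norm(A_mul, V):
    """||A V - V (V^H A V)||_2 for an orthonormal basis V; A_mul applies A to columns."""
    AV = A_mul(V)
    H = V.conj().T @ AV
    return np.linalg.norm(AV - V @ H, 2)

def inner_tol(eps_outer, eigres, cap=1e-2):
    # Inversely proportional to the current eigenresidual norm, as described above.
    return min(cap, eps_outer / max(eigres, eps_outer))
```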
