Date of Award

5-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

School of Mathematical and Statistical Sciences

Committee Chair/Advisor

Fei Xue

Committee Member

Qingshan Chen

Committee Member

Lea Jenkins

Committee Member

Matthew Saltzman

Abstract

This study concerns two main issues in numerical linear algebra: convergence estimates for minimal residual methods based on explicit construction of approximate min-max polynomials for indefinite matrices, and the development and analysis of Krylov subspace methods that use non-orthonormal basis vectors produced by random sketching.

For a matrix A with spectrum Λ(A), it is well known that the min-max polynomial problem

    min_{p_k ∈ P_k, p_k(0) = 1}  max_{z ∈ Λ(A)}  |p_k(z)|

is used to bound the relative error of Krylov subspace minimum residual methods and related methods. For a symmetric positive definite matrix A, the min-max polynomial underlying the Conjugate Gradient (CG) algorithm is explicitly known and, in certain cases, provides a sharp bound. We provide an explicit construction for real symmetric indefinite and for nonsymmetric matrices, adding to the limited results found in the literature.

It is also well known that minimum residual methods converge slowly when the spectrum of A is widely spread, that is, when the ratio of the largest to smallest eigenvalue (in modulus) is large. As the eigenvalues along the exterior of Λ(A) are resolved, minimum residual methods experience an acceleration in convergence. Subspace recycling takes advantage of this by approximating the subspace spanned by the eigenvectors corresponding to the k smallest eigenvalues, and using those k basis vectors to start the Krylov subspace in the next cycle (deflated restart) or the next linear system (subspace recycling). We consider subspace recycling as it applies to the GCRO algorithm. To reduce the cost associated with full orthogonalization, we use random sketching to form a non-orthonormal basis, reducing computational cost while still preventing the basis vectors from becoming too ill-conditioned. We then extend this practice to constructing non-orthonormal basis vectors for Krylov subspace methods that approximate the matrix exponential times a vector and the largest singular triplets.
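For reference, in the symmetric positive definite case the solution of this min-max problem over an interval [λ_min, λ_max] containing Λ(A) is the shifted and scaled Chebyshev polynomial, which yields the classical CG convergence bound (a standard result, stated here for context; it is not the indefinite-case construction developed in this dissertation):

```latex
% Classical Chebyshev bound for SPD A with condition number
% kappa = lambda_max / lambda_min; T_k is the degree-k Chebyshev
% polynomial of the first kind.
\min_{\substack{p_k \in \mathcal{P}_k \\ p_k(0) = 1}}
\;\max_{z \in [\lambda_{\min}, \lambda_{\max}]} |p_k(z)|
  = \frac{1}{\left| T_k\!\left( \dfrac{\lambda_{\max} + \lambda_{\min}}
                                      {\lambda_{\max} - \lambda_{\min}} \right) \right|}
  \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k}.
```

For an indefinite matrix, 0 lies inside the hull of the spectrum, so this single-interval Chebyshev argument no longer applies, which is what motivates the explicit constructions mentioned above.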
Our methodology also saves considerable orthogonalization cost in these problem settings, without noticeably slowing the convergence of the original algorithms. We discuss insights into the convergence (error analysis) of the sketched variants. The discussions of linear system solvers, the matrix exponential times a vector, and dominant singular triplets cover a range of fundamental numerical linear algebra problems for large (and typically sparse) matrices.
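The key property behind sketched Krylov methods of this kind is that a random sketching operator acts as a subspace embedding: applying a small random matrix to a tall basis preserves its singular values, and hence its conditioning, up to a modest factor. The minimal NumPy sketch below illustrates this with a Gaussian sketching operator and a deliberately ill-conditioned stand-in for a non-orthonormalized Krylov basis; the sizes and the Gaussian choice are illustrative assumptions, not the specific operators used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 2000, 20, 200   # ambient dimension, basis size, sketch size (illustrative)

# A deliberately ill-conditioned basis, standing in for Krylov vectors that
# were not fully orthogonalized: column scales decay from 1 to 1e-6.
V = rng.standard_normal((n, k)) @ np.diag(np.logspace(0, -6, k))

# Gaussian sketching operator, scaled so sketched norms are unbiased.
S = rng.standard_normal((s, n)) / np.sqrt(s)

# A subspace embedding keeps every singular value of V within roughly a
# (1 +/- eps) factor, so the cheap s-by-k sketch S @ V can be used to
# monitor the conditioning of the full n-by-k basis V.
sv_full = np.linalg.svd(V, compute_uv=False)
sv_sketch = np.linalg.svd(S @ V, compute_uv=False)
distortion = sv_sketch / sv_full
print(distortion.min(), distortion.max())   # both close to 1
```

In practice this is what makes the savings possible: quantities that would require orthogonalizing against all n-dimensional basis vectors can instead be computed from the small s-by-k sketch, at the price of the bounded distortion shown above.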
