Date of Award


Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)


Department

Mathematical Sciences

Committee Member

Dr. Colin Gallagher, Committee Chair

Committee Member

Dr. Christopher McMahan

Committee Member

Dr. Robert Lund

Committee Member

Dr. Xiaoqian Sun


This dissertation consists of three distinct but related projects: regression model fitting, variable selection in regression, and autocorrelation estimation in time series. In each case we formulate the problem as the minimization of an objective function that adapts to the given data.

First, we propose a robust M-estimation procedure for regression. The aim is a procedure that adapts to light- or heavy-tailed, symmetric or asymmetric distributions, with or without outliers. We study the properties of the maximum likelihood estimator of the asymmetric exponential power distribution, a broad distribution class that contains both the normal and asymmetric Laplace distributions as special cases. The proposed methodology unifies least squares and quantile regression in a data-driven manner, capturing both the tail decay and the asymmetry of the underlying distribution. Finite-sample performance of the method is demonstrated through extensive Monte Carlo simulation and real-data applications.

Second, we build on the success of the proposed method and extend it to a variable selection procedure that identifies the important predictors in a sparse setting. Quantile regression Lasso, i.e., quantile regression with an $L_1$ penalty on the regression coefficients for regularization, is a robust variable selection technique; however, which quantile should be adopted is unclear. The proposed methodology introduces a way to choose the most ``informative'' quantile, which is then used in the adaptive quantile regression Lasso, and a modified BIC criterion is used to select the optimal tuning parameter. The procedure selects the quantile based on the log-likelihood of the asymmetric Laplace distribution and aims to perform the best quantile regression Lasso, which is confirmed in both a simulation study and a real-data analysis.
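The way a single asymmetric power-type loss interpolates between least squares and quantile regression can be sketched as follows. This is a simplified stand-in for the asymmetric exponential power negative log-likelihood (scale and normalizing terms are dropped), and in the dissertation the shape parameters are themselves estimated from the data rather than fixed as they are here.

```python
import numpy as np
from scipy.optimize import minimize

def asym_power_loss(u, p, alpha):
    """Asymmetric power loss on residuals u: with p = 1 this is the
    check (quantile) loss at quantile alpha, and with p = 2,
    alpha = 0.5 it is (scaled) squared error. A simplified stand-in
    for the AEP negative log-likelihood, for illustration only."""
    w = np.where(u >= 0, alpha, 1.0 - alpha)
    return np.sum(w * np.abs(u) ** p)

def fit_adaptive(X, y, p, alpha):
    """Minimize the asymmetric power loss over regression coefficients.
    The tail parameter p and asymmetry parameter alpha are fixed inputs
    here; the proposed procedure would choose them adaptively."""
    beta0 = np.zeros(X.shape[1])
    res = minimize(lambda b: asym_power_loss(y - X @ b, p, alpha),
                   beta0, method="Nelder-Mead")
    return res.x
```

With p = 2 and alpha = 0.5 this reduces to ordinary least squares; with p = 1 and alpha = tau it is median/quantile regression at level tau.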
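The quantile regression Lasso objective itself can be written as a linear program. The sketch below fixes the quantile tau and tuning parameter lam by hand; the adaptive weights, the data-driven quantile choice, and the modified BIC selection of the tuning parameter described above are omitted, as is intercept handling.

```python
import numpy as np
from scipy.optimize import linprog

def qr_lasso(X, y, tau, lam):
    """Quantile-regression Lasso as a linear program: minimize the
    check loss at quantile tau plus lam times the L1 norm of the
    coefficients. Split variables [beta+, beta-, u+, u-] are all
    nonnegative, with X(beta+ - beta-) + u+ - u- = y."""
    n, p = X.shape
    c = np.concatenate([lam * np.ones(2 * p),
                        tau * np.ones(n),
                        (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]
```

Sufficiently large lam drives coefficients exactly to zero, which is what makes the $L_1$ penalty a variable selection device.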
Third, we focus on alleviating the underestimation issue of the sample autocorrelation in linear stationary time series. We first formulate autocorrelation estimation as a least squares problem and then apply a penalty to regulate the autocorrelation estimate. An adaptive sequence is proposed for the tuning parameter and is shown to work well for stationary time series when the sample size is small and the correlation is high.
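The least-squares formulation can be sketched as follows. The squared-error term alone yields (a version of) the sample lag-h autocorrelation; the penalty term lam * (1 - rho)^2 used here is a hypothetical shrinkage-toward-one choice, purely to illustrate how a penalty can counteract downward bias, since the abstract does not specify the dissertation's penalty or its adaptive tuning sequence.

```python
import numpy as np

def penalized_acf(x, h, lam):
    """Lag-h autocorrelation via penalized least squares: minimize
    sum_t (x_{t+h} - rho * x_t)^2 + lam * (1 - rho)^2 over rho.
    The penalty is a hypothetical shrinkage-toward-one term; lam = 0
    recovers the unpenalized least-squares estimate.
    Closed form: rho = (sum x_{t+h} x_t + lam) / (sum x_t^2 + lam)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    num = np.dot(x[h:], x[:-h]) + lam
    den = np.dot(x[:-h], x[:-h]) + lam
    return num / den
```

Whenever the unpenalized estimate is below one, any lam > 0 pulls the estimate upward, which is the direction needed to offset the usual underestimation in short, highly correlated series.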