Master of Science (MS)
Stepsizes play a crucial role in the convergence of optimization algorithms, yet they typically require tedious manual tuning to achieve near-optimal convergence. Recently, an adaptive method for automating stepsizes was proposed for centralized optimization. However, this method is not directly applicable to decentralized optimization because, when each agent applies it locally, the resulting stepsizes are heterogeneous across agents. Furthermore, directly applying consensus to the agents' stepsizes to mitigate this heterogeneity can degrade performance and even lead to divergence.
This thesis proposes an algorithm that removes the tedious manual tuning of stepsizes in decentralized optimization. Our algorithm automates the stepsize and applies dynamic consensus to the agents' stepsizes, combined with a simple filter that reduces stepsize heterogeneity. We show experimentally that, without this filter, consensus between agents' stepsizes can cause divergence due to rapid changes in the local stepsizes. We support our algorithm with both theoretical guarantees and experimental results, presenting experiments on standard machine learning problems: logistic regression, matrix factorization (whose gradients are not globally Lipschitz), and CIFAR-10 image classification.
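To illustrate the general idea, the following is a minimal toy sketch (not the thesis algorithm) of decentralized gradient descent in which each agent estimates a local stepsize from gradient differences, mixes stepsizes with its neighbors through a doubly stochastic weight matrix, and smooths the result with a simple first-order filter. All names, the stepsize rule, and the filter form are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Toy sketch (NOT the thesis algorithm): three agents each minimize a local
# quadratic f_i(x) = 0.5 * a_i * x^2. Each agent forms a candidate stepsize
# from a local curvature estimate, the candidates are mixed through a doubly
# stochastic matrix W (stepsize consensus), and a simple first-order filter
# smooths the mixed value. Without the filter, abrupt local stepsize changes
# would propagate directly to neighbors and can destabilize the iteration.
def run(num_steps=200, beta=0.9):
    a = np.array([1.0, 2.0, 4.0])          # local curvatures (one per agent)
    W = np.array([[0.50, 0.25, 0.25],      # doubly stochastic mixing matrix
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    x = np.array([1.0, -2.0, 3.0])         # local iterates
    step = np.full(3, 0.1)                 # local stepsizes
    prev_x, prev_g = x.copy(), a * x
    for _ in range(num_steps):
        g = a * x
        # Candidate stepsize from a local curvature estimate |dx| / |dg|
        # (hypothetical rule); keep the old stepsize when the estimate
        # is undefined (no gradient change).
        denom = np.abs(g - prev_g)
        cand = np.where(denom > 1e-12,
                        0.5 * np.abs(x - prev_x) / np.maximum(denom, 1e-12),
                        step)
        # Stepsize consensus (W @ cand) smoothed by a first-order filter.
        step = beta * step + (1.0 - beta) * (W @ cand)
        prev_x, prev_g = x.copy(), g.copy()
        # Consensus on iterates plus a local gradient step.
        x = W @ x - step * g
    return x

print(run())  # all entries close to the shared minimizer 0
```

In this toy setting every local objective shares the minimizer 0, so the iterates contract toward it; the filter constant `beta` controls how quickly neighbors' stepsize changes are absorbed.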
Liggett, Benjamin, "Distributed Learning with Automated Stepsizes" (2022). All Theses. 3883.