Date of Award

8-2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Industrial Engineering

Committee Chair/Advisor

Dr. Yongjia Song

Committee Member

Dr. Amin Khademi

Committee Member

Dr. Hamed Rahimian

Committee Member

Dr. Boshi Yang

Abstract

This dissertation comprises four different topics related to multistage stochastic programming (MSP) algorithms, modeling, and applications.

First, we extend the adaptive partition-based approach for solving two-stage stochastic programs with a fixed recourse matrix and a fixed cost vector to the MSP setting, where the stochastic process is assumed to be stage-wise independent. The proposed algorithms integrate the adaptive partition-based strategy with a popular approach for solving multistage stochastic programs, the stochastic dual dynamic programming (SDDP) algorithm, according to two main strategies, which differ in how they refine the partitions during the solution process. In particular, we propose a refinement outside SDDP strategy, whereby we iteratively solve a coarse scenario tree induced by the partitions and refine the partitions in a separate step outside of SDDP, only when necessary. We also propose a refinement within SDDP strategy, where the partitions are refined in conjunction with the machinery of the SDDP algorithm. Within the two refinement schemes, we then use different tree-traversal strategies, which give us some control over the size of the partitions. Our numerical experiments on a hydro-thermal power generation planning problem show the effectiveness of the proposed algorithms in comparison to the standard SDDP algorithm. Moreover, the results show that the algorithms with the refinement outside SDDP strategy outperform those with the refinement within SDDP strategy.
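The core idea behind the partition-based approach can be illustrated with a minimal sketch. Here we assume (for illustration only; the function name, the scalar "dual" values, and the grouping criterion are simplifications, not the dissertation's actual algorithm) that scenarios whose subproblems yield the same dual solution can be aggregated into one partition component without weakening the resulting cut, and that refinement means splitting a component whose scenarios disagree:

```python
# Hedged sketch of partition refinement: scenarios sharing a dual
# solution are kept in one component; a component is split ("refined")
# when its scenarios' duals differ.  All names and data are illustrative.

def refine_partition(partition, duals):
    """Split each component so every scenario in a component has the
    same dual value (a stand-in for the dual-solution vector)."""
    refined = []
    for component in partition:
        groups = {}
        for s in component:
            groups.setdefault(duals[s], []).append(s)
        refined.extend(groups.values())
    return refined

# Four scenarios, initially aggregated into a single coarse component.
partition = [[0, 1, 2, 3]]
duals = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0}   # scenario -> dual value
partition = refine_partition(partition, duals)
print(partition)  # scenarios with equal duals stay grouped
```

In the "refinement outside SDDP" strategy, a step like this would run between solves of the coarse problem; in the "refinement within SDDP" strategy, it would be interleaved with the algorithm's forward and backward passes.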

Second, we study an important question related to solving MSP problems with a large number of stages. A common approach to tackling such problems is a rolling-horizon (RH) procedure, in which one solves a sequence of MSP problems with a smaller number of stages. This raises the delicate issue of how many stages to include in the smaller problems used in the RH procedure. We study this question for both finite and infinite horizon MSP problems where the stochastic process is assumed to be stationary. For the infinite horizon case with discounted costs, we derive a bound that can be used to prescribe an ε-sufficient number of stages. For the finite horizon case, we propose a heuristic approach, from the perspective of approximate dynamic programming, to provide a sufficient number of stages for each roll in the RH procedure. Our numerical experiments on a hydrothermal power generation planning problem show the effectiveness of the proposed approaches.
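A sketch of the kind of discounted tail bound involved (the specific bound in the dissertation may differ; the constants `M`, `gamma`, and `eps` below are illustrative assumptions): if per-stage costs are bounded by M and discounted by a factor gamma < 1, then the total cost ignored beyond stage T is at most gamma**T * M / (1 - gamma), which yields a smallest horizon making that tail at most ε:

```python
import math

# Hedged sketch: smallest T whose discounted tail cost
# gamma**T * M / (1 - gamma) is at most eps.  M (per-stage cost bound),
# gamma (discount factor), and eps are illustrative values.

def sufficient_stages(M, gamma, eps):
    """Smallest integer T with gamma**T * M / (1 - gamma) <= eps."""
    return math.ceil(math.log(eps * (1 - gamma) / M) / math.log(gamma))

T = sufficient_stages(M=100.0, gamma=0.9, eps=1.0)
print(T)                                   # horizon length prescribed
assert 0.9 ** T * 100.0 / (1 - 0.9) <= 1.0  # tail bound indeed holds
```

The practical point is that the horizon grows only logarithmically in 1/ε, so even a tight tolerance prescribes a modest number of stages per roll.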

Third, we introduce a computationally practical approach for solving a class of partially observable multistage stochastic programming problems, in which the underlying stochastic process is assumed to follow a hidden Markov chain. Recent advances in the literature introduce a variant of the SDDP method known as the partially observable SDDP algorithm. This approach, however, adds a belief-state vector to the stochastic programming formulation, which increases the computational effort significantly. Instead, we solve the problem assuming that the Markov chain is fully observable. When evaluating the resulting policy, however, we perform a Bayesian update of the belief vector and follow the decisions of the state with the highest posterior probability. Our numerical experiments on a hydrothermal planning problem show the practicality of this approach.
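The policy-evaluation step can be sketched as a standard hidden-Markov-model belief update (the two-state chain, observation likelihoods, and prior below are illustrative assumptions, not data from the dissertation): propagate the belief through the transition matrix, weight by the observation likelihood of each hidden state, renormalize, and then act according to the maximum-a-posteriori state:

```python
import numpy as np

# Hedged sketch of the Bayesian belief update used at evaluation time.
# The chain P, likelihoods, and uniform prior are illustrative only.

def update_belief(belief, P, likelihood):
    """One Bayes step: propagate through the chain, weight each state
    by its observation likelihood, renormalize to a distribution."""
    b = (P.T @ belief) * likelihood
    return b / b.sum()

P = np.array([[0.9, 0.1],          # transition matrix of hidden chain
              [0.2, 0.8]])
belief = np.array([0.5, 0.5])      # uniform prior over the two states
likelihood = np.array([0.7, 0.2])  # P(observation | hidden state)

belief = update_belief(belief, P, likelihood)
map_state = int(np.argmax(belief))  # follow this state's trained policy
print(belief, map_state)
```

Training a fully observable policy and only using the belief at evaluation time avoids carrying the belief vector inside the optimization model, which is where the partially observable SDDP formulation becomes expensive.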

Fourth, we consider a typical logistics planning problem for prepositioning relief items in preparation for an impending hurricane landfall. In this problem, the goal of the decision-maker is to improve the post-disaster relief effort by prepositioning relief items, over multiple periods, at different supply points while minimizing the total cost. The total cost consists of the logistics costs and a penalty for failing to satisfy the demand (if any shortage is present). We assume that the demand for relief items can be derived from two of the hurricane characteristics, namely, its intensity and landfall location. However, since these characteristics cannot be known in advance, we assume that they evolve and can be modeled as a Markov chain. We consider this problem in two different settings: first, we assume that the time of landfall is deterministic and known a priori, and second, we assume that it is random. To solve the resulting decision problem in each setting, we feed the Markov chain into a fully adaptive MSP model. This MSP model allows the decision-maker to adapt their operational logistics decisions sequentially, over multiple stages, as the hurricane's landfall characteristics become clearer. We test each MSP model in a case study and compare its performance to a clairvoyant solution and other alternative approximation policies. Our numerical results provide several insights into the value of using an MSP model for adaptive logistics planning in hurricane relief.
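The uncertainty model described above can be sketched as a small Markov chain over joint (intensity, track) states whose state at landfall determines relief demand. The states, transition probabilities, and demand figures below are purely illustrative assumptions, not the dissertation's calibrated chain:

```python
import random

# Hedged sketch: hurricane (intensity, track) evolves as a Markov
# chain; the state reached at landfall determines relief-item demand.
# All states, probabilities, and demand values are made up for
# illustration.

STATES = ["weak-east", "weak-west", "strong-east", "strong-west"]
TRANSITIONS = {                      # rows sum to 1
    "weak-east":   [0.60, 0.20, 0.15, 0.05],
    "weak-west":   [0.20, 0.60, 0.05, 0.15],
    "strong-east": [0.10, 0.00, 0.70, 0.20],
    "strong-west": [0.00, 0.10, 0.20, 0.70],
}
DEMAND = {"weak-east": 100, "weak-west": 120,
          "strong-east": 400, "strong-west": 450}

def simulate_track(start, periods, rng):
    """Sample one trajectory of the chain up to landfall and return
    the landfall state and its induced demand."""
    state = start
    for _ in range(periods):
        state = rng.choices(STATES, weights=TRANSITIONS[state])[0]
    return state, DEMAND[state]

rng = random.Random(0)
state, demand = simulate_track("weak-east", periods=3, rng=rng)
print(state, demand)
```

In the MSP model, each stage's prepositioning decision would be conditioned on the chain's current state, so the policy adapts as the forecast sharpens; the random-landfall setting additionally randomizes when the trajectory terminates.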
