Date of Award

August 2020

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Mechanical Engineering

Committee Member

Yue Wang

Committee Member

Mohammad Naghnaeian


This research focuses on data-driven control with system dynamics actively learned by machine learning algorithms. With the system dynamics identified by neural networks, either explicitly or implicitly, control can be applied following either a model-based or a model-free approach. In this thesis, the two methods are explained in detail and compared to shed light on the emerging field of data-driven control.

In the first part of the thesis, we introduce a state-of-the-art Reinforcement Learning (RL) algorithm representing data-driven control via a model-free learning approach. We discuss the advantages and shortcomings of current RL algorithms and motivate our search for a model-based control method that is physics-based and also provides better model interpretability. We then propose a novel data-driven, model-based approach for the optimal control of dynamical systems. The proposed approach relies on Deep Neural Network (DNN) based learning of the Koopman operator and is therefore named Deep Learning of Koopman Representation for Control (DKRC). In particular, a DNN is employed for the data-driven identification of the basis functions used in the linear lifting of the nonlinear control system dynamics. Once a linear representation of the system dynamics is learned, classic control algorithms such as the iterative Linear Quadratic Regulator (iLQR) and Model Predictive Control (MPC) can be applied for optimal control design. The controller synthesis is purely data-driven and does not rely on prior domain knowledge. To demonstrate this capability, the method is applied to three classic dynamical systems simulated in the OpenAI Gym environment.
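The lifting-and-identification step described above can be sketched as follows. This is a minimal illustration only: a fixed polynomial dictionary stands in for the DNN-learned basis functions of DKRC, a hand-made linear toy system stands in for the Gym environments, and the function names (`lift`, `fit_koopman`) are hypothetical. The lifted dynamics z' = A z + B u are fit by least squares, after which standard LQR/MPC tools can be applied to (A, B).

```python
import numpy as np

def lift(x):
    # Map a state x = (x1, x2) to lifted observables.
    # A fixed polynomial basis is used here in place of DNN-learned basis functions.
    return np.array([x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2, 1.0])

def fit_koopman(X, U, Xnext):
    # Least-squares fit of linear lifted dynamics z' = A z + B u
    # (EDMD with control) from sampled transitions (x, u, x').
    Z = np.stack([lift(x) for x in X])          # (T, N) lifted states
    Znext = np.stack([lift(x) for x in Xnext])  # (T, N) lifted next states
    ZU = np.hstack([Z, U])                      # (T, N + m) regressors
    K, *_ = np.linalg.lstsq(ZU, Znext, rcond=None)
    N = Z.shape[1]
    A, B = K[:N].T, K[N:].T                     # A: (N, N), B: (N, m)
    return A, B

# Toy data: each coordinate follows x' = 0.9 * x + 0.1 * u (already linear,
# so the linear part of the lifted model should recover these coefficients).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
U = rng.normal(size=(200, 1))
Xnext = 0.9 * X + 0.1 * U   # control broadcast to both coordinates

A, B = fit_koopman(X, U, Xnext)
```

With (A, B) in hand, the control design reduces to standard linear methods; in the thesis this role is played by iLQR and MPC on the learned lifted model.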

In the second part, we compare the proposed method with a state-of-the-art model-free control method based on an actor-critic architecture, Deep Deterministic Policy Gradient (DDPG), which has proven effective in various dynamical systems. Two examples are provided for comparison: the classic Inverted Pendulum and Lunar Lander Continuous Control. Based on the experimental results, we compare the two methods in terms of control strategies and effectiveness under various initial conditions. We also compare the dynamic model learned by DKRC with the analytical model derived via Euler-Lagrange linearization, demonstrating the accuracy of the model learned for unknown dynamics by a data-driven, sample-efficient approach.


