Research

I am interested in the theory of control, learning, and optimization of dynamical systems, and their applications to intelligent transportation systems (ITS). My research develops methodological tools for ITS applications in the following three areas.


Overview


  • PDE/ODE model-based control: backstepping boundary control
  • Model-free optimization: extremum seeking
  • Data-driven and learning-based control: reinforcement learning

These methodologies are applied to ITS topics including:

  • Traffic control and estimation
  • Macroscopic traffic modeling
  • Mixed autonomy traffic flow
  • Connected and autonomous vehicles
  • Energy and risk management of transportation networks

Highlights


  • Model-based control: Boundary control of stop-and-go traffic

Stop-and-go traffic. Source: Ford Motor Company.



This thread of work provides control tools, previously unavailable, for suppressing stop-and-go oscillations in congested traffic using actuation that is sparsely located along the freeway, such as ramp metering or variable speed limits. Macroscopic PDE models are particularly suited to large-scale, congested traffic flow patterns such as stop-and-go traffic: the aggregated state values (density, speed, and flow rate) evolve in continuous temporal and spatial domains. Focusing on several macroscopic-level traffic problems, my research developed a PDE model-based control framework for boundary actuation and estimation. The backstepping control method is employed, which requires sensing and actuation of state values only at the boundaries in order to regulate the continuous in-domain values to a desired reference system. The methodology is practically meaningful because point actuation and sensing overcome the technical and financial limitations of deploying sensors and actuators throughout large-scale transportation systems. We also consider boundary control of multi-lane, multi-class, and multi-segment freeway traffic.
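As a toy illustration of the backstepping idea, here is a minimal sketch on the classic reaction–diffusion benchmark u_t = u_xx + λu with boundary actuation at x = 1, for which the backstepping gain has a closed form. This is not the ARZ traffic model used in the papers above; only the principle carries over, namely that feedback applied at a single boundary point can stabilize the entire in-domain profile. All numerical choices are illustrative.

```python
import math

def bessel_i1(z):
    """Modified Bessel function I1 via its power series."""
    term, total = z / 2.0, 0.0
    for k in range(30):
        total += term
        term *= (z / 2.0) ** 2 / ((k + 1) * (k + 2))
    return total

N, lam = 50, 12.0                  # lam > pi^2, so the open loop is unstable
dx = 1.0 / N
dt = 0.4 * dx * dx                 # explicit-Euler stability for the diffusion term
x = [i * dx for i in range(N + 1)]

def kernel_gain(y):
    """Backstepping gain k(1, y) = -lam*y*I1(s)/s with s = sqrt(lam*(1 - y^2))."""
    if y >= 1.0:
        return -lam / 2.0          # limit of the expression as y -> 1
    s = math.sqrt(lam * (1.0 - y * y))
    return -lam * y * bessel_i1(s) / s

gains = [kernel_gain(xi) for xi in x]

def simulate(controlled, steps=5000):
    """Evolve the PDE and return the final max in-domain perturbation."""
    u = [math.sin(math.pi * xi) for xi in x]   # initial in-domain perturbation
    for _ in range(steps):
        # boundary actuation U(t) = integral of k(1,y) u(y,t) dy (rectangle rule)
        U = dx * sum(g * v for g, v in zip(gains[:N], u[:N])) if controlled else 0.0
        new = [0.0] * (N + 1)
        for i in range(1, N):
            new[i] = u[i] + dt * ((u[i-1] - 2*u[i] + u[i+1]) / dx**2 + lam * u[i])
        new[0], new[N] = 0.0, U    # Dirichlet boundary; control enters at x = 1
        u = new
    return max(abs(v) for v in u)
```

With feedback applied only at the boundary, the whole profile decays, while the uncontrolled profile grows; this is the sense in which sparse point actuation regulates continuous in-domain values.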

Practical implementations include validation of traffic state estimation against freeway data and event-triggered control for digital implementation.

  • H. Yu, Q. Gan, A. M. Bayen, and M. Krstic, “PDE Traffic Observer Validated on Freeway Data,” IEEE Transactions on Control Systems Technology, vol. 29, pp. 1048-1060, 2021. DOI: 10.1109/TCST.2020.2989101.
  • N. Espitia, J. Auriol, H. Yu, and M. Krstic, “Traffic flow control on cascaded roads by event-triggered output feedback,” International Journal of Robust and Nonlinear Control, under review.
  • Model-free optimization: Extremum seeking for optimal throughput

For traffic dynamics that cannot be accurately described by models, my research uses extremum seeking (ES), a real-time, model-free, adaptive optimization approach. I applied ES algorithms to a downstream traffic bottleneck problem, where congestion forms upstream of the bottleneck because the traffic flow rate exceeds its capacity. Since the bottleneck dynamics are hard to model, the optimal input density at the downstream bottleneck area is unknown and must be found online in order to maximize the discharging flow rate from the bottleneck. A small excitation perturbs the input density being tuned and produces estimates of the gradient of a cost function. The extremum seeking controller is designed with its delay effect compensated by a predictor feedback design.
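A minimal sketch of the perturbation-demodulation loop, assuming an illustrative quadratic flow map with its peak at density 0.5; the delay-compensating predictor is omitted, and all tuning constants are assumptions rather than values from the papers.

```python
import math

def discharge_flow(rho):
    """Bottleneck map, unknown to the controller: peak flow 1.0 at density 0.5.
    The quadratic shape and the optimum location are illustrative assumptions."""
    return 1.0 - 2.0 * (rho - 0.5) ** 2

dt, a, omega, k, omega_l = 0.001, 0.05, 10.0, 0.5, 1.0
rho_hat = 0.1                       # initial estimate of the optimal input density
q_lp = discharge_flow(rho_hat)      # low-pass (washout) state, seeded at start

for i in range(100000):             # 100 time units of simulated operation
    t = i * dt
    rho = rho_hat + a * math.sin(omega * t)   # small probing perturbation
    q = discharge_flow(rho)                   # measured discharge flow rate
    q_lp += dt * omega_l * (q - q_lp)         # washout: strip the DC component
    grad = (2.0 / a) * (q - q_lp) * math.sin(omega * t)  # demodulated gradient estimate
    rho_hat += dt * k * grad                  # gradient ascent on the estimate
```

The estimate `rho_hat` climbs the unknown flow map using only measured flow values, settling near the density that maximizes throughput.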

  • Learning-based control: Reinforcement learning control of traffic flow

Model-based approaches usually rely on assumptions about, and knowledge of, the system dynamics. For traffic systems, calibrating model parameters can be laborious, time-consuming, and tied to specific transient traffic conditions. Given the uncertain dynamics and varying performance metrics, it is desirable to have an approach that adapts to different problems with modest tuning. Recent developments in Reinforcement Learning (RL) have enabled model-free control of high-dimensional continuous systems through a completely data-driven process. Model-free RL makes no prior assumptions about the model structure and learns a control policy directly through interactions with the system. I developed RL state-feedback controllers for congested traffic on a freeway segment, employing proximal policy optimization (PPO), a deep-neural-network-based policy gradient algorithm, to obtain RL controllers through an iterative training process with a macroscopic traffic simulator. The RL controllers perform comparably to conventional feedback controllers in a traffic system with perfect knowledge of the model parameters. Remarkably, RL controllers obtained from stochastic training processes outperformed the conventional controllers in an uncertain environment.
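To illustrate learning a controller purely from rollouts, here is a deliberately simplified sketch: a zeroth-order (finite-difference) policy search on a one-state toy density model, not the PPO-with-simulator setup described above. The dynamics, target density, and step sizes are all illustrative assumptions.

```python
def rollout(gain, rho0=0.6, rho_star=0.3, v=1.0, dt=0.1, steps=50):
    """Return (negative accumulated tracking error) of linear feedback metering
    u = gain * (rho_star - rho) on a toy one-state density model."""
    rho, cost = rho0, 0.0
    for _ in range(steps):
        u = gain * (rho_star - rho)              # ramp-metering action
        rho += dt * (u - v * (rho - rho_star))   # toy relaxation dynamics
        cost += (rho - rho_star) ** 2            # per-step tracking error
    return -cost

# Model-free in the same sense as RL: the learner only queries rollout
# returns and never sees the dynamics equations.
gain, lr, delta = 0.0, 1.0, 1e-2
for _ in range(100):
    g = (rollout(gain + delta) - rollout(gain - delta)) / (2 * delta)
    gain += lr * g                               # gradient ascent on the return
```

After training, the learned feedback gain achieves a higher return (lower tracking error) than the untrained policy, mirroring at toy scale the iterative improvement loop of the PPO experiments.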