Publication | Closed Access
Deep Reinforcement Learning for Event-Triggered Control
Citations: 71 | References: 39 | Year: 2018 | Venue: Unknown
Artificial Intelligence · Model-based Learning · Engineering · Machine Learning · Deep Reinforcement Learning · DRL Approach · Intelligent Control · Computer Engineering · Systems Engineering · Action Model Learning · Computer Science · Intelligent Systems · Robot Learning · Learning Control · World Model · Multi-agent Learning · Event-triggered Control
Event‑triggered control (ETC) methods achieve high‑performance control with far fewer samples than time‑triggered methods, but typically rely on a system model and specific controller and trigger designs. The paper demonstrates that deep reinforcement learning can simultaneously learn control and communication policies from scratch, offering a DRL approach tailored for event‑triggered control. The authors validate the DRL‑based ETC approach on multiple control tasks and benchmark it against model‑based event‑triggering frameworks. This is the first application of DRL to ETC, and, unlike many model‑based designs, the method is shown to be readily applicable to nonlinear systems.
Event-triggered control (ETC) methods can achieve high-performance control with a significantly lower number of samples compared to usual, time-triggered methods. These frameworks are often based on a mathematical model of the system and specific designs of controller and event trigger. In this paper, we show how deep reinforcement learning (DRL) algorithms can be leveraged to simultaneously learn control and communication behavior from scratch, and present a DRL approach that is particularly suitable for ETC. To our knowledge, this is the first work to apply DRL to ETC. We validate the approach on multiple control tasks and compare it to model-based event-triggering frameworks. In particular, we demonstrate that, unlike many model-based ETC designs, it can be straightforwardly applied to nonlinear systems.