Publication | Closed Access
Autonomous and cooperative control of UAV cluster with multi-agent reinforcement learning
Citations: 37
References: 31
Year: 2022
Keywords: Artificial Intelligence, Multi-agent Reinforcement Learning, Engineering, Global Planning, Education, Reinforcement Learning (Educational Psychology), Intelligent Systems, Autonomous Systems, Learning Control, Lifelong Reinforcement Learning, Multi-agent Learning, Reinforcement Learning (Computer Engineering), Unmanned System, Systems Engineering, Robot Learning, Multi-agent Planning, Multirobot System, UAV Cluster, Computer Science, Current UAV Cluster, Deep Reinforcement Learning, Aerospace Engineering, Cooperative Control, Robotics, Swarm Robotics
Abstract In this paper, we explore Multi-Agent Reinforcement Learning (MARL) methods for unmanned aerial vehicle (UAV) clusters. Current UAV clusters are still at the programmed-control stage, and fully autonomous, intelligent cooperative combat has not yet been realised. To enable a UAV cluster to plan autonomously in a changing environment and cooperate to accomplish a combat goal, we propose a new MARL framework. It adopts a centralised-training, decentralised-execution policy and uses an Actor-Critic network to select execution actions and to evaluate them. The new algorithm makes three key improvements to the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. First, it improves the learning framework, making the computed Q value more accurate. Second, it adds a collision-avoidance setting, which raises the operational safety factor. Third, it adjusts the reward mechanism, which effectively improves the cluster's cooperative ability. The improved MADDPG algorithm is then tested on two conventional combat missions. Simulation results show that learning efficiency is markedly improved and the operational safety factor is further increased compared with the previous algorithm.
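The centralised-training, decentralised-execution pattern described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: each actor sees only its own observation, while a centralised critic scores the joint observation-action vector of all agents. The network shapes, the `safe_dist` safety radius, and the `collision_penalty` reward term are all illustrative assumptions standing in for the paper's collision-avoidance setting and reward adjustments.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2

# Each actor is a small linear policy that sees ONLY its own
# observation (decentralised execution). Shapes are illustrative.
actors = [rng.normal(size=(OBS_DIM, ACT_DIM)) * 0.1 for _ in range(N_AGENTS)]

def act(i, obs_i):
    """Agent i selects an action from its local observation alone."""
    return np.tanh(obs_i @ actors[i])

# The centralised critic scores the JOINT observation-action vector of
# all agents (centralised training); here a single linear layer.
critic_w = rng.normal(size=(N_AGENTS * (OBS_DIM + ACT_DIM),)) * 0.1

def q_value(all_obs, all_acts):
    """Q(o_1..o_N, a_1..a_N): the critic sees every agent's state and action."""
    joint = np.concatenate(
        [np.concatenate([o, a]) for o, a in zip(all_obs, all_acts)]
    )
    return float(critic_w @ joint)

def collision_penalty(positions, safe_dist=1.0, penalty=-10.0):
    """Illustrative collision-avoidance reward term: penalise every pair
    of UAVs closer than an assumed safety radius safe_dist."""
    total = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < safe_dist:
                total += penalty
    return total

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [act(i, o) for i, o in enumerate(obs)]
print(q_value(obs, acts))

# UAVs 0 and 1 are within 1.0 of each other, UAV 2 is far away.
print(collision_penalty(
    [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([5.0, 5.0])]
))  # prints -10.0
```

In a full MADDPG training loop the critic gradient would flow through `q_value` to update each actor, but at execution time only `act` is needed, so each UAV can run without access to the others' observations.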