Multi-agent Reinforcement Learning-Based UAS Control for Logistics Environments
Springer LNEE, volume 913 (SCOPUS)
Abstract
With recent technological developments, the unmanned aerial system (UAS) has been recognized for its value and usefulness in various fields. Prior researchers have deployed several drones in collaboration to achieve common goals such as target tracking, rescue operations, and target finding with multi-UAS systems. Multi-agent reinforcement learning algorithms are a class of artificial intelligence techniques in which many agents learn to collaborate on a task. When a multi-UAS cooperative navigation technique is deployed in a complicated environment such as an urban logistics system, the agents' learning task becomes considerably more demanding. In this study, we present the improved Multi-Actor-Attention-Critic (iMAAC) approach, a modified multi-agent reinforcement learning method for urban air mobility logistics services. A virtual simulation environment based on Unity is created to validate the proposed method; the environment replicates the real-world setting of UAS logistics services. When the results are compared with those of other landmark reinforcement learning algorithms, iMAAC exhibits a faster learning rate than the other algorithms in multi-agent settings.
Keywords
Air Logistics, Multi-Agent Reinforcement Learning, Actor-Attention-Critic, Urban Air Mobility
https://link.springer.com/chapter/10.1007/978-981-19-2635-8_71
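The abstract builds on the Multi-Actor-Attention-Critic (MAAC) family of methods, in which each agent's critic attends over the other agents' state-action encodings rather than concatenating them. The sketch below illustrates that core attention step in NumPy under stated assumptions: a single attention head, random projection matrices, and illustrative names (`attention_critic_features`, `Wq`, `Wk`, `Wv`) that are not from the paper; iMAAC's actual modifications are not reproduced here.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_critic_features(encodings, i, Wq, Wk, Wv):
    """For agent i, aggregate the other agents' encodings with scaled
    dot-product attention (the core idea behind MAAC-style critics).
    All weight matrices here are illustrative placeholders."""
    q = encodings[i] @ Wq                      # query from agent i
    others = np.delete(encodings, i, axis=0)   # encodings of the other agents
    keys = others @ Wk
    values = others @ Wv
    scores = keys @ q / np.sqrt(q.shape[0])    # one relevance score per other agent
    weights = softmax(scores)                  # attention weights sum to 1
    return weights @ values, weights           # weighted sum fed into agent i's critic

# Toy usage: 4 agents, 8-dimensional state-action encodings.
rng = np.random.default_rng(0)
n_agents, d = 4, 8
enc = rng.normal(size=(n_agents, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
feat, w = attention_critic_features(enc, 0, Wq, Wk, Wv)
```

In a full implementation the aggregated feature `feat` would be concatenated with agent 0's own encoding and passed through the critic network to estimate its action value; the attention weights `w` indicate how strongly each teammate influences that estimate.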