
Quadrotor navigation in dynamic environments with deep reinforcement learning


Assembly Automation, Volume 41 (3): 9 – Jul 22, 2021



Publisher
Emerald Publishing
Copyright
© Emerald Publishing Limited
ISSN
0144-5154
DOI
10.1108/aa-11-2020-0183

Abstract

Purpose
This work aims to combine cloud robotics technologies with deep reinforcement learning to build a distributed training architecture and accelerate the learning procedure of autonomous systems. In particular, a distributed training architecture for navigating unmanned aerial vehicles (UAVs) in complicated dynamic environments is proposed.

Design/methodology/approach
Inspired by cloud-based techniques, this study proposes a distributed training architecture named experience-sharing learner-worker (ESLW) for deep reinforcement learning to navigate UAVs in dynamic environments. With the ESLW architecture, multiple worker nodes operating in different environments generate training data in parallel, and a learner node trains a policy on the training data collected by the worker nodes. In addition, this study proposes an extended experience replay (EER) strategy so that the method can be applied to experience sequences, improving training efficiency. To better capture the dynamics of the environment, convolutional long short-term memory (ConvLSTM) modules are adopted to extract spatiotemporal information from the training sequences.

Findings
Experimental results demonstrate that the ESLW architecture and the EER strategy accelerate convergence, and that the ConvLSTM modules are effective at extracting sequential information when navigating UAVs in dynamic environments.

Originality/value
Inspired by cloud robotics technologies, this study proposes a distributed ESLW architecture for navigating UAVs in dynamic environments. In addition, the EER strategy is proposed to speed up the training of experience sequences, and ConvLSTM modules are added to the networks to make full use of sequential experiences.
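The abstract describes three ingredients: worker nodes that share experience with a central learner node (ESLW), an extended experience replay (EER) buffer that stores whole experience sequences rather than single transitions, and ConvLSTM modules that extract spatiotemporal features from those sequences. The following is a minimal PyTorch sketch of those ideas, not the authors' implementation; the class names (ConvLSTMCell, SequencePolicy, SequenceReplay), the network sizes, the sequence length and the toy random data are all illustrative assumptions.

```python
# Minimal sketch (assumed structure, not the paper's code) of the ESLW idea:
# worker nodes collect experience sequences and push them into a shared replay
# buffer (extended experience replay), while a learner node samples whole
# sequences and trains a policy whose encoder is a ConvLSTM.

import random
from collections import deque

import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """One ConvLSTM cell: LSTM gates computed with a 2-D convolution."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class SequencePolicy(nn.Module):
    """Encodes an observation sequence with a ConvLSTM, then outputs action values."""

    def __init__(self, in_ch, n_actions, hid_ch=16, obs_hw=32):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Linear(hid_ch * obs_hw * obs_hw, n_actions)
        self.hid_ch = hid_ch

    def forward(self, seq):                       # seq: (batch, time, C, H, W)
        b, t, _, h, w = seq.shape
        hx = torch.zeros(b, self.hid_ch, h, w, device=seq.device)
        cx = torch.zeros_like(hx)
        for step in range(t):                     # unroll over the sequence
            hx, cx = self.cell(seq[:, step], (hx, cx))
        return self.head(hx.flatten(1))           # action values from last hidden state


class SequenceReplay:
    """Extended experience replay: stores and samples whole sequences."""

    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, seq):                          # one experience sequence
        self.buf.append(seq)

    def sample(self, batch_size):
        return random.sample(list(self.buf), batch_size)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for UAV camera observations.
    policy = SequencePolicy(in_ch=1, n_actions=5)
    replay = SequenceReplay()
    for _ in range(16):                           # worker nodes would do this in parallel
        replay.push(torch.rand(8, 1, 32, 32))     # one 8-step observation sequence
    batch = torch.stack(replay.sample(4))         # learner samples whole sequences
    print(policy(batch).shape)                    # torch.Size([4, 5])
```

In the full ESLW setup, the push and sample sides would run in separate processes or machines (the worker nodes and the learner node), which is where the reported gain in convergence speed comes from, and the stored sequences would also carry actions and rewards so the learner can compute its reinforcement-learning loss.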

Journal

Assembly Automation, Emerald Publishing

Published: Jul 22, 2021

Keywords: Unmanned aerial vehicles; Deep reinforcement learning; Cloud robotics; Dynamic navigation; Learning and adaptive systems
