Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach

Published in 2025 IEEE Latin American Robotics Symposium (LARS), 2025

Inspecting confined industrial infrastructure, such as ventilation shafts, is a hazardous and inefficient task for humans. Unmanned Aerial Vehicles (UAVs) offer a promising alternative, but GPS-denied environments require robust control policies to prevent collisions. Deep Reinforcement Learning (DRL) has emerged as a powerful framework for developing such policies, and this paper provides a comparative study of two leading DRL algorithms for this task: the on-policy Proximal Policy Optimization (PPO) and the off-policy Soft Actor-Critic (SAC). Training was conducted in procedurally generated duct environments within the Genesis simulation framework. A reward function was designed to guide the drone through a series of waypoints while applying a significant penalty for collisions. PPO learned a stable policy that completed all evaluation episodes without collision, producing smooth trajectories; by contrast, SAC consistently converged to suboptimal behavior, traversing only the initial duct segments before failing. These results suggest that, in hazard-dense navigation tasks, the training stability of on-policy methods can outweigh the nominal sample efficiency of off-policy algorithms. More broadly, the study provides evidence that procedurally generated, high-fidelity simulations are effective testbeds for developing and benchmarking robust navigation policies.
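
To make the reward design concrete, the sketch below shows one plausible form of a waypoint-following reward with a terminal collision penalty. It is a minimal illustration rather than the paper's implementation: the function name, the coefficients (`k_progress`, `wp_bonus`, `collision_penalty`), and the waypoint-capture radius are all assumptions introduced here for exposition.

```python
import numpy as np

def waypoint_reward(pos, prev_pos, waypoints, wp_idx, collided,
                    k_progress=1.0, wp_bonus=10.0,
                    collision_penalty=-100.0, wp_radius=0.3):
    """Illustrative reward: dense progress toward the current waypoint,
    a bonus on reaching it, and a large terminal penalty on collision.
    All constants are hypothetical, not taken from the paper."""
    if collided:
        # Significant penalty; the episode terminates on collision.
        return collision_penalty, wp_idx, True

    target = np.asarray(waypoints[wp_idx])
    prev_dist = np.linalg.norm(target - np.asarray(prev_pos))
    dist = np.linalg.norm(target - np.asarray(pos))

    # Dense shaping term: reward reduction in distance to the waypoint.
    reward = k_progress * (prev_dist - dist)

    if dist < wp_radius:
        # Waypoint reached: grant a bonus and advance to the next one.
        reward += wp_bonus
        wp_idx += 1

    done = wp_idx >= len(waypoints)  # episode ends once all waypoints are visited
    return reward, wp_idx, done
```

Under this kind of shaping, the dense progress term provides a learning signal between waypoints, while the large terminal penalty makes collision avoidance dominate the return in hazard-dense ducts.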

Recommended citation: Tayar, M.S., de Oliveira, L.K., Negri, J.D., Segreto, T.H., Godoy, R.V. and Becker, M., 2025. Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach. arXiv preprint arXiv:2508.16807.
Download Paper