Modern artificial intelligence techniques for unmanned aerial vehicle path planning and control

Date

2025

Abstract

Unmanned aerial vehicles (UAVs) require effective path planning algorithms to navigate through complex environments. This study investigates the application of Deep Q-learning and Dyna Q-learning methods for UAV path planning and incorporates fuzzy logic for enhanced control. Deep Q-learning, a reinforcement learning technique, employs a deep neural network to approximate Q-values, allowing the UAV to improve its path planning capabilities by maximizing cumulative rewards. In contrast, Dyna Q-learning leverages simulated experience generated from a learned model to update Q-values, refining the UAV's decision-making process and its adaptability to dynamic environments. Additionally, fuzzy logic control is integrated to manage UAV movements along the planned path. This control system uses linguistic variables and fuzzy rules to handle uncertainty and imprecise information, enabling real-time adjustments to speed, altitude, and heading for accurate path following and obstacle avoidance. The research evaluates the effectiveness of these methods individually, with a focus on model-free learning in a gradual training approach, and compares their performance in terms of path planning accuracy, adaptability, and obstacle avoidance. The paper contributes to a deeper understanding of UAV path planning techniques and their practical applications in various scenarios.
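The Dyna Q-learning idea summarized above, updating Q-values from both real transitions and transitions replayed from a learned model, can be sketched on a toy grid world. Everything below (grid size, reward scheme, hyperparameters) is an illustrative assumption for clarity, not the study's actual implementation.

```python
import random

random.seed(0)

SIZE = 5                      # 5x5 grid world (assumption, not from the paper)
GOAL = (4, 4)                 # goal cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Deterministic grid dynamics: move, clip to bounds, reward 1 at goal."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = {}                        # Q-values: (state, action) -> value
model = {}                    # learned model: (state, action) -> (next_state, reward)
alpha, gamma, eps, n_plan = 0.1, 0.95, 0.1, 20

def q(s, a):
    return Q.get((s, a), 0.0)

def update(s, a, r, s2):
    """One-step Q-learning backup toward r + gamma * max_a' Q(s', a')."""
    best = max(q(s2, a2) for a2 in ACTIONS)
    Q[(s, a)] = q(s, a) + alpha * (r + gamma * best - q(s, a))

for episode in range(200):
    s = (0, 0)
    done = False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a2: q(s, a2))
        s2, r, done = step(s, a)
        update(s, a, r, s2)       # direct RL from real experience
        model[(s, a)] = (s2, r)   # record the transition in the model
        # planning: replay n_plan simulated transitions drawn from the model
        for _ in range(n_plan):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            update(ps, pa, pr, ps2)
        s = s2

# After training, follow the greedy policy from the start toward the goal.
s, steps = (0, 0), 0
while s != GOAL and steps < 50:
    a = max(ACTIONS, key=lambda a2: q(s, a2))
    s, _, _ = step(s, a)
    steps += 1
print(steps)  # number of greedy steps taken from (0, 0)
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real step is followed by several cheap simulated backups, so the value estimates propagate much faster per unit of real experience.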

Keywords

Deep Q-learning, Dyna Q-learning, Fuzzy logic, Quadrotor, Unmanned aerial vehicle path planning
