Published online by Cambridge University Press: 24 January 2025
Actuator faults in unmanned aerial vehicles (UAVs) can have significant adverse effects on safety and performance, making fault diagnosis a critical element of UAV design. Ensuring the reliability of these systems across diverse applications often requires advanced diagnostic algorithms. Artificial intelligence methods, such as deep learning and machine learning techniques, enable fault diagnosis through sample-based learning without prior knowledge of fault mechanisms or physics-based models. However, UAV fault datasets are typically small because of stringent safety standards, which makes high-performance fault diagnosis challenging. To address this, deep reinforcement learning (DRL) algorithms offer a distinct advantage by combining deep learning’s automatic feature extraction with reinforcement learning’s interactive learning process, improving both learning capability and robustness. In this study, we propose and evaluate two DRL-based fault diagnosis models that achieve diagnostic accuracy consistently exceeding $99\%$. Notably, in small-sample scenarios, the proposed models significantly outperform traditional classifiers such as decision trees, support vector machines, and multilayer perceptron neural networks. These findings suggest that integrating DRL enhances fault diagnosis performance, particularly in data-limited environments.
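To illustrate the general idea of casting fault classification as a reinforcement learning problem, the sketch below frames diagnosis as a contextual-bandit task: the state is a sensor feature vector, the action is the predicted fault class, and the reward is +1 for a correct diagnosis and −1 otherwise. This is not the authors' implementation; the synthetic data, network size, reward scheme, and training settings are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's models): fault diagnosis
# framed as a contextual-bandit RL problem with a DQN-style value network.
import numpy as np
import torch
import torch.nn as nn

# --- Synthetic stand-in for UAV actuator-fault samples -----------------
# 4 classes (e.g. healthy + 3 actuator faults), 16-dimensional features.
rng = np.random.default_rng(0)
n_classes, n_features, n_samples = 4, 16, 400
centers = rng.normal(0.0, 2.0, size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=n_samples)
features = centers[labels] + rng.normal(0.0, 1.0, size=(n_samples, n_features))

X = torch.tensor(features, dtype=torch.float32)
y = torch.tensor(labels, dtype=torch.long)

# --- Q-network: state = sensor sample, action = predicted fault class --
qnet = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, n_classes),
)
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

epsilon = 0.1  # epsilon-greedy exploration rate
for epoch in range(30):
    perm = torch.randperm(n_samples)
    for idx in perm.split(32):              # mini-batches of one-step "episodes"
        states, targets = X[idx], y[idx]
        q_values = qnet(states)

        # Epsilon-greedy action selection (the diagnosed fault class).
        greedy = q_values.argmax(dim=1)
        explore = torch.randint(0, n_classes, greedy.shape)
        mask = torch.rand(greedy.shape) < epsilon
        actions = torch.where(mask, explore, greedy)

        # Reward: +1 for a correct diagnosis, -1 otherwise.
        rewards = torch.where(actions == targets, 1.0, -1.0)

        # One-step (bandit) target: Q(s, a) should match the observed reward.
        q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q_taken, rewards)

        opt.zero_grad()
        loss.backward()
        opt.step()

with torch.no_grad():
    acc = (qnet(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {acc:.3f}")
```

In this framing, the greedy action of the learned value network serves as the classifier's prediction; replacing the synthetic data with real actuator-fault measurements and adding experience replay or a target network would move the sketch closer to a full DQN-style diagnosis model.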