
Autonomous multicopter landing on a moving vehicle based on RSSI

Published online by Cambridge University Press:  12 February 2024

Jongwoo An*
Affiliation:
Department of Electronics Engineering, Pusan National University, Kumjeong-ku, Republic of Korea
Hosun Kang
Affiliation:
Department of Electronics Engineering, Pusan National University, Kumjeong-ku, Republic of Korea
Jiwook Choi
Affiliation:
Department of Electronics Engineering, Pusan National University, Kumjeong-ku, Republic of Korea
Jangmyung Lee
Affiliation:
Department of Electronics Engineering, Pusan National University, Kumjeong-ku, Republic of Korea
*Corresponding author: Jongwoo An; Email: jongwoo7379@pusan.ac.kr

Abstract

Currently, most studies on unmanned aerial vehicle (UAV) automatic landing systems depend mainly on image information to determine the landing location. However, such a system requires a camera, a gimbal system and a separate image-processing device, which increase the weight and power consumption of the UAV and thus shorten its flight time. In addition, the heavy computational load and slow reaction speed can cause the system to miss the proper landing moment. To solve these problems, in this study, the moving direction of and relative distance to the landing target were measured using the received signal strength indicator (RSSI) of a radio-frequency (RF) signal. To improve the accuracy of the direction and distance estimates, the noise in the RF signal was minimised using a low pass filter and a moving average filter. Based on the filtered RF signal, the acceleration required for the multicopter to reach the target was estimated by adopting the proportional navigation algorithm. The performance of the proposed algorithm for precise landing on a moving vehicle was demonstrated through experiments.

Type
Research Article
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of The Royal Institute of Navigation

1. Introduction

Recently, unmanned aerial vehicles (UAVs) have been receiving considerable attention from the world's leading research institutes and related industries. UAVs are relatively small compared with manned aircraft, and both their operating cost and the risk of aircraft loss are low. Owing to these advantages, UAVs are being applied to new fields in the United States, Japan, Europe and elsewhere, and are actively used as test platforms for the latest electronic devices and various control algorithms. In particular, UAVs developed for military purposes are spreading to various business fields as their utility is highly evaluated. UAVs can be broadly divided into fixed-wing and rotary-wing types. In terms of energy efficiency, rotary-wing UAVs are inferior to fixed-wing UAVs. However, rotary-wing UAVs can take off and land without a runway and are useful for monitoring because they can maintain flight over a specific area (Foster et al., 2014; Amanatiadis, 2016).

The flight of a multicopter consists of four stages: take-off, ascent, descent and landing. Most commercial multicopters restrict automatic landing because of its high risk and low reliability. High accuracy and precision are required for automatic landing; otherwise, the UAV may miss the landing point and fall to the ground. For example, approximately 50 % of the accidents involving rotorcraft operated by the US military are reported to occur during landing (Wang et al., 2013a, 2013b; Pramod, 2014).

To prevent such accidents, a precise and stable automatic landing system is required. Currently, most research on UAV automatic landing systems focuses on image-based landing. However, because such a system requires a camera, a gimbal system and a separate image-processing device, the weight and power consumption of the aircraft increase, which shortens the flight time.

Image-based UAVs are also difficult to use far from the landing location, because they must visually recognise it. In addition, when the object marking the landing location leaves the field of view of the camera mounted on the UAV, the object must be searched for again.

The purpose of this paper is to study a multicopter landing system that avoids the problems of vision-based systems described above (limited camera viewing angle and recognition distance, relatively long computation time, high cost of system configuration, etc.).

In this study, we implement a multicopter automatic landing system based on the received signal strength indicator (RSSI) of the radio-frequency (RF) signal to overcome the problems of existing image-based automatic landing systems. An RSSI localisation system estimates the location of an object based on the strength of the signals exchanged between devices connected through a Wireless Local Area Network (WLAN). Typically, an array of sensors capable of wireless communication, such as beacons, is installed, and the location of the object is estimated from the intensities of the signals received from each sensor. However, owing to the characteristics of radio signals in wireless devices, the precision of the measured values may be degraded by noise and the surrounding environment (Yuan et al., 2008; Zhu et al., 2013; Li et al., 2014).

In this study, the noise is minimised using a low pass filter (LPF) and moving average filter (MAF) to improve the precision of the RSSI. Based on this improved RSSI information, the moving direction and relative distance of the landing platform are estimated, and the multicopter automatic landing system is implemented by linking this with proportional navigation (PN).

The rest of this paper is organised as follows. Section 2 presents the modelling of the multicopter for target position control. Section 3 describes the estimation of the multicopter heading using RSSI and the adoption of the PN algorithm for landing. Section 4 presents the experimental results of the multicopter landing on a moving vehicle. In Section 5, the performance of the proposed landing system is compared with that of other systems. Section 6 concludes the paper and outlines future research.

2. Modelling of multicopter

The coordinate system model based on the structure of the multicopter is shown in Figure 1.

Figure 1. Coordinate of multicopter

The linear velocity and angular velocity of the multicopter expressed in the body-fixed frame are defined as (Lee et al., 2011; Choi et al., 2019)

(1)\begin{align}\dot{p} & = Rv\end{align}
(2)\begin{align}\omega & = C\dot{\eta }\end{align}

In Equation (1), v is the velocity vector $({v_x},{v_y},{v_z})$ in the body-fixed frame and R is the matrix that rotates the body-fixed frame with respect to the inertial frame, expressed as (An and Lee, 2018; Choi et al., 2019)

(3)\begin{equation}R = {R_z}(\psi ){R_y}(\theta ){R_x}(\phi )\end{equation}

where

\[{R_x}(\phi ) = \left[ {\begin{array}{*{20}{c}} 1& 0& 0\\ 0& {\cos \phi }& { - \sin \phi }\\ 0& {\sin \phi }& {\cos \phi } \end{array}} \right],\,{R_y}(\theta ) = \left[ {\begin{array}{*{20}{c}} {\cos \theta }& 0& {\sin \theta }\\ 0& 1& 0\\ { - \sin \theta }& 0& {\cos \theta } \end{array}} \right],\;\textrm{and}\;{R_z}(\psi ) = \left[ {\begin{array}{*{20}{c}} {\cos \psi }& { - \sin \psi }& 0\\ {\sin \psi }& {\cos \psi }& 0\\ 0& 0& 1 \end{array}} \right]\textrm{.}\]
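As a quick implementation aid, the following is a minimal NumPy sketch of this composition; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def rot_x(phi):
    """Roll rotation about the body x-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(theta):
    """Pitch rotation about the body y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(psi):
    """Yaw rotation about the body z-axis."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_to_inertial(phi, theta, psi):
    """Equation (3): R = Rz(psi) Ry(theta) Rx(phi), so that
    Equation (1) reads p_dot = R @ v for a body-frame velocity v."""
    return rot_z(psi) @ rot_y(theta) @ rot_x(phi)
```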

In Equation (2), $\eta$ denotes the Euler angles $\phi$, $\theta$ and $\psi$ of the multicopter, and C is the matrix relating the Euler angular velocity in the inertial frame to the angular velocity vector in the body-fixed frame, expressed as (An and Lee, 2018; Choi et al., 2019)

(4)\begin{equation}C = \left[ {\begin{array}{*{20}{c}} 1& 0& { - \sin \theta }\\ 0& {\cos \phi }& {\sin \phi \cos \theta }\\ 0& { - \sin \phi }& {\cos \phi \cos \theta } \end{array}} \right]\end{equation}

Differentiating Equations (1) and (2) gives (Choi et al., 2019)

(5)\begin{align}\ddot{p} & = R\dot{v} + \dot{R}v\end{align}
(6)\begin{align}\dot{\omega } & = C\ddot{\eta } + \dot{C}\dot{\eta }\end{align}

where $\dot{C}$ is defined as

(7)\begin{equation}\begin{aligned} \dot{C} & = \left[ {\dfrac{{\partial C}}{{\partial \phi }}\dot{\phi } + \dfrac{{\partial C}}{{\partial \theta }}\dot{\theta } + \dfrac{{\partial C}}{{\partial \psi }}\dot{\psi }} \right]\\ & = \left[ {\begin{array}{*{20}{c}} 0& 0& { - \dot{\theta }\cos \theta }\\ 0& { - \dot{\phi }\sin \phi }& {\dot{\phi }\cos \phi \cos \theta - \dot{\theta }\sin \phi \sin \theta }\\ 0& { - \dot{\phi }\cos \phi }& { - \dot{\phi }\sin \phi \cos \theta - \dot{\theta }\cos \phi \sin \theta } \end{array}} \right] \end{aligned}\end{equation}

Using Newton's second law, the force and moment balance acting on the multicopter can be represented as follows (An and Lee, 2018; Choi et al., 2019):

(8)\begin{align}m\dot{v} + \omega \times (mv) = F + {F_g}\end{align}
(9)\begin{align}I\dot{\omega } + \omega \times (I\omega ) = Q - {Q_G}\end{align}

where m is the mass of the multicopter and I is its moment of inertia. The multicopter is designed to be symmetric, so the moment of inertia is defined as (An and Lee, 2018; Choi et al., 2019)

(10)\begin{equation}I = \left[ {\begin{array}{*{20}{c}} {{I_{xx}}}& 0& 0\\ 0& {{I_{yy}}}& 0\\ 0& 0& {{I_{zz}}} \end{array}} \right]\end{equation}

where ${I_{xx}} = {I_{yy}}$.

In Equation (8), the gravity acting on the multicopter must be expressed in the body-fixed frame. As a result, the gravitational vector expressed in the inertial frame must be rotated to the body-fixed frame as (Choi et al., 2019)

(11)\begin{equation}{F_g} = m{R^T}{g^o}\end{equation}

In Equation (9), ${Q_G}$ is the gyroscopic effect, defined from the angular velocities ${\Omega _1},{\Omega _2},{\Omega _3}$ and ${\Omega _4}$ of the four rotors mounted on the multicopter as (Choi et al., 2019)

(12)\begin{equation}{Q_G} = \omega \times {I_R}{\Omega _G}\end{equation}

where ${I_R}$ is the moment of inertia of the rotor.

From Equations (3), (8) and (9), the equation for the acceleration of the multicopter in the inertial frame is derived as (Choi et al., 2019)

(13)\begin{align}m{R^T}\ddot{p} & = F + m{R^T}{g^o}\end{align}
(14)\begin{align}\ddot{p} & = {g^o} + \frac{1}{m}RF.\end{align}

Additionally, the equation for the angular acceleration of the multicopter in the inertial frame is derived as (Choi et al., 2019)

(15)\begin{align}I(C\ddot{\eta } + \dot{C}\dot{\eta }) + C\dot{\eta } \times (IC\dot{\eta }) = Q - C\dot{\eta } \times {I_R}{\Omega _G}\end{align}
(16)\begin{align}\ddot{\eta } = {(IC)^{ - 1}}(Q - I\dot{C}\dot{\eta } - C\dot{\eta } \times (IC\dot{\eta } + {I_R}{\Omega _G}))\end{align}
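To make Equation (16) concrete, the sketch below evaluates the Euler angular acceleration numerically from Equations (4), (7) and (16). It is an illustrative implementation under the assumption that the rotor term ${I_R}{\Omega _G}$ is supplied as a body-frame angular-momentum vector; the function names are not from the paper.

```python
import numpy as np

def C_matrix(phi, theta):
    """Equation (4): maps Euler-angle rates to body angular velocity."""
    sp, cp = np.sin(phi), np.cos(phi)
    st, ct = np.sin(theta), np.cos(theta)
    return np.array([[1.0, 0.0, -st],
                     [0.0,  cp, sp * ct],
                     [0.0, -sp, cp * ct]])

def C_dot(phi, theta, phi_dot, theta_dot):
    """Equation (7): time derivative of C (C does not depend on psi)."""
    sp, cp = np.sin(phi), np.cos(phi)
    st, ct = np.sin(theta), np.cos(theta)
    return np.array([
        [0.0, 0.0, -theta_dot * ct],
        [0.0, -phi_dot * sp,  phi_dot * cp * ct - theta_dot * sp * st],
        [0.0, -phi_dot * cp, -phi_dot * sp * ct - theta_dot * cp * st]])

def euler_accel(Q, eta, eta_dot, I, rotor_momentum):
    """Equation (16): eta_ddot = (IC)^{-1} (Q - I Cdot etadot
    - C etadot x (I C etadot + I_R Omega_G))."""
    phi, theta, _ = eta
    C = C_matrix(phi, theta)
    Cd = C_dot(phi, theta, eta_dot[0], eta_dot[1])
    w = C @ eta_dot                                  # body angular velocity
    rhs = Q - I @ (Cd @ eta_dot) - np.cross(w, I @ w + rotor_momentum)
    return np.linalg.solve(I @ C, rhs)
```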

Using the rotor outputs derived through the above process, the direction of the multicopter's motion can be controlled by the rotations of the four rotors, as shown in Figure 2.

Figure 2. Multicopter movement direction with respect to the rotation direction of the rotor

The red and blue arrows in Figure 2 indicate high-speed and low-speed rotation, respectively. As shown in Figure 2, the multicopter controls its direction of movement by controlling the rotational speed of each rotor. For example, if $f_{m1}$ and $f_{m2}$ rotate at low speed and $f_{m3}$ and $f_{m4}$ rotate at high speed, as shown in ② (Forward) in Figure 2, the aircraft tilts forward and moves forward.
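A minimal mixer illustrating this mapping is sketched below. The motor numbering, signs and configuration are assumptions made for illustration; the paper does not specify its mixer.

```python
import numpy as np

# Hypothetical mixer: columns are [thrust, roll, pitch, yaw] commands,
# rows are rotors f_m1..f_m4. Signs and layout are illustrative only.
MIX = np.array([
    [1.0, -1.0,  1.0,  1.0],   # f_m1 (front-left)
    [1.0,  1.0,  1.0, -1.0],   # f_m2 (front-right)
    [1.0,  1.0, -1.0,  1.0],   # f_m3 (rear-right)
    [1.0, -1.0, -1.0, -1.0],   # f_m4 (rear-left)
])

def motor_commands(thrust, roll, pitch, yaw):
    """Map collective thrust and roll/pitch/yaw commands to rotor speeds.
    A negative pitch command slows the front pair and speeds up the rear
    pair, tilting the vehicle forward as in case (2) of Figure 2."""
    return MIX @ np.array([thrust, roll, pitch, yaw])

# Forward flight: front rotors slow (0.8), rear rotors fast (1.2).
print(motor_commands(thrust=1.0, roll=0.0, pitch=-0.2, yaw=0.0))
```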

3. Landing algorithm

3.1 Multicopter heading estimation

To apply the strength of the RF signal to positioning of the target object, the relationship between signal strength and distance must first be established. Various studies (Zhao et al., 2018; Li et al., 2020; Shin et al., 2020) show that the strength of an RF signal decreases with the distance the radio wave has travelled. Using this characteristic, the distance between two nodes can be obtained from the strength of the received RF signal.

To convert the strength of the RF signal into distance information, a path-loss model is required. When using the RSSI method, the free-space path-loss model in an ideal free space can be derived from the Friis formula as (Jeon and Kim, 2011; Kim and Kim, 2011)

(17)\begin{equation}{P_r} = {P_t}\frac{{{G_t}{G_r}{\alpha ^2}}}{{{{(4\pi d)}^2}}}\end{equation}

where ${P_r}$ is the received power, ${P_t}$ is the transmitted power, ${G_t}$ and ${G_r}$ are the antenna gains, $\alpha$ is the wavelength of the radio wave and d is the distance between the two nodes. If the antenna gains ${G_t}$ and ${G_r}$ are not considered in Equation (17), the path loss $P{L_F}$ in free space can be defined as (Kim and Kim, 2011)

(18)\begin{equation}P{L_F}[dB] = 10\log \frac{{{P_t}}}{{{P_r}}} ={-} 10\log {\left( {\frac{\alpha }{{4\pi d}}} \right)^2}\end{equation}

In most path-loss models, as in the free-space model, the average power decreases logarithmically with distance. This loss model is called the log-distance path-loss model and can be defined as (Yi and Kim, 2017)

(19)\begin{equation}PL(d) = PL({d_0}) + 10n\log \left( {\frac{d}{{{d_0}}}} \right)\end{equation}

where ${d_0}$ is the reference distance $({d_0} < d)$, $PL({d_0})$ is the path loss at the reference distance and n is the path loss coefficient.

The path-loss coefficient is generally 2 in free space, 2·7–3·5 in urban areas and 1·6–1·8 indoors. Because the multicopter landing experiment in this study was conducted outdoors, the path-loss coefficient range was based on the urban values, and the optimal coefficient was selected through experiment.

When the reference distance ${d_0}$ is set to 1 m in Equation (19), the strength of the RF signal can be defined as (Wang et al., 2013a, 2013b)

(20)\begin{equation}RSSI[dBm] ={-} (A + 10n\log d)\end{equation}

where $A$ is the RSSI magnitude at the reference distance of 1 m.
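Inverting Equation (20) gives the range estimate used throughout this section. The sketch below shows this inversion; the values A = 40 and n = 3 are illustrative assumptions (n lies within the urban range of 2·7–3·5 quoted above), not the calibrated values from the paper.

```python
def rssi_to_distance(rssi_dbm, A=40.0, n=3.0):
    """Invert Equation (20): RSSI[dBm] = -(A + 10 n log10 d),
    so d = 10 ** ((-RSSI - A) / (10 n))."""
    return 10.0 ** ((-rssi_dbm - A) / (10.0 * n))

# Worked check: with A = 40 and n = 3, an RSSI of -70 dBm gives
# d = 10 ** ((70 - 40) / 30) = 10 m.
assert abs(rssi_to_distance(-70.0) - 10.0) < 1e-9
```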

In Figure 3, RF1, RF2 and RF3 denote the RF sensors. RF1 is attached to the landing platform, and RF2 and RF3 are attached to the left and right sides of the multicopter, respectively. The distance between RF2 and RF3 is 1 m.

Figure 3. Multicopter heading estimation

As shown in Figure 3, when the landing platform moves to the right (Case 2) or left (Case 3) of the multicopter, the values of RSSIL (between RF1 and RF2) and RSSIR (between RF1 and RF3) measured by the RF sensors differ.

When the multicopter and the landing platform are positioned in a straight line (Case 1), the values of RSSIL and RSSIR are nearly equal.

Figure 4 shows the experimental results to confirm the changes in RSSIL and RSSIR values according to the case shown in Figure 3.

Figure 4. RSSI data

Cases 1, 2 and 3 compare the RSSI magnitudes measured when the landing platform is located in the middle, to the right and to the left of the multicopter, respectively. The RSSI measurement period in Figure 4 is 200 ms. As shown in Figure 4, the RSSIL and RSSIR values differ depending on the direction in which the landing platform is located. In Case 1, RSSIL and RSSIR are similar. In Case 2, RSSIL is smaller than RSSIR, whereas in Case 3, RSSIL is greater than RSSIR.

However, as shown in Figure 4, the RSSI values of RSSIL and RSSIR measured by the multicopter are too noisy to be used for localisation. Therefore, it is difficult for the multicopter to determine the moving direction using the RSSI.

To obtain usable data from the measured RSSI, the MAF was adopted, which averages only the latest c measured values rather than all measured data. Using this filter, more stable data can be obtained while the number of stored data points is kept constant. The MAF can be written recursively as (Kim et al., 2019)

(21)\begin{equation}{\bar{x}_m} = {\bar{x}_{m - 1}} + \frac{{{x_m} - {x_{m - c}}}}{c}\end{equation}

where ${\bar{x}_m} = \dfrac{{{x_{m - c + 1}} + {x_{m - c + 2}} + \cdots + {x_m}}}{c}$ and ${\bar{x}_{m - 1}} = \dfrac{{{x_{m - c}} + {x_{m - c + 1}} + \cdots + {x_{m - 1}}}}{c}$.
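The following is a small sketch of the recursive MAF in Equation (21), together with a first-order low pass filter; the window size and the LPF coefficient are assumed values, as the paper does not report its filter parameters.

```python
from collections import deque

class MovingAverageFilter:
    """Recursive moving average over the latest c samples, Equation (21)."""
    def __init__(self, c):
        self.c = c
        self.window = deque()
        self.mean = 0.0

    def update(self, x):
        if len(self.window) < self.c:            # window still filling
            self.window.append(x)
            self.mean = sum(self.window) / len(self.window)
        else:
            oldest = self.window.popleft()       # x_{m-c}
            self.window.append(x)                # x_m
            self.mean += (x - oldest) / self.c   # Equation (21)
        return self.mean

def low_pass(prev, x, alpha=0.3):
    """First-order LPF with assumed smoothing constant alpha."""
    return alpha * x + (1.0 - alpha) * prev
```

In the paper's pipeline, each raw RSSI stream is passed through the LPF and then the MAF before being used for direction and distance estimation.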

Figure 5 shows the result of applying the LPF and MAF to the RSSIL and RSSIR data in Figure 4.

Figure 5. RSSI correction data from Figure 4

As shown in Figure 5, the moving direction of the multicopter is determined by the following equation, using the noise-filtered RSSIL and RSSIR values:

(22)\begin{equation}f(x) = \left\{ \begin{array}{@{}ll} 1& \quad RSS{I_L} > RSS{I_R}\\ 0& \quad RSS{I_L} \approx RSS{I_R}\\ 2& \quad RSS{I_L} < RSS{I_R} \end{array} \right.\end{equation}
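A direct encoding of Equation (22) is sketched below. Because measured values are never exactly equal, the 'approximately equal' condition is implemented here with an explicit tolerance band, whose width is an assumed value.

```python
def heading_case(rssi_l, rssi_r, tol=1.0):
    """Equation (22): 0 if RSSI_L ~ RSSI_R (Case 1, platform straight
    ahead), 1 if RSSI_L > RSSI_R, 2 if RSSI_L < RSSI_R.
    tol is an assumed tolerance in dBm."""
    if abs(rssi_l - rssi_r) <= tol:
        return 0
    return 1 if rssi_l > rssi_r else 2
```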

Figure 6 shows the results of measuring the relative distance at 5 m, 10 m and 15 m based on the improved RSSI information.

Figure 6. Distance measurement value using RSSI

3.2 Proportional navigation

PN is a guidance method generally used to steer a missile to a target point. Rather than aiming at the target's current position, it uses the target's position, speed and heading to predict the target's future position and estimates the path that will intercept the target there (Brighton et al., 2017; Zheng et al., 2017; Garcia et al., 2018; Shiraishi et al., 2019).

As shown in Figure 7, PN exploits the fact that if the line-of-sight direction from the multicopter to the target remains constant while the range decreases, and neither vehicle changes its heading, the two vehicles are on a collision course (Xiao et al., 2017; Jung et al., 2018).

Figure 7. Proportional navigation

The PN rotates the vehicle's velocity vector towards the target (landing platform) at a rate proportional to the rotation rate of the line of sight (LOS), as shown in Figure 7. Guidance based only on the target's current angle and speed simply computes the flight path to the target's present position. PN, in contrast, computes the rate of change of the angle between the vehicle and target and derives the course and acceleration that intercept the target, guiding the vehicle to the target's future position.

The change in LOS through PN and the acceleration command to reach the target point are defined by the following equation (Borowczyk et al., 2017):

(23)\begin{equation}{\vec{a}_n} = \lambda |{\vec{v}} |\frac{{\vec{p}}}{{|{\vec{p}} |}} \times \vec{\beta }\end{equation}

where $\lambda$ is a proportional constant, generally taking a value of 3 to 5, $\vec{v}$ is the closing velocity of the multicopter towards the moving landing platform and $\vec{p}$ is the relative distance from the multicopter to the landing platform (Girard and Kabamba, 2015).

Here, $\vec{v}$ and $\vec{p}$ are defined as follows:

(24)\begin{equation}\vec{v} = {\vec{V}_M} - {\vec{V}_T}\end{equation}

where ${\vec{V}_M}$ and ${\vec{V}_T}$ are the velocities of the multicopter and landing platform, respectively, and

(25)\begin{equation}\vec{p} = {\vec{p}_M} - {\vec{p}_T}\end{equation}

where ${\vec{p}_M}$ and ${\vec{p}_T}$ are the locations of the multicopter and landing platform, respectively (Cho and Kim, 2016; Borowczyk et al., 2017).

In this research, the relative distance $\vec{p}$ is measured through the RF sensors attached to the two vehicles. The relative speed $\vec{v}$ is calculated from the change in this relative distance per unit time, with the mobile robot moving at a constant speed.

In Equation (23), $\vec{\beta }$ is the LOS rotation vector for the target ahead of the multicopter, defined as

(26)\begin{equation}\vec{\beta } = \frac{{\vec{p} \times \vec{v}}}{{\vec{p} \ {\cdot}\ \vec{p}}}\end{equation}

In this study, when comparison of the RSSI values measured by the RF sensors indicates that the moving direction of the landing platform relative to the multicopter corresponds to Case 1, the LOS rotation vector $\vec{\beta }$ in the PN can be replaced with a unit vector. Likewise, when the moving directions of the multicopter and the landing platform coincide, the distance vector $\vec{p}$ and velocity vector $\vec{v}$ reduce to scalars, and the acceleration to reach the target point in Equation (23) becomes

(27)\begin{equation}{a_n} = \lambda v.\end{equation}
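The sketch below implements the general PN command of Equations (23)–(26) and the aligned-heading special case of Equation (27); the function names and the default navigation constant are illustrative.

```python
import numpy as np

def pn_acceleration(p_m, p_t, v_m, v_t, lam=3.0):
    """Equations (23)-(26). p_*: positions and v_*: velocities of the
    multicopter (M) and target (T) as 3-vectors; lam is the navigation
    constant (3 to 5 per the paper)."""
    p = p_m - p_t                              # relative distance, Eq. (25)
    v = v_m - v_t                              # closing velocity, Eq. (24)
    beta = np.cross(p, v) / np.dot(p, p)       # LOS rotation vector, Eq. (26)
    return lam * np.linalg.norm(v) * np.cross(p / np.linalg.norm(p), beta)

def pn_acceleration_aligned(closing_speed, lam=3.0):
    """Equation (27): once the headings coincide (Case 1) and beta is
    replaced by a unit vector, the command reduces to a_n = lambda v."""
    return lam * closing_speed
```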

4. Experimental results

To verify the performance of the multicopter automatic landing system proposed in this paper, an experiment was conducted with the setup shown in Figure 8. Table 1 lists the specifications of the multicopter in Figure 8.

Figure 8. Multicopter with RF sensor

Table 1. Multicopter specifications

As shown in Figure 8, the RF sensors are installed on the multicopter's legs, 1 m apart. The sub-controller estimates the landing direction and closing velocity using the data measured by these RF sensors. Thereafter, the sub-controller transmits a control command to the flight controller (FC) to proceed with landing. The FC and sub-controller are linked using the onboard software development kit (SDK) provided by DJI Corp.

Figure 9 illustrates the mobile platform used in this experiment. The speed of the mobile robot was preset, and the start of its motion was remotely commanded.

Figure 9. Landing platform (mobile robot)

Figure 10 shows the RF sensor used in the experiment of this paper. Table 2 provides the communication protocol of the RF sensor.

Figure 10. RF sensor

Table 2. RF sensor protocol

Each RF sensor attached to the multicopter and the landing platform is assigned an ID, as shown in Table 2, which eliminates the risk of receiving external signals. The RF measurement period was 200 ms.

The experiment was conducted on the rooftop of the 10th engineering building at Pusan National University, and the mobile robot ran straight from point B to point C, as shown in Figure 11, at a constant speed of approximately 0·5 m/s.

Figure 11. Experiment environment

Figure 12 shows a block diagram of the proposed algorithm.

Figure 12. Block diagram of the proposed algorithm

Owing to spatial constraints, landing in this experiment was initiated only when the straight-line distance between the multicopter and the mobile robot, measured through the RF signal, was within 15 m.

For this reason, the multicopter, which waits in a hovering state at point A, is flown to point B under manual control when the mobile robot starts moving.

When the relative distance between the two vehicles falls within 15 m while the multicopter moves to B, the direction of movement of the multicopter is determined by comparing the magnitudes of RSSIL and RSSIR. When the magnitudes of RSSIL and RSSIR become similar, the multicopter determines that it is aligned in a straight line with the mobile robot. After that, while estimating the relative distance and relative speed, the multicopter lands on the mobile robot.
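Putting the pieces together, the following is a hedged sketch of this landing sequence, reusing the MovingAverageFilter, rssi_to_distance and heading_case helpers sketched earlier. The callback interfaces, window size and touchdown threshold are assumptions; the paper links to the actual hardware through the DJI onboard SDK, whose calls are not reproduced here.

```python
import time

def landing_loop(read_rssi, adjust_heading, command_accel, land):
    """Sketch of the Section 4 landing sequence. read_rssi() returns the
    raw (RSSI_L, RSSI_R) pair; adjust_heading, command_accel and land
    are placeholders for the flight-controller interface."""
    maf_l, maf_r = MovingAverageFilter(10), MovingAverageFilter(10)
    prev_d = None
    LAMBDA, PERIOD = 3.0, 0.2              # navigation constant; 200 ms period

    while True:
        raw_l, raw_r = read_rssi()
        l, r = maf_l.update(raw_l), maf_r.update(raw_r)
        d = rssi_to_distance((l + r) / 2.0)

        if d <= 15.0:                      # landing starts within 15 m
            case = heading_case(l, r)
            if case != 0:
                adjust_heading(case)       # Cases 2/3: turn until aligned
            else:
                if prev_d is not None:
                    v = (prev_d - d) / PERIOD   # closing speed from range rate
                    command_accel(LAMBDA * v)   # Equation (27)
                if d < 0.5:                # assumed touchdown threshold
                    land()
                    return
        prev_d = d
        time.sleep(PERIOD)
```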

Figure 13 shows the experimental results of applying the multicopter automatic landing algorithm proposed in this paper.

Figure 13. Proposed algorithm experiment result (3D view)

Figure 14 shows the results of Figure 13 in two dimensions to check the position error during landing.

Figure 14. Proposed algorithm experiment result (top view)

As described above, when the mobile robot starts to travel in a straight line, the multicopter in flight moves to position B and estimates the movement direction of the mobile robot. When it is properly aligned with the landing direction, the multicopter proceeds to land on the mobile robot. The experiment was conducted a total of five times; the average landing error was within 0·23 m and the minimum error within 0·2 m.

5. Analysis of performance with other systems

5.1 Processing time

Vision-based multicopter landing systems have the disadvantage of slow computation. To address this weakness, various machine-learning-based studies are being conducted, as shown in Table 3. However, these approaches depend on the performance of the hardware: in Table 3, the same algorithm yields different processing times on different hardware.

Table 3. Image-processing time per image/FPS

To obtain faster processing, a high-performance processing device is essential, but such devices consume considerable power and add weight, which shortens the flight time of the multicopter.

The system proposed in this paper takes less time to process data than an image-based system. The measured processing time required to determine the direction and acceleration of the multicopter was at most 39 ms. The system thus achieves processing times comparable to those of the desktop computer in Table 3 without high-performance hardware.

5.2 Conventional landing system (non vision)

Figure 15 shows the flight path used to verify return-to-home (RTH) performance. The multicopter takes off from point A and is flown remotely through point B to point C. When the multicopter arrives at point C, the pilot sends an RTH command, and the multicopter moves to the stored GPS coordinates of take-off point A and lands.

Figure 15. Conventional landing system RTH (top view)

Because the multicopter used the stored GPS coordinates of take-off point A, it should have landed at the same location as A; however, it was confirmed that it landed at point D, away from A, owing to GPS error.

In Figure 16, to examine the landing position error of the RTH function in more detail, the part marked with a black circle was enlarged and the GPS information was converted into metres. As the results in Figure 16 show, the RTH landing error was approximately 1·2 m along the x-axis and approximately 1 m along the y-axis, relative to the take-off position.

Figure 16. Conventional landing system position error (RTH)

As a result of the experiment, a maximum position error of 1·2 m occurred in the RTH landing. Compared with the experimental results of Section 4 (landing position error of 0·2 m), in which the multicopter landed on a mobile robot moving on the ground, the RTH exhibited a landing position error larger by 1 m or more. These results show that the proposed algorithm outperforms RTH.

5.3 Conventional landing system (using vision)

Table 4 below summarises the landing errors of vision-based landing systems.

Table 4. Position error of vision-based landing systems

The average landing position error of the vision-based landing systems in Table 4 is 0·19 m, comparable to the 0·23 m achieved by the landing system in this research. Because the experimental environments in Table 4 differ from that of this study, a definitive comparison cannot be made.

However, compared with vision-based multicopter landing systems, the proposed algorithm demonstrates its merit in that it produces similar results with a relatively simple and inexpensive system.

6. Conclusions

In this study, automatic landing of a multicopter on a dynamic landing location (mobile robot) was implemented using RF signals. Existing multicopter automatic landing systems depend heavily on image information obtained by a gimbal-mounted camera. Because they rely on image information, they are difficult to use at distances where the camera cannot clearly recognise the target, and their long computation times limit the speed of the moving platform on which the multicopter can land. To overcome these weaknesses, automatic landing on a mobile robot was implemented by estimating the moving direction and relative distance of the mobile robot using the RSSI of the RF signal. In addition, the arrival acceleration for landing on the moving mobile robot was obtained through the PN algorithm.

As a result of the experiment, the average landing error was 0·23 m and the minimum landing error was 0·2 m. These results are similar to the landing errors of the vision-based systems shown in Table 4, demonstrating the merit of this research: comparable results were obtained with a system that is relatively simple and inexpensive compared with vision-based landing systems.

However, although the proposed algorithm can estimate the direction and relative distance to the left and right, estimating the fore-and-aft direction is difficult owing to the nature of the system configuration.

GPS could, of course, provide omnidirectional estimation. However, because this study aims to implement an automatic landing system using only RF sensors, future work will address improving the precision of RF signals and achieving omnidirectional estimation with RF sensors.

Acknowledgement

This study was financially supported by the 2020 Post-Doc. Development Program of Pusan National University.

Footnotes

Died 24 July 2021

References

Amanatiadis, A. (2016). A multisensor indoor localization system for biped robots operating in industrial environments. IEEE Transactions on Industrial Electronics, 63(12), 7597–7606.
An, J. and Lee, J. (2018). Using IMU Sensor and EKF Algorithm in Attitude Control of a Quad-Rotor Helicopter. International Conference on Intelligent Autonomous Systems, pp. 933–942.
Borowczyk, A., Nguyen, D.T., Nguyen, A.P.V., Nguyen, D.Q., Saussié, D. and Le Ny, J. (2017). Autonomous landing of a quadcopter on a high-speed ground vehicle. Journal of Guidance, Control, and Dynamics, 40(9), 2378–2385.
Brighton, C.H., Thomas, A.L. and Taylor, G.K. (2017). Terminal attack trajectories of peregrine falcons are described by the proportional navigation guidance law of missiles. Proceedings of the National Academy of Sciences, 114(51), 13495–13500.
Cho, N. and Kim, Y. (2016). Optimality of augmented ideal proportional navigation for maneuvering target interception. IEEE Transactions on Aerospace and Electronic Systems, 52(2), 948–954.
Choi, J., Hwang, D., An, J. and Lee, J.M. (2019). Object detection using CNN for automatic landing of drones. Journal of the Institute of Electronics and Information Engineers, 56(5), 82–90.
Feng, Y., Zhang, C., Baek, S., Rawashdeh, S. and Mohammadi, A. (2018). Autonomous landing of a UAV on a moving platform using model predictive control. Drones, 2(4), 34.
Foster, J., Li, N. and Cheung, K.F. (2014). Sea state determination from ship-based geodetic GPS. Journal of Atmospheric and Oceanic Technology, 31(11), 2556–2564.
Garcia, E., Casbeer, D., Pham, K.D. and Pachter, M. (2018). Cooperative Aircraft Defense from an Attacking Missile Using Proportional Navigation. Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, USA.
Girard, A.R. and Kabamba, P.T. (2015). Proportional navigation: Optimal homing and optimal evasion. SIAM Review, 57(4), 611–624.
Jeon, S.Y. and Kim, M.J. (2011). RSSI Compensation Scheme Based on Successive Distance Comparison for Weighted Centroid Localization Algorithm. Proceedings of Symposium of the Korean Institute of Communications and Information Sciences, pp. 109–110.
Jiang, T., Lin, D. and Song, T. (2019). Vision-based autonomous landing of a quadrotor using a gimbaled camera. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 233(14), 5093–5106.
Jung, S., Hwang, S., Shin, H. and Shim, D.H. (2018). Perception, guidance, and navigation for indoor autonomous drone racing using deep learning. IEEE Robotics and Automation Letters, 3(3), 2539–2544.
Kim, S. and Kim, Y. (2011). A study on indoor location estimation using RSSI of low power tag in RFID/USN environments. Journal of Korean Information Technology, 9(10), 67–74.
Kim, G.W., Park, J.H., Lee, S.J. and Kim, J.H. (2019). Study on Improvement of Noise Control and SOC Estimation Using Moving Average Filter and Adaptive Kalman Filter. The Korean Institute of Power Electronics Conference, pp. 198–200.
Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D. and Matas, J. (2018). DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp. 8183–8195.
Lee, K.U., Yun, Y.H., Chang, W., Park, J.B. and Choi, Y.H. (2011). Modeling and Controller Design of Quadrotor UAV. Proceedings of the Korean Institute of Electrical Engineers, Pyeongchang, KOR.
Li, Z., Chen, Y., Lu, H., Wu, H. and Cheng, L. (2019). UAV Autonomous Landing Technology Based on AprilTags Vision Positioning Algorithm. 2019 Chinese Control Conference, pp. 8148–8153.
Li, X., Dick, G., Ge, M., Heise, S., Wickert, J. and Bender, M. (2014). Real-time GPS sensing of atmospheric water vapor: Precise point positioning with orbit, clock, and phase delay corrections. Geophysical Research Letters, 41(10), 3615–3621.
Li, C., Huang, H. and Liao, B. (2020). An improved fingerprint algorithm with access point selection and reference point selection strategies for indoor positioning. The Journal of Navigation, 73(6), 1182–1201.
Nah, S., Hyun Kim, T. and Mu Lee, K. (2017). Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 3883–3891.
Palafox, P.R., Garzón, M., Valente, J., Roldán, J.J. and Barrientos, A. (2019). Robust visual-aided autonomous takeoff, tracking, and landing of a small UAV on a moving landing platform for life-long operation. Applied Sciences, 9(13), 2661.
Pramod, P. (2014). GPS based advanced soldier tracking with emergency messages & communication system. International Journal of Advance Research in Computer Science and Management Studies, 2(6), 25–32.
Shin, B., Lee, J.H., Shin, D., Yu, C., Kyung, H. and Lee, T. (2020). Performance enhancement of emergency rescue system using surface correlation technology. Journal of Positioning, Navigation, and Timing, 9(3), 183–189.
Shiraishi, Y., Takano, H., Yamasaki, T. and Yamaguchi, I. (2019). A Study on the Improvement of Modified Proportional Navigation Guidance. AIAA Scitech 2019 Forum, p. 2346.
Truong, N.Q., Lee, Y.W., Owais, M., Nguyen, D.T., Batchuluun, G., Pham, T.D. and Park, K.R. (2020). SlimDeblurGAN-based motion deblurring and marker detection for autonomous drone landing. Sensors, 20(14), 3918.
Tzoumanikas, D., Li, W., Grimm, M., Zhang, K., Kovac, M. and Leutenegger, S. (2019). Fully autonomous micro air vehicle flight and landing on a moving target using visual–inertial estimation and model-predictive control. Journal of Field Robotics, 36(1), 49–77.
Wang, B., Deng, Z., Liu, C., Xia, Y. and Fu, M. (2013a). Estimation of information sharing error by dynamic deformation between inertial navigation systems. IEEE Transactions on Industrial Electronics, 61(4), 2015–2023.
Wang, J. and Olson, E. (2016). AprilTag 2: Efficient and Robust Fiducial Detection. Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, KOR, pp. 4193–4198.
Wang, Y., Yang, X., Zhao, Y., Liu, Y. and Cuthbert, L. (2013b). Bluetooth Positioning Using RSSI and Triangulation Methods. IEEE 10th Consumer Communications and Networking Conference, pp. 837–842.
Wubben, J., Fabra, F., Calafate, C.T., Krzeszowski, T., Marquez-Barja, J.M., Cano, J.C. and Manzoni, P. (2019). Accurate landing of unmanned aerial vehicles using ground pattern recognition. Electronics, 8(12), 1532.
Xiao, X., Dufek, J., Woodbury, T. and Murphy, R. (2017). UAV Assisted USV Visual Navigation for Marine Mass Casualty Incident Response. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 6105–6110.
Yi, D.H. and Kim, S.C. (2017). Analysis of computer simulated and field experimental results of LoRa considering path loss under LoS and NLoS environment. The Journal of Korean Institute of Communications and Information Sciences, 42(2), 444–452.
Yuan, Y., Huo, X., Ou, J., Zhang, K., Chai, Y., Wen, D. and Grenfell, R. (2008). Refining the Klobuchar ionospheric coefficients based on GPS observations. IEEE Transactions on Aerospace and Electronic Systems, 44(4), 1498–1510.
Zhao, J., Gao, X., Wang, X., Li, C., Song, M. and Sun, Q. (2018). An efficient radio map updating algorithm based on K-means and Gaussian process regression. The Journal of Navigation, 71(5), 1055–1068.
Zheng, D., Lin, D., Xu, X. and Tian, S. (2017). Dynamic stability of rolling missile with proportional navigation & PI autopilot considering parasitic radome loop. Aerospace Science and Technology, 67, 41–48.
Zhu, Q., Zhao, Z. and Lin, L. (2013). Real time estimation of slant path tropospheric delay at very low elevation based on singular ground-based global positioning system station. IET Radar, Sonar & Navigation, 7(7), 808–814.