
Broad learning control of a two-link flexible manipulator with prescribed performance and actuator faults

Published online by Cambridge University Press:  14 February 2023

Wenkai Niu
Affiliation:
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
Linghuan Kong
Affiliation:
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
Yifan Wu
Affiliation:
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
Haifeng Huang
Affiliation:
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
Wei He*
Affiliation:
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
*
*Corresponding author. E-mail: weihe@ieee.org

Abstract

In this paper, we present a broad learning control method for a two-link flexible manipulator with prescribed performance (PP) and actuator faults. The trajectory tracking errors are processed through two consecutive error transformations to achieve constraints in terms of the overshoot, transient error, and steady-state error, and a barrier Lyapunov function is employed to constrain the transformed state variable. Then, improved radial basis function neural networks combined with broad learning theory are constructed to approximate the unknown model dynamics of the flexible robotic manipulator. The proposed fault-tolerant PP control can not only ensure that the tracking errors converge into a small region near zero within the preset finite time but also address the problems caused by actuator faults. All the closed-loop error signals are shown to be uniformly ultimately bounded via Lyapunov stability theory. Finally, the feasibility of the proposed control is verified by simulation results.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

In recent years, as rigid robots have been widely used in various fields [1–4], flexible manipulators have also received extensive attention for their lighter weight, higher load capacity, and lower energy consumption [5–7]. At present, control methods for flexible manipulators mainly include proportional-integral-derivative control [8, 9], fuzzy control [10–12], adaptive control [13–15], sliding mode control [16, 17], and compound control [18]. Although these methods are effective, uncertain factors such as parameter perturbation and external environmental interference [19] make the ordinary differential equation model established for the robotic arm an uncertain model [20]. Neural networks (NNs) are an effective way to deal with the uncertainty of flexible manipulators owing to their excellent learning ability [21–24]. However, in practical applications, the selected NNs nodes cannot cover all the input information, which limits the ability to approximate the unknown model. To further improve the learning ability of NNs and reduce the complexity of the network structure, Chen and Liu [25] proposed the broad learning system (BLS) and proved its ability to approximate nonlinear functions [26].
Different from deep learning systems, the BLS consists of an input layer, an enhancement layer, and an output layer, and retrains the model by expanding the width of the enhancement layer. Owing to its simple structure and strong learning ability, the BLS is widely used in pattern recognition, data classification, and model approximation. Combining radial basis function neural networks (RBFNNs) and the BLS, Huang et al. [27] proposed a novel NNs control for the rigid manipulator system, which realized the approximation of the system model. Peng et al. [28] proposed an improved broad neural networks (BNNs) control for a robotic arm with input dead-zone interacting with an unknown environment. Although these results are effective, there is barely an adaptive NNs control strategy based on the BLS for the more complex flexible manipulator.

In practice, actuator faults often occur because of aging and wear, which degrades the system performance [29, 30]. In general, actuator faults can be classified as (1) effectiveness loss, (2) locked-in-place, (3) floating around trim, and (4) hard-over. Fault-tolerant control (FTC), which is divided into passive fault-tolerant control (PFTC) and active fault-tolerant control (AFTC), is a suitable strategy against the aforementioned actuator faults [31]. The AFTC approach compensates for system component malfunctions based on fault detection and diagnosis technologies [32, 33]. The PFTC strategy is simpler from a practical point of view because it handles faults through a robust or adaptive controller. Besides, the PFTC method also avoids the time delay caused by fault detection and diagnosis in AFTC. Considering the aforementioned factors, a passive fault-tolerant (FT) controller is designed in this paper for a two-link flexible manipulator with actuator faults.

To ensure the transient and steady-state performance of the flexible manipulator system, output constraints are usually considered in the controller design. At present, commonly used output constraint methods include the barrier Lyapunov function (BLF) method [34–36], the funnel control method [37, 38], and the prescribed performance control (PPC) method [39–41]. In ref. [39], PPC was proposed for the first time; its main idea is to convert the error variable constrained by an inequality into an unconstrained variable and to ensure that the system tracking error converges to a prescribed small region while the convergence speed and overshoot meet the prescribed conditions. Subsequently, many scholars have applied this method to robotic manipulator systems, including rigid robotic manipulators [42–44] and single-link flexible manipulators [45]. However, the convergence speed of the system depends on the decay function $\rho =(\rho _{0}-\rho _{\infty })e^{-l_{0}t}+\rho _{\infty }$, and the preset convergence time is not considered. The convergence time of the tracking error decreases as $l_{0}$ increases, but at the expense of overshoot, and the convergence time of the transient error cannot be determined. Although the prescribed time control method [46, 47] proposed in recent years can solve the problem of transient error convergence time, it ignores overshoot, steady-state error, and other factors.
Therefore, it is necessary to design a unified prescribed performance control strategy that accounts for overshoot, transient convergence time, transient error, and steady-state error. In this paper, we focus on improving the transient performance of a two-link flexible robotic manipulator. Different from a single-link flexible manipulator, the two-link flexible manipulator has a more complex nonlinear model and stronger inter-link coupling [48]. Moreover, in practical applications, the elastic deformation and high-frequency vibration of the flexible links also increase the complexity of real-time control. Therefore, it is difficult but valuable to improve the rapidity and steady-state accuracy of the two-link flexible manipulator system while ensuring overall trajectory tracking and elastic vibration suppression.

Based on the above discussion, and aiming at the fast tracking and vibration suppression problem of a two-link flexible robotic manipulator with uncertainties and actuator faults, this paper presents an adaptive NNs FTC strategy with prescribed performance. The difficulty lies in forcing the tracking errors of the two joints to converge to a preset small region within the specified time with a small overshoot. Inspired by the neural adaptive PPC proposed in ref. [49], an asymmetric scaling function and a behavior-shaping function are introduced to constrain the overshoot and steady-state errors, respectively. The unknown model dynamics of the flexible robotic manipulator are approximated by the improved RBFNNs. Besides, the multiplicative factor of the actuator faults is estimated by an adaptive estimation method, and the additive part is compensated by the upper bound of the faults. Compared with existing flexible manipulator control strategies, the main contributions of this paper are summarized as follows:

  1. Different from most performance constraints for flexible manipulators [45, 50], this paper starts from the transient performance indices, and the purpose of controlling the overshoot, transient error, and steady-state error is achieved by introducing two auxiliary functions. Then, a logarithmic BLF is utilized to constrain the transformed state variable.

  2. Compared with the traditional PPC in refs. [39, 42], the performance function applied in this paper can not only make the tracking errors of the two joints converge to a small residual set within the given time and finally stabilize in a smaller range but also ensure a small overshoot.

  3. Different from traditional RBFNNs control, this paper presents a new node selection rule combining broad learning and incremental learning. The centers of the Gaussian functions are self-adjusting, and the number of neural nodes is limited to reduce the complexity of the NNs.

The rest of this paper is organized as follows. The dynamic description and the problem formulation are given in Section 2. Adaptive NN FTC with prescribed performance (PP) based on broad learning is presented in Section 3. In Section 4, simulations are conducted to illustrate the effectiveness of the proposed control. Finally, the conclusions are given in Section 5.

2. Preliminaries and problem formulation

2.1. Dynamic description

A typical structure of the two-link flexible robotic manipulator is shown in Fig. 1. On the premise of small deflection, the dynamic model of the manipulator system, obtained via the assumed mode method and Lagrange’s equation, can be written as [51]

(1) \begin{equation} M(q)\ddot{q}+B(q,\dot{q})\dot{q}+K(q)=u(t) \end{equation}

where $q=[\theta,\phi ]^{T}$ , $\theta =[\theta _{1},\theta _{2}]^{T}$ is the vector of joint angular position, $\phi =[\phi _{11},\ldots,\phi _{1n_{1}},\phi _{21},\ldots,\phi _{2n_{2}}]^{T}$ represents the flexible generalized coordinate vector, and $n=n_{1}+n_{2}$ is the total number of flexible variables. $M(q)\in R^{(n+2)\times (n+2)}$ is the positive-definite inertia matrix, $B(q,\dot{q})\in R^{(n+2)\times (n+2)}$ denotes the centripetal and Coriolis forces, and $K(q)\in R^{(n+2)}$ is the vector of gravity and elasticity forces. $u=[u_{1},u_{2},0,\cdots,0]^{T}\in R^{(n+2)}$ is the input torque vector.
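As a quick illustration of how the model (1) is used in simulation, the following minimal sketch (NumPy assumed; the constant matrices are placeholders, not a real manipulator model) solves (1) for the generalized accelerations:

```python
import numpy as np

def forward_dynamics(q, dq, u, M_fn, B_fn, K_fn):
    """Solve (1) for the generalized accelerations:
    qdd = M(q)^{-1} [u - B(q, dq) dq - K(q)]."""
    return np.linalg.solve(M_fn(q), u - B_fn(q, dq) @ dq - K_fn(q))

# Toy illustration with constant placeholder matrices (not a real model):
M_fn = lambda q: np.eye(2)
B_fn = lambda q, dq: np.zeros((2, 2))
K_fn = lambda q: np.zeros(2)
qdd = forward_dynamics(np.zeros(2), np.zeros(2), np.array([1.0, 2.0]),
                       M_fn, B_fn, K_fn)
```

Using `np.linalg.solve` instead of explicitly inverting $M(q)$ is the standard, better-conditioned choice, and it relies on Property 1 ($M$ positive definite, hence invertible).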

Fig. 1. Diagram of the two-link flexible manipulator.

The actuator fault is mathematically modeled as

(2) \begin{equation} u(t)=D\tau (t)+\bar{u} \end{equation}

where $\tau (t)\in R^{n+2}$ denotes the desired control torque, $D=\textrm{diag}\{l_{1},\cdots,l_{n+2}\}\in R^{(n+2)\times (n+2)}$ represents the actuator effectiveness matrix with $0\lt \eta _{i}\leq l_{i}\leq 1,i=1,2,\cdots,n+2$ , and $\bar{u}=[\bar{u}_{1},\cdots,\bar{u}_{n+2}]^{T}\in R^{n+2}$ is the additive actuator fault vector.

Remark. As shown in (2), the multiplicative factor $l_{i}$ represents the uncertain gain of the actuator, while the additive factor $\bar{u}_{i}$ represents the disturbance or noise imposed on the actuator. Further, the $i$ th actuator has completely failed when $l_{i}=0$ and $\bar{u}_{i}=0$ , and is healthy when $l_{i}=1$ and $\bar{u}_{i}=0$ .
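The fault model (2) is straightforward to implement; the sketch below (NumPy assumed; all numerical values are illustrative, with $n_{1}=n_{2}=1$) shows a loss-of-effectiveness fault with a small additive bias on the first joint:

```python
import numpy as np

def apply_actuator_fault(tau, l, u_bar):
    """Fault model (2): u = D*tau + u_bar.

    tau   : commanded torque vector, shape (n+2,)
    l     : effectiveness factors with 0 < eta_i <= l_i <= 1
    u_bar : additive fault (bounded disturbance) vector
    """
    D = np.diag(l)
    return D @ tau + u_bar

# Example: first joint loses 40% effectiveness and picks up a bias of 0.1.
tau = np.array([2.0, 1.5, 0.0, 0.0])      # desired torques (illustrative)
l = np.array([0.6, 1.0, 1.0, 1.0])        # multiplicative factors
u_bar = np.array([0.1, 0.0, 0.0, 0.0])    # additive faults
u = apply_actuator_fault(tau, l, u_bar)   # torque actually delivered
```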

Assumption 1. [52] The additive fault introduced in (2) satisfies $|\bar{u}_{i}|\leq u_{ci}$ with $u_{ci}$ being a positive constant.

Property 1. [51] The matrix $M(q)$ is symmetric and positive definite.

Property 2. [51] The matrix $(\dot{M}(q)-2B(q,\dot{q}))$ is skew symmetric.

2.2. Prescribed tracking performance

Letting $x_{1}=q$ and $x_{2}=\dot{q}$ , the dynamic model (1) can be rewritten as

(3) \begin{equation} \dot{x}_{1} = x_{2} \end{equation}
(4) \begin{equation} \dot{x}_{2}= M^{-1}(x_{1})[u(t)-B(x_{1},x_{2})x_{2}-K(x_{1})] \end{equation}

Define the desired trajectory as $x_{d}=[x_{d1},\cdots,x_{d(n+2)}]^{T}$ , which is continuous, bounded, and known. The tracking error vector can be expressed as $e_{1}=x_{1}-x_{d}=[e_{11},\cdots,e_{1(n+2)}]^{T}$ .

The given performance specifications of tracking error vector $e_{1}(t)$ are set as

(5a) \begin{equation} |e_{1i}(t)|\lt \varepsilon,\quad \forall t\geq T\gt t_{0} \end{equation}
(5b) \begin{equation} \lim \limits _{t\to \infty }|e_{1i}(t)|\lt \mu \lt \varepsilon \end{equation}
(5c) \begin{equation} -\delta _{1}\lt e_{1i}(t)\lt \delta _{2}, \quad e_{1i}(t_{0})\geq 0 \end{equation}
(5d) \begin{equation} -\delta _{2}\lt e_{1i}(t)\lt \delta _{1}, \quad e_{1i}(t_{0})\lt 0 \end{equation}

where $T\gt t_{0}$ , $\varepsilon \gt 0$ , $\mu \gt 0$ , and $\delta _{2}\gt |e_{1i}(t_{0})|\gt \delta _{1}\gt 0$ are the specific parameters to be set, and $t_{0}\geq 0$ is the initial time. The above descriptions constrain the transient and steady-state performance of the tracking error, respectively. More specifically, $e_{1i}(t)$ converges to a small region $(\!-\varepsilon,\varepsilon )$ within finite time $T$ and finally shrinks into a smaller range $(\!-\mu,\mu )$ . By (5c), $-\delta _{1}$ and $\delta _{2}$ are the lower and upper bounds of the error $e_{1i}(t)$ when $e_{1i}(t_{0})\geq 0$ , for all $t\geq t_{0}$ . The same analysis applies to (5d) when $e_{1i}(t_{0})\lt 0$ . Taking $e_{1i}(t_{0})\geq 0$ as an example, the PP curve is shown in Fig. 2.

Fig. 2. Diagram of tracking error with preset performance.

2.3. Error transformation

It is difficult to ensure that the transient and steady-state performance meet the above requirements if the error is controlled directly. Thus, we introduce two error transformation functions to control the overshoot and steady-state error, respectively. For the overshoot, we define the function as

(6) \begin{equation} f(e_{1i})=\frac{\gamma e_{1i}^{m}}{(e_{1i}^{2m}+g)^{\frac{1}{2}}} \end{equation}

where $g$ is a small positive constant, $m$ is an odd integer with $m\geq \textrm{max}\{3,n+2\}$ , $\gamma =\gamma _{1}\gt 1$ for $e_{1i}\geq 0$ , and $\gamma =\gamma _{2}\gt 1$ for $e_{1i}\lt 0$ . $\dot{f}(e_{1i})=gm\gamma e_{1i}^{m-1}/(e_{1i}^{2m}+g)^{\frac{3}{2}}\geq 0$ shows that $f(e_{1i})$ is monotonically increasing with respect to $e_{1i}$ . Since $f(e_{1i})$ is monotonic and

(7) \begin{equation} \lim \limits _{e_{1i}\to +\infty }f(e_{1i}) =\gamma _{1}, \lim \limits _{e_{1i}\to -\infty }f(e_{1i}) =-\gamma _{2} \end{equation}

we can obtain $f(e_{1i})\in (\!-\gamma _{2},\gamma _{1})$ .
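A minimal numerical sketch of the scaling function (6) (NumPy assumed; the values of $\gamma _{1}$ , $\gamma _{2}$ , $m$ , and $g$ are illustrative) confirms the monotonicity and the saturation bounds $(\!-\gamma _{2},\gamma _{1})$ :

```python
import numpy as np

def f_transform(e, gamma1=1.2, gamma2=1.1, m=3, g=1e-2):
    """Asymmetric scaling function (6): f(e) = gamma * e^m / (e^(2m) + g)^(1/2),
    with gamma = gamma1 for e >= 0 and gamma = gamma2 for e < 0."""
    gamma = gamma1 if e >= 0 else gamma2
    return gamma * e**m / np.sqrt(e**(2 * m) + g)
```

For large positive (negative) arguments the value approaches $\gamma _{1}$ ( $-\gamma _{2}$ ), and the function increases monotonically, consistent with (7).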

In terms of the steady-state error, a behavior-shaping function is introduced as

(8) \begin{equation} Q(t)=\frac{k_{bi}}{(k_{bi}-\xi \varepsilon )\kappa ^{-1}(t)+\xi \varepsilon } \end{equation}

where $\kappa (t)\geq \kappa (t_{0})=1$ is a rate function with $\dot{\kappa }(t)\gt 0$ and $\ddot{\kappa }(t)\gt 0$ for all $t\geq t_{0}$ , and $k_{bi}\gt 1$ , $0\lt \xi \lt 1$ , $0\lt \varepsilon \lt 1$ are parameters to be designed. Unlike the traditional PP function, $\varepsilon$ and $\xi \varepsilon$ here specify the transient accuracy and steady-state accuracy of the error, respectively, and $\kappa (t)$ specifies the time of error convergence. More properties of the function $Q(t)$ are described in Lemma 2.3.
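One admissible rate function is $\kappa (t)=e^{a(t-t_{0})}$ with $a\gt 0$ , which satisfies $\kappa (t_{0})=1$ , $\dot{\kappa }\gt 0$ , and $\ddot{\kappa }\gt 0$ . A minimal sketch of $Q(t)$ under this choice (NumPy assumed; all parameter values are illustrative) is:

```python
import numpy as np

def Q(t, t0=0.0, a=2.0, k_b=1.5, xi=0.5, eps=0.1):
    """Behavior-shaping function (8) with kappa(t) = exp(a*(t - t0)),
    one admissible rate function (kappa(t0)=1, kappa' > 0, kappa'' > 0)."""
    kappa_inv = np.exp(-a * (t - t0))   # kappa^{-1}(t)
    return k_b / ((k_b - xi * eps) * kappa_inv + xi * eps)
```

Consistent with Lemma 2.3, $Q(t_{0})=1$ and $Q(t)$ is strictly increasing, approaching $k_{bi}/(\xi \varepsilon )$ as $t\to \infty$ .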

2.4. Broad learning system

As shown in Fig. 3, different from the general NNs structure, the hidden layer of the BLS includes feature nodes and enhancement nodes. For a given input vector $X$ , the feature mapping of $X$ can be extracted by the following $r_{1}$ feature mappings

(9) \begin{equation} F_{xi}=\phi _{xi}(XW_{xi}+\epsilon _{xi}), \quad i=1,\cdots,r_{1} \end{equation}

where $\phi _{xi}$ is a linear or nonlinear activation function, and $W_{xi}$ and $\epsilon _{xi}$ are the weight and bias, randomly generated and then kept constant during training. Define the feature vector as $F^{r_{1}}=[F_{x1},\cdots,F_{xr_{1}}]$ ; the $r_{2}$ enhancement nodes are obtained by the following mappings

(10) \begin{equation} H_{fj}=\phi _{fj}(F^{r_{1}}W_{fj}+\epsilon _{fj}), \quad j=1,\cdots,r_{2} \end{equation}

where $\phi _{fj}$ is a nonlinear activation function, and $W_{fj}$ and $\epsilon _{fj}$ are the randomly generated weight and bias. Define the enhancement node vector as $H^{r_2}=[H_{f1},\cdots,H_{fr_{2}}]$ and define $A=[F^{r_{1}}|H^{r_{2}}]$ ; then the output of the BLS can be expressed as $Y=AW^{(r_{1}+r_{2})}$ , where $W^{(r_{1}+r_{2})}$ is the weight from the hidden layer to the output layer.
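The forward pass (9)–(10) can be sketched as follows (NumPy assumed; the activation functions, layer sizes, and random initialization are illustrative choices, and only the output weight $W$ would be trained):

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_forward(X, r1=4, r2=3, n_in=5, n_out=2):
    """One forward pass of the BLS of Section 2.4.  Feature weights/biases
    (W_x, eps_x) and enhancement weights/biases (W_f, eps_f) are drawn
    randomly and kept fixed; only W (hidden-to-output) is trainable."""
    W_x = [rng.standard_normal((n_in, 1)) for _ in range(r1)]
    eps_x = [rng.standard_normal() for _ in range(r1)]
    # Feature nodes (9); a linear activation phi_x is assumed here
    F = np.hstack([X @ W_x[i] + eps_x[i] for i in range(r1)])
    W_f = [rng.standard_normal((r1, 1)) for _ in range(r2)]
    eps_f = [rng.standard_normal() for _ in range(r2)]
    # Enhancement nodes (10); a nonlinear activation phi_f (tanh) is assumed
    H = np.hstack([np.tanh(F @ W_f[j] + eps_f[j]) for j in range(r2)])
    A = np.hstack([F, H])                       # A = [F^{r1} | H^{r2}]
    W = rng.standard_normal((r1 + r2, n_out))   # trainable output weight
    return A @ W                                # Y = A W^{(r1+r2)}

Y = bls_forward(np.ones((1, 5)))
```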

Fig. 3. Neural networks structure with BLS.

2.5. RBFNNs with broad learning theory

In this paper, we use RBFNNs to approximate the uncertain dynamic system described by (1). Mathematically, the RBFNNs function is expressed as

\begin{equation*} F(Z)=W^{T}R(Z)+\epsilon (Z) \end{equation*}
\begin{equation*} R_{i}(Z)=\textrm {exp}\bigg [\!-\frac {(Z-\varphi _{i})^{T}(Z-\varphi _{i})}{\eta _{i}^2}\bigg ],(i=1,\cdots,l) \end{equation*}

where $F(Z)\in R^{n+2}$ is the output of the NNs, $Z\in R^{m_{z}}$ is the NNs input vector, $W\in R^{l\times (n+2)}$ is the NNs weight matrix with $l$ being the number of NNs nodes, $\epsilon (Z)\in R^{n+2}$ is the approximation error vector, $R(Z)\in R^{l}$ is the basis function vector with $R_{i}(Z)$ being a Gaussian function, and $\varphi =[\varphi _{1},\cdots,\varphi _{l}]\in R^{m_{z}\times l}$ and $\eta _{i}$ denote the centers and widths.

Inspired by the ideas of the BLS [25] and incremental learning theory [27], an improved RBFNNs learning algorithm is presented in this section. In traditional RBFNNs control, the designed NNs node space is not guaranteed to completely cover the input vector, so it is valuable to add nodes close to the input vector and discard nodes far away from it.

Define the NNs node information as $[\varphi _{t_{0}},\eta _{t_{0}},W_{t_{0}}]$ at $t=t_{0}$ and the added NNs node information as $[\varphi _{\textrm{new}},\eta _{\textrm{new}},W_{\textrm{new}}]$ . Select the $n_{p}$ center vectors closest to the input vector, $c_{\textrm{min}}=\{c_{1},\cdots,c_{n_{p}}\}\in \varphi$ , whose mean is $a_v=\sum _{i=1}^{n_{p}}c_{i}/n_{p}$ ; then the added node can be designed by the following rule [27]

(11) \begin{equation} \varphi _{\textrm{new}}=\bar{c}_{\textrm{min}}+k_{z}(Z-\bar{c}_{\textrm{min}}) \end{equation}
(12) \begin{equation} \eta _{\textrm{new}}=\eta _{t_{0}},W_{\textrm{new}}=[0,\cdots,0]\in R^{m_{z}} \end{equation}

where $k_{z}\in R^{m_{z}\times m_{z}}$ is a positive diagonal matrix, and $\bar{c}_{\textrm{min}}\in R^{m_{z}}$ denotes the averaged vector of the closest centers $c_{\textrm{min}}$ , defined as $\bar{c}_{\textrm{min}}=[a_{v},\cdots,a_{v}]\in R^{m_{z}}$ . Thus, the NNs center is updated as

(13) \begin{equation} \varphi _{t_{0}+t_{s}}=\begin{cases} [\varphi _{t_{0}},\varphi _{\textrm{new}}], & \Vert \bar{c}_{\textrm{min}}\Vert \gt \varXi \ \textrm{and} \ l\lt \bar{l} \\[3pt] \varphi _{t_{0}}, & \textrm{otherwise} \end{cases} \end{equation}

where $t_{s}$ is the sampling interval, $\varXi$ is the preset threshold, and $\bar{l}$ is the preset maximum number of neural nodes. According to the idea of broad learning, we choose the hidden layer of NNs as $G(t_{0}+t_{s})=[Z|R(Z)]\in R^{l+m_{z}}$ . Meanwhile, initialize the weight from the hidden layer to the output layer with $W(t_{0}+t_{s})=[W_{t_{0}}|W_{\textrm{new}}]\in R^{(l+m_{z})\times (n+2)}$ . Consequently, the output of BNNs can be expressed as

(14) \begin{equation} Y(t_{0}+t_{s})=W(t_{0}+t_{s})^{T}G(t_{0}+t_{s}) \end{equation}
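The center-update rule (11)–(13) can be sketched as follows (NumPy assumed; the choice $n_{p}=3$ and the threshold and budget values are illustrative):

```python
import numpy as np

def add_node(Z, centers, k_z, Xi=0.5, l_bar=50):
    """Center-update rule (11)-(13): append a new center near the input Z
    when the norm test against the threshold Xi passes and the node
    budget l_bar is not exhausted.  n_p = 3 nearest centers are averaged
    here (an illustrative choice)."""
    m_z, l = centers.shape
    n_p = min(3, l)
    d = np.linalg.norm(centers - Z[:, None], axis=0)  # distance to each center
    idx = np.argsort(d)[:n_p]                         # n_p closest centers
    a_v = centers[:, idx].mean()                      # scalar mean a_v
    c_min_bar = np.full(m_z, a_v)                     # c_bar_min = [a_v,...,a_v]
    if np.linalg.norm(c_min_bar) > Xi and l < l_bar:  # test in (13)
        phi_new = c_min_bar + k_z @ (Z - c_min_bar)   # Eq. (11)
        centers = np.hstack([centers, phi_new[:, None]])
    return centers
```

The new weight column attached to an added node would be initialized to zero, as in (12), so the network output is unchanged at the instant of insertion.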

Remark. The design of $\varphi _{\textrm{new}}$ in (11) ensures that the subsequently updated node is always near the input vector, which improves the estimation accuracy of the NNs for the unknown nonlinear function. Moreover, the threshold $\varXi$ in (13) provides a convenient way to limit the number of neural nodes, which helps reduce the computational burden of the system. It should be pointed out that, in practical applications, a small $\varXi$ will lead to a large number of nodes, so when no prior knowledge is available, we can preset the maximum number of neural nodes $\bar{l}$ to reduce the computational complexity of the NNs.

Remark. In practice, the value of $\bar{l}$ is related to the input information of the NNs. If the fluctuation range of the input information is large, a higher value of $\bar{l}$ is needed. For example, we can set a larger value for $\bar{l}$ at the initial time, such as $200$ . If the unknown function is observed to be well fitted, the value of $\bar{l}$ can be reduced; if it is poorly fitted, the value of $\bar{l}$ needs to be increased. If the fitting is still poor when $\bar{l}$ reaches $400$ , other parameter values should be reconsidered.

2.6. Useful technical lemmas

Lemma 2.1. [53] For any $x\in R$ and constant $\zeta \in R$ satisfying $|x|\lt |\zeta |$ , with $\zeta$ being the constraint value, the following inequality holds:

(15) \begin{equation} \textrm{ln}\frac{\zeta ^2}{\zeta ^{2}-x^2}\leq \frac{x^2}{\zeta ^{2}-x^2} \end{equation}

Lemma 2.2. [31] For any constants $a_{1},a_{2},\cdots,a_{n}$ and $0\leq x\leq 1$ , the following inequality holds:

(16) \begin{equation} (|a_{1}|+\cdots +|a_{n}|)^{x}\leq |a_{1}|^{x}+\cdots +|a_{n}|^{x} \end{equation}

Lemma 2.3. [49] The behavior-shaping function $Q(t)$ shown in (8) possesses the following properties:

  (1) ${Q}(t_{0})=1$ ;

  (2) $\dot{Q}(t)\gt 0$ and $Q^{(i)}\in \ell _{\infty }$ for all $t\geq t_{0}$ ;

  (3) $\lim \limits _{t\to \infty }Q^{(i)}=0$ , where $i$ is a positive integer.

3. Control design

3.1. Control objectives

In this section, the backstepping method is employed to design an adaptive NNs FT controller that achieves the following objectives:

  (1) The desired reference angular vector $x_{d}$ can be tracked within the prescribed time, and the tracking errors of the two flexible manipulator joints satisfy the prescribed requirements in terms of overshoot, transient, and steady-state performance.

  (2) The actuator faults described in (2) are compensated by a PFTC scheme.

  (3) All the closed-loop error signals are uniformly ultimately bounded.

3.2. Adaptive NN PPC with actuator faults

To facilitate the controller design, we define the first error variable $z_{1}$ as

(17) \begin{equation} z_{1}=f(e_{1})Q(t) \end{equation}

where $f(e_{1})=[f(e_{11}),\cdots,f(e_{1(n+2)})]^{T}$ . If $|z_{1i}|\lt k_{bi}\,(i=1,2,\cdots,n+2)$ with $k_{bi}\lt \textrm{min}\{\delta _{1},\delta _{2}\}$ , we get

(18) \begin{equation} |f(e_{1i})|=Q^{-1}(t)|z_{1i}|\lt (k_{bi}-\xi \varepsilon )\kappa ^{-1}(t)+\xi \varepsilon \end{equation}

When $t\geq T$ and $\kappa (T)$ is chosen as $(k_{bi}-\xi \varepsilon )/(\varepsilon -\xi \varepsilon )$ , we have

(19) \begin{equation} |f(e_{1i})||_{t\geq T}\lt (k_{bi}-\xi \varepsilon )\kappa ^{-1}(T)+\xi \varepsilon =\varepsilon \end{equation}

Considering that $\kappa (t)$ is a monotonically increasing function with $\kappa (t_{0})=1$ , we get $\lim \limits _{t\to +\infty }\kappa ^{-1}(t)=0$ ; thus, we can further obtain

(20) \begin{equation} \lim \limits _{t\to +\infty }|f(e_{1i})|\lt \xi \varepsilon =\mu, \quad 0\lt \xi \lt 1 \end{equation}

Next, we discuss the relationship between $e_{1}$ and $f(e_{1})$ . From (6), we can get the inverse function of $f(e_{1})$

(21) \begin{equation} e_{1i}(f)=\frac{g^{\frac{1}{2m}}f^{\frac{1}{m}}}{(\gamma ^{2}-f^{2})^{\frac{1}{2m}}}, \quad f\in (\!-\gamma _{2},\gamma _{1}) \end{equation}

Differentiating $e_{1i}(f)$ with respect to $f$ yields $\dot{e}_{1i}=f^{(\frac{1}{m}-1)}g^{\frac{1}{m}}\gamma ^{2}(\gamma ^{2}-f^{2})^{(\frac{1}{2m}-1)}/m(\gamma ^{2}-f^{2})^{\frac{1}{m}}\geq 0$ since $g\gt 0$ and $m$ is an odd integer no less than $3$ . Thus, $e_{1i}(f)$ is strictly increasing for $f\in (\!-\gamma _2,\gamma _1)$ . Selecting $g=(\gamma ^{2}-\varepsilon ^{2})\varepsilon ^{(2m-2)}$ and combining (19) and (21), we conclude

(22) \begin{equation} |e_{1i}||_{t\geq T}\lt \frac{(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}\varepsilon ^{(1-\frac{1}{m})}\varepsilon ^{\frac{1}{m}}}{(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}}=\varepsilon \end{equation}

Substituting (20) into (21) yields

(23) \begin{equation} \lim \limits _{t\to +\infty }|e_{1i}|\lt \frac{(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}\varepsilon ^{(1-\frac{1}{m})}\xi ^{\frac{1}{m}}\varepsilon ^{\frac{1}{m}}}{(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}}=\xi ^{\frac{1}{m}}\varepsilon \end{equation}

Thus, the constraints on $e_{1i}$ in (5a) and (5b) are guaranteed.
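The selection $g=(\gamma ^{2}-\varepsilon ^{2})\varepsilon ^{(2m-2)}$ can also be checked numerically: with this $g$ , the inverse map (21) sends $f=\varepsilon$ exactly to $e=\varepsilon$ , which is the boundary case behind (22). A minimal sketch (NumPy assumed; the values of $\gamma$ , $\varepsilon$ , and $m$ are illustrative):

```python
import numpy as np

gamma, eps, m = 1.2, 0.1, 3                   # illustrative parameter values
g = (gamma**2 - eps**2) * eps**(2 * m - 2)    # the selection used for (22)

def e_of_f(f):
    """Inverse transformation (21); m is odd, so the sign is carried explicitly."""
    return g**(1 / (2 * m)) * np.sign(f) * abs(f)**(1 / m) \
        / (gamma**2 - f**2)**(1 / (2 * m))
```

Algebraically, $g^{\frac{1}{2m}}=(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}\varepsilon ^{1-\frac{1}{m}}$ , so the $(\gamma ^{2}-\varepsilon ^{2})^{\frac{1}{2m}}$ factors cancel at $f=\varepsilon$ and $e_{1i}=\varepsilon$ , exactly as in (22).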

According to the properties of $Q(t)$ , we know $0\lt Q^{-1}(t)\lt 1$ and further get

(24) \begin{equation} |f(e_{1i})|=Q^{-1}(t)|z_{1i}|\lt |z_{1i}|\lt k_{bi} \end{equation}

where $k_{bi}\lt \textrm{min}\{\gamma _{1},\gamma _{2}\}$ .

When $e_{1i}(t_{0})\geq 0$ and $\gamma =\gamma _{1}$ , we can get from (21) and (24)

(25) \begin{equation} e_{1i}(t)=\frac{g^{\frac{1}{2m}}f^{\frac{1}{m}}}{(\gamma _{1}^{2}-f^{2})^{\frac{1}{2m}}}\lt \frac{g^{\frac{1}{2m}}k_{bi}^{\frac{1}{m}}}{(\gamma _{1}^{2}-k_{bi}^{2})^{\frac{1}{2m}}}=P_{u} \end{equation}

When $e_{1i}(t_{0})\lt 0$ and $\gamma =\gamma _{2}$ , we similarly get

(26) \begin{equation} e_{1i}(t)=\frac{g^{\frac{1}{2m}}f^{\frac{1}{m}}}{(\gamma _{2}^{2}-f^{2})^{\frac{1}{2m}}}\gt \frac{-g^{\frac{1}{2m}}k_{bi}^{\frac{1}{m}}}{(\gamma _{2}^{2}-k_{bi}^{2})^{\frac{1}{2m}}}=-P_{l} \end{equation}

where $P_{u}, -P_{l}$ are the upper and lower bounds of the tracking error, respectively. More specifically,

  (1) When $0\leq e_{1i}(t_{0}) \leq P_{u}$ , the size of the overshoot is determined by $P_{l}$ . Considering the definitions of $P_{l}$ and $g$ , we find that once $k_{bi}$ and $\varepsilon$ are fixed, the overshoot is determined only by $\gamma$ . According to the definition of $\gamma$ , it is easy to see that the overshoot is determined only by $\gamma _{1}$ .

  (2) When $-P_{l}\leq e_{1i}(t_{0}) \lt 0$ , the size of the overshoot is determined by $P_{u}$ . Considering the definitions of $P_{u}$ and $g$ , we find that once $k_{bi}$ and $\varepsilon$ are fixed, the overshoot is determined only by $\gamma$ . Similarly, it is easy to see that the overshoot is determined only by $\gamma _{2}$ .

The above analysis holds under the condition $|z_{1i}|\lt k_{bi}$ ; the preset performance can be achieved by transforming the error $e_{1i}$ twice with (6) and (8). In the following controller design, a BLF $\sum _{i=1}^{n+2}\textrm{ln}\frac{k_{bi}^{2}}{k_{bi}^{2}-z_{1i}^2}$ is adopted to guarantee that $z_{1i}$ remains within the region $|z_{1i}| \lt k_{bi}$ [54].

Define the second error variable $z_{2}$ as

(27) \begin{equation} z_{2}=x_{2}-\alpha \end{equation}

where $\alpha =[\alpha _{1},\cdots,\alpha _{n+2}]^{T}$ is the virtual control vector. Taking the derivative of $z_{1}$ with respect to time, we have

(28) \begin{align} \dot{z}_{1i}& =\frac{gm\gamma e_{1i}^{m-1}(z_{2i}+\alpha _{i}-\dot{x}_{di})}{(e_{1i}^{2m}+g)^{\frac{3}{2}}}Q(t)+\dot{Q}(t)Q^{-1}(t)z_{1i} \nonumber \\& =L_{i}(z_{2i}+\alpha _{i}-\dot{x}_{di})Q(t)+\dot{Q}(t)Q^{-1}(t)z_{1i} \end{align}

where

\begin{equation*} L_{i}=\frac{gm\gamma e_{1i}^{m-1}}{(e_{1i}^{2m}+g)^{\frac{3}{2}}}, \quad i=1,\cdots,n+2 \end{equation*}

Since the term $z_{2}^{T}Mz_{2}$ is positive and continuously differentiable, we choose the Lyapunov function candidate as

(29) \begin{equation} V_{1}=\frac{1}{2}\sum _{i=1}^{n+2}\textrm{ln}\frac{k_{bi}^{2}}{k_{bi}^{2}-z_{1i}^2}+\frac{1}{2}z_{2}^{T}Mz_{2} \end{equation}

Taking the time derivative of $V_{1}$ along (4) and (28), we obtain

(30) \begin{align} \dot{V}_{1}&=\sum _{i=1}^{n+2}\frac{z_{1i}\dot{z}_{1i}}{k_{bi}^{2}-z_{1i}^2}+\frac{1}{2}z_{2}^{T}\dot{M}z_{2}+z_{2}^{T}M(\dot{x}_{2}-\dot{\alpha })\nonumber \\ & =\sum _{i=1}^{n+2}\frac{z_{1i}}{k_{bi}^{2}-z_{1i}^2}\big (L_{i}(z_{2i}+\alpha _{i}-\dot{x}_{di})Q+\dot{Q}Q^{-1}z_{1i}\big ) \nonumber \\ &\quad +z_{2}^{T}\big (D\tau +\bar{u}-B(q,\dot{q})\alpha -K(q)-M(q)\dot{\alpha }\big )+z_{2}^{T}\Big (\frac{1}{2}\dot{M}(q)-B(q,\dot{q})\Big )z_{2} \end{align}

According to Property 2, the term $z_{2}^{T}(\frac{1}{2}\dot{M}(q)-B(q,\dot{q}))z_{2}=0$ . To proceed, we define virtual control law $\alpha$ as

(31) \begin{equation} \alpha =\dot{x}_{d}-K_{1}M_{1}-\dot{Q}Q^{-1}M_{1} \end{equation}

where $K_{1}=\textrm{diag}\{K_{11},\cdots,K_{1(n+2)}\}$ is a positive-definite gain matrix and

\begin{equation*} M_{1}=\left[\frac{e_{11}(e_{11}^{2m}+g)}{gm},\cdots,\frac{e_{1(n+2)}(e_{1(n+2)}^{2m}+g)}{gm}\right]^{T} \end{equation*}

Combining $L_{i}, M_{1i}$ , and $Q$ , we get

(32) \begin{equation} \begin{aligned} M_{1i}L_{i}Q&=\frac{\gamma e_{1i}^{m}}{(e_{1i}^{2m}+g)^{\frac{1}{2}}}Q=z_{1i} \end{aligned} \end{equation}

Substituting (31) and (32) into (30), one has

(33) \begin{align} \dot{V}_{1}& =\sum _{i=1}^{n+2}\frac{z_{1i}}{k_{bi}^{2}-z_{1i}^2}(L_{i}z_{2i}Q-K_{1i}z_{1i})+z_{2}^{T}\big (D\tau +\bar{u}-B(q,\dot{q})\alpha -K(q)-M(q)\dot{\alpha }\big ) \nonumber\\[3pt]& =z_{1}^TwLQz_{2}-\sum _{i=1}^{n+2}w_{i}K_{1i}z_{1i}^2+z_{2}^{T}(D\tau +\bar{u}-B(q,\dot{q})\alpha -K(q)-M(q)\dot{\alpha }) \end{align}

where $w=\textrm{diag}\{w_{1},\cdots,w_{n+2}\}$ with $w_{i}=\frac{1}{k_{bi}^{2}-z_{1i}^2}$ , and $L=\textrm{diag}\{L_{1},\cdots,L_{n+2}\}$ .

Since the uncertainties in $M(q)$ , $B(q,\dot{q})$ , and $K(q)$ are difficult to obtain, a purely model-based control cannot be implemented in practice. Therefore, BNNs are utilized to approximate the uncertainties of the system model. The adaptive controller is designed as

(34) \begin{equation} \begin{aligned} \tau _{1}=\hat{P}\tau _{1}^{*}=\hat{P}\big [wLQz_{1}-K_{2}z_{2}+\hat{W}^{T}R(Z)-\textrm{sgn}(z_{2})u_{c}\big ] \end{aligned} \end{equation}

where $\hat{P}=\textrm{diag}\{\hat{P}_{1},\cdots,\hat{P}_{n+2}\}$ is the estimate of $P$ with $P=D^{-1}$ , and $\tau _{1}^{*}=[\tau _{11}^{*},\tau _{12}^{*},0,\cdots,0]^{T}\in R^{n+2}$ . $\hat{W}$ is the NN weight matrix and $R(Z)$ is the basis function vector. The term $\hat{W}^{T}R(Z)$ is the estimate of ${W}^{*T}R(Z)$ , which approximates the uncertain terms in the control (34) as

(35) \begin{equation} W^{*T} R(Z)=B(q,\dot{q})\alpha +K(q)+M(q)\dot{\alpha }-\epsilon (Z) \end{equation}

where $Z=[q^{T},\dot{q}^{T},\alpha ^{T},\dot{\alpha }^{T}]$ is the input of the basis function to the NNs and $\epsilon (Z)\in R^{n+2}$ is the approximation error. Define the NNs adaptive laws as

(36) \begin{equation} \dot{\hat W}_{i}=-\Gamma _{i}[R_{i}(Z)z_{2i}+\sigma _{i}\hat{W}_{i}] \end{equation}

where $\Gamma _{i}\in R^{(n+2)\times (n+2)}$ is a constant positive-definite gain matrix and $\sigma _{i}$ is a small positive constant.
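As an illustrative sketch only (not the authors' MATLAB implementation; the dimensions, gains, and input values below are placeholder assumptions), the Gaussian RBF basis and a forward-Euler discretization of the $\sigma$-modified adaptive law (36) can be written as:

```python
import numpy as np

def rbf_basis(Z, centers, eta):
    """Gaussian radial basis functions R(Z) with common width eta.
    centers: (l, d) array of RBF centers; Z: (d,) network input."""
    return np.exp(-np.sum((centers - Z) ** 2, axis=1) / (2 * eta ** 2))

def update_weights(W_hat, Z, z2i, centers, eta, Gamma, sigma, dt):
    """One Euler step of the sigma-modified law (36):
    W_hat_dot = -Gamma @ (R(Z) * z2i + sigma * W_hat)."""
    R = rbf_basis(Z, centers, eta)
    W_dot = -Gamma @ (R * z2i + sigma * W_hat)
    return W_hat + dt * W_dot

# Toy usage with placeholder dimensions (l = 4 nodes, d = 2 inputs)
rng = np.random.default_rng(0)
centers = rng.choice([-1.0, 1.0], size=(4, 2))  # centers at +/-1 as in Sec. 4
W_hat = np.zeros(4)
Gamma = 0.1 * np.eye(4)                         # Gamma_i = 0.1 (Sec. 4)
for _ in range(100):
    W_hat = update_weights(W_hat, np.array([0.3, -0.2]), 0.5,
                           centers, 2.0, Gamma, 0.02, 0.01)
```

The $\sigma$-modification term $\sigma _{i}\hat{W}_{i}$ keeps the weight estimates bounded even when the tracking error does not vanish.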

Let $P_{i}=\frac{1}{l_{i}}, (i=1,\cdots,n+2)$ , and the adaptive laws of $\hat{P}_{i}$ are designed as

(37) \begin{equation} \begin{aligned} \dot{\hat{P}}_{i}=-z_{2i}\tau _{1i}^{*}-\hat{P}_{i} \end{aligned} \end{equation}

Then, a theorem that guarantees the prescribed tracking performance under actuator faults is stated as follows.

Theorem 3.1. Consider the system described by (1) under the controller ( 34 ) with the adaptive laws ( 36 ) and ( 37 ). If the initial conditions $ (q(0),\dot{q}(0),\hat{W}_{i}(0),\hat{P}_{i}(0) )$ are bounded and $|z_{1i}(0)|\lt k_{bi}$ , $i=1,2,\cdots,n+2$ , then semi-global stability of the closed-loop system is achieved. The error signals $z_{1}, z_{2}, \tilde{W}$ , and $\tilde{P}_{i}$ will remain within $\Omega _{z_{1}},\Omega _{z_{2}},\Omega _{W}$ , and $\Omega _{P}$ , respectively, which are defined as

\begin{align*} &\Omega _{z1}=\left \{z_{1}\in R^{n+2}\,\Big |\,|z_{1i}|\leq k_{bi}\sqrt{1-e^{-2J}}\right \} \\[3pt]&\Omega _{z2}=\left \{z_{2}\in R^{n+2}\,\Big |\,\Vert z_{2}\Vert \leq \sqrt{\frac{2J}{\lambda _{\textrm{min}}(M)}}\right \} \\[3pt]&\Omega _{W}=\left \{\tilde{W}\in R^{l\times (n+2)}\,\Big |\,\Vert \tilde{W}\Vert \leq \sqrt{\frac{2J}{\lambda _{\textrm{min}}(\varGamma ^{-1}_{i})}}\right \} \\[3pt]& \Omega _{P_{i}}=\left \{\tilde{P}_{i}\in R\,\Big |\,|\tilde{P}_{i}| \leq \sqrt{\frac{2J}{\textrm{min}(l_{i})}} \right \} \end{align*}

with $i=1,\ldots,n+2$ and $J=V_{2}(0)+\frac{p_{0}}{l_{0}}$ , where $l_{0}$ and $p_{0}$ are two positive constants.

$Proof:$ The Lyapunov function candidate is considered as

(38) \begin{equation} \begin{aligned} V_{2}&=\frac{1}{2}\sum _{i=1}^{n+2}\textrm{ln}\frac{k_{bi}^{2}}{k_{bi}^{2}-z_{1i}^2}+\frac{1}{2}z_{2}^{T}Mz_{2}+\frac{1}{2}\sum _{i=1}^{n+2}\tilde{W_{i}}^T\varGamma _{i}^{-1}\tilde{W_{i}}+\frac{1}{2}\sum _{i=1}^{n+2}l_{i}\tilde{P}_{i}^2 \end{aligned} \end{equation}

where $\tilde{P_{i}}=\hat{P}_{i}-P_{i}$ , $\tilde{W_{i}}=\hat{W}_{i}-W_{i}^{*}$ , and $\tilde{W_{i}}$ , $\hat{W}_{i}$ , $W_{i}^{*}$ are the NNs weight error, estimate value, and actual value, respectively.

Differentiating $V_{2}$ with respect to time leads to

(39) \begin{align} \dot{V}_{2}&=\sum _{i=1}^{n+2}\frac{z_{1i}}{k_{bi}^{2}-z_{1i}^2}\big (L_{i}(z_{2i}+\alpha _{i}-\dot{x}_{di})Q+\dot{Q}Q^{-1}z_{1i}\big ) \nonumber\\[3pt]&\quad +z_{2}^{T}[D\tau _{1}+\bar{u}-B(q,\dot{q})\alpha -K(q)-M(q)\dot{\alpha }]-\sum _{i=1}^{n+2}\tilde{W_{i}}^T[R_{i}(z)z_{2i}+\sigma _{i}\hat{W}_{i}]+\sum _{i=1}^{n+2}l_{i}\tilde{P}_{i}\dot{\hat{P}}_{i} \end{align}

According to the control law (34), the NN approximation (35), and the adaptive law (37), we get

(40) \begin{align} &\quad z_{2}^{T}\big (D\tau _{1}-B(q,\dot{q})\alpha -K(q)-M(q)\dot{\alpha }\big )+\sum _{i=1}^{n+2}l_{i}\tilde{P_{i}}\dot{\hat{P}}_{i} \nonumber\\[3pt]& =z_{2}^{T}\big (D(\hat{P}-\tilde{P})\tau _{1}^{*}-W^{*T}R(Z)-\epsilon (Z)\big )-\sum _{i=1}^{n+2}l_{i}\tilde{P_{i}}\hat{P}_{i} \end{align}

Note that $l_{i}(\hat{P}_{i}-\tilde{P}_{i})=l_{i}P_{i}=1$ . Substituting (31), (34), and (40) into (39), we get

(41) \begin{align} \dot{V}_{2}&=-\sum _{i=1}^{n+2}w_{i}K_{1i}z_{1i}^{2} +z_{2}^{T}\big (\!-K_{2}z_{2}-\epsilon (Z)+\bar{u}-\textrm{sgn}(z_{2})u_{c}\big )-\sum _{i=1}^{n+2}l_{i}\tilde{P}_{i}\hat{P}_{i}-\sum _{i=1}^{n+2}\sigma _{i}\tilde{W_{i}}^T\hat{W}_{i} \nonumber\\[3pt] & \leq -\sum _{i=1}^{n+2}w_{i}K_{1i}z_{1i}^{2}-z_{2}^{T}K_{2}z_{2}-z_{2}^{T}\epsilon (Z) -\sum _{i=1}^{n+2}\sigma _{i}\tilde{W_{i}}^T\hat{W}_{i}-\sum _{i=1}^{n+2}l_{i}\tilde{P}_{i}\hat{P}_{i} \end{align}

By applying Young’s inequality, the following inequalities $-\sigma _{i}\tilde{W}_{i}^{T}\hat{W}_{i}\leq -\frac{\sigma _{i}}{2}(\Vert \tilde{W}_{i}\Vert ^{2}-\Vert{W_{i}^{*}}\Vert ^{2}),$ $-\tilde{P}_{i}\hat P_{i}\leq -\frac{1}{2}(\tilde P_{i}^2-P_{i}^2), -z_{2}^{T}\epsilon (Z)\leq \frac{1}{2}z_{2}^{T}z_{2}+\frac{1}{2}\Vert \epsilon (Z)\Vert ^{2}$ hold. And considering Lemmas 2.1 and 2.2, (41) can be written as

(42) \begin{align} \dot{V}_{2}& \leq -\sum _{i=1}^{n+2}K_{1i}\textrm{ln}\frac{k_{bi}^{2}}{k_{bi}^{2}-z_{1i}^2}-z_{2}^{T}\big (K_{2}-\frac{1}{2}I_{(n+2)(n+2)}\big )z_{2} -\sum _{i=1}^{n+2}\frac{\sigma _{i}}{2}\Vert \tilde{W}_{i}\Vert ^{2} \nonumber\\[3pt]&\quad -\sum _{i=1}^{n+2}\frac{l_{i}}{2}\tilde P_{i}^2+ \sum _{i=1}^{n+2}\frac{\sigma _{i}}{2}\Vert{W_{i}^{*}}\Vert ^{2}+\sum _{i=1}^{n+2}\frac{l_{i}}{2}P_{i}^2+\frac{1}{2}\Vert \epsilon (Z)\Vert ^{2} \nonumber\\[3pt]& \leq -l_{0}V_{2}+p_{0} \end{align}

where $I\in R^{(n+2)\times (n+2)}$ is an identity matrix, and $l_{0}$ and $p_{0}$ are two constants defined as

(43) \begin{equation} \begin{aligned} l_{0}=\textrm{min}\bigg (\lambda _{\textrm{min}}(2K_{1}),\frac{\lambda _{\textrm{min}}(2K_{2}-I_{(n+2)(n+2)})}{\lambda _{\textrm{max}}(M)}, \textrm{min}\big (\frac{\sigma _{i}}{{\lambda _{\rm max}}(\varGamma _{i}^{-1})}\big ), 1\bigg ) \end{aligned} \end{equation}
(44) \begin{equation} p_{0}=\sum _{i=1}^{n+2}\frac{\sigma _{i}}{2}\Vert{W_{i}^{*}}\Vert ^{2}+\sum _{i=1}^{n+2}\frac{l_{i}}{2}P_{i}^2+\frac{1}{2}\Vert \epsilon (Z)\Vert ^{2} \end{equation}

where $\lambda _{\textrm{min}}(\!*\!)$ and $\lambda _{\textrm{max}}(\!*\!)$ denote the minimum and maximum eigenvalues of the matrix $*$ , respectively. To ensure that $l_{0}\gt 0$ , the gains $K_{1}$ and $K_{2}$ should satisfy

(45) \begin{equation} \lambda _{\textrm{min}}(2K_{1})\gt 0,\quad \lambda _{\textrm{min}}(2K_{2}-I_{(n+2)(n+2)})\gt 0 \end{equation}

Multiplying (42) by $e^{l_{0}t}$ yields

(46) \begin{equation} \frac{d}{dt}(V_{2}e^{l_{0}t})\leq p_{0}e^{l_{0}t} \end{equation}

Integrating both sides of inequality (46) over $[0,t]$ , we have

(47) \begin{equation} V_{2}(t)\leq \bigg (V_{2}(0)-\frac{p_{0}}{l_{0}}\bigg )e^{-l_{0}t}+\frac{p_{0}}{l_{0}}\leq V_{2}(0)+\frac{p_{0}}{l_{0}} \end{equation}

Combining (38) and (47), we get $z_{1i}^{2}\leq k_{bi}^2\big (1-e^{-2(V_{2}(0)+\frac{p_{0}}{l_{0}})}\big )$ , $\Vert z_{2}\Vert ^{2}\leq 2\big ( V_{2}(0)+\frac{p_{0}}{l_{0}}\big )\lambda _{\textrm{min}}^{-1}(M)$ , $\Vert \tilde{W}\Vert ^{2}\leq 2\big ( V_{2}(0)+\frac{p_{0}}{l_{0}}\big )\lambda _{\textrm{min}}^{-1}(\Gamma ^{-1})$ , and $\tilde{P}_{i}^{2}\leq 2\big ( V_{2}(0)+\frac{p_{0}}{l_{0}}\big )(\textrm{min}(l_{i}))^{-1}$ . Therefore, we can conclude that $z_{1},z_{2},\tilde{W}$ , and $\tilde{P}$ all converge into the compact sets given in Theorem 3.1. This completes the proof.
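The ultimate bound implied by (42) can be checked numerically on the scalar comparison system $\dot V=-l_{0}V+p_{0}$ (with placeholder values of $l_{0}$ and $p_{0}$); any trajectory decays into the residual set of radius $p_{0}/l_{0}$:

```python
def simulate_bound(V0, l0, p0, T=10.0, dt=1e-3):
    """Forward-Euler integration of the comparison system
    V_dot = -l0 * V + p0, whose exact solution is
    V(t) = (V0 - p0/l0) * exp(-l0 * t) + p0/l0."""
    V = V0
    for _ in range(int(T / dt)):
        V += dt * (-l0 * V + p0)
    return V

# Placeholder constants: V settles near the residual level p0/l0 = 0.2
V_final = simulate_bound(V0=5.0, l0=2.0, p0=0.4)
```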

According to Theorem 3.1, the error variable $z_{1}$ is uniformly ultimately bounded with respect to the set $\Omega _{z1}$ . Thus, the preset performance described in (5a) can be achieved. Moreover, the error signals $z_{2}$ , $\tilde{W_{i}}$ , and $\tilde{P_{i}}$ are uniformly ultimately bounded. Since $W^{*}_{i}$ and $P_{i}$ are bounded and $\tilde{W_{i}}=\hat{W}_{i}-W_{i}^{*}$ , $\tilde{P_{i}}=\hat{P}_{i}-P_{i}$ , the estimates $\hat{W}_{i}$ and $\hat{P}_{i}$ must also be bounded.

Remark. In the control law ( 34 ), $\tau _{1}$ would be unbounded if $z_{1i}=k_{bi}$ . However, $z_{1i}$ can be restricted within the range ( $-k_{bi}$ , $k_{bi}$ ) by adjusting the parameters $\xi, \gamma$ , and $\varepsilon$ at the initial moment $t_{0}$ . Therefore, the control law ( 34 ) remains bounded and free of singularities.

Remark. Different from existing works [39, 55], the proposed control with PP in this paper can not only ensure that the tracking error reaches the preset range within a finite time but also that it finally enters a smaller residual set. Although the performance function $\rho (t)$ can constrain the error convergence process, it only guarantees convergence in infinite time and the convergence speed is slow. Since the convergence time $T$ is set in advance here, the proposed controller has a faster convergence rate, which will be demonstrated in the simulation section.

4. Simulations

In this section, simulations of a two-link flexible manipulator are carried out to demonstrate the validity of the proposed control. The simulation environment is MATLAB 2020a. The detailed platform parameters are shown in Table I. The reference trajectories of the two joints are square waves chosen as

(48) \begin{equation} x_{d1}(t)=\left \{ \begin{array}{@{}c@{\quad}c} 0.5\ \text{rad}& t\lt 10\ \text{s}, \\ \\[-9pt]-0.5\ \text{rad}& t\geq 10\ \text{s}.\\ \end{array} \right. \quad x_{d2}(t)=\left \{ \begin{array}{@{}c@{\quad}c} 0.25\ \text{rad}& t\lt 10\ \text{s}, \\ \\[-9pt]-0.25\ \text{rad}& t\geq 10\ \text{s}.\\ \end{array} \right. \end{equation}

Our control objective is to force the tracking errors of the two joints to satisfy the given performance requirements in (5a)–(5d). The performance parameters are designed as $\varepsilon =0.01$ , $\mu =0.005$ , and $T=0.5\ \text{s}$ . The boundaries of the two tracking errors are $-\delta _{1}=-0.01$ and $\delta _{2}=0.5$ . The initial states $q(0)$ and $\dot{q}(0)$ are set to zero, which ensures that the initial errors of the system are within the constraint range.

Table I. Parameters of the flexible robotic manipulator.

In order to show the superiority of the proposed controller, three cases are considered in the following simulations. First, we examine the performance of a proportional-derivative (PD) controller. Second, the combined PD and PP method for the manipulator proposed in ref. [55] is tested. Third, the simulation results of the proposed control with PP are presented. Finally, the effectiveness of the adaptive neural network fault-tolerant (NNFT) controller is verified with actuator faults taken into consideration.

4.1. Simulation results without actuator faults

First, we verify the effectiveness of the proposed PP controller. Since the initial tracking errors of the first and second joints are negative, we choose $\gamma =\gamma _{2}$ . For the proposed controller, the specific parameters are set as $K_{1}=\textrm{diag}\{12,12,1,1,1,1\}$ , $K_{2}=\textrm{diag}\{11,11,1,1,1,1\}$ , $m=3$ , $\gamma _{1}=\gamma _{2}=2$ , $\xi =0.5$ , and $k_{bi}=1.2\ (i=1,\cdots,6)$ .

For the RBFNNs with BLS, the initial number of nodes is $2^{6}=64$ , the initial center parameters are chosen as $-1$ or $1$ , and the widths are set as $\eta =2$ . The initial weights are $\hat{W}_{i}(0)=0\ (i=1,\cdots,64)$ , $\Gamma _{i}=0.1\ (i=1,\cdots,6)$ , and $\sigma _{i}=0.02\ (i=1,\cdots,6)$ ; the threshold is $\varXi =1.2$ , and the maximum number of neural nodes is $\bar{l}=100$ . According to the definition and properties of the rate function $\kappa (t)$ , we choose $\kappa (t)=e^{9.21t}$ . The simulation results are shown in Fig. 4.
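A rough sketch of the broad-learning node-increment logic described above; the residual test, the number of nodes added per step, and the random placement of new centers are our assumptions for illustration, not the paper's exact incremental algorithm:

```python
import numpy as np

def maybe_expand(centers, W_hat, residual_norm, Xi=1.2, l_max=100,
                 n_new=4, rng=None):
    """Broad-learning style expansion: if the approximation residual
    exceeds the threshold Xi and the node budget allows, append n_new
    RBF nodes (random centers in [-1, 1]) with zero initial weights."""
    rng = rng or np.random.default_rng(0)
    l, d = centers.shape
    if residual_norm > Xi and l < l_max:
        n_add = min(n_new, l_max - l)
        new_centers = rng.uniform(-1.0, 1.0, size=(n_add, d))
        centers = np.vstack([centers, new_centers])
        W_hat = np.concatenate([W_hat, np.zeros(n_add)])
    return centers, W_hat

# Start from the 2**6 = 64 initial nodes of Sec. 4 (d = 4 is a placeholder)
centers = np.zeros((64, 4))
W_hat = np.zeros(64)
centers, W_hat = maybe_expand(centers, W_hat, residual_norm=2.0)
```

Existing nodes and weights are kept unchanged when the network widens, which is the "broad" (incremental, non-retraining) aspect of BLS.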

Fig. 4. Tracking performance of the two joints.

From the tracking performance of the first and second joints shown in Fig. 4 (black dotted line), we find that the tracking error converges into the prescribed small region within $0.5\ \text{s}$ and then shrinks to a smaller range at $t=1\ \text{s}$ , but the overshoot is too large to satisfy the prescribed requirement in (5d). We therefore select a larger value of $\gamma$ , setting $\gamma =5$ in the next step; the corresponding simulation results are shown as the red lines. It can be clearly seen that the tracking errors of the two joints meet the PP described in (5a)–(5d) with small overshoot. Figure 5 shows that the uncertain function in (35) is well approximated by the improved RBFNNs with only $100$ neural nodes, which confirms that the proposed RBFNN algorithm with BLS is effective. The first subfigure in Fig. 6 shows that the control inputs of both joints are bounded and smooth. The second subfigure in Fig. 6 presents the trajectories of the two links in three-dimensional space, which shows that the vibration of the flexible rod is effectively suppressed.

Fig. 5. Approximation effect of neural networks to unknown function.

Fig. 6. Control inputs and tip positions of two joints.

Second, to further demonstrate the superiority of the proposed NN PP controller, the traditional PD control and the method proposed in ref. [55], which combines PD and PP, are used as comparisons.

Design the PD control as

(49) \begin{equation} \tau =-K_{p}e_{1}-K_{d}\dot{e}_{1} \end{equation}

where $e_{1}=x_{1}-x_{d}$ is the tracking error, and $K_{p}$ and $K_{d}$ are the proportional and derivative gain matrices. Considering the transient and steady-state performance of the two joints, the parameters are tuned to $K_{p}=10I_{6\times 6}$ and $K_{d}=10I_{6\times 6}$ .
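As a toy illustration of the PD law (49), consider a unit-inertia joint $\ddot q=\tau$ (a stand-in assumption, not the flexible-manipulator dynamics) regulated to the first step reference in (48):

```python
def simulate_pd(x_d, Kp=10.0, Kd=10.0, T=5.0, dt=1e-3):
    """PD regulation tau = -Kp*e - Kd*e_dot on a unit-inertia joint
    q_ddot = tau, integrated with forward Euler."""
    q, qd = 0.0, 0.0
    for _ in range(int(T / dt)):
        e, ed = q - x_d, qd
        tau = -Kp * e - Kd * ed
        qd += dt * tau
        q += dt * qd
    return q

q_final = simulate_pd(x_d=0.5)  # step reference of joint 1 in (48)
```

With $K_p=K_d=10$ this toy loop is overdamped and settles without overshoot, but its slowest pole sets a convergence speed that cannot be prescribed in advance, which is the limitation the PP design addresses.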

For the model-free control with PP proposed in ref. [55], the exponential performance function is set as

(50) \begin{equation} \rho (t)=(\rho _{0}-\rho _{\infty })\textrm{exp}(\!-lt)+\rho _{\infty } \end{equation}

with $\rho _{0}=1.1$ , $\rho _{\infty }=0.01$ , and $l=2$ . Taking into account the characteristics of the desired trajectory, the performance function is reset to $\rho (t-10)$ at $t=10\,\text{s}$ .
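The performance function (50), including the restart $\rho (t-10)$ applied when the square-wave reference switches at $t=10\,\text{s}$, can be sketched as:

```python
import math

def rho(t, rho0=1.1, rho_inf=0.01, l=2.0, t_switch=10.0):
    """Exponential performance funnel (50); the funnel is restarted
    at the reference step time t_switch, i.e. rho(t - t_switch)."""
    s = t - t_switch if t >= t_switch else t
    return (rho0 - rho_inf) * math.exp(-l * s) + rho_inf
```

The funnel starts at $\rho_0=1.1$, decays toward $\rho_\infty =0.01$, and jumps back to $\rho_0$ at $t=10\,\text{s}$ so the error bound remains feasible after the reference step.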

Then, the model-free PP controller $\tau$ is designed as

(51) \begin{equation} \tau =-K_{p}e-K_{d}\dot{e}-K_{\varepsilon }J_{T}(x,t)\varepsilon (x) \end{equation}

where $K_{p},K_{d},K_{\varepsilon }$ are positive diagonal gain matrices, $x=\frac{e(t)}{\rho (t)}$ , and $\varepsilon (x)$ is the transformation function. For a fair comparison, we again choose $K_{p}=10I_{6\times 6}$ , $K_{d}=10I_{6\times 6}$ , and $K_{\varepsilon }=2I_{6\times 6}$ ; see ref. [55] for the remaining parameters. The simulation results of the three cases are shown in Figs. 7 and 8.

Fig. 7. Tracking performance comparison of two joints.

Fig. 8. Tracking errors comparison of two joints.

From Fig. 7, we notice that although the PD control keeps the system stable, its convergence is much slower. Under the same proportional and derivative gains, the controller in ref. [55] significantly improves the rapidity of the system and keeps $e(t)$ between $-\rho (t)$ and $\rho (t)$ , but still cannot meet the preset convergence time $T$ . As shown in Fig. 7, the proposed NN PP controller achieves both higher steady-state accuracy and faster convergence, meeting the preset requirements and effectively improving the transient and steady-state performance of the flexible robotic manipulator. Therefore, the superiority of the proposed adaptive NN PP controller is verified.

4.2. Simulation results with actuator faults

This subsection verifies the performance of the proposed fault-tolerant (FT) controller in the case of actuator faults. The actuator faults in (2) include multiplicative and additive faults, which are described as follows:

(52) \begin{equation} l_{1}(t)=\left \{ \begin{array}{l@{\quad}l} 1& t\lt 5\ \text{s}, \\ \\[-9pt]0.6& t\geq 5\ \text{s}.\\ \end{array} \right. \quad l_{2}(t)=\left \{ \begin{array}{l@{\quad}l} 1& t\lt 5\ \text{s}, \\ \\[-9pt]0.3& t\geq 5\ \text{s}.\\ \end{array} \right. \quad \bar u_{i}(t)=\left \{ \begin{array}{l@{\quad}l} 0& t\lt 5\ \text{s}, \\ \\[-9pt]0.3\sin (0.8t)+0.2& t\geq 5\ \text{s}.\\ \end{array} \right. \end{equation}

With the above settings, the actuators of the first and second joints lose $40\%$ and $70\%$ of their effectiveness, respectively, and both experience an additive bias fault after $5\ \text{s}$ . For the time-varying additive fault, the upper bound is $u_{ci}=0.5, i=1,2$ . For the NNFT PP controller (34), the parameters are the same as those in Section 4.1. Simulation results are shown in Figs. 9 and 10.
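The fault injection (52) can be sketched as follows, assuming the standard multiplicative-plus-additive form $u=l(t)\tau +\bar u(t)$ for the fault model in (2):

```python
import math

def faulty_torque(tau1, tau2, t, t_f=5.0):
    """Actuator outputs under the faults (52): after t_f = 5 s the two
    actuators retain 60% and 30% effectiveness and both gain the
    additive bias 0.3*sin(0.8*t) + 0.2 (assumed fault model:
    u = l(t) * tau + u_bar(t))."""
    if t < t_f:
        return tau1, tau2
    u_bar = 0.3 * math.sin(0.8 * t) + 0.2
    return 0.6 * tau1 + u_bar, 0.3 * tau2 + u_bar

u1, u2 = faulty_torque(1.0, 1.0, t=4.0)  # pre-fault: torques pass through
```

Since $|0.3\sin (0.8t)+0.2|\leq 0.5$, the additive fault is consistent with the stated upper bound $u_{ci}=0.5$.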

Fig. 9. Tracking performance of two joints with actuator faults.

Fig. 10. Control inputs of two joints with actuator faults.

From Fig. 9, the proposed adaptive FT control can still track the desired trajectories rapidly when faults occur in actuators $\tau _{1}$ and $\tau _{2}$ . In Fig. 10, where the red and blue lines represent the control input $u$ and the control signal $\tau$ , respectively, the proposed controller reacts rapidly when the actuator faults occur at $5\ \text{s}$ to offset their adverse effects, so that the tracking error converges within the prescribed range. Thus, we can conclude that the proposed NNFT control (34) possesses strong robustness against actuator faults.

5. Conclusion

An adaptive NNFT controller with PP has been presented for a two-link flexible manipulator in this paper. The overshoot and tracking errors of the two joints satisfy the preset performance specifications by using a behavior-shaping function and an asymmetric scaling function. Based on RBFNNs and broad learning theory, the uncertain dynamics of the flexible robotic manipulator are well approximated. Simulation results show that the proposed control method can not only ensure that the tracking errors meet the preset performance but also effectively suppress the vibration of the flexible rod. Comparative simulations with other methods demonstrate the superiority of the proposed control in terms of rapidity and stability. Besides, the proposed adaptive controller can handle the problems caused by actuator faults. In the future, we will continue the research on flexible manipulators by combining reinforcement learning control and optimization control.

Author contributions

Wei He, Linghuan Kong, and Wenkai Niu proposed the research project. Wenkai Niu designed the control algorithm, carried out the simulation tests, and wrote the first draft. Yifan Wu and Haifeng Huang revised the manuscript. Wei He and Linghuan Kong provided advice and supervision.

Funding statement

This work was supported in part by the National Natural Science Foundation of China under Grants 62225304, 62061160371, and U20A20225, by the National Key Research and Development Program of China under Grant 2019YFB1703600, in part by the Fundamental Research Funds for the China Central Universities of USTB under Grant FRF-MP-20-36, in part by the Beijing Natural Science Foundation under Grant JQ20026, and in part by the Beijing Top Discipline for Artificial Intelligence Science and Engineering, University of Science and Technology Beijing.

Conflicts of interest

The authors declare that there is no conflict of interest.

References

[1] Li, M., Du, Z., Ma, X., Dong, W., Wang, Y., Gao, Y. and Chen, W., “A robot chamfering system for special-shaped and thin-walled workpieces,” Assembly Autom. 41(1), 116–130 (2021).
[2] Lin, R., Huang, H. and Li, M., “An automated guided logistics robot for pallet transportation,” Assembly Autom. 41(1), 45–54 (2021).
[3] Sun, T., Chen, L., Hou, Z. and Tan, M., “Novel sliding-mode disturbance observer-based tracking control with applications to robot manipulators,” Sci. China Inf. Sci. 64(7), 172205 (2021).
[4] Huang, H., He, W., Wang, J., Zhang, L. and Fu, Q., “An all servo-driven bird-like flapping-wing aerial robot capable of autonomous flight,” IEEE/ASME Trans. Mechatron. 27(6), 1–11 (2022).
[5] He, W., Tang, X., Wang, T. and Liu, Z., “Trajectory tracking control for a three-dimensional flexible wing,” IEEE Trans. Control Syst. Technol. 30(5), 2243–2250 (2022).
[6] He, W., Mu, X., Zhang, L. and Zou, Y., “Modeling and trajectory tracking control for flapping-wing micro aerial vehicles,” IEEE/CAA J. Autom. Sin. 8(1), 148–156 (2021).
[7] Chang, W., Li, Y. and Tong, S., “Adaptive fuzzy backstepping tracking control for flexible robotic manipulator,” IEEE/CAA J. Autom. Sin. 8(12), 1923–1930 (2021).
[8] Pradhan, S. K. and Subudhi, B., “Position control of a flexible manipulator using a new nonlinear self-tuning PID controller,” IEEE/CAA J. Autom. Sin. 7(1), 136–149 (2020).
[9] Fadilah, A., Abdul, R. H., Zaharuddin, M. and Ariffanan, M., “Adaptive PID actuator fault tolerant control of single-link flexible manipulator,” Trans. Inst. Meas. Control 41(4), 1019–1031 (2018).
[10] Sun, W., Su, S., Xia, J. and Nguyen, V., “Adaptive fuzzy tracking control of flexible-joint robots with full-state constraints,” IEEE Trans. Syst. Man Cybern.: Syst. 49(11), 2201–2209 (2019).
[11] Sun, C., Gao, H., He, W. and Yu, Y., “Fuzzy neural network control of a flexible robotic manipulator using assumed mode method,” IEEE Trans. Neural Netw. Learn. Syst. 29(11), 5214–5227 (2018).
[12] Kong, L., He, W., Yang, C., Li, Z. and Sun, C., “Adaptive fuzzy control for coordinated multiple robots with constraint using impedance learning,” IEEE Trans. Cybern. 49(8), 3052–3063 (2019).
[13] Li, Z., Li, X., Li, Q., Su, H., Kan, Z. and He, W., “Human-in-the-loop control of soft exosuits using impedance learning on different terrains,” IEEE Trans. Robot. 38(5), 1–10 (2022). doi: 10.1109/TRO.2022.3160052.
[14] Wang, H. and Kang, S., “Adaptive neural command filtered tracking control for flexible robotic manipulator with input dead-zone,” IEEE Access 7(99), 22675–22683 (2019).
[15] Kong, L., He, W., Liu, Z., Yu, X. and Silvestre, C., “Adaptive tracking control with global performance for output-constrained MIMO nonlinear systems,” IEEE Trans. Autom. Control, 1–8 (2022). doi: 10.1109/TAC.2022.3201258.
[16] Huang, A.-C. and Chen, Y.-C., “Adaptive sliding control for single-link flexible-joint robot with mismatched uncertainties,” IEEE Trans. Control Syst. Technol. 12(5), 770–775 (2004).
[17] Salgado, I. and Chairez, I., “Adaptive unknown input estimation by sliding modes and differential neural network observer,” IEEE Trans. Neural Netw. Learn. Syst. 29(8), 3499–3509 (2017).
[18] Liang, D., Song, Y., Sun, T. and Jin, X., “Dynamic modeling and hierarchical compound control of a novel 2-DOF flexible parallel manipulator with multiple actuation modes,” Mech. Syst. Signal Process 103, 413–439 (2018).
[19] Feng, Y., Shi, W., Cheng, G., Huang, J. and Liu, Z., “Benchmarking framework for command and control mission planning under uncertain environment,” Soft Comput. 24(4), 2463–2478 (2020).
[20] Feng, Y., Yang, X. and Cheng, G., “Stability in mean for multi-dimensional uncertain differential equation,” Soft Comput. 22(17), 5783–5789 (2018).
[21] Kong, L., He, W., Chen, W., Zhang, H. and Wang, Y., “Dynamic movement primitives based robot skills learning,” Mach. Intell. Res. (2022), in press. doi: 10.1007/s11633-022-1346-z.
[22] Dang, Q., Xu, W. and Yuan, Y., “A dynamic resource allocation strategy with reinforcement learning for multimodal multi-objective optimization,” Mach. Intell. Res. 19(2), 138–152 (2022).
[23] Yang, Y., Modares, H., Vamvoudakis, K. G., He, W., Xu, C.-Z. and Wunsch, D. C., “Hamiltonian-driven adaptive dynamic programming with approximation errors,” IEEE Trans. Cybern. 52(12), 1–12 (2021). doi: 10.1109/TCYB.2021.3108034.
[24] Yang, Y., Kiumarsi, B., Modares, H. and Xu, C., “Model-free $\lambda$ -policy iteration for discrete-time linear quadratic regulation,” IEEE Trans. Neural Netw. Learn. Syst. 34(2), 635–649 (2023). doi: 10.1109/TNNLS.2021.3098985.
[25] Chen, C. L. P. and Liu, Z., “Broad learning system: An effective and efficient incremental learning system without the need for deep architecture,” IEEE Trans. Neural Netw. Learn. Syst. 29(1), 10–24 (2018).
[26] Chen, C. L. P., Liu, Z. and Feng, S., “Universal approximation capability of broad learning system and its structural variations,” IEEE Trans. Neural Netw. Learn. Syst. 30(4), 1191–1204 (2019).
[27] Huang, H., Zhang, T., Yang, C. and Chen, C. L. P., “Motor learning and generalization using broad learning adaptive neural control,” IEEE Trans. Ind. Electron. 67(10), 8608–8617 (2020).
[28] Peng, G., Chen, C. L. P., He, W. and Yang, C., “Neural-learning-based force sensorless admittance control for robots with input deadzone,” IEEE Trans. Ind. Electron. 68(6), 5184–5196 (2021).
[29] Ghaf-Ghanbari, P., Mazare, M. and Taghizadeh, M., “Active fault-tolerant control of a Schönflies parallel manipulator based on time delay estimation,” Robotica 39(8), 1518–1535 (2021).
[30] Liu, Z., Han, Z., Zhao, Z. and He, W., “Modeling and adaptive control for a spatial flexible spacecraft with unknown actuator failures,” Sci. China Inf. Sci. 64(5), 152208 (2021).
[31] Smaeilzadeh, S. M. and Golestani, M., “Finite-time fault-tolerant adaptive robust control for a class of uncertain non-linear systems with saturation constraints using integral backstepping approach,” IET Control Theory Appl. 12(15), 2109–2117 (2018).
[32] Van, M., Ge, S. S. and Ren, H., “Finite time fault tolerant control for robot manipulators using time delay estimation and continuous nonsingular fast terminal sliding mode control,” IEEE Trans. Cybern. 47(7), 1681–1693 (2017).
[33] Shen, Q., Yue, C., Goh, C. H. and Wang, D., “Active fault-tolerant control system design for spacecraft attitude maneuvers with actuator saturation and faults,” IEEE Trans. Ind. Electron. 66(5), 3763–3772 (2018).
[34] Kong, L., He, W., Yang, W., Li, Q. and Kaynak, O., “Fuzzy approximation-based finite-time control for a robot with actuator saturation under time-varying constraints of work space,” IEEE Trans. Cybern. 51(10), 4873–4884 (2021).
[35] Kong, L., He, W., Dong, Y., Cheng, L., Yang, C. and Li, Z., “Asymmetric bounded neural control for an uncertain robot by state feedback and output feedback,” IEEE Trans. Syst. Man Cybern.: Syst. 51(3), 1735–1746 (2021).
[36] Li, H., Zhao, S., He, W. and Lu, R., “Adaptive finite-time tracking control of full state constrained nonlinear systems with dead-zone,” Automatica 100, 99–107 (2019).
[37] Ilchmann, A., Ryan, E. P. and Trenn, S., “Tracking control: Performance funnels and prescribed transient behaviour,” Syst. Control. Lett. 54(7), 655–670 (2005).
[38] Wang, S., Ren, X., Jing, N. and Zeng, T., “Extended-state-observer-based funnel control for nonlinear servomechanisms with prescribed tracking performance,” IEEE Trans. Autom. Sci. Eng. 14(1), 98–108 (2017).
[39] Bechlioulis, C. P. and Rovithakis, G. A., “Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance,” IEEE Trans. Autom. Control 53(9), 2090–2099 (2008).
[40] Wang, P., Zhang, X. and Zhu, J., “Online performance-based adaptive fuzzy dynamic surface control for nonlinear uncertain systems under input saturation,” IEEE Trans. Fuzzy Syst. 27(2), 209–220 (2018).
[41] Sun, W., Wu, Y.-Q. and Sun, Z.-Y., “Command filter-based finite-time adaptive fuzzy control for uncertain nonlinear systems with prescribed performance,” IEEE Trans. Fuzzy Syst. 28(12), 3161–3170 (2020).
[42] Guo, D., Li, A., Cai, J., Feng, Q. and Shi, Y., “Inverse kinematics of redundant manipulators with guaranteed performance,” Robotica 40(1), 170–190 (2022).
[43] Gul, S., Zergeroglu, E., Tatlicioglu, E. and Kilinc, M. V., “Desired model compensation-based position constrained control of robotic manipulators,” Robotica 40(2), 279–293 (2022).
[44] Guo, Q., Zhang, Y., Celler, B. G. and Su, S. W., “Neural adaptive backstepping control of a robotic manipulator with prescribed performance constraint,” IEEE Trans. Neural Netw. Learn. Syst. 30(12), 3572–3583 (2018).
[45] Chen, Z., Wang, M. and Zou, Y., “Dynamic learning from adaptive neural control for flexible joint robot with tracking error constraints using high-gain observer,” Syst. Sci. Control Eng. 6(3), 177–190 (2018).
[46] Zhou, B. and Zhang, K.-K., “A linear time-varying inequality approach for prescribed time stability and stabilization,” IEEE Trans. Cybern., 1–10 (2022). doi: 10.1109/TCYB.2022.3164658.
[47] Espitia, N. and Perruquetti, W., “Predictor-feedback prescribed-time stabilization of LTI systems with input delay,” IEEE Trans. Autom. Control 67(6), 2784–2799 (2022). doi: 10.1109/TAC.2021.3093527.
[48] Zhang, S., Liu, R., Peng, K. and He, W., “Boundary output feedback control for a flexible two-link manipulator system with high-gain observers,” IEEE Trans. Control Syst. Technol. 29(2), 835–840 (2019).
[49] Huang, X., Song, Y. and Lai, J., “Neuro-adaptive control with given performance specifications for strict feedback systems under full-state constraints,” IEEE Trans. Neural Netw. Learn. Syst. 30(1), 25–34 (2018).
[50] Ahanda, J. J.-B. M., Mbede, J. B., Melingui, A. and Zobo, B. E., “Robust adaptive command filtered control of a robotic manipulator with uncertain dynamic and joint space constraints,” Robotica 36(5), 767–786 (2018).
[51] Gao, H., He, W., Zhou, C. and Sun, C., “Neural network control of a two-link flexible robotic manipulator using assumed mode method,” IEEE Trans. Ind. Inform. 15(2), 755–765 (2018).
[52] Chen, G., Song, Y. and Lewis, F. L., “Distributed fault-tolerant control of networked uncertain Euler-Lagrange systems under actuator faults,” IEEE Trans. Cybern. 47(7), 1706–1718 (2017).
[53] Zhang, S., Dong, Y., Ouyang, Y., Yin, Z. and Peng, K., “Adaptive neural control for robotic manipulators with output constraints and uncertainties,” IEEE Trans. Neural Netw. Learn. Syst. 29(11), 5554–5564 (2018).
[54] Ren, B., Ge, S. S., Tee, K. P. and Lee, T. H., “Adaptive neural control for output feedback nonlinear systems using a barrier Lyapunov function,” IEEE Trans. Neural Netw. 21(8), 1339–1345 (2010).
[55] Karayiannidis, Y. and Doulgeri, Z., “Model-free robot joint position regulation and tracking with prescribed performance guarantees,” Robot. Auton. Syst. 60(2), 214–226 (2012).
Fig. 1. Diagram of the two-link flexible manipulator.

Fig. 2. Diagram of tracking error with preset performance.

Fig. 3. Neural networks structure with BLS.