
Fixed-time control of teleoperation systems based on adaptive event-triggered communication and force estimators

Published online by Cambridge University Press:  16 September 2024

Xia Liu*
Affiliation:
School of Electrical Engineering and Electronic Information, Xihua University, Chengdu, China
Hui Wen
Affiliation:
School of Electrical Engineering and Electronic Information, Xihua University, Chengdu, China
*
Corresponding author: Xia Liu; Email: xliucd@163.com

Abstract

A fixed-time control strategy based on adaptive event-triggered communication and force estimators is proposed for a class of teleoperation systems with time-varying delays and limited bandwidth. Two force estimators are designed to estimate the operator force and environment force instead of force sensors. With the position, velocity, force estimate signals, and triggering error, an adaptive event-triggered scheme is designed, which automatically adjusts the triggering thresholds to reduce the access frequency of the communication network. With the state information transmitted at the moment of event triggering while considering the time-varying delays, a fixed-time sliding mode controller is designed to achieve the position and force tracking. The stability of the system and the convergence of tracking error within a fixed time are mathematically proved. Experimental results indicate that the control strategy can significantly reduce the information transmission, enhance the bandwidth utilization, and ensure the convergence of tracking error within a fixed time for teleoperation systems.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Teleoperation systems extend human capabilities to remote workspaces, such as space exploration, undersea resource exploration, medical rescue, and environment monitoring [Reference Liu, Dao and Zhao1]. In a teleoperation system, the operator manipulates the master robot and sends control commands via the communication network to the slave robot, which tracks the master's commands in the remote environment. Meanwhile, the slave provides environment force feedback to the master, enhancing the transparency of the teleoperation system [Reference Yang, Feng, Li and Hua2, Reference Kebria, Khosravi, Nahavandi, Shi and Alizadehsani3].

Due to the data exchange between the master and slave in the communication network, time-varying delays (TVDs) are inevitable. For teleoperation systems with TVDs, sliding mode control (SMC) is widely used due to its strong robustness [Reference Tran and Kang4]. In ref. [Reference Wang, Chen, Liang and Zhang5], a finite-time SMC was proposed for bilateral teleoperation systems with TVDs to ensure the stability and transient response performance of the system. In ref. [Reference Nguyen and Liu6], a terminal SMC was proposed for teleoperation systems with TVDs to stabilize the system and enable the position error to converge in finite time. In ref. [Reference Wang, Chen, Zhang, Yu, Wang and Liang7], for teleoperation systems with TVDs and dynamic uncertainty, a finite-time SMC was proposed to ensure system stability and finite-time convergence. However, in refs. [Reference Wang, Chen, Liang and Zhang5–Reference Wang, Chen, Zhang, Yu, Wang and Liang7] the convergence time depends on the initial values of the system states. To solve this problem, in ref. [Reference Xu, Ge, Ding, Liang and Liu8] an adaptive fixed-time SMC was designed for teleoperation systems with TVDs and parameter uncertainty to achieve stabilization and trajectory tracking of the system. In ref. [Reference Yang, Hua and Guan9], an integral SMC was proposed for teleoperation systems with TVDs and external disturbance, ensuring the system stability and synchronization error convergence within a fixed time. In ref. [Reference Guo, Liu, Li, Ma and Huang10], for teleoperation systems with TVDs and uncertainty, a fixed-time SMC was designed to enhance the tracking performance while ensuring the system stability. Although in refs. [Reference Xu, Ge, Ding, Liang and Liu8–Reference Guo, Liu, Li, Ma and Huang10] fixed-time convergence of the position tracking can be achieved, the force tracking is not considered and thus the transparency of the teleoperation systems cannot be guaranteed.

Good transparency improves the operator's capacity to execute complex tasks, and it requires accurate perception of the interaction force between the slave and the remote environment. Typically, these forces are measured by force sensors, which may be limited by cost and measurement noise [Reference Azimifar, Abrishamkar, Farzaneh, Sarhan and Amini11–Reference Yang, Peng, Cheng, Na and Li13]. To circumvent force sensors, in ref. [Reference Azimifar, Hassani, Saveh and Ghomshe14], a PD controller based on a force estimator (FE) was designed for teleoperation systems with constant time delays to achieve stable position and force tracking. In ref. [Reference Namnabat, Zaeri and Vahedi15], an enhanced FE and a passive control strategy were designed to predict the operator force and environment force, ensuring precise position and force tracking of teleoperation systems with constant time delays. In ref. [Reference Dehghan, Koofigar, Sadeghian and Ekramian16], an observer-based control strategy was developed for teleoperation systems with constant time delays to ensure position and force tracking. In ref. [Reference Yang, Guo, Li and Luo17], a sliding mode force observer was designed to estimate the operator force and environment force within a fixed time, and a P-like controller was designed to achieve stable position and force tracking of teleoperation systems with TVDs. In ref. [Reference Yuan, Wang and Guo18], a dynamic gain force observer was developed for teleoperation systems with TVDs, which employed adaptive laws and wave variables to obtain satisfactory control performance. Although in refs. [Reference Azimifar, Hassani, Saveh and Ghomshe14–Reference Yuan, Wang and Guo18] system transparency is enhanced through force estimators instead of force sensors, fixed-time convergence of the tracking error cannot be ensured. Moreover, it is implicitly assumed in refs. [Reference Azimifar, Hassani, Saveh and Ghomshe14–Reference Yuan, Wang and Guo18] that continuous data transmission is maintained between the master and slave, which is prone to network congestion and degrades the control performance.

In fact, continuous data transmission is often unavailable in the communication network of teleoperation systems, as the network bandwidth is always limited. Network congestion therefore inevitably arises, which degrades the control performance or even destabilizes the system. Event-triggered control is an effective method to reduce the system's reliance on communication network resources, ensuring system performance while enhancing resource utilization [Reference Zhao, Shi, Xing and Agarwal19]. Under event-triggered communication, the transmission of each state depends on its corresponding triggering condition: if the triggering condition is satisfied, the current state information is transmitted; otherwise, the state information at the last triggering moment is retained. In ref. [Reference Hu, Chan and Liu20], for teleoperation systems under constant time delays, an event-triggered scheme was constructed by scattering transformation and an adaptive controller was designed to ensure the system stability and position tracking. In ref. [Reference Liu and Hu21], for teleoperation systems with constant time delays, an event-triggered scheme was proposed based on joint velocities, and a P-like controller was designed to ensure the system stability and position tracking. In ref. [Reference Li, Li, Dong and Wang22], an event-triggered P-like control was investigated for teleoperation systems with TVDs to achieve system stability and position synchronization. In ref. [Reference Hu and Liu23], an event-triggered coordination control for teleoperation systems with constant time delays was introduced to ensure the system stability and position tracking, where the event-triggered scheme was constructed based on auxiliary variables associated with position and velocity. In ref. [Reference Gao and Ma24], an event-triggered scheme based on the norm of the sliding mode was designed to enhance the sensitivity of the controller and save the communication network resources in teleoperation systems. In ref. [Reference Zhao, Liu and Wang25], an event-triggered backstepping control for teleoperation systems with constant time delays was proposed, which could achieve system stability within a fixed time and avoid unnecessary resource consumption. In ref. [Reference Wang, Lam, Xiao, Chen, Liang and Zhang26], an event-triggered prescribed-time control based on exponential Lyapunov functions was presented for teleoperation systems with multiple constraints and TVDs. However, the triggering thresholds in refs. [Reference Hu, Chan and Liu20–Reference Wang, Lam, Xiao, Chen, Liang and Zhang26] are constant and cannot be adjusted according to the system states, which may waste communication resources and degrade control performance.

Therefore, this paper proposes a fixed-time control strategy for teleoperation systems based on adaptive event-triggered communication and FEs. This strategy flexibly and effectively reduces redundant data transmission and achieves fixed-time convergence of tracking error in teleoperation systems with TVDs. The main contributions of this paper are:

  • Two FEs are designed to indirectly acquire the operator force and environment force without force sensors.

  • An adaptive event-triggered scheme (AETS) is designed which can automatically adjust the triggering thresholds based on the system states. Compared to an event-triggered scheme with fixed triggering thresholds, the designed AETS can further reduce unnecessary data transmission and conserve network resources.

  • A fixed-time SMC is developed by utilizing the FEs and event-triggered states. Compared to the conventional SMC, the fixed-time SMC can ensure the convergence of tracking error within a fixed time under TVDs. Meanwhile, it can guarantee the system stability and enhance the position and force tracking performance.

2. Dynamical model of teleoperation systems

The dynamic model of a teleoperation system with $n$ -DOF master and slave can be described as [Reference Wang, Chen, Zhang, Yu, Wang and Liang7]

(1) \begin{align} & M_{m}(q_{m})\ddot{q}_{m}+C_{m}(q_{m},\dot{q}_{m})\dot{q}_{m}+g_{m}(q_{m}) = \tau _{m}+F_{h} \\ & M_{s}(q_{s})\ddot{q}_{s}+C_{s}(q_{s},\dot{q}_{s})\dot{q}_{s}+g_{s}(q_{s})=\tau _{s}+F_{e}\nonumber \end{align}

where the subscript $i=\{m,s\}$ represents the master and slave, respectively. $q_{i}\in R^{n\times 1}$ represents the joint position, $\dot{q}_{i}\in R^{n\times 1}$ represents the velocity, $\ddot{q}_{i}\in R^{n\times 1}$ represents the acceleration. $M_{i}(q_{i})\in R^{n\times n}$ represents the inertia matrix, $C_{i}(q_{i},\dot{q}_{i})\in R^{n\times n}$ represents the Coriolis/centrifugal matrix, $g_{i}(q_{i})\in R^{n\times 1}$ represents the gravitational force, $\tau _{i}\in R^{n\times 1}$ is the control input. $F_{h}\in R^{n\times 1}$ is the operator force and $F_{e}\in R^{n\times 1}$ is the environment force.

The dynamic model (1) has the following properties [Reference de Lima, Mozelli, Neto and Souza27–Reference Chan, Huang and Wang30]:

Property 1: The inertia matrix $M_{i}(q_{i})$ is symmetric positive definite, and there exist positive constants $\underline{\lambda}_{i}$ and $\overline{\lambda}_{i}$ such that $0\lt \underline{\lambda}_{i}I\leq M_{i}(q_{i})\leq \overline{\lambda}_{i}I$ , where $I\in R^{n\times n}$ is the identity matrix.

Property 2: The matrix $\dot{M}_{i}(q_{i})-2C_{i}(q_{i},\dot{q}_{i})$ is skew-symmetric, that is, $x^{T}(\dot{M}_{i}(q_{i})-2C_{i}(q_{i},\dot{q}_{i}))x=0$ holds for any vector $x\in R^{n\times 1}$ .

Property 3: For vectors $p_{1}, p_{2}\in R^{n\times 1}$ , there always exists a positive constant ${\hslash }_{i}$ such that the Coriolis/centrifugal matrix is bounded, that is, $\| C_{i}(q_{i},p_{1})p_{2}\| \leq {\hslash }_{i}\| p_{1}\| \| p_{2}\|$ .

Property 4: When $\dot{q}_{i}$ and $\ddot{q}_{i}$ are bounded, $\dot{C}_{i}(q_{i},\dot{q}_{i})$ is also bounded.
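As an illustration of model (1) and Property 2, the following minimal Python sketch evaluates $M$, $C$, and $g$ for a hypothetical 2-DOF planar arm and numerically checks the skew-symmetry of $\dot{M}-2C$; the link masses, lengths, and the specific matrix expressions are assumptions for illustration only, not the parameters of the experimental devices in Section 4.

```python
import numpy as np

# Hypothetical 2-DOF planar arm (point masses at link ends); values are illustrative only.
m1, m2, l1, l2, g0 = 1.0, 0.8, 0.3, 0.25, 9.81

def M(q):
    c2 = np.cos(q[1])
    a = m1*l1**2 + m2*(l1**2 + l2**2 + 2*l1*l2*c2)
    b = m2*(l2**2 + l1*l2*c2)
    return np.array([[a, b], [b, m2*l2**2]])

def C(q, dq):
    h = -m2*l1*l2*np.sin(q[1])
    return np.array([[h*dq[1], h*(dq[0] + dq[1])],
                     [-h*dq[0], 0.0]])

def g(q):
    return np.array([(m1 + m2)*g0*l1*np.cos(q[0]) + m2*g0*l2*np.cos(q[0] + q[1]),
                     m2*g0*l2*np.cos(q[0] + q[1])])

# Property 2: x^T (dM/dt - 2C) x = 0 for any x.
q, dq = np.array([0.3, -0.5]), np.array([0.7, 0.2])
eps = 1e-6
dM = (M(q + eps*dq) - M(q - eps*dq)) / (2*eps)   # dM/dt along the trajectory direction dq
x = np.array([1.3, -0.4])
print(x @ (dM - 2*C(q, dq)) @ x)                 # prints a value numerically close to zero
```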

3. Design of the control strategy

The proposed fixed-time control strategy based on adaptive event-triggered communication and FEs is shown in Figure 1. First, the FEs are used to obtain the estimates $w_{h}$ and $w_{e}$ of the operator force and environment force. Then, the transmission of the position, velocity, and the estimate of the environment force is regulated by the AETS. Finally, the fixed-time SMC ensures that the tracking error of the teleoperation system under TVDs $T_{1}(t)$ and $T_{2}(t)$ converges within a fixed time.

Figure 1. Block diagram of fixed-time control strategy based on adaptive event-triggered communication and FEs.

3.1. FEs

Two FEs are designed to acquire the operator force and environment force instead of directly using force sensors.

The FE for the master is designed as

(2) \begin{align} w_{h}(t) & =\mathcal{Z}_{m} (t) + y_{m}(\dot{q}_{m}) \\ \dot{{\mathcal{Z}}}_{m}(t)& = -{\ell }_{h}{\mathcal{Z}}_{m}(t)+{\ell }_{h}(C_{m}\dot{q}_{m}+g_{m}-\tau _{m}-y_{m}(\dot{q}_{m}))-{\mathcal{P}}_{m}\dot{q}_{m}\nonumber \end{align}

where $w_{h}(t)=\hat{F}_{h}$ is the estimate of the operator force $F_{h}, {\mathcal{P}}_{m}$ is a positive definite gain matrix and ${\ell }_{h}=\chi _{m}{M_{m}}^{-1}(q_{m})$ . Let

(3) \begin{align} \dot{y}_{m}(\dot{q}_{m})={\ell }_{h}M_{m}(q_{m})\ddot{q}_{m} \end{align}

Substituting ${\ell }_{h}$ into (3) and integrating both sides yields

(4) \begin{align} y_{m}(\dot{q}_{m})=\chi _{m}\dot{q}_{m} \end{align}

where $\chi _{m}\gt 0$ is a constant. Define the estimate error of the operator force as $\overline{F}_{h}=F_{h}-w_{h}$ . Then, from (2)-(4), we obtain

(5) \begin{align} \begin{array}{ll} \dot{\overline{F}}_{h} & =\dot{F}_{h}-\dot{w}_{h}\\ & =\dot{F}_{h}+{\ell }_{h}(w_{h}-y_{m}(\dot{q}_{m}))-{\ell }_{h}(M_{m}(q_{m})\ddot{q}_{m}+C_{m}\dot{q}_{m}+g_{m}-\tau _{m})+\mathcal{P}_{m}\dot{q}_{m}+{\ell }_{h}y_{m}(\dot{q}_{m})\\ & =\dot{F}_{h}-{\ell }_{h}\overline{F}_{h}+\mathcal{P}_{m}\dot{q}_{m} \end{array} \end{align}
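For concreteness, a minimal discrete-time sketch of the master-side FE (2)-(4) is given below, using forward-Euler integration; the sampling period `dt`, the gains `chi_m` and `P_m`, and the way the model terms `M_m`, `C_m`, `g_m` are supplied are assumptions of this sketch rather than the settings used in the experiments.

```python
import numpy as np

def master_fe_step(Z_m, dq_m, tau_m, M_m, C_m, g_m, chi_m=5.0, P_m=None, dt=1e-3):
    """One forward-Euler step of the master FE (2)-(4).

    Z_m: internal estimator state; dq_m: joint velocity; tau_m: control input;
    M_m, C_m, g_m: dynamic model terms evaluated at the current state (placeholders here).
    Returns the updated Z_m and the operator-force estimate w_h.
    """
    if P_m is None:
        P_m = np.eye(dq_m.size)                  # positive definite gain matrix
    ell_h = chi_m * np.linalg.inv(M_m)           # ℓ_h = χ_m M_m^{-1}(q_m)
    y_m = chi_m * dq_m                           # y_m(q̇_m) = χ_m q̇_m, from (4)
    # Ż_m = -ℓ_h Z_m + ℓ_h (C_m q̇_m + g_m - τ_m - y_m) - P_m q̇_m, from (2)
    dZ_m = -ell_h @ Z_m + ell_h @ (C_m @ dq_m + g_m - tau_m - y_m) - P_m @ dq_m
    Z_m = Z_m + dt * dZ_m
    return Z_m, Z_m + y_m                        # w_h = Z_m + y_m(q̇_m)
```

The slave-side FE (6)-(8) follows the same pattern with the additional $-\mathcal{Q}q_{s}$ term.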

Similarly, the FE for the slave is designed as

(6) \begin{align} w_{e}(t)& = \mathcal{Z}_{s}(t)+y_{s}(\dot{q}_{s})\\ \dot{\mathcal{Z}}_{s}(t)& = -{\ell }_{e}\mathcal{Z}_{s}(t)+{\ell }_{e}(C_{s}\dot{q}_{s}+g_{s}-\tau _{s}-y_{s}(\dot{q}_{s}))-\mathcal{P}_{s}\dot{q}_{s}-\mathcal{Q}q_{s} \nonumber \end{align}

where $w_{e}(t)=\hat{F}_{e}$ is the estimate of the environment force $F_{e}$ . Besides, ${\mathcal{P}}_{s}, {\mathcal{Q}}$ are positive definite gain matrices and ${\ell }_{e}=\chi _{s}{M_{s}}^{-1}(q_{s})$ . Let

(7) \begin{align} \dot{y}_{s}(\dot{q}_{s})={\ell }_{e}M_{s}(q_{s})\ddot{q}_{s} \end{align}

Substituting ${\ell }_{e}$ into (7) and integrating both sides yields

(8) \begin{align} y_{s}(\dot{q}_{s})=\chi _{s}\dot{q}_{s} \end{align}

where $\chi _{s}\gt 0$ is a constant. Define the estimate error of the environment force as $\overline{F}_{e}=F_{e}-w_{e}$ . Then, from (6)-(8), we obtain

(9) \begin{align} \begin{array}{ll} \dot{\overline{F}}_{e} & =\dot{F}_{e}-\dot{w}_{e}\\ & =\dot{F}_{e}+{\ell }_{e}(w_{e}-y_{s}(\dot{q}_{s}))-{\ell }_{e}(M_{s}(q_{s})\ddot{q}_{s}+C_{s}\dot{q}_{s}+g_{s}-\tau _{s})+{\mathcal{P}}_{s}\dot{q}_{s}+{\mathcal{Q}}q_{s}+{\ell }_{e}y_{s}(\dot{q}_{s})\\ & =\dot{F}_{e}-{\ell }_{e}\overline{F}_{e}+{\mathcal{P}}_{s}\dot{q}_{s}+{\mathcal{Q}}q_{s} \end{array} \end{align}

In practice, the operator force and the environment force are usually bounded, that is, $\| F_{h}\| \lt {\mathcal{F}}_{h}, \| F_{e}\| \lt {\mathcal{F}}_{e}$ [Reference Yang, Guo, Li and Luo17]. However, since the bounds ${\mathcal{F}}_{h}$ and ${\mathcal{F}}_{e}$ are usually unknown, the following adaptive laws are designed to estimate them

(10) \begin{align} \begin{array}{l} \dot{\hat{{\mathcal{F}_{h}}}} =\overline{F}_{h}^{T}\!\left(\dot{\overline{F}}_{h}+{\ell }_{h}\overline{F}_{h}\right)\\ \dot{\hat{{\mathcal{F}_{e}}}} =\overline{F}_{e}^{T}\!\left(\dot{\overline{F}}_{e}+{\ell }_{e}\overline{F}_{e}\right) \end{array} \end{align}

Theorem 1. For the teleoperation system (1) under TVDs, using the FEs (2), (6) and the adaptive laws (10), the estimate errors of the operator force and environment force asymptotically converge to zero, that is, $\lim \limits_{t\rightarrow \infty }\overline{F}_{e}\rightarrow 0$ and $\lim \limits_{t\rightarrow \infty }\overline{F}_{h}\rightarrow 0$ .

Proof: Define a Lyapunov function as

(11) \begin{align} V_{1}=\frac{1}{2}\overline{F}_{h}^{T}\overline{F}_{h}+\frac{1}{2}\overline{F}_{e}^{T}\overline{F}_{e}+\frac{1}{2}\!\left({\mathcal{F}}_{h}-\hat{{\mathcal{F}_{h}}}\right)^{2}+\frac{1}{2}\!\left({\mathcal{F}}_{e}-\hat{{\mathcal{F}_{e}}}\right)^{2} \end{align}

Differentiating (11) and substituting (5) and (9) into it, we have

(12) \begin{align} \begin{array}{ll} \dot{V}_{1} & =\overline{F}_{h}^{T}\dot{\overline{F}}_{h}+\overline{F}_{e}^{T}\dot{\overline{F}}_{e}-\dot{\hat{{\mathcal{F}_{h}}}}-\dot{\hat{{\mathcal{F}_{e}}}}\\ & =\overline{F}_{h}^{T}\!\left(\dot{F}_{h}-{\ell }_{h}\overline{F}_{h}+{\mathcal{P}}_{m}\dot{q}_{m}\right)+\overline{F}_{e}^{T}\!\left(\dot{F}_{e}-{\ell }_{e}\overline{F}_{e}+{\mathcal{P}}_{s}\dot{q}_{s}+{\mathcal{Q}}q_{s}\right)-\dot{\hat{{\mathcal{F}_{h}}}}-\dot{\hat{{\mathcal{F}_{e}}}} \end{array} \end{align}

Substituting the adaptive laws (10) into (12) yields

(13) \begin{align} \dot{V}_{1}\leq -{\ell }_{h}\overline{F}_{h}^{T}\overline{F}_{h}-{\ell }_{e}\overline{F}_{e}^{T}\overline{F}_{e} \end{align}

Since ${\ell }_{h}=\chi _{m}{M_{m}}^{-1}(q_{m}), {\ell }_{e}=\chi _{s}{M_{s}}^{-1}(q_{s}), \chi _{m}$ and $\chi _{s}$ are positive constants, and both ${M_{m}}^{-1}(q_{m})$ and ${M_{s}}^{-1}(q_{s})$ are positive definite matrices, it follows that $\dot{V}_{1}\leq 0$ . Consequently, $\overline{F}_{h}$ and $\overline{F}_{e}$ are bounded. Also, $\dot{V}_{1}=0$ holds if and only if $\overline{F}_{h}=0$ and $\overline{F}_{e}=0$ . Hence, the estimate errors of the operator force $\overline{F}_{h}$ and environment force $\overline{F}_{e}$ asymptotically converge to zero, that is, $\lim\limits _{t\rightarrow \infty }\overline{F}_{e}\rightarrow 0$ and $\lim\limits _{t\rightarrow \infty }\overline{F}_{h}\rightarrow 0$ .

3.2. AETS

To save limited network resources, the AETS is designed as shown in Figure 2. The adaptive triggering thresholds and triggering errors constitute the triggering conditions, which are evaluated by an event detector (ED). Once a triggering condition is satisfied, the corresponding state information is transmitted over the communication network. Simultaneously, the zero-order hold (ZOH) preserves the state information that meets the triggering condition until the next triggering moment occurs.

Figure 2. Block diagram of AETS.

Define the triggered position error and triggered velocity error for the master as

(14) \begin{align} \begin{array}{l} e_{m}^{q}=q_{m}-\hat{q}_{m}\\[5pt] e_{m}^{v}=\dot{q}_{m}-\hat{\dot{q}}_{m} \end{array} \end{align}

where $\hat{q}_{m}=q_{m}(t_{l}^{mq})$ and $\hat{\dot{q}}_{m}=\dot{q}_{m}(t_{l}^{mv})$ are the triggered position and triggered velocity transmitted at the current triggering moment. Therefore, the adaptive event-triggering conditions are designed as

(15) \begin{align} \begin{array}{l} \left(e_{m}^{q}\right)^{T}\Xi _{m}e_{m}^{q}\gt \delta _{1}\dot{q}_{m}^{T}\Xi _{m}\dot{q}_{m}\\ \\ \left(e_{m}^{v}\right)^{T}\Xi _{m}e_{m}^{v}\gt \delta _{2}\dot{q}_{m}^{T}\Xi _{m}\dot{q}_{m} \end{array} \end{align}

where $\Xi _{m}$ is the weighted matrix of the triggering conditions, and $\delta _{1}, \delta _{2}$ are adaptive triggering thresholds for the master designed as

(16) \begin{align} \begin{array}{l} \delta _{1}=\max \!\left( \delta _{1\min },\min \!\left(\delta _{1\max },\mathcal{E}_{1}\right), \mathcal{J} _{1}\right)\\ \\ \delta _{2}=\max \!\left(\delta _{2\min },\min\!\left(\delta _{2\max },\mathcal{E}_{2}\right), \mathcal{J} _{2}\right) \end{array} \end{align}

where $\delta _{1\min }$ and $\delta _{1\max }$ , $\delta _{2\min }$ and $\delta _{2\max }$ represent the minimum and maximum values of the position and velocity triggering thresholds for the master, respectively. Also,

(17) \begin{align} \begin{array}{l} {\mathcal{E}}_{1}=a*\tanh \!\left(\frac{\left\| q_{m}-\hat{q}_{m}\right\| }{\left\| q_{m}+\hat{q}_{m}\right\| }\right)*\delta _{1ed}\\[5pt] {\mathcal{E}}_{2}=a*\tanh \!\left(\frac{\left\| \dot{q}_{m}-\widehat{\dot{q}}_{m}\right\| }{\left\| \dot{q}_{m}+\widehat{\dot{q}}_{m}\right\| }\right)*\delta _{2ed} \end{array} \end{align}

In (17), $a\gt 0, \delta _{1ed}$ and $\delta _{2ed}$ represent the adaptive triggering thresholds of the position and velocity for the master at the last triggering moment, respectively. When the triggering conditions (15) are satisfied, the values of $\delta _{1ed}$ and $\delta _{2ed}$ are updated to the current triggering thresholds $\delta _{1}$ and $\delta _{2}$ . In addition, the initial values of $\delta _{1ed}$ and $\delta _{2ed}$ are $\delta _{1\max }$ and $\delta _{2\max }$ . Besides, $\mathcal{J}_{1}$ and $\mathcal{J} _{2}$ are

(18) \begin{align} \begin{array}{l} \mathcal{J} _{1}=e^{b*\left| \left\| \hat{q}_{m}\right\| -\left\| q_{m}\left(t_{l-1}^{mq}\right)\right\| \right| }\\[5pt] \mathcal{J} _{2}=e^{b*\left| \left\| \widehat{\dot{q}}_{m}\right\| -\left\| \dot{q}_{m}\left(t_{l-1}^{mv}\right)\right\| \right| } \end{array} \end{align}

where $b\lt 0$ , and $q_{m}(t_{l-1}^{mq}), \dot{q}_{m}(t_{l-1}^{mv})$ represent the triggered position and triggered velocity transmitted at the last triggering moment. Notice that when $b\gt 0$ , the values of $\mathcal{J} _{1}$ or $\mathcal{J} _{2}$ may easily exceed $\delta _{1\max }$ or $\delta _{2\max }$ . Therefore, to ensure $\mathcal{J} _{1}\lt \delta _{1\max }$ and $\mathcal{J} _{2}\lt \delta _{2\max }$ , b must be less than 0.
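A minimal sketch of the threshold adaptation (16)-(18) for one master channel is shown below; the numerical bounds `delta_min`, `delta_max` and the constants `a`, `b` are assumed example values, not the parameters of Table 1.

```python
import numpy as np

def adapt_threshold(x, x_hat, x_hat_prev, delta_ed,
                    delta_min=0.01, delta_max=0.5, a=1.0, b=-10.0):
    """Adaptive threshold (16)-(18) for one channel (position or velocity).

    x: current state; x_hat: value at the current triggering moment;
    x_hat_prev: value at the last triggering moment; delta_ed: threshold at the last trigger.
    """
    denom = np.linalg.norm(x + x_hat)
    # E in (17): scaled tanh of the relative triggering error, weighted by the last threshold
    E = a * np.tanh(np.linalg.norm(x - x_hat) / denom) * delta_ed if denom > 0 else 0.0
    # J in (18): decays with the change between the current and last triggered values (b < 0)
    J = np.exp(b * abs(np.linalg.norm(x_hat) - np.linalg.norm(x_hat_prev)))
    return max(delta_min, min(delta_max, E), J)   # (16)
```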

Now, define the triggered position error, triggered velocity error, and triggered estimate error of the environment force for the slave as

(19) \begin{align} \begin{array}{l} e_{s}^{q}=q_{s}-\hat{q}_{s}\\ e_{s}^{v}=\dot{q}_{s}-\hat{\dot{q}}_{s}\\ e_{f}=w_{e}-\hat{w}_{e} \end{array} \end{align}

where $\hat{q}_{s}=q_{s}(t_{r}^{sq}), \widehat{\dot{q}}_{s}=\dot{q}_{s}(t_{r}^{sv})$ and $\hat{w}_{e}=w_{e}(t_{r}^{f})$ . Thus, the adaptive event-triggering conditions for the slave are designed as

(20) \begin{align} \begin{array}{l} \left(e_{s}^{q}\right)^{T}\Xi _{s}e_{s}^{q}\gt \delta _{3}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s}\\ \\ \left(e_{s}^{v}\right)^{T}\Xi _{s}e_{s}^{v}\gt \delta _{4}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s}\\ \\ \left(e_{f}\right)^{T}\Xi _{s}e_{f}\gt \delta _{5}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s} \end{array} \end{align}

where $\Xi _{s}$ is the weighted matrix of the triggering conditions, and $\delta _{3}, \delta _{4}$ and $\delta _{5}$ are the adaptive triggering thresholds for the slave designed as

(21) \begin{align} \begin{array}{l} \delta _{3}=\max\!\left(\delta _{3\min },\min\!\left(\delta _{3\max },\mathcal{E}_{3}\right),\mathcal{J} _{3}\right)\\ \\ \delta _{4}=\max\!\left(\delta _{4\min },\min\!\left(\delta _{4\max },\mathcal{E}_{4}\right),\mathcal{J} _{4}\right)\\ \\ \delta _{5}=\max\!\left(\delta _{5\min },\min\!\left(\delta _{5\max },\mathcal{E}_{5}\right),\mathcal{J} _{5}\right) \end{array} \end{align}

where $\delta _{3\min }$ and $\delta _{3\max }, \delta _{4\min }$ and $\delta _{4\max }, \delta _{5\min }$ and $\delta _{5\max }$ represent the minimum and maximum values of the position triggering threshold, velocity triggering threshold, and the estimate of the environment force triggering threshold for the slave, respectively. Additionally,

(22) \begin{align} \begin{array}{l} \mathcal{E}_{3}=a*\tanh\!\left(\frac{\left\| q_{s}-\hat{q}_{s}\right\| }{\left\| q_{s}+\hat{q}_{s}\right\| }\right)*\delta _{3ed}\\[5pt] \mathcal{E}_{4}=a*\tanh\!\left(\frac{\left\| \dot{q}_{s}-\widehat{\dot{q}}_{s}\right\| }{\left\| \dot{q}_{s}+\widehat{\dot{q}}_{s}\right\| }\right)*\delta _{4ed}\\[5pt] \mathcal{E}_{5}=a*\tanh\!\left(\frac{\left\| w_{e}-\hat{w}_{e}\right\| }{\left\| w_{e}+\hat{w}_{e}\right\| }\right)*\delta _{5ed} \end{array} \end{align}

In (22), $a\gt 0$ is a constant, $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ represent the adaptive triggering thresholds of the position, velocity, and the estimate of the environment force for the slave at the last triggering moment, respectively. When the triggering conditions (20) are satisfied, the values of $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ are updated to the current triggering thresholds $\delta _{3}, \delta _{4}$ , and $\delta _{5}$ . In addition, the initial values of $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ are set to $\delta _{3\max }, \delta _{4\max }$ and $\delta _{5\max }$ . Besides, $\mathcal{J} _{3}, \mathcal{J} _{4}$ , and $\mathcal{J} _{5}$ in (21) are

(23) \begin{align} \begin{array}{l} \mathcal{J} _{3}=e^{b*\left| \left\| \hat{q}_{s}\right\| -\left\| q_{s}\left(t_{r-1}^{sq}\right)\right\| \right| }\\[5pt] \mathcal{J} _{4}=e^{b*\left| \left\| \widehat{\dot{q}}_{s}\right\| -\left\| \dot{q}_{s}\left(t_{r-1}^{sv}\right)\right\| \right| }\\[5pt] \mathcal{J} _{5}=e^{b*\left| \left\| \hat{w}_{e}\right\| -\left\| w_{e}\left(t_{r-1}^{f}\right)\right\| \right| } \end{array} \end{align}

In (23), $b\lt 0$ , and $q_{s}(t_{r-1}^{sq}), \dot{q}_{s}(t_{r-1}^{sv})$ and $w_{e}(t_{r-1}^{f})$ represent the triggered position, triggered velocity and triggered estimate of the environment force transmitted at the last triggering moment, respectively. From (18) and (23) one can see that the adaptive triggering thresholds include the current triggered values as well as the last triggered values of the position, velocity for the master and slave, and the estimate of the environment force.

Now, the AETS can be designed as

(24) \begin{align} t_{l+1}^{mq} & = \inf \left\{t\gt t_{l}^{mq}\ {\mid}\ \left(e_{m}^{q}\right)^{T}\Xi _{m}e_{m}^{q}\gt \delta _{1}\dot{q}_{m}^{T}\Xi _{m}\dot{q}_{m}\right\} \nonumber\\ t_{l+1}^{mv} & = \inf \left\{t\gt t_{l}^{mv} \ {\mid}\ \left(e_{m}^{v}\right)^{T}\Xi _{m}e_{m}^{v}\gt \delta _{2}\dot{q}_{m}^{T}\Xi _{m}\dot{q}_{m}\right\}\\ t_{r+1}^{sq} & = \inf \left\{t\gt t_{r}^{sq} \ {\mid}\ \left(e_{s}^{q}\right)^{T}\Xi _{s}e_{s}^{q}\gt \delta _{3}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s}\right\}\nonumber \\ t_{r+1}^{sv} & = \inf \left\{t\gt t_{r}^{sv} \ {\mid}\ \left(e_{s}^{v}\right)^{T}\Xi _{s}e_{s}^{v}\gt \delta _{4}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s}\right\}\nonumber \\ t_{r+1}^{f} & = \inf \left\{t\gt t_{r}^{f} \ {\mid}\ \left(e_{f}\right)^{T}\Xi _{s}e_{f}\gt \delta _{5}\dot{q}_{s}^{T}\Xi _{s}\dot{q}_{s}\right\} \nonumber \end{align}

where the time series $t_{l}^{mq}, t_{l}^{mv}, t_{r}^{sq}, t_{r}^{sv}$ and $t_{r}^{f}$ denote the current triggering moments of the position for the master, velocity for the master, position for the slave, velocity for the slave, and the estimate of the environment force, respectively. $t_{l+1}^{mq}, t_{l+1}^{mv}, t_{r+1}^{sq}, t_{r+1}^{sv}$ , and $t_{r+1}^{f}$ denote the next triggering moments of $t_{l}^{mq}, t_{l}^{mv}, t_{r}^{sq}, t_{r}^{sv}$ and $t_{r}^{f}$ , where $l\in \mathrm{N }, r\in \mathrm{N }$ and $\mathrm{N }$ denotes the set of natural numbers. As the triggering thresholds are associated with the current and last values of the states, when the triggered errors increase, the event-triggering thresholds will appropriately decrease to increase the data transmission frequency. Conversely, the event-triggering thresholds increase to reduce the data transmission frequency. That is, the triggering thresholds can be dynamically adjusted based on the adaptive triggering thresholds designed in (16) and (21).

Remark 1. In the AETS (24), the next triggering always satisfies the triggering condition and it occurs strictly after the current triggering moment. This prevents the occurrence of zero intervals between two triggering moments, thereby avoiding the Zeno phenomenon.
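Combining (14)-(16) with (24), the event detector and ZOH for the master position channel can be sketched as follows; it reuses the `adapt_threshold` helper from the previous sketch, and the weighting matrix `Xi_m` and the initial threshold are assumptions of this sketch.

```python
import numpy as np

class MasterPositionChannel:
    """Event detector + ZOH for the master position channel, per (14)-(16) and (24)."""

    def __init__(self, q0, Xi_m, delta_max=0.5):
        self.q_hat = q0.copy()          # ZOH output: position transmitted at the current trigger
        self.q_hat_prev = q0.copy()     # position transmitted at the previous trigger
        self.delta_ed = delta_max       # threshold at the last trigger (initialized to δ_max)
        self.Xi_m = Xi_m

    def update(self, q_m, dq_m):
        e = q_m - self.q_hat                                        # triggering error (14)
        delta_1 = adapt_threshold(q_m, self.q_hat,                  # adaptive threshold (16)
                                  self.q_hat_prev, self.delta_ed)
        if e @ self.Xi_m @ e > delta_1 * (dq_m @ self.Xi_m @ dq_m): # condition in (15)/(24)
            self.q_hat_prev, self.q_hat = self.q_hat, q_m.copy()    # transmit current state
            self.delta_ed = delta_1                                 # remember threshold used
        return self.q_hat                                           # ZOH holds last transmission
```

The velocity channel and the three slave channels in (20)-(24) are handled analogously.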

3.3. Fixed-time SMC

Based on the FEs and AETS presented in Sections 3.1 and 3.2, fixed-time SMC for master and slave will be designed to ensure the convergence of tracking error under TVDs.

Define the position tracking error as

(25) \begin{align} e_{m}\!\left(t\right) & = q_{m}\!\left(t\right)-\tilde{q}_{s}\!\left(t\right)\\ e_{s}\!\left(t\right) & = q_{s}\!\left(t\right)-\tilde{q}_{m}\!\left(t\right) \nonumber \end{align}

where $\tilde{q}_{m}=\hat{q}_{m}(t-T_{1}(t))$ and $\tilde{q}_{s}=\hat{q}_{s}(t-T_{2}(t))$ are the triggered positions for the master and slave at the current triggering moment affected by TVDs. Differentiating (25) with respect to time yields

(26) \begin{align} \dot{e}_{m}(t)& = \dot{q}_{m}(t)-\dot{\hat{q}}_{s}(t-T_{2}(t))(1-\dot{T}_{2}(t))\nonumber\\ \dot{e}_{s}(t) & = \dot{q}_{s}(t)-\dot{\hat{q}}_{m}(t-T_{1}(t))(1-\dot{T}_{1}(t))\nonumber \\ \ddot{e}_{m}(t) & = \ddot{q}_{m}(t)-\ddot{\hat{q}}_{s}(t-T_{2}(t))(1-\dot{T}_{2}(t))^{2}+\dot{\hat{q}}_{s}(t-T_{2}(t))\ddot{T}_{2}(t)\\ \ddot{e}_{s}(t)& = \ddot{q}_{s}(t)-\ddot{\hat{q}}_{m}(t-T_{1}(t))(1-\dot{T}_{1}(t))^{2}+\dot{\hat{q}}_{m}(t-T_{1}(t))\ddot{T}_{1}(t) \nonumber\end{align}

According to (25) and (26), the sliding mode surface is designed as

(27) \begin{align} s_{m} & = \dot{e}_{m}+k_{m1}sig\!\left(e_{m}\right)^{{\varphi _{m1}}}+k_{m2}sig\!\left(e_{m}\right)^{{\varphi _{m2}}}\nonumber \\ s_{s} & = \dot{e}_{s}+k_{s1}sig\!\left(e_{s}\right)^{{\varphi _{s1}}}+k_{s2}sig\!\left(e_{s}\right)^{{\varphi _{s2}}} \end{align}

where $k_{m1}\gt 0, k_{m2}\gt 0, k_{s1}\gt 0, k_{s2}\gt 0$ are constant gains. In addition, $0\lt \varphi _{m1}\lt 1, \varphi _{m2}\gt 1, 0\lt \varphi _{s1}\lt 1, \varphi _{s2}\gt 1$ , and $sig(\cdot )^{{\mathcal{W}}}=|{\cdot}| ^{{\mathcal{W}}}\mathrm{sgn}({\cdot})$ where $\mathrm{sgn}({\cdot})$ is the sign function.
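The elementwise map $sig(\cdot)^{\mathcal{W}}$ and the sliding variable in (27) can be sketched as follows; the gains and exponents shown are example values consistent with the stated ranges and with the convergence-time computation in Section 4.

```python
import numpy as np

def sig(x, w):
    """Elementwise sig(x)^w = |x|^w * sgn(x)."""
    return np.abs(x)**w * np.sign(x)

def sliding_variable(e, de, k1=5.0, k2=5.0, phi1=0.2, phi2=1.5):
    """Sliding surface (27): s = ė + k1 sig(e)^{φ1} + k2 sig(e)^{φ2}, with 0 < φ1 < 1 < φ2."""
    return de + k1*sig(e, phi1) + k2*sig(e, phi2)
```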

Differentiating (27) leads to

(28) \begin{align} \dot{s}_{m}& = \ddot{e}_{m}+k_{m1}\varphi _{m1}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m1}}-1}\right)\dot{e}_{m}+k_{m2}\varphi _{m2}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m2}}-1}\right)\dot{e}_{m}\nonumber\\ \dot{s}_{s}& = \ddot{e}_{s}+k_{s1}\varphi _{s1}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s1}}-1}\right)\dot{e}_{s}+k_{s2}\varphi _{s2}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s2}}-1}\right)\dot{e}_{s} \end{align}

Therefore, the fixed-time SMC can be designed as

(29) \begin{align} \tau _{m} & = \tau _{m1}+\tau _{m2} \nonumber \\ \tau _{s}& = \tau _{s1}+\tau _{s2} \end{align}

where

(30) \begin{align} \tau _{m1} & = M_{m}\!\left(\ddot{\hat{q}}_{s}\!\left(t-T_{2}\!\left(t\right)\right)\!\left(1-\dot{T}_{2}\!\left(t\right)\right)^{2}-\dot{\hat{q}}_{s}\!\left(t-T_{2}\!\left(t\right)\right)\ddot{T}_{2}\!\left(t\right)-k_{m1}\varphi _{m1}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m1}}-1}\right)\dot{e}_{m} \right. \nonumber\\ &\left. -k_{m2}\varphi _{m2}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m2}}-1}\right)\dot{e}_{m}\right)+C_{m}\!\left(\dot{q}_{m}-s_{m}\right)+g_{m}-w_{h}-\wp \!\left| w_{h}-\tilde{w}_{e}\right| \nonumber \\ \tau _{s1} &= M_{s}\!\left(\ddot{\hat{q}}_{m}\!\left(t-T_{1}\!\left(t\right)\right)\!\left(1-\dot{T}_{1}\!\left(t\right)\right)^{2}-\dot{\hat{q}}_{m}\!\left(t-T_{1}\!\left(t\right)\right)\ddot{T}_{1}\!\left(t\right)-k_{s1}\varphi _{s1}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s1}}-1}\right)\dot{e}_{s} \right.\nonumber \\ &\left. -k_{s2}\varphi _{s2}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s2}}-1}\right)\dot{e}_{s}\right)+C_{s}\!\left(\dot{q}_{s}-s_{s}\right)+g_{s}-w_{e} \end{align}
(31) \begin{align} \tau _{m2} = & -k_{m3}M_{m}\mathrm{sgn}\!\left(s_{m}\right)-k_{m4}M_{m}sig\!\left(s_{m}\right)^{{\sigma _{m1}}}-k_{m5}M_{m}sig\!\left(s_{m}\right)^{{\sigma _{m2}}} \nonumber\\ &\tau _{s2}=-k_{s3}M_{s}\mathrm{sgn}\!\left(s_{s}\right)-k_{s4}M_{s}sig\!\left(s_{s}\right)^{{\sigma _{s1}}}-k_{s5}M_{s}sig\!\left(s_{s}\right)^{{\sigma _{s2}}} \end{align}

where $\wp$ is a positive definite matrix, $\tilde{w}_{e}=\hat{w}_{e}(t-T_{2}(t))$ . $k_{m3}\gt 0, k_{m4}\gt 0, k_{m5}\gt 0, k_{s3}\gt 0, k_{s4}\gt 0$ , and $k_{s5}\gt 0$ are constant gains. Besides, $0\lt \sigma _{m1}\lt 1, \sigma _{m2}\gt 1, 0\lt \sigma _{s1}\lt 1, \sigma _{s2}\gt 1$ . Eq. (30) is the equivalent control law for the master and slave, while (31) is the double-power convergence law. Compared to the convergence law in the traditional SMC, the double-power convergence law allows the system to have faster convergence.
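As a sketch of the switching part of the controller, the double-power reaching term (31) for the master can be written as below; it reuses `sig` from the previous sketch, `k3` is an assumed value, and the remaining gains and exponents match the numbers used in the convergence-time computation in Section 4.

```python
import numpy as np

def reaching_term(s_m, M_m, k3=0.1, k4=1.0, k5=100.0, sigma1=0.8, sigma2=2.3):
    """Double-power reaching law (31) for the master:
    τ_m2 = -k3 M_m sgn(s_m) - k4 M_m sig(s_m)^{σ1} - k5 M_m sig(s_m)^{σ2}."""
    return -k3 * M_m @ np.sign(s_m) - k4 * M_m @ sig(s_m, sigma1) - k5 * M_m @ sig(s_m, sigma2)
```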

Substituting (29)-(31) into (1), the closed-loop system is obtained as

(32) \begin{align} & M_{m}\!\left(q_{m}\right)\ddot{q}_{m}+C_{m}\!\left(q_{m},\dot{q}_{m}\right)\dot{q}_{m}+g_{m}\!\left(q_{m}\right)\nonumber\\ & = M_{m}\!\left(\ddot{\hat{q}}_{s}\!\left(t-T_{2}\!\left(t\right)\right)\!\left(1-\dot{T}_{2}\!\left(t\right)\right)^{2}-\dot{\hat{q}}_{s}\!\left(t-T_{2}\!\left(t\right)\right)\ddot{T}_{2}\!\left(t\right)-k_{m1}\varphi _{m1}{\textit{diag}}\!\left(\left| e_{m}\right| ^{{\varphi _{m1}}-1}\right)\dot{e}_{m} \right.\nonumber\\ & \!\left. -k_{m2}\varphi _{m2} {\textit{diag}} \!\left(\left| e_{m}\right| ^{{\varphi _{m2}}-1}\right)\dot{e}_{m}\right)+C_{m}\!\left(\dot{q}_{m}-s_{m}\right)+g_{m}+F_{h}-w_{h}-\wp \left| w_{h}-\tilde{w}_{e}\right| \nonumber\\ &- k_{m3}M_{m}\text{sgn}\!\left(s_{m}\right)-k_{m4}M_{m}{\textit{sig}}\!\left(s_{m}\right)^{{\sigma _{m1}}}-k_{m5}M_{m} {\textit{sig}} \!\left(s_{m}\right)^{{\sigma _{m2}}}\\ & M_{s}\!\left(q_{s}\right)\ddot{q}_{s}+C_{s}\!\left(q_{s},\dot{q}_{s}\right)\dot{q}_{s}+g_{s}\!\left(q_{s}\right)\nonumber\\ & = M_{s}\!\left(\ddot{\hat{q}}_{m}\!\left(t-T_{1}\!\left(t\right)\right)\!\left(1-\dot{T}_{1}\!\left(t\right)\right)^{2}-\dot{\hat{q}}_{m}\!\left(t-T_{1}\!\left(t\right)\right)\ddot{T}_{1}\!\left(t\right)-k_{s1}\varphi _{s1} {\textit{diag}}\!\left(\left| e_{s}\right| ^{{\varphi _{s1}}-1}\right)\dot{e}_{s} \right.\nonumber\\ & \!\left.-k_{s2}\varphi _{s2}{\textit{diag}}\!\left(\left| e_{s}\right| ^{{\varphi _{s2}}-1}\right)\dot{e}_{s}\right)+C_{s}\!\left(\dot{q}_{s}-s_{s}\right)+g_{s}+F_{e}-w_{e}-k_{s3}M_{s}\text{sgn}\!\left(s_{s}\right)\nonumber\\ &-k_{s4}M_{s} {\textit{sig}}\!\left(s_{s}\right)^{{\sigma _{s1}}}-k_{s5}M_{s} {\textit{sig}}\!\left(s_{s}\right)^{{\sigma _{s2}}}\nonumber \end{align}

Lemma 1 [Reference Du, Wen, Wu, Cheng and Lu31]: For a nonlinear system $\dot{x}=f(x,t), x(0)=x_{0}$ , if there exists a continuous positive definite Lyapunov function $V(x)\colon R^{n\times 1}\rightarrow R^{+}$ satisfying

(33) \begin{align} \dot{V}\left(x\right)\leq -\Im _{1}V\left(x\right)^{a}-\Im _{2}V\left(x\right)^{b} \end{align}

where $x\in R^{n\times 1}, \Im _{1}\gt 0, \Im _{2}\gt 0$ and $0\lt a\lt 1\lt b$ . Then, the nonlinear system is globally fixed-time stable with the convergence time bounded by $T_{st}$ as

(34) \begin{align} T_{st}\leq \frac{1}{\Im _{1}}\frac{1}{\left(1-a\right)}+\frac{1}{\Im _{2}}\frac{1}{\left(b-1\right)} \end{align}

Theorem 2. For the teleoperation system (1), using the FEs (2) and (6), the AETS (24), and the fixed-time SMC (29)-(31), the system is stable and the position tracking error converges within a fixed time, whose upper bound is $T_{\sup }=T_{rt}+T_{st}=\frac{1}{k_{4}}\frac{1}{\left(1-\sigma _{1}\right)}+\frac{1}{k_{5}}\frac{1}{\left(\sigma _{2}-1\right)}+\frac{1}{k_{1}}\frac{1}{\left(1-\varphi _{1}\right)}+\frac{1}{k_{2}}\frac{1}{\left(\varphi _{2}-1\right)}$ . Furthermore, the force tracking error also converges to zero, that is, $\lim\limits _{t\rightarrow \infty }(| w_{h}-\tilde{w}_{e}| )\rightarrow 0$ .

Proof. Define a Lyapunov function as

(35) \begin{align} V_{2}=\frac{1}{2}s_{m}^{T}M_{m}s_{m}+\frac{1}{2}s_{s}^{T}M_{s}s_{s} \end{align}

Differentiating (35), using property 2 and substituting (28) into (35) yield

(36) \begin{align} \dot{V}_{2} =&\, s_{m}^{T}M_{m}\!\left(\ddot{e}_{m}+k_{m1}\varphi _{m1}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m1}}-1}\right)\!\dot{e}_{m}+k_{m2}\varphi _{m2}diag\!\left(\left| e_{m}\right| ^{{\varphi _{m2}}-1}\right)\!\dot{e}_{m}\right)+s_{m}^{T}C_{m}s_{m}\\ &\quad + s_{s}^{T}M_{s}\!\left(\ddot{e}_{s}+k_{s1}\varphi _{s1}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s1}}-1}\right)\!\dot{e}_{s}+k_{s2}\varphi _{s2}diag\!\left(\left| e_{s}\right| ^{{\varphi _{s2}}-1}\right)\!\dot{e}_{s}\right)+s_{s}^{T}C_{s}s_{s}\nonumber \end{align}

Since the TVDs $T_{1}(t), T_{2}(t)$ and their derivatives are usually bounded [Reference Shen and Pan32, Reference Zhang, Song, Li, Chen and Fan33], then according to (32) and (26) we can get

(37) \begin{align}\dot{V}_{2}& =s_{m}^{T}(\tau _{m}+F_{h}-C_{m}\dot{q}_{m}-g_{m}+M_{m}(-\ddot{\hat{q}}_{s}\left(t-T_{2}\left(t\right)\right)\left(1-\dot{T}_{2}\left(t\right)\right)^{2} \nonumber\\& +\dot{\hat{q}}_{s}\left(t-T_{2}\left(t\right)\right)\ddot{T}_{2}\left(t\right)+k_{m1}\varphi _{m1}diag\left(\left| e_{m}\right| ^{{\varphi _{m1}}-1}\right)\dot{e}_{m} \nonumber\\ & +k_{m2}\varphi _{m2}diag\left(\left| e_{m}\right| ^{{\varphi _{m2}}-1}\right)\dot{e}_{m}))+s_{m}^{T}C_{m}s_{m}+s_{s}^{T}(\tau _{s}+F_{e}-C_{s}\dot{q}_{s} \nonumber\\ & -g_{s}+M_{s}(-\ddot{\hat{q}}_{m}\left(t-T_{1}\left(t\right)\right)\left(1-\dot{T}_{1}\left(t\right)\right)^{2}+\dot{\hat{q}}_{m}\left(t-T_{1}\left(t\right)\right)\ddot{T}_{1}\left(t\right) \nonumber\\ & +k_{s1}\varphi _{s1}diag\left(\left| e_{s}\right| ^{{\varphi _{s1}}-1}\right)\dot{e}_{s}+k_{s2}\varphi _{s2}diag\left(\left| e_{s}\right| ^{{\varphi _{s2}}-1}\right)\dot{e}_{s}))+s_{s}^{T}C_{s}s_{s} \end{align}

Substituting (29)-(31) into (37), we have

(38) \begin{align} \dot{V}_{2}&= -s_{m}^{T}k_{m3}M_{m}\mathrm{sgn}\left(s_{m}\right)-s_{m}^{T}k_{m4}M_{m}sig\left(s_{m}\right)^{{\sigma _{m1}}}-s_{m}^{T}k_{m5}M_{m}sig\left(s_{m}\right)^{{\sigma _{m2}}} \nonumber\\ &\qquad -s_{s}^{T}k_{s3}M_{s}\mathrm{sgn}\left(s_{s}\right)-s_{s}^{T}k_{s4}M_{s}sig\left(s_{s}\right)^{{\sigma _{s1}}}-s_{s}^{T}k_{s5}M_{s}sig\left(s_{s}\right)^{{\sigma _{s2}}}\nonumber \\ & \leq -k_{m4}\left\| s_{m}\right\| ^{{\sigma _{m1}}+1}-k_{m5}\left\| s_{m}\right\| ^{{\sigma _{m2}}+1}-k_{s4}\left\| s_{s}\right\| ^{{\sigma _{s1}}+1}-k_{s5}\left\| s_{s}\right\| ^{{\sigma _{s2}}+1}\nonumber\\ & \leq -k_{m4}2^{\frac{\sigma _{m1}+1}{2}}\left\| \frac{1}{2}s_{m}^{2}\right\| ^{\frac{\sigma _{m1}+1}{2}}-k_{m5}2^{\frac{\sigma _{m2}+1}{2}}\left\| \frac{1}{2}s_{m}^{2}\right\| ^{\frac{\sigma _{m2}+1}{2}}\nonumber\\ & \quad\quad-k_{s4}2^{\frac{\sigma _{s1}+1}{2}}\left\| \frac{1}{2}s_{s}^{2}\right\| ^{\frac{\sigma _{s1}+1}{2}}-k_{s5}2^{\frac{\sigma _{s2}+1}{2}}\left\| \frac{1}{2}s_{s}^{2}\right\| ^{\frac{\sigma _{s2}+1}{2}}\nonumber\\ & \leq -k_{4}V_{2}^{\sigma _{1}}-k_{5}V_{2}^{\sigma _{2}}\nonumber\\ & \leq 0\end{align}

where $k_{4}=\min \left(k_{m4},k_{s4}\right), k_{5}=\min \left(k_{m5},k_{s5}\right), \sigma _{1}=\min \left(\frac{\sigma _{m1}+1}{2},\frac{\sigma _{s1}+1}{2}\right), \sigma _{2}=\min \left(\frac{\sigma _{m2}+1}{2},\frac{\sigma _{s2}+1}{2}\right)$ . According to Lemma 1 and (38), the system states converge to the sliding mode surface within a fixed time, and hence the system is stable. Therefore, all signals in $V_{2}(t)$ are bounded and the reaching time $T_{rt}$ of the system to the sliding surface is bounded by $T_{\sup 1}$ , that is,

(39) \begin{align} T_{rt}\leq T_{\sup 1}=\frac{1}{k_{4}}\frac{1}{\left(1-\sigma _{1}\right)}+\frac{1}{k_{5}}\frac{1}{\left(\sigma _{2}-1\right)} \end{align}

Thus, when the system reaches the sliding mode, we have $s_{m}=s_{s}=0$ . Then (27) can be rewritten as

(40) \begin{align} \dot{e}_{m} & = -k_{m1}sig\left(e_{m}\right)^{{\varphi _{m1}}}-k_{m2}sig\left(e_{m}\right)^{{\varphi _{m2}}}\\ \dot{e}_{s}& = -k_{s1}sig\left(e_{s}\right)^{{\varphi _{s1}}}-k_{s2}sig\left(e_{s}\right)^{{\varphi _{s2}}} \nonumber \end{align}

From Lemma 1 and (40), it can be seen that the position tracking error can converge to zero within a fixed time $T_{st}$ which is bounded by $T_{\sup 2}$ as follows

(41) \begin{align} T_{st}\leq T_{\sup 2}=\frac{1}{k_{1}}\frac{1}{\left(1-\varphi _{1}\right)}+\frac{1}{k_{2}}\frac{1}{\left(\varphi _{2}-1\right)} \end{align}

where $k_{1}=\min (k_{m1},k_{s1}), k_{2}=\min (k_{m2},k_{s2}), \varphi _{1}=\min (\varphi _{m1},\varphi _{s1})$ , and $\varphi _{2}=\min (\varphi _{m2},\varphi _{s2})$ . From (39) and (41), the convergence times $T_{rt}$ and $T_{st}$ do not depend on the initial states.

Next, the force tracking performance will be proved. Since we have proved that the system is stable, it is clear that $q_{i}(t)\in {\mathcal{L}}_{\infty }, \hat{q}_{m}(t-T_{2}(t))\in {\mathcal{L}}_{\infty }$ . Then we have $e_{s}(t-T_{2}(t))\in {\mathcal{L}}_{\infty }$ . As $q_{m}(t)-\tilde{q}_{s}(t)=e_{s}(t-T_{2}(t))+\int _{0}^{T_{2}(t)}\dot{q}_{s}(t-\theta )d\theta +q_{m}-q_{s}$ and $\int _{0}^{T_{2}(t)}\dot{q}_{s}(t-\theta )d\theta \in {\mathcal{L}}_{\infty }$ , it can be obtained that $q_{m}(t)-\tilde{q}_{s}(t)\in {\mathcal{L}}_{\infty }$ . Similarly, $q_{s}(t)-\tilde{q}_{m}(t)\in {\mathcal{L}}_{\infty }$ . According to (1), Property 1, Property 3, and Property 4, we have $\ddot{q}_{m}\in {\mathcal{L}}_{\infty }, \ddot{q}_{s}\in {\mathcal{L}}_{\infty }$ . Thus, $\dot{q}_{m}$ and $\dot{q}_{s}$ are uniformly continuous. According to Barbalat’s Lemma [Reference Dehghan, Koofigar, Sadeghian and Ekramian16], it can be deduced that

(42) \begin{align} \lim _{t\rightarrow \infty }\dot{q}_{i}\left(t\right)=0 \end{align}

Further, according to $\ddot{q}_{m}\in {\mathcal{L}}_{\infty }, \ddot{q}_{s}\in {\mathcal{L}}_{\infty }$ , using Barbalat’s Lemma, one can deduce that $\lim\limits_{t\rightarrow \infty }\ddot{q}_{m}(t)=0$ and $\lim\limits_{t\rightarrow \infty }\ddot{q}_{s}(t)=0$ . According to Theorem 1, we can obtain

(43) \begin{align} \lim _{t\rightarrow \infty }\left(F_{h}-w_{h}\right)=0,\lim _{t\rightarrow \infty }\left(F_{e}-w_{e}\right)=0 \end{align}

From (39) and (41), we have

(44) \begin{align} \left\| s_{i}\right\| =0,\lim _{t\rightarrow \infty }e_{i}\rightarrow 0,\lim _{t\rightarrow \infty }\dot{e}_{i}\rightarrow 0 \end{align}

Substituting (42)-(44) into (32), we can get

(45) \begin{align} M_{m}\left(q_{m}\right)\ddot{q}_{m}=-\wp \!\left| w_{h}-\tilde{w}_{e}\right| \end{align}

Multiplying $M_{m}(q_{m})^{-1}$ on both sides of (45) yields

(46) \begin{align} \ddot{q}_{m}=-M_{m}\left(q_{m}\right)^{-1}\wp \!\left| w_{h}-\tilde{w}_{e}\right| \end{align}

From Property 1, it follows that $\frac{1}{\overline{\lambda}_{m}}I\leq M_{m}\!\left(q_{m}\right)^{-1}$ , that is, $-\frac{1}{\overline{\lambda}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \geq - {M_{m}}\!\left(q_{m}\right)^{-1}\wp \!\left| w_{h}-\tilde{w}_{e}\right|$ . Thus,

(47) \begin{align} \ddot{q}_{m}\leq -\frac{1}{\overline{\lambda}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \end{align}

Since $\overline{\lambda}_{m}$ is a positive constant and $\wp$ is a positive definite matrix, it follows that $-\frac{1}{\overline{\lambda}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \leq 0$ , that is, $\ddot{q}_{m}\leq 0$ . When $-\frac{1}{\overline{\lambda}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \lt 0$ and $\ddot{q}_{m}\lt 0$ , $\sum _{i=1}^{n}\ddot{q}_{mi}\lt 0$ holds, where $\ddot{q}_{mi}$ is the $i$th element of $\ddot{q}_{m}$ . In that case, there would always exist some $\ddot{q}_{mi}\lt 0$ as $t\rightarrow \infty$ , which contradicts the previous conclusion $\lim \limits_{t\rightarrow \infty }\ddot{q}_{m}(t)=0$ . Thus, $\lim\limits _{t\rightarrow \infty }\ddot{q}_{m}\rightarrow 0$ forces $\lim\limits_{t\rightarrow \infty }\left(-\frac{1}{\overline{\lambda}_{m}}I\wp\!\left| w_{h}-\tilde{w}_{e}\right| \right)\rightarrow 0$ , that is, $\lim \limits_{t\rightarrow \infty }(| w_{h}-\tilde{w}_{e}| )\rightarrow 0$ . Therefore, the force tracking error converges to zero.

4. Experiments

In the teleoperation experimental platform shown in Figure 3, two PHANTOM Omni haptic devices are used. The master is on the left and the slave is on the right. The master is connected to the computer, and the slave is connected to the master via an IEEE 1394 FireWire cable. The proposed strategy is implemented in C++ in Visual Studio, and the haptic device application programming interface of the PHANTOM Omni is called through static linking.

Table 1. Control parameters.

Figure 3. Experimental platform.

To validate the effectiveness of the proposed strategy, comparative experiments with the scheme in ref. [Reference Gao and Ma24] are conducted. In the experiments, the initial positions of the master and slave are $q_{m}(0)=[q_{{m_{1}}}(0),q_{{m_{2}}}(0)]^{T}=[0.2356,-0.0314]^{T}, q_{s}(0)=[q_{{s_{1}}}(0),q_{{s_{2}}}(0)]^{T}=[0.1587,0.0518]^{T}$ , where $q_{{i_{1}}}(0)$ and $q_{{i_{2}}}(0)$ , $i=\{m,s\}$ , represent the initial positions of joint 1 and joint 2. $T_{1}(t)$ and $T_{2}(t)$ are shown in Figure 4. The rest of the control parameters are shown in Table 1.

Figure 4. TVDs.

Figure 5 and Figure 6 show the position tracking for the scheme in ref. [Reference Gao and Ma24] and the proposed strategy, respectively. As shown in Figure 5, when there are TVDs, the scheme in ref. [Reference Gao and Ma24] exhibits significant chattering at the beginning of the experiment. Moreover, when the operator force is applied during 5 s–15 s, the master and slave fail to achieve satisfactory tracking, resulting in a large position tracking error. In contrast, Figure 6 illustrates that the proposed strategy exhibits no significant chattering in position tracking. Furthermore, during the period when the operator force is applied, the slave can rapidly track the master with a small position tracking error. This indicates that the proposed strategy has faster transient response, higher tracking accuracy, and less chattering.

Figure 5. Position tracking (in [Reference Gao and Ma24]) (a) Joint 1 (b) Joint 2.

Figure 6. Position tracking (proposed strategy) (a) Joint 1 (b) Joint 2.

To verify the fixed-time performance, three different initial states are set

$Case1\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[-0.1176\ -0.1239\ 0.0551\ 0.2119]$ ,

$Case2\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[0.2056\ -0.1744\ -0.1916\ 0.1883]$ ,

$Case3\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[0.0053\ -0.1724\ -0.0740\ 0.1064]$.

By using (39) and (41), the upper bound of the convergence time for the position tracking error can be obtained as: $T_{\sup }=T_{rt}+T_{st}=\frac{1}{1}\frac{1}{\left(1-0.8\right)}+\frac{1}{100}\frac{1}{\left(2.3-1\right)}+\frac{1}{5}\frac{1}{\left(1-0.2\right)}+\frac{1}{5}\frac{1}{\left(1.5-1\right)}=5.658$ s.
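This arithmetic can be reproduced directly from (39) and (41); a quick check with the same gains and exponents:

```python
k4, k5, sigma1, sigma2 = 1.0, 100.0, 0.8, 2.3   # reaching-phase gains and exponents
k1, k2, phi1, phi2     = 5.0, 5.0, 0.2, 1.5     # sliding-phase gains and exponents

T_rt = 1/(k4*(1 - sigma1)) + 1/(k5*(sigma2 - 1))  # upper bound (39)
T_st = 1/(k1*(1 - phi1))   + 1/(k2*(phi2 - 1))    # upper bound (41)
print(round(T_rt + T_st, 3))                      # 5.658
```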

The position tracking for the master and slave under three initial states is shown in Figure 7. It can be observed that the proposed strategy enables the slave and master to achieve tracking within 0.5 s. This implies that the position tracking error converges within the fixed time of 5.658 s as 0.5 s $\ll$ 5.658 s. Furthermore, the convergence time does not depend on the initial states.

Figure 7. Position tracking under different initial states (a) Joint 1 (b) Joint 2.

Figure 8 and Figure 9 show the triggering intervals for the scheme in ref. [Reference Gao and Ma24] and the proposed strategy, respectively. From Figure 8, it can be observed that since the fixed triggering thresholds in ref. [Reference Gao and Ma24] are not related to the system states, the triggering instants are either very dense or very sparse. In contrast, Figure 9 shows that the triggering events of the proposed strategy are less frequent overall and the triggering intervals are much longer. Moreover, since the adaptive triggering thresholds of the proposed strategy are related to the system states, when the operator force is applied during 5 s–15 s, the triggering intervals exhibit considerable variability, demonstrating the flexibility of the proposed strategy.

Figure 8. Triggering intervals (in [Reference Gao and Ma24]) (a) Master (b) Slave.

The experimental results for force tracking of the proposed strategy are illustrated in Figure 10. It can be observed that the estimate of the operator force tracks the estimate of the environment force well, demonstrating the effectiveness of the FEs in the proposed strategy.

Remark 2. To avoid force measurement in the experiments, the operator force and environment force are estimated by the FEs. Furthermore, from Theorem 1 the estimate errors of the FEs can asymptotically approach zero. Therefore, the estimated forces rather than the measured forces are displayed in Figure 10.

Table 2. Qualitative comparison of different control methods.

Figure 9. Triggering intervals (proposed strategy) (a) Master (b) Slave.

Figure 10. Force tracking (proposed strategy) (a) Joint 1 (b) Joint 2.

Table 2 compares the average values of the position tracking errors of joint 1 and joint 2, that is, $avg(q_{{m_{1}}}-q_{{s_{1}}})$ and $avg(q_{{m_{2}}}-q_{{s_{2}}})$ , and the ratios of triggered data for the master and slave, that is, $RTI_{m}$ = (Triggered position data for the master / Total data) * 100% and $RTI_{s}$ = (Triggered position data for the slave / Total data) * 100%. It can be seen that the proposed strategy has smaller position tracking errors and lower triggering ratios compared to the scheme in ref. [Reference Gao and Ma24].

5. Conclusions

For a class of teleoperation systems with TVDs and limited bandwidth, this paper proposes a fixed-time control strategy based on adaptive event-triggered communication and FEs. The FEs accurately estimate the operator force and environment force without force sensors. The AETS, which correlates the triggering frequency with the system states, saves network resources. The fixed-time SMC achieves fixed-time convergence of the tracking error, and the convergence time is independent of the initial conditions. However, in complex communication networks there are other important issues, such as cyber-attacks. Extending the proposed strategy to address these issues remains our future work.

Author contribution

Xia Liu: Investigation (lead), Methodology (equal), Writing – review and editing (lead), Supervision (lead); Hui Wen: Software (equal), Data curation (lead), Validation (lead), Writing – original draft (lead).

Financial support

This work is supported by the Natural Science Foundation of Sichuan Province (No. 2023NSFSC0510) and the National Natural Science Foundation of China (No. 61973257).

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical approval

None.

References

Liu, Y. C., Dao, P. N. and Zhao, K. Y., “On robust control of nonlinear teleoperators under dynamic uncertainties with variable time delays and without relative velocity,” IEEE Trans Ind Inform 16(2), 1272–1280 (2020).
Yang, Y., Feng, X., Li, J. and Hua, C., “Robust fixed-time cooperative control strategy design for nonlinear multiple-master/multiple-slave teleoperation system,” J Frankl Inst 360(3), 2193–2214 (2023).
Kebria, P. M., Khosravi, A., Nahavandi, S., Shi, P. and Alizadehsani, R., “Robust adaptive control scheme for teleoperation systems with delay and uncertainties,” IEEE Trans Cybernet 50(7), 3243–3253 (2020).
Tran, M.-D. and Kang, H.-J., “Adaptive terminal sliding mode control of uncertain robotic manipulators based on local approximation of a dynamic system,” Neurocomputing 228, 231–240 (2017).
Wang, Z., Chen, Z., Liang, B. and Zhang, B., “A novel adaptive finite time controller for bilateral teleoperation system,” Acta Astronaut 144, 263–270 (2018).
Nguyen, T.-V. and Liu, Y.-C., “Advanced finite-time control for bilateral teleoperators with delays and uncertainties,” IEEE Access 9, 141951–141960 (2021).
Wang, Z., Chen, Z., Zhang, Y., Yu, X., Wang, X. and Liang, B., “Adaptive finite-time control for bilateral teleoperation systems with jittering time delays,” Int J Robust Nonlin Control 29(4), 1007–1030 (2019).
Xu, J. Z., Ge, M. F., Ding, T. F., Liang, C. D. and Liu, Z. W., “Neuro-adaptive fixed-time trajectory tracking control for human-in-the-loop teleoperation with mixed communication delays,” IET Control Theory Appl 14(19), 3193–3203 (2021).
Yang, Y., Hua, C. and Guan, X., “Multi-manipulators coordination for bilateral teleoperation system using fixed-time control approach,” Int J Robust Nonlin Control 28(18), 5667–5687 (2018).
Guo, S., Liu, Z., Li, L., Ma, Z. and Huang, P., “Fixed-time personalized variable gain tracking control for teleoperation systems with time varying delays,” J Frankl Inst 360(17), 13015–13032 (2023).
Azimifar, F., Abrishamkar, M., Farzaneh, B., Sarhan, A. A. D. and Amini, H., “Improving teleoperation system performance in the presence of estimated external force,” Robot Comp Integr Manuf 46, 86–93 (2017).
Han, L., Mao, J., Cao, P., Gan, Y. and Li, S., “Toward sensorless interaction force estimation for industrial robots using high-order finite-time observers,” IEEE Trans Ind Electron 69(7), 7275–7284 (2022).
Yang, C., Peng, G., Cheng, L., Na, J. and Li, Z., “Force sensorless admittance control for teleoperation of uncertain robot manipulator using neural networks,” IEEE Trans Syst Man Cybern Syst 51(5), 3282–3292 (2021).
Azimifar, F., Hassani, K., Saveh, A. H. and Ghomshe, F. T., “Performance analysis in delayed nonlinear bilateral teleoperation systems by force estimation algorithm,” Trans Inst Meas Control 40(5), 1637–1644 (2017).
Namnabat, M., Zaeri, A. H. and Vahedi, M., “A passivity-based control strategy for nonlinear bilateral teleoperation employing estimated external forces,” J Braz Soc Mech Sci Eng 42(12), 1–10 (2020).
Dehghan, S. A. M., Koofigar, H. R., Sadeghian, H. and Ekramian, M., “Observer-based adaptive force-position control for nonlinear bilateral teleoperation with time delay,” Control Eng Pract 107, 1–10 (2021).
Yang, Y., Guo, F., Li, J. and Luo, X., “New delay-dependent position/force hybrid controller design for uncertain telerobotics without force sensors,” Trans Inst Meas Control 46(2), 253–267 (2023).
Yuan, Y., Wang, Y. and Guo, L., “Force reflecting control for bilateral teleoperation system under time-varying delays,” IEEE Trans Ind Inform 15(2), 1162–1172 (2019).
Zhao, N., Shi, P., Xing, W. and Agarwal, R. K., “Resilient event-triggered control for networked cascade control systems under denial-of-service attacks and actuator saturation,” IEEE Syst J 16(1), 1114–1122 (2022).
Hu, S. C., Chan, C. Y. and Liu, Y. C., “Event-Triggered Control for Bilateral Teleoperation with Time Delays,” In: IEEE International Conference on Advanced Intelligent Mechatronics, Banff, Canada (IEEE, 2016) pp. 1634–1639.
Liu, Y. C. and Hu, S. C., “Nonlinear bilateral teleoperators under event-driven communication with constant time delays,” Int J Robust Nonlin Control 29(11), 3547–3569 (2019).
Li, C., Li, Y., Dong, J. and Wang, H., “Event-triggered control of teleoperation systems with time-varying delays,” In: China Automation Congress, Beijing, China (2021) pp. 1537–1542.
Hu, S. C. and Liu, Y. C., “Event-triggered control for adaptive bilateral teleoperators with communication delays,” IET Control Theory Appl 14(3), 427–437 (2020).
Gao, H. and Ma, C., “Event-triggered aperiodic intermittent sliding-mode control for master-slave bilateral teleoperation robotic systems,” Ind Robot Int J Robot Res Appl 50(3), 467–482 (2023).
Zhao, Y., Liu, P. X. and Wang, H., “Adaptive event-triggered synchronization control for bilateral teleoperation system subjected to fixed-time constraint,” Int J Adapt Control Signal Process 36(8), 2041–2064 (2022).
Wang, Z., Lam, H. K., Xiao, B., Chen, Z., Liang, B. and Zhang, T., “Event-triggered prescribed-time fuzzy control for space teleoperation systems subject to multiple constraints and uncertainties,” IEEE Trans Fuzzy Syst 29(9), 2785–2797 (2021).
de Lima, M. V., Mozelli, L. A., Neto, A. A. and Souza, F. O., “A simple algebraic criterion for stability of bilateral teleoperation systems under time-varying delays,” Mech Syst Signal Process 137, 1–11 (2020).
Yu, X., He, W., Li, H. and Sun, J., “Adaptive fuzzy full-state and output-feedback control for uncertain robots with output constraint,” IEEE Trans Syst Man Cybern Syst 51(11), 6994–7007 (2021).
Bavili, R. E., Akbari, A. and Esfanjani, R. M., “Control of teleoperation systems in the presence of varying transmission delay, non-passive interaction forces, and model uncertainty,” Robotica 39(8), 1451–1467 (2021).
Chan, L., Huang, Q. and Wang, P., “Adaptive-observer-based robust control for a time-delayed teleoperation system with scaled four-channel architecture,” Robotica 40(5), 1385–1405 (2022).
Du, H., Wen, G., Wu, D., Cheng, Y. and Lu, J., “Distributed fixed-time consensus for nonlinear heterogeneous multi-agent systems,” Automatica 113, 1–11 (2020).
Shen, H. and Pan, Y. J., “Improving tracking performance of nonlinear uncertain bilateral teleoperation systems with time-varying delays and disturbances,” IEEE-ASME Trans Mechatron 25(3), 1171–1181 (2020).
Zhang, H., Song, A., Li, H., Chen, D. and Fan, L., “Adaptive finite-time control scheme for teleoperation with time-varying delay and uncertainties,” IEEE Trans Syst Man Cybern Syst 52(3), 1552–1566 (2022).