1. Introduction
Teleoperation systems extend human capabilities to remote workspaces in applications such as space exploration, undersea resource exploration, medical rescue, and environment monitoring [Reference Liu, Dao and Zhao1]. In a teleoperation system, the operator manipulates the master robot and sends control commands via the communication network to the slave robot, which tracks the master's commands in the remote environment. Meanwhile, the slave provides environment force feedback to the master, enhancing the transparency of the teleoperation system [Reference Yang, Feng, Li and Hua2, Reference Kebria, Khosravi, Nahavandi, Shi and Alizadehsani3].
Due to the data exchange between the master and slave in the communication network, time-varying delays (TVDs) are inevitable. For teleoperation systems with TVDs, sliding mode control (SMC) is widely used due to its strong robustness [Reference Tran and Kang4]. In ref. [Reference Wang, Chen, Liang and Zhang5], a finite-time SMC was proposed for bilateral teleoperation systems with TVDs to ensure the stability and transient response performance of the system. In ref. [Reference Nguyen and Liu6], a terminal SMC was proposed for teleoperation systems with TVDs to stabilize the system and enable the position error to converge in finite time. In ref. [Reference Wang, Chen, Zhang, Yu, Wang and Liang7], for teleoperation systems with TVDs and dynamic uncertainty, a finite-time SMC was proposed to ensure system stability and finite-time convergence. However, in refs. [Reference Wang, Chen, Liang and Zhang5–Reference Wang, Chen, Zhang, Yu, Wang and Liang7] the convergence time depends on the initial values of the system states. To solve this problem, in ref. [Reference Xu, Ge, Ding, Liang and Liu8] an adaptive fixed-time SMC was designed for teleoperation systems with TVDs and parameter uncertainty to achieve stabilization and trajectory tracking of the system. In ref. [Reference Yang, Hua and Guan9], an integral SMC was proposed for teleoperation systems with TVDs and external disturbance, ensuring the system stability and synchronization error convergence within a fixed time. In ref. [Reference Guo, Liu, Li, Ma and Huang10], for teleoperation systems with TVDs and uncertainty, a fixed-time SMC was designed to enhance the tracking performance while ensuring the system stability. Although in refs. [Reference Xu, Ge, Ding, Liang and Liu8–Reference Guo, Liu, Li, Ma and Huang10] fixed-time convergence of the position tracking can be achieved, the force tracking is not considered and thus the transparency of the teleoperation systems cannot be guaranteed.
Good transparency improves the operator's capacity to execute complex tasks, requiring accurate perception of the interaction force between the slave and the remote environment. Typically, these forces are measured by force sensors, which may be limited by cost and noise [Reference Azimifar, Abrishamkar, Farzaneh, Sarhan and Amini11–Reference Yang, Peng, Cheng, Na and Li13]. To circumvent force sensors, in ref. [Reference Azimifar, Hassani, Saveh and Ghomshe14], a PD controller based on a force estimator (FE) was designed for teleoperation systems with constant time delays to achieve stable position and force tracking. In ref. [Reference Namnabat, Zaeri and Vahedi15], an enhanced FE and a passive control strategy were designed to predict the operator force and environment force, ensuring precise position and force tracking of teleoperation systems with constant time delays. In ref. [Reference Dehghan, Koofigar, Sadeghian and Ekramian16], an observer-based control strategy was developed for teleoperation systems with constant time delays to ensure the position and force tracking. In ref. [Reference Yang, Guo, Li and Luo17], a sliding mode force observer was designed to estimate the operator force and environment force within a fixed time, and a P-like controller was designed to achieve stable position and force tracking of teleoperation systems with TVDs. In ref. [Reference Yuan, Wang and Guo18], a dynamic gain force observer was developed for teleoperation systems with TVDs, which employed adaptive laws and wave variables to obtain satisfactory control performance. Although in refs. [Reference Azimifar, Hassani, Saveh and Ghomshe14–Reference Yuan, Wang and Guo18] system transparency is enhanced through FEs instead of force sensors, fixed-time convergence of the tracking error cannot be ensured. Moreover, it is implicitly assumed in refs. [Reference Azimifar, Hassani, Saveh and Ghomshe14–Reference Yuan, Wang and Guo18] that continuous data transmission is maintained between the master and slave, which is prone to network congestion and degrades the control performance.
In fact, continuous data transmission is often unavailable in the communication network of teleoperation systems, as the network bandwidth is limited. Therefore, network congestion inevitably arises, which will degrade the control performance or even make the system unstable. Event-triggered control serves as an effective method to relieve the system from relying on the communication network resources, ensuring system performance while enhancing resource utilization [Reference Zhao, Shi, Xing and Agarwal19]. During event-triggered communication, the transmission of each state depends on its corresponding triggering condition. If the triggering condition is satisfied, the current state information is transmitted. Otherwise, the state information at the last triggering moment is retained. In ref. [Reference Hu, Chan and Liu20], for teleoperation systems under constant time delays, an event-triggered scheme was constructed by scattering transformation and an adaptive controller was designed to ensure the system stability and position tracking. In ref. [Reference Liu and Hu21], for teleoperation systems with constant time delays, an event-triggered scheme was proposed based on joint velocities, and a P-like controller was designed to ensure the system stability and position tracking. In ref. [Reference Li, Li, Dong and Wang22], an event-triggered P-like control was investigated for teleoperation systems with TVDs to achieve system stability and position synchronization. In ref. [Reference Hu and Liu23], an event-triggered coordination control for teleoperation systems with constant time delays was introduced to ensure the system stability and position tracking, where the event-triggered scheme was constructed based on auxiliary variables associated with position and velocity. In ref. 
[Reference Gao and Ma24], an event-triggered scheme based on norm of sliding mode was designed to enhance the sensitivity of the controller and save the communication network resources in teleoperation systems. In ref. [Reference Zhao, Liu and Wang25], an event-triggered backstepping control for teleoperation systems with constant time delays was proposed, which could achieve system stability within a fixed time and avoid unnecessary resource consumption. In ref. [Reference Wang, Lam, Xiao, Chen, Liang and Zhang26], an event-triggered prescribed-time control based on exponential Lyapunov functions was presented for teleoperation systems with multiple constraints and TVDs. However, the triggering thresholds in refs. [Reference Hu, Chan and Liu20–Reference Wang, Lam, Xiao, Chen, Liang and Zhang26] are constant and cannot be adjusted according to the system states, which may waste communication resources and degrade control performance.
Therefore, this paper proposes a fixed-time control strategy for teleoperation systems based on adaptive event-triggered communication and FEs. This strategy flexibly and effectively reduces redundant data transmission and achieves fixed-time convergence of tracking error in teleoperation systems with TVDs. The main contributions of this paper are:
-
Two FEs are designed to indirectly acquire the operator force and environment force without force sensors.
-
An adaptive event-triggered scheme (AETS) is designed which can automatically adjust the triggering thresholds based on the system states. Compared to an event-triggered scheme with fixed triggering thresholds, the designed AETS can further reduce unnecessary data transmission and conserve network resources.
-
A fixed-time SMC is developed by utilizing the FEs and event-triggered states. Compared to the conventional SMC, the fixed-time SMC can ensure the convergence of tracking error within a fixed time under TVDs. Meanwhile, it can guarantee the system stability and enhance the position and force tracking performance.
2. Dynamical model of teleoperation systems
The dynamic model of a teleoperation system with $n$ -DOF master and slave can be described as [Reference Wang, Chen, Zhang, Yu, Wang and Liang7]

$M_{m}(q_{m})\ddot{q}_{m}+C_{m}(q_{m},\dot{q}_{m})\dot{q}_{m}+g_{m}(q_{m})=\tau _{m}+F_{h}$

$M_{s}(q_{s})\ddot{q}_{s}+C_{s}(q_{s},\dot{q}_{s})\dot{q}_{s}+g_{s}(q_{s})=\tau _{s}-F_{e}$ (1)
where the subscript $i=\{m,s\}$ represents the master and slave, respectively. $q_{i}\in R^{n\times 1}$ represents the joint position, $\dot{q}_{i}\in R^{n\times 1}$ represents the velocity, $\ddot{q}_{i}\in R^{n\times 1}$ represents the acceleration. $M_{i}(q_{i})\in R^{n\times n}$ represents the inertia matrix, $C_{i}(q_{i},\dot{q}_{i})\in R^{n\times n}$ represents the Coriolis/centrifugal matrix, $g_{i}(q_{i})\in R^{n\times 1}$ represents the gravitational force, $\tau _{i}\in R^{n\times 1}$ is the control input. $F_{h}\in R^{n\times 1}$ is the operator force and $F_{e}\in R^{n\times 1}$ is the environment force.
The dynamic model (1) has the following properties [Reference de Lima, Mozelli, Neto and Souza27–Reference Chan, Huang and Wang30]:
Property 1: The inertia matrix $M_{i}(q_{i}(t))$ is symmetric positive definite and there exist positive constants $\underline{{\unicode{x019B}}}_{i}$ and $\overline{\unicode{x019B}}_{i}$ such that $0\lt \underline{\unicode{x019B}}_{i}I\leq M_{i}(q_{i})\leq \overline{\unicode{x019B}}_{i}I$ , where $I\in R^{n\times n}$ is the identity matrix.
Property 2: The matrix $\dot{M}_{i}(q_{i})-2C_{i}(q_{i},\dot{q}_{i})$ is skew-symmetric, that is, $x^{T}(\dot{M}_{i}(q_{i})-2C_{i}(q_{i},\dot{q}_{i}))x=0$ holds for any vector $x\in R^{n\times 1}$ .
Property 3: For vectors $p_{1}, p_{2}\in R^{n\times 1}$ , there always exists a positive constant ${\hslash }_{i}$ such that the Coriolis/centrifugal matrix is bounded, that is, $\| C_{i}(q_{i},p_{1})p_{2}\| \leq {\hslash }_{i}\| p_{1}\| \| p_{2}\|$ .
Property 4: When $\dot{q}_{i}$ and $\ddot{q}_{i}$ are bounded, $\dot{C}_{i}(q_{i},\dot{q}_{i})$ is also bounded.
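Properties 1 and 2 can be checked numerically on a standard two-link planar arm. The sketch below uses illustrative inertia parameters (not values from this paper) and verifies that $x^{T}(\dot{M}-2C)x=0$ and $M(q)\gt 0$ at a random configuration:

```python
import numpy as np

# Illustrative two-link planar arm; ALPHA, BETA, DELTA are arbitrary
# positive inertia parameters chosen for the check, not from the paper.
ALPHA, BETA, DELTA = 3.0, 0.5, 1.0

def M(q):
    """Inertia matrix M(q): symmetric positive definite (Property 1)."""
    c2 = np.cos(q[1])
    return np.array([[ALPHA + 2 * BETA * c2, DELTA + BETA * c2],
                     [DELTA + BETA * c2,     DELTA]])

def C(q, dq):
    """Coriolis/centrifugal matrix C(q, dq)."""
    s2 = np.sin(q[1])
    return np.array([[-BETA * s2 * dq[1], -BETA * s2 * (dq[0] + dq[1])],
                     [ BETA * s2 * dq[0], 0.0]])

def Mdot(q, dq):
    """Analytic time derivative of M(q) along the trajectory."""
    s2 = np.sin(q[1])
    return np.array([[-2 * BETA * s2 * dq[1], -BETA * s2 * dq[1]],
                     [-BETA * s2 * dq[1],     0.0]])

rng = np.random.default_rng(0)
q, dq, x = rng.standard_normal(2), rng.standard_normal(2), rng.standard_normal(2)

N = Mdot(q, dq) - 2 * C(q, dq)
assert abs(x @ N @ x) < 1e-12                 # Property 2: skew-symmetry
assert np.all(np.linalg.eigvalsh(M(q)) > 0)   # Property 1: M(q) > 0
```

For this parameterization $\det M = 2-0.25\cos^{2}q_{2}\geq 1.75$, so Property 1 holds for every configuration, not just the sampled one.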
3. Design of the control strategy
The proposed fixed-time control strategy based on adaptive event-triggered communication and FEs is shown in Figure 1. Firstly, the FEs are used to acquire the estimates of the operator force, $w_{h}$ , and the environment force, $w_{e}$ . Then, the transmission of the position, the velocity, and the estimate of the environment force is regulated by the AETS. Finally, the fixed-time SMC ensures that the tracking error of the teleoperation system under TVDs $T_{1}(t)$ and $T_{2}(t)$ converges within a fixed time.
3.1. FEs
Two FEs are designed to acquire the operator force and environment force instead of directly using force sensors.
The FE for the master is designed as
where $w_{h}(t)=\hat{F}_{h}$ is the estimate of the operator force $F_{h}, {\mathcal{P}}_{m}$ is a positive definite gain matrix and ${\ell }_{h}=\chi _{m}{M_{m}}^{-1}(q_{m})$ . Let
Substituting ${\ell }_{h}$ into (3) and integrating both sides yields
where $\chi _{m}\gt 0$ is a constant. Define the estimate error of the operator force as $\overline{F}_{h}=F_{h}-w_{h}$ . Then, from (2)–(4), we obtain
Similarly, the FE for the slave is designed as
where $w_{e}(t)=\hat{F}_{e}$ is the estimate of the environment force $F_{e}$ . Besides, ${\mathcal{P}}_{s}, {\mathcal{Q}}$ are positive definite gain matrices and ${\ell }_{e}=\chi _{s}{M_{s}}^{-1}(q_{s})$ . Let
Substituting ${\ell }_{e}$ into (7) and integrating both sides yields
where $\chi _{s}\gt 0$ is a constant. Define the estimate error of the environment force as $\overline{F}_{e}=F_{e}-w_{e}$ . Then, from (6)–(8), we obtain
In practice, the operator force and the environment force are usually bounded, that is, $\| F_{h}\| \lt {\mathcal{F}}_{h}, \| F_{e}\| \lt {\mathcal{F}}_{e}$ [Reference Yang, Guo, Li and Luo17]. However, since the bounds ${\mathcal{F}}_{h}$ and ${\mathcal{F}}_{e}$ are usually unknown, the following adaptive laws are designed to estimate them
Theorem 1. For the teleoperation system (1) under TVDs, using the FEs (2), (6) and the adaptive laws (10), the estimate errors of the operator force and environment force asymptotically converge to zero, that is, $\lim \limits_{t\rightarrow \infty }\overline{F}_{e}\rightarrow 0$ and $\lim \limits_{t\rightarrow \infty }\overline{F}_{h}\rightarrow 0$ .
Proof: Define a Lyapunov function as
Differentiating (11) and substituting (5) and (9) into it, we have
Substituting the adaptive laws (10) into (12) yields
Since ${\ell }_{h}=\chi _{m}{M_{m}}^{-1}(q_{m}), {\ell }_{e}=\chi _{s}{M_{s}}^{-1}(q_{s}), \chi _{m}$ and $\chi _{s}$ are positive constants, and both ${M_{m}}^{-1}(q_{m})$ and ${M_{s}}^{-1}(q_{s})$ are positive definite matrices, it follows that $\dot{V}_{1}\leq 0$ . Consequently, $\overline{F}_{h}$ and $\overline{F}_{e}$ are bounded. Also, $\dot{V}_{1}=0$ holds if and only if $\overline{F}_{h}=0$ and $\overline{F}_{e}=0$ . Hence, by LaSalle's invariance principle, the estimate errors of the operator force $\overline{F}_{h}$ and environment force $\overline{F}_{e}$ asymptotically converge to zero, that is, $\lim\limits _{t\rightarrow \infty }\overline{F}_{e}\rightarrow 0$ and $\lim\limits _{t\rightarrow \infty }\overline{F}_{h}\rightarrow 0$ .
3.2. AETS
To save limited network resources, the AETS is designed as shown in Figure 2. The adaptive triggering thresholds and triggering errors constitute the triggering conditions. Then, an event detector (ED) evaluates these conditions. Once the triggering conditions are satisfied, the state information is transmitted through the communication network. Simultaneously, the zero-order hold (ZOH) preserves the state information that meets the triggering conditions until the next triggering moment occurs.
Define the triggered position error and triggered velocity error for the master as
where $\hat{q}_{m}=q_{m}(t_{l}^{mq})$ and $\hat{\dot{q}}_{m}=\dot{q}_{m}(t_{l}^{mv})$ are the triggered position and triggered velocity transmitted at the current triggering moment. Therefore, the adaptive event-triggering conditions are designed as
where $\Xi _{m}$ is the weighted matrix of the triggering conditions, and $\delta _{1}, \delta _{2}$ are adaptive triggering thresholds for the master designed as
where $\delta _{1\min }$ and $\delta _{1\max }$ , $\delta _{2\min }$ and $\delta _{2\max }$ represent the minimum and maximum values of the position triggering threshold and the velocity triggering threshold for the master, respectively. Also,
In (17), $a\gt 0, \delta _{1ed}$ and $\delta _{2ed}$ represent the adaptive triggering thresholds of the position and velocity for the master at the last triggering moment, respectively. When the triggering conditions (15) are satisfied, the values of $\delta _{1ed}$ and $\delta _{2ed}$ are updated to the current triggering thresholds $\delta _{1}$ and $\delta _{2}$ . In addition, the initial values of $\delta _{1ed}$ and $\delta _{2ed}$ are $\delta _{1\max }$ and $\delta _{2\max }$ . Besides, $\mathcal{J}_{1}$ and $\mathcal{J} _{2}$ are
where $b\lt 0$ , and $q_{m}(t_{l-1}^{mq}), \dot{q}_{m}(t_{l-1}^{mv})$ represent the triggered position and triggered velocity transmitted at the last triggering moment. Notice that when $b\gt 0$ , the values of $\mathcal{J} _{1}$ or $\mathcal{J} _{2}$ may easily exceed $\delta _{1\max }$ or $\delta _{2\max }$ . Therefore, to ensure $\mathcal{J} _{1}\lt \delta _{1\max }$ and $\mathcal{J} _{2}\lt \delta _{2\max }$ , b must be less than 0.
Now, define the triggered position error, triggered velocity error, and triggered estimate error of the environment force for the slave as
where $\hat{q}_{s}=q_{s}(t_{r}^{sq}), \widehat{\dot{q}}_{s}=\dot{q}_{s}(t_{r}^{sv})$ and $\hat{w}_{e}=w_{e}(t_{r}^{f})$ . Thus, the adaptive event-triggering conditions for the slave are designed as
where $\Xi _{s}$ is the weighted matrix of the triggering conditions, and $\delta _{3}, \delta _{4}$ and $\delta _{5}$ are the adaptive triggering thresholds for the slave designed as
where $\delta _{3\min }$ and $\delta _{3\max }, \delta _{4\min }$ and $\delta _{4\max }, \delta _{5\min }$ and $\delta _{5\max }$ represent the minimum and maximum values of the position triggering threshold, velocity triggering threshold, and the estimate of the environment force triggering threshold for the slave, respectively. Additionally,
In (22), $a\gt 0$ is a constant, $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ represent the adaptive triggering thresholds of the position, velocity, and the estimate of the environment force for the slave at the last triggering moment, respectively. When the triggering conditions (20) are satisfied, the values of $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ are updated to the current triggering thresholds $\delta _{3}, \delta _{4}$ , and $\delta _{5}$ . In addition, the initial values of $\delta _{3ed}, \delta _{4ed}$ , and $\delta _{5ed}$ are set to $\delta _{3\max }, \delta _{4\max }$ and $\delta _{5\max }$ . Besides, $\mathcal{J} _{3}, \mathcal{J} _{4}$ , and $\mathcal{J} _{5}$ in (21) are
In (23), $b\lt 0$ , and $q_{s}(t_{r-1}^{sq}), \dot{q}_{s}(t_{r-1}^{sv})$ and $w_{e}(t_{r-1}^{f})$ represent the triggered position, triggered velocity and triggered estimate of the environment force transmitted at the last triggering moment, respectively. From (18) and (23) one can see that the adaptive triggering thresholds include the current triggered values as well as the last triggered values of the position, velocity for the master and slave, and the estimate of the environment force.
Now, the AETS can be designed as
where the time series $t_{l}^{mq}, t_{l}^{mv}, t_{r}^{sq}, t_{r}^{sv}$ and $t_{r}^{f}$ denote the current triggering moments of the position for the master, velocity for the master, position for the slave, velocity for the slave, and the estimate of the environment force, respectively. $t_{l+1}^{mq}, t_{l+1}^{mv}, t_{r+1}^{sq}, t_{r+1}^{sv}$ , and $t_{r+1}^{f}$ denote the next triggering moments of $t_{l}^{mq}, t_{l}^{mv}, t_{r}^{sq}, t_{r}^{sv}$ and $t_{r}^{f}$ , where $l\in \mathrm{N }, r\in \mathrm{N }$ and $\mathrm{N }$ denotes the set of natural numbers. As the triggering thresholds are associated with the current and last values of the states, when the triggered errors increase, the event-triggering thresholds will appropriately decrease to increase the data transmission frequency. Conversely, the event-triggering thresholds increase to reduce the data transmission frequency. That is, the triggering thresholds can be dynamically adjusted based on the adaptive triggering thresholds designed in (16) and (21).
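The mechanism above (triggering error vs. adaptive threshold, ZOH, and threshold adaptation with $b\lt 0$) can be sketched for a single channel. The exponential adaptation rule below is a hypothetical stand-in for the paper's exact update laws (16)–(18) and (21)–(23); only the overall behavior is illustrated:

```python
import numpy as np

def adapt_threshold(delta_ed, x_new, x_last, b, d_min, d_max):
    # Hypothetical adaptation in the spirit of (16)-(18): the threshold
    # shrinks when the state changed a lot between consecutive triggers
    # (b < 0) and is clamped to [delta_min, delta_max].
    j = delta_ed * np.exp(b * np.linalg.norm(x_new - x_last))
    return float(np.clip(j, d_min, d_max))

class EventChannel:
    """One adaptively event-triggered channel with zero-order hold (ZOH)."""
    def __init__(self, x0, d_min, d_max, b=-1.0):
        self.held = np.atleast_1d(np.asarray(x0, dtype=float)).copy()
        self.delta = d_max                       # initial threshold = delta_max
        self.d_min, self.d_max, self.b = d_min, d_max, b

    def step(self, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if np.linalg.norm(x - self.held) >= self.delta:   # condition met: transmit
            self.delta = adapt_threshold(self.delta, x, self.held,
                                         self.b, self.d_min, self.d_max)
            self.held = x.copy()
            return self.held.copy(), True
        return self.held.copy(), False           # ZOH keeps the last triggered value

channel = EventChannel(x0=0.0, d_min=0.02, d_max=0.1)
signal = 0.005 * np.arange(200)                  # slow ramp as a stand-in trajectory
events = sum(channel.step(v)[1] for v in signal)
assert 0 < events < len(signal)                  # only a fraction of samples sent
```

On the ramp, only a handful of the 200 samples trigger a transmission; the rest are reconstructed by the ZOH, which is the source of the network savings claimed above.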
Remark 1. In the AETS (24), a state is transmitted only when its triggering condition is satisfied, and each triggering moment occurs strictly after the previous one. This prevents zero intervals between two consecutive triggering moments, thereby avoiding the Zeno phenomenon.
3.3. Fixed-time SMC
Based on the FEs and AETS presented in Sections 3.1 and 3.2, a fixed-time SMC for the master and slave is designed to ensure the convergence of the tracking error under TVDs.
Define the position tracking error as
where $\tilde{q}_{m}=\hat{q}_{m}(t-T_{1}(t))$ and $\tilde{q}_{s}=\hat{q}_{s}(t-T_{2}(t))$ are the triggered positions for the master and slave at the current triggering moment affected by TVDs. Differentiating (25) with respect to time yields
According to (25) and (26), the sliding mode surface is designed as
where $k_{m1}\gt 0, k_{m2}\gt 0, k_{s1}\gt 0, k_{s2}\gt 0$ are constant gains. In addition, $0\lt \varphi _{m1}\lt 1, \varphi _{m2}\gt 1, 0\lt \varphi _{s1}\lt 1, \varphi _{s2}\gt 1$ , and $sig(\cdot )^{{\mathcal{W}}}=|{\cdot}| ^{{\mathcal{W}}}\mathrm{sgn}({\cdot})$ , where $\mathrm{sgn}({\cdot})$ is the sign function.
Differentiating (27) leads to
Therefore, the fixed-time SMC can be designed as
where
where $\wp$ is a positive definite matrix, $\tilde{w}_{e}=\hat{w}_{e}(t-T_{2}(t))$ . $k_{m3}\gt 0, k_{m4}\gt 0, k_{m5}\gt 0, k_{s3}\gt 0, k_{s4}\gt 0$ , and $k_{s5}\gt 0$ are constant gains. Besides, $0\lt \sigma _{m1}\lt 1, \sigma _{m2}\gt 1, 0\lt \sigma _{s1}\lt 1, \sigma _{s2}\gt 1$ . Eq. (30) is the equivalent control law for the master and slave, while (31) is the double-power convergence law. Compared to the convergence law in the traditional SMC, the double-power convergence law allows the system to have faster convergence.
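The operator $sig(\cdot )^{\mathcal{W}}$ used in the sliding surface and the double-power convergence law can be implemented elementwise. The sketch below shows the operator and the double-power term only; the gains and exponents are illustrative, not the paper's values, and the full law (31) also involves the gains $k_{i3}$:

```python
import numpy as np

def sig(x, w):
    """Elementwise sig(x)^w = |x|^w * sgn(x)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x) ** w * np.sign(x)

def double_power_rate(s, k4=2.0, k5=2.0, sigma1=0.5, sigma2=1.5):
    # Double-power reaching term: the sig(s)^{sigma1} part (exponent < 1)
    # dominates near s = 0, while sig(s)^{sigma2} (exponent > 1) dominates
    # far from the surface, which is why convergence is faster than with
    # a single-power law.
    return -k4 * sig(s, sigma1) - k5 * sig(s, sigma2)

assert sig(-4.0, 0.5) == -2.0                          # |-4|^0.5 * sgn(-4)
assert sig(np.array([9.0, -9.0]), 0.5).tolist() == [3.0, -3.0]
assert double_power_rate(0.0) == 0.0                   # rate vanishes on the surface
```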
Substituting (29)-(31) into (1), the closed-loop system is obtained as
Lemma 1 [Reference Du, Wen, Wu, Cheng and Lu31]: For a nonlinear system $\dot{x}=f(x,t), x(0)=x_{0}$ , if there exists a continuous positive definite Lyapunov function $V(x)\colon R^{n\times 1}\rightarrow R^{+}$ satisfying

$\dot{V}(x)\leq -\Im _{1}V^{a}(x)-\Im _{2}V^{b}(x)$ (33)

where $x\in R^{n\times 1}, \Im _{1}\gt 0, \Im _{2}\gt 0$ and $0\lt a\lt 1\lt b$ , then the nonlinear system is globally fixed-time stable with the convergence time bounded by $T_{st}$ as

$T_{st}\leq \frac{1}{\Im _{1}(1-a)}+\frac{1}{\Im _{2}(b-1)}$ (34)
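Lemma 1 can be illustrated on the scalar system $\dot{s}=-\Im _{1}sig(s)^{a}-\Im _{2}sig(s)^{b}$ (take $V=|s|$): the settling time stays below $\frac{1}{\Im _{1}(1-a)}+\frac{1}{\Im _{2}(b-1)}$ for widely different initial conditions. The gains and exponents below are illustrative values, not taken from the paper:

```python
def sig(x, w):
    # sig(x)^w = |x|^w * sgn(x)
    return ((x > 0) - (x < 0)) * abs(x) ** w

def settle_time(s0, g1=2.0, g2=2.0, a=0.5, b=1.5, dt=1e-4, tol=1e-3):
    """Forward-Euler simulation of s_dot = -g1*sig(s)^a - g2*sig(s)^b;
    returns the first time |s| drops below tol."""
    s, t = float(s0), 0.0
    while abs(s) >= tol:
        s += dt * (-g1 * sig(s, a) - g2 * sig(s, b))
        t += dt
    return t

# Fixed-time bound of Lemma 1: 1/(g1*(1-a)) + 1/(g2*(b-1)) = 1 + 1 = 2 s
bound = 1.0 / (2.0 * (1 - 0.5)) + 1.0 / (2.0 * (1.5 - 1))
for s0 in (0.5, 10.0, 1000.0):       # very different initial conditions
    assert settle_time(s0) < bound   # settling time is bounded independent of s0
```

Even when the initial condition grows by three orders of magnitude, the settling time saturates well below the 2 s bound, which is the defining difference between fixed-time and finite-time convergence.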
Theorem 2. For the teleoperation system (1), using the FEs (2), (6), the AETS (24), along with the fixed-time SMC (29)–(31), the system stability within a fixed time $T_{\sup }$ is ensured. Moreover, the upper bound of the convergence time for the position tracking error is $T_{\sup }=T_{rt}+T_{st}=\frac{1}{k_{4}}\frac{1}{\left(1-\sigma _{1}\right)}+\frac{1}{k_{5}}\frac{1}{\left(\sigma _{2}-1\right)}+\frac{1}{k_{1}}\frac{1}{\left(1-\varphi _{1}\right)}+\frac{1}{k_{2}}\frac{1}{\left(\varphi _{2}-1\right)}$ . Furthermore, the force tracking error also converges to zero, that is, $\lim\limits _{t\rightarrow \infty }(| w_{h}-\tilde{w}_{e}| )\rightarrow 0$ .
Proof. Define a Lyapunov function as
Differentiating (35), using Property 2, and substituting (28) into (35) yields
Since the TVDs $T_{1}(t), T_{2}(t)$ and their derivatives are usually bounded [Reference Shen and Pan32, Reference Zhang, Song, Li, Chen and Fan33], then according to (32) and (26) we can get
Substituting (29)-(31) into (37), we have
where $k_{4}=\min \left(k_{m4},k_{s4}\right), k_{5}=\min \left(k_{m5},k_{s5}\right), \sigma _{1}=\min \left(\frac{\sigma _{m1}+1}{2},\frac{\sigma _{s1}+1}{2}\right), \sigma _{2}=\min \left(\frac{\sigma _{m2}+1}{2},\frac{\sigma _{s2}+1}{2}\right)$ . According to Lemma 1 and (38), the system states converge to the sliding mode surface within a fixed time, and hence the system is stable. Therefore, all signals in $V_{2}(t)$ are bounded and the reaching time $T_{rt}$ of the system to the sliding surface is bounded by $T_{\sup 1}$ , that is,

$T_{rt}\leq T_{\sup 1}=\frac{1}{k_{4}\left(1-\sigma _{1}\right)}+\frac{1}{k_{5}\left(\sigma _{2}-1\right)}$ (39)
Thus, when the system reaches the sliding mode, we have $s_{m}=s_{s}=0$ . Then (27) can be rewritten as
From Lemma 1 and (40), it can be seen that the position tracking error converges to zero within a fixed time $T_{st}$ , which is bounded by $T_{\sup 2}$ as follows

$T_{st}\leq T_{\sup 2}=\frac{1}{k_{1}\left(1-\varphi _{1}\right)}+\frac{1}{k_{2}\left(\varphi _{2}-1\right)}$ (41)
where $k_{1}=\min (k_{m1},k_{s1}), k_{2}=\min (k_{m2},k_{s2}), \varphi _{1}=\min (\varphi _{m1},\varphi _{s1})$ , and $\varphi _{2}=\min (\varphi _{m2},\varphi _{s2})$ . From (39) and (41), the convergence times $T_{rt}$ and $T_{st}$ do not depend on the initial states.
Next, the force tracking performance will be proved. Since we have proved that the system is stable, it is clear that $q_{i}(t)\in {\mathcal{L}}_{\infty }, \hat{q}_{m}(t-T_{2}(t))\in {\mathcal{L}}_{\infty }$ . Then we have $e_{s}(t-T_{2}(t))\in {\mathcal{L}}_{\infty }$ . As $q_{m}(t)-\tilde{q}_{s}(t)=e_{s}(t-T_{2}(t))+\int _{0}^{T_{2}(t)}\dot{q}_{s}(t-\theta )d\theta +q_{m}-q_{s}$ and $\int _{0}^{T_{2}(t)}\dot{q}_{s}(t-\theta )d\theta \in {\mathcal{L}}_{\infty }$ , it can be obtained that $q_{m}(t)-\tilde{q}_{s}(t)\in {\mathcal{L}}_{\infty }$ . Similarly, $q_{s}(t)-\tilde{q}_{m}(t)\in {\mathcal{L}}_{\infty }$ . According to (1), Property 1, Property 3, and Property 4, we have $\ddot{q}_{m}\in {\mathcal{L}}_{\infty }, \ddot{q}_{s}\in {\mathcal{L}}_{\infty }$ . Thus, $\dot{q}_{m}$ and $\dot{q}_{s}$ are uniformly continuous. According to Barbalat’s Lemma [Reference Dehghan, Koofigar, Sadeghian and Ekramian16], it can be deduced that
Further, according to $\ddot{q}_{m}\in {\mathcal{L}}_{\infty }, \ddot{q}_{s}\in {\mathcal{L}}_{\infty }$ , using Barbalat’s Lemma, one can deduce that $\lim\limits_{t\rightarrow \infty }\ddot{q}_{m}(t)=0$ and $\lim\limits_{t\rightarrow \infty }\ddot{q}_{s}(t)=0$ . According to Theorem 1, we can obtain
Substituting (42)-(44) into (32), we can get
Multiplying $M_{m}(q_{m})^{-1}$ on both sides of (45) yields
From Property 1, it follows that $\frac{1}{{\overline{\unicode{x019B}}}_{m}}I\leq M_{m}\!\left(q_{m}\right)^{-1}$ , that is, $-\frac{1}{\overline{\unicode{x019B}}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \geq - {M_{m}}\!\left(q_{m}\right)^{-1}\wp \!\left| w_{h}-\tilde{w}_{e}\right|$ . Thus,
Since $\overline{{\unicode{x019B}}}_{m}$ is a positive constant and $\wp$ is a positive definite matrix, it follows that $-\frac{1}{\overline{\unicode{x019B}}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \leq 0$ , that is, $\ddot{q}_{m}\leq 0$ . When $-\frac{1}{\overline{\unicode{x019B}}_{m}}I\wp \!\left| w_{h}-\tilde{w}_{e}\right| \lt 0$ and $\ddot{q}_{m}\lt 0$ , $\sum _{i=1}^{n}\ddot{q}_{mi}\lt 0$ holds, where $\ddot{q}_{mi}$ is the $i$th element of $\ddot{q}_{m}$ . Hence, there would always exist some $\ddot{q}_{mi}\lt 0$ as $t\rightarrow \infty$ , which contradicts the previous conclusion $\lim \limits_{t\rightarrow \infty }\ddot{q}_{m}(t)=0$ . Thus, $\lim\limits_{t\rightarrow \infty }\left(-\frac{1}{\overline{{\unicode{x019B}}}_{m}}I\wp\!\left| w_{h}-\tilde{w}_{e}\right| \right)\rightarrow 0$ , that is, $\lim \limits_{t\rightarrow \infty }(| w_{h}-\tilde{w}_{e}| )\rightarrow 0$ . Therefore, the force tracking error converges to zero.
4. Experiments
In the teleoperation experimental platform shown in Figure 3, two PHANTOM Omni haptic devices are used. The master is on the left and the slave is on the right. The master is connected to the computer, and the slave is connected to the master via IEEE 1394 FireWire. Besides, the proposed strategy is implemented in Visual Studio with C++. The haptic device application programming interface of the PHANTOM Omni haptic device is called through static linking.
To validate the effectiveness of the proposed strategy, comparative experiments with the scheme in ref. [Reference Gao and Ma24] are conducted. In the experiments, the initial positions of the master and slave are $q_{m}(0)=[q_{{m_{1}}}(0),q_{{m_{2}}}(0)]^{T}=[0.2356,-0.0314]^{T}, q_{s}(0)=[q_{{s_{1}}}(0),q_{{s_{2}}}(0)]^{T}=[0.1587,0.0518]^{T}$ , where $q_{{i_{1}}}(0)$ and $q_{{i_{2}}}(0), i=\{m,s\}$ , represent the initial positions of joint 1 and joint 2. $T_{1}(t)$ and $T_{2}(t)$ are shown in Figure 4. The rest of the control parameters are shown in Table 1.
Figure 5 and Figure 6 show the position tracking for the scheme in ref. [Reference Gao and Ma24] and the proposed strategy, respectively. As shown in Figure 5, when there are TVDs, the scheme in ref. [Reference Gao and Ma24] exhibits significant chattering at the beginning of the experiment. Moreover, when the operator force is applied during 5s–15s, the master and slave fail to achieve satisfactory tracking, resulting in a large position tracking error. In contrast, Figure 6 illustrates that the proposed strategy exhibits no significant chattering in position tracking. Furthermore, during the period when the operator force is applied, the slave can rapidly track the master with small position tracking error. This indicates that the proposed strategy has faster transient response, higher tracking accuracy, and minor chattering.
To verify the fixed-time performance, three different initial states are set
$Case1\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[-0.1176\ -0.1239\ 0.0551\ 0.2119]$ ,
$Case2\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[0.2056\ -0.1744\ -0.1916\ 0.1883]$ ,
$Case3\colon [q_{{m_{1}}}(0)\ q_{{m_{2}}}(0)\ q_{{s_{1}}}(0)\ q_{{s_{2}}}(0)]=[0.0053\ -0.1724\ -0.0740\ 0.1064]$ .
By using (39) and (41), the upper bound of the convergence time for the position tracking error can be obtained as: $T_{\sup }=T_{rt}+T_{st}=\frac{1}{1}\frac{1}{\left(1-0.8\right)}+\frac{1}{100}\frac{1}{\left(2.3-1\right)}+\frac{1}{5}\frac{1}{\left(1-0.2\right)}+\frac{1}{5}\frac{1}{\left(1.5-1\right)}=5.658$ s.
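This computation can be reproduced directly from the gain values quoted above:

```python
# Upper bound of the convergence time, using the gains quoted in the text:
# k4 = 1, sigma1 = 0.8, k5 = 100, sigma2 = 2.3 (reaching phase),
# k1 = 5, phi1 = 0.2, k2 = 5, phi2 = 1.5 (sliding phase).
k4, sigma1 = 1.0, 0.8
k5, sigma2 = 100.0, 2.3
k1, phi1 = 5.0, 0.2
k2, phi2 = 5.0, 1.5

T_rt = 1 / (k4 * (1 - sigma1)) + 1 / (k5 * (sigma2 - 1))  # reaching-time bound
T_st = 1 / (k1 * (1 - phi1)) + 1 / (k2 * (phi2 - 1))      # sliding-time bound
T_sup = T_rt + T_st

assert abs(T_sup - 5.658) < 1e-3   # matches the 5.658 s reported above
```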
The position tracking for the master and slave under three initial states is shown in Figure 7. It can be observed that the proposed strategy enables the slave and master to achieve tracking within 0.5 s. This implies that the position tracking error converges within the fixed time of 5.658 s as 0.5 s $\ll$ 5.658 s. Furthermore, the convergence time does not depend on the initial states.
Figure 8 and Figure 9 show the triggering intervals for the scheme in ref. [Reference Gao and Ma24] and the proposed strategy, respectively. From Figure 8, it can be observed that since the fixed triggering thresholds in ref. [Reference Gao and Ma24] are not related to the system states, the triggering intervals are either very dense or very sparse. In contrast, Figure 9 shows that the triggering events for the proposed strategy occur less frequently overall, with much sparser intervals. Moreover, since the adaptive triggering thresholds in the proposed strategy are related to the system states, when the operator force is applied during 5 s–15 s, the triggering intervals exhibit considerable variability, demonstrating the flexibility of the proposed strategy.
The experimental results for force tracking of the proposed strategy are illustrated in Figure 10. It can be observed that the estimates of the operator force and the environment force track each other well, demonstrating the effectiveness of the FEs in the proposed strategy.
Remark 2. To avoid force measurement in the experiments, the operator force and environment force are estimated by the FEs. Furthermore, from Theorem 1 the estimate errors of the FEs can asymptotically approach zero. Therefore, the estimated forces rather than the measured forces are displayed in Figure 10.
Table 2 compares the average values of the position tracking errors of joint 1 and joint 2, that is, $avg(q_{{m_{1}}}-q_{{s_{1}}})$ and $avg(q_{{m_{2}}}-q_{{s_{2}}})$ , and the ratios of triggered transmissions for the master and slave, that is, $RTI_{m}$ = (Triggered position data for the master / Total data) × 100% and $RTI_{s}$ = (Triggered position data for the slave / Total data) × 100%. It can be seen that the proposed strategy achieves smaller position tracking errors and lower triggering ratios compared to ref. [Reference Gao and Ma24].
5. Conclusions
For a class of teleoperation systems with TVDs and limited bandwidth, this paper proposes a fixed-time control strategy based on adaptive event-triggered communication and FEs. The FEs accurately estimate the operator force and environment force without force sensors. The AETS, which correlates the triggering frequency with the system states, can save network resources. The SMC achieves fixed-time convergence of the tracking error, and the convergence time is independent of the initial conditions. However, in complex communication networks there are other important issues, such as cyber-attacks. Therefore, extending the proposed strategy to address these issues remains our future work.
Author contribution
Xia Liu: Investigation (lead), Methodology (equal), Writing – review and editing (lead), Supervision (lead); Hui Wen: Software (equal), Data curation (lead), Validation (lead), Writing – original draft (lead).
Financial support
This work is supported by Natural Science Foundation of Sichuan Province (No. 2023NSFSC0510) and National Natural Science Foundation of China (No. 61973257).
Competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethical approval
None.