Iterative learning control for impulsive multi-agent systems with varying trial lengths*

Xiaokai Cao (a), Michal Fečkan (b, c), Dong Shen (d), JinRong Wang (a, 1)

Nonlinear Analysis: Modelling and Control, vol. 27, no. 3, pp. 445–465, 2022
Vilniaus Universitetas

Abstract: In this paper, we introduce iterative learning control (ILC) schemes with varying trial lengths (VTL) to control impulsive multi-agent systems (I-MAS). We use a domain alignment operator to characterize each tracking error so that the error can fully update the control function during each iteration. Then we analyze the system's uniform convergence to the target leader. Further, we use two local average operators to optimize the control function so that it makes full use of the iteration errors. Finally, numerical examples are provided to verify the theoretical results.

Keywords: impulsive multi-agent system, consensus tracking, fractional iterative learning control, domain alignment operator, varying trial lengths.


Received: 11 October 2020

Revised: 24 July 2021

Published: 26 January 2022

1 Introduction

With the development of swarm intelligence algorithms, multi-agent systems (MASs) are widely used in communication networks, wireless sensor networks, and unmanned vehicles. The consensus problem is a basic problem for MASs because it has a wide range of applications in formation control, distributed estimation, and congestion control. It is essentially the agents' consensus tracking of a given target trajectory through the network. A multi-agent system is a system abstracted from the biological world, and a biological population may suddenly change state at certain moments: due to predation, disease, or migration, changes in population status can occur. For this situation, MASs with impulses can well describe the inevitable interference during actual system operation. Research on consensus tracking for impulsive MASs studies whether an agent can return to a predetermined trajectory through information exchange after being subject to external interference. In this regard, Cui et al. [6] carried out relevant research, and Zhang and coauthors [18, 30, 32] considered the consensus problem of impulsive MASs in the traditional consensus framework. In addition, the impulsive control approach has advantages in simplicity and flexibility for such systems because standard continuous state information is not required. As a consequence, this approach has been employed to study adaptive consensus and synchronization problems [22, 23] and consensus problems [5, 26] for MASs.

Iterative learning control (ILC) is suitable for robots performing trajectory tracking tasks over a finite time interval. ILC uses the error information measured in one or several previous tracking batches to correct the next control input, which improves tracking accuracy along the iteration axis. ILC was first proposed in [2] for a robot, and Ahn and Chen [1] applied ILC to consensus trajectory tracking of a MAS. Recently, ILC laws have been extensively studied for various types of MASs [19]. Note that MASs with impulses can generate discontinuous inputs, so it remains challenging to determine whether ILC can be successfully applied to collect the sampled error data from each agent and track a continuous or discontinuous trajectory, i.e., to achieve leader-following consensus for nonlinear MAS dynamics with impulses [4]. In addition, [7, 8] used Lyapunov stability theory to analyze the coordination performance of MASs.

Under normal circumstances, ILC requires the same time length for each iteration cycle [12, 24]. However, in some practical applications, due to inherent properties of the system or the needs of the operator, the operation may be terminated early; that is, the trial length of an iteration may be shorter than the full trial length. This motivated the study of ILC with varying trial lengths (VTL). Li et al. [10, 29] considered continuous-time nonlinear systems and discrete-time linear systems and designed an averaging operator to construct an ILC scheme. Subsequently, Li and Shen [9] proposed two improved schemes to control discrete linear systems, and in [11] the ILC problem for nonlinear dynamic systems was considered. Shen et al. [13, 25, 31] studied ILC with VTL by using a composite energy function. Liu et al. [14] used two-dimensional Kalman filtering to study ILC with VTL.

Fractional-order calculus was first proposed in the correspondence between Leibniz and l'Hôpital and has a history of more than 300 years. In recent years, the viscoelasticity and memory effects captured by fractional calculus have attracted wide attention in engineering applications, and fractional calculus has become an important tool in numerical computation. In general, fractional ILC has the following advantages:

  • The fractional iterative learning law covers the PID learning law.

  • Fractional iterative learning laws have a weighting function (a singular kernel) and an additional parameter to adjust the learning procedure.

  • Fractional iterative learning laws have a memory function and fully retain global information, which can be used to improve the learning effect (see the numerical sketch after this list).
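To make the memory property concrete, here is a minimal numerical sketch (not taken from the paper; the grid, the test function, and all names are illustrative) of the Grünwald–Letnikov approximation of a fractional derivative of order α, in which every past sample contributes to the current value through slowly decaying binomial weights:

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of the
    uniformly sampled values f[0..N-1] with step h (0 < alpha < 1).
    The weights w_j = (-1)^j * C(alpha, j) decay slowly, so the whole past
    of the signal enters each value: the 'memory' of the singular kernel."""
    N = len(f)
    w = np.empty(N)
    w[0] = 1.0
    for j in range(1, N):                      # stable weight recurrence
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([np.dot(w[:n + 1], f[n::-1]) for n in range(N)])
    return d / h**alpha

# Check against the known value D^{1/2} t = 2*sqrt(t/pi) at t = 1
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
print(gl_fractional_derivative(t, 0.5, h)[-1], 2 / np.sqrt(np.pi))
```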

Recently, Luo et al. [20, 21] studied the fractional ILC problem for fractional multi-agent systems. Liu et al. [15–17] studied ILC with VTL for fractional impulsive systems. However, there are very few works on ILC of impulsive multi-agent systems with varying trial lengths, even for classical or fractional learning laws. Note that impulsive effects often appear in the control of multi-agent systems, and memory effects always arise in the communication of each agent. In order to achieve consensus of multi-agent systems with impulses and past communication, one can adopt a fractional ILC approach; here, fractional ILC is used to handle the past communication in each agent.

Based on the above discussion, this paper introduces new error processing methods and designs a variety of learning laws for consensus tracking of a target trajectory by an impulsive multi-agent system. The specific contributions are as follows:

  • For impulsive multi-agent systems with VTL, we first use zeros to replace nonexistent errors and then consider the system’s consensus tracking of the target trajectory under the DαD-type learning law.

  • The domain alignment operator is introduced to deal with errors, and the consensus of the system under the IβD-type learning law is considered.

  • Based on the above, the local average operator is used to improve the control function, and the convergence of the DαD- and IβD-type learning laws for impulsive multi-agent systems is considered, respectively.

Compared with previous work, this paper uses the memory effect of fractional-order calculus to adjust the input of the system and combines it with the domain alignment operator to design appropriate learning laws for controlling the multi-agent system in the VTL case. By combining the two methods, both iteration accuracy and convergence speed are improved. A fractional-order learning law is more complex than an integer-order one, and the uncertainty caused by varying trial lengths must also be considered, which makes it more difficult to construct the learning law and analyze the convergence of the system.

The rest of the paper is organized as follows: Section 2 provides the problem formulation and preliminaries. Section 3 presents the main results. An illustrative example is given in Section 4, and Section 5 concludes the paper.

2 Preliminaries and problem formulation

We consider a weighted directed graph $G = (V, E, A)$, where $V = \{1, 2, \ldots, n\}$ is the set of vertices and $n$ is the number of agents in the system, $E \subseteq V \times V$ is the set of edges, and $A = (a_{ij})_{n \times n}$ is the weighted adjacency matrix with nonnegative elements. The edge set consists of directed ordered pairs $(j, i)$, where $(j, i) \in E$ means that agent $j$ can pass information to agent $i$; then $j$ is called the parent node of $i$, and $i$ is called the child node of $j$. The set of all vertices adjacent to agent $i$ is called the neighbor set of agent $i$, denoted $N_i$. In particular, $a_{ij} > 0$ means that agent $i$ can receive information from agent $j$; $a_{ij} = 0$ means that agent $i$ cannot receive information from agent $j$.

The Laplacian of $G$ is defined as $L = D - A$, where $D = \operatorname{diag}(d_1^{\mathrm{in}}, \ldots, d_n^{\mathrm{in}})$ and $d_i^{\mathrm{in}} = \sum_{j \in N_i} a_{ij}$ is the in-degree of vertex $i$. In order to describe the communication relationship between the virtual leader and the followers, let $d_i = 1$ denote that agent $i$ can receive the leader's information directly; otherwise, let $d_i = 0$.
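As a concrete illustration (the 3-agent topology below is an assumed example, not the one used in Section 4), the Laplacian and its leader-augmented version can be formed as follows:

```python
import numpy as np

# a_ij > 0 iff agent i can receive information from agent j
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))   # diagonal matrix of in-degrees
L = D - A                    # graph Laplacian

# d_i = 1 iff agent i receives the virtual leader's signal directly
d = np.array([1.0, 0.0, 0.0])
L_leader = L + np.diag(d)    # matrix used for leader-following consensus
print(L_leader)
```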

In this paper, $\|x\|$ denotes the 2-norm of a vector $x$, and $\|A\|$ denotes a matrix norm compatible with it. The $\lambda$-norm of a function $f$ is defined by $\|f\|_{\lambda} = \sup_{t \in [0, T]} e^{-\lambda t} \|f(t)\|$ with $\lambda > 0$. The symbol $\otimes$ denotes the Kronecker product.

Consider a system with $n$ agents, each agent with $N$ impulsive points, and let $G$ represent their interaction topology. The $i$th agent is governed by the following nonlinear impulsive system:

for all $t$ and $k = 1, 2, \ldots, N$. This system is right-continuous; $x_i(t)$ is the state vector of the $i$th agent, $u_i(t)$ is the control function of the $i$th agent, $C$ is a matrix, $y_i(t)$ is the output vector of the $i$th agent, the nonlinearity $f(\cdot, \cdot)$ and the impulsive maps are continuous, and $B(t)$ is a continuous matrix function. The impulsive time sequence is denoted by $0 < t_1 < \cdots < t_N < T$; $x_i(t_k^+)$ and $x_i(t_k^-)$ represent the right and left limits of $x_i$ at $t_k$, respectively.

We need the following conditions:

(H1) $f(\cdot,\cdot)$ satisfies the Lipschitz condition

$$\|f(t, x) - f(t, \bar{x})\| \le L_f \|x - \bar{x}\|$$

for any $x, \bar{x}$ and $t \in [0, T]$.

(H2) The impulsive function $I_k(\cdot)$ satisfies the Lipschitz condition

$$\|I_k(x) - I_k(\bar{x})\| \le L_I \|x - \bar{x}\|$$

for any $x, \bar{x}$.

Under assumptions (H1) and (H2), following [28, Remark 4.1], system (1) with a given initial value has a unique solution in a space of piecewise continuous functions.

Let $y_d(t)$ be the expected consensus trajectory of the MAS on the time interval $[0, T]$, $T > 0$. Here $y_d$ is not necessarily continuous on the whole time interval $[0, T]$. We regard the desired trajectory as a virtual leader in the communication topology and mark it with vertex 0. Then the information exchange among agents can be represented by an extended communication topology graph $\bar{G}$, where $\bar{E}$ represents the edge set and $\bar{A}$ represents the weighted adjacency matrix. The control objective is to design appropriate iterative learning laws such that the outputs of all agents asymptotically converge to the desired trajectory $y_d(t)$.

In order to describe the phenomenon of varying trial lengths, we introduce a random variable $T_k$ that represents the end time of the $k$th iteration; $T_k$ satisfies

Here $p(\cdot)$ represents the probability density function of the random variable $T_k$, and $T$ is the maximum running time of the system. In particular, when $T_k = 0$, all the error data of that iteration are lost, so the current trial can be regarded as not running. For the VTL system, the errors can be handled in the following way (see the work of Li et al. [10]):

where $e_k(t)$ represents the tracking error of the $k$th iteration.
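Below is a minimal sketch of this correction (function and variable names are illustrative): the error samples measured up to the random end time are kept, and the missing tail up to the full trial length is replaced by zeros, so the input is left unchanged on the lost segment.

```python
import numpy as np

def corrected_error(e_k, T, dt):
    """Zero-pad the tracking error of a truncated trial.
    e_k: error samples actually measured on [0, T_k] (possibly shorter
         than a full trial); T: full trial length; dt: sampling step."""
    n_full = int(round(T / dt)) + 1            # samples in a full trial
    e_corr = np.zeros((n_full,) + np.shape(e_k)[1:])
    n_run = min(len(e_k), n_full)
    e_corr[:n_run] = e_k[:n_run]               # keep measured data, zero rest
    return e_corr
```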

Let $\{f_k\}$ be a function sequence (here the codomain is a normed space).

The domain alignment operator is defined on such sequences, with the underlying space taken as PC, where $k \in \{0, 1, 2, \ldots, N\}$ and the one-sided limits exist; the operator satisfies the following:

In this paper, we use ${}^{C}D^{\alpha}_{0+}$ to represent the Caputo fractional left derivative, $D^{\alpha}_{T-}$ to represent the Riemann–Liouville fractional right derivative, and $I^{\beta}_{T-}$ to represent the Riemann–Liouville fractional right integral of a function. Throughout, $0 < \alpha < 1$. Lemmas 1 and 2 below give fractional-order integration-by-parts formulas.
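For reference, the standard definitions of these three operators (assuming the usual Caputo and Riemann–Liouville conventions, with $\Gamma$ the gamma function and $0 < \alpha, \beta < 1$) are:

$$ {}^{C}D^{\alpha}_{0+}f(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{f'(s)}{(t-s)^{\alpha}}\,\mathrm{d}s,\qquad D^{\alpha}_{T-}f(t)=-\frac{1}{\Gamma(1-\alpha)}\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{t}^{T}\frac{f(s)}{(s-t)^{\alpha}}\,\mathrm{d}s, $$

$$ I^{\beta}_{T-}f(t)=\frac{1}{\Gamma(\beta)}\int_{t}^{T}\frac{f(s)}{(s-t)^{1-\beta}}\,\mathrm{d}s. $$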

Lemma 1.

Lemma 2.

where the function $a$ is nondecreasing. Then the following inequality is valid:

3 Main results

We use the symbol $\xi_{k,i}$ to represent all the information received by the $i$th agent in the $k$th iteration. It can be expressed as the sum of the information transmitted from the other agents to the $i$th agent and the information possibly transmitted from the leader to the $i$th agent.
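A minimal sketch of this combined signal in its standard leader-following form (the paper's exact formula is not reproduced here; names are illustrative):

```python
import numpy as np

def distributed_error(y, y_d, A, d, i):
    """Combined information available to agent i at one time instant.
    y: (n, dim) array of all agents' outputs; y_d: leader output;
    A: weighted adjacency matrix; d[i] = 1 iff agent i hears the leader."""
    xi = sum(A[i, j] * (y[j] - y[i]) for j in range(len(y)))
    xi = xi + d[i] * (y_d - y[i])   # add leader mismatch if connected
    return xi
```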

The $i$th agent can get information directly from the desired trajectory; that is, if it is directly connected to the leader, then $d_i = 1$; otherwise, $d_i = 0$. Here the first subscript indicates the iteration number, and the second subscript indicates the index of the agent. The subscripts of the adjacency weights and of $d_i$ are explained in Section 2. The derivative of the function is defined as follows:

Here the notation is as defined in formula (1). In order to make the agents track the target trajectory, the following DαD-type learning laws are employed:

where the learning gains are matrix functions that are differentiable on the interval under consideration. The initial state learning rule is as follows:

Set $e_{k,i}(t)$ as the tracking error of the agent. The learning law (8) can then be written in terms of this error.
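To fix the overall iteration structure, the following schematic sketch shows an ILC loop under varying trial lengths; `learn` is a generic stand-in for the DαD-type update of (12), and `corrected_error` is the zero-padding helper sketched in Section 2 (all names are illustrative):

```python
import numpy as np

def ilc_vtl(run_trial, learn, u0, T, dt, n_iter, draw_end_time):
    """Schematic ILC iteration with varying trial lengths.
    run_trial(u, T_k): simulate the plant with input u up to time T_k and
                       return the measured tracking error samples.
    learn(e):          learning operator applied to the corrected error,
                       e.g. a fractional-derivative-based update.
    draw_end_time():   sample the random end time T_k of the next trial."""
    u = u0.copy()
    max_errors = []
    for _ in range(n_iter):
        T_k = draw_end_time()
        e_k = run_trial(u, T_k)                 # data only on [0, T_k]
        e_corr = corrected_error(e_k, T, dt)    # zero-pad the lost tail
        u = u + learn(e_corr)                   # update along iterations
        max_errors.append(np.abs(e_corr).max())
    return u, max_errors
```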

We set all involved quantities of all agents of an arbitrary iteration into vector form as follows:

where (·)T is the transpose of (·). Then (9) and (10) can be written as follows:

To study the multi-agent consensus problem with impulsive points, (H1), (H2), and the following assumptions are necessary in this paper.

Assumption 1. The desired trajectory is trackable; that is, there exists a desired input such that .

Assumption 2. The first trial runs for the complete trial length.

In order to make the proof process more concise, we introduce the following notation for the norms of some quantities that appear frequently in the proofs:

3.1 DαD-type learning law

Considering the multi-agent system (1) under the condition of varying trial lengths, to analyze the convergence of the corrected error (6), similarly to (8)–(11) we give the following DαD-type learning law:

Theorem 1. For the multi-agent system (1) with Assumption 1, let (H1), (H2) hold, and apply the DαD-type learning law (12). Then, under varying trial lengths, as the iteration number approaches infinity, the corrected tracking error converges to

Here the former quantity represents the contraction coefficient of the iterative process, and the latter is the Lipschitz constant in (3).

Proof. We divide the discussion into the following three cases.

Case 1. .

The tracking error of the $i$th agent in the $(k+1)$th iteration is

Then

From (4) and (12) it can be known that

where .

According to Lemma 1, we can get

Then

Taking norms on both sides of (15) and using (2) and (3), we can get

In a similar way, we can get

Substituting (15) into (14) and taking the norm, we can get

where

Then, taking norms in (19) and using (16) and (17), we have

Substituting (18) into (20) and then setting , we obtain

Case 2. .

If data are lost on the interval , then the control function cannot be updated on that interval.

Case 3. , then 0.

In summary, when , according to (21) and (13) we can get , according to (5). Since ν > 0, the number of iterations in which the full value is attained is also infinite. By (6), (21), and (13) we obtain the conclusion. The proof is completed.

3.2 IβD-type learning law

Considering the multi-agent system (1) and the corrected error (7), similarly to (8)–(11), we give the following IβD-type learning law:

Theorem 2. For system (1) with Assumptions 1 and 2, let (H1) and (H2) hold, and apply the IβD-type learning law (22). Then, under varying trial lengths, as the iteration number approaches infinity, the corrected tracking error converges to , for all

and is the Lipschitz constant in (3).

Proof. Since the IβD-type learning law uses the domain alignment operator in (7) to correct the error, Assumption 2 is needed.

When , according to (7), we have

When , the tracking error of the $i$th agent in the $(k+1)$th iteration is , and

From (4) we obtain

where

and

Taking norms on both sides of (25) and using (2) and (3), we can get

and

According to (27) and Lemma 2, we can get

Substituting (28) into (26) and then setting , we obtain

By (7), (24), (29), and (23) we obtain the conclusion. The proof is completed.

3.3 DαD-type learning law with local average operator 1

Li et al. [11, Eq. (11)] introduced the local average operator (LAO)

This operator effectively utilizes the information from the most recent $m$ trials, where $m$ is any known positive integer.

Considering system (1) and the corrected error (6), similarly to (8)–(11), we give the following DαD-type learning law with local average operator:

where $m$ is any known positive integer. This means that the learning law can use the tracking errors of the previous $m$ iterations to adjust the next input.

Due to the lack of iteration information, using only the tracking error of the last iteration to adjust the input slows down convergence. The local average operator makes full use of the tracking errors of multiple iterations to adjust the input, so that convergence is faster.
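A minimal sketch of local average operator 1 (illustrative names): the update signal is the average of the corrected errors of the last m trials rather than the single most recent one.

```python
def local_average(error_history, m):
    """Average the corrected errors of the last m trials.
    error_history: list of zero-padded error arrays, one per past trial."""
    recent = error_history[-m:]         # fewer than m trials early on
    return sum(recent) / len(recent)

# inside the ILC loop: u = u + learn(local_average(history, m))
```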

Theorem 3. For system (1) with Assumption 1, let (H1) and (H2) hold, and apply the DαD-type learning law (30) with local average operator. Then, under varying trial lengths, as the iteration number approaches infinity, the corrected tracking error converges to

and is the Lipschitz constant in (3).

Proof. When , the proof of the theorem is similar to Theorem 1.

When , we only need to analyze a few key steps; the rest of the proof is similar to Theorem 1.

According to (30) and similar to (14)–(20), we can get

From Theorem 1 and the above analysis we know

By (6), (32), and (31) we obtain the conclusion. The proof is completed.

3.4 IβD-type learning law with local average operator 2

To characterize local average operator 2, we set , where . Define the set of serial numbers of all trials with full trial length before the $k$th iteration, and let num denote the number of elements in this set.

Considering system (1) and the corrected error (6), similarly to (8)–(11), we give the following IβD-type learning law with local average operator:

where $m$ is any known positive integer. The design idea of this learning law is similar to that of (30), but the set of complete trials is used here, which makes the learning law use only the complete iteration errors and discard the incomplete ones. The advantage of this method is a further acceleration of convergence, but it requires several complete iterations as a basis (see the sketch below).
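A minimal sketch of local average operator 2 (illustrative names): only trials that ran for the full length contribute to the average, and truncated trials are discarded.

```python
import numpy as np

def local_average_full(error_history, end_times, T, m):
    """Average the corrected errors of the last m complete trials.
    end_times[k] is the realized end time of trial k; a trial is complete
    when it reached the full length T."""
    full = [e for e, Tk in zip(error_history, end_times) if Tk >= T]
    recent = full[-m:]
    if not recent:                      # no complete trial yet
        return np.zeros_like(error_history[-1])
    return sum(recent) / len(recent)
```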

Theorem 4. For system (1) with Assumptions 1 and 2, let (H1) and (H2) hold, and apply the IβD-type learning law (33) with local average operator. Then, under varying trial lengths, as the iteration number approaches infinity, the corrected tracking error converges to

and is the Lipschitz constant in (3).

Proof. When num , the proof of the theorem is similar to Theorem 2. When num , we only need to analyze a few key steps; the rest of the proof is similar to Theorem 2.

According to (33) and similar to (25) and (27), we can get

From Theorem 2 and the above analysis we know

By (6), (35), and (34) we obtain the conclusion. The proof is completed.

4 Numerical simulation

We consider the following I-MAS consisting of five agents:

where, for all $t$, the two components represent the two states of the $i$th agent, respectively. The initial values are . The communication topology is shown in Fig. 1. The probability density function of the trial end time is as follows:

The target trajectory, i.e., the trajectory of vertex 0, is as follows:

where

and

The remaining parameters of the learning laws are chosen so as to satisfy the conditions of Theorems 1–4.


Figure 1
The topological graph for (36).


Figure 2
The output error (DαD-type and IβD-type).


Figure 3
The output error (DαD-type and IβD-type with LAO).


Table 1
Tracking errors of each agent.

Therefore, the multi-agent system can uniformly track the target trajectory under the given learning control. Figures 2 and 3 show that the error between the output value and the target trajectory gradually converges to 0.

Figures 4–7 show the iterative learning process of the second state output trajectory under the DαD-type and IβD-type learning laws with LAO.

When the iteration number reaches 60, the consensus errors of the four learning laws are as shown in Table 1. It should be noted that the error correction method used without LAO is different from that used with LAO (see (6) and (7)).


Figure 4
The trajectory of the first iteration (DαD-type and IβD-type with LAO).


Figure 5
The trajectory of the 12th iteration (DαD-type and IβD-type with LAO).


Figure 6
The trajectory of the 24th iteration (DαD-type and IβD-type with LAO).


Figure 7
The trajectory of the 60th iteration (DαD-type and IβD-type with LAO).

5 Conclusion

We introduced four ILC schemes for I-MAS with VTL, using the domain alignment operator to correct the tracking error. In particular, local average operators are applied to optimize the control function. Convergence results for I-MAS are established, and a numerical example is presented. In the future, on the one hand, we will consider non-fixed-time impulses and non-instantaneous impulses, since actual models are often subject to uncertain impulsive interference, including uncertainty in the interference time points and in the interference duration; on the other hand, we will consider different trial lengths among different agents, which requires designing an appropriate topological relationship to ensure the integrity of the iterative process.

Acknowledgments

The authors are grateful to the referees for their careful reading of the manuscript and their valuable comments. We also thank the editor.

References
1. H.S. Ahn, Y. Chen, Iterative learning control for multi-agent formation, in ICROS-SICE International Joint Conference, Fukuoka, Japan, August 18–21, 2009, IEEE, Piscataway, NJ, 2009, pp. 3111–3116.
2. S. Arimoto, S. Kawamura, F. Miyazaki, Bettering operation of robots by learning, J. Rob. Syst., 1(2):123–140, 1984, https://doi.org/10.1002/rob.4620010203.
3. D. Bainov, V. Covachev, Impulsive Differential Equations with a Small Parameter, Ser. Adv. Math. Appl. Sci., Vol. 24, World Scientific, Singapore, 1994, https://doi.org/10.1142/2058.
4. X. Cao, M. Fečkan, D. Shen, J. Wang, Iterative learning control for multi-agent systems with impulsive consensus tracking, Nonlinear Anal. Model. Control, 26(1):130–150, 2021, https://doi.org/10.15388/namc.2021.26.20981.
5. Y. Cao, L. Zhang, C. Li, M.Z.Q. Chen, Observer-based consensus tracking of nonlinear agents in hybrid varying directed topology, IEEE Trans. Cybern., 47:2212–2222, 2017, https://doi.org/10.1109/TCYB.2016.2573138.
6. B. Cui, Y. Xia, K. Liu, Y. Wang, D.-H. Zhai, Velocity-observer-based distributed finite-time attitude tracking control for multiple uncertain rigid spacecraft, IEEE Trans. Ind. Inf., 16(4): 2509–2519, 2020, https://doi.org/10.1109/TII.2019.2935842.
7. J. Li, J. Li, Adaptive iterative learning control for consensus of multi-agent systems, IET Control Theory Appl., 7:136–142, 2013, https://doi.org/10.1049/iet-cta.2012.0048.
8. J. Li, J. Li, Adaptive iterative learning control for coordination of second-order multi-agent systems, Int. J. Robust Nonlinear Control, 24:3282–3299, 2014, https://doi.org/10.1002/rnc.3055.
9. X. Li, D. Shen, Two novel iterative learning control schemes for systems with randomly varying trial lengths, Syst. Control Lett., 107:9–16, 2017, https://doi.org/10.1016/j.sysconle.2017.07.003.
10. X. Li, J. Xu, D. Huang, An iterative learning control approach for linear systems with randomly varying trial lengths, IEEE Trans. Autom. Control, 59:1954–1960, 2014, https://doi.org/10.1109/TAC.2013.2294827.
11. X. Li, J. Xu, D. Huang, Iterative learning control for nonlinear dynamic systems with randomly varying trial lengths, Int. J. Adapt. Control Signal Process., 29:1341–1353, 2015, https://doi.org/10.1002/acs.2543.
12. Y. Li, Y. Chen, H. Ahn, Fractional-order iterative learning control for fractional-order linear systems, Asian J. Control, 13:54–63, 2011, https://doi.org/10.1002/asjc.253.
13. C. Liu, D. Shen, J. Wang, Adaptive learning control for general nonlinear systems with nonuniform trial lengths, initial state deviation, and unknown control direction, Int. J. Robust Nonlinear Control, 29:6227–6243, 2019, https://doi.org/10.1002/rnc.4718.
14. C. Liu, D. Shen, J. Wang, A two-dimensional approach to iterative learning control with randomly varying trial lengths, J. Syst. Sci. Complex., 33:685–705, 2020, https://doi.org/10.1007/s11424-020-8215-z.
15. S. Liu, J. Wang, Fractional order iterative learning control with randomly varying trial lengths, J. Franklin Inst., 354:967–992, 2016, https://doi.org/10.1016/j.jfranklin.2016.11.004.
16. S. Liu, J. Wang, D. Shen, Iterative learning control for noninstantaneous impulsive fractional-order systems with varying trial lengths, Int. J. Robust Nonlinear Control, 28:6202–6238, 2018, https://doi.org/10.1002/rnc.4371.
17. S. Liu, J. Wang, D. Shen, D. O'Regan, Iterative learning control for differential inclusions of parabolic type with noninstantaneous impulses, Appl. Math. Comput., 350:48–59, 2019, https://doi.org/10.1016/j.amc.2018.12.058.
18. X. Liu, K. Zhang, W. Xie, Impulsive consensus of networked multi-agent systems with distributed delays in agent dynamics and impulsive protocols, J. Dyn. Syst. Meas. Control, 141:011008, 2019, https://doi.org/10.1115/1.4041202.
19. D. Luo, J. Wang, D. Shen, Learning formation control for fractional-order multiagent systems, Math. Methods Appl. Sci., 41:5003–5014, 2018, https://doi.org/10.1002/mma.4948.
20. D. Luo, J. Wang, D. Shen, PDα-type distributed learning control for nonlinear fractional-order multiagent systems, Math. Methods Appl. Sci., 42:4543–4553, 2019, https://doi.org/10.1002/mma.5677.
21. D. Luo, J. Wang, D. Shen, Consensus tracking problem for linear fractional multi-agent systems with initial state error, Nonlinear Anal. Model. Control, 25(5):766–785, 2020, https://doi.org/10.15388/namc.2020.25.18128.
22. T. Ma, Z. Zhang, B. Cui, Adaptive consensus of multi-agent systems via odd impulsive control, Neurocomputing, 321:139–145, 2018, https://doi.org/10.1016/j.neucom.2018.09.007.
23. T. Ma, Z. Zhang, B. Cui, Variable impulsive consensus of nonlinear multi-agent systems, Nonlinear Anal., Hybrid Syst., 31:1–18, 2019, https://doi.org/10.1016/j.nahs.2018.07.004.
24. D. Shen, Data-driven learning control for stochastic nonlinear systems: Multiple communication constraints and limited storage, IEEE Trans. Neural Networks Learn. Syst., 29:2429–2440, 2018, https://doi.org/10.1109/TNNLS.2017.2696040.
25. D. Shen, J.-X. Xu, Adaptive learning control for nonlinear systems with randomly varying iteration lengths, IEEE Trans. Neural Networks Learn. Syst., 30(4):1119–1132, 2019, https://doi.org/10.1109/TNNLS.2018.2861216.
26. H. Su, Y. Ye, Y. Qiu, Y. Cao, M.Z.Q. Chen, Semi-global output consensus for discrete-time switching networked systems subject to input saturation and external disturbances, IEEE Trans. Cybern., 49(11):3934–3945, 2019, https://doi.org/10.1109/TCYB.2018.2859436.
27. J. Wang, M. Fečkan, Y. Zhou, Necessary and sufficient conditions for the fractional calculus of variations with Caputo derivatives, Commun. Nonlinear Sci. Numer. Simul., 16:1490–1500, 2011, https://doi.org/10.1016/j.cnsns.2010.07.016.
28. J. Wang, M. Fečkan, Y. Zhou, On the stability of first order impulsive evolution equations, Opusc. Math., 34:639–657, 2014, https://doi.org/10.7494/OpMath.2014.34.3.639.
29. L. Wang, X. Li, D. Shen, Sampled-data iterative learning control for continuous-time nonlinear systems with iteration-varying lengths, Int. J. Robust Nonlinear Control, 28:3073–3091, 2018, https://doi.org/10.1002/rnc.4066.
30. Z. Xu, C. Li, Y. Han, Leader-following fixed-time quantized consensus of multi-agent systems via impulsive control, J. Franklin Inst., 356:441–456, 2019, https://doi.org/10.1016/j.jfranklin.2018.10.009.
31. C. Zeng, D. Shen, J. Wang, Adaptive learning tracking for robot manipulators with varying trial lengths, J. Franklin Inst., 356:5993–6014, 2019, https://doi.org/10.1016/j.jfranklin.2019.04.034.
32. W. Zhu, D. Wang, Leader-following consensus of multi-agent systems via event-based impulsive control, Meas. Control, 52:91–99, 2019, https://doi.org/10.1177/0020294018819549.
Notes
* This work was supported by the National Natural Science Foundation of China (grant Nos. 12161015, 61673045), the Training Object of High Level and Innovative Talents of Guizhou Province ((2016)4006), the Department of Science and Technology of Guizhou Province (Fundamental Research Program [2018]1118), the Guizhou Data Driven Modeling Learning and Optimization Innovation Team ([2020]5016), the Slovak Research and Development Agency under contract No. APVV-18-0308, and the Slovak Grant Agency VEGA Nos. 1/0358/20 and 2/0127/20.

Author notes
a Department of Mathematics, Guizhou University, Guiyang 550025, Guizhou, China; xkcaomath@126.com, jrwang@gzu.edu.cn
b Department of Mathematical Analysis and Numerical Mathematics, Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava, Slovakia
c Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, 814 73 Bratislava, Slovakia
d School of Mathematics, Renmin University of China, Beijing, China; shendong@mail.buct.edu.cn
1 Corresponding author
