Global convergence of a class of networks on time scales

Abstract

In this paper, we propose a class of simplified background neural network models with two subnetworks on time scales. Some basic dynamic properties of the networks, including positive invariance, boundedness, global attractivity and complete convergence, are analyzed. The main contributions of this paper are as follows: (1) the global attractive set of the model is verified and conditions for global attractivity are derived; (2) complete convergence of the new networks is proved by constructing a novel energy function on time scales. Finally, three simulation examples are presented to illustrate the feasibility and effectiveness of the obtained results.

1. Introduction

In the past few decades, neural networks have attracted a great deal of attention due to their potential applications, and they have been used in many fields such as associative memory, signal processing and optimization problems. For example, when we need to remember a phone number heard in a radio report, our neural processing system engages; without the context information, the same number vanishes promptly from the mind. Clearly, the context determines whether a motor response occurs or not. This example shows that, for neural networks to perform well in applications, their dynamic properties play an important role.

For dynamical properties of neural networks, convergence analysis has been extensively studied in the works of Yi et al. (1999), Yi et al. (2001), Xu et al. (2008), Xu & Yi (2011) and Liu et al. (2012). By constructing a sequence of solutions to delayed dynamical systems with high-slope activations, Liu et al. (2012) derived two sets of sufficient conditions for the global exponential stability and convergence of the neural networks. Boundedness, attractivity and stability of neural networks have been studied in the works of Cao & Liang (2004), Rehim et al. (2004), Li & Yang (2009), Yuan et al. (2009) and Wan & Zhou (2012). Cao & Liang (2004) verified uniformly ultimate boundedness and uniform boundedness by utilizing the Hardy inequality. In the study by Li & Yang (2009), the authors investigated the global attractivity of Cohen–Grossberg neural network models with discrete and distributed delays via the Lyapunov functional method. The background model of neural networks was proposed in the studies by Salinas (2003), Xu et al. (2008) and Xu & Yi (2011), and it is very different from the existing ones. By employing some mathematical techniques, the complete convergence and the global attractivity of a class of simplified background neural network models were investigated in the study by Xu & Yi (2011).

Agarwal et al. (2002) and Bohner & Peterson (2003) compiled and summarized the theory of calculus on time scales, which has aroused much attention among scholars. For example, periodic solutions for dynamic equations on time scales were discussed in the study by Liu & Li (2007); in the study by Yang & Li (2015), the authors established the existence and exponential stability of periodic solutions for stochastic Hopfield neural networks on time scales. For other results on time scales, readers may refer to the studies by Gao et al. (2016), Huang et al. (2017) and Cheng & Cao (2015). The theory of time scales is more general than ordinary calculus, and the corresponding results cover both continuous and discrete situations.
However, to the best of our knowledge, there are few papers on the global attractive set and complete convergence of background neural networks on time scales. Stimulated by the work of Xu & Yi (2011), we consider a new class of simplified background neural networks with two subnetworks on time scales
$$\begin{align}x_{i}^{\Delta}(t)=-x_{i}(t)+\left(wx_{i}(t)+h_{i}\right)^{2}G\left(x(t)\right),\end{align}$$ (1.1)
where $G(x(t)) = 1 - c\|x(t)\|^{2}$, $i\in \mathcal{N}:=\{1,2\}$ and $t\in \mathbb{T}$. Each $x_{i}$ represents the firing rate of a subnetwork, $c > 0$ is the synaptic connection strength, $h_{i} > 0$ represents the excitatory external input whose value is independent of the activity, and the total synaptic input $w$ to all neurons is a constant.

The global convergence and stability results for neural networks have mostly been restricted to the continuous time domain $\mathbb{R}$ (see Yi et al., 1999; Yi et al., 2001; Xu et al., 2008; Liu et al., 2012). How to establish dynamical properties of neural networks on general time scales is therefore still unknown. In this paper, we introduce the theory of time scales to solve this problem. The novelty and generality of this paper can be summarized as follows. A more general positively invariant set and boundedness criteria for the new model on time scales are investigated for the first time. Compared with the existing results in the study by Xu & Yi (2011), our main results reduce to those of Xu & Yi (2011) when $\mathbb{T}=\mathbb{R}$. When $\mathbb{T}=\mathbb{Z}$, we establish new stability results for simplified background neural networks which have not been reported in the literature. Meanwhile, the complete convergence and stability analysis apply on hybrid time domains. These new dynamical behaviors on time scales are illustrated by our computer simulations.

This paper is organized as follows. In Section 2, some preliminaries are given. We analyze the boundedness, invariant set and global attractive set of the new model in Section 3. Complete convergence of the new network is obtained in Section 4. In Section 5, computer simulations are provided. Finally, concluding remarks are given in Section 6.
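Although $\mathbb{T}$ is treated abstractly in the analysis, a trajectory of (1.1) is easy to generate numerically: at a right-scattered point, $x(\sigma(t)) = x(t) + \mu(t)x^{\Delta}(t)$ holds exactly, while on dense stretches the same update is a forward-Euler approximation. The following Python sketch implements this stepping; the grid, parameter values and helper names are our own illustrative choices and are not part of the paper.

```python
import numpy as np

def G(x, c):
    # G(x) = 1 - c * ||x||^2, as in model (1.1)
    return 1.0 - c * np.dot(x, x)

def delta_rhs(x, w, h, c):
    # Right-hand side of (1.1): x_i^Delta = -x_i + (w x_i + h_i)^2 G(x)
    return -x + (w * x + h) ** 2 * G(x, c)

def simulate(ts, x0, w, h, c):
    """Step (1.1) along an increasing grid `ts` drawn from the time scale.
    At right-scattered points the update x(sigma(t)) = x(t) + mu(t) x^Delta(t)
    is exact; on dense stretches it is a forward-Euler approximation."""
    xs = [np.asarray(x0, dtype=float)]
    for k in range(len(ts) - 1):
        mu = ts[k + 1] - ts[k]          # graininess (or Euler step size)
        xs.append(xs[-1] + mu * delta_rhs(xs[-1], w, h, c))
    return np.array(xs)
```

For $\mathbb{T}=0.5\mathbb{Z}$ one would take `ts = np.arange(0, T_end, 0.5)`, for which the stepping is exact; for $\mathbb{T}=\mathbb{R}$ a fine grid gives an approximation.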
2. Preliminaries

In this section, we recall some definitions and state some preliminary results. Let $\mathbb{T}$ be a time scale, i.e., a non-empty closed subset of $\mathbb{R}$. The forward and backward jump operators $\sigma, \rho: \mathbb{T}\rightarrow\mathbb{T}$ and the graininess $\mu: \mathbb{T}\rightarrow\mathbb{R}^{+}$ are defined by $\sigma(t)=\inf\{s\in \mathbb{T}: s> t\}$, $\rho(t)=\sup\{s\in \mathbb{T}: s < t\}$ and $\mu(t) = \sigma(t) - t$, respectively. A point $t\in \mathbb{T}$ is called left-dense if $t>\inf \mathbb{T}$ and $\rho(t) = t$, left-scattered if $\rho(t) < t$, right-dense if $t<\sup \mathbb{T}$ and $\sigma(t) = t$, and right-scattered if $\sigma(t) > t$. If $\mathbb{T}$ has a left-scattered maximum $m$, then $\mathbb{T}^{k}=\mathbb{T}\setminus \{m\}$; otherwise, $\mathbb{T}^{k}= \mathbb{T}$. If $\mathbb{T}$ has a right-scattered minimum $m$, then $\mathbb{T}^{k}= \mathbb{T}\setminus \{m\}$; otherwise, $\mathbb{T}^{k}= \mathbb{T}$. A function $f:\mathbb{T}\rightarrow \mathbb{R}$ is right-dense continuous provided it is continuous at right-dense points of $\mathbb{T}$ and its left-sided limits exist at left-dense points of $\mathbb{T}$. The set of all right-dense continuous functions on $\mathbb{T}$ is denoted by $\mathbb{C}_{rd} =\mathbb{C}_{rd}(\mathbb{T})=\mathbb{C}_{rd}(\mathbb{T},\mathbb{R})$. If $f$ is continuous at each right-dense point and each left-dense point, then $f$ is said to be continuous on $\mathbb{T}$.

Definition 2.1 (Bohner & Peterson, 2003) For a function $f:\mathbb{T}\rightarrow \mathbb{R}$ (the range $\mathbb{R}$ of $f$ may in fact be replaced by a Banach space), the (delta) derivative is defined by
$$f^{\Delta}(t)=\frac{f(\sigma(t))-f(t)}{\sigma(t)-t}$$
if $f$ is continuous at $t$ and $t$ is right-scattered. If $t$ is right-dense, then $f$ is differentiable at $t$ if and only if the limit
$$\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}$$
exists as a finite number, in which case
$$f^{\Delta}(t)=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}.$$

Definition 2.2 (Bohner & Peterson, 2003) A function $p:\mathbb{T} \rightarrow \mathbb{R}$ is said to be regressive if and only if $1 + \mu(t)p(t)\neq 0$ for all $t\in \mathbb{T}^{k}$. Let $\mathcal R$ be the set of all rd-continuous and regressive functions on $\mathbb{T}$.

Definition 2.3 (Bohner & Peterson, 2003) If $p \in \mathcal R$, then the generalized exponential function $e_{p}(t, s)$ is defined on $\mathbb{T}$ by
$$e_{p}(t,s)=\exp\left[\int_{s}^{t}\xi_{\mu(\tau)}(p(\tau))\Delta\tau\right], \quad \textrm{for} \quad s,\ t \in \mathbb{T},$$
where $\xi_{h}(z)$ is the cylinder transformation defined by
$$\xi_{h}(z)= \begin{cases} \frac{\mathrm{Log}(1+hz)}{h}, &h\neq0,\\ z, &h=0.\end{cases}$$

Lemma 2.1 (Bohner & Peterson, 2003) Let $p\in \mathcal{R}$ and $f\in \mathbb{C}_{rd}$, $t_{0}\in \mathbb{T}$ and $y_{0}\in \mathbb{R}$. If $y^{\Delta}(t) \leq p(t)y(t) + f(t)$ with $y(t_{0}) = y_{0}$, then
$$y(t)\leq e_{p}(t,t_{0})y_{0}+\int_{t_{0}}^{t}e_{p}(t,\sigma(\tau))f(\tau)\Delta\tau.$$
In the above lemma, '≤' can be replaced by '≥'; equality corresponds to the unique solution of the associated initial value problem.

Lemma 2.2 (Bohner & Peterson, 2003) Assume $f, g:\mathbb{T} \rightarrow \mathbb{R}$ are differentiable at $t\in \mathbb{T}^{k}$. Then $(fg)^{\Delta} = f^{\Delta}g + f^{\sigma}g^{\Delta} = fg^{\Delta} + f^{\Delta}g^{\sigma}$.

Lemma 2.3 (Bohner & Peterson, 2003) If $f \in \mathbb{C}_{rd}$ and $t\in \mathbb{T}^{k}$, then
$$\int_{t}^{\sigma(t)}f(\tau)\Delta\tau= \mu(t)f(t).$$

Definition 2.4 The network (1.1) is bounded if each of its trajectories is bounded.

Definition 2.5 Let $S$ be a compact subset of $\mathbb{R}^{n}$. We denote the $\epsilon$-neighbourhood of $S$ by $S_{\epsilon}$. The compact set $S$ is called a global attractive set of (1.1) if, for any $\epsilon > 0$, all trajectories of the network ultimately enter and remain in $S_{\epsilon}$.

Definition 2.6 A vector $x^{*}=(x_{1}^{\ast}, x_{2}^{\ast})^{T} \in \mathbb{R}^{2}$ is called an equilibrium point of (1.1) if
$$-x_{i}^{\ast}+\big(wx_{i}^{\ast}+h_{i}\big)^{2}\left(1-c\sum_{j=1}^{2}x_{j}^{*2}\right)=0 \quad (i=1,2).$$
Denote by $S^{e}$ the equilibrium set of (1.1).

Definition 2.7 The network (1.1) is said to be completely convergent if the equilibrium set $S^{e}$ is not empty and every trajectory $x(t)$ of (1.1) converges to $S^{e}$, that is,
$$\mathrm{dist}(x(t),S^{e})= \min_{x^{\ast}\in S^{e}}\|x(t)-x^{\ast}\|\rightarrow 0,\quad t\rightarrow \infty.$$

Definition 2.8 A set $S \subseteq \mathbb{C}_{rd}$ is called a positive invariant set of system (1.1) if for any initial value $\phi = x(t_{0}) \in S$, we have $x(t; t_{0}, \phi) \in S$ for all $t \geq t_{0}$, where $t,\ t_{0}\in \mathbb{T}$.

Definition 2.9 The neural network (1.1) is said to be completely stable if for any initial value $\phi$, the corresponding solution trajectory $x(t)$ converges to a certain equilibrium point.

Definition 2.10 For any function $f(t)$, the right-hand derivative is defined as
$$D^{+}f(t)=\lim_{\epsilon\rightarrow 0^{+}}\frac{f(t+\epsilon)-f(t)}{\epsilon}.$$
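Definition 2.3 is concrete enough to compute. The sketch below evaluates $e_{p}(t,s)$ on a grid in which every point of $[s,t)$ is right-scattered, so the $\Delta$-integral reduces by Lemma 2.3 to the finite sum $\sum \mu(\tau)\xi_{\mu(\tau)}(p(\tau))$; the helper names are ours, and the real-logarithm branch assumes $1+\mu(t)p(t)>0$.

```python
import numpy as np

def xi(h, z):
    # Cylinder transformation of Definition 2.3 (real branch, 1 + h*z > 0 assumed)
    return z if h == 0 else np.log(1.0 + h * z) / h

def e_p(ts, p, i, j):
    """e_p(ts[j], ts[i]) on a grid `ts` whose points in [ts[i], ts[j]) are all
    right-scattered, so the Delta-integral is the finite sum of Lemma 2.3."""
    acc = 0.0
    for k in range(i, j):
        mu = ts[k + 1] - ts[k]
        acc += mu * xi(mu, p(ts[k]))
    return np.exp(acc)

# Example on T = 0.5Z: e_{-1}(5, 0) = (1 + 0.5 * (-1))^(5 / 0.5) = 0.5**10.
ts = np.arange(0.0, 5.5, 0.5)
print(e_p(ts, lambda t: -1.0, 0, 10), 0.5 ** 10)   # both ~0.000977
```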
3. Global attractive set

In this section, we give a global attractive set which attracts all the trajectories of (1.1).

Property 3.1 Assume $\mu(t) \in [0, 1)$. If
$$\begin{align}(1-\mu(t))\sqrt{\frac{1}{c}}+\mu(t)\left(w\sqrt{\frac{1}{c}}+h_{i}\right)^{2} <\sqrt{\frac{1}{2c}}\end{align}$$ (3.1)
holds for all left-scattered points $t\in \mathbb{T}$, then
$$S_{1}=\left\{ (x_{1},x_{2})\in \mathbb{R}^{2}\,\big|\; x_{1}>0,\ x_{2}>0\ \textrm{and}\ \|x\|^{2}<1/c \right\}$$
is a positive invariant set of (1.1).

Proof. Let $x(t)$ be the solution of (1.1) with initial condition $x(t_{0}) \in S_{1}$. We will prove that
$$\begin{align}\|x(t)\|^{2}=\sum_{i=1}^{2}x_{i}^{2}(t) < \frac{1}{c}\end{align}$$ (3.2)
holds for $t \geq t_{0}$. By way of contradiction, suppose that (3.2) is not true. Then there exists a first time $\tilde{t} > t_{0}$ such that $\sum_{i=1}^{2}x_{i}^{2}(\tilde{t}) \geq 1/c$. There are two cases to consider.

Case 1: $\tilde{t}$ is left-dense, i.e., $\tilde{t} = \rho(\tilde{t})$. Then $\sum_{i=1}^{2}x_{i}^{2}(\tilde{t})=1/c$, $D^{+}\sum_{i=1}^{2}(x_{i}^{2}(\tilde{t}))^{\Delta} \geq 0$ and $\sum_{i=1}^{2}x_{i}^{2}(t) < 1/c$ for $t\in \mathbb{T}$ with $t_{0}\leq t<\tilde{t}$. Since $G(x(\tilde{t}))=1-c\sum_{i=1}^{2}x_{i}^{2}(\tilde{t})=0$, it follows from $0 \leq \mu(t) < 1$ and (1.1) that
$$\begin{align*}D^{+}\sum_{i=1}^{2}\big(x_{i}^{2}\big(\tilde{t}\big)\big)^{\Delta} &= -2\sum_{i=1}^{2}x_{i}^{2}(\tilde{t}) +2\sum_{i=1}^{2}x_{i}(\tilde{t})\big(wx_{i}(\tilde{t})+h_{i}\big)^{2}G(x(\tilde{t}))\\ &\quad+\sum_{i=1}^{2}\mu(\tilde{t})\big(-x_{i}(\tilde{t})+(wx_{i}(\tilde{t})+h_{i})^{2}G(x(\tilde{t}))\big)^{2}\\ &=(\mu(\tilde{t})-2)\sum_{i=1}^{2}x_{i}^{2}(\tilde{t})=(\mu(\tilde{t})-2)\frac{1}{c}<0,\end{align*}$$
which leads to a contradiction.

Case 2: $\tilde{t}$ is left-scattered. Then $\sum_{i=1}^{2}x_{i}^{2}(\tilde{t})\geq 1/c$, $\sum_{i=1}^{2}x_{i}^{2}(\rho(\tilde{t}))< 1/c$ and $\sum_{i=1}^{2}x_{i}^{2}(t)< 1/c$ for $t_{0} \leq t < \tilde{t}$. It follows from $0 < \mu(\rho(\tilde{t})) < 1$, $0 < G(x(\rho(\tilde{t}))) < 1$, $x_{i}(\tilde{t})=x_{i}(\rho(\tilde{t}))+\mu(\rho(\tilde{t}))x_{i}^{\Delta}(\rho(\tilde{t}))$ and (3.1) that
$$\begin{align*}\sum_{i=1}^{2}x_{i}^{2}(\tilde{t})&=\sum_{i=1}^{2}\Big[x_{i}(\rho(\tilde{t}))+\mu(\rho(\tilde{t}))x_{i}^{\Delta}(\rho(\tilde{t}))\Big]^{2}\\ &=\sum_{i=1}^{2}\Big[(1-\mu(\rho(\tilde{t})))x_{i}(\rho(\tilde{t})) +\mu(\rho(\tilde{t}))\big(wx_{i}(\rho(\tilde{t}))+h_{i}\big)^{2}G(x(\rho(\tilde{t})))\Big]^{2}\\ &\leq\sum_{i=1}^{2}\left[(1-\mu(\rho(\tilde{t})))\sqrt{\frac{1}{c}} +\mu(\rho(\tilde{t}))\left(w\sqrt{\frac{1}{c}}+h_{i}\right)^{2}\right]^{2}<\frac{1}{c},\end{align*}$$
which leads to a contradiction.

The two cases show that (3.2) is true. Then
$$(wx_{i}(t)+h_{i})^{2}G(x(t))=(wx_{i}(t)+h_{i})^{2}\left(1-c\sum_{j=1}^{2}x_{j}^{2}(t)\right)>0$$
holds for $i\in \mathcal{N}$ and $t \geq t_{0}$. It follows from Lemma 2.1 that
$$\begin{eqnarray}x_{i}(t)&=&\int_{t_{0}}^{t}e_{-1}(t,\sigma(\tau))(wx_{i}(\tau)+h_{i})^{2}G(x(\tau))\Delta\tau +e_{-1}(t,t_{0})x_{i}(t_{0})\nonumber\\ &=&\int_{t_{0}}^{t}\frac{1}{1-\mu(\tau)}e_{-1}(t,\tau)(wx_{i}(\tau)+h_{i})^{2}G(x(\tau))\Delta\tau +e_{-1}(t,t_{0})x_{i}(t_{0})\nonumber\\ &>&0,\end{eqnarray}$$ (3.3)
where $i\in \mathcal{N}$. From (3.2) and (3.3), we conclude that $S_{1}$ is a positive invariant set of (1.1). The proof is complete.
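As a numerical illustration of Property 3.1, the following self-contained sketch iterates the exact update on $\mathbb{T}=0.5\mathbb{Z}$ from an initial point in $S_{1}$ and checks that the trajectory never leaves $S_{1}$. The parameter values are hypothetical and were picked so that (3.1) holds with $\mu \equiv 0.5$.

```python
import numpy as np

# Hypothetical parameters: with mu = 0.5, condition (3.1) reads
# 0.5*sqrt(1/c) + 0.5*(w*sqrt(1/c) + h_i)^2 < sqrt(1/(2c)),
# i.e. 5.5 < 7.07 for the values below.
w, c, mu = 0.05, 0.01, 0.5
h = np.array([0.5, 0.5])

x = np.array([1.0, 1.0])        # x(t0) in S1: positive and ||x||^2 = 2 < 1/c
ok = True
for _ in range(100):            # 100 exact steps on T = 0.5Z
    x = x + mu * (-x + (w * x + h) ** 2 * (1.0 - c * np.dot(x, x)))
    ok = ok and bool(np.all(x > 0.0)) and np.dot(x, x) < 1.0 / c
print(ok)                       # expected: True, i.e. the trajectory stays in S1
```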
Remark 3.1 As shown in (3.3), $\mu(t) \in [0, 1)$ guarantees that $x_{i}(t) > 0$. Hence, the assumption $\mu(t) \in [0, 1)$ in Property 3.1 is necessary and ensures that $S_{1}$ is a positive invariant set.

Remark 3.2 When $\mathbb{T}=\mathbb{R}$, i.e., $\mu(t) \equiv 0$, all $\tilde{t}\in \mathbb{T}$ are left-dense and condition (3.1) is not needed, so our result includes the one in the study by Xu & Yi (2011). If $\mathbb{T}=0.5\mathbb{Z}$, i.e., $\mu(t) \equiv 0.5$, then all $\tilde{t}\in \mathbb{T}$ are left-scattered and (3.1) reduces to $0.5\sqrt{\frac{1}{c}}+0.5\big(w\sqrt{\frac{1}{c}}+h_{i}\big)^{2}<\sqrt{\frac{1}{2c}}$. Hence our results are more general and include those in the study by Xu & Yi (2011) as a special case.

Property 3.2 Assume $\mu(t) \in [0, 1)$. If
$$\begin{align}1<\frac{2\sqrt{l}\left[\sqrt{l}-\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right]}{\left[\sqrt{l}+\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right]^{2}}\end{align}$$ (3.4)
holds for all $t\in \mathbb{T}$, then any solution $x(t)$ with initial condition $x(t_{0})$ is bounded, with $|x_{i}(t)|<\sqrt{l}$ for $i\in \mathcal{N}$ and $t \geq t_{0}$, where $l := \|x(t_{0})\|^{2} + 1/c$.

Proof. We will prove that
$$\begin{align}x_{i}^{2}(t) < l\end{align}$$ (3.5)
holds for $i\in \mathcal{N}$ and $t \geq t_{0}$. Suppose (3.5) is not true. Since $x_{i}^{2}(t_{0}) < l$ for all $i\in \mathcal{N}$, there exist a first time $t_{1} > t_{0}$ and some $i\in \mathcal{N}$ such that $x_{i}^{2}(t_{1})\geq l$. There are two cases to consider.

Case 1: $t_{1}$ is left-dense, i.e., $t_{1} = \rho(t_{1})$. Then $x_{i}^{2}(t_{1})=l$, $x_{j}^{2}(t)< l$ for all $j\in \mathcal{N}$ and $t_{0} \leq t < t_{1}$, and $D^{+}(x_{i}^{2}(t_{1}))^{\Delta}\geq 0$. It follows from $0 \leq \mu(t) < 1$, (1.1) and (3.4) that
$$\begin{align}D^{+}\big(x_{i}^{2}\big(t_{1}\big)\big)^{\Delta} &\leq-2x_{i}^{2}(t_{1})+2|x_{i}(t_{1})|(wx_{i}(t_{1})+h_{i})^{2}|G(x(t_{1}))|\nonumber\\ &\quad+\mu(t_{1})\left(-x_{i}(t_{1})+(wx_{i}(t_{1})+h_{i})^{2}G(x(t_{1}))\right)^{2}\nonumber\\ &<-2l+2\sqrt{l}\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\nonumber\\ &\quad+\mu(t_{1})\left(\sqrt{l}+(w\sqrt{l}+h_{i})^{2}(1+2cl)\right)^{2}\nonumber\\ &<-2l+2\sqrt{l}(w\sqrt{l}+h_{i})^{2}(1+2cl)\nonumber\\ &\quad+\left(\sqrt{l}+(w\sqrt{l}+h_{i})^{2}(1+2cl)\right)^{2}<0,\end{align}$$ (3.6)
which leads to a contradiction.

Case 2: $t_{1}$ is left-scattered. Then $x_{i}^{2}(t_{1})\geq l$, $x_{i}^{2}(\rho(t_{1}))< l$ and $x_{j}^{2}(t)<l$ for all $j\in \mathcal{N}$ and $t_{0} \leq t < t_{1}$. Since $0 \leq \mu(t) < 1$, $x_{i}(t_{1})=x_{i}(\rho(t_{1}))+\mu(\rho(t_{1}))x_{i}^{\Delta}(\rho(t_{1}))$ and (1.1) hold, we get
$$\begin{aligned}x_{i}^{2}(t_{1})&=\left[x_{i}(\rho(t_{1}))+\mu(\rho(t_{1}))x_{i}^{\Delta}(\rho(t_{1}))\right]^{2}\\ &=\left[(1-\mu(\rho(t_{1})))x_{i}(\rho(t_{1})) +\mu(\rho(t_{1}))(wx_{i}(\rho(t_{1}))+h_{i})^{2}G(x(\rho(t_{1})))\right]^{2}\\ &\leq\left[(1-\mu(\rho(t_{1})))\sqrt{l} +\mu(\rho(t_{1}))(w\sqrt{l}+h_{i})^{2}(1+2cl)\right]^{2}.\end{aligned}$$
By (3.4), we know that $\sqrt{l}>\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)$. Hence,
$$x_{i}^{2}(t_{1})\leq\left[(1-\mu(\rho(t_{1})))\sqrt{l} +\mu(\rho(t_{1}))\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right]^{2}<l,$$
which leads to a contradiction.

The two cases show that (3.5) is true. Then $|x_{i}(t)| < \sqrt{l}$ holds for all $i\in \mathcal{N}$ and $t \geq t_{0}$. This proves that the network (1.1) is bounded, with $|x_{i}(t)|<\sqrt{l}$ for $i\in \mathcal{N}$ and $t \geq t_{0}$.
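Conditions (3.1) and (3.4) are simple enough to check mechanically before running a simulation. The sketch below does so for a hypothetical set of values of $w$, $c$, $h_{i}$ and an initial point $x(t_{0})$ (which fixes $l=\|x(t_{0})\|^{2}+1/c$); on $\mathbb{T}=0.5\mathbb{Z}$ every point is left-scattered, so $\mu\equiv 0.5$ in (3.1).

```python
import numpy as np

# Hypothetical parameter set; both conditions happen to hold for it.
w, c, mu = 0.01, 0.01, 0.5
h = 0.1                                  # h1 = h2 = h
x0 = np.array([0.1, 0.1])
l = np.dot(x0, x0) + 1.0 / c             # l from Property 3.2

# Condition (3.1), with every point of T = 0.5Z left-scattered:
lhs = (1 - mu) * np.sqrt(1 / c) + mu * (w * np.sqrt(1 / c) + h) ** 2
print("(3.1):", lhs < np.sqrt(1 / (2 * c)))          # expected: True

# Condition (3.4):
q = (w * np.sqrt(l) + h) ** 2 * (1 + 2 * c * l)
print("(3.4):", 1 < 2 * np.sqrt(l) * (np.sqrt(l) - q) / (np.sqrt(l) + q) ** 2)
```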
Remark 3.3 If $\mathbb{T}=\mathbb{R}$, i.e., $\mu(t) \equiv 0$, then (3.4) reduces to $\sqrt{l}>(w\sqrt{l}+h_{i})^{2}(1+2cl)$. Compared with the result in Xu & Yi (2011), the result in this paper is more concise and more easily satisfied. If $\mathbb{T}=0.5\mathbb{Z}$, i.e., $\mu(t) \equiv 0.5$, then (3.4) reduces to $0.5[\sqrt{l}+ (w\sqrt{l}+h_{i})^{2}(1+2cl)]^{2}<2\sqrt{l}[\sqrt{l}-(w\sqrt{l}+h_{i})^{2}(1+2cl)]$. For any initial condition $x(t_{0})$, the solution $x(t)$ is bounded under our new conditions. Obviously, the new boundedness criteria are more general than those in the study by Xu & Yi (2011).

Theorem 3.1 Assume that $\mu(t) \in [0, 1)$ and (3.4) hold. Then the compact set
$$S = \left\{ x\in \mathbb{R}^{2} \;\Big|\;\|x\|^{2}\leq \frac{1}{c}\right\}$$
is a globally attractive set of the network (1.1).

Proof. Denote $\limsup_{t\rightarrow \infty}\|x(t)\|^{2} = \xi$. It follows from Property 3.2 that each $x_{i}(t)$ is bounded, so $\xi <+\infty$. We now prove that
$$\begin{align}\xi\leq\frac{1}{c}.\end{align}$$ (3.7)
Suppose (3.7) is not true, i.e., $\xi>\frac{1}{c}$. We can choose a small $\epsilon > 0$ with $\frac{1}{c}<\xi -\epsilon$. For this choice of $\epsilon$, by the basic property of the upper limit, there exists a time $t_{2} \geq 0$ such that $\|x(t)\|^{2} < \xi + \epsilon$ for $t \geq t_{2}$. We will prove that there exists a time $t_{3} \geq t_{2}$ such that
$$\begin{align}\left(\sum_{i=1}^{2}x_{i}^{2}(t)\right)^{\Delta}<0, \quad t\geq t_{3}.\end{align}$$ (3.8)
If (3.8) is not true, there must exist a $t_{4} \geq t_{2}$ such that
$$\left(\sum_{i=1}^{2}x_{i}^{2}(t_{4})\right)^{\Delta}\geq 0.$$
Since $|x_{i}(t_{4})|<\sqrt{l}$ holds for $i\in \mathcal{N}$, it follows from (3.4) that
$$\begin{aligned}\left(\sum_{i=1}^{2}x_{i}^{2}(t_{4})\right)^{\Delta}&\leq-2\sum_{i=1}^{2}x_{i}^{2}(t_{4}) +2\sum_{i=1}^{2}|x_{i}(t_{4})|(wx_{i}(t_{4})+h_{i})^{2}|G(x(t_{4}))|\\ &\quad+\sum_{i=1}^{2}\mu(t_{4})\left(-x_{i}(t_{4})+(wx_{i}(t_{4})+h_{i})^{2}G(x(t_{4}))\right)^{2}\\ &\leq-4l+4\sqrt{l}\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl) +2\mu(t_{4})\left(\sqrt{l}+\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right)^{2}<0,\end{aligned}$$
where $l := \|x(t_{0})\|^{2} + 1/c$. This is a contradiction; thus (3.8) is true. By (3.8), $\|x(t)\|^{2}$ is monotonically decreasing and its limit exists, i.e.,
$$\lim_{t\rightarrow\infty}\|x(t)\|^{2}=\limsup_{t\rightarrow\infty}\|x(t)\|^{2}=\xi.$$
So there exists a time $t_{5} \geq t_{3}$ such that
$$\frac{1}{c}<\xi-\epsilon<\|x(t)\|^{2}<\xi+\epsilon$$
for all $t \geq t_{5}$. Together with (1.1), we have
$$\begin{align}x_{i}^{\Delta}(t)=-x_{i}(t)+(wx_{i}(t)+h_{i})^{2}\left(1-c\sum_{j=1}^{2}x_{j}^{2}(t)\right) \leq-x_{i}(t), \quad t\geq t_{5}.\end{align}$$ (3.9)
First assume that $x_{i}(t) > 0$ for $t \geq t_{5}$, where $i\in \mathcal N$. It follows from (3.9) and Lemma 2.1 that
$$x_{i}(t)\leq x_{i}(t_{5})e_{-1}(t,t_{5}),$$
where $i\in \mathcal{N}$ and $t \geq t_{5}$. Since $\mu(t) \in [0, 1)$, we have
$$\lim_{t\rightarrow\infty}x_{i}(t)\leq \lim_{t\rightarrow\infty}x_{i}(t_{5})e_{-1}(t,t_{5})=0,$$
which contradicts $x_{i}(t) > 0$ for $t \geq t_{5}$. Next, for $i\in \mathcal N$, assume that $x_{i}(t) < 0$ for $t \geq t_{5}$. From (3.9) and Lemma 2.1, one gets
$$x_{i}(t)\geq\eta\int_{t_{5}}^{t}e_{-1}(t,\sigma(\tau))\Delta\tau$$
for $t \geq t_{5}$, where $\eta =(w\sqrt{l}+h_{i})^{2}\left(1-c(\xi +\epsilon)\right)$.
Since $\mu(t) \in [0, 1)$, we have
$$\lim_{t\rightarrow\infty}x_{i}(t)\geq \eta\lim_{t\rightarrow\infty}\int_{t_{5}}^{t}e_{-1}(t,\sigma(\tau))\Delta\tau=0,$$
which contradicts $x_{i}(t) < 0$ for $t \geq t_{5}$. Finally, if $x_{i}(t) = 0$ for $i\in \mathcal N$, then (3.7) holds trivially. In each case we obtain a contradiction, which proves that (3.7) is true. The proof is complete.

Remark 3.4 Note that if $\mathbb{T}=\mathbb{R}$, i.e., $\mu(t) \equiv 0$, then (3.4) reduces to $\sqrt{l}>(w\sqrt{l}+h_{i})^{2}(1+2cl)$. Obviously, our criteria are clearer and more concise than those of Xu & Yi (2011) and are easy to satisfy. Moreover, Theorem 3.1 covers not only the continuous-time situation of Xu & Yi (2011) but also discrete-time and hybrid-time situations, such as $\mathbb{T}=n\mathbb{Z}$ ($n \in (0, 1)$), $\mathbb{T}=\bigcup_{s=0}^{+\infty}[s+0.5, s+1]$ $(s\in \mathbb{N})$ and so on.

4. Complete convergence

In this section, the complete convergence of (1.1) is studied. The general method for studying complete convergence of neural networks is to construct suitable energy functions.

Theorem 4.1 Assume that the network (1.1) is bounded and $\int_{0}^{t}\left(w x_{i}^{\sigma}(\tau)+h_{i}\right)^{3}\Delta\tau <+\infty$ for $i\in\mathcal N$. If there exists an $\alpha > 0$ such that
$$\begin{align}1-H(\mu(t))\geq\alpha,\quad\forall t \in\mathbb{T}\end{align}$$ (4.1)
holds, where
$$H(\mu(t))=\left[\dfrac{1}{2}+\dfrac{2c}{3w}(w\sqrt{l}+h)^{3}-w(1+2cl)\left(\dfrac{1}{3}\mu(t)wn+wn+h\right)\right]\mu(t),$$
with $|x_{i}(t)|\leq \sqrt{l}$, $|x_{i}^{\Delta}(t)|\leq n$ ($i\in \mathcal N$), $h=\max_{1\leq i\leq 2}\{h_{i}\}$, $\beta =\dfrac{4\sqrt{l}nc}{3w}$ and $l=\|x(t_{0})\|^{2}+\dfrac{1}{c}$, then the network (1.1) is completely stable.
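Condition (4.1) is likewise easy to test numerically: given the bounds $\sqrt{l}$ and $n$ and the graininess values occurring on $\mathbb{T}$, one evaluates $H(\mu)$ and reads off a feasible $\alpha$. The sketch below uses hypothetical bounds; it is only a feasibility check, not part of the proof.

```python
import numpy as np

def H(mu, w, c, h, l, n):
    # H(mu(t)) from Theorem 4.1; l, n are the assumed trajectory bounds
    # |x_i(t)| <= sqrt(l), |x_i^Delta(t)| <= n.
    return (0.5 + (2 * c) / (3 * w) * (w * np.sqrt(l) + h) ** 3
            - w * (1 + 2 * c * l) * (mu * w * n / 3 + w * n + h)) * mu

# Hypothetical data: l as in Property 3.2 for x0 = (0.1, 0.1), n a crude bound.
w, c, h, n = 0.01, 0.01, 0.1, 1.0
l = 0.02 + 1.0 / c
mus = np.linspace(0.0, 0.5, 6)           # graininess values occurring on T
alpha = 1.0 - max(H(mu, w, c, h, l, n) for mu in mus)
print("alpha =", alpha, "; (4.1) holds" if alpha > 0 else "; (4.1) fails")
```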
Proof. Denote $a_{i}=wx_{i}(t)+h_{i}$. By Lemma 2.2,
$$\left[(wx_{i}(t)+h_{i})^{3}\right]^{\Delta} =\left(a_{i}^{3}\right)^{\Delta} =a_{i}^{\Delta} a_{i}^{2}+a_{i}^{\sigma} \left(a_{i}^{2}\right)^{\Delta} =a_{i}^{\Delta} a_{i}^{2}+a_{i}^{\sigma}\left(a_{i}^{\sigma}+a_{i}\right)a_{i}^{\Delta} =3a_{i}^{2}a_{i}^{\Delta}+\mu^{2}(t)\left(a_{i}^{\Delta}\right)^{3}+3\mu(t)\left(a_{i}^{\Delta}\right)^{2}a_{i}$$
and
$$[G(x(t))]^{\Delta}=-2c\sum_{j=1}^{2}x_{j}(t)x_{j}^{\Delta}(t)-c\mu(t)\|x^{\Delta}(t)\|^{2},$$
where $i\in \mathcal N$ and $t\in \mathbb{T}$. Construct an energy function $\widehat{E}(t)=\frac{1}{2}\sum_{i=1}^{2}x_{i}^{2}(t)-\frac{1}{3w}\sum_{i=1}^{2}G(x(t))(wx_{i}(t)+h_{i})^{3}$ and let
$$\begin{align}E(t)=\widehat{E}(t)-\beta\int_{0}^{t}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(\tau)+h_{i}\right)^{3}\Delta\tau.\end{align}$$ (4.2)
It follows from (1.1) and (4.2) that
$$\begin{align*}E^{\Delta}(t)&=\frac{1}{2}\sum_{i=1}^{2}\left(x_{i}^{2}(t)\right)^{\Delta}-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{3}G(x(t))\right]^{\Delta}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &=\frac{1}{2}\sum_{i=1}^{2}\left(x_{i}^{2}(t)\right)^{\Delta}-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{3}\right]^{\Delta} G(x(t))-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{\sigma}\right]^{3} G^{\Delta}(x(t))-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &=\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)+\frac{\mu(t)}{2}\|x^{\Delta}(t)\|^{2}-\frac{1}{3}\mu^{2}(t)w^{2}G(x(t))\sum_{i=1}^{2}\left(x_{i}^{\Delta}(t)\right)^{3}\\ &\quad-\sum_{i=1}^{2}(w x_{i}(t)+h_{i})^{2}G(x(t))x_{i}^{\Delta}(t)-\mu(t)w G(x(t))\sum_{i=1}^{2}(w x_{i}(t)+h_{i})\left(x_{i}^{\Delta}(t)\right)^{2}\\ &\quad+\frac{2c}{3w}\left(\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)\right)\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}+\frac{c\mu(t)}{3w}\|x^{\Delta}(t)\|^{2}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &=-\|x^{\Delta}(t)\|^{2}+\frac{\mu(t)}{2}\|x^{\Delta}(t)\|^{2}+\frac{2c}{3w}\left(\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)\right)\sum_{i=1}^{2}\left(wx_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\quad-\mu(t)w G(x(t))\sum_{i=1}^{2}\left(\frac{w\mu(t)}{3}\left(x_{i}^{\Delta}(t)\right)^{3}+w x_{i}(t)\left(x_{i}^{\Delta}(t)\right)^{2}+h_{i}\left(x_{i}^{\Delta}(t)\right)^{2}\right)\\ &\quad+\left[\frac{c\mu(t)}{3w}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\right]\|x^{\Delta}(t)\|^{2}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\leq-\left[1-\frac{\mu(t)}{2}-\frac{2c\mu(t)}{3w}(w\sqrt{l}+h)^{3}+\mu(t)w(1+2cl)\left(\frac{1}{3}\mu(t)wn+wn+h\right)\right]\|x^{\Delta}(t)\|^{2}\\ &\quad+\frac{4\sqrt{l}nc}{3w}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\leq-\left[1-\frac{\mu(t)}{2}-\frac{2c\mu(t)}{3w}(w\sqrt{l}+h)^{3}+\mu(t)w(1+2cl)\left(\frac{1}{3}\mu(t)wn+wn+h\right)\right]\|x^{\Delta}(t)\|^{2}.\end{align*}$$
Hence, $E^{\Delta}(t)\leq -\left[1-H(\mu(t))\right]\|x^{\Delta}(t)\|^{2}$. Then
$$\begin{align}E^{\Delta}(t)\leq-\left[1-H(\mu(t))\right]\|x^{\Delta}(t)\|^{2}\leq-\alpha\|x^{\Delta}(t)\|^{2}.\end{align}$$ (4.3)
Clearly, $E(t)$ is monotonically decreasing. Since $\widehat{E}(t)$ is bounded and $\int_{0}^{t}(w x_{i}^{\sigma}(\tau)+h_{i})^{3}\Delta\tau <+\infty$, $E(t)$ is bounded. Thus, there must exist a constant $E_{0}$ such that $\lim_{t\rightarrow \infty}E(t)=E_{0}$. It follows from (4.3) that
$$\|x^{\Delta}(t)\|^{2}\leq-\frac{1}{\alpha}E^{\Delta}(t).$$
Then
$$\int_{0}^{+\infty}\|x^{\Delta}(\tau)\|^{2}\Delta\tau\leq\frac{1}{\alpha}E(0)-\frac{1}{\alpha}\lim_{t\rightarrow\infty}E(t)=\frac{1}{\alpha}E(0)-\frac{1}{\alpha}E_{0} <+\infty.$$
Moreover, $x_{i}^{\Delta}(t)$ $(i=1,2)$ are bounded.
Thus, we have
$$\lim_{t\rightarrow\infty}\|x^{\Delta}(t)\|^{2}=0,$$
and hence
$$\lim_{t\rightarrow\infty}x_{i}^{\Delta}(t)=0,\quad i\in\mathcal N.$$
Since $x(t)$ is bounded, every sequence drawn from $\{x(t)\}$ contains a convergent subsequence. Let $\{x(t_{m})\}$ be any such convergent subsequence; then there exists an $x^{\ast}\in \mathbb{R}^{2}$ such that
$$\lim_{t_{m}\rightarrow +\infty}x(t_{m})=x^{\ast}.$$
It follows from (1.1) that
$$-x_{i}^{\ast}+\left(wx_{i}^{\ast}+h_{i}\right)^{2}G(x^{\ast})=\lim_{t_{m}\rightarrow +\infty}x_{i}^{\Delta}(t_{m})=0,\quad i\in\mathcal N.$$
Thus, $x^{\ast}\in S^{e}$. This shows that $S^{e}$ is not empty and any convergent subsequence of $\{x(t)\}$ converges to a point of $S^{e}$. Next, we prove that
$$\begin{align}\lim_{t\rightarrow +\infty}\mathrm{dist}(x(t),S^{e})=0.\end{align}$$ (4.4)
Suppose (4.4) is not true. Then there exists a constant $\epsilon_{0} > 0$ such that for any $T > 0$ there is a $\overline{t}\geq T$ with $\mathrm{dist}(x(\overline{t}),S^{e})\geq \epsilon_{0}$. Together with the boundedness of $x(t)$, we can choose a convergent subsequence $\{x(\overline{t}_{m})\}$ with $\lim_{\overline{t}_{m}\rightarrow +\infty}x(\overline{t}_{m})=x^{+}\in S^{e}$ such that
$$\mathrm{dist}(x(\overline{t}_{m}),S^{e})\geq\epsilon_{0} \quad(m=1,2,3,\cdots).$$
Letting $\overline{t}_{m}\rightarrow +\infty$, we get $\mathrm{dist}(x^{+}, S^{e}) \geq \epsilon_{0} > 0$, which contradicts $x^{+} \in S^{e}$. This proves that (4.4) is true, and hence the network (1.1) is completely stable. The proof is complete.

5. Computer simulations

In this section, we employ three examples to further illustrate our theoretical results.

Simulation 1: consider $\mathbb{T}=\mathbb{R}$. Let $w = 0.0575$, $c = 8.3750 \times 10^{-3}$ and $h_{1} = h_{2} = 0.5443$. From Theorem 3.1, the network (1.1) is bounded and the compact set
$$S=\left\{x\,\big|\,\|x\|\leq\sqrt{\frac{1}{c}}=10.93\right\}$$
globally attracts all the trajectories of the network. Thus, every trajectory of the network must converge to the equilibrium point $x^{\ast}=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$; see Fig. 1. It is also easy to calculate that the equilibrium points are located in the attractive set $S$. Figures 1 and 2 show the complete convergence behavior and the phase portrait of the network (1.1) originating randomly from positive initial points, respectively.

Fig. 1. Complete convergence of network (1.1) in Simulation 1.

Fig. 2. Phase portrait of network (1.1) in Simulation 1.
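Simulation 1 is straightforward to reproduce, since on $\mathbb{T}=\mathbb{R}$ the model (1.1) is an ordinary differential equation. The following sketch (assuming NumPy and SciPy are available; the initial point and time horizon are our own choices) solves for the equilibrium and integrates one trajectory; it should recover $x^{\ast}\approx(0.3158, 0.3158)^{T}$ and confirm that the trajectory ends inside $S$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Parameters of Simulation 1 (T = R, mu = 0, so (1.1) is an ODE).
w, c = 0.0575, 8.3750e-3
h = np.array([0.5443, 0.5443])

def rhs(t, x):
    # x' = -x + (w x + h)^2 (1 - c ||x||^2), componentwise
    return -x + (w * x + h) ** 2 * (1.0 - c * np.dot(x, x))

# Equilibrium of (1.1): should be close to (0.3158, 0.3158).
x_star = fsolve(lambda x: rhs(0.0, x), np.array([0.3, 0.3]))
print("x* =", x_star)

# One trajectory from a positive initial point; it should enter S and approach x*.
sol = solve_ivp(rhs, (0.0, 30.0), np.array([5.0, 8.0]))
xT = sol.y[:, -1]
print("x(30) =", xT, "; in S:", np.dot(xT, xT) <= 1.0 / c)
```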
Simulation 2: consider $\mathbb{T}=\bigcup_{k=0}^{+\infty}[1.2k, 1.2k+1]$, $k\in \mathbb{N}$. Let $w = 0.2375$, $c = 8.3750 \times 10^{-4}$ and $h_{1} = h_{2} = 0.4717$. From Theorem 3.1, the network (1.1) is bounded and the compact set
$$S=\left\{x\,\big|\,\|x\|\leq\sqrt{\frac{1}{c}}=34.55\right\}$$
globally attracts all the trajectories of the network. Thus, every trajectory of the network must converge to an equilibrium point, and it is easy to calculate that the equilibrium points are located in the attractive set $S$. Figures 3 and 4 show the complete convergence behavior and the phase portrait of the network (1.1) originating randomly from positive initial points; the equilibrium point is $x^{\ast}=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$.

Fig. 3. Complete convergence of network (1.1) in Simulation 2.

Fig. 4. Phase portrait of network (1.1) in Simulation 2.

Simulation 3: consider $\mathbb{T}=0.5\mathbb{Z}$. Let $w = 0.2875$, $c = 8.3750 \times 10^{-3}$ and $h_{1} = h_{2} = 0.4717$. It follows from Theorem 3.1 that the network (1.1) is bounded. The global attractive set is
$$S=\left\{x\,\big|\,\|x\|\leq\sqrt{\frac{1}{c}}=10.93\right\}.$$
By simple calculations, equilibrium points coexist in the network. Figures 5 and 6 show the complete convergence behavior and the phase portrait of the network (1.1). The number of equilibrium points depends on the external inputs, and the equilibrium point here is $x^{\ast}=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$.

Fig. 5. Complete convergence of network (1.1) in Simulation 3.

Fig. 6. Phase portrait of network (1.1) in Simulation 3.

Remark 5.1 In Simulation 1, we can easily see that when $\mathbb{T}=\mathbb{R}$, i.e., $\mu(t) \equiv 0$, the results in this paper reduce to those in the study by Xu & Yi (2011). Figures 1 and 2 show the complete convergence and attractivity of the network in the continuous-time domain. When $\mathbb{T}\neq \mathbb{R}$, i.e., $\mu(t)\neq 0$, we obtain several new criteria guaranteeing global attractivity and complete convergence. Figures 3–6 show that, for different values of $\mu(t)$, the new networks are still completely convergent and every trajectory converges to an equilibrium point in the compact set.

Remark 5.2 When the background input is higher in Simulation 2 (Simulation 3), i.e., $h_{1} = h_{2} = 1.4717$ ($h_{1} = h_{2} = 2.6717$), Fig. 7 (Fig. 8) shows complete convergence of the network. It is clear that all trajectories converge to two equilibrium points, which are symmetric. Hence, the network (1.1) possesses multiple equilibrium points when the background input is higher, and the corresponding convergent behavior exhibits multistability.

Fig. 7. Complete convergence of network (1.1) in Simulation 2.

Fig. 8. Complete convergence of network (1.1) in Simulation 3.
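On $\mathbb{T}=0.5\mathbb{Z}$ no ODE solver is needed: the exact update $x(t+0.5)=x(t)+0.5\,x^{\Delta}(t)$ reproduces Simulation 3 directly. A minimal sketch with the Simulation 3 parameters (the step count and initial point are our own choices):

```python
import numpy as np

# Simulation 3: T = 0.5Z, so x(t + 0.5) = x(t) + 0.5 * x^Delta(t) exactly.
w, c, mu = 0.2875, 8.3750e-3, 0.5
h = np.array([0.4717, 0.4717])

x = np.array([1.0, 1.0])                 # a positive initial point
for _ in range(60):                      # 60 steps, i.e. t from 0 to 30
    x = x + mu * (-x + (w * x + h) ** 2 * (1.0 - c * np.dot(x, x)))
print(x)                                 # expected to approach ~(0.3158, 0.3158)
```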
6. Concluding remarks

In this paper, we focus on some dynamic properties of the simplified network (1.1) with uniform firing rate on time scales. By using the theory of calculus on time scales, we obtain the global attractive set and complete convergence of the new networks. When $\mathbb{T}=\mathbb{R}$, our results complement previously known results; when $\mathbb{T}=\mathbb{Z}$ or other time scales are considered, our results are completely new in the literature. Finally, three examples are given to illustrate the feasibility of the main results.

Acknowledgements

We thank the reviewers for their helpful comments.

Funding

National Natural Science Foundation of China (61573005, 11361010).

References

Agarwal, R., Bohner, M., O'Regan, D. & Peterson, A. (2002) Dynamic equations on time scales. J. Comput. Appl. Math., 141, 1–26.

Bohner, M. & Peterson, A. (2003) Dynamic Equations on Time Scales: An Introduction with Applications. Boston, MA: Birkhäuser.

Cao, J. D. & Liang, J. L. (2004) Boundedness and stability for Cohen–Grossberg neural network with time-varying delays. J. Math. Anal. Appl., 296, 665–685.

Cheng, Q. X. & Cao, J. D. (2015) Synchronization of complex dynamical networks with discrete time delays on time scales. Neurocomputing, 151, 729–736.

Gao, J., Wang, Q. R. & Zhang, L. W. (2016) Existence and stability of almost-periodic solutions for cellular neural networks with time-varying delays in leakage terms on time scales. Math. Meth. Appl. Sci., 39, 1316–1375.

Huang, Z. K., Raffoul, Y. N. & Cheng, C. Y. (2017) Scale-limited activating sets and multiperiodicity for threshold-linear networks on time scales. IEEE Trans. Cybern., 44, 488–499.

Li, C. H. & Yang, S. Y. (2009) Global attractivity in delayed Cohen–Grossberg neural network models. Chaos Solitons Fractals, 39, 1975–1987.

Liu, J., Liu, X. Z. & Xie, W. C. (2012) Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations. Inf. Sci., 183, 92–105.

Liu, X. L. & Li, W. T. (2007) Periodic solutions for dynamic equations on time scales. Nonlinear Anal., 67, 1457–1463.

Rehim, M., Jiang, H. J. & Teng, Z. D. (2004) Boundedness and stability for nonautonomous cellular neural networks with delay. Neural Netw., 17, 1017–1025.

Salinas, E. (2003) Background synaptic activity as a switch between dynamical states in a network. Neural Comput., 15, 1439–1475.

Wan, L. & Zhou, Q. H. (2012) Attractor and boundedness for stochastic Cohen–Grossberg neural networks with delays. Neurocomputing, 79, 164–167.

Xu, F. & Yi, Z. (2011) Convergence analysis of a class of simplified background neural networks with two subnetworks. Neurocomputing, 74, 3877–3883.

Xu, F., Zhang, L. & Qu, H. (2008) Convergence analysis of background neural networks with two subnetworks. IEEE Trans. Neural Comput., 21, 440–444.

Yang, L. & Li, K. (2015) Existence and exponential stability of periodic solution for stochastic Hopfield neural networks on time scales. Neurocomputing, 167, 543–550.

Yi, Z., Heng, P. A. & Fu, A. W. (1999) Estimate of exponential convergence rate and exponential stability for neural networks. IEEE Trans. Neural Netw., 10, 1487–1493.

Yi, Z., Heng, P. A. & Leung, K. S. (2001) Convergence analysis of cellular neural networks with unbounded delay. IEEE Trans. Circuits Syst., 48, 680–687.

Yuan, Z. H., Yuan, L. F., Huang, L. H. & Hu, D. W. (2009) Boundedness and global convergence of non-autonomous neural networks with variable delays. Nonlinear Anal. Real World Appl., 10, 2195–2206.
Google Scholar Crossref Search ADS WorldCat © The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png IMA Journal of Mathematical Control and Information Oxford University Press

Global convergence of a class of networks on time scales

Loading next page...
 
/lp/ou_press/global-convergence-of-a-class-of-networks-on-time-scales-nwBPQSN1Pe

References (22)

Publisher
Oxford University Press
Copyright
© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
ISSN
0265-0754
eISSN
1471-6887
DOI
10.1093/imamci/dny006
Publisher site
See Article on Publisher Site

Abstract

Abstract In this paper, we propose a class of simplified background neural networks model with two subnetworks on time scales. Some basic dynamic properties including positive invariance, boundedness, global attractivity and complete convergence of networks are analyzed. The main contributions in this paper are listed as follows: (1) the global attractive set of the model is verified and conditions for global attractivity are derived. (2) Complete convergence for the new networks are proved by constructing a novel energy function on time scales. Finally, three simulation examples are presented to illustrate the feasibility and effectiveness of the obtained results. 1. Introduction In the past few decades, neural networks have caused people’s high degree of attention due to their potential applications and have been used in many fields such as associative memory, signal processing, optimization problems and so on. For example, when we need to remember someone's phone number from radio report, our neural processing system will work. Without the context information, the same number vanishes promptly from the mind. Clearly, the context determines whether a motor responses or not. From the example, we know that if we want them to have more well result in applications, the dynamic properties play an important role in these fields. For dynamical properties of neural networks, the convergence analysis of neural networks has been extensively studied in the works of Yi et al. (1999), Yi et al. (2001), Xu et al. (2008), Xu & Yi (2011) and Liu et al. (2012). By constructing a sequence of solutions to delayed dynamical systems with high-slope activations, Liu et al. (2012) derived two sets of sufficient conditions for the global exponential stability and convergence of the neural networks. The boundedness, attractivity and stability analysis of neural networks have been studied in the works by Cao & Liang (2004), Rehim et al. (2004), Li & Yang (2009), Yuan et al. (2009) and Wan & Zhou (2012). Cao & Liang (2004) verified uniformly ultimate boundedness and uniform boundedness by utilizing the Hardy inequality. In the study by Li & Yang (2009), the authors investigated the global attractivity of Cohen–Grossberg neural network models with discrete and distributed delays via the Lyapunov functional method. The background model of neural networks was proposed in the studies by Salinas (2003), Xu et al. (2008) and Xu & Yi (2011), which is very different from the existing ones. By employing some mathematical techniques, the complete convergence and the global attractivity of a class of simplified background neural networks model have been investigated in the study by Xu & Yi (2011). As we know that Agarwal et al. (2002) and Bohner & Peterson (2003) compiled and summarized the time scale calculus theory which arouse much attention of scholars and experts. For example, the periodic solutions for dynamic equations on time scales have been discussed in the study by Liu & Li (2007); in the study by Yang & Li (2015), the authors introduced the existence and exponential stability of periodic solution for stochastic Hopfield neural networks on time scales. For other results on time scales, readers may refer to the studies by Gao et al. (2016), Huang et al. (2017) and Cheng & Cao (2015). From the theory of time scales, we can find that it is more general than common calculus and relative results contain continuous and discontinuous situations. 
However, to the best of our knowledge, there are few papers on the global attractive set and complete convergence of background neural networks on time scales. Stimulated by the work of Xu & Yi (2011), we consider a new class of simplified background neural networks with two subnetworks on time scales $$\begin{align}x_{i}^{\Delta}(t)=-x_{i}(t)+\left(wx_{i}(t)+h_{i}\right)^{2}G\left(x(t)\right)\!,\end{align}$$ (1.1) where G(x(t)) = 1 − c∥x(t)∥2, |$i\in \mathcal{N}:=\{1,2\}$| and |$t\in \mathbb{T}$|⁠. Each xi represents the firing rate of each subnetwork, c > 0 is the synaptic connection strength, hi > 0 represents the excitatory external input whose value is independent of the activity, the total synaptic input w to all neurons is a constant. The global convergence and stability result of neural networks are always limited to continuous time domain |$\mathbb{R}$| (see Yi et al., 1999; Yi et al., 2001; Xu et al., 2008; Liu et al., 2012). So how to establish dynamical properties of neural networks on general time scales is still unknown. In this paper, we will introduce the theory of time scales to solve this problem. The novelty and generalization of this paper can be concluded as follows. A more general positively invariant set and boundedness criteria of the new model on time scales are firstly investigated. Compared with the existing results in the study by Xu & Yi (2011), we can see that the main results reduce to ones in Xu & Yi (2011) when |$\mathbb{T}=\mathbb{R}$|⁠. When |$\mathbb{T}=\mathbb{Z}$|⁠, we can establish new stability of simplified background neural networks which have not been reported in the literature. Meanwhile, the complete convergence and stability analysis have been applicable on hybrid time domains. For this new dynamical behavior on time scales, we can see our computer simulations. This paper is organized as follows. In Section 2, some preliminaries are given. We will analyze the boundedness, invariant set and global attractive set of the new model in Section 3. Complete convergence of the new network is obtained in Section 4. In Section 5, computer simulations are provided. Finally, concluding remarks in Section 6. 2. Preliminaries In this section, we shall recall some definitions and state some preliminary results. Let |$\mathbb{T}$| be a time scale, which is a non-empty closed subset of |$\mathbb{R}$|⁠. The forward and backward jump operators σ, ρ: |$\mathbb{T}$||$\rightarrow$||$\mathbb{T}$| and the graininess μ: |$\mathbb{T}$||$\rightarrow$||$\mathbb{R}^{+}$| are defined by |$\sigma (t)=\inf \{s\in \mathbb{T}: s> t\}$|⁠, |$\rho (t)=\sup \{s\in \mathbb{T}: s < t\}$| and μ(t) = σ(t) − t, respectively. A point |$t\in \mathbb{T}$| is called left-dense if |$t>\inf \mathbb{T}$| and ρ(t) = t, left-scattered if ρ(t) < t, right-dense if |$t<\sup \mathbb{T}$| and σ(t) = t and right-scattered if σ(t) > t. If |$\mathbb{T}$| has a left-scattered maximum m, then |$\mathbb{T}^{k}=\mathbb{T}\setminus \{m\}$|⁠; otherwise, |$\mathbb{T}^{k}= \mathbb{T}$|⁠. If |$\mathbb{T}$| has a right-scattered minimum m, then |$\mathbb{T}^{k}= \mathbb{T}\setminus \{m\}$|⁠; otherwise, |$\mathbb{T}^{k}= \mathbb{T}$|⁠. A function |$f:\mathbb{T}\rightarrow \mathbb{R}$| is right-dense continuous provided it is continuous at right-dense points in |$\mathbb{T}$| and its left-side limits exist at left-dense points in |$\mathbb{T}$|⁠. The set of all right-dense continuous functions on |$\mathbb{T}$| is defined by |$\mathbb{C}_{rd} =\mathbb{C}_{rd}(\mathbb{T})=\mathbb{C}_{rd}(\mathbb{T},\mathbb{R})$|⁠. 
If f is continuous at each right-dense point and each left-dense point, then f is said to be a continuous function on |$\mathbb{T}$|⁠. Definition 2.1 (Bohner & Peterson, 2003) For a function |$f:\mathbb{T}\rightarrow \mathbb{R}$| (the range |$\mathbb{R}$| of f may be actually replaced by Banach space) the (delta) derivative is defined by $$f^{\bigtriangleup}=\frac{f(\sigma(t))-f(t)}{\sigma(t)-t},$$ if f is continuous at t and t is right-scattered. If t is right-dense then f is differentiable at t iff the limit $$\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s},$$ exists as a finite number. The derivative is defined by $$f^{\bigtriangleup}=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}.$$ Definition 2.2 (Bohner & Peterson, 2003) A function |$p:\mathbb{T} \rightarrow \mathbb{R}$| is said to be a regressive function if and only if 1 + μ(t)p(t)≠ 0 for all |$t\in \mathbb{T}^{k}$|⁠. Let |$\mathcal R$| be the set of all rd-continuous and regressive functions on |$\mathbb{T}$|⁠. Definition 2.3 (Bohner & Peterson, 2003) If |$p \in \mathcal R$|⁠, then the generalized exponential function ep(t, s) is defined on |$\mathbb{T}$| by $$e_{p}(t,s)=\exp\left[\int_{s}^{t}\xi_{\mu(\tau)}(p(\tau))\Delta\tau\right]\!, \quad \textrm{for} \quad s,\ t \in \mathbb{T},$$ where ξh(z) is a cylinder transformation defined by $$\xi_{h}(z)= \begin{cases} \frac{Log(1+hz)}{h}, &h\neq0,\\ z, &h=0.\end{cases}$$ Lemma 2.1 (Bohner & Peterson, 2003) Let |$p\in \mathcal{R}$| and |$f\in \mathbb{C}_{rd}$|⁠, |$t_{0}\in \mathbb{T}$| and |$y_{0}\in \mathbb{R}$|⁠. The unique solution of initial value problem yΔ ≤ p(t)y + f(t), y(t0) = y0, is given by $$y(t)\leq e_{p}(t,t_{0})y_{0}+\int_{t_{0}}^{t}e_{p}(t,\sigma(\tau))f(\tau)\Delta\tau.$$ In above lemma, ‘≤’ can be replaced by ‘≥’. Lemma 2.2 (Bohner & Peterson, 2003) Assume |$f, g:\mathbb{T} \rightarrow \mathbb{R}$| are differential at |$t\in \mathbb{T}^{k}$|⁠. Then (fg)△ = f△g + fσg△ = fg△ + f△gσ. Lemma 2.3 (Bohner & Peterson (2003)) If f ∈ Crd and |$t\in \mathbb{T}^{k}$|⁠, then $$\int_{t}^{\sigma(t)}f(\tau)\Delta\tau= \mu(t)f(t).$$ Definition 2.4 The network (1.1) is bounded if each of its trajectories is bounded. Definition 2.5 Let S be a compact subset of |$\mathbb{R}^{n}$|⁠. We denote the ϵ-neighborhood of S by Sε. The compact set S is called a global attractive set of (1.1) if for any ϵ > 0, all trajectories of that network ultimately enter and remain in Sε. Definition 2.6 A vector |$x^{*}=(x_{1}^{\ast }, x_{2}^{\ast })^{T} \in \mathbb{R}^{2}$| is called an equilibrium point of (1.1), if $$-x_{i}^{\ast}+\big(wx_{i}^{\ast}+h_{i}\big)^{2}\left(1-c\sum_{j=1}^{2}x_{j}^{*2}\right)=0 \quad (i=1,2).$$ Denote Se by the equilibrium set of (1.1). Definition 2.7 The network (1.1) is said to be completely convergent if the equilibrium set Se is not empty and every trajectory xi(t) (i = 1, 2) of (1.1) converges to Se, that is $$dist(x(t),S^{e})= \underset{x^{\ast}\in S^{e}}{\min}\|x(t)-x^{\ast}\|\rightarrow 0,\quad t\rightarrow \infty.$$ Definition 2.8 A set S ⊆ Crd is called to be a positive invariant set of system (1.1) if for any initial value ϕ = x(t0) ∈ S, we have x(t; t0, ϕ) ∈ S, where t ≥ t0 and |$t,\ t_{0}\in \mathbb{T}$|⁠. Definition 2.9 The neural network (1.1) is said to be completely stable if for any initial value ϕ, the corresponding solution trajectory x(t) converges to a certain equilibrium point. Definition 2.10 For any function f(t), the right-hand derivative is defined as $$D^{+}f(t)=\lim_{\epsilon\rightarrow 0^{+}}\frac{f(t+\epsilon)-f(t)}{\epsilon}.$$ 3. 
Global attractive set In this section, we give a global attractive set which attracts all the trajectories of (1.1). Property 3.1 Assume μ(t) ∈ [0, 1). If $$\begin{align}(1-\mu(t))\sqrt{\frac{1}{c}}+\mu(t)\left(w\sqrt{\frac{1}{c}}\!+\!h_{i}\right)^{2} <\sqrt{\frac{1}{2c}}\end{align}$$ (3.1) holds for all left-scatter points |$t\in \mathbb{T}$|⁠, then $$S_{1}=\left\{ (x_{1},x_{2})\in \mathbb{R}^{2}\big|\; x_{1}>0, x_{2}>0\;\;\textrm{and} \;\;\|x\|^{2}<1/ c \right\}$$ is a positive invariant set of (1.1). Proof. Let x(t) be the solution of (1.1) with initial condition x(t0) ∈ S1. We will prove $$\begin{align}\|x(t)\|^{2}=\sum_{i=1}^{2}x{_{i}^{2}}(t) < \frac{1}{c}\end{align}$$ (3.2) holds for t ≥ t0. By the way of contradiction, we suppose that (3.2) is not true. Then there exists a first time |$\tilde{t}$| > t0 such that |$\sum _{i=1}^{2}x{_{i}^{2}}(\tilde{t}) \geq 1/c.$| There are two cases for us to further consider. Case 1: |$\tilde{t}$| is left-dense, i.e., |$\tilde{t}$| = ρ(⁠|$\tilde{t}$|⁠). Then, we can get |$\sum _{i=1}^{2}x{_{i}^{2}}(\tilde{t})=1/ c$|⁠, |$D^{+}\sum _{i=1}^{2}(x{_{i}^{2}}(\tilde{t}))^{\Delta } \geq 0$| and |$\sum _{i=1}^{2}x{_{i}^{2}}(t) < 1/ c$|⁠, where |$t\in \mathbb{T}\;\;\textrm{and}\;\; t_{0}\leq t<\tilde{t}$|⁠. From 0 ≤ μ(t) < 1 and (1.1), we can get $$\begin{align*}D^{+}\sum_{i=1}^{2}{\big({x_{i}^{2}}\big(\tilde{t}\big)\big)}^{\Delta} &= -2\sum_{i=1}^{2}{x_{i}^{2}}(\tilde{t}) +2\sum_{i=1}^{2}x_{i}(\tilde{t})(wx_{i}(\tilde{t})\!+\!h_{i})^{2}G(x(\tilde{t}))\\ &\quad+\sum_{i=1}^{2}\mu(\tilde{t})(-x_{i}(\tilde{t})+(wx_{i}(\tilde{t})\!+\!h_{i})^{2}G(x(\tilde{t})))^{2}\\ &=(\mu(\tilde{t})-2)\sum_{i=1}^{2}{x_{i}^{2}}(\tilde{t})<(\mu(\tilde{t})-2) \frac{1}{c}\end{align*}$$ which leads to a contradiction. Case 2: |$\tilde{t}$| is left-scatter, we know that |$\sum _{i=1}^{2}{x_{i}^{2}}(\tilde{t})\geq 1 / c$|⁠, |$\sum _{i=1}^{2}{x_{i}^{2}}(\rho (\tilde{t}))< 1/ c$| and |$\sum _{i=1}^{2}{x_{i}^{2}}(t)< 1 / c$|⁠, t0 ≤ t < |$\tilde{t}$|⁠. It follows from 0 < μ(ρ(⁠|$\tilde{t}$|⁠)) < 1, 0 < G(xi(ρ(⁠|$\tilde{t}$|⁠))) < 1, |$x_{i}(\tilde{t})=x_{i}(\rho (\tilde{t}))+\mu (\rho (\tilde{t})){x_{i}^{\Delta }(\rho (\tilde{t}))}$| and (3.1), we can get $$\begin{align*}\sum_{i=1}^{2}{x_{i}^{2}}(\tilde{t})=&\;\sum_{i=1}^{2}\bigg[x_{i}(\rho(\tilde{t}))+\mu(\rho(\tilde{t})){x_{i}^{\Delta}(\rho(\tilde{t}))}\bigg]^{2}\\ =&\;\sum_{i=1}^{2}\bigg[(1-\mu(\rho(\tilde{t})))x_{i}(\rho(\tilde{t})) +\;\mu(\rho(\tilde{t}))(wx_{i}(\rho(\tilde{t}))\!+\!h_{i})^{2}G(x_{i}(\rho(\tilde{t})))\bigg]^{2}\\ \leq&\;\sum_{i=1}^{2}\left[(1-\mu(\rho(\tilde{t})))\sqrt{\frac{1}{c}} +\;\mu(\rho(\tilde{t}))\left(w\sqrt{\frac{1}{c}}+\!h_{i}\right)^{2}\right]^{2}<\frac{1}{c}\end{align*}$$ which leads to a contradiction. The two cases show that (3.2) is true. Then $$(wx_{i}(t)\!+\!h_{i})^{2}G(x(t))=(wx_{i}(t)\!+\!h_{i})^{2}\left(1-c\sum\limits_{j=1}^{2}{x_{j}^{2}}(t)\right)>0$$ holds for |$i\in \mathcal{N}$| and t ≥ t0. It follows from Lemma 2.1 that $$\begin{eqnarray}x_{i}(t)&=&\;\int_{t_{0}}^{t}\!\!e_{-1}(t,\!\sigma(\tau))(wx_{i}(\tau)\!+\!h_{i})^{2}G(x(\tau))\Delta\tau +e_{-1}(t,t_{0})x_{i}(t_{0})\nonumber\\ &=&\;\int_{t_{0}}^{t}\!\!\frac{1}{1-\mu(\tau)}e_{-1}(\tau, t)(wx_{i}(\tau)\!+\!h_{i})^{2}G(x(\tau))\Delta\tau +e_{-1}(t,t_{0})x_{i}(t_{0})\nonumber\\ &>&0,\end{eqnarray}$$ (3.3) where |$i\in \mathcal{N}$|⁠. From (3.2) and (3.3), we get that S1 is a positive invariant set of (1.1). The proof is complete. Remark 3.1 As shown in (3.3), μ(t) ∈ [0, 1) can guarantee xi(t) > 0. 
Hence, the assumption μ(t) ∈ [0, 1) in Property 3.1 is necessary and ensures S1 is a positive invariance set. Remark 3.2 As |$\mathbb{T}=\mathbb{R}$|⁠, i.e., μ(t) ≡ 0, then all |$\tilde{t}\in \mathbb{T}$| are left-dense. Without (3.1), one can see our result can include ones in the study by Xu & Yi (2011). If |$\mathbb{T}=0.5\mathbb{Z}$|⁠, i.e., μ(t) = 0.5, then all |$\tilde{t}\in \mathbb{T}$| are left-scatter, (3.1) reduces to |$0.5\sqrt{\frac{1}{c}}+0.5(w\sqrt{\frac{1}{c}}\!+\!h_{i})^{2}<\sqrt{\frac{1}{2c}}$|⁠. So our results are more general and include ones in the study by Xu & Yi (2011) as a special case. Property 3.2 Under the assume μ(t) ∈ [0, 1), if $$\begin{align}1<\frac{2\sqrt{l}\left[\sqrt{l}-\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right]}{\left[\sqrt{l}+\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right]^{2}}\end{align}$$ (3.4) hold |$t\in \mathbb{T}$|⁠, then any solution x(t) with initial condition x(t0) is bounded with |$|x_{i}(t)|<\sqrt{l}$| for |$i\in \mathcal{N}$| and t ≥ t0, where l := ∥x(t0)∥2 + 1/c. Proof. We will prove $$\begin{align}{x_{i}^{2}}(t) < l\end{align}$$ (3.5) holds for |$i\in \mathcal{N}$| and t ≥ t0. Suppose (3.5) is not true, since |${x_{i}^{2}}(t_{0}) < l$| for |$\forall i\in \mathcal{N}$|⁠, there exist a first time t1 > t0 and some |$i\in \mathcal{N}$| such that |${x_{i}^{2}}(t_{1})\geq l$|⁠. Then there are two cases for us to consider. Case 1: t1 is left-dense, i.e., t1 = ρ(t1). Then, it gives |${x_{i}^{2}}(t_{1})=l$|⁠, |${x_{j}^{2}}(t)< l$| for |$\forall j\in \mathcal{N}$|⁠, t0 ≤ t < t1 and |$D^{+}{({x_{i}^{2}}(t_{1}))}^{\Delta }\geq 0$|⁠. It follows from 0 ≤ μ(t) < 1, (1.1) and (3.4) that $$\begin{align}D^{+}{\big({x_{i}^{2}}\big(t_{1}\big)\big)}^{\Delta} &\leq-2{x_{i}^{2}}(t_{1})+2|x_{i}(t_{1})|(wx_{i}(t_{1})+h_{i})^{2}|G(x(t_{1}))|\nonumber\\ &\quad+\;\mu(t_{1})\left(-x_{i}(t_{1})+(wx_{i}(t_{1})+h_{i})^{2}G(x(t_{1}))\right)^{2}\nonumber\\ &<-2l+2\sqrt{l}\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\nonumber\\ &\quad+\;\mu(t_{1})\left(\sqrt{l}+(w\sqrt{l}+h_{i})^{2}(1+2cl)\right)^{2}\nonumber\\ &<-2l+2\sqrt{l}(w\sqrt{l}+h_{i})^{2}(1+2cl)\nonumber\\ &\quad+\;\left(\sqrt{l}+(w\sqrt{l}+h_{i})^{2}(1+2cl)\right)^{2}<0\end{align}$$ (3.6) which leads to a contradiction. Case 2: t1 is left-scatter, then |${x_{i}^{2}}(t_{1})\geq l$|⁠, |${x_{i}^{2}}(\rho (t_{1}))< l$| and |${x_{j}^{2}}(t)<l$| for |$\forall j\in \mathcal{N}$|⁠, t0 ≤ t < t1. Since 0 ≤ μ(t) < 1, |$x_{i}(t_{1})=x_{i}(\rho (t_{1}))+\mu (\rho (t_{1})){x_{i}^{\Delta }(\rho (t_{1}))}$| and (1.1), we can get $$\begin{aligned}{x_{i}^{2}}(t_{1})=&\;\left[x_{i}(\rho(t_{1}))+\mu(\rho(t_{1})){x_{i}^{\Delta}(\rho(t_{1}))}\right]^{2}\\[3pt] =&\;\left[(1-\mu(\rho(t_{1})))x_{i}(\rho(t_{1})) +\mu(\rho(t_{1}))(wx_{i}(\rho(t_{1}))\!+\!h_{i})^{2}G(x(\rho(t_{1})))\right]^{2}\\[3pt] \leq&\;\left[(1-\mu(\rho(t_{1})))\sqrt{l} +\mu(\rho(t_{1}))(w\sqrt{l}+\!h_{i})^{2}(1+2cl)\right]^{2}.\end{aligned}$$ Due to (3.4), we know that |$\sqrt{l}>\left (w\sqrt{l}+h_{i}\right )^{2}(1+2cl)$|⁠. Hence, we get $$\begin{aligned}{x_{i}^{2}}(t_{1})\leq&\;\left[ \left(1-\mu\left(\rho(t_{1})))\sqrt{l} +\mu(\rho(t_{1})\right)\left(w\sqrt{l}+\!h_{i}\right)^{2}(1+2cl)\right.\right]^{2}<l\end{aligned}$$ which leads to a contradiction. The two cases show that (3.5) is true. Then $$|x_{i}(t)| < \sqrt{l}$$ holds for |$\forall i\in \mathcal{N}$| and t ≥ t0. This proves that the network (1.1) is bounded with |$|x_{i}(t)|<\sqrt{l}$| for |$i\in \mathcal{N}$| and t ≥ t0. 
Remark 3.3 If |$\mathbb{T}=\mathbb{R}$|⁠, i.e., μ(t) ≡ 0, then (3.4) reduces to |$\sqrt{l}>(w\sqrt{l}+h_{i})^{2}(1+2cl)$|⁠. Compared with the result in Xu & Yi (2011), the result in this paper is more concisely and easily satisfied. If |$\mathbb{T}=0.5\mathbb{Z}$|⁠, i.e., μ(t) = 0.5, then (3.4) reduces to |$0.5[\sqrt{l}+ (w\sqrt{l}+h_{i})^{2}(1+2cl)]^{2}<2\sqrt{l}[\sqrt{l}-(w\sqrt{l}+h_{i})^{2}(1+2cl)]$|⁠. For any initial condition x(t0), the sequence solution x(t) is bounded under our new conditions. Obviously, the new bound criteria are more general than ones in the study of Xu & Yi (2011). Theorem 3.1 Assume that μ(t) ∈ [0, 1) and (3.4) hold, then the compact set $$S = \left\{ x\in \mathbb{R}^{2} \;\Big|\;\;\|x\|^{2}\leq\ \!\! \frac{1}{c}\;\right \}$$ is a globally attractive set of the network (1.1). Proof. Denote $$\underset{t\rightarrow \infty}{\lim}\sup{\|x(t)\|}^{2} = \xi.$$ It follows from Property 3.2 that each xi(t) is bounded. Then |$\xi <+\infty$|⁠. Now we should prove that $$\begin{align}\xi\leq\frac{1}{c}.\end{align}$$ (3.7) Suppose (3.7) is not true, i.e., $$\xi>\frac{1}{c}.$$ It can choose a small ϵ > 0 with |$\dfrac{1}{c}<\xi -\epsilon$|⁠. For this choice of ϵ, by the basic property of the upper limit, there exists a time t2 ≥ 0 such that ∥x(t)∥2 < ξ + ϵ, t ≥ t2. It will prove that there exists a time t3 ≥ t2 such that $$\begin{align}\left(\sum_{i=1}^{2}{x_{i}^{2}}(t)\right)^{\Delta}<0, \quad t\geq t_{3}.\end{align}$$ (3.8) If (3.8) is not true, there must exist a t4 ≥ t2 such that $$\left(\sum_{i=1}^{2}{x_{i}^{2}}(t_{4})\right)^{\Delta}\geq 0.$$ Since |$|x_{i}(t_{4})|<\sqrt{l}$| holds for |$i\in \mathcal{N}$|⁠, it follows (3.4) that $$\begin{aligned}\left(\sum_{i=1}^{2}{x_{i}^{2}}(t_{4})\right)^{\Delta}&\leq-2\sum_{i=1}^{2}{x_{i}^{2}}(t_{4}) +2\sum_{i=1}^{2}|x_{i}(t_{4})|(wx_{i}(t_{4})+h_{i})^{2}|G(x(t_{4}))|\\ &\quad+\sum_{i=1}^{2}\mu(t_{4})\left(-x_{i}(t_{4})+(wx_{i}(t_{4})+h_{i})^{2}G(x(t_{4}))\right)^{2}\\ &\leq-4l+4\sqrt{l}\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl) +2\mu(t_{4})\left(\sqrt{l}+\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)\right)^{2}<0,\end{aligned}$$ where l := ∥x(t0)∥2 + 1/c. This is a contradiction, thus (3.8) is true. By (3.8), it shows that ∥x(t)∥2 is monotonically decreasing and the limit of ∥x(t)∥2 exists, i.e., $$\underset{t\rightarrow\infty}{\lim}\|x(t)\|^{2}=\underset{t\rightarrow\infty}{\lim}\sup\|x(t)\|^{2}=\xi.$$ So there exists a time t5 ≥ t3 such that $$\frac{1}{c}<\xi-\epsilon<\|x(t)\|^{2}<\xi+\epsilon,$$ for all t ≥ t5. Together with (1.1), we have $$\begin{align}x_{i}^{\Delta}(t)=-x_{i}(t)\!+\!(wx_{i}(t)+h_{i})^{2}\left(1\!-\!c\sum_{j=1}^{2}{x_{j}^{2}}(t)\right) \leq-x_{i}(t), \quad t\geq t_{5}.\end{align}$$ (3.9) Next we assume that xi(t) > 0 for some t ≥ t5, where |$i\in \mathcal N$|⁠. It follows from (3.9) and Lemma 2.1 that $$x_{i}(t)\leq x_{i}(t_{5})e_{-1}(t,t_{5}),$$ where |$i\in \mathcal{N}$| and t ≥ t5. Since μ(t) ∈ (0, 1), we have $$\underset{t\rightarrow\infty}{\lim}x_{i}(t)\leq \underset{t\rightarrow\infty}{\lim}x_{i}(t_{5})e_{-1}(t,t_{5})=0$$ which contradicts xi(t) > 0, where |$i\in \mathcal N$| and t ≥ t5. For each |$i\in \mathcal N$|⁠, we assume that xi(t) < 0 for t ≥ t5. From (3.9) and Lemma 2.1, one gets that $$x_{i}(t)\geq\eta\int_{t_{5}}^{t}e_{-1}(t,\sigma(\tau))\Delta\tau$$ for t ≥ t5 and |$\eta =(w\sqrt{l}+h_{i})^{2}\left (1-c(\xi +\epsilon )\right )$|⁠. 
Since μ(t) ∈ [0, 1), we have $$\underset{t\rightarrow\infty}{\lim}x_{i}(t)\geq \eta\underset{t\rightarrow\infty}{\lim}\int_{t_{5}}^{t}e_{-1}(t,\sigma(\tau))\Delta\tau=0,$$ which contradicts xi(t) < 0 for t ≥ t5. In either case we reach a contradiction, which proves that (3.7) is true. Finally, if xi(t) = 0 for $i\in \mathcal N$, then (3.7) holds trivially. The proof is complete.

Remark 3.4 Note that if $\mathbb{T}=\mathbb{R}$, i.e., μ(t) ≡ 0, then (3.4) reduces to $\sqrt{l}>\left(w\sqrt{l}+h_{i}\right)^{2}(1+2cl)$. Obviously, our criteria are clearer and more concise than those of Xu & Yi (2011), and they are easier to satisfy. Moreover, Theorem 3.1 not only covers the continuous-time situation of Xu & Yi (2011) but also extends to discrete-time and hybrid-time situations, such as $\mathbb{T}=n\mathbb{Z}$ (n ∈ (0, 1)), $\mathbb{T}=\bigcup _{s=0}^{+\infty }[s+0.5, s+1]$ ($s\in \mathbb{N}$) and so on.

4. Complete convergence

In this section, the complete convergence of (1.1) will be studied. The general method for studying complete convergence of neural networks is to construct suitable energy functions.

Theorem 4.1 Assume that the network (1.1) is bounded and $\int _{0}^{t}\left (w x_{i}^{\sigma }(\tau )+h_{i}\right )^{3}\Delta \tau <+\infty$ for each $i\in \mathcal N$. If there exists an α > 0 such that $$\begin{align}1-H(\mu(t))\geq\alpha,\quad\forall t \in\mathbb{T}\end{align}$$ (4.1) holds, where $$H(\mu(t))=\left[\frac{1}{2}+\frac{2c}{3w}\left(w\sqrt{l}+h\right)^{3}-w(1+2cl)\left(\frac{1}{3}\mu(t)wn+wn+h\right)\right]\mu(t),$$ with $|x_{i}(t)|\leq \sqrt{l}$ and $\left |x_{i}^{\Delta }(t)\right |\leq n$ ($i\in \mathcal N$), $h=\underset{1\leq i\leq 2}{\max}\{h_{i}\}$, $\beta =\frac{4\sqrt{l}nc}{3w}$ and $l=\|x(t_{0})\|^{2}+\frac{1}{c}$, then the network (1.1) is completely stable.

Proof. Denote $a_{i}=wx_{i}(t)+h_{i}$. Then $$\left[(wx_{i}(t)+h_{i})^{3}\right]^{\Delta}=\left({a_{i}^{3}}\right)^{\Delta} =a_{i}^{\Delta} {a_{i}^{2}}+a_{i}^{\sigma} \left({a_{i}^{2}}\right)^{\Delta} =a_{i}^{\Delta} {a_{i}^{2}}+a_{i}^{\sigma}\left(a_{i}^{\sigma}+a_{i}\right)a_{i}^{\Delta} =3{a_{i}^{2}}a_{i}^{\Delta}+3\mu(t)\left(a_{i}^{\Delta}\right)^{2}a_{i}+\mu^{2}(t)\left(a_{i}^{\Delta}\right)^{3}$$ and $$[G(x(t))]^{\Delta}=-2c\sum_{j=1}^{2}x_{j}(t)x_{j}^{\Delta}(t)-c\mu(t)\|x^{\Delta}(t)\|^{2},$$ where $i\in \mathcal N$ and $t\in \mathbb{T}$.
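The cubic expansion above follows from the time-scale product rule $(fg)^{\Delta}=f^{\Delta}g+f^{\sigma}g^{\Delta}$ together with the simple useful formula $a^{\sigma}=a+\mu a^{\Delta}$. As a quick sanity check (our sketch, not part of the original paper), the identity can be verified symbolically:

```python
import sympy as sp

# a = a_i(t), d = a_i^Delta(t), mu = graininess mu(t)
a, d, mu = sp.symbols('a d mu')

a_sigma = a + mu * d                        # a^sigma = a + mu * a^Delta
sq_delta = d * a + a_sigma * d              # (a^2)^Delta = a^Delta a + a^sigma a^Delta
cube_delta = d * a**2 + a_sigma * sq_delta  # product rule with f = a, g = a^2

claimed = 3 * a**2 * d + 3 * mu * d**2 * a + mu**2 * d**3
print(sp.expand(cube_delta - claimed))      # prints 0, confirming the expansion
```

For $\mathbb{T}=\mathbb{R}$ (μ ≡ 0) the expansion collapses to the familiar chain rule $(a^{3})'=3a^{2}a'$; the extra μ-terms are precisely the discrete corrections.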
Construct the energy function $\widehat{E}(t)=\frac{1}{2}\sum _{i=1}^{2}{x_{i}^{2}}(t)-\frac{1}{3w}\sum _{i=1}^{2}G(x(t))(wx_{i}(t)+h_{i})^{3}$ and let $$\begin{align}E(t)=\widehat{E}(t)-\beta\sum_{i=1}^{2}\int_{0}^{t}\left(w x_{i}^{\sigma}(\tau)+h_{i}\right)^{3}\Delta\tau.\end{align}$$ (4.2) It follows from (1.1) and (4.2) that $$\begin{align*}E^{\Delta}(t)&=\frac{1}{2}\sum_{i=1}^{2}\left({x_{i}^{2}}(t)\right)^{\Delta}-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{3}G(x(t))\right]^{\Delta}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &=\frac{1}{2}\sum_{i=1}^{2}\left({x_{i}^{2}}(t)\right)^{\Delta}-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{3}\right]^{\Delta} G(x(t))-\frac{1}{3w}\sum_{i=1}^{2}\left[(w x_{i}(t)+h_{i})^{\sigma}\right]^{3} G^{\Delta}(x(t))\\ &\quad-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\&=\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)+\frac{\mu(t)}{2}\|x^{\Delta}(t)\|^{2}-\frac{1}{3}\mu^{2}(t)w^{2}G(x(t))\sum_{i=1}^{2}\left(x_{i}^{\Delta}(t)\right)^{3}\\ &\quad-\sum_{i=1}^{2}(w x_{i}(t)+h_{i})^{2}G(x(t))x_{i}^{\Delta}(t)-\mu(t)w G(x(t))\sum_{i=1}^{2}(w x_{i}(t)+h_{i})\left(x_{i}^{\Delta}(t)\right)^{2}\\&\quad+\frac{2c}{3w}\left(\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)\right)\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\quad+\frac{c\mu(t)}{3w}\|x^{\Delta}(t)\|^{2}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &=-\|x^{\Delta}(t)\|^{2}+\frac{\mu(t)}{2}\|x^{\Delta}(t)\|^{2}+\frac{2c}{3w}\left(\sum_{i=1}^{2}x_{i}(t)x_{i}^{\Delta}(t)\right)\sum_{i=1}^{2}\left(wx_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\quad-\mu(t)w G(x(t))\sum_{i=1}^{2}\left(\frac{w\mu(t)}{3}\left(x_{i}^{\Delta}(t)\right)^{3}+w x_{i}(t)\left(x_{i}^{\Delta}(t)\right)^{2}+h_{i}\left(x_{i}^{\Delta}(t)\right)^{2}\right)\\ &\quad+\left[\frac{c\mu(t)}{3w}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\right]\|x^{\Delta}(t)\|^{2}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\&\leq-\left[1-\frac{\mu(t)}{2}-\frac{2c\mu(t)}{3w}\left(w\sqrt{l}+h\right)^{3}+\mu(t)w(1+2cl)\left(\frac{1}{3}\mu(t)wn+wn+h\right)\right]\|x^{\Delta}(t)\|^{2}\\ &\quad+\frac{4\sqrt{l}nc}{3w}\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}-\beta\sum_{i=1}^{2}\left(w x_{i}^{\sigma}(t)+h_{i}\right)^{3}\\ &\leq-\left[1-\frac{\mu(t)}{2}-\frac{2c\mu(t)}{3w}\left(w\sqrt{l}+h\right)^{3}+\mu(t)w(1+2cl)\left(\frac{1}{3}\mu(t)wn+wn+h\right)\right]\|x^{\Delta}(t)\|^{2}.\end{align*}$$ That is, $E^{\Delta }(t)\leq -\left [1-H(\mu (t))\right ]\|x^{\Delta }(t)\|^{2}$. Together with (4.1), this gives $$\begin{align}E^{\Delta}(t)\leq-\left[1-H(\mu(t))\right]\|x^{\Delta}(t)\|^{2}\leq-\alpha\|x^{\Delta}(t)\|^{2}.\end{align}$$ (4.3) Clearly, E(t) is monotonically decreasing. Since $\widehat{E}(t)$ is bounded and $\sum_{i=1}^{2}\int _{0}^{t}(w x_{i}^{\sigma }(\tau )+h_{i})^{3}\Delta \tau <+\infty$, one gets that E(t) is bounded. Thus, there must exist a constant E0 such that $\underset{t\rightarrow \infty }{\lim}E(t)=E_{0}$. It follows from (4.3) that $$\|x^{\Delta}(t)\|^{2}\leq-\frac{1}{\alpha}E^{\Delta}(t).$$ Then $$\int_{0}^{+\infty}\|x^{\Delta}(\tau)\|^{2}\Delta\tau\leq\frac{1}{\alpha}E(0)-\frac{1}{\alpha}\underset{t\rightarrow\infty}{\lim}E(t)=\frac{1}{\alpha}E(0)-\frac{1}{\alpha}E_{0} <+\infty.$$ Moreover, the delta derivatives $x_{i}^{\Delta }(t)\;(i=1,2)$ are bounded.
Thus, we have $$\underset{t\rightarrow\infty}{\lim}\|x^{\Delta}(t)\|^{2}=0,$$ and hence $$\underset{t\rightarrow\infty}{\lim}x_{i}^{\Delta}(t)=0,\quad i\in\mathcal N.$$ Since x(t) is bounded, every sequence drawn from {x(t)} contains a convergent subsequence. Let {x(tm)} be any such convergent subsequence; then there exists an $x^{\ast }\in \mathbb{R}^{2}$ such that $$\underset{t_{m}\rightarrow +\infty}{\lim}x(t_{m})=x^{\ast}.$$ It follows from (1.1) that $$-x_{i}^{\ast}+\left(wx_{i}^{\ast}+h_{i}\right)^{2}G(x^{\ast})=\underset{t_{m}\rightarrow +\infty}{\lim}x_{i}^{\Delta}(t_{m})=0,\quad i\in\mathcal N.$$ Thus, x* ∈ Se. This shows that Se is non-empty and that any convergent subsequence of {x(t)} converges to a point of Se. Next, we will prove that $$\begin{align}\underset{t\rightarrow +\infty}{\lim}\operatorname{dist}(x(t),S^{e})=0.\end{align}$$ (4.4) Suppose (4.4) is not true. Then there exists a constant ϵ0 > 0 such that for any T > 0 there is a $\overline{t}\geq T$ with $\operatorname{dist}(x(\overline{t}),S^{e})\geq \epsilon _{0}$. Together with the boundedness of x(t), we can choose a convergent subsequence $\{x(\overline{t}_{m})\}$ with $\underset{\overline{t}_{m}\rightarrow +\infty }{\lim}x(\overline{t}_{m})=x^{+}\in S^{e}$ such that $$\operatorname{dist}(x(\overline{t}_{m}),S^{e})\geq\epsilon_{0} \quad(m=1,2,3,\cdots).$$ Letting $\overline{t}_{m}\rightarrow +\infty$, we get dist(x+, Se) ≥ ϵ0 > 0, which contradicts x+ ∈ Se. This proves that (4.4) is true. Hence, the network (1.1) is completely stable. The proof is complete.

5. Computer simulations

In this section, we employ three examples to further illustrate our theoretical results.

Simulation 1: consider $\mathbb{T}=\mathbb{R}$. Let w = 0.0575, $c = 8.3750\times 10^{-3}$ and h1 = h2 = 0.5443. From Theorem 3.1, the network (1.1) is bounded and the compact set $$S=\left\{x\,\Big|\,\|x\|\leq\sqrt{\frac{1}{c}}=10.93\right\}$$ globally attracts all the trajectories of the network. Thus, every trajectory of the network must converge to the equilibrium point $x^{\ast }=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$; see Fig. 1. It is also easy to check that the equilibrium points are located in the attractive set S. Figures 1 and 2 show the complete convergence and the phase portrait of the network (1.1) originating from random positive initial points, respectively.

Fig. 1. Complete convergence of network (1.1) in Simulation 1.

Fig. 2. Phase portrait of network (1.1) in Simulation 1.

Fig. 3. Complete convergence of network (1.1) in Simulation 2.

Simulation 2: consider $\mathbb{T}=\bigcup _{k=0}^{+\infty }[1.2k, 1.2k+1]$, $k\in \mathbb{N}$. Let w = 0.2375, $c = 8.3750\times 10^{-4}$ and h1 = h2 = 0.4717. From Theorem 3.1, the network (1.1) is bounded and the compact set $$S=\left\{x\,\Big|\,\|x\|\leq\sqrt{\frac{1}{c}}=34.55\right\}$$ globally attracts all the trajectories of the network. Thus, every trajectory of the network must converge to an equilibrium point, and the equilibrium points are located in the attractive set S.
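To give a feel for how such trajectories are generated, here is a minimal Python sketch (our illustration, not the authors' code) that iterates the right-hand side of (1.1), as displayed in (3.9); the initial point is an arbitrary assumption.

```python
import numpy as np

def simulate(w, c, h, x0, mu, steps):
    """Iterate network (1.1): on T = mu*Z the update is exact,
    x(t + mu) = x(t) + mu * x^Delta(t); for T = R a small mu gives
    a forward-Euler approximation of the flow."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        G = 1.0 - c * np.sum(x**2)              # background term G(x(t))
        x = x + mu * (-x + (w * x + h)**2 * G)  # right-hand side of (1.1), cf. (3.9)
        traj.append(x.copy())
    return np.array(traj)

# Parameters of Simulation 1 (T = R); the initial point is illustrative.
traj = simulate(w=0.0575, c=8.3750e-3, h=0.5443,
                x0=[5.0, -3.0], mu=0.01, steps=2000)
print(traj[-1])  # approaches (0.3158, 0.3158), well inside ||x|| <= 10.93
```

Plotting traj over the time grid reproduces the qualitative behaviour of Figs 1 and 2; setting, e.g., mu = 0.5 iterates the exact dynamics on $\mathbb{T}=0.5\mathbb{Z}$ instead.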
Figures 3 and 4 show the complete convergence and the phase portrait of the network (1.1) originating from random positive initial points; the equilibrium point is $x^{\ast }=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$.

Fig. 4. Phase portrait of network (1.1) in Simulation 2.

Fig. 5. Complete convergence of network (1.1) in Simulation 3.

Simulation 3: consider $\mathbb{T}=0.5\mathbb{Z}$. Let w = 0.2875, $c = 8.3750\times 10^{-3}$ and h1 = h2 = 0.4717. It follows from Theorem 3.1 that the network (1.1) is bounded, with global attractive set $$S=\left\{x\,\Big|\,\|x\|\leq\sqrt{\frac{1}{c}}=10.93\right\}.$$ Simple calculations show that the network possesses equilibrium points, all located in S. Figures 5 and 6 show the complete convergence and the phase portrait of the network (1.1). The number of equilibrium points depends on the external inputs; here the equilibrium point is $x^{\ast }=(x_{1}^{*}, x_{2}^{*})^{T}= (0.3158, 0.3158)^{T}$.

Remark 5.1 In Simulation 1, we can easily see that when $\mathbb{T}=\mathbb{R}$, i.e., μ(t) ≡ 0, the results in this paper reduce to those in the study by Xu & Yi (2011). Figures 1 and 2 show the complete convergence and attractivity of the network in the continuous-time domain. When $\mathbb{T}\neq \mathbb{R}$, i.e., $\mu (t)\neq 0$, we obtain several new criteria guaranteeing global attractivity and complete convergence. Figures 3–6 show that, for different values of μ(t), the new networks still converge completely and every trajectory converges to an equilibrium point in the compact set.

Remark 5.2 When the background input is higher in Simulation 2 (Simulation 3), i.e., h1 = h2 = 1.4717 (h1 = h2 = 2.6717), Fig. 7 (Fig. 8) shows the complete convergence of the network. All trajectories converge to one of two symmetric equilibrium points. Hence, the network (1.1) possesses multiple equilibrium points when the background input is higher, and the corresponding convergent behaviour is multistable.

Fig. 6. Phase portrait of network (1.1) in Simulation 3.

Fig. 7. Complete convergence of network (1.1) in Simulation 2 with h1 = h2 = 1.4717.

Fig. 8. Complete convergence of network (1.1) in Simulation 3 with h1 = h2 = 2.6717.

6. Concluding remarks

In this paper, we have focused on some dynamic properties of the simplified network (1.1) with a uniform firing rate on time scales. By using the theory of calculus on time scales, we obtained the global attractive set and the complete convergence of the new networks. When $\mathbb{T}=\mathbb{R}$, our results are complementary to previously known results; when $\mathbb{T}=\mathbb{Z}$ or other time scales are considered, our results are completely new in the literature. Finally, three examples were given to illustrate the feasibility of the main results.

Acknowledgements

We thank the reviewers for their helpful comments.

Funding

National Natural Science Foundation of China (61573005, 11361010).

References
Agarwal, R., Bohner, M., O'Regan, D. & Peterson, A. (2002) Dynamic equations on time scales. J. Comput. Appl. Math., 141, 1–26.

Bohner, M. & Peterson, A. (2003) Dynamic Equations on Time Scales: An Introduction with Applications. Boston, MA: Birkhäuser.

Cao, J. D. & Liang, J. L. (2004) Boundedness and stability for Cohen–Grossberg neural network with time-varying delays. J. Math. Anal. Appl., 296, 665–685.

Cheng, Q. X. & Cao, J. D. (2015) Synchronization of complex dynamical networks with discrete time delays on time scales. Neurocomputing, 151, 729–736.

Gao, J., Wang, Q. R. & Zhang, L. W. (2016) Existence and stability of almost-periodic solutions for cellular neural networks with time-varying delays in leakage terms on time scales. Math. Meth. Appl. Sci., 39, 1316–1375.

Huang, Z. K., Raffoul, Y. N. & Cheng, C. Y. (2017) Scale-limited activating sets and multiperiodicity for threshold-linear networks on time scales. IEEE Trans. Cybern., 44, 488–499.

Li, C. H. & Yang, S. Y. (2009) Global attractivity in delayed Cohen–Grossberg neural network models. Chaos Solitons Fractals, 39, 1975–1987.

Liu, J., Liu, X. Z. & Xie, W. C. (2012) Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations. Inf. Sci., 183, 92–105.

Liu, X. L. & Li, W. T. (2007) Periodic solutions for dynamic equations on time scales. Nonlinear Anal., 67, 1457–1463.

Rehim, M., Jiang, H. J. & Teng, Z. D. (2004) Boundedness and stability for non-autonomous cellular neural networks with delay. Neural Netw., 17, 1017–1025.

Salinas, E. (2003) Background synaptic activity as a switch between dynamical states in a network. Neural Comput., 15, 1439–1475.

Wan, L. & Zhou, Q. H. (2012) Attractor and boundedness for stochastic Cohen–Grossberg neural networks with delays. Neurocomputing, 79, 164–167.

Xu, F. & Yi, Z. (2011) Convergence analysis of a class of simplified background neural networks with two subnetworks. Neurocomputing, 74, 3877–3883.

Xu, F., Zhang, L. & Qu, H. (2008) Convergence analysis of background neural networks with two subnetworks. IEEE Trans. Neural Comput., 21, 440–444.

Yang, L. & Li, K. (2015) Existence and exponential stability of periodic solution for stochastic Hopfield neural networks on time scales. Neurocomputing, 167, 543–550.

Yi, Z., Heng, P. A. & Fu, A. W. (1999) Estimate of exponential convergence rate and exponential stability for neural networks. IEEE Trans. Neural Netw., 10, 1487–1493.

Yi, Z., Heng, P. A. & Leung, K. S. (2001) Convergence analysis of cellular neural networks with unbounded delay. IEEE Trans. Circuits Syst., 48, 680–687.

Yuan, Z. H., Yuan, L. F., Huang, L. H. & Hu, D. W. (2009) Boundedness and global convergence of non-autonomous neural networks with variable delays. Nonlinear Anal. Real World Appl., 10, 2195–2206.