## Abstract

The elementary set, or alphabet, of neural firing modes is derived from the widely accepted conductance-based rectified firing-rate model. The firing dynamics of interacting neurons are shown to be governed by a multidimensional bilinear threshold discrete iteration map. The parameter-dependent global attractors of the map morph into 12 attractor types. Consistent with the dynamic modes observed in biological neuronal firing, the global attractor alphabet is highly visual and intuitive in the scalar, single-neuron case. As synapse permeability varies from high depression to high potentiation, the global attractor type varies from chaotic to multiplexed, oscillatory, fixed, and saturated. As membrane permeability decreases, the global attractor transforms from active to passive state. Under the same activation, learning and retrieval end at the same global attractor. The bilinear threshold structure of the multidimensional map associated with interacting neurons generalizes the global attractor alphabet of neuronal firing modes to multineuron systems. Selective positive or negative activation and neural interaction yield combinatorial revelation and concealment of stored neuronal global attractors.

- global attractors
- neural firing
- neural firing alphabet
- neural firing code
- neural firing modes

Empirical and analytic findings show high dynamic variability of neuronal firing, including random (Gerstein and Mandelbrot 1964), tonic (Murthy and Fetz 1996), and oscillatory (Cymbalyuk and Shilnikov 2005) spiking, transient (Kuznetsov et al. 2006) and oscillatory (Elson et al. 2002) bursting, chaos (Fell et al. 1993), and bifurcation (Li et al. 2004; Ren et al. 1997). Individual neurons of the same type are often capable of producing different firing modes, switching from one to another in a seemingly unpredictable manner (Hyland et al. 2002). The transition from one dynamic mode to another has been called local bifurcation when caused by a change in parameter values and global bifurcation when caused by the landscape of the underlying map subject to fixed parameter values (Blanchard et al. 2006). Variation in synaptic efficacy, widely associated with learning and memory (Dudai 1989), has been shown to play a key role in the disorderly dynamics of neural firing (Baram 2012). Although almost all theoretical and experimental studies make the implicit assumption that synaptic efficacy is both necessary and sufficient to account for learning and memory, it has been suggested that learning and memory in neural networks result from an ongoing interplay between changes in synaptic efficacy and intrinsic membrane properties (Marder et al. 1996). It seems equally plausible that changes in membrane efficacy play a role in shaping the firing dynamics. Early models of neuronal firing (Lapicque 1907) have been reformulated in accordance with the conductance paradigm (Hodgkin and Huxley 1952), yielding reliable firing-rate models of dynamic evolution (Wilson and Cowan 1972, 1973). 
Yet, analytic representation of the firing modes has been limited to isolated dynamic modes associated with phenomenological restrictions of the general model, such as limit cycles, stable points and hysteresis (Wilson and Cowan 1972, 1973), and chaos (Shilnikov and Rulkov 2003), leaving the underlying morphology of the neuronal firing code an open question.

Employing the widely accepted conductance-based rectification paradigm of neural firing (Carandini and Ferster 2000), first observed in empirical data (Granit et al. 1963), we derive a multidimensional discrete iteration map for the firing rates of interacting neurons. The map is shown to have a bilinear threshold structure, possessing a single global attractor besides the origin. Although, as is often the case, the dynamic nature of the multidimensional map is not immediately amenable to advanced analytic methods, a reduction of the model to a single neuron reveals that the global attractor morphs into 12 attractor types, which we call the global attractor alphabet of neural firing modes. The global attractor types are grouped into active attractors, associated with positive activation, and passive attractors, associated with negative activation. Ranging from high synaptic depression (high negative synaptic weight) to high synaptic potentiation (high positive synaptic weight), the active attractors comprise the chaotic attractor, the quasi-chaotic multiplexed attractor, oscillatory and point attractors, and the attractor at infinity (saturated); the passive attractors comprise the attractor at zero (silent) and the bipolar attractor at zero and infinity (binary). The global attractor alphabet may be reduced to an essential set of nine firing modes. A change in the sign of activation will transform an active attractor into a passive one (concealment) or a passive attractor into an active one (revelation). In the case of spontaneous time-dependent activation variation (Connor and Stevens 1971), such transformation may in itself become a secondary dynamic mode. A case in point is periodic bursting (Elson et al. 2002) instigated by postinhibitory rebound (Perkel and Mulloney 1974). Under the same activation, neuronal learning and retrieval will end at the same global attractor. 
The bilinear threshold structure of the multidimensional map associated with interacting neurons implies that the global attractor alphabet of neuronal firing extends to systems of interacting neurons. Selective activation of interacting neurons will create a shunting effect, producing globally stable combinatorial patterns of stored, concealed, and revealed neuronal global attractors.

## METHODS

#### Bilinear threshold map of neural firing.

A considerable body of work has taken us from Lapicque's integrate-and-fire model (Lapicque 1907), through cortical tissue firing models (Wilson and Cowan 1972, 1973), to the firing rate model of interacting neurons (Dayan and Abbott 2001):

$$\tau_{mi}\frac{d\upsilon_{i}(t)}{dt}=-\upsilon_{i}(t)+\left[\boldsymbol{\omega}_{i}^{\mathrm{T}}\int_{-\infty}^{t}e^{-(t-s)/\tau_{si}}\,\boldsymbol{\upsilon}(s)\,ds+u_{i}(t)\right]_{+}\tag{1}$$

where υ_{i}(*t*) is the firing rate of the *i*th neuron, τ_{mi} is the membrane time constant, **υ**(*t*) is the vector of firing rates of the interacting neurons, **ω**_{i} is the vector of synaptic weights connecting the *i*th neuron to its preneurons, τ_{si} is the synaptic time constant, *u*_{i}(*t*) is the membrane permeability, or activation, and

$$\left[x\right]_{+}=\begin{cases}x, & x>0\\ 0, & x\le 0\end{cases}\tag{2}$$

is the conductance-based rectification.

The BCM plasticity model (Bienenstock et al. 1982) is:

$$\tau_{\omega i}\frac{d\boldsymbol{\omega}_{i}(t)}{dt}=\boldsymbol{\upsilon}(t)\,\upsilon_{i}(t)\left[\upsilon_{i}(t)-\theta_{i}(t)\right]\tag{3}$$

with the plasticity threshold θ_{i}(*t*) satisfying (Intrator and Cooper 1992)

$$\tau_{\theta i}\frac{d\theta_{i}(t)}{dt}=-\theta_{i}(t)+\upsilon_{i}^{2}(t)\tag{4}$$

Assuming that the ranges of the time constants τ_{mi}, τ_{ωi}, and τ_{θi} guarantee nonstiffness (Lambert 1992) of *Eqs. 1*, *3*, and *4*, respectively, we apply Euler discretization (Butcher 2003) with unity step size to obtain:

$$\upsilon_{i}(k)=\frac{\tau_{mi}-1}{\tau_{mi}}\,\upsilon_{i}(k-1)+\frac{1}{\tau_{mi}}\left[\sum_{p=1}^{\infty}e^{-p/\tau_{si}}\,\boldsymbol{\omega}_{i}^{\mathrm{T}}\boldsymbol{\upsilon}(k-p)+u_{i}(k)\right]_{+}\tag{5}$$

$$\boldsymbol{\omega}_{i}(k)=\boldsymbol{\omega}_{i}(k-1)+\frac{1}{\tau_{\omega i}}\,\boldsymbol{\upsilon}(k-1)\,\upsilon_{i}(k-1)\left[\upsilon_{i}(k-1)-\theta_{i}(k-1)\right]\tag{6}$$

$$\theta_{i}(k)=\frac{\tau_{\theta i}-1}{\tau_{\theta i}}\,\theta_{i}(k-1)+\frac{1}{\tau_{\theta i}}\,\upsilon_{i}^{2}(k-1)\tag{7}$$
Because, for physical reasons, υ_{i}(*k*) must be bounded for any *k*, replacing ∞ by a sufficiently large positive integer *N* in *Eqs. 5–7* will approximate the latter to any desired degree. Furthermore, the BCM plasticity model has been shown to yield convergence of the synaptic weights under bounding conditions on the firing rates (Cooper et al. 2004).
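For the interested reader, the truncated iteration for a single neuron may be sketched in Python as follows. This is an illustrative sketch, not the code used to produce the figures; the function name and the parameter values (in particular the weight ω = −0.02) are chosen for illustration only:

```python
import numpy as np

def step_firing_rate(v_hist, u, omega, tau_m, tau_s):
    """One step of the truncated rectified firing-rate map for a single
    neuron: v(k) depends on the last N rates through the synaptic kernel
    exp(-p/tau_s), p = 1..N, where v_hist[p-1] holds v(k-p)."""
    N = len(v_hist)
    p = np.arange(1, N + 1)
    drive = omega * np.dot(np.exp(-p / tau_s), v_hist)  # weighted filtered history
    rect = max(drive + u, 0.0)                          # conductance-based rectification
    return (tau_m - 1.0) / tau_m * v_hist[0] + rect / tau_m

# Iterate from a zero initial history (illustrative parameters).
N, tau_m, tau_s, omega, u = 90, 10.0, 30.0, -0.02, 1.0
v_hist = np.zeros(N)
for _ in range(1000):
    v_new = step_firing_rate(v_hist, u, omega, tau_m, tau_s)
    v_hist = np.concatenate(([v_new], v_hist[:-1]))
```

With these mildly depressing weights the trajectory settles to a positive fixed rate; steeper negative weights produce the oscillatory and chaotic modes discussed below.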

Denoting by **z**(*k*) the *n*·*N*-dimensional vector constructed by stacking the vectors **υ**(*k* − *p*), *p* = 1, …, *N*, into a single vector and by **u**(*k*) the *n*-dimensional vector for which the *i*th element is *u*_{i}(*k*), *Eq. 5* can be written, for constant synaptic weights, as a multidimensional bilinear threshold map:

$$z_{i}(k)=\begin{cases}\left[\boldsymbol{\Lambda}^{(1)}\mathbf{z}(k-1)\right]_{i}, & \mathbf{W}_{i}\,\mathbf{E}\,\mathbf{z}(k-1)+u_{i}(k)\le 0\\ \left[\boldsymbol{\Lambda}^{(2)}\mathbf{z}(k-1)+\boldsymbol{\Theta}\,\mathbf{u}(k)\right]_{i}, & \mathbf{W}_{i}\,\mathbf{E}\,\mathbf{z}(k-1)+u_{i}(k)>0\end{cases}\tag{9}$$

(for *i* > *n*, the two branches coincide, implementing the shift *z*_{i}(*k*) = *z*_{i−n}(*k* − 1)), where **Λ**^{(1)}, **Λ**^{(2)}, **W**, **E**, and **Θ** are corresponding matrices {specifically, **Λ**_{i,i}^{(1)} = [τ_{mi} − 1]/τ_{mi} for *i* ≤ *n*, **Λ**_{i,i−n}^{(1)} = 1 for *i* > *n* and **Λ**_{i,j}^{(1)} = 0 for *i* ≤ *n*, *j* ≠ *i* and for *i* > *n*, *j* ≠ *i* − *n*, **Λ**^{(2)} = **Λ**^{(1)} + **Ω**, where the *i*th row of **Ω** is **Ω**_{i} = (1/τ_{mi})**ω**_{i}^{T}**E** for *i* ≤ *n*, with **E** = [**E**_{1}, **E**_{2}, …, **E**_{N}], **E**_{p} = diag[exp(−*p*/τ_{si})]_{i=1, …, n}, and **Ω**_{i,j} = 0 for *i* > *n*. Further in *Eq. 9*, **W**_{i} = **ω**_{i}^{T} and **Θ**_{i,i} = τ_{mi}^{−1} for *i* ≤ *n*, **Θ**_{i,j} = 0 for *i* ≤ *n*, *j* ≠ *i* and for *i* > *n*}.
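The structure of these matrices may be assembled programmatically. The following Python sketch (the function name and the test dimensions are illustrative) follows the definitions above, assuming the rows of **W** are the vectors **ω**_{i}^{T}:

```python
import numpy as np

def build_map_matrices(tau_m, tau_s, W, N):
    """Assemble Lambda^(1), Lambda^(2), E, and Theta for the n*N-dimensional
    bilinear threshold map. tau_m, tau_s: length-n arrays of time constants;
    W: n x n matrix whose i-th row is omega_i^T; N: history depth."""
    n = len(tau_m)
    d = n * N
    # Lambda^(1): leak terms on the first n diagonal entries, unit shift below
    L1 = np.zeros((d, d))
    for i in range(n):
        L1[i, i] = (tau_m[i] - 1.0) / tau_m[i]
    for i in range(n, d):
        L1[i, i - n] = 1.0
    # E = [E_1, ..., E_N], with E_p = diag(exp(-p / tau_s))
    E = np.hstack([np.diag(np.exp(-p / tau_s)) for p in range(1, N + 1)])
    # Omega: nonzero only in the first n rows, row i = omega_i^T E / tau_m_i
    Omega = np.zeros((d, d))
    Omega[:n, :] = (W @ E) / tau_m[:, None]
    L2 = L1 + Omega
    # Theta: Theta_{i,i} = 1/tau_m_i for i <= n, zero elsewhere
    Theta = np.zeros((d, n))
    Theta[:n, :n] = np.diag(1.0 / tau_m)
    return L1, L2, E, Theta
```

The shift rows of **Λ**^{(1)} push each stored rate vector one step deeper into the history stack, while only the first *n* rows carry the leak and the synaptically filtered drive.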

The map (*Eq. 9*) has two fixed points, where the line **z**(*k*) = **z**(*k* − 1) meets the two hyperplanes. The first, associated with the hyperplane of slope **Λ**^{(1)}, is the origin, and the second, associated with the hyperplane of slope **Λ**^{(2)} and with constant activation **u**(*k*) = **u**, is:

$$\mathbf{z}^{*}=\left[\mathbf{I}-\boldsymbol{\Lambda}^{(2)}\right]^{-1}\boldsymbol{\Theta}\,\mathbf{u}\tag{10}$$
Whereas, due to the terms (τ_{mi} − 1)/τ_{mi} of **Λ**_{i,i}^{(1)}, *i* ≤ *n*, the dynamics associated with the fixed point at the origin are characterized by monotone convergence to zero, the dynamics associated with the other fixed point, *Eq. 10*, are considerably less obvious in the multidimensional case under consideration. We approach the representation of these dynamics by a reduction of the model to one dimension, corresponding to a single neuron. An examination of the dynamics associated with the reduced model, and of the implications of the additional dimensions, will provide a detailed insight into the behavior of the multidimensional system. A reduced version of *Eq. 1*:

$$\tau_{mi}\frac{d\upsilon_{i}(t)}{dt}=-\upsilon_{i}(t)+\left[\boldsymbol{\omega}_{i}^{\mathrm{T}}\boldsymbol{\upsilon}(t)+u_{i}(t)\right]_{+}\tag{11}$$

applies when τ_{mi} >> τ_{si} (Brunel and Sergi 1998; Dayan and Abbott 2001) but will be shown to represent the firing modes of *Eq. 1* more generally. Without lateral interaction and with constant activation and constant synaptic weights, the discrete time version of *Eq. 11*, with the index *i* removed (as we consider a single neuron), takes the scalar bilinear threshold form:

$$\upsilon(k)=\begin{cases}f_{1}[\upsilon(k-1)]=\lambda_{1}\,\upsilon(k-1), & \omega\,\upsilon(k-1)+u\le 0\\ f_{2}[\upsilon(k-1)]=\lambda_{2}\,\upsilon(k-1)+\dfrac{u}{\tau_{m}}, & \omega\,\upsilon(k-1)+u>0\end{cases}\tag{12}$$

where

$$\lambda_{1}=\frac{\tau_{m}-1}{\tau_{m}}\tag{13}$$

and

$$\lambda_{2}=\frac{\tau_{m}-1+\omega}{\tau_{m}}\tag{14}$$

In the space spanned by the coordinates υ(*k* − 1) and υ(*k*), the line υ(*k*) = f_{1}[υ(*k* − 1)], having a positive slope <1, meets the diagonal υ(*k*) = υ(*k* − 1) only at the origin. On the other hand, when the line υ(*k*) = f_{2}[υ(*k* − 1)] intersects the diagonal, it will be at:

$$\upsilon_{p}=\frac{u}{\tau_{m}\left(1-\lambda_{2}\right)}\tag{15}$$
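A minimal Python sketch of the scalar map and the intersection point, assuming the branch slopes λ_{1} = (τ_{m} − 1)/τ_{m} and λ_{2} = (τ_{m} − 1 + ω)/τ_{m} implied by the Euler-discretized single-neuron model (parameter values illustrative):

```python
def scalar_map(v, tau_m, omega, u):
    """Scalar bilinear threshold map: branch f1 applies when the rectified
    drive omega*v + u is nonpositive, branch f2 otherwise."""
    lam1 = (tau_m - 1.0) / tau_m
    lam2 = (tau_m - 1.0 + omega) / tau_m
    if omega * v + u <= 0:
        return lam1 * v              # branch f1
    return lam2 * v + u / tau_m      # branch f2

def f2_fixed_point(tau_m, omega, u):
    """Intersection of f2 with the diagonal v(k) = v(k-1) (lambda_2 != 1)."""
    lam2 = (tau_m - 1.0 + omega) / tau_m
    return u / (tau_m * (1.0 - lam2))

# The f2 fixed point is invariant under the map when the drive there is
# positive (illustrative parameters: tau_m = 10, omega = -0.5, u = 1).
vp = f2_fixed_point(10.0, -0.5, 1.0)
```

Iterating `scalar_map` from any initial rate traces exactly the cobweb diagrams discussed in the following section.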

#### Global attractor types of neuronal activity.

An attractor (Abraham et al. 1997) is a subset *A* of the state space, which has a neighborhood, *B*(*A*), called a basin, such that any trajectory originating from *B*(*A*) stays in it, and no proper subset of *A* has the same property. A global attractor is an attractor for which the basin is the entire state space. As shown next, the fixed point (*Eq. 15*) of the bilinear threshold map (*Eq. 12*) is a global attractor for which the characteristics depend on the feedback synaptic permeability ω and the membrane activation *u*. We call the global attractor types corresponding to positive activation, *u* > 0, active global attractor types. These are described below and depicted in Fig. 1, for synaptic weight values ranging from high negative ω to high positive ω. The two parts of the underlying map, f_{1} and f_{2}, are represented by solid lines, and the diagonal, υ(*k*) = υ(*k* − 1), is represented by a dashed line. Sample trajectories are represented by dotted-line cobweb diagrams (Abraham et al. 1997), marked by points of υ(*k*) where required for clarity. Mathematical details of cases *A–E* are provided in appendix. The other cases are self-evident. It might be noted that, although some of the attractor names used here include combinations of the same terms, specifically "interval," "multiplexity," and "oscillation," each such combination has a unique meaning, and the corresponding attractor has unique characteristics.

*A* is the chaotic attractor type. For λ_{2} ≤ (−1 − λ_{1})/λ_{1}, the global attractor is represented in Fig. 1*A* by the interval [*a*, *b*] with all cobweb trajectories attracted to the domain contained within the external dash-dot pentagonal shown. The displayed orbit of *period 3*, *a*_{1} → *a*_{2} → *a*_{3} → *a*_{1}, satisfies the Li-Yorke condition (Li and Yorke 1975), and the displayed orbit *p* → *q* satisfies the Marotto snap-back repeller condition (Marotto 1978, 2005), extended to piecewise smooth maps (Gardini and Tramontana 2011), for Li-Yorke chaos.

*B* is the interval multiplexity attractor type. For (−1 − λ_{1})/λ_{1} < λ_{2} < −1/λ_{1}, the global attractor temporally multiplexes points in two disjoint intervals. The domain of the attractor is represented in Fig. 1*B* by the subintervals [*a*, *b*] and [*c*, *d*], separated by the repelling subinterval [*b*, *c*], with all cobweb trajectories attracted to the domain bounded between the external dash-dot pentagonal and the internal square shown. It is shown in appendix that neither an orbit of *period 3* nor a snap-back repeller exists in this case.

*C* is the multiplexed oscillation attractor type. For λ_{2} = −1/λ_{1}, the global attractor temporally multiplexes two oscillations, which take place in two disjoint intervals, represented in Fig. 1*C* by the intervals [*a*, *b*] and [*c*, *d*]. The actual oscillatory values depend on the initial condition. The [*a*, *b*] interval oscillation is represented in Fig. 1*C* by the points *a*_{2} and *a*_{4}, whereas the [*c*, *d*] interval oscillation is represented by the points *a*_{1} and *a*_{3}.

*D* is the fixed oscillation attractor type. For −1/λ_{1} < λ_{2} < −1, the global attractor is an oscillation between two points, represented in Fig. 1*D* by *a*, for which the vertical coordinate is *u*/[τ_{m}(1 − λ_{1}λ_{2})], and *b*, for which the vertical coordinate is λ_{1}*u*/[τ_{m}(1 − λ_{1}λ_{2})]. All cobweb trajectories are first attracted to the domain bounded between the external dash-dot pentagonal and internal square shown.
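The convergence to the two oscillatory values may be checked numerically. The following sketch uses illustrative parameters placing λ_{2} in (−1/λ_{1}, −1) and verifies the limits *u*/[τ_{m}(1 − λ_{1}λ_{2})] and λ_{1}*u*/[τ_{m}(1 − λ_{1}λ_{2})]:

```python
def scalar_map(v, tau_m, omega, u):
    """Single-neuron bilinear threshold map: branch f1 when the rectified
    drive omega*v + u is nonpositive, branch f2 otherwise."""
    lam1 = (tau_m - 1.0) / tau_m
    if omega * v + u <= 0:
        return lam1 * v
    return lam1 * v + (omega * v + u) / tau_m

# Illustrative parameters placing lambda_2 = (tau_m - 1 + omega)/tau_m
# in (-1/lambda_1, -1): lambda_1 = 0.9, lambda_2 = -1.05.
tau_m, omega, u = 10.0, -19.5, 1.0
lam1, lam2 = 0.9, -1.05
v_prev, v = 0.0, 0.0
for _ in range(6000):
    v_prev, v = v, scalar_map(v, tau_m, omega, u)

a = u / (tau_m * (1.0 - lam1 * lam2))  # upper oscillation value
b = lam1 * a                           # lower oscillation value
```

After a quasi-chaotic transient the trajectory locks into the two-point oscillation, alternating between the two branches of the map.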

*E* is the oscillatory interval attractor type. For λ_{2} = −1, a cobweb trajectory initiating at any point in the state space will converge to an oscillation within the interior of the external dash-dot square shown in Fig. 1*E*, which, having the bend point of the map at its lower-right corner, represents the interval [*a*, *b*] of υ(*k*), which is, then, an attractor. The specific points of oscillation, represented here by υ(2) and υ(3), will depend on the initial condition.

*F* is the alternate point attractor type. For −1 < λ_{2} ≤ 0, we have a point attractor, approached by alternate convergence [increasing υ(*k*) step followed by decreasing υ(*k*) step].

*G* is the bimodal monotone point attractor type for 0 < λ_{2} ≤ λ_{1}.

*H* is the unimodal point attractor type for λ_{1} < λ_{2} ≤ 1.

*I* is the attractor at infinity type for λ_{2} > 1. In reality, it will be rectified at the maximal sustainable physical firing rate limit, defining saturation.

Nonpositive activation (*u* ≤ 0) defines the passive global attractors, depicted in Fig. 2:

*J* is the unimodal attractor at zero type for λ_{2} ≤ λ_{1}.

*K* is the bimodal attractor at zero type for λ_{1} < λ_{2} ≤ 1.

*L* is the bipolar attractor at zero and infinity type for λ_{2} > 1. The final destination of a trajectory at zero (silence) or infinity (saturation) is determined by the initial condition with *p* being the point of separation between the two basins.

We call the 12 global attractor types *A–L* the global attractor alphabet of neural firing modes. We note that the nonstiffness assumed in the underlying discretization is evidenced by the containment, regardless of the discretization step size, of all the dynamic modes associated with the global attractor alphabet, except for modes *I* and *L*, which involve a monotone attractor at infinity. As revealed by *Eq. 12* and illustrated by Fig. 1, a change in λ_{2}, caused by a change in τ_{m} or in ω, represents rotation of f_{2}[υ(*k* − 1)]. Although the characterization of the global attractor types is discrete, the transformation of the global attractor within each type and from type to type, caused by parameter changes, is continuous. Under positive activation, the transition of the feedback synapse efficacy from high depression (ω < 1 − 2τ_{m}) to high potentiation (ω > 1) changes the nature of the firing mode from chaotic, through multiplexed and nearly periodic, to oscillatory, constant, and saturated. A change in *u* represents translation of f_{2}[υ(*k* − 1)] in parallel to the υ(*k*) axis. The sliding effect of time-dependent activation (Connor and Stevens 1971), such as postinhibitory rebound (Perkel and Mulloney 1974), can produce secondary firing modes, such as periodic bursting (Elson et al. 2002), by switching elementary firing modes, such as saturation, from passive to active state and vice versa. Although only *A* is strictly chaotic, *B*, having seemingly chaotic components due to the transition from f_{1} to f_{2} and vice versa, may be regarded as quasi-chaotic. The dynamic modes associated with the attractors *C–F* similarly have quasi-chaotic components in their convergence stages.
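The parameter ranges defining the alphabet may be summarized as a simple classification rule. A sketch, with the *A–B* boundary assumed at λ_{2} = −(1 + λ_{1})/λ_{1} (an assumption made for concreteness; the boundary tests on λ_{2} use exact comparisons, as in the analysis):

```python
def attractor_type(lam1, lam2, u):
    """Map (lambda_1, lambda_2, u) to the attractor letter A-L, following
    the stated parameter ranges. The A-B boundary is assumed at
    lambda_2 = -(1 + lambda_1)/lambda_1."""
    if u > 0:                      # active attractor types
        if lam2 <= -(1.0 + lam1) / lam1:
            return "A"             # chaotic
        if lam2 < -1.0 / lam1:
            return "B"             # interval multiplexity
        if lam2 == -1.0 / lam1:
            return "C"             # multiplexed oscillation
        if lam2 < -1.0:
            return "D"             # fixed oscillation
        if lam2 == -1.0:
            return "E"             # oscillatory interval
        if lam2 <= 0.0:
            return "F"             # alternate point
        if lam2 <= lam1:
            return "G"             # bimodal monotone point
        if lam2 <= 1.0:
            return "H"             # unimodal point
        return "I"                 # attractor at infinity (saturation)
    # passive attractor types (u <= 0)
    if lam2 <= lam1:
        return "J"                 # unimodal attractor at zero
    if lam2 <= 1.0:
        return "K"                 # bimodal attractor at zero
    return "L"                     # bipolar attractor at zero and infinity
```

A change in the sign of *u* moves the same (λ_{1}, λ_{2}) pair between the active and passive groups, which is precisely the concealment and revelation mechanism discussed next.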

Combining the point attractor types, *F–H*, into one and the attractor at zero types, *J* and *K*, into one, the group of 12 global attractor types may be rearranged into a group of 9 types associated with different limit dynamics.

As the synaptic weights converge to final values, the learning process converges to a global attractor of one of the specified types. Following such convergence, the attractor type cannot be changed by continued activity and can be said to be stored in the corresponding neuron. Activity triggered by a change in initial condition will result in convergence to the stored global attractor. A change in *u* will have a translational, or sliding, effect, moving f_{2} and its point of intersection, *p*, with the diagonal υ(*k*) = υ(*k* − 1), up or down in parallel to the coordinate υ(*k*). This will change the span of the firing mode associated with the global attractor but, as long as *u* does not change in sign, not its dynamic nature. As the sign of *u* changes from positive to nonpositive, the corresponding active attractor will turn into a passive attractor and may be regarded as concealed in this state. Specifically, the silent (at zero) attractor types *J* and *K* are, respectively, the passive versions of the active attractor types *A–G* and *H*, whereas the bipolar attractor type *L* is the passive version of the active attractor at infinity type *I*. A change from positive to nonpositive activation will conceal the active state of the global attractor stored in the neuron. Conversely, as the sign of activation changes from nonpositive to positive, the active state of the attractor is revealed as one of the active attractor types.

#### Back to interacting neurons.

Multineuron cobweb diagrams are impossible to produce, as even a two-neuron circuit would require a plot in four dimensions. However, due to the bilinear threshold structure of the multidimensional map (*Eq. 9*), the nature of the corresponding dynamics is not difficult to envision. As in the case of the individual neuron, the map corresponding to interacting neurons has two fixed points, one at the origin and the other at a parameter-dependent location (*Eq. 10*). The dynamic mode associated with the fixed points depends on the slopes and the zero-crossings of the two hyperplanes involved, which, in turn, depend on the neuronal synaptic and membrane efficacies and time constants. Depending on parameter values, firing trajectories will converge to one of the two hyperplanes or will alternate between the two hyperplanes in the multidimensional space just as the firing trajectories of the single neuron converge to one of the two lines of the one-dimensional map or alternate between the two lines. As in cases *A–E* of the individual neuron, depending on parameter values, trajectories of interacting neurons will converge to domains, confined by multidimensional pentagons and cubes, defining chaotic *A*, multiplexed *B*, *C*, and oscillatory *D*, *E* behaviors. As in cases *F–L* of the individual neuron, firing trajectories of subsets of interacting neurons will converge through linear domains to the origin or to a parameter-dependent fixed point, whereas others will diverge to infinity (or saturation), jointly creating multidimensional saddles. Consequently, the multidimensional system of interacting neurons will possess a global attractor alphabet, which is the multivariate equivalent of the one-dimensional global attractor alphabet underlying the activity of the individual neuron, with the addition of saddles. As in the case of the individual neuron, subsets of interacting neurons may be said to store a global attractor of the multidimensional alphabet. 
Each of the neurons may be said to store the corresponding element of the single-neuron alphabet. Applying a network-wide activation pattern, by which some of the neurons receive positive activation and the others nonpositive activation, will produce a sliding effect, causing retrieval and concealment of stored active global attractors and revelation of the active state of stored passive global attractors. Whereas, in the single-neuron case, mode concealment and revelation will be directly caused by changes in external activation, in the multineuron case, they can also be created through activation, inhibition, or their removal by interacting neurons. The shunting effect will allow for the creation of a combinatorial variety of network-wide patterns from the stored active and passive neuronal patterns and their complementary passive and active versions. This is illustrated by simulation.

## RESULTS

To demonstrate the relevance of the global attractor alphabet presented in the previous section to neural behavior, we have simulated the discrete time models of both an isolated neuron and a two-neuron circuit. The latter is particularly significant in view of the fact that the production of a cobweb diagram for a two-neuron circuit would require a plot in four dimensions. For illustration and comparison purposes of the more interesting firing modes, we present specific cases of the single-neuron dynamics and of the phenomena of mode concealment and revelation in the two-neuron circuit. We employed truncated discrete-time versions of *Eq. 5*:

$$\upsilon_{i}(k)=\frac{\tau_{mi}-1}{\tau_{mi}}\,\upsilon_{i}(k-1)+\frac{1}{\tau_{mi}}\left[\sum_{p=1}^{N}e^{-p/\tau_{si}}\,\boldsymbol{\omega}_{i}^{\mathrm{T}}\boldsymbol{\upsilon}(k-p)+u_{i}(k)\right]_{+}$$

(along with the plasticity updates, *Eqs. 6* and *7*). *N* was selected so as to yield a sufficiently small relative error:

$$\rho=\frac{\sum_{p=N+1}^{\infty}e^{-p/\tau_{si}}}{\sum_{p=1}^{\infty}e^{-p/\tau_{si}}}=e^{-N/\tau_{si}}$$

First, we demonstrate the emergence of representative global attractor types, *A*, *B*, *D*, and *F*, in a single neuron, removing the index *i*. In all cases, we used *N* = 90, τ_{s} = 30 (yielding ρ < 0.05), τ_{θ} = 1, and zero initial conditions for the firing rate and the synaptic weight. In all cases, the same activation, *u* = 1, was used in learning and retrieval with the following time constants in the four cases:

*A*: τ_{m} = 2, τ_{ω} = 2; *B*: τ_{m} = 1.01, τ_{ω} = 120; *D*: τ_{m} = 1.05, τ_{ω} = 100; and *F*: τ_{m} = 10, τ_{ω} = 10.

Figure 3 shows, from left to right, the synaptic weight variation during learning, the firing rate process during learning, the firing rate process in retrieval, and an expanded view of a time segment of the latter. It can be verified that the analytically derived conditions for chaotic attractor, interval multiplexity attractor, fixed oscillation attractor, and alternate fixed point attractor are satisfied by the corresponding final values of λ_{1} and λ_{2}, given by:

*A*: λ_{1} = 0.5, λ_{2} = −72.3324; *B*: λ_{1} = 0.0099, λ_{2} = −2.1117; *D*: λ_{1} = 0.0476, λ_{2} = −2.2807; and *F*: λ_{1} = 0.9, λ_{2} = −0.5964.
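The reported λ_{1} values follow directly from the membrane time constants, assuming λ_{1} = (τ_{m} − 1)/τ_{m}, the scalar counterpart of **Λ**_{i,i}^{(1)}; a quick check:

```python
# lambda_1 = (tau_m - 1)/tau_m for the four simulated cases
tau_m = {"A": 2.0, "B": 1.01, "D": 1.05, "F": 10.0}
lam1 = {case: (t - 1.0) / t for case, t in tau_m.items()}
```

The λ_{2} values, in contrast, depend on the synaptic weights reached at the end of BCM learning and cannot be recovered from the time constants alone.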

Next, we demonstrate the combinatorial capacity of a neural circuit controlled by selective activation. We simulated a two-neuron circuit with the parameters *N* = 90, τ_{si} = 30, τ_{mi} = 10, τ_{ωi} = 10, τ_{θi} = 1; *i* = 1, 2. In learning, we had υ_{1}(0) = υ_{2}(0) = 0 and *u*_{1} = *u*_{2} = 1. In retrieval, initialized at the same initial conditions, four different cases of activation were implemented:

(*a*): *u*_{1} = *u*_{2} = 1; (*b*): *u*_{1} = 1, *u*_{2} = 0; (*c*): *u*_{1} = 0, *u*_{2} = 1; (*d*): *u*_{1} = 0, *u*_{2} = 0

It can be seen in Fig. 4, *(1)*, that, in retrieval, each of the four permutations of the firing patterns stored by the two neurons and the silent state was produced by appropriate choice of activation, displaying the full combinatorial capacity of the circuit.

Finally, we demonstrate concealment and revelation of global neuronal attractors due to the sliding effect caused by interacting neurons. We employ the same two-neuron circuit as above but, in learning, with *u*_{1} = 1, *u*_{2} = 0.5, and, in retrieval, *u*_{1} = 1, *u*_{2} = 0. As seen in Fig. 4, *(2)*, following learning, the first neuron stores an attractor at zero, concealing its active version, whereas the other neuron stores the active state of an alternate point attractor. In retrieval, as the second neuron was deactivated by *u*_{2} = 0 and, indeed, produced the silent state, the first neuron revealed the active version of the global attractor stored in it, modified by the sliding effect (as the activation patterns in learning and retrieval were different). As explained by our analysis, this revelation is a result of the change in lateral activity, modifying the inhibitory effect of the second neuron on the first.
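The combinatorial retrieval mechanism may be illustrated with a schematic two-neuron sketch. The reduced, memoryless form of the map is used here, and the fixed mutual-inhibition weight −0.02 is illustrative, not the BCM-learned value of the simulations above:

```python
import numpy as np

def step_circuit(v, W, u, tau_m):
    """One step of the reduced (tau_m >> tau_s) two-neuron map: each neuron
    leaks and adds its rectified lateral-plus-external drive."""
    drive = np.maximum(W @ v + u, 0.0)
    return (tau_m - 1.0) / tau_m * v + drive / tau_m

tau_m = 10.0
W = np.array([[0.0, -0.02], [-0.02, 0.0]])  # illustrative mutual inhibition

def run(u, steps=1000):
    v = np.zeros(2)
    u = np.asarray(u, dtype=float)
    for _ in range(steps):
        v = step_circuit(v, W, u, tau_m)
    return v

v_both = run([1.0, 1.0])  # both neurons activated
v_sel = run([1.0, 0.0])   # neuron 2 deactivated: silent, disinhibiting neuron 1
```

With neuron 2 deactivated, its inhibitory effect on neuron 1 is removed, and neuron 1 settles at a higher rate than under joint activation, mirroring the sliding-effect revelation described above.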

## DISCUSSION

Facilitated by the bilinear threshold structure of the rectified firing rate model, the global attractor alphabet offers effective mathematical and geometric means for describing the firing modes of both isolated neurons and systems of interacting neurons. Beyond the mathematical analysis, the following discussion, relating empirically observed firing modes to the global attractor alphabet, represents the writer's own, largely intuitive, views. Seemingly random spiking (Gerstein and Mandelbrot 1964) can be represented by a chaotic attractor, tonic spiking (Murthy and Fetz 1996) by a point attractor, oscillatory spiking (Cymbalyuk and Shilnikov 2005) by a fixed oscillation attractor or an oscillatory interval attractor, and bursting (Kuznetsov et al. 2006) by saturation, representing an attractor at infinity. Secondary modes, such as periodic bursting (Elson et al. 2002), may arise from the sliding effect caused by time-dependent activation (Connor and Stevens 1971), such as postinhibitory rebound (Perkel and Mulloney 1974), switching elementary firing modes, such as saturation, from passive to active state and vice versa. Although there seems to be a clear relationship between certain firing modes and neural functions (e.g., oscillation, or periodic bursting, seems directly related to heartbeat, walking and chewing), the utility of others is not as commonly recognized or understood. Temporal multiplexing of different firing rates, observed in sensory cortices (Fairhall et al. 2001; Lundstrom and Fairhall 2006; Wark et al. 2009), enhances the coding and information transmission capacity (Bullock 1997; Kayser et al. 2009; Lisman 2005). Although temporal precision of the multiplexing code can be achieved by narrow windowing, the need for such precision in neural coding has been questioned (Panzeri et al. 2009). 
Whereas a chaotic attractor drives temporal mixing of firing rates over a wide range in the state space, and a quasi-chaotic interval multiplexity attractor multiplexes specific subdomains in the state space, a multiplexed oscillation attractor performs exact multiplexing of two oscillatory signals. Depending on the function of the receiving neuron, demultiplexing can be done, in principle, by band-pass filtering. Neuronal low-pass (Pettersen and Einevoll 2008) and high-pass (Poon et al. 2000) filtering have been reported. Yet, the raw, multiplexed, and even chaotic signal can be useful in sensory systems. Multiplexed red, green, and blue (RGB) color coding is a known example of creating mixtures of the primary colors, found in both biological and technological vision systems (Hunt 2004). Although in a previous paper (Baram 2012), viewing chaos as a natural multiplexing mechanism, I combined cases *A* and *B* into a single modal type termed “chaotic multiplexity,” the present paper shows an analytic distinction between chaos and multiplexity. The maximum-energy response of a neuron storing a bipolar attractor, aroused by initial condition or instantaneous stimulus, may represent a conditioned reflex. A global attractor at zero, representing silence, may serve the purpose not only of neuronal rest, but also of a common initial condition for learning and retrieval. The combinatorial concealment and revelation of active and passive global attractors by interacting neurons responding to selective activation may give rise not only to stored subpatterns, but also to their previously unaroused passive or active complements, a possible manifestation of innovation. 
Mathematically, the analytic bifurcation points specified in terms of λ_{1} and λ_{2} can be highly valuable, for instance, as analytic indicators of chaos, in view of the often-reported inadequacy of empirical measures (Sprott 2003), such as the first Lyapunov exponent (Wright 1984), used in the analysis of empirical neurophysiological data (Fell et al. 1993). The mathematical analysis also suggests the grouping of the underlying set of global attractors into a smaller essential set. The question remains whether the entire range of mathematical conditions associated with the global attractor alphabet is biologically feasible. This may be tested by laboratory experiments on isolated and interacting neurons. Statistical hypothesis testing may distinguish the more significant, or relevant, components of the global attractor alphabet from the others.

## DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author(s).

## AUTHOR CONTRIBUTIONS

Y.B. conception and design of research; Y.B. performed experiments; Y.B. analyzed data; Y.B. interpreted results of experiments; Y.B. prepared figures; Y.B. drafted manuscript; Y.B. edited and revised manuscript; Y.B. approved final version of manuscript.

## APPENDIX: MATHEMATICAL DETAILS OF CASES *A–E*

For λ_{2} < −1/λ_{1}, we distinguish between two domains, specified in terms of υ_{e} and υ_{p}, the vertical coordinates [υ(*k*) values] of the points *e* and *p* shown in Fig. 1*A*. The first domain corresponds to *case A*. Clearly, λ_{2} < −1/λ_{1} implies λ_{2} < −1, as λ_{1} < 1. Figure 1*A* shows an orbit of *period 3*, *a*_{1} → *a*_{2} → *a*_{3} → *a*_{1}, which satisfies the Li-Yorke condition for chaos (Li and Yorke 1975). Figure 1*A* also shows a snap-back repeller at *p*, which satisfies Marotto's condition (Marotto 1978, 2005), extended to noninvertible piecewise smooth maps (Gardini and Tramontana 2011), for Li-Yorke chaos. This can be verified by tracing back from *p* horizontally to f_{1}, then vertically to the diagonal υ(*k*) = υ(*k* − 1), then horizontally to f_{2}, then vertically to the diagonal, and then horizontally to a point, *q*, on f_{2}, which is in a repelling neighborhood of *p*, establishing the latter as a snap-back repeller. We now show that the snap-back repeller condition (*Eq. 20*) is not only sufficient, but also necessary for the map under consideration to have an orbit of *period 3*, hence, to be chaotic. As the dynamics according to f_{1} are strictly convergent (as λ_{1} < 1), and, for λ_{2} < −1, the dynamics according to f_{2} are strictly divergent, the only way for an orbit to be of *period 3* is for the corresponding cobweb orbit to have a transition from f_{1} to f_{2}, followed by a transition from f_{2} to f_{1}, followed by a transition from f_{1} to itself. However, the latter transition can only take place if its vertical realization is from the upper side of *p* to its lower side, which can only happen if *p* is a snap-back repeller.

Some calculations will show that the snap-back repeller condition (*condition 20*) is equivalent to the condition:

$$\lambda_{2}\le\frac{-1-\lambda_{1}}{\lambda_{1}}\tag{23}$$

Since any λ_{2} satisfying *Eq. 23* also satisfies λ_{2} < −1/λ_{1} < −1, it satisfies the snap-back repeller condition, implying chaos.

The second domain is the complement of the first within λ_{2} < −1/λ_{1}. Some calculations will show that, in this domain, the global attractor consists of two alternating intervals, [*a*, *b*] and [*c*, *d*], separated by the repelling interval [*b*, *c*], defining *case B*, illustrated in Fig. 1*B*. As can be geometrically verified, there cannot be an orbit of *period 3*, and there cannot be a snap-back repeller in this case. Hence, in contrast to *case A*, this global attractor is not chaotic. It can be characterized as a multiplexity of the two mutually exclusive intervals, [*a*, *b*] and [*c*, *d*]. Resembling the multiplexing of temporal sequences, often used for efficient communication in technological systems, the interval multiplexity at hand implies a certain degree of ordering, imposed by alternate motions along the two branches, f_{1} and f_{2}, of the map.

It can be verified that, for −1/λ_{1} ≤ λ_{2} ≤ −1, the sequence generated by the map (*Eq. 12*), initiated, say, on the right side of the bend point, consists of the following two subsequences:

$$\upsilon(k+2)=\lambda_{1}\lambda_{2}\,\upsilon(k)+\lambda_{1}\frac{u}{\tau_{m}}\tag{27}$$

$$\upsilon(k+2)=\lambda_{1}\lambda_{2}\,\upsilon(k)+\frac{u}{\tau_{m}}\tag{28}$$

obtained, respectively, by applying f_{2} followed by f_{1} and f_{1} followed by f_{2}.
We divide the domain −1/λ_{1} ≤ λ_{2} ≤ −1 into three subdomains.

*C*: for λ_{2} = −1/λ_{1}, yielding λ_{1}λ_{2} = −1, *Eqs. 27* and *28*, respectively, yield two multiplexed oscillations:

$$\upsilon(k+2)=-\upsilon(k)+\lambda_{1}\frac{u}{\tau_{m}}\qquad\text{and}\qquad\upsilon(k+2)=-\upsilon(k)+\frac{u}{\tau_{m}}$$

each alternating between two values determined by the initial condition, as illustrated in Fig. 1*C*.

*D*: for −1/λ_{1} < λ_{2} < −1, yielding −1 < λ_{1}λ_{2} < −λ_{1}, the subsequence (*Eq. 27*) converges to:

$$\upsilon=\lambda_{1}\frac{u}{\tau_{m}\left(1-\lambda_{1}\lambda_{2}\right)}$$

whereas the subsequence (*Eq. 28*) converges to:

$$\upsilon=\frac{u}{\tau_{m}\left(1-\lambda_{1}\lambda_{2}\right)}$$

The firing rate trajectory converges, then, to an oscillation between the two values, as illustrated by Fig. 1*D*.

*E*: the case λ_{2} = −1, depicted by Fig. 1*E*, is characterized by the bend point of the map.

Any cobweb trajectory will be attracted to the dash-dotted square shown, and any cobweb trajectory entering that domain will form a perfect square within it. Equivalently, any trajectory entering the interval [*a*, *b*] at a point υ will form an oscillation between υ and 2υ_{p} − υ. The interval [*a*, *b*] is, then, a global attractor, each point of which is of *period 2*.
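The period-2 property is immediate to verify numerically: for λ_{2} = −1, f_{2}(υ) = −υ + *u*/τ_{m}, so f_{2}(f_{2}(υ)) = υ and every point pairs with 2υ_{p} − υ (values illustrative):

```python
def f2(v, tau_m, u):
    """Branch f2 of the scalar map for lambda_2 = -1: f2(v) = -v + u/tau_m."""
    return -v + u / tau_m

tau_m, u = 10.0, 1.0
vp = u / (2.0 * tau_m)  # fixed point of f2 when lambda_2 = -1
v0 = 0.03               # any entry point into the oscillation interval
v1 = f2(v0, tau_m, u)   # its period-2 partner, 2*vp - v0
```

Every point of the interval is thus of *period 2*, with the specific pair determined entirely by the entry point, in agreement with the cobweb picture of Fig. 1*E*.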

- Copyright © 2013 the American Physiological Society