# Self-control in Sparsely Coded Networks

###### Abstract

A complete self-control mechanism is proposed for the dynamics of neural networks through the introduction of a time-dependent threshold, determined as a function of both the noise and the pattern activity in the network. Especially for sparsely coded models, this mechanism is shown to considerably improve the storage capacity, the basins of attraction and the mutual information content of the network.

###### pacs:

PACS numbers: 87.10.+e, 64.60.Cn

Sparsely coded models have attracted a lot of attention in the development of neural networks, both from the device-oriented and the biologically oriented point of view [3]–[4]. It is well known that they have a large storage capacity, which behaves as $1/(a|\ln a|)$ for small $a$, where $a$ is the pattern activity. However, it is clear that the basins of attraction, e.g., should not become too small because then sparse coding is, in fact, useless.

In this context the necessity of an activity control system has been emphasized, which tries to keep the activity of the network in the retrieval process the same as the one for the memorized patterns [5]–[7]. This has led to several discussions imposing external constraints on the dynamics (see the references in [4]). Clearly, the enforcement of such a constraint at every time step destroys part of the autonomous functioning of the network.

An important question is then whether the capacity of storage and retrieval with non-negligible basins of attraction can be improved and even be optimized without imposing these external constraints, keeping at the same time the simplicity of the architecture of the network.

In this Letter we answer this question by proposing, as far as we are aware for the first time, a complete self-control mechanism in the dynamics of neural networks. This is done through the introduction of a time-dependent threshold in the transfer function. This threshold is chosen as a function of the noise in the system and the pattern activity, and adapts itself in the course of the time evolution. The difference with existing results in the literature [4] lies precisely in this adaptivity property. This immediately solves, e.g., the difficult problem of finding the rather narrow interval of optimal thresholds such that the basins of attraction of the memorized patterns do not shrink to zero.

We have worked out the practical case of sparsely coded models. We find that the storage capacity, the basins of attraction, as well as the mutual information content are improved. These results are shown to be valid also for not so sparse models. Moreover, a similar self-control mechanism should work even in more complicated architectures, e.g., layered and fully connected ones. Furthermore, this idea of self-control might be relevant for dynamical systems in general, when trying to improve the basins of attraction and the convergence times.

Consider a network of $N$ binary neurons $\sigma_i(t) \in \{0,1\}$, $i = 1,\dots,N$. At time $t$ and zero temperature the neurons are updated in parallel according to the rule

$$\sigma_i(t+1) = \mathrm{F}_{\theta_t}\big(h_i(t)\big), \qquad i = 1,\dots,N. \tag{1}$$

In general, the input-output relation $\mathrm{F}_{\theta_t}$ can be a monotonic function with a time-dependent threshold $\theta_t$. In the sequel we restrict ourselves to the step function $\mathrm{F}_{\theta_t}(h) = \Theta(h - \theta_t)$. The quantity $h_i(t) = \sum_{j \neq i} J_{ij}\,\sigma_j(t)$ is the local field of neuron $i$ at time $t$, and $a$ is the activity of the stored patterns $\{\xi_i^\mu\}$, $\mu = 1,\dots,p$. The latter are independent identically distributed random variables (IIDRV) with respect to $i$ and $\mu$, determined by the probability distribution

$$p(\xi_i^\mu) = a\,\delta(\xi_i^\mu - 1) + (1-a)\,\delta(\xi_i^\mu). \tag{2}$$

At this point we remark that the activity can be written as $a = (1+b)/2$, with $b$ the bias of the patterns as defined, e.g., in [5]. In fact, $\langle \xi_i^\mu \rangle = a$, but no correlations between the patterns occur, i.e., $\langle \xi_i^\mu \xi_i^\nu \rangle = a^2$ for $\mu \neq \nu$. We now consider an extremely diluted asymmetric version of this model in which each neuron is connected, on average, with $C$ other neurons. In that case the synaptic couplings are determined by the covariance rule

$$J_{ij} = \frac{c_{ij}}{C\,a(1-a)} \sum_{\mu=1}^{p} (\xi_i^\mu - a)(\xi_j^\mu - a). \tag{3}$$

Here the $c_{ij} \in \{0,1\}$ are IIDRV with probability $\Pr(c_{ij} = 1) = C/N$. For $a = 1/2$ and $\theta_t = 0$ we recover the diluted Hopfield model.
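As an illustration of how the covariance rule shapes the local fields, the following sketch builds a small diluted network and checks numerically that, when the state is clamped to the condensed pattern, active sites feel a mean field close to $1-a$ and inactive sites a mean field close to $-a$. All sizes ($N$, $C$, $p$) and the random seed are our own illustrative assumptions, not values from the Letter.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the Letter)
N, C, p, a = 2000, 400, 8, 0.2
rng = np.random.default_rng(0)

# Patterns xi in {0,1} with activity a, drawn according to Eq. (2)
xi = (rng.random((N, p)) < a).astype(float)

# Dilution mask: c_ij = 1 with probability C/N, no self-coupling
c = (rng.random((N, N)) < C / N).astype(float)
np.fill_diagonal(c, 0.0)

# Network state clamped to the first (condensed) pattern
sigma = xi[:, 0].copy()

# Local fields under the covariance rule, computed without
# materializing the full J matrix:
#   h_i = sum_mu (xi_i^mu - a) [sum_j c_ij (xi_j^mu - a) sigma_j] / (C a (1-a))
v = c @ ((xi - a) * sigma[:, None])            # shape (N, p)
h = ((xi - a) * v).sum(axis=1) / (C * a * (1 - a))

# Active sites of the condensed pattern should feel ~ (1-a),
# inactive sites ~ -a, up to residual noise of variance ~ (p/C) q
mean_on = h[sigma == 1].mean()
mean_off = h[sigma == 0].mean()
print(mean_on, mean_off)
```

With the small loading $p/C = 0.02$ chosen here, the residual noise is weak and the two means separate cleanly; this is exactly the signal-to-noise split exploited in the analysis below.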

The relevant order parameters measuring the quality of retrieval are the overlap $m^\mu(t)$ of the microscopic state of the network and the $\mu$th pattern, and the neural activity $q(t)$:

$$m_N^\mu(t) = \frac{1}{a(1-a)N} \sum_{i} (\xi_i^\mu - a)\,\sigma_i(t), \qquad q_N(t) = \frac{1}{N} \sum_{i} \sigma_i(t). \tag{4}$$

The $m_N^\mu(t)$ are normalized order parameters within the interval $[-1,1]$, which attain the maximal value $1$ for $\sigma_i(t) = \xi_i^\mu$. They have to be considered over the diluted structure such that the loading $\alpha$ is defined by $\alpha = p/C$. The Hamming distance between the state of the neurons and the pattern $\xi^\mu$ can be written as

$$d\big(\xi^\mu, \sigma(t)\big) = \frac{1}{N}\sum_i \big[\xi_i^\mu - \sigma_i(t)\big]^2 = a + (1-2a)\,q_N(t) - 2a(1-a)\,m_N^\mu(t).$$

To fix the ideas and without loss of generality, we take an initial network configuration correlated with only one pattern, meaning that only the retrieval overlap for that pattern, say $m^1(t)$, is macroscopic, i.e., of order $1$ in the thermodynamic limit $N \to \infty$. The rest of the patterns causes a residual noise at each time step of the dynamics. Depending on the architecture of the network this noise might be extremely difficult to treat [8]. A novel idea is then to let the network itself autonomously counter this residual noise at each step of the dynamical evolution, by introducing an adaptive, hence time-dependent, threshold. We propose the general form $\theta_t = c(a)\sqrt{D_t}$, with $D_t$ the variance of this residual noise. This self-control mechanism of the network is complete if we find a way to determine $c(a)$.

In order to do so we first write down the evolution equations governing the dynamics. We recall that for the particular model we are considering, the parallel dynamics can be solved exactly following the methods of a signal-to-noise analysis (see, e.g., [9], [10]). Such an approach leads to the following equations for the order parameters in the thermodynamic limit:

$$m_{t+1} = \Big\langle \mathrm{F}_{\theta_t}\big((1-a)\,m_t + \omega_t\big) \Big\rangle - \Big\langle \mathrm{F}_{\theta_t}\big(-a\,m_t + \omega_t\big) \Big\rangle \tag{5}$$

$$q_{t+1} = a\,\Big\langle \mathrm{F}_{\theta_t}\big((1-a)\,m_t + \omega_t\big) \Big\rangle + (1-a)\,\Big\langle \mathrm{F}_{\theta_t}\big(-a\,m_t + \omega_t\big) \Big\rangle \tag{6}$$

with $m_t \equiv m_t^1$, where we have averaged over the first pattern and where the angular brackets indicate that we still have to average over the residual noise, which can be written as $\omega_t = \sqrt{D_t}\,z$ with $D_t = \alpha q_t$ and $z$ a Gaussian random variable with mean zero and variance unity. The order parameters $m_t$ and $q_t$ are the thermodynamic limits of (4). The quantity $m_t$ reduces to the overlap of the Hopfield model, again when taking $a = 1/2$ and $\theta_t = 0$. From now on we forget about the superscript $1$.
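Carrying out the average over the Gaussian residual noise explicitly, a standard step which we write down for later use, gives the closed-form map

$$\big\langle \mathrm{F}_{\theta_t}(x + \omega_t) \big\rangle = \int \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\,\Theta\big(x + \sqrt{D_t}\,z - \theta_t\big) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\theta_t - x}{\sqrt{2D_t}}\right),$$

to be evaluated at $x = (1-a)\,m_t$ and $x = -a\,m_t$, with $D_t = \alpha q_t$.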

The equations (5) and (6) give a self-controlled dynamics if we can completely specify, a priori, the threshold $\theta_t$ proposed before. We remark that, for the present model, $D_t = \alpha q_t$ is a macroscopic parameter, thus no average must be done over the microscopic random variables at each time step $t$. We have, therefore, a mapping with a threshold which changes at each time step, but no statistical history enters the evolution process. What is left then is to find an optimal form for $c(a)$.

A very intuitive reasoning based on the detailed behavior of these equations (5)-(6) goes as follows. To have $m_t \to 1$ and $q_t \to a$ with $t \to \infty$ at a given loading $\alpha$ such that good retrieval properties, i.e., $\sigma_i(t) = \xi_i^1$ for most $i$, are realized, we want the following inequalities to be satisfied: $(1-a)m_t - \theta_t > 0$ and $\theta_t + a\,m_t > 0$. Using the general form $\theta_t = c(a)\sqrt{\alpha q_t}$ for the threshold we obtain that $c(a)\sqrt{\alpha q_t} < (1-a)m_t$. This leads to $\alpha < [(1-a)m_t]^2/[c^2(a)\,q_t]$. Here we remark that $m_t$ itself depends on $\alpha$ in the sense that for increasing $\alpha$ it gets more difficult to have good retrieval, such that $m_t$ decreases. But the threshold can still be chosen a priori.

In the limit of sparse coding, meaning that the fraction of active neurons is very small and tends to zero in the thermodynamic limit, we can present a more refined result for $c(a)$ by rewriting the second term on the r.h.s. of Eq. (6) asymptotically as

$$(1-a)\,\Big\langle \mathrm{F}_{\theta_t}\big(-a\,m_t + \omega_t\big) \Big\rangle \simeq \frac{\sqrt{D_t}}{\theta_t\sqrt{2\pi}}\,\exp\!\Big(-\frac{\theta_t^2}{2D_t}\Big), \qquad a \to 0.$$

This term must vanish faster than $a$, so that we obtain $c(a) = \sqrt{-2\ln a}$ and hence $\theta_t = \sqrt{-2\ln a}\,\sqrt{\alpha q_t}$. Using this and the first inequality written down above we can evaluate the maximal capacity $\alpha_{\max}$ for which some small errors in the retrieval are allowed. The result is $\alpha_{\max} \sim 1/(2a|\ln a|)$, which is of the same order as the critical capacity $\alpha_c \sim 1/(a|\ln a|)$ found for non-self-controlled sparsely coded neural networks [4], [7], [11]-[13].
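The self-controlled map is easy to iterate numerically. The sketch below assumes the erfc form of Eqs. (5)-(6) obtained by carrying out the Gaussian average, together with the threshold $\theta_t = \sqrt{-2\ln a}\,\sqrt{\alpha q_t}$; the parameter values and the function name `flow` are our own illustrative choices.

```python
import math

def flow(a, alpha, m0, q0, steps=50):
    """Iterate the mean-field map (5)-(6) with the self-control
    threshold theta_t = sqrt(-2 ln a) * sqrt(alpha * q_t)."""
    c = math.sqrt(-2.0 * math.log(a))
    m, q = m0, q0
    for _ in range(steps):
        D = max(alpha * q, 1e-300)            # residual-noise variance
        theta = c * math.sqrt(D)
        # Gaussian averages <F_theta((1-a)m + w)> and <F_theta(-a m + w)>
        s = 0.5 * math.erfc((theta - (1 - a) * m) / math.sqrt(2 * D))
        n = 0.5 * math.erfc((theta + a * m) / math.sqrt(2 * D))
        m = s - n                              # Eq. (5)
        q = a * s + (1 - a) * n                # Eq. (6)
    return m, q

a = 0.01
m_ret, q_ret = flow(a, alpha=2.0, m0=0.8, q0=a)    # below the capacity estimate
m_no, q_no = flow(a, alpha=30.0, m0=1.0, q0=a)     # well above it
print(m_ret, q_ret, m_no)
```

For $a = 0.01$ the estimate gives $\alpha_{\max} \approx 1/(2a|\ln a|) \approx 10.9$: below it the map flows to $m \approx 1$ with $q$ close to $a$, while well above it the retrieval solution disappears.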

Next, it is known that while the Hamming distance is a good measure for the performance of a uniform network (i.e., $a = 1/2$), it does not give a complete description of the information content for sparsely coded networks. In more detail, it cannot distinguish between a situation where most of the wrong neurons ($\sigma_i \neq \xi_i^1$) are turned off and a situation where these wrong neurons are turned on. This distinction is extremely critical because the inactive neurons carry less information than the active ones. To give one example, when $\sigma_i = 0$ for all $i$, the Hamming distance equals $a$ and hence vanishes in the sparsely coded limit, while for $\sigma_i = 1$ for all $i$, $q_t = 1$ and hence the Hamming distance goes to $1$. However, in both cases there is no information transmitted. To solve this problem we introduce the mutual information content of the network.

The mutual information function (see, e.g., [14]) is a concept in information theory which measures the average amount of information that can be received by the user by observing the signal at the output of a channel. For the problem at hand, i.e., the retrieval dynamics of the pattern $\xi^1$, where each time step is regarded as a channel, it can be defined as (we forget about the time index $t$)

$$I(\sigma; \xi) = S(\sigma) - \big\langle S(\sigma|\xi) \big\rangle_{\xi}, \tag{7}$$

$$S(\sigma) = -\sum_{\sigma} p(\sigma) \ln p(\sigma), \tag{8}$$

$$S(\sigma|\xi) = -\sum_{\sigma} p(\sigma|\xi) \ln p(\sigma|\xi). \tag{9}$$

Here $S(\sigma)$ and $S(\sigma|\xi)$ are the entropy and the conditional entropy of the output, respectively. The quantity $p(\sigma|\xi)$ is the conditional probability that the neuron $\sigma_i$ is in the state $\sigma$ at time $t$, given that the site $i$ of the pattern being retrieved is $\xi_i^1 = \xi$. It is given by

$$p(\sigma|\xi) = \gamma_\xi\,\delta(\sigma - 1) + (1 - \gamma_\xi)\,\delta(\sigma), \qquad \gamma_\xi \equiv q + (\xi - a)\,m, \tag{10}$$

where we have assumed that this formula holds for every site index $i$, and where $m$ and $q$ are precisely the order parameters (4) in the thermodynamic limit. We have also used the normalization $\sum_\sigma p(\sigma|\xi) = 1$. Using the probability distribution of the patterns (Eq. (2)), we furthermore obtain

$$p(\sigma) = \sum_{\xi} p(\xi)\,p(\sigma|\xi) = q\,\delta(\sigma - 1) + (1 - q)\,\delta(\sigma). \tag{11}$$

Hence the expressions for the entropies defined above become

$$S(\sigma) = -q \ln q - (1-q) \ln(1-q), \tag{12}$$

$$\big\langle S(\sigma|\xi) \big\rangle_{\xi} = -a\big[\gamma_1 \ln \gamma_1 + (1-\gamma_1)\ln(1-\gamma_1)\big] - (1-a)\big[\gamma_0 \ln \gamma_0 + (1-\gamma_0)\ln(1-\gamma_0)\big]. \tag{13}$$

Recalling Eq. (7), this completes the calculation of the mutual information content of the present model.
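For numerical evaluation it is convenient to package these entropies into a single function of $(m, q, a)$; the function names and the convention $0\ln 0 = 0$ below are our own.

```python
import math

def H2(p):
    """Binary entropy -p ln p - (1-p) ln(1-p), with 0 ln 0 = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def mutual_information(m, q, a):
    """I(sigma; xi) = S(sigma) - <S(sigma|xi)>_xi,
    with gamma_xi = q + (xi - a) m for xi in {0, 1}."""
    gamma1 = q + (1 - a) * m   # Pr(sigma = 1 | xi = 1)
    gamma0 = q - a * m         # Pr(sigma = 1 | xi = 0)
    return H2(q) - a * H2(gamma1) - (1 - a) * H2(gamma0)

# Perfect retrieval (m = 1, q = a) transmits the full pattern entropy;
# a state uncorrelated with the pattern (m = 0) transmits nothing.
print(mutual_information(1.0, 0.1, 0.1), mutual_information(0.0, 0.1, 0.1))
```

This makes explicit the point raised above: states with the same Hamming distance can carry very different amounts of information, and it is $I$ that should be maximized.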

We have solved this self-controlled dynamics for the sparsely coded network numerically and compared its retrieval properties with those of non-self-controlled models. We are only interested in the retrieval solutions leading to $m > 0$ and carrying a non-zero information $I$.

In Fig. 1 we have plotted the information content $I$ as a function of the threshold $\theta$ for fixed activity $a$ and several values of the loading $\alpha$, without self-control. This illustrates that it is rather difficult, especially for sparse coding, to choose a threshold interval such that $I$ is non-zero.

In Fig. 2 we compare the time evolution of the retrieval overlap, $m_t$, starting from several initial values, $m_0$, for the self-control model with an initial neural activity $q_0 = a$ and threshold $\theta_t = c(a)\sqrt{\alpha q_t}$, with the model where the threshold is chosen by hand in an optimal way, in the sense that we took the one with the greatest information content $I$, by looking at the corresponding results of Fig. 1. We see that the self-control forces more of the overlap trajectories to go to the retrieval attractor. It does improve substantially the basin of attraction. This is further illustrated in Fig. 3, where the basin of attraction for the whole retrieval phase is shown for the model with a threshold $\theta$ selected by hand for every loading $\alpha$ and for the model with the self-control threshold. We remark that even near the border of critical storage the results are still improved. Hence the storage capacity itself is also larger. These results are not strongly dependent upon the initial value of the activity $q_0$, as long as it is of order $a$.

Furthermore, we find that self-control gives a comparable improvement for not so sparse models. This is illustrated in Fig. 4, where we show some analytic results together with a first set of simulations for the basins of attraction. This type of simulation for extremely diluted models is known to be difficult because of the theoretical limits $N \to \infty$ and $C/N \to 0$. Nevertheless, it is clear that, concerning the self-control aspect, qualitative agreement with the analytic results is obtained. For these values of $a$, the Hamming distance is the relevant quantity. The quantitative difference is mostly due to the fact that the dilution in the simulations is not extreme.

Figure 5 displays the information $I$ as a function of $\alpha$ for the self-controlled model with several values of $a$. We observe that the maximal information is reached somewhat before the critical capacity and that it slowly increases with increasing sparseness.

Finally, in Fig. 6 we have plotted the maximal information and the corresponding critical capacity as a function of the activity $a$ on a logarithmic scale. It shows that the information content increases with decreasing $a$ until it starts to saturate. The saturation is rather slow, in agreement with results found in the literature [12], [13].

In conclusion, we have found a novel way to let a diluted network autonomously control its dynamics such that the basins of attraction and the mutual information content are maximal.

We thank S. Amari and G. Jongen for useful discussions. This work has been supported by the Research Fund of the K.U. Leuven (grant OT/94/9). One of us (D.B.) is indebted to the Fund for Scientific Research - Flanders (Belgium) for financial support.

## References

- [3] J. Nadal and G. Toulouse, Network: Computation in Neural Systems 1, 61 (1990).
- [4] M. Okada, Neural Networks 9, 1429 (1996).
- [5] D. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. A 35, 2293 (1987).
- [6] S. Amari, Neural Networks 2, 451 (1989).
- [7] J. Buhmann, R. Divko and K. Schulten, Phys. Rev. A 39, 2689 (1989).
- [8] E. Barkai, I. Kanter and H. Sompolinsky, Phys. Rev. A 41, 590 (1990).
- [9] B. Derrida, E. Gardner and A. Zippelius, Europhys. Lett. 4, 167 (1987).
- [10] D. Bollé, G. M. Shim, B. Vinck, and V. A. Zagrebnov, J. Stat. Phys. 74, 565 (1994).
- [11] M. V. Tsodyks, Europhys. Lett. 7, 203 (1988).
- [12] C. J. Perez-Vicente, Europhys. Lett. 10, 621 (1989).
- [13] H. Horner, Z. Phys. B 75, 133 (1989).
- [14] R. E. Blahut, Principles and Practice of Information Theory (Addison-Wesley, Reading, MA, 1990), Chapter 5.