Introduction

Biological systems must process information about their environment and their internal states in order to survive. Many biological systems have evolved specialized areas where such information processing is particularly evident; prime examples are the central nervous systems of many animals, and the human brain in particular. Taking inspiration from such systems, humans have developed biologically inspired, artificial information processing systems, such as artificial neural networks, to solve a variety of tasks. Artificial neural networks and their biological sources of inspiration share an important property: they perform highly distributed information processing in which the fundamental operations of storing, transferring and modifying information are both highly distributed and co-located at almost all computational elements. In biological and artificial neural networks, for example, the computational elements are neurons, and each neuron's activity can simultaneously serve the storage, transfer and modification of information. This lack of specialization and high degree of distribution separates such information processing systems from classical digital architectures (like a household PC), in which the fundamental information processing operations are much more spatially separated and carried out by dedicated subsystems. While highly distributed information processing certainly adds to the performance of artificial and biological neural networks on certain tasks, it also poses a formidable challenge to understanding how such a system functions.

A powerful approach to describing and understanding computation in systems such as biological or artificial neural networks is information theory, which provides measures of information transfer, storage and modification [1-5]. These measures are well suited to investigating the function of artificial information processing systems, and have been applied successfully to biological neural systems [6-9]. However, in its original form the framework neglects a central aspect of information processing in biological neural networks, namely the highly rhythmic activity these networks frequently display when performing a computation. To understand such systems better, and to build a bridge between information processing and biophysical dynamics, it would therefore be beneficial to link the components of information processing to specific neural rhythms. We have recently presented such a link for the case of information transfer in [10], with results that challenged some long-held ad hoc beliefs about the relationship between brain rhythms and information transfer. In the present work, we extend this approach to information storage. In particular, we focus on active storage, where the stored information is actively in use for a computation in the dynamics of the neural activity (for the distinction from passive storage, e.g. synaptic gain changes, see [11]). A measure of this kind of storage is the active information storage (AIS) [3, 7], which quantifies the amount of information in the present samples of a process ("currently active") that is predictable from its past values. The AIS measure is closely linked to the transfer entropy (TE) [1]: the TE quantifies the information transferred from a source process to the current value of a target process, in the context of the target process' own past. Hence, AIS and TE together reveal the sources of information that contribute to the prediction of the target process' next outcome: either information actively stored in the process' own past, or additional information transferred from another process [7].
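To make this complementarity explicit, a short derivation (our addition, following the decomposition in [7]): writing $\mathbf{Y}_t^-$ for the past state of the target process $Y$ and $\mathbf{X}_t^-$ for that of a source process $X$ (both defined formally in Materials and methods), the chain rule of mutual information splits the total information available for predicting $Y_t$ into a stored and a transferred component:

$$ I\big(Y_t ; \mathbf{Y}_t^-, \mathbf{X}_t^-\big) = \underbrace{I\big(Y_t ; \mathbf{Y}_t^-\big)}_{\text{AIS}} + \underbrace{I\big(Y_t ; \mathbf{X}_t^- \,\big|\, \mathbf{Y}_t^-\big)}_{\text{TE}} $$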
The importance of understanding how neurons and neural systems store information when studying neural information processing was outlined already in [11], and later by the work of [3, 7, 12]. AIS has been applied successfully to magnetoencephalographic (MEG) recordings, for example to test predictive coding theory [13], or to better understand information processing in people with autism spectrum disorder (ASD) [6, 14]. In local field potential (LFP) recordings from two ferrets, [8] found that AIS increased with the concentration of the anesthetic isoflurane at prefrontal (PFC) and primary visual (V1) cortical sites. Anesthetic agents such as isoflurane are known to affect the frequency spectrum throughout the cortex [15] and at the laminar level [15, 16]. In [15] it was shown that the effect of isoflurane on neural oscillatory activity is not only frequency-specific but also related to the computational properties of the area, differing between cortical areas (PFC or V1) and between layers (deep or infragranular layers, granular layers, and superficial or supragranular layers). Similarly, [16, 17] reported highly specific effects of isoflurane on laminar frequency data. Yet even though the effect of anesthesia on brain rhythms is known, the lack of a suitable method has meant that all attempts to link AIS to rhythmic activity in different frequency bands have been indirect, relying on correlation analyses [6, 8, 14]. A spectrally-resolved AIS would therefore be valuable for directly investigating the effects of, for example, isoflurane on brain rhythms, and thus on neural information processing.

Here we present such a method, which quantifies AIS in a spectrally-resolved fashion. We apply it to laminar recordings from two areas of the ferret cortex (PFC and V1) under different levels of anesthesia, to investigate how different frequency bands contribute to information storage under anesthesia. We hypothesised that, owing to the different computational properties of the deep and superficial layers [18], the frequency-resolved AIS would show heterogeneous anesthesia-related changes across frequencies and recording sites. In more detail, we expect the computational and oscillatory differences in AIS to be associated mainly with the two distinct pathways responsible for communication between cortical areas and for intracolumnar communication within an area [17]: the feedforward and feedback pathways. Feedforward pathways are thought to carry sensory information from superficial layers to the superficial and middle layers of higher cortical areas, while feedback connections are thought to carry contextual information and predictions from deep layers to the deep or superficial layers of lower-order areas [17-19]. Our choice to investigate the behaviour of AIS in V1 and PFC is motivated by these areas being hierarchically well separated, with V1 at the bottom of the visual cortical hierarchy and PFC a hierarchically high association area. Investigating spectral AIS in ferrets is motivated by the fact that ferrets, as an intermediate model species, share similarities with primates, i.e. a highly developed visual system (V1) and cortical association areas such as PFC [15].
Materials and methods

Ethics statement

Experimental procedures for the laminar recordings in ferret cortex were approved by the University of North Carolina-Chapel Hill Institutional Animal Care and Use Committee (UNC-CH IACUC) and exceed the guidelines set forth by the National Institutes of Health and U.S. Department of Agriculture.

In this section, we first clarify the purpose and application of the proposed method. Second, we introduce the information-theoretic preliminaries together with the AIS measure and the corresponding notation. Central to our method is the creation of frequency-specific surrogate data, for which we summarize the technical background; here we outline only the crucial properties of the Maximal Overlap Discrete Wavelet Transform (MODWT), while a more detailed description can be found in [20, 21]. Finally, we present the core algorithm to identify frequency-specific AIS. The ferret data employed in the AIS analysis can be obtained from the Dryad database [22].

Background

Problem statement and analysis setting. The aim of the proposed method is to determine whether statistically significant active information storage is generated by one or more frequencies. The method is meant to be applied after significant AIS has been established in the time domain, e.g. as computed by the AIS algorithm in [23], which it then complements with a novel spectrally-resolved perspective.

Technical background: Active information storage (AIS). We assume that a stochastic process recorded from a system (e.g. cortical or laminar sites) can be treated as realizations $y_t$ of random variables $Y_t$ that form a random process $Y = \{Y_1, \ldots, Y_t, \ldots\}$ describing the system dynamics. AIS is then defined as the (differential) mutual information between the future of a signal and its immediate past state [3, 7, 24]:

$$ A_Y = I\big(Y_t ; \mathbf{Y}_t^-\big) \tag{1} $$

where $Y$ is a random process with present value $Y_t$ and past state $\mathbf{Y}_t^- = (Y_{t-\delta_1}, \ldots, Y_{t-\delta_k})$, with $\delta_i = i\Delta t$, where $\Delta t$ is the sampling interval of the process observation and $\delta_1 \le \delta_i \le \delta_k$.
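To make Eq (1) concrete, the following minimal sketch estimates AIS under a Gaussian model, where mutual information reduces to a ratio of covariance determinants. This is an illustration we add here, not the estimator used in the paper (the analyses rely on the AIS algorithm of [23]); the function name and the AR(1) test signal are our own.

```python
import numpy as np

def gaussian_ais(y, k=3, tau=1):
    """Estimate A_Y = I(Y_t ; Y_t^-) of a 1-D signal under a Gaussian
    model, with past state Y_t^- = (Y_{t-tau}, ..., Y_{t-k*tau}).

    Minimal illustration only; the paper's analyses use the AIS
    algorithm of ref. [23] instead.
    """
    y = np.asarray(y, dtype=float)
    t0 = k * tau                       # first index with a complete past state
    present = y[t0:]                                           # Y_t
    past = np.column_stack([y[t0 - i * tau: len(y) - i * tau]
                            for i in range(1, k + 1)])         # Y_t^-
    cov = np.cov(np.column_stack([present, past]), rowvar=False)
    logdet = lambda c: np.linalg.slogdet(np.atleast_2d(c))[1]
    # Gaussian MI: I(X;Y) = 0.5 * (ln|Sigma_X| + ln|Sigma_Y| - ln|Sigma_XY|)
    ais_nats = 0.5 * (logdet(cov[:1, :1]) + logdet(cov[1:, 1:]) - logdet(cov))
    return ais_nats / np.log(2)        # nats -> bits

# Example: an AR(1) process, y_t = 0.8 * y_{t-1} + noise, stores information
rng = np.random.default_rng(0)
y = np.zeros(100_000)
for t in range(1, len(y)):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
print(f"estimated AIS: {gaussian_ais(y, k=3):.3f} bits")
```

For this AR(1) example the Gaussian AIS can be computed analytically as $-\tfrac{1}{2}\log_2(1 - 0.8^2) \approx 0.74$ bits, which the estimate should approach for long recordings.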
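The frequency-specific surrogate data mentioned above can be illustrated in the same spirit. The sketch below is our illustration, not the paper's implementation: it uses PyWavelets' stationary wavelet transform (pywt.swt) as a stand-in for the MODWT, and the wavelet family and the coefficient-permutation scheme are placeholder assumptions chosen for brevity.

```python
import numpy as np
import pywt  # PyWavelets; its SWT serves here as a stand-in for the MODWT

def band_shuffled_surrogate(y, target_level, max_level=4,
                            wavelet="sym4", rng=None):
    """Return a surrogate signal in which the temporal structure of one
    wavelet scale (frequency band) is destroyed by permuting its detail
    coefficients in time, while all other scales are left intact.

    Illustrative sketch under stated assumptions; not the paper's algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y) - len(y) % (2 ** max_level)    # SWT needs a multiple of 2**level
    coeffs = pywt.swt(np.asarray(y[:n], dtype=float), wavelet, level=max_level)
    # coeffs[0] holds the coarsest scale; map target_level to a list index
    idx = max_level - target_level
    approx, detail = coeffs[idx]
    coeffs[idx] = (approx, rng.permutation(detail))   # scramble one band only
    return pywt.iswt(coeffs, wavelet)

# One-band surrogate for a toy signal (scale 2 of 4)
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(4096))
surrogate = band_shuffled_surrogate(y, target_level=2, rng=rng)
```

Comparing the time-domain AIS of the original signal with the distribution of AIS values obtained from many such surrogates then indicates whether the scrambled band contributes significantly to the stored information; this is the general logic behind the frequency-specific test developed in the following sections.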