A window‐less approach for capturing time‐varying connectivity in fMRI data reveals the presence of states with variable rates of change

INTRODUCTION

Coherent spatially distributed fluctuations, as a vehicle to measure the functional integration of the brain, have been investigated under various cognitive states; this is commonly referred to as functional connectivity (FC) (Ferri et al., 2005). In task-based studies of FC, based on the prior design of the study, underlying cognitive states are hypothesized to constitute a task-focused state and a short period of resting state as the baseline. Neurophysiological interpretations are typically driven by context-dependent connectivity analysis based on these states of the brain, as in psycho-physiological interaction (PPI) (Friston et al., 1997; Kim and Horwitz, 2008).

More recently, task-free studies, referred to as resting-state (RS) studies, have challenged the previous assumption that the resting state represents only a single baseline cognitive state. Multiple recent RS studies have provided evidence of dynamics of functional connectivity during an extended resting-state session (Allen et al., 2014; Handwerker, Roopchansingh, Gonzalez-Castillo, & Bandettini, 2012; Hutchison et al., 2013). However, compared to task-based studies, there is a lack of prior knowledge about the expected dynamics during the resting state, because there are no designed or explicit stimuli. This requires approaches that capture the dynamics of connectivity without prior knowledge of when changes of the underlying states might occur, although each approach includes some assumption regarding the underlying nature of the dynamics. For example, approaches based on the selection of a sliding window (a commonly used approach) assume that connectivity at a given timepoint (usually at the center of the window) can be estimated from all the samples of the input time series spanned by the selected window. We refer to this as a temporal locality assumption. Moreover, if it is assumed that the samples are derived from a multivariate Gaussian distribution, connectivity can be estimated by measuring correlation or covariance between samples in a given window, as has been studied in Allen et al. (2014).

The validity of these assumptions depends on the actual nature of the time series. For example, if we have a time series whose actual rate of change is known to be much slower than its sampling rate, then a locality assumption is justified. Likewise, if we know that what the time series measures at each timepoint is the average of many independent and identically distributed (i.i.d.) samples from an unknown distribution, then, based on the central limit theorem, the Gaussian assumption is justified. Although these assumptions can be validated in many types of time series, such as financial data, working with brain functional data is much less certain, since there are many unknowns about the actual nature of the captured signal. A variety of modalities capture different correlates of true neurophysiological activity of the brain, with different spatial and temporal resolutions. For example, in the case of fMRI, voxel time series have commonly been modeled as the convolution of neural activity with a slowly varying hemodynamic response function (HRF). From this HRF, a locality assumption can be justified. This is possibly the main reason behind the popularity of approaches based on a sliding-window analysis.
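To make the temporal locality assumption concrete, here is a minimal sketch of sliding-window covariance estimation (our illustration, not code from this study; the window length and the toy signal are arbitrary choices):

```python
import numpy as np

def sliding_window_cov(X, win):
    """Estimate one covariance matrix per timepoint from the `win` samples
    centered on it; this is the temporal locality assumption in code.

    X   : (T, n) multivariate signal, one row per timepoint.
    win : odd window length in timepoints.
    """
    T, n = X.shape
    half = win // 2
    covs = np.full((T, n, n), np.nan)
    for t in range(half, T - half):
        W = X[t - half : t + half + 1]           # samples spanned by the window
        W = W - W.mean(axis=0, keepdims=True)    # zero-mean within the window
        covs[t] = W.T @ W / (win - 1)            # windowed sample covariance
    return covs

# toy usage: 2 "networks", 300 timepoints
rng = np.random.default_rng(0)
covs = sliding_window_cov(rng.standard_normal((300, 2)), win=31)
```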
Nevertheless, recent studies have found evidence of changes in both the activation of, and the connectivity between, voxels and networks (groups of functionally related voxels) at rates beyond what has often been assumed. In terms of frequency, the blood-oxygen-level-dependent (BOLD) signal, as a reflection of the underlying neural activity and the connectivity due to it, has been assumed to span the range between 0.01 and 0.1 Hz, which also agrees with commonly used models of the HRF. However, more recent studies (Chang and Glover, 2010; Trapp, Vakamudi, & Posse, 2017; Yaesoubi, Allen, Miller, & Calhoun, 2015a) have argued that there is observable activity and connectivity at higher frequencies, up to the maximum observable frequency. Since these studies are very recent, there have been various speculations about the sources of the observed high-frequency activation and connectivity variance. These include a relationship with nuisance variables due to improper models for controlling these variables (Chen, Jahanian, & Glover, 2017), and questions about conventional models of BOLD signal generation that are unable to explain these observations (Chen and Glover, 2015). Nonetheless, these studies question the validity of the assumptions of commonly used fMRI analyses, including the locality assumption, and support the use of models that can at least capture information that is blurred out or ignored by existing models, should it be present.

Various alternative approaches have been proposed to relax the limiting assumption of locality. These include adaptive windowing based on the current temporal frequency of the data, which leverages time–frequency analysis and measures dependence in the augmented domain (Chang and Glover, 2010; Yaesoubi et al., 2015a). There are also approaches that do not require an explicit selection of a window but still have an inherent locality assumption, including approaches based on filtering in time such as Hidden Markov Models (HMMs) (Eavani, Satterthwaite, Gur, Gur, & Davatzikos, 2013). Though an HMM can be implemented in a way that does not require a windowing operation, the traditional HMM still has an embedded locality assumption in its transition probabilities between states, which are stationary in time.

There is also another approach, known as co-activation pattern (CAP) analysis (Liu and Duyn, 2013), which, similarly to what we propose here, is not based on a locality assumption. Although at first glance CAP analysis might look very similar to this work, as we will explain in more detail in the discussion section, CAP lacks an actual formulation for capturing dynamic linear dependence as we propose here. However, there are interesting similarities as well as differences between the two approaches, which will be discussed later.

In this work, we propose a new approach for capturing the dynamic functional connectivity of fMRI networks that requires neither an explicit nor an implicit locality assumption. This enables us to observe changes in the connectivity of the brain that, although occurring close in time, might belong to different underlying sources of activity. Theoretically speaking, this allows observation of changes in connectivity with heterogeneous rates up to the Nyquist rate, which means observing changes that occur on the scale of one timepoint.
We will argue that by removing the locality assumption we can capture both connectivity with similar modularity and novel connectivity patterns that were likely obscured in previous studies.

Finally, studies of functional connectivity have found many applications in investigations of functional brain differences between clinical groups (e.g., mental illnesses such as schizophrenia and bipolar disorder) (Kaiser et al., 2016; Rashid, Damaraju, Pearlson, & Calhoun, 2014) or across biological variables (such as age and gender) (Battaglia et al., 2017; Yaesoubi, Miller, & Calhoun, 2015b). Although the study of group differences is not the main focus of this work, we do study the correlation between age and gender and different characteristics of the connectivity states captured by the proposed method. Most interestingly, we found a strong correlation between the age and gender of subjects and the tendency of specific states to occur. Among these states, there is one state that has not been observed in previous studies of resting-state dynamic functional connectivity. We show evidence suggesting that this state in particular is missed, or at least blurred out, by previous approaches.

MATERIALS AND METHODS

We first explain our method on a general multivariate input signal and then explain how we apply the method to fMRI data. In the following, an uppercase bold letter (e.g., $\mathbf{X}(t)$) represents a matrix (if indexed by $t$, it represents a multivariate signal, i.e., a realization of a random process); a lowercase bold letter (e.g., $\mathbf{v}$) represents a (column) vector (similarly, if indexed by $t$, it represents a univariate signal, again a realization of a random process); and an italic letter (e.g., $k$ or $K$) represents a scalar variable.

Assume we have an $n$-dimensional input $\mathbf{X}(t) = [\mathbf{x}_1(t), \mathbf{x}_2(t), \ldots, \mathbf{x}_n(t)] \in \mathbb{R}^{T \times n}$. Each feature vector $\mathbf{x}_i$ (a column of $\mathbf{X}(t)$) consists of i.i.d. samples, and since we focus only on the estimation of covariance/correlation as the measure of dependence, there is an inherent assumption that the samples have a Gaussian distribution. We also assume all feature vectors have zero mean. Now assume that $\mathbf{X}(t)$ is low-rank and we can reasonably (i.e., with small residual error) approximate it with its rank-1 estimate:

$$\mathbf{X}(t) \cong \mathbf{m}(t)\,\mathbf{v}^{T} \tag{1}$$

Since the covariance between any two feature vectors of $\mathbf{X}(t)$ (e.g., $\mathbf{x}_i(t)$, $\mathbf{x}_j(t)$) is equal to $E[\mathbf{x}_i(t)\mathbf{x}_j(t)] - E[\mathbf{x}_i(t)]E[\mathbf{x}_j(t)]$, which reduces to $E[\mathbf{x}_i(t)\mathbf{x}_j(t)]$ because $E[\mathbf{x}_i(t)] = E[\mathbf{x}_j(t)] = 0$, it is easy to show that $\mathrm{cov}(\mathbf{x}_i(t), \mathbf{x}_j(t))$ can also be approximated by

$$\mathrm{cov}(\mathbf{x}_i(t), \mathbf{x}_j(t)) \cong \mathrm{var}(\mathbf{m}(t))\, v_i v_j \tag{2}$$

where $v_i$ and $v_j$ are the $i$th and $j$th elements of the vector $\mathbf{v}$. The above equation extends easily to an approximation of the covariance of the whole input matrix $\mathbf{X}(t)$:

$$\mathrm{cov}(\mathbf{X}(t)) = \begin{bmatrix} \mathrm{cov}(\mathbf{x}_1(t), \mathbf{x}_1(t)) & \cdots & \mathrm{cov}(\mathbf{x}_n(t), \mathbf{x}_1(t)) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(\mathbf{x}_1(t), \mathbf{x}_n(t)) & \cdots & \mathrm{cov}(\mathbf{x}_n(t), \mathbf{x}_n(t)) \end{bmatrix} \cong \mathrm{var}(\mathbf{m}(t)) \begin{bmatrix} v_1^2 & \cdots & v_n v_1 \\ \vdots & \ddots & \vdots \\ v_1 v_n & \cdots & v_n^2 \end{bmatrix} = \mathrm{var}(\mathbf{m}(t))\, \mathbf{v}\mathbf{v}^{T} \tag{3}$$

In the above equation, $\mathrm{var}(\mathbf{m}(t))$ is a constant scalar; consequently, by ignoring it, we can estimate $\mathrm{cov}(\mathbf{X}(t))$ by a simple self-outer product of the vector $\mathbf{v}$, up to a scaling ambiguity.

The main concept behind the above formulation is that the estimation of covariance is in fact equivalent to finding the dominant linear pattern (the rank-1 estimate) in the sample space, without using information from the time domain. For example, in a sliding-window analysis, the goal is to find such a linear pattern in the space of samples that lie in the window, and a different linear pattern is fitted to each window. Conceptually, the idea is very similar to piecewise linear regression, although here we focus on the estimation of covariance, so a different kind of linear pattern is fitted.
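The identity in Equations (1)–(3) is easy to verify numerically. The following sketch (our illustration, with arbitrary dimensions) draws exactly rank-1 data $\mathbf{X} = \mathbf{m}\mathbf{v}^T$ and confirms that $\mathrm{var}(\mathbf{m})\,\mathbf{v}\mathbf{v}^T$ reproduces the empirical covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 5, 10_000

v = rng.standard_normal(n)             # dominant direction v (Equation 1)
m = rng.standard_normal(T)             # per-sample loadings m(t)
X = np.outer(m, v)                     # exactly rank-1 data, X = m(t) v^T

emp_cov = np.cov(X, rowvar=False)                 # empirical covariance
approx  = np.var(m, ddof=1) * np.outer(v, v)      # var(m(t)) v v^T (Equation 3)

print(np.allclose(emp_cov, approx))    # True: the self-outer product of v
                                       # recovers cov(X) up to the scalar var(m)
```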
Dynamic covariance then translates into having multiple linear patterns in the sample space, each giving an estimated covariance for a disjoint subset of the samples. Figure 1 illustrates this concept. Here, we simulate a two-dimensional (2-D) signal whose samples are drawn from two Gaussian distributions with a different covariance matrix in each period (50 sample-points here). This is the perfect scenario for a sliding-window analysis and, as shown in Figure 1a, it can capture the true underlying dynamic covariance matrices. However, the estimate is inaccurate in some periods, when samples from different distributions lie in the same window. This becomes more troublesome if we do not choose the correct window size.

Figure 1. Our proposed approach is based on the fact that the dynamic covariance matrices of a multivariate signal map to dominant linear patterns in the sample space. Here, we have simulated a 2-D signal whose samples are drawn from two Gaussian distributions with a different covariance matrix in each period (50 sample-points here). (a) A correct choice of sliding window will capture the underlying dynamic covariance matrices if the samples fit the locality assumption, but can generate spurious results if the samples do not conform to the embedded assumption. (b) If we remove the time information from the signal data-points and instead study the signal in the sample space, the dynamic covariance matrices map to the dominant linear patterns in this space. The goal of this paper is to develop a method that captures these linear patterns in the sample space. [Color figure can be viewed at wileyonlinelibrary.com]

Figure 1b shows how the simulated dynamic covariance matrices map to linear patterns in the sample space, and this is the key observation on which our proposed method is based. We demonstrate that, through a sparse dictionary representation of this sample space, we can effectively estimate the dominant linear patterns, which, as discussed above, translate to the underlying dynamic covariance matrices.
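Before turning to the dictionary-learning machinery, the Figure 1 scenario is simple to reproduce. In this sketch (the two covariance matrices are our arbitrary choices), alternating 50-sample blocks are drawn from two Gaussians, and the leading eigenvector of a block's covariance exposes the dominant linear pattern that the decomposition below is designed to recover:

```python
import numpy as np

rng = np.random.default_rng(2)

# two 2-D Gaussians whose covariances encode different linear patterns
cov_a = np.array([[1.0, 0.9], [0.9, 1.0]])     # samples stretch along y = x
cov_b = np.array([[1.0, -0.9], [-0.9, 1.0]])   # samples stretch along y = -x

blocks = []
for i in range(6):                              # alternating 50-sample periods
    cov = cov_a if i % 2 == 0 else cov_b
    blocks.append(rng.multivariate_normal([0, 0], cov, size=50))
X = np.vstack(blocks)                           # (300, 2) simulated signal

# dropping time: the leading eigenvector of one block's covariance is the
# "dominant linear pattern" that the partitioning in Equation (4) looks for
w, V = np.linalg.eigh(np.cov(blocks[0], rowvar=False))
print(V[:, -1])                                 # roughly [0.71, 0.71] up to sign
```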
Our goal is to find a partitioning of the samples such that, after computing a rank-1 estimate for each partition separately, the sum of the residuals between the sample values and their corresponding rank-1 estimates is minimized. Note that we make no assumption about the sizes of the partitions, and assigning a sample to a partition gives no information about the assignment of any other sample. Only the number of partitions needs to be pre-selected. Mathematically speaking, we are interested in finding disjoint partitions $s_1, s_2, \ldots, s_k$ (each $s_i$ is a set of sample indices) such that the following is minimized:

$$\sum_{i=1}^{k} C_i, \quad \text{where } C_i = \left\| \mathbf{X}(t) - \mathbf{m}(t)\,\mathbf{v}_i^{T} \right\|^2 \text{ and } t \in s_i \tag{4}$$

The above optimization problem is non-convex without additional constraints. However, we pose it as a sparse dictionary learning problem, which enables us to obtain a good approximation to the solution of the partitioning problem as follows: we learn $k$ dictionary elements such that each sample in the sample space is approximated by one and only one dictionary element (this is where the sparse representation constraint plays a role). The dictionary elements are the equivalents of the vectors $\mathbf{v}_{s_i}$ in Equation (4).

Our sparse dictionary learning algorithm aims to sparsely represent the input matrix $\mathbf{X}(t)$ with a dictionary $\mathbf{D} \in \mathbb{R}^{k \times n}$ and a mixing matrix $\mathbf{M}(t) = [\mathbf{m}_1(t), \mathbf{m}_2(t), \ldots, \mathbf{m}_k(t)] \in \mathbb{R}^{T \times k}$, where $k$ is the number of dictionary elements. Sparse representation means that each row of $\mathbf{X}(t)$ (i.e., each sample) is a sparse linear combination of the dictionary $\mathbf{D}$, meaning each row of the mixing matrix $\mathbf{M}(t)$ is sparse. The general formulation of sparse dictionary learning is as follows:

$$\min_{\mathbf{D}, \mathbf{M}} \left\| \mathbf{X}(t) - \mathbf{M}(t)\mathbf{D} \right\|^2 + \lambda \sum_i \left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 \tag{5}$$

where $\mathbf{m}_{1 \ldots k}(i)$ is the $i$th row of the mixing matrix $\mathbf{M}$ and $\lambda$ is a regularization parameter that controls the degree of sparsity of the solution.

For our purpose, as we only evaluate a rank-1 estimate of the samples, the sparsity constraint in the above formula must be changed to the hard constraint $\forall i\ \left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 = 1$, and we can achieve this with the K-SVD algorithm (Aharon, Elad, & Bruckstein, 2006). K-SVD is a popular method for sparse dictionary learning, and its main difference from the above formulation is that the sparsity constraint on the mixing matrix is a hard constraint. The sparsity level can be set to any integer value, which in our case we choose as 1. This means that each row of the mixing matrix $\mathbf{M}(t)$ has one and only one nonzero value, which corresponds to our desired rank-1 estimate of the samples. The formulation of the K-SVD algorithm is then:

$$\min_{\mathbf{D}, \mathbf{M}(t)} \left\| \mathbf{X}(t) - \mathbf{M}(t)\mathbf{D} \right\|^2 \quad \text{s.t. } \forall i\ \left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 = 1 \tag{6}$$

Similar to Equation (4), the above formulation is also non-convex, but K-SVD approximates the desired solution by finding a local minimum. The details of the K-SVD algorithm are explained elsewhere (Aharon et al., 2006), but briefly, it iterates between fixing $\mathbf{D}$ (for the first iteration, $\mathbf{D}$ is randomly selected from the input data) and solving the equation for $\mathbf{M}(t)$, and then doing the reverse in the next iteration. In our framework, to make sure the solution proposed by K-SVD is in fact a good approximation of the desired solution, we run K-SVD on the same data multiple times with different initializations of $\mathbf{D}$ and select the solution with the minimum residual (Equation (6)).
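With the sparsity level fixed at 1, K-SVD reduces to alternating between assigning each sample to the dictionary atom that best explains it and re-fitting each atom as the leading right singular vector of its partition. The following is a simplified sketch of that special case (our illustration, not the full K-SVD of Aharon et al. (2006) and not this study's implementation):

```python
import numpy as np

def ksvd_rank1(X, k, n_iter=30, seed=0):
    """Sparsity-1 dictionary learning: each row of X is represented by a
    single dictionary atom (a simplified stand-in for K-SVD with ||m||_0 = 1).

    X : (T, n) samples; returns D (k, n) with unit-norm rows, the per-sample
    atom assignment, and the corresponding scalar coefficients.
    """
    rng = np.random.default_rng(seed)
    D = X[rng.choice(len(X), k, replace=False)].copy()   # init atoms from data
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(n_iter):
        proj = X @ D.T                                   # (T, k) coefficients
        assign = np.abs(proj).argmax(axis=1)             # one atom per sample
        for j in range(k):                               # atom update step
            S = X[assign == j]
            if len(S) == 0:
                continue
            # leading right singular vector = rank-1 fit of this partition
            _, _, Vt = np.linalg.svd(S, full_matrices=False)
            D[j] = Vt[0]
    coef = np.take_along_axis(X @ D.T, assign[:, None], axis=1).ravel()
    return D, assign, coef

# demo on random data: 5 atoms over 1000 samples of dimension 50
rng = np.random.default_rng(1)
D, assign, coef = ksvd_rank1(rng.standard_normal((1000, 50)), k=5)
```

Up to scale, each connectivity state is then the self-outer product `np.outer(D[j], D[j])` of the corresponding atom, as in Equation (3).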
Figure 2 visually describes the proposed process of estimating dynamic covariance.

Figure 2. Outline of the proposed algorithm. Using K-SVD, the input multivariate signal (top left) is decomposed into a dictionary (top middle) and a sparse mixing matrix (top right). Based on the sparsity constraint $\left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 = 1$, each data-point is represented by a single dictionary element. The covariance of a partition of data-points (bottom left) can be estimated by a self-outer product of the corresponding dictionary element, up to a scaling value (bottom right). [Color figure can be viewed at wileyonlinelibrary.com]

For the actual application of the proposed method to rs-fMRI, we first describe the data and then explain what we feed into our method to obtain the dynamic functional connectivity states.

MATERIALS

The data used in this study consist of resting-state scans of 405 healthy subjects, of whom 200 are female. The age of the subjects ranges from 12 to 35 years, with a mean of 21 years. The same fMRI scanner was used to acquire the resting-state scans of all subjects: a 3-T Siemens Trio with a 12-channel radio-frequency coil, acquiring T2*-weighted functional images using a gradient-echo EPI sequence with TE = 29 ms, TR = 2 s, flip angle = 75°, slice thickness = 3.5 mm, gap = 1.05 mm, FOV = 240 mm, matrix size = 64 × 64, and voxel size = 3.75 mm × 3.75 mm × 4.55 mm. Subjects had their eyes open and were instructed to fixate on a foveally presented cross. Informed consent was obtained according to institutional guidelines at the University of New Mexico.

After the scan, the first four image volumes were removed to avoid T1 equilibration effects, followed by realignment, slice-timing correction, spatial normalization, reslicing to 3 × 3 × 3 mm³ voxels, and spatial Gaussian smoothing (FWHM = 5 mm).

Group independent component analysis to retrieve intrinsic connectivity networks

We are interested in estimating dynamic connectivity between functional networks of the brain. These networks can either be defined with prior anatomical and functional knowledge of the brain, which is then used to define regions of interest (ROIs) as the functional networks, or, in a more flexible approach, retrieved with a data-driven method requiring no prior knowledge, as in independent component analysis (ICA). In this work, we choose the latter, as it allows us to capture subject-specific networks while permitting some variation of a given network between subjects (Calhoun and Adali, 2012).

We used the implementation of group spatial ICA (gsICA) in the GIFT toolbox (http://mialab.mrn.org/software/gift) to obtain 100 functional networks, represented by ICA components, as follows. First, principal component analysis (PCA) is performed separately for each subject to find 120 principal components in the voxel-space of each subject. Then, these principal components are concatenated along time for all subjects, and another PCA is performed on the concatenated data to find 100 "group" PCs. Next, ICA is used to identify maximally independent spatial components (Calhoun, Adali, Pearlson, & Pekar, 2001). This process is repeated 10 times in ICASSO, and the best (most central) among these runs is used as the final solution (Ma et al., 2011). As the final step, 50 of these components, identified as being related to motion, imaging artifacts, or physiological processes, were removed. The remaining 50 components were retained as the actual resting-state functional networks. These functional networks include auditory (AUD), visual (VIS), somatomotor (SM), cognitive control (CC), default mode (DM), and cerebellar (CB) networks. A visual depiction of these functional networks is presented in the figure of Supporting Information, S1.

GICA back-reconstruction was used to obtain subject-specific functional networks and corresponding time courses (TCs) from the group-level ICA (Erhardt et al., 2011). Subject-specific TCs also underwent post-processing that included detrending, motion regression, and outlier removal to reduce the effects of noise.
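The two-stage reduction just described can be sketched with standard tools; this is a schematic stand-in for the GIFT implementation (our illustration: ICASSO repetition, component selection, and back-reconstruction are omitted, and only the component counts follow the text):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def group_spatial_ica(subject_data, n_subj_pc=120, n_group_pc=100):
    """Schematic two-stage gsICA: subject PCA, concatenation, group PCA, ICA.

    subject_data : list of (T, V) arrays, timepoints x voxels per subject.
    returns      : (n_group_pc, V) maximally independent spatial maps.
    """
    reduced = []
    for Y in subject_data:
        # subject-level PCA: keep n_subj_pc components in voxel space
        scores = PCA(n_components=n_subj_pc).fit_transform(Y.T)  # (V, n_subj_pc)
        reduced.append(scores.T)                                 # (n_subj_pc, V)
    stacked = np.vstack(reduced)              # concatenate across subjects
    # group-level PCA down to n_group_pc "group" components
    group = PCA(n_components=n_group_pc).fit_transform(stacked.T).T
    # spatial ICA on the group components
    ica = FastICA(n_components=n_group_pc, max_iter=1000, random_state=0)
    return ica.fit_transform(group.T).T       # independent spatial maps

# toy demo: 3 subjects, 148 timepoints, 2000 "voxels"
rng = np.random.default_rng(2)
maps = group_spatial_ica([rng.standard_normal((148, 2000)) for _ in range(3)])
```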
Dynamic functional network connectivity states estimated from sparse dictionary representation

In the final stage of our framework, we feed the subject-wise network time-courses into our proposed method to estimate dynamic functional network connectivity states which, as explained above, are derived from the dictionary elements of the decomposition. We form the input multivariate signal X as follows. Each row concatenates the time-courses of one specific network across all subjects, and this is repeated for each row, which corresponds to a different network. Consequently, the number of rows is equal to the number of networks (50 here) and the number of columns is equal to the number of subjects (405 here) multiplied by the number of timepoints in a time-course (148 here). Figure 3a describes how we applied our proposed method to the actual rs-fMRI data. Figure 3b shows the five "dynamic connectivity states" derived from the elements of the dictionary D by the self-outer product of each element. The number of dictionary elements, which also corresponds to the number of derived connectivity states, is selected based on an elbow-curve criterion (Supporting Information, S2); this also agrees with previous dynamic connectivity studies, which have shown that five "connectivity states" describe most of the dynamics of the human brain during rest (Allen et al., 2014).

Figure 3. Application of the proposed algorithm to the resting-state (rs) fMRI data. (a) After decomposing the rs-fMRI data into functional networks and the corresponding time-courses, the subject-wise time-courses of all subjects are concatenated along time to form the input multivariate signal. Each row of the input signal corresponds to one network's time-courses across subjects. This input signal is fed into our proposed algorithm, which decomposes it into a dictionary with five elements and the corresponding sparse mixing matrix. (b) The self-outer product of each dictionary element corresponds to one of the dynamic connectivities of the multi-subject rs-fMRI. [Color figure can be viewed at wileyonlinelibrary.com]
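The input formation just described is a plain block concatenation; in this sketch the shapes follow the text, while the variable names and random data are ours:

```python
import numpy as np

n_networks, n_subjects, n_time = 50, 405, 148

# tc[s] is the (n_networks, n_time) time-course matrix of subject s
rng = np.random.default_rng(3)
tc = [rng.standard_normal((n_networks, n_time)) for _ in range(n_subjects)]

# concatenate subjects along time: each row holds one network's
# time-courses for all subjects, back to back
X = np.concatenate(tc, axis=1)        # (50, 59940)
assert X.shape == (n_networks, n_subjects * n_time)
```

Since the methods section treats each timepoint as one sample (one row of $\mathbf{X}(t)$), the transpose of this matrix, with one subject-timepoint per row, is what would be passed to the decomposition sketched earlier.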
RESULTS

We studied the nature of the connectivity states captured by our proposed method by comparing our results with a sliding-window analysis of dynamic connectivity followed by k-means summarization into states. Specifically, for this study, we reproduced the results of Allen et al. (2014). On the left side of Figure 4, we show the states captured by the proposed method and, on the right side, the states captured by the sliding-window approach. We observed several interesting findings when comparing the two sets of states. Connectivity states 1 and 2 estimated by our approach (Figure 4, left) have very similar patterns to their corresponding connectivity states estimated by the sliding-window analysis (Figure 4, right). The similarity between corresponding states decreases for states 3 and 4, although there are still shared patterns of connectivity, including connectivity between default mode and visual networks as well as between default mode and cognitive control networks in both states 3 and 4. Moreover, state 3 of both approaches shares a similar pattern of connectivity among the cognitive control networks. These similarities become more interesting given that the two methods have minimal overlap with respect to the specification of the chosen algorithms and their underlying assumptions.

Figure 4. Side-by-side comparison of the dynamic connectivity states estimated by our approach (left) and the states estimated by the sliding-window analysis implemented by Allen et al. (2014) (right). The connectivity states on the left have sharper modularity, although there are visible similarities between the first two states of each approach; the similarity decreases between corresponding states three and four, although these states still share some visible connectivity patterns. The fifth state of each approach exhibits minimal similarity to any state of the other approach. The fifth state of the proposed approach has a unique and more focal modularity constituting cognitive control and default mode regions, whereas the windowed approach shows a much weaker and more diffuse connectivity pattern. [Color figure can be viewed at wileyonlinelibrary.com]

In Figure 5, we compare the states between the two approaches and also magnify important modular patterns within those states, to better investigate the overall similarity and finer differences between the paired states. There is one state in each approach (the fifth state of each set) that is minimally correlated with any of the states of the other approach. The fifth state of the sliding-window analysis shows globally weak connectivity between the majority of networks, with very little visible modularity in its connectivity pattern. However, due to the aforementioned limiting assumptions of the sliding-window analysis, we cannot identify the actual connectivity this state represents. It may represent connectivity with overall weak dependence between the majority of networks. However, it could also represent connectivity with significant dependence between a subset of networks that, due to its more spontaneous nature and lower rate of occurrence, was not captured by the selected window; as such, it would likely be averaged with other unrelated connectivities occurring in the same window, leading to an estimate of overall weak connectivity. Also, as some studies suggest (Shakil, Lee, & Keilholz, 2016), the window length is a crucial parameter of sliding-window approaches, which in effect controls the observable rates of change in the underlying dynamics. A large window means that samples belonging to unrelated dynamic connectivities might lie in the same window, leading to an incorrect estimate of connectivity for that window. A small window may also lead to an incorrect estimate of dynamic connectivity due to not having enough samples in the given window.

These issues highlight a major limitation of sliding-window analysis for estimating the more transient states. However, there is no such limitation for our proposed method. In fact, all the states on the left show very specific and strong modularity in their corresponding patterns of connectivity. This is a direct result of not requiring any temporal smoothing operation, which might blur spontaneous connectivity. As detailed further in the discussion section, we conducted a simulation on a multivariate signal with a known dynamic covariance matrix and compared the performance of our approach and the sliding-window approach in capturing the ground-truth dynamic covariance matrices; the results support our arguments here.

Following these observations, we were interested in measuring the tendency of each state to be spontaneous. For each state, we recorded the number of timepoints between consecutive occurrences of that state, and then measured the median of the distribution of these measurements. When a state is less spontaneous, we expect a smaller median for this distribution than for a state that occurs more spontaneously. Figure 6 shows the cumulative distribution function (CDF) of the above measure for each state. The x-axis value of the center of mass of this distribution corresponds to the median of the measure (highlighted in the figure).
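This recurrence measure is straightforward to compute from a per-timepoint state assignment; a minimal sketch, with naming assumptions of our own:

```python
import numpy as np

def recurrence_medians(assign, k):
    """Median number of timepoints between consecutive occurrences of each state.

    assign : (T,) state label per timepoint.
    k      : number of states.
    """
    medians = {}
    for state in range(k):
        occ = np.flatnonzero(assign == state)   # timepoints where state occurs
        gaps = np.diff(occ)                     # intervals between occurrences
        medians[state] = np.median(gaps) if len(gaps) else np.nan
    return medians

# example: 5 states over 1000 timepoints
rng = np.random.default_rng(4)
print(recurrence_medians(rng.integers(0, 5, size=1000), k=5))
```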
As expected, we can clearly observe that state 5, followed by state 1, has a higher median than the other states.

Figure 6. In this figure, we compare the cumulative distribution functions of the number of timepoints between consecutive occurrences of each state. If a state takes a longer period of time to reappear, it could mean the state occurs less frequently and is more spontaneous; a characteristic that may not necessarily fit the locality assumption. As expected, state 5, which is only observable via our approach, has a relatively higher median of this measure than three of the four other states. [Color figure can be viewed at wileyonlinelibrary.com]

The tendency of states to be spontaneous is not the only interesting characteristic that provides insight into the underlying behavior of human dynamic connectivity during the resting state. More interestingly, we observed that the number of data-points assigned to each state varies between subjects and is strongly correlated with the age and gender of subjects. However, for measuring this quantity, which we refer to as the sample-density of each state, we must count only assigned data-points that are also reasonably far from the other states. As explained in the methods section, the states in our approach correspond to dominant linear patterns in the sample space, all of which intersect at the origin of this space. Consequently, data-points close to this intersection point are relatively close to other states, and we do not have enough confidence in the assignment of these points. To address this problem, we define a sphere of radius 0.01 after z-scoring the distances of all points from the origin (note that, due to the curse of dimensionality, the sphere does not need to be very large) and exclude all timepoints lying in this sphere from the measurement.
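The sample-density measure can be sketched as follows; the z-scoring of origin distances and the 0.01 radius follow the text (read literally, so points whose z-scored distance falls at or below the radius are excluded), while the function and variable names are ours, and the measure would be computed separately per subject:

```python
import numpy as np

def sample_density(X, assign, k, radius=0.01):
    """Count per-state assignments, excluding points too close to the origin.

    X      : (T, n) samples; assign : (T,) state labels; k : number of states.
    radius : exclusion radius applied to z-scored distances from the origin.
    """
    d = np.linalg.norm(X, axis=1)                 # distance of each sample
    z = (d - d.mean()) / d.std()                  # z-score the distances
    keep = z > radius                             # drop the ambiguous sphere
    return np.bincount(assign[keep], minlength=k)

# toy usage with random samples and labels
rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 50))
print(sample_density(X, rng.integers(0, 5, size=1000), k=5))
```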
Next, we formed a five-dimensional (5-D) response variable of the number of data-points assigned to each state, separately for each subject. We included gender (coded as 0: male, 1: female), log of age, and motion parameters as explanatory variables. We then performed a MANCOVA analysis for model selection (Allen et al., 2011) using backward selection, where gender is set as the factor and log of age and motion parameters as covariates. Backward selection reduced the explanatory variables to only the gender and log(age) covariates; this was followed by a linear regression analysis of each state's sample-density on gender and log(age), after regressing out the other variable. Interestingly, we observed a significantly higher sample-density for state 4 in females than in males (p value 0.0004). Moreover, states 1, 2, and 5 have lower sample-density as the age of the subjects increases (p values 0.003, <0.0001, and 0.05, respectively).

DISCUSSION

To better understand where this study stands compared to previous studies, we first describe two natural but different generative models for a multivariate signal with dynamic covariance. In the first model, we assume there is an underlying multivariate Gaussian distribution that generates the samples of the multivariate signal at each timepoint, and this distribution slowly changes into another Gaussian distribution with a different covariance. As explained earlier, this is a perfect model for analyses based on a sliding window, since, with the right choice of window size, the slowly varying covariance can be estimated at each timepoint. In the second model, we assume a fixed number of multivariate Gaussian distributions, each with a different covariance matrix; instead of sampling from the same distribution over a period of time, another random process determines the distribution from which the sample at the current timepoint is drawn. This is repeated at each timepoint to generate the multivariate signal.

Figure 1 in the Methods section provides a visualization of the differences between the two models. Under the first model, with a different distribution at different periods of time, a sliding window can capture the true underlying dynamic dependence in the form of correlation matrices (Figure 1a). Importantly, there are also instances in which samples belonging to different distributions lie in the same window and lead to an inaccurate estimate of a nonexistent dependence, also highlighted in the figure. Under the second model, there is no assumed knowledge about which distribution occurs at each timepoint, and consequently any estimation from the samples is essentially agnostic regarding when the states occur. To visualize this, the samples are shown jointly in the corresponding sample space (Figure 1b) and, as discussed earlier, our proposed method is able to capture the underlying true dynamic covariances from these samples in the form of "dictionary elements," even in the absence of the temporal dimension. Moreover, as also highlighted in the figure, the dictionary elements nicely translate into the "lines" representing the dominant linear patterns in this sample space. Not surprisingly, a sliding-window approach (or any other approach that requires a smoothing operation in the temporal domain) is unable to capture the underlying dynamic dependence from these timeless samples.

We now conduct a simulation based on these two models to study and compare the performance of the proposed approach and the sliding-window approach. We define three separate 50 × 50 covariance matrices, each with a unique modularity (Figure 7a); next, we randomly choose one of these matrices and set it as the covariance matrix of a multivariate Gaussian distribution; we then draw 49 samples from this distribution and set them as 49 consecutive data-points of the simulated signal. This is repeated 500 times, and the final simulated signal is fed into K-SVD to be decomposed into a dictionary matrix with three elements and the corresponding sparse mixing matrix. This corresponds to the first model of a signal with a dynamic covariance matrix. Figure 7b (left) shows the self-outer product of each dictionary element; as expected, this corresponds closely to each of the original dynamic covariance matrices. Figure 7b (right) shows the results of the sliding-window analysis as implemented by Allen et al. (2014), which also does a reasonable job of capturing the underlying connectivities. However, an interesting property of the proposed approach is that it is invariant to random permutation of the timepoints, as it estimates the covariance matrices in the sample space. Random permutation of the simulated signal corresponds to the second model of a signal with dynamic covariance, where a covariance matrix is randomly selected at each timepoint. Figure 7c (left) shows the result of our approach on the simulated signal when the samples are randomly shuffled in time. The permutation causes the signal to have changes of covariance at a rate as high as one timepoint.
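This simulation, and the permutation-invariance argument, can be sketched as follows (the block covariance structure stands in for the three modular matrices of Figure 7a and is our arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50

# three covariance matrices, each with a distinct modular (block) structure
covs = []
for start in (0, 15, 30):
    C = np.eye(n)
    C[start:start + 15, start:start + 15] += 0.8   # one strong module each
    covs.append(C)

# 500 blocks of 49 consecutive samples, each from a randomly chosen covariance
labels = rng.integers(0, 3, size=500)
signal = np.vstack([
    rng.multivariate_normal(np.zeros(n), covs[j], size=49) for j in labels
])                                                  # (24500, 50)

# permuting timepoints destroys locality but not the sample-space structure,
# so a sample-space method sees the same input up to row order
shuffled = signal[rng.permutation(len(signal))]
```

Feeding either `signal` or `shuffled` to the sparsity-1 decomposition sketched earlier should recover the same three atoms, whereas a sliding window applied to `shuffled` mixes samples from all three distributions.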
With our approach, we recover the ground-truth covariance matrices even on the permuted signal. However, as shown in Figure 7c (right), the sliding-window approach is unable to separate the underlying dynamic covariances, because it is based on a locality assumption.

Figure 7. Result of the proposed approach in capturing the ground-truth underlying dynamic coherence of a multivariate signal, compared to the sliding-window approach. (a) Ground-truth dynamic coherence matrices of the signal. (b) Side-by-side results of the proposed approach (left) and the sliding-window approach (right) when the simulation follows the locality assumption. (c) The same results when the data-points of the signal are randomly permuted in time, which corresponds to a heterogeneous rate of change of the dynamic coherence that does not follow the locality assumption. As is evident, our approach produces the same results, comparable to the ground truth, in both scenarios. (d) Selected number of states based on the elbow-curve analysis (more information is provided in Supporting Information, S2). [Color figure can be viewed at wileyonlinelibrary.com]

To show that the latter model is not just conceptual and in fact applies to natural and common systems, consider a popular scheme for resource allocation that is widely used in computing devices such as personal computers. In such systems, multiple processes send requests for shared resources. If a resource is allocated to a single process for a relatively long period of time, the lag cascades to other processes, eventually causing delays in the performance of the whole system. This is the main reason for the commonly used strategy of alternating the resource allocation at a higher rate, so that all processes experience fairly similar lags, given that the processes have the same priority. Although we are not claiming that the underlying cognitive processes of the brain necessarily follow the same model, if we think of a specific functional connectivity as a necessary resource for a specific cognitive process in the brain, there is a logical reason to believe that any smoothing assumption might be limited in capturing the underlying dynamic connectivity states.

Although Figure 4 suggests a strong resemblance between the majority of the connectivity states captured by our method and a sliding-window analysis, and thus supports the smoothing assumption, we also show an interesting connectivity pattern that is only captured by our method and reveals a strong correlation with gender. Overall, the key observation of this study is that dynamic connectivity estimated from the resting-state brain has heterogeneous rates of change, and consequently methods that do not address this property are unable to capture all the underlying aspects of the dynamic connectivities. In this study, we have developed a method without such limiting assumptions; theoretically, it is able to capture arbitrary rates of change of dynamic connectivity, the only limitation being the one imposed by the sampling rate itself (the Nyquist rate).

Before concluding this work, we explain in more detail how CAP analysis, seemingly a very similar approach to this work, is in fact theoretically different. The differences between the two approaches can be better understood by studying what each approach looks for in the sample space. In the proposed approach, dynamic covariance maps to multiple dominant linear patterns in the sample space, and our approach estimates these patterns. CAP analysis, on the other hand, looks for the "activation status" of the other dimensions of the input multivariate signal (which in the original study constitutes the ROI-level resting-state BOLD signal) when a selected dimension (the selected seed) of the signal is activated; the activation is determined by a threshold value. Figure 8 visualizes the concept of CAP analysis when studied in the sample space. Here we have a 2-D signal whose samples are drawn from two different Gaussian random processes, and the covariance matrices of these random processes are captured by the dominant linear patterns in the sample space. CAP analysis first selects a seed dimension (the x-axis here). A threshold value is then selected along that axis based on the 90% quantile (the thresholding is represented by a half-space, shown as a gray box). The samples that survive the threshold (marked in red) are then clustered using k-means with k = 2, and the centroids of the clusters represent the "co-activation patterns." We can clearly see that the centers of the clusters are points in the sample space (marked in green) that are very close to the dominant linear patterns captured by our method. However, mathematically speaking, we are unable to say anything about the actual underlying covariance matrices when we are given only points and not the linear pattern itself, which is what our method provides.
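The CAP procedure described here takes only a few lines; in this sketch the seed choice, the 90% quantile threshold, and k = 2 follow the text, while the toy data are ours:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# 2-D toy signal: samples from two Gaussians with different covariances
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], 500),
    rng.multivariate_normal([0, 0], [[1, -0.9], [-0.9, 1]], 500),
])

seed = X[:, 0]                          # the seed dimension (x-axis)
thr = np.quantile(seed, 0.90)           # threshold at the 90% quantile
survivors = X[seed > thr]               # samples where the seed is "active"

# centroids of the surviving samples are the "co-activation patterns":
# points near, but not equal to, the dominant linear patterns
caps = KMeans(n_clusters=2, n_init=10, random_state=0).fit(survivors)
print(caps.cluster_centers_)
```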
Figure 8. Comparison of our approach and CAP analysis: the sample space of an input 2-D signal is represented in a 2-D plane. Our approach finds linear patterns in this sample space through sparse decomposition. CAP analysis, on the other hand, finds points in the sample space that best represent "co-activation" between the seed dimension and the other dimensions. However, as explained in the text, it is in fact the linear patterns in the sample space that represent the covariance, not the points, and consequently the CAP formulation is unable to recover the actual covariance. [Color figure can be viewed at wileyonlinelibrary.com]

In conclusion, as our method is general enough to be applied to any multivariate signal, immediate future work entails applying the proposed method to other imaging modalities, such as EEG/MEG, whose higher sampling rates enable them to capture more spontaneous dynamic connectivity. Lastly, the application of the method is not limited to resting state; it can also be used in task-based studies. In fact, as a quick extension of this work, we are currently looking into applying this method to auditory oddball task fMRI data and, although we do not use prior knowledge of the task design in capturing the connectivity states, in preliminary results we observe significant correlations between the occurrence of states and the target, novel, and standard stimuli.

ACKNOWLEDGMENTS

The authors thank the investigators at the Mind Research Network who collected the data.

REFERENCES

Aharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54, 4311–4322.

Allen, E. A., Damaraju, E., Plis, S. M., Erhardt, E. B., Eichele, T., & Calhoun, V. D. (2014). Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 24, 663–676.

Allen, E. A., Erhardt, E. B., Damaraju, E., Gruner, W., Segall, J. M., Silva, R. F., … Calhoun, V. D. (2011). A baseline for the multivariate comparison of resting-state networks. Frontiers in Systems Neuroscience, 5, 2.
Battaglia, D., Thomas, B., Hansen, E. C., Chettouf, S., Daffertshofer, A., McIntosh, A. R., … Jirsa, V. (2017). Functional connectivity dynamics of the resting state across the human adult lifespan. bioRxiv.

Calhoun, V. D., & Adali, T. (2012). Multisubject independent component analysis of fMRI: A decade of intrinsic networks, default mode, and neurodiagnostic discovery. IEEE Reviews in Biomedical Engineering, 5, 60–73.

Calhoun, V. D., Adali, T., Pearlson, G. D., & Pekar, J. J. (2001). A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14, 140–151.

Chang, C., & Glover, G. H. (2010). Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50, 81–98.

Chen, J. E., & Glover, G. H. (2015). BOLD fractional contribution to resting-state functional connectivity above 0.1 Hz. NeuroImage, 107, 207–218.

Chen, J. E., Jahanian, H., & Glover, G. H. (2017). Nuisance regression of high-frequency functional magnetic resonance imaging data: Denoising can be noisy. Brain Connectivity.

Eavani, H., Satterthwaite, T. D., Gur, R. E., Gur, R. C., & Davatzikos, C. (2013). Unsupervised learning of functional network dynamics in resting state fMRI. Information Processing in Medical Imaging: Proceedings of the Conference, 23, 426–437.

Erhardt, E. B., Rachakonda, S., Bedrick, E. J., Allen, E. A., Adali, T., & Calhoun, V. D. (2011). Comparison of multi-subject ICA methods for analysis of fMRI data. Human Brain Mapping, 32, 2075–2095.

Ferri, C. P., Prince, M., Brayne, C., Brodaty, H., Fratiglioni, L., Ganguli, M., … Alzheimer's Disease International. (2005). Global prevalence of dementia: A Delphi consensus study. Lancet, 366, 2112–2117.

Friston, K. J., Buechel, C., Fink, G. R., Morris, J., Rolls, E., & Dolan, R. J. (1997). Psychophysiological and modulatory interactions in neuroimaging. NeuroImage, 6, 218–229.

Handwerker, D. A., Roopchansingh, V., Gonzalez-Castillo, J., & Bandettini, P. A. (2012). Periodic changes in fMRI connectivity. NeuroImage, 63, 1712–1719.

Hutchison, R. M., Womelsdorf, T., Allen, E. A., Bandettini, P. A., Calhoun, V. D., Corbetta, M., … Chang, C. (2013). Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage, 80, 360–378.

Kaiser, R. H., Whitfield-Gabrieli, S., Dillon, D. G., Goer, F., Beltzer, M., Minkel, J., … Pizzagalli, D. A. (2016). Dynamic resting-state functional connectivity in major depression. Neuropsychopharmacology, 41, 1822–1830.

Kim, J., & Horwitz, B. (2008). Investigating the neural basis for fMRI-based functional connectivity in a blocked design: Application to interregional correlations and psycho-physiological interactions. Magnetic Resonance Imaging, 26, 583–593.

Liu, X., & Duyn, J. H. (2013). Time-varying functional network information extracted from brief instances of spontaneous brain activity. Proceedings of the National Academy of Sciences, 110, 4392–4397.

Ma, S., Correa, N. M., Li, X. L., Eichele, T., Calhoun, V. D., & Adali, T. (2011). Automatic identification of functional clusters in fMRI data using spatial dependence. IEEE Transactions on Biomedical Engineering, 58, 3406–3417.
Rashid, B., Damaraju, E., Pearlson, G. D., & Calhoun, V. D. (2014). Dynamic connectivity states estimated from resting fMRI identify differences among schizophrenia, bipolar disorder, and healthy control subjects. Frontiers in Human Neuroscience, 8.

Shakil, S., Lee, C. H., & Keilholz, S. D. (2016). Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. NeuroImage, 133, 111–128.

Trapp, C., Vakamudi, K., & Posse, S. (2017). On the detection of high frequency correlations in resting state fMRI. NeuroImage.

Yaesoubi, M., Allen, E. A., Miller, R. L., & Calhoun, V. D. (2015a). Dynamic coherence analysis of resting fMRI data to jointly capture state-based phase, frequency, and time-domain information. NeuroImage, 120, 133–142.

Yaesoubi, M., Miller, R. L., & Calhoun, V. D. (2015b). Mutually temporally independent connectivity patterns: A new framework to study the dynamics of brain connectivity at rest with application to explain group difference based on gender. NeuroImage, 107, 85–94.

A window‐less approach for capturing time‐varying connectivity in fMRI data reveals the presence of states with variable rates of change

Loading next page...
 
/lp/wiley/a-window-less-approach-for-capturing-time-varying-connectivity-in-fmri-jebSiTYmnr

References (27)

Publisher
Wiley
Copyright
© 2018 Wiley Periodicals, Inc.
ISSN
1065-9471
eISSN
1097-0193
DOI
10.1002/hbm.23939
Publisher site
See Article on Publisher Site

Abstract

INTRODUCTIONCoherent spatially distributed fluctuations as a vehicle to measure the functional integration of the brain has been investigated under various cognitive states and is commonly referred to as functional connectivity (Ferri et al., ). In task‐based studies of FC, based on the prior design of the study, underlying cognitive states are hypothesized to constitute a task‐focused state and a short period of resting‐state as the baseline. Neurophysiological interpretations are typically driven by context‐dependent connectivity analysis based on these states of the brain as in psycho‐physiological interaction (PPI) (Friston et al., ; Kim and Horwitz, ).More recently, task‐free studies referred to as resting‐state (RS) studies have challenged the previous assumption that resting state represents only a singleton baseline cognitive state. Multiple recent RS studies have provided evidence of the dynamics of functional connectivity during an extended resting‐state session (Allen et al., ; Handwerker, Roopchansingh, Gonzalez‐Castillo, & Bandettini, ; Hutchison et al., ). However, compared to task‐based studies, there is a lack of prior knowledge on the expected dynamics during the resting state because there are no designed or explicit stimuli of the brain. This requires leveraging approaches that capture dynamics of the connectivity with no need for prior knowledge on when changes of underlying states might occur, although each approach includes some assumption regarding the underlying nature of dynamics. For example, approaches based on the selection of a sliding‐window—a commonly used approach—assume that connectivity at a given timepoint (usually at the center of the window) can be estimated from all the samples of the input time series spanned by the selected window. We refer to this as a temporal locality assumption. Moreover, if it is assumed that the samples are derived from a multivariate Gaussian distribution, connectivity can be estimated by measuring correlation or covariance between samples in a given window as has been studied in Allen et al. ().The validity of these assumptions depends on the actual nature of the time series. For example, if we have a time series in which we know that the rate of its actual change is much slower than its sampling rate, then a locality assumption is justified. Likewise, if we understand that what the time series at each timepoint is measuring is the average of many independent and identically distributed (i.i.d.) samples from an unknown distribution, then based on the central limit theorem, the Gaussian assumption is justified. Although these assumptions can be validated in many types of time series, such as in financial data, working with brain functionality data is much less certain since there are so many unknowns about the actual nature of the captured signal. There are a variety of modalities that capture different correlates of true neurophysiological activity of the brain with different spatial and temporal resolution. For example, in the case of fMRI, voxel time series have been commonly modeled as the convolution of neurological activity with a slow‐varying hemodynamic response function (HRF). From this HRF, a locality assumption can be justified. 
This is possibly the main reason behind the popularity of approaches based on a sliding‐window analysis.Nevertheless, recent studies have found evidence of changes in both activation of and connectivity between voxels and networks (groups of functionally related voxels) at a rate beyond what has been often assumed. In terms of frequency, the blood‐oxygen‐level dependent (BOLD) signal, as a reflection of the underlying neurological activity and the connectivity due to it, has been assumed to span the range between 0.01 and 0.1 Hz, which also agrees with the commonly used models of HRFs. However, more recent studies (Chang and Glover, ; Trapp, Vakamudi, & Posse, ; Yaesoubi, Allen, Miller, & Calhoun, ) have argued that there is observable activity and connectivity in higher frequencies up to the maximum observable. Since these studies are very recent, there have been various speculations on the sources of observed high‐frequency activation and connectivity variance. This includes the relationship with nuisance variables due to improper models of controlling for these variables (Chen, Jahanian, & Glover, ) and questioning conventional models for BOLD signal generation that are unable to explain these observations (Chen and Glover, ). Nonetheless, these studies question the validity of assumptions of commonly used analysis of fMRI, including the locality assumption, and support the use of models that can at least capture information that is blurred out or ignored by existing models, should it be present.Various alternative approaches have been proposed to relax the limiting assumption of locality. These include adaptive windowing based on the current temporal frequency of the data by leveraging time–frequency analysis and measuring dependence in the augmented domain (Chang and Glover, ; Yaesoubi et al., ). There are approaches that do not require an explicit selection of a window, but still have an inherited locality assumption, including approaches based on filtering in time such as Hidden Markov Models (HMMs) (Eavani, Satterthwaite, Gur, Gur, & Davatzikos, ). Though HMM can be implemented in a way that does not require a windowing operation, the traditional HMM still has an embedded locality assumption in the transition probability between the states that is stationary in time.There is also another approach known as co‐activation pattern (CAP) analysis (Liu and Duyn, ) which, similarly to what we are proposing here, is not based on a locality assumption. Although at first glance CAP analysis might look very similar to this work, as we will explain in more detail in the discussion section, CAP lacks an actual formulation for capturing dynamic linear dependence as we are proposing here. However, there are interesting similarities as well as differences between the two approaches which will be discussed later.In this work, we are proposing a new approach to capture dynamic functional connectivity of functional fMRI networks that does not require either explicit or implicit locality assumption. This enables us to observe changes in the connectivity of the brain that—although occurring close in time—might belong to different underlying sources of activity. Theoretically speaking, this allows observation of changes in connectivity with heterogeneous rates up to the Nyquist rate, which means observing changes that occur on the scale of one timepoint. 
We will argue that by removing the locality assumption we can both capture connectivity with similar modularity and novel connectivity patterns, which were likely to be obscured in previous studies.Finally, studies of functional connectivity have found many applications in investigations of functional brain differences between different clinical groups (i.e., mental illnesses such as schizophrenia and bipolar disorder) (Kaiser et al., ; Rashid, Damaraju, Pearlson, & Calhoun, ) or biological variables (such as age and gender) (Battaglia et al., ; Yaesoubi, Miller, & Calhoun, ). Although study of group differences is not the main focus of this work, we do study the correlation between age and gender and different characteristics of connectivity states captured by the proposed method. Most interestingly, we found a strong correlation between age and gender of subjects and the tendency of specific states to occur in subjects. Among these states, there is one state that has not been observed in previous studies of resting‐state dynamic functional connectivity. We show evidence that suggests this state in particular is missed or at least blurred out by previous approaches.MATERIALS AND METHODSWe first explain our method on a general multivariate input signal and then we will explain how we apply the method to fMRI data. In the following, an uppercase bold letter (e.g., X(t)) represents a matrix (and if indexed by t, it represents a multivariate signal, i.e., realization from a random process), lowercase bold letter indexed (e.g., v) represents a (column) vector (and similarly if indexed by t, it represents a univariate signal again a realization of a random process). Italic letter (e.g., k or K) represents a scalar variable.Assume we have an n‐dimensional input Xt=x1t,x2t, …, xn(t)∈RT×n . Each feature vector xi (columns of Xt) consists of i.i.d samples and since we are focusing only on the estimation of covariance/correlation as the measure of dependence there is an inherent assumption that the samples have a Gaussian distribution. We are also assuming all the feature vectors have zero mean. Now assume that X(t) is low‐rank and we can reasonably (i.e., with small residual error) approximate it with its rank‐1 estimate:1X(t)≅m(t)vTAnd since the covariance between any two feature vectors of Xt (e.g., xi(t),xj(t)) is equal toExi(t)xj(t)−E(xi(t))E(xj(t)) and becomes equal to Exi(t)xj(t) as Exi(t)= Exj(t)=0, it is easy to show that covxi(t),xj(t) can also be approximated by2cov(xi(t),xj(t))≅var(m(t)) vi vjwhere vi and vj are ith and jth elements of the vector v.The above equation can be easily extended to the approximation of the covariance of the whole input matrix X(t) as follows:3cov(X(t))=[cov(x1(t),x1(t))⋯cov(xn(t),x1(t))⋮⋱⋮cov(x1(t),xn(t))⋯cov(xn(t),xn(t))]                                 ≅var(m(t))[v12⋯vnv1⋮⋱⋮v1vn⋯vn2]                                 =var(m(t)) v vTIn the above equation, varm(t) is a constant scalar and consequently, by ignoring it, we would be able to estimate covX(t) by a simple self outer product of vector v up to a scaling ambiguity.The main concept behind the above formulation is that estimation of covariance is in fact equivalent to finding the dominant linear pattern (rank‐1 estimation) in the sample space (without using the information about the time domain). For example, in a sliding‐window analysis, our goal is to find such linear pattern in the space of samples that lie in the window and a different linear pattern is fitted to each window. 
Dynamic covariance translates into having multiple linear patterns in the sample space, each giving an estimated covariance for a disjoint subset of the samples. Figure illustrates this concept. Here, we simulate a two-dimensional (2-D) signal whose samples are drawn from two Gaussian distributions with a different covariance matrix in each period (50 sample-points here). This is the perfect scenario for a sliding-window analysis and, as shown in Figure a, such an analysis can capture the true underlying dynamic covariance matrices. However, the estimate is inaccurate during periods when samples from different distributions lie in the same window, and this becomes more troublesome if the window size is not chosen correctly.

Our proposed approach is based on the fact that the dynamic covariance matrices of a multivariate signal map to dominant linear patterns in the sample space. Here, we have simulated a 2-D signal whose samples are drawn from two Gaussian distributions with a different covariance matrix in each period (50 sample-points here). (a) A correct choice of sliding window will capture the underlying dynamic covariance matrices if the samples fit the locality assumption, but can generate spurious results if the samples do not conform to the embedded assumption. (b) If we remove time information from the signal's data-points and instead study the signal in the sample space, the dynamic covariance matrices map to the dominant linear patterns in this space. The goal of this paper is to develop a method able to capture these linear patterns in the sample space [Color figure can be viewed at wileyonlinelibrary.com]

Figure b shows how the simulated dynamic covariance matrices map to linear patterns in the sample space, and this is the key observation upon which our proposed method is based. We demonstrate that, through a sparse dictionary representation of this sample space, we can effectively estimate the dominant linear patterns, which, as discussed above, translate to the underlying dynamic covariance matrices.

Our goal is to find a partitioning of the samples such that, after computing a rank-1 estimate for each partition separately, the sum of the residuals between the sample values and their corresponding rank-1 estimates is minimized. Note that we make no assumption about the sizes of the partitions, and assigning a sample to a partition gives no information about the assignment of any other sample. Only the number of partitions needs to be pre-selected. Mathematically speaking, we are interested in finding disjoint partitions s_1, s_2, ..., s_k (each s_i a set of sample indices) such that the following is minimized:

\[ \sum_{i=1}^{k} C_i, \quad \text{where } C_i = \left\| \mathbf{X}(t) - \mathbf{m}(t)\,\mathbf{v}_{s_i}^{T} \right\|^{2} \text{ and } t \in s_i \qquad (4) \]

The above optimization problem is non-convex without additional constraints. However, we pose it as a sparse dictionary learning problem, which enables us to obtain a good approximation to the solution of the partitioning problem as follows: we learn k dictionary elements such that each sample in the sample space is approximated by one and only one dictionary element (this is where the sparse representation constraint plays a role). The dictionary elements are then the equivalents of the vectors v_{s_i} in Equation (4).
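To make the objective in Equation (4) concrete, the sketch below (our illustration; the block sizes and covariances mirror the toy example in the Figure, but the variable names are ours) simulates the two-Gaussian 2-D signal and scores candidate partitions by their summed rank-1 residuals:

```python
import numpy as np

rng = np.random.default_rng(1)

def rank1_residual(X_part):
    """C_i of Equation (4): squared residual of the best rank-1 fit to X_part."""
    U, s, Vt = np.linalg.svd(X_part, full_matrices=False)
    approx = s[0] * np.outer(U[:, 0], Vt[0])   # best rank-1 approximation (Frobenius norm)
    return np.sum((X_part - approx) ** 2)

# 2-D toy signal: 50-sample blocks drawn alternately from two Gaussians
cov_a = np.array([[1.0, 0.9], [0.9, 1.0]])    # dependence along the line y = x
cov_b = np.array([[1.0, -0.9], [-0.9, 1.0]])  # dependence along the line y = -x
X = np.vstack([rng.multivariate_normal([0, 0], cov_a if i % 2 == 0 else cov_b, 50)
               for i in range(10)])           # shape (500, 2): T samples, n features
labels = np.repeat(np.arange(10) % 2, 50)     # which distribution generated each sample

# Equation (4) cost of the true partition versus a random partition
true_cost = sum(rank1_residual(X[labels == i]) for i in (0, 1))
rand_labels = rng.permutation(labels)
rand_cost = sum(rank1_residual(X[rand_labels == i]) for i in (0, 1))
print(true_cost < rand_cost)                  # True: the correct partition fits better
```

The true partition attains a lower value of Equation (4) than a random one because each of its parts is concentrated along a single line in the sample space.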
Our sparse dictionary learning algorithm aims to sparsely represent the input matrix X(t) with a dictionary D in R^{k x n} and a mixing matrix M(t) = [m_1(t), m_2(t), ..., m_k(t)] in R^{T x k}, where k is the number of dictionary elements. Sparse representation means that each row of X(t) (i.e., each sample) is a sparse linear combination of the rows of the dictionary D; that is, each row of the mixing matrix M(t) is sparse. The general formulation of sparse dictionary learning is:

\[ \min_{\mathbf{D}, \mathbf{M}} \left\| \mathbf{X}(t) - \mathbf{M}(t)\mathbf{D} \right\|^{2} + \lambda \sum_i \left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 \qquad (5) \]

where m_{1...k}(i) is the i-th row of the mixing matrix M and lambda is a regularization parameter that controls the degree of sparsity of the solution.

For our purpose, since we evaluate only a rank-1 estimate of the samples, the sparsity penalty in the above formula must be changed to the hard constraint that every row of M(t) satisfy ||m_{1...k}(i)||_0 = 1, and we achieve this with the K-SVD algorithm (Aharon, Elad, & Bruckstein, ). K-SVD is a popular method for sparse dictionary learning; its main difference from the above formulation is that the sparsity constraint on the mixing matrix is a hard constraint. The sparsity level can be set to any integer value, which in our case is 1. This means that each row of the mixing matrix M(t) has one and only one nonzero value, which corresponds to our desired rank-1 estimation of the samples. The formulation of the K-SVD algorithm is then:

\[ \min_{\mathbf{D}, \mathbf{M}(t)} \left\| \mathbf{X}(t) - \mathbf{M}(t)\mathbf{D} \right\|^{2} \quad \text{s.t.} \quad \forall i \; \left\| \mathbf{m}_{1 \ldots k}(i) \right\|_0 = 1 \qquad (6) \]

Like Equation (4), the above formulation is non-convex, but K-SVD approximates the desired solution by finding a local minimum. The details of the K-SVD algorithm are explained elsewhere (Aharon et al., ); briefly, it iterates between fixing D (for the first iteration, D is randomly selected from the input data) and solving for M(t), then doing the reverse in the next iteration. In our framework, to make sure the solution proposed by K-SVD is in fact a good approximation of the desired solution, we run K-SVD on the same data multiple times with different initializations of D and select the solution with the minimum residual (Equation (6)). Figure visually describes the proposed estimation of dynamic covariance.

Outline of the proposed algorithm. Using K-SVD, the input multivariate signal (top left) is decomposed into a dictionary (top middle) and a sparse mixing matrix (top right). Owing to the sparsity constraint ||m_{1...k}(i)||_0 = 1, each data-point is represented by a single dictionary element. The covariance of a partition of data-points (bottom left) can be estimated by the self-outer product of the corresponding dictionary element, up to a scaling value (bottom right) [Color figure can be viewed at wileyonlinelibrary.com]
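For intuition, here is a minimal sketch of a sparsity-1 dictionary learning loop in the spirit of K-SVD (the exact algorithm is given in Aharon et al., 2006; our function is a simplified stand-in, and a real analysis should use a vetted K-SVD implementation):

```python
import numpy as np

def ksvd_rank1(X, k, n_iter=30, seed=0):
    """Sparsity-1 dictionary learning: approximate each sample (row of X)
    by a single scaled atom, i.e., X ~ M D with one nonzero per row of M."""
    rng = np.random.default_rng(seed)
    T, _ = X.shape
    D = X[rng.choice(T, size=k, replace=False)].astype(float)  # init atoms from data
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(n_iter):
        # Sparse coding step: with exactly one atom per sample, the optimal
        # atom is the one with the largest absolute inner product
        proj = X @ D.T                                   # (T, k) projections
        assign = np.argmax(np.abs(proj), axis=1)
        # Dictionary update step: for each atom, the best unit direction is
        # the leading right singular vector of its assigned samples
        for j in range(k):
            members = X[assign == j]
            if len(members) == 0:                        # re-seed an unused atom
                D[j] = X[rng.integers(T)]
                D[j] /= np.linalg.norm(D[j])
            else:
                D[j] = np.linalg.svd(members, full_matrices=False)[2][0]
    coeff = (X @ D.T)[np.arange(T), assign]              # m(t) values, one per sample
    return D, assign, coeff
```

Each dictionary element d then yields a connectivity state as the self-outer product np.outer(d, d), per Equation (3).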
For the actual application of the proposed method to rs-fMRI, we first describe the data and then explain what is actually fed into the method to obtain the dynamic functional connectivity states.

MATERIALS

The data used in this study consist of resting-state scans of 405 healthy subjects, of whom 200 are female. The age of the subjects ranges from 12 to 35 years, with a mean of 21 years. The same fMRI scanner was used to capture the resting-state scans of all subjects: a 3-T Siemens Trio with a 12-channel radio-frequency coil, acquiring T2*-weighted functional images using a gradient-echo EPI sequence with TE = 29 ms, TR = 2 s, flip angle = 75°, slice thickness = 3.5 mm, gap = 1.05 mm, FOV = 240 mm, matrix size = 64 × 64, and voxel size = 3.75 mm × 3.75 mm × 4.55 mm. Subjects had their eyes open and were instructed to fixate on a foveally presented cross. Informed consent was obtained according to institutional guidelines at the University of New Mexico.

After the scan, the first four image volumes were removed to avoid T1 equilibration effects, followed by realignment, slice-timing correction, spatial normalization, reslicing to 3 × 3 × 3 mm3 voxels, and spatial Gaussian smoothing (FWHM = 5 mm).

Group-independent component analysis to retrieve intrinsic connectivity networks

We are interested in estimating dynamic connectivity between functional networks of the brain. These networks can be defined using prior anatomical and functional knowledge of the brain, which is then used to define regions of interest (ROIs) as the functional networks, or, in a more flexible approach, they can be retrieved with a data-driven method requiring no prior knowledge, such as independent component analysis (ICA). In this work, we choose the latter, as it allows us to capture subject-specific networks while permitting some variation of a given network between subjects (Calhoun and Adali, ).

We used the implementation of group spatial ICA (gsICA) in the GIFT toolbox (http://mialab.mrn.org/software/gift) to obtain 100 functional networks, represented by ICA components, as follows. First, principal component analysis (PCA) is performed separately for each subject to find 120 principal components in the voxel-space of each subject. These principal components are then concatenated along time across all subjects, and another PCA is performed on the concatenated data to find 100 "group" PCs. Next, ICA is used to identify maximally independent spatial components (Calhoun, Adali, Pearlson, & Pekar, ). This process is repeated 10 times in ICASSO, and the best (most central) of these runs is used as the final solution (Ma et al., ). As the final step, 50 of these components, identified as related to motion, imaging artifacts, or physiological processes, were removed. The remaining 50 components were retained as the actual resting-state functional networks, including auditory (AUD), visual (VIS), somatomotor (SM), cognitive control (CC), default mode (DM), and cerebellar (CB) networks. A visual depiction of these functional networks is presented in the Figure of Supporting Information, S1.

GICA back-reconstruction was used to obtain subject-specific functional networks and the corresponding time-courses (TCs) from the group-level ICA (Erhardt et al., ). The subject-specific TCs then underwent post-processing, including detrending, motion regression, and outlier removal, to reduce the effects of noise.

Dynamic functional network connectivity states estimated from sparse dictionary representation

In the final stage of our framework, we feed the subject-wise network time-courses into the proposed method to estimate dynamic functional network connectivity states which, as explained above, are derived from the dictionary elements of the decomposition. We form the input multivariate signal X as follows. Each row concatenates the time-courses of a specific network across all subjects; this is repeated for each row, each corresponding to a different network. Consequently, the number of rows equals the number of networks (50 here) and the number of columns equals the number of subjects (405 here) multiplied by the number of timepoints in a time-course (148 here).
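A minimal sketch of this concatenation step, with hypothetical array names and random data standing in for the post-processed GIFT time-courses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_networks = 405, 148, 50

# Hypothetical stand-in for the post-processed, back-reconstructed network
# time-courses: one (timepoints x networks) array per subject
tcs = [rng.standard_normal((n_timepoints, n_networks)) for _ in range(n_subjects)]

# Concatenate subjects along time: rows = networks, columns = subject-timepoints
X = np.concatenate(tcs, axis=0).T             # shape (50, 405 * 148)

# The decomposition then runs on the samples (one timepoint per row), e.g.:
# D, assign, coeff = ksvd_rank1(X.T, k=5)     # sparsity-1 sketch from the Methods
# states = [np.outer(d, d) for d in D]        # one connectivity state per element
```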
Figure a describes how we apply the proposed method to the actual rs-fMRI data. Figure b shows the five "dynamic connectivity states" derived from the elements of dictionary D by the self-outer product of each element. The number of dictionary elements, which also corresponds to the number of derived connectivity states, is selected based on an elbow-curve criterion (Supporting Information, S2); this choice also agrees with previous dynamic connectivity studies, which have shown that five "connectivity states" describe most of the dynamics of the human brain during rest (Allen et al., ).

Application of the proposed algorithm to the resting-state (rs) fMRI data. (a) After decomposing the rs-fMRI data into functional networks and the corresponding time-courses, the subject-wise time-courses of all subjects are concatenated along time to form the input multivariate signal. Each row of the input signal corresponds to one network's time-courses across subjects. This input signal is fed into the proposed algorithm and decomposed into a dictionary with five elements and the corresponding sparse mixing matrix. (b) The self-outer product of each dictionary element corresponds to one of the dynamic connectivity states of the multi-subject rs-fMRI [Color figure can be viewed at wileyonlinelibrary.com]

RESULTS

We studied the nature of the connectivity states captured by the proposed method by comparing our results with a sliding-window analysis of dynamic connectivity followed by k-means summarization into states. Specifically, we reproduced the results of Allen et al. (). On the left side of Figure , we show the states captured by the proposed method; on the right side, the states captured by the sliding-window approach. Several interesting findings emerge from comparing the two sets of states. Connectivity states 1 and 2 estimated by our approach (Figure , left) have very similar patterns to their corresponding states estimated by the sliding-window analysis (Figure , right). The similarity between corresponding states decreases for states 3 and 4, although shared patterns of connectivity remain, including connectivity between default-mode and visual networks and between default-mode and cognitive control networks in both states 3 and 4. Moreover, state 3 of both approaches shares a similar pattern of connectivity among the cognitive control networks. These similarities are all the more interesting given that the two methods have minimal overlap in their algorithmic specification and underlying assumptions.

Side-by-side comparison of the dynamic connectivity states estimated by our approach (left) and the states estimated by the sliding-window analysis as implemented by Allen et al. (). The connectivity states on the left have sharper modularity, although there are visible similarities between the first two states of each approach; the similarity decreases for corresponding states three and four, although those states still share some visible connectivity patterns.
The fifth state of each approach exhibits minimal similarity to any state of the other approach. The fifth state of the proposed approach has a unique and more focal modularity constituting cognitive control and default-mode regions, whereas the windowed approach shows a much weaker and more diffuse connectivity pattern [Color figure can be viewed at wileyonlinelibrary.com]

In Figure , we compare the states between the two approaches and also magnify important modular patterns within those states, in order to better investigate the overall similarity and finer differences between the paired states. There is one state in each approach (the fifth state of each set) that is minimally correlated with any of the states of the other approach. The fifth state of the sliding-window analysis shows weak global connectivity between the majority of networks, with very little visible modularity in its connectivity pattern. However, owing to the limiting assumptions of the sliding-window analysis noted above, we cannot identify what connectivity this state actually represents. It may represent connectivity with overall weak dependence between the majority of networks. It could, however, also represent connectivity with significant dependence between a subset of networks that, owing to its more spontaneous nature and lower rate of occurrence, was not isolated by the selected window; it would then likely be averaged with other, unrelated connectivity occurring in the same window, leading to an estimate of overall weak connectivity. Also, as some studies suggest (Shakil, Lee, & Keilholz, ), the window length is a crucial parameter of sliding-window approaches, as it in effect controls the observable rates of change in the underlying dynamics. A large window means that samples belonging to unrelated dynamic connectivity states may lie in the same window, leading to an incorrect estimate of the connectivity for that window. A small window can likewise yield an incorrect estimate of dynamic connectivity owing to an insufficient number of samples in the window.

These issues highlight a major limitation of sliding-window analysis for estimating the more transient states. There is no such limitation for the proposed method: all the states on the left show very specific and strong modularity in their connectivity patterns. This is a direct result of not requiring any temporal smoothing operation, which might blur spontaneous connectivity. As detailed further in the discussion section, we conducted a simulation on a multivariate signal with a known dynamic covariance matrix and compared the performance of our approach and the sliding-window approach in capturing the ground-truth dynamic covariance matrices; the results support our arguments here.

Following these observations, we were interested in measuring each state's tendency to be spontaneous. For each state, we recorded the number of timepoints between consecutive occurrences of that state and took the median of the distribution of these measurements. When a state is less spontaneous, we expect a smaller median for this distribution than for a state that occurs more spontaneously. Figure shows the cumulative distribution function (CDF) of this measure for each state; the x-coordinate at which each CDF crosses 0.5 corresponds to the median of the measure (highlighted in the figure). As expected, we can clearly observe that state 5, followed by state 1, has a higher median than the other states.
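This recurrence measure is straightforward to compute from the per-timepoint state assignments; below is a small sketch (names are ours, and in practice the gaps should be computed within each subject's scan before pooling, to avoid spurious gaps across subject boundaries):

```python
import numpy as np

def recurrence_medians(assign, k):
    """Median number of timepoints between consecutive occurrences of each state."""
    medians = np.full(k, np.nan)
    for state in range(k):
        occ = np.flatnonzero(assign == state)   # timepoints where the state occurs
        gaps = np.diff(occ)                      # intervals between consecutive occurrences
        if gaps.size:
            medians[state] = np.median(gaps)
    return medians

# Toy label sequence: a rarely recurring state gets a larger median
labels = np.array([0, 0, 1, 0, 0, 1, 0, 2, 0, 0, 1, 0, 0, 0, 2])
print(recurrence_medians(labels, 3))             # [1. 4. 7.]
```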
In this figure, we compare the cumulative distribution functions of the number of timepoints between consecutive occurrences of each state. If a state takes longer to reappear, it occurs less frequently and is more spontaneous, a characteristic that may not fit the locality assumption. As expected, state 5, which is only observable via our approach, has a relatively higher median of this measure than three of the four other states [Color figure can be viewed at wileyonlinelibrary.com]

The tendency of the states to be spontaneous is not the only characteristic that provides insight into the underlying behavior of human dynamic connectivity during the resting state. More interestingly, we observed that the number of data-points assigned to each state varies between subjects and is strongly correlated with the age and gender of the subjects. However, to measure this quantity, which we refer to as the sample-density of each state, we must count only assigned data-points that are also reasonably far from the other states. As explained in the methods section, the states in our approach correspond to dominant linear patterns in the sample space, all of which intersect at the origin of that space. Consequently, data-points close to this intersection point are relatively close to all states, and we do not have enough confidence in their assignment. To address this problem, we define a sphere of radius 0.01 after z-scoring the distances of all points from the origin (note that, owing to the curse of dimensionality, the sphere does not need to be very large) and exclude all timepoints lying inside this sphere from the measurement (sketched in the code below).

Next, we formed a five-dimensional (5-D) response variable from the number of data-points assigned to each state, computed separately for each subject. We included gender (coded as 0: male, 1: female), log of age, and motion parameters as explanatory variables. We then performed a MANCOVA with model selection (Allen et al., ) using backward selection, with gender as the factor and log(age) and the motion parameters as covariates. Backward selection reduced the explanatory variables to gender and log(age); this was followed by a linear regression of each state's sample-density on gender and on log(age), after regressing out the other variable. Interestingly, we observed a significantly higher sample-density for state 4 in females than in males (p value 0.0004). Moreover, states 1, 2, and 5 show lower sample-density with increasing age (p values 0.003, <0.0001, and 0.05, respectively).
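The following sketch shows one way to implement the sample-density measure under our reading of the exclusion rule; the z-scoring convention and all names are our assumptions, as the paper does not spell them out:

```python
import numpy as np

def sample_density(samples, assign, k, radius=0.01):
    """Per-state counts of assigned samples, excluding points whose z-scored
    distance from the origin falls inside a small sphere (our reading of the
    paper's low-confidence exclusion rule)."""
    dist = np.linalg.norm(samples, axis=1)            # distance of each sample from origin
    z = (dist - dist.mean()) / dist.std()             # z-score the distances
    keep = z >= radius                                # drop points inside the sphere
    return np.array([(assign[keep] == s).sum() for s in range(k)])

# Per-subject densities: split the concatenated samples and assignments back
# into per-subject chunks (hypothetical shapes, as in the earlier sketches)
# densities = np.stack([sample_density(s, a, k=5)
#                       for s, a in zip(subject_samples, subject_assign)])
```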
DISCUSSION

To better understand where this study stands relative to previous studies, we first describe two natural but different generative models of a multivariate signal with dynamic covariance. In the first model, we assume there is an underlying multivariate Gaussian distribution that generates the samples of the multivariate signal at each timepoint, and that this distribution slowly changes into another Gaussian distribution with a different covariance. As explained earlier, this is a perfect model for sliding-window analyses, since with the right choice of window size the slowly varying covariance can be estimated at each timepoint.

In the second model, assume we have a fixed number of multivariate Gaussian distributions, each with a different covariance matrix, and that instead of sampling from the same distribution over a period of time, a second random process determines, at each timepoint, the distribution from which the current sample is drawn. This is repeated at every timepoint to generate the multivariate signal. Figure in the Methods section visualizes the differences between the two models. Under the first model, with a different distribution in each period of time, a sliding window can capture the true underlying dynamic dependence in the form of a correlation matrix (Figure a). Importantly, there are also instances in which samples belonging to different distributions lie in the same window, leading to an inaccurate estimate of a dependence that does not actually exist, as highlighted in the figure. Under the second model, there is no knowledge of which distribution is active at each timepoint, so any estimation from the samples is necessarily agnostic about when the states occur. To visualize this, the samples are shown jointly in the corresponding sample space (Figure b) and, as discussed earlier, our proposed method is able to capture the true underlying dynamic covariances from these samples, in the form of dictionary elements, even in the absence of the temporal dimension. Moreover, as also highlighted in the figure, the dictionary elements translate into the "lines" representing the dominant linear patterns in this sample space. Not surprisingly, a sliding-window approach (or any other approach requiring a smoothing operation in the temporal domain) is unable to capture the underlying dynamic dependence from these timeless samples.

We then conducted a simulation based on these two models to compare the performance of the proposed approach and the sliding-window approach. We defined three separate 50 × 50 covariance matrices, each with a unique modularity (Figure a); we then randomly chose one of these matrices as the covariance matrix of a multivariate Gaussian distribution, drew 49 samples from it, and set them as 49 consecutive data-points of the simulated signal. This was repeated 500 times, and the final simulated signal was fed into K-SVD to be decomposed into a dictionary with three elements and the corresponding sparse mixing matrix. This corresponds to the first model of a signal with dynamic covariance. Figure b, left, shows the self-outer product of each dictionary element; as expected, these correspond closely to the original dynamic covariance matrices. Figure b, right, shows the results of the sliding-window analysis as implemented by Allen et al. (), which also does a reasonable job of capturing the underlying connectivity. However, an interesting property of the proposed approach is that it is invariant to random permutation of the timepoints, as it estimates the covariance matrices in the sample space. Randomly permuting the simulated signal corresponds to the second model of a signal with dynamic covariance, in which a covariance matrix is randomly selected at each timepoint. Figure c, left, shows the result of our approach on the simulated signal when the samples are randomly shuffled in time; the permutation causes the covariance of the signal to change at a rate as high as one timepoint. With our approach, we recover the ground-truth covariance matrices even on the permuted signal. However, as shown in Figure c, right, the sliding-window approach is unable to separate the underlying dynamic covariances, because it is based on the locality assumption.
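The sketch below reproduces the structure of this simulation with stand-in modular covariances (ours, not the exact matrices used in the paper) and illustrates the permutation-invariance argument:

```python
import numpy as np

rng = np.random.default_rng(2)

def modular_cov(n, block):
    """Block-structured ('modular') covariance: one strongly connected block."""
    C = np.eye(n) * 0.1
    lo, hi = block
    C[lo:hi, lo:hi] += 0.9        # diagonal of the block becomes 1.0, off-diagonal 0.9
    return C

n = 50
covs = [modular_cov(n, b) for b in [(0, 15), (15, 35), (35, 50)]]

# 500 segments of 49 consecutive samples, each from a randomly chosen covariance
segments = [rng.multivariate_normal(np.zeros(n), covs[rng.integers(3)], 49)
            for _ in range(500)]
X = np.vstack(segments)                       # shape (24500, 50)

# Permuting the timepoints destroys locality but leaves the sample space
# unchanged, so a sample-space method sees exactly the same data
X_perm = X[rng.permutation(len(X))]

# With the sparsity-1 sketch from the Methods section:
# D, _, _ = ksvd_rank1(X, 3); D_perm, _, _ = ksvd_rank1(X_perm, 3)
# The recovered states np.outer(d, d) match across the two runs (up to sign
# and ordering), whereas a sliding-window estimate degrades on X_perm.
```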
Results of the proposed approach in capturing the ground-truth underlying dynamic coherence of a multivariate signal, compared to the sliding-window approach. (a) Ground-truth dynamic coherence matrices of the signal. (b) Side-by-side results of the proposed approach (left) and the sliding-window approach (right) when the simulation follows the locality assumption. (c) The same results when the data-points of the signal are randomly permuted in time, corresponding to a heterogeneous rate of change of dynamic coherence that does not follow the locality assumption. Our approach produces results comparable to the ground truth in both scenarios. (d) Number of states selected by the elbow-curve analysis (more information is provided in Supporting Information, S2) [Color figure can be viewed at wileyonlinelibrary.com]

To show that the latter model is not merely conceptual but in fact applies to natural and common systems, consider a popular scheme for resource allocation widely used in computing devices such as personal computers. In such systems, multiple processes send requests for shared resources. If a resource is allocated to a single process for a relatively long period of time, the lag cascades to the other processes, eventually delaying the performance of the whole system. This is the main reason for the common strategy of alternating resource allocation at a high rate, so that all processes of equal priority experience fairly similar lags.

Although we are not claiming that the underlying cognitive processes of the brain necessarily follow the same model, if we think of a specific functional connectivity as the necessary resource for a specific cognitive process, there is a logical reason to believe that any smoothing assumption may be limited in capturing the underlying dynamic connectivity states. Although Figure suggests a strong resemblance between the majority of the connectivity states captured by our method and those from a sliding-window analysis, and thus supports the smoothing assumption, we also show an interesting connectivity pattern that is captured only by our method and reveals a strong correlation with gender.

Overall, the key observation of this study is that the dynamic connectivity of the resting-state brain has heterogeneous rates of change; consequently, methods that do not address this property are unable to capture all the underlying aspects of dynamic connectivity. In this study, we have developed a method without such limiting assumptions; theoretically, it can capture arbitrary rates of change of dynamic connectivity, the only limitation being the one imposed by the sampling rate itself (the Nyquist rate).

Before concluding, we explain in more detail how CAP analysis, a seemingly very similar approach, is in fact theoretically different. The differences between the two approaches are best understood by examining what each looks for in the sample space. In the proposed approach, dynamic covariance maps to multiple dominant linear patterns in the sample space, and our approach estimates these patterns. CAP analysis, on the other hand, looks for the "activation status" of the other dimensions of the input multivariate signal (which in the original study constitutes the ROI-level resting-state BOLD signal) when a selected dimension (the selected seed) is activated, activation being determined by a threshold value. Figure visualizes the concept of CAP analysis in the sample space. Here we have a 2-D signal whose samples are drawn from two different Gaussian random processes; the covariance matrices of these processes are captured by the dominant linear patterns in the sample space. CAP analysis first selects a seed dimension (the x-axis here). A threshold value is then selected along that axis based on the 90% quantile (the thresholding is represented by a half-space, shown as a gray box). The samples that survive the threshold (marked in red) are then clustered with k-means (k = 2), and the centroids of the clusters represent the "co-activation patterns." We can clearly see that the cluster centers (marked in green) are points in the sample space that lie very close to the dominant linear patterns captured by our method. Mathematically speaking, however, given only these points and not the linear patterns themselves (which our method provides), we cannot say anything about the actual underlying covariance matrices.
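For contrast, here is a minimal sketch of seed-based CAP analysis in the spirit of Liu and Duyn (2013) on toy 2-D data; the threshold convention and all names are our assumptions:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)

# Toy 2-D signal: samples drawn from two Gaussian processes with different covariance
cov_a = [[1.0, 0.9], [0.9, 1.0]]
cov_b = [[1.0, -0.9], [-0.9, 1.0]]
X = np.vstack([rng.multivariate_normal([0, 0], cov_a, 500),
               rng.multivariate_normal([0, 0], cov_b, 500)])

seed_ts = X[:, 0]                                # pick dimension 0 as the seed
thr = np.quantile(seed_ts, 0.90)                 # 90% quantile threshold on the seed
supra = X[seed_ts >= thr]                        # frames where the seed is 'activated'

# k-means with k = 2 on the suprathreshold frames; centroids = co-activation patterns
centroids, _ = kmeans2(supra, 2, minit='++', seed=1)
print(centroids)    # two points near the lines y = x and y = -x, not the lines themselves
```

The centroids summarize where the activated frames concentrate, but, unlike a fitted line, a centroid alone does not determine a covariance matrix.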
Comparison of our approach and CAP analysis: the sample space of an input 2-D signal is represented in a 2-D plane. Our approach finds the linear patterns in this sample space through sparse decomposition. CAP analysis, on the other hand, finds points in the sample space that best represent "co-activation" between the seed dimension and the other dimensions. However, as explained in the text, it is the linear patterns in the sample space that represent the covariance measurement, not the points, and consequently the CAP formulation is unable to recover the actual covariance [Color figure can be viewed at wileyonlinelibrary.com]

In conclusion, as our method is general enough to be applied to any multivariate signal, an immediate direction for future work is to apply it to other imaging modalities such as EEG/MEG, whose higher sampling rates make it possible to capture more spontaneous dynamic connectivity. Lastly, the application of the method is not limited to resting-state data; it can also be used in task-based studies. In fact, as an extension of this work, we are currently applying the method to auditory oddball task fMRI data and, although we use no prior knowledge of the task design in capturing the connectivity states, in preliminary results we observe a significant correlation between the occurrence of states and the target, novel, and standard stimuli.

ACKNOWLEDGMENTS

The authors thank the investigators at the Mind Research Network who collected the data.

REFERENCES

Aharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54, 4311–4322.

Allen, E. A., Damaraju, E., Plis, S. M., Erhardt, E. B., Eichele, T., & Calhoun, V. D. (2014). Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 24, 663–676.

Allen, E. A., Erhardt, E. B., Damaraju, E., Gruner, W., Segall, J. M., Silva, R. F., … Calhoun, V. D. (2011). A baseline for the multivariate comparison of resting-state networks.
Frontiers in Systems Neuroscience, 5, 2.

Battaglia, D., Thomas, B., Hansen, E. C., Chettouf, S., Daffertshofer, A., McIntosh, A. R., … Jirsa, V. (2017). Functional connectivity dynamics of the resting state across the human adult lifespan. bioRxiv.

Calhoun, V. D., & Adali, T. (2012). Multisubject independent component analysis of fMRI: A decade of intrinsic networks, default mode, and neurodiagnostic discovery. IEEE Reviews in Biomedical Engineering, 5, 60–73.

Calhoun, V. D., Adali, T., Pearlson, G. D., & Pekar, J. J. (2001). A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14, 140–151.

Chang, C., & Glover, G. H. (2010). Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50, 81–98.

Chen, J. E., & Glover, G. H. (2015). BOLD fractional contribution to resting-state functional connectivity above 0.1 Hz. NeuroImage, 107, 207–218.

Chen, J. E., Jahanian, H., & Glover, G. H. (2017). Nuisance regression of high-frequency functional magnetic resonance imaging data: Denoising can be noisy. Brain Connectivity.

Eavani, H., Satterthwaite, T. D., Gur, R. E., Gur, R. C., & Davatzikos, C. (2013). Unsupervised learning of functional network dynamics in resting state fMRI. Information Processing in Medical Imaging, 23, 426–437.

Erhardt, E. B., Rachakonda, S., Bedrick, E. J., Allen, E. A., Adali, T., & Calhoun, V. D. (2011). Comparison of multi-subject ICA methods for analysis of fMRI data. Human Brain Mapping, 32, 2075–2095.

Ferri, C. P., Prince, M., Brayne, C., Brodaty, H., Fratiglioni, L., Ganguli, M., … Alzheimer's Disease International (2005). Global prevalence of dementia: A Delphi consensus study. Lancet, 366, 2112–2117.

Friston, K. J., Buechel, C., Fink, G. R., Morris, J., Rolls, E., & Dolan, R. J. (1997). Psychophysiological and modulatory interactions in neuroimaging. NeuroImage, 6, 218–229.

Handwerker, D. A., Roopchansingh, V., Gonzalez-Castillo, J., & Bandettini, P. A. (2012). Periodic changes in fMRI connectivity. NeuroImage, 63, 1712–1719.

Hutchison, R. M., Womelsdorf, T., Allen, E. A., Bandettini, P. A., Calhoun, V. D., Corbetta, M., … Chang, C. (2013). Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage, 80, 360–378.

Kaiser, R. H., Whitfield-Gabrieli, S., Dillon, D. G., Goer, F., Beltzer, M., Minkel, J., … Pizzagalli, D. A. (2016). Dynamic resting-state functional connectivity in major depression. Neuropsychopharmacology, 41, 1822–1830.

Kim, J., & Horwitz, B. (2008). Investigating the neural basis for fMRI-based functional connectivity in a blocked design: Application to interregional correlations and psycho-physiological interactions. Magnetic Resonance Imaging, 26, 583–593.

Liu, X., & Duyn, J. H. (2013). Time-varying functional network information extracted from brief instances of spontaneous brain activity. Proceedings of the National Academy of Sciences, 110, 4392–4397.

Ma, S., Correa, N. M., Li, X. L., Eichele, T., Calhoun, V. D., & Adali, T. (2011). Automatic identification of functional clusters in FMRI data using spatial dependence.
IEEE Transactions on Biomedical Engineering, 58, 3406–3417.

Rashid, B., Damaraju, E., Pearlson, G. D., & Calhoun, V. D. (2014). Dynamic connectivity states estimated from resting fMRI identify differences among schizophrenia, bipolar disorder, and healthy control subjects. Frontiers in Human Neuroscience, 8.

Shakil, S., Lee, C. H., & Keilholz, S. D. (2016). Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. NeuroImage, 133, 111–128.

Trapp, C., Vakamudi, K., & Posse, S. (2017). On the detection of high frequency correlations in resting state fMRI. NeuroImage.

Yaesoubi, M., Allen, E. A., Miller, R. L., & Calhoun, V. D. (2015a). Dynamic coherence analysis of resting fMRI data to jointly capture state-based phase, frequency, and time-domain information. NeuroImage, 120, 133–142.

Yaesoubi, M., Miller, R. L., & Calhoun, V. D. (2015b). Mutually temporally independent connectivity patterns: A new framework to study the dynamics of brain connectivity at rest with application to explain group difference based on gender. NeuroImage, 107, 85–94.
