Addictive brain-network identification by spatial attention recurrent network with feature selection

Abstract

Addiction in the brain is associated with adaptive changes that reshape addiction-related brain regions and lead to functional abnormalities causing a range of behavioral changes, and functional magnetic resonance imaging (fMRI) studies can reveal complex dynamic patterns of brain functional change. However, identifying functional brain networks and discovering region-level biomarkers that distinguish nicotine addiction (NA) from healthy control (HC) groups remains a challenge. To tackle this, we transform the fMRI of the rat brain into a network with biological attributes and propose a novel feature-selection framework to extract and select the features of addiction-related brain regions and identify these graph-level networks. In this framework, a spatial attention recurrent network (SARN) is designed to capture features with spatial and time-sequential information, and a Bayesian feature selection (BFS) strategy is adopted to optimize the model and improve the classification task by restricting features. Our experiments on the addiction brain imaging dataset achieve superior identification performance and yield interpretable biomarkers associated with addiction-relevant brain regions.

1 Introduction

Neuroscience is entering a period marked by large amounts of complex neural data obtained from large-scale neural systems [1]. Most of these data take the form of networks covering the relationships or interconnections of elements within different types of large-scale neurobiological systems, for example, the connections and anatomical projections of neural circuitry between brain regions and the patterns of neural signals in brain regions associated with spontaneous and task-induced brain activity. Brain networks are segmented by anatomical structures that partition different brain regions and connect them together, and functional brain networks display complex patterns of neuronal communication and signaling.

Moreover, neuroimaging [2] is a bridging field that integrates medical image computing and neuroscience and has been evolving rapidly in recent years. Brain imaging [3] is a powerful tool for studying neuroscience: it supports diagnosing and treating brain disorders through qualitative and quantitative analysis of two- and three-dimensional images [4], uses imaging methods to explain the anatomical structure and activity of the brain [5], and helps address unanswered questions in the field of neuroscience [6, 7]. Addiction is a brain dysfunction characterized by abnormal behavior, in which addicts are driven by an overwhelming compulsion to constantly seek and consume drugs. Drug addiction is difficult to treat [8], and its biological mechanisms have not been fully elucidated. Meanwhile, imaging studies have revealed neurochemical and functional changes in the brains of addicted individuals, providing new insights into the mechanisms of addiction.

Owing to advances in modern imaging techniques and medical image analysis methods [9], the patterns of such complex neural signals can be analyzed from functional images, which reveal their association with neuronal activity [10], such as behavior and cognition, as well as with brain diseases [11]. However, few computational brain imaging methods use functional MRI to investigate the relationship between nicotine addiction and altered neuronal activity patterns throughout the brain [12], identify these patterns, and detect regional neuroimaging biomarkers. Therefore, brain imaging studies of the neural mechanisms of nicotine and other drug addiction, and of ways to support its diagnosis, have become increasingly critical.

Functional magnetic resonance imaging (fMRI) [13] has been used to study the neural basis of nicotine dependence and to develop smoking cessation strategies. Resting-state functional magnetic resonance imaging (rs-fMRI) is one of the most powerful non-invasive functional imaging techniques. It has the potential to revolutionize researchers' understanding of the physical basis of the brain and provides valuable tools for clinical and research purposes [14]. Because the neurological and behavioral effects of acute drug administration are often of short duration, the temporal pattern of change over a short time is critical. Such dynamic alterations can be detected by fMRI, which can reflect average values over shorter periods. fMRI studies reveal a complex dynamic pattern of brain changes during drug intoxication, with different temporal patterns in which some regions are activated and others deactivated. Functional connectivity in brain networks is commonly generated by analyzing fMRI time series, and functional brain networks characterize the statistical correlation patterns between neuronal regions. In the last decade, significant progress has been made in using fMRI data for brain functional network analysis [15]. Alterations in functional connectivity between brain regions have been extensively studied in the field of brain disorders, as has their association with cognitive impairment [16] and with degenerative neurological and psychiatric disorders [17].

2 Related work

Machine learning techniques have been widely applied to the recognition of medical scenes [18]. In brain image computing, machine learning-based artificial intelligence approaches provide powerful capabilities that drive brain image analysis technology forward [19, 20], effectively refining physicians' diagnoses and improving the accuracy of disease prediction. Recent advances in machine learning, particularly in deep learning, contribute to identifying, classifying, and quantifying patterns in brain images [21]. Deep learning-based brain image analysis methods [22, 23] for brain disease research can explore the mechanism of a disease and improve understanding of the brain disorder process. The core of these advances is the capability to automatically learn hierarchical features [24] from data rather than manually discovering and designing features [25] that depend on specific domain knowledge.

With the improvement of deep learning, the performance of several neuroscience applications has increased dramatically [26]. Deep learning techniques are a novel and efficient way of processing high-dimensional brain imaging data and extracting low-dimensional information from it. For example, convolutional neural network (CNN) approaches reduce the dimensionality of medical image data to identify patterns in brain imaging [27], and generative adversarial network (GAN) methods [28, 29], which are built on variational inference, are frequently employed in the medical imaging field. Generative adversarial techniques can simulate the actual distribution of data to decrease noise interference and improve model resilience [30].

In the past few years, deep learning-based medical image analysis methods built on graph neural networks have yielded successful results in the fields of disease classification and marker detection [31]. Graph neural networks are learning models that combine attributes and structural features into a single graph for processing. In contrast to traditional graph methods, graph neural networks can automatically propagate the information carried by neighboring nodes, which can then be used to analyze patterns of brain disorders.

However, processing network-structured data to obtain interpretable and determinable biomarkers remains challenging for existing methods. Traditional statistics-based methods require complex and redundant computational operations on image data. When common deep learning methods are adopted instead, the high dimensionality and small sample size of fMRI data make training difficult, and complex features result in low identification accuracy. To address these issues, we develop a novel learning framework with feature selection techniques and make the following contributions:

  1. A spatial attention recurrent network (SARN) is designed to identify effective patterns of addiction-related brain networks from fMRI data by learning both spatial structure and sequential information.

  2. A Bayesian feature selection approach is utilized to obtain effective and interpretable brain network embeddings for better identification performance.

  3. The feature-selected brain regions can be regarded as addiction-related biomarkers, verified by our experiments and neuroscience knowledge, and their discovery will aid research into addiction mechanisms.

This work extends a preliminary version of the paper presented at the 2022 International Conference on Brain Informatics [32]. We build on that work by adding further analyses and extending the encoder with a novel recurrent network that better handles the temporal information of dynamic brain networks.

Fig. 1

Proposed spatial attention recurrent network with Bayesian feature selection for identifying brain addiction. The top part of the figure is the raw fMRI preprocessing and brain network construction, the middle is the designed encoder, and the bottom is the Bayesian feature selector and addiction classifier

3 Method

As shown in Fig. 1, the detailed architecture of the proposed framework is illustrated. Our framework comprises three main components: 1) the SARN encoder, which consists of graph spatial attention layers and sliding-window attention recurrent layers; 2) a feature selector with a Bayesian feature selection strategy; and 3) a classifier for identifying addiction-related brain network embeddings.

Raw fMRI data are first fed as input and preprocessed into network data with structural information and attributes. Generally, in the encoder (E), a self-attention mechanism is adopted to transform the time series of brain regions \(X=\left\{ x_n\right\} ^N_{n=1}\in {\mathbb {R}}^{N\times D}\) and the dynamic brain functional connections \(\left\{ A^t\right\} ^T_{t=1}\) into the embeddings \(Z=\left\{ z_n\right\} ^N_{n=1}\in {\mathbb {R}}^{N\times d}\). Moreover, in the feature selector, the latent binary random vectors \(B=\{b_n\}^N_{n=1}\) are created to infer the posterior probability distribution and select more efficient brain regional features. The encoder is therefore trained with two objectives: a Bayesian feature selection loss regarded as the feature sparsity penalty and a classification loss for identifying nicotine addiction. After sufficient training, the model can output addiction probability scores for specific brain regions together with the addiction brain network identification results.
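To make the data flow concrete, the following is a minimal shape-level sketch of the pipeline described above (in PyTorch, since the implementation uses a PyTorch backend). The module names and dimensions are illustrative stand-ins rather than the authors' released code; the real encoder, selector, and classifier are detailed in the following subsections.

```python
import torch
import torch.nn as nn

# Shape-level sketch for one subject: N regions, D time points, T dynamic
# steps, d latent dimensions. All modules here are illustrative stand-ins.
N, D, T, d = 150, 800, 4, 64

X = torch.randn(N, D)                        # regional BOLD time series
A = torch.rand(T, N, N)                      # dynamic connectivity {A^t} (unused by the stand-in encoder)

encoder = nn.Linear(D, d)                    # stand-in for the SARN encoder E
selector_probs = torch.rand(N)               # stand-in for the BFS probability scores z
classifier = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())

Z = encoder(X)                               # (N, d) regional embeddings
B = torch.bernoulli(selector_probs)          # (N,) binary region mask b_n
graph_emb = (Z * B.unsqueeze(-1)).mean(0)    # readout over selected regions
p_addiction = classifier(graph_emb)          # graph-level addiction probability
```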

3.1 Graph spatial attention network

The graph spatial attention encoder aims to embed the regional brain imaging features, aggregated with dynamic brain network attributes, into a low-dimensional latent space. The layer that composes the encoder is based on the graph attention network (GAT) [33] with the addition of spatial encoding. It allows each regional brain node to focus adaptively on other nodes according to the spatial information of the graph-structured connectivities in the brain networks.

Therefore, the attention coefficient, which combines a shared attentional mechanism with spatial encoding for brain connectivities, can be expressed as:

$$\begin{aligned} &\alpha ^{l}_{(i, j)}= \\&\quad \frac{\exp \left( \tanh \left( \left[ {\textbf{h}}_{(i)}^{l} {\textbf{W}}^{l} , {\textbf{h}}_{(j)}^{l} {\textbf{W}}^{l}\right] \cdot {\textbf{c}}^{l}+s_{\psi (x_i,x_j)}\right) \right) }{\sum _{k \in {\mathcal {N}}(i)} \exp \left( \tanh \left( \left[ {\textbf{h}}_{(i)}^{l} {\textbf{W}}^{l} , {\textbf{h}}_{(k)}^{l} {\textbf{W}}^{l}\right] \cdot {\textbf{c}}^{l}+s_{\psi (x_i,x_k)}\right) \right) }, \end{aligned}$$
(1)

where \(h^l_{(i)}\) is the hidden representation of brain node i at the lth layer, \(W^l\in {\mathbb {R}}^{d_l\times d_{l+1}}\) is a parameterized weight matrix regarded as the graph convolutional filter, \(c^l\) is a weight vector learned during training, and \(s_{\psi (x_i,x_j)}\) is a learnable scalar indexed by \(\psi (x_i,x_j)\), which carries positional information. It serves as the spatial encoding and is shared across all layers.

Formally, let \({\textbf{h}}_{(i)}^{l+1}\) denote the output representation of the lth layer; our graph spatial attention layer is then given as follows:

$$\begin{aligned} {\textbf{h}}_{(i)}^{l+1}=\sigma \left( \sum _{j \in {\mathcal {N}}(i)} \mathbf {\alpha }_{(i, j)}^{l} {\textbf{h}}_{(j)}^{l} {\textbf{W}}^{l}\right) . \end{aligned}$$
(2)

In Eq. 2, the feature propagation mechanism aggregates the effects across all neighboring brain nodes and attaches the spatial encoding information from the dynamic brain network connectivity \(\left\{ A^t\right\} ^T_{t=1}\).
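A possible PyTorch realization of Eqs. 1 and 2 is sketched below. It is an illustrative re-implementation, not the authors' code: psi is assumed to be a precomputed integer map giving the spatial relation between each pair of regions, and the adjacency matrix is assumed to contain self-loops so that every node has at least one neighbor.

```python
import torch
import torch.nn as nn

class GraphSpatialAttentionLayer(nn.Module):
    """Illustrative re-implementation of Eqs. 1-2 (not the authors' code):
    GAT-style attention with a learnable spatial-encoding bias s indexed by a
    precomputed relation map psi."""

    def __init__(self, d_in, d_out, num_spatial_relations):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)                 # W^l
        self.c = nn.Parameter(torch.randn(2 * d_out))                # attention vector c^l
        self.s = nn.Parameter(torch.zeros(num_spatial_relations))    # spatial bias s_psi

    def forward(self, h, adj, psi):
        # h: (N, d_in) node features; adj: (N, N) binary connectivity with
        # self-loops; psi: (N, N) long tensor of spatial-relation indices.
        Wh = self.W(h)                                                # h^l W^l
        N = Wh.size(0)
        pair = torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                          Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)
        logits = torch.tanh(pair @ self.c) + self.s[psi]              # tanh([..]·c) + s
        logits = logits.masked_fill(adj == 0, float("-inf"))          # neighbors only
        alpha = torch.softmax(logits, dim=1)                          # Eq. 1
        return torch.relu(alpha @ Wh)                                 # Eq. 2
```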

Fig. 2

The detailed unit of the sliding-window attention recurrent network

3.2 Sliding-window attention recurrent network

As is known, the sliding-window technique [34] is commonly used to capture dynamic changes in functional brain imaging and to extract efficient time courses. Inspired by this basic approach and the attention mechanism [35], a sliding-window attention network is designed to further process network embeddings and extract the time-sequential representation of dynamic functional attributes. The sliding-window attention recurrent layer consists of individual units in series; the structural details of a unit are shown in Fig. 2. In this network, the input is the brain network embedding from the former layer at different time steps, and the output is the brain network representation of the whole time series. We consider the previous memory state \(M^{t-1}=[H^{t-1},H^{t-2},...,H^{t-w}]\) as the time window of interest of the dynamic brain networks within w time steps. At step t, the output embedding of the graph spatial attention network \(Z^t\) becomes the input of this recurrent unit; the query matrix of the self-attention is \(Q=Z^t\), and the key and value matrices are \(K,V=[Z^t,M^{t-1}]\). To implement the self-attention, the weighting coefficients \(e^t_m\) and attention coefficients \(a^t_m\) are calculated as follows:

$$\begin{aligned} e^t_m&= \frac{W_qQ \cdot (W_kK_m)^T}{\sqrt{d_k}}, \end{aligned}$$
(3)
$$\begin{aligned} a^t_m&= {\text {SelfAttn}}(Z^t,[Z^t,M^{t-1}],[Z^t,M^{t-1}]) \nonumber \\&= \frac{\exp (e^t_m)}{\sum ^{w}_{m'=0}\exp (e^t_{m'})}. \end{aligned}$$
(4)

The intermediate hidden state \(H^{t-1}\) is updated to \(H^t\), and \(M^t\) is obtained by adding \(H^t\) to \(M^{t-1}\) while the oldest state drops out of the w-step window. The calculation performed during the update is shown in the following equations:

$$\begin{aligned} f^t&= \sigma (W_fZ^t+U_fa^tV+b_f), \end{aligned}$$
(5)
$$\begin{aligned} {\tilde{H}}^t&= \tanh (W_hZ^t+U_ha^tV+b_h), \end{aligned}$$
(6)
$$\begin{aligned} H^t&= f^t\odot H^{t-1}+(1-f^t)\odot {\tilde{H}}^t. \end{aligned}$$
(7)

After this network, we finally obtain \(H^t\), which contains both spatial and temporal features. Although the unit resembles a simplified LSTM [36] or a GRU with attention [37], it differs in that it retains only the forget gate to reduce model redundancy and combines the sliding-window and self-attention mechanisms to obtain better temporal features within the time window.
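The unit in Fig. 2 can be sketched as follows, an illustrative PyTorch re-implementation of Eqs. 3-7 rather than the authors' code: the per-node query attends over the current embedding and the windowed memory, and a single forget gate mixes the previous hidden state with the candidate state.

```python
import torch
import torch.nn as nn

class SlidingWindowAttentionUnit(nn.Module):
    """Illustrative re-implementation of one unit of Eqs. 3-7 (not the
    authors' code): the current embedding Z^t attends over [Z^t, M^{t-1}],
    and a single forget gate mixes H^{t-1} with the candidate state."""

    def __init__(self, d, window=3):
        super().__init__()
        self.window = window
        self.Wq = nn.Linear(d, d, bias=False)
        self.Wk = nn.Linear(d, d, bias=False)
        # One linear layer on [Z^t, a^t V] is equivalent to W_* Z^t + U_* a^t V + b_*.
        self.forget = nn.Linear(2 * d, d)
        self.candidate = nn.Linear(2 * d, d)

    def forward(self, Zt, memory):
        # Zt: (N, d) current embedding; memory: list of up to `window`
        # previous hidden states [H^{t-1}, ..., H^{t-w}], each (N, d).
        KV = torch.stack([Zt] + memory, dim=0)                            # [Z^t, M^{t-1}]
        e = (self.Wk(KV) * self.Wq(Zt)).sum(-1) / (Zt.size(-1) ** 0.5)    # Eq. 3
        a = torch.softmax(e, dim=0)                                       # Eq. 4, over the window
        context = (a.unsqueeze(-1) * KV).sum(0)                           # a^t V, shape (N, d)
        x = torch.cat([Zt, context], dim=-1)
        H_prev = memory[0] if memory else torch.zeros_like(Zt)
        f = torch.sigmoid(self.forget(x))                                 # Eq. 5
        H_tilde = torch.tanh(self.candidate(x))                           # Eq. 6
        Ht = f * H_prev + (1 - f) * H_tilde                               # Eq. 7
        memory = ([Ht] + memory)[: self.window]                           # slide the window
        return Ht, memory
```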

3.3 Bayesian feature selector

To find the features most effective for identification among the many regional brain features, and to acquire a smaller set of discriminative biomarkers that reduces classification error, we employ the Bayesian feature selector. We define \({\textbf{H}}=\{H_1^o,...,H_n^o\}\) and \({\textbf{Y}}=\{y_1,...,y_n\}\) as the output features from the encoder and the labels indicating addiction or not. A binary masking matrix B is introduced to select features; the posterior distribution of interest is denoted \(p({\textbf{B}}\mid {\textbf{H}},{\textbf{Y}})\) and its approximation \(q(\cdot )\). To improve the identification performance and the model's ability to discriminate features, following the variational method [38] and Bayesian inference, we optimize the model by minimizing the KL divergence between the approximate distribution and the posterior distribution:

$$\begin{aligned}&\underset{q(\cdot )}{{\text {argmin}}}\,KL(q({\textbf{B}})\Vert p({\textbf{B}}\mid {\textbf{H}},{\textbf{Y}}))=\\&\quad -E_{q}\left[ \log \left( p({\textbf{Y}}\mid {\textbf{H}},{\textbf{B}})\right) \right] +K L\left( q\left( {\textbf{B}}\right) \Vert p\left( {\textbf{B}}\right) \right) .\end{aligned}$$
(8)

In Eq. 8, the first term corresponds to a binary cross-entropy loss for the identification task, where the input features \({\textbf{H}}\) are masked by \({\textbf{B}}\), and the second term becomes a loss for learning the probability scores \({\textbf{z}}\), which are used to compute the binary matrix \({\textbf{B}}\) by a relaxed Bernoulli sampling method:

$$\begin{aligned} {\textbf{b}}_{n}=\sigma \left( \frac{\log \left( {\textbf{z}}\right) -\log \left( 1-{\textbf{z}}\right) +\log \left( {\textbf{u}}_{n}\right) -\log \left( 1-{\textbf{u}}_{n}\right) }{r}\right) , \end{aligned}$$
(9)

where \({\textbf{u}}_n\) is sampled from a uniform distribution on (0, 1), and r is the relaxation (temperature) parameter of the Bernoulli sampling.
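Equation 9 is the standard relaxed Bernoulli reparameterization and can be sketched as follows. This is an illustrative implementation; the parameter names (logit_z, relaxation) are assumptions, and in the actual framework the probability scores z are learned jointly with the rest of the model.

```python
import torch
import torch.nn as nn

class BayesianFeatureSelector(nn.Module):
    """Illustrative sketch of the relaxed Bernoulli mask of Eq. 9; the
    parameter names (logit_z, relaxation) are assumptions."""

    def __init__(self, num_regions, relaxation=0.1):
        super().__init__()
        self.logit_z = nn.Parameter(torch.zeros(num_regions))   # learnable region scores
        self.r = relaxation                                       # temperature r

    def forward(self, H):
        # H: (N, d) encoder output; returns masked features and the probabilities z.
        z = torch.sigmoid(self.logit_z)                           # selection probabilities
        u = torch.rand_like(z).clamp(1e-6, 1 - 1e-6)              # u_n ~ Uniform(0, 1)
        b = torch.sigmoid((torch.log(z) - torch.log(1 - z)
                           + torch.log(u) - torch.log(1 - u)) / self.r)  # Eq. 9
        return H * b.unsqueeze(-1), z
```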

3.4 Classifier and loss function

To integrate the information of each node for graph-level identification, we utilize a readout function that clusters node features together by simply averaging them:

$$\begin{aligned} {\mathcal {R}}({\textbf{H}})=\sigma \left( \frac{1}{N} \sum _{i=1}^{N} \textbf{h}_{i}\right) , \end{aligned}$$
(10)

where \(\sigma\) is a nonlinear activation function. The readout function is similar to a graph pooling operation, and other graph pooling methods could be used to replace it. The selected and readout features are delivered to a multi-layer perceptron (MLP) to derive the final predicted labels \({\hat{y}}\).
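A minimal sketch of this readout-plus-MLP classifier (Eq. 10 followed by an MLP) might look like the following; the hidden width and activation choices are assumptions rather than reported values.

```python
import torch
import torch.nn as nn

class ReadoutClassifier(nn.Module):
    """Mean readout over brain-region nodes (Eq. 10) followed by an MLP; the
    hidden width and activations are assumed, not reported values."""

    def __init__(self, d, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, H):
        # H: (N, d) selected node features -> scalar addiction probability y_hat
        g = torch.sigmoid(H.mean(dim=0))      # readout R(H), Eq. 10
        return self.mlp(g).squeeze(-1)
```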

The total loss function is the concrete form of Eq. 8:

$$\begin{aligned} {\mathcal {L}}\left( {\textbf{X}}, {\textbf{A}}\right)&= -\sum _{n=1}^{N}\left( y_{n} \log \left( {\hat{y}}_{n}\right) +\left( 1-y_{n}\right) \log \left( 1-{\hat{y}}_{n}\right) \right) \\&\quad +K L\left( {\text {Ber}}\left( {\textbf{z}}\right) \Vert {\text {Ber}}({\textbf{s}})\right) . \end{aligned}$$
(11)

The first term guides the MLP in classifying the selected features, and the second term trains the selector to learn the probability mapping to the feature mask. Here, \({\text {Ber}}({\textbf{s}})\) is a Bernoulli prior over a binary random vector with sparse elements, which enforces sparsity.
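The loss in Eq. 11 can be sketched as below, where the KL term is the elementwise KL divergence between Bernoulli distributions with parameters z and s; the sparsity level s = 0.05 is an assumed value, not one reported in the paper.

```python
import torch
import torch.nn.functional as F

def total_loss(y_hat, y, z, s=0.05, eps=1e-6):
    """Sketch of Eq. 11: binary cross-entropy on the predictions plus the KL
    divergence between Ber(z) and a sparse Bernoulli prior Ber(s); the
    sparsity level s = 0.05 is an assumed value."""
    bce = F.binary_cross_entropy(y_hat, y)                 # classification term
    z = z.clamp(eps, 1 - eps)
    kl = (z * torch.log(z / s)
          + (1 - z) * torch.log((1 - z) / (1 - s))).sum()  # KL(Ber(z) || Ber(s))
    return bce + kl
```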

4 Experiments

In this section, we first evaluate the identification performance of the proposed framework through an ablation study and a comparison experiment, using four binary classification metrics to measure the results of dynamic brain network identification. We then analyze the probability scores output by the framework to detect addiction-related brain regions. These identified brain regions are visualized and validated as interpretable biomarkers.

Fig. 3

Identification experiments on comparative methods. The red box represents our method, and the green and yellow boxes represent the DGI and GCN methods. The maximum, minimum, mean, and quartiles are shown in the box plot

Table 1 Ablation study for assessing the efficiency of the encoder and feature selection
Fig. 4

Comparison of identification results at four different time steps

Fig. 5

Visualization of top five addiction-related brain regions with the highest BFS-weighted scores

4.1 Dataset and preparation

An fMRI dataset of an addiction animal model is adopted for our experiments. It contains 8 normal control (addiction-irrelevant) fMRI samples and 16 addiction-related fMRI samples, each of which has 800 time points. The quality of these image data is strictly controlled, with the signal-to-noise ratio kept as large as possible. To transform the fMRI data into dynamic brain network data, the following preprocessing is implemented. Functional data are aligned and unwarped to account for head motion, and the mean motion-corrected image is coregistered with the higher-resolution anatomical T2 image. The preprocessed images are then smoothed with an isotropic Gaussian kernel of 3-mm full-width at half-maximum. With the Wistar rat brain atlas [39], 150 rat brain regions are defined and fixed in the normalized space. We assess the functional connection between regional time series by calculating the Pearson correlation coefficient, resulting in a \(150\times 150\) adjacency matrix for each time step, and we divide the whole time series of 800 time points into 4 equal time steps. The adjacency matrices and the temporal attributes of the brain regions at all time steps form the dynamic brain network data used as the input dataset. Most of the preprocessing steps are performed with the assistance of toolboxes, including Statistical Parametric Mapping 12 (SPM12) [40] and Graph Theoretical Network Analysis (GRETNA) [41].
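As a sketch of this last construction step (illustrative NumPy code; the actual preprocessing is performed in SPM12 and GRETNA), the dynamic network input for one subject can be built by splitting the 800-point regional time series into 4 equal segments and computing a Pearson-correlation adjacency matrix per segment:

```python
import numpy as np

def dynamic_brain_networks(ts, num_steps=4):
    """Split a (150, 800) regional time-series matrix into equal temporal
    segments and compute a Pearson-correlation adjacency per segment."""
    segments = np.array_split(ts, num_steps, axis=1)                # 4 segments of 200 points
    adjacency = np.stack([np.corrcoef(seg) for seg in segments])    # (4, 150, 150)
    return adjacency, segments                                      # connectivity + regional attributes

adj, feats = dynamic_brain_networks(np.random.randn(150, 800))
print(adj.shape)  # (4, 150, 150)
```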

4.2 Implementation detail

FGSAN was implemented with the PyTorch backend, and one Nvidia GeForce RTX 3080 Ti was used to speed up training. During training, the learning rate was set to 0.001 and the number of training epochs to 1000. Adam was used as the optimizer with a weight decay of 0.01 to reduce overfitting. The proposed SARN encoder is composed of three graph spatial attention layers and one sliding-window attention recurrent layer. All trials were repeated ten times, and the results were averaged. The regularization value was set to 0.5 for all datasets and techniques.
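The corresponding optimizer setup is sketched below; model is a placeholder standing in for the full SARN + BFS + classifier framework, and the dummy forward pass only marks where the Eq. 11 loss would be computed.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported: Adam, lr = 0.001, weight decay = 0.01, 1000 epochs.
model = nn.Linear(64, 1)  # placeholder for the full framework
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)

for epoch in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(16, 64)).mean()  # placeholder loss; Eq. 11 in practice
    loss.backward()
    optimizer.step()
```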

4.3 Metrics

Evaluation of binary classification performance is based on quantitative measures in four key metrics: (1) accuracy (ACC); (2) precision (PREC); (3) recall; and (4) F1-score. Our proposed method is evaluated by fourfold cross-validation.
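A fourfold cross-validated evaluation over these metrics could be computed as follows; fit_and_predict is a hypothetical wrapper around training the framework on one fold and predicting labels on the held-out fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(X, y, fit_and_predict):
    """Fourfold cross-validation over ACC, PREC, recall, and F1-score.
    `fit_and_predict` is a hypothetical train-then-predict wrapper."""
    skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
    scores = {"ACC": [], "PREC": [], "REC": [], "F1": []}
    for train_idx, test_idx in skf.split(X, y):
        y_pred = fit_and_predict(X[train_idx], y[train_idx], X[test_idx])
        scores["ACC"].append(accuracy_score(y[test_idx], y_pred))
        scores["PREC"].append(precision_score(y[test_idx], y_pred))
        scores["REC"].append(recall_score(y[test_idx], y_pred))
        scores["F1"].append(f1_score(y[test_idx], y_pred))
    return {k: float(np.mean(v)) for k, v in scores.items()}
```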

4.4 Ablation study of identification performance

As indicated in Table 1, we conducted an ablation study on identification to evaluate the effectiveness of our proposed encoder and Bayesian feature selector, and two significant results were obtained:

1) Compared with the baseline GAT encoder, the proposed encoder showed impressive performance on the binary addiction-related classification. This is because the spatial encoding enables the self-attention mechanism to obtain more positional information and achieve better graph embeddings, and the recurrent network learns more effective representations in the temporal dimension;

2) The Bayesian feature selector comprehensively improves the performance of the identification methods. This indicates that feature selection acts as an auxiliary to identifying the graph-structure patterns, and the task-relevant embeddings it selects make the model perform better on classification.

Table 2 Top five regional brain biomarkers extracted by the FGSAN model
Fig. 6

BOLD signal visualization of addiction-related brain regions in original imaging data

4.5 Comparison of identification performance

We conduct comparative experiments to verify the superiority of the proposed method. It is compared with graph neural network methods, including GCN [47] and DGI [48]. GCN is a classical graph learning model for graph-level classification tasks. Built on GCN, DGI learns node embeddings in an unsupervised manner and continuously optimizes the model by maximizing the mutual information between the local and global representations. Our experiments show that the proposed framework is significantly better than the comparative methods on the classification metrics. As shown in Fig. 3, SARN with BFS outperforms DGI and GCN on every indicator of the binary classification experiment, suggesting that our approach has a stronger capacity to identify patterns of addiction in the brain. Moreover, to explore the identification of the dynamic temporal properties, we perform separate classification experiments on each of the four divided time steps. The results are shown in Fig. 4, where it can be observed that our model is robust and performs well across the different time steps.

4.6 Interpretable brain regional biomarkers

The brain regional features selected by our method have corresponding selection frequencies and probability scores, which represent the importance of the brain regions and serve as criteria for inference. We weight these values cumulatively to compute the statistics. As shown in Table 2, the five brain regions with the highest weights are Midbrain.R, Diagonal domain.R, Primary motor cortex.R, Hippocampal formation.L, and Insular cortex.L. A higher probability score not only means that the brain region contributes more decisively to inference, but also that the feature is more strongly implicated in the differences caused by addiction, i.e., the corresponding brain region is more addiction-related. To validate that these addiction-related brain regions are interpretable, we confirm them with neuroscience knowledge and imaging.

On the one hand, the five brain regions discovered by our method have been shown to be associated with nicotine addiction in previous research. We collect and categorize the relevant references of prior research on each brain region and list them in Table 2. In addition, we visualize the locations of these five brain regions: as shown in Fig. 5, the addiction-related brain regions found by the model are displayed in the rat brain in axial, coronal, and sagittal views.

On the other hand, we observe the BOLD signal of the discovered addiction-related brain regions in the raw images, which reflects the functional activity of the brain, and find that the top five brain regions show significant BOLD signal differences between the addiction and non-addiction animal models. These regions are visualized in Fig. 6, providing evidence from the raw image data that they can serve as interpretable biomarkers.

5 Conclusion

In this research, we propose a new framework, the spatial attention recurrent network (SARN) with Bayesian feature selection, for discovering effective and interpretable regional brain biomarkers and utilizing the features of these biomarkers to dynamically identify addiction-related brain network patterns. Our model is investigated and discussed in detail through designed experiments that demonstrate the superiority of the encoder and feature selector in the proposed framework. We obtain better results than the comparative methods by using the selected graph representations for classification, indicating an advantage in graph feature extraction that may yield better graph embeddings in the latent space. More significantly, the importance of these regional features can be well explained by the neuroscience of addiction, and direct support for the corresponding biomarkers can be found in the original image data. Delving into these addiction-related brain regions, matching the interconnections between them to different kinds of addiction mechanisms, and studying such causal addiction circuits will be the direction of our continuing research.

Availability of data and materials

Data used in the manuscript will be available on request after the manuscript is published.

References

  1. Bullmore E, Sporns O (2009) Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci 10(3):186–198

  2. Woo C-W, Chang LJ, Lindquist MA, Wager TD (2017) Building better biomarkers: brain models in translational neuroimaging. Nat Neurosci 20(3):365–377

  3. Peters L, De Smedt B (2018) Arithmetic in the developing brain: a review of brain imaging studies. Dev Cogn Neurosci. 30:265–279

  4. Wang S, Wang H, Cheung AC, Shen Y, Gan M (2020) Ensemble of 3d densely connected convolutional network for diagnosis of mild cognitive impairment and Alzheimer’s disease. Neurocomputing 333:145–56

  5. Yu W, Lei B, Wang S, Liu Y, Feng Z, Hu Y, Shen Y, Ng MK (2022) Morphological feature visualization of alzheimer’s disease via multidirectional perception gan. IEEE Transactions on Neural Networks and Learning Systems, pp 1–15. https://doi.org/10.1109/TNNLS.2021.3118369

  6. Stam CJ (2014) Modern network science of neurological disorders. Nat Rev Neurosci 15(10):683–695

  7. Hu S, Yuan J, Wang S (2019) Cross-modality synthesis from MRI to pet using adversarial u-net with different normalization. In: 2019 International Conference on Medical Imaging Physics and Engineering (ICMIPE), IEEE, pp 1–5

  8. Hartmann-Boyce J, Chepkin SC, Ye W, Bullen C, Lancaster T (2018) Nicotine replacement therapy versus control for smoking cessation. Cochrane Database Syst Rev. 5(5):CD000146. https://doi.org/10.1002/14651858.CD000146.pub5

  9. Wang S-Q, Li X, Cui J-L, Li H-X, Luk KD, Hu Y (2015) Prediction of myelopathic level in cervical spondylotic myelopathy using diffusion tensor imaging. J Magn Reson Imaging 41(6):1682–1688

  10. Heeger DJ, Ress D (2002) What does FMRI tell us about neuronal activity? Nat Rev Neurosci 3(2):142–151

  11. Wang S, Chen Z, You S, Wang B, Shen Y, Lei B (2022) Brain stroke lesion segmentation using consistent perception generative adversarial network. Neural Comput Appl 34(11):8657–8669

  12. Li Z, DiFranza JR, Wellman RJ, Kulkarni P, King JA (2008) Imaging brain activation in nicotine-sensitized rats. Brain Res 1199:91–99

  13. Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD (2014) Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex 24(3):663–676

  14. Hu S, Lei B, Wang S, Wang Y, Feng Z, Shen Y (2022) Bidirectional mapping generative adversarial networks for brain MR to pet synthesis. IEEE Trans Med Imaging. 41(1):145–157. https://doi.org/10.1109/TMI.2021.3107013

  15. Salvador R, Suckling J, Coleman MR, Pickard JD, Menon D, Bullmore E (2005) Neurophysiological architecture of functional magnetic resonance images of human brain. Cereb Cortex 15(9):1332–1342

  16. Wang S, Shen Y, Chen W, Xiao T, Hu J (2017) Automatic recognition of mild cognitive impairment from MRI images using expedited convolutional neural networks. In: International Conference on Artificial Neural Networks, Springer, pp 373–380

  17. Wang S, Wang H, Shen Y, Wang X (2018) Automatic recognition of mild cognitive impairment and Alzheimers disease using ensemble based 3D densely connected convolutional networks. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, pp 517–523

  18. Currie G, Hawk KE, Rohren E, Vial A, Klein R (2019) Machine learning and deep learning in medical imaging: intelligent imaging. J Med Imaging Radiat Sci 50(4):477–487

  19. Wang S, Hu Y, Shen Y, Li H (2018) Classification of diffusion tensor metrics for the diagnosis of a myelopathic cord using machine learning. Int J Neural Syst 28(02):1750036

  20. Mo L-F, Wang S-Q (2009) A variational approach to nonlinear two-point boundary value problems. Nonlinear Anal Theory Methods Appl 71(12):834–838

  21. Hu S, Shen Y, Wang S, Lei B (2020) Brain mr to pet synthesis via bidirectional generative adversarial network. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham; pp 698–707

  22. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, Van Der Laak JA, Van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60–88

  23. Wang S, Wang X, Shen Y, He B, Zhao X, Cheung PW-H, Cheung JPY, Luk KD-K, Hu Y (2020) An ensemble-based densely-connected deep learning system for assessment of skeletal maturity. IEEE Trans Syst Man Cybern Syst 52(1):426–437

  24. Yu W, Lei B, Ng MK, Cheung AC, Shen Y, Wang S (2022) Tensorizing GAN with high-order pooling for Alzheimer’s disease assessment. IEEE Trans Neural Netw Learn Syst. 33(9):4945–4959. https://doi.org/10.1109/TNNLS.2021.3063516

  25. Zeng D, Wang S, Shen Y, Shi C (2017) A GA-based feature selection and parameter optimization for support tucker machine. Procedia Comput Sci 111:17–23

  26. Hanlon CA, Canterberry M (2012) The use of brain imaging to elucidate neural circuit changes in cocaine addiction. Subst Abuse Rehabil 3:115

  27. Wang S, Shen Y, Zeng D, Hu Y (2018) Bone age assessment using convolutional neural networks. In: 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD), IEEE, pp 175–178

  28. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139–144

  29. You S, Lei B, Wang S, Chui CK, Cheung AC, Liu Y, Gan M, Wu G, Shen Y (2022) Fine perceptive gans for brain MR image super-resolution in wavelet domain. In: IEEE Transactions on Neural Networks and Learning Systems, pp 1–13. https://doi.org/10.1109/TNNLS.2022.3153088

  30. Wang S, Wang X, Hu Y, Shen Y, Yang Z, Gan M, Lei B (2020) Diabetic retinopathy diagnosis using multichannel generative adversarial network with semisupervision. IEEE Trans Autom Sci Eng 18(2):574–585

  31. Huang W, Bolton TA, Medaglia JD, Bassett DS, Ribeiro A, Van De Ville D (2018) A graph signal processing perspective on functional brain imaging. Proc IEEE 106(5):868–885

  32. Gong C, Jing C, Pan J, Wang Y, Wang S (2022) Feature-selected graph spatial attention network for addictive brain-networks identification. In: International Conference on Brain Informatics, Springer, pp 316–326

  33. Veličković P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y (2017) Graph attention networks. arXiv preprint arXiv:1710.10903

  34. Thompson GJ, Magnuson ME, Merritt MD, Schwarb H, Pan W-J, McKinley A, Tripp LD, Schumacher EH, Keilholz SD (2013) Short-time windows of correlation between large-scale functional brain networks predict vigilance intraindividually and interindividually. Hum Brain Mapp 34(12):3280–3298

  35. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neu Inform Process Syst 30

  36. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780

  37. Ruiz L, Gama F, Ribeiro A (2020) Gated graph recurrent neural networks. IEEE Trans Signal Process 68:6303–6318. https://doi.org/10.1109/TSP.2020.3033962

  38. Wang S-Q (2009) A variational approach to nonlinear two-point boundary value problems. Comput Math Appl 58(11–12):2452–2455

  39. Valdés-Hernández PA, Sumiyoshi A, Nonaka H, Haga R, Aubert-Vásquez E, Ogawa T, Iturria-Medina Y, Riera JJ, Kawashima R (2011) An in vivo MRI template set for morphometry, tissue segmentation, and FMRI localization in rats. Front Neuroinform 5:26

  40. Penny WD, Friston KJ, Ashburner JT, Kiebel SJ, Nichols TE (2007) Statistical parametric mapping: the analysis of functional brain images. Neurosurgery

  41. Wang J, Wang X, Xia M, Liao X, Evans A, He Y (2015) Gretna: a graph theoretical network analysis toolbox for imaging connectomics. Front Hum Neurosci 9:386

  42. Nguyen C, Mondoloni S, Le Borgne T, Centeno I, Come M, Jehl J, Solié C, Reynolds LM, Durand-de Cuttoli R, Tolu S et al (2021) Nicotine inhibits the VTA-to-amygdala dopamine pathway to promote anxiety. Neuron 109(16):2604–2615

  43. Flannery JS, Riedel MC, Poudel R, Laird AR, Ross TJ, Salmeron BJ, Stein EA, Sutherland MT (2019) Habenular and striatal activity during performance feedback are differentially linked with state-like and trait-like aspects of tobacco use disorder. Sci Adv 5(10):2084

  44. Smolka MN, Bühler M, Klein S, Zimmermann U, Mann K, Heinz A, Braus DF (2006) Severity of nicotine dependence modulates cue-induced brain activity in regions involved in motor preparation and imagery. Psychopharmacology 184(3):577–588

  45. Ghasemzadeh Z, Sardari M, Javadi P, Rezayof A (2020) Expression analysis of hippocampal and amygdala CREB-BDNF signaling pathway in nicotine-induced reward under stress in rats. Brain Res 1741:146885

  46. Keeley RJ, Hsu L-M, Brynildsen JK, Lu H, Yang Y, Stein EA (2020) Intrinsic differences in insular circuits moderate the negative association between nicotine dependence and cingulate-striatal connectivity strength. Neuropsychopharmacology 45(6):1042–1049

  47. Kipf TN, Welling M (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907

  48. Velickovic P, Fedus W, Hamilton WL, Liò P, Bengio Y, Hjelm RD (2019) Deep graph infomax. ICLR (Poster) 2(3):4

Funding

This work was supported by the National Natural Science Foundation of China under Grants 62172403 and 61872351, the Distinguished Young Scholars Fund of Guangdong under Grant 2021B1515020019, the Excellent Young Scholars of Shenzhen under Grant RCYX20200714114641211, and the Shenzhen Key Basic Research Project under Grant JCYJ20200109115641762.

Author information

Contributions

CG and SW contributed to the conception of the study; CG performed the experiment; CG and SW performed the data analyses and wrote the manuscript; XC, and BM helped perform the analysis with constructive discussions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shuqiang Wang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Gong, C., Chen, X., Mughal, B. et al. Addictive brain-network identification by spatial attention recurrent network with feature selection. Brain Inf. 10, 2 (2023). https://doi.org/10.1186/s40708-022-00182-4
