Smart imaging to empower brain-wide neuroscience at single-cell levels

Abstract

A deep understanding of neuronal connectivity and networks, with detailed cell typing across brain regions, is necessary to unravel the mechanisms behind emotional and memory functions and to find treatments for brain disorders. Brain-wide imaging with single-cell resolution provides unique advantages for accessing the morphological features of individual neurons and investigating the connectivity of neural networks, and it has led to exciting discoveries in recent years based on animal models such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphology at larger scales and finer levels of detail, as well as to enable research on non-human primate (NHP) and human brains. Advances in artificial intelligence (AI) and computational resources bring great opportunities for ‘smart’ imaging systems, i.e., systems that automate, speed up, optimize, and upgrade imaging with AI and computational strategies. In this light, we review the key computational techniques that can support smart systems in brain-wide imaging at single-cell resolution.

1 Introduction

Understanding brain structure is one of the primary goals of modern science, and the shapes of neurons play a fundamental role in it. The function of a neuron both dictates and is constrained by its morphology and its connections with other neurons. Neural circuits and connectivity provide the scientific evidence and basis for understanding emotional and memory-related activities as well as brain diseases [1,2,3,4]. It is therefore of utmost importance to accurately locate and identify neural morphologies at the scale of the entire brain. In this light, brain initiatives have been launched to build brain-wide atlases and unravel neuronal connectivity and neural circuits, including the U.S. BRAIN Initiative [5, 6], Europe’s Human Brain Project [7], and China’s Brain Project [8]. These projects are expected to facilitate the treatment of neurological and psychiatric disorders and to promote new breakthroughs in neuromorphic computing and artificial intelligence. To this end, brain-wide imaging with single-cell resolution is desired, both to access the morphological features of individual neurons and to delineate the connectivity patterns of neural networks.

With remarkable advances in sparse labeling [9,10,11], tissue clearing [12,13,14], light microscopy [15,16,17,18] and computational methods [19, 20], brain-wide mapping with single-cell resolution has become possible for small mammals such as rodents. This has brought invaluable opportunities to understand brain structures and the mechanisms underlying brain diseases. Nonetheless, studies at larger scales and finer levels of detail are needed to explore the large variety of neuron types and to gain a more comprehensive understanding of neuronal connectivity and projection patterns. More importantly, non-human primates (NHP) are attracting increasing attention as better experimental models of human cognitive functions and brain diseases, given the fundamentally different brain structures and behaviors between species [21,22,23]. Investigating NHP and human brains in the same way as rodent brains has become one of the most important tasks in neuroscience today. Unfortunately, the approaches developed for rodents are not directly applicable to NHPs because of the much larger brain sizes, stringent limits on animal numbers, and the substantially increased individual variability of the brains [24]. Challenges remain in all aspects, i.e., sample preparation, imaging, and massive data processing. Tissue clearing in combination with light-sheet microscopy is particularly suited to large-volume imaging, yet at compromised spatial resolution [14, 25, 26]. Clearing of human brain tissue is also notoriously challenging, as the penetration depth of chemicals is strongly limited by the dense and opaque molecules that accumulate over decades of aging [27]. The block-face imaging methods, e.g., serial two-photon tomography (STPT) [15, 28] and micro-optical sectioning tomography [16, 29, 30], do not rely on tissue clearing but normally take several days to image an entire mouse brain at cellular resolution, which makes them hard to scale to the much larger NHP or human brains. In addition, processing and analyzing petabytes (PB) of volumetric image data from primate brains is another critical challenge. High-throughput systems are urgently needed, not only for imaging itself, but also for the sample preparation and data analysis that underpin brain-wide neuroscience at single-cell resolution.

Smart imaging systems emerge in this context (Fig. 1), in which data acquisition and analysis are intensively supported by artificial intelligence, software platforms, and computational facilities. All three aspects are indispensable in a smart imaging system and strongly depend on each other. Data acquisition can be improved through automation, which minimizes error-prone manual operations and human intervention. Information extracted from the images can in turn guide data acquisition to reach better image quality at lower cost. High-quality data simplifies, and very likely speeds up, the subsequent analysis and helps guarantee that valid and meaningful biological information is extracted. In addition, tools and platforms that support massive data management and analysis lower the barrier to carrying out analyses and developing new methods. Moreover, well-designed data sharing boosts worldwide cooperation; large-scale data annotation produces critical resources for method development and validation in artificial intelligence; and data visualization provides the human–machine interaction needed for annotation, quality control, proofreading, etc. Together, such tools and platforms can substantially speed up the advance of smart systems.

Fig. 1 Key components to build up a smart imaging system in brain-wide neuroscience at the single-cell level

With the above considerations, we here survey the topics and techniques, from sample preparation to data mining, that we consider critical to building smart imaging systems. We start with data acquisition, including sample preparation and optical imaging. This is followed by a discussion of data processing techniques, ranging from image preprocessing to data mining. The tools, platforms, and databases relevant to these tasks are summarized afterward. We briefly touch on the growing waves of deep learning and cloud computing in neuroscience before concluding. Note that we do not strictly limit the discussion to neuroscience; we also include techniques used in other biological studies, in the hope of providing a broader view of smart imaging for neuroscience.

2 Sample preparation

Sample preparation is perhaps the first major challenge in imaging large brains. It covers a variety of topics, including sparse labeling, tissue clearing, and tissue expansion. Sparse labeling plays an essential role in making the morphology of individual neurons clearly visible under an optical microscope. A variety of labeling methods and tracers are available and applied in combination with tissue clearing and expansion in brain imaging [11, 31]. Among these, genetic engineering and virus transfection provide rich options for neuron labeling in animal models [9, 11, 32,33,34]. Simultaneous multi-color labeling and tracing of neurons is enabled by Brainbow AAVs [35].

Nonetheless, the labeling approaches mentioned so far are not applicable to human-brain analysis owing to ethical limitations. Injecting dye, plasmid, or other markers directly into neurons is considered a powerful method in this regard. Despite its limitations in accessing long-range projections, dye injection in combination with surgical biopsies enables the investigation of human neurons of various cell types and from different brain regions based on local morphologies. To do so, the markers need to be injected into the cell body and transported to the axon terminals, for example, by pressure injection or iontophoretic injection [36]. This provides a good way to target neurons of interest with sparsity, flexibility, and specificity. Yet the procedure is tedious, time consuming, and requires well-trained personnel. ‘Smart’ systems have been reported to automate and speed up the procedure. For instance, ‘SmartACT’ automatically guides the pipette to the target cell [37]; ‘Autopatcher IG’ automates pipette calibration, cell-body targeting, and the control of pipette movement [38]; and a robotic system has been applied to fill and move the pipette and to break into neurons [39]. Lately, a deep-learning-based system was built to detect neurons in vitro and guide the pipette to approach, attach to, and break into them [40].

Following sparse labeling, tissue clearing aims to change the optical properties of the tissue to increase the penetration depth of optical imaging [12,13,14]. This is typically achieved by replacing the lipid and water content of the tissue with a medium whose refractive index matches that of the cellular content [12, 13]. In combination with light-sheet microscopy, tissue clearing is playing a growing role in brain-wide imaging. Tissue expansion uses swellable polyelectrolytes to physically separate closely located biological components [13, 41,42,43]. It is compatible with many labeling and clearing approaches and allows nanoscale imaging with conventional microscopes, substantially improving the effective spatial resolution of optical imaging. While both tissue clearing and tissue expansion have proven successful in neuroscience, their performance is a combined effect of the sample properties and the imaging setup. Designing and optimizing protocols for a specific study is difficult, and to the best of our knowledge no smart systems have yet been adopted for these procedures. We refer readers to the literature [42, 44], which presents critical guidance on the pitfalls of, and approaches to, optimal tissue clearing and expansion.

3 Optical microscopy

In combination with labeling approaches, optical imaging constitutes one of the most important toolsets in neuroscience, with unique advantages in spatial resolution over other imaging modalities (Fig. 2) [45]. Optical imaging has witnessed multiple technical innovations (see Table 1) [45,46,47] and is continuously empowered by structured-light illumination [48], digitally scanned LSFM [49], lattice LSFM [50, 51], 2P-LSFM [52], etc. Many other techniques have been reported as well, for instance, to increase speed via extended-depth-of-field (EDOF) microscopy [53,54,55] or customized chips [56,57,58], to improve spatial resolution via acousto-optic modulators or spatial light modulators [59, 60], to improve axial accuracy via depth sensors [61], and to compensate for brain movement with acousto-optic lens (AOL) 3D random-access pointing and scanning [62]. Aside from these hardware adaptations, AI techniques are widely adopted to automate and speed up the imaging procedure as well as to improve image quality [63,64,65,66,67,68], for instance, to automatically adjust the illumination in real time and minimize the need for parameter tuning [64], to correct aberrations using wavefront sensing [65], and to tackle light-sheet defocus with adaptive refocusing [68]. In addition, content-aware imaging [63, 69, 70] and stitching [71, 72] have been developed to suppress sample degradation, speed up imaging, and enlarge sample coverage. These advances have laid a solid foundation for optical microscopy to be applied and adapted for brain-wide imaging with single-cell resolution, as summarized in the following.

Fig. 2 Different imaging modalities with respect to spatial resolution

Table 1 Milestone innovations of imaging techniques

For applications in neuroscience, many efforts have been made to balance resolution, volume, and speed. In combination with sparse labeling and physical sectioning, block-face systems have enabled the 3D mapping of individual neurons across brain areas [83]. In particular, micro-optical sectioning tomography (MOST) achieves sub-micron imaging of an entire mouse brain [29]. Serial two-photon tomography (STPT) achieves high-throughput fluorescence imaging of the entire brain by acquiring an optical section from each 50-µm-thick tissue layer [28]. Fluorescence micro-optical sectioning tomography (fMOST) achieves micron-resolution imaging of the entire mouse brain after fluorescent labeling and enables continuous tracing of neuronal circuits [84]. In combination with two-photon fluorescence and an acousto-optic deflector (AOD), the high-throughput two-photon MOST (2p-MOST) system achieves entire-brain imaging at ~0.32 µm × 0.32 µm × 1 µm resolution within one week [30]. The brain-wide positioning system (BPS) adopts multi-channel wide-field large-volume tomography (WVT) and acquires both labelled neural structures and the cytoarchitecture reference simultaneously in the same brain [16]. The BPS allows precise localization of individual neurons and images an entire brain in 3 days at ~0.32 µm × 0.32 µm × 2 µm resolution. Lately, a significant improvement in penetration depth and background suppression was achieved in the HD-fMOST system via a line-illumination modulation (LiMo) technique [85]. In addition to the block-face systems, light-sheet illumination offers the unique advantage of high imaging throughput. Based on light-sheet microscopy, entire-brain imaging at cellular resolution is achieved within a few hours for mouse brains [86, 87] and within about 100 h for monkey brains [17]. In combination with online image analysis, the sparsity of neuronal structures is exploited to speed up the imaging procedure [70, 71, 88]. As a new trend, miniaturized imaging setups are being developed, bringing new opportunities for in vivo brain science [89].
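To put the throughput challenge in perspective, a back-of-the-envelope estimate of raw data volume can be made from the voxel sizes quoted above (a rough sketch; the brain bounding boxes below are illustrative assumptions, not values from the cited studies):

```python
# Rough estimate of raw data volume for whole-brain imaging at the
# sub-micron voxel sizes quoted above. Brain bounding boxes are
# illustrative assumptions, not measurements from the cited studies.
voxel = (0.32e-6, 0.32e-6, 1.0e-6)   # voxel size in metres (x, y, z)
bytes_per_voxel = 2                  # 16-bit detector data

brains = {
    "mouse (~10 x 8 x 6 mm)": (10e-3, 8e-3, 6e-3),
    "macaque (~80 x 60 x 50 mm)": (80e-3, 60e-3, 50e-3),
}

for name, (x, y, z) in brains.items():
    n_voxels = (x / voxel[0]) * (y / voxel[1]) * (z / voxel[2])
    terabytes = n_voxels * bytes_per_voxel / 1e12
    print(f"{name}: {n_voxels:.2e} voxels, ~{terabytes:,.0f} TB per channel")
```

Under these assumptions a mouse brain already occupies on the order of 10 TB per channel and a macaque brain reaches the petabyte range, which is why the data handling discussed in the following sections is as critical as the imaging itself.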

4 Image preprocessing

Optical imaging is a combined result of the optical properties of the sample and of the setup. The influence of the illumination, detector, lenses, etc. introduces unavoidable yet significant contamination into the raw images. Removing such degradation is one of the major tasks of image preprocessing. To this end, many image-enhancement methods have been developed, from filtering and rescaling to image transforms, for denoising [90,91,92], uneven-illumination correction [93,94,95,96], deconvolution [97,98,99], etc. Many of these approaches, however, are not directly applicable to brain and neuron images, which contain rich tubular structures. In this regard, different approaches have been proposed that use features of tubular structures, characterized either by the eigenvalues of local image derivatives or by the responses of multi-directional filters [100,101,102,103]. The content-aware neuron image enhancement (CaNE) method [104], in particular, employs the properties of tubular structures in combination with the gradient sparsity of neuron images. In addition, exploiting the sparsity of neurite signals, image enhancement has been achieved by removing the background signal of autofluorescence, substantially improving subsequent neuron tracing [105].
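As a minimal illustration of eigenvalue-based tubular enhancement (a generic sketch using the Frangi vesselness filter from scikit-image on synthetic data; this is not the CaNE method itself):

```python
import numpy as np
from skimage.filters import frangi, gaussian

# Synthetic noisy stack containing one bright, curved "neurite" (toy data).
vol = np.zeros((32, 128, 128), dtype=np.float32)
cols = np.arange(128)
vol[16, (48 + 10 * np.sin(cols / 12)).astype(int), cols] = 1.0
vol = gaussian(vol, sigma=1) + 0.05 * np.random.randn(*vol.shape)

# The Frangi filter scores each voxel by how "tube-like" the local Hessian
# eigenvalues are, suppressing the noisy background and enhancing
# curvilinear structures such as neurites and vessels.
enhanced = frangi(vol, sigmas=(1, 2, 3), black_ridges=False)
print(enhanced.shape, float(enhanced.max()))
```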

Stitching is another image-preprocessing task encountered in brain imaging. Considering the large volume of brains and the typically long-range projections of a neuron, imaging of multiple tiles, and usually also of multiple tissue stacks, is needed to capture complete features of interest. This produces terabyte- and even petabyte-sized data sets composed of many unaligned volumetric image tiles. Stitching is required to reconstruct a complete volumetric image for further analysis. Therein, the globally optimal placement of the image tiles is determined according to predefined quantities, such as the normalized cross-correlation or mutual information. For brain-wide imaging, approaches have been proposed specifically to handle the massive data volume, including TeraStitcher [106] and BigStitcher [107] for overlapping tiles and custom software for non-overlapping tiles [17, 108]. Alternatively, instead of stitching the image tiles, NeuroStitcher assembles the neuron fragments traced from individual image tiles [109].
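To illustrate the pairwise step of tile placement, the sketch below estimates the offset between two overlapping tiles by phase correlation (using scikit-image; an actual stitching pipeline such as TeraStitcher or BigStitcher additionally solves a global optimization over all tile pairs):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
scene = rng.random((256, 256))        # synthetic textured "sample"
tile_a = scene[:200, :200]            # two overlapping tiles cut from it,
tile_b = scene[40:240, 30:230]        # offset by (40, 30) pixels

# Phase correlation finds the translation relating the two tiles from the
# peak of their cross-power spectrum.
shift, error, _ = phase_cross_correlation(tile_a, tile_b)
print(shift)   # magnitude ~ (40, 30); the sign follows skimage's convention
```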

Another image-preprocessing issue is re-slicing/reformatting to support the processing of large data volumes. Here the complete volume is reformatted into blocks, usually with hierarchical resolutions. Three typical data structures developed for this purpose are depicted in Fig. 3 [110]. TeraFly [111] combines a pyramidal image organization with a ‘mean and shift’ strategy to create smooth 3D exploration similar to ‘Google Earth’. BigDataViewer [112] adopts a caching mechanism for faster image reading. TDat [110] reads only cuboid data to control memory consumption and speeds up data reformatting via distributed computation.

Fig. 3 Principles of different data reformatting approaches (adapted from Ref. [110]). BigDataViewer: the green blocks in the original space represent the data to be loaded into memory; one slice is read into memory at a time and cached. TDat: after recursively down-sampling the original data, only a cuboid is read into memory and split into 3D blocks. Vaa3D-TeraFly: the data is read once and transformed into a multi-resolution representation
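The sketch below illustrates the idea shared by these formats: split the volume into fixed-size 3D blocks at several down-sampled resolutions so that a viewer only loads the blocks it needs (illustrative code only, not the actual TeraFly, TDat, or BigDataViewer file layouts):

```python
import numpy as np

def build_pyramid(vol, levels=3):
    """Return a list of volumes, each 2x down-sampled (by averaging) from the previous."""
    pyramid = [vol]
    for _ in range(levels - 1):
        v = pyramid[-1]
        z, y, x = (s - s % 2 for s in v.shape)   # crop to even sizes
        v = v[:z, :y, :x].reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid

def iter_blocks(vol, block=(64, 64, 64)):
    """Yield (origin, sub-volume) pairs so each block can be stored and loaded independently."""
    for z in range(0, vol.shape[0], block[0]):
        for y in range(0, vol.shape[1], block[1]):
            for x in range(0, vol.shape[2], block[2]):
                yield (z, y, x), vol[z:z + block[0], y:y + block[1], x:x + block[2]]

vol = np.random.rand(256, 256, 256).astype(np.float32)   # stand-in for a brain volume
for level, v in enumerate(build_pyramid(vol)):
    print(f"level {level}: shape {v.shape}, {sum(1 for _ in iter_blocks(v))} blocks")
```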

Last but not least, registration is needed to align whole-brain images from their respective coordinate systems to a standard brain space. This enables cross-brain and cross-modality analysis, as well as analyses relative to brain regions and projection patterns. A common coordinate framework (CCF) for the mouse brain was built in this regard by co-aligning 1675 individual whole-brain data sets from STP tomography [113]; therein 43 cortical areas, 330 subcortical gray matter areas, 82 fiber tracts, and 8 ventricles and associated structures were all delineated natively in 3D. Registration methods have been extensively investigated to map different brain data onto the reference atlas, including aMAP [114], ClearMap [115], qBrain [116], WholeBrain [117], SyN [118], etc. The procedure typically involves feature/landmark detection and image transformation [119]. As simple as it sounds, challenges exist, especially for landmark detection, considering the variations in brain anatomy and the intensity diversity caused by different sample preparation and imaging procedures. For this reason, a coherent landmark mapping (CLM) method was adopted to coherently deform the landmark points in the target image to find their best matches in the reference image [120]. The robustness of the registration is further enhanced by taking into account brain regions segmented by a deep neural network. Nonetheless, the registration still requires semiautomatic refinement. Fully automatic registration, especially for TB- and PB-scale data, remains an open problem.
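As a minimal sketch of the ‘landmarks plus transformation’ principle, the example below fits an affine transform to matched landmark pairs by least squares and resamples a volume into atlas space with SciPy (the landmark coordinates are synthetic; the methods cited above add deformable refinement on top of such an initialization):

```python
import numpy as np
from scipy.ndimage import affine_transform

# Matched landmarks (synthetic): points detected in atlas space and their
# correspondences found in the subject brain, as (z, y, x) coordinates.
pts_atlas = np.array([[10, 12, 15], [40, 22, 18], [25, 60, 30],
                      [55, 48, 70], [12, 70, 66], [64, 30, 50]], dtype=float)
A_true = np.diag([1.1, 0.95, 1.05])
t_true = np.array([3.0, -2.0, 5.0])
pts_subject = pts_atlas @ A_true.T + t_true        # simulated correspondences

# Least-squares fit of: subject ≈ A @ atlas + t
X = np.hstack([pts_atlas, np.ones((len(pts_atlas), 1))])
params, *_ = np.linalg.lstsq(X, pts_subject, rcond=None)
A, t = params[:3].T, params[3]

# Resample the subject volume into atlas space: affine_transform maps each
# output (atlas) coordinate o to the input coordinate A @ o + t, which is
# exactly the fitted atlas-to-subject mapping.
subject_vol = np.random.rand(80, 80, 80).astype(np.float32)   # stand-in image
registered = affine_transform(subject_vol, A, offset=t, output_shape=(80, 80, 80))
print(np.round(A, 3), np.round(t, 2))
```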

5 Data mining

The aim of data mining is to extract information of interest from the image data and ultimately to draw biologically meaningful conclusions. In neuroscience, and for investigations of neural circuits and connectivity in particular, the major data-mining tasks include neuron tracing and morphology analysis.

The shape of a neuron both determines and is constrained by its function and its connections with other neurons. Understanding and analyzing neuron morphologies therefore plays a fundamental role in neuroscience, and neuron tracing is a critical step in this respect [121,122,123]. Tracing aims to create a digital reconstruction of the soma, dendrites, axon, and other sub-cellular components (e.g., spines and boutons [124, 125]) of a neuron. The traced morphology is typically represented as a connected tree with the soma as the root and saved as an SWC file, in which each row gives the type, coordinates, radius, and parent of a node. Promoted by the DIADEM challenge [126], considerable efforts have been made over the past years toward automated neuron tracing with improved speed, accuracy, and reproducibility. Existing algorithms are normally composed of elemental procedures including skeletonization [127, 128], seed generation [129], graph algorithms [130], deformable curves [131], image transforms such as the gradient vector field [127, 130], or learning-based approaches with annotated training data [129, 132]. The tracing can be obtained either from the information of the whole image [127, 133] or by exploring the image within the neighborhood of relevant structures [134, 135]; these are categorized as global and local methods, respectively. On top of the basic algorithms, mechanisms have also been introduced for large-scale neuron tracing, such as TReMap [136], UltraTracer [137], and G-Tree [138]. A comprehensive summary of existing algorithms is available in Ref. [123]. With the goal to “define and advance the state of the art of single neuron reconstruction, develop a tool-kit of standardized reconstruction protocols, analyze neuron morphologies, and establish a data resource for neuroscience”, the BigNeuron project [20] was jointly launched by several well-known brain research institutions. Therein numerous algorithms [130,131,132, 136, 139,140,141,142] were collected and evaluated with the ultimate goal of reaching standard and unambiguous neuron tracings. Nonetheless, automatic neuron tracing is still far from mature; it is hardly possible to trace single neurons without any human intervention at the current stage. Manual inspection and correction are always necessary after automatic tracing to remove errors such as missing arbors, loops, and trifurcations. Last but not least, the correctness of a neuron reconstruction is hard to verify given the varying signal-to-noise ratio or inadequate spatial resolution of the imaging.
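To make the SWC representation concrete, the sketch below parses the standard columns (id, type, x, y, z, radius, parent) and computes the total cable length of a reconstruction (a generic reader, not tied to any specific tracing tool):

```python
from dataclasses import dataclass

@dataclass
class SwcNode:
    id: int
    type: int
    x: float
    y: float
    z: float
    radius: float
    parent: int

def read_swc(path):
    """Parse an SWC file into {node_id: SwcNode}; lines starting with '#' are comments."""
    nodes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            i, t, x, y, z, r, p = line.split()[:7]
            nodes[int(i)] = SwcNode(int(i), int(t), float(x), float(y),
                                    float(z), float(r), int(p))
    return nodes

def total_length(nodes):
    """Sum of Euclidean distances from every node to its parent (the root has parent -1)."""
    total = 0.0
    for n in nodes.values():
        if n.parent in nodes:
            p = nodes[n.parent]
            total += ((n.x - p.x) ** 2 + (n.y - p.y) ** 2 + (n.z - p.z) ** 2) ** 0.5
    return total
```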

Following neuron tracing, comprehensive analysis of the traced morphometry data is critical to unravel the spatial properties of neurons and networks at multiple scales and to understand the mechanisms behind nervous systems [143,144,145]. Many techniques have been developed for this aim [146, 147]. Among these, morphological grouping has been widely applied, supported by similarity analyses [148,149,150] and clustering methods such as UMAP [151], K-means [152], and HCA [153]. In particular, morphological features such as the L-measure have been defined to quantify the morphological properties of neurons [154]. These features, in combination with machine learning and statistical analysis, have been applied to cell typing, cross-brain-area correlation analysis, etc. [3, 155,156,157]. With the help of a standard brain space, neurons have also been classified according to their projection patterns [158]. Furthermore, a sequence representation was proposed to characterize the topology of a neuron and has been successfully used for neuron classification [159, 160]. As a growing trend, morphological analysis is being combined with other data modalities, such as genomics, to achieve a better understanding of brain function, for instance in cross-species comparison [161]. Nonetheless, morphological analysis is still in its infancy: how to characterize morphology remains a key bottleneck, and there is still a long way to go in adopting more techniques from statistics and machine learning.
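A minimal sketch of morphology-based grouping is given below: hand-crafted features stand in for L-measure-style morphometrics and are clustered with scikit-learn (the feature values are synthetic; real analyses use much richer feature sets and often UMAP or hierarchical clustering instead of K-means):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# One row per reconstructed neuron; columns are simple morphometric features,
# e.g. [total length (um), bifurcation count, max path distance (um), mean branch order].
group_a = rng.normal([2000, 30, 400, 3], [300, 5, 60, 0.5], size=(50, 4))
group_b = rng.normal([9000, 120, 1500, 6], [900, 15, 200, 0.8], size=(50, 4))
features = np.vstack([group_a, group_b])

# Standardize so that large-valued features do not dominate, then cluster.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```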

6 Tools, platforms, and database

The advances of smart systems have largely benefited from open-source tools and platforms, which enable researchers to reuse existing techniques and to easily scale up or develop custom analysis strategies. This is not limited to the many programming libraries, such as VTK, ITK, and OpenCV for general image analysis and Caffe, Keras, TensorFlow, PyTorch, and Theano for deep learning [162, 163]. Integrated platforms are also being developed [164,165,166,167,168,169] to lower the barrier of data analysis; their featured functions and URLs are listed in Table 2. Along with commercial software including Amira [170], Imaris (Bitplane Scientific Software), Neurolucida [171], and Image Pro (MediaCybernetics), these platforms have enabled biologists and neuroscientists to conduct data analysis in an easier and friendlier way. In particular, the TREES toolbox [172] provides a great platform for morphological analysis of individual neurons in isolation. Natverse [173] is a suite of R packages for large-scale neuronal data processing, with functions ranging from local/remote data import, visualization, and data transformation across template spaces to clustering and graph-theoretic analysis of neuronal branching. Vaa3D, as a platform for big biological data analysis and computation, integrates many functions for data annotation, visualization, registration, neuron reconstruction, and morphological analysis [111, 174, 175]. Tools for massive data storage, visualization, annotation, and indexing are being constructed worldwide as well. In addition to the hierarchical data structures of BigDataViewer [112], TeraFly [111], and TDat [110], for example, the Open Microscopy Environment’s Remote Objects (OMERO) and the Bio-Image Semantic Query User Environment (BISQUE) [176, 177] facilitate data annotation and content-based data searching. Tools such as VirtualFinger [178] and TeraVR [175] boost manual neuron tracing, data annotation, and proofreading by creating an intuitive and immersive environment. In addition to these software platforms, public data repositories are being released through collective worldwide efforts. The recently released Morphohub database [179], for example, enables petabyte-scale multi-modal morphometry data storage, sharing, and analysis. Image databases in the field of brain and neuroscience are summarized in Table 3, covering various species from mouse to human and non-human primates. With the growing awareness of the need to share and document data, public databases are starting to play a critical role in the development and validation of new methods as well as in the comparison of different analysis methods.

Table 2 Typical open-source platforms for data processing
Table 3 Open databases in neuroscience, covering multiple species as highlighted in bold

7 Growing trends

The last two decades have witnessed the prosperity of deep learning in biological data analysis and neuroscience, supported by a variety of network architectures. Among these, the convolutional neural network (CNN) and the recurrent neural network (RNN) are the most commonly used: CNNs are mostly employed for image analysis and computer vision, while RNNs are more often applied to time-series problems. Many networks of these two kinds have been developed in the past several years, e.g., AlexNet [191], ResNet [192], Inception [193, 194], DenseNet [195], VGG [196], DCGAN [197], and GoogLeNet [198] for CNNs, and LSTM [199], Bi-RNN [200], and GRU [201] for RNNs. Typically, a CNN is composed of a series of layers including convolution (followed by activation and normalization), pooling (for sub-sampling), and fully connected layers (Fig. 4a). The flatten and fully connected layers are removed in fully convolutional networks (FCN), which is particularly useful when the size of the input data varies. ResNet, as a typical CNN, is made up of residual blocks (Fig. 4b), in which a skip connection is adopted to mitigate the vanishing-gradient problem and hence enables deeper networks to be trained. The LSTM, as a typical RNN, consists of memory-cell blocks (Fig. 4c) regulated by input, forget, and output gates, which helps retain knowledge of earlier states and partly addresses the vanishing-gradient problem.

Fig. 4 Building blocks of a CNN, b ResNet, and c LSTM
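As a minimal illustration of the residual block sketched in Fig. 4b, a generic PyTorch example (not the exact block of the original ResNet publication):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = ReLU(F(x) + x): the identity skip connection lets gradients bypass F."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # skip connection

x = torch.randn(2, 32, 64, 64)                # (batch, channels, height, width)
print(ResidualBlock(32)(x).shape)             # torch.Size([2, 32, 64, 64])
```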

Deep learning brings vast opportunities. Image preprocessing is increasingly achieved via deep learning [202]; examples include CARE [203] for content-aware restoration and resolution enhancement of wide-field images [204, 205] and the VCD network for artifact-free volumetric imaging with uniform spatial resolution at video rate on a light-field microscope [206]. Unsupervised networks such as N2V [207], PN2V [208], Noise2Noise [209, 210], Noise2Self [211], and CycleGAN [212] have been shown to compete with supervised networks in denoising tasks. Promising results of deep learning have also been demonstrated for registration [120, 213, 214]. 3D segmentation is well handled by 3D neural networks such as CDeep3M [215] and Cellpose [216]. Rapid 3D neuron tracing was achieved via a neural network based on a flooding algorithm for 3D image segmentation [217]. The DeepNeuron [218] and SmartTracing [132] networks have proven promising for neuron reconstruction. Advances in natural language processing, such as the transformer [219, 220], together with the sequence representation of neuron morphologies [159, 160], provide a promising approach to cell typing, particularly for full-neuron-morphology classification.
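As a minimal sketch of the self-supervised ‘blind-spot’ idea behind N2V-style denoising (a toy masking scheme and network, not the published implementations):

```python
import torch
import torch.nn as nn

def blind_spot_mask(batch, frac=0.01):
    """Replace a random subset of pixels with values from a shifted copy, so the
    network cannot simply learn the identity at those positions (N2V-style);
    the published methods sample random neighbours instead of a fixed shift."""
    mask = torch.rand_like(batch) < frac
    neighbours = torch.roll(batch, shifts=(1, 1), dims=(2, 3))
    return torch.where(mask, neighbours, batch), mask

# Toy fully convolutional denoiser standing in for a U-Net.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy = torch.rand(8, 1, 64, 64)               # stand-in for noisy image patches
for step in range(200):
    masked, mask = blind_spot_mask(noisy)
    pred = model(masked)
    loss = ((pred - noisy)[mask] ** 2).mean()  # supervise only the masked pixels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```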

Cloud computing is another product of fast-advancing computational resources and the internet [215, 221]. It allows users to access and share data, applications, and infrastructure from a remote location. The most popular cloud service models are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure/Hardware as a Service (IaaS/HaaS), which differ in how much the user needs to manage (see Table 4) [222]. With SaaS, users access software from a third-party provider via a browser, without complex installation or hardware management. PaaS provides platforms on the server so that users can develop web applications without installing any tools. With IaaS, the provider shares the IT infrastructure with users, removing the need to purchase and maintain it.

Table 4 User management required in different models of cloud computing

Cloud computing has made data sharing, annotation, and analysis much easier, with smoother multi-user interaction and remote, worldwide cooperation [222]. Moreover, server-based platforms have lowered the barriers to model construction, distribution, and re-training. For example, cloud-based deep neural networks are being developed to free users from the tedious configuration of deep-learning environments [215]. The interactive machine-learning platform ‘ilastik’ efficiently combines annotation and model training; it thus allows model training to begin with a small amount of annotated data, with more annotations added interactively during training [223]. The server-less web application ‘ImJoy’ works across different systems and on both desktop and mobile devices [224]; it provides an easy-to-use data-analysis tool for visualization, classification, deep learning, etc., with all functions provided as independent plugins that can be built in different programming languages. ‘NeuroCAAS’ is a cloud-based platform for data analysis in neuroscience [221]: through a drag-and-drop interface, users simply choose and configure the algorithms available on the platform, and the requested analysis is then automatically deployed as a ‘blueprint’ and performed by the platform. Taking advantage of all these techniques, a ‘laboratory as a server’, in which researchers control and share imaging equipment remotely, may not be far away. Together, such cloud-based platforms will bring vast opportunities to facilitate artificial intelligence and smart systems, and further promote the exploration of brain-wide neuroscience at the single-cell level.

Last but not least, the technologies discussed so far are increasingly being combined into integrated systems, where they play a larger role than they can individually. Therein, the computational technologies, resources, and internet can be regarded as the muscles and blood vessels on top of the hardware skeleton, smoothly combining data acquisition, management, and analysis [17]. With minimal human intervention, such integrated systems can intrinsically reduce human error and improve throughput. In addition, the closed loop from data acquisition to analysis helps substantially to improve the performance of the system. More importantly, an integrated system embodies the collective wisdom of experts from multiple disciplines (e.g., neuroscientists, physicists, computer scientists, etc.) and from around the world, helping to ensure ‘optimal and correct’ output. With carefully designed pipelines, integrated systems are therefore becoming another trend in the current era of big science and will play an essential role in brain research. Several integrated systems combining modules such as optical sectioning, image acquisition, data reconstruction, and analysis have been developed for brain research over the past years [16, 17, 87, 88]. This has led to many exciting discoveries and has particularly leveraged studies on NHP and human brains [1, 2, 17]. With continuously advancing AI technologies, we expect more smart integrated systems to be established, pushing brain research toward new milestones.

8 Conclusions

Conducting brain-wide imaging at single-cell resolution for non-human primates and humans has become an important task in neuroscience. It is expected to yield discoveries similar to those made in rodents and, ultimately, a deeper understanding of the structures and connectivity of human brains. High-throughput imaging systems are in urgent demand considering the large brain sizes, and smart systems empowered by AI techniques and computational resources show huge potential to this end. In this review, we surveyed the AI techniques that have been or can be applied in neuroscience, ranging from sample preparation to image acquisition and analysis. We also discussed the software tools and databases that can facilitate the development of AI techniques and smart systems. By absorbing more AI techniques and taking advantage of powerful computational resources, such as deep learning and cloud computing, smart systems supporting ‘super’ high-throughput imaging and scalable massive-data processing will play an invaluable role in bringing neuroscience to deeper and broader knowledge of brain structures and connectivity.

Data availability

Data sharing is not applicable to this review article as no new data were created or analyzed in this study.

References

  1. Amunts K et al (2013) BigBrain: an ultrahigh-resolution 3D human brain model. Science 340(6139):1472–1475

  2. Amunts K et al (2020) Julich-Brain: a 3D probabilistic atlas of the human brain’s cytoarchitecture. Science 369(6506):988–992

  3. Gouwens NW et al (2019) Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nat Neurosci 22(7):1182–1195

  4. Tasic B et al (2018) Shared and distinct transcriptomic cell types across neocortical areas. Nature 563(7729):72–78

  5. Alivisatos AP et al (2012) The brain activity map project and the challenge of functional connectomics. Neuron 74(6):970–974

  6. Ecker JR et al (2017) The BRAIN initiative cell census consortium: lessons learned toward generating a comprehensive brain cell atlas. Neuron 96(3):542–557

  7. Kandel ER et al (2013) Neuroscience thinks big (and collaboratively). Nat Rev Neurosci 14(9):659–664

  8. Poo M-M et al (2016) China brain project: basic neuroscience, brain diseases, and brain-inspired computing. Neuron 92(3):591–596

  9. Rotolo T et al (2008) Genetically-directed, cell type-specific sparse labeling for the analysis of neuronal morphology. PLoS ONE 3(12):e4099

  10. Madisen L et al (2015) Transgenic mice for intersectional targeting of neural sensors and effectors with high specificity and performance. Neuron 85(5):942–958

  11. Graybuck LT et al (2021) Enhancer viruses for combinatorial cell-subclass-specific labeling. Neuron 109(9):1449-1464.e13

  12. Ertürk A et al (2012) Three-dimensional imaging of solvent-cleared organs using 3DISCO. Nat Protoc 7(11):1983–1995

  13. Murakami TC et al (2018) A three-dimensional single-cell-resolution whole-brain atlas using CUBIC-X expansion microscopy and tissue clearing. Nat Neurosci 21(4):625–637

  14. Ueda HR et al (2020) Whole-brain profiling of cells and circuits in mammals by tissue clearing and light-sheet microscopy. Neuron 106(3):369–387

  15. Economo MN et al (2016) A platform for brain-wide imaging and reconstruction of individual neurons. Elife 5:e10566

  16. Gong H et al (2016) High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level. Nat Commun 7(1):1–12

  17. Xu F et al (2021) High-throughput mapping of a whole rhesus monkey brain at micrometer resolution. Nat Biotechnol 39(12):1521–1528

  18. Osten P, Margrie TW (2013) Mapping brain circuitry with a light microscope. Nat Methods 10(6):515–523

  19. Winnubst J et al (2019) Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. Cell 179(1):268-281.e13

  20. Peng H et al (2015) BigNeuron: large-scale 3D neuron reconstruction from optical microscopy images. Neuron 87(2):252–256

  21. Belmonte JCI et al (2015) Brains, genes, and primates. Neuron 86(3):617–631

  22. Bakken TE et al (2021) Comparative cellular analysis of motor cortex in human, marmoset and mouse. Nature 598(7879):111–119

  23. DeFelipe J (2015) The anatomical problem posed by brain complexity and size: a potential solution. Front Neuroanat 9:104

  24. Lin MK et al (2019) A high-throughput neurohistological pipeline for brain-wide mesoscale connectivity mapping of the common marmoset. Elife 8:e40042

  25. Yang B et al (2014) Single-cell phenotyping within transparent intact tissue through whole-body clearing. Cell 158(4):945–958

  26. Susaki EA et al (2014) Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell 157(3):726–739

  27. Zhao S et al (2020) Cellular and molecular probing of intact human organs. Cell 180(4):796-812.e19

  28. Ragan T et al (2012) Serial two-photon tomography for automated ex vivo mouse brain imaging. Nat Methods 9(3):255–258

  29. Li A et al (2010) Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science 330(6009):1404–1408

  30. Zheng T et al (2013) Visualization of brain circuits using two-photon fluorescence micro-optical sectioning tomography. Opt Express 21(8):9839–9850

  31. Haberl MG et al (2015) An anterograde rabies virus vector for high-resolution large-scale reconstruction of 3D neuron morphology. Brain Struct Funct 220(3):1369–1379

  32. Wouterlood FG et al (2014) A fourth generation of neuroanatomical tracing techniques: exploiting the offspring of genetic engineering. J Neurosci Methods 235(1):331–348

  33. Veldman MB et al (2020) Brainwide genetic sparse cell labeling to illuminate the morphology of neurons and glia with cre-dependent MORF mice. Neuron 108(1):111-127.e6

  34. Ibrahim LA et al (2021) Sparse labeling and neural tracing in brain circuits by STARS strategy: revealing morphological development of type II spiral ganglion neurons. Cereb Cortex 31(5):2759–2772

  35. Cai D et al (2013) Improved tools for the Brainbow toolbox. Nat Methods 10(6):540–547

  36. Kobbert C et al (2000) Current concepts in neuroanatomical tracing. Prog Neurobiol 62(4):327–351

  37. Long B et al (2015) 3D image-guided automatic pipette positioning for single cell experiments in vivo. Sci Rep 5(1):1–8

  38. Wu Q, Chubykin AA (2017) Application of automated image-guided patch clamp for the study of neurons in brain slices. JoVE 125:e56010

  39. Holst GL et al (2019) Autonomous patch clamp robot for functional characterization of neurons in vivo: development and application to mouse visual cortex. J Neurophysiol 121(6):2341–2357

  40. Koos K et al (2021) Automatic deep learning-driven label-free image-guided patch clamp system. Nat Commun 12(1):936

  41. Bürgers J et al (2019) Light-sheet fluorescence expansion microscopy: fast mapping of neural circuits at super resolution. Neurophotonics 6(1):015005

  42. Wassie AT, Zhao Y, Boyden ES (2019) Expansion microscopy: principles and uses in biological research. Nat Methods 16(1):33–41

  43. Chen F, Tillberg PW, Boyden ES (2015) Optical imaging. Expansion microscopy. Science 347(6221):543–548

  44. Weiss KR et al (2021) Tutorial: practical considerations for tissue clearing and imaging. Nat Protoc 16(6):2732–2748

  45. Tichauer KM et al (2015) Quantitative in vivo cell-surface receptor imaging in oncology: kinetic modeling and paired-agent principles from nuclear medicine and optical imaging. Phys Med Biol 60(14):R239

  46. Balas C (2009) Review of biomedical optical imaging—a powerful, non-invasive, non-ionizing technology for improving in vivo diagnosis. Meas Sci Technol 20(10):104020

  47. Luker GD, Luker KE (2008) Optical imaging: current applications and future directions. J Nucl Med 49(1):1–4

  48. Fiolka R et al (2012) Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination. Proc Natl Acad Sci 109(14):5311–5315

  49. Zong W et al (2015) Large-field high-resolution two-photon digital scanned light-sheet microscopy. Cell Res 25(2):254–257

  50. Chen B-C et al (2014) Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346(6208):1257998

  51. Tsai Y-C et al (2020) Rapid high resolution 3D imaging of expanded biological specimens with lattice light sheet microscopy. Methods 174:11–19

  52. Truong TV et al (2011) Deep and fast live imaging with two-photon scanned light-sheet microscopy. Nat Methods 8(9):757–760

  53. Olivier N et al (2009) Two-photon microscopy with simultaneous standard and extended depth of field using a tunable acoustic gradient-index lens. Opt Lett 34(11):1684–1686

  54. Dufour P et al (2006) Two-photon excitation fluorescence microscopy with a high depth of field using an axicon. Appl Opt 45(36):9246–9252

  55. Ji N (2017) Adaptive optical fluorescence microscopy. Nat Methods 14(4):374–380

  56. Logan SL et al (2018) Automated high-throughput light-sheet fluorescence microscopy of larval zebrafish. PLoS ONE 13(11):e0198705

  57. Sala F et al (2020) High-throughput 3D imaging of single cells with light-sheet fluorescence microscopy on chip. Biomed Opt Express 11(8):4397–4407

  58. Govindan S et al (2021) Mass generation, neuron labeling, and 3D imaging of minibrains. Front Bioeng Biotechnol 8:1436

  59. Szalay G et al (2016) Fast 3D imaging of spine, dendritic, and neuronal assemblies in behaving animals. Neuron 92(4):723–738

  60. Nikolenko V et al (2008) SLM microscopy: scanless two-photon imaging and photostimulation using spatial light modulators. Front Neural Circuits 2:5

  61. Lindell DB, O’Toole M, Wetzstein G (2018) Single-photon 3D imaging with deep sensor fusion. ACM Trans Graph 37(4):113-1-113–12

  62. Griffiths VA et al (2020) Real-time 3D movement correction for two-photon imaging in behaving animals. Nat Methods 17(7):741–748

  63. Power RM, Huisken J (2018) Adaptable, illumination patterning light sheet microscopy. Sci Rep 8(1):1–11

  64. Štefko M et al (2018) Autonomous illumination control for localization microscopy. Opt Express 26(23):30882–30900

  65. Hubert A et al (2019) Adaptive optics light-sheet microscopy based on direct wavefront sensing without any guide star. Opt Lett 44(10):2514–2517

  66. Wilding D et al (2016) Adaptive illumination based on direct wavefront sensing in a light-sheet fluorescence microscope. Opt Express 24(22):24896–24906

  67. Durand A et al (2018) A machine learning approach for online automated optimization of super-resolution optical microscopy. Nat Commun 9(1):1–16

  68. Royer LA et al (2016) Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat Biotechnol 34(12):1267–1278

  69. Fang C et al (2021) Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy. Nat Commun 12(1):1–13

  70. Chen H et al (2021) Sparse imaging and reconstruction tomography for high-speed high-resolution whole-brain imaging. Cell Rep Methods 1(6):100089

  71. Long B et al (2017) SmartScope2: simultaneous imaging and reconstruction of neuronal morphology. Sci Rep 7(1):1–7

  72. He J, Huisken J (2020) Image quality guided smart rotation improves coverage in microscopy. Nat Commun 11(1):1–9

  73. Paddock SW (1999) Confocal laser scanning microscopy. Biotechniques 27(5):992–1004

  74. Gräf R, Rietdorf J, Zimmermann T (2005) Live cell spinning disk microscopy. Microsc Tech 95:57–75

  75. Stehbens S et al (2012) Imaging intracellular protein dynamics by spinning disk confocal microscopy. Methods Enzymol 504:293–313

  76. Denk W, Strickler JH, Webb WW (1990) Two-photon laser scanning fluorescence microscopy. Science 248(4951):73–76

  77. Weber M, Huisken J (2011) Light sheet microscopy for real-time developmental biology. Curr Opin Genet Dev 21(5):566–572

  78. Huisken J et al (2004) Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305(5686):1007–1009

  79. Tomer R et al (2012) Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy. Nat Methods 9(7):755–763

  80. McDole K et al (2018) In toto imaging and reconstruction of post-implantation mouse development at the single-cell level. Cell 175(3):859-876.e33

  81. Levoy M et al (2006) Light field microscopy. In: ACM SIGGRAPH 2006 Papers. pp 924–934

  82. Prevedel R et al (2014) Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat Methods 11(7):727–730

  83. Choquet D, Sainlos M, Sibarita J-B (2021) Advanced imaging and labelling methods to decipher brain cell organization and function. Nat Rev Neurosci 22(4):237–255

  84. Gong H et al (2013) Continuously tracing brain-wide long-distance axonal projections in mice at a one-micron voxel resolution. Neuroimage 74:87–98

  85. Zhong Q et al (2021) High-definition imaging using line-illumination modulation microscopy. Nat Methods 18(3):309–315

  86. Narasimhan A et al (2017) Oblique light-sheet tomography: fast and high resolution volumetric imaging of mouse brains. BioRxiv. https://doi.org/10.1101/132423

  87. Yang X et al (2018) High-throughput light sheet tomography platform for automated fast imaging of whole mouse brain. J Biophotonics 11(9):e201800047

  88. Zhang Z et al (2021) Multi-scale light-sheet fluorescence microscopy for fast whole brain imaging. Front Neuroanat. https://doi.org/10.3389/fnana.2021.732464

  89. Kashekodi AB et al (2018) Miniature scanning light-sheet illumination implemented in a conventional microscope. Biomed Opt Express 9(9):4263–4274

  90. Dabov K et al (2006) Image denoising with block-matching and 3D filtering. Image processing: algorithms and systems, neural networks, and machine learning. International Society for Optics and Photonics, Bellingham

  91. Buades A, Coll B, Morel J-M (2005) A non-local algorithm for image denoising. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05). IEEE

  92. Xu J, Zhang L, Zhang D (2018) A trilateral weighted sparse coding scheme for real-world image denoising. In: Proceedings of the European conference on computer vision (ECCV)

  93. Smith K et al (2015) CIDRE: an illumination-correction method for optical microscopy. Nat Methods 12(5):404–406

  94. Chernavskaia O et al (2017) Correction of mosaicking artifacts in multimodal images caused by uneven illumination. J Chemom 31(6):e2901

  95. Rahman S et al (2016) An adaptive gamma correction for image enhancement. EURASIP J Image Video Process 2016(1):1–13

  96. Peng T et al (2017) A BaSiC tool for background and shading correction of optical microscopy images. Nat Commun 8(1):1–7

  97. Becker K et al (2019) Deconvolution of light sheet microscopy recordings. Sci Rep 9(1):1–14

  98. Preibisch S et al (2014) Efficient Bayesian-based multiview deconvolution. Nat Methods 11(6):645–648

  99. Zhao W et al (2021) Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nat Biotechnol 40:606–617

  100. Hayman M et al (2004) Enhanced neurite outgrowth by human neurons grown on solid three-dimensional scaffolds. Biochem Biophys Res Commun 314(2):483–488

  101. Li Q, Sone S, Doi K (2003) Selective enhancement filters for nodules, vessels, and airway walls in two-and three-dimensional CT scans. Med Phys 30(8):2040–2051

  102. Zhou Z et al (2015) Adaptive image enhancement for tracing 3D morphologies of neurons and brain vasculatures. Neuroinformatics 13(2):153–166

  103. Mukherjee S, Acton ST (2015) Oriented filters for vessel contrast enhancement with local directional evidence. In: 2015 IEEE 12th international symposium on biomedical imaging (ISBI). IEEE

  104. Liang H, Acton ST, Weller DS (2017) Content-aware neuron image enhancement. In: 2017 IEEE international conference on image processing (ICIP). IEEE.

  105. Guo S et al (2022) Image enhancement to leverage the 3D morphological reconstruction of single-cell neurons. Bioinformatics 38(2):503–512

  106. Bria A, Iannello G (2012) TeraStitcher-a tool for fast automatic 3D-stitching of teravoxel-sized microscopy images. BMC Bioinform 13(1):1–15

  107. Hörl D et al (2019) BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat Methods 16(9):870–874

  108. Hayworth KJ et al (2015) Ultrastructurally smooth thick partitioning and volume stitching for large-scale connectomics. Nat Methods 12(4):319–322

  109. Chen H et al (2017) Fast assembling of neuron fragments in serial 3D sections. Brain Inform 4(3):183–186

  110. Li Y et al (2017) TDat: an efficient platform for processing petabyte-scale whole-brain volumetric images. Front Neural Circuits 11:51

  111. Bria A et al (2016) TeraFly: real-time three-dimensional visualization and annotation of terabytes of multidimensional volumetric images. Nat Methods 13(3):192–194

  112. Pietzsch T et al (2015) BigDataViewer: visualization and processing for large image data sets. Nat Methods 12(6):481–483

  113. Wang Q et al (2020) The Allen mouse brain common coordinate framework: a 3D reference atlas. Cell 181(4):936-953.e20

  114. Niedworok CJ et al (2016) aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data. Nat Commun 7(1):1–9

  115. Renier N et al (2016) Mapping of brain activity by automated volume analysis of immediate early genes. Cell 165(7):1789–1802

  116. Kim Y et al (2017) Brain-wide maps reveal stereotyped cell-type-based cortical architecture and subcortical sexual dimorphism. Cell 171(2):456-469.e22

  117. Fürth D et al (2018) An interactive framework for whole-brain maps at cellular resolution. Nat Neurosci 21(1):139–149

  118. Ni H et al (2020) A robust image registration interface for large volume brain atlas. Sci Rep 10(1):1–16

  119. Arganda-Carreras I et al (2008) bunwarpj: consistent and elastic registration in imagej, methods and applications. In: Second imageJ user & developer conference

  120. Qu L et al (2022) Cross-modal coherent registration of whole mouse brains. Nat Methods 19(1):111–118

  121. Donohue DE, Ascoli GA (2011) Automated reconstruction of neuronal morphology: an overview. Brain Res Rev 67(1–2):94–102

  122. Svoboda K (2011) The past, present, and future of single neuron reconstruction. Neuroinformatics 9(2–3):97

  123. Acciai L, Soda P, Iannello G (2016) Automated neuron tracing methods: an updated account. Neuroinformatics 14(4):353–367

  124. Gala R et al (2017) Computer assisted detection of axonal bouton structural plasticity in in vivo time-lapse images. Elife 6:e29315

  125. Tyson AL et al (2021) A deep learning algorithm for 3D cell detection in whole mouse brain image datasets. PLoS Comput Biol 17(5):e1009074

  126. Ascoli GA (2008) Neuroinformatics grand challenges. Neuroinformatics 6(1):1–3

  127. Yuan X et al (2009) MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images. Neuroinformatics 7(4):213–232

  128. Lee PC et al (2012) High-throughput computer method for 3D neuronal structure reconstruction from the image stack of the Drosophila brain and its applications. PLoS Comput Biol 8(9):e1002658

    Article  Google Scholar 

  129. Gala R et al (2014) Active learning of neuron morphology for accurate automated tracing of neurites. Front Neuroanat 8:37

    Article  Google Scholar 

  130. Xiao H, Peng HJB (2013) APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics 29(11):1448–1454

    Article  Google Scholar 

  131. Wang Y et al (2011) A broadly applicable 3-D neuron tracing method based on open-curve snake. Neuroinformatics 9(2):193–217

    Article  Google Scholar 

  132. Chen H et al (2015) SmartTracing: self-learning-based neuron reconstruction. Brain Inform 2(3):135–144

    Article  Google Scholar 

  133. Yang J, Gonzalez-Bellido PT, Peng H (2013) A distance-field based automatic neuron tracing method. BMC Bioinform 14(1):1–11

    Article  Google Scholar 

  134. Zhao T et al (2011) Automated reconstruction of neuronal morphology based on local geometrical and global structural models. Neuroinformatics 9(2):247–261

    Article  Google Scholar 

  135. Choromanska A, Chang S-F, Yuste R (2012) Automatic reconstruction of neural morphologies with multi-scale tracking. Front Neural Circuits 6:25

    Article  Google Scholar 

  136. Zhou Z et al (2016) TReMAP: automatic 3D neuron reconstruction based on tracing, reverse mapping and assembling of 2D projections. Neuroinformatics 14(1):41–50

    Article  Google Scholar 

  137. Peng H et al (2017) Automatic tracing of ultra-volumes of neuronal images. Nat Methods 14(4):332–333

    Article  Google Scholar 

  138. Zhou H et al (2021) GTree: an open-source tool for dense reconstruction of brain-wide neuronal population. Neuroinformatics 19(2):305–317

    Article  Google Scholar 

  139. Yang J et al (2019) FMST: an automatic neuron tracing method based on fast marching and minimum spanning tree. Neuroinformatics 17(2):185–196

    Article  Google Scholar 

  140. Liu S et al (2016) Rivulet: 3d neuron morphology tracing with iterative back-tracking. Neuroinformatics 14(4):387–401

    Article  Google Scholar 

  141. Peng H, Long F, Myers GJB (2011) Automatic 3D neuron tracing using all-path pruning. Bioinformatics 27(13):i239–i247

    Article  Google Scholar 

  142. Mukherjee S, Condron B, Acton ST (2014) Tubularity flow field—a technique for automatic neuron segmentation. IEEE Trans Image Process 24(1):374–389

    Article  MathSciNet  MATH  Google Scholar 

  143. DeFelipe J et al (2013) New insights into the classification and nomenclature of cortical GABAergic interneurons. Nat Rev Neurosci 14(3):202–216

    Article  Google Scholar 

  144. Jiang X et al (2015) Principles of connectivity among morphologically defined cell types in adult neocortex. Science 350(6264):aac9462

    Article  Google Scholar 

  145. Zeng H, Sanes JR (2017) Neuronal cell-type classification: challenges, opportunities and the path forward. Nat Rev Neurosci 18(9):530–546

    Article  Google Scholar 

  146. Yang J, He Y, Liu X (2020) Retrieving similar substructures on 3D neuron reconstructions. Brain Inform 7(1):1–9

    Article  Google Scholar 

  147. Wan Y et al (2015) BlastNeuron for automated comparison, retrieval and clustering of 3D neuron morphologies. Neuroinformatics 13(4):487–499

    Article  Google Scholar 

  148. Li Y et al (2017) Metrics for comparing neuronal tree shapes based on persistent homology. PLoS ONE 12(8):e0182184

    Article  Google Scholar 

  149. Sholl D (1953) Dendritic organization in the neurons of the visual and motor cortices of the cat. J Anat 87(Pt 4):387

    Google Scholar 

  150. Zhao T, Plaza SM (2014) Automatic neuron type identification by neurite localization in the drosophila medulla. arXiv preprint arXiv:1409.1892

  151. McInnes L, Healy J, Melville J (2018) UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426

  152. Likas A, Vlassis N, Verbeek JJ (2003) The global k-means clustering algorithm. Pattern Recogn 36(2):451–461

    Article  Google Scholar 

  153. Johnson SC (1967) Hierarchical clustering schemes. Psychometrika 32(3):241–254

    Article  MATH  Google Scholar 

  154. Scorcioni R, Polavaram S, Ascoli GA (2008) L-Measure: a web-accessible tool for the analysis, comparison and search of digital reconstructions of neuronal morphologies. Nat Protoc 3(5):866–876

    Article  Google Scholar 

  155. Peng H et al (2021) Morphological diversity of single neurons in molecularly defined cell types. Nature 598(7879):174–181

    Article  Google Scholar 

  156. Mihaljević B et al (2015) Bayesian network classifiers for categorizing cortical GABAergic interneurons. Neuroinformatics 13(2):193–208

    Article  Google Scholar 

  157. Santana R et al (2013) Classification of neocortical interneurons using affinity propagation. Front Neural Circuits 7:185

    Article  Google Scholar 

  158. Sümbül U et al (2014) A genetic and computational approach to structurally classify neuronal types. Nat Commun 5(1):1–12

    Google Scholar 

  159. Gillette TA, Ascoli GA (2015) Topological characterization of neuronal arbor morphology via sequence representation: I-motif analysis. BMC Bioinform 16(1):1–15

    Google Scholar 

  160. Gillette TA, Hosseini P, Ascoli GA (2015) Topological characterization of neuronal arbor morphology via sequence representation: II-global alignment. BMC Bioinform 16(1):1–17

    Google Scholar 

  161. Network BICC (2021) A multimodal cell census and atlas of the mammalian primary motor cortex. Nature 598(7879):86–102

    Article  Google Scholar 

  162. Shrestha A, Mahmood A (2019) Review of deep learning algorithms and architectures. IEEE Access 7:53040–53065

    Article  Google Scholar 

  163. Eliceiri KW et al (2012) Biological imaging software tools. Nat Methods 9(7):697–710

    Article  Google Scholar 

  164. Mosaliganti KR et al (2012) ACME: automated cell morphology extractor for comprehensive reconstruction of cell membranes. PLoS Comput Biol 8(12):e1002780

    Article  Google Scholar 

  165. Piccinini F et al (2017) Advanced cell classifier: user-friendly machine-learning-based software for discovering phenotypes in high-content imaging data. Cell Syst 4(6):651-655.e5

    Article  Google Scholar 

  166. Stegmaier J et al (2016) Real-time three-dimensional cell segmentation in large-scale microscopy data of developing embryos. Dev Cell 36(2):225–240

    Article  Google Scholar 

  167. Fernandez R et al (2010) Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution. Nat Methods 7(7):547–553

    Article  Google Scholar 

  168. Carpenter AE et al (2006) CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol 7(10):1–11

    Article  Google Scholar 

  169. McQuin C et al (2018) CellProfiler 3.0: next-generation image processing for biology. PLoS Biol 16(7):e2005970

    Article  Google Scholar 

  170. Stalling D, Westerhoff M, Hege HC (2005) Amira: a highly interactive system for visual data analysis. The visualization handbook. Elsevier Inc., Amsterdam, pp 749–767

    Chapter  Google Scholar 

  171. Glaser JR, Glaser EM (1990) Neuron imaging with neurolucida—a PC-based system for image combining microscopy. Comput Med Imaging Graph 14(5):307–317

    Article  Google Scholar 

  172. Cuntz H et al (2011) The TREES toolbox—probing the basis of axonal and dendritic branching. Neuroinformatics 9(1):91–96

    Article  Google Scholar 

  173. Bates AS et al (2020) The natverse, a versatile toolbox for combining and analysing neuroanatomical data. Elife 9:e53350

    Article  Google Scholar 

  174. Peng H et al (2010) V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat Biotechnol 28(4):348–353

    Article  Google Scholar 

  175. Wang Y et al (2019) TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat Commun 10(1):1–9

    Google Scholar 

  176. Allan C et al (2012) OMERO: flexible, model-driven data management for experimental biology. Nat Methods 9(3):245–253

    Article  Google Scholar 

  177. Kvilekval K et al (2010) Bisque: a platform for bioimage analysis and management. Bioinformatics 26(4):544–552

    Article  Google Scholar 

  178. Peng H et al (2014) Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis. Nat Commun 5(1):1–13

    Article  Google Scholar 

  179. Jiang S et al (2022) Petabyte-scale multi-morphometry of single neurons for whole brains. Neuroinformatics

  180. Schindelin J et al (2015) The ImageJ ecosystem: an open platform for biomedical image analysis. Mol Reprod Dev 82(7–8):518–529

    Article  Google Scholar 

  181. Kankaanpää P et al (2012) BioImageXD: an open, general-purpose and high-throughput image-processing platform. Nat Methods 9(7):683–689

    Article  Google Scholar 

  182. De Chaumont F et al (2012) Icy: an open bioimage informatics platform for extended reproducible research. Nat Methods 9(7):690–696

    Article  Google Scholar 

  183. Wan Y et al (2012) FluoRender: an application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research. In: 2012 IEEE pacific visualization symposium. IEEE

  184. Arshadi C et al (2021) SNT: a unifying toolbox for quantification of neuronal anatomy. Nat Methods 18(4):374–377

    Article  Google Scholar 

  185. Sunkin SM et al (2012) Allen Brain Atlas: an integrated spatio-temporal portal for exploring the central nervous system. Nucleic Acids Res 41(D1):D996–D1008

    Article  Google Scholar 

  186. Mikula S et al (2007) Internet-enabled high-resolution brain mapping and virtual microscopy. Neuroimage 35(1):9–15

    Article  MathSciNet  Google Scholar 

  187. Sato A et al (2008) Cerebellar development transcriptome database (CDT-DB): profiling of spatio-temporal gene expression during the postnatal development of mouse cerebellum. Neural Netw 21(8):1056–1069

    Article  Google Scholar 

  188. Johnson KA (2001) The whole brain atlas. Harvard University, Cambridge

    Google Scholar 

  189. Rosen GD et al (2000) The mouse brain library@ www.mbl.org. In: International mouse genome conference

  190. Ascoli GA, Donohue DE, Halavi M (2007) NeuroMorpho.Org: a central resource for neuronal morphologies. J Neurosci 27(35):9247–9251

    Article  Google Scholar 

  191. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 25:1097–1105

    Google Scholar 

  192. Allen-Zhu Z, Li Y (2019) What can ResNet learn efficiently, going beyond kernels? arXiv preprint arXiv:1905.10337

  193. Szegedy C et al (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-first AAAI conference on artificial intelligence

  194. Szegedy C et al (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition

  195. Huang G et al (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition

  196. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

  197. Radford A, Metz L, Chintala S (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434

  198. Szegedy C et al (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition

  199. Graves A, Schmidhuber J (2005) Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw 18(5–6):602–610

    Article  Google Scholar 

  200. Graves A, Jaitly N, Mohamed A-R (2013) Hybrid speech recognition with deep bidirectional LSTM. In: 2013 IEEE workshop on automatic speech recognition and understanding. IEEE

  201. Chung J et al (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555

  202. Li X et al (2020) Fast confocal microscopy imaging based on deep learning. In: 2020 IEEE international conference on computational photography (ICCP). IEEE

  203. Weigert M et al (2017) Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks. In: International conference on medical image computing and computer-assisted intervention. Springer

  204. Ouyang W et al (2018) Deep learning massively accelerates super-resolution localization microscopy. Nat Biotechnol 36(5):460–468

    Article  Google Scholar 

  205. Nehme E et al (2018) Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5(4):458–464

    Article  Google Scholar 

  206. Wang Z et al (2021) Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning. Nat Methods 18(5):551–556

    Article  Google Scholar 

  207. Laine S et al (2019) High-quality self-supervised deep image denoising. arXiv preprint arXiv:1901.10277

  208. Krull A et al (2020) Probabilistic noise2void: unsupervised content-aware denoising. Front Comput Sci 2:5

    Article  Google Scholar 

  209. Lehtinen J et al (2018) Noise2noise: learning image restoration without clean data. arXiv preprint arXiv:1803.04189

  210. Buchholz T-O et al (2019) Cryo-care: content-aware image restoration for cryo-transmission electron microscopy data. In: 2019 IEEE 16th international symposium on biomedical imaging (ISBI 2019). IEEE

  211. Batson J, Royer L (2019) Noise2self: Blind denoising by self-supervision. In: International conference on machine learning. PMLR

  212. Zhu J-Y et al (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision

  213. Fan J et al (2019) BIRNet: brain image registration using dual-supervised fully convolutional networks. Med Image Anal 54:193–206

    Article  Google Scholar 

  214. Cao X et al (2017) Deformable image registration based on similarity-steered CNN regression. In: International conference on medical image computing and computer-assisted intervention. Springer

  215. Haberl MG et al (2018) CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat Methods 15(9):677–680

    Article  Google Scholar 

  216. Stringer C et al (2020) Cellpose: a generalist algorithm for cellular segmentation. Nat Methods 18(1):100–106

    Article  Google Scholar 

  217. Januszewski M et al (2018) High-precision automated reconstruction of neurons with flood-filling networks. Nat Methods 15(8):605–610

    Article  Google Scholar 

  218. Zhou Z et al (2018) DeepNeuron: an open deep learning toolbox for neuron tracing. Brain Inform 5(2):3

    Article  Google Scholar 

  219. Church KW (2017) Word2Vec. Nat Lang Eng 23(1):155–162

    Article  Google Scholar 

  220. Vaswani A et al (2017) Attention is all you need. arXiv preprint arXiv:1706.03762

  221. Abe T et al (2021) Neuroscience cloud analysis as a service. bioRxiv. https://doi.org/10.1101/2020.06.11.146746

    Article  Google Scholar 

  222. Agrawal D, Das S, El Abbadi A (2011) Big data and cloud computing: current state and future opportunities. In: Proceedings of the 14th international conference on extending database technology

  223. Berg S et al (2019) Ilastik: interactive machine learning for (bio) image analysis. Nat Methods 16(12):1226–1232

    Article  Google Scholar 

  224. Ouyang W et al (2019) ImJoy: an open-source computational platform for the deep learning era. Nat Methods 16(12):1199–1200

    Article  Google Scholar 

Download references

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant 21773028) and the Fundamental Research Funds for the Central Universities (Grant 2242022R10037).

Funding

National Natural Science Foundation of China (Grant: 21773028) and the Fundamental Research Funds for the Central Universities (Grant: 2242022R10037).

Author information

Contributions

SG, XH, and HP conceived and designed the review. SG, JX, JL, XY, and YG wrote the manuscript. SG, JX, and JL prepared the figures. All authors discussed and contributed to the review. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Shuxia Guo or Xiaofeng Han.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Guo, S., Xue, J., Liu, J. et al. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inf. 9, 10 (2022). https://doi.org/10.1186/s40708-022-00158-4

Keywords