
Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity

Abstract

Mapping neural connections within the brain has been a fundamental goal of neuroscience, aimed at better understanding brain function and the changes that accompany aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline remains challenging due to scattered information on available tools and methods. To map neural connections, registration to atlases and feature extraction through segmentation and signal detection are necessary. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. An integrated workflow of these image-processing steps will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.

1 Introduction

The brain consists of a complex network, intricately interconnected through countless neurons connecting various brain regions. These networks form the foundation for critical brain functions such as movement, social interaction, memory formation, decision-making, and perception. By investigating the structural and functional properties of neural circuits, researchers seek to understand and uncover the fundamental principles of information processing in the brain. A key focus in neuroscience has been to map neural circuits, which involves visualizing and characterizing the connections between neurons to deepen our understanding of brain organization. This mapping is crucial for grasping normal brain functions and addressing related disorders.

Over the past decade, significant progress has been made in mapping the connectivity of the brain at the mesoscale level, which encompasses intermediate scales between individual neurons and large brain regions [1,2,3,4,5]. Although various connectivity studies utilize numerous animal models [1, 3, 6,7,8,9], connectome research primarily leverages mouse models due to their prevalence. This review will focus on the mouse brain, examining mesoscale connectivity through fluorescent imaging. This mesoscale connectivity mapping provides insights into the structural and functional relationships between brain regions, shedding light on information flow, neural circuits, and their contributions to overall brain functionality.

The term connecto-informatics, first introduced in a study that investigated the circuit- and cellular-level connectivity of the STN-GPe [10], is analogous to neuroinformatics but focuses specifically on extracting and analyzing information about neural connectivity. In the context of connecto-informatics, researchers employ various imaging and computational techniques, data analysis, and modeling to deepen our understanding of brain structure and function through brain circuit connectivity. Recently, advances in neural labeling, tissue clearing, and imaging methods – such as mGRASP, CLARITY, iDISCO, MOST, fMOST, and ExM – have significantly accelerated neural circuit mapping efforts [11,12,13,14,15,16,17,18,19,20]. The datasets primarily used in connecto-informatics are fluorescence-based, and fluorescence imaging has become high-throughput [21]. However, fluorescence imaging datasets often contain discrepancies due to biological variations such as brain size differences among animals, inevitable damage from histological sample processing, and technical problems like artifacts and optical aberrations [22, 23]. These can lead to signal loss and image distortion, underscoring the critical need for sophisticated image processing tools and methodologies in mesoscopic connectivity mapping.

By analyzing neural images obtained through various microscopies with emerging image processing tools, researchers can extract, analyze, and interpret complex brain connectivity data. The image processing pipeline in connecto-informatics begins with the vital step of aligning neural images to a standardized template atlas. This is followed by segmentation of specific brain regions or structures, essential for isolating areas of interest for detailed analysis. To improve image quality and clarity, the pipeline incorporates low-level techniques such as denoising and super-resolution, enabling the visualization of finer structural details [57,58,59,60,61,62, 71,72,73,74,75,76,77]. Advanced procedures, including cell segmentation and neuronal morphology reconstruction [91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110], are also employed to comprehend the intricate connectivity and dynamics of neural circuits at the mesoscopic scale. The ultimate goal of these image processing steps is to accurately map and analyze neural circuits, providing insights into the complex networks of connectivity and interactions that underpin various brain functions (Fig. 1a).

Fig. 1
figure 1

Workflow of image processing for connecto-informatics at the mesoscale. (a) Schematic diagram of key image processing steps for neural data obtained from imaging. (b) Whole-brain images obtained in 2D are aligned into a 3D stack and registered to the Allen CCFv3 [26]. (c) The registered neural images are segmented using the annotated Allen CCFv3. (d) Noisy images are processed using a denoising algorithm to remove unwanted artifacts that can arise from numerous factors [65]. (e) Cell segmentation using a CNN allows automatic detection and segmentation of cells in neural images, enabling cellular-level connectivity analysis [97]. (f) 3D reconstruction of a neuron using neuTube 1.0 shows synaptic connectivity of the hippocampal region with mGRASP-labeled synapses [110]. All scale bars represent 1000 μm

Despite advancements, the availability and integration of image-processing resources remain scattered, posing challenges in their effective utilization. The substantial data volume generated by sophisticated imaging techniques demands significant computational resources and processing time. Variability in biological samples and imaging conditions adds complexity, often requiring customized approaches and manual intervention, which impedes the development of streamlined, automated workflows. Moreover, integrating and analyzing diverse data types to map neural circuits efficiently remains a formidable challenge, highlighting a gap between the capabilities of current tools and research needs.

In this updated review, we delve into the advancements in image processing tools for mapping mesoscopic brain connectivity, addressing the challenges encountered and introducing tools to overcome them.

  • Mapping brain connectivity through atlas-based registration.

    • Types of Brain Atlases.

    • Atlas-based Registration and Segmentation.

    • Atlas-based registration and segmentation open-source tools.

    • Deep learning-based atlas-based registration and segmentation tools.

  • Mapping brain connectivity through feature extraction.

    • Low-level image processing

      • Image Denoising.

      • Image super-resolution.

    • Cell Segmentation.

    • Neuronal morphology reconstruction.

Starting with a brief introduction to the importance and significance of each image processing step, we discuss the latest advancements in tools and methods tailored for analyzing neural images in the context of connecto-informatics at the mesoscale (abbreviations listed in Table 1). This review aims to provide insights into the current state of image processing techniques and their pivotal role in advancing our understanding of brain connectivity.

Table 1 List of abbreviations used in this review

2 Mapping brain connectivity through atlas-based registration

Many connectivity datasets are derived from various resolutions and imaging modalities, necessitating the crucial first step of registering images onto a standardized reference framework. This alignment enables comparative analyses across different experiments, datasets, and subjects, providing valuable information about neuronal structures and functions. Moreover, registering images to a common coordinate space facilitates the annotation of brain regions and is fundamental for qualitative and quantitative assessments. This allows for more precise comparisons and analyses of specific regions of interest. Systematic image processing, including steps like registration and segmentation against reference atlases, is essential for accurately mapping neural connectivity.

2.1 Types of brain atlases

A brain atlas is a comprehensive and detailed map illustrating the brain’s anatomical structures and functional organization. In connecto-informatics, whole-brain atlases provide a spatial framework for analyzing whole-brain images. The Franklin-Paxinos atlas [24] and the Allen reference atlas [25] are among the most widely used for mouse brains. However, these are 2D reference atlases, primarily derived from Nissl and acetylcholinesterase antibody staining in histological sections. Although reference atlases such as the Franklin-Paxinos or Allen have assisted researchers in locating and annotating brain regions of interest, their 2D nature limits their effectiveness and application.

The shift to 3D brain atlases, like the Allen Common Coordinate Framework v3 (CCFv3), offers several advantages, including spatial accuracy, depth visualization, cross-sectional views of volumetric data, and navigational aids. The Allen CCFv3 is a 3D whole-brain mouse atlas available through the Allen Institute for Brain Science (https://mouse.brain-map.org). It was created by interpolating serial two-photon tomography (STPT) images from 1675 adult mice and features 658 delineated brain regions. This atlas integrates data from immunohistochemistry, transgene expression, in situ hybridization, and anterograde tracer connectivity data [26].

To enhance anatomical delineation, the 2D segmentation labels from the Franklin-Paxinos atlas have been merged onto the Allen CCFv3, creating an enhanced and unified mouse brain atlas [27] (Fig. 2a). Though this atlas is based on the Allen CCFv3, additional anatomical regions were further segmented by combining data from cell type-specific transgenic mice and MRI. Other 3D mouse brain atlases were developed using unsupervised classification of single-cell RNA profiles to define anatomical divisions based on molecular composition (Fig. 2b). The gene expression signatures are obtained using spatial transcriptomics of mRNAs [28, 29]. These atlases help identify distinct subregions, for example by segmenting the hippocampal subfields into sublayers or revealing unique patterns at the dorsoventral borders of the hippocampal subfields (Fig. 2c). Built on the Allen CCFv3, these atlases add important information about region segmentation and gene and cell expression, allowing researchers to compare significant results across experiments within a common reference framework.

Fig. 2
figure 2

Comparison of mouse brain atlases. Rebuilt illustration using publicly available atlases, comparing the enhanced and unified anatomical atlas with the molecular atlas of the mouse brain. (a) The left hemisphere is the Allen reference atlas [26] and the right hemisphere is the enhanced and unified mouse brain atlas, which combines labels from the Franklin-Paxinos atlas with the common coordinate framework from the Allen Institute [27]. (b) The left hemisphere is the Allen reference atlas and the right hemisphere is the molecular atlas of the adult mouse brain, which shows anatomical divisions based on molecular composition [29]. (c) Comparison of hippocampus region delineation between mouse brain atlases. Scale bars for (a-b) represent 1000 μm; scale bar for (c) represents 500 μm

Creating an atlas that accurately delineates regions to match biological features remains challenging. Efforts are also underway to develop a developmental mouse brain atlas covering various age points, which is critical for understanding growth and developmental stages [30,31,32]. Given that providing a generalized adult brain atlas is already a challenge, creating a lifespan atlas is an even more significant one. Currently, easy online access to comprehensive 3D atlases similar to those available for adult mice is not yet a reality for developmental stages. However, the establishment of a standardized developing mouse atlas would mark a significant advancement. It would provide a generalized framework for studying the developing mouse brain and analyzing the connecto-informatics of brain circuits throughout various stages of development.

2.2 Atlas-based registration and segmentation

For connecto-informatics analysis, integrating images into a reference space is crucial for extracting neural information, requiring a registration process. Image registration involves spatially aligning two images from different modalities to identify or correlate changes in structure or function [33] (Fig. 1b). Specifically, this process entails merging a neural image with a reference image – typically a corresponding 2D section from an atlas – for detailed analysis of neural circuits. There are two primary methods for this integration. One approach maps the reference image onto the neural image, maintaining the integrity of the neural image without distortion. Alternatively, the image data can be transformed to fit the reference space, which, while potentially distorting the original image data, facilitates comparison across different datasets and experiments within the same reference framework.
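The coordinate side of this mapping can be sketched in a few lines: an affine transform carries a sample voxel coordinate into atlas space. The matrix values below are illustrative only, not taken from any real registration.

```python
import numpy as np

# Hypothetical affine mapping a sample voxel (i, j, k) into atlas space:
# a uniform scale to atlas voxel size plus a translation offset.
# These numbers are illustrative, not from a real registration.
affine = np.array([
    [0.5, 0.0, 0.0, 10.0],   # scale i by 0.5, shift by 10 atlas voxels
    [0.0, 0.5, 0.0, -4.0],
    [0.0, 0.0, 0.5, 2.0],
    [0.0, 0.0, 0.0, 1.0],
])

def to_atlas(voxel, affine):
    """Apply a 4x4 affine to a 3D voxel coordinate (homogeneous form)."""
    homogeneous = np.append(np.asarray(voxel, dtype=float), 1.0)
    return (affine @ homogeneous)[:3]

print(to_atlas((100, 200, 50), affine))  # -> [60. 96. 27.]
```

Mapping in the opposite direction (atlas onto image) simply uses the inverse of the same matrix, which is why the two integration strategies are interchangeable in principle.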

Image registration is a critical step in neural analysis, typically performed using transformations provided by open-source libraries [34, 35], such as NiftyReg [36], Elastix [37], and ANTs [38], which are widely recognized for their effectiveness. Elastix and ANTs, built on the ITK framework, employ both linear and non-linear transformations to align sample data with reference images through deformation processes. This precise alignment is crucial for the subsequent step of brain region segmentation, as the accuracy of segmentation directly depends on how well the brain and atlas have been registered. This segmentation step usually occurs after the reference image has been aligned with the data image (Fig. 1c). Even though these open-source libraries are readily available, they require a degree of computational expertise, posing a barrier for many biologists. Additionally, the lack of standardized methods for applying them to diverse datasets presents ongoing challenges for researchers in the field.
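As a minimal sketch of the optimization such libraries perform, the toy example below recovers the translation between a 1D intensity profile and its shifted copy by searching for the shift that maximizes a similarity metric (here, correlation). Real tools optimize far richer linear and non-linear transforms, but the principle is the same.

```python
import numpy as np

# Toy intensity-based registration: exhaustively search for the integer
# shift that best aligns a "moving" profile with a reference profile,
# scoring each candidate with Pearson correlation as the similarity metric.
rng = np.random.default_rng(0)
reference = rng.random(200)
moving = np.roll(reference, 7)          # displaced "sample" profile

def register_shift(ref, mov, max_shift=20):
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        candidate = np.roll(mov, s)
        score = np.corrcoef(ref, candidate)[0, 1]  # similarity metric
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

print(register_shift(reference, moving))  # -> -7 (undoes the +7 roll)
```

Replacing exhaustive search with gradient-based optimization, and the single shift parameter with affine or deformation-field parameters, yields the scheme NiftyReg, Elastix, and ANTs implement at scale.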

Atlases serve as an essential backbone for connecto-informatics, with registration and segmentation of brain regions heavily dependent on them. However, inconsistencies in the boundaries of each segmented region across different atlases can significantly impact analysis outcomes. Therefore, selecting the appropriate atlases is crucial for ensuring reliable results, and the use of uniform atlases could facilitate the establishment of a standardized research pipeline.

Efforts are underway to address these challenges and improve accessibility for researchers. One notable attempt is BrainGlobe [39], a platform that consolidates available atlases to offer a common interface for processing data across various model organisms. While significant progress has been made in creating more accurate and accessible atlases, the need for standardized and precise reference atlases remains paramount. These atlases not only support atlas-based image processing but also enhance the integration and combination of diverse datasets from different research projects, fostering collaborative efforts in brain connectivity mapping.

2.3 Atlas-based registration and segmentation open-source tools

The advancement of atlas-based registration and segmentation tools, alongside the development of standardized brain atlases, has significantly advanced neuroscience image processing, particularly in mapping mesoscale neural circuits. These tools, by simplifying the alignment of neural images to brain atlases, address critical challenges like the need for high-level computational resources and expertise. They also help mitigate issues associated with damaged or incomplete datasets.

Among recent developments are tools designed for 2D and 3D image processing tasks. Software like WholeBrain [40] and Neuroinfo [41] has emerged to offer semi-automatic solutions for 2D registration and segmentation, utilizing advanced algorithms and integrating the comprehensive Allen CCFv3 brain atlas. These tools are specifically engineered to simplify the initial stages of image processing, enabling researchers to accurately align experimental data with reference spaces and automatically annotate critical regions based on the atlas. This process is significantly facilitated by the software’s capability to automatically register image data to the reference slice once the researcher identifies the corresponding region in the 2D section image. However, these tools are not without limitations; they can be time-consuming to use and may not offer the necessary flexibility for handling various image modalities, highlighting a trade-off between automation and adaptability.

Recognizing the time-intensive nature of manual 2D registration, QuickNII offers an advanced semi-automatic approach that significantly reduces the effort required to register serial section data to a 3D reference atlas [42]. By applying affine spatial transformations, QuickNII efficiently aligns each section across the entire series, alleviating one of the most laborious aspects of neural image processing. Similarly, FASTMAP, a plugin for ImageJ, generates custom mouse brain atlas plates [43]. This feature addresses the unique requirements of diverse experimental setups, enhancing the tool’s utility and flexibility in registration tasks.

Transitioning from 2D to 3D, tools like aMAP, MIRACL, and MagellanMapper are each designed to address the complexities of 3D registration and segmentation. aMAP, leveraging the NiftyReg framework, offers a validated approach that aligns with expert manual segmentation for fluorescent mouse brain images [44]. This validation ensures that researchers can rely on aMAP for accurate 3D analysis. MIRACL [45] and MagellanMapper [46] further extend the capabilities of 3D image processing, implementing fully automated registration pipelines tailored for cleared brain images and diffusion MRI data. By utilizing frameworks like ANTs and Elastix, these tools not only automate the processing of high-resolution data but also assure precision in aligning and segmenting images within the 3D neural features.

The transition from manual registration libraries to sophisticated, user-friendly software tools in neuroscience reflects ongoing efforts to address image processing challenges. While these tools have significantly streamlined processing and reduced manual intervention, they continue to evolve to meet the increasing complexity of imaging data and analysis demands. Despite these advancements, practical challenges persist, particularly with atlas-based registration and segmentation. Variability among individual brains can lead to registration errors, and the existing atlases may not capture all anatomical variations needed for specific research, underscoring the limitations in completeness and specificity. Consequently, expert judgment remains crucial in interpreting and correcting misalignments, ensuring accurate segmentation and integration. This blend of technological advancement and the need for skilled human oversight highlights the enduring necessity for expert involvement in refining and utilizing these advanced tools.

2.4 Deep learning-based atlas-based registration and segmentation tools

With the rapid progress in artificial intelligence (AI), significant efforts have been made towards developing deep learning-based tools for automatic registration and segmentation, aiming to ease the bottleneck caused by the vast volumes of image data generated. DeepSlice, an automated registration library, aligns and registers mouse brain histology data to the Allen CCFv3 from the Allen Brain Institute [47]. This tool uses estimated Euclidean data to provide a standardized and simplified registration process. Additionally, MesoNet facilitates automatic mouse brain segmentation by utilizing landmarks on brain images to automate segmentation according to the atlas [48]. Furthermore, DeepMapi, a fully automated registration method for mesoscopic optical brain images, uses a convolutional neural network (CNN) to predict a deformation field that aligns mesoscopic images with the atlas, demonstrating how deep learning can streamline these processes [49].

Another notable software is mBrainAligner, an open-source software for cross-modal registration that employs a deep neural network (DNN) to align whole mouse brains with the standard Allen CCFv3 atlas [50]. mBrainAligner has shown more accurate segmentation results compared to the tools mentioned above. The implementation of deep learning in such software not only accelerates processing but also achieves results comparable to manual registration and segmentation, thereby ensuring high accuracy. Additionally, D-LMBmap has been developed as a fully automated, deep learning-based end-to-end package for comprehensive profiling of neural circuitry across the entire brain [51]. This tool provides an integrated workflow that encompasses whole-brain registration, region segmentation, and axon segmentation, facilitating brain circuit profiling with minimal manual input. Although currently limited to light sheet fluorescence microscopy, D-LMBmap features a novel method of registration and segmentation with a user-friendly graphical interface. Once validated on high-resolution images, it will be a powerful tool competitive with other already available software. These developments in deep learning-based software allow high-throughput automatic registration without manual intervention, enabling rapid, precise analysis of the vast datasets generated by advanced imaging technology.

Deep learning-based registration and segmentation tools like DeepBrainSeg and BIRDS have not only streamlined the processes of registration and segmentation but have also addressed more complex challenges inherent in neural data processing. DeepBrainSeg is an automated brain region segmentation tool for micro-optical images that employs a dual-pathway CNN to capture both local details and broader contextual information across various scales [52]. This approach significantly enhances the accurate segmentation of brain regions, even in noisy datasets, through sophisticated image registration and the application of domain-specific constraints.

BIRDS, a Fiji plugin software, extends the utility of deep learning by offering an open-source algorithm that can be implemented on various image modalities, allowing easy access and usability to many users [53]. In addition to providing automatic registration and segmentation, BIRDS offers a deep learning-based direct-inference segmentation on incomplete datasets, such as irregularly or partially cut brain sections or hemispheres. These types of datasets often present considerable challenges due to their lack of comprehensive morphological information, making traditional segmentation based on standard atlases like the Allen brain atlas difficult. By integrating DNN, BIRDS effectively segments these partial images.

The continued development of deep learning-based, open-source tools for registration and segmentation represents a significant advancement in preprocessing neural images. These tools have transformed the image processing procedure, making it more convenient and time-efficient for researchers, and effectively alleviating a possible bottleneck in the analysis pipeline. Moreover, they have shown promising results in addressing common challenges in biological experiments, such as image noise and partial image sections. While these tools have substantially improved the efficiency and throughput of image processing pipelines, accuracy and methodologies continue to evolve, with ongoing development further refining these technologies. Despite the advancements, the role of expert judgment and the quality of input images remain crucial. Even the most advanced algorithms require high-quality data to function optimally, and expert oversight is essential to accurately interpret the complexities of neural images. Therefore, quality control is indispensable when using these advanced tools to maintain the integrity and reliability of the results (Table 2).

Table 2 Summary of selected whole-brain registration and segmentation tools

3 Mapping brain connectivity through feature extraction

So far, we have discussed atlas-based registration and segmentation tools, which are indispensable for comprehensive region-to-region connectivity analysis. However, obtaining more detailed insight into individual neuronal compositions – such as the number of specific cell types or synaptic proteins – requires additional steps. Researchers typically utilize high-resolution imaging of cells and specific immunostaining-labeled molecules to extract these crucial features. These image datasets require processing steps beyond basic atlas-based registration and segmentation, although they similarly rely on feature extraction through segmentation. First, despite significant advancements in imaging technologies, further image processing is essential to eliminate noise and enhance resolution, enabling accurate segmentation of somas and neurons. Neuro-reconstruction poses particular challenges due to the difficulty of extracting fine structures from often noisy images. In the following sections, we outline image processing techniques aimed at improving image quality through noise reduction and resolution enhancement, followed by detailed methods for cellular detection and neuron morphology reconstruction.

3.1 Low-level image processing

Low-level image processing is essential to remove unwanted attributes and artifacts that may be misinterpreted as meaningful signals in biological image sets. Image denoising and super-resolution techniques are crucial for enhancing the quality and resolution of neural images, thereby facilitating studies in connecto-informatics. These images are often compromised by noise, artifacts, and limited resolution, which can obscure accurate interpretation and analysis. Image denoising techniques aim to reduce noise and improve the clarity of images, while super-resolution methods aim to increase the resolution and detail of low-resolution images. Together, these image processing techniques hold immense promise for advancing our understanding of the brain’s structure and function. High-quality neural images, refined through denoising and super-resolution processes, enable more accurate segmentation, precise localization of neural activity, and detailed analysis of brain connectivity.

3.1.1 Image denoising

Image denoising involves removing or reducing unwanted noise while preserving essential image features and structures in neural images. This noise can originate from various sources, including labeling imperfections, signal acquisition processes, and innate tissue features. Image denoising techniques utilize statistical models, filtering algorithms, and, increasingly, machine learning approaches to effectively suppress noise and improve the image’s signal-to-noise ratio (SNR) [54]. These techniques are particularly crucial for fluorescence images, where specific noise patterns and characteristics must be accurately managed to ensure precise data analysis and interpretation. However, the challenges of image denoising are significant, especially when the original images are of low quality. High noise levels and low resolution complicate the denoising process, making it difficult to distinguish between noise and essential image features.
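A minimal illustration of the SNR bookkeeping involved, using a simple moving-average filter on a synthetic 1D profile; real denoisers are far more sophisticated, but the before/after SNR comparison is computed the same way.

```python
import numpy as np

# A smooth 1D "intensity profile" corrupted with additive Gaussian noise,
# then cleaned with a moving-average filter. SNR is measured against the
# known ground truth, which is only possible in a synthetic example.
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 1000)
clean = np.sin(x) + 2.0                      # ground-truth signal
noisy = clean + rng.normal(0, 0.3, x.size)   # additive Gaussian noise

def snr_db(signal, estimate):
    residual = signal - estimate
    return 10 * np.log10(np.sum(signal**2) / np.sum(residual**2))

kernel = np.ones(15) / 15                    # moving-average filter
denoised = np.convolve(noisy, kernel, mode="same")

print(f"noisy SNR:    {snr_db(clean, noisy):.1f} dB")
print(f"denoised SNR: {snr_db(clean, denoised):.1f} dB")
```

Averaging over 15 samples cuts the noise variance roughly 15-fold, at the cost of slightly blurring sharp features, which previews the over- versus under-denoising trade-off discussed later in this section.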

The advent of deep learning has brought significant attention to advanced image denoising algorithms. Initially, supervised learning methods like denoising CNNs were prevalent, but they require extensive high-resolution training data, which can be challenging to obtain for fluorescent biological images [55,56,57,58]. Consequently, recent developments have shifted towards self-supervised methods, which can operate with minimal or even single-image datasets.

One of the earliest deep learning-based image denoising methods that presented a solution to the difficulty in obtaining training data was CARE, which used the U-Net architecture to enhance the quality of images using pairs of low- and high-SNR images as training datasets [59]. More recently, frameworks like that of Wang et al. [60] use transfer learning to integrate supervised and self-supervised learning, maintaining denoising performance without extensive training datasets. Noise2Void introduced a novel approach using CNNs to leverage the inherent noise characteristics within single noisy images, employing a blind-spot strategy that allows training directly on the data without needing a clean target image [61]. Despite these advancements, practical challenges remain, such as the correlation of noise among adjacent pixels in microscopy, which Noise2Void’s assumptions may not address. Structured Noise2Void [62] and Noise2SR [63] have evolved these concepts by enhancing self-supervised learning techniques and integrating super-resolution modules to improve training and denoising outcomes.
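The blind-spot idea can be sketched in a few lines: selected pixels are replaced by a random neighbor, and the network (omitted here) is then trained to predict the original, still-noisy values at those positions. This is a conceptual sketch of the masking step only, not Noise2Void's actual implementation.

```python
import numpy as np

# Blind-spot input construction: mask a handful of pixels in a noisy
# training patch by copying a random non-center neighbor over each one.
# The training target is the original patch value at the masked positions.
rng = np.random.default_rng(2)
patch = rng.random((64, 64))                     # a noisy training patch

def blind_spot_mask(patch, n_spots=32, rng=rng):
    masked = patch.copy()
    ys = rng.integers(1, patch.shape[0] - 1, n_spots)
    xs = rng.integers(1, patch.shape[1] - 1, n_spots)
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while (dy, dx) == (0, 0):                # exclude the pixel itself
            dy, dx = rng.integers(-1, 2, 2)
        masked[y, x] = patch[y + dy, x + dx]     # fill the blind spot
    return masked, (ys, xs)                      # input and mask coordinates

masked, coords = blind_spot_mask(patch)
print(masked.shape)  # (64, 64)
```

Because the network never sees the true value at a blind spot, it cannot simply copy the noise through, which is what forces it to learn the underlying signal under the assumption of pixel-wise independent noise.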

MCSC-net is another image-denoising approach, tailored specifically to fluorescent images, that uses a DNN to model and remove noise described by a Poisson-Gaussian distribution [57]. Real-time denoising methods like DeepCAD-RT use adjacent frames for training, enabling denoising during ongoing imaging [64]. Challenges such as brightness shift due to non-zero-mean noise have been addressed by algorithms like ISCL: Independent Self-Cooperative Learning for Unpaired Image Denoising [65] (Fig. 1d). This method combines self-supervised learning with cyclic adversarial learning for unpaired learning and has been shown to outperform other unpaired and blind denoising methods.
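The Poisson-Gaussian model referenced above is straightforward to simulate: signal-dependent shot noise follows a Poisson distribution scaled by the detector gain, with additive Gaussian read noise on top. The parameter values below are illustrative only.

```python
import numpy as np

# Simulate a fluorescence measurement under a Poisson-Gaussian noise model:
# photon shot noise (Poisson, scaled by gain) plus Gaussian read noise.
rng = np.random.default_rng(3)
gain, read_sigma = 0.1, 2.0
clean = np.full((128, 128), 50.0)             # uniform fluorescence level

shot = rng.poisson(clean / gain) * gain       # signal-dependent shot noise
noisy = shot + rng.normal(0, read_sigma, clean.shape)  # read noise

# Variance should be approximately gain * signal + read_sigma**2
# = 0.1 * 50 + 4 = 9, i.e. a standard deviation near 3.
print(f"mean {noisy.mean():.1f}, std {noisy.std():.2f}")
```

The key property this captures is that noise variance grows with signal intensity, which is why algorithms assuming purely additive Gaussian noise can misbehave on bright fluorescent structures.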

The primary goal of image denoising in fluorescent imaging is to facilitate further processing, and accordingly, many algorithms are designed to incorporate additional processing steps beyond mere denoising. For instance, DenoiSeg uses a self-supervised learning approach for denoising and segmentation using a single noisy image [66]. Similarly, Deconoising employs a self-supervised method that combines denoising with deconvolution of fluorescent images, yielding sharper and clearer images, which is essential for images with fine structures such as axons [67].

As deep learning-based image denoising continues to evolve, it remains essential for enhancing feature detection in neural imaging, crucial for analyzing neural connectivity and function. However, applying these tools involves carefully balancing noise reduction against the preservation of crucial image details [68]. Over-denoising may result in the loss of important details, while under-denoising may leave excessive noise, potentially leading to data misinterpretation; maintaining this balance requires diligent judgment from users. Additionally, the generalizability of these algorithms is challenged by variability in imaging conditions and data diversity, underscoring the need for comprehensive training datasets. Despite these challenges, these algorithms significantly enhance the SNR, facilitating more accurate segmentation, visualization, and interpretation of neural structures. This improvement is indispensable for neural circuit analysis and mesoscale connectome mapping, serving as a key preprocessing step in fluorescence microscopy.

3.1.2 Image super-resolution

Super-resolution image processing techniques are crucial for enhancing spatial resolution and detail, particularly important in 3D microscopy, where axial resolution is typically two times worse than the lateral resolution, creating resolution anisotropy [69]. These techniques, using interpolation, regularization, and advanced learning-based methods, reconstruct missing details by leveraging spatial and contextual information within images [70, 71]. Reconstruction-based approaches combine multiple low-resolution images to recapture lost high-frequency components, whereas deep learning-based methods predict these components to refine image resolution [72].
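As a minimal illustration of the interpolation-based end of this spectrum, an anisotropic stack can be resampled to isotropic voxels with a spline interpolator. This only redistributes existing information, whereas the learning-based methods discussed below additionally predict the missing high-frequency content; the voxel sizes here are hypothetical:

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic(stack, lateral_um=0.5, axial_um=1.0, order=3):
    """Resample a (z, y, x) stack so its voxels become isotropic, by spline
    interpolation along the undersampled axial (z) axis only."""
    factor = axial_um / lateral_um  # e.g. 2x upsampling along z
    return zoom(stack, (factor, 1.0, 1.0), order=order)
```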

Despite these advances, practical challenges persist, including high computational demands, sensitivity to input image quality, and steep learning curves, particularly for users without a background in computational imaging or machine learning. Additionally, the dependency on extensive, high-quality training datasets for learning-based methods limits their applicability across different microscopy modalities due to data availability and representativeness issues.

Addressing these problems, recent innovations have been proposed. Weigert et al. [73] introduced a super-resolution framework that reconstructs isotropic 3D data by pairing high-resolution lateral images with low-resolution axial images blurred from a non-isotropic image for training the network. Generative adversarial network (GAN)-based frameworks have been pivotal, utilizing matched pairs of low- and high-resolution images acquired through experiments for training [74]. Another GAN-based approach uses an image-degrading model to artificially create low-resolution images required for training, derived from their high-resolution counterparts, allowing the network to reconstruct super-resolution images from new low-resolution inputs [75].
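The image-degradation strategy behind the second GAN approach can be sketched in a few lines: the low-resolution half of each training pair is fabricated by blurring and downsampling the high-resolution image. This is a generic stand-in for the degradation model in [75], with an assumed Gaussian blur and decimation factor:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, factor=2, blur_sigma=1.0):
    """Fabricate a low-resolution training input from a high-resolution
    image: Gaussian blur (approximating the optics) then decimation."""
    return gaussian_filter(hr, blur_sigma)[::factor, ::factor]
```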

In scenarios where training data is scarce, as is common in fluorescent microscopy, Eilers and Ruckebusch [76] introduced a non-deep-learning super-resolution algorithm that applies interpolation to single images, requiring no training and offering fast, simple resolution improvement. For cases where only a limited training set is available, a CNN-based approach to super-resolution was proposed [77]. Deep-SLAM, focused on light-sheet microscopy, trains DNNs on pairs of raw lateral slices and their resolution-degraded counterparts to restore isotropic resolution in the axial slices [78].

Particularly noteworthy are a cycleGAN-based algorithm [79] and Self-Net [80], a rapid, self-supervised learning approach; both minimize the need for extensive datasets by leveraging high-resolution lateral images as training targets for their low-resolution axial counterparts. These methods streamline the training process, reduce computational requirements, and facilitate high-quality image restoration across all types of 3D fluorescence microscopy.

Super-resolution processing not only enhances image resolution beyond the limits of current imaging technology but also improves visualization of fine structures, such as neuronal components. Although there are numerous promising developments and research efforts on super-resolution algorithms, a universally applicable method for various modalities has not yet been developed. Establishing a standardized method would greatly benefit researchers, integrating these advancements into the connecto-informatics image processing pipeline.

3.2 Cell segmentation

The brain comprises a multitude of cell types, such as neurons and glial cells, distinguished by their morphology, topographic position, molecular signatures, and so forth. Cell segmentation (i.e., of the cell body, or soma) provides information about cell density and type in distinct brain regions that is crucial for understanding the intricate organization of brain connectivity at the cellular level. Variations in these attributes within specific brain regions have been linked to neurological disorders such as Parkinson’s disease [81,82,83,84,85,86]. The 3D topographical organization of cells, which relates to cell-type-specific connectivity, further highlights the complexity of neural networks [10]. Techniques like STPT have enabled researchers to map spatial cell-type distributions within the cerebrovascular network, revealing the elaborate cellular organization underlying brain circuits [87, 88]. Accurate detection and identification of cells are essential for unraveling the complexities of neural circuit connectivity, function, and organization. This understanding is pivotal for advancing our knowledge of brain functionality in both health and disease, potentially leading to improved treatments for neurological conditions.

However, accurate detection and identification of cells pose significant challenges, including the resolution limitations of current imaging technologies and the difficulty of distinguishing between cell types in densely packed regions. These issues underscore the need for advanced segmentation and identification tools for precise analysis. ImageJ, a conventional image analysis tool, facilitates soma detection through plugins that allow segmentation and quantification via manual parameter adjustments [89, 90]. Yet the rapid advancement of imaging technology has produced large-scale, high-resolution images, making manual segmentation time-consuming and labor-intensive.

To address this, several automatic 3D soma detection algorithms for fluorescent images have been developed [91,92,93]. One algorithm achieves automatic large-scale 3D soma detection through multiscale morphological closing and adaptive thresholds applied to the images [94]. The shift from manual manipulation to automated algorithms marks a significant development in cellular-level analysis for neural circuit mapping. Continuous efforts are being made to move beyond prohibitively labor-intensive manual cellular segmentation, and AI has recently been applied to this end [95]. In particular, deep learning-based approaches have been instrumental in advancing cell detection [96]. CNNs have been trained to detect and segment densely packed cells automatically, even in partially labeled datasets, revealing crucial topographical information and possible cell-type-specific functions of PV cells in the STN [10, 97] (Fig. 1e). Another method uses a DNN to automatically detect 3D somas in whole mouse brain images, allowing the detection of large populations of cells [98].
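A toy sketch of the closing-plus-adaptive-threshold idea (not the published large-scale implementation of [94]) might look as follows; the structuring-element size, window, and offset are hypothetical parameters that would need tuning per dataset:

```python
import numpy as np
from scipy import ndimage

def detect_somas(volume, closing_size=(3, 3, 3), window=9, offset=10.0):
    """Toy soma detector: grayscale closing fills small gaps inside bright
    cell bodies, then each voxel is thresholded against its local mean."""
    closed = ndimage.grey_closing(volume, size=closing_size)
    local_mean = ndimage.uniform_filter(closed, size=window)
    mask = closed > local_mean + offset  # adaptive threshold
    labels, n_somas = ndimage.label(mask)
    centroids = ndimage.center_of_mass(volume, labels, range(1, n_somas + 1))
    return n_somas, centroids
```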

Tools like Fiji/ImageJ, enhanced with deep learning plugins such as DeepImageJ, allow users either to use pre-trained CNN models or to train their own models for cell detection tasks [99]. Despite their advantages, deep learning-based methods often face challenges such as slow processing due to the need for extensive training data, limited applicability to whole-brain images, and difficulty keeping pace with high-throughput image generation. Recent methods, such as a two-stage DNN-based algorithm for fast and accurate soma detection in whole mouse brain images, address these challenges by first filtering out sub-images without somas and then segmenting those with identified somas [100].
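The two-stage strategy can be sketched as follows, with plain thresholding standing in for the DNN segmenter of [100]; the point is that a cheap first stage prunes most of the volume before the expensive second stage runs:

```python
import numpy as np

def two_stage_detect(tiles, brightness_cut=50.0):
    """Sketch of a two-stage pipeline: stage 1 cheaply discards tiles with
    no candidate somas, so the costly segmentation in stage 2 runs only on
    the remaining fraction of a whole-brain volume."""
    candidates = [t for t in tiles if t.max() > brightness_cut]  # stage 1: filter
    # stage 2: per-tile segmentation (a DNN in the cited work; thresholding here)
    return [t > brightness_cut for t in candidates]
```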

Further advancements include weakly and self-supervised cell segmentation methods developed to reduce the burden of manually creating pixel-level ground-truth training labels [101, 102]. Open-source software like Cellpose, which uses a U-Net-based algorithm, requires minimal user intervention and allows room for additional training, making it accessible and user-friendly for various cell segmentation tasks [103].

While accurate cell segmentation is crucial for further brain mapping analysis at the cellular level, subsequent quantification and identification are also essential in connecto-informatics analysis. CellProfiler [104], an early software tool widely used for cell phenotype identification, and newer tools such as CellCognition and CellSighter use deep learning and unsupervised learning to automate the analysis of cells based on their phenotypes [105, 106]. Another algorithm demonstrated highly accurate classification of cells by phenotype in a mixed-cell-population image [107]. This algorithm used self-label clustering, with the primary objective of achieving precise cell identification based on morphological characteristics. These tools offer potential for expedited circuit mapping analysis, alleviating a time-consuming bottleneck in the workflow.
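As a minimal stand-in for such unsupervised phenotype grouping (illustrating the principle only, not any of the cited tools), cells described by morphological feature vectors, for example area and circularity, can be clustered without manual labels using a plain k-means loop:

```python
import numpy as np

def kmeans(features, k=2, iters=50, seed=0):
    """Minimal k-means: group cells by morphological feature vectors
    without any manually assigned phenotype labels."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # distance of every cell to every cluster center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        centers = np.array([features[assign == j].mean(axis=0) for j in range(k)])
    return assign
```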

Accurate cell detection and identification are pivotal for exploring the morphological, connectivity, and functional aspects of cells, thereby enhancing our understanding of mesoscale neural circuits. Although automated and manual detection methods offered by various software tools facilitate this analysis, challenges such as high variability in cell morphology and potential algorithmic bias in automated tools can affect the reliability of cell identification and subsequent analyses. Recognizing and addressing these challenges is essential for advancing our comprehension of neural circuitry, function, and organization.

3.3 Neuronal morphology reconstruction

Neurites are cellular processes that project from the cell body of a neuron. These extensions encompass both axons and dendrites, which are essential for neural communication and connectivity, facilitating information transmission throughout the nervous system. Digitally reconstructing these neuronal morphologies from imaging data enables the analysis and integration of neural networks across various modalities. Recent advancements in computer-assisted tracing algorithms and technologies have enabled large-scale neuron reconstruction efforts, providing insights into the brain’s mesoscale connectivity patterns and enhancing our understanding of its structure and organization [108, 109]. However, challenges such as high computational cost and the technical complexity of capturing detailed neuronal structures persist, highlighting the need for advanced tools in neuron reconstruction.

NeuTube1.0, an open-source platform, allows detailed neuron reconstruction and neural tracing [110]. It facilitates both 2D and 3D visualization and tracing of neurons from fluorescent images, employing a semi-automatic approach with seed-based tracing and path-searching algorithms within a cylindrical fitting model. This method allows efficient visualization, reconstruction, and editing of neuron structures, providing a valuable resource for researchers (Fig. 1f). Using neuTube1.0, researchers have analyzed the spatial synaptic connectivity pattern of the hippocampus using mGRASP, displayed on reconstructed 3D neuron structures [12, 111]. Additionally, neuTube1.0 was used to create a comprehensive atlas of the larval zebrafish brain at cellular resolution by systematically mapping the cellular composition and connectivity patterns of 1,955 reconstructed single neurons [112].

Another open-source program, Vaa3D, integrated with TeraFly and TeraVR, is a cross-platform visualization and analysis system that allows visualization of terabyte-scale images and neuron tracing in a virtual-reality environment [113, 114]. TeraFly efficiently handles large-scale 3D image data, focusing on specific regions of interest at varying levels of detail, while TeraVR provides an immersive environment for neuron reconstruction, facilitating precise tracing and annotation [115, 116]. Utilizing the ‘Virtual Finger’ algorithm, Vaa3D has facilitated the semi-automatic tracing of over 1,700 neurons from mouse brain images obtained using fMOST, revealing the morphological diversity of single neurons at a brain-wide scale [117]. Additionally, the same tools were used to characterize neurons in the human brain by reconstructing 852 neurons from images obtained with a newly proposed adaptive cell tomography (ACTomography), capturing cortical neurons individually injected with dyes in human brain tissue [118].

The MouseLight project has reconstructed the morphology of 1,000 projection neurons using a semi-automatic pipeline that classifies axonal structures, generates a probability stack for skeleton extraction and segmentation, and refines axonal segment reconstructions through human annotation [119]. This project has uncovered previously unknown cell types and elucidated the organization of long-range connections within the mouse brain.

Recent work in cortical cell subtype mapping has reconstructed 6,357 single neurons in the mPFC with the Fast Neurite Tracer (FNT) software using images obtained with fMOST, classifying axon projections into subtypes and revealing the topographical organization of PFC axon projections [120]. The FNT software facilitates the tracing of large image datasets by dividing them into smaller three-dimensional cubes. It employs Dijkstra’s algorithm, a method for finding the shortest paths between nodes in a graph, which in this context helps visualize and trace neurons accurately by determining the most efficient routes along neurite paths. Furthermore, building on single-neuron reconstruction data traced through neuTube and FNT, Gao et al. [121] reconstructed over 2,000 additional neurons and classified them into finer subtypes based on axon-dendrite features, revealing inter-connectivity among projection neuron types in the PFC. Most recently, Qiu et al. [122] reconstructed 10,100 single neurons to map the brain-wide spatial organization of neurons in the mouse hippocampus. By manually reconstructing single neurons, they revealed patterns and subtypes of neurons within the hippocampus, which serve as a basis for further understanding its functions.
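Since shortest-path search is the conceptual core of this class of tracers, a minimal Dijkstra sketch on a 2D intensity grid shows how it applies: making bright voxels cheap to traverse forces the optimal path to follow the labeled neurite between two seed points. The cost function below is a hypothetical choice for illustration, not FNT’s actual implementation:

```python
import heapq

def trace_path(intensity, start, goal):
    """Dijkstra's shortest path over a voxel grid: bright voxels are cheap
    to cross, so the path hugs the labeled neurite between two seeds."""
    rows, cols = len(intensity), len(intensity[0])
    max_i = max(max(row) for row in intensity)
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # inverse-brightness step cost (hypothetical choice)
                nd = d + 1.0 + max_i - intensity[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```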

Although semi-automatic tools predominate, significant strides have been made in developing automatic algorithms for neuron reconstruction [123,124,125,126]. Yet the inherent variability in datasets, influenced by different animal models, imaging techniques, and neuron types, presents considerable challenges to relying solely on automatic algorithms [127,128,129,130,131]. Automated algorithms also struggle with densely interwoven dendrites and axons from multiple labeled neurons.

While existing methods excel in single neuron morphology, they struggle with accurately reconstructing densely structured neurons. Computational strategies like the TREES toolbox simulate and analyze the complex branching patterns of neurons based on branch order to reconstruct multiple neurons, while NeuroGPS-Tree utilizes spatial information of cell bodies and statistical distribution to iteratively detect and eliminate incorrect connections between two neuron reconstructions to accurately separate intertwined neurons [132, 133]. Li et al. [134] introduced G-Cut, a novel development that segments densely interwoven neuron clusters. This tool uses a graph-based representation of cell bodies to calculate the global optimum, automatically segmenting individual neurons within a cluster. G-Cut demonstrates higher accuracy in segmentation compared to previously mentioned methods. GTree was developed as an open-source tool for brain-wide dense neuron reconstruction by building on NeuroGPS to identify neurons and integrate a display module to check errors for higher reconstruction accuracy [135].

While software tools like NeuroGPS, TREES toolbox, and G-Cut advance neuron reconstruction, they often overlook errors such as neuron entanglement and interference from passing axons, which are crucial for pruning. The SNAP pipeline addresses this gap by offering structured pruning to eliminate reconstruction errors and disentangle neuron reconstructions, enhancing accuracy and reducing the need for manual curation [136].

Despite these state-of-the-art advances, semi-automatic methods are preferred in large-scale brain-wide neuron reconstruction efforts. Central to understanding the limitations and potentials of automated tracing algorithms is the BigNeuron project, a collaborative project aimed at benchmarking the performance of these algorithms across diverse light microscopy datasets [137]. BigNeuron aims to enhance automatic neuron tracing tools by offering a standardized comparison platform. It creates a diverse, cross-species dataset for benchmarking, provides gold standard annotations for select datasets, and evaluates 35 automatic tracing algorithms. This initiative advances algorithm development for broader benchmarking and underscores the importance of human expertise in generating gold-standard datasets for accurate comparisons.

The evolution of AI, particularly deep learning, offers a promising future for neuron reconstruction, automating tasks that once heavily relied on human expertise, especially in dataset preparation. Emerging methods are significantly reducing, and in some cases eliminating, the need for human intervention in creating training datasets for neuron reconstruction. By combining traditional tracing methods to create pseudo-labels needed for training and the 3D deep learning network for neuron reconstruction, Zhao et al. [138] suggested a neuron tracing framework that does not require manual annotation. Another novel approach utilized a weakly supervised CNN for a fully automatic neuron tracing method, including generating automatic training labels [139]. This method was further improved to detect and trace distorted or broken structures using probability maps estimated by 3D residual CNN [140].

Additionally, using a self-supervised approach, a 3D CNN was used to predict the order of permuted slices in the 3D image, leveraging the tube-like structure of axons for label-free feature extraction and enhancing downstream segmentation with a 3D U-Net model [141]. MPGAN also utilized a self-supervised method to develop a two-stage generative model strategy that creates synthetic 3D images with voxel-level labels from unlabeled data, enhancing segmentation network performance and improving neuron reconstruction methods [142]. These approaches promise to alleviate the bottleneck in neuron tracing by streamlining the process of generating training datasets and applications.
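The slice-permutation pretext task reduces to a simple data-generation rule, sketched below under the assumption of three adjacent slices (the cited method’s exact configuration may differ): shuffle the slices and let the network predict which permutation was applied, forcing it to learn the axial continuity of tube-like axonal structures.

```python
import itertools
import numpy as np

PERMS = list(itertools.permutations(range(3)))  # 6 possible slice orderings

def make_pretext_sample(volume, rng):
    """Label-free training pair: a shuffled triple of adjacent z-slices and
    the index of the permutation that was applied to it."""
    z = rng.integers(0, volume.shape[0] - 3)
    slices = volume[z:z + 3]
    label = rng.integers(0, len(PERMS))
    shuffled = slices[list(PERMS[label])]
    return shuffled, label
```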

Neuron reconstruction is critical for analyzing neural circuits, including measurements like dendritic length and synaptic connections. Incorporating deep learning into this process marks a significant shift towards automation, reducing the dependence on human expertise. Future improvements should focus on enhancing the models’ accuracy, reliability, and generalizability. As deep learning evolves, it offers biologists advanced tools for uncovering the complex organization of neural structures. However, challenges related to data quality, algorithmic adaptability, and the integration of diverse imaging data remain, highlighting the need for continued innovation in automated neuron reconstruction methodologies (Table 3).

Table 3 Summary of selected cell and neuron reconstruction tools

4 Discussion & conclusion

In summary, this review provides an up-to-date overview of current advances in image processing tools, highlighting the integration of AI to tackle the challenges arising from the growing volume and diversity of generated images. The integration of AI has shown promising results in alleviating the image processing bottleneck, potentially revolutionizing the field. However, the need for manual intervention persists due to factors such as quality variation and complexity in neural data. Additionally, certain advanced tools may initially face accessibility limitations or implementation constraints across different modalities.

While AI frameworks may provide enhanced accuracy and faster image processing, the inherent features of neural data make human intervention inevitable. Moreover, the challenge of gathering sufficient training datasets for deep learning poses a significant limitation. Ongoing efforts aim to overcome these challenges and integrate deep learning more comprehensively throughout the image processing workflow. This integration aims to minimize manual input and provide a more unified, efficient image processing pipeline that accommodates various experimental and imaging approaches. Such an approach is crucial for expedited analysis of mesoscale brain connectivity mapping data, highlighting the continuous pursuit of automation while acknowledging the indispensable role of human expertise.

Connecto-informatics, as applied at this level of analysis, holds great promise in illuminating the underlying mechanisms behind diverse brain functions and the development of neurological diseases linked to disruptions in neural circuits. Furthermore, it is essential to note that advanced tools for connecto-informatics at the microscale are equally significant despite being omitted in this review. As the field continues to evolve, the pivotal role of interdisciplinary collaboration and the integration of cutting-edge technologies cannot be overstated. These collaborative efforts will undoubtedly drive further advancements in our comprehension of brain connectivity at the mesoscale level, paving the way for new insights and potential therapeutic strategies.

Data availability

No datasets were generated or analysed during the current study.

References

  1. Oh SW, Harris JA, Ng L et al (2014) A mesoscale connectome of the mouse brain. Nature 508:207–214. https://doi.org/10.1038/nature13186

  2. Friedmann D, Pun A, Adams EL et al (2020) Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network. Proc Natl Acad Sci 117:11068–11075. https://doi.org/10.1073/pnas.1918465117

  3. Scheffer LK, Xu CS, Januszewski M et al (2020) A connectome and analysis of the adult Drosophila central brain. Elife 9:e57443. https://doi.org/10.7554/elife.57443

  4. Foster NN, Barry J, Korobkova L et al (2021) The mouse cortico-basal ganglia-thalamic network. Nature 598:188–194. https://doi.org/10.1038/s41586-021-03993-3

  5. Zingg B, Hintiryan H, Gou L et al (2014) Neural networks of the Mouse Neocortex. Cell 156:1096–1111. https://doi.org/10.1016/j.cell.2014.02.023

  6. Xu F, Shen Y, Ding L et al (2021) High-throughput mapping of a whole rhesus monkey brain at micrometer resolution. Nat Biotechnol 39:1521–1528. https://doi.org/10.1038/s41587-021-00986-5

  7. Sporns O, Tononi G, Kötter R (2005) The human connectome: a structural description of the human brain. PLoS Comput Biol 1:e42. https://doi.org/10.1371/journal.pcbi.0010042

  8. Stephan KE, Kamper L, Bozkurt A et al (2001) Advanced database methodology for the Collation of Connectivity data on the macaque brain (CoCoMac). Philos Trans R Soc Lond Ser B: Biol Sci 356:1159–1186. https://doi.org/10.1098/rstb.2001.0908

  9. Cook SJ, Jarrell TA, Brittin CA et al (2019) Whole-animal connectomes of both Caenorhabditis elegans sexes. Nature 571:63–71. https://doi.org/10.1038/s41586-019-1352-7

  10. Jeon H, Lee H, Kwon D-H et al (2022) Topographic connectivity and cellular profiling reveal detailed input pathways and functionally distinct cell types in the subthalamic nucleus. Cell Rep 38:110439. https://doi.org/10.1016/j.celrep.2022.110439

  11. Chen F, Tillberg PW, Boyden ES (2015) Expansion microscopy. Science 347:543–548. https://doi.org/10.1126/science.1260088

  12. Kim J, Zhao T, Petralia RS et al (2012) mGRASP enables mapping mammalian synaptic connectivity with light microscopy. Nat Methods 9:96–102. https://doi.org/10.1038/nmeth.1784

  13. Chung K, Wallace J, Kim S-Y et al (2013) Structural and molecular interrogation of intact biological systems. Nature 497:332–337. https://doi.org/10.1038/nature12107

  14. Renier N, Wu Z, Simon DJ et al (2014) iDISCO: a simple, Rapid Method to Immunolabel large tissue samples for volume imaging. Cell 159:896–910. https://doi.org/10.1016/j.cell.2014.10.010

  15. Sofroniew NJ, Flickinger D, King J, Svoboda K (2016) A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging. eLife 5:e14472. https://doi.org/10.7554/elife.14472

  16. Seiriki K, Kasai A, Hashimoto T et al (2017) High-speed and scalable whole-brain imaging in rodents and Primates. Neuron 94:1085–1100e6. https://doi.org/10.1016/j.neuron.2017.05.017

  17. Ueda HR, Dodt H-U, Osten P et al (2020) Whole-brain profiling of cells and circuits in mammals by tissue Clearing and Light-Sheet Microscopy. Neuron 106:369–387. https://doi.org/10.1016/j.neuron.2020.03.004

  18. Zhong Q, Li A, Jin R et al (2021) High-definition imaging using line-illumination modulation microscopy. Nat Methods 18:309–315. https://doi.org/10.1038/s41592-021-01074-x

  19. Gong H, Zeng S, Yan C et al (2013) Continuously tracing brain-wide long-distance axonal projections in mice at a one-micron voxel resolution. NeuroImage 74:87–98. https://doi.org/10.1016/j.neuroimage.2013.02.005

  20. Li A, Gong H, Zhang B et al (2010) Micro-optical Sectioning Tomography to obtain a high-resolution atlas of the mouse brain. Science 330:1404–1408. https://doi.org/10.1126/science.1191776

  21. Zeng H (2018) Mesoscale connectomics. Curr Opin Neurobiol 50:154–162. https://doi.org/10.1016/j.conb.2018.03.003

  22. Bon P, Cognet L (2022) On some current challenges in High-Resolution Optical Bioimaging. ACS Photonics 9:2538–2546. https://doi.org/10.1021/acsphotonics.2c00606

  23. Meiniel W, Olivo-Marin J-C, Angelini ED (2018) Denoising of Microscopy images: a review of the state-of-the-Art, and a new sparsity-based method. IEEE Trans Image Process 27:3842–3856. https://doi.org/10.1109/tip.2018.2819821

  24. Paxinos G, Franklin KB (2019) Paxinos and Franklin’s the mouse brain in stereotaxic coordinates. Academic

  25. Dong HW (2008) The Allen reference atlas: a digital color brain atlas of the C57Bl/6J male mouse. ix, 366 pp

  26. Wang Q, Ding S-L, Li Y et al (2020) The Allen Mouse Brain Common coordinate Framework: a 3D reference Atlas. Cell 181:936–953e20. https://doi.org/10.1016/j.cell.2020.04.007

  27. Chon U, Vanselow DJ, Cheng KC, Kim Y (2019) Enhanced and unified anatomical labeling for a common mouse brain atlas. Nat Commun 10:5067. https://doi.org/10.1038/s41467-019-13057-w

  28. Erö C, Gewaltig M-O, Keller D, Markram H (2018) A cell atlas for the mouse brain. Front Neuroinform 12:84. https://doi.org/10.3389/fninf.2018.00084

  29. Ortiz C, Navarro JF, Jurek A et al (2020) Molecular atlas of the adult mouse brain. Sci Adv 6:eabb3446. https://doi.org/10.1126/sciadv.abb3446

  30. Bulovaite E, Qiu Z, Kratschke M et al (2022) A brain atlas of synapse protein lifetime across the mouse lifespan. Neuron. https://doi.org/10.1016/j.neuron.2022.09.009

  31. Young DM, Darbandi SF, Schwartz G et al (2021) Constructing and optimizing 3D atlases from 2D data with application to the developing mouse brain. Elife 10:e61408. https://doi.org/10.7554/elife.61408

  32. Newmaster KT, Nolan ZT, Chon U et al (2020) Quantitative cellular-resolution map of the oxytocin receptor in postnatally developing mouse brains. Nat Commun 11:1885. https://doi.org/10.1038/s41467-020-15659-1

  33. Weber DA, Ivanovic M (1994) Correlative image registration. Semin Nucl Med 24:311–323. https://doi.org/10.1016/s0001-2998(05)80021-2

  34. Cardoso M, Clarkson M, Modat M, Ourselin S (2012) NiftySeg: open-source software for medical image segmentation, label fusion and cortical thickness estimation

  35. Du X, Dang J, Wang Y et al (2016) A parallel nonrigid registration algorithm based on B-spline for medical images. Comput Math Methods Med 2016:7419307. https://doi.org/10.1155/2016/7419307

  36. Modat M, Ridgway GR, Taylor ZA et al (2010) Fast free-form deformation using graphics processing units. Comput Methods Programs Biomed 98:278–284. https://doi.org/10.1016/j.cmpb.2009.09.002

  37. Klein S, Staring M, Murphy K et al (2010) elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imaging 29:196–205. https://doi.org/10.1109/tmi.2009.2035616

  38. Avants BB, Tustison NJ, Song G et al (2011) A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54:2033–2044. https://doi.org/10.1016/j.neuroimage.2010.09.025

  39. Claudi F, Petrucco L, Tyson A et al (2020) BrainGlobe Atlas API: a common interface for neuroanatomical atlases. J Open Source Softw 5:2668. https://doi.org/10.21105/joss.02668

  40. Fürth D, Vaissière T, Tzortzi O et al (2017) An interactive framework for whole-brain maps at cellular resolution. Nat Neurosci 21:1–11. https://doi.org/10.1038/s41593-017-0027-7

  41. Tappan SJ, Eastwood BS, O’Connor N et al (2019) Automatic navigation system for the mouse brain. J Comp Neurol 527:2200–2211. https://doi.org/10.1002/cne.24635

  42. Puchades MA, Csucs G, Ledergerber D et al (2019) Spatial registration of serial microscopic brain images to three-dimensional reference atlases with the QuickNII tool. PLoS ONE 14:e0216796. https://doi.org/10.1371/journal.pone.0216796

  43. Terstege DJ, Oboh DO, Epp JR (2022) FASTMAP: open-source flexible atlas segmentation tool for multi-area processing of biological images. eNeuro 9:ENEURO.0325-21.2022. https://doi.org/10.1523/eneuro.0325-21.2022

  44. Niedworok CJ, Brown APY, Cardoso MJ et al (2016) aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data. Nat Commun 7:11879. https://doi.org/10.1038/ncomms11879

  45. Goubran M, Leuze C, Hsueh B et al (2019) Multimodal image registration and connectivity analysis for integration of connectomic data from microscopy to MRI. Nat Commun 10:5504. https://doi.org/10.1038/s41467-019-13374-0

  46. Young DM, Duhn C, Gilson M et al (2020) Whole-brain image analysis and anatomical Atlas 3D generation using MagellanMapper. Curr Protoc Neurosci 94:e104. https://doi.org/10.1002/cpns.104

  47. Carey H, Pegios M, Martin L et al (2023) DeepSlice: rapid fully automatic registration of mouse brain imaging to a volumetric atlas. Nat Commun 14:5884. https://doi.org/10.1038/s41467-023-41645-4

  48. Xiao D, Forys BJ, Vanni MP, Murphy TH (2021) MesoNet allows automated scaling and segmentation of mouse mesoscale cortical maps using machine learning. Nat Commun 12:5992. https://doi.org/10.1038/s41467-021-26255-2

  49. Ni H, Feng Z, Guan Y et al (2021) DeepMapi: a fully Automatic Registration Method for Mesoscopic Optical Brain images using Convolutional neural networks. Neuroinformatics 19:267–284. https://doi.org/10.1007/s12021-020-09483-7

  50. Qu L, Li Y, Xie P et al (2022) Cross-modal coherent registration of whole mouse brains. Nat Methods 19:111–118. https://doi.org/10.1038/s41592-021-01334-w

    Article  Google Scholar 

  51. Li Z, Shang Z, Liu J et al (2023) D-LMBmap: a fully automated deep-learning pipeline for whole-brain profiling of neural circuitry. Nat Methods 20:1593–1604. https://doi.org/10.1038/s41592-023-01998-6

    Article  Google Scholar 

  52. Tan C, Guan Y, Feng Z et al (2020) DeepBrainSeg: Automated Brain Region Segmentation for micro-optical images with a convolutional neural network. Front Neurosci-switz 14:179. https://doi.org/10.3389/fnins.2020.00179

    Article  Google Scholar 

  53. Wang X, Zeng W, Yang X et al (2021) Bi-channel image registration and deep-learning segmentation (BIRDS) for efficient, versatile 3D mapping of mouse brain. Elife 10:e63455. https://doi.org/10.7554/elife.63455

    Article  Google Scholar 

  54. Fan L, Zhang F, Fan H, Zhang C (2019) Brief review of image denoising techniques. Vis Comput Ind Biomed Art 2:7. https://doi.org/10.1186/s42492-019-0016-7

    Article  Google Scholar 

  55. Zhang K, Zuo W, Chen Y et al (2017) Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process 26:3142–3155. https://doi.org/10.1109/tip.2017.2662206

  56. Chaudhary S, Moon S, Lu H (2022) Fast, efficient, and accurate neuro-imaging denoising via supervised deep learning. Nat Commun 13:5165. https://doi.org/10.1038/s41467-022-32886-w

  57. Chen G, Wang J, Wang H et al (2023) Fluorescence microscopy images denoising via deep convolutional sparse coding. Signal Process Image Commun 117:117003. https://doi.org/10.1016/j.image.2023.117003

  58. Maji SK, Yahia H (2023) Image denoising in fluorescence microscopy using feature based gradient reconstruction. J Med Imaging 10:064004. https://doi.org/10.1117/1.jmi.10.6.064004

  59. Weigert M, Schmidt U, Boothe T et al (2018) Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods 15:1090–1097. https://doi.org/10.1038/s41592-018-0216-7

  60. Wang Y, Pinkard H, Khwaja E et al (2021) Image denoising for fluorescence microscopy by supervised to self-supervised transfer learning. Opt Express 29:41303. https://doi.org/10.1364/oe.434191

  61. Krull A, Buchholz T, Jug F (2019) Noise2Void – learning denoising from single noisy images. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 2124–2132

  62. Broaddus C, Krull A, Weigert M et al (2020) Removing structured noise with self-supervised blind-spot networks. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp 159–163. https://doi.org/10.1109/isbi45749.2020.9098336

  63. Tian X, Wu Q, Wei H, Zhang Y (2022) In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part VI. Lect Notes Comput Sci, pp 334–343. https://doi.org/10.1007/978-3-031-16446-0_32

  64. Li X, Li Y, Zhou Y et al (2023) Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat Biotechnol 41:282–292. https://doi.org/10.1038/s41587-022-01450-8

  65. Lee K, Jeong W-K (2021) ISCL: interdependent self-cooperative learning for unpaired image denoising. IEEE Trans Med Imaging 40:3238–3248. https://doi.org/10.1109/tmi.2021.3096142

  66. Buchholz T-O, Prakash M, Schmidt D et al (2020) DenoiSeg: joint denoising and segmentation. In: ECCV 2020 Workshops, Lecture Notes in Computer Science. Springer International Publishing, pp 324–337

  67. Goncharova AS, Honigmann A, Jug F, Krull A (2020) Improving blind spot denoising for microscopy. In: ECCV 2020 Workshops, Lecture Notes in Computer Science. Springer, Cham, pp 380–393

  68. Cai Y, Wu J, Dai Q (2022) Review on data analysis methods for mesoscale neural imaging in vivo. Neurophotonics 9:041407. https://doi.org/10.1117/1.nph.9.4.041407

  69. Fischer RS, Wu Y, Kanchanawong P et al (2011) Microscopy in 3D: a biologist’s toolbox. Trends Cell Biol 21:682–691. https://doi.org/10.1016/j.tcb.2011.09.008

  70. Isaac JS, Kulkarni R (2015) Super resolution techniques for medical image processing. In: 2015 International Conference on Technologies for Sustainable Development (ICTSD), pp 1–6. https://doi.org/10.1109/ictsd.2015.7095900

  71. Li Y, Sixou B, Peyrin F (2021) A review of the deep learning methods for medical images super resolution problems. IRBM 42:120–133. https://doi.org/10.1016/j.irbm.2020.08.004

  72. Nguyen K, Fookes C, Sridharan S et al (2018) Super-resolution for biometrics: a comprehensive survey. Pattern Recogn 78:23–42. https://doi.org/10.1016/j.patcog.2018.01.002

  73. Weigert M, Royer L, Jug F, Myers G (2017) In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2017, 20th International Conference, Quebec City, QC, Canada, September 11–13, 2017, Proceedings, Part II. Lect Notes Comput Sci, pp 126–134. https://doi.org/10.1007/978-3-319-66185-8_15

  74. Wang H, Rivenson Y, Jin Y et al (2019) Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods 16:103–110. https://doi.org/10.1038/s41592-018-0239-0

  75. Zhang H, Fang C, Xie X et al (2019) High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network. Biomed Opt Express 10:1044. https://doi.org/10.1364/boe.10.001044

  76. Eilers PHC, Ruckebusch C (2022) Fast and simple super-resolution with single images. Sci Rep 12:11241. https://doi.org/10.1038/s41598-022-14874-8

  77. Mannam V, Zhang Y, Yuan X, Howard S (2021) Deep learning-based super-resolution fluorescence microscopy on small datasets. In: Single Molecule Spectroscopy and Superresolution Imaging XIV. SPIE, pp 60–68

  78. Zhao F, Zhu L, Fang C et al (2020) Deep-learning super-resolution light-sheet add-on microscopy (Deep-SLAM) for easy isotropic volumetric imaging of large biological specimens. Biomed Opt Express 11:7273. https://doi.org/10.1364/boe.409732

  79. Park H, Na M, Kim B et al (2022) Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy. Nat Commun 13:3297. https://doi.org/10.1038/s41467-022-30949-6

  80. Ning K, Lu B, Wang X et al (2023) Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy. Light Sci Appl 12:204. https://doi.org/10.1038/s41377-023-01230-2

  81. Mallet N, Micklem BR, Henny P et al (2012) Dichotomous organization of the external globus pallidus. Neuron 74:1075–1086. https://doi.org/10.1016/j.neuron.2012.04.027

  82. Lilascharoen V, Wang EH-J, Do N et al (2021) Divergent pallidal pathways underlying distinct parkinsonian behavioral deficits. Nat Neurosci 24:504–515. https://doi.org/10.1038/s41593-021-00810-y

  83. Hirsch E, Graybiel AM, Agid YA (1988) Melanized dopaminergic neurons are differentially susceptible to degeneration in Parkinson’s disease. Nature 334:345–348. https://doi.org/10.1038/334345a0

  84. Dauer W, Przedborski S (2003) Parkinson’s disease: mechanisms and models. Neuron 39:889–909. https://doi.org/10.1016/s0896-6273(03)00568-3

  85. Lawler AJ, Brown AR, Bouchard RS et al (2020) Cell type-specific oxidative stress genomic signatures in the globus pallidus of dopamine-depleted mice. J Neurosci 40:9772–9783. https://doi.org/10.1523/jneurosci.1634-20.2020

  86. Mastro KJ, Zitelli KT, Willard AM et al (2017) Cell-specific pallidal intervention induces long-lasting motor recovery in dopamine-depleted mice. Nat Neurosci 20:815–823. https://doi.org/10.1038/nn.4559

  87. Liwang JK, Bennett HC, Pi H-J, Kim Y (2023) Protocol for using serial two-photon tomography to map cell types and cerebrovasculature at single-cell resolution in the whole adult mouse brain. STAR Protoc 4:102048. https://doi.org/10.1016/j.xpro.2023.102048

  88. Wu Y, Bennett HC, Chon U et al (2022) Quantitative relationship between cerebrovascular network and neuronal cell types in mice. Cell Rep 39:110978. https://doi.org/10.1016/j.celrep.2022.110978

  89. Ho S-Y, Chao C-Y, Huang H-L et al (2011) NeurphologyJ: an automatic neuronal morphology quantification method and its application in pharmacological discovery. BMC Bioinform 12:230. https://doi.org/10.1186/1471-2105-12-230

  90. Pool M, Thiemann J, Bar-Or A, Fournier AE (2008) NeuriteTracer: a novel ImageJ plugin for automated quantification of neurite outgrowth. J Neurosci Methods 168:134–139. https://doi.org/10.1016/j.jneumeth.2007.08.029

  91. Kayasandik CB, Labate D (2016) Improved detection of soma location and morphology in fluorescence microscopy images of neurons. J Neurosci Methods 274:61–70. https://doi.org/10.1016/j.jneumeth.2016.09.007

  92. Ozcan B, Negi P, Laezza F et al (2015) Automated detection of soma location and morphology in neuronal network cultures. PLoS ONE 10:e0121886. https://doi.org/10.1371/journal.pone.0121886

  93. Chen J, Ding L, Viana MP et al (2020) The Allen cell and structure segmenter: a new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images. bioRxiv 491035. https://doi.org/10.1101/491035

  94. Yan C, Li A, Zhang B et al (2013) Automated and accurate detection of soma location and surface morphology in large-scale 3D neuron images. PLoS ONE 8:e62579. https://doi.org/10.1371/journal.pone.0062579

  95. He G-W, Wang T-Y, Chiang A-S, Ching Y-T (2018) Soma detection in 3D images of neurons using machine learning technique. Neuroinformatics 16:31–41. https://doi.org/10.1007/s12021-017-9342-0

  96. Moen E, Bannon D, Kudo T et al (2019) Deep learning for cellular image analysis. Nat Methods 16:1233–1246. https://doi.org/10.1038/s41592-019-0403-1

  97. Feng L, Song JH, Kim J et al (2019) Robust nucleus detection with partially labeled exemplars. IEEE Access 7:162169–162178. https://doi.org/10.1109/access.2019.2952098

  98. Tyson AL, Rousseau CV, Niedworok CJ et al (2021) A deep learning algorithm for 3D cell detection in whole mouse brain image datasets. PLoS Comput Biol 17:e1009074. https://doi.org/10.1371/journal.pcbi.1009074

  99. Gómez-de-Mariscal E, García-López-de-Haro C, Ouyang W et al (2021) DeepImageJ: a user-friendly environment to run deep learning models in ImageJ. Nat Methods 18:1192–1195. https://doi.org/10.1038/s41592-021-01262-9

  100. Wei X, Liu Q, Liu M et al (2023) 3D soma detection in large-scale whole brain images via a two-stage neural network. IEEE Trans Med Imaging 42:148–157. https://doi.org/10.1109/tmi.2022.3206605

  101. Oh H-J, Lee K, Jeong W-K (2022) Scribble-supervised cell segmentation using multiscale contrastive regularization. In: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp 1–5. https://doi.org/10.1109/isbi52829.2022.9761608

  102. Lee H, Jeong W-K (2020) In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lect Notes Comput Sci, pp 14–23. https://doi.org/10.1007/978-3-030-59710-8_2

  103. Stringer C, Wang T, Michaelos M, Pachitariu M (2021) Cellpose: a generalist algorithm for cellular segmentation. Nat Methods 18:100–106. https://doi.org/10.1038/s41592-020-01018-x

  104. Carpenter AE, Jones TR, Lamprecht MR et al (2006) CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol 7:R100. https://doi.org/10.1186/gb-2006-7-10-r100

  105. Sommer C, Hoefler R, Samwer M, Gerlich DW (2017) A deep learning and novelty detection framework for rapid phenotyping in high-content screening. Mol Biol Cell 28:3428–3436. https://doi.org/10.1091/mbc.e17-05-0333

  106. Amitay Y, Bussi Y, Feinstein B et al (2023) CellSighter: a neural network to classify cells in highly multiplexed images. Nat Commun 14:4302. https://doi.org/10.1038/s41467-023-40066-7

  107. Yao K, Rochman ND, Sun SX (2019) Cell type classification and unsupervised morphological phenotyping from low-resolution images using deep learning. Sci Rep 9:13467. https://doi.org/10.1038/s41598-019-50010-9

  108. Xiao H, Peng H (2013) APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics 29:1448–1454. https://doi.org/10.1093/bioinformatics/btt170

  109. Zhou Z, Liu X, Long B, Peng H (2016) TReMAP: automatic 3D neuron reconstruction based on tracing, reverse mapping and assembling of 2D projections. Neuroinformatics 14:41–50. https://doi.org/10.1007/s12021-015-9278-1

  110. Feng L, Zhao T, Kim J (2015) neuTube 1.0: a new design for efficient neuron reconstruction software based on the SWC format. eNeuro 2:ENEURO.0049-14.2014. https://doi.org/10.1523/eneuro.0049-14.2014

  111. Feng L, Zhao T, Kim J (2012) Improved synapse detection for mGRASP-assisted brain connectivity mapping. Bioinformatics 28:i25–i31. https://doi.org/10.1093/bioinformatics/bts221

  112. Kunst M, Laurell E, Mokayes N et al (2019) A cellular-resolution atlas of the larval zebrafish brain. Neuron 103:21–38.e5. https://doi.org/10.1016/j.neuron.2019.04.034

  113. Peng H, Bria A, Zhou Z et al (2014) Extensible visualization and analysis for multidimensional images using Vaa3D. Nat Protoc 9:193–208. https://doi.org/10.1038/nprot.2014.011

  114. Bria A, Iannello G, Peng H (2015) An open-source Vaa3D plugin for real-time 3D visualization of terabyte-sized volumetric images. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), pp 520–523. https://doi.org/10.1109/isbi.2015.7163925

  115. Bria A, Iannello G, Onofri L, Peng H (2016) TeraFly: real-time three-dimensional visualization and annotation of terabytes of multidimensional volumetric images. Nat Methods 13:192–194. https://doi.org/10.1038/nmeth.3767

  116. Wang Y, Li Q, Liu L et al (2019) TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat Commun 10:3474. https://doi.org/10.1038/s41467-019-11443-y

  117. Peng H, Xie P, Liu L et al (2021) Morphological diversity of single neurons in molecularly defined cell types. Nature 598:174–181. https://doi.org/10.1038/s41586-021-03941-1

  118. Han X, Guo S, Ji N et al (2023) Whole human-brain mapping of single cortical neurons for profiling morphological diversity and stereotypy. Sci Adv 9:eadf3771. https://doi.org/10.1126/sciadv.adf3771

  119. Winnubst J, Bas E, Ferreira TA et al (2019) Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. Cell 179:268–281.e13. https://doi.org/10.1016/j.cell.2019.07.042

  120. Gao L, Liu S, Gou L et al (2022) Single-neuron projectome of mouse prefrontal cortex. Nat Neurosci 25:515–529. https://doi.org/10.1038/s41593-022-01041-5

  121. Gao L, Liu S, Wang Y et al (2023) Single-neuron analysis of dendrites and axons reveals the network organization in mouse prefrontal cortex. Nat Neurosci 26:1111–1126. https://doi.org/10.1038/s41593-023-01339-y

  122. Qiu S, Hu Y, Huang Y et al (2024) Whole-brain spatial organization of hippocampal single-neuron projectomes. Science 383:eadj9198. https://doi.org/10.1126/science.adj9198

  123. Ming X, Li A, Wu J et al (2013) Rapid reconstruction of 3D neuronal morphology from light microscopy images with augmented rayburst sampling. PLoS ONE 8:e84557. https://doi.org/10.1371/journal.pone.0084557

  124. Radojević M, Meijering E (2019) Automated neuron reconstruction from 3D fluorescence microscopy images using sequential Monte Carlo estimation. Neuroinformatics 17:423–442. https://doi.org/10.1007/s12021-018-9407-8

  125. Li Q, Shen L (2020) 3D neuron reconstruction in tangled neuronal image with deep networks. IEEE Trans Med Imaging 39:425–435. https://doi.org/10.1109/tmi.2019.2926568

  126. Zhou Z, Kuo H-C, Peng H, Long F (2018) DeepNeuron: an open deep learning toolbox for neuron tracing. Brain Inf 5:3. https://doi.org/10.1186/s40708-018-0081-2

  127. Donohue DE, Ascoli GA (2011) Automated reconstruction of neuronal morphology: an overview. Brain Res Rev 67:94–102. https://doi.org/10.1016/j.brainresrev.2010.11.003

  128. Chen H, Xiao H, Liu T, Peng H (2015) SmartTracing: self-learning-based neuron reconstruction. Brain Inf 2:135–144. https://doi.org/10.1007/s40708-015-0018-y

  129. Peng H, Meijering E, Ascoli GA (2015) From DIADEM to BigNeuron. Neuroinformatics 13:259–260. https://doi.org/10.1007/s12021-015-9270-9

  130. Brown KM, Barrionuevo G, Canty AJ et al (2011) The DIADEM data sets: representative light microscopy images of neuronal morphology to advance automation of digital reconstructions. Neuroinformatics 9:143–157. https://doi.org/10.1007/s12021-010-9095-5

  131. Liu Y, Wang G, Ascoli GA et al (2022) Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 38:5329–5339. https://doi.org/10.1093/bioinformatics/btac712

  132. Cuntz H, Forstner F, Borst A, Häusser M (2010) One rule to grow them all: a general theory of neuronal branching and its practical application. PLoS Comput Biol 6:e1000877. https://doi.org/10.1371/journal.pcbi.1000877

  133. Quan T, Zhou H, Li J et al (2016) NeuroGPS-Tree: automatic reconstruction of large-scale neuronal populations with dense neurites. Nat Methods 13:51–54. https://doi.org/10.1038/nmeth.3662

  134. Li R, Zhu M, Li J et al (2019) Precise segmentation of densely interweaving neuron clusters using G-Cut. Nat Commun 10:1549. https://doi.org/10.1038/s41467-019-09515-0

  135. Zhou H, Li S, Li A et al (2021) GTree: an open-source tool for dense reconstruction of brain-wide neuronal population. Neuroinformatics 19:305–317. https://doi.org/10.1007/s12021-020-09484-6

  136. Ding L, Zhao X, Guo S et al (2023) SNAP: a structure-based neuron morphology reconstruction automatic pruning pipeline. Front Neuroinformatics 17:1174049. https://doi.org/10.3389/fninf.2023.1174049

  137. Manubens-Gil L, Zhou Z, Chen H et al (2023) BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 20:824–835. https://doi.org/10.1038/s41592-023-01848-5

  138. Zhao J, Chen X, Xiong Z et al (2019) In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part I. Lect Notes Comput Sci, pp 750–759. https://doi.org/10.1007/978-3-030-32239-7_83

  139. Huang Q, Chen Y, Liu S et al (2020) Weakly supervised learning of 3D deep network for neuron reconstruction. Front Neuroanat 14:38. https://doi.org/10.3389/fnana.2020.00038

  140. Huang Q, Cao T, Chen Y et al (2021) Automated neuron tracing using content-aware adaptive voxel scooping on CNN predicted probability map. Front Neuroanat 15:712842. https://doi.org/10.3389/fnana.2021.712842

  141. Klinghoffer T, Morales P, Park Y-G et al (2020) Self-supervised feature extraction for 3D axon segmentation. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 4213–4219. https://doi.org/10.1109/cvprw50498.2020.00497

  142. Liu C, Wang D, Zhang H et al (2022) Using simulated training data of voxel-level generative models to improve 3D neuron reconstruction. IEEE Trans Med Imaging 41:3624–3635. https://doi.org/10.1109/tmi.2022.3191011


Acknowledgements

This work was supported by the Korea Institute of Science and Technology (KIST) Intramural Program, Republic of Korea (2E32901), and the K-brain Project of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (RS-2023-00262880).

Funding

Korea Institute of Science and Technology (KIST) Intramural Program (2E32901) and the K-brain Project of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (RS-2023-00262880).

Author information


Contributions

YC, JK wrote and edited the manuscript. LF, WKJ, JK reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jinhyun Kim.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Choi, Y.K., Feng, L., Jeong, WK. et al. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inf. 11, 15 (2024). https://doi.org/10.1186/s40708-024-00228-9

