REVIEW ARTICLE

Year: 2018 | Volume: 4 | Issue: 4 | Page: 157-165
Biological image analysis using deep learning-based methods: Literature review
Hongkai Wang1, Shang Shang1, Ling Long2, Ruxue Hu3, Yi Wu4, Na Chen4, Shaoxiang Zhang4, Fengyu Cong1, Sijie Lin2
1 School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning Province, China; 2 College of Environmental Science and Engineering, Tongji University, Shanghai, China; 3 Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland; 4 Institute of Digital Medicine, Biomedical Engineering College, Third Military Medical University, Chongqing, China
Date of Web Publication: 28-Dec-2018
Correspondence Address: Fengyu Cong, Faculty of Electronic Information and Electrical Engineering, School of Biomedical Engineering, Dalian University of Technology, No. 2, Linggong Road, Dalian 116024, Liaoning Province, China; Sijie Lin, College of Environmental Science and Engineering, Tongji University, Shanghai 200092, China
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/digm.digm_16_18
Automatic processing of large amounts of microscopic images is important for medical and biological studies. Deep learning has demonstrated better performance than traditional machine learning methods for processing massive quantities of images; therefore, it has attracted increasing attention from both research and industry. This paper summarizes the latest progress of deep learning methods in biological microscopic image processing, including image classification, object detection, and image segmentation. Compared to traditional machine learning methods, deep neural networks achieve better accuracy without a tedious feature selection procedure. Obstacles to biological image analysis with deep learning methods include limited training sets and imperfect image quality. Viable solutions to these obstacles are discussed at the end of the paper. With this survey, we hope to provide a reference for researchers conducting biological microscopic image processing.
Keywords: Biological image analysis, convolutional network, deep learning, microscopic image processing, phenotype analysis
How to cite this article: Wang H, Shang S, Long L, Hu R, Wu Y, Chen N, Zhang S, Cong F, Lin S. Biological image analysis using deep learning-based methods: Literature review. Digit Med 2018;4:157-65
How to cite this URL: Wang H, Shang S, Long L, Hu R, Wu Y, Chen N, Zhang S, Cong F, Lin S. Biological image analysis using deep learning-based methods: Literature review. Digit Med [serial online] 2018 [cited 2023 Jun 8];4:157-65. Available from: http://www.digitmedicine.com/text.asp?2018/4/4/157/248978
Introduction
Microscopic images of cells,[1],[2],[3],[4],[5],[6],[7] tissues,[8] and whole organisms (e.g., Caenorhabditis elegans and zebrafish)[9],[10],[11],[12],[13],[14],[15],[16] are frequently used in medical and biological studies. In the era of big data, researchers face the challenge of increasingly large image datasets. Since manual analysis is no longer efficient enough to cope with high-throughput image data,[17] automated analysis has become of particular interest.[18] So far, machine learning methods have been extensively used in biological image analysis, including cell type identification,[6],[19] cell population analysis,[3],[16],[20],[21] cell lineage reconstruction,[22] C. elegans image analysis,[23],[24] zebrafish phenotype measurement[11],[25],[26],[27],[28],[29] and toxicity effect classification,[30] embryo development stage recognition, and adult fish behavioral pattern analysis.[31],[32],[33] However, most existing techniques are based on traditional machine learning methods, whose accuracy and robustness are limited by the quality of the extracted image features and the performance of the feature classification methods.
In recent years, owing to improvements in computer hardware,[34],[35] the growth of training data, and progress in algorithm development, deep neural networks have demonstrated much better image analysis performance than traditional machine learning methods. There have been several important reviews of medical image analysis and bioinformatics processing with deep learning approaches,[36],[37],[38],[39] but surveys on biological image processing are still rare. Therefore, this paper summarizes the ongoing research progress of deep learning in biological microscopic image processing for scientists in related research directions. We will briefly introduce the basic knowledge and development status of deep learning and then review its applications in microscopic image analysis. A comparison between deep learning and traditional machine learning methods is made, and the existing obstacles and viable solutions for biological image analysis are also discussed.
Basic Knowledge of Deep Learning Image Processing
In the last decade, deep learning methods based on convolutional neural networks (CNNs)[40],[41],[42] have experienced dramatic development, particularly in the field of image processing.[43],[44],[45] They have made revolutionary changes in many areas such as face detection,[46] natural language processing,[47],[48] human behavior recognition,[49],[50] image segmentation,[51],[52],[53] and video processing.[54],[55],[56],[57] A typical CNN consists of one or more of the following layer types: convolutional layers, pooling layers, and fully connected layers. CNNs are extensively used in the realm of image analysis, including image classification and recognition, object detection, and image segmentation. The following subsections briefly introduce these applications.
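The three layer types can be illustrated with a minimal NumPy sketch. This is a toy forward pass for intuition only, not a production implementation; the function names and the 6x6 "micrograph" are our own illustrative choices:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, the nonlinearity popularized by AlexNet."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling: downsamples by `size` in each dimension."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 6x6 "micrograph" passed through one conv -> ReLU -> pool stage.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0.], [0., -1.]])   # simple diagonal-edge filter
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (2, 2)
```

A fully connected layer would then flatten such feature maps into a vector and apply a dense weight matrix to produce class scores.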
Deep learning for image classification
Deep learning-based image classification recognizes and classifies specific objects in images using trained deep neural networks. The LeNet series of deep neural networks provided the early models for classification and recognition, with LeNet-5[58] as a pioneer. In 2012, AlexNet was developed with greatly improved performance.[59] Compared with LeNet-5, the major updates in AlexNet are the adoption of several new training techniques, including dropout,[60] regularization,[61] and data augmentation, and the employment of ReLU[62] as the activation function. More importantly, with the development of modern computer hardware, AlexNet was able to employ multi-GPU parallel training of a deeper network structure to improve overall performance. The VGG nets,[63] introduced in 2014, inherited the merits of AlexNet but exceeded it with much deeper network structures. With better hardware support, a VGG net could reach as many as 19 layers. Among the set of VGG nets, VGG-16 proved to have the best performance. To make networks deeper still, ResNet[64] was developed by Microsoft Research, with as many as 152 layers trained, which is 8 times deeper than a comparable VGG network. The principal adjustment made by ResNet is to employ the idea of highway networks,[65] in which the inputs of a lower layer are made available to a node in a higher layer. With this revised deeper structure, ResNet is easier to train and achieves better accuracy. Among these deep learning-based image classification models, VGG-16 and ResNet are the most popular and are widely used as image classification models in various applications, owing to their better performance-cost ratio and lower computational burden.
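The shortcut idea that makes ResNet trainable at such depth can be sketched in a few lines. This is a deliberately simplified NumPy illustration of one residual block (dense layers instead of convolutions, our own toy weights), not the actual ResNet architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    """ResNet-style block: the input skips over two weight layers and is
    added back, so the layers only need to learn a residual correction."""
    h = np.maximum(x @ w1, 0)          # first layer + ReLU
    return np.maximum(h @ w2 + x, 0)   # second layer + identity shortcut

dim = 8
x = rng.standard_normal(dim)
w1 = rng.standard_normal((dim, dim)) * 0.01   # near-zero initial weights
w2 = rng.standard_normal((dim, dim)) * 0.01

y = residual_block(x, w1, w2)
# With near-zero weights the block approximates the identity mapping,
# which is what makes very deep stacks of such blocks easy to train.
print(np.allclose(y, np.maximum(x, 0), atol=1e-2))  # True
```

In a plain deep network, by contrast, near-zero weights would zero out the signal entirely, which is one intuition for why very deep plain networks are hard to train.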
Deep learning for object detection
In some practical situations, one needs to recognize and locate multiple objects in an image. Object detection networks usually solve two large categories of problems at the same time: framing the objects with bounding boxes (localization) and recognizing the category of the objects (classification). Deep learning was successfully extended to the object detection field through the invention and maturation of several well-performing networks, with the Faster region-based CNN (Faster R-CNN)[66] as the pioneer. Faster R-CNN is based on region proposals: it selects candidate bounding boxes of possible objects from the original images using a dedicated bounding-box selection network called the region proposal network (RPN). The selected bounding boxes from the RPN are fed into subsequent networks to refine the locations of the object bounding boxes and to classify the objects they contain. Introduced in 2015, Faster R-CNN has already been applied in many practical applications.[67],[68] Several improved methods have been proposed based on Faster R-CNN, such as the region-based fully convolutional network (R-FCN)[69] and the single-shot multibox detector.[70] However, Faster R-CNN remains more popular owing to its more stable performance and more mature platform.
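Both subproblems above revolve around comparing candidate boxes to ground truth, which detectors of this family standardly measure with intersection over union (IoU). The following sketch shows that metric; the toy boxes are our own example:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2).
    Detection frameworks use IoU both to label anchors as positive or
    negative during training and to score detections against ground truth."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A proposal that half-overlaps a ground-truth cell bounding box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A typical convention is to treat proposals with IoU above some threshold (often 0.5 or 0.7) as positive matches.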
Deep learning for image segmentation
With the development of faster algorithms and stronger hardware support, deep learning has progressed from coarse to fine prediction and is now able to make predictions for every single pixel.[69] Pixel-level prediction has inspired the invention of a series of deep learning-based image segmentation methods. The fully convolutional network (FCN)[71] is the precursor; it enables segmentation by predicting a class for every pixel of the training images. With the advancement of deep learning-based object detection methods, Mask R-CNN[51] was developed on top of Faster R-CNN; it adds segmentation capability through a branch that predicts an object mask in parallel with the existing branch for bounding box recognition. With only a minor adjustment to Faster R-CNN, Mask R-CNN is simple to train and adds only a small overhead, running at 5 frames/s while performing instance segmentation.
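The pixel-level prediction that FCN-style networks perform reduces, at inference time, to taking the highest-scoring class at each pixel. A minimal NumPy sketch, with toy score maps of our own invention standing in for real network outputs:

```python
import numpy as np

def scores_to_mask(score_maps):
    """Turn per-class score maps of shape (n_classes, H, W), as produced by
    a fully convolutional network, into an (H, W) label mask by taking the
    highest-scoring class at every pixel."""
    return np.argmax(score_maps, axis=0)

# Toy 4x4 score maps for two classes: background (0) and cell (1).
background = np.full((4, 4), 0.5)
cell = np.zeros((4, 4))
cell[1:3, 1:3] = 0.9          # the network is confident only in the centre
mask = scores_to_mask(np.stack([background, cell]))
print(mask)   # 1s in the central 2x2 block, 0s elsewhere
```

Instance segmentation methods such as Mask R-CNN additionally predict one such mask per detected object rather than a single mask for the whole image.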
Deep Learning for Biological Image Analysis
In recent years, deploying deep learning methods for microscopic image processing in phenotype analysis has offered plenty of research opportunities for biologists.[18] With the continuous improvement of overall functionality and performance, deep learning-based image processing methods have begun to show preliminary results in the field of biological microscopic image analysis.[72],[73]
Image classification
For phenotype analysis, image classification is one of the most widely used functions of deep learning. Deep learning has been applied to recognize and count particular biological objects in phenotype images, such as cells, microarrays, and bacterial colonies. The study of trans-differentiated neural progenitor cells (NPCs) is an important method for investigating cell function, and methods for automatically classifying NPCs and non-NPCs are the most critical component of an automated NPC research system. Jiang et al. proposed a novel CNN-based method for classifying NPCs and non-NPCs in bright-field microscopic images. Their method is based on the classical LeNet-5 model, which has three convolutional layers with subsampling and two fully connected layers; experiments with varying sizes of training and test data showed a 90% average success rate.[74] Automatically detecting the cellular compartment where a tagged protein resides is easy for an experienced human but difficult for a computer. Pärnamaa and Parts constructed a network called DeepYeast for classifying protein subcellular localization in high-throughput microscopy images of yeast cells.[75] DeepYeast is composed of 8 convolutional layers and 3 fully connected layers; with 87% accuracy, it outperformed the 75% accuracy reached by a random forest-based method. Similarly, Kraus et al. developed a CNN framework named DeepLoc to classify green fluorescent protein expression localization types in yeast cells from fluorescence microscopic images. DeepLoc is able to classify divergent cell types, including pheromone-arrested cells with abnormal cellular morphology, from images acquired by different laboratories.[76] Counting bacterial colonies is an essential task in most microbiology research, yet it has always been time-consuming and error-prone. Ferrari et al. developed three different methods for automating the counting process.[77] They first segmented images of blood agar plates using software available on the WASPLab system and then tested three methods on the segmented images: a CNN-based classification system, a traditional crafted-feature-based method, and a conventional watershed-based image processing approach, with per-colony errors of 0.28, 0.80, and 0.88, respectively. The CNN-based method proved much more accurate than the traditional methods. Besides cells and bacteria, C. elegans is another frequently studied biological model whose images require automated classification. Hakim et al. developed a CNN-based network called WorMachine to distinguish C. elegans images from noise images without worms.[78] Wang et al. used a deep Q-net to classify C. elegans embryo cell movement types; with this network, they introduced deep learning into agent-based modeling so as to explore cell migration paths from a reverse-engineering perspective.[79] With all the efforts above, deep learning techniques are being rapidly adopted to raise the automation level of biological microscopy image classification.
Object detection
Since an image classification model requires only one object per image, preprocessing that segments the original images into small partitions is needed. In cases with many objects in a single image, such as microscopy images of yeast and cells, object detection, which performs segmentation and classification at the same time, is considered a better choice. Kraus et al. developed a CNN-based method for object detection, enhancing the performance of the CNN by employing multiple instance learning[73],[80] so that the network could accomplish both segmentation and classification. The approach was tested on mammalian and yeast microscopy images, proved to outperform several previous traditional methods, and is considered an early attempt at developing CNN-based object detection methods in biology. With the popularization of the R-CNN object detection platform, Zhang et al. applied this technology to biological research for cancer cell detection, proposing a circle scanning algorithm to improve the detection of adhesive cells.[81] This improved detection system can automatically detect and frame cancer cells, even those that are adhesive. In addition, CNNs and their variants have also been used for the detection of different types of cells (e.g., immune cells, tumor cells, and astrocytes),[82],[83] as well as the localization of targets of interest (e.g., bacteria, injection sites, and tyrosine hydroxylase-containing [TH-labeled] cells)[84],[85],[86] in zebrafish microscopic images.
In recent years, the convolutional selective autoencoder (CSA) network has emerged as an effective tool for detecting small targets against complex backgrounds, such as identifying nematode eggs in microscopic images of soil samples.[87],[88] Unlike CNN-based detection frameworks, which output the locations of the target objects, the CSA reconstructs an output image that contains only the target object of interest, thus significantly reducing the difficulty of target detection against complex backgrounds.
Image segmentation
Unlike the rough localization performed in object detection applications, image segmentation partitions an image into multiple segments of connected subunits based on pixel-level classification. Deep learning techniques have been used to segment different types of cells in widefield,[89] confocal,[90] fluorescence,[91],[92],[93],[94] and phase-contrast[91],[93] microscopic images. Most of these applications used basic CNN architectures, while some used more advanced networks such as Mask R-CNN,[89] FCN,[93] and 3D CNN.[95] Besides cell segmentation, Jones proposed a solution for segmenting microarray images with noise and corruption based on a deep neural network[96] of three convolutional layers and two fully connected layers. Ciresan et al. used a deep neural network as a pixel classifier to segment biological neuron membranes.[97] Ning et al. implemented segmentation and detection networks together to locate and segment cells and nuclei in microscopic images of developing C. elegans embryos.[98] They trained a convolutional network to classify each pixel into five categories: cell wall, cytoplasm, nucleus membrane, nucleus, and outside medium. The output images from the convolutional layers are processed by an energy-based model to satisfy local consistency constraints, and a set of deformable templates is matched to the label image at the final stage to identify the developmental stage of the embryo and to precisely locate the cell nuclei. This method is trainable and extendable to other image-based phenotyping applications.
Image enhancement
In the field of computer vision, deep learning techniques have been used to improve the quality of photographic images, for example, image super-resolution (i.e., generating a high-resolution image with rich texture details from a low-resolution image)[99] and image synthesis.[100] Similar techniques have quickly been applied to biological microscopic image analysis. CNN-based image super-resolution methods have been used to improve the resolution of conventional optical microscopic images,[101],[102] mobile phone microscopy devices,[103] photoactivated localization microscopy, and stochastic optical reconstruction microscopy.[104] In addition to CNNs, the generative adversarial network (GAN) is another powerful framework for learning microscopic image patterns and compensating for the missing image information of low-resolution microscopic images.[105] GANs have also been used to synthesize phase-contrast microscopic images from optical microscopic images.[106] Similarly, Christiansen et al. used a deep learning method to predict fluorescent labels of nuclei, cell types, and cell states from transmitted-light images.[107] Although transmitted-light images do not provide fluorescence information, the authors trained the neural network on coupled transmitted-light and fluorescent images of the same objects, so that the network learned to predict fluorescent labels from transmitted-light images.
Statistics of the literature
In summary, the applications of deep learning techniques in biomedical microscopic image analysis have experienced a significant increase in recent years. [Figure 1] reports the number of papers published for different application types, target objects, microscopy types, and neural network structures in different years. The number of papers increased dramatically in 2017 and 2018. For 2018, our statistics include only the first 8 months, and some of the 2018 papers were collected from open-access preprint websites such as arXiv. Among the application types, segmentation and enhancement are the two most active, because deep learning significantly outperforms conventional methods in these applications. Among the target objects, cells are the most processed biological model, because the localization, classification, delineation, and counting of cells require highly automated and accurate machine learning methods. The statistics on microscopy type coincide reasonably well with the current usage frequency of different microscopic devices: brightfield and fluorescence microscopy are widely used in biological experiments; therefore, the analysis of these images attracts the most research focus. Considering network structures, CNNs and their variants (e.g., FCN and R-CNN) are the most frequently investigated, due to their popularity in the image processing field. Compared to CNNs, autoencoders and GANs are more suitable for image synthesis and image super-resolution; therefore, they are mostly used for image enhancement and selective target image reconstruction. The only network type in the "other" category is the deep Q-net used for classifying C. elegans embryo cell movement types;[79] this application is less general than segmentation and localization.
In future years, we expect that deep learning will be applied to more specific biological image analysis tasks, such as cell motion tracking and organism development. From that perspective, more network types, such as the recurrent neural network and the deep Boltzmann machine, will be adopted for future applications.
Figure 1: The number of published papers regarding the applications of deep learning for biological image processing, in terms of different application fields, target objects, publication years, microscopy types, and neural network structures
Comparison with Traditional Machine Learning Methods
From a methodological perspective, traditional machine learning methods commonly follow a two-step workflow of feature extraction and pattern recognition: they first extract object features (including geometric and textural features) from the microscopic image and then use a machine learning classifier (e.g., support vector machine or random forest) to perform the pattern recognition task on the extracted features. To achieve the best classification performance for a specific type of biological image, researchers usually spend great effort designing the feature extraction method and selecting an appropriate classifier, which is tedious and subjective.
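The two-step workflow can be made concrete with a small sketch. The features (area and a crude perimeter estimate) and the rule-based "classifier" below are hypothetical stand-ins of our own for the hand-crafted features and trained classifiers used in real pipelines:

```python
import numpy as np

def handcrafted_features(mask):
    """Step 1: extract simple geometric features (area and a crude
    perimeter estimate) from a binary object mask. Real pipelines would
    add texture, shape, and intensity features."""
    area = int(mask.sum())
    # Perimeter proxy: foreground pixels with at least one background neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    perimeter = int(mask.sum() - interior.sum())
    return area, perimeter

def toy_classifier(features, area_threshold=10):
    """Step 2: a stand-in for an SVM or random forest -- here just a
    hypothetical size rule separating 'debris' from 'cell'."""
    area, _ = features
    return "cell" if area >= area_threshold else "debris"

blob = np.zeros((8, 8), dtype=int)
blob[2:6, 2:6] = 1                                   # a 4x4 square object
print(toy_classifier(handcrafted_features(blob)))    # cell
```

The point of the contrast drawn in this section is that a deep network replaces both handwritten steps with layers learned end to end from the raw pixels.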
In contrast, deep learning methods learn feature extraction and pattern recognition automatically, without human supervision. Within a deep neural network, the low-level layers (i.e., the convolutional and pooling layers) are responsible for feature extraction, and the high-level layers are responsible for pattern classification. Therefore, the burden of algorithm design is much relieved, and algorithm performance is also improved. As revealed by several recent studies, deep learning techniques lead to better classification accuracy for cell localization and segmentation,[75] bacterial colony counting,[77] cancer cell detection,[81] neuronal membrane segmentation in electron microscopy images,[97] C. elegans embryo cell video analysis,[98] etc.
Another advantage of deep learning methods is the ability to infer information not contained in the processed image. As shown in the study by Christiansen et al.,[107] by training a neural network with coupled transmitted-light and fluorescent images, the network learns to predict fluorescent labels of cells from transmitted-light images alone. Moreover, deep learning-based image super-resolution[99],[108] can also be applied to microscopic image analysis to enhance image resolution beyond the limits of the microscopy imaging devices. These applications will dramatically extend the ability of computerized algorithms for biological microscopic image analysis and will lead to new life science discoveries that are not achievable with traditional machine learning methods.
Opportunities and Challenges
Although deep learning techniques are gradually being adopted for microscopic image processing, most current applications are based on early models of basic CNNs. These applications are still relatively crude compared with deep learning applications in other fields, such as computer vision and clinical medical image processing. Several challenges still limit the application of deep learning, including small training datasets, imperfect image quality, small target objects, and the difficulty of generating ground truth object labels at the single-cell level.
The drawback of small training datasets is their liability to cause overfitting, one of the most serious problems in neural network training.[109] To prevent overfitting, several techniques have proved effective, including data augmentation,[59],[61] dropout,[60] and regularization.[110] Effective data augmentation methods for microscopic image datasets include image rotation, vertical and horizontal flips, zooming in and out within a particular range, and random horizontal and vertical shifts. However, combined application of multiple data augmentation techniques may distort object shapes. Transfer learning and fine-tuning are also effective for the small sample set problem.[73] Many mature deep learning models have been developed and trained for computer vision applications, such as VGG-16, GoogLeNet, and ResNet. These models are capable of learning hierarchical features from raw input images, and adopting them with pretrained weights as initialization can alleviate overfitting. Marée et al. applied a deep CNN with transfer learning to annotate gene expression patterns in the mouse brain, demonstrating that transfer learning is applicable to such microscopic image sets.[111]
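The augmentation families listed above (rotation, flips, shifts) can be sketched with NumPy. This is an illustrative toy of our own, not a specific library's API; for simplicity the shift wraps around rather than replicating edge pixels:

```python
import numpy as np

def augment(image, rng):
    """Apply one random label-preserving transform from the families
    discussed above: 90-degree rotation, horizontal/vertical flip, or a
    small random shift (wrap-around, for simplicity)."""
    choice = rng.integers(0, 4)
    if choice == 0:
        return np.rot90(image, k=rng.integers(1, 4))
    if choice == 1:
        return np.fliplr(image)
    if choice == 2:
        return np.flipud(image)
    dy, dx = rng.integers(-2, 3, size=2)    # shift by up to 2 pixels
    return np.roll(image, (dy, dx), axis=(0, 1))

rng = np.random.default_rng(42)
image = np.arange(64, dtype=float).reshape(8, 8)
batch = [augment(image, rng) for _ in range(8)]
# Every augmented view keeps the original shape and intensity histogram,
# so one labeled micrograph yields many distinct training samples.
print(all(v.shape == image.shape for v in batch))  # True
```

Each transform preserves the class label of the whole image, which is what makes it safe to multiply the training set this way; elastic or combined deformations would need the shape-distortion caveat noted above.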
Microscopic images also suffer from imperfect quality, such as image noise, limited spatial resolution, small object sizes, and low contrast. Therefore, effective image preprocessing is recommended prior to network training. Some software is available for preprocessing microscopic images. ImageJ[112],[113],[114] is one of the most popular open-source tools, developed by the National Institutes of Health. It can apply image smoothing, sharpening, edge detection, median filtering, and thresholding to both grayscale and color images, and plug-ins with additional functions are available to fulfill most microscopic image processing tasks.
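Two of the ImageJ operations mentioned, median filtering and thresholding, are simple enough to sketch directly. A NumPy illustration (our own minimal implementation, not ImageJ's):

```python
import numpy as np

def median_filter3(image):
    """3x3 median filter (edges replicated): the classic remedy for
    salt-and-pepper noise in micrographs."""
    padded = np.pad(image, 1, mode="edge")
    windows = [padded[i:i + image.shape[0], j:j + image.shape[1]]
               for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def threshold(image, level):
    """Global binary threshold, producing a 0/1 foreground mask."""
    return (image >= level).astype(np.uint8)

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0                      # a single salt-noise pixel
clean = median_filter3(noisy)
print(threshold(clean, 128).sum())       # 0: the outlier is removed
```

The median filter removes the isolated bright outlier that a simple threshold would otherwise mistake for a foreground object, which is why such filtering typically precedes thresholding.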
When preparing training datasets for a deep neural network, labeling or segmenting the small objects is usually the most laborious and time-consuming part of preprocessing. Cytomine[111],[115],[116] is a handy web-based tool for manual annotation. It allows users to upload the images to be processed and annotate them with user-defined labels; a region of interest can also be defined for each annotation in the corresponding images. Besides such dedicated image annotation tools, advanced weakly supervised network training methods[117],[118] are also needed to alleviate the burden of manual preprocessing, yet these methods have not been applied to the field of biological image processing.
Conclusion
Deep learning methods have advanced rapidly for biological image analysis. This paper introduces the classical and state-of-the-art deep network models for readers planning to conduct related research. Even though challenges remain, the growth of big data and the rapid development of deep learning methods are bringing huge opportunities to microscopic image processing and related medical research.
Financial support and sponsorship
This research is supported by the youth program of the National Natural Science Fund of China, No. 81401475 and 21607115; the general program of the National Natural Science Fund of China, No. 61571076 and 21777116; the general program of Liaoning Science and Technology Project, No. 2015020040; the Science and Technology Star Project Fund of Dalian City, No. 2016RQ019; the cultivating program of the Major National Natural Science Fund of China, No. 91546123; the National Key Research and Development Program, No. 2016YFC0103101, 2016YFC0103102, and 2016YFC0106402; the Xinghai Scholar Cultivating Funding of Dalian University of Technology, No. DUT14RC(3)066; and the Fundamental Research Funds for Central Universities, No. DUT15LN02 and DUT16RC(3)099.
Conflicts of interest
There are no conflicts of interest.
References
1. Schnabel JA, Rueckert D, Quist M, Blackall JM, Castellano-Smith AD, Hartkens T, et al. A generic framework for non-rigid registration based on non-uniform multi-level free-form deformations. Medical Image Computing and Computer Assisted Intervention (MICCAI), Utrecht, The Netherlands, 2001. p. 573-81.
2. Sommer C, Gerlich DW. Machine learning in cell biology – Teaching computers to recognize phenotypes. J Cell Sci 2013;126:5529-39.
3. Matuszewski DJ, Wählby C, Puigvert JC, Sintorn IM. PopulationProfiler: A tool for population analysis and visualization of image-based cell screening data. PLoS One 2016;11:e0151554.
4. Barretto RP, Schnitzer MJ. In vivo optical microendoscopy for imaging cells lying deep within live tissue. Cold Spring Harb Protoc 2012;2012:1029-34.
5. Shao L, Kner P, Rego EH, Gustafsson MG. Super-resolution 3D microscopy of live whole cells using structured illumination. Nat Methods 2011;8:1044-6.
6. Schneider G, Guttmann P, Heim S, Rehbein S, Mueller F, Nagashima K, et al. Three-dimensional cellular ultrastructure resolved by X-ray microscopy. Nat Methods 2010;7:985-7.
7. Wang W, Yu Y, Huang H. A portable high-resolution microscope based on combination of fiber-optic array and pre-amplification lens. Meas 2018;125:s371-6.
8. Swoger J, Pampaloni F, Stelzer EH. Light-sheet-based fluorescence microscopy for three-dimensional imaging of biological samples. Cold Spring Harb Protoc 2014;2014:1-8.
9. Blanchoud S, Budirahardja Y, Naef F, Gönczy P. ASSET: A robust algorithm for the automated segmentation and standardization of early Caenorhabditis elegans embryos. Dev Dyn 2010;239:3285-96.
10. Li L, LaBarbera DV. 3D high-content screening of organoids for drug discovery. Comprehensive Medicinal Chemistry III. Elsevier Science, New York, USA; 2017. p. 388-415.
11. Sozzani R, Benfey PN. High-throughput phenotyping of multicellular organisms: Finding the link between genotype and phenotype. Genome Biol 2011;12:219.
12. Boto F, Paloc C, Verbeke A, Callol C, Letamendi A, Ibarbia I. Automatic standardisation of a zebrafish embryo image database. International Conference on Information Technology and Applications in Biomedicine; 2010. p. 1-4.
13. Keller PJ, Schmidt AD, Wittbrodt J, Stelzer EH. Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science 2008;322:1065-9.
14. Green RA, Kao HL, Audhya A, Arur S, Mayers JR, Fridolfsson HN, et al. A high-resolution C. elegans essential gene network based on phenotypic profiling of a complex tissue. Cell 2011;145:470-82.
15. Schrödel T, Prevedel R, Aumayr K, Zimmer M, Vaziri A. Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans with sculpted light. Nat Methods 2013;10:1013-20.
16. Feizi A, Zhang Y, Greenbaum A, Guziak A, Luong M, Chan RY, et al. Rapid, portable and cost-effective yeast cell viability and concentration analysis using lensfree on-chip microscopy and machine learning. Lab Chip 2016;16:4350-8.
17. Eliceiri KW, Berthold MR, Goldberg IG, Ibáñez L, Manjunath BS, Martone ME, et al. Biological imaging software tools. Nat Methods 2012;9:697-710.
18. Unser M, Sage D, Delgado-Gonzalo R. Advanced image processing for biology, and the Open Bio Image Alliance (OBIA). Signal Processing Conference. IEEE, Vancouver, Canada, 2013. p. 1-5.
19. Logan DJ, Shan J, Bhatia SN, Carpenter AE. Quantifying co-cultured cell phenotypes in high-throughput using pixel-based classification. Methods 2016;96:6-11.
20. Jones TR, Carpenter A, Golland P. Voronoi-based segmentation of cells on image manifolds. International Workshop on Computer Vision for Biomedical Image Applications; 2005. p. 535-43.
21. Padfield D, Rittscher J, Thomas N, Roysam B. Spatio-temporal cell cycle phase analysis using level sets and fast marching methods. Med Image Anal 2009;13:143-55.
22. Jayalakshmi N. Cell lineage construction of neural progenitor cells. Int J Comput Appl 2014;90:40-7.
23. White AG, Lees B, Kao HL, Cipriani PG, Munarriz E, Paaby AB, et al. DevStaR: High-throughput quantification of C. elegans developmental stages. IEEE Trans Med Imaging 2013;32:1791-803.
24. White AG, Cipriani PG, Kao HL, Lees B, Geiger D, Sontag E, et al. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2010;2010:3089-96.
25. Gehrig J, Reischl M, Kalmár E, Ferg M, Hadzhiev Y, Zaucker A, et al. Automated high-throughput mapping of promoter-enhancer interactions in zebrafish embryos. Nat Methods 2009;6:911-6.
26. Vogt A, Cholewinski A, Shen X, Nelson SG, Lazo JS, Tsang M, et al. Automated image-based phenotypic analysis in zebrafish embryos. Dev Dyn 2009;238:656-63.
27. Stegmaier J, Shahid M, Takamiya M, Yang L, Rastegar S, Reischl M, et al. Automated prior knowledge-based quantification of neuronal patterns in the spinal cord of zebrafish. Bioinformatics 2014;30:726-33.
28. Wang KI, Bonnetat A, Andrews M, et al. Automatic image analysis of zebrafish embryo development for lab-on-a-chip. International Conference on Mechatronics and Machine Vision in Practice, Auckland, New Zealand, 2012. p. 194-9.
29. Ronneberger O, Liu K, Rath M, Rueβ D, Mueller T, Skibbe H, et al. ViBE-Z: A framework for 3D virtual colocalization analysis in zebrafish larval brains. Nat Methods 2012;9:735-42.
30. Mikut R, Dickmeis T, Driever W, Geurts P, Hamprecht FA, Kausler BX, et al. Automated processing of zebrafish imaging data: A survey. Zebrafish 2013;10:401-21.
31. Liu R, Lin S, Rallo R, Zhao Y, Damoiseaux R, Xia T, et al. Automated phenotype recognition for zebrafish embryo based in vivo high throughput toxicity screening of engineered nano-materials. PLoS One 2012;7:e35014.
32. Lin S, Zhao Y, Xia T, Meng H, Ji Z, Liu R, et al. High content screening in zebrafish speeds up hazard ranking of transition metal oxide nanoparticles. ACS Nano 2011;5:7284-95.
33. Jeanray N, Marée R, Pruvot B, Stern O, Geurts P, Wehenkel L, et al. Phenotype classification of zebrafish embryos by supervised learning. PLoS One 2015;10:e0116989.
34. Cheng Y, Wang D, Zhou P, Zhang T. Model compression and acceleration for deep neural networks: The principles, progress, and challenges. IEEE Signal Process Mag 2018;35:126-36.
35. | Huynh LN, Lee Y, Balan RK. DeepMon: Mobile GPU-based deep learning framework for continuous vision applications. The International Conference; 2017. p. 82-95. |
36. | Litjens G, Kooi T, Bejnordi BE, Setio AA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60-88. |
37. | Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng 2017;19:221-48. |
38. | Min S, Lee B, Yoon S. Deep learning in bioinformatics. Brief Bioinform 2017;18:851-69. |
39. | Greenspan H, Ginneken BV, Summers RM. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans Med Imag 2016;35:1153-9. |
40. | Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986;323:533-6. |
41. | Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998;86:2278-324. |
42. | Azizpour H, Razavian AS, Sullivan J, Maki A, Carlsson S. Factors of transferability for a generic convNet representation. IEEE Trans Pattern Anal Mach Intell 2016;38:1790-802. |
43. | Gu J, Wang Z, Kuen J, Ma L, Shahroudy A, Shuai B, et al. Recent advances in convolutional neural networks. Pattern Recognit 2017;77:354-77. |
44. | Schmidhuber J. Deep learning in neural networks: An overview. Neural Netw 2015;61:85-117. |
45. | LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436-44. |
46. | Sun Y, Wang X, Tang X. Deep convolutional network cascade for facial point detection. IEEE Conference on Computer Vision and Pattern Recognition; Portland, OR, USA, 2013. p. 3476-83. |
47. | Hu B, Lu Z, Li H, Chen Q. Convolutional neural network architectures for matching natural language sentences. International Conference on Neural Information Processing Systems, Montréal Canada, 2014. p. 2042-50. |
48. | Kalchbrenner N, Grefenstette E, Blunsom P. A convolutional neural network for modelling sentences. Meeting of the Association for Computational Linguistics. Baltimore, USA. p. 655-65. |
49. | Ji S, Yang M, Yu K. 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell 2013;35:221-31. |
50. | Tompson J, Jain A, LeCun Y, Bregler C. Joint training of a convolutional network and a graphical model for human pose estimation. CoRR 2014;abs/1406.2. |
51. | He K, Gkioxari G, Dollar P, Girshick R. Mask RC. Proceedings of the IEEE International Conference on Computer Vision; 2017. p. 2980-8. |
52. | Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer Assisted Intervention; 2015. p. 234-41. |
53. | Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 2018;40:834-48. |
54. | Feichtenhofer C, Pinz A, Zisserman A. Convolutional two-stream network fusion for video action recognition. The 29 th IEEE conference of Computer Vision and Pattern Recognition. Las Vegas, USA, 2016. p. 1933-41. |
55. | Greenspan H, Van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans Med Imag 2016;35:1153-9. |
56. | Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Feifei L. Large-scale video classification with convolutional neural networks. Computer Vision and Pattern Recognition. Ohio, USA, 2014. p. 1725-32. |
57. | Al Hajj H, Lamard M, Conze PH, Cochener B, Quellec G. Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks. Med Image Anal 2018;47:203-18. |
58. | Lecun Y, Boser BE, Denker JS, Henderson D, Howard RE, Hubbard W, et al. Backpropagation applied to handwritten zip code recognition. Neural Comput 1989;1:541-51. |
59. | Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 2012;25:2012. |
60. | Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res 2014;15:1929-58. |
61. | Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. The 26 th Annual Conference on Neural Information Processing Systems (NIPS), Nevada, USA; 2012. p. 1097-105. |
62. | Nair V, Hinton GE. Rectified linear units improve restricted boltzmann machines. International Conference on Machine Learning. Haifa, Israel, 2010. p. 807-14. |
63. | Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition; 2014. p. 1-10. |
64. | He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, Nevada, USA; 2015. |
65. | Srivastava RK, Greff K, Schmidhuber J. Highway networks. arXiv preprint arXiv: 150500387; 2015. |
66. | Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2017;39:1137-49. |
67. | Bobe J. Egg quality in fish: Present and future challenges. Anim Front 2015;5:66-72. |
68. | Li J, Zhang D, Zhang J, Zhang J, Li T, Xia Y, et al. Facial expression recognition with Faster R-CNN. Procedia Comput Sci 2017;107:135-40. |
69. | Li Y, He K, Sun J. R-FCN: Object detection via region-based fully convolutional networks. Neural Information Processing Systems; Thirtieth Conference on Neural Information Processing Systems, Barcelona Spain, 2016. p. 379-87. |
70. | Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, et al. SSD: Single shot multibox detector. European Conference on Computer Vision; Amsterdam, the Netherlands, 2016. p. 21-37. |
71. | Milletari F, Navab N, Ahmadi SA. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. Fourth International Conference on 3D Vision; 2016. p. 565-71. |
72. | Angermueller C, Pärnamaa T, Parts L, Stegle O. Deep learning for computational biology. Mol Syst Biol 2016;12:878. |
73. | Zhang W, Li R, Zeng T, Sun Q, Kumar S, Ye J, et al. Deep model based transfer and multi-task learning for biological image analysis. IEEE Transactions on Big Data; 2016. p. 1475-84. |
74. | Jiang B, Wang X, Luo J, Zhang X, Xiong Y, Pang H. Convolutional neural networks in automatic recognition of trans-differentiated neural progenitor cells under bright-field microscopy. Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control; Qinhuangdao, China, 2015. p. 122-6. |
75. | Pärnamaa T, Parts L. Accurate classification of protein subcellular localization from high-throughput microscopy images using deep learning. G3 (Bethesda) 2017;7:1385-92. |
76. | Kraus OZ, Grys BT, Ba J, Chong Y, Frey BJ, Boone C, et al. Automated analysis of high-content microscopy data with deep learning. Mol Syst Biol 2017;13:924. |
77. | Ferrari A, Lombardi S, Signoroni A. Bacterial colony counting with convolutional neural networks in digital microbiology imaging. Pattern Recognit 2017;61:629-40. |
78. | Hakim A, Mor Y, Toker IA, Levine A, Neuhof M, Markovitz Y, et al. WorMachine: Machine learning-based phenotypic analysis tool for worms. BMC Biol 2018;16:8. |
79. | Wang Z, Wang D, Li C, Xu Y, Li H, Bao Z, et al. Deep reinforcement learning of cell movement in the early stage of C. elegans embryogenesis. Bioinformatics 2018;34:3169-77. |
80. | Kraus OZ, Ba JL, Frey BJ. Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics 2016;32:i52-i59. |
81. | Zhang J, Hu H, Chen S, Huang Y, Guan Q. Cancer cells detection in phase-contrast microscopy images based on Faster R-CNN. IEEE International Symposium on Computational Intelligence and Design; Hangzhou, China, 2016. p. 363-7. |
82. | Liu K, Qiao H, Wu J, Wang H, Fang L, Dai Q. Fast 3D cell tracking with wide-field fluorescence microscopy through deep learning. arXiv preprint arXiv: 180505139; 2018. |
83. | Suleymanova I, Balassa T, Tripathi S, Molnar C, Saarma M, Sidorova Y, et al. A deep convolutional neural network approach for astrocyte detection. Sci Rep 2018;8:12878. |
84. | Hay EA, Parthasarathy R. Performance of convolutional neural networks for identification of bacteria in 3D microscopy datasets. bioRxiv; 2018. p. 273318. |
85. | Cordero-Maldonado ML, Perathoner S, van der Kolk KJ, Boland R, Heins-Marroquin U, Spaink HP, et al. Deep learning image recognition enables efficient genome editing in zebrafish by automated injections. bioRxiv; 2018. |
86. | Dong B, Shao L, Costa MD, Bandmann O, Frangi AF. Deep learning for automatic cell detection in wide-field microscopy zebrafish images. International Symposium on Biomedical Imaging; 2015. p. 772-6. |
87. | Akintayo A, Tylka GL, Singh AK, Ganapathysubramanian B, Singh A, Sarkar S, et al. A deep learning framework to discern and count microscopic nematode eggs. Sci Rep 2018;8:9145. |
88. | Akintayo A, Lee N, Chawla V, Mullaney MP, Marett CC, Singh AK, et al. An end-to-end convolutional selective autoencoder approach to Soybean Cyst Nematode eggs detection. arXiv: Computer Vision and Pattern Recognition; 2016. |
89. | Hernández CX, Sultan MM, Pande VS. Using Deep Learning for Segmentation and Counting within Microscopy Data; 2018. |
90. | Saponaro P, Treible W, Kolagunda A, Chaya T, Caplan J, Kambhamettu C, et al. DeepXScope: Segmenting Microscopy Images with a Deep Neural Network. Computer Vision and Pattern Recognition Workshops; 2017. p. 843-50. |
91. | Akram SU, Kannala J, Eklund L, Heikkilä J. Cell proposal network for microscopy image analysis. IEEE International Conference on Image Processing; 2016. |
92. | Kassim YM, Glinskii OV, Glinsky VV, Huxley VH, Palaniappan K. Deep learning segmentation for epifluorescence microscopy images. Microsc Microanal 2017;23:140-1. |
93. | Wang W, Taft DA, Chen YJ, Zhang J, Wallace CT, Xu M, et al. Learn to Segment Single Cells with Deep Distance Estimator and Deep Cell Detector; 2018. |
94. | Shan EA, Cheung L, Epstein D, Pelengaris S, Khan M, Rajpoot NM. MIMO-Net: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images. IEEE International Symposium on Biomedical Imaging; 2017. p. 337-40. |
95. | Li R, Zeng T, Peng H, Ji S. Deep Learning Segmentation of Optical Microscopy Images Improves 3D Neuron Reconstruction. IEEE Trans Med Imag; 2017. p. 1. |
96. | Jones AL. Segmenting microarrays with deep neural networks. bioRxiv; 2015. p. 204. |
97. | Ciresan DC, Giusti A, Gambardella LM, Schmidhuber J. Deep neural networks segment neuronal membranes in electron microscopy images. Neural Information Processing Systems; 2012. p. 1-9. |
98. | Ning F, Delhomme D, LeCun Y, Piano F, Bottou L, Barbano PE, et al. Toward automatic phenotyping of developing embryos from videos. IEEE Trans Image Process 2005;14:1360-71. |
99. | Dong C, Loy CC, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell 2016;38:295-307. |
100. | Reed S, Akata Z, Yan X, Logeswaran L, Schiele B, Lee H. Generative Adversarial Text to Image Synthesis; 2016. p. 1060-9. |
101. | Rivenson Y, Gorocs Z, Gunaydin H, Zhang Y, Wang H, Ozcan A. Deep learning microscopy. arXiv Learning 2017;4:1437-43. |
102. | Rivenson Y, Ozcan A. Toward a thinking microscope: Deep learning in optical microscopy and image reconstruction. arXiv preprint arXiv: 1805.08970; 2018. |
103. | Rivenson Y, Koydemir HC, Wang H, Wei Z, Ren Z, Gunaydin H, et al. Deep Learning Enhanced Mobile-Phone Microscopy. ACS Photonics, 2018;5: 2354-64. |
104. | Strack R. Deep learning advances super-resolution imaging. Nat Methods 2018;15:403. |
105. | Wang H, Rivenson Y, Jin Y, Wei Z, Gao R, Gunaydin H, et al. Deep learning achieves super-resolution in fluorescence microscopy. bioRxiv; 2018. p. 309641. |
106. | Rivenson Y, Liu T, Wei Z, Zhang Y, Ozcan A. PhaseStain: Digital Staining of Label-Free Quantitative Phase Microscopy Images using Deep Learning; 2018. |
107. | Christiansen EM, Yang SJ, Ando DM, Javaherian A, Skibinski G, Lipnick S, et al. In silico labeling: Predicting fluorescent labels in unlabeled images. Cell 2018;173:792-803.e19. |
108. | Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 105-14. |
109. | Hawkins DM. The problem of overfitting. J Chem Inf Comput Sci 2004;44:1-2. |
110. | Deng L, Yu D. Deep Learning: Methods and Applications. Vol. 7. Foundations and Trends®in Signal Processing; 2014. p. 197-387. |
111. | Maree R, Stevens B, Rollus L, Rocks N, Lopez XM, Salmon I, et al. A rich internet application for remote visualization and collaborative annotation of digital slides in histology and cytology. Diagn Pathol 2013;8:1-4. |
112. | Rueden CT, Schindelin J, Hiner MC, DeZonia BE, Walter AE, Arena ET, et al. ImageJ2: ImageJ for the next generation of scientific image data. BMC Bioinformatics 2017;18:529. |
113. | Schneider CA, Rasband WS, Eliceiri KW. NIH image to imageJ: 25 years of image analysis. Nat Methods 2012;9:671-5. |
114. | Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, et al. Fiji: An open-source platform for biological-image analysis. Nat Methods 2012;9:676-82. |
115. | Marée R, Geurts P, Wehenkel L. Towards generic image classification using tree-based learning: An extensive empirical study. Pattern Recognit Lett 2016;74:17-23. |
116. | Marée R, Rollus L, Stévens B, Hoyoux R, Louppe G, Vandaele R, et al. Collaborative analysis of multi-gigapixel imaging data using cytomine. Bioinformatics 2016;32:1395-401. |
117. | Shi Z, Yang Y, Hospedales TM, Xiang T. Weakly-supervised image annotation and segmentation with objects and attributes. IEEE Trans Pattern Anal Mach Intell 2017;39:2525-38. |
118. | Xu Y, Zhu JY, Chang EI, Lai M, Tu Z. Weakly supervised histopathology cancer image segmentation and classification. Med Image Anal 2014;18:591-604. |
[Figure 1]
This article has been cited by

1. Nagro SA, Kutbi MA, Eid WM, Alyamani EJ, Abutarboush MH, Altammami MA, et al. Automatic identification of single bacterial colonies using deep and transfer learning. IEEE Access 2022;10:120181. [Pubmed] [DOI]
2. Gupta MM, Gupta A. Survey of artificial intelligence approaches in the study of anthropogenic impacts on symbiotic organisms – a holistic view. Symbiosis 2021. [Pubmed] [DOI]
3. Durkee MS, Abraham R, Clark MR, Giger ML. Artificial intelligence and cellular segmentation in tissue microscopy images. Am J Pathol 2021. [Pubmed] [DOI]
4. Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020;18:2312. [Pubmed] [DOI]
5. Sengupta S, Basak S, Saikia P, Paul S, Tsalavoutis V, Atiah F, et al. A review of deep learning with special emphasis on architectures, applications and recent trends. Knowl Based Syst 2020:105596. [Pubmed] [DOI]