ORIGINAL ARTICLE
Year : 2022  |  Volume : 8  |  Issue : 1  |  Page : 15

Application of graph-based features in computer-aided diagnosis for histopathological image classification of gastric cancer


1 Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
2 Liaoning Cancer Hospital and Institute, Shengjing Hospital, China Medical University, Shenyang, China
3 Institute of Medical Informatics, University of Luebeck, Luebeck, Germany

Date of Submission: 06-Mar-2022
Date of Decision: 30-Apr-2022
Date of Acceptance: 15-May-2022
Date of Web Publication: 07-Jul-2022

Correspondence Address:
Chen Li
Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang
China

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/digm.digm_7_22

  Abstract


Background: The gold standard for gastric cancer detection is gastric histopathological image analysis, but existing histopathological detection and diagnosis have certain drawbacks. Method: In this paper, building on the study of computer-aided diagnosis (CAD) systems, graph-based features are applied to gastric cancer histopathology microscopic image analysis, and a classifier is used to distinguish gastric cancer cells from benign cells. First, image segmentation is performed. After the tissue region is found, cell nuclei are extracted using the k-means method, the minimum spanning tree (MST) is drawn, and graph-based features of the MST are extracted. The graph-based features are then put into the classifier for classification. Result: Different segmentation methods are compared in the tissue segmentation stage, among which are Level-Set, Otsu thresholding, watershed, SegNet, U-Net, and Trans-U-Net segmentation; graph-based features, Red, Green, Blue (RGB) features, Gray-Level Co-occurrence Matrix features, Histogram of Oriented Gradient features, and Local Binary Pattern features are compared in the feature extraction stage; Radial Basis Function (RBF) Support Vector Machine (SVM), Linear SVM, Artificial Neural Network, Random Forest, k-Nearest Neighbor, VGG-16, and Inception-V3 are compared in the classifier stage. It is found that using U-Net to segment tissue areas, then extracting graph-based features, and finally using an RBF SVM classifier gives the optimal result, with an accuracy of 94.29%. Conclusion: This paper focuses on a graph-based-features microscopic image analysis method for gastric cancer histopathology. The final experimental data show that our analysis method outperforms other methods in classifying histopathological images of gastric cancer.

Keywords: Gastric cancer, Graph feature, Image classification


How to cite this article:
Zhang H, Li C, Ai S, Chen H, Zheng Y, Li Y, Li X, Sun H, Huang X, Grzegorzek M. Application of graph-based features in computer-aided diagnosis for histopathological image classification of gastric cancer. Digit Med 2022;8:15





  Introduction


Background

Cancer is a disease caused by the uncontrolled growth of cells,[1] and gastric cancer is a group of abnormal cells that gather in the stomach to form a tumor. Comprehensive data from recent years show that gastric cancer has the third-highest incidence and mortality rate among women and the second-highest among men.

Effective diagnosis of gastric cancer relies on the examination of hematoxylin and eosin-stained tissue sections under a microscope by pathologists. Microscopic examination of tissue sections is tedious and time-consuming, with screening procedures usually taking 5–10 min, making it difficult for pathologists to analyze more than 70 samples a day.[2] Moreover, the potential for incorrect diagnosis is high. Therefore, computer-aided histopathological diagnosis of gastric cancer is important.[3],[4]

The basic concept of computer-aided diagnosis (CAD) is to use computed judgments as objective opinions to help pathologists make a diagnosis.[5],[6],[7],[8] The goal of CAD is to improve the quality and efficiency of histopathological image diagnosis: by increasing the accuracy and consistency of image diagnosis, it can reduce image reading time.[9],[10],[11],[12],[13],[14] In the last few decades, a lot of research has been done on CAD systems that can help physicians track cancers.[15],[16],[17],[18] Meanwhile, cancerous cells in gastric histopathological images proliferate indefinitely,[19] so the cells become denser and the graph formed from the nuclei is more compact than that of normal cells. Therefore, graph theory is applied to classify histopathological images, with better results.

In this paper, a method of classifying histopathological images of gastric cancer using graph-based features is proposed. The workflow is shown in [Figure 1].
Figure 1: Workflow of the proposed method. RGB: Red, green, blue, RF: Random forest, KNN: k-nearest neighbor, ANN: Artificial neural network, RBF: Radial basis function, SVM: Support vector machine, HOG: Histogram of oriented gradient, GLCM: Gray-level co-occurrence matrix, LBP: Local binary pattern, VGG: Visual geometry group.



Research content

The method proposed in this paper consists of seven main parts: (a) six different image segmentation methods are compared to obtain the optimal one; (b) cell nuclei are extracted using the k-means clustering method; (c) the centers of mass of the cell nuclei are used as nodes to form the graph structure, and graph-based features are extracted using the minimum spanning tree (MST) algorithm; (d) graph-based features, red, green, blue (RGB) features, gray-level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, and histogram of oriented gradient (HOG) features are extracted after segmentation for comparison; (e) based on the previous work, the extracted feature vectors are put into different classifiers: radial basis function (RBF) support vector machine (SVM), linear SVM, artificial neural network (ANN), random forest (RF), and k-nearest neighbor (KNN); (f) two deep learning comparison experiments are designed; (g) experimental results are obtained by calculating accuracy, precision, recall, specificity, and F1-score.

The main contributions of this article are as follows:

  • A new framework is designed to introduce a graph-based feature image classification method to the field of histopathological image analysis of gastric cancer.
  • A large number of comparative experiments are done to demonstrate the effectiveness of our method.
  • The method proposed in this study achieves good results on gastric cancer, with a final classification accuracy of 94.29%.



  Related Works


The development of graph theory on computer-aided diagnosis

The application of graph theory in CAD can effectively exploit the topological structure of histopathological images: the information they contain is analyzed through the structure of the graph. Moreover, the length of each edge and the size of each angle in the graph can represent the spatial ordering of different tissues, which intuitively reflects the content of histopathological images. These data can serve as a basis for the judgment of pathologists. Therefore, applying graph theory techniques to CAD analysis of histopathological images has become popular.

Graph theory is applied to extract topological and relational information from collagen frameworks through the integration of deep learning with graph theory.[20] The results are consistent with the expected pathogenicity induced by collagen deposition, demonstrating the potential for clinical application in analyzing various reticular structures in whole-slide histological images.

Computer image processing-based Voronoi diagrams, Gabriel diagrams, and MSTs are used to represent the spatial arrangement of blood vessels as a way to quantitatively analyze microvessels.[21] Derived features are extracted from the graph structures using syntactic structure analysis, and the most discriminative features are found using a KNN classifier.

A large number of color, texture, and morphological features are extracted from stained histopathological images of cervical cancer.[22] In addition, 29 features such as edge length and area are extracted from three graph structures, after which the nuclei are classified using linear discriminant analysis.

Instead of cell nuclei, skeletal nodes are used in histopathological images of cervical cancer: the work constructs graph structures using the MST, extracts several feature values such as edge lengths and angles, and clusters them using k-means.[23] In addition, four graph-theoretic methods are compared in the graph construction step to select the optimal one. A deep learning network with histogram of oriented gradient features is compared against the selected optimal graph-theoretic method, and the best method is determined by the evaluation of doctors.

The development of computer-aided diagnosis in gastric cancer

Deep learning is used in many medical image processing tasks, for example, deep learning is used to identify COVID-19 samples in chest X-ray images.[24] The continuous progress of deep learning algorithms has led to the rapid development of CAD technology in gastric cancer. Currently, the deep learning methods used in the field of gastric cancer mainly include image preprocessing, image segmentation, feature extraction, and image classification methods.

In histopathological image preprocessing of gastric cancer, one work proposes an image classification model that can alleviate the effect of badly annotated training sets.[25] By fine-tuning the neural network in two stages and introducing a new intermediate dataset, the classification performance of the network is improved. Another work sets up a CNN-based image-denoising network and optimizes it by exploiting complex-valued operations to increase the compactness of the convolutions.[26]

In the image segmentation stage, a new polyp segmentation method based on a combination of multi-depth encoder-decoder networks is proposed.[27] The network can extract features at different effective receptive fields and multiple image scales to represent multi-level information, and it can also extract effective features from missing pixels in the training phase. A radiomics-based deeply supervised U-Net is developed for the segmentation of the prostate and prostatic lesions.[28] These methods can also be applied to histopathological images of gastric cancer. A highly efficient hybrid model for follicular segmentation is also proposed.[29]

In the feature extraction stage, HOG and LBP features are extracted from gastric cancer histopathological images.[30] The comparison shows that the LBP feature is superior to the HOG feature. A new unsupervised feature selection method is developed that measures the dependencies between features and avoids selecting redundant ones.[31] It can also be used on histopathological images of gastric cancer to improve operational efficiency.

In terms of classifiers, RF and ANN have been compared, with extensive experiments showing the ANN classifier outperforming the RF classifier.[30] The standard Inception-V3 network framework is also used.[32] Its parameters are reduced by changing the depth multiplier, and after several iterations, the model with the lowest validation error is selected as the final model.


  Methods


U-Net-based image segmentation

U-Net is a Fully Convolutional Network (FCN)-based semantic segmentation network originally applied to medical cell microscopic image segmentation tasks.[33],[34],[35] The end-to-end structure of this network can efficiently recover the information lost at shallow levels due to pooling operations. In addition, the U-Net training strategy uses data augmentation to make full use of limited labeled training data. U-Net contains two parts: the left half of the U-shaped structure performs feature extraction, while the right half performs up-sampling. A copy-and-crop skip connection layer is used before fusion to ensure that more features are fused in the final recovered feature map; it also allows the fusion of features of different sizes, thus enabling multi-scale prediction.

[Figure 2] shows the U-Net structure used in this paper. The network consists of a down-sampling (left side) and an up-sampling (right side) path. In the down-sampling path, a convolution operation activated by ReLU is followed by a max pooling operation, after which the feature map is reduced to half its original size. This set of operations is repeated four times in the down-sampling path. In the up-sampling path, each step contains three main operations: the up-convolution operation, the copy-and-stitch operation, and the convolution operation activated by ReLU. These three operations are repeated a total of four times in the up-sampling path. The final segmentation result is generated by a 2 × 2 convolution kernel activated by a sigmoid.
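This structure maps directly onto code. The following is a minimal sketch in TensorFlow/Keras (an assumption; the paper does not state its framework, and the filter counts and input size here are illustrative rather than reported values):

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        # Two 3 x 3 convolutions with ReLU activation, as in each U-Net stage.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(input_shape=(256, 256, 3)):
        inputs = layers.Input(input_shape)
        skips, x = [], inputs
        # Down-sampling path: repeated four times, halving the feature map each time.
        for filters in (64, 128, 256, 512):
            x = conv_block(x, filters)
            skips.append(x)
            x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, 1024)  # bottleneck
        # Up-sampling path: up-convolution, copy-and-concatenate, two convolutions.
        # ("same" padding makes the crop step of copy-and-crop unnecessary here.)
        for filters, skip in zip((512, 256, 128, 64), reversed(skips)):
            x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
            x = layers.Concatenate()([x, skip])
            x = conv_block(x, filters)
        # Binary tissue mask from a sigmoid-activated 2 x 2 output convolution.
        outputs = layers.Conv2D(1, 2, padding="same", activation="sigmoid")(x)
        return Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="binary_crossentropy")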
Figure 2: U-Net structure. Conv: Convolution. ReLU: Rectified Linear Unit.



Graph theory and graph-based features

Graph theory takes graphs as the object of study.[36] A graph consists of a number of given nodes and edges, each edge connecting two nodes. Such graphs are usually used to describe a particular relationship between certain things: the nodes represent the things, and an edge connecting two nodes indicates that the corresponding two things have this relationship. A graph usually contains nodes, edges, paths, loops, and weights. An example of a graph is shown in [Figure 3].
Figure 3: An example of a graph with five points and eight weighted edges.



A graph structure can record the topological structure of an image, which is captured by computing the features of the graph. There are various ways of constructing a graph: the MST,[37] Delaunay triangulation,[38] the Voronoi diagram,[39] etc. Our previous work on cervical cancer histopathological images finds that the MST has better graph-formation characteristics.[23],[40] A comparative analysis reveals that the topological information carried by the MST graph structure in histopathology is the most complete,[40] so the MST graph-based features obtain the optimal result. Therefore, in this work, the MST is chosen as the graph formation method.

This paper proposes a method for image analysis of gastric cancer histopathology using graph-based features. Observation of the experimental data shows that cancerous tissues in gastric cancer histopathological images differ significantly from normal images in topological structure. Therefore, we design a method to classify gastric cancer pathology using topological structure. A large amount of literature shows that the topological information in an image can be obtained by graph-theoretic methods, and the MST algorithm is chosen as the graph-forming method.

In this work, the information of edges and angles is obtained from the MST. Edge lengths and angles are extracted because a graph is composed of nodes and edges, and any two adjacent edges (three connected nodes) form an angle; edges and angles are the most basic elements characterizing the graph structure. Edges represent the degree of dispersion between two nodes, and angles represent structural complexity.[41] The edge information is the edge lengths of the MST, and the angle information is the angle between every two adjacent edges. The MST contains all nodes of the original graph with the smallest sum of edge weights; it is the least connected subgraph of the Delaunay triangulation, and the sum of its edge weights is less than or equal to that of every other spanning tree. As shown in [Figure 4], for five points A, B, C, D, and E, ∠ABC, ∠ABD, ∠CBD, and ∠BDE can be calculated. The mean, variance, kurtosis, and skewness are computed over all edges and angles of each tree, so eight feature values are output for each histopathological image.
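As an illustration, the eight graph-based features can be computed from the nucleus centroids with standard scientific-Python tools (a sketch following the definitions above; the paper's own implementation is not given):

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import kurtosis, skew

    def mst_graph_features(centroids):
        # centroids: (n, 2) array of nucleus centers of mass.
        dist = squareform(pdist(centroids))          # complete weighted graph
        mst = minimum_spanning_tree(dist).toarray()  # upper-triangular MST
        rows, cols = np.nonzero(mst)
        lengths = mst[rows, cols]                    # MST edge lengths

        # Adjacency list of the tree, to find pairs of edges sharing a node.
        neighbors = {}
        for a, b in zip(rows, cols):
            neighbors.setdefault(a, []).append(b)
            neighbors.setdefault(b, []).append(a)

        # Angle between every two adjacent edges (edges sharing a node).
        angles = []
        for center, nbrs in neighbors.items():
            for i in range(len(nbrs)):
                for j in range(i + 1, len(nbrs)):
                    v1 = centroids[nbrs[i]] - centroids[center]
                    v2 = centroids[nbrs[j]] - centroids[center]
                    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
                    angles.append(np.degrees(np.arccos(np.clip(cos, -1, 1))))
        angles = np.asarray(angles)

        # Mean, variance, skewness, kurtosis of edge lengths and of angles.
        return np.array([lengths.mean(), lengths.var(), skew(lengths), kurtosis(lengths),
                         angles.mean(), angles.var(), skew(angles), kurtosis(angles)])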
Figure 4: Example figure of a minimum spanning tree composed of cell nuclei.



Classification methods

In this article, we compare the performance of different classifiers, including the RBF SVM, linear SVM, ANN, KNN, and RF, and two deep learning classification models: Visual Geometry Group-16 (VGG-16) and Inception-V3. SVM is a binary classifier that separates the classes with a hyperplane,[42] and it produces different classification results depending on the kernel function; the linear kernel has the advantage of fewer parameters and fast evaluation on linearly separable data. ANN is a model that simulates the information transfer mechanism of neurons,[43] with high accuracy and parallel distributed processing capability, but it requires many initial parameters and a long training time. KNN is a simple classification algorithm in machine learning,[44] suitable for multi-label problems, but its accuracy suffers when the class sample sizes are unbalanced. RF is an ensemble learning method in machine learning.[45] RF is simple, easy to implement, and has low computational overhead, but it is prone to overfitting.

VGG-16 uses small convolutional kernels to reduce the number of parameters, and its regular network structure is suitable for parallel acceleration.[46] The main idea of the Inception architecture is to approximate the optimal local sparse structure with readily available dense components. Inception-V3 gives more accurate feature information when dealing with a large number of highly diverse features and also reduces the computational effort.[47]

The SVM classifier with the RBF kernel is chosen for the experiments, for several reasons. SVM classification is effective: the decision is determined mainly by the support vectors, so the computational complexity depends on the number of support vectors rather than on the number of samples, the model needs little storage space, and the algorithm is robust. Furthermore, SVM is a small-sample learning method that does not involve concepts such as probability measures, simplifying the usual classification and regression problems. In terms of kernel functions, the RBF kernel is more advantageous on linearly inseparable data, where classification is more accurate.
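A minimal sketch of this classification step with scikit-learn (an assumption; the paper does not report its SVM implementation or the C and gamma values, and the feature matrix here is a random placeholder):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: (n_images, 8) graph-based feature matrix; y: 0 = normal, 1 = cancer.
    X = np.random.rand(280, 8)        # placeholder for the extracted features
    y = np.random.randint(0, 2, 280)  # placeholder labels

    # RBF-kernel SVM; standardizing the features first is usual practice.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
    clf.fit(X, y)
    print(clf.score(X, y))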


  Experiments and Analysis


Image dataset and experimental setup for gastric cancer

The 2017 Brain of Things (BOT) competition provides 700 histopathological images at 2048 × 2048 resolution, of which 560 are histopathological images of gastric cancer and the remaining 140 are histopathological images of normal stomach. The training set of the U-Net segmentation network contains 300 images randomly selected from the 560 gastric cancer images; 120 form the validation set and the remaining 140 are used as the test set. Feature extraction and the subsequent experiments are then performed separately on these 140 gastric cancer images and the 140 normal stomach images. In the classifier comparison section, data augmentation is performed on the existing data to improve classifier performance.

During training, only abnormal images have Ground Truth (GT) masks; normal images do not. In the later tests, however, all images are segmented and their graph-based features are calculated.

Analysis of image segmentation results

In the image segmentation stage, the same two images are segmented using each of the six segmentation methods. The U-Net parameters are set as follows: each down-sampling step uses two sets of 3 × 3 convolutions followed by a 2 × 2 max pooling; each up-sampling step uses a 3 × 3 up-convolution, the copy-and-stitch operation, and two 1 × 1 convolutions, and these operations are repeated a total of four times in the up-sampling path. The final segmentation result is produced by a convolution kernel of size 2 × 2. The results of the different segmentation methods are shown in [Figure 5].
Figure 5: Comparison of six segmentation methods on the test set for two typical examples (a and b). GT: Ground Truth.



From [Figure 5], it can be seen that the Level-Set segmentation method segments according to the edges of the image and cannot distinguish the structures of different tissues in the pathological images, while watershed segmentation only partitions the whole image without following the tissue structure; the information it collects is indiscriminate and cannot highlight the graph-based features of the cancer region. The foreground retained by the Otsu thresholding segmentation method includes not only the effective tissue but also the intercellular matrix, which seriously degrades the quality of the resulting MST graph structure. In contrast, the U-Net method used here segments the cancer region better, with smoother edges and clear subject regions, and retains less noise than the other methods. In addition to the visual comparison, we also compute evaluation metrics, the details of which are shown in [Table 1].
Table 1: Evaluation metrics of the image segmentation methods.



Medical images are characterized by simpler semantics, relatively fixed structure, and smaller data volumes. U-Net's U-shaped structure and skip connections achieve excellent performance under these conditions, which makes it outstanding in the field of medical image segmentation. From [Table 1], U-Net performs better than the other methods on all indicators except specificity and accuracy, and it is lighter than Trans-U-Net.

Analysis of k-means algorithm

At this stage, the pixel grayscale values of the images are clustered, and the k-means algorithm with k = 3 is used as the benchmark for comparison. As shown in [Figure 6], when k = 3, the nuclei in the histopathological images of gastric cancer are well expressed, and essentially all the nuclei on the tissue are labeled. When k = 4, part of the stained gastric cancer tissue region is left unlabeled because the grayscale values are clustered further. When k = 5, this becomes more obvious and the nuclei information is severely lost. Therefore, this paper selects k = 3 for the k-means clustering used to extract the cell nuclei.
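A sketch of this nucleus extraction step (assuming, as is typical for hematoxylin staining, that the darkest of the k grayscale clusters corresponds to the nuclei; the paper does not state how the nucleus cluster is selected):

    import numpy as np
    from sklearn.cluster import KMeans

    def extract_nuclei_mask(gray_image, k=3):
        # Cluster the pixel grayscale values into k groups.
        pixels = gray_image.reshape(-1, 1).astype(np.float64)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
        # Keep the darkest cluster, assumed here to be the stained nuclei.
        darkest = np.argmin(km.cluster_centers_.ravel())
        return (km.labels_ == darkest).reshape(gray_image.shape)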
Figure 6: Cell nuclei results with different k. (a) The upper right quarter area of the image. (b) The image of the nucleus with k = 3. (c) The image of the nucleus with k = 4. (d) The image of the nucleus with k = 5.



Analysis of feature extraction methods

After segmentation, the graph-based features are extracted: k-means is used to extract the cell nuclei and the MST algorithm is used to draw the graph, as shown in [Figure 7].
Figure 7: Comparison of minimum spanning tree graph structures of six segmentation methods on the test set for two typical examples (a and b). GT: Ground Truth.



Comparing the MST graph structures after segmentation, the MST extracted from U-Net segmented images represents the real topological structure of gastric cancer tissues most accurately, whereas the MST graph structures of the other segmentation methods are of poor quality due to their respective drawbacks.

Then, five feature extraction methods are compared, as shown in [Table 2]. First, the U-Net segmentation method is used for segmentation. After that, the MST (graph-based), RGB, HOG, GLCM, and LBP features are extracted from each image and compared.
Table 2: Accuracy of the feature extraction methods.



Then, the RBF SVM classifier is selected for classification in the third step. Finally, by calculating the classification accuracy, it can be seen that the graph-based features have obvious advantages in the feature extraction stage and the classification accuracy reaches 94.29%.

Analysis of graph-based features

This study uses two features of the MST that can represent the topology of a graph: edge lengths and angles of the MST. Based on this, eight statistical features are extended, including the mean, variance, skewness, and kurtosis of the edge length and the mean, variance, skewness, and kurtosis of the angle.

As shown in [Figure 8], the first column shows the statistics of the edge length: mean, variance, skewness, and kurtosis. The second column shows the same statistics for the angle. The horizontal coordinates of each statistical plot index the images in the experiment, 140 in total; the first 70 are normal gastric histopathological images and the last 70 are gastric cancer histopathological images. The vertical coordinates indicate the statistical values.
Figure 8: Statistical features of the graph structure. The horizontal coordinates index the images in the experiment; the vertical coordinates indicate the statistical values. (a) Mean of the edge length. (b) Mean of the angle. (c) Variance of the edge length. (d) Variance of the angle. (e) Skewness of the edge length. (f) Skewness of the angle. (g) Kurtosis of the edge length. (h) Kurtosis of the angle.



It can be seen that the mean edge lengths of normal and gastric cancer histopathological images are both stable, indicating that the size of the tissue structure is described accurately and does not differ greatly between images. In terms of variance, the edge-length variance of normal images is significantly smaller than that of gastric cancer images, indicating that the edge-length structure of normal images is more uniform, while gastric cancer images contain edges of varying lengths, reflecting the irregular structure of cancerous tissue; the angle variances of normal and gastric cancer images are similar. In terms of skewness, the edge lengths of normal and gastric cancer images are also relatively close, indicating a similar degree of asymmetry relative to the mean, while the angle skewness differs significantly, indicating a greater difference in asymmetry. In terms of kurtosis, the edge lengths and angles of normal and gastric cancer images are similar, indicating that the steepness of their distributions is similar.

Collectively, the topological information of histopathological images can be extracted more completely using the MST. The classification accuracy of this method is the highest among all the feature extraction methods, reaching 94.29%, which fully illustrates the high performance of the graph-based feature extraction method on histopathological images of gastric cancer.

Analysis of red, green, blue features

In the RGB feature extraction method, histogram statistics of the R, G, and B channels of each image are computed. As shown in [Figure 9], the horizontal coordinate of each histogram is the pixel value (from 0 to 255) and the vertical coordinate is the number of pixels with that value. In the statistics of each channel, the background (pixel value 0) of the U-Net segmented images is removed: this region is segmented off during the U-Net segmentation process and is not counted in the RGB feature statistics.
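A sketch of this per-channel histogram computation (assuming 8-bit images in which exactly zero marks the segmented-off background):

    import numpy as np

    def rgb_histograms(rgb_image):
        # 256-bin histogram per channel, skipping black background pixels.
        feats = []
        for c in range(3):
            channel = rgb_image[..., c].ravel()
            channel = channel[channel > 0]   # drop segmented-off background
            hist, _ = np.histogram(channel, bins=256, range=(0, 256))
            feats.append(hist)
        return np.concatenate(feats)         # 768-dimensional feature vector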
Figure 9: Red, green, blue features histogram. The horizontal coordinate of this histogram is the pixel value (the interval is from 0 to 255) and the vertical coordinate is the number of pixels for each pixel value. (a) R channel. (b) G channel. (c) B channel.



Analysis of histogram of oriented gradient features

In the HOG feature extraction stage, a feature vector of dimension 2,340,900 is extracted for each U-Net segmented image and then put into the RBF SVM for classification. HOG features use grayscale gradients to describe the local shape of an object, marking the gradient orientations of the image. Their advantage is good geometric and photometric invariance, because the histogram of gradient orientations is computed over small regions. Their disadvantage is poor noise immunity. HOG features work well for detecting human bodies, but their feature extraction effect on histopathological images is unsatisfactory: the classification accuracy is only 55%.
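A sketch of HOG extraction with scikit-image (the paper does not report its HOG parameters, so the cell, block, and orientation settings below are common defaults, and the file name is hypothetical):

    from skimage import color, io
    from skimage.feature import hog

    image = color.rgb2gray(io.imread("segmented_patch.png"))  # hypothetical file
    # Gradient-orientation histograms over small cells, concatenated per block.
    features = hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")
    print(features.shape)  # very high-dimensional for 2048 x 2048 inputs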

Considering the very high dimension of the extracted HOG features, the effect of different classifiers on them is also compared, as shown in [Table 3].
Table 3: The results of each metric obtained by classifying histogram of oriented gradient features using different classifiers.



From the above analysis, it can be concluded that feature extraction using HOG features is not very effective; the highest accuracy, 61.54%, is obtained with the ANN.

Analysis of gray-level co-occurrence matrix features

The GLCM is a matrix that represents the grayscale relationship between pixels at each location of the image, either adjacent pixels or pixels at a specified distance. One work finds that among the 14 statistics derived from the GLCM, only four (homogeneity, correlation, contrast, and energy) are uncorrelated, and these four are easy to compute and give high classification accuracy;[53] another studies six texture features in detail and concludes that contrast and entropy are the most important.[54] Therefore, in the GLCM feature extraction, this article calculates four statistical attributes of the gray-level co-occurrence matrix: homogeneity, correlation, contrast, and entropy.
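A sketch of these four attributes with scikit-image (entropy is not among the built-in graycoprops properties, so it is computed directly from the normalized matrix; the pixel distance and angle below are assumptions):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_uint8):
        # Normalized symmetric GLCM for horizontally adjacent pixels.
        glcm = graycomatrix(gray_uint8, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
        correlation = graycoprops(glcm, "correlation")[0, 0]
        contrast = graycoprops(glcm, "contrast")[0, 0]
        p = glcm[:, :, 0, 0]
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not in graycoprops
        return np.array([homogeneity, correlation, contrast, entropy])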

The main reason for the poor performance of the GLCM features is that the images used are U-Net segmented images: after segmentation, the pixel value of the background region is zero. This significantly distorts the image information carried by the gray-level co-occurrence matrix, so its four statistical attributes describe the texture of the image poorly. The final classification accuracy using GLCM features is 53.57%.

Analysis of local binary pattern features

In the LBP feature extraction stage, features are extracted from each U-Net segmented image using the LBP operator, forming a grayscale image with the same resolution as the original. From this LBP image, an LBP histogram is formed using the image grayscale; as in [Figure 10], the horizontal coordinate is the grayscale of the LBP image, and the vertical coordinate is the number of pixels per grayscale.
Figure 10: LBP histogram. The horizontal coordinate is the grayscale of the local binary pattern image, and the vertical coordinate is the number of pixels per grayscale. LBP: Local Binary Pattern.



The advantages of LBP features are fast computation, rotation invariance, and grayscale invariance. The LBP value of each pixel reflects the texture relationship between the point and its surrounding pixels, so texture information is well preserved. The disadvantage is sensitivity to orientation information: the gradient orientation in the image has a relatively large impact on the LBP value of a pixel. The purpose of this study is to classify gastric cancer tissue and normal tissue, while the LBP feature captures only the local texture information of the image to form the LBP image, from which the grayscale histogram is drawn. The histogram passes on the local texture features rather than all the information of the tissue region, which eventually loses the feature information needed for the experiment. The classification accuracy using LBP features is 65.71%.
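A sketch of this LBP-plus-histogram step with scikit-image (the paper does not report its LBP operator parameters; the uniform 8-neighbor, radius-1 variant below is an assumption):

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_image, P=8, R=1):
        # LBP image at the original resolution, then a normalized histogram.
        lbp = local_binary_pattern(gray_image, P, R, method="uniform")
        n_bins = P + 2  # uniform patterns plus the non-uniform bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        return hist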

Analysis of classifier design

In the classifier design stage, as shown in [Table 4], this article compares five methods. First, the image is segmented by U-Net, and then the MST is used to extract the features. Finally, the classification accuracies of the five classifiers (RBF SVM, linear SVM, ANN, RF, and KNN) are compared. We select a six-layer ANN network, an RF with 512 trees, and a KNN with k = 15.

The classification accuracies in [Table 4] show that the graph-based features in this article are robust and that the classifiers used experimentally are effective. By comparison, the classifier best suited to the structural features of gastric cancer histopathological images is the RBF SVM.
Table 4: Design method of classifier.



Meanwhile, a comparison experiment with two deep learning classification models is designed: VGG-16 and Inception-V3. As shown in [Table 5], to compare different deep learning classification methods on the same experimental data, 70 images are randomly selected from each of the 140 gastric cancer images and 140 normal images, giving a training set and a test set of 140 images each. To better compare classifier performance, we perform data augmentation by meshing the images into 256 pixel × 256 pixel patches; after augmentation, 8960 training images and an equally sized test set are obtained.
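A sketch of this patch-based augmentation (each 2048 × 2048 image yields 8 × 8 = 64 non-overlapping patches, so 140 images give the 8960 patches reported above):

    import numpy as np

    def mesh_patches(image, patch=256):
        # Split an image into non-overlapping patch x patch tiles.
        h, w = image.shape[:2]
        return [image[r:r + patch, c:c + patch]
                for r in range(0, h - patch + 1, patch)
                for c in range(0, w - patch + 1, patch)]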
Table 5: Comparative experiments in deep learning.



The advantage of VGG-16 is its concise framework: the network uses uniformly sized convolutional kernels and max-pooling layers, and a stack of several small-filter (3 × 3) convolutional layers instead of one large filter (5 × 5 or 7 × 7). The advantage of Inception-V3 is that it uses convolutional kernels of different sizes within a layer, which improves perceptual power, and uses batch normalization, which mitigates gradient vanishing. These two deep learning networks are chosen for the classification comparison experiments. After training, however, the classification results are poor: VGG-16 reaches a classification accuracy of 75% and Inception-V3 of 50%. The main reason is that deep learning networks require a large amount of sample data. After augmentation, the accuracies of the VGG-16 and Inception-V3 models improve to 87.50% and 62.80%, respectively, but they are still lower than that of our proposed method.


  Discussion


For the different methods used for feature extraction, this article compares the different classification results of graph-based features with other features and draws separate confusion matrices to analyze them. The main evaluation metrics consist of the following five components: accuracy (ACC), precision (PPV), recall (TPR), specificity (TNR), and F1-score. Moreover, evaluation metrics are calculated for each confusion matrix, the results of which are shown in [Table 6].
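For reference, the five metrics can be computed from each binary confusion matrix as follows (a sketch of the standard definitions; the TP/FP/FN/TN counts come from the confusion matrices described above):

    def metrics_from_confusion(tp, fp, fn, tn):
        # ACC, PPV (precision), TPR (recall), TNR (specificity), F1-score.
        acc = (tp + tn) / (tp + fp + fn + tn)
        ppv = tp / (tp + fp) if tp + fp else 0.0
        tpr = tp / (tp + fn) if tp + fn else 0.0
        tnr = tn / (tn + fp) if tn + fp else 0.0
        f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
        return acc, ppv, tpr, tnr, f1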
Table 6: Evaluation indices (unit: %).



From the confusion matrices and their evaluation indices, we can see that in terms of classification accuracy, the graph-based features perform very well, with an accuracy of 94.29%. The other feature methods are less effective: RGB features and LBP features reach 69.29% and 65.57%, respectively, and the worst are HOG features and GLCM features, as low as 55% and 53.57%. In terms of precision, the graph-based features still perform best, reaching 100%, followed by the RGB features at 76.47%; the precision of the LBP features is 68.97%, while the worst are the HOG and GLCM features at 58.14% and 55.81%. In terms of recall (i.e., classification accuracy on normal images), the graph-based features reach 88.57%, with eight normal images misclassified as gastric cancer images; the RGB and LBP features reach 55.71% and 57.14%, and the worst are still the HOG and GLCM features at 35.71% and 34.29%. In terms of specificity (classification accuracy on gastric cancer images), the graph-based features reach 100%, meaning that no gastric cancer image is classified wrongly; they are followed by the RGB features at 82.86%, the HOG features equal to the LBP features at 74.29%, and the GLCM features at only 72.86%. In terms of F1-score, the graph-based feature method is still the best performer at 93.94%, followed by the RGB and LBP features at 64.46% and 62.50%, and the worst are still the HOG and GLCM features at 44.24% and 42.48%.

From the above analysis and discussion, it can be concluded that the graph-based feature extraction method performs best throughout the experiment, followed by the RGB and LBP features, with the HOG and GLCM features lowest. Meanwhile, comparing the classification of normal images with that of gastric cancer images shows that the graph-based feature method is weaker on normal images, occasionally misclassifying them as gastric cancer images, but performs very well on gastric cancer images.


  Conclusion


Histopathological image analysis has been a popular research direction in the medical field and plays a crucial role in the future of intelligent medicine. In studying the topology of histopathological images of gastric cancer, graph theory is able to address the problems in this direction. The histopathological images of gastric cancer contain a wide range of tissue structures with complex morphology, especially in the cancer nest region, and it is difficult for conventional features to extract tissue information complete enough to meet the experimental requirements.

In this paper, a graph-based feature microscopic image analysis method is proposed for gastric cancer histopathology. It expands on the classical digital image processing pipeline and includes the main steps of image segmentation, feature extraction, and classifier design. The method exploits the fact that the topological structure of gastric cancer tissue regions differs significantly from that of normal tissues, using graph-based features to collect this information and then classify it. By comparing the classification metrics, this article again validates the advantages of graph-based features on histopathological images of gastric cancer. Furthermore, by comparing multiple image segmentation methods, multiple feature extraction methods, multiple classifiers, and deep learning experiments, the optimal method can be selected: for histopathological images of gastric cancer, the image is first segmented using U-Net, features are extracted by the graph-based method, and finally the RBF SVM classifier, which is optimal for nonlinear data, is selected for classification. The final experimental data show that our analysis method has a clear advantage in classifying histopathological images of gastric cancer. Furthermore, the proposed graph-based features have the potential to work in other microscopic image analysis fields, such as microorganism image analysis,[55],[56],[57] cytopathological image analysis,[58],[59],[60],[61] and microscopic video analysis.[62],[63],[64],[65],[66]

Acknowledgments

We thank Miss Zixian Li and Mr. Guoxian Li for their important discussion. We also thank B. E. Jiawei Zhang for his help in the experiments.

Financial support and sponsorship

This work is supported by the National Natural Science Foundation of China (No. 61806047).

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Bugdayci G, Pehlivan MB, Basol M, Yis OM. Roles of the systemic inflammatory response biomarkers in the diagnosis of cancer patients with solid tumors. Exp Biomed Res 2019;2:37-43.
2. Elsheikh TM, Austin RM, Chhieng DF, Miller FS, Moriarty AT, Renshaw AA, et al. American Society of Cytopathology workload recommendations for automated Pap test screening: Developed by the productivity and quality assurance in the era of automated screening task force. Diagn Cytopathol 2013;41:174-8.
3. Ai S, Li C, Li X, Wang Q, Li X. A state-of-the-art review for gastric histopathology feature extraction methods. In: Proceedings of ISICDM 2020. ACM; 2021. p. 64-8. [Doi: 10.1145/3451421.3451436].
4. Ai S, Li C, Li X, Jiang T, Grzegorzek M, Sun C, et al. A state-of-the-art review for gastric histopathology image analysis approaches and future development. Biomed Res Int 2021;2021:6671417.
5. Li X, Li C, Rahaman MM, Sun H, Li X, Wu J, et al. A comprehensive review of computer-aided whole-slide image analysis: From datasets to feature extraction, segmentation, classification, and detection approaches. Artif Intell Rev 2022;29:609-39.
6. Zhou X, Li C, Rahaman MM, Yao Y, Ai S, Sun C, et al. A comprehensive review for breast histopathology image analysis using classical and deep neural networks. IEEE Access 2020;8:90931-56.
7. Li Y, Li C, Li X, Wang K, Rahaman MM, Sun C, et al. A comprehensive review of Markov random field and conditional random field approaches in pathology image analysis. Arch Comput Methods Eng 2021;2021:1-31.
8. Li C, Chen H, Li X, Xu N, Hu Z, Xue D, et al. A review for cervical histopathology image analysis using machine vision approaches. Artif Intell Rev 2020;53:4821-62.
9. Doi K. Current status and future potential of computer-aided diagnosis in medical imaging. Br J Radiol 2005;78:S3-19.
10. Chen H, Li C, Li X, Rahaman MM, Hu W, Li Y, et al. IL-MCAM: An interactive learning and multi-channel attention mechanism-based weakly supervised colorectal histopathology image classification approach. Comput Biol Med 2022;143:105265.
11. Li Y, Wu X, Li C, Li X, Chen H, Sun C, et al. A hierarchical conditional random field-based attention mechanism approach for gastric histopathology image classification. Appl Intell 2022:1-22. [Doi: 10.1007/s10489-021-02886-2].
12. Hu W, Li C, Li X, Rahaman MM, Ma J, Zhang Y, et al. GasHisSDB: A new gastric histopathology image dataset for computer aided diagnosis of gastric cancer. Comput Biol Med 2022;142:105207.
13. Sun C, Li C, Zhang J, Rahaman MM, Ai S, Chen H, et al. Gastric histopathology image segmentation using a hierarchical conditional random field. Biocybern Biomed Eng 2020;40:1535-55.
14. Sun C, Li C, Zhang J, Kulwa F, Li X. Hierarchical conditional random field model for multi-object segmentation in gastric histopathology images. Electron Lett 2020;56:750-3.
15. Bengtsson E, Malm P. Screening for cervical cancer using automated analysis of PAP-smears. Comput Math Methods Med 2014;2014:842037.
16. Li C, Chen H, Zhang L, Xu N, Xue D, Hu Z, et al. Cervical histopathology image classification using multilayer hidden conditional random fields and weakly supervised learning. IEEE Access 2019;7:90378-97.
17. Xue D, Zhou X, Li C, Yao Y, Rahaman MM, Zhang J, et al. An application of transfer learning and ensemble learning techniques for cervical histopathology image classification. IEEE Access 2020;8:104603-18.
18. Li Y, Wu X, Li C, Sun C, Li X, Rahaman MM, et al. Intelligent gastric histopathology image classification using hierarchical conditional random field based attention mechanism. ICMLC 2021;2021:330-5.
19. Sun C, Li C, Xu H, Zhang J, Ai S, Zhou X, et al. A comparison of segmentation methods in gastric histopathology images. In: Proceedings of ISICDM 2020. ACM; 2021. p. 75-9. [Doi: 10.1145/3451421.3451438].
20. Jung H, Suloway C, Miao T, Edmondson EF, Lisle C. Integration of deep learning and graph theory for analyzing histopathology whole-slide images. In: 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR); 2018. p. 1-5. [Doi: 10.1109/AIPR.2018.8707424].
21. Keenan SJ, Diamond J, McCluggage WG, Bharucha H, Thompson D, Bartels PH, et al. An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN). J Pathol 2000;192:351-62.
22. Guillaud M, Cox D, Adler-Storthz K, Malpica A, Staerkel G, Matisic J, et al. Exploratory analysis of quantitative histopathology of cervical intraepithelial neoplasia: Objectivity, reproducibility, malignancy-associated changes, and human papillomavirus. Cytometry A 2004;60:81-9.
23. Li C, Hu Z, Chen H, Ai S, Li X. Cervical histopathology image clustering using graph based unsupervised learning. In: Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019); 2020. p. 141-52. [Doi: 10.1007/s42979-021-00469-z].
24. Rahaman MM, Li C, Yao Y, Kulwa F, Rahman MA, Wang Q, et al. Identification of COVID-19 samples from chest X-ray images using deep learning: A comparison of transfer learning approaches. J Xray Sci Technol 2020;28:821-39.
25. Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric pathology image classification using stepwise fine-tuning for deep neural networks. J Healthc Eng 2018;2018:8961781.
26. Quan Y, Lin P, Xu Y, Nan Y, Ji H. Nonblind image deblurring via deep learning in complex field. IEEE Trans Neural Netw Learn Syst 2021;PP:1-14.
27. Nguyen NQ, Lee SW. Robust boundary segmentation in medical images using a consecutive deep encoder-decoder network. IEEE Access 2019;7:33795-808.
28. Hambarde P, Talbar S, Mahajan A, Chavan S, Sable N. Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net. Biocybern Biomed Eng 2020;40:1421-35.
29. Tao S, Guo Y, Zhu C, Chen H, Zhang Y, Yang J, et al. Hybrid model enabling highly efficient follicular segmentation in thyroid cytopathological whole slide image. Intell Med 2021;1:70-9.
30. Korkmaz SA, Binol H. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection. J Mol Struct 2018;1156:255-63.
31. Lim H, Kim DW. Pairwise dependence-based unsupervised feature selection. Pattern Recognit 2021;111:107663.
32. Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep learning models for histopathological classification of gastric and colonic epithelial tumours. Sci Rep 2020;10:1504.
33. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham; 2015. p. 234-41. [Doi: 10.1007/978-3-319-24574-4_28].
34. Zhang J, Li C, Kosov S, Grzegorzek M, Shirahama K, Jiang T, et al. LCU-Net: A novel low-cost U-Net for environmental microorganism image segmentation. Pattern Recognit 2021;115:1-17.
35. Zhang J, Li C, Kulwa F, Zhao X, Sun C, Li Z, et al. A multiscale CNN-CRF framework for environmental microorganism image segmentation. Biomed Res Int 2020;2020:4621403.
36. Sanfilippo A. Graph theory. In: Encyclopedia of Language & Linguistics. 2nd ed. 2006. p. 140-2. [Doi: 10.1016/B0-08-044854-2/01600-X].
37. Li YZ, Wen J. A novel fuzzy distance-based minimum spanning tree clustering algorithm for face detection. Cognit Comput 2022:1-12. [Doi: 10.1007/s12559-022-10002-w].
38. Perumal L. New approaches for Delaunay triangulation and optimisation. Heliyon 2019;5:e02319.
39. Yan DM, Bao G, Zhang X, Wonka P. Low-resolution remeshing using the localized restricted Voronoi diagram. IEEE Trans Vis Comput Graph 2014;20:1418-27.
40. Li C, Hu Z, Chen H, Ai S, Li X. A cervical histopathology image clustering approach using graph based features. SN Comput Sci 2021;2:1-20. [Doi: 10.1007/s42979-021-00469-z].
41. Cruz-Roa A, Xu J, Madabhushi A. A note on the stability and discriminability of graph based features for classification problems in digital pathology. In: Tenth International Symposium on Medical Information Processing and Analysis (SIPAIM 2014). International Society for Optics and Photonics; 2014. [Doi: 10.1117/12.2085141].
42. Boser BE, Guyon IM, Vapnik VN. A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory; 1992. p. 144-52. [Doi: 10.1145/130385.130401].
43. Wang SC. Artificial neural network. In: Interdisciplinary Computing in Java Programming. Boston, MA: Springer; 2003. p. 81-100. [Doi: 10.1007/978-1-4615-0377-4].
44. Peterson LE. K-nearest neighbor. Scholarpedia 2009;4:1883.
45. Breiman L. Random forests. Mach Learn 2001;45:5-32.
46. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556; 2014. [Doi: 10.48550/arXiv.1409.1556].
47. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. p. 2818-26. [Doi: 10.1109/CVPR.2016.308].
48. Actor JA, Fuentes DT, Rivière B. Identification of kernels in a convolutional neural network: Connections between level set equation and deep learning for image segmentation. Proc SPIE Int Soc Opt Eng 2020;11313:1131317.
49. Fan H, Xie F, Li Y, Jiang Z, Liu J. Automatic segmentation of dermoscopy images using saliency combined with Otsu threshold. Comput Biol Med 2017;85:75-85.
50. Hasan SM, Ahmad M. Two-step verification of brain tumor segmentation using watershed-matching algorithm. Brain Inform 2018;5:8.
51. Xue L, Wang X, Yang Y, Zhao G, Han Y, Fu Z, et al. Segnet network algorithm-based ultrasound images in the diagnosis of gallbladder stones complicated with gallbladder carcinoma and the relationship between P16 expression with gallbladder carcinoma. J Healthc Eng 2021;2021:2819986.
52. Ying S, Wang B, Zhu H, Liu W, Huang F. Caries segmentation on tooth X-ray images with a deep network. J Dent 2022;119:104076.
53. Ulaby FT, Kouyate F, Brisco B, Lee Williams TH. Textural information in SAR images. IEEE Trans Geosci Remote Sens 1986;24:235-45.
54. Baraldi A, Parmiggiani F. An investigation of the textural characteristics associated with gray level co-occurrence matrix statistical parameters. IEEE Trans Geosci Remote Sens 1995;33:293-304.
55. Li C, Shirahama K, Grzegorzek M. Application of content-based image analysis to environmental microorganism classification. Biocybern Biomed Eng 2015;35:10-21.
56. Kosov S, Shirahama K, Li C, Grzegorzek M. Environmental microorganism classification using conditional random fields and deep convolutional neural networks. Pattern Recognit 2018;77:248-61.
57. Zhang J, Li C, Rahaman MM, Yao Y, Ma P, Zhang J, et al. A comprehensive review of image analysis methods for microorganism counting: From classical image processing to deep learning approaches. Artif Intell Rev 2022;55:2875-944.
58. Li C, Huang X, Jiang T, Xu N. Full-automatic computer aided system for stem cell clustering using content-based microscopic image analysis. Biocybern Biomed Eng 2017;37:540-58.
59. Rahaman MM, Li C, Wu X, Yao Y, Hu Z, Jiang T, et al. A survey for cervical cytopathology image analysis using deep learning. IEEE Access 2020;8:61687-710.
60. Rahaman MM, Li C, Yao Y, Kulwa F, Wu X, Li X, et al. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput Biol Med 2021;136:104649.
61. Liu W, Li C, Rahaman MM, Jiang T, Sun H, Wu X, et al. Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification: From convolutional neural networks to visual transformers. Comput Biol Med 2022;141:105026.
62. Shen M, Li C, Huang W, Szyszka P, Shirahama K, Grzegorzek M, et al. Interactive tracking of insect posture. Pattern Recognit 2015;48:3560-71.
63. Chen A, Li C, Zou S, Rahaman MM, Yao Y, Chen H, et al. SVIA dataset: A new dataset of microscopic videos and images for computer-aided sperm analysis. Biocybern Biomed Eng 2022;42:204-14.
64. Li X, Li C, Zhao W, Gu Y, Li J, Xu P. Comparison of visual feature extraction methods of sperms in semen microscopic videos. In: Proceedings of ISICDM 2020. ACM; 2021. p. 206-12. [Doi: 10.1145/3451421.3451465].
65. Zhao W, Zou S, Li C, Li J, Zhang J, Ma P, et al. A survey of sperm detection techniques in microscopic videos. In: Proceedings of ISICDM 2020. ACM; 2021. p. 219-24. [Doi: 10.1145/3451421.3451467].
66. Li X, Li C, Kulwa F, Rahaman MM, Zhao W, Wang X, et al. Foldover features for dynamic object behaviour description in microscopic videos. IEEE Access 2020;8:114519-40.

