Catch up on the latest AI articles

[No Need To Stain?] Style Conversion Of HE-stained Images To Other Stains In The Image Generation Network



3 main points
✔️ Converts HE-stained images into other stained images by deep learning
✔️ Conversion to Masson's trichrome, PAS, and Jones methenamine silver staining improves diagnostic accuracy
✔️ The method can be adapted to various staining methods

Deep learning-based transformation of H&E stained tissues into special stains
written by 
Kevin de Haan, Yijie Zhang, Jonathan E. Zuckerman, Tairan Liu, Anthony E. Sisk, Miguel F. P. Diaz, Kuang-Yu Jen, Alexander Nobori, Sofia Liou, Sarah Zhang, Rana Riahi, Yair Rivenson, W. Dean Wallace, Aydogan Ozcan
(Submitted on 12 Aug 2021)
Comments: Nature Communications.

Subjects: Computer Vision and Pattern Recognition (cs.CV)


The images used in this article are from the paper, the introductory slides, or were created based on them.


In pathological examination, HE staining is the most basic staining method. In addition, special staining methods can be used to obtain tissue- and disease-specific histological images. This paper introduces a machine learning model that converts HE staining into special stains (Masson's trichrome, PAS, and Jones methenamine silver staining) using tissue sections from kidney needle biopsies.

The model was trained by supervised learning and validated through the evaluations of three renal pathologists and the diagnoses of a fourth. Across 58 cases, the virtually stained images improved diagnostic accuracy for several non-neoplastic renal diseases (p=0.0095). In addition, the virtually stained images were found to be statistically equivalent to the actual histochemically stained images. This stain conversion can improve diagnostic accuracy while significantly reducing staining costs.

Introduction

Histopathological evaluation is performed by looking through a microscope at a tissue section or by viewing a scanned image of a whole slide image (WSI) on a computer screen. Visual observation is the gold standard of histopathology and an important workflow in pathology, regardless of the type of disease.

Tissue sections are stained to add color contrast, with hematoxylin and eosin (HE) staining being the most common method: HE staining is relatively simple, is performed in almost all cases, and accounts for about 80% of human tissue staining.

In addition to HE staining, various other staining methods are used to highlight histological characteristics. For example, Masson's trichrome (MT) is used to view connective tissue, and periodic acid-Schiff (PAS) is used to examine the basement membrane. Jones methenamine silver (JMS) provides a clearer visualization of glomerular structures. These stains allow the pathologist to detect subtle basement membrane abnormalities and thus diagnose non-neoplastic renal disease.

Traditional histopathological workflows require not only time and money but also laboratory infrastructure. Furthermore, when multiple staining methods are used, multiple tissue sections and separate staining procedures are needed, driving up costs. In general, HE staining is performed first, and special staining is then ordered at the discretion of the pathologist, so special stains add turnaround time.

To address this issue, alternative contrast mechanisms such as nonlinear microscopy and UV tissue surface excitation are known. More recently, virtual staining by deep learning has been developed. The staining is based on a variety of modalities such as autofluorescence, hyperspectral, and quantitative phase. These techniques are performed on label-free, i.e. unstained sections.

In contrast, methods that convert an already stained section into another stain (stain conversion) have been considered. A variety of stain conversions have been described in the literature, such as HE to MT, and Ki67-CD8 to FAP-CK (fibroblast activation protein-cytokeratin) in situ hybridization.

However, many of these stain conversion techniques rely on an unsupervised approach: generative adversarial networks known as CycleGANs. Such networks, which use only a distribution-matching loss, are known to be prone to hallucination (generated image features that can lead to misdiagnosis) when applied to medical images.
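To make the hallucination risk concrete, here is a hypothetical illustration (not the paper's code) of CycleGAN's core constraint. CycleGAN trains two generators, G: HE → special stain and F: special stain → HE, with only an adversarial (distribution-matching) loss plus a cycle-consistency loss; because no pixel-wise ground truth constrains G directly, G can invent content as long as F can map it back. The toy generators below are our own stand-ins.

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1, averaged over pixels."""
    return np.mean(np.abs(F(G(x)) - x))

# Toy generators: G shifts intensities, F shifts them back.
# The cycle loss is ~0 even though G could be adding arbitrary
# "features", as long as F knows how to undo them.
G = lambda img: img + 0.1
F = lambda img: img - 0.1

x = np.random.rand(4, 4)  # stand-in for an HE image patch
print(cycle_consistency_loss(x, G, F))  # ~0: cycle is satisfied
```

This is precisely why a pixel-wise supervised loss against registered ground truth, as used in this paper, is a stronger safeguard for medical images.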

Therefore, this paper introduces a supervised-learning stain conversion framework (Figure 1). The authors validate the model by evaluating tissues from non-neoplastic renal diseases. In many clinical settings, the pathologist first makes a preliminary diagnosis with HE staining. Treatment can be initiated on that basis, but a definitive diagnosis is often made with the special-stained images provided the next day. This model can therefore shorten the time required to provide special-stained images, which is expected to lead to significant clinical improvement for urgent diseases such as crescentic glomerulonephritis and GVHD.

Figure 1: Deep learning is used to transform HE-stained images into specially stained images.


The validity of the stain conversion in this model was confirmed by three pathologists. For each of 58 cases, the pathologists first made a diagnosis from the HE sections and the stain-converted histology, then made a diagnosis from the actual special-stained sections, and the two diagnoses were compared.

Conventionally, pathologists made a preliminary diagnosis from the HE histological image and then applied special staining if necessary. In this study, the special-staining step could be skipped by generating special-stained images, which improved the diagnostic accuracy of various non-neoplastic renal diseases.

Design and training of stain transformation networks

A single CNN performs the stain transformation, while a separate GAN (the style transfer network in Figure 2b) is used to generalize the input images.

Figure 2b: This CycleGAN transforms the input into a slightly different rendering of the same HE-stained image (generalization). This reproduces the coloration variation that arises from different researchers, labs, temperatures, reagents, and so on.
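The intent of this generalization step can be sketched without the CycleGAN itself: expose the stain-transformation CNN to many plausible "lab styles" of the same tissue so it does not overfit to one scanner or reagent batch. The minimal sketch below only mimics that effect with a simple per-channel color jitter; the function name and parameters are our own, not from the paper.

```python
import numpy as np

def jitter_stain_style(patch, rng, scale=0.1):
    """Randomly rescale and shift each RGB channel, clipped to [0, 1].

    A crude stand-in for style variation: the tissue structure is
    untouched, only the coloration changes.
    """
    gain = 1.0 + rng.uniform(-scale, scale, size=(1, 1, 3))
    bias = rng.uniform(-scale, scale, size=(1, 1, 3))
    return np.clip(patch * gain + bias, 0.0, 1.0)

rng = np.random.default_rng(0)
patch = rng.random((8, 8, 3))              # stand-in for an HE patch
augmented = jitter_stain_style(patch, rng)  # same tissue, new "style"
```

The paper's CycleGAN learns such style changes from real inter-lab variation rather than sampling them from a fixed distribution, which is why it generalizes better than hand-crafted jitter.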

Figure 2a shows the network for virtual staining, with DAPI- and Texas Red-stained images as input and virtual HE-stained images as output. The actual stained HE image serves as the ground truth. This network is the basis of the stain conversion network.
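The key difference from the unsupervised CycleGAN setup is that the generator output here is compared pixel-wise against a registered, truly stained ground-truth image, typically combined with an adversarial term. The loss below is a hypothetical sketch of such a supervised objective; the weighting and exact terms are illustrative, not taken from the paper.

```python
import numpy as np

def supervised_gan_loss(pred, target, d_score, l1_weight=100.0):
    """Pixel-wise L1 against ground truth plus an adversarial term.

    d_score is the discriminator's score for pred (in (0, 1], higher =
    judged more realistic); the generator is rewarded for fooling it.
    """
    l1 = np.mean(np.abs(pred - target))     # anchors output to ground truth
    adv = -np.log(d_score + 1e-8)           # non-saturating GAN term
    return l1_weight * l1 + adv

pred = np.full((4, 4), 0.5)    # generator output patch (toy)
target = np.full((4, 4), 0.5)  # registered ground-truth stain (toy)
print(supervised_gan_loss(pred, target, d_score=1.0))  # ~0 at a perfect match
```

Because the L1 term penalizes any pixel that deviates from the real stain, hallucinated structures are directly punished, which the distribution-only CycleGAN loss cannot guarantee.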