<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn>
<issn pub-type="ppub">0868-4952</issn>
<issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFOR442</article-id>
<article-id pub-id-type="doi">10.15388/20-INFOR442</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Deep Learning Model for Cell Nuclei Segmentation and Lymphocyte Identification in Whole Slide Histology Images</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Budginaitė</surname><given-names>Elzbieta</given-names></name><email xlink:href="elzebudg@gmail.com">elzebudg@gmail.com</email><xref ref-type="aff" rid="j_infor442_aff_001">1</xref><bio>
<p><bold>E. Budginaitė</bold> graduated with a master’s degree in systems biology from Vilnius University, Lithuania, in 2019. Her interests include machine learning, graph theory, natural language processing, and artificial neural networks.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Morkūnas</surname><given-names>Mindaugas</given-names></name><email xlink:href="mindaugas.morkunas@mif.vu.lt">mindaugas.morkunas@mif.vu.lt</email><xref ref-type="aff" rid="j_infor442_aff_001">1</xref><xref ref-type="aff" rid="j_infor442_aff_002">2</xref><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>M. Morkūnas</bold> graduated from Vilnius Gediminas Technical University, Lithuania, in 2002. In 2016, he started PhD studies in informatics engineering at the Institute of Data Science and Digital Technologies, Vilnius University, Lithuania. His interests include bioinformatics, cancer biology, image analysis, machine learning, and artificial neural networks.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Laurinavičius</surname><given-names>Arvydas</given-names></name><email xlink:href="arvydas.laurinavicius@vpc.lt">arvydas.laurinavicius@vpc.lt</email><xref ref-type="aff" rid="j_infor442_aff_002">2</xref><bio>
<p><bold>A. Laurinavičius</bold> MD, PhD, is a full-time professor at Vilnius University, Department of Pathology, Forensic Medicine and Pharmacology, and director and consultant pathologist at the National Center of Pathology. He is a chair and board member of multiple international professional societies. Fields of interest: renal pathology, digital pathology image analysis, pathology informatics, health information systems and standards, testing of cancer biomarkers in tissue, multi-resolution analysis of biomarkers.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Treigys</surname><given-names>Povilas</given-names></name><email xlink:href="povilas.treigys@mif.vu.lt">povilas.treigys@mif.vu.lt</email><xref ref-type="aff" rid="j_infor442_aff_001">1</xref><bio>
<p><bold>P. Treigys</bold> graduated from Vilnius Tech (former Vilnius Gediminas Technical University), Lithuania, in 2005. In 2010, he received the doctoral degree in computer science (PhD) from the Vilnius University Institute of Data Science and Digital Technologies (former Institute of Mathematics and Informatics) jointly with Vilnius Tech. His interests include applications of deep neural networks in speech and image analysis. Among his other interests are automated signal segmentation, medical audio and image analysis, big data, and software engineering.</p></bio>
</contrib>
<aff id="j_infor442_aff_001"><label>1</label>Institute of Data Science and Digital Technologies, <institution>Vilnius University</institution>, Akademijos str. 4, LT-08663 Vilnius, <country>Lithuania</country></aff>
<aff id="j_infor442_aff_002"><label>2</label>National Center of Pathology, <institution>Affiliate of Vilnius University Hospital Santaros klinikos</institution>, P. Baublio str. 5, LT-08406 Vilnius, <country>Lithuania</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2021</year></pub-date>
<pub-date pub-type="epub"><day>12</day><month>1</month><year>2021</year></pub-date>
<volume>32</volume><issue>1</issue><fpage>23</fpage><lpage>40</lpage>
<history>
<date date-type="received"><month>3</month><year>2020</year></date>
<date date-type="accepted"><month>12</month><year>2020</year></date>
</history>
<permissions><copyright-statement>© 2021 Vilnius University</copyright-statement><copyright-year>2021</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Anti-cancer immunotherapy is dramatically changing the clinical management of many types of tumours towards less harmful and more personalized treatment plans than conventional chemotherapy or radiation. Precise analysis of the spatial distribution of immune cells in the tumourous tissue is necessary to select the patients that would best respond to the treatment. Here, we introduce a deep learning-based workflow for cell nuclei segmentation and subsequent immune cell identification in routine diagnostic images. We applied our workflow to a set of hematoxylin and eosin (H&amp;E) stained breast cancer and colorectal cancer tissue images to detect tumour-infiltrating lymphocytes. Firstly, to segment all nuclei in the tissue, we applied the multiple-image input layer architecture (Micro-Net, Dice coefficient (DC) <inline-formula id="j_infor442_ineq_001"><alternatives>
<mml:math><mml:mn>0.79</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.79\pm 0.02$]]></tex-math></alternatives></inline-formula>). We supplemented the Micro-Net with a newly introduced texture block to increase segmentation accuracy (DC = <inline-formula id="j_infor442_ineq_002"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula>). We preserved the shallow architecture of the segmentation network with only 280 K trainable parameters (cf. U-Net with ∼1900 K parameters, DC = <inline-formula id="j_infor442_ineq_003"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.03$]]></tex-math></alternatives></inline-formula>). Subsequently, we added an active contour layer to the ground truth images to further increase the performance (DC = <inline-formula id="j_infor442_ineq_004"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.02$]]></tex-math></alternatives></inline-formula>). Secondly, to discriminate lymphocytes from the set of all segmented nuclei, we explored a multilayer perceptron and achieved a 0.70 classification f-score. Remarkably, colour normalization significantly improved the binary classification of segmented nuclei (f-score = 0.80). To inspect model generalization, we evaluated the trained models on a public dataset that was not used during training. We conclude that the proposed workflow achieved promising results and, with little effort, can be employed in multi-class nuclei segmentation and identification tasks.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>breast cancer</kwd>
<kwd>colorectal cancer</kwd>
<kwd>immune infiltrate</kwd>
<kwd>lymphocytes</kwd>
<kwd>digital pathology</kwd>
<kwd>deep learning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="j_infor442_s_001">
<label>1</label>
<title>Introduction</title>
<p>The host-tumour immune conflict is a well-known process occurring during tumourigenesis. It is now clear that tumours escape host immune responses by a variety of biological mechanisms (Beatty and Gladney, <xref ref-type="bibr" rid="j_infor442_ref_005">2015</xref>; Zappasodi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_034">2018</xref>; Allard <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_002">2018</xref>). Consequently, tumour-infiltrating lymphocytes (TILs) are of increasing importance in pathology diagnosis, prognosis, and treatment. Quantification of the immune infiltrate along tumour margins in the tumour microenvironment has gathered researchers’ attention as a reliable prognostic measure for various cancer types (Basavanhally <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_004">2010</xref>; Galon <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_010">2012</xref>; Huh <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_014">2012</xref>; Rasmusson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_025">2020</xref>). With the emergence of whole slide imaging (WSI) and the Food and Drug Administration’s (FDA) recent approval of WSI for clinical practice, various techniques have been proposed to detect lymphocytes in digital pathology images, focusing on algorithms based on colour, texture, and shape feature extraction, morphological operations, region growing, and image classification.</p>
<p><bold>Recent works.</bold> In general, prior studies were limited to lymphocyte detection and therefore relied on unsupervised approaches, such as in Basavanhally <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_004">2010</xref>), where lymphocytes were automatically detected by a combination of region growing and Markov random field algorithms. In Kuse <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_017">2010</xref>), an epithelium-stroma tissue classification based on 18 texture features was applied before detection to reduce noise irrelevant to lymphocyte nuclei detection.</p>
<p>As opposed to individual nuclei detection, models proposed in Turkki <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_032">2016</xref>) and Saltz <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_030">2018</xref>) have been trained to identify TIL-enriched areas rather than stand-alone lymphocytes. In a study by Saltz <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_030">2018</xref>), authors have developed a convolutional neural network (CNN) classifier capable of identifying TIL-enriched areas in WSI slides from TCGA (The Cancer Genome Atlas) database. Similarly, in Turkki <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_032">2016</xref>), lymphocyte-rich areas were identified by training an SVM classifier on a set of features extracted by the VGG-F neural network from CD45 IHC-guided superpixel-level annotations in digitized H&amp;E specimen.</p>
<p>Such a high-level tissue segmentation approach has been widely used for cancer tissue segmentation tasks, such as stroma-epithelium tissue classification (Morkunas <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_022">2018</xref>). However, lymphocyte infiltration quantification accuracy would benefit from a more granular level of analysis using object segmentation models. Convolutional encoder-decoder model architectures (convolutional autoencoders, CAEs) have been established as an efficient method for medical imaging tasks. The U-Net autoencoder model, proposed in Ronneberger <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_028">2015</xref>), has become a gold-standard model for medical applications ranging from cell nuclei segmentation to tissue analysis in computed tomography (CT) scans (Ma <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_019">2019</xref>). The deep, semantic feature maps from the U-Net decoder are combined with shallow, low-level feature maps from the encoder part of the model via skip connections, thus maintaining the fine-grained features of the input image. This renders U-Net applicable in medical image segmentation, where precise detail recreation is of utmost importance. Specifically for lymphocyte detection, approaches utilizing fully convolutional neural networks on digital H&amp;E slides were published by Chen and Srinivas (<xref ref-type="bibr" rid="j_infor442_ref_007">2016</xref>) and Linder <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_018">2019</xref>). Both approaches investigate convolutional autoencoders using histology sample patches with annotated lymphocyte nuclei. Detection and classification, but not segmentation, of nuclei in H&amp;E images were done using a spatially constrained CNN in Sirinukunwattana <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_031">2016</xref>). 
Notably, the classification into four cell types (epithelial, inflammatory, fibroblast, and miscellaneous) was performed on patches centred on nuclei considering their local neighbourhood. A more recent adaptation – the Micro-Net model – incorporates an additional input image downsampling layer that circumvents the max-pooling process, thus maintaining the input features ignored by the max-pooling layer. This way, more detailed contextual information is passed into the output layer, enabling better segmentation of adjacent cell nuclei (Raza <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_026">2019</xref>).</p>
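<p>The Micro-Net idea described above – re-injecting downsampled copies of the input image at each encoder level so that details discarded by max-pooling are preserved – can be illustrated with a minimal sketch. The helper names and the average-pooling resize below are illustrative assumptions, not the published implementation:</p>

```python
import numpy as np

def downsample(img, factor):
    """Naive average-pooling downsample (a stand-in for bilinear resizing)."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def encoder_level_inputs(img, levels=3):
    """At each encoder level, Micro-Net concatenates a downsampled copy of the
    original image with the max-pooled feature maps, so that input features
    lost to pooling are re-injected. Only the image branch is shown here."""
    return [downsample(img, 2 ** k) for k in range(1, levels + 1)]

img = np.random.rand(256, 256, 3)           # a 256x256 RGB patch, as in the study
branches = encoder_level_inputs(img)
print([b.shape for b in branches])          # [(128, 128, 3), (64, 64, 3), (32, 32, 3)]
```

In the full model, each of these branches would pass through its own convolutions before concatenation with the corresponding encoder feature maps.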
<p>The Hover-Net model published in Graham <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_011">2019</xref>) enables simultaneous cell nuclei segmentation and classification by three dedicated branches of the model – segmenting, separating, and classifying. Applied to two datasets, Hover-Net achieved classification f-scores of 0.573 and 0.631. In Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>), AlexNet was employed to identify the centres of lymphocyte nuclei. The network was trained on cropped lymphocyte nuclei as the positive class, while the negative class was sampled from the regions most distant from the annotated ground truth. The trained network produces posterior class membership probabilities for every pixel in the test image; potential centres of lymphocyte nuclei are then identified by disk kernel convolution and thresholding. In Alom <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>), the same dataset was utilized to evaluate different advanced neural networks on a variety of digital pathology tasks, including lymphocyte detection. The authors proposed a densely connected recurrent convolutional network (DCRCNN) to directly regress a density surface with peaks corresponding to lymphocyte centres. Compared to AlexNet, the DCRCNN improves the f-score by 1%; it is worth mentioning, however, that neither study (Janowczyk and Madabhushi, <xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>; Alom <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>) demonstrates method generalization – in both, the same dataset was used for training and testing.</p>
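<p>The disk kernel convolution and thresholding step used in Janowczyk and Madabhushi (2016) to turn per-pixel probabilities into candidate nucleus centres can be sketched as follows. The kernel radius and threshold values are illustrative assumptions, not the parameters of the original study:</p>

```python
import numpy as np

def disk_kernel(radius):
    """Normalized flat disk kernel of the given pixel radius."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def candidate_centres(prob_map, radius=3, thresh=0.5):
    """Smooth a per-pixel class-probability map with a disk kernel, then
    threshold; high-response regions approximate nucleus centres."""
    r = radius
    padded = np.pad(prob_map, r)
    k = disk_kernel(r)
    out = np.zeros_like(prob_map)
    for i in range(prob_map.shape[0]):          # direct (slow) 2-D convolution
        for j in range(prob_map.shape[1]):
            out[i, j] = (padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * k).sum()
    return out >= thresh

prob = np.zeros((20, 20))
prob[8:12, 8:12] = 1.0                          # a toy blob of high lymphocyte probability
mask = candidate_centres(prob)                  # True near the blob centre only
```

In practice the convolution would be done with an FFT or an optimized library routine; the direct loop is kept here only for transparency.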
<p>Our study focuses on the customization of a cell segmentation autoencoder architecture and aims to investigate a two-step cell segmentation and subsequent lymphocyte classification workflow using digital histology images of H&amp;E stained tumour tissues. Robust separation of clumped cell nuclei is a common challenge in whole slide image analysis (Guo <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_013">2018</xref>). To tackle this challenge, our cell nuclei segmentation model incorporates an additional active contour layer, which improves the separation of adjacent cell nuclei. Apart from overlapping nuclei, image magnification is another critical factor for nuclei segmentation models. Publicly available annotated nuclei datasets contain histological samples scanned at <inline-formula id="j_infor442_ineq_005"><alternatives>
<mml:math><mml:mn>40</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$40\times $]]></tex-math></alternatives></inline-formula> magnification, preserving texture features and facilitating precise feature extraction. In pathology practice, however, samples scanned at <inline-formula id="j_infor442_ineq_006"><alternatives>
<mml:math><mml:mn>20</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$20\times $]]></tex-math></alternatives></inline-formula> magnification are more common. Image analysis at a lower resolution is faster and less memory-intensive, yet precise cell nuclei segmentation becomes a more difficult task. As reported by Cui <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_008">2019</xref>), the active contour layer improves adjacent nuclei separation – this has been observed in our experiments as well. We report that multiple re-injection of downsampled images into the model – an approach initially proposed for the Micro-Net model in Raza <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_026">2019</xref>) – significantly boosts nuclei segmentation performance compared to the baseline U-Net model (Raza <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_026">2019</xref>; Ronneberger <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_028">2015</xref>). We further observe that our customized model architecture component – two parallel blocks of convolutional layers, referred to as a texture block – increases segmentation quality compared to the original Micro-Net model and reduces model complexity to less than 280 000 parameters. For the lymphocyte classification task, we utilized a random forest classifier, a multilayer perceptron, and a CNN. We performed minimal hyperparameter tuning of the classification models using a grid search procedure. We used a private dataset to train our models and a public dataset for final workflow evaluation, thus demonstrating the generalization of the proposed models.</p>
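<p>The Dice coefficient (DC) used throughout this paper to score segmentation quality measures the overlap of a predicted mask <italic>A</italic> and a ground-truth mask <italic>B</italic> as DC = 2|A∩B|/(|A| + |B|). A minimal implementation (the smoothing term <italic>eps</italic> is an illustrative convention to avoid division by zero on empty masks):</p>

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient DC = 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1   # 4-pixel predicted square
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1   # 6-pixel ground-truth rectangle, overlap 4
print(round(dice_coefficient(a, b), 2))      # 0.8
```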
<p>The paper is organized as follows. In Section <xref rid="j_infor442_s_003">2.1</xref>, we describe the datasets used in the study. In Section <xref rid="j_infor442_s_004">2.2</xref>, we introduce the segmentation method based on an autoencoder neural network architecture, followed by the classification of segmented nuclei. In Section <xref rid="j_infor442_s_008">3</xref>, we present experimental results comparing different cell nuclei segmentation and lymphocyte discrimination approaches. In particular, Section <xref rid="j_infor442_s_015">3.3</xref> covers the evaluation of our method on the publicly available annotated dataset of breast cancer H&amp;E images. We formulate conclusions in Section <xref rid="j_infor442_s_017">4</xref>.</p>
</sec>
<sec id="j_infor442_s_002" sec-type="materials|methods">
<label>2</label>
<title>Materials and Methods</title>
<sec id="j_infor442_s_003">
<label>2.1</label>
<title>The Datasets</title>
<p><bold>Images.</bold> In our study, we used 4 whole-slide histology images prepared with H&amp;E staining (2 WSIs from breast cancer patients and 2 from colorectal cancer patients). These slides were produced at the National Center of Pathology, Lithuania (NCP), and digitized with the Aperio ScanScope XT Slide Scanner at <inline-formula id="j_infor442_ineq_007"><alternatives>
<mml:math><mml:mn>20</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$20\times $]]></tex-math></alternatives></inline-formula> magnification.</p>
<p>One WSI slide was obtained from The Cancer Genome Atlas database (slide ID: TCGA_AN_A0AM; Grossman <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_012">2016</xref>) and used for both segmentation and classification testing.</p>
<p>Two additional public datasets were used for classification testing purposes. The CRCHistoPhenotypes dataset (CRCHP) contains colorectal adenocarcinoma cell nuclei: 1143 nuclei are annotated as inflammatory (used as the lymphocyte category in our experiments) and 1040 as epithelial (used as the other cell type category) (Sirinukunwattana <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_031">2016</xref>). The breast cancer dataset (JAN) published by Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>) consists of 100 images (<inline-formula id="j_infor442_ineq_008"><alternatives>
<mml:math><mml:mn>100</mml:mn><mml:mo>×</mml:mo><mml:mn>100</mml:mn></mml:math>
<tex-math><![CDATA[$100\times 100$]]></tex-math></alternatives></inline-formula> pixel-sized) with lymphocytes annotated. Samples were digitized using <inline-formula id="j_infor442_ineq_009"><alternatives>
<mml:math><mml:mn>20</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$20\times $]]></tex-math></alternatives></inline-formula> magnification and stained with hematoxylin and eosin. An expert pathologist annotated lymphocytes by marking lymphocyte nuclei centres. In contrast to the CRCHP dataset, this image corpus is more suitable for our tasks since the data was prepared specifically for lymphocyte identification. The CRCHP dataset uses broader cell type categories: lymphocytes are annotated under the inflammatory label together with other immune cells such as mast cells and macrophages.</p>
<table-wrap id="j_infor442_tab_001">
<label>Table 1</label>
<caption>
<p>Two datasets were used for segmentation and classification tasks. Segmentation experiments were performed on <inline-formula id="j_infor442_ineq_010"><alternatives>
<mml:math><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>256</mml:mn></mml:math>
<tex-math><![CDATA[$256\times 256$]]></tex-math></alternatives></inline-formula> pixel-sized image patches. Classification experiments were performed on extracted cell nuclei embedded in blank <inline-formula id="j_infor442_ineq_011"><alternatives>
<mml:math><mml:mn>32</mml:mn><mml:mo>×</mml:mo><mml:mn>32</mml:mn></mml:math>
<tex-math><![CDATA[$32\times 32$]]></tex-math></alternatives></inline-formula> pixel-sized placeholders.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Segmentation set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Tumour type</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Raw set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Final augmented set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Origin</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC</td>
<td style="vertical-align: top; text-align: left">192</td>
<td style="vertical-align: top; text-align: left">3648</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">CRC</td>
<td style="vertical-align: top; text-align: left">82</td>
<td style="vertical-align: top; text-align: left">1558</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Training</td>
<td style="vertical-align: top; text-align: left">total</td>
<td style="vertical-align: top; text-align: left">274</td>
<td style="vertical-align: top; text-align: left">5206</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC</td>
<td style="vertical-align: top; text-align: left">54</td>
<td style="vertical-align: top; text-align: left">54</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">CRC</td>
<td style="vertical-align: top; text-align: left">16</td>
<td style="vertical-align: top; text-align: left">16</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Validation</td>
<td style="vertical-align: top; text-align: left">total</td>
<td style="vertical-align: top; text-align: left">70</td>
<td style="vertical-align: top; text-align: left">70</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC</td>
<td style="vertical-align: top; text-align: left">96</td>
<td style="vertical-align: top; text-align: left">96</td>
<td style="vertical-align: top; text-align: left">TCGA</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Testing</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">total</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">96</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">96</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">TCGA</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Classification set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Nucleus type</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Raw set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Final augmented set</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Origin</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">lymphocyte nuclei</td>
<td style="vertical-align: top; text-align: left">11032</td>
<td style="vertical-align: top; text-align: left">50950</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">other nuclei</td>
<td style="vertical-align: top; text-align: left">10922</td>
<td style="vertical-align: top; text-align: left">55825</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Training</td>
<td style="vertical-align: top; text-align: left">total nuclei</td>
<td style="vertical-align: top; text-align: left">21954</td>
<td style="vertical-align: top; text-align: left">106775</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">lymphocyte nuclei</td>
<td style="vertical-align: top; text-align: left">2588</td>
<td style="vertical-align: top; text-align: left">2588</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">other nuclei</td>
<td style="vertical-align: top; text-align: left">2751</td>
<td style="vertical-align: top; text-align: left">2751</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Validation</td>
<td style="vertical-align: top; text-align: left">total nuclei</td>
<td style="vertical-align: top; text-align: left">5339</td>
<td style="vertical-align: top; text-align: left">5339</td>
<td style="vertical-align: top; text-align: left">NCP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC lymphocytes</td>
<td style="vertical-align: top; text-align: left">903</td>
<td style="vertical-align: top; text-align: left">903</td>
<td style="vertical-align: top; text-align: left">TCGA</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">CRC lymphocytes</td>
<td style="vertical-align: top; text-align: left">1143</td>
<td style="vertical-align: top; text-align: left">1143</td>
<td style="vertical-align: top; text-align: left">CRCHP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">total lymphocytes</td>
<td style="vertical-align: top; text-align: left">2046</td>
<td style="vertical-align: top; text-align: left">2046</td>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC other</td>
<td style="vertical-align: top; text-align: left">1195</td>
<td style="vertical-align: top; text-align: left">1195</td>
<td style="vertical-align: top; text-align: left">TCGA</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">CRC other</td>
<td style="vertical-align: top; text-align: left">1040</td>
<td style="vertical-align: top; text-align: left">1040</td>
<td style="vertical-align: top; text-align: left">CRCHP</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">total other</td>
<td style="vertical-align: top; text-align: left">2235</td>
<td style="vertical-align: top; text-align: left">2235</td>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Testing I</td>
<td style="vertical-align: top; text-align: left">total nuclei</td>
<td style="vertical-align: top; text-align: left">4281</td>
<td style="vertical-align: top; text-align: left">4281</td>
<td style="vertical-align: top; text-align: left"/>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC lymphocytes</td>
<td style="vertical-align: top; text-align: left">2949</td>
<td style="vertical-align: top; text-align: left">2949</td>
<td style="vertical-align: top; text-align: left">JAN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">BC other</td>
<td style="vertical-align: top; text-align: left">1921</td>
<td style="vertical-align: top; text-align: left">1921</td>
<td style="vertical-align: top; text-align: left">JAN</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Testing II</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">total nuclei</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">4870</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">4870</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">JAN</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><bold>Segmentation dataset.</bold> To train and validate the segmentation model, we randomly selected 344 tiles of <inline-formula id="j_infor442_ineq_012"><alternatives>
<mml:math><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>256</mml:mn></mml:math>
<tex-math><![CDATA[$256\times 256$]]></tex-math></alternatives></inline-formula> pixel size. The dataset was split into training and validation sets. To test the segmentation model, we prepared 96 tiles from the breast cancer TCGA slide. Both the tiles generated from the TCGA slide and those generated from the NCP slides were manually annotated by EB and MM. In the annotation process, each cell nucleus present in an image patch was manually outlined, and 2-pixel-wide active contour borders surrounding each nucleus were added as a second layer to the nuclei segmentation masks. Each outlined nucleus was assigned a class label (lymphocyte or other). To the training set, we applied various image augmentation methods (rotation, flipping, transposition, RGB augmentation, brightness adjustment, and CLAHE (Zuiderveld, <xref ref-type="bibr" rid="j_infor442_ref_035">1994</xref>)) to obtain a final training set of 5206 images.</p>
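One way to derive such a 2-pixel-wide border layer from a binary nuclei mask is morphological dilation; the sketch below (assuming NumPy and SciPy; function and variable names are ours, not taken from the annotation tooling) subtracts the original mask from its dilation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def contour_layer(nuclei_mask, width=2):
    # Dilate the binary nuclei mask and subtract the original mask,
    # leaving a `width`-pixel border around each nucleus.
    dilated = binary_dilation(nuclei_mask, iterations=width)
    return dilated & ~nuclei_mask

mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True          # one square "nucleus"
border = contour_layer(mask)     # border pixels only, disjoint from the mask
```

The border layer can then be stacked with the nuclei mask as the second channel of the ground truth image.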
<table-wrap id="j_infor442_tab_002">
<label>Table 2</label>
<caption>
<p>Image augmentation techniques and parameters used for training dataset expansion.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Augmentation</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Parameters</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Transposition, rotation axis flipping</td>
<td style="vertical-align: top; text-align: left">Perpendicular rotation angles</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">CLAHE (Zuiderveld, <xref ref-type="bibr" rid="j_infor442_ref_035">1994</xref>)</td>
<td style="vertical-align: top; text-align: left">Cliplimit = 2.0, tilegridsize = (8, 8)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Brightness adjustment</td>
<td style="vertical-align: top; text-align: left">HSV colourspace, hue layer increased by 30</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RGB augmentation</td>
<td style="vertical-align: top; text-align: left">Random pixel value adjustments up to 0.1</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">RGB2HED colour adjustments (Ruifrok and Johnston, <xref ref-type="bibr" rid="j_infor442_ref_029">2001</xref>)</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Colour values adjusted within range <inline-formula id="j_infor442_ineq_013"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.02</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mn>0.001</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mn>0.15</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[0.02,0.001,0.15]$]]></tex-math></alternatives></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The segmentation dataset is summarized in Table <xref rid="j_infor442_tab_001">1</xref>, and the techniques used to augment training patches are summarized in Table <xref rid="j_infor442_tab_002">2</xref>.</p>
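The geometric augmentations in Table 2 (transposition, perpendicular rotations, axis flipping) can be sketched with NumPy; this is an illustrative sketch rather than the exact pipeline used in the study:

```python
import numpy as np

def dihedral_augment(patch):
    # Return the 8 dihedral variants of an image patch:
    # 4 perpendicular rotations, each with and without an axis flip.
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # axis flip
    return variants

# A dummy 256x256 RGB patch, as used for segmentation training
patch = np.random.rand(256, 256, 3)
augmented = dihedral_augment(patch)          # 8 variants per original patch
```

Colour-space augmentations (CLAHE, RGB and HED adjustments) would be applied on top of these geometric variants.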
<fig id="j_infor442_fig_001">
<label>Fig. 1</label>
<caption>
<p>Overall schema of the proposed workflow. The top panel shows the training phase for both the segmentation and classification models. The segmentation network is trained on original image patches and manually annotated ground truth images. The classification model is trained on cropped nuclei to discriminate lymphocytes (in the red box) from other nuclei. The middle panel shows the testing phase. The trained segmentation model accepts new images and produces segmentation masks (for clarity, the active contour layer in the resulting segmentation mask is not shown). The resulting segmentation masks are used to crop out detected cell nuclei, which are fed into the classifier model and sorted into lymphocytes and non-lymphocyte nuclei. The bottom panel shows representative segmentation results on the left (lymphocyte nuclei are coloured red for clarity) and, on the right, an original image with detected nuclei contours outlined and detected lymphocyte nuclei depicted with red dots. Green dots indicate lymphocyte ground truth.</p>
</caption>
<graphic xlink:href="infor442_g001.jpg"/>
</fig>
<p><bold>Classification dataset.</bold> To train and validate the classification models, we generated a dataset from the same image patches used to train the segmentation model. Specifically, the manually generated segmentation masks were used to crop out all types of cell nuclei from the raw images. Each extracted nucleus was centred in a blank <inline-formula id="j_infor442_ineq_014"><alternatives>
<mml:math><mml:mn>32</mml:mn><mml:mo>×</mml:mo><mml:mn>32</mml:mn></mml:math>
<tex-math><![CDATA[$32\times 32$]]></tex-math></alternatives></inline-formula> pixel-sized patch. Each nucleus-containing patch inherited the class label assigned manually to the ground truth during the annotation procedure. The nucleus-containing patches were further augmented by rotation and axis flipping. The testing set for the cell classifier consisted of 2098 TCGA breast cancer cell nuclei and 2183 colorectal adenocarcinoma cell nuclei from the CRCHP dataset (see Table <xref rid="j_infor442_tab_001">1</xref>).</p>
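The centring step can be sketched as follows (a minimal NumPy sketch assuming each nucleus crop already fits within the target patch; all names are illustrative):

```python
import numpy as np

def center_in_patch(crop, size=32):
    # Place a cropped nucleus in the centre of a blank size x size patch.
    h, w = crop.shape[:2]
    if h > size or w > size:
        raise ValueError("nucleus crop larger than target patch")
    patch = np.zeros((size, size, crop.shape[2]), dtype=crop.dtype)
    top = (size - h) // 2
    left = (size - w) // 2
    patch[top:top + h, left:left + w] = crop
    return patch

nucleus = np.ones((20, 18, 3), dtype=np.uint8)  # dummy cropped nucleus
patch = center_in_patch(nucleus)                # 32x32x3 classifier input
```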
</sec>
<sec id="j_infor442_s_004">
<label>2.2</label>
<title>The Proposed Method</title>
<p>The overall schema of the proposed workflow is summarized in Fig. <xref rid="j_infor442_fig_001">1</xref>.</p>
<fig id="j_infor442_fig_002">
<label>Fig. 2</label>
<caption>
<p>The architecture of the proposed deep learning model.</p>
</caption>
<graphic xlink:href="infor442_g002.jpg"/>
</fig>
<sec id="j_infor442_s_005">
<label>2.2.1</label>
<title>Modified Micro-Net Model</title>
<p>The autoencoder architecture for nuclei segmentation is shown in Fig. <xref rid="j_infor442_fig_002">2</xref>. The model consists of 3 encoder and 3 decoder blocks, each consisting of 2 convolution layers (<inline-formula id="j_infor442_ineq_015"><alternatives>
<mml:math><mml:mn>3</mml:mn><mml:mo>×</mml:mo><mml:mn>3</mml:mn></mml:math>
<tex-math><![CDATA[$3\times 3$]]></tex-math></alternatives></inline-formula> convolutional filters with stride 2), dropout (dropout rate 0.2), and max-pooling layers. Our model adopts multiple downsized image input layers after each max-pooling operation, as originally proposed in the Micro-Net model by Raza <italic>et al</italic>. We propose an additional model enhancement by introducing a texture block after each image input layer. The texture block consists of 2 parallel blocks of 3 convolution layers, which enhance image texture extraction. To ensure robust nuclei separation, we supplement our nuclei annotations with an additional active contour layer. Our experiments indicate that the proposed model architecture is more compact and requires fewer computational resources than the original Micro-Net structure.</p>
<p>We used elu activation after each convolution layer and sigmoid activation for the output layer. The Adam optimizer was used with an initial learning rate of <inline-formula id="j_infor442_ineq_016"><alternatives>
<mml:math><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mo>=</mml:mo><mml:mn>0.001</mml:mn></mml:math>
<tex-math><![CDATA[$lr=0.001$]]></tex-math></alternatives></inline-formula>, which was reduced by a factor of 0.1 if the validation loss did not improve for 4 consecutive epochs (<inline-formula id="j_infor442_ineq_017"><alternatives>
<mml:math><mml:mo movablelimits="false">min</mml:mo><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mrow><mml:mn>10</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>6</mml:mn></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$\min lr=1\times {10^{-6}}$]]></tex-math></alternatives></inline-formula>) (Kingma and Ba, <xref ref-type="bibr" rid="j_infor442_ref_016">2014</xref>). The Dice coefficient (<xref rid="j_infor442_eq_001">1</xref>) was used as the model metric, with binary cross-entropy Dice loss (<xref rid="j_infor442_eq_003">3</xref>) as a custom loss function.</p>
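This schedule corresponds to the ReduceLROnPlateau behaviour in Keras; its logic can be sketched in plain Python (an illustrative sketch, not the Keras implementation itself):

```python
def reduce_lr_on_plateau(val_losses, lr=0.001, factor=0.1,
                         patience=4, min_lr=1e-6):
    # Reduce lr by `factor` whenever the validation loss fails to
    # improve for `patience` consecutive epochs, down to `min_lr`.
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# The loss stalls for 4 epochs, so lr drops from 1e-3 to 1e-4
final_lr = reduce_lr_on_plateau([0.5, 0.4, 0.4, 0.4, 0.4, 0.4])
```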
<fig id="j_infor442_fig_003">
<label>Fig. 3</label>
<caption>
<p>The performance metrics of the segmentation and classifier models. A: training and validation metrics (top: Dice coefficient; bottom: loss values per epoch) of the segmentation autoencoder; B: confusion matrix depicting cell nuclei classifier performance on the testing set (true positive lymphocyte predictions and true negatives marked in grey, false predictions in red); C: ROC curve obtained from the nuclei classifier testing data.</p>
</caption>
<graphic xlink:href="infor442_g003.jpg"/>
</fig>
<p>The model converged after 36 epochs (see Fig. <xref rid="j_infor442_fig_003">3</xref>A) using a batch size of 1 (input image dimensions: <inline-formula id="j_infor442_ineq_018"><alternatives>
<mml:math><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>3</mml:mn></mml:math>
<tex-math><![CDATA[$256\times 256\times 3$]]></tex-math></alternatives></inline-formula>) for training and validation. Input images were normalized by scaling pixel values to the range <inline-formula id="j_infor442_ineq_019"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mn>1</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[0,1]$]]></tex-math></alternatives></inline-formula>. 
<disp-formula id="j_infor442_eq_001">
<label>(1)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mtext mathvariant="italic">Dice</mml:mtext><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mo>∗</mml:mo><mml:mi mathvariant="italic">TP</mml:mi></mml:mrow><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">TP</mml:mi><mml:mo>+</mml:mo><mml:mi mathvariant="italic">FP</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">TP</mml:mi><mml:mo>+</mml:mo><mml:mtext mathvariant="italic">FN</mml:mtext><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \textit{Dice}=\frac{2\ast \mathit{TP}}{(\mathit{TP}+\mathit{FP})+(\mathit{TP}+\textit{FN})},\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_infor442_ineq_020"><alternatives>
<mml:math><mml:mi mathvariant="italic">TP</mml:mi></mml:math>
<tex-math><![CDATA[$\mathit{TP}$]]></tex-math></alternatives></inline-formula> is true positive, <inline-formula id="j_infor442_ineq_021"><alternatives>
<mml:math><mml:mi mathvariant="italic">FP</mml:mi></mml:math>
<tex-math><![CDATA[$\mathit{FP}$]]></tex-math></alternatives></inline-formula> is false positive and <inline-formula id="j_infor442_ineq_022"><alternatives>
<mml:math><mml:mtext mathvariant="italic">FN</mml:mtext></mml:math>
<tex-math><![CDATA[$\textit{FN}$]]></tex-math></alternatives></inline-formula> is false negative. 
<disp-formula id="j_infor442_eq_002">
<label>(2)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">L</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">y</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mo>−</mml:mo><mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">(</mml:mo><mml:mi mathvariant="italic">y</mml:mi><mml:mo>∗</mml:mo><mml:mo movablelimits="false">log</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:mi mathvariant="italic">y</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>∗</mml:mo><mml:mo movablelimits="false">log</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo mathvariant="normal" fence="true" maxsize="1.19em" minsize="1.19em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ L(y,\hat{y})=-\big(y\ast \log (\hat{y})+(1-y)\ast \log (1-\hat{y})\big),\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>y</italic> is binary class indicator and <inline-formula id="j_infor442_ineq_023"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$\hat{y}$]]></tex-math></alternatives></inline-formula> is predicted probability. 
<disp-formula id="j_infor442_eq_003">
<label>(3)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mtext mathvariant="italic">CrossentropyDiceLoss</mml:mtext><mml:mo>=</mml:mo><mml:mn>0.1</mml:mn><mml:mo>∗</mml:mo><mml:mi mathvariant="italic">L</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">y</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mn>0.9</mml:mn><mml:mo>∗</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:mtext mathvariant="italic">Dice</mml:mtext><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \textit{CrossentropyDiceLoss}=0.1\ast L(y,\hat{y})+0.9\ast (1-\textit{Dice}).\]]]></tex-math></alternatives>
</disp-formula>
</p>
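Equations (1), (2) and (3) can be re-implemented in NumPy for illustration (the actual training used differentiable Keras tensor operations; the clipping constant is our addition for numerical stability):

```python
import numpy as np

def dice(y_true, y_pred):
    # Eq. (1): Dice = 2*TP / ((TP + FP) + (TP + FN)) for binary masks
    tp = np.sum(y_true * y_pred)
    return 2 * tp / (np.sum(y_pred) + np.sum(y_true))

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Eq. (2), averaged over pixels; eps clipping avoids log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

def crossentropy_dice_loss(y_true, y_pred):
    # Eq. (3): 0.1 * cross-entropy + 0.9 * (1 - Dice)
    return (0.1 * binary_crossentropy(y_true, y_pred)
            + 0.9 * (1 - dice(y_true, y_pred)))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([1.0, 1.0, 0.0, 0.0])
# a perfect prediction gives Dice = 1 and a loss near 0
```

The 0.9 weight on the Dice term emphasizes region overlap over per-pixel accuracy.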
</sec>
<sec id="j_infor442_s_006">
<label>2.2.2</label>
<title>Multilayer Perceptron</title>
<p>The multilayer perceptron model was employed to solve the binary classification problem of lymphocyte identification. The model in our experiments consists of three dense layers (number of nodes: 4096, 2048, 1024), with softmax as the output layer activation function. For each layer, we used relu activation followed by batch normalization, except in the middle layer, where a dropout layer (dropout rate 0.4) was used instead of batch normalization to avoid overfitting. We used the Adam optimizer with an initial learning rate of <inline-formula id="j_infor442_ineq_024"><alternatives>
<mml:math><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mo>=</mml:mo><mml:mn>0.001</mml:mn></mml:math>
<tex-math><![CDATA[$lr=0.001$]]></tex-math></alternatives></inline-formula>, which was reduced by a factor of 0.1 if the validation loss did not improve for 6 consecutive epochs (<inline-formula id="j_infor442_ineq_025"><alternatives>
<mml:math><mml:mo movablelimits="false">min</mml:mo><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mrow><mml:mn>10</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>6</mml:mn></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$\min lr=1\times {10^{-6}}$]]></tex-math></alternatives></inline-formula>). Accuracy was used as the metric, with binary cross-entropy (<xref rid="j_infor442_eq_002">2</xref>) as the loss function. The model was trained until convergence using batch sizes of 64 and 32 for training and validation, respectively.</p>
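The forward pass of such a perceptron can be sketched in NumPy (a hedged illustration: the 3072-dimensional input, corresponding to a flattened 32 × 32 × 3 patch, and the 2-class output are our assumptions; batch normalization and dropout are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # stabilized
    return e / e.sum(axis=-1, keepdims=True)

# Hidden layer widths from the text: 4096 -> 2048 -> 1024 nodes;
# input and output dimensions are illustrative assumptions.
sizes = [32 * 32 * 3, 4096, 2048, 1024, 2]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # relu on the dense hidden layers, softmax on the output layer
    for w in weights[:-1]:
        x = relu(x @ w)
    return softmax(x @ weights[-1])

probs = forward(rng.standard_normal((1, 32 * 32 * 3)))
```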
</sec>
<sec id="j_infor442_s_007">
<label>2.2.3</label>
<title>Implementation</title>
<p>Neural network models for nuclei segmentation and cell-type classification were trained on a GeForce GTX 1050 GPU with 16 GB RAM, using the TensorFlow and Keras machine learning libraries (Abadi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_001">2016</xref>). The proposed neural model architectures are available in the GitHub repository.<xref ref-type="fn" rid="j_infor442_fn_001">1</xref><fn id="j_infor442_fn_001"><label><sup>1</sup></label>
<p>Link to GitHub repository of the project: <uri>https://github.com/HELLze/Nuclei-segmentator-classifier</uri>.</p></fn></p>
</sec>
</sec>
</sec>
<sec id="j_infor442_s_008">
<label>3</label>
<title>Results</title>
<sec id="j_infor442_s_009">
<label>3.1</label>
<title>Nuclei Segmentation</title>
<sec id="j_infor442_s_010">
<label>3.1.1</label>
<title>Hyperparameter Tuning</title>
<p>The optimal model architecture was determined experimentally using a hyperparameter grid search. To test segmentation robustness, we evaluated both pixel-level and object-level metrics. The Dice coefficient was used to track pixel-level segmentation performance, while object-level segmentation quality was evaluated by calculating the intersection over union (IoU). We treated a predicted nucleus as a true positive if at least 50% of its area overlapped with the ground truth nucleus mask. To prevent multiple predicted objects from mapping to the same ground truth nucleus, each ground truth nucleus mask could be mapped to only a single predicted object. The results of hyperparameter tuning are provided in Table <xref rid="j_infor442_tab_003">3</xref>. The hyperparameter space was explored by varying dropout rates, the number of convolution filters per network layer, and activation functions. Due to the multiple image down-sampling and concatenation operations in the CNN architecture, models with more than 500 000 parameters exceeded memory limitations. Our experiments indicate that expanding the model layer width (tested kernel sizes: 16, 32, 48) did not dramatically affect the prediction metrics, which suggests that the texture block component may ensure consistent feature extraction over a wide range of model widths.</p>
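The object-level matching rule can be sketched as follows (an illustrative sketch over labelled masks; function and variable names are ours, not from the study's code):

```python
import numpy as np

def match_objects(pred_labels, gt_labels, min_overlap=0.5):
    # Greedy object-level matching: a predicted object is a true
    # positive if at least `min_overlap` of its area overlaps an
    # as-yet-unmatched ground truth nucleus; each ground truth
    # nucleus can be matched to only one predicted object.
    matched_gt = set()
    tp = 0
    for p in np.unique(pred_labels):
        if p == 0:                       # 0 = background
            continue
        pred_mask = pred_labels == p
        for g in np.unique(gt_labels[pred_mask]):
            if g == 0 or g in matched_gt:
                continue
            overlap = np.sum(pred_mask & (gt_labels == g)) / pred_mask.sum()
            if overlap >= min_overlap:
                matched_gt.add(g)
                tp += 1
                break
    return tp

gt = np.zeros((8, 8), dtype=int)
gt[1:4, 1:4] = 1                         # one ground truth nucleus
pred = np.zeros((8, 8), dtype=int)
pred[1:4, 1:5] = 1                       # 9 of its 12 pixels overlap the nucleus
tp = match_objects(pred, gt)             # counted as a true positive
```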
<table-wrap id="j_infor442_tab_003">
<label>Table 3</label>
<caption>
<p>Performance metrics of convolutional autoencoders (CAE) used in the hyperparameter grid search for nuclei segmentation. Dice coefficients are reported as mean ± standard deviation, calculated from the individual Dice coefficients of each tile in the testing set. DO – dropout rate, BN – batch normalization.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Act func</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Output act func</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Kernel size</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">DO</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">BN</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dice coefficient</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Accuracy</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Precision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Recall</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">f-score</td>
</tr>
</thead>
<tbody>
<tr>
<td colspan="10" style="vertical-align: top; text-align: left">U-Net</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">64</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.2</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_026"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_027"><alternatives>
<mml:math><mml:mn>0.59</mml:mn><mml:mo>±</mml:mo><mml:mn>0.08</mml:mn></mml:math>
<tex-math><![CDATA[$0.59\pm 0.08$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_028"><alternatives>
<mml:math><mml:mn>0.66</mml:mn><mml:mo>±</mml:mo><mml:mn>0.09</mml:mn></mml:math>
<tex-math><![CDATA[$0.66\pm 0.09$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_029"><alternatives>
<mml:math><mml:mn>0.84</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.84\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_030"><alternatives>
<mml:math><mml:mn>0.74</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.74\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td colspan="10" style="vertical-align: top; text-align: left">Micro-Net model</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">tanh</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">64</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_031"><alternatives>
<mml:math><mml:mn>0.79</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.79\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_032"><alternatives>
<mml:math><mml:mn>0.66</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.66\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_033"><alternatives>
<mml:math><mml:mn>0.75</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.75\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_034"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_035"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td colspan="10" style="vertical-align: top; text-align: left">Our model</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">16</td>
<td style="vertical-align: top; text-align: left">0.2</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_036"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_037"><alternatives>
<mml:math><mml:mn>0.77</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.77\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_038"><alternatives>
<mml:math><mml:mn>0.86</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.86\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_039"><alternatives>
<mml:math><mml:mn>0.88</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.88\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_040"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.2</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_041"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_042"><alternatives>
<mml:math><mml:mn>0.77</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.77\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_043"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_044"><alternatives>
<mml:math><mml:mn>0.88</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.88\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_045"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">48</td>
<td style="vertical-align: top; text-align: left">0.2</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_046"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_047"><alternatives>
<mml:math><mml:mn>0.76</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.76\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_048"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_049"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_050"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">16</td>
<td style="vertical-align: top; text-align: left">0.3</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_051"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_052"><alternatives>
<mml:math><mml:mn>0.77</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.77\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_053"><alternatives>
<mml:math><mml:mn>0.86</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.86\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_054"><alternatives>
<mml:math><mml:mn>0.88</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.88\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_055"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">0.3</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_056"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_057"><alternatives>
<mml:math><mml:mn>0.76</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.76\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_058"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_059"><alternatives>
<mml:math><mml:mn>0.88</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.88\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_060"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">48</td>
<td style="vertical-align: top; text-align: left">0.3</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_061"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_062"><alternatives>
<mml:math><mml:mn>0.76</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.76\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_063"><alternatives>
<mml:math><mml:mn>0.86</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.86\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_064"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_065"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left">+</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_066"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_067"><alternatives>
<mml:math><mml:mn>0.74</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.74\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_068"><alternatives>
<mml:math><mml:mn>0.84</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.84\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_069"><alternatives>
<mml:math><mml:mn>0.86</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.86\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_070"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>relu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>sigmoid</italic></td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left">+</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_071"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_072"><alternatives>
<mml:math><mml:mn>0.74</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.74\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_073"><alternatives>
<mml:math><mml:mn>0.84</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.84\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_074"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_075"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><italic>elu</italic></td>
<td style="vertical-align: top; text-align: left"><italic>softmax</italic></td>
<td style="vertical-align: top; text-align: left">32</td>
<td style="vertical-align: top; text-align: left">−</td>
<td style="vertical-align: top; text-align: left">+</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_076"><alternatives>
<mml:math><mml:mn>0.73</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.73\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_077"><alternatives>
<mml:math><mml:mn>0.58</mml:mn><mml:mo>±</mml:mo><mml:mn>0.08</mml:mn></mml:math>
<tex-math><![CDATA[$0.58\pm 0.08$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_078"><alternatives>
<mml:math><mml:mn>0.63</mml:mn><mml:mo>±</mml:mo><mml:mn>0.08</mml:mn></mml:math>
<tex-math><![CDATA[$0.63\pm 0.08$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_079"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_080"><alternatives>
<mml:math><mml:mn>0.73</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.73\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>relu</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>softmax</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">32</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">+</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_081"><alternatives>
<mml:math><mml:mn>0.77</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.77\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_082"><alternatives>
<mml:math><mml:mn>0.65</mml:mn><mml:mo>±</mml:mo><mml:mn>0.07</mml:mn></mml:math>
<tex-math><![CDATA[$0.65\pm 0.07$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_083"><alternatives>
<mml:math><mml:mn>0.72</mml:mn><mml:mo>±</mml:mo><mml:mn>0.07</mml:mn></mml:math>
<tex-math><![CDATA[$0.72\pm 0.07$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_084"><alternatives>
<mml:math><mml:mn>0.87</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.87\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_085"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.0</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.0$]]></tex-math></alternatives></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="j_infor442_s_011">
<label>3.1.2</label>
<title>Model Performance Speed</title>
<p>Rather than basing the optimal model selection solely on the Dice coefficient and object-level testing metrics, we also evaluated the grid search models by their loading and image prediction times relative to the original Micro-Net model. Since no significant differences were observed between dropout rates, we chose custom models with a 0.2 dropout rate, the elu activation function, and a sigmoid output activation, differing only in layer widths of 16, 32, and 48 kernels. The testing results provided in Table <xref rid="j_infor442_tab_004">4</xref> indicate that the lowest relative image prediction and model loading times were observed for the segmentation autoencoder with 32 convolutional kernels per layer, a 0.2 dropout rate, the elu activation function, and a sigmoid activation function in the output layer, with fewer than 280,000 parameters in total. In comparison to the U-Net autoencoder (&gt;1.9 M parameters), which reached a <inline-formula id="j_infor442_ineq_086"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.028</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.028$]]></tex-math></alternatives></inline-formula> Dice coefficient on the testing dataset, our selected model achieved a <inline-formula id="j_infor442_ineq_087"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.018</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.018$]]></tex-math></alternatives></inline-formula> Dice coefficient with over 6-fold lower model complexity.</p>
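The pixel-level Dice coefficient used to compare the models above can be sketched as follows (a minimal NumPy sketch for two binary masks, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Pixel-level Dice coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero for two empty masks
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Averaging the per-tile values over a testing set yields mean ± standard deviation figures such as those quoted above.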
<table-wrap id="j_infor442_tab_004">
<label>Table 4</label>
<caption>
<p>A comparison table of autoencoder parameter size and performance speed. Model loading and prediction times were obtained relative to the original Micro-Net model. The best performing model is highlighted in bold.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Model</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Parameters</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Relative loading time</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Relative prediction time</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Micro-Net</td>
<td style="vertical-align: top; text-align: left">73 467 842</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">1</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Custom-16</td>
<td style="vertical-align: top; text-align: left">131 746</td>
<td style="vertical-align: top; text-align: left">0.212</td>
<td style="vertical-align: top; text-align: left">0.314</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><bold>Custom-32</bold></td>
<td style="vertical-align: top; text-align: left"><bold>279 506</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.212</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.288</bold></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Custom-48</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">507 138</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.268</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.359</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="j_infor442_s_012">
<label>3.1.3</label>
<title>Active Contour Layer</title>
<p>To evaluate the impact of the active contour layer on nuclei separation, we trained a convolutional autoencoder using single-layered nuclei masks and compared the results with an identical model trained on two-layered annotations. For this experiment, we used the best-scoring model architecture from the hyperparameter search. Nuclei segmentation using masks supplemented with the active contour layer outperformed the model trained on single-layered masks on both pixel-level and object-level measurements, as shown in Table <xref rid="j_infor442_tab_005">5</xref>. The active contour layer increased object-level segmentation accuracy and f-score by 1–2 percentage points (<inline-formula id="j_infor442_ineq_088"><alternatives>
<mml:math><mml:mn>0.75</mml:mn><mml:mo>±</mml:mo><mml:mn>0.062</mml:mn></mml:math>
<tex-math><![CDATA[$0.75\pm 0.062$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor442_ineq_089"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.04$]]></tex-math></alternatives></inline-formula>, respectively).</p>
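The object-level metrics in Table 5 count a predicted nucleus as correct when it overlaps an annotated nucleus by at least 50% IoU. A sketch using greedy one-to-one matching (the exact matching procedure is not specified in the text; greedy matching is our illustrative assumption):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two binary masks."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def object_metrics(pred_masks, true_masks, thr=0.5):
    """Object-level precision, recall, and f-score: a predicted nucleus
    is a true positive if it matches an unmatched annotated nucleus
    with IoU >= thr (greedy matching, an illustrative assumption)."""
    matched, tp = set(), 0
    for p in pred_masks:
        for i, t in enumerate(true_masks):
            if i not in matched and iou(p, t) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_masks) - tp
    fn = len(true_masks) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```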
<table-wrap id="j_infor442_tab_005">
<label>Table 5</label>
<caption>
<p>The effect of the active contour layer on nuclei segmentation autoencoder performance. Pixel-level Dice coefficients (mean Dice coefficient ± standard deviation) were obtained from a testing set of 96 <inline-formula id="j_infor442_ineq_090"><alternatives>
<mml:math><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>256</mml:mn></mml:math>
<tex-math><![CDATA[$256\times 256$]]></tex-math></alternatives></inline-formula> RGB tiles; the mean and standard deviation were calculated from the per-tile Dice coefficients. Object-level accuracy, precision, recall, and f-score were computed by counting a predicted nucleus as correct if it overlapped an annotated nucleus with at least 50% intersection over union (IoU).</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Mask layers</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dice coefficient</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Accuracy</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Precision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Recall</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">f-score</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">2-layered</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_091"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_092"><alternatives>
<mml:math><mml:mn>0.75</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.75\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_093"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_094"><alternatives>
<mml:math><mml:mn>0.86</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.86\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_095"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1-layered</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_096"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_097"><alternatives>
<mml:math><mml:mn>0.73</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.73\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_098"><alternatives>
<mml:math><mml:mn>0.84</mml:mn><mml:mo>±</mml:mo><mml:mn>0.05</mml:mn></mml:math>
<tex-math><![CDATA[$0.84\pm 0.05$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_099"><alternatives>
<mml:math><mml:mn>0.85</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.85\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_100"><alternatives>
<mml:math><mml:mn>0.84</mml:mn><mml:mo>±</mml:mo><mml:mn>0.04</mml:mn></mml:math>
<tex-math><![CDATA[$0.84\pm 0.04$]]></tex-math></alternatives></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="j_infor442_s_013">
<label>3.2</label>
<title>Nuclei Classification</title>
<sec id="j_infor442_s_014">
<label>3.2.1</label>
<title>Hyper Parameter Tuning and Model Comparison</title>
<p>The cell classification problem was approached with several statistical models. Random forest was chosen as the baseline machine learning algorithm. We used the Python implementation of the random forest classifier from the sklearn machine learning library (Feurer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_009">2015</xref>), with the Gini impurity criterion as the split quality measure and 10 estimators. The random forest classifier was trained on linearized nuclei images (<inline-formula id="j_infor442_ineq_101"><alternatives>
<mml:math><mml:mn>32</mml:mn><mml:mo>×</mml:mo><mml:mn>32</mml:mn></mml:math>
<tex-math><![CDATA[$32\times 32$]]></tex-math></alternatives></inline-formula> RGB images flattened to 3072-length vectors) and achieved a testing accuracy of 0.77. In addition, we investigated two deep-learning-based strategies for cell nuclei classification: a multilayer perceptron (MLP) consisting of three consecutive dense layers, and a convolutional neural network (CNN) consisting of 4 convolutional, 2 max-pooling, and 2 dense layers. Model performance metrics were evaluated for several hyperparameter combinations, including the number of nodes per layer, activation functions, and the number of convolutional kernels. The hyperparameter search is summarized in Table <xref rid="j_infor442_tab_006">6</xref>. In our experiments, a multilayer perceptron with three dense layers, relu activation in the hidden layers and softmax in the output layer, 2 batch-normalization layers, and a dropout layer achieved the highest testing accuracy of 0.78, with f-score, precision, and recall values of 0.82, 0.71, and 0.99, respectively.</p>
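The linearization step described above (a 32×32 RGB crop flattened into a 3072-length feature vector) can be sketched as follows; the function name is ours, and the sklearn usage shown in comments follows the parameters given in the text:

```python
import numpy as np

def linearize_nuclei(images):
    """Flatten a batch of 32x32 RGB nucleus crops into 3072-length
    feature vectors for a classical classifier."""
    images = np.asarray(images)
    if images.shape[1:] != (32, 32, 3):
        raise ValueError("expected (N, 32, 32, 3) nucleus crops")
    return images.reshape(len(images), -1)  # shape (N, 3072)

# The vectors can then be fed to sklearn's random forest, e.g.:
#   from sklearn.ensemble import RandomForestClassifier
#   clf = RandomForestClassifier(n_estimators=10, criterion="gini")
#   clf.fit(linearize_nuclei(train_crops), train_labels)
```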
<table-wrap id="j_infor442_tab_006">
<label>Table 6</label>
<caption>
<p>The hyperparameter grid search results for cell nuclei classifier (mean ± standard deviation). The model performance was evaluated on the testing set. Mean and standard deviation values were obtained by running each experiment 5 times.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Models</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Accuracy</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Precision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Recall</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">f-score</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Random forest</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_102"><alternatives>
<mml:math><mml:mn>0.77</mml:mn><mml:mo>±</mml:mo><mml:mn>0.002</mml:mn></mml:math>
<tex-math><![CDATA[$0.77\pm 0.002$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_103"><alternatives>
<mml:math><mml:mn>0.69</mml:mn><mml:mo>±</mml:mo><mml:mn>0.002</mml:mn></mml:math>
<tex-math><![CDATA[$0.69\pm 0.002$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_104"><alternatives>
<mml:math><mml:mn>0.99</mml:mn><mml:mo>±</mml:mo><mml:mn>0.002</mml:mn></mml:math>
<tex-math><![CDATA[$0.99\pm 0.002$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_105"><alternatives>
<mml:math><mml:mn>0.82</mml:mn><mml:mo>±</mml:mo><mml:mn>0.002</mml:mn></mml:math>
<tex-math><![CDATA[$0.82\pm 0.002$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td colspan="5" style="vertical-align: top; text-align: left">Multilayer perceptron</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_106"><alternatives>
<mml:math><mml:mn>2048</mml:mn><mml:mo mathvariant="normal" stretchy="false">/</mml:mo><mml:mn>1024</mml:mn><mml:mo mathvariant="normal" stretchy="false">/</mml:mo><mml:mn>512</mml:mn></mml:math>
<tex-math><![CDATA[$2048/1024/512$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_107"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.09</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.09$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_108"><alternatives>
<mml:math><mml:mn>0.71</mml:mn><mml:mo>±</mml:mo><mml:mn>0.1</mml:mn></mml:math>
<tex-math><![CDATA[$0.71\pm 0.1$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_109"><alternatives>
<mml:math><mml:mn>0.99</mml:mn><mml:mo>±</mml:mo><mml:mn>0.004</mml:mn></mml:math>
<tex-math><![CDATA[$0.99\pm 0.004$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_110"><alternatives>
<mml:math><mml:mn>0.83</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.83\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_111"><alternatives>
<mml:math><mml:mn>4096</mml:mn><mml:mo mathvariant="normal" stretchy="false">/</mml:mo><mml:mn>2048</mml:mn><mml:mo mathvariant="normal" stretchy="false">/</mml:mo><mml:mn>1024</mml:mn></mml:math>
<tex-math><![CDATA[$4096/2048/1024$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_112"><alternatives>
<mml:math><mml:mn>0.78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.003</mml:mn></mml:math>
<tex-math><![CDATA[$0.78\pm 0.003$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_113"><alternatives>
<mml:math><mml:mn>0.71</mml:mn><mml:mo>±</mml:mo><mml:mn>0.03</mml:mn></mml:math>
<tex-math><![CDATA[$0.71\pm 0.03$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_114"><alternatives>
<mml:math><mml:mn>0.99</mml:mn><mml:mo>±</mml:mo><mml:mn>0.0003</mml:mn></mml:math>
<tex-math><![CDATA[$0.99\pm 0.0003$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_115"><alternatives>
<mml:math><mml:mn>0.82</mml:mn><mml:mo>±</mml:mo><mml:mn>0.02</mml:mn></mml:math>
<tex-math><![CDATA[$0.82\pm 0.02$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td colspan="5" style="vertical-align: top; text-align: left">Convolutional neural network</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Kernels per layer: 16</td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_116"><alternatives>
<mml:math><mml:mn>0.76</mml:mn><mml:mo>±</mml:mo><mml:mn>0.09</mml:mn></mml:math>
<tex-math><![CDATA[$0.76\pm 0.09$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_117"><alternatives>
<mml:math><mml:mn>0.69</mml:mn><mml:mo>±</mml:mo><mml:mn>0.1</mml:mn></mml:math>
<tex-math><![CDATA[$0.69\pm 0.1$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_118"><alternatives>
<mml:math><mml:mn>0.98</mml:mn><mml:mo>±</mml:mo><mml:mn>0.004</mml:mn></mml:math>
<tex-math><![CDATA[$0.98\pm 0.004$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left"><inline-formula id="j_infor442_ineq_119"><alternatives>
<mml:math><mml:mn>0.80</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.80\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Kernels per layer: 32</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_120"><alternatives>
<mml:math><mml:mn>0.76</mml:mn><mml:mo>±</mml:mo><mml:mn>0.09</mml:mn></mml:math>
<tex-math><![CDATA[$0.76\pm 0.09$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_121"><alternatives>
<mml:math><mml:mn>0.70</mml:mn><mml:mo>±</mml:mo><mml:mn>0.1</mml:mn></mml:math>
<tex-math><![CDATA[$0.70\pm 0.1$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_122"><alternatives>
<mml:math><mml:mn>0.98</mml:mn><mml:mo>±</mml:mo><mml:mn>0.004</mml:mn></mml:math>
<tex-math><![CDATA[$0.98\pm 0.004$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><inline-formula id="j_infor442_ineq_123"><alternatives>
<mml:math><mml:mn>0.81</mml:mn><mml:mo>±</mml:mo><mml:mn>0.06</mml:mn></mml:math>
<tex-math><![CDATA[$0.81\pm 0.06$]]></tex-math></alternatives></inline-formula></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The confusion matrix for our cell classification model demonstrates that, of the 2046 labelled lymphocytes, 310 were misclassified as other cell types, while only 13 false-positive observations were registered among the 2235 nuclei labelled as other cell types, as shown in Fig. <xref rid="j_infor442_fig_003">3</xref>B. The receiver operating characteristic (ROC) curve shown in Fig. <xref rid="j_infor442_fig_003">3</xref>C indicates the low false-positive rate of our lymphocyte classifier.</p>
<p>Of note, the proposed two-step lymphocyte detection model can potentially be adapted to detect additional cell types by replacing the existing lymphocyte classifier with a model trained on several classes.</p>
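Taking the lymphocyte class as positive (our labelling assumption, not stated in the text), the counts above translate into per-class metrics as follows:

```python
def prf_from_counts(tp, fp, fn):
    """Precision, recall, and f-score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# 2046 labelled lymphocytes with 310 missed, and 13 false positives
# among the nuclei labelled as other cell types:
tp, fn, fp = 2046 - 310, 310, 13
precision, recall, f_score = prf_from_counts(tp, fp, fn)
```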
</sec>
</sec>
<sec id="j_infor442_s_015">
<label>3.3</label>
<title>Workflow Evaluation</title>
<fig id="j_infor442_fig_004">
<label>Fig. 4</label>
<caption>
<p>Five exemplary testing images from the breast cancer lymphocyte dataset (Janowczyk and Madabhushi, <xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>) with corresponding lymphocyte identification model outputs. From left to right: 1st column: original testing image from the lymphocyte dataset; 2nd column: nuclei segmentation masks predicted by the autoencoder; 3rd column: expert pathologist’s annotation supplied with the dataset; 4th column: lymphocyte classifier result (if a nucleus was predicted as a lymphocyte, its centre was labelled with a green dot); 5th column: lymphocyte classifier result after Reinhard stain normalization.</p>
</caption>
<graphic xlink:href="infor442_g004.jpg"/>
</fig>
<p>The proposed lymphocyte identification workflow has been tested on the lymphocyte dataset published by Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>).<xref ref-type="fn" rid="j_infor442_fn_002">2</xref><fn id="j_infor442_fn_002"><label><sup>2</sup></label>
<p>Link to the dataset: <uri>http://www.andrewjanowczyk.com/use-case-4-lymphocyte-detection/</uri>.</p></fn> The dataset is composed of 100 breast cancer images stained with hematoxylin and eosin and digitized at <inline-formula id="j_infor442_ineq_124"><alternatives>
<mml:math><mml:mn>20</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$20\times $]]></tex-math></alternatives></inline-formula> magnification. The lymphocyte centres were manually annotated by an experienced pathologist. The same dataset was used in Alom <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>). Since our nuclei segmentation model was trained on <inline-formula id="j_infor442_ineq_125"><alternatives>
<mml:math><mml:mn>256</mml:mn><mml:mo>×</mml:mo><mml:mn>256</mml:mn></mml:math>
<tex-math><![CDATA[$256\times 256$]]></tex-math></alternatives></inline-formula> pixel image patches, each testing image was zero-padded to the required input size while preserving the original image scale. Each testing slide was first analysed with the autoencoder to segment all cell nuclei, followed by nuclei cropping and subsequent classification of each cropped nucleus with the pre-trained multilayer perceptron for lymphocyte identification. If a nucleus was classified as a lymphocyte, its centre was marked with a green dot. The classifier’s testing results were evaluated using the dataset annotations as a reference.</p>
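The zero-padding step described above can be sketched in NumPy as follows; this is a minimal illustrative version (the function name and the bottom/right padding choice are assumptions, not the authors' exact implementation):

```python
import numpy as np

def pad_to_input_size(image, size=256):
    """Zero-pad an (H, W, C) image up to size x size without rescaling,
    so the original pixel scale of the slide is preserved."""
    h, w = image.shape[:2]
    pad_h, pad_w = max(0, size - h), max(0, size - w)
    # pad on the bottom/right; the content keeps its original coordinates
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

patch = pad_to_input_size(np.ones((100, 120, 3), dtype=np.uint8))
print(patch.shape)  # (256, 256, 3)
```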
<p>The first analysis results – nuclei segmentation – are shown in the second column of Fig. <xref rid="j_infor442_fig_004">4</xref>. The nuclei segmentation masks generated by the autoencoder demonstrate consistent cell nuclei detection regardless of image staining intensity. This can be explained by two factors. First, owing to robust image colour augmentation during autoencoder training, the CAE model learned to generalize the input image by texture rather than colour. Second, our modified Micro-Net model architecture incorporates the texture convolutional blocks shown in Fig. <xref rid="j_infor442_fig_002">2</xref>, which facilitate relevant feature extraction for the autoencoder.</p>
<p>The confusion matrix in Fig. <xref rid="j_infor442_fig_005">5</xref>A shows a low false-positive lymphocyte misclassification rate. However, the high false-negative rate suggests that the lymphocyte classification model is sensitive to image stain intensity. This is well reflected in the <italic>Unmodified</italic> image column of Fig. <xref rid="j_infor442_fig_004">4</xref>, where lymphocyte detection efficiency conspicuously decreases as image staining intensity fades. This is not a surprising result, given that the multilayer perceptron was trained on lymphocytes cropped from histology samples prepared in a different laboratory, where staining is more consistent across samples. It also illustrates the main limitation of the lymphocyte classification model: cropped nuclei images lose image background information, which could otherwise be leveraged to differentiate nucleus stain intensity from background colour intensity.</p>
<sec id="j_infor442_s_016">
<label>3.3.1</label>
<title>The Effect of Colour Normalization on Overall Model Performance</title>
<p>To address the high staining variability between histological samples, the lymphocyte testing dataset was normalized using the Reinhard stain normalization method. The Reinhard algorithm adjusts the source image’s colour distribution to that of the target image by equalizing the mean and standard deviation of the pixel values in each channel (Reinhard <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_027">2001</xref>). <disp-formula-group id="j_infor442_dg_001">
<disp-formula id="j_infor442_eq_004">
<label>(4)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt"><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">mapped</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[\begin{aligned}{}& {l_{\textit{mapped}}}=\frac{{l_{\textit{original}}}-{\bar{l}_{\textit{original}}}}{{\hat{l}_{\textit{original}}}}{\hat{l}_{\textit{target}}}+{\bar{l}_{\textit{target}}},\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor442_eq_005">
<label>(5)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt"><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">mapped</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">α</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[\begin{aligned}{}& {\alpha _{\textit{mapped}}}=\frac{{\alpha _{\textit{original}}}-{\bar{\alpha }_{\textit{original}}}}{{\hat{\alpha }_{\textit{original}}}}{\hat{\alpha }_{\textit{target}}}+{\bar{\alpha }_{\textit{target}}},\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_infor442_eq_006">
<label>(6)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt"><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">mapped</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">original</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext mathvariant="italic">target</mml:mtext></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[\begin{aligned}{}& {\beta _{\textit{mapped}}}=\frac{{\beta _{\textit{original}}}-{\bar{\beta }_{\textit{original}}}}{{\hat{\beta }_{\textit{original}}}}{\hat{\beta }_{\textit{target}}}+{\bar{\beta }_{\textit{target}}},\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> where <italic>l</italic>, <italic>α</italic>, <italic>β</italic> are colour channels in LAB colourspace, <inline-formula id="j_infor442_ineq_126"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mspace width="2.5pt"/></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$\hat{\hspace{2.5pt}}$]]></tex-math></alternatives></inline-formula> denotes the standard deviation and <inline-formula id="j_infor442_ineq_127"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mspace width="2.5pt"/></mml:mrow><mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$\bar{\hspace{2.5pt}}$]]></tex-math></alternatives></inline-formula> the mean of all pixel values in a channel. The colour normalization algorithm was implemented with the OpenCV (Bradski, <xref ref-type="bibr" rid="j_infor442_ref_006">2000</xref>) and NumPy (Oliphant, <xref ref-type="bibr" rid="j_infor442_ref_023">2006</xref>) Python libraries, using a representatively stained image from the training dataset as the target for stain normalization.</p>
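Equations (4)–(6) amount to a per-channel standardize-and-rescale in LAB space. A minimal NumPy sketch follows; the RGB↔LAB conversion (done with OpenCV in practice) is omitted, and the small epsilon guard against zero variance is an added assumption:

```python
import numpy as np

def reinhard_map(source_lab, target_lab, eps=1e-8):
    """Map the colour statistics of `source_lab` onto those of `target_lab`.
    Both inputs are float arrays of shape (H, W, 3) in LAB colourspace;
    each channel is treated independently, as in Eqs. (4)-(6)."""
    src_mean = source_lab.mean(axis=(0, 1))
    src_std = source_lab.std(axis=(0, 1))
    tgt_mean = target_lab.mean(axis=(0, 1))
    tgt_std = target_lab.std(axis=(0, 1))
    # subtract the source mean, rescale by the std ratio, shift to target mean
    return (source_lab - src_mean) / (src_std + eps) * tgt_std + tgt_mean
```

After mapping, pixel values are clipped to the valid channel range and converted back to RGB for the classifier.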
<fig id="j_infor442_fig_005">
<label>Fig. 5</label>
<caption>
<p>Testing metrics for the breast cancer lymphocyte dataset. A: confusion matrix for testing images with original sample staining; B: confusion matrix for testing images after Reinhard stain normalization.</p>
</caption>
<graphic xlink:href="infor442_g005.jpg"/>
</fig>
<p>The effect of stain normalization on lymphocyte detection was evaluated by comparing the testing metrics before and after applying the Reinhard algorithm. The confusion matrix in Fig. <xref rid="j_infor442_fig_005">5</xref>B indicates a lower false-negative rate for lymphocytes. Stain normalization increased the accuracy, precision, recall, and f-score values by approximately 10%, as shown in Table <xref rid="j_infor442_tab_007">7</xref>. These results indicate that stain normalization is an effective pre-processing step that can mitigate the high staining intensity variance between histology samples. The improvement in lymphocyte classification accuracy achieved by the relatively simple Reinhard stain normalization suggests that this part of our workflow can be explored further. Structure-preserving image normalization methods (Vahadane <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_033">2016</xref>; Mahapatra <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_020">2020</xref>) demonstrate promising results; certain medical image denoising techniques (Meiniel <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_021">2018</xref>; Pham <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor442_ref_024">2020</xref>) could also prove useful in future work.</p>
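The reported metrics derive from the confusion-matrix counts in the standard way; a small sketch with purely illustrative counts (not the actual test-set numbers):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts,
    with lymphocyte taken as the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity to lymphocytes
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# illustrative counts only
print([round(m, 2) for m in classification_metrics(tp=80, fp=20, fn=20, tn=80)])
# [0.8, 0.8, 0.8, 0.8]
```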
<table-wrap id="j_infor442_tab_007">
<label>Table 7</label>
<caption>
<p>The effect of stain normalization on lymphocyte identification efficiency. For comparison, we also give the results of studies that utilized the same dataset. It is important to note that we used this dataset only to test our method, while the studies referenced in the table also used part of it for training.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Accuracy</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Precision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Recall</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">f-score</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Proposed method, original staining</td>
<td style="vertical-align: top; text-align: left">0.71</td>
<td style="vertical-align: top; text-align: left">0.76</td>
<td style="vertical-align: top; text-align: left">0.75</td>
<td style="vertical-align: top; text-align: left">0.70</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Proposed method, with stain normalization</td>
<td style="vertical-align: top; text-align: left">0.81</td>
<td style="vertical-align: top; text-align: left">0.80</td>
<td style="vertical-align: top; text-align: left">0.81</td>
<td style="vertical-align: top; text-align: left">0.80</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>)</td>
<td style="vertical-align: top; text-align: left">–</td>
<td style="vertical-align: top; text-align: left">0.89</td>
<td style="vertical-align: top; text-align: left">–</td>
<td style="vertical-align: top; text-align: left">0.90</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Alom <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>)</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.90</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">–</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">–</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.91</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Both Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>) and Alom <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>) used the same dataset to train and evaluate their proposed models; therefore, to deal with overfitting, the authors had to apply some form of hold-out validation: 5-fold cross-validation was used in Janowczyk and Madabhushi (<xref ref-type="bibr" rid="j_infor442_ref_015">2016</xref>), while Alom <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor442_ref_003">2019</xref>) reserved 10% of the dataset for testing purposes. In contrast, we used the whole dataset exclusively for evaluation of the proposed model, thus eliminating the possibility of overfitting to it. Our result (f-score = 0.80) indicates good model generalization and performance comparable to both of the above-mentioned methods.</p>
</sec>
</sec>
</sec>
<sec id="j_infor442_s_017">
<label>4</label>
<title>Conclusions</title>
<p>In this paper, we propose an end-to-end deep learning-based algorithm for cell nuclei segmentation and consecutive lymphocyte identification in H&amp;E stained <inline-formula id="j_infor442_ineq_128"><alternatives>
<mml:math><mml:mn>20</mml:mn><mml:mo>×</mml:mo></mml:math>
<tex-math><![CDATA[$20\times $]]></tex-math></alternatives></inline-formula> magnified breast and colorectal cancer whole slide images. Our conducted experiments suggest that:</p>
<list>
<list-item id="j_infor442_li_001">
<label>•</label>
<p>Our proposed autoencoder structure component – convolutional texture blocks – can achieve a nuclei segmentation Dice score similar to that of the Micro-Net model (our model achieved a 1% higher testing Dice coefficient).</p>
</list-item>
<list-item id="j_infor442_li_002">
<label>•</label>
<p>An additional active contour layer in the nuclei annotation masks increases nuclei segmentation accuracy by 1.5%.</p>
</list-item>
<list-item id="j_infor442_li_003">
<label>•</label>
<p>Lymphocyte classification by multilayer perceptron network achieves <inline-formula id="j_infor442_ineq_129"><alternatives>
<mml:math><mml:mn>78</mml:mn><mml:mo>±</mml:mo><mml:mn>0.3</mml:mn></mml:math>
<tex-math><![CDATA[$78\pm 0.3$]]></tex-math></alternatives></inline-formula>% testing accuracy on the private dataset (NCP), and 0.71 on the public dataset (0.81 with Reinhard stain normalization).</p>
</list-item>
</list>
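For reference, the Dice coefficient used in the segmentation comparison above can be computed over binary masks as follows (a minimal sketch; the epsilon guard for empty masks is an added assumption):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```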
<p>The nuclei segmentation autoencoder architecture investigated in this paper has lower model complexity than the U-Net and Micro-Net models, which brings the advantage of lower computational resource usage. Our suggested pipeline shows good generalization properties, avoids overfitting to the evaluation data, and can easily be extended to multi-class nuclei identification by replacing the nuclei classification MLP model and re-employing the same pre-trained segmentation autoencoder.</p>
</sec>
</body>
<back>
<ack id="j_infor442_ack_001">
<title>Acknowledgements</title>
<p>The authors are thankful for the HPC resources provided by the IT APC at the Faculty of Mathematics and Informatics of Vilnius University Information Technology Research Centre.</p></ack>
<ref-list id="j_infor442_reflist_001">
<title>References</title>
<ref id="j_infor442_ref_001">
<mixed-citation publication-type="chapter"><string-name><surname>Abadi</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Barham</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Davis</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Dean</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Devin</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ghemawat</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Irving</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Isard</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kudlur</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Levenberg</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Monga</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Moore</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Murray</surname>, <given-names>D.G.</given-names></string-name>, <string-name><surname>Steiner</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Tucker</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Vasudevan</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Warden</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Wicke</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zheng</surname>, <given-names>X.</given-names></string-name> (<year>2016</year>). 
<chapter-title>TensorFlow: a system for large-scale machine learning</chapter-title>. In: <source>Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation</source>, pp. <fpage>265</fpage>–<lpage>283</lpage>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_002">
<mixed-citation publication-type="journal"><string-name><surname>Allard</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Aspeslagh</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Garaud</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Dupont</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Solinas</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Kok</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Routy</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Sotiriou</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Stagg</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Buisseret</surname>, <given-names>L.</given-names></string-name> (<year>2018</year>). <article-title>Immuno-oncology-101: overview of major concepts and translational perspectives</article-title>. <source>Seminars in Cancer Biology</source>, <volume>52</volume>, <fpage>1</fpage>–<lpage>11</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.semcancer.2018.02.005" xlink:type="simple">https://doi.org/10.1016/j.semcancer.2018.02.005</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_003">
<mixed-citation publication-type="other"><string-name><surname>Alom</surname>, <given-names>Z.Md.</given-names></string-name>, <string-name><surname>Aspiras</surname>, <given-names>T.H.</given-names></string-name>, <string-name><surname>Taha</surname>, <given-names>T.M.</given-names></string-name>, <string-name><surname>Asari</surname>, <given-names>V.K.</given-names></string-name>, <string-name><surname>Bowen</surname>, <given-names>T.J.</given-names></string-name>, <string-name><surname>Billiter</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Arkell</surname>, <given-names>S.</given-names></string-name> (2019). Advanced deep convolutional neural network approaches for digital pathology image analysis: a comprehensive evaluation with different use cases. <italic>CoRR</italic>. <uri>http://arxiv.org/abs/1904.09075</uri>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_004">
<mixed-citation publication-type="other"><string-name><surname>Basavanhally</surname>, <given-names>A.N.</given-names></string-name>, <string-name><surname>Ganesan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Agner</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Monaco</surname>, <given-names>J.P.</given-names></string-name>, <string-name><surname>Feldman</surname>, <given-names>M.D.</given-names></string-name>, <string-name><surname>Tomaszewski</surname>, <given-names>J.E.</given-names></string-name>, <string-name><surname>Bhanot</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Madabhushi</surname>, <given-names>A.</given-names></string-name> (2010). Computerized image-based detection and grading of lymphocytic infiltration in HER2+ breast cancer histopathology. <italic>IEEE Transactions on Biomedical Engineering</italic>, <italic>57</italic>(3). <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TBME.2009.2035305" xlink:type="simple">https://doi.org/10.1109/TBME.2009.2035305</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_005">
<mixed-citation publication-type="journal"><string-name><surname>Beatty</surname>, <given-names>G.L.</given-names></string-name>, <string-name><surname>Gladney</surname>, <given-names>W.L.</given-names></string-name> (<year>2015</year>). <article-title>Immune escape mechanisms as a guide for cancer immunotherapy</article-title>. <source>Clinical Cancer Research</source>, <volume>21</volume>(<issue>4</issue>), <fpage>687</fpage>–<lpage>692</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1158/1078-0432.CCR-14-1860" xlink:type="simple">https://doi.org/10.1158/1078-0432.CCR-14-1860</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_006">
<mixed-citation publication-type="other"><string-name><surname>Bradski</surname>, <given-names>G.</given-names></string-name> (2000). The OpenCV Library. <italic>Dr. Dobb’s Journal of Software Tools</italic>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_007">
<mixed-citation publication-type="other"><string-name><surname>Chen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Srinivas</surname>, <given-names>C.</given-names></string-name> (2016). Automatic lymphocyte detection in H&amp;E images with deep neural networks. <italic>CoRR</italic>. <uri>http://arxiv.org/abs/1612.03217</uri>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>Cui</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Xiong</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Hu</surname>, <given-names>J.</given-names></string-name> (<year>2019</year>). <article-title>A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images</article-title>. <source>Medical &amp; Biological Engineering &amp; Computing</source>, <volume>57</volume>, <fpage>2027</fpage>–<lpage>2043</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11517-019-02008-8" xlink:type="simple">https://doi.org/10.1007/s11517-019-02008-8</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_009">
<mixed-citation publication-type="chapter"><string-name><surname>Feurer</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Eggensperger</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Springenberg</surname>, <given-names>J.T.</given-names></string-name>, <string-name><surname>Blum</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hutter</surname>, <given-names>F.</given-names></string-name> (<year>2015</year>). <chapter-title>Efficient and robust automated machine learning</chapter-title>. In: <source>Advances in Neural Information Processing Systems</source>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-3-030-05318-5_6" xlink:type="simple">https://doi.org/10.1007/978-3-030-05318-5_6</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_010">
<mixed-citation publication-type="journal"><string-name><surname>Galon</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Pagès</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Marincola</surname>, <given-names>F.M.</given-names></string-name>, <string-name><surname>Angell</surname>, <given-names>H.K.</given-names></string-name>, <string-name><surname>Thurin</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lugli</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Zlobec</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Berger</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Bifulco</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Botti</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Tatangelo</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Britten</surname>, <given-names>C.M.</given-names></string-name>, <string-name><surname>Kreiter</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Chouchane</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Delrio</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Arndt</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Asslaber</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Maio</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Masucci</surname>, <given-names>G.V.</given-names></string-name>, <string-name><surname>Mihm</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Vidal-Vanaclocha</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Allison</surname>, <given-names>J.P.</given-names></string-name>, <string-name><surname>Gnjatic</surname>, 
<given-names>S.</given-names></string-name>, <string-name><surname>Hakansson</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Huber</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Singh-Jasuja</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Ottensmeier</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Zwierzina</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Laghi</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Grizzi</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Ohashi</surname>, <given-names>P.S.</given-names></string-name>, <string-name><surname>Shaw</surname>, <given-names>P.A.</given-names></string-name>, <string-name><surname>Clarke</surname>, <given-names>B.A.</given-names></string-name>, <string-name><surname>Wouters</surname>, <given-names>B.G.</given-names></string-name>, <string-name><surname>Kawakami</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Hazama</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Okuno</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>O’Donnell-Tormey</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lagorce</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Pawelec</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Nishimura</surname>, <given-names>M.I.</given-names></string-name>, <string-name><surname>Hawkins</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Lapointe</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Lundqvist</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Khleif</surname>, 
<given-names>S.N.</given-names></string-name>, <string-name><surname>Ogino</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Gibbs</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Waring</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Sato</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Torigoe</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Itoh</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Patel</surname>, <given-names>P.S.</given-names></string-name>, <string-name><surname>Shukla</surname>, <given-names>S.N.</given-names></string-name>, <string-name><surname>Palmqvist</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Nagtegaal</surname>, <given-names>I.D.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>D’Arrigo</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Kopetz</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Sinicrope</surname>, <given-names>F.A.</given-names></string-name>, <string-name><surname>Trinchieri</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Gajewski</surname>, <given-names>T.F.</given-names></string-name>, <string-name><surname>Ascierto</surname>, <given-names>P.A.</given-names></string-name>, <string-name><surname>Fox</surname>, <given-names>B.A.</given-names></string-name> (<year>2012</year>). <article-title>Cancer classification using the immunoscore: a worldwide task force</article-title>. <source>Journal of Translational Medicine</source>, <volume>10</volume>, <fpage>205</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1186/1479-5876-10-205" xlink:type="simple">https://doi.org/10.1186/1479-5876-10-205</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_011">
<mixed-citation publication-type="journal"><string-name><surname>Graham</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Vu</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Ahmed Raza</surname>, <given-names>S.E.</given-names></string-name>, <string-name><surname>Azam</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Tsang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Kwak</surname>, <given-names>J.T.</given-names></string-name>, <string-name><surname>Rajpoot</surname>, <given-names>N.</given-names></string-name> (<year>2019</year>). <article-title>Hover-Net: simultaneous segmentation and classification of nuclei in multi-tissue histology images</article-title>. <source>Medical Image Analysis</source>, <volume>58</volume>, <elocation-id>101563</elocation-id>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Grossman</surname>, <given-names>R.L.</given-names></string-name>, <string-name><surname>Heath</surname>, <given-names>A.P.</given-names></string-name>, <string-name><surname>Ferretti</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Varmus</surname>, <given-names>H.E.</given-names></string-name>, <string-name><surname>Lowy</surname>, <given-names>D.R.</given-names></string-name>, <string-name><surname>Kibbe</surname>, <given-names>W.A.</given-names></string-name>, <string-name><surname>Staudt</surname>, <given-names>L.M.</given-names></string-name> (<year>2016</year>). <article-title>Toward a shared vision for cancer genomic data</article-title>. <source>New England Journal of Medicine</source>, <volume>375</volume>(<issue>12</issue>), <fpage>1109</fpage>–<lpage>1112</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1056/nejmp1607591" xlink:type="simple">https://doi.org/10.1056/nejmp1607591</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_013">
<mixed-citation publication-type="chapter"><string-name><surname>Guo</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Rossetti</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Teodoro</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Brat</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Kong</surname>, <given-names>J.</given-names></string-name> (<year>2018</year>). <chapter-title>Clumped nuclei segmentation with adjacent point match and local shape-based intensity analysis in fluorescence microscopy images</chapter-title>. In: <source>Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS</source>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/EMBC.2018.8512961" xlink:type="simple">https://doi.org/10.1109/EMBC.2018.8512961</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_014">
<mixed-citation publication-type="journal"><string-name><surname>Huh</surname>, <given-names>J.W.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>J.H.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>H.R.</given-names></string-name> (<year>2012</year>). <article-title>Prognostic significance of tumor-infiltrating lymphocytes for patients with colorectal cancer</article-title>. <source>Archives of Surgery</source>, <volume>147</volume>(<issue>4</issue>), <fpage>366</fpage>–<lpage>372</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1001/archsurg.2012.35" xlink:type="simple">https://doi.org/10.1001/archsurg.2012.35</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_015">
<mixed-citation publication-type="journal"><string-name><surname>Janowczyk</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Madabhushi</surname>, <given-names>A.</given-names></string-name> (<year>2016</year>). <article-title>Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases</article-title>. <source>Journal of Pathology Informatics</source>, <volume>7</volume>, <fpage>29</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.4103/2153-3539.186902" xlink:type="simple">https://doi.org/10.4103/2153-3539.186902</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_016">
<mixed-citation publication-type="chapter"><string-name><surname>Kingma</surname>, <given-names>D.P.</given-names></string-name>, <string-name><surname>Ba</surname>, <given-names>J.</given-names></string-name> (<year>2014</year>). <chapter-title>Adam: a method for stochastic optimization</chapter-title>. In: <source>International Conference on Learning Representations</source>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_017">
<mixed-citation publication-type="chapter"><string-name><surname>Kuse</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Sharma</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Gupta</surname>, <given-names>S.</given-names></string-name> (<year>2010</year>). <chapter-title>A classification scheme for lymphocyte segmentation in H&amp;E stained histology images</chapter-title>. In: <source>Recognizing Patterns in Signals, Speech, Images and Videos, ICPR 2010</source>, <series><italic>Lecture Notes in Computer Science</italic></series>, Vol. <volume>6388</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Berlin, Heidelberg</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_018">
<mixed-citation publication-type="journal"><string-name><surname>Linder</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Taylor</surname>, <given-names>J.C.</given-names></string-name>, <string-name><surname>Colling</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Pell</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Alveyn</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Joseph</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Protheroe</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Lundin</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lundin</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Verrill</surname>, <given-names>C.</given-names></string-name> (<year>2019</year>). <article-title>Deep learning for detecting tumour-infiltrating lymphocytes in testicular germ cell tumours</article-title>. <source>Journal of Clinical Pathology</source>, <volume>72</volume>(<issue>2</issue>), <fpage>157</fpage>–<lpage>164</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1136/jclinpath-2018-205328" xlink:type="simple">https://doi.org/10.1136/jclinpath-2018-205328</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_019">
<mixed-citation publication-type="journal"><string-name><surname>Ma</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Hadjiiski</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Wei</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Chan</surname>, <given-names>H.-P.</given-names></string-name>, <string-name><surname>Cha</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Cohan</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Caoili</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Samala</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Zhou</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Lu</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <article-title>U-Net-based deep-learning bladder segmentation in CT urography</article-title>. <source>Medical Physics</source>, <volume>46</volume>(<issue>4</issue>), <fpage>1752</fpage>–<lpage>1756</lpage>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_020">
<mixed-citation publication-type="chapter"><string-name><surname>Mahapatra</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Bozorgtabar</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Thiran</surname>, <given-names>J.-P.</given-names></string-name>, <string-name><surname>Shao</surname>, <given-names>L.</given-names></string-name> (<year>2020</year>). <chapter-title>Structure preserving stain normalization of histopathology images using self-supervised semantic guidance</chapter-title>. In: <string-name><surname>Martel</surname>, <given-names>A.L.</given-names></string-name> <etal>et al.</etal> (Eds.), <source>Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020</source>, <series><italic>Lecture Notes in Computer Science</italic></series>, Vol. <volume>12265</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Cham</publisher-loc>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-3-030-59722-1_30" xlink:type="simple">https://doi.org/10.1007/978-3-030-59722-1_30</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_021">
<mixed-citation publication-type="journal"><string-name><surname>Meiniel</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Olivo-Marin</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Angelini</surname>, <given-names>E.D.</given-names></string-name> (<year>2018</year>). <article-title>Denoising of microscopy images: a review of the state-of-the-art, and a new sparsity-based method</article-title>. <source>IEEE Transactions on Image Processing</source>, <volume>27</volume>(<issue>8</issue>), <fpage>3842</fpage>–<lpage>3856</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TIP.2018.2819821" xlink:type="simple">https://doi.org/10.1109/TIP.2018.2819821</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Morkunas</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Treigys</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Bernatavičiene</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Laurinavičius</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Korvel</surname>, <given-names>G.</given-names></string-name> (<year>2018</year>). <article-title>Machine learning based classification of colorectal cancer tumour tissue in whole-slide images</article-title>. <source>Informatica</source>, <volume>29</volume>(<issue>1</issue>), <fpage>75</fpage>–<lpage>90</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.15388/Informatica.2018.158" xlink:type="simple">https://doi.org/10.15388/Informatica.2018.158</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_023">
<mixed-citation publication-type="book"><string-name><surname>Oliphant</surname>, <given-names>T.</given-names></string-name> (<year>2006</year>). <source>NumPy: A Guide to NumPy</source>. <publisher-name>Trelgol Publishing</publisher-name>, <publisher-loc>USA</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_024">
<mixed-citation publication-type="journal"><string-name><surname>Pham</surname>, <given-names>C.T.</given-names></string-name>, <string-name><surname>Thao Tran</surname>, <given-names>T.T.</given-names></string-name>, <string-name><surname>Gamard</surname>, <given-names>G.</given-names></string-name> (<year>2020</year>). <article-title>An efficient total variation minimization method for image restoration</article-title>. <source>Informatica</source>, <volume>31</volume>(<issue>3</issue>), <fpage>539</fpage>–<lpage>560</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.15388/20-INFOR407" xlink:type="simple">https://doi.org/10.15388/20-INFOR407</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Rasmusson</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Zilenaite</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Nestarenkaite</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Augulis</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Laurinaviciene</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ostapenko</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Poskus</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Laurinavicius</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Immunogradient indicators for antitumor response assessment by automated tumor-stroma interface zone detection</article-title>. <source>The American Journal of Pathology</source>, <volume>190</volume>(<issue>6</issue>), <fpage>1309</fpage>–<lpage>1322</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ajpath.2020.01.018" xlink:type="simple">https://doi.org/10.1016/j.ajpath.2020.01.018</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_026">
<mixed-citation publication-type="journal"><string-name><surname>Raza</surname>, <given-names>S.E.A.</given-names></string-name>, <string-name><surname>Cheung</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Shaban</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Graham</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Epstein</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Pelengaris</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Khan</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rajpoot</surname>, <given-names>N.M.</given-names></string-name> (<year>2019</year>). <article-title>Micro-Net: a unified model for segmentation of various objects in microscopy images</article-title>. <source>Medical Image Analysis</source>, <volume>52</volume>, <fpage>160</fpage>–<lpage>173</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.media.2018.12.003" xlink:type="simple">https://doi.org/10.1016/j.media.2018.12.003</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_027">
<mixed-citation publication-type="journal"><string-name><surname>Reinhard</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Ashikhmin</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Shirley</surname>, <given-names>P.</given-names></string-name> (<year>2001</year>). <article-title>Color transfer between images</article-title>. <source>IEEE Computer Graphics and Applications</source>, <volume>21</volume>(<issue>5</issue>), <fpage>34</fpage>–<lpage>41</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/38.946629" xlink:type="simple">https://doi.org/10.1109/38.946629</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_028">
<mixed-citation publication-type="chapter"><string-name><surname>Ronneberger</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Fischer</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Brox</surname>, <given-names>T.</given-names></string-name> (<year>2015</year>). <chapter-title>U-net: convolutional networks for biomedical image segmentation</chapter-title>. In: <string-name><surname>Navab</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Hornegger</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Wells</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Frangi</surname>, <given-names>A.</given-names></string-name> (Eds.), <source>Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015</source>, <series><italic>Lecture Notes in Computer Science</italic></series>, Vol. <volume>9351</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Cham</publisher-loc>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-3-319-24574-4_28" xlink:type="simple">https://doi.org/10.1007/978-3-319-24574-4_28</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_029">
<mixed-citation publication-type="journal"><string-name><surname>Ruifrok</surname>, <given-names>A.C.</given-names></string-name>, <string-name><surname>Johnston</surname>, <given-names>D.A.</given-names></string-name> (<year>2001</year>). <article-title>Quantification of histochemical staining by color deconvolution</article-title>. <source>Analytical and Quantitative Cytology and Histology</source>, <volume>23</volume>(<issue>4</issue>), <fpage>291</fpage>–<lpage>299</lpage>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_030">
<mixed-citation publication-type="journal"><string-name><surname>Saltz</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gupta</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Hou</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Kurc</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Singh</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Nguyen</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Samaras</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Shroyer</surname>, <given-names>K.R.</given-names></string-name>, <string-name><surname>Zhao</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Batiste</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Van Arnam</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Shmulevich</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Rao</surname>, <given-names>A.U.K.</given-names></string-name>, <string-name><surname>Lazar</surname>, <given-names>A.J.</given-names></string-name>, <string-name><surname>Sharma</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Thorsson</surname>, <given-names>V.</given-names></string-name> (<year>2018</year>). <article-title>Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images</article-title>. <source>Cell Reports</source>, <volume>23</volume>(<issue>1</issue>), <fpage>181</fpage>–<lpage>193</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.celrep.2018.03.086" xlink:type="simple">https://doi.org/10.1016/j.celrep.2018.03.086</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_031">
<mixed-citation publication-type="journal"><string-name><surname>Sirinukunwattana</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Raza</surname>, <given-names>S.E.A.</given-names></string-name>, <string-name><surname>Tsang</surname>, <given-names>Y.W.</given-names></string-name>, <string-name><surname>Snead</surname>, <given-names>D.R.J.</given-names></string-name>, <string-name><surname>Cree</surname>, <given-names>I.A.</given-names></string-name>, <string-name><surname>Rajpoot</surname>, <given-names>N.M.</given-names></string-name> (<year>2016</year>). <article-title>Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images</article-title>. <source>IEEE Transactions on Medical Imaging</source>, <volume>35</volume>(<issue>5</issue>), <fpage>1196</fpage>–<lpage>1206</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TMI.2016.2525803" xlink:type="simple">https://doi.org/10.1109/TMI.2016.2525803</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_032">
<mixed-citation publication-type="journal"><string-name><surname>Turkki</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Linder</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Kovanen</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Pellinen</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Lundin</surname>, <given-names>J.</given-names></string-name> (<year>2016</year>). <article-title>Antibody-supervised deep learning for quantification of tumor-infiltrating immune cells in hematoxylin and eosin stained breast cancer samples</article-title>. <source>Journal of Pathology Informatics</source>, <volume>7</volume>, <fpage>38</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.4103/2153-3539.189703" xlink:type="simple">https://doi.org/10.4103/2153-3539.189703</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_033">
<mixed-citation publication-type="journal"><string-name><surname>Vahadane</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Peng</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Sethi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Albarqouni</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Baust</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Steiger</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Schlitter</surname>, <given-names>A.M.</given-names></string-name>, <string-name><surname>Esposito</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Navab</surname>, <given-names>N.</given-names></string-name> (<year>2016</year>). <article-title>Structure-preserving color normalization and sparse stain separation for histological images</article-title>. <source>IEEE Transactions on Medical Imaging</source>, <volume>35</volume>(<issue>8</issue>), <fpage>1962</fpage>–<lpage>1971</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TMI.2016.2529665" xlink:type="simple">https://doi.org/10.1109/TMI.2016.2529665</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_034">
<mixed-citation publication-type="journal"><string-name><surname>Zappasodi</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Merghoub</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Wolchok</surname>, <given-names>J.D.</given-names></string-name> (<year>2018</year>). <article-title>Emerging concepts for immune checkpoint blockade-based combination therapies</article-title>. <source>Cancer Cell</source>, <volume>33</volume>(<issue>4</issue>), <fpage>581</fpage>–<lpage>598</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ccell.2018.03.005" xlink:type="simple">https://doi.org/10.1016/j.ccell.2018.03.005</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor442_ref_035">
<mixed-citation publication-type="chapter"><string-name><surname>Zuiderveld</surname>, <given-names>K.</given-names></string-name> (<year>1994</year>). <chapter-title>Contrast limited adaptive histogram equalization</chapter-title>. In: <source>Graphics Gems</source>, pp. <fpage>474</fpage>–<lpage>485</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/b978-0-12-336156-1.50061-6" xlink:type="simple">https://doi.org/10.1016/b978-0-12-336156-1.50061-6</ext-link>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>