<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn><issn pub-type="ppub">0868-4952</issn><issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFO1172</article-id>
<article-id pub-id-type="doi">10.15388/Informatica.2018.171</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Double Probability Model for Open Set Problem at Image Classification</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Papp</surname><given-names>Dávid</given-names></name><email xlink:href="pappd@tmit.bme.hu">pappd@tmit.bme.hu</email><xref ref-type="aff" rid="j_info1172_aff_001"/><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>D. Papp</bold> was born in 1990 in Hungary. He received his BSc and MSc degrees in computer science (specializing in media informatics) from the Budapest University of Technology and Economics (BME), where he is now a PhD student in computer science.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Szűcs</surname><given-names>Gábor</given-names></name><email xlink:href="szucs@tmit.bme.hu">szucs@tmit.bme.hu</email><xref ref-type="aff" rid="j_info1172_aff_001"/><bio>
<p><bold>G. Szűcs</bold> was born in 1970 in Hungary. He received his MSc in electrical engineering and his PhD in computer science from the Budapest University of Technology and Economics (BME) in 1994 and 2002, respectively. His research areas are data and multimedia mining, content-based image retrieval, and semantic search. He is an associate professor at the Department of Telecommunications and Media Informatics of BME and has more than 80 publications. He is the president of the Hungarian Simulation Society (EUROSIM) and the leader of the research group DCLAB (Data Science and Content Technologies). He was awarded the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.</p></bio>
</contrib>
<aff id="j_info1172_aff_001">Department of Telecommunications and Media Informatics, <institution>Budapest University of Technology and Economics</institution>, Magyar Tudósok krt. 2, H-1117, Budapest, <country>Hungary</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2018</year></pub-date><pub-date pub-type="epub"><day>1</day><month>1</month><year>2018</year></pub-date><volume>29</volume><issue>2</issue><fpage>353</fpage><lpage>369</lpage><history><date date-type="received"><month>5</month><year>2017</year></date><date date-type="accepted"><month>1</month><year>2018</year></date></history>
<permissions><copyright-statement>© 2018 Vilnius University</copyright-statement><copyright-year>2018</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>In this paper an exploratory classification task, the so-called open set problem, is investigated. Open set recognition assumes incomplete knowledge of the world at training time, so unknown classes can be submitted to the algorithm during testing. For this problem we elaborated a theoretical model, the Double Probability Model (DPM), based on the likelihoods of a classifier. We extended it with a double smoothing solution in order to avoid zero values in the predictions. We applied the GMM-based Fisher vector for the mathematical representation of the images and C-SVC with an RBF kernel for classification. The final contributions of the paper are new goodness indicators for classification in the open set problem: new types of accuracies. The experimental results show that our Double Probability Model helps with classification; the accuracy increases when our proposed model is used. We compared our method to a state-of-the-art open set recognition solution, and the results show that DPM outperforms existing techniques.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>open world problem</kwd>
<kwd>open set</kwd>
<kwd>image classification</kwd>
<kwd>unknown class</kwd>
<kwd>double probability model</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="j_info1172_s_001">
<label>1</label>
<title>Introduction to Open Set Problem</title>
<p>Many works deal with multi-class classification that incorporates both labelled and unlabelled data. The reason for using both comes from the costs of the machine learning process: labelled instances are often expensive, difficult, or time-consuming to obtain, as they require the effort of experienced human annotators, while unlabelled data are relatively easy to gather but offer few ways to be exploited. Since this kind of learning requires less human effort yet gives higher accuracy, it is of great interest both in theory and in practice. It is useful in many areas, e.g. person (Szűcs and Marosvári, <xref ref-type="bibr" rid="j_info1172_ref_025">2015</xref>) and character identification (Zhu and Goldberg, <xref ref-type="bibr" rid="j_info1172_ref_029">2009</xref>) in multimedia data (the latter solved by a clustering procedure). This topic belongs to semi-supervised learning theory (Bauml <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_001">2013</xref>), where there is usually only a small amount of labelled data alongside a large amount of unlabelled data. Semi-supervised learning falls between unsupervised and supervised learning, and it can learn from both labelled and unlabelled instances. It can be combined with an active method, such as an active clustering based classification method, which clusters both the labelled and unlabelled data under the guidance of the labelled instances, queries the labels of the most informative instances in an active learning phase, and then classifies the data set (Szűcs and Henk, <xref ref-type="bibr" rid="j_info1172_ref_024">2015</xref>).</p>
<p>In all the research mentioned above the unlabelled instances belong to known classes (in the test set, new instances are also categorized into one of the known classes), but in exploratory learning a new type of task arises.</p>
<p>The task to be addressed is related to what are called open-set or open-world recognition problems (Bendale and Boult, <xref ref-type="bibr" rid="j_info1172_ref_002">2015</xref>; Scheirer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>), i.e. classification problems in which the recognition system has to be robust to unseen categories. Formally, given <italic>K</italic> known classes (categories) in the training set, the task is not only to classify new instances into the known categories, but also to recognize when an instance does not belong to any of them. This new category is called the unknown class, thus the test set contains <inline-formula id="j_info1172_ineq_001"><alternatives><mml:math>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$K+1$]]></tex-math></alternatives></inline-formula> classes. The task is an extended version of the single-label classification, because after training <italic>K</italic> classes the decision should be drawn among <inline-formula id="j_info1172_ineq_002"><alternatives><mml:math>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$K+1$]]></tex-math></alternatives></inline-formula> alternatives.</p>
<p>After this formalization we organize the rest of this paper as follows. First we summarize the related literature in this area; then, in Section <xref rid="j_info1172_s_003">3</xref>, we present our suggestion, the so-called Double Probability Model (DPM), for the open set problem. In the next section our solution for image classification is presented. Section <xref rid="j_info1172_s_007">5</xref> contains the proposed new goodness indicators for classification in this problem, the following section presents the experimental results, and in the last section we draw our conclusions.</p>
</sec>
<sec id="j_info1172_s_002">
<label>2</label>
<title>Related Work</title>
<p>The aim of our task is to identify data from classes that were not seen by the machine learning system during training. Several works deal with a similar problem, since real-world tasks in computer vision often touch upon open set recognition (i.e. multi-class recognition with incomplete knowledge of the world and many unknown inputs). Some of these works use a new variant of SVM capable of solving the rejection problem, e.g. Support Vector Data Description (SVDD) (Tax and Duin, <xref ref-type="bibr" rid="j_info1172_ref_026">2004</xref>) and the One-class SVM (Schölkopf <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_023">2001</xref>; Cevikalp and Triggs, <xref ref-type="bibr" rid="j_info1172_ref_004">2012</xref>), while RO-SVM (Zhang and Metaxas, <xref ref-type="bibr" rid="j_info1172_ref_028">2006</xref>) determines the instance labels and the rejection region simultaneously during the training phase. Furthermore, binary classification models have been proposed in the literature specifically for open set visual recognition tasks. Scheirer <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>) developed a Compact Abating Probability (CAP) model, in which the probability of class membership decreases in value (abates) as points move from known data toward open space. Based on the CAP model, they described a new variant of SVM, the novel Weibull-calibrated SVM (W-SVM) for open set recognition, which combines useful properties of statistical extreme value theory for score calibration with one-class and binary SVMs. Scheirer <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>) claim that W-SVM outperforms their previous solutions, namely the 1-vs-Set Machine (Scheirer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_021">2013</xref>) and the <inline-formula id="j_info1172_ineq_003"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{I}}$]]></tex-math></alternatives></inline-formula>-SVM (Jain <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_013">2014</xref>); besides, they included several other approaches in their experimental evaluation, all of which were outperformed by W-SVM. In this paper we compare our solution to the W-SVM and discuss the results (see Section <xref rid="j_info1172_s_011">6.3</xref>). The 1-vs-Set Machine algorithm (Scheirer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_021">2013</xref>) sculpts the decision space from the marginal distance of a one-class or binary SVM with a linear kernel, so that it can reduce open space risk. This approach simply assigns class labels to instances during testing. On the other hand, <inline-formula id="j_info1172_ineq_004"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{I}}$]]></tex-math></alternatives></inline-formula>-SVM (Jain <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_013">2014</xref>) was developed to estimate the unnormalized posterior probability of class inclusion. The idea is that a large set of unknown classes can be rejected, even under an assumption of incomplete class knowledge, provided an accurate model can be built for the positive data of any known class without overfitting. The solution is formulated by modelling the positive training data at the decision boundary, where statistical extreme value theory can help. Bendale and Boult (<xref ref-type="bibr" rid="j_info1172_ref_002">2015</xref>) defined Open World recognition and presented the Nearest Non-Outlier (NNO) algorithm, which adds object categories incrementally while detecting outliers and managing open space risk.</p>
</sec>
<sec id="j_info1172_s_003">
<label>3</label>
<title>Double Probability Model</title>
<sec id="j_info1172_s_004">
<label>3.1</label>
<title>Theoretical Model</title>
<p>In this section we present our Double Probability Model, which is based on the likelihoods of a classifier. After training, the classifier is able to give predictions with reliability values (scores) for each class. The range of the scores depends on the classifier type (sometimes it is from 0 to 1), but it can be any range; only one condition is required, namely that a larger score for class <inline-formula id="j_info1172_ineq_005"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{i}}$]]></tex-math></alternatives></inline-formula> should represent a larger likelihood of being a member of class <inline-formula id="j_info1172_ineq_006"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{i}}$]]></tex-math></alternatives></inline-formula>. In the training set or in a validation set, the instances with their corresponding scores are investigated in each class. Since the ground truth is known in this set, the positive elements of each class can be selected. Denote the sets of scores of the positive and negative instances of class <inline-formula id="j_info1172_ineq_007"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{i}}$]]></tex-math></alternatives></inline-formula> by <inline-formula id="j_info1172_ineq_008"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${S_{{P_{i}}}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_009"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${S_{{N_{i}}}}$]]></tex-math></alternatives></inline-formula>, respectively. The set of negative instances of a class is the union of the positive instances of all other classes, as can be seen in Eq. (<xref rid="j_info1172_eq_001">1</xref>). 
<disp-formula id="j_info1172_eq_001">
<label>(1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">⋃</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo stretchy="false">≠</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {S_{{N_{i}}}}=\bigcup \limits_{j\ne i}{S_{{P_{j}}}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
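As a minimal sketch of Eq. (1), assuming the positive validation scores are stored per class (the dict layout below is an illustrative assumption, not from the paper), the negative score set of a class is the union of the other classes' positive score sets:

```python
# Sketch of Eq. (1): S_Ni is the union of the positive score sets S_Pj, j != i.
# The layout (class index -> list of scores) is an illustrative assumption.

def negative_scores(positive_scores, i):
    """Collect the scores of the negative instances of class i, per Eq. (1)."""
    neg = []
    for j, scores in positive_scores.items():
        if j != i:
            neg.extend(scores)
    return neg

# Three known classes with positive validation scores:
S_P = {0: [0.9, 0.8], 1: [0.4, 0.6], 2: [0.7]}
print(sorted(negative_scores(S_P, 0)))  # [0.4, 0.6, 0.7]
```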
<p>In order to get the conditional probability that a new instance belongs to class <inline-formula id="j_info1172_ineq_010"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{i}}$]]></tex-math></alternatives></inline-formula> given its score, the cumulative distribution function (CDF) of the score values in <inline-formula id="j_info1172_ineq_011"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${S_{{P_{i}}}}$]]></tex-math></alternatives></inline-formula> should be calculated, and we created a “reverse” CDF of values in <inline-formula id="j_info1172_ineq_012"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${S_{{N_{i}}}}$]]></tex-math></alternatives></inline-formula> (see Eqs. (<xref rid="j_info1172_eq_002">2</xref>) and (<xref rid="j_info1172_eq_003">3</xref>)). <disp-formula-group id="j_info1172_dg_001">
<disp-formula id="j_info1172_eq_002">
<label>(2)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">score</mml:mi>
<mml:mo mathvariant="normal">&lt;</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {F_{{P_{i}}}}(x)=p({C_{i}}|\mathit{score}<x),\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_info1172_eq_003">
<label>(3)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">‾</mml:mo></mml:mover>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">score</mml:mi>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {F_{{N_{i}}}}(x)=p(\overline{{C_{i}}}|\mathit{score}>x).\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group></p>
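Under a natural empirical reading of Eqs. (2) and (3), both functions can be estimated directly from the validation score sets. The following is a sketch under that assumption; the helper names `ecdf` and `reverse_ecdf` are ours, not the paper's:

```python
# Empirical estimates of the CDF of Eq. (2) and the "reverse" CDF of Eq. (3),
# computed from the validation score sets S_Pi and S_Ni (an assumed reading).

def ecdf(scores, x):
    """Estimate F_Pi(x): the fraction of positive scores strictly below x."""
    return sum(s < x for s in scores) / len(scores)

def reverse_ecdf(scores, x):
    """Estimate F_Ni(x): the fraction of negative scores strictly above x."""
    return sum(s > x for s in scores) / len(scores)

pos = [0.2, 0.5, 0.8, 0.9]  # scores of positive validation instances
neg = [0.1, 0.3, 0.4]       # scores of negative validation instances
print(ecdf(pos, 0.6))           # 0.5: two of the four positive scores lie below 0.6
print(reverse_ecdf(neg, 0.35))  # one of the three negative scores lies above 0.35
```

Note that the two estimates need not sum to 1 for a given score, which is consistent with the remark above that this is not required by the model.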
<p>Note that the sum of these probabilities is not always equal to 1 (this is not required). We constructed the so-called Double Probability Model based on the CDF and “reverse” CDF functions. After these calculations, the predicted class of a new instance should be decided. The focus is on the likelihood of the unknown class compared with any of the known classes. Before the comparison, the probabilities of the known classes should be calculated. We get scores (<inline-formula id="j_info1172_ineq_013"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{score}_{i}}$]]></tex-math></alternatives></inline-formula> for class <italic>i</italic>) for a new instance as outputs of the prediction of the original classifiers; based on them, the probability of the <italic>i</italic>th class can be expressed as described in Eq. (<xref rid="j_info1172_eq_004">4</xref>), while the expression for the probability of class <inline-formula id="j_info1172_ineq_014"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{K+1}}$]]></tex-math></alternatives></inline-formula> can be seen in Eq. (<xref rid="j_info1172_eq_005">5</xref>). <disp-formula-group id="j_info1172_dg_002">
<disp-formula id="j_info1172_eq_004">
<label>(4)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∏</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo stretchy="false">≠</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {P_{{C_{i}}}}={F_{{P_{i}}}}({\mathit{score}_{i}}){\prod \limits_{j=1,j\ne i}^{K}}{F_{{N_{j}}}}({\mathit{score}_{j}}),\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_info1172_eq_005">
<label>(5)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∏</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {P_{{C_{K+1}}}}={\prod \limits_{j=1}^{K}}{F_{{N_{j}}}}({\mathit{score}_{j}}),\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_info1172_eq_006">
<label>(6)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo movablelimits="false">max</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {P_{{C_{K+1}}}}>\underset{i}{\max }\{{P_{{C_{i}}}}\}.\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group></p>
<p>If the condition described by the inequality in Eq. (<xref rid="j_info1172_eq_006">6</xref>) is true, then the prediction for this new instance will be the unknown class. Otherwise the prediction is based on the original classifier, i.e. the decision will be the class with the largest score. The decision for the <italic>j</italic>th test instance is formalized in Eq. (<xref rid="j_info1172_eq_007">7</xref>). 
<disp-formula id="j_info1172_eq_007">
<label>(7)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">decision</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfenced separators="" open="{" close="">
<mml:mrow>
<mml:mtable columnspacing="4.0pt 4.0pt" equalrows="false" columnlines="none none" equalcolumns="false" columnalign="left center left">
<mml:mtr>
<mml:mtd class="array">
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mo stretchy="false">|</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo movablelimits="false">max</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="array">
<mml:msub>
<mml:mrow>
<mml:mo movablelimits="false">argmax</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mo stretchy="false">|</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd class="array">
<mml:mtext>otherwise</mml:mtext>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {j_{\mathit{decision}}}=\left\{\begin{array}{l@{\hskip4.0pt}c@{\hskip4.0pt}l}K+1\hspace{1em}& |\hspace{1em}& {P_{{C_{K+1}}}}>{\max _{i}}\{{P_{{C_{i}}}}\},\\ {} {\operatorname{argmax}_{j}}\{{\mathit{score}_{j}}\}\hspace{1em}& |\hspace{1em}& \text{otherwise}.\end{array}\right.\]]]></tex-math></alternatives>
</disp-formula>
</p>
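Putting Eqs. (4)–(7) together, the full decision rule can be sketched as below. This is a minimal illustration, assuming the per-class CDFs and “reverse” CDFs are supplied as callables `F_P[i]` and `F_N[i]` (illustrative names, not from the paper):

```python
import math

# Sketch of the DPM decision rule of Eqs. (4)-(7). F_P[i] and F_N[i] stand in
# for the CDF of Eq. (2) and the "reverse" CDF of Eq. (3) of class i.

def dpm_decision(scores, F_P, F_N):
    """Return the predicted class index, or K for the unknown (K+1)-th class."""
    K = len(scores)
    # Eq. (4): probability of each known class C_i
    P_C = []
    for i in range(K):
        p = F_P[i](scores[i])
        for j in range(K):
            if j != i:
                p *= F_N[j](scores[j])
        P_C.append(p)
    # Eq. (5): probability of the unknown class C_{K+1}
    P_unknown = math.prod(F_N[j](scores[j]) for j in range(K))
    # Eqs. (6)-(7): predict 'unknown' iff it beats every known class;
    # otherwise fall back to the original classifier's largest score.
    if P_unknown > max(P_C):
        return K
    return max(range(K), key=lambda i: scores[i])

# Toy usage with two known classes and hand-made CDFs on [0, 1]:
F_P = [lambda x: min(1.0, max(0.0, x)), lambda x: min(1.0, max(0.0, x))]
F_N = [lambda x: min(1.0, max(0.0, 1 - x)), lambda x: min(1.0, max(0.0, 1 - x))]
print(dpm_decision([0.9, 0.1], F_P, F_N))  # 0: class 0's score dominates
print(dpm_decision([0.1, 0.1], F_P, F_N))  # 2: both scores low -> unknown class
```

With both scores low, every reverse CDF term is large, so the product in Eq. (5) exceeds each known-class probability and the instance is rejected as unknown, which is exactly the intended behaviour of the model.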
</sec>
<sec id="j_info1172_s_005">
<label>3.2</label>
<title>Double Smoothing</title>
<p>To avoid zero probabilities in the product, we applied smoothing. We add two dummy data points to the data set, one at the minimum and one at the maximum of the range, so we call the method double smoothing. Double smoothing slightly modifies the cumulative distribution function, but it guarantees a non-zero CDF. With double smoothing the number of data points in each CDF increases by two. If the number of scores (i.e. the size of the validation set) is large enough, then the smoothed CDF tends to the original CDF. Let us suppose that we have <italic>N</italic> data points: <inline-formula id="j_info1172_ineq_015"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{score}_{1}},{\mathit{score}_{2}},\dots ,{\mathit{score}_{N}}$]]></tex-math></alternatives></inline-formula>. Between <inline-formula id="j_info1172_ineq_016"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{score}_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_017"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">score</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{score}_{i+1}}$]]></tex-math></alternatives></inline-formula> the value of CDF changes from <inline-formula id="j_info1172_ineq_018"><alternatives><mml:math><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[$\frac{i}{N}$]]></tex-math></alternatives></inline-formula> to <inline-formula id="j_info1172_ineq_019"><alternatives><mml:math><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[$\frac{(i+1)}{(N+2)}$]]></tex-math></alternatives></inline-formula>, so the difference between them constitutes the smoothing error (<inline-formula id="j_info1172_ineq_020"><alternatives><mml:math>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi></mml:math><tex-math><![CDATA[$se$]]></tex-math></alternatives></inline-formula>), described in Eq. (<xref rid="j_info1172_eq_008">8</xref>). 
<disp-formula id="j_info1172_eq_008">
<label>(8)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo maxsize="2.03em" minsize="2.03em" stretchy="true">|</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>−</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo maxsize="2.03em" minsize="2.03em" stretchy="true">|</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo maxsize="2.03em" minsize="2.03em" stretchy="true">|</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo maxsize="2.03em" minsize="2.03em" stretchy="true">|</mml:mo>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ se=\bigg|\frac{i+1}{N+2}-\frac{i}{N}\bigg|=\bigg|\frac{N-2i}{(N+2)N}\bigg|.\]]]></tex-math></alternatives>
</disp-formula>
</p>
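<p>Equations (8) and (9) are easy to check numerically. The following sketch (our own illustration; <monospace>N = 10</monospace> is an arbitrary example size) evaluates the smoothing error at every step index:</p>

```python
def smoothing_error(i, N):
    """Eq. (8): at the i-th sorted score the smoothed CDF steps to
    (i + 1)/(N + 2) instead of i/N; the gap |N - 2i| / ((N + 2) N)
    is the smoothing error."""
    return abs((i + 1) / (N + 2) - i / N)

N = 10  # example validation-set size
errors = [smoothing_error(i, N) for i in range(N + 1)]
# The error vanishes at i = N/2 and peaks at 1/(N + 2), cf. Eq. (9)
print(max(errors))
```
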
<p>If <inline-formula id="j_info1172_ineq_021"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="false">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle></mml:math><tex-math><![CDATA[$i=\frac{N}{2}$]]></tex-math></alternatives></inline-formula>, then <inline-formula id="j_info1172_ineq_022"><alternatives><mml:math>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi></mml:math><tex-math><![CDATA[$se$]]></tex-math></alternatives></inline-formula> is zero, and the maximum of the smoothing error will be at <inline-formula id="j_info1172_ineq_023"><alternatives><mml:math>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">N</mml:mi></mml:math><tex-math><![CDATA[$i=N$]]></tex-math></alternatives></inline-formula>, as can be seen in Eq. (<xref rid="j_info1172_eq_009">9</xref>). 
<disp-formula id="j_info1172_eq_009">
<label>(9)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:munder>
<mml:mrow>
<mml:mo movablelimits="false">max</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">s</mml:mi>
<mml:mi mathvariant="italic">e</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \underset{i}{\max }(se)=\frac{1}{N+2}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
<p>The smoothing error tends to zero as <italic>N</italic> becomes infinite. Let us denote the number of elements in the original (i.e. before smoothing) CDF of the <italic>j</italic>th class by <inline-formula id="j_info1172_ineq_024"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${N_{j}}$]]></tex-math></alternatives></inline-formula>. The maximal error caused by double smoothing is given by Eq. (<xref rid="j_info1172_eq_010">10</xref>). 
<disp-formula id="j_info1172_eq_010">
<label>(10)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">Maxerror</mml:mi>
<mml:mtext>-</mml:mtext>
<mml:mi mathvariant="italic">smoothing</mml:mi>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∏</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:munderover><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \mathit{Maxerror}\text{-}\mathit{smoothing}={\prod \limits_{j=1}^{K}}\frac{1}{{N_{j}}+2}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
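<p>The bound of Eq. (10) can be computed directly. The sketch below is our own illustration; the class sizes are made-up example values:</p>

```python
from functools import reduce

def maxerror_smoothing(class_sizes):
    """Eq. (10): the per-class CDF error is at most 1/(N_j + 2), and since
    the K class probabilities are multiplied, the worst-case combined
    error is the product of these bounds."""
    return reduce(lambda acc, n_j: acc / (n_j + 2), class_sizes, 1.0)

# With 100 validation scores in each of K = 3 known classes the bound
# is already tiny:
print(maxerror_smoothing([100, 100, 100]))  # (1/102)**3 ≈ 9.4e-07
```
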
</sec>
</sec>
<sec id="j_info1172_s_006">
<label>4</label>
<title>Image Classification</title>
<p>We tested the Double Probability Model with image classification. Following the general trend, we applied the BoW (Bag-of-Words) model (Fei-Fei <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_010">2007</xref>; Chatfield <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_006">2011</xref>; Lazebnik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_014">2006</xref>) for the mathematical representation of the images, and we used SVM (Support Vector Machine) (Boser <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_003">1992</xref>; Cortes and Vapnik, <xref ref-type="bibr" rid="j_info1172_ref_007">1995</xref>; Chatfield <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_006">2011</xref>) as the classifier. We should note that the DPM can be used with any classification process, as long as it provides probability values for each possible category.</p>
<p>The key idea behind the BoW model is to represent an image (based on its visual content) with so-called visual code words while ignoring their spatial distribution. This technique consists of the three usual phases in computer vision: (i) feature detection, (ii) feature description, and (iii) image description. For feature detection we used the Harris-Laplace corner detector (Harris and Stephens, <xref ref-type="bibr" rid="j_info1172_ref_011">1988</xref>; Mikolajczyk and Schmid, <xref ref-type="bibr" rid="j_info1172_ref_017">2004</xref>), and SIFT (Scale Invariant Feature Transform) (Lowe, <xref ref-type="bibr" rid="j_info1172_ref_015">2004</xref>) to describe them. Note that we used the default parameterization of SIFT proposed by Lowe; therefore the descriptor vectors had 128 dimensions. To define the visual code words from the descriptor vectors, we used GMM (Gaussian Mixture Model) (Reynolds, <xref ref-type="bibr" rid="j_info1172_ref_020">2009</xref>; Tomasi, <xref ref-type="bibr" rid="j_info1172_ref_027">2004</xref>), which is a parametric probability density function represented as a weighted sum of (in this case 256) Gaussian component densities, as can be seen in Eq. (<xref rid="j_info1172_eq_011">11</xref>). 
<disp-formula id="j_info1172_eq_011">
<label>(11)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi>
<mml:mo stretchy="false">∣</mml:mo>
<mml:mi mathvariant="italic">λ</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ω</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">g</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi>
<mml:mo stretchy="false">∣</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ p(X\mid \lambda )={\sum \limits_{j=1}^{K}}{\omega _{j}}g(X\mid {\mu _{j}}{o_{j}}),\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_info1172_ineq_025"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ω</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\omega _{j}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_info1172_ineq_026"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mu _{j}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_027"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${o_{j}}$]]></tex-math></alternatives></inline-formula> denote the weight, the expected value and the variance of the <italic>j</italic>th Gaussian component, respectively; furthermore <inline-formula id="j_info1172_ineq_028"><alternatives><mml:math>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>256</mml:mn></mml:math><tex-math><![CDATA[$K=256$]]></tex-math></alternatives></inline-formula>. We calculated the <italic>λ</italic> parameter with ML (Maximum Likelihood) estimation by using the iterative EM (Expectation Maximization) algorithm (Dempster <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_008">1977</xref>; Tomasi, <xref ref-type="bibr" rid="j_info1172_ref_027">2004</xref>). We performed K-means clustering (MacQueen, <xref ref-type="bibr" rid="j_info1172_ref_016">1967</xref>) over all the descriptors with 256 clusters to get the initial parameter model for the EM. The next step was to create a descriptor that specifies the distribution of the visual code words in an image, called a high-level descriptor. To represent an image with this high-level descriptor, the GMM-based Fisher vector (see Eq. (<xref rid="j_info1172_eq_012">12</xref>)) was calculated (Perronnin and Dance, <xref ref-type="bibr" rid="j_info1172_ref_018">2007</xref>; Reynolds, <xref ref-type="bibr" rid="j_info1172_ref_020">2009</xref>). These vectors were the final representations (image descriptors) of the images. 
<disp-formula id="j_info1172_eq_012">
<label>(12)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">F</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mo>▽</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">λ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo movablelimits="false">log</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi>
<mml:mo stretchy="false">∣</mml:mo>
<mml:mi mathvariant="italic">λ</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ F={\triangledown _{\lambda }}\log p(X\mid \lambda )\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_info1172_ineq_029"><alternatives><mml:math>
<mml:mo movablelimits="false">log</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi>
<mml:mo stretchy="false">∣</mml:mo>
<mml:mi mathvariant="italic">λ</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$\log p(X\mid \lambda )$]]></tex-math></alternatives></inline-formula> is the probability density function introduced in Eq. (<xref rid="j_info1172_eq_011">11</xref>), <italic>X</italic> denotes the SIFT descriptors of an image and <italic>λ</italic> represents the parameter of GMM (<inline-formula id="j_info1172_ineq_030"><alternatives><mml:math>
<mml:mi mathvariant="italic">λ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ω</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">o</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>…</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\lambda =\{{\omega _{j}}{\mu _{j}}{o_{j}}|j=1\dots K\}$]]></tex-math></alternatives></inline-formula>).</p>
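<p>Eq. (11) can be evaluated directly. The following one-dimensional sketch is our own illustration with toy numbers: two components instead of the paper's <italic>K</italic> = 256, and scalar features instead of 128-dimensional SIFT descriptors.</p>

```python
import math

def gmm_density(x, weights, means, variances):
    """p(x | lambda) of Eq. (11): a weighted sum of Gaussian component
    densities with weights omega_j, means mu_j and variances o_j."""
    return sum(
        w * math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        for w, mu, var in zip(weights, means, variances)
    )

# Two-component toy mixture; the weights must sum to 1
p = gmm_density(0.0, weights=[0.6, 0.4], means=[0.0, 2.0], variances=[1.0, 1.0])
```
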
<p>For the classification subtask we used a variation of SVM, the C-SVC (C-support vector classification) (Boser <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_003">1992</xref>; Cortes and Vapnik, <xref ref-type="bibr" rid="j_info1172_ref_007">1995</xref>) with an RBF (Radial Basis Function) kernel. The one-against-all technique was applied to extend the SVM to multi-class classification. We used Platt’s (Platt, <xref ref-type="bibr" rid="j_info1172_ref_019">2000</xref>) approach as the probability estimator, which is included in LIBSVM (A Library for Support Vector Machines) (Chang and Lin, <xref ref-type="bibr" rid="j_info1172_ref_005">2011</xref>; Huang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_012">2006</xref>). At this point we can decide whether to use the Double Probability Model for filtering out the test samples that possibly came from a previously unseen category, or keep the original predictions of the classifier (SVM). The CDF and reverse CDF (Eqs. (<xref rid="j_info1172_eq_002">2</xref>) and (<xref rid="j_info1172_eq_003">3</xref>)) can be calculated based on the class membership probabilities (in a validation set).</p>
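<p>This last step can be sketched as follows. This is an illustration, not the paper's code: we assume each per-class CDF is the double-smoothed empirical distribution of the validation-set probabilities described in Section 3.2, take the reverse CDF as its complement, and the dictionary contents are made-up example values.</p>

```python
def build_cdfs(validation_probs):
    """For each known class, build the double-smoothed empirical CDF of
    its validation-set class-membership probabilities, plus the reverse
    CDF as its complement (a sketch of the paper's Eqs. (2)-(3))."""
    cdfs = {}
    for label, probs in validation_probs.items():
        xs, n = sorted(probs), len(probs)
        def cdf(x, xs=xs, n=n):
            # dummy points below the minimum and above the maximum keep
            # the CDF strictly between 0 and 1 (double smoothing)
            return (sum(1 for s in xs if s <= x) + 1) / (n + 2)
        cdfs[label] = (cdf, lambda x, c=cdf: 1.0 - c(x))
    return cdfs

cdfs = build_cdfs({"cat": [0.7, 0.8, 0.9], "dog": [0.2, 0.4, 0.6]})
cdf_cat, rev_cat = cdfs["cat"]
print(cdf_cat(0.75), rev_cat(0.75))  # 0.4 0.6
```
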
</sec>
<sec id="j_info1172_s_007">
<label>5</label>
<title>New Goodness Indicators for Classification in Open Set Problem</title>
<p>We call the test-set instances with a known class and those with an unknown class known test samples and unknown test samples, respectively. Note that the unknown classes are different from the known classes, and the learning system has no information about their existence or size. The aim of the proposed model is to detect the unknown test samples as accurately as possible. The detection part is covered by the DPM, but there are different ways of calculating the accuracy that take these detections into consideration. The traditional accuracy (Eq. (<xref rid="j_info1172_eq_013">13</xref>)) is not an appropriate indicator of the goodness of the results, because it does not consider the unknown (unseen) categories; i.e. even if a test sample belongs to an unknown class, it will automatically be classified into one of the known categories, which reduces the accuracy, and this reduction depends on the ratio of unknown test samples.</p>
<p>We introduce the so-called extended accuracy, denoted by <inline-formula id="j_info1172_ineq_031"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula>: it discards the results of the test samples that are predicted as unknown and then calculates the accuracy on this reduced result set (see Eq. (<xref rid="j_info1172_eq_014">14</xref>)). This way we are able to measure the efficiency of our proposed model by comparing it to the general case, in which the test samples are not filtered out. <disp-formula-group id="j_info1172_dg_003">
<disp-formula id="j_info1172_eq_013">
<label>(13)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:mi mathvariant="italic">Accuracy</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∪</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="2.5pt"/>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \mathit{Accuracy}=\frac{{\textstyle\sum _{i\in K\cup U}}I({Y^{\prime }_{i}}={Y_{i}})}{|K\cup U|},\hspace{1em}{Y_{i}}\in {C_{K}}\cup {C_{U}},\hspace{2.5pt}{Y^{\prime }_{i}}\in {C_{K}},\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_info1172_eq_014">
<label>(14)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mi mathvariant="normal">&amp;</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∪</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {\mathit{Accuracy}_{E}}=\frac{{\textstyle\sum _{i\in K\cup U}}I(({Y^{\prime }_{i}}={Y_{i}})\& ({Y^{\prime }_{i}}\in {C_{K}}))}{{\textstyle\sum _{i\in K\cup U}}I({Y^{\prime }_{i}}\in {C_{K}})},\hspace{1em}{Y_{i}},{Y^{\prime }_{i}}\in {C_{K}}\cup {C_{U}},\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> where <italic>I</italic> is an indicator function whose value is 1 if the condition in Equation (<xref rid="j_info1172_eq_014">14</xref>) is true and 0 otherwise. Here <italic>K</italic> and <italic>U</italic> are the sets of known and unknown instances (in the test set), <inline-formula id="j_info1172_ineq_032"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{K}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_info1172_ineq_033"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{U}}$]]></tex-math></alternatives></inline-formula> are the sets of known and unknown classes, respectively (the unknown label is only one class, but <inline-formula id="j_info1172_ineq_034"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${C_{K}}$]]></tex-math></alternatives></inline-formula> typically contains more known classes). Furthermore, <inline-formula id="j_info1172_ineq_035"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${Y_{i}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_036"><alternatives><mml:math>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup></mml:math><tex-math><![CDATA[${Y^{\prime }_{i}}$]]></tex-math></alternatives></inline-formula> denote the real and predicted class label of the <italic>i</italic>th image.</p>
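<p>The two indicators of Eqs. (13) and (14) can be sketched as follows. This is a minimal illustration with made-up labels; the string <monospace>"unk"</monospace> stands in for the (K+1)-th, unknown label.</p>

```python
def accuracies(y_true, y_pred, known_classes):
    """Accuracy of Eq. (13) over all test samples, and the extended
    accuracy of Eq. (14), which keeps only the samples whose prediction
    is a known class (samples rejected as unknown are discarded)."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    kept = [(t, p) for t, p in zip(y_true, y_pred) if p in known_classes]
    acc_e = sum(t == p for t, p in kept) / len(kept) if kept else 0.0
    return acc, acc_e

# "unk" plays the role of the unknown label
y_true = ["a", "b", "unk", "a"]
y_pred = ["a", "b", "unk", "unk"]
print(accuracies(y_true, y_pred, known_classes={"a", "b"}))  # (0.75, 1.0)
```

<p>In the example the last sample is wrongly rejected as unknown, so the plain accuracy drops to 0.75, while the extended accuracy, computed only on the retained samples, stays at 1.0.</p>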
<p>With the above modification we eliminate the test samples that are predicted as unknown by the DPM. While this method of calculation is good for comparison, it does not accurately reflect the classification power on the unknown samples; therefore a new type of accuracy (see Equation (<xref rid="j_info1172_eq_015">15</xref>)) is needed to evaluate such an open set problem, denoted by <inline-formula id="j_info1172_ineq_037"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> (where the subscript <italic>O</italic> refers to the open set problem). The decision for a test sample is made among <inline-formula id="j_info1172_ineq_038"><alternatives><mml:math>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$K+1$]]></tex-math></alternatives></inline-formula> alternatives; thus with the <inline-formula id="j_info1172_ineq_039"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> we evaluate those decisions among <inline-formula id="j_info1172_ineq_040"><alternatives><mml:math>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$K+1$]]></tex-math></alternatives></inline-formula> categories. 
<disp-formula id="j_info1172_eq_015">
<label>(15)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">K</mml:mi>
<mml:mo>∪</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∪</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {\mathit{Accuracy}_{O}}=\frac{{\textstyle\sum _{i\in K\cup U}}I({Y^{\prime }_{i}}={Y_{i}})}{|K\cup U|},\hspace{1em}{Y_{i}},{Y^{\prime }_{i}}\in {C_{K}}\cup {C_{U}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
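<p>As a concrete illustration of Equation (<xref rid="j_info1172_eq_015">15</xref>), the open set accuracy simply counts correct decisions over all known and unknown test samples, where each decision is drawn among the <italic>K</italic> + 1 alternatives. The following is a minimal Python sketch under our own naming (labels and function names are illustrative, not from the paper), assuming unknown samples carry a single extra label:</p>

```python
# Illustrative sketch of Equation (15): open set accuracy over the
# K + 1 alternatives.  Unknown test samples carry the extra label
# "unknown"; all label and variable names here are hypothetical.

def accuracy_open(y_true, y_pred):
    """Accuracy_O: correct decisions over all known and unknown samples."""
    assert len(y_true) == len(y_pred)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Two known classes plus the (K + 1)-th "unknown" alternative.
y_true = ["cat", "dog", "unknown", "unknown"]
y_pred = ["cat", "unknown", "unknown", "dog"]
print(accuracy_open(y_true, y_pred))  # 2 correct out of 4 -> 0.5
```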
<p>We can use the traditional recall <inline-formula id="j_info1172_ineq_041"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">R</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(R)$]]></tex-math></alternatives></inline-formula> and precision <inline-formula id="j_info1172_ineq_042"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">P</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(P)$]]></tex-math></alternatives></inline-formula> metrics on the decisions of DPM, i.e. the percentage of correctly filtered-out images. We calculate these metrics as follows: <disp-formula-group id="j_info1172_dg_004">
<disp-formula id="j_info1172_eq_016">
<label>(16)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">|</mml:mo>
<mml:mi mathvariant="italic">U</mml:mi>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∪</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {R_{\mathit{filter}}}=\frac{{\textstyle\sum _{i\in U}}I({Y^{\prime }_{i}}={Y_{i}})}{|U|},\hspace{1em}{Y_{i}}\in {C_{U}},{Y^{\prime }_{i}}\in {C_{K}}\cup {C_{U}},\]]]></tex-math></alternatives>
</disp-formula>
<disp-formula id="j_info1172_eq_017">
<label>(17)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true" columnalign="left">
<mml:mtr>
<mml:mtd class="align-odd">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="italic">I</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">|</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mspace width="1em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">K</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∪</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {P_{\mathit{filter}}}=\frac{{\textstyle\sum _{i\in {U^{\prime }}}}I({Y^{\prime }_{i}}={Y_{i}})}{|{U^{\prime }}|},\hspace{1em}{Y_{i}}\in {C_{K}}\cup {C_{U}},{Y^{\prime }_{i}}\in {C_{U}},\]]]></tex-math></alternatives>
</disp-formula>
</disp-formula-group> where <inline-formula id="j_info1172_ineq_043"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">instance</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="italic">Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>′</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">∈</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">U</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[${U^{\prime }}=\{{\mathit{instance}_{i}}|{Y^{\prime }_{i}}\in {C_{U}}\}$]]></tex-math></alternatives></inline-formula>.</p>
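<p>The filtering metrics of Equations (<xref rid="j_info1172_eq_016">16</xref>) and (<xref rid="j_info1172_eq_017">17</xref>) can be sketched the same way: the recall is evaluated over the truly unknown samples (the set <italic>U</italic>), while the precision is evaluated over the samples predicted as unknown (the set <italic>U</italic>′). A hedged Python sketch with hypothetical names:</p>

```python
# Illustrative sketch of Equations (16)-(17): recall and precision of
# the "filtered out as unknown" decision.  The label "unknown" and the
# function names are our assumptions, not taken from the paper.

UNKNOWN = "unknown"

def filter_recall(y_true, y_pred):
    """R_filter: share of truly unknown samples (U) predicted as unknown."""
    unknown_idx = [i for i, t in enumerate(y_true) if t == UNKNOWN]
    hits = sum(1 for i in unknown_idx if y_pred[i] == UNKNOWN)
    return hits / len(unknown_idx)

def filter_precision(y_true, y_pred):
    """P_filter: share of unknown predictions (U') that are correct."""
    predicted_idx = [i for i, p in enumerate(y_pred) if p == UNKNOWN]
    hits = sum(1 for i in predicted_idx if y_true[i] == UNKNOWN)
    return hits / len(predicted_idx)

y_true = ["cat", "unknown", "unknown", "dog"]
y_pred = ["unknown", "unknown", "cat", "dog"]
print(filter_recall(y_true, y_pred))     # 1 of 2 unknowns caught -> 0.5
print(filter_precision(y_true, y_pred))  # 1 of 2 unknown predictions -> 0.5
```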
</sec>
<sec id="j_info1172_s_008">
<label>6</label>
<title>Experimental Results</title>
<sec id="j_info1172_s_009">
<label>6.1</label>
<title>Experimental Environment</title>
<p>For our experiments we used the Caltech101 (Fei-Fei <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_009">2004</xref>) collection, which consists of 8677 images from 101 categories, and we created numerous data sets by randomly sampling the classes from the total data set. These subsets fall into six different types. The training set was formed from 70% of the images in the randomly selected known classes, and the test set contains the remaining 30% of the images from the known classes, complemented by all of the unknown images. The reason for isolating the unknown images is that the learning system is not allowed to use them, so all unknown images are essentially unknown test samples. We randomly selected some of the known classes and repeated this operation 20 times, so that we could take the average of the 20 results. We chose two different unknown sets and defined three different numbers of known classes to sample; therefore we had a total of six types, as can be seen in Table <xref rid="j_info1172_tab_001">1</xref>. As mentioned previously, we had 20 data sets of every type, so a total of 120 data sets. In the rest of the paper we consider only the types instead of the individual data sets; when we present the results of a data type, we mean the averaged results of its 20 individual data sets.</p>
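<p>The construction above (70% of each selected known class for training, the remaining 30% plus all unknown images for testing, repeated with 20 random class selections) can be sketched as follows; this is an illustrative reconstruction under our own naming, not the authors' code:</p>

```python
# Illustrative reconstruction (not the authors' code) of the data set
# generation: sample K known classes, split their images 70/30 into
# train/test, keep all unknown-class images as test-only samples.
import random

def make_split(images_by_class, known_k, unknown_classes, seed):
    rng = random.Random(seed)
    candidates = [c for c in images_by_class if c not in unknown_classes]
    known = rng.sample(candidates, known_k)   # randomly selected known classes
    train, test_set = [], []
    for c in known:
        imgs = list(images_by_class[c])
        rng.shuffle(imgs)
        cut = int(0.7 * len(imgs))            # 70% train / 30% test
        train += [(img, c) for img in imgs[:cut]]
        test_set += [(img, c) for img in imgs[cut:]]
    for c in unknown_classes:                 # unknowns never enter training
        test_set += [(img, "unknown") for img in images_by_class[c]]
    return train, test_set

# Toy collection; the paper averages results over 20 such random runs.
data = {"a": list(range(10)), "b": list(range(10)), "air": list(range(4))}
train, test_set = make_split(data, known_k=2, unknown_classes=["air"], seed=0)
print(len(train), len(test_set))  # 14 training and 10 test samples
```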
<table-wrap id="j_info1172_tab_001">
<label>Table 1</label>
<caption>
<p>Types of the data sets which were created by randomly selecting the categories of Caltech101 collection.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Name</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Number of known classes</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Unknown set</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Airplanes5</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">airplanes</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Airplanes10</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">airplanes</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Airplanes20</td>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">airplanes</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Faces5</td>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">faces + faces easy</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Faces10</td>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">faces + faces easy</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Faces20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">faces + faces easy</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In case of the Airplanes5, Airplanes10 and Airplanes20 data sets the known categories were sampled from 100 classes, while in case of the Faces5, Faces10 and Faces20 data sets they were sampled from 99 classes. This means that the numbers of unknown classes were 1 and 2, and the numbers of unknown test samples were 800 and 870, respectively. We created a basic test bed in which the numbers of known and unknown test samples were equal; we achieved this by randomly selecting the appropriate amount from the larger set in each test (i.e. we downscaled the unknown set if the known set was smaller, and vice versa). We measured the results at 11 sampling points, as the percentage of unknown test samples increased from 0 to 50 (by 5 percentage points at each step, so the basic test bed was modified according to this downsampling). Note that during training our machine learning solution can use only known images (and none of the images of unknown classes), and it has no information about the “unknown ratio in the test set”, so DPM does not know how many images should be filtered out as unknown instances.</p>
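<p>The 11 sampling points can be reproduced with a simple downsampling rule: starting from the equal-sized test bed, keep <italic>u</italic> unknown samples so that <italic>u</italic>/(<italic>n</italic> + <italic>u</italic>) equals the target percentage, where <italic>n</italic> is the number of known test samples. A hypothetical Python sketch under this assumption (all names are ours):</p>

```python
# Hypothetical sketch of the 11 sampling points: the unknown test set is
# downsampled so that it makes up p% of the whole test set,
# p in {0, 5, ..., 50}.  Solving u / (n + u) = p / 100 gives
# u = n * p / (100 - p), with n the number of known test samples.
import random

def downsample_unknown(known, unknown, p, seed=0):
    u = round(len(known) * p / (100 - p))
    return known + random.Random(seed).sample(unknown, min(u, len(unknown)))

known = [("img%d" % i, "known") for i in range(100)]
unknown = [("unk%d" % i, "unknown") for i in range(100)]
for p in range(0, 51, 5):                     # the 11 sampling points
    test_set = downsample_unknown(known, unknown, p)
    share = 100 * (len(test_set) - len(known)) / len(test_set)
    print(p, len(test_set), round(share))     # rounded share equals p
```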
</sec>
<sec id="j_info1172_s_010">
<label>6.2</label>
<title>Evaluation of Double Probability Model</title>
<p>In the following we present the experimental results of our proposed Double Probability Model in six diagrams (Fig. <xref rid="j_info1172_fig_001">1</xref>) and in four tables (Tables <xref rid="j_info1172_tab_002">2</xref>–<xref rid="j_info1172_tab_005">5</xref>). The diagrams in Fig. <xref rid="j_info1172_fig_001">1</xref> show that our proposed model has a positive influence on the <inline-formula id="j_info1172_ineq_044"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula>; in each test the use of DPM is beneficial, because it is able to filter out many unknown images.</p>
<p>Regarding <inline-formula id="j_info1172_ineq_045"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> the Faces20 and Airplanes20 tests performed best; in these cases the predictions of our model were approximately 75% correct (see Table <xref rid="j_info1172_tab_005">5</xref> for details). We got the lowest <inline-formula id="j_info1172_ineq_046"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> on the Faces5 test, where it was 0.563; this means that the true positive detections outnumbered the false positives even in this “worst” case.</p>
<fig id="j_info1172_fig_001">
<label>Fig. 1</label>
<caption>
<p>Results obtained on the six different data set types. Each diagram shows the <inline-formula id="j_info1172_ineq_047"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula> (average <inline-formula id="j_info1172_ineq_048"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula> of 20 tests) with or without using our proposed Double Probability Model and the <inline-formula id="j_info1172_ineq_049"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> (average <inline-formula id="j_info1172_ineq_050"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> of 20 tests); represented as dashed, dotted and solid lines, respectively. The accuracy is on the <italic>y</italic>-axis and the percentage of the unknown test samples is on the <italic>x</italic>-axis.</p>
</caption>
<graphic xlink:href="info1172_g001.jpg"/>
</fig>
<p>We also calculated the <inline-formula id="j_info1172_ineq_051"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${R_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metric for every test, and we found that when the percentage of unknown test samples in the whole test set was increased, <inline-formula id="j_info1172_ineq_052"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${R_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> did not significantly change (<inline-formula id="j_info1172_ineq_053"><alternatives><mml:math>
<mml:mo>±</mml:mo>
<mml:mn>0.02</mml:mn></mml:math><tex-math><![CDATA[$\pm 0.02$]]></tex-math></alternatives></inline-formula>); thus we only present this metric at the last sampling point (i.e. when the numbers of known and unknown test samples were equal): <italic>Airplanes</italic>5 : 0.741, <italic>Faces</italic>5 : 0.730, <italic>Airplanes</italic>10 : 0.551, <italic>Faces</italic>10 : 0.611, <italic>Airplanes</italic>20 : 0.234, <italic>Faces</italic>20 : 0.550. We can see that in the majority of tests our proposed model detected more than half of the unknown test samples; moreover, in the case of Airplanes5 and Faces5 only a quarter of the unknown set remained undetected.</p>
<p>The tables below summarize the overall results of our experiments with the Double Probability Model. The first column has the same meaning as the <italic>x</italic>-axis of the diagrams above: it represents the percentage of the unknown test samples. In addition to the averaged metric we also included the Q1 and Q3 (first and third quartile) statistical indicators to give a comprehensive view of the performance of our model. Figure <xref rid="j_info1172_fig_001">1</xref> already showed that the averaged results are better when we use DPM, but by comparing the Q1 and Q3 values of <inline-formula id="j_info1172_ineq_054"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula> in Tables <xref rid="j_info1172_tab_002">2</xref>–<xref rid="j_info1172_tab_004">4</xref> we can see that every Q1 and Q3 value is higher when the Double Probability Model is used; moreover, in some cases the Q1 with DPM exceeds the Q3 without DPM. Based on these results we conclude that our proposed model efficiently filters out the unknown test samples.</p>
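<p>A single row of the tables (AVG together with Q1 and Q3 of the 20 test results of a type) can be computed as in the sketch below; the exact quartile method is our assumption, since the paper does not specify one, and the run values are made-up illustration data, not results from the paper:</p>

```python
# Minimal sketch of how one table row (AVG, Q1, Q3) could be derived
# from the 20 accuracies of a data set type.  The quartile method
# (statistics.quantiles with n=4) is an assumption, and the run values
# below are invented illustration data.
import statistics

def summarize(accuracies):
    q1, _, q3 = statistics.quantiles(accuracies, n=4)  # quartile cut points
    return (round(statistics.mean(accuracies), 3),
            round(q1, 3), round(q3, 3))

runs = [0.52, 0.55, 0.61, 0.48, 0.57, 0.60, 0.50, 0.63, 0.58, 0.54,
        0.49, 0.62, 0.56, 0.53, 0.59, 0.51, 0.64, 0.47, 0.60, 0.55]
avg, q1, q3 = summarize(runs)
print(avg, q1, q3)  # one (AVG, Q1, Q3) row of the tables
```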
<table-wrap id="j_info1172_tab_002">
<label>Table 2</label>
<caption>
<p><inline-formula id="j_info1172_ineq_055"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula>, Q1 and Q3 metrics with or without using DPM evaluated on the results of Airplanes5 and Faces5 test data set types.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="3" style="vertical-align: middle; text-align: left; border-top: solid thin; border-bottom: solid thin">%</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Airplanes5</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Faces5</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">0</td>
<td style="vertical-align: top; text-align: left">0.547</td>
<td style="vertical-align: top; text-align: left">0.419</td>
<td style="vertical-align: top; text-align: left">0.634</td>
<td style="vertical-align: top; text-align: left">0.647</td>
<td style="vertical-align: top; text-align: left">0.557</td>
<td style="vertical-align: top; text-align: left">0.720</td>
<td style="vertical-align: top; text-align: left">0.531</td>
<td style="vertical-align: top; text-align: left">0.398</td>
<td style="vertical-align: top; text-align: left">0.630</td>
<td style="vertical-align: top; text-align: left">0.730</td>
<td style="vertical-align: top; text-align: left">0.638</td>
<td style="vertical-align: top; text-align: left">0.807</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">0.517</td>
<td style="vertical-align: top; text-align: left">0.396</td>
<td style="vertical-align: top; text-align: left">0.601</td>
<td style="vertical-align: top; text-align: left">0.639</td>
<td style="vertical-align: top; text-align: left">0.547</td>
<td style="vertical-align: top; text-align: left">0.720</td>
<td style="vertical-align: top; text-align: left">0.502</td>
<td style="vertical-align: top; text-align: left">0.376</td>
<td style="vertical-align: top; text-align: left">0.596</td>
<td style="vertical-align: top; text-align: left">0.718</td>
<td style="vertical-align: top; text-align: left">0.620</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.490</td>
<td style="vertical-align: top; text-align: left">0.377</td>
<td style="vertical-align: top; text-align: left">0.569</td>
<td style="vertical-align: top; text-align: left">0.629</td>
<td style="vertical-align: top; text-align: left">0.534</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.475</td>
<td style="vertical-align: top; text-align: left">0.357</td>
<td style="vertical-align: top; text-align: left">0.567</td>
<td style="vertical-align: top; text-align: left">0.707</td>
<td style="vertical-align: top; text-align: left">0.587</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">15</td>
<td style="vertical-align: top; text-align: left">0.463</td>
<td style="vertical-align: top; text-align: left">0.355</td>
<td style="vertical-align: top; text-align: left">0.536</td>
<td style="vertical-align: top; text-align: left">0.619</td>
<td style="vertical-align: top; text-align: left">0.523</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.449</td>
<td style="vertical-align: top; text-align: left">0.336</td>
<td style="vertical-align: top; text-align: left">0.534</td>
<td style="vertical-align: top; text-align: left">0.701</td>
<td style="vertical-align: top; text-align: left">0.564</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">0.436</td>
<td style="vertical-align: top; text-align: left">0.334</td>
<td style="vertical-align: top; text-align: left">0.505</td>
<td style="vertical-align: top; text-align: left">0.610</td>
<td style="vertical-align: top; text-align: left">0.502</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.423</td>
<td style="vertical-align: top; text-align: left">0.318</td>
<td style="vertical-align: top; text-align: left">0.502</td>
<td style="vertical-align: top; text-align: left">0.686</td>
<td style="vertical-align: top; text-align: left">0.530</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left">0.409</td>
<td style="vertical-align: top; text-align: left">0.313</td>
<td style="vertical-align: top; text-align: left">0.474</td>
<td style="vertical-align: top; text-align: left">0.601</td>
<td style="vertical-align: top; text-align: left">0.485</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.397</td>
<td style="vertical-align: top; text-align: left">0.297</td>
<td style="vertical-align: top; text-align: left">0.472</td>
<td style="vertical-align: top; text-align: left">0.672</td>
<td style="vertical-align: top; text-align: left">0.518</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left">0.382</td>
<td style="vertical-align: top; text-align: left">0.292</td>
<td style="vertical-align: top; text-align: left">0.444</td>
<td style="vertical-align: top; text-align: left">0.592</td>
<td style="vertical-align: top; text-align: left">0.464</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.370</td>
<td style="vertical-align: top; text-align: left">0.277</td>
<td style="vertical-align: top; text-align: left">0.440</td>
<td style="vertical-align: top; text-align: left">0.657</td>
<td style="vertical-align: top; text-align: left">0.494</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">35</td>
<td style="vertical-align: top; text-align: left">0.354</td>
<td style="vertical-align: top; text-align: left">0.272</td>
<td style="vertical-align: top; text-align: left">0.412</td>
<td style="vertical-align: top; text-align: left">0.582</td>
<td style="vertical-align: top; text-align: left">0.437</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.344</td>
<td style="vertical-align: top; text-align: left">0.258</td>
<td style="vertical-align: top; text-align: left">0.408</td>
<td style="vertical-align: top; text-align: left">0.643</td>
<td style="vertical-align: top; text-align: left">0.463</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.327</td>
<td style="vertical-align: top; text-align: left">0.251</td>
<td style="vertical-align: top; text-align: left">0.380</td>
<td style="vertical-align: top; text-align: left">0.570</td>
<td style="vertical-align: top; text-align: left">0.412</td>
<td style="vertical-align: top; text-align: left">0.716</td>
<td style="vertical-align: top; text-align: left">0.318</td>
<td style="vertical-align: top; text-align: left">0.238</td>
<td style="vertical-align: top; text-align: left">0.378</td>
<td style="vertical-align: top; text-align: left">0.625</td>
<td style="vertical-align: top; text-align: left">0.427</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">45</td>
<td style="vertical-align: top; text-align: left">0.300</td>
<td style="vertical-align: top; text-align: left">0.230</td>
<td style="vertical-align: top; text-align: left">0.348</td>
<td style="vertical-align: top; text-align: left">0.558</td>
<td style="vertical-align: top; text-align: left">0.382</td>
<td style="vertical-align: top; text-align: left">0.715</td>
<td style="vertical-align: top; text-align: left">0.291</td>
<td style="vertical-align: top; text-align: left">0.218</td>
<td style="vertical-align: top; text-align: left">0.346</td>
<td style="vertical-align: top; text-align: left">0.610</td>
<td style="vertical-align: top; text-align: left">0.389</td>
<td style="vertical-align: top; text-align: left">0.792</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">50</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.273</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.210</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.317</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.543</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.361</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.702</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.265</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.199</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.315</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.593</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.344</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.792</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_info1172_tab_003">
<label>Table 3</label>
<caption>
<p><inline-formula id="j_info1172_ineq_056"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula>, Q1 and Q3 metrics, with and without DPM, evaluated on the results for the Airplanes10 and Faces10 test data set types.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="3" style="vertical-align: middle; text-align: left; border-top: solid thin; border-bottom: solid thin">%</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Airplanes10</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Faces10</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">0</td>
<td style="vertical-align: top; text-align: left">0.635</td>
<td style="vertical-align: top; text-align: left">0.561</td>
<td style="vertical-align: top; text-align: left">0.717</td>
<td style="vertical-align: top; text-align: left">0.676</td>
<td style="vertical-align: top; text-align: left">0.609</td>
<td style="vertical-align: top; text-align: left">0.727</td>
<td style="vertical-align: top; text-align: left">0.643</td>
<td style="vertical-align: top; text-align: left">0.601</td>
<td style="vertical-align: top; text-align: left">0.734</td>
<td style="vertical-align: top; text-align: left">0.745</td>
<td style="vertical-align: top; text-align: left">0.654</td>
<td style="vertical-align: top; text-align: left">0.851</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">0.602</td>
<td style="vertical-align: top; text-align: left">0.532</td>
<td style="vertical-align: top; text-align: left">0.679</td>
<td style="vertical-align: top; text-align: left">0.658</td>
<td style="vertical-align: top; text-align: left">0.607</td>
<td style="vertical-align: top; text-align: left">0.709</td>
<td style="vertical-align: top; text-align: left">0.609</td>
<td style="vertical-align: top; text-align: left">0.570</td>
<td style="vertical-align: top; text-align: left">0.695</td>
<td style="vertical-align: top; text-align: left">0.723</td>
<td style="vertical-align: top; text-align: left">0.620</td>
<td style="vertical-align: top; text-align: left">0.830</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.571</td>
<td style="vertical-align: top; text-align: left">0.504</td>
<td style="vertical-align: top; text-align: left">0.645</td>
<td style="vertical-align: top; text-align: left">0.642</td>
<td style="vertical-align: top; text-align: left">0.590</td>
<td style="vertical-align: top; text-align: left">0.686</td>
<td style="vertical-align: top; text-align: left">0.578</td>
<td style="vertical-align: top; text-align: left">0.539</td>
<td style="vertical-align: top; text-align: left">0.659</td>
<td style="vertical-align: top; text-align: left">0.704</td>
<td style="vertical-align: top; text-align: left">0.604</td>
<td style="vertical-align: top; text-align: left">0.797</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">15</td>
<td style="vertical-align: top; text-align: left">0.539</td>
<td style="vertical-align: top; text-align: left">0.477</td>
<td style="vertical-align: top; text-align: left">0.608</td>
<td style="vertical-align: top; text-align: left">0.620</td>
<td style="vertical-align: top; text-align: left">0.556</td>
<td style="vertical-align: top; text-align: left">0.668</td>
<td style="vertical-align: top; text-align: left">0.545</td>
<td style="vertical-align: top; text-align: left">0.509</td>
<td style="vertical-align: top; text-align: left">0.623</td>
<td style="vertical-align: top; text-align: left">0.686</td>
<td style="vertical-align: top; text-align: left">0.588</td>
<td style="vertical-align: top; text-align: left">0.767</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">0.508</td>
<td style="vertical-align: top; text-align: left">0.448</td>
<td style="vertical-align: top; text-align: left">0.573</td>
<td style="vertical-align: top; text-align: left">0.602</td>
<td style="vertical-align: top; text-align: left">0.524</td>
<td style="vertical-align: top; text-align: left">0.643</td>
<td style="vertical-align: top; text-align: left">0.514</td>
<td style="vertical-align: top; text-align: left">0.480</td>
<td style="vertical-align: top; text-align: left">0.587</td>
<td style="vertical-align: top; text-align: left">0.663</td>
<td style="vertical-align: top; text-align: left">0.573</td>
<td style="vertical-align: top; text-align: left">0.758</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left">0.476</td>
<td style="vertical-align: top; text-align: left">0.420</td>
<td style="vertical-align: top; text-align: left">0.537</td>
<td style="vertical-align: top; text-align: left">0.580</td>
<td style="vertical-align: top; text-align: left">0.503</td>
<td style="vertical-align: top; text-align: left">0.606</td>
<td style="vertical-align: top; text-align: left">0.482</td>
<td style="vertical-align: top; text-align: left">0.450</td>
<td style="vertical-align: top; text-align: left">0.550</td>
<td style="vertical-align: top; text-align: left">0.642</td>
<td style="vertical-align: top; text-align: left">0.567</td>
<td style="vertical-align: top; text-align: left">0.734</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left">0.444</td>
<td style="vertical-align: top; text-align: left">0.393</td>
<td style="vertical-align: top; text-align: left">0.501</td>
<td style="vertical-align: top; text-align: left">0.560</td>
<td style="vertical-align: top; text-align: left">0.491</td>
<td style="vertical-align: top; text-align: left">0.575</td>
<td style="vertical-align: top; text-align: left">0.449</td>
<td style="vertical-align: top; text-align: left">0.420</td>
<td style="vertical-align: top; text-align: left">0.513</td>
<td style="vertical-align: top; text-align: left">0.624</td>
<td style="vertical-align: top; text-align: left">0.528</td>
<td style="vertical-align: top; text-align: left">0.734</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">35</td>
<td style="vertical-align: top; text-align: left">0.412</td>
<td style="vertical-align: top; text-align: left">0.365</td>
<td style="vertical-align: top; text-align: left">0.465</td>
<td style="vertical-align: top; text-align: left">0.536</td>
<td style="vertical-align: top; text-align: left">0.469</td>
<td style="vertical-align: top; text-align: left">0.559</td>
<td style="vertical-align: top; text-align: left">0.417</td>
<td style="vertical-align: top; text-align: left">0.390</td>
<td style="vertical-align: top; text-align: left">0.476</td>
<td style="vertical-align: top; text-align: left">0.604</td>
<td style="vertical-align: top; text-align: left">0.494</td>
<td style="vertical-align: top; text-align: left">0.711</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.381</td>
<td style="vertical-align: top; text-align: left">0.337</td>
<td style="vertical-align: top; text-align: left">0.430</td>
<td style="vertical-align: top; text-align: left">0.514</td>
<td style="vertical-align: top; text-align: left">0.441</td>
<td style="vertical-align: top; text-align: left">0.544</td>
<td style="vertical-align: top; text-align: left">0.385</td>
<td style="vertical-align: top; text-align: left">0.361</td>
<td style="vertical-align: top; text-align: left">0.440</td>
<td style="vertical-align: top; text-align: left">0.577</td>
<td style="vertical-align: top; text-align: left">0.459</td>
<td style="vertical-align: top; text-align: left">0.676</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">45</td>
<td style="vertical-align: top; text-align: left">0.349</td>
<td style="vertical-align: top; text-align: left">0.309</td>
<td style="vertical-align: top; text-align: left">0.394</td>
<td style="vertical-align: top; text-align: left">0.490</td>
<td style="vertical-align: top; text-align: left">0.414</td>
<td style="vertical-align: top; text-align: left">0.521</td>
<td style="vertical-align: top; text-align: left">0.353</td>
<td style="vertical-align: top; text-align: left">0.330</td>
<td style="vertical-align: top; text-align: left">0.404</td>
<td style="vertical-align: top; text-align: left">0.551</td>
<td style="vertical-align: top; text-align: left">0.414</td>
<td style="vertical-align: top; text-align: left">0.655</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">50</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.318</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.281</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.359</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.463</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.382</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.485</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.321</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.301</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.367</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.527</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.381</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.634</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_info1172_tab_004">
<label>Table 4</label>
<caption>
<p><inline-formula id="j_info1172_ineq_057"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{E}}$]]></tex-math></alternatives></inline-formula>, Q1 and Q3 metrics, with and without DPM, evaluated on the results for the Airplanes20 and Faces20 test data set types.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="3" style="vertical-align: middle; text-align: left; border-top: solid thin; border-bottom: solid thin">%</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Airplanes20</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Faces20</td>
</tr>
<tr>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">Without DPM</td>
<td colspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">With DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">AVG</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q1</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Q3</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">0</td>
<td style="vertical-align: top; text-align: left">0.671</td>
<td style="vertical-align: top; text-align: left">0.607</td>
<td style="vertical-align: top; text-align: left">0.701</td>
<td style="vertical-align: top; text-align: left">0.705</td>
<td style="vertical-align: top; text-align: left">0.644</td>
<td style="vertical-align: top; text-align: left">0.757</td>
<td style="vertical-align: top; text-align: left">0.668</td>
<td style="vertical-align: top; text-align: left">0.604</td>
<td style="vertical-align: top; text-align: left">0.710</td>
<td style="vertical-align: top; text-align: left">0.718</td>
<td style="vertical-align: top; text-align: left">0.652</td>
<td style="vertical-align: top; text-align: left">0.763</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">0.637</td>
<td style="vertical-align: top; text-align: left">0.576</td>
<td style="vertical-align: top; text-align: left">0.666</td>
<td style="vertical-align: top; text-align: left">0.675</td>
<td style="vertical-align: top; text-align: left">0.614</td>
<td style="vertical-align: top; text-align: left">0.753</td>
<td style="vertical-align: top; text-align: left">0.634</td>
<td style="vertical-align: top; text-align: left">0.573</td>
<td style="vertical-align: top; text-align: left">0.674</td>
<td style="vertical-align: top; text-align: left">0.700</td>
<td style="vertical-align: top; text-align: left">0.640</td>
<td style="vertical-align: top; text-align: left">0.752</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.604</td>
<td style="vertical-align: top; text-align: left">0.545</td>
<td style="vertical-align: top; text-align: left">0.630</td>
<td style="vertical-align: top; text-align: left">0.648</td>
<td style="vertical-align: top; text-align: left">0.589</td>
<td style="vertical-align: top; text-align: left">0.724</td>
<td style="vertical-align: top; text-align: left">0.601</td>
<td style="vertical-align: top; text-align: left">0.543</td>
<td style="vertical-align: top; text-align: left">0.639</td>
<td style="vertical-align: top; text-align: left">0.682</td>
<td style="vertical-align: top; text-align: left">0.635</td>
<td style="vertical-align: top; text-align: left">0.744</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">15</td>
<td style="vertical-align: top; text-align: left">0.570</td>
<td style="vertical-align: top; text-align: left">0.516</td>
<td style="vertical-align: top; text-align: left">0.596</td>
<td style="vertical-align: top; text-align: left">0.618</td>
<td style="vertical-align: top; text-align: left">0.561</td>
<td style="vertical-align: top; text-align: left">0.688</td>
<td style="vertical-align: top; text-align: left">0.567</td>
<td style="vertical-align: top; text-align: left">0.513</td>
<td style="vertical-align: top; text-align: left">0.603</td>
<td style="vertical-align: top; text-align: left">0.663</td>
<td style="vertical-align: top; text-align: left">0.622</td>
<td style="vertical-align: top; text-align: left">0.733</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">0.537</td>
<td style="vertical-align: top; text-align: left">0.485</td>
<td style="vertical-align: top; text-align: left">0.560</td>
<td style="vertical-align: top; text-align: left">0.589</td>
<td style="vertical-align: top; text-align: left">0.533</td>
<td style="vertical-align: top; text-align: left">0.654</td>
<td style="vertical-align: top; text-align: left">0.534</td>
<td style="vertical-align: top; text-align: left">0.483</td>
<td style="vertical-align: top; text-align: left">0.568</td>
<td style="vertical-align: top; text-align: left">0.645</td>
<td style="vertical-align: top; text-align: left">0.594</td>
<td style="vertical-align: top; text-align: left">0.723</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left">0.503</td>
<td style="vertical-align: top; text-align: left">0.455</td>
<td style="vertical-align: top; text-align: left">0.526</td>
<td style="vertical-align: top; text-align: left">0.558</td>
<td style="vertical-align: top; text-align: left">0.503</td>
<td style="vertical-align: top; text-align: left">0.616</td>
<td style="vertical-align: top; text-align: left">0.501</td>
<td style="vertical-align: top; text-align: left">0.453</td>
<td style="vertical-align: top; text-align: left">0.532</td>
<td style="vertical-align: top; text-align: left">0.625</td>
<td style="vertical-align: top; text-align: left">0.570</td>
<td style="vertical-align: top; text-align: left">0.714</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left">0.470</td>
<td style="vertical-align: top; text-align: left">0.425</td>
<td style="vertical-align: top; text-align: left">0.491</td>
<td style="vertical-align: top; text-align: left">0.526</td>
<td style="vertical-align: top; text-align: left">0.471</td>
<td style="vertical-align: top; text-align: left">0.590</td>
<td style="vertical-align: top; text-align: left">0.467</td>
<td style="vertical-align: top; text-align: left">0.422</td>
<td style="vertical-align: top; text-align: left">0.497</td>
<td style="vertical-align: top; text-align: left">0.605</td>
<td style="vertical-align: top; text-align: left">0.542</td>
<td style="vertical-align: top; text-align: left">0.704</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">35</td>
<td style="vertical-align: top; text-align: left">0.436</td>
<td style="vertical-align: top; text-align: left">0.394</td>
<td style="vertical-align: top; text-align: left">0.456</td>
<td style="vertical-align: top; text-align: left">0.493</td>
<td style="vertical-align: top; text-align: left">0.439</td>
<td style="vertical-align: top; text-align: left">0.550</td>
<td style="vertical-align: top; text-align: left">0.434</td>
<td style="vertical-align: top; text-align: left">0.392</td>
<td style="vertical-align: top; text-align: left">0.462</td>
<td style="vertical-align: top; text-align: left">0.583</td>
<td style="vertical-align: top; text-align: left">0.508</td>
<td style="vertical-align: top; text-align: left">0.691</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.403</td>
<td style="vertical-align: top; text-align: left">0.364</td>
<td style="vertical-align: top; text-align: left">0.421</td>
<td style="vertical-align: top; text-align: left">0.460</td>
<td style="vertical-align: top; text-align: left">0.407</td>
<td style="vertical-align: top; text-align: left">0.513</td>
<td style="vertical-align: top; text-align: left">0.401</td>
<td style="vertical-align: top; text-align: left">0.362</td>
<td style="vertical-align: top; text-align: left">0.426</td>
<td style="vertical-align: top; text-align: left">0.560</td>
<td style="vertical-align: top; text-align: left">0.472</td>
<td style="vertical-align: top; text-align: left">0.678</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">45</td>
<td style="vertical-align: top; text-align: left">0.369</td>
<td style="vertical-align: top; text-align: left">0.334</td>
<td style="vertical-align: top; text-align: left">0.386</td>
<td style="vertical-align: top; text-align: left">0.428</td>
<td style="vertical-align: top; text-align: left">0.378</td>
<td style="vertical-align: top; text-align: left">0.478</td>
<td style="vertical-align: top; text-align: left">0.367</td>
<td style="vertical-align: top; text-align: left">0.332</td>
<td style="vertical-align: top; text-align: left">0.390</td>
<td style="vertical-align: top; text-align: left">0.537</td>
<td style="vertical-align: top; text-align: left">0.435</td>
<td style="vertical-align: top; text-align: left">0.664</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">50</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.336</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.303</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.351</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.395</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.347</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.434</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.334</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.302</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.355</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.509</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.400</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.640</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As discussed earlier, <inline-formula id="j_info1172_ineq_058"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> is the appropriate evaluation metric for these results, and its value barely changes (see the left part of Table <xref rid="j_info1172_tab_005">5</xref>) as we increase the percentage of unknown test samples. The reason is that our model classifies the known and unknown test samples as efficiently as SVM classifies the known classes. For example, in the case of Airplanes10, <inline-formula id="j_info1172_ineq_059"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.635</mml:mn></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}=0.635$]]></tex-math></alternatives></inline-formula> at the first sampling point (where every test sample is known) and <inline-formula id="j_info1172_ineq_060"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.612</mml:mn></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}=0.612$]]></tex-math></alternatives></inline-formula> at the last sampling point, while it fluctuates only slightly in between.</p>
<table-wrap id="j_info1172_tab_005">
<label>Table 5</label>
<caption>
<p><inline-formula id="j_info1172_ineq_061"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_062"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metrics evaluated on the results of all test data set types; A and F denote Airplanes and Faces, respectively.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="2" style="vertical-align: middle; text-align: left; border-top: solid thin; border-bottom: solid thin">%</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"><inline-formula id="j_info1172_ineq_063"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula></td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"><inline-formula id="j_info1172_ineq_064"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F20</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">0</td>
<td style="vertical-align: top; text-align: left">0.547</td>
<td style="vertical-align: top; text-align: left">0.531</td>
<td style="vertical-align: top; text-align: left">0.635</td>
<td style="vertical-align: top; text-align: left">0.643</td>
<td style="vertical-align: top; text-align: left">0.671</td>
<td style="vertical-align: top; text-align: left">0.668</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left">0.562</td>
<td style="vertical-align: top; text-align: left">0.545</td>
<td style="vertical-align: top; text-align: left">0.632</td>
<td style="vertical-align: top; text-align: left">0.640</td>
<td style="vertical-align: top; text-align: left">0.649</td>
<td style="vertical-align: top; text-align: left">0.663</td>
<td style="vertical-align: top; text-align: left">0.108</td>
<td style="vertical-align: top; text-align: left">0.069</td>
<td style="vertical-align: top; text-align: left">0.126</td>
<td style="vertical-align: top; text-align: left">0.076</td>
<td style="vertical-align: top; text-align: left">0.129</td>
<td style="vertical-align: top; text-align: left">0.143</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left">0.573</td>
<td style="vertical-align: top; text-align: left">0.559</td>
<td style="vertical-align: top; text-align: left">0.632</td>
<td style="vertical-align: top; text-align: left">0.641</td>
<td style="vertical-align: top; text-align: left">0.630</td>
<td style="vertical-align: top; text-align: left">0.660</td>
<td style="vertical-align: top; text-align: left">0.186</td>
<td style="vertical-align: top; text-align: left">0.132</td>
<td style="vertical-align: top; text-align: left">0.235</td>
<td style="vertical-align: top; text-align: left">0.151</td>
<td style="vertical-align: top; text-align: left">0.268</td>
<td style="vertical-align: top; text-align: left">0.263</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">15</td>
<td style="vertical-align: top; text-align: left">0.586</td>
<td style="vertical-align: top; text-align: left">0.576</td>
<td style="vertical-align: top; text-align: left">0.627</td>
<td style="vertical-align: top; text-align: left">0.642</td>
<td style="vertical-align: top; text-align: left">0.609</td>
<td style="vertical-align: top; text-align: left">0.655</td>
<td style="vertical-align: top; text-align: left">0.259</td>
<td style="vertical-align: top; text-align: left">0.198</td>
<td style="vertical-align: top; text-align: left">0.314</td>
<td style="vertical-align: top; text-align: left">0.222</td>
<td style="vertical-align: top; text-align: left">0.365</td>
<td style="vertical-align: top; text-align: left">0.357</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left">0.598</td>
<td style="vertical-align: top; text-align: left">0.587</td>
<td style="vertical-align: top; text-align: left">0.625</td>
<td style="vertical-align: top; text-align: left">0.640</td>
<td style="vertical-align: top; text-align: left">0.589</td>
<td style="vertical-align: top; text-align: left">0.652</td>
<td style="vertical-align: top; text-align: left">0.325</td>
<td style="vertical-align: top; text-align: left">0.251</td>
<td style="vertical-align: top; text-align: left">0.392</td>
<td style="vertical-align: top; text-align: left">0.284</td>
<td style="vertical-align: top; text-align: left">0.445</td>
<td style="vertical-align: top; text-align: left">0.445</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left">0.612</td>
<td style="vertical-align: top; text-align: left">0.602</td>
<td style="vertical-align: top; text-align: left">0.622</td>
<td style="vertical-align: top; text-align: left">0.641</td>
<td style="vertical-align: top; text-align: left">0.568</td>
<td style="vertical-align: top; text-align: left">0.648</td>
<td style="vertical-align: top; text-align: left">0.389</td>
<td style="vertical-align: top; text-align: left">0.308</td>
<td style="vertical-align: top; text-align: left">0.460</td>
<td style="vertical-align: top; text-align: left">0.346</td>
<td style="vertical-align: top; text-align: left">0.511</td>
<td style="vertical-align: top; text-align: left">0.516</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left">0.626</td>
<td style="vertical-align: top; text-align: left">0.615</td>
<td style="vertical-align: top; text-align: left">0.620</td>
<td style="vertical-align: top; text-align: left">0.644</td>
<td style="vertical-align: top; text-align: left">0.545</td>
<td style="vertical-align: top; text-align: left">0.645</td>
<td style="vertical-align: top; text-align: left">0.447</td>
<td style="vertical-align: top; text-align: left">0.364</td>
<td style="vertical-align: top; text-align: left">0.523</td>
<td style="vertical-align: top; text-align: left">0.409</td>
<td style="vertical-align: top; text-align: left">0.562</td>
<td style="vertical-align: top; text-align: left">0.578</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">35</td>
<td style="vertical-align: top; text-align: left">0.640</td>
<td style="vertical-align: top; text-align: left">0.629</td>
<td style="vertical-align: top; text-align: left">0.618</td>
<td style="vertical-align: top; text-align: left">0.648</td>
<td style="vertical-align: top; text-align: left">0.524</td>
<td style="vertical-align: top; text-align: left">0.641</td>
<td style="vertical-align: top; text-align: left">0.500</td>
<td style="vertical-align: top; text-align: left">0.417</td>
<td style="vertical-align: top; text-align: left">0.579</td>
<td style="vertical-align: top; text-align: left">0.467</td>
<td style="vertical-align: top; text-align: left">0.612</td>
<td style="vertical-align: top; text-align: left">0.629</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left">0.652</td>
<td style="vertical-align: top; text-align: left">0.640</td>
<td style="vertical-align: top; text-align: left">0.616</td>
<td style="vertical-align: top; text-align: left">0.646</td>
<td style="vertical-align: top; text-align: left">0.501</td>
<td style="vertical-align: top; text-align: left">0.638</td>
<td style="vertical-align: top; text-align: left">0.547</td>
<td style="vertical-align: top; text-align: left">0.466</td>
<td style="vertical-align: top; text-align: left">0.627</td>
<td style="vertical-align: top; text-align: left">0.517</td>
<td style="vertical-align: top; text-align: left">0.654</td>
<td style="vertical-align: top; text-align: left">0.677</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">45</td>
<td style="vertical-align: top; text-align: left">0.664</td>
<td style="vertical-align: top; text-align: left">0.654</td>
<td style="vertical-align: top; text-align: left">0.614</td>
<td style="vertical-align: top; text-align: left">0.646</td>
<td style="vertical-align: top; text-align: left">0.482</td>
<td style="vertical-align: top; text-align: left">0.635</td>
<td style="vertical-align: top; text-align: left">0.590</td>
<td style="vertical-align: top; text-align: left">0.517</td>
<td style="vertical-align: top; text-align: left">0.674</td>
<td style="vertical-align: top; text-align: left">0.564</td>
<td style="vertical-align: top; text-align: left">0.701</td>
<td style="vertical-align: top; text-align: left">0.719</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">50</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.675</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.663</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.612</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.647</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.461</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.630</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.632</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.563</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.715</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.610</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.739</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.757</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The overall results showed that DPM is a useful technique for identifying unknown test samples. One possible downside of the model is that it is less successful when the number of positive samples per category is small, because DPM cannot then estimate an accurate CDF and inverse CDF; this weakness follows directly from the way the model is composed.</p>
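The dependence on sample size noted above can be illustrated with a minimal sketch of an empirical CDF and its generalized inverse (function names are ours, not the paper's): with few positive samples per category, the step function is coarse, so any model built on it is correspondingly inaccurate.

```python
import math
from bisect import bisect_right

def empirical_cdf(samples):
    """Build a step-function CDF and its generalized inverse from scores.

    Illustrative sketch only: DPM estimates a CDF and an inverse CDF per
    category from the positive training samples, so a small sample yields
    a coarse, inaccurate estimate.
    """
    xs = sorted(samples)
    n = len(xs)

    def cdf(x):
        # Fraction of training scores that are <= x.
        return bisect_right(xs, x) / n

    def inv_cdf(p):
        # Smallest training score whose empirical CDF reaches probability p.
        k = min(n - 1, max(0, math.ceil(p * n) - 1))
        return xs[k]

    return cdf, inv_cdf

# With only 4 positive samples, the CDF can only move in coarse 0.25 steps:
cdf, inv_cdf = empirical_cdf([0.2, 0.4, 0.6, 0.8])
```

The coarse quantization of both functions is exactly what limits DPM when a category has few positive examples.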
</sec>
<sec id="j_info1172_s_011">
<label>6.3</label>
<title>Comparison with the Weibull-Calibrated SVM (W-SVM)</title>
<p>In this subsection we compare our proposed Double Probability Model with the state-of-the-art W-SVM introduced by Scheirer <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>). We tested the W-SVM on each data set (120 in total) and evaluated the <inline-formula id="j_info1172_ineq_065"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_066"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metrics, then compared them to the ones given by DPM. Figures <xref rid="j_info1172_fig_002">2</xref> and <xref rid="j_info1172_fig_003">3</xref> show the <inline-formula id="j_info1172_ineq_067"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and the <inline-formula id="j_info1172_ineq_068"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metrics, respectively. As the diagrams below show, DPM performs better than W-SVM on every data set type, which suggests that it would most likely also outperform the other techniques tested in Scheirer <italic>et al.</italic> (<xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>). Table <xref rid="j_info1172_tab_006">6</xref> summarizes the comparison by presenting the values of <inline-formula id="j_info1172_ineq_069"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_070"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> for each data set type given by DPM and W-SVM.</p>
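The two metrics used throughout this comparison can be sketched as follows. This is our plausible reading of the definitions given earlier in the paper, not a verbatim reproduction: per the caption of Fig. 3, P_filter is the fraction of truly unknown test samples that are correctly detected as unknown, and Accuracy_O is taken here as overall accuracy over known and unknown test samples together.

```python
def p_filter(y_true, y_pred, unknown="unknown"):
    """Fraction of truly unknown test samples detected as unknown
    (a sketch matching the description of P_filter in the text)."""
    unknown_idx = [i for i, t in enumerate(y_true) if t == unknown]
    if not unknown_idx:
        return 0.0
    hits = sum(1 for i in unknown_idx if y_pred[i] == unknown)
    return hits / len(unknown_idx)

def accuracy_open(y_true, y_pred):
    """Overall accuracy over known and unknown test samples together,
    as a plausible reading of Accuracy_O."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, with two known-class samples and two unknown samples where one unknown is caught and one known sample is misrejected, both metrics evaluate to 0.5.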
<p>The W-SVM is built from <italic>θ</italic> one-class SVMs trained on positive examples and <italic>θ</italic> one-against-all binary SVMs, where <italic>θ</italic> denotes the number of classes. It has two parameters: one of them is <inline-formula id="j_info1172_ineq_071"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\delta _{\tau }}$]]></tex-math></alternatives></inline-formula> (fixed to 0.001 for all experiments in Scheirer <italic>et al.</italic>, <xref ref-type="bibr" rid="j_info1172_ref_022">2014</xref>), which adjusts the minimum threshold for considering data points in the CAP model, and <inline-formula id="j_info1172_ineq_072"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\delta _{R}}$]]></tex-math></alternatives></inline-formula> is the level of confidence needed in the estimation of W-SVM. It is important to note that W-SVM was introduced and validated on the LETTER and MNIST data sets, where the recognition rate is higher than in image collections containing photos of outdoor, natural scenes. Therefore, we suspected that parameter optimization was necessary before testing the W-SVM on our data sets. We used a separate 10-class data set for the optimization and found that <inline-formula id="j_info1172_ineq_073"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn></mml:math><tex-math><![CDATA[${\delta _{\tau }}=0.1$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_074"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn></mml:math><tex-math><![CDATA[${\delta _{R}}=0.1$]]></tex-math></alternatives></inline-formula> is an appropriate setting for this type of image (the default setting of W-SVM is <inline-formula id="j_info1172_ineq_075"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.001</mml:mn></mml:math><tex-math><![CDATA[${\delta _{\tau }}=0.001$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_076"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn></mml:math><tex-math><![CDATA[${\delta _{R}}=0.1$]]></tex-math></alternatives></inline-formula>). We decided not to modify the value of <inline-formula id="j_info1172_ineq_077"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\delta _{R}}$]]></tex-math></alternatives></inline-formula>, because by systematically increasing or decreasing this parameter, the <inline-formula id="j_info1172_ineq_078"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_079"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> did not converge to a global maximum. On the other hand, increasing <inline-formula id="j_info1172_ineq_080"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\delta _{\tau }}$]]></tex-math></alternatives></inline-formula> resulted in a better detection rate on the unknown test samples up to a point (<inline-formula id="j_info1172_ineq_081"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn></mml:math><tex-math><![CDATA[${\delta _{\tau }}=0.1$]]></tex-math></alternatives></inline-formula>), beyond which the number of false positive detections became high and started to decrease both the <inline-formula id="j_info1172_ineq_082"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_083"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metrics. In Fig. <xref rid="j_info1172_fig_002">2</xref> we present the W-SVM results produced by both the default and the optimized settings, to demonstrate the difference between these options; Fig. <xref rid="j_info1172_fig_003">3</xref> and Table <xref rid="j_info1172_tab_006">6</xref> therefore show only the results given by the optimized W-SVM.</p>
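The role of the two thresholds described above can be summarized in a simplified schematic of the W-SVM decision for a single test sample. This is our reading of the description in the text, not the exact rule of Scheirer et al. (2014); all function and argument names here are ours, and the actual method combines the CAP probability with a Weibull-calibrated binary-SVM posterior in a more elaborate way.

```python
def wsvm_predict(p_cap, p_posterior, delta_tau=0.1, delta_r=0.1):
    """Schematic W-SVM decision for one test sample (our simplification).

    p_cap:       dict class -> one-class (CAP) probability
    p_posterior: dict class -> calibrated binary-SVM posterior
    A class is a candidate only if its CAP probability reaches delta_tau;
    the best candidate must also reach the confidence level delta_r,
    otherwise the sample is labelled unknown.
    """
    candidates = {c: p_posterior[c]
                  for c, p in p_cap.items() if p >= delta_tau}
    if not candidates:
        return "unknown"
    best = max(candidates, key=candidates.get)
    return best if candidates[best] >= delta_r else "unknown"
```

This schematic also makes the observed trade-off plausible: raising delta_tau filters out more unknown samples, but past a point it disqualifies true-class candidates and produces the false positive "unknown" detections noted above.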
<fig id="j_info1172_fig_002">
<label>Fig. 2</label>
<caption>
<p>Results obtained on the six different data set types by evaluating DPM and CAP W-SVM with <inline-formula id="j_info1172_ineq_084"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.001</mml:mn></mml:math><tex-math><![CDATA[${\delta _{\tau }}=0.001$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_085"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">δ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">τ</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1</mml:mn></mml:math><tex-math><![CDATA[${\delta _{\tau }}=0.1$]]></tex-math></alternatives></inline-formula> parameter settings; represented as solid, dashed and dotted lines, respectively. The <inline-formula id="j_info1172_ineq_086"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> (average <inline-formula id="j_info1172_ineq_087"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> of 20 tests) is on the <italic>y</italic>-axis and the percentage of the unknown test samples is on the <italic>x</italic>-axis.</p>
</caption>
<graphic xlink:href="info1172_g002.jpg"/>
</fig>
<fig id="j_info1172_fig_003">
<label>Fig. 3</label>
<caption>
<p>Percentages of the correctly detected unknown test samples obtained on the six different data set types by evaluating DPM and CAP W-SVM; represented as solid and dotted lines, respectively. The <inline-formula id="j_info1172_ineq_088"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> (average <inline-formula id="j_info1172_ineq_089"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> of 20 tests) is on the <italic>y</italic>-axis and the percentage of the unknown test samples is on the <italic>x</italic>-axis.</p>
</caption>
<graphic xlink:href="info1172_g003.jpg"/>
</fig>
<table-wrap id="j_info1172_tab_006">
<label>Table 6</label>
<caption>
<p>Comparison of <inline-formula id="j_info1172_ineq_090"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_info1172_ineq_091"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula> metrics obtained by the DPM and W-SVM methods on all types of test data sets; A and F denote Airplanes and Faces, respectively.</p>
</caption>
<table>
<thead>
<tr>
<td rowspan="2" style="vertical-align: middle; text-align: left; border-top: solid thin; border-bottom: solid thin">%</td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"><inline-formula id="j_info1172_ineq_092"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">Accuracy</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">O</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\mathit{Accuracy}_{O}}$]]></tex-math></alternatives></inline-formula></td>
<td colspan="6" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"><inline-formula id="j_info1172_ineq_093"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">filter</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${P_{\mathit{filter}}}$]]></tex-math></alternatives></inline-formula></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin">Method</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F5</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F10</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">A20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">F20</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">0</td>
<td style="vertical-align: top; text-align: left"><bold>0.547</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.531</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.635</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.643</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.671</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.668</bold></td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.495</td>
<td style="vertical-align: top; text-align: left">0.420</td>
<td style="vertical-align: top; text-align: left">0.563</td>
<td style="vertical-align: top; text-align: left">0.486</td>
<td style="vertical-align: top; text-align: left">0.513</td>
<td style="vertical-align: top; text-align: left">0.505</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">0.000</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">5</td>
<td style="vertical-align: top; text-align: left"><bold>0.562</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.545</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.632</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.640</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.649</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.663</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.108</bold></td>
<td style="vertical-align: top; text-align: left">0.069</td>
<td style="vertical-align: top; text-align: left"><bold>0.126</bold></td>
<td style="vertical-align: top; text-align: left">0.076</td>
<td style="vertical-align: top; text-align: left"><bold>0.129</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.143</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.499</td>
<td style="vertical-align: top; text-align: left">0.432</td>
<td style="vertical-align: top; text-align: left">0.554</td>
<td style="vertical-align: top; text-align: left">0.480</td>
<td style="vertical-align: top; text-align: left">0.500</td>
<td style="vertical-align: top; text-align: left">0.502</td>
<td style="vertical-align: top; text-align: left">0.088</td>
<td style="vertical-align: top; text-align: left"><bold>0.073</bold></td>
<td style="vertical-align: top; text-align: left">0.095</td>
<td style="vertical-align: top; text-align: left"><bold>0.080</bold></td>
<td style="vertical-align: top; text-align: left">0.063</td>
<td style="vertical-align: top; text-align: left">0.126</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">10</td>
<td style="vertical-align: top; text-align: left"><bold>0.573</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.559</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.632</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.641</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.630</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.660</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.186</bold></td>
<td style="vertical-align: top; text-align: left">0.132</td>
<td style="vertical-align: top; text-align: left"><bold>0.235</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.151</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.268</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.263</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.502</td>
<td style="vertical-align: top; text-align: left">0.444</td>
<td style="vertical-align: top; text-align: left">0.546</td>
<td style="vertical-align: top; text-align: left">0.475</td>
<td style="vertical-align: top; text-align: left">0.488</td>
<td style="vertical-align: top; text-align: left">0.499</td>
<td style="vertical-align: top; text-align: left">0.162</td>
<td style="vertical-align: top; text-align: left"><bold>0.139</bold></td>
<td style="vertical-align: top; text-align: left">0.170</td>
<td style="vertical-align: top; text-align: left">0.147</td>
<td style="vertical-align: top; text-align: left">0.121</td>
<td style="vertical-align: top; text-align: left">0.228</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">15</td>
<td style="vertical-align: top; text-align: left"><bold>0.586</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.576</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.627</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.642</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.609</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.655</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.259</bold></td>
<td style="vertical-align: top; text-align: left">0.198</td>
<td style="vertical-align: top; text-align: left"><bold>0.314</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.222</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.365</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.357</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.505</td>
<td style="vertical-align: top; text-align: left">0.455</td>
<td style="vertical-align: top; text-align: left">0.539</td>
<td style="vertical-align: top; text-align: left">0.469</td>
<td style="vertical-align: top; text-align: left">0.476</td>
<td style="vertical-align: top; text-align: left">0.497</td>
<td style="vertical-align: top; text-align: left">0.230</td>
<td style="vertical-align: top; text-align: left"><bold>0.199</bold></td>
<td style="vertical-align: top; text-align: left">0.236</td>
<td style="vertical-align: top; text-align: left">0.210</td>
<td style="vertical-align: top; text-align: left">0.175</td>
<td style="vertical-align: top; text-align: left">0.315</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">20</td>
<td style="vertical-align: top; text-align: left"><bold>0.598</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.587</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.625</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.640</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.589</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.652</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.325</bold></td>
<td style="vertical-align: top; text-align: left">0.251</td>
<td style="vertical-align: top; text-align: left"><bold>0.392</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.284</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.445</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.445</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.509</td>
<td style="vertical-align: top; text-align: left">0.465</td>
<td style="vertical-align: top; text-align: left">0.531</td>
<td style="vertical-align: top; text-align: left">0.464</td>
<td style="vertical-align: top; text-align: left">0.464</td>
<td style="vertical-align: top; text-align: left">0.494</td>
<td style="vertical-align: top; text-align: left">0.295</td>
<td style="vertical-align: top; text-align: left"><bold>0.254</bold></td>
<td style="vertical-align: top; text-align: left">0.295</td>
<td style="vertical-align: top; text-align: left">0.266</td>
<td style="vertical-align: top; text-align: left">0.227</td>
<td style="vertical-align: top; text-align: left">0.390</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">25</td>
<td style="vertical-align: top; text-align: left"><bold>0.612</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.602</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.622</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.641</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.568</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.648</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.389</bold></td>
<td style="vertical-align: top; text-align: left">0.308</td>
<td style="vertical-align: top; text-align: left"><bold>0.460</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.346</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.511</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.516</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.513</td>
<td style="vertical-align: top; text-align: left">0.476</td>
<td style="vertical-align: top; text-align: left">0.523</td>
<td style="vertical-align: top; text-align: left">0.458</td>
<td style="vertical-align: top; text-align: left">0.451</td>
<td style="vertical-align: top; text-align: left">0.491</td>
<td style="vertical-align: top; text-align: left">0.355</td>
<td style="vertical-align: top; text-align: left"><bold>0.310</bold></td>
<td style="vertical-align: top; text-align: left">0.349</td>
<td style="vertical-align: top; text-align: left">0.318</td>
<td style="vertical-align: top; text-align: left">0.277</td>
<td style="vertical-align: top; text-align: left">0.457</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">30</td>
<td style="vertical-align: top; text-align: left"><bold>0.626</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.615</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.620</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.644</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.545</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.645</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.447</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.364</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.523</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.409</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.562</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.578</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.516</td>
<td style="vertical-align: top; text-align: left">0.487</td>
<td style="vertical-align: top; text-align: left">0.515</td>
<td style="vertical-align: top; text-align: left">0.452</td>
<td style="vertical-align: top; text-align: left">0.439</td>
<td style="vertical-align: top; text-align: left">0.488</td>
<td style="vertical-align: top; text-align: left">0.412</td>
<td style="vertical-align: top; text-align: left">0.362</td>
<td style="vertical-align: top; text-align: left">0.399</td>
<td style="vertical-align: top; text-align: left">0.368</td>
<td style="vertical-align: top; text-align: left">0.325</td>
<td style="vertical-align: top; text-align: left">0.516</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">35</td>
<td style="vertical-align: top; text-align: left"><bold>0.640</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.629</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.618</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.648</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.524</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.641</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.500</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.417</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.579</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.467</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.612</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.629</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.520</td>
<td style="vertical-align: top; text-align: left">0.498</td>
<td style="vertical-align: top; text-align: left">0.507</td>
<td style="vertical-align: top; text-align: left">0.447</td>
<td style="vertical-align: top; text-align: left">0.427</td>
<td style="vertical-align: top; text-align: left">0.485</td>
<td style="vertical-align: top; text-align: left">0.465</td>
<td style="vertical-align: top; text-align: left">0.413</td>
<td style="vertical-align: top; text-align: left">0.447</td>
<td style="vertical-align: top; text-align: left">0.415</td>
<td style="vertical-align: top; text-align: left">0.372</td>
<td style="vertical-align: top; text-align: left">0.569</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">40</td>
<td style="vertical-align: top; text-align: left"><bold>0.652</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.640</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.616</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.646</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.501</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.638</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.547</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.466</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.627</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.517</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.654</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.677</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.523</td>
<td style="vertical-align: top; text-align: left">0.509</td>
<td style="vertical-align: top; text-align: left">0.499</td>
<td style="vertical-align: top; text-align: left">0.442</td>
<td style="vertical-align: top; text-align: left">0.415</td>
<td style="vertical-align: top; text-align: left">0.483</td>
<td style="vertical-align: top; text-align: left">0.515</td>
<td style="vertical-align: top; text-align: left">0.462</td>
<td style="vertical-align: top; text-align: left">0.491</td>
<td style="vertical-align: top; text-align: left">0.458</td>
<td style="vertical-align: top; text-align: left">0.418</td>
<td style="vertical-align: top; text-align: left">0.617</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">45</td>
<td style="vertical-align: top; text-align: left"><bold>0.664</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.654</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.614</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.646</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.482</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.635</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.590</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.517</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.674</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.564</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.701</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.719</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">0.527</td>
<td style="vertical-align: top; text-align: left">0.520</td>
<td style="vertical-align: top; text-align: left">0.491</td>
<td style="vertical-align: top; text-align: left">0.436</td>
<td style="vertical-align: top; text-align: left">0.402</td>
<td style="vertical-align: top; text-align: left">0.480</td>
<td style="vertical-align: top; text-align: left">0.564</td>
<td style="vertical-align: top; text-align: left">0.509</td>
<td style="vertical-align: top; text-align: left">0.535</td>
<td style="vertical-align: top; text-align: left">0.501</td>
<td style="vertical-align: top; text-align: left">0.464</td>
<td style="vertical-align: top; text-align: left">0.661</td>
<td style="vertical-align: top; text-align: left">W-SVM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">50</td>
<td style="vertical-align: top; text-align: left"><bold>0.675</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.663</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.612</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.647</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.461</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.630</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.632</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.563</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.715</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.610</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.739</bold></td>
<td style="vertical-align: top; text-align: left"><bold>0.757</bold></td>
<td style="vertical-align: top; text-align: left">DPM</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.530</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.531</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.483</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.430</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.390</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.477</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.608</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.554</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.576</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.541</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.509</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.702</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">W-SVM</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>We highlighted the higher value in each pair of rows in the table above; as can be seen, the Double Probability Model outperforms the Weibull-calibrated SVM in almost every case. There are only a few cases in which W-SVM achieved higher metrics, and most of these occurred on the Faces5 data set. From these results we may conclude that our solution is more efficient than W-SVM, and consequently than the other methods that W-SVM had previously surpassed.</p>
</sec>
</sec>
<sec id="j_info1172_s_012">
<label>7</label>
<title>Conclusion</title>
<p>We presented our theoretical model, called the Double Probability Model, which builds on the likelihoods produced by any classifier. For each category, the proposed model constructs a cumulative distribution function over the positive samples and a reverse cumulative distribution function over the negative ones (i.e. over the union of the positive samples of all other classes). Using these functions, DPM estimates whether a test sample comes from an unseen category. In order to avoid zero probabilities, our model applies double smoothing. We tested DPM on image classification, where the representation of the images was based on visual content and we used SVM as the classifier. To evaluate and compare our model we defined new goodness indicators, which are extended and modified (open-set) variants of the general accuracy and are able to measure the influence of DPM. Our experiments showed that the proposed Double Probability Model is able to filter out a large portion of the unknown test samples, thus increasing the classification accuracy, and that it outperformed the prior state-of-the-art W-SVM.</p>
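<p>The decision rule summarized above can be sketched in a few lines. The following Python fragment is an illustrative reading only, not the authors' implementation: the smoothing constant, the threshold, and the way the two distribution functions are combined are assumptions for the sake of the example. It builds a smoothed empirical CDF over the positive-class likelihoods and, equivalently to a reverse CDF, the fraction of negative-class likelihoods lying below the test score; a sample whose combined score falls under the threshold is flagged as coming from an unseen category.</p>

```python
import bisect

def smoothed_ecdf(scores, x, eps=0.001):
    """Smoothed empirical CDF P(S <= x); additive smoothing avoids zero probabilities
    (a stand-in for the double smoothing described in the text)."""
    sorted_scores = sorted(scores)
    k = bisect.bisect_right(sorted_scores, x)
    return (k + eps) / (len(sorted_scores) + 2 * eps)

def dpm_is_known(likelihood, pos_scores, neg_scores, threshold=0.25):
    """Hypothetical DPM-style test: accept a sample as known-class when its
    classifier likelihood sits high within the positive-score distribution
    and above the bulk of the negative-score distribution."""
    f_pos = smoothed_ecdf(pos_scores, likelihood)   # CDF on positive samples
    f_neg = smoothed_ecdf(neg_scores, likelihood)   # = 1 - reverse CDF on negatives
    return f_pos * f_neg >= threshold               # below threshold -> unseen category
```

<p>For instance, with positive training likelihoods clustered around 0.6–0.9 and negative ones around 0.1–0.3, a test likelihood of 0.85 is accepted while 0.15 is rejected as an unknown sample; the threshold would be tuned on validation data in practice.</p>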
</sec>
</body>
<back>
<ref-list id="j_info1172_reflist_001">
<title>References</title>
<ref id="j_info1172_ref_001">
<mixed-citation publication-type="chapter"><string-name><surname>Bauml</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Tapaswi</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Stiefelhagen</surname>, <given-names>R.</given-names></string-name> (<year>2013</year>). <chapter-title>Semi-supervised learning with constraints for person identification in multimedia data</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>, pp. <fpage>3602</fpage>–<lpage>3609</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_002">
<mixed-citation publication-type="chapter"><string-name><surname>Bendale</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Boult</surname>, <given-names>T.</given-names></string-name> (<year>2015</year>). <chapter-title>Towards open world recognition</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>, pp. <fpage>1893</fpage>–<lpage>1902</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_003">
<mixed-citation publication-type="chapter"><string-name><surname>Boser</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Guyon</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Vapnik</surname>, <given-names>V.</given-names></string-name> (<year>1992</year>). <chapter-title>A training algorithm for optimal margin classifier</chapter-title>. In: <source>Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory</source>, pp. <fpage>144</fpage>–<lpage>152</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_004">
<mixed-citation publication-type="chapter"><string-name><surname>Cevikalp</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Triggs</surname>, <given-names>B.</given-names></string-name> (<year>2012</year>). <chapter-title>Efficient object detection using cascades of nearest convex model classifiers</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>, pp. <fpage>886</fpage>–<lpage>893</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_005">
<mixed-citation publication-type="journal"><string-name><surname>Chang</surname>, <given-names>C.-C.</given-names></string-name>, <string-name><surname>Lin</surname>, <given-names>C.-J.</given-names></string-name> (<year>2011</year>). <article-title>LIBSVM: A library for support vector machines</article-title>. <source>ACM Transactions on Intelligent Systems and Technology</source>, <volume>2</volume>(<issue>3</issue>), <fpage>27.1</fpage>–<lpage>27.27</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_006">
<mixed-citation publication-type="chapter"><string-name><surname>Chatfield</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Lempitsky</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Vedaldi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Zisserman</surname>, <given-names>A.</given-names></string-name> (<year>2011</year>). <chapter-title>The devil is in the details: an evaluation of recent feature encoding methods</chapter-title>. In: <source>Proceedings of the 22nd British Machine Vision Conference</source>, pp. <fpage>76.1</fpage>–<lpage>76.12</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_007">
<mixed-citation publication-type="journal"><string-name><surname>Cortes</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Vapnik</surname>, <given-names>V.</given-names></string-name> (<year>1995</year>). <article-title>Support-vector networks</article-title>. <source>Machine Learning</source>, <volume>20</volume>(<issue>3</issue>), <fpage>273</fpage>–<lpage>297</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>Dempster</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Laird</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Rubin</surname>, <given-names>D.</given-names></string-name> (<year>1977</year>). <article-title>Maximum likelihood from incomplete data via the EM algorithm</article-title>. <source>Journal of the Royal Statistical Society</source>, <volume>39</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>38</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_009">
<mixed-citation publication-type="chapter"><string-name><surname>Fei-Fei</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Fergus</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Perona</surname>, <given-names>P.</given-names></string-name> (<year>2004</year>). <chapter-title>Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Generative-Model Based Vision</source>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_010">
<mixed-citation publication-type="chapter"><string-name><surname>Fei-Fei</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Fergus</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Torralba</surname>, <given-names>A.</given-names></string-name> (<year>2007</year>). <chapter-title>Recognizing and learning object categories</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_011">
<mixed-citation publication-type="chapter"><string-name><surname>Harris</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Stephens</surname>, <given-names>M.</given-names></string-name> (<year>1988</year>). <chapter-title>A combined corner and edge detector</chapter-title>. In: <source>Proceedings of the Alvey Vision Conference</source>, pp. <fpage>23.1</fpage>–<lpage>23.6</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Huang</surname>, <given-names>T.-K.</given-names></string-name>, <string-name><surname>Weng</surname>, <given-names>R.C.</given-names></string-name>, <string-name><surname>Lin</surname>, <given-names>C.-J.</given-names></string-name> (<year>2006</year>). <article-title>Generalized Bradley–Terry models and multi-class probability estimates</article-title>. <source>Journal of Machine Learning Research</source>, <volume>7</volume>, <fpage>85</fpage>–<lpage>115</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_013">
<mixed-citation publication-type="chapter"><string-name><surname>Jain</surname>, <given-names>L.P.</given-names></string-name>, <string-name><surname>Scheirer</surname>, <given-names>W.J.</given-names></string-name>, <string-name><surname>Boult</surname>, <given-names>T.E.</given-names></string-name> (<year>2014</year>). <chapter-title>Multi-class open set recognition using probability of inclusion</chapter-title>. In: <source>European Conference on Computer Vision</source>, pp. <fpage>393</fpage>–<lpage>409</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_014">
<mixed-citation publication-type="chapter"><string-name><surname>Lazebnik</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Schmid</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ponce</surname>, <given-names>J.</given-names></string-name> (<year>2006</year>). <chapter-title>Beyond bags of features: spatial pyramid matching for recognizing natural scene categories</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>, Vol. <volume>2</volume>, pp. <fpage>2169</fpage>–<lpage>2178</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_015">
<mixed-citation publication-type="journal"><string-name><surname>Lowe</surname>, <given-names>D.G.</given-names></string-name> (<year>2004</year>). <article-title>Distinctive image features from scale-invariant keypoints</article-title>. <source>International Journal of Computer Vision</source>, <volume>60</volume>(<issue>2</issue>), <fpage>91</fpage>–<lpage>110</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_016">
<mixed-citation publication-type="chapter"><string-name><surname>MacQueen</surname>, <given-names>J.</given-names></string-name> (<year>1967</year>). <chapter-title>Some methods for classification and analysis of multivariate observations</chapter-title>. In: <source>Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability</source>, Vol. <volume>1</volume>, pp. <fpage>281</fpage>–<lpage>297</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_017">
<mixed-citation publication-type="journal"><string-name><surname>Mikolajczyk</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Schmid</surname>, <given-names>C.</given-names></string-name> (<year>2004</year>). <article-title>Scale &amp; affine invariant interest point detectors</article-title>. <source>International Journal of Computer Vision</source>, <volume>60</volume>(<issue>1</issue>), <fpage>63</fpage>–<lpage>86</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_018">
<mixed-citation publication-type="chapter"><string-name><surname>Perronnin</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Dance</surname>, <given-names>C.</given-names></string-name> (<year>2007</year>). <chapter-title>Fisher kernel on visual vocabularies for image categorization</chapter-title>. In: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_019">
<mixed-citation publication-type="chapter"><string-name><surname>Platt</surname>, <given-names>J.</given-names></string-name> (<year>2000</year>). <chapter-title>Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods</chapter-title>. In: <source>Advances in Large Margin Classifiers</source>, pp. <fpage>61</fpage>–<lpage>74</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_020">
<mixed-citation publication-type="chapter"><string-name><surname>Reynolds</surname>, <given-names>D.A.</given-names></string-name> (<year>2009</year>). <chapter-title>Gaussian mixture models</chapter-title>. In: <source>Encyclopedia of Biometric Recognition</source>, pp. <fpage>659</fpage>–<lpage>663</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_021">
<mixed-citation publication-type="journal"><string-name><surname>Scheirer</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Rocha</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Sapkota</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Boult</surname>, <given-names>T.E.</given-names></string-name> (<year>2013</year>). <article-title>Towards open set recognition</article-title>. <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>36</volume>(<issue>7</issue>), <fpage>1757</fpage>–<lpage>1772</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Scheirer</surname>, <given-names>W.J.</given-names></string-name>, <string-name><surname>Jain</surname>, <given-names>L.P.</given-names></string-name>, <string-name><surname>Boult</surname>, <given-names>T.E.</given-names></string-name> (<year>2014</year>). <article-title>Probability models for open set recognition</article-title>. <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>36</volume>(<issue>11</issue>), <fpage>2317</fpage>–<lpage>2324</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_023">
<mixed-citation publication-type="journal"><string-name><surname>Schölkopf</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Platt</surname>, <given-names>J.C.</given-names></string-name>, <string-name><surname>Shawe-Taylor</surname>, <given-names>J.C.</given-names></string-name>, <string-name><surname>Smola</surname>, <given-names>A.J.</given-names></string-name>, <string-name><surname>Williamson</surname>, <given-names>R.C.</given-names></string-name> (<year>2001</year>). <article-title>Estimating the support of a high-dimensional distribution</article-title>. <source>Neural Computation</source>, <volume>13</volume>(<issue>7</issue>), <fpage>1443</fpage>–<lpage>1471</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_024">
<mixed-citation publication-type="journal"><string-name><surname>Szűcs</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Henk</surname>, <given-names>Zs.</given-names></string-name> (<year>2015</year>). <article-title>Active clustering based classification for cost effective prediction in few labeled data problem</article-title>. <source>Economy Informatics</source>, <volume>15</volume>(<issue>1</issue>), <fpage>5</fpage>–<lpage>13</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Szűcs</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Marosvári</surname>, <given-names>B.</given-names></string-name> (<year>2015</year>). <article-title>Half and fully automatic character identification in movies based on face detection</article-title>. <source>Acta Universitatis Sapientiae, Electrical and Mechanical Engineering</source>, <volume>7</volume>, <fpage>80</fpage>–<lpage>92</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_026">
<mixed-citation publication-type="journal"><string-name><surname>Tax</surname>, <given-names>D.M.J.</given-names></string-name>, <string-name><surname>Duin</surname>, <given-names>R.P.W.</given-names></string-name> (<year>2004</year>). <article-title>Support vector data description</article-title>. <source>Machine Learning</source>, <volume>54</volume>, <fpage>45</fpage>–<lpage>66</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_027">
<mixed-citation publication-type="other"><string-name><surname>Tomasi</surname>, <given-names>C.</given-names></string-name> (<year>2004</year>). <source>Estimating Gaussian Mixture Densities with EM – A Tutorial</source>. Technical report, Duke University.</mixed-citation>
</ref>
<ref id="j_info1172_ref_028">
<mixed-citation publication-type="chapter"><string-name><surname>Zhang</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Metaxas</surname>, <given-names>D.</given-names></string-name> (<year>2006</year>). <chapter-title>RO-SVM: Support vector machine with reject option for image categorization</chapter-title>. In: <source>Proceedings of the 17th British Machine Vision Conference</source>, pp. <fpage>1209</fpage>–<lpage>1218</lpage>.</mixed-citation>
</ref>
<ref id="j_info1172_ref_029">
<mixed-citation publication-type="journal"><string-name><surname>Zhu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Goldberg</surname>, <given-names>A.B.</given-names></string-name> (<year>2009</year>). <article-title>Introduction to semi-supervised learning</article-title>. <source>Synthesis Lectures on Artificial Intelligence and Machine Learning</source>, <volume>3</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>130</lpage>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>