<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn>
<issn pub-type="ppub">0868-4952</issn>
<issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFOR419</article-id>
<article-id pub-id-type="doi">10.15388/20-INFOR419</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Kriging Predictor for Facial Emotion Recognition Using Numerical Proximities of Human Emotions</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Karbauskaitė</surname><given-names>Rasa</given-names></name><email xlink:href="rasa.karbauskaite@mif.vu.lt">rasa.karbauskaite@mif.vu.lt</email><xref ref-type="aff" rid="j_infor419_aff_001">1</xref><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>R. Karbauskaitė</bold> is a researcher in the Cognitive Computing Group at the Institute of Data Science and Digital Technologies of Vilnius University. She received a bachelor’s degree in mathematics and informatics (2003) and a master’s degree in informatics (2005) from Vilnius Pedagogical University, and a PhD in informatics from Vytautas Magnus University and the Institute of Mathematics and Informatics (2010). Her research interests include multidimensional data visualization, estimation of visualization quality, dimensionality reduction, estimation of the intrinsic dimensionality of high-dimensional data, facial emotion recognition, and data clustering.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Sakalauskas</surname><given-names>Leonidas</given-names></name><email xlink:href="leonidas.sakalauskas@mif.vu.lt">leonidas.sakalauskas@mif.vu.lt</email><xref ref-type="aff" rid="j_infor419_aff_001">1</xref><xref ref-type="aff" rid="j_infor419_aff_002">2</xref><bio>
<p><bold>L. Sakalauskas</bold>, habil. dr. (2000), prof. (2006), research interests: data mining, operations research, stochastic optimization, statistical modelling. He developed the stochastic optimization approach by Monte Carlo series and studied its convergence, developed the theory of vectorial fractal Brownian fields with implementation for surrogate modelling, and developed a concept of modelling and simulation of social-behavioural phenomena. He has written more than 250 scientific publications, 70 of which are referenced in the Clarivate Analytics DB, has supervised 15 PhD theses, and has organised more than 20 scientific conferences.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Dzemyda</surname><given-names>Gintautas</given-names></name><email xlink:href="gintautas.dzemyda@mif.vu.lt">gintautas.dzemyda@mif.vu.lt</email><xref ref-type="aff" rid="j_infor419_aff_001">1</xref><bio>
<p><bold>G. Dzemyda</bold> received the doctoral degree in technical sciences (PhD) in 1984 and the degree of Doctor Habilitus in 1997 from Kaunas University of Technology. He was conferred the title of professor at Kaunas University of Technology (1998) and Vilnius University (2018). He is currently employed at Vilnius University, Institute of Data Science and Digital Technologies, as the director of the Institute, the head of the Cognitive Computing Group, and Professor and Principal Researcher. His research interests cover visualization of multidimensional data, optimization theory and applications, data mining, multiple criteria decision support, neural networks, and image analysis. He is the author of more than 260 scientific publications, two monographs, and five textbooks.</p></bio>
</contrib>
<aff id="j_infor419_aff_001"><label>1</label>Institute of Data Science and Digital Technologies, <institution>Vilnius University</institution>, Akademijos 4, LT-08412, Vilnius, <country>Lithuania</country></aff>
<aff id="j_infor419_aff_002"><label>2</label><institution>Vilnius Gediminas Technical University</institution>, Saulėtekio al. 11, LT-10223, Vilnius, <country>Lithuania</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2020</year></pub-date><pub-date pub-type="epub"><day>2</day><month>6</month><year>2020</year></pub-date><volume>31</volume><issue>2</issue><fpage>249</fpage><lpage>275</lpage>
<history>
<date date-type="received"><month>1</month><year>2020</year></date>
<date date-type="accepted"><month>5</month><year>2020</year></date>
</history>
<permissions><copyright-statement>© 2020 Vilnius University</copyright-statement><copyright-year>2020</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Emotion recognition from facial expressions has gained much interest over the last few decades. In the literature, the common approach to facial emotion recognition (FER) consists of these steps: image pre-processing, face detection, facial feature extraction, and facial expression classification (recognition). We have developed a method for FER that differs substantially from this common approach. Our method is based on the dimensional model of emotions as well as on the kriging predictor of a Fractional Brownian Vector Field. The classification problem related to the recognition of facial emotions is formulated and solved. The relationship between different emotions is estimated by expert psychologists by placing different emotions as points on the plane. The goal is to obtain an estimate of a new picture’s emotion on the plane by kriging and to determine which emotion, identified by the psychologists, is the closest one. Seven basic emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Neutral</italic>) have been chosen. A classification accuracy of approximately 50% over the seven classes has been obtained when the decision is made on the basis of the closest basic emotion. It has been ascertained that the kriging predictor is suitable for facial emotion recognition in the case of small sets of pictures. More sophisticated classification strategies, in which the basic emotions are grouped, may increase the accuracy.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>facial emotion recognition</kwd>
<kwd>Fractional Brownian Vector Field</kwd>
<kwd>kriging predictor</kwd>
<kwd>dimensional models of emotions</kwd>
<kwd>classifier</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="j_infor419_s_001">
<label>1</label>
<title>Introduction</title>
<p>Recently, a fast growth of emotion recognition research has been observed in various types of communication such as text (Shivhare and Khethawat, <xref ref-type="bibr" rid="j_infor419_ref_055">2012</xref>; Calvo and Kim, <xref ref-type="bibr" rid="j_infor419_ref_004">2013</xref>; Ramalingam <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_045">2018</xref>), speech (Tamulevičius <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_059">2017</xref>, <xref ref-type="bibr" rid="j_infor419_ref_060">2019</xref>; Sailunaz <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_050">2018</xref>), body gestures (Stathopoulou and Tsihrintzis, <xref ref-type="bibr" rid="j_infor419_ref_057">2011</xref>; Metcalfe <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_036">2019</xref>), and facial expressions (Revina and Emmanuel, <xref ref-type="bibr" rid="j_infor419_ref_047">2018</xref>; Ko, <xref ref-type="bibr" rid="j_infor419_ref_028">2018</xref>; Shao and Qian, <xref ref-type="bibr" rid="j_infor419_ref_053">2019</xref>; Sharma <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_054">2019</xref>).</p>
<p>Facial expressions are one of the most important means of interpersonal communication, since a facial expression says a lot without speaking. Therefore, research on facial emotions has received much attention in recent decades in applications in the perceptual and cognitive sciences (Purificación and Pablo, <xref ref-type="bibr" rid="j_infor419_ref_044">2019</xref>). Facial emotion recognition (FER) is widely used in distinct areas such as: neurology (Adolphs and Anderson, <xref ref-type="bibr" rid="j_infor419_ref_001">2018</xref>; Metcalfe <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_036">2019</xref>), clinical psychology (Su <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_058">2017</xref>), artificial intelligence (Ranade <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_046">2018</xref>), intelligent security (Wang and Fang, <xref ref-type="bibr" rid="j_infor419_ref_064">2008</xref>), robotics manufacturing (Weiguo <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_067">2004</xref>), behavioural sciences (Vorontsova and Labunskaya, <xref ref-type="bibr" rid="j_infor419_ref_063">2020</xref>), multimedia (Mariappan <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_031">2012</xref>), educational software (Ferdig and Mishra, <xref ref-type="bibr" rid="j_infor419_ref_015">2004</xref>; Filella <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_016">2016</xref>), etc.</p>
<p>In the literature, the common approach to facial emotion recognition consists of these steps: image pre-processing (noise reduction, normalization), face detection, facial feature extraction, and facial expression classification (recognition). Numerous techniques have been developed for FER by using different methods in these steps (Bhardwaj and Dixit, <xref ref-type="bibr" rid="j_infor419_ref_002">2016</xref>; Deshmukh <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_006">2017</xref>; Ko, <xref ref-type="bibr" rid="j_infor419_ref_028">2018</xref>; Revina and Emmanuel, <xref ref-type="bibr" rid="j_infor419_ref_047">2018</xref>; Shao and Qian, <xref ref-type="bibr" rid="j_infor419_ref_053">2019</xref>; Sharma <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_054">2019</xref>). The recognition accuracy reported for this approach varies from approximately 48% to 98% (Deshmukh <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_006">2017</xref>; Revina and Emmanuel, <xref ref-type="bibr" rid="j_infor419_ref_047">2018</xref>; Shao and Qian, <xref ref-type="bibr" rid="j_infor419_ref_053">2019</xref>; Nonis <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_037">2019</xref>; Sharma <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_054">2019</xref>). However, the common approach has some drawbacks (Shao and Qian, <xref ref-type="bibr" rid="j_infor419_ref_053">2019</xref>): a) recognition accuracy is highly dependent on the methods used and the data set analysed; b) the methods are often complicated, involving many unknown parameters and/or long computation times.</p>
<p>Recently, deep-learning-based algorithms have been employed for feature extraction, classification, and recognition tasks. Convolutional neural networks and recurrent neural networks have been applied in many studies, including object recognition, face recognition, and facial emotion recognition. However, deep-learning-based techniques require big data (Nonis <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_037">2019</xref>). A brief review of conventional FER approaches as well as deep-learning-based FER methods is presented in Ko (<xref ref-type="bibr" rid="j_infor419_ref_028">2018</xref>). It is shown that the average recognition accuracy of six conventional FER approaches is 63.2%, while that of six deep-learning-based FER approaches is 72.65%, i.e. deep-learning-based approaches outperform conventional ones. In Gan <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_018">2019</xref>), a novel FER framework via convolutional neural networks with soft labels that associate multiple emotions with each expression image is proposed. Investigations are made on the FER-2013 (35 887 face images) (Goodfellow <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_020">2013</xref>), SFEW (1766 images) (Dhall <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_007">2015</xref>) and RAF (15 339 images) (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_029">2017</xref>) databases, and the proposed method achieves accuracies of 73.73%, 55.73%, and 86.31%, respectively.</p>
<p>In this paper, we focus on emotion recognition by facial expression. We have developed an approach, based on the two-dimensional model of emotions as well as using the kriging predictor of Fractional Brownian Vector Field (Motion) (FBVF). The classification problem, related to the recognition of facial emotions, is formulated and solved. The relationship of different emotions is estimated by expert psychologists by putting different emotions as the points on the plane. The kriging predictor allows us to get an estimate of a new picture emotion on the plane. Then, we determine which emotion, identified by psychologists, is the closest one. Seven emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Neutral</italic>) have been chosen for recognition.</p>
<p>The advantage of our method is that it is focused on small data sets. In the literature, seven basic emotions (e.g. <italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Neutral</italic>) are usually used. However, sometimes specific emotions are measured. In this case, classical databases with basic emotions cannot be used for training a classifier. If we have little data for the study and cannot adapt other databases, then methods such as CNNs will not give good accuracy on a small data set. This is an advantage of the kriging method. Our approach can be easily extended to other emotions.</p>
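<p>The decision step described above, in which the kriging estimate of a picture’s emotion on the plane is assigned to the nearest expert-placed basic emotion, can be sketched as follows. This is a minimal illustration only: the valence–arousal coordinates below are hypothetical placeholders, not the expert-assigned positions used in the paper.</p>

```python
import math

# Hypothetical plane positions of the seven basic emotions (placeholders;
# the expert-assigned coordinates from the paper are not reproduced here).
BASIC_EMOTIONS = {
    "Joy":      ( 0.8,  0.5),
    "Sadness":  (-0.7, -0.4),
    "Surprise": ( 0.3,  0.8),
    "Disgust":  (-0.6,  0.2),
    "Anger":    (-0.5,  0.7),
    "Fear":     (-0.4,  0.6),
    "Neutral":  ( 0.0,  0.0),
}

def closest_emotion(point):
    """Return the basic emotion whose position on the plane is nearest
    (in Euclidean distance) to the 2D point predicted by kriging."""
    x, y = point
    return min(BASIC_EMOTIONS,
               key=lambda e: math.hypot(x - BASIC_EMOTIONS[e][0],
                                        y - BASIC_EMOTIONS[e][1]))

# A kriging estimate landing in the upper-right (pleasant, aroused)
# region maps to Joy under these placeholder coordinates.
print(closest_emotion((0.7, 0.4)))  # → Joy
```

With real expert coordinates substituted for the placeholders, the same nearest-neighbour rule yields the classification decision used in the experiments.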
</sec>
<sec id="j_infor419_s_002">
<label>2</label>
<title>Computational Models of Emotions</title>
<p>Emotions can be expressed in a variety of ways, such as facial expressions and gestures, speech, and written text. There are two types of models for recognizing emotions: the categorical model and the dimensional one. In the first, emotions are described with a discrete number of classes (affective adjectives); in the second, emotions are characterized by several perpendicular axes, i.e. by defining where they lie in a two-, three- or higher-dimensional space (Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>). These models are reviewed in Sreeja and Mahalakshmi (<xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>) and Grekow (<xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>).</p>
<p>There are many attempts in the literature to visualize the similarities of emotions. This allows emotions to be compared not only qualitatively but also quantitatively. Such visualizations, namely the quantitative correspondence of emotions to points on the 2D plane, are reviewed below. The proposed new method of recognizing and classifying facial emotions relies on this correspondence.</p>
<sec id="j_infor419_s_003">
<label>2.1</label>
<title>Categorical Models of Emotions</title>
<p>Emotions are recognized with the help of words that denote emotions or class tags (Sreeja and Mahalakshmi, <xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>). The categorical model uses either basic emotion classes (Ekman, <xref ref-type="bibr" rid="j_infor419_ref_012">1992</xref>; Johnson-Laird and Oatley, <xref ref-type="bibr" rid="j_infor419_ref_026">1989</xref>; Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>) or domain-specific expressive classes (Sreeja and Mahalakshmi, <xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>). Different sets of emotions may be required for different fields; for instance, in the area of instruction and education (D’mello and Graesser, <xref ref-type="bibr" rid="j_infor419_ref_008">2007</xref>), five classes, namely <italic>Boredom</italic>, <italic>Confusion</italic>, <italic>Joy</italic>, <italic>Flow</italic>, and <italic>Frustration</italic>, are proposed to describe the affective states of students.</p>
<fig id="j_infor419_fig_001">
<label>Fig. 1</label>
<caption>
<p>Hevner’s adjectives arranged into 8 groups (Hevner, <xref ref-type="bibr" rid="j_infor419_ref_022">1936</xref>).</p>
</caption>
<graphic xlink:href="infor419_g001.jpg"/>
</fig>
<p>Regarding categorical models of emotions, there are many concepts in the literature about the number of classes and the grouping methods. Hevner was one of the first researchers who focused on finding and grouping terms pertaining to emotions (Hevner, <xref ref-type="bibr" rid="j_infor419_ref_022">1936</xref>). He created a list of 66 adjectives arranged into eight groups distributed on a circle (Fig. <xref rid="j_infor419_fig_001">1</xref>). Adjectives inside a group are close to each other, and the opposite groups on the circle are the furthest apart by emotion. Farnsworth (<xref ref-type="bibr" rid="j_infor419_ref_014">1954</xref>) and Schubert (<xref ref-type="bibr" rid="j_infor419_ref_052">2003</xref>) modified Hevner’s model by decreasing the number of adjectives to 50 and 46, respectively, and grouping them into nine groups. Recently, many researchers have been using the concept of six basic emotions (<italic>Happiness</italic>, <italic>Sadness</italic>, <italic>Anger</italic>, <italic>Fear</italic>, <italic>Disgust</italic>, and <italic>Surprise</italic>) presented by Ekman (<xref ref-type="bibr" rid="j_infor419_ref_012">1992</xref>, <xref ref-type="bibr" rid="j_infor419_ref_013">1999</xref>), which was developed for facial expressions. Ekman described features that enable differentiating the six basic emotions. Johnson-Laird and Oatley (<xref ref-type="bibr" rid="j_infor419_ref_026">1989</xref>) indicated a smaller group of basic emotions: <italic>Happiness</italic>, <italic>Sadness</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Disgust</italic>. In Hu and Downie (<xref ref-type="bibr" rid="j_infor419_ref_023">2007</xref>), five mood clusters were used for song classification. In Hu <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_024">2008</xref>), among others, a deficiency of this categorical model was indicated: a semantic overlap among the five clusters was noticed, because some clusters were quite similar. In Grekow (<xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>), a set of four basic emotions (<italic>Happy</italic>, <italic>Angry</italic>, <italic>Sad</italic> and <italic>Relaxed</italic>), corresponding to the four quarters of Russell’s model (Russell, <xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>), was used for the analysis of music recordings using the categorical model. More categories of emotions, used by various researchers, are indicated in Sreeja and Mahalakshmi (<xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>).</p>
<p>The main disadvantage of the categorical model is that its resolution, limited by a fixed set of categories, is poorer than that of the dimensional model. The number of emotions and their shades encountered in various types of communication is much richer than the limited number of emotion categories in the model. The smaller the number of groups in the categorical model, the greater the simplification of the description of emotions (Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>).</p>
</sec>
<sec id="j_infor419_s_004">
<label>2.2</label>
<title>Dimensional Models of Emotions</title>
<p>Emotions can be defined according to one or more dimensions. For example, Wilhelm Max Wundt, the father of modern psychology, proposed to describe emotions by three dimensions: pleasurable versus unpleasurable, arousing versus subduing, and strain versus relaxation (Wundt, <xref ref-type="bibr" rid="j_infor419_ref_070">1897</xref>).</p>
<p>In the dimensional model, emotions are identified according to their location in a space with a small number of emotional dimensions. In this way, a human emotion is represented as a point in an emotion space (Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>). Since all emotions can be understood as changing values of the emotional dimensions, the dimensional model, in contrast to the categorical one, enables us to analyse a larger number of emotions and their shades. Commonly, emotions are defined in a two-dimensional (valence and arousal) or three-dimensional (valence, arousal, and power/dominance) space. The valence dimension (emotional pleasantness) describes the positivity or negativity of an emotion and ranges from unpleasant to pleasant feelings (a sense of happiness). The arousal dimension (physiological activation) denotes the level of excitement that the emotion depicts, ranging from <italic>Sleepiness</italic> or <italic>Boredom</italic> to high <italic>Excitement</italic>. The dominance (power, influence) dimension represents a sense of control or freedom to act. For example, while <italic>Fear</italic> and <italic>Anger</italic> are both unpleasant emotions, <italic>Anger</italic> is a dominant emotion, and <italic>Fear</italic> is a submissive one (Mehrabian, <xref ref-type="bibr" rid="j_infor419_ref_033">1980</xref>, <xref ref-type="bibr" rid="j_infor419_ref_034">1996</xref>; Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>).</p>
<p>The two-dimensional models, such as Russell’s circumplex model (Russell, <xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>) (Section <xref rid="j_infor419_s_005">2.2.1</xref>), Thayer’s model (Thayer, <xref ref-type="bibr" rid="j_infor419_ref_062">1989</xref>) (Section <xref rid="j_infor419_s_006">2.2.2</xref>), the vector model (Bradley <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_003">1992</xref>) (Section <xref rid="j_infor419_s_007">2.2.3</xref>), the Positive Affect – Negative Affect (PANA) model (Watson and Tellegen, <xref ref-type="bibr" rid="j_infor419_ref_065">1985</xref>; Watson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_066">1999</xref>) (Section <xref rid="j_infor419_s_008">2.2.4</xref>), Whissell’s model (Whissell, <xref ref-type="bibr" rid="j_infor419_ref_068">1989</xref>) (Section <xref rid="j_infor419_s_009">2.2.5</xref>), and Plutchik’s wheel of emotions (Plutchik and Kellerman, <xref ref-type="bibr" rid="j_infor419_ref_041">1980</xref>; Plutchik, <xref ref-type="bibr" rid="j_infor419_ref_040">2001</xref>) (Section <xref rid="j_infor419_s_010">2.2.6</xref>), are the most prevalent in emotion research. Among the three-dimensional models, Plutchik’s cone-shaped model (Plutchik and Kellerman, <xref ref-type="bibr" rid="j_infor419_ref_041">1980</xref>; Plutchik, <xref ref-type="bibr" rid="j_infor419_ref_040">2001</xref>) (Section <xref rid="j_infor419_s_010">2.2.6</xref>), the Pleasure–Arousal–Dominance (PAD) model (Mehrabian and Russell, <xref ref-type="bibr" rid="j_infor419_ref_035">1974</xref>) (Section <xref rid="j_infor419_s_011">2.2.7</xref>), and the Lövheim cube of emotion (Lövheim, <xref ref-type="bibr" rid="j_infor419_ref_030">2011</xref>) (Section <xref rid="j_infor419_s_012">2.2.8</xref>) are the most dominant and commonly used in the emotion recognition field. Researchers have noticed that, in particular cases, two or three dimensions cannot adequately describe human emotions. 
Consequently, four or more dimensions are necessary to identify affective states. The number of dimensions required to represent emotions depends on the problem the researcher is solving (Fontaine <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_017">2007</xref>; Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>). The Hourglass Model (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>) (Section <xref rid="j_infor419_s_013">2.2.9</xref>) is an interesting combination of the categorical and four-dimensional models.</p>
<p>The description of emotions by using dimensions has some advantages. Dimensions ensure a unique identification and a wide range of emotion concepts. It is possible to identify fine emotion concepts (shades of an emotion) that differ only to a small extent. Thus, a dimensional model of emotions is a useful representation that captures all relevant emotions and provides a means for measuring the similarity between emotional states (Sreeja and Mahalakshmi, <xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>). The categorical model is more general and simplified in describing emotions, while the dimensional model is more detailed and able to detect shades of emotions (Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>).</p>
<sec id="j_infor419_s_005">
<label>2.2.1</label>
<title>Russell’s Circumplex Model</title>
<fig id="j_infor419_fig_002">
<label>Fig. 2</label>
<caption>
<p>Russell’s circumplex model (Russell, <xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>).</p>
</caption>
<graphic xlink:href="infor419_g002.jpg"/>
</fig>
<p>The first two-dimensional model was developed by Russell (<xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>) and is known as Russell’s circumplex model (the circumplex model of affect) (Fig. <xref rid="j_infor419_fig_002">2</xref>). Russell identified two main dimensions of an emotion: arousal (physiological activation) and valence (emotional pleasantness). Arousal can be high or low, and valence may be positive or negative.</p>
<p>The circumplex model is formed by dividing a plane by two perpendicular axes. Valence represents the horizontal axis (negative values to the left, positive ones to the right) and arousal represents the vertical axis (low values at the bottom, high ones at the top). Emotions are mapped as points in a circumplex shape. The centre of this circle represents a neutral value of valence and a medium level of arousal, i.e. the centre point depicts a neutral emotional state. In this model, all emotions can be represented as points at any values of valence and arousal or at a neutral value of one or both of these dimensions.</p>
<p>The four basic categories of emotions can be highlighted regarding the quarters of Russell’s model as follows: 1) <italic>Happy</italic> – high valence, high arousal (top-right), 2) <italic>Angry</italic> – low valence, high arousal (top-left), 3) <italic>Sad</italic> – low valence, low arousal (bottom-left), 4) <italic>Relaxed</italic> – high valence, low arousal (bottom-right) (Wilson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_069">2016</xref>; Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>).</p>
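<p>The mapping from the quarters of Russell’s model to the four basic categories above can be expressed as a simple rule. The following is a minimal sketch; the handling of points lying exactly on an axis (assigned here to the non-negative side) is an assumption for illustration, since the model places neutral states at the centre rather than in any quadrant.</p>

```python
def russell_quadrant(valence, arousal):
    """Map a (valence, arousal) point to one of the four basic emotion
    categories given by the quarters of Russell's circumplex model.
    Axis points are treated as non-negative (an illustrative assumption)."""
    if valence >= 0 and arousal >= 0:
        return "Happy"      # high valence, high arousal (top-right)
    if valence < 0 and arousal >= 0:
        return "Angry"      # low valence, high arousal (top-left)
    if valence < 0:
        return "Sad"        # low valence, low arousal (bottom-left)
    return "Relaxed"        # high valence, low arousal (bottom-right)
```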
</sec>
<sec id="j_infor419_s_006">
<label>2.2.2</label>
<title>Thayer’s Model</title>
<p>Thayer’s model (Thayer, <xref ref-type="bibr" rid="j_infor419_ref_062">1989</xref>) is a modification of Russell’s circumplex model. Thayer proposed to describe emotions by two separate arousal dimensions: energetic arousal and tense arousal, also named energy and stress, respectively. Valence is supposed to be a varying combination of these two dimensions. For example, in Thayer’s model, <italic>Satisfaction</italic> and <italic>Tenderness</italic> occupy the low-energy, low-stress part; <italic>Astonishment</italic> and <italic>Surprise</italic> the high-energy, low-stress part; <italic>Anger</italic> and <italic>Fear</italic> the high-energy, high-stress part; and <italic>Depression</italic> and <italic>Sadness</italic> the low-energy, high-stress part. Figure <xref rid="j_infor419_fig_003">3</xref> presents a visual comparison of Russell’s circumplex model and Thayer’s model.</p>
<fig id="j_infor419_fig_003">
<label>Fig. 3</label>
<caption>
<p>Schematic diagram of the two-dimensional models of emotions with common basic emotion categories overlaid (Eerola and Vuoskoski, <xref ref-type="bibr" rid="j_infor419_ref_010">2011</xref>).</p>
</caption>
<graphic xlink:href="infor419_g003.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_007">
<label>2.2.3</label>
<title>Vector Model</title>
<p>The vector model of emotion (Bradley <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_003">1992</xref>) holds that emotions are structured in terms of valence and arousal, but they are not continuously related or evenly distributed along these dimensions (Wilson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_069">2016</xref>). This model assumes that there is an underlying dimension of arousal and a binary choice of valence that determines the direction in which a particular emotion lies. Thus, two vectors are obtained. Both of them start at zero arousal and neutral valence and proceed as straight lines, one in a positive and one in a negative valence direction (Rubin and Talarico, <xref ref-type="bibr" rid="j_infor419_ref_048">2009</xref>). Figure <xref rid="j_infor419_fig_004">4</xref> exhibits Russell’s circumplex (left) and vector (right) models, assuming valence varies in the interval <inline-formula id="j_infor419_ineq_001"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mo>−</mml:mo><mml:mn>3</mml:mn><mml:mo>;</mml:mo><mml:mn>3</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[-3;3]$]]></tex-math></alternatives></inline-formula>, and the values of arousal belong to the interval <inline-formula id="j_infor419_ineq_002"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>1</mml:mn><mml:mo>;</mml:mo><mml:mn>7</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[1;7]$]]></tex-math></alternatives></inline-formula>. Squares filled with a C or a V represent predictions of where emotions should occur according to Russell’s circumplex model or the vector model, respectively (Rubin and Talarico, <xref ref-type="bibr" rid="j_infor419_ref_048">2009</xref>; Wilson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_069">2016</xref>). Briefly, the circumplex model assumes that emotions are spread in a circular space, with dimensions of valence and arousal, in a pattern centred on neutral valence and medium arousal. In the vector model, emotions of higher arousal tend to be defined by their valence, whereas emotions of lower arousal tend to be more neutral with respect to valence (Rubin and Talarico, <xref ref-type="bibr" rid="j_infor419_ref_048">2009</xref>).</p>
<fig id="j_infor419_fig_004">
<label>Fig. 4</label>
<caption>
<p>Instantiations of the Russell’s circumplex (left) and vector (right) two-dimensional models (Wilson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_069">2016</xref>).</p>
</caption>
<graphic xlink:href="infor419_g004.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_008">
<label>2.2.4</label>
<title>The Positive Affect – Negative Affect (PANA) Model</title>
<p>The Positive Affect – Negative Affect (also known as Positive Activation – Negative Activation) (PANA) model (Watson and Tellegen, <xref ref-type="bibr" rid="j_infor419_ref_065">1985</xref>; Watson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_066">1999</xref>) characterizes emotions at the most general level. Figure <xref rid="j_infor419_fig_005">5</xref> summarizes the relations among the affective states. Affect terms within the same octant are highly positively correlated, while those in adjacent octants are moderately positively correlated. Terms 90° apart are substantially unrelated to one another, whereas those 180° apart are opposite in meaning and highly negatively correlated.</p>
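<p>In an idealized circumplex, the angular correlation pattern described above can be approximated by the cosine of the angular separation between two affect terms. The following sketch illustrates this common simplification; the cosine rule is an assumption for illustration, not Watson and Tellegen’s exact estimates.</p>

```python
import math

def circumplex_correlation(angle1_deg, angle2_deg):
    """Idealized correlation between two affect terms on a circumplex:
    the cosine of their angular separation (1 at 0 deg, 0 at 90 deg,
    -1 at 180 deg)."""
    return math.cos(math.radians(angle1_deg - angle2_deg))

# Terms in the same octant (small separation) correlate highly positively,
# terms 90 degrees apart are uncorrelated, terms 180 degrees apart are
# opposite in meaning and correlate negatively.
```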
<fig id="j_infor419_fig_005">
<label>Fig. 5</label>
<caption>
<p>The basic two-factor structure of affect (Watson and Tellegen, <xref ref-type="bibr" rid="j_infor419_ref_065">1985</xref>).</p>
</caption>
<graphic xlink:href="infor419_g005.jpg"/>
</fig>
<p>Figure <xref rid="j_infor419_fig_005">5</xref> schematically depicts the two-dimensional (two-factor) affective space. In the basic two-factor space, the axes are displayed as solid lines. The horizontal and vertical axes represent Negative Affect and Positive Affect, respectively. The first factor, Positive Affect (PA), represents the extent (from low to high) to which a person shows enthusiasm in life. The second factor, Negative Affect (NA), is the extent to which a person feels upset or unpleasantly aroused. At first sight, the terms Positive Affect and Negative Affect may appear to be opposites, i.e. negatively correlated. However, they are independent, uncorrelated dimensions. As Fig. <xref rid="j_infor419_fig_005">5</xref> shows, many affective states are not pure markers of either Positive or Negative Affect as these concepts are described above. For instance, Pleasantness includes terms representing a mixture of high Positive Affect and low Negative Affect, while Unpleasantness combines high Negative Affect and low Positive Affect. Terms denoting Strong Engagement have moderately high values of both PA and NA, whereas emotions representing Disengagement reflect low values of both dimensions. Accordingly, Fig. <xref rid="j_infor419_fig_005">5</xref> also depicts an alternative rotational scheme, indicated by the dotted lines: the first factor (dimension) represents Pleasantness–Unpleasantness (valence), while the second represents Strong Engagement–Disengagement (arousal).</p>
<p>Thus, the PANA model is commonly understood as a 45-degree rotation of Russell’s circumplex model, since it is also circular and the valence and arousal dimensions lie at a 45-degree rotation to the PANA axes NA and PA, respectively (Watson and Tellegen, <xref ref-type="bibr" rid="j_infor419_ref_065">1985</xref>). Rubin and Talarico (<xref ref-type="bibr" rid="j_infor419_ref_048">2009</xref>) observe that the PANA model is more similar to the vector model than to the circumplex one. The similarity between the PANA and vector models is explained as follows. In the vector model, low arousal emotions are more likely to be neutral, and high arousal ones are differentiated by their valence. Most affective states cluster in the high Positive Affect and high Negative Affect octants (Watson and Tellegen, <xref ref-type="bibr" rid="j_infor419_ref_065">1985</xref>; Watson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_066">1999</xref>). This corresponds to the prediction of the vector model, i.e. the absence of high arousal, neutral valence emotions. In conclusion, the PANA model can be employed when exploring emotions with high levels of activation, as in the vector model (Rubin and Talarico, <xref ref-type="bibr" rid="j_infor419_ref_048">2009</xref>).</p>
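<p>The 45-degree relation between the two coordinate systems can be made concrete with a plane rotation. A minimal sketch follows; the axis orientations and sign conventions are illustrative assumptions, not those of a specific published parameterization.</p>

```python
import math

def pana_from_valence_arousal(valence, arousal):
    """Rotate the (valence, arousal) axes by 45 degrees to obtain PA/NA-style
    axes: PA grows with pleasant activation, NA with unpleasant activation.
    The sign conventions here are an illustrative assumption."""
    c = math.cos(math.radians(45))
    pa = c * (valence + arousal)   # pleasant + activated loads on PA
    na = c * (arousal - valence)   # unpleasant + activated loads on NA
    return pa, na

# A pleasant, highly aroused state (e.g. elation) loads on PA and not on NA.
```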
</sec>
<sec id="j_infor419_s_009">
<label>2.2.5</label>
<title>Whissell’s Model</title>
<p>Similarly to Russell’s circumplex model, Whissell represents emotions in a two-dimensional continuous space whose dimensions are evaluation and activation (Whissell, <xref ref-type="bibr" rid="j_infor419_ref_068">1989</xref>). The evaluation dimension is a measure of human feelings, from negative to positive. The activation dimension measures whether a human is less or more likely to take some action under the emotional state, from passive to active. Whissell compiled the Dictionary of Affect in Language by assigning a pair of values to each of approximately 9000 words with affective connotations. Figure <xref rid="j_infor419_fig_006">6</xref> depicts the position of some of these words in the two-dimensional circular space (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>).</p>
<fig id="j_infor419_fig_006">
<label>Fig. 6</label>
<caption>
<p>The two-dimensional representation of emotions by the Whissell’s model (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>).</p>
</caption>
<graphic xlink:href="infor419_g006.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_010">
<label>2.2.6</label>
<title>Plutchik’s Model (Plutchik’s Wheel of Emotions)</title>
<p>In 1980, Robert Plutchik created a wheel of emotions seeking to illustrate different emotions and their relationships. He proposed a two-dimensional wheel model and a three-dimensional cone-shaped model (Plutchik and Kellerman, <xref ref-type="bibr" rid="j_infor419_ref_041">1980</xref>; Plutchik, <xref ref-type="bibr" rid="j_infor419_ref_040">2001</xref>).</p>
<p>To construct the wheel of emotions, Plutchik used eight primary bipolar emotions (<italic>Joy</italic> versus <italic>Sadness</italic>, <italic>Anger</italic> versus <italic>Fear</italic>, <italic>Trust</italic> versus <italic>Disgust</italic>, and <italic>Surprise</italic> versus <italic>Anticipation</italic>), as well as eight advanced, derivative emotions (<italic>Optimism</italic>, <italic>Love</italic>, <italic>Submission</italic>, <italic>Awe</italic>, <italic>Disapproval</italic>, <italic>Remorse</italic>, <italic>Contempt</italic>, and <italic>Aggressiveness</italic>), each composed of two basic ones. This circumplex two-dimensional model combines the idea of an emotion circle with a colour wheel. With the help of colours, primary emotions are presented at different intensities (for instance, <italic>Joy</italic> can be expressed as <italic>Ecstasy</italic> or <italic>Serenity</italic>) and can be mixed with one another to form different emotions; for example, <italic>Love</italic> is a mixture of <italic>Joy</italic> and <italic>Trust</italic>. Emotions derived from two basic emotions are shown in the blank spaces. In this two-dimensional model, the vertical dimension represents intensity and the radial dimension represents degrees of similarity among the emotions (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>). The three-dimensional model depicts relations between emotions as follows: the cone’s vertical dimension represents intensity, and the circle represents degrees of similarity among the emotions (Maupome and Isyutina, <xref ref-type="bibr" rid="j_infor419_ref_032">2013</xref>). Both models are shown in Fig. <xref rid="j_infor419_fig_007">7</xref>.</p>
<fig id="j_infor419_fig_007">
<label>Fig. 7</label>
<caption>
<p>Plutchik’s two-dimensional wheel of emotions and the cone-shaped model, three-dimensional wheel of emotions, demonstrating relationships between basic and derivative emotions (Maupome and Isyutina, <xref ref-type="bibr" rid="j_infor419_ref_032">2013</xref>).</p>
</caption>
<graphic xlink:href="infor419_g007.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_011">
<label>2.2.7</label>
<title>The Pleasure-Arousal-Dominance (PAD) Model</title>
<p>The Mehrabian and Russell’s Pleasure-Arousal-Dominance (PAD) model (Mehrabian and Russell, <xref ref-type="bibr" rid="j_infor419_ref_035">1974</xref>) was developed to describe and measure a human emotional reaction to the environment. This model identifies emotions using three dimensions: pleasure, arousal, and dominance. Pleasure represents positive (pleasant) and negative (unpleasant) emotions, i.e. this dimension measures how pleasant an emotion is. For example, <italic>Joy</italic> is a pleasant emotion, and <italic>Sadness</italic> is an unpleasant one. Arousal shows the level of energy and stimulation, i.e. it measures the intensity of an emotion. For instance, <italic>Joy</italic>, <italic>Serenity</italic>, and <italic>Ecstasy</italic> are all pleasant emotions, but <italic>Ecstasy</italic> has a higher and <italic>Serenity</italic> a lower arousal state than <italic>Joy</italic>. Dominance represents a sense of control or freedom to act. For example, while <italic>Fear</italic> and <italic>Anger</italic> are both unpleasant emotions, <italic>Anger</italic> is a much more dominant emotion than <italic>Fear</italic> (Mehrabian, <xref ref-type="bibr" rid="j_infor419_ref_033">1980</xref>, <xref ref-type="bibr" rid="j_infor419_ref_034">1996</xref>; Grekow, <xref ref-type="bibr" rid="j_infor419_ref_021">2018</xref>). The PAD model is similar to Russell’s model, since two of its dimensions, arousal and pleasure (which resembles valence), are shared. The models differ in the third, dominance dimension, which is used to determine whether a human feels in control of the state or not (Sreeja and Mahalakshmi, <xref ref-type="bibr" rid="j_infor419_ref_056">2017</xref>).</p>
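<p>The role of the dominance dimension, e.g. in separating <italic>Anger</italic> from <italic>Fear</italic>, can be sketched with PAD triples; the numeric values below are assumptions chosen for demonstration, not Mehrabian’s published ratings.</p>

```python
# Illustrative (pleasure, arousal, dominance) triples in [-1, 1].
# Values are assumptions for demonstration, not Mehrabian's ratings.
PAD = {
    "anger": (-0.5, 0.6, 0.3),   # unpleasant, aroused, dominant
    "fear":  (-0.6, 0.6, -0.4),  # unpleasant, aroused, submissive
    "joy":   (0.8, 0.5, 0.4),    # pleasant, aroused, in control
}

def more_dominant(emotion_a, emotion_b):
    """Return whichever emotion has the larger dominance (third) coordinate."""
    return max((emotion_a, emotion_b), key=lambda e: PAD[e][2])
```

Anger and Fear share similar pleasure and arousal values here, so only the dominance coordinate separates them, mirroring the argument above.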
</sec>
<sec id="j_infor419_s_012">
<label>2.2.8</label>
<title>Lövheim Cube of Emotion</title>
<p>In 2011, Lövheim argued that the monoamines serotonin, dopamine and noradrenaline greatly influence human mood, emotion and behaviour. He proposed a three-dimensional model relating monoamine neurotransmitters and emotions. In this model, the monoamine systems are represented as orthogonal axes, and the eight basic emotions, labelled according to Silvan Tomkins, are placed in the eight corners of a cube. According to the Lövheim model, for instance, <italic>Joy</italic> is produced by the combination of high serotonin, high dopamine and low noradrenaline (Fig. <xref rid="j_infor419_fig_008">8</xref>). As neither the serotonin nor the dopamine axis is identical to the valence dimension, the cube appears somewhat rotated in comparison to the aforementioned models. This model may help in understanding human emotions, psychiatric illness and the effects of psychotropic drugs (Lövheim, <xref ref-type="bibr" rid="j_infor419_ref_030">2011</xref>).</p>
<fig id="j_infor419_fig_008">
<label>Fig. 8</label>
<caption>
<p>Lövheim cube of emotion (Lövheim, <xref ref-type="bibr" rid="j_infor419_ref_030">2011</xref>).</p>
</caption>
<graphic xlink:href="infor419_g008.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_013">
<label>2.2.9</label>
<title>The Hourglass Model</title>
<p>Cambria <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>) proposed a biologically inspired and psychologically motivated emotion categorization model that combines categorical and dimensional approaches. The model represents emotions both through labels and through four affective dimensions (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>). This model, also called the Hourglass of Emotions, reinterprets Plutchik’s model (Plutchik, <xref ref-type="bibr" rid="j_infor419_ref_040">2001</xref>) by organizing primary emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Anger</italic>, <italic>Fear</italic>, <italic>Trust</italic>, <italic>Disgust</italic>, <italic>Surprise</italic>, <italic>Anticipation</italic>) around four independent but concomitant affective dimensions such as pleasantness, attention, sensitivity, and aptitude, whose different levels of activation make up the total emotional state of the mind.</p>
<p>These dimensions measure, respectively, how much the user is amused by interaction modalities (pleasantness), interested in interaction contents (attention), comfortable with interaction dynamics (sensitivity), and confident in interaction benefits (aptitude). Each dimension is characterized by six levels of activation (measuring the strength of an emotion). These levels are also labelled as a set of 24 emotions (Plutchik, <xref ref-type="bibr" rid="j_infor419_ref_040">2001</xref>). Therefore, the model specifies the affective information associated with the text both in a dimensional and in a discrete form. The model has an hourglass shape because emotions are represented according to their strength (from strongly positive to null to strongly negative) (Fig. <xref rid="j_infor419_fig_009">9</xref>).</p>
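<p>The discretization of a dimension into six activation levels can be sketched as follows; the equal-width bins over [−1, 1] are an assumption made for illustration, not Cambria <italic>et al.</italic>’s exact sentic intervals.</p>

```python
def activation_level(value):
    """Map a dimension value in [-1, 1] to one of six activation levels,
    three negative (1-3) and three positive (4-6).
    Equal-width bins are an assumption made for illustration."""
    if not -1.0 <= value <= 1.0:
        raise ValueError("value must lie in [-1, 1]")
    edges = [-2 / 3, -1 / 3, 0.0, 1 / 3, 2 / 3]
    level = 1
    for edge in edges:          # count how many bin edges the value exceeds
        if value > edge:
            level += 1
    return level
```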
<fig id="j_infor419_fig_009">
<label>Fig. 9</label>
<caption>
<p>The 3D model and the net of the hourglass of emotions (Cambria <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_005">2012</xref>).</p>
</caption>
<graphic xlink:href="infor419_g009.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_014">
<label>2.2.10</label>
<title>2D Visualization of a Set of Emotions</title>
<p>In our research, the two-dimensional circumplex space model of emotions (Fig. <xref rid="j_infor419_fig_010">10</xref>), based on Russell’s model (Russell, <xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>) and Scherer’s structure of the semantic space for emotions (Scherer, <xref ref-type="bibr" rid="j_infor419_ref_051">2005</xref>), and employing numerical proximities of human emotions (Gobron <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_019">2010</xref>), is used for facial emotion recognition. Figure <xref rid="j_infor419_fig_010">10</xref> is taken from Paltoglou and Thelwall (<xref ref-type="bibr" rid="j_infor419_ref_039">2013</xref>). Its construction is described below. A set of emotions is visualized on a 2D plane, giving each emotion a particular place.</p>
<fig id="j_infor419_fig_010">
<label>Fig. 10</label>
<caption>
<p>The two-dimensional circumplex space model of emotions. Upper-case notation denotes the terms used by Russell, lower-case notation denotes the terms used by Scherer. Figure is taken from Paltoglou and Thelwall (<xref ref-type="bibr" rid="j_infor419_ref_039">2013</xref>).</p>
</caption>
<graphic xlink:href="infor419_g010.jpg"/>
</fig>
<p>Figure <xref rid="j_infor419_fig_010">10</xref> illustrates the alternative two-dimensional structures of the semantic space for emotions. In Scherer (<xref ref-type="bibr" rid="j_infor419_ref_051">2005</xref>), a number of frequently used and theoretically interesting emotion categories were arranged in a two-dimensional space formed by goal conduciveness versus goal obstructiveness on the one hand and high versus low control/power on the other. Scherer used Russell’s circumplex model, which arranges emotions circularly in the two-dimensional valence–arousal space. In Fig. <xref rid="j_infor419_fig_010">10</xref>, upper-case notation denotes the terms used by Russell (<xref ref-type="bibr" rid="j_infor419_ref_049">1980</xref>). Onto this representation, Scherer superimposed the two-dimensional structure based on similarity ratings of 80 German emotion terms (lower-case terms, translated into English). The exact location of each term (emotion) in the two-dimensional space is indicated by the plus (+) sign. It was noticed that this simple superposition yielded a remarkably good fit (Scherer, <xref ref-type="bibr" rid="j_infor419_ref_051">2005</xref>).</p>
<p>In Fig. <xref rid="j_infor419_fig_010">10</xref>, every emotion is represented as a point with two coordinates: valence and arousal. The coordinates of the mapped emotions (values of valence and arousal) are taken from Gobron <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_019">2010</xref>) and are given in Paltoglou and Thelwall (<xref ref-type="bibr" rid="j_infor419_ref_039">2013</xref>). The valence parameter is determined using four parameters (two lexical, two language-based) derived from a data mining model built on a very large database (4.2 million samples). The arousal parameter is based on the intensity of the vocabulary. The valence and arousal values were generated from lexical and language classifiers and a probabilistic emotion generator (using the Poisson distribution). A statistically good correlation with James Russell’s circumplex model of emotion was obtained. The control mechanism was based on the action units of Ekman’s Facial Action Coding System (FACS) (Ekman and Friesen, <xref ref-type="bibr" rid="j_infor419_ref_011">1978</xref>).</p>
<p>Russell’s circumplex model is widely used in various areas of emotion recognition. Gobron <italic>et al.</italic> transferred lexical and language parameters, extracted from a database, into coherent intensities of valence and arousal, i.e. the parameters of Russell’s circumplex model. Paltoglou and Thelwall (<xref ref-type="bibr" rid="j_infor419_ref_039">2013</xref>) applied these valence and arousal values to emotion recognition from segments of written text in blog posts. We have decided to use this two-dimensional model of emotions (Fig. <xref rid="j_infor419_fig_010">10</xref>) and the derived emotion coordinates for facial emotion recognition. To our knowledge, this has not been done before.</p>
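<p>Once every emotion has fixed (valence, arousal) coordinates, the simplest way to label a new point in this plane is by the nearest stored emotion. The sketch below uses hypothetical coordinates chosen for illustration; the actual values employed in our work are those of Gobron <italic>et al.</italic> (2010) as listed in Paltoglou and Thelwall (2013).</p>

```python
import math

# Hypothetical (valence, arousal) coordinates, for illustration only;
# the real coordinates come from Gobron et al. (2010).
EMOTIONS = {
    "joy": (2.0, 5.0),
    "sadness": (-2.0, 2.5),
    "anger": (-1.5, 6.0),
    "serenity": (1.5, 2.0),
}

def nearest_emotion(valence, arousal):
    """Label a point in the valence-arousal plane by its closest emotion."""
    return min(EMOTIONS,
               key=lambda e: math.dist(EMOTIONS[e], (valence, arousal)))
```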
</sec>
</sec>
</sec>
<sec id="j_infor419_s_015">
<label>3</label>
<title>Kriging Predictor</title>
<p>Recently, the Fractional Brownian Vector Field (Motion) (FBVF) has received considerable attention among mathematicians and physicists (Yancong and Ruidong, <xref ref-type="bibr" rid="j_infor419_ref_025">2011</xref>; Tan <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_061">2015</xref>). The model created for FER is based on modelling the valence and arousal dimensions of Russell’s model by a two-dimensional FBVF. Hereinafter, these dimensions are also called coordinates.</p>
<p>A stochastic model of facial emotions in pictures should incorporate uncertainty about quantities at unobserved points and quantify the uncertainty associated with the kriging estimator. Namely, the emotion in each facial picture is considered a realization of the FBVF <inline-formula id="j_infor419_ineq_003"><alternatives>
<mml:math><mml:mi mathvariant="italic">Z</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">X</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">ω</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math>
<tex-math><![CDATA[$Z(X,\omega )$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_004"><alternatives>
<mml:math><mml:mi mathvariant="italic">Z</mml:mi><mml:mo>:</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msup><mml:mo>⊗</mml:mo><mml:mi mathvariant="normal">Ω</mml:mi><mml:mo stretchy="false">→</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$Z:{R^{n}}\otimes \Omega \to {R^{2}}$]]></tex-math></alternatives></inline-formula>,</p>
<p>which for every point in the variables space <inline-formula id="j_infor419_ineq_005"><alternatives>
<mml:math><mml:mi mathvariant="italic">X</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$X\in {R^{n}}$]]></tex-math></alternatives></inline-formula> is a measurable function of random event <inline-formula id="j_infor419_ineq_006"><alternatives>
<mml:math><mml:mi mathvariant="italic">ω</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="normal">Ω</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="normal">Σ</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">P</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math>
<tex-math><![CDATA[$\omega \in (\Omega ,\Sigma ,P)$]]></tex-math></alternatives></inline-formula> in some probability space (Pozniak <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_043">2019</xref>). Since it is unknown which of the function variables will be preponderant, we consider them equivalent and therefore compute a distance between measurement points that is symmetric with respect to the individual variables. It is usually assumed that the FBVF has a constant mean vector and covariance matrix at each point: 
<disp-formula id="j_infor419_eq_001">
<alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">μ</mml:mi><mml:mo>=</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">μ</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">μ</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mspace width="1em"/><mml:mtext>and</mml:mtext><mml:mspace width="1em"/><mml:mi mathvariant="italic">β</mml:mi><mml:mo>=</mml:mo><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mtable columnspacing="4.0pt" equalrows="false" columnlines="none" equalcolumns="false" columnalign="center center"><mml:mtr><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub><mml:mspace width="1em"/></mml:mtd><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mspace width="1em"/></mml:mtd><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">β</mml:mi></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mfenced><mml:mo mathvariant="normal">,</mml:mo><mml:mspace width="1em"/><mml:mi mathvariant="italic">β</mml:mi><mml:mo mathvariant="normal">&gt;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \mu =({\mu _{1}},{\mu _{2}})\hspace{1em}\text{and}\hspace{1em}\beta =\left[\begin{array}{c@{\hskip4.0pt}c}{\beta _{11}}\hspace{1em}& {\beta _{12}}\\ {} {\beta _{21}}\hspace{1em}& {\beta _{22}}\end{array}\right],\hspace{1em}\beta >0.\]]]></tex-math></alternatives>
</disp-formula>
</p>
<p>Thus, assume that the set <inline-formula id="j_infor419_ineq_007"><alternatives>
<mml:math><mml:mi mathvariant="double-struck">X</mml:mi><mml:mo>=</mml:mo><mml:mo fence="true" stretchy="false">{</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:msub><mml:mo fence="true" stretchy="false">}</mml:mo></mml:math>
<tex-math><![CDATA[$\mathbb{X}=\{{X_{1}},\dots ,{X_{N}}\}$]]></tex-math></alternatives></inline-formula> of observed mutually disjoint vectors <inline-formula id="j_infor419_ineq_008"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">∈</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[${X_{i}}\in {R^{n}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_009"><alternatives>
<mml:math><mml:mn>1</mml:mn><mml:mo>⩽</mml:mo><mml:mi mathvariant="italic">i</mml:mi><mml:mo>⩽</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:math>
<tex-math><![CDATA[$1\leqslant i\leqslant N$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_010"><alternatives>
<mml:math><mml:mi mathvariant="italic">N</mml:mi><mml:mo mathvariant="normal">&gt;</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$N>1$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_011"><alternatives>
<mml:math><mml:mi mathvariant="italic">n</mml:mi><mml:mo>⩾</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$n\geqslant 1$]]></tex-math></alternatives></inline-formula>, where each vector represents one facial picture, is fixed, and the measurement data <inline-formula id="j_infor419_ineq_012"><alternatives>
<mml:math><mml:mi mathvariant="italic">Y</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$Y={({Y_{1}},{Y_{2}},\dots ,{Y_{N}})^{T}}$]]></tex-math></alternatives></inline-formula> of the response vector surface, representing the emotion dimensions, at points of <inline-formula id="j_infor419_ineq_013"><alternatives>
<mml:math><mml:mi mathvariant="double-struck">X</mml:mi></mml:math>
<tex-math><![CDATA[$\mathbb{X}$]]></tex-math></alternatives></inline-formula> is known, <inline-formula id="j_infor419_ineq_014"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi mathvariant="italic">Z</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">ω</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math>
<tex-math><![CDATA[${Y_{i}}=Z({X_{i}},\omega )$]]></tex-math></alternatives></inline-formula>. Accordingly, the matrix of fractional Euclidean distances is computed: 
<disp-formula id="j_infor419_eq_002">
<alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">A</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo fence="true" maxsize="1.19em" minsize="1.19em">[</mml:mo><mml:mo stretchy="false">|</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">j</mml:mi></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:msup><mml:mo fence="true" maxsize="1.19em" minsize="1.19em">]</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:msubsup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ A={\big[|{X_{i}}-{X_{j}}{|^{d}}\big]_{i,j=1}^{N}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
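<p>The matrix <italic>A</italic> of fractional Euclidean distances can be computed directly from the observation vectors. A minimal NumPy sketch follows (variable names are ours):</p>

```python
import numpy as np

def fractional_distance_matrix(X, d):
    """A = [ |X_i - X_j|^d ] for the N observation vectors X_i in R^n,
    stacked as the rows of the (N, n) array X; 0 <= d <= 1."""
    diff = X[:, None, :] - X[None, :, :]        # pairwise differences
    return np.linalg.norm(diff, axis=2) ** d    # fractional Euclidean distances
```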
<p>The degree <italic>d</italic> is itself a parameter of the FBVF, which can be estimated from the observation data. The maximum likelihood estimate <inline-formula id="j_infor419_ineq_015"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$\hat{d}$]]></tex-math></alternatives></inline-formula>, ensuring an asymptotically efficient and unbiased estimator, is obtained by minimizing the logarithmic likelihood function: 
<disp-formula id="j_infor419_eq_003">
<label>(1)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext>arg min</mml:mtext></mml:mrow><mml:mrow><mml:mn>0</mml:mn><mml:mo>⩽</mml:mo><mml:mi mathvariant="italic">d</mml:mi><mml:mo>⩽</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munder><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo movablelimits="false">ln</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mo movablelimits="false">det</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mstyle displaystyle="false"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mstyle displaystyle="false"><mml:mfrac><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi 
mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">Y</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo movablelimits="false">ln</mml:mo><mml:mo stretchy="false">|</mml:mo><mml:mo movablelimits="false">det</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">A</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \hat{d}=\underset{0\leqslant d\leqslant 1}{\operatorname{arg\,min}}\bigg(\frac{\ln (\det (\frac{1}{N}(\frac{({Y^{T}}{A^{-1}}E{E^{T}}{A^{-1}}Y}{{E^{T}}{A^{-1}}E}-{Y^{T}}{A^{-1}}Y)))}{2}+\frac{\ln |\det (A)|}{N}\bigg).\]]]></tex-math></alternatives>
</disp-formula>
</p>
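As a concrete illustration, the criterion in Eq. (1) can be minimized over <italic>d</italic> numerically. The following is a minimal NumPy sketch under illustrative assumptions (a plain grid search over the interval; `D` is the normalized Euclidean distance matrix, `Y` the matrix of responses; all names are ours, not part of the original method description):

```python
import numpy as np

def neg_log_likelihood(d, D, Y):
    """Criterion inside Eq. (1) for a given Hurst parameter d.
    D: normalized [N x N] Euclidean distance matrix, Y: [N x m] responses."""
    N = D.shape[0]
    A = D ** (2 * d)                      # fractional distance matrix
    A_inv = np.linalg.inv(A)
    E = np.ones((N, 1))
    M = (Y.T @ A_inv @ E @ E.T @ A_inv @ Y) / (E.T @ A_inv @ E) \
        - Y.T @ A_inv @ Y
    det_M = np.linalg.det(M / N)
    if det_M <= 0:                        # guard against an invalid d
        return np.inf
    return np.log(det_M) / 2 + np.log(abs(np.linalg.det(A))) / N

def estimate_d(D, Y, grid=np.linspace(0.05, 1.0, 96)):
    # Eq. (1): arg min over 0 <= d <= 1, here by plain grid search
    return min(grid, key=lambda d: neg_log_likelihood(d, D, Y))
```

A smooth 1-D bounded minimizer could replace the grid search; the grid keeps the sketch dependency-free.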
<p>The novelty of our method is as follows: 1) we evaluate the Hurst parameter <italic>d</italic> by the maximum likelihood method; 2) we use the a posteriori expectation and covariance matrix for kriging prediction of the emotion model dimensions (coordinates); 3) we apply the kriging predictor to FER in pictures.</p>
<p>Assume one has to predict the value of response vector surface <italic>Z</italic> at some point <inline-formula id="j_infor419_ineq_016"><alternatives>
<mml:math><mml:mi mathvariant="italic">X</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$X\in {R^{n}}$]]></tex-math></alternatives></inline-formula>. Kriging offers a way to anticipate, with some probability, the result associated with parameter values that have never been encountered before or have been lost: it “stores” the existing information (the experimental measurements) and propagates it to any situation where no measurement has been made. According to the gentle introduction to kriging (Jones, <xref ref-type="bibr" rid="j_infor419_ref_027">2001</xref>) and Pozniak <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_043">2019</xref>), the kriging predictor is defined as the conditional mean of the FBVF: 
<disp-formula id="j_infor419_eq_004">
<label>(2)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">Z</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">X</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mi mathvariant="italic">a</mml:mi><mml:mo>+</mml:mo><mml:mi mathvariant="italic">E</mml:mi><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">a</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ Z(X)={Y^{T}}{A^{-1}}\bigg(a+E\frac{(1-{E^{T}}{A^{-1}}a)}{{E^{T}}{A^{-1}}E}\bigg),\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>a</italic> is a distance vector whose elements are the fractional Euclidean distances between a new (testing) data point and all the training data points.</p>
<p>This prediction is stochastic; its uncertainty is described by the conditional variance: 
<disp-formula id="j_infor419_eq_005">
<label>(3)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">β</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mi mathvariant="italic">X</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi mathvariant="italic">β</mml:mi><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">a</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">a</mml:mi><mml:mo>−</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">a</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \beta (X)=\beta \bigg({a^{T}}{A^{-1}}a-\frac{{(1-{E^{T}}{A^{-1}}a)^{2}}}{{E^{T}}{A^{-1}}E}\bigg),\]]]></tex-math></alternatives>
</disp-formula> 
where the maximum likelihood estimate of the covariance matrix is applied: 
<disp-formula id="j_infor419_eq_006">
<label>(4)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">β</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">Y</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \beta =\frac{{Y^{T}}{A^{-1}}E{E^{T}}{A^{-1}}Y}{{E^{T}}{A^{-1}}E}-{Y^{T}}{A^{-1}}Y.\]]]></tex-math></alternatives>
</disp-formula>
</p>
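To make Eqs. (2)–(4) concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' code) that computes the conditional mean and variance for one new point, assuming the fractional distance matrix `A` and the fractional distance vector `a` are already available:

```python
import numpy as np

def kriging_mean_and_variance(A, a, Y):
    """Eqs. (2)-(4): conditional mean Z(X) and conditional variance beta(X).
    A: [N x N] fractional distance matrix of the training points,
    a: [N x 1] fractional distances from the new point to the training points,
    Y: [N x m] matrix of responses."""
    N = A.shape[0]
    A_inv = np.linalg.inv(A)
    E = np.ones((N, 1))
    # Eq. (2): the kriging predictor (conditional mean)
    Z = Y.T @ A_inv @ (a + E * (1 - E.T @ A_inv @ a) / (E.T @ A_inv @ E))
    # Eq. (4): likelihood estimate of the covariance matrix
    beta = (Y.T @ A_inv @ E @ E.T @ A_inv @ Y) / (E.T @ A_inv @ E) \
        - Y.T @ A_inv @ Y
    # Eq. (3): conditional variance of the prediction
    s = (a.T @ A_inv @ a - (1 - E.T @ A_inv @ a) ** 2 / (E.T @ A_inv @ E)).item()
    return Z.ravel(), beta * s
```

When `a` coincides with a column of `A` (the new point is a training point), the predictor interpolates: `Z` equals that point's response and the conditional variance vanishes.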
<p>Regarding the kriging model, the recent novelty is the introduction of <inline-formula id="j_infor419_ineq_017"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">≠</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$d\ne 1$]]></tex-math></alternatives></inline-formula>, which has expanded the possibilities of the model. Previously, only <inline-formula id="j_infor419_ineq_018"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$d=1$]]></tex-math></alternatives></inline-formula> was considered in Dzemyda (<xref ref-type="bibr" rid="j_infor419_ref_009">2001</xref>). It is proved in Pozniak and Sakalauskas (<xref ref-type="bibr" rid="j_infor419_ref_042">2017</xref>) that the kernel matrix and the associated covariance matrix are positive definite when <inline-formula id="j_infor419_ineq_019"><alternatives>
<mml:math><mml:mn>0</mml:mn><mml:mo>⩽</mml:mo><mml:mi mathvariant="italic">d</mml:mi><mml:mo mathvariant="normal">&lt;</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$0\leqslant d<1$]]></tex-math></alternatives></inline-formula>, for any number of features and any sample size. From the continuity of the likelihood function it follows that, when there are more features (e.g. pixels) than the sample size (number of pictures), the covariance matrix can be positive definite when <inline-formula id="j_infor419_ineq_020"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo mathvariant="normal">&gt;</mml:mo><mml:mn>1</mml:mn></mml:math>
<tex-math><![CDATA[$d>1$]]></tex-math></alternatives></inline-formula>, as well.</p>
<p>In this paper, the kriging predictor has been employed for emotion recognition from facial expressions and explored experimentally: it requires only simple calculations, has a single unknown parameter <italic>d</italic>, and works very well with small data sets.</p>
</sec>
<sec id="j_infor419_s_016">
<label>4</label>
<title>Data Set</title>
<p>The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) (Olszanowski <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor419_ref_038">2015</xref>) has been used in the experiments. This set contains 210 high-quality pictures (photos) of 30 individuals (14 men and 16 women). They display six basic emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>) and a <italic>Neutral</italic> display. Examples of each basic emotion displayed by one woman are shown in Fig. <xref rid="j_infor419_fig_011">11</xref>.</p>
<p>The original size of these pictures was <inline-formula id="j_infor419_ineq_021"><alternatives>
<mml:math><mml:mn>1725</mml:mn><mml:mo>×</mml:mo><mml:mn>1168</mml:mn></mml:math>
<tex-math><![CDATA[$1725\times 1168$]]></tex-math></alternatives></inline-formula> pixels. In order to discard redundant information (background, hair, clothes, etc.), the pictures were cropped and resized to <inline-formula id="j_infor419_ineq_022"><alternatives>
<mml:math><mml:mn>505</mml:mn><mml:mo>×</mml:mo><mml:mn>632</mml:mn></mml:math>
<tex-math><![CDATA[$505\times 632$]]></tex-math></alternatives></inline-formula> pixels (Fig. <xref rid="j_infor419_fig_012">12</xref>). Brows, eyes, nose, lips, cheeks, jaws, and chin are the key features that describe an emotional facial expression in the obtained pictures.</p>
<p>Each picture has been digitized, i.e. each data point consists of the colour parameters of all pixels and is therefore of very large dimensionality. The number of pictures (data points) is <inline-formula id="j_infor419_ineq_023"><alternatives>
<mml:math><mml:mi mathvariant="italic">N</mml:mi><mml:mo>=</mml:mo><mml:mn>210</mml:mn></mml:math>
<tex-math><![CDATA[$N=210$]]></tex-math></alternatives></inline-formula>. The images have <inline-formula id="j_infor419_ineq_024"><alternatives>
<mml:math><mml:mn>505</mml:mn><mml:mo>×</mml:mo><mml:mn>632</mml:mn></mml:math>
<tex-math><![CDATA[$505\times 632$]]></tex-math></alternatives></inline-formula> colour pixels with three RGB components each; therefore, their dimensionality is <inline-formula id="j_infor419_ineq_025"><alternatives>
<mml:math><mml:mi mathvariant="italic">n</mml:mi><mml:mo>=</mml:mo><mml:mn>957480</mml:mn></mml:math>
<tex-math><![CDATA[$n=957480$]]></tex-math></alternatives></inline-formula>.</p>
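The dimensionality figure follows directly from the image size and the three RGB components per pixel; a quick arithmetic check:

```python
# 505 x 632 pixels, 3 colour components (RGB) per pixel
width, height, channels = 505, 632, 3
n = width * height * channels
print(n)  # 957480
```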
<fig id="j_infor419_fig_011">
<label>Fig. 11</label>
<caption>
<p>Examples of each basic emotion displayed by one woman (original pictures).</p>
</caption>
<graphic xlink:href="infor419_g011.jpg"/>
</fig>
<fig id="j_infor419_fig_012">
<label>Fig. 12</label>
<caption>
<p>Examples of each basic emotion displayed by one woman (cropped and resized pictures).</p>
</caption>
<graphic xlink:href="infor419_g012.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_017">
<label>5</label>
<title>Analysis of the Kriging Predictor Algorithm</title>
<p>Before presenting the kriging algorithm, some mathematical notations are introduced below. Suppose that the analysed data set <inline-formula id="j_infor419_ineq_026"><alternatives>
<mml:math><mml:mi mathvariant="double-struck">X</mml:mi><mml:mo>=</mml:mo><mml:mo fence="true" stretchy="false">{</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:msub><mml:mo fence="true" stretchy="false">}</mml:mo></mml:math>
<tex-math><![CDATA[$\mathbb{X}=\{{X_{1}},\dots ,{X_{N}}\}$]]></tex-math></alternatives></inline-formula> consists of <italic>N n</italic>-dimensional points <inline-formula id="j_infor419_ineq_027"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">x</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">x</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math>
<tex-math><![CDATA[${X_{i}}=({x_{i1}},\dots ,{x_{in}})$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_028"><alternatives>
<mml:math><mml:mi mathvariant="italic">i</mml:mi><mml:mo>=</mml:mo><mml:mover accent="false"><mml:mrow><mml:mn>1</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:mrow><mml:mo accent="true">‾</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$i=\overline{1,N}$]]></tex-math></alternatives></inline-formula> (<inline-formula id="j_infor419_ineq_029"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">∈</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[${X_{i}}\in {R^{n}}$]]></tex-math></alternatives></inline-formula>). The data point <inline-formula id="j_infor419_ineq_030"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> corresponds to the <italic>i</italic>th picture in the picture set. Seven emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Neutral</italic>) are displayed in these pictures. For the sake of simplicity, the neutral state is treated as an emotion as well. In this paper, for short, an emotion identified from the facial expression shown in a particular picture is called a <italic>picture emotion</italic>.</p>
<p>Since the two-dimensional circumplex space model of emotions (Fig. <xref rid="j_infor419_fig_010">10</xref>) is used for facial emotion recognition in the investigations, every emotion is represented as a point that has two coordinates: valence and arousal. The coordinates of the seven basic emotions (values of valence and arousal) are taken from Gobron <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor419_ref_019">2010</xref>) and are given in Paltoglou and Thelwall (<xref ref-type="bibr" rid="j_infor419_ref_039">2013</xref>). These coordinates are presented in Table <xref rid="j_infor419_tab_001">1</xref>.</p>
<table-wrap id="j_infor419_tab_001">
<label>Table 1</label>
<caption>
<p>The valence and arousal coordinates of seven basic emotions in the two-dimensional circumplex emotion space.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin"/>
<td style="vertical-align: top; text-align: right; border-top: solid thin">Emotion </td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Joy</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Sadness</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Surprise</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Disgust</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Anger</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Fear</italic></td>
<td style="vertical-align: top; text-align: left; border-top: solid thin"><italic>Neutral</italic></td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Coordinates</td>
<td style="vertical-align: top; text-align: right; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Valence</td>
<td style="vertical-align: top; text-align: right"/>
<td style="vertical-align: top; text-align: left">0.95</td>
<td style="vertical-align: top; text-align: left">−0.81</td>
<td style="vertical-align: top; text-align: left">0.2</td>
<td style="vertical-align: top; text-align: left">−0.67</td>
<td style="vertical-align: top; text-align: left">−0.4</td>
<td style="vertical-align: top; text-align: left">−0.12</td>
<td style="vertical-align: top; text-align: left">0</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Arousal</td>
<td style="vertical-align: top; text-align: right; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.14</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−0.4</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.9</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.49</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.79</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.79</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As a picture emotion is known in advance, each data point <inline-formula id="j_infor419_ineq_031"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula> is related to an emotion point <inline-formula id="j_infor419_ineq_032"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math>
<tex-math><![CDATA[${Y_{i}}=({y_{i1}},{y_{i2}})$]]></tex-math></alternatives></inline-formula> that describes the <italic>i</italic>th picture emotion. Seven different combinations of (<inline-formula id="j_infor419_ineq_033"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{i1}},{y_{i2}}$]]></tex-math></alternatives></inline-formula>) are obtained (Table <xref rid="j_infor419_tab_001">1</xref>). In other words, <inline-formula id="j_infor419_ineq_034"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{i1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_035"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{i2}}$]]></tex-math></alternatives></inline-formula> mean the valence and arousal coordinates, respectively, of the <italic>i</italic>th picture emotion in the two-dimensional circumplex emotion space (Fig. <xref rid="j_infor419_fig_010">10</xref>). Then, for the whole data set <inline-formula id="j_infor419_ineq_036"><alternatives>
<mml:math><mml:mi mathvariant="double-struck">X</mml:mi></mml:math>
<tex-math><![CDATA[$\mathbb{X}$]]></tex-math></alternatives></inline-formula>, two column vectors <inline-formula id="j_infor419_ineq_037"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_038"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{2}}$]]></tex-math></alternatives></inline-formula>, the size of which is <inline-formula id="j_infor419_ineq_039"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mi mathvariant="italic">N</mml:mi><mml:mo>×</mml:mo><mml:mn>1</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[N\times 1]$]]></tex-math></alternatives></inline-formula>, are comprised. The column vector <inline-formula id="j_infor419_ineq_040"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{1}}$]]></tex-math></alternatives></inline-formula> consists of the valence coordinates of the emotion points <inline-formula id="j_infor419_ineq_041"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${Y_{i}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_042"><alternatives>
<mml:math><mml:mi mathvariant="italic">i</mml:mi><mml:mo>=</mml:mo><mml:mover accent="false"><mml:mrow><mml:mn>1</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:mrow><mml:mo accent="true">‾</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$i=\overline{1,N}$]]></tex-math></alternatives></inline-formula>, and the column vector <inline-formula id="j_infor419_ineq_043"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{2}}$]]></tex-math></alternatives></inline-formula> consists of the arousal coordinates of these points, i.e. <inline-formula id="j_infor419_ineq_044"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[${y_{1}}={({y_{11}},{y_{21}},\dots ,{y_{N1}})^{T}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_045"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal">,</mml:mo><mml:mo>…</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[${y_{2}}={({y_{12}},{y_{22}},\dots ,{y_{N2}})^{T}}$]]></tex-math></alternatives></inline-formula>.</p>
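The construction of the label vectors can be sketched directly from Table 1 (a small Python illustration; the dictionary and function names are ours, not the authors'):

```python
import numpy as np

# Valence and arousal coordinates of the seven basic emotions (Table 1)
EMOTION_COORDS = {
    "Joy":      (0.95, 0.14),
    "Sadness":  (-0.81, -0.40),
    "Surprise": (0.20, 0.90),
    "Disgust":  (-0.67, 0.49),
    "Anger":    (-0.40, 0.79),
    "Fear":     (-0.12, 0.79),
    "Neutral":  (0.00, 0.00),
}

def label_vectors(picture_emotions):
    """Build the [N x 1] column vectors y1 (valence) and y2 (arousal)
    from the list of known picture emotions."""
    Y = np.array([EMOTION_COORDS[e] for e in picture_emotions])
    return Y[:, [0]], Y[:, [1]]
```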
<p><italic>The kriging predictor algorithm</italic> is as follows: 
<list>
<list-item id="j_infor419_li_001">
<label>1.</label>
<p>The Euclidean distance matrix <italic>D</italic> between all the data points <inline-formula id="j_infor419_ineq_046"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">i</mml:mi></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${X_{i}}$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor419_ineq_047"><alternatives>
<mml:math><mml:mi mathvariant="italic">i</mml:mi><mml:mo>=</mml:mo><mml:mover accent="false"><mml:mrow><mml:mn>1</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:mrow><mml:mo accent="true">‾</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$i=\overline{1,N}$]]></tex-math></alternatives></inline-formula> (from training data set) is calculated.</p>
</list-item>
<list-item id="j_infor419_li_002">
<label>2.</label>
<p>This matrix is normalized by dividing each element by the largest one.</p>
</list-item>
<list-item id="j_infor419_li_003">
<label>3.</label>
<p>Denote the Hurst parameter by <italic>d</italic>, where <italic>d</italic> is a real number, <inline-formula id="j_infor419_ineq_048"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo mathvariant="normal">&gt;</mml:mo><mml:mn>0</mml:mn></mml:math>
<tex-math><![CDATA[$d>0$]]></tex-math></alternatives></inline-formula>.</p>
</list-item>
<list-item id="j_infor419_li_004">
<label>4.</label>
<p>Elements of the normalized distance matrix <italic>D</italic> are raised to the power of (<inline-formula id="j_infor419_ineq_049"><alternatives>
<mml:math><mml:mn>2</mml:mn><mml:mi mathvariant="italic">d</mml:mi></mml:math>
<tex-math><![CDATA[$2d$]]></tex-math></alternatives></inline-formula>). Denote this new fractional distance matrix as <italic>A</italic>, i.e. <inline-formula id="j_infor419_ineq_050"><alternatives>
<mml:math><mml:mi mathvariant="italic">A</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[$A={D^{2d}}$]]></tex-math></alternatives></inline-formula>.</p>
</list-item>
<list-item id="j_infor419_li_005">
<label>5.</label>
<p>The <italic>kriging prediction</italic> of a new (testing) picture emotion is made by using the a posteriori expectation:</p>
</list-item>
</list> 
<disp-formula id="j_infor419_eq_007">
<label>(5)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mi mathvariant="italic">a</mml:mi><mml:mo>+</mml:mo><mml:mi mathvariant="italic">E</mml:mi><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">a</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo><mml:mspace width="2em"/><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi 
mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mi mathvariant="italic">a</mml:mi><mml:mo>+</mml:mo><mml:mi mathvariant="italic">E</mml:mi><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>−</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">a</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ {z_{1}}={y_{1}^{T}}{A^{-1}}\bigg(a+E\frac{1-{E^{T}}{A^{-1}}a}{{E^{T}}{A^{-1}}E}\bigg),\hspace{2em}{z_{2}}={y_{2}^{T}}{A^{-1}}\bigg(a+E\frac{1-{E^{T}}{A^{-1}}a}{{E^{T}}{A^{-1}}E}\bigg).\]]]></tex-math></alternatives>
</disp-formula>
</p>
<p>Here, <inline-formula id="j_infor419_ineq_051"><alternatives>
<mml:math><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math>
<tex-math><![CDATA[${A^{-1}}$]]></tex-math></alternatives></inline-formula> is the inverse matrix of <italic>A</italic>, <italic>E</italic> is a unit column vector of size <inline-formula id="j_infor419_ineq_052"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mi mathvariant="italic">N</mml:mi><mml:mo>×</mml:mo><mml:mn>1</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[N\times 1]$]]></tex-math></alternatives></inline-formula>, and <italic>a</italic> <inline-formula id="j_infor419_ineq_053"><alternatives>
<mml:math><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mi mathvariant="italic">N</mml:mi><mml:mo>×</mml:mo><mml:mn>1</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$[N\times 1]$]]></tex-math></alternatives></inline-formula> is a distance vector, the elements of which are fractional Euclidean distances between a new (testing) data point and all the training data points. A new (testing) data point corresponds to a new picture whose emotion is being predicted. The training data points describe pictures whose emotions are known in advance. The meaning of <inline-formula id="j_infor419_ineq_054"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_055"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{2}}$]]></tex-math></alternatives></inline-formula> are described above. Outputs <inline-formula id="j_infor419_ineq_056"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_057"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{2}}$]]></tex-math></alternatives></inline-formula> correspond to the first and the second prediction parameters, respectively. With regard to the emotion model employed in this research (Fig. <xref rid="j_infor419_fig_010">10</xref>), the values of <inline-formula id="j_infor419_ineq_058"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_059"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{2}}$]]></tex-math></alternatives></inline-formula> denote the first (valence) and the second (arousal) coordinates, respectively, of the predicted emotion of a testing picture in the two-dimensional circumplex space.</p>
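As a minimal sketch of how Eq. (5) can be evaluated numerically (assuming NumPy; the function name and the toy inputs are illustrative, not from the paper):

```python
import numpy as np

def kriging_predict(A, a, y1, y2):
    """Evaluate Eq. (5): kriging predictions z1 (valence), z2 (arousal).

    A  : (N, N) matrix used by the kriging predictor (assumed invertible)
    a  : (N,)   fractional Euclidean distances between the new (testing)
                point and the N training points
    y1 : (N,)   valence values of the training pictures
    y2 : (N,)   arousal values of the training pictures
    """
    Ainv = np.linalg.inv(A)
    E = np.ones(len(a))  # unit column vector of size [N x 1]
    # shared bracketed term of Eq. (5): a + E * (1 - E^T A^-1 a) / (E^T A^-1 E)
    w = Ainv @ (a + E * (1.0 - E @ Ainv @ a) / (E @ Ainv @ E))
    return y1 @ w, y2 @ w  # (z1, z2)
```

Note that the resulting weights <monospace>w</monospace> sum to one, so a constant training feature is reproduced exactly; this gives a quick sanity check of an implementation.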
<p>The kriging predictor algorithm has only one unknown parameter, <italic>d</italic>. The first investigation is performed to find the optimal value of <italic>d</italic>. First, the <italic>maximum likelihood</italic> (<italic>ML</italic>) <italic>function</italic> of the picture emotion features <inline-formula id="j_infor419_ineq_060"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{1}}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor419_ineq_061"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${y_{2}}$]]></tex-math></alternatives></inline-formula> is determined as follows: 
<disp-formula id="j_infor419_eq_008">
<label>(6)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="italic">f</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo movablelimits="false">ln</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mo stretchy="false">|</mml:mo><mml:mi mathvariant="italic">C</mml:mi><mml:mo stretchy="false">|</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo movablelimits="false">ln</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:mo stretchy="false">‖</mml:mo><mml:mi mathvariant="italic">A</mml:mi><mml:mo stretchy="false">‖</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ f=\frac{\ln (|C|)}{2}+\frac{\ln (\| A\| )}{N},\]]]></tex-math></alternatives>
</disp-formula> 
where <inline-formula id="j_infor419_ineq_062"><alternatives>
<mml:math><mml:mo stretchy="false">‖</mml:mo><mml:mi mathvariant="italic">A</mml:mi><mml:mo stretchy="false">‖</mml:mo></mml:math>
<tex-math><![CDATA[$\| A\| $]]></tex-math></alternatives></inline-formula> is the absolute value of the determinant of the matrix <italic>A</italic>, and <inline-formula id="j_infor419_ineq_063"><alternatives>
<mml:math><mml:mo stretchy="false">|</mml:mo><mml:mi mathvariant="italic">C</mml:mi><mml:mo stretchy="false">|</mml:mo></mml:math>
<tex-math><![CDATA[$|C|$]]></tex-math></alternatives></inline-formula> is the determinant of the symmetric a posteriori covariance matrix <inline-formula id="j_infor419_ineq_064"><alternatives>
<mml:math><mml:mi mathvariant="italic">C</mml:mi><mml:mo>=</mml:mo><mml:mo mathvariant="normal" fence="true" maxsize="1.61em" minsize="1.61em">(</mml:mo><mml:mtable columnspacing="4.0pt" equalrows="false" columnlines="none" equalcolumns="false" columnalign="center center"><mml:mtr><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub><mml:mspace width="1em"/></mml:mtd><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mspace width="1em"/></mml:mtd><mml:mtd class="array"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable><mml:mo mathvariant="normal" fence="true" maxsize="1.61em" minsize="1.61em">)</mml:mo></mml:math>
<tex-math><![CDATA[$C=\Big(\begin{array}{c@{\hskip4.0pt}c}{c_{11}}\hspace{1em}& {c_{12}}\\ {} {c_{21}}\hspace{1em}& {c_{22}}\end{array}\Big)$]]></tex-math></alternatives></inline-formula>, elements of which are calculated as follows: 
<disp-formula id="j_infor419_eq_009">
<label>(7)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true" columnalign="right left" columnspacing="0pt"><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>−</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi 
mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>−</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi 
mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">N</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo><mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi><mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="italic">E</mml:mi></mml:mrow><mml:mrow><mml:mi 
mathvariant="italic">T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="italic">E</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>−</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="italic">T</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi mathvariant="italic">A</mml:mi></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo><mml:mo mathvariant="normal">,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd class="align-odd"/><mml:mtd class="align-even"><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="italic">c</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[\begin{aligned}{}& {c_{11}}=\frac{1}{N}\bigg(\frac{{({y_{1}^{T}}{A^{-1}}E)^{2}}}{{E^{T}}{A^{-1}}E}-{y_{1}^{T}}{A^{-1}}{y_{1}}\bigg),\\ {} & {c_{22}}=\frac{1}{N}\bigg(\frac{{({y_{2}^{T}}{A^{-1}}E)^{2}}}{{E^{T}}{A^{-1}}E}-{y_{2}^{T}}{A^{-1}}{y_{2}}\bigg),\\ {} & {c_{12}}=\frac{1}{N}\bigg(\frac{({y_{1}^{T}}{A^{-1}}E)({y_{2}^{T}}{A^{-1}}E)}{{E^{T}}{A^{-1}}E}-{y_{1}^{T}}{A^{-1}}{y_{2}}\bigg),\\ {} & {c_{21}}={c_{12}}.\end{aligned}\]]]></tex-math></alternatives>
</disp-formula>
</p>
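A hedged sketch of Eqs. (6)-(7) in NumPy (the function name is illustrative, not from the paper):

```python
import numpy as np

def ml_function(A, y1, y2):
    """Eq. (6): f = ln(|C|)/2 + ln(||A||)/N, with C given by Eq. (7).

    ||A|| is the absolute value of det(A); |C| is the determinant of the
    symmetric 2x2 a posteriori covariance matrix C.
    """
    N = len(y1)
    Ainv = np.linalg.inv(A)
    E = np.ones(N)
    s = E @ Ainv @ E                          # E^T A^-1 E
    p1, p2 = y1 @ Ainv @ E, y2 @ Ainv @ E     # y_k^T A^-1 E
    c11 = (p1 * p1 / s - y1 @ Ainv @ y1) / N
    c22 = (p2 * p2 / s - y2 @ Ainv @ y2) / N
    c12 = (p1 * p2 / s - y1 @ Ainv @ y2) / N  # c21 = c12
    det_C = c11 * c22 - c12 * c12
    return np.log(det_C) / 2.0 + np.log(abs(np.linalg.det(A))) / N
```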
<p>In the next step, values of the ML function <italic>f</italic> are calculated for various values of the parameter <italic>d</italic>, i.e. <inline-formula id="j_infor419_ineq_065"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.01</mml:mn><mml:mo>;</mml:mo><mml:mn>1.05</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$d\in [0.01;1.05]$]]></tex-math></alternatives></inline-formula>. As a result, the dependence of the ML function <italic>f</italic> on the parameter <italic>d</italic> is obtained (Fig. <xref rid="j_infor419_fig_013">13</xref>). Figure <xref rid="j_infor419_fig_013">13</xref> shows that this function is concave upward and has a single local minimum at <inline-formula id="j_infor419_ineq_066"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo>=</mml:mo><mml:mn>0.83</mml:mn></mml:math>
<tex-math><![CDATA[$d=0.83$]]></tex-math></alternatives></inline-formula> for the considered example.</p>
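The search for the optimal <italic>d</italic> reduces to a one-dimensional grid minimization; a sketch follows (the quadratic surrogate in the test is a toy stand-in, since evaluating the true ML function requires the matrix <italic>A</italic> built from the training distances for each candidate <italic>d</italic>):

```python
def fit_d(f, grid):
    """Return the d in `grid` that minimizes the ML function f(d) (cf. Fig. 13)."""
    return min(grid, key=f)

# grid d in [0.01; 1.05] with step 0.01, as in the investigation above
d_grid = [round(0.01 * k, 2) for k in range(1, 106)]
```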
<fig id="j_infor419_fig_013">
<label>Fig. 13</label>
<caption>
<p>Dependence of the maximum likelihood function <italic>f</italic> on the parameter <italic>d</italic>.</p>
</caption>
<graphic xlink:href="infor419_g013.jpg"/>
</fig>
</sec>
<sec id="j_infor419_s_018">
<label>6</label>
<title>Experimental Exploration of the Kriging Predictor for Facial Emotion Recognition</title>
<p>The first investigation is carried out to recognize the emotion of a particular picture and evaluate the result obtained, as well as to verify that the optimal value <inline-formula id="j_infor419_ineq_067"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>0.83</mml:mn></mml:math>
<tex-math><![CDATA[$\hat{d}=0.83$]]></tex-math></alternatives></inline-formula> has been assessed properly.</p>
<p>In fact, we have a problem of classification into seven classes. Let the analysed picture data set <inline-formula id="j_infor419_ineq_068"><alternatives>
<mml:math><mml:mi mathvariant="double-struck">X</mml:mi></mml:math>
<tex-math><![CDATA[$\mathbb{X}$]]></tex-math></alternatives></inline-formula> of size <italic>N</italic> be divided into two groups: testing and training data so that the testing data consist of only one picture and the training data are comprised of the remaining ones. In this way, <inline-formula id="j_infor419_ineq_069"><alternatives>
<mml:math><mml:mi mathvariant="italic">N</mml:mi><mml:mo>=</mml:mo><mml:mn>210</mml:mn></mml:math>
<tex-math><![CDATA[$N=210$]]></tex-math></alternatives></inline-formula> experiments have been performed. In the <italic>i</italic>th experiment, the <italic>i</italic>th picture emotion (<inline-formula id="j_infor419_ineq_070"><alternatives>
<mml:math><mml:mi mathvariant="italic">i</mml:mi><mml:mo>=</mml:mo><mml:mover accent="false"><mml:mrow><mml:mn>1</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:mrow><mml:mo accent="true">‾</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$i=\overline{1,N}$]]></tex-math></alternatives></inline-formula>) is identified. Training the classifier amounts to training the kriging predictor. According to formula (<xref rid="j_infor419_eq_007">5</xref>), the two coordinates (<inline-formula id="j_infor419_ineq_071"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{1}}$]]></tex-math></alternatives></inline-formula> (valence) and <inline-formula id="j_infor419_ineq_072"><alternatives>
<mml:math><mml:msub><mml:mrow><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math>
<tex-math><![CDATA[${z_{2}}$]]></tex-math></alternatives></inline-formula> (arousal)) of this picture emotion are predicted by kriging, and the emotion is mapped as a new point in the two-dimensional circumplex space. Then, the <italic>i</italic>th picture emotion is classified. The task is to find out which of the seven basic emotions (Table <xref rid="j_infor419_tab_001">1</xref>) is the nearest to the <italic>i</italic>th picture emotion mapped in the emotion model (Fig. <xref rid="j_infor419_fig_010">10</xref>, Fig. <xref rid="j_infor419_fig_014">14</xref>). For this purpose, a proximity measure based on the Euclidean distance is used: the distances between the mapped picture emotion and all the basic emotions (Table <xref rid="j_infor419_tab_001">1</xref>) are calculated, and the basic emotion with the smallest distance identifies the picture emotion. As a result, we get the emotion class to which the testing <italic>i</italic>th picture emotion belongs.</p>
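The nearest-basic-emotion decision rule can be sketched as follows (the (valence, arousal) coordinates below are illustrative placeholders; the actual values come from Table 1):

```python
# Hypothetical (valence, arousal) coordinates of the seven basic emotions;
# replace with the values from Table 1 of the paper.
BASIC_EMOTIONS = {
    "Joy":      (0.85, 0.30),
    "Sadness":  (-0.70, -0.35),
    "Surprise": (0.10, 0.85),
    "Disgust":  (-0.65, 0.40),
    "Anger":    (-0.60, 0.70),
    "Fear":     (-0.45, 0.80),
    "Neutral":  (0.00, 0.00),
}

def classify_emotion(z1, z2, emotions=BASIC_EMOTIONS):
    """Assign the predicted point (z1, z2) to the basic emotion with the
    smallest Euclidean distance in the circumplex plane."""
    return min(emotions,
               key=lambda e: (emotions[e][0] - z1) ** 2 + (emotions[e][1] - z2) ** 2)
```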
<p>The efficiency of the classifier is estimated after running all <italic>N</italic> experiments, each time picking a different <italic>i</italic>th picture for testing (<italic>N</italic> runs). Since the true picture emotions are known in advance, it is possible to find out how many picture emotions from the whole picture set (<inline-formula id="j_infor419_ineq_073"><alternatives>
<mml:math><mml:mi mathvariant="italic">N</mml:mi><mml:mo>=</mml:mo><mml:mn>210</mml:mn></mml:math>
<tex-math><![CDATA[$N=210$]]></tex-math></alternatives></inline-formula>) are classified (recognized) successfully. Classification accuracy (CA) is calculated as the ratio of the number of correctly classified picture emotions to the total number of pictures as follows: 
<disp-formula id="j_infor419_eq_010">
<label>(8)</label><alternatives>
<mml:math display="block"><mml:mtable displaystyle="true"><mml:mtr><mml:mtd><mml:mtext mathvariant="italic">CA</mml:mtext><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:mfrac><mml:mrow><mml:mtext>the number of correctly classified picture emotions</mml:mtext></mml:mrow><mml:mrow><mml:mtext>the total number of pictures</mml:mtext></mml:mrow></mml:mfrac></mml:mstyle><mml:mn>100</mml:mn><mml:mi mathvariant="normal">%</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math>
<tex-math><![CDATA[\[ \textit{CA}=\frac{\text{the number of correctly classified picture emotions}}{\text{the total number of pictures}}100\% .\]]]></tex-math></alternatives>
</disp-formula>
</p>
<p>Figure <xref rid="j_infor419_fig_015">15</xref> illustrates the dependence of the picture emotion classification accuracy (CA) (%) on the parameter <italic>d</italic>, as <inline-formula id="j_infor419_ineq_074"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.1</mml:mn><mml:mo>;</mml:mo><mml:mn>1.05</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$d\in [0.1;1.05]$]]></tex-math></alternatives></inline-formula>. It is obvious from this figure that the best accuracy, i.e. <inline-formula id="j_infor419_ineq_075"><alternatives>
<mml:math><mml:mi mathvariant="normal">CA</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>49</mml:mn><mml:mi mathvariant="normal">%</mml:mi><mml:mo>;</mml:mo><mml:mn>50</mml:mn><mml:mi mathvariant="normal">%</mml:mi><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$\mathrm{CA}\in [49\% ;50\% ]$]]></tex-math></alternatives></inline-formula>, is obtained as <inline-formula id="j_infor419_ineq_076"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.68</mml:mn><mml:mo>;</mml:mo><mml:mn>0.92</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$d\in [0.68;0.92]$]]></tex-math></alternatives></inline-formula>. When the optimal value of the parameter <italic>d</italic> is chosen, i.e. <inline-formula id="j_infor419_ineq_077"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>0.83</mml:mn></mml:math>
<tex-math><![CDATA[$\hat{d}=0.83$]]></tex-math></alternatives></inline-formula>, the classification accuracy is 50%. Since the best classification results are obtained as <inline-formula id="j_infor419_ineq_078"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.68</mml:mn><mml:mo>;</mml:mo><mml:mn>0.92</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$d\in [0.68;0.92]$]]></tex-math></alternatives></inline-formula> and the optimal value of the parameter <italic>d</italic> belongs to this range as well, i.e. <inline-formula id="j_infor419_ineq_079"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>0.83</mml:mn><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.68</mml:mn><mml:mo>;</mml:mo><mml:mn>0.92</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$\hat{d}=0.83\in [0.68;0.92]$]]></tex-math></alternatives></inline-formula>, it means that the optimal value <inline-formula id="j_infor419_ineq_080"><alternatives>
<mml:math><mml:mover accent="true"><mml:mrow><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mo stretchy="false">ˆ</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>0.83</mml:mn></mml:math>
<tex-math><![CDATA[$\hat{d}=0.83$]]></tex-math></alternatives></inline-formula> has been established properly by the ML method.</p>
<fig id="j_infor419_fig_014">
<label>Fig. 14</label>
<caption>
<p>The basic emotions, depicted in the analysed model of emotions. The coordinates of points are given in Table <xref rid="j_infor419_tab_001">1</xref>.</p>
</caption>
<graphic xlink:href="infor419_g014.jpg"/>
</fig>
<fig id="j_infor419_fig_015">
<label>Fig. 15</label>
<caption>
<p>The dependence of the picture emotion classification accuracy on the parameter <italic>d</italic>.</p>
</caption>
<graphic xlink:href="infor419_g015.jpg"/>
</fig>
<p>Figure <xref rid="j_infor419_fig_016">16</xref> shows the mapping of predicted coordinates (valence and arousal) of all the 210 picture emotions in the two-dimensional circumplex space. It is obvious that <italic>Joy</italic> is predicted most precisely. However, the remaining emotions overlap quite strongly.</p>
<fig id="j_infor419_fig_016">
<label>Fig. 16</label>
<caption>
<p>The mapping of predicted coordinates of all the 210 picture emotions in the two-dimensional circumplex space.</p>
</caption>
<graphic xlink:href="infor419_g016.jpg"/>
</fig>
<p>For a deeper analysis of this classification, a confusion matrix of the seven basic emotions is given in Table <xref rid="j_infor419_tab_002">2</xref>. The highest true positive rates were observed for <italic>Joy</italic> (80%), <italic>Neutral</italic> (76.7%), and <italic>Disgust</italic> (60%). The most frequent misclassifications (the numbers are written in red) were observed for <italic>Anger</italic> (56.7% of pictures with the <italic>Anger</italic> emotion were classified as <italic>Disgust</italic>), <italic>Fear</italic> (36.7% as <italic>Surprise</italic>), <italic>Sadness</italic> (36.7% as <italic>Neutral</italic>), and <italic>Surprise</italic> (33.3% as <italic>Fear</italic>).</p>
<table-wrap id="j_infor419_tab_002">
<label>Table 2</label>
<caption>
<p>Confusion matrix of the seven basic emotions.</p>
</caption>
<graphic xlink:href="infor419_g017.jpg"/>
</table-wrap>
<p>The second investigation is similar to the first one in that the <italic>i</italic>th picture emotion (<inline-formula id="j_infor419_ineq_081"><alternatives>
<mml:math><mml:mi mathvariant="italic">i</mml:mi><mml:mo>=</mml:mo><mml:mover accent="false"><mml:mrow><mml:mn>1</mml:mn><mml:mo mathvariant="normal">,</mml:mo><mml:mi mathvariant="italic">N</mml:mi></mml:mrow><mml:mo accent="true">‾</mml:mo></mml:mover></mml:math>
<tex-math><![CDATA[$i=\overline{1,N}$]]></tex-math></alternatives></inline-formula>) is identified as well. However, in the second investigation, unlike the first one, several basic emotions are combined into one group. At first, three basic emotions, <italic>Fear</italic>, <italic>Anger</italic>, and <italic>Disgust</italic>, are combined into one group. This is reasonable because all three emotions have negative valence and high arousal, i.e. they are all located in the second quadrant of the analysed model of emotions (Fig. <xref rid="j_infor419_fig_014">14</xref>). In this case, we have a problem of classification into five classes: {<italic>Fear</italic>, <italic>Anger</italic>, <italic>Disgust</italic>}, {<italic>Surprise</italic>}, {<italic>Joy</italic>}, {<italic>Neutral</italic>}, and {<italic>Sadness</italic>}. Subsequently, four emotions, i.e. <italic>Fear</italic>, <italic>Anger</italic>, <italic>Disgust</italic>, and <italic>Surprise</italic>, are grouped together. The fourth emotion, <italic>Surprise</italic>, is added to the previous three-emotion group because pictures with the <italic>Surprise</italic> and <italic>Fear</italic> emotions are similar (see Fig. <xref rid="j_infor419_fig_012">12</xref>) and because <italic>Surprise</italic> and <italic>Fear</italic> are very close neighbours in the two-dimensional model of emotions (Fig. <xref rid="j_infor419_fig_014">14</xref>). For this reason, the picture emotion <italic>Surprise</italic> is very often classified as <italic>Fear</italic> and vice versa. So, we have a problem of classification into four classes: {<italic>Fear</italic>, <italic>Anger</italic>, <italic>Disgust</italic>, <italic>Surprise</italic>}, {<italic>Joy</italic>}, {<italic>Neutral</italic>}, and {<italic>Sadness</italic>}. 
Since the true picture emotions and the emotion groups created are known in advance, the classification accuracy of the picture emotion set (of size <italic>N</italic>) can be calculated. A picture emotion is identified correctly if the true emotion, or the group it belongs to, coincides with the identified one. Averaged values of the classification accuracy (%), when <inline-formula id="j_infor419_ineq_082"><alternatives>
<mml:math><mml:mi mathvariant="italic">d</mml:mi><mml:mo stretchy="false">∈</mml:mo><mml:mo fence="true" stretchy="false">[</mml:mo><mml:mn>0.7</mml:mn><mml:mo>;</mml:mo><mml:mn>0.9</mml:mn><mml:mo fence="true" stretchy="false">]</mml:mo></mml:math>
<tex-math><![CDATA[$d\in [0.7;0.9]$]]></tex-math></alternatives></inline-formula>, are as follows: <inline-formula id="j_infor419_ineq_083"><alternatives>
<mml:math><mml:mi mathvariant="normal">CA</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn><mml:mi mathvariant="normal">%</mml:mi></mml:math>
<tex-math><![CDATA[$\mathrm{CA}=50\% $]]></tex-math></alternatives></inline-formula>, when emotions are not grouped; in the case of the 3-emotion group, <inline-formula id="j_infor419_ineq_084"><alternatives>
<mml:math><mml:mi mathvariant="normal">CA</mml:mi><mml:mo>=</mml:mo><mml:mn>64</mml:mn><mml:mi mathvariant="normal">%</mml:mi></mml:math>
<tex-math><![CDATA[$\mathrm{CA}=64\% $]]></tex-math></alternatives></inline-formula>; and, in the case of the 4-emotion group, <inline-formula id="j_infor419_ineq_085"><alternatives>
<mml:math><mml:mi mathvariant="normal">CA</mml:mi><mml:mo>=</mml:mo><mml:mn>76</mml:mn><mml:mi mathvariant="normal">%</mml:mi></mml:math>
<tex-math><![CDATA[$\mathrm{CA}=76\% $]]></tex-math></alternatives></inline-formula>. Thus, a rather good classification accuracy, i.e. <inline-formula id="j_infor419_ineq_086"><alternatives>
<mml:math><mml:mn>76</mml:mn><mml:mi mathvariant="normal">%</mml:mi></mml:math>
<tex-math><![CDATA[$76\% $]]></tex-math></alternatives></inline-formula>, is achieved when 4 emotions are grouped together.</p>
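<p>The accuracy criterion described above can be computed as a simple hit rate. A minimal sketch, assuming the true and predicted emotions are given as lists of label strings and an optional grouping map (the function name and signature are our own):</p>

```python
def classification_accuracy(true_emotions, predicted_emotions, groups=None):
    """Share of pictures whose predicted emotion (or, if a grouping is
    supplied, whose emotion group) coincides with the true one, in %."""
    # Without a grouping, compare raw emotion labels; with one, compare
    # the class labels the emotions are mapped to.
    label = (lambda e: groups[e]) if groups else (lambda e: e)
    hits = sum(label(t) == label(p)
               for t, p in zip(true_emotions, predicted_emotions))
    return 100.0 * hits / len(true_emotions)
```

<p>For example, a prediction of <italic>Anger</italic> for a true <italic>Fear</italic> picture counts as an error without grouping, but as a hit once both emotions belong to the same class.</p>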
</sec>
<sec id="j_infor419_s_019">
<label>7</label>
<title>Conclusions</title>
<p>Facial emotion recognition (FER) is an important topic in computer vision and artificial intelligence. We have developed the method for FER, based on the dimensional model of emotions as well as using the kriging predictor of Fractional Brownian Vector Field. The classification problem, related to the recognition of facial emotions, is formulated and solved. We use the knowledge of expert psychologists about the similarity of various emotions in the plane. The goal is to get an estimate of a new picture emotion on the plane by kriging and determine which emotion, identified by psychologists, is the closest one. Seven basic emotions (<italic>Joy</italic>, <italic>Sadness</italic>, <italic>Surprise</italic>, <italic>Disgust</italic>, <italic>Anger</italic>, <italic>Fear</italic>, and <italic>Neutral</italic>) have been chosen. The experimental exploration has shown that the best classification accuracy corresponds to the optimal value of Hurst parameter, estimated by the maximum likelihood method. The accuracy of classification into seven classes has been obtained approximately 50%, if we make a decision on the basis of the closest basic emotion. It has been ascertained that the kriging predictor is suitable for facial emotion recognition in the case of small sets of pictures. More sophisticated classification strategies may increase the accuracy, when grouping of the basic emotions is applied.</p>
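<p>The decision rule summarized above, i.e. assigning a kriging-predicted point on the plane to the nearest basic emotion, can be sketched as follows. The valence–arousal coordinates below are purely illustrative placeholders; the actual coordinates in the paper are those provided by expert psychologists.</p>

```python
import math

# Hypothetical valence-arousal coordinates of the seven basic emotions
# (illustrative only; not the coordinates used in the paper).
BASIC_EMOTIONS = {
    "Joy": (0.8, 0.5), "Surprise": (0.1, 0.8), "Fear": (-0.6, 0.7),
    "Anger": (-0.7, 0.6), "Disgust": (-0.6, 0.4),
    "Sadness": (-0.7, -0.4), "Neutral": (0.0, 0.0),
}

def closest_emotion(point, emotions=BASIC_EMOTIONS):
    """Assign a kriging-predicted point on the valence-arousal plane
    to the nearest basic emotion by Euclidean distance."""
    return min(emotions, key=lambda e: math.dist(point, emotions[e]))
```

<p>A point predicted near the positive-valence, moderate-arousal region would thus be labelled <italic>Joy</italic>, while a point near the origin would be labelled <italic>Neutral</italic>.</p>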
</sec>
</body>
<back>
<ref-list id="j_infor419_reflist_001">
<title>References</title>
<ref id="j_infor419_ref_001">
<mixed-citation publication-type="book"><string-name><surname>Adolphs</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Anderson</surname>, <given-names>D.J.</given-names></string-name> (<year>2018</year>). <source>The Neuroscience of Emotion: A New Synthesis</source>. <publisher-name>Princeton University Press</publisher-name>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_002">
<mixed-citation publication-type="journal"><string-name><surname>Bhardwaj</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Dixit</surname>, <given-names>M.</given-names></string-name> (<year>2016</year>). <article-title>A review: facial expression detection with its techniques and application</article-title>. <source>International Journal of Signal Processing, Image Processing and Pattern Recognition</source>, <volume>9</volume>(<issue>6</issue>), <fpage>149</fpage>–<lpage>158</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_003">
<mixed-citation publication-type="journal"><string-name><surname>Bradley</surname>, <given-names>M.M.</given-names></string-name>, <string-name><surname>Greenwald</surname>, <given-names>M.K.</given-names></string-name>, <string-name><surname>Petry</surname>, <given-names>M.C.</given-names></string-name>, <string-name><surname>Lang</surname>, <given-names>P.J.</given-names></string-name> (<year>1992</year>). <article-title>Remembering pictures: pleasure and arousal in memory</article-title>. <source>Journal of Experimental Psychology: Learning, Memory and Cognition</source>, <volume>18</volume>(<issue>2</issue>), <fpage>379</fpage>–<lpage>390</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_004">
<mixed-citation publication-type="journal"><string-name><surname>Calvo</surname>, <given-names>R.A.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>S.M.</given-names></string-name> (<year>2013</year>). <article-title>Emotions in text: dimensional and categorical models</article-title>. <source>Computational Intelligence</source>, <volume>29</volume>(<issue>3</issue>), <fpage>527</fpage>–<lpage>543</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_005">
<mixed-citation publication-type="chapter"><string-name><surname>Cambria</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Livingstone</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Hussain</surname>, <given-names>A.</given-names></string-name> (<year>2012</year>). <chapter-title>The hourglass of emotions</chapter-title>. In: <string-name><surname>Esposito</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Viciarelli</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Hoffmann</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Muller</surname>, <given-names>V.</given-names></string-name> (Eds.), <source>Cognitive Behavioural Systems</source>, <series><italic>Lecture Notes in Computers Science</italic></series>, Vol. <volume>7403</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Berlin, Heidelberg</publisher-loc>, pp. <fpage>144</fpage>–<lpage>157</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_006">
<mixed-citation publication-type="chapter"><string-name><surname>Deshmukh</surname>, <given-names>R.S.</given-names></string-name>, <string-name><surname>Jagtap</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Paygude</surname>, <given-names>S.</given-names></string-name> (<year>2017</year>). <chapter-title>Facial emotion recognition system through machine learning approach</chapter-title>. In: <source>2017 International Conference on Intelligent Computing and Control Systems (ICICCS)</source>, pp. <fpage>272</fpage>–<lpage>277</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_007">
<mixed-citation publication-type="chapter"><string-name><surname>Dhall</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ramana Murthy</surname>, <given-names>O.V.</given-names></string-name>, <string-name><surname>Goecke</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Joshi</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gedeon</surname>, <given-names>T.</given-names></string-name> (<year>2015</year>). <chapter-title>Video and image based emotion recognition challenges in the wild: EmotiW 2015</chapter-title>. In: <source>Proceedings of the 2015 ACM on International Conference on Multimodal Interaction</source>, pp. <fpage>423</fpage>–<lpage>426</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>D’mello</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Graesser</surname>, <given-names>A.</given-names></string-name> (<year>2007</year>). <article-title>Mind and body: dialogue and posture for affect detection in learning environments</article-title>. <source>Frontiers in Artificial Intelligence and Applications</source>, <volume>158</volume>, <fpage>161</fpage>–<lpage>168</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_009">
<mixed-citation publication-type="journal"><string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name> (<year>2001</year>). <article-title>Visualization of a set of parameters characterized by their correlation matrix</article-title>. <source>Computational Statistics &amp; Data Analysis</source>, <volume>36</volume>(<issue>1</issue>), <fpage>15</fpage>–<lpage>30</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_010">
<mixed-citation publication-type="journal"><string-name><surname>Eerola</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Vuoskoski</surname>, <given-names>J.K.</given-names></string-name> (<year>2011</year>). <article-title>A comparison of the discrete and dimensional models of emotion in music</article-title>. <source>Psychology of Music</source>, <volume>39</volume>(<issue>1</issue>), <fpage>18</fpage>–<lpage>49</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_011">
<mixed-citation publication-type="book"><string-name><surname>Ekman</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Friesen</surname>, <given-names>W.V.</given-names></string-name> (<year>1978</year>). <source>Manual for the Facial Action Code</source>. <publisher-name>Consulting Psychologist Press</publisher-name>, <publisher-loc>Palo Alto, CA</publisher-loc>,</mixed-citation>
</ref>
<ref id="j_infor419_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Ekman</surname>, <given-names>P.</given-names></string-name> (<year>1992</year>). <article-title>An argument for basic emotions</article-title>. <source>Cognition and Emotion</source>, <volume>6</volume>(<issue>3</issue>), <fpage>169</fpage>–<lpage>200</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_013">
<mixed-citation publication-type="chapter"><string-name><surname>Ekman</surname>, <given-names>P.</given-names></string-name> (<year>1999</year>). <chapter-title>Basic emotions</chapter-title>. In: <string-name><surname>Dalgleish</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Powers</surname>, <given-names>M.J.</given-names></string-name> (Eds.), <source>Handbook of Cognition and Emotion</source>. <publisher-name>Wiley</publisher-name>, <publisher-loc>Hoboken</publisher-loc>, pp. <fpage>4</fpage>–<lpage>5</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_014">
<mixed-citation publication-type="journal"><string-name><surname>Farnsworth</surname>, <given-names>P.R.</given-names></string-name> (<year>1954</year>). <article-title>A study of the Hevner adjective list</article-title>. <source>The Journal of Aesthetics and Art Criticism</source>, <volume>13</volume>(<issue>1</issue>), <fpage>97</fpage>–<lpage>103</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_015">
<mixed-citation publication-type="journal"><string-name><surname>Ferdig</surname>, <given-names>R.E.</given-names></string-name>, <string-name><surname>Mishra</surname>, <given-names>P.</given-names></string-name> (<year>2004</year>). <article-title>Emotional responses to computers: experiences in unfairness, anger, and spite</article-title>. <source>Journal of Educational Multimedia and Hypermedia</source>, <volume>13</volume>(<issue>2</issue>), <fpage>143</fpage>–<lpage>161</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_016">
<mixed-citation publication-type="journal"><string-name><surname>Filella</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Cabello</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Pérez-Escoda</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Ros-Morente</surname>, <given-names>A.</given-names></string-name> (<year>2016</year>). <article-title>Evaluation of the emotional education program “Happy 8-12” for the assertive resolution of conflicts among peers</article-title>. <source>Electronic Journal of Research in Educational Psychology</source>, <volume>14</volume>(<issue>3</issue>), <fpage>582</fpage>–<lpage>601</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_017">
<mixed-citation publication-type="journal"><string-name><surname>Fontaine</surname>, <given-names>J.R.J.</given-names></string-name>, <string-name><surname>Scherer</surname>, <given-names>K.R.</given-names></string-name>, <string-name><surname>Roesch</surname>, <given-names>E.B.</given-names></string-name>, <string-name><surname>Ellsworth</surname>, <given-names>P.C.</given-names></string-name> (<year>2007</year>). <article-title>The world of emotions is not two-dimensional</article-title>. <source>Psychological Science</source>, <volume>18</volume>(<issue>12</issue>), <fpage>1050</fpage>–<lpage>1057</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_018">
<mixed-citation publication-type="journal"><string-name><surname>Gan</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>L.</given-names></string-name> (<year>2019</year>). <article-title>Facial expression recognition boosted by soft label with a diverse ensemble</article-title>. <source>Pattern Recognition Letters</source>, <volume>125</volume>, <fpage>105</fpage>–<lpage>112</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_019">
<mixed-citation publication-type="journal"><string-name><surname>Gobron</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Ahn</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Paltoglou</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Thelwall</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Thalmann</surname>, <given-names>D.</given-names></string-name> (<year>2010</year>). <article-title>From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text</article-title>. <source>Visual Computer</source>, <volume>26</volume>, <fpage>505</fpage>–<lpage>519</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_020">
<mixed-citation publication-type="chapter"><string-name><surname>Goodfellow</surname>, <given-names>I.J.</given-names></string-name>, <string-name><surname>Erhan</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Carrier</surname>, <given-names>P.L.</given-names></string-name>, <string-name><surname>Courville</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Mirza</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hamner</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Cukierski</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Tang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Thaler</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>D.-H.</given-names></string-name>, <string-name><surname>Zhou</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ramaiah</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Feng</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Athanasakis</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Shawe-Taylor</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Milakov</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Park</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Ionescu</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Popescu</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Grozea</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Bergstra</surname>, 
<given-names>J.</given-names></string-name>, <string-name><surname>Xie</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Romaszko</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Chuang</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Bengio</surname>, <given-names>Y.</given-names></string-name> (<year>2013</year>). <chapter-title>Challenges in representation learning: a report on three machine learning contests</chapter-title>. In: <source>International Conference on Neural Information Processing</source>. <publisher-name>Springer</publisher-name>, pp. <fpage>117</fpage>–<lpage>124</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_021">
<mixed-citation publication-type="book"><string-name><surname>Grekow</surname>, <given-names>J.</given-names></string-name> (<year>2018</year>). <source>From Content-Based Music Emotion Recognition to Emotion Maps of Musical Pieces</source>. <series><italic>Studies in Computational Intelligence</italic></series>, Vol. <volume>747</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Warsaw, Poland</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Hevner</surname>, <given-names>K.</given-names></string-name> (<year>1936</year>). <article-title>Experimental studies of the elements of expression in music</article-title>. <source>American Journal of Psychology</source>, <volume>48</volume>(<issue>2</issue>), <fpage>246</fpage>–<lpage>268</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_023">
<mixed-citation publication-type="chapter"><string-name><surname>Hu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Downie</surname>, <given-names>J.S.</given-names></string-name> (<year>2007</year>). <chapter-title>Exploring mood metadata: relationships with genre, artist and usage metadata</chapter-title>. In: <source>Proceedings of the 8th International Conferenceon Music Information Retrieval</source>, <conf-loc>Vienna, Austria</conf-loc>, pp. <fpage>67</fpage>–<lpage>72</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_024">
<mixed-citation publication-type="chapter"><string-name><surname>Hu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Downie</surname>, <given-names>J.S.</given-names></string-name>, <string-name><surname>Laurier</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Bay</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ehmann</surname>, <given-names>A.F.</given-names></string-name> (<year>2008</year>). <chapter-title>The 2007 MIREX audio mood classification task: lessons learned</chapter-title>. In: <source>ISMIR 2008, 9th International Conference on Music Information Retrieval</source>, <conf-loc>Philadelphia, PA, USA</conf-loc>, pp. <fpage>462</fpage>–<lpage>467</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_025">
<mixed-citation publication-type="chapter"><string-name><surname>Yancong</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Ruidong</surname>, <given-names>P.</given-names></string-name> (<year>2011</year>). <chapter-title>Image analysis based on fractional Brownian motion dimension</chapter-title>. In: <source>2011 IEEE International Conference on Computer Science and Automation Engineering</source>, <conf-loc>Shanghai</conf-loc>, Vol. <volume>2</volume>, pp. <fpage>15</fpage>–<lpage>19</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_026">
<mixed-citation publication-type="journal"><string-name><surname>Johnson-Laird</surname>, <given-names>P.N.</given-names></string-name>, <string-name><surname>Oatley</surname>, <given-names>K.</given-names></string-name> (<year>1989</year>). <article-title>The language of emotions: an analysis of a semantic field</article-title>. <source>Cognition and Emotion</source>, <volume>3</volume>(<issue>2</issue>), <fpage>81</fpage>–<lpage>123</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_027">
<mixed-citation publication-type="journal"><string-name><surname>Jones</surname>, <given-names>D.R.</given-names></string-name> (<year>2001</year>). <article-title>A taxonomy of global optimization methods based on response surfaces</article-title>. <source>Journal of Global Optimization</source>, <volume>21</volume>, <fpage>345</fpage>–<lpage>383</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_028">
<mixed-citation publication-type="journal"><string-name><surname>Ko</surname>, <given-names>B.C.</given-names></string-name> (<year>2018</year>). <article-title>A brief review of facial emotion recognition based on visual information</article-title>. <source>Sensors</source>, <volume>18</volume> <elocation-id>401</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/s18020401" xlink:type="simple">https://doi.org/10.3390/s18020401</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_029">
<mixed-citation publication-type="chapter"><string-name><surname>Li</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>J.</given-names></string-name> (<year>2017</year>). <chapter-title>Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild</chapter-title>. In: <source>2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>, pp. <fpage>2584</fpage>–<lpage>2593</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_030">
<mixed-citation publication-type="journal"><string-name><surname>Lövheim</surname>, <given-names>H.</given-names></string-name> (<year>2011</year>). <article-title>A new three-dimensional model for emotions and monoamine neurotransmitters</article-title>. <source>Medical Hypotheses</source>, <volume>78</volume>(<issue>2</issue>), <fpage>341</fpage>–<lpage>348</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_031">
<mixed-citation publication-type="chapter"><string-name><surname>Mariappan</surname>, <given-names>M.B.</given-names></string-name>, <string-name><surname>Suk</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Prabhakaran</surname>, <given-names>B.</given-names></string-name> (<year>2012</year>). <chapter-title>Facefetch: a user emotion driven multimedia content recommendation system based on facial expression recognition</chapter-title>. In: <source>Proceedings of the 2012 IEEE International Symposium on Multimedia</source>, pp. <fpage>84</fpage>–<lpage>87</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_032">
<mixed-citation publication-type="journal"><string-name><surname>Maupome</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Isyutina</surname>, <given-names>O.</given-names></string-name> (<year>2013</year>). <article-title>Dental students’ and faculty members’ concepts and emotions associated with a caries risk assessment program</article-title>. <source>Journal of Dental Education</source>, <volume>77</volume>(<issue>11</issue>), <fpage>1477</fpage>–<lpage>1487</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_033">
<mixed-citation publication-type="book"><string-name><surname>Mehrabian</surname>, <given-names>A.</given-names></string-name> (<year>1980</year>). <source>Basic dimensions for a general psychological theory</source>. <publisher-name>Oelgeschlager, Gunn &amp; Hain</publisher-name>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_034">
<mixed-citation publication-type="journal"><string-name><surname>Mehrabian</surname>, <given-names>A.</given-names></string-name> (<year>1996</year>). <article-title>Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament</article-title>. <source>Current Psychology</source>, <volume>14</volume>(<issue>4</issue>), <fpage>261</fpage>–<lpage>292</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_035">
<mixed-citation publication-type="book"><string-name><surname>Mehrabian</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Russell</surname>, <given-names>J.A.</given-names></string-name> (<year>1974</year>). <source>An Approach to Environmental Psychology</source>. <publisher-name>MIT Press</publisher-name>, <publisher-loc>Cambridge</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_036">
<mixed-citation publication-type="journal"><string-name><surname>Metcalfe</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>McKenzie</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>McCarty</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Pollet</surname>, <given-names>T.V.</given-names></string-name> (<year>2019</year>). <article-title>Emotion recognition from body movement and gesture in children with Autism Spectrum Disorder is improved by situational cues</article-title>. <source>Research in Developmental Disabilities</source>, <volume>86</volume>, <fpage>1</fpage>–<lpage>10</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ridd" xlink:type="simple">https://doi.org/10.1016/j.ridd</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_037">
<mixed-citation publication-type="journal"><string-name><surname>Nonis</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Dagnes</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Marcolin</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Vezzetti</surname>, <given-names>E.</given-names></string-name> (<year>2019</year>). <article-title>3D approaches and challenges in facial expression recognition algorithms — a literature review</article-title>. <source>Applied Sciences</source>, <volume>9</volume>(<issue>3904</issue>).</mixed-citation>
</ref>
<ref id="j_infor419_ref_038">
<mixed-citation publication-type="journal"><string-name><surname>Olszanowski</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Pochwatko</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Kukliński</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Ścibor-Rylski</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lewinski</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Ohme</surname>, <given-names>R.</given-names></string-name> (<year>2015</year>). <article-title>Warsaw set of emotional facial expression pictures: a validation study of facial display photographs</article-title>. <source>Frontiers in Psychology</source>, <volume>5</volume>(<issue>1516</issue>).</mixed-citation>
</ref>
<ref id="j_infor419_ref_039">
<mixed-citation publication-type="journal"><string-name><surname>Paltoglou</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Thelwall</surname>, <given-names>M.</given-names></string-name> (<year>2013</year>). <article-title>Seeing stars of valence and arousal in blog posts</article-title>. <source>IEEE Transactions on Affective Computing</source>, <volume>4</volume>(<issue>1</issue>), <fpage>116</fpage>–<lpage>123</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_040">
<mixed-citation publication-type="journal"><string-name><surname>Plutchik</surname>, <given-names>R.</given-names></string-name> (<year>2001</year>). <article-title>The nature of emotions</article-title>. <source>American Scientist</source>, <volume>89</volume>(<issue>4</issue>), <fpage>344</fpage>–<lpage>350</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_041">
<mixed-citation publication-type="book"><string-name><surname>Plutchik</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Kellerman</surname>, <given-names>K.</given-names></string-name> (<year>1980</year>). <source>Emotion: Theory, Research, and Experience</source>. <publisher-name>Academic Press</publisher-name>, <publisher-loc>London</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_042">
<mixed-citation publication-type="journal"><string-name><surname>Pozniak</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Sakalauskas</surname>, <given-names>L.</given-names></string-name> (<year>2017</year>). <article-title>Fractional Euclidean distance matrices extrapolator for scattered data</article-title>. <source>Journal of Young Scientists</source>, <volume>47</volume>(<issue>2</issue>), <fpage>56</fpage>–<lpage>61</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_043">
<mixed-citation publication-type="journal"><string-name><surname>Pozniak</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Sakalauskas</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Saltyte</surname>, <given-names>L.</given-names></string-name> (<year>2019</year>). <article-title>Kriging model with fractional Euclidean distance matrices</article-title>. <source>Informatica</source>, <volume>30</volume>(<issue>2</issue>), <fpage>367</fpage>–<lpage>390</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_044">
<mixed-citation publication-type="journal"><string-name><surname>Purificación</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Pablo</surname>, <given-names>F.B.</given-names></string-name> (<year>2019</year>). <article-title>Cognitive control and emotional intelligence: effect of the emotional content of the task. Brief reports</article-title>. <source>Frontiers in Psychology</source>, <volume>10</volume>(<issue>195</issue>). <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3389/fpsyg.2019.00195" xlink:type="simple">https://doi.org/10.3389/fpsyg.2019.00195</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_045">
<mixed-citation publication-type="other"><string-name><surname>Ramalingam</surname>, <given-names>V.V.</given-names></string-name>, <string-name><surname>Pandian</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Jaiswal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Bhatia</surname>, <given-names>N.</given-names></string-name> (2018). Emotion detection from text. <italic>Journal of Physics: Conference Series</italic>, <italic>1000</italic>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1088/1742-6596/1000/1/012027" xlink:type="simple">https://doi.org/10.1088/1742-6596/1000/1/012027</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_046">
<mixed-citation publication-type="chapter"><string-name><surname>Ranade</surname>, <given-names>A.G.</given-names></string-name>, <string-name><surname>Patel</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Magare</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <chapter-title>Emotion model for artificial intelligence and their applications</chapter-title>. In: <source>2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC)</source>, <conf-loc>Solan Himachal Pradesh, India</conf-loc>, pp. <fpage>335</fpage>–<lpage>339</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_047">
<mixed-citation publication-type="journal"><string-name><surname>Revina</surname>, <given-names>I.M.</given-names></string-name>, <string-name><surname>Emmanuel</surname>, <given-names>W.R.S.</given-names></string-name> (<year>2018</year>). <article-title>A survey on human face expression recognition techniques</article-title>. <source>Journal of King Saud University – Computer and Information Sciences</source>, <volume>1</volume>(<issue>8</issue>). <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jksuci.2018.09.002" xlink:type="simple">https://doi.org/10.1016/j.jksuci.2018.09.002</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_048">
<mixed-citation publication-type="journal"><string-name><surname>Rubin</surname>, <given-names>D.C.</given-names></string-name>, <string-name><surname>Talarico</surname>, <given-names>J.M.</given-names></string-name> (<year>2009</year>). <article-title>A comparison of dimensional models of emotion: evidence from emotions, prototypical events, autobiographical memories, and words</article-title>. <source>Memory</source>, <volume>17</volume>(<issue>8</issue>), <fpage>802</fpage>–<lpage>808</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_049">
<mixed-citation publication-type="journal"><string-name><surname>Russell</surname>, <given-names>J.A.</given-names></string-name> (<year>1980</year>). <article-title>A circumplex model of affect</article-title>. <source>Journal of Personality and Social Psychology</source>, <volume>39</volume>(<issue>6</issue>), <fpage>1161</fpage>–<lpage>1178</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_050">
<mixed-citation publication-type="journal"><string-name><surname>Sailunaz</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Dhaliwal</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rokne</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Alhajj</surname>, <given-names>R.</given-names></string-name> (<year>2018</year>). <article-title>Emotion detection from text and speech: a survey</article-title>. <source>Social Network Analysis and Mining</source>, <volume>8</volume>, <elocation-id>28</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s13278-018-0505-2" xlink:type="simple">https://doi.org/10.1007/s13278-018-0505-2</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_051">
<mixed-citation publication-type="journal"><string-name><surname>Scherer</surname>, <given-names>K.R.</given-names></string-name> (<year>2005</year>). <article-title>What are emotions? And how can they be measured?</article-title> <source>Social Science Information</source>, <volume>44</volume>(<issue>4</issue>), <fpage>695</fpage>–<lpage>729</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_052">
<mixed-citation publication-type="journal"><string-name><surname>Schubert</surname>, <given-names>E.</given-names></string-name> (<year>2003</year>). <article-title>Update of the Hevner adjective checklist</article-title>. <source>Perceptual and Motor Skills</source>, <volume>96</volume>(<issue>4</issue>), <fpage>1117</fpage>–<lpage>1122</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_053">
<mixed-citation publication-type="journal"><string-name><surname>Shao</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Qian</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <article-title>Three convolutional neural network models for facial expression recognition in the wild</article-title>. <source>Neurocomputing</source>, <volume>355</volume>, <fpage>82</fpage>–<lpage>92</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_054">
<mixed-citation publication-type="journal"><string-name><surname>Sharma</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Singh</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Gautam</surname>, <given-names>S.</given-names></string-name> (<year>2019</year>). <article-title>Automatic facial expression recognition using combined geometric features</article-title>. <source>3D Research</source>, <volume>10</volume>, <elocation-id>14</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s13319-019-0224-0" xlink:type="simple">https://doi.org/10.1007/s13319-019-0224-0</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_055">
<mixed-citation publication-type="journal"><string-name><surname>Shivhare</surname>, <given-names>S.N.</given-names></string-name>, <string-name><surname>Khethawat</surname>, <given-names>S.</given-names></string-name> (<year>2012</year>). <article-title>Emotion detection from text</article-title>. <source>Computer Science and Information Technology</source>, <volume>2</volume>, <fpage>371</fpage>–<lpage>377</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5121/csit.2012.2237" xlink:type="simple">https://doi.org/10.5121/csit.2012.2237</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_056">
<mixed-citation publication-type="journal"><string-name><surname>Sreeja</surname>, <given-names>P.S.</given-names></string-name>, <string-name><surname>Mahalakshmi</surname>, <given-names>G.S.</given-names></string-name> (<year>2017</year>). <article-title>Emotion models: a review</article-title>. <source>International Journal of Control Theory and Applications</source>, <volume>10</volume>(<issue>8</issue>), <fpage>651</fpage>–<lpage>657</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_057">
<mixed-citation publication-type="chapter"><string-name><surname>Stathopoulou</surname>, <given-names>I.O.</given-names></string-name>, <string-name><surname>Tsihrintzis</surname>, <given-names>G.A.</given-names></string-name> (<year>2011</year>). <chapter-title>Emotion recognition from body movements and gestures</chapter-title>. In: <string-name><surname>Tsihrintzis</surname>, <given-names>G.A.</given-names></string-name>, <string-name><surname>Virvou</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Jain</surname>, <given-names>L.C.</given-names></string-name>, <string-name><surname>Howlett</surname>, <given-names>R.J.</given-names></string-name> (Eds.), <source>Intelligent Interactive Multimedia Systems and Services. Smart Innovation, Systems and Technologies</source>, Vol. <volume>11</volume>. <publisher-name>Springer</publisher-name>, <publisher-loc>Berlin, Heidelberg</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_058">
<mixed-citation publication-type="chapter"><string-name><surname>Su</surname>, <given-names>M.H.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>C.H.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>K.Y.</given-names></string-name>, <string-name><surname>Hong</surname>, <given-names>Q.B.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>H.M.</given-names></string-name> (<year>2017</year>). <chapter-title>Exploring microscopic fluctuation of facial expression for mood disorder classification</chapter-title>. In: <source>Proceedings of the International Conference on Orange Technologies</source>, pp. <fpage>65</fpage>–<lpage>69</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICOT.2017.8336090" xlink:type="simple">https://doi.org/10.1109/ICOT.2017.8336090</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_059">
<mixed-citation publication-type="chapter"><string-name><surname>Tamulevičius</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Karbauskaitė</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name> (<year>2017</year>). <chapter-title>Selection of fractal dimension features for speech emotion classification</chapter-title>. In: <source>2017 Open Conference of Electrical Electronic and Information Sciences (eStream)</source>. <publisher-name>IEEE</publisher-name>, <publisher-loc>New York</publisher-loc>, pp. <fpage>1</fpage>–<lpage>4</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_060">
<mixed-citation publication-type="journal"><string-name><surname>Tamulevičius</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Karbauskaitė</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dzemyda</surname>, <given-names>G.</given-names></string-name> (<year>2019</year>). <article-title>Speech emotion classification using fractal dimension-based features</article-title>. <source>Nonlinear Analysis: Modelling and Control</source>, <volume>24</volume>(<issue>5</issue>), <fpage>679</fpage>–<lpage>695</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_061">
<mixed-citation publication-type="chapter"><string-name><surname>Tan</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Atto</surname>, <given-names>A.M.</given-names></string-name>, <string-name><surname>Alata</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Moreaud</surname>, <given-names>M.</given-names></string-name> (<year>2015</year>). <chapter-title>ARFBF model for non-stationary random fields and application in HRTEM images</chapter-title>. In: <source>2015 IEEE International Conference on Image Processing (ICIP)</source>, <conf-loc>Quebec City, QC</conf-loc>, pp. <fpage>2651</fpage>–<lpage>2655</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_062">
<mixed-citation publication-type="book"><string-name><surname>Thayer</surname>, <given-names>R.E.</given-names></string-name> (<year>1989</year>). <source>The Biopsychology of Mood and Arousal</source>. <publisher-name>Oxford University Press</publisher-name>, <publisher-loc>New York, NY, US</publisher-loc>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_063">
<mixed-citation publication-type="journal"><string-name><surname>Vorontsova</surname>, <given-names>T.A.</given-names></string-name>, <string-name><surname>Labunskaya</surname>, <given-names>V.A.</given-names></string-name> (<year>2020</year>). <article-title>Emotional attitude to own appearance and appearance of the spouse: analysis of relationships with the relationship of spouses to themselves, others, and the world</article-title>. <source>Behavioral Sciences</source>, <volume>10</volume>(<issue>2</issue>), <fpage>44</fpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_064">
<mixed-citation publication-type="chapter"><string-name><surname>Wang</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Fang</surname>, <given-names>B.</given-names></string-name> (<year>2008</year>). <chapter-title>Affective computing and biometrics based HCI surveillance system</chapter-title>. In: <source>Proceedings of the International Symposium on Information Science and Engineering</source>, pp. <fpage>192</fpage>–<lpage>195</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_065">
<mixed-citation publication-type="journal"><string-name><surname>Watson</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Tellegen</surname>, <given-names>A.</given-names></string-name> (<year>1985</year>). <article-title>Toward a consensual structure of mood</article-title>. <source>Psychological Bulletin</source>, <volume>98</volume>(<issue>2</issue>), <fpage>219</fpage>–<lpage>235</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_066">
<mixed-citation publication-type="journal"><string-name><surname>Watson</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Wiese</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Vaidya</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Tellegen</surname>, <given-names>A.</given-names></string-name> (<year>1999</year>). <article-title>The two general activation systems of affect: structural findings, evolutionary considerations, and psychobiological evidence</article-title>. <source>Journal of Personality and Social Psychology</source>, <volume>76</volume>, <fpage>820</fpage>–<lpage>838</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_067">
<mixed-citation publication-type="chapter"><string-name><surname>Weiguo</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Qingmei</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>W.</given-names></string-name> (<year>2004</year>). <chapter-title>Development of the humanoid head portrait robot system with flexible face and expression</chapter-title>. In: <source>Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics</source>, pp. <fpage>757</fpage>–<lpage>762</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_068">
<mixed-citation publication-type="journal"><string-name><surname>Whissell</surname>, <given-names>C.</given-names></string-name> (<year>1989</year>). <article-title>The dictionary of affect in language</article-title>. <source>Emotion: Theory, Research, and Experience</source>, <volume>4</volume>, <fpage>113</fpage>–<lpage>131</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_069">
<mixed-citation publication-type="chapter"><string-name><surname>Wilson</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Dobrev</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Brewster</surname>, <given-names>S.A.</given-names></string-name> (<year>2016</year>). <chapter-title>Hot under the collar: mapping thermal feedback to dimensional models of emotion</chapter-title>. In: <source>CHI 2016</source>, <conf-loc>San Jose, CA, USA</conf-loc>, pp. <fpage>4838</fpage>–<lpage>4849</lpage>.</mixed-citation>
</ref>
<ref id="j_infor419_ref_070">
<mixed-citation publication-type="book"><string-name><surname>Wundt</surname>, <given-names>W.M.</given-names></string-name> (<year>1897</year>). <source>Outlines of Psychology</source>. <publisher-name>W. Engelmann</publisher-name>, <publisher-loc>Leipzig</publisher-loc>; <publisher-name>G.E. Stechert</publisher-name>, <publisher-loc>New York</publisher-loc>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>