<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn><issn pub-type="ppub">0868-4952</issn><issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFOR505</article-id>
<article-id pub-id-type="doi">10.15388/22-INFOR505</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Intelligent and Efficient IoT Through the Cooperation of TinyML and Edge Computing</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Sanchez-Iborra</surname><given-names>Ramon</given-names></name><email xlink:href="ramon.sanchez@cud.upct.es">ramon.sanchez@cud.upct.es</email><xref ref-type="aff" rid="j_infor505_aff_001">1</xref><xref ref-type="corresp" rid="cor1">∗</xref><bio>
<p><bold>R. Sanchez-Iborra</bold> is an assistant professor at the University Centre of Defense at the General Air Force Academy (Spain). He graduated from the Technical University of Cartagena (Spain), where he received the BSc degree in telecommunication engineering in 2007 and the MSc and PhD degrees in information and communication technologies in 2013 and 2015, respectively. He has published more than 50 papers in international journals and conferences. His main research interests are IoT/M2M architectures, management of wireless networks, and green networking techniques.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Zoubir</surname><given-names>Abdeljalil</given-names></name><email xlink:href="abdeljalil.zoubir@um6p.ma">abdeljalil.zoubir@um6p.ma</email><xref ref-type="aff" rid="j_infor505_aff_002">2</xref><bio>
<p><bold>A. Zoubir</bold> graduated in 2018 from the Royal Air School-Marrakesh with a state engineering diploma in aeronautical systems. He received his master’s degree in data science in 2021. He is currently pursuing a PhD in data sciences at Mohammed VI Polytechnic University (Morocco). His research interests include embedded machine learning, graph neural networks and their applications in medicine, and distributed intelligent systems.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Hamdouchi</surname><given-names>Abderahmane</given-names></name><email xlink:href="abderahmane.hamdouchi@um6p.ma">abderahmane.hamdouchi@um6p.ma</email><xref ref-type="aff" rid="j_infor505_aff_002">2</xref><bio>
<p><bold>A. Hamdouchi</bold> is a PhD student in data science at Mohammed VI Polytechnic University (Morocco). He earned a BSc in mechanical engineering from the Royal Military Academy (Morocco) in 2013 and a Diploma of Analyst in computer science from the Signal Training Centre in 2017. He received his master’s degree in data science in 2021. His primary research interests are real-time decision support systems and TinyML systems.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Idri</surname><given-names>Ali</given-names></name><email xlink:href="ali.idri@um5.ac.ma">ali.idri@um5.ac.ma</email><xref ref-type="aff" rid="j_infor505_aff_002">2</xref><xref ref-type="aff" rid="j_infor505_aff_003">3</xref><bio>
<p><bold>A. Idri</bold> is a full professor at the Computer Science and Systems Analysis School (ENSIAS, Mohammed V University in Rabat, Morocco). He received his master’s degree and doctorate of 3rd cycle in computer science from the Mohammed V University in 1994 and 1997, respectively. He received his PhD in cognitive and computer sciences from the University of Quebec at Montreal in 2003. He was the chair of the Web and Mobile Engineering Department from 2014 to 2020 and has been the head of the Software Project Management Research Team since 2010. He is very active in the fields of artificial intelligence, machine learning, medical informatics, and software engineering, and has published more than 220 papers in well-recognized journals and conferences.</p></bio>
</contrib>
<contrib contrib-type="author">
<name><surname>Skarmeta</surname><given-names>Antonio</given-names></name><email xlink:href="skarmeta@um.es">skarmeta@um.es</email><xref ref-type="aff" rid="j_infor505_aff_004">4</xref><bio>
<p><bold>A. Skarmeta</bold> received the BS degree (Hons.) from the University of Murcia, Spain, the MS degree from the University of Granada, and the PhD degree from the University of Murcia, all in computer science. He has been a full professor with the University of Murcia since 2009. He has taken part in many EU FP projects and has coordinated several of them. He has published more than 200 international articles. His main interests include the integration of security services, identity, the IoT, 5G, and smart cities.</p></bio>
</contrib>
<aff id="j_infor505_aff_001"><label>1</label>University Centre of Defense, <institution>General Air Force Academy</institution>, <country>Spain</country></aff>
<aff id="j_infor505_aff_002"><label>2</label><institution>MSDA, Mohammed VI Polytechnic University</institution>, Ben Guerir, <country>Morocco</country></aff>
<aff id="j_infor505_aff_003"><label>3</label><institution>Mohammed V University</institution>, Rabat, <country>Morocco</country></aff>
<aff id="j_infor505_aff_004"><label>4</label><institution>University of Murcia</institution>, Espinardo Campus, <country>Spain</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2023</year></pub-date><pub-date pub-type="epub"><day>10</day><month>1</month><year>2023</year></pub-date><volume>34</volume><issue>1</issue><fpage>147</fpage><lpage>168</lpage><history><date date-type="received"><month>9</month><year>2021</year></date><date date-type="accepted"><month>11</month><year>2022</year></date></history>
<permissions><copyright-statement>© 2023 Vilnius University</copyright-statement><copyright-year>2023</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>The coordinated integration of heterogeneous TinyML-enabled elements in highly distributed Internet of Things (IoT) environments paves the way for the development of truly intelligent and context-aware applications. In this work, we propose a hierarchical ensemble TinyML scheme that permits system-wide decisions by considering the individual decisions made by the IoT elements deployed in a certain scenario. A two-layered TinyML-based edge computing solution has been implemented and evaluated in a real smart-agriculture use case; it saves wireless transmissions and reduces energy consumption and response times, while at the same time strengthening data privacy and security.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>TinyML</kwd>
<kwd>ensemble learning</kwd>
<kwd>IoT</kwd>
<kwd>smart-agriculture</kwd>
<kwd>LoRaWAN</kwd>
</kwd-group>
<funding-group><funding-statement>This work has been supported by the European Commission, under the DEMETER (Grant No. 857202) and FLUIDOS (Grant No. 101070473) projects; and by the Spanish Ministry of Science, Innovation and Universities, under the project ONOFRE 3 (Grant No. PID2020-112675RB-C44).</funding-statement></funding-group>
</article-meta>
</front>
<body>
<sec id="j_infor505_s_001">
<label>1</label>
<title>Introduction</title>
<p>So far, the Internet of Things (IoT) has revolutionized our lives by embedding novel communication capabilities within many types of end-devices. This has allowed these elements to be fully connected, hence boosting the development of innovative applications in different fields (Cirillo <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_011">2019</xref>). However, following this strategy, IoT devices need an almost permanent connection with the cloud or another master device, which prevents them from being totally autonomous entities. Along these lines, recent advances in Artificial Intelligence (AI) and edge computing are permitting IoT devices to become truly smart units and to create a fruitful distributed intelligent ecosystem (Sanchez-Iborra and Skarmeta, <xref ref-type="bibr" rid="j_infor505_ref_034">2020</xref>). These approaches will move the intelligence of IoT systems closer to the end-devices or users, which brings a series of notable advantages related to response time, data security and privacy, sustainability, etc., as discussed in the following sections.</p>
<p>Edge computing has emerged during the last years as a ground-breaking solution that enriches regular IoT deployments with novel services and possibilities. Under this paradigm, the processing and storage capabilities of end-devices and edge-nodes are exploited in order to reduce their cloud-dependency by adding a new layer in the network architecture in charge of data aggregation, filtering, processing, and storage (Marjanovic <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_022">2018</xref>). When coupled with IoT deployments, edge-nodes can host supporting services for constrained end-devices such as task offloading, data caching, or digital twins, among many others (Sanchez-Iborra <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_035">2018</xref>).</p>
<p>On the other hand, TinyML is a recently-emerged paradigm that proposes embedding optimized Machine Learning (ML) models in units with limited computing resources, such as those powered by micro-controllers (Warden and Situnayake, <xref ref-type="bibr" rid="j_infor505_ref_046">2019</xref>). To this end, the ML models produced on a non-constrained platform, e.g. a regular computer, by using widely-known frameworks such as TensorFlow, ScikitLearn, or PyTorch, among others, are converted so that they can be executed by the target device. Thus, this approach turns IoT end-devices into intelligent elements able to perform on-device ML processing (Sanchez-Iborra and Skarmeta, <xref ref-type="bibr" rid="j_infor505_ref_034">2020</xref>). TinyML (Warden and Situnayake, <xref ref-type="bibr" rid="j_infor505_ref_046">2019</xref>) is gaining great momentum, evidenced by the support given by big companies such as Microsoft or Google, which have released their own TinyML frameworks, namely, Embedded Learning Library (ELL)<xref ref-type="fn" rid="j_infor505_fn_001">1</xref><fn id="j_infor505_fn_001"><label><sup>1</sup></label>
<p><uri>https://microsoft.github.io/ELL/</uri></p></fn> and TensorFlow Lite,<xref ref-type="fn" rid="j_infor505_fn_002">2</xref><fn id="j_infor505_fn_002"><label><sup>2</sup></label>
<p><uri>https://www.tensorflow.org/lite/</uri></p></fn> respectively.</p>
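To make the conversion step concrete, the following is a minimal, framework-independent sketch of post-training affine 8-bit quantization, one of the core optimizations such frameworks apply when porting a model to a micro-controller. The function names are illustrative, and real toolchains (e.g. TensorFlow Lite) add further steps such as operator fusion and pruning.

```python
# Minimal sketch of post-training affine 8-bit quantization, the core
# size/precision trade-off behind TinyML model conversion. Hypothetical
# helper names; real frameworks perform many additional optimizations.

def quantize(weights, num_bits=8):
    """Map float weights to unsigned num_bits integers plus (scale, zero_point)."""
    qmax = 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / qmax or 1.0  # guard against constant weights
    zero_point = round(-w_min / scale)
    q = [min(qmax, max(0, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights on the target device."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.3, 0.0, 0.7, 2.1]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original:
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

Storing each weight in one byte instead of four reduces the model footprint by roughly 4x, at the cost of a bounded per-weight rounding error of at most one quantization step.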
<p>Although the integration of intelligent decision-making mechanisms within constrained end-devices is a great advance, real-life IoT applications call for coordinated efforts from different entities aiming at widening the cognitive capabilities of the complete system. This is crucial to consider the individual circumstances of the elements deployed in highly distributed environments; however, there is still a gap in this regard, as such solutions have not yet been deeply investigated in the literature. Therefore, the objective of this work is to address this issue by designing and developing an intelligent system capable of making high-level beneficial decisions for a whole deployment by considering the particular needs of the participants. To this end, we jointly exploit both paradigms mentioned above, i.e. edge computing and TinyML, in order to build a two-layered intelligent IoT system. Concretely, in our proposal each end-device makes an individual decision by employing a TinyML model and local data. Then, the attained outcome is employed by a higher-level ML-based Decision Support System (DSS) placed in an edge-node, which gathers the individual decisions made by each end-device and makes a final decision with a broader perspective of the target scenario, hence adopting a stacking-based ensemble ML approach (Pavlyshenko, <xref ref-type="bibr" rid="j_infor505_ref_026">2018</xref>) implemented at the edge.</p>
<p>Thereby, we present a hierarchical TinyML-based DSS that brings notable advantages in comparison with a cloud-based ML system, as the raw data do not have to be transmitted over the air, hence reducing the high energy consumption of communication activities while increasing data security and privacy. This approach also reduces the decision-making time, since the low transmission data-rates of state-of-the-art IoT communication technologies based on the Low Power-Wide Area Network (LPWAN) paradigm, e.g. LoRaWAN, lead to very long packet transmission times, in the order of seconds (Sanchez-Gomez <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_032">2020</xref>). To the authors’ knowledge, there is no prior work proposing a hierarchical TinyML scheme leveraging the opportunities brought by an edge computing-based architecture. Besides, we contextualize the potential application of our proposal by adopting a realistic smart-agriculture case study to evaluate the feasibility and performance of our solution. Please note that although we have adopted this specific use-case, the proposed solution is extensible to other fields of application. Thus, the main contributions of this work are the following: (i) a characterization and discussion of the synergies opened by the cooperation of the edge computing and TinyML paradigms; (ii) a novel TinyML-based hierarchical DSS; (iii) an application use-case focused on smart-agriculture that demonstrates the validity of the proposal.</p>
<p>The remaining paper is organized as follows. Section <xref rid="j_infor505_s_002">2</xref> examines the enabling technologies employed in this work, namely, edge computing, TinyML, and hierarchical stacking-based ensemble ML. Section <xref rid="j_infor505_s_006">3</xref> provides an overview of the proposal. Section <xref rid="j_infor505_s_007">4</xref> presents the experimental methodology and design of our solution and describes the application use-case. Section <xref rid="j_infor505_s_016">5</xref> shows and discusses the obtained results. Section <xref rid="j_infor505_s_020">6</xref> addresses the threats to the validity of this study. Finally, Section <xref rid="j_infor505_s_024">7</xref> concludes the work and draws future research lines.</p>
</sec>
<sec id="j_infor505_s_002">
<label>2</label>
<title>Related Work</title>
<p>As aforementioned, the main pillars of our proposal are edge computing, TinyML, and hierarchical intelligent schemes. In the following, we provide an overview of these paradigms by reviewing relevant works in each field.</p>
<sec id="j_infor505_s_003">
<label>2.1</label>
<title>Edge Computing for IoT</title>
<p>Recently, many efforts have been devoted to developing this network architecture (Porambage <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_028">2018</xref>) with the aim of exploiting the full potential of current IoT deployments while reducing their permanent dependency on the cloud (Ashouri <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_004">2018</xref>). Among others, there are two main reasons for this paradigm shift: limiting the amount of data sent to the cloud, which strengthens data privacy and controls the consumed bandwidth in the backhaul network; and reducing the latency of transactions, as they may be served at the edge of the network infrastructure (Lopez Pena and Munoz Fernandez, <xref ref-type="bibr" rid="j_infor505_ref_021">2019</xref>). Besides, as investigated in Cui <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_012">2019</xref>) and Mocnej <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_023">2018</xref>), aspects related to end-devices’ power consumption are also important when adopting an edge computing architecture, in order to find a trade-off between local processing and communication tasks that may improve the energy efficiency of end-devices.</p>
<p>The range of services and applications enabled by the integration of edge-nodes with certain processing and storage capabilities within IoT architectures is huge (Sanchez-Iborra <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_035">2018</xref>). As end-devices usually present highly restricted computation power, task offloading has been extensively studied as it can notably enrich the capabilities of constrained IoT units (Xu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_049">2019</xref>), even considering Quality of Service (QoS) aspects (Song <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_040">2017</xref>). Given the proximity of computation and storage resources to end-devices, many user-centric or contextual services have been developed (Breitbach <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_007">2019</xref>). For example, quick indoor positioning (Santa <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_036">2018</xref>), data caching, or digital twins (Santa <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_037">2020</xref>) are widely-extended applications enabled by edge computing together with network virtualization techniques such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) (Santa <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_037">2020</xref>). In this line, edge computing may also help in networking tasks such as introducing novel data communication paradigms, e.g. Named Data Networking (NDN) (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_044">2021</xref>), helping to select the most adequate settings for end-device transmissions (Sanchez-Iborra <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_035">2018</xref>), or supporting IoT data security (Alaba <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_002">2017</xref>).</p>
<p>Finally, edge computing has also enabled the integration of ML in IoT infrastructures. As discussed in Atitallah <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_005">2020</xref>), the alliance between IoT and ML techniques paves the way for the development of innovative services, for example, in the frame of smart cities or e-health. Under this umbrella, an intelligent DSS framework on the edge for the automatic monitoring of diabetes patients through the analysis of sensed data was presented in Abdel-Basset <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_001">2020</xref>). Work in Mrozek <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_025">2020</xref>) also proposed an edge-based remote monitoring solution, in this case for fall-detection. The authors explored different ML models to evaluate their suitability in the proposed architecture, which is a crucial aspect, especially considering the processing and memory limitations of IoT entities.</p>
<p>The use of ML in edge-nodes has also been exploited for other purposes, such as intelligent network management. Work in Veeramanikandan <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_042">2020</xref>) proposed a distributed deep learning solution in the IoT-edge environment focused on the integration of data flows, with the aim of reducing communication latency while increasing accuracy from the data-generation stage onwards. In line with this work, the authors of Liu <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_020">2019</xref>) presented an IoT network dynamic-clustering solution based on edge computing using deep reinforcement learning. The aim was to enable high-performance IoT data analytics while fulfilling the data communication requirements of IoT networks and the load-balancing requirements of edge servers.</p>
<p>In this paper, we propose an IoT-edge computing multi-layer intelligent architecture to enable decision making at the highest level of the hierarchy, i.e. an edge-node, while considering the needs claimed by the individual elements deployed in a certain scenario. With this solution, computations are locally performed by the end-devices and the edge-node, which enables shorter response times as well as detaching the IoT system from the cloud. While most previous research has focused on the task-offloading problem and the integration of ML at the edge, the coordinated operation of TinyML-enabled devices and edge-nodes has been scarcely addressed. Therefore, different from the related literature, we explore this synergy, which is crucial to build hierarchical distributed ML schemes, as described in the following.</p>
</sec>
<sec id="j_infor505_s_004">
<label>2.2</label>
<title>Hierarchical Stacking-Based Ensemble ML</title>
<p>Ensemble ML has been employed during the last years to increase the accuracy of single models or to make complex system-wide decisions (Rokach, <xref ref-type="bibr" rid="j_infor505_ref_030">2010</xref>). Concretely, the stacking technique, also known as stacked generalization, consists of using the output of several ML decisors as the input of another one, known as the meta-model (Wolpert, <xref ref-type="bibr" rid="j_infor505_ref_047">1992</xref>; Chatzimparmpas <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_009">2021</xref>).</p>
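As an illustration of this data flow, the sketch below shows how the outputs of several level-0 decisors become the feature vector consumed by the level-1 meta-model. The threshold rules are deliberately trivial, hypothetical stand-ins for trained ML models.

```python
# Sketch of the stacked-generalization data flow: level-0 predictions form
# the meta-features fed to the level-1 meta-model. The rules below are
# hypothetical stand-ins for trained models.

def decisor_a(x):  # level-0 model 1 (illustrative rule)
    return 1 if x["temperature"] > 30.0 else 0

def decisor_b(x):  # level-0 model 2 (illustrative rule)
    return 1 if x["soil_moisture"] < 0.2 else 0

def meta_model(level0_outputs):
    # Level-1 meta-model: here a stand-in rule that fires
    # if any base decisor fires.
    return 1 if sum(level0_outputs) >= 1 else 0

sample = {"temperature": 33.5, "soil_moisture": 0.35}
meta_features = [decisor_a(sample), decisor_b(sample)]  # level-0 outputs
decision = meta_model(meta_features)                    # final decision
```

In a real stacking pipeline the meta-model is itself trained on the base models' predictions, e.g. with scikit-learn's `StackingClassifier`, rather than being a fixed rule.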
<p>Stacked ensemble ML has been widely studied in the literature. In his work, Pavlyshenko (<xref ref-type="bibr" rid="j_infor505_ref_026">2018</xref>) explored different stacking techniques for time series forecasting and logistic regression with highly imbalanced data. From the attained results, the author demonstrated that stacking models are able to achieve more precise predictions than single models. The fields of application of this technique are multiple, e.g. sentiment analysis (Emre Isik <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_015">2018</xref>), disability detection in children (Mounica <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_024">2019</xref>), star and galaxy classification (Chao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_008">2020</xref>), or early illness detection (Ksia̧żek <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_018">2020</xref>; Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_045">2019</xref>), among many others. In the following, we provide some detailed examples.</p>
<p>Work in Silva and Ribeiro (<xref ref-type="bibr" rid="j_infor505_ref_039">2006</xref>) presented a two-level hierarchical hybrid model combining a Support Vector Machine (SVM) and a Relevance Vector Machine (RVM) to exploit the best of both techniques. The authors demonstrated the validity of their solution on a text classification task, in which the first hierarchical level made use of an RVM to determine the most confidently classified examples and the second level employed an SVM to learn and classify the tougher ones. The authors of Kowsari <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_017">2017</xref>) also tackled the text classification problem, but employing a hierarchical Deep Learning (DL) system in order to provide specialized understanding at each level of the document hierarchy. In this case, different DL models were combined to increase the accuracy of conventional approaches using Naive Bayes or SVM. In Elwerghemmi <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_014">2019</xref>), SVM was employed in a stacked fashion for Quality of Experience (QoE) prediction. The authors confirmed its applicability for the manipulation of non-stationary data in real-time applications, outperforming a range of well-known QoE predictors in terms of accuracy and processing complexity. Finally, work in Cheng <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_010">2020</xref>) proposed a stacking-based ensemble ML scheme for predicting the complex dynamics of unmanned surface vehicles. In this case, the authors made use of tree-based ensemble models, as well as DL techniques, to produce different combinations of stacked classifiers, resulting in notably improved accuracy in comparison with other state-of-the-art techniques.</p>
<p>As can be seen, stacking-based ensemble learning can be employed for a plethora of applications. In our case, as mentioned above, we present a distributed multi-layer stacked DSS that makes decisions affecting several end-devices by aggregating the individual decisions made by these elements. Therefore, the isolated interests of end-devices, which compose the lowest layer of the intelligent architecture, are gathered and considered by a higher-level intelligent instance, which looks for common benefits for the whole deployment. This approach represents a step further compared with previous proposals, which merely leverage multi-layer ML schemes to improve the performance of conventional models, usually in terms of accuracy. Besides, the proposed system reduces end-devices’ communication activities, as the raw data are processed on the device itself instead of being sent to the infrastructure.</p>
</sec>
<sec id="j_infor505_s_005">
<label>2.3</label>
<title>TinyML</title>
<p>TinyML is a concept proposed in Warden and Situnayake (<xref ref-type="bibr" rid="j_infor505_ref_046">2019</xref>), which is attracting great attention from both academia and industry (Sanchez-Iborra and Skarmeta, <xref ref-type="bibr" rid="j_infor505_ref_034">2020</xref>). It permits integrating ML-based intelligence into resource-limited devices by optimizing and porting ML models built on non-constrained platforms. This paves the way for truly intelligent IoT units able to make decisions without the support of additional devices or servers. This approach reduces end-device communication activities, as the sensed data are locally processed, which permits longer battery lifetimes given that wireless transmissions are highly power-demanding, as mentioned above. Besides, reducing raw data exchanges between IoT devices and the infrastructure limits privacy and security risks (Sanchez-Iborra and Skarmeta, <xref ref-type="bibr" rid="j_infor505_ref_034">2020</xref>).</p>
<p>Given the novelty of this paradigm, not many works can be found in the literature exploiting the full range of possibilities it opens. A clear field of TinyML application is vehicular scenarios, for example, to improve the performance of autonomous driving in mini-vehicles (de Prado <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_013">2020</xref>) or to increase driver safety (Lahade <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_019">2020</xref>). Thanks to TinyML, heavy computation tasks that are usually performed by powerful processors, such as image or audio processing (Wong <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_048">2020</xref>; Pontoppidan, <xref ref-type="bibr" rid="j_infor505_ref_027">2020</xref>), are being investigated in order to be executed by end-devices. From a wider perspective, other works have proposed TinyML-based DSSs for environmental parameter prediction (Alongi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_003">2020</xref>) and smart-farming applications (Vuppalapati <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_043">2020</xref>), obtaining highly promising results, although additional efforts are needed to increase the efficiency of TinyML models.</p>
<p>In this paper, we advance beyond previous works by presenting a hierarchical TinyML scheme that is validated in a real use-case. As mentioned above, a vertically and horizontally distributed TinyML scheme has not yet been addressed in the related literature. Besides, the selected case study (smart-agriculture) is attracting a lot of attention from the IoT research community (Raj <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_029">2021</xref>). Concretely, our solution permits deciding whether to separately irrigate or fertilize different subsets of plantations within the same greenhouse, based on environmental parameters sensed across the scenario. The details of this use-case and its implementation are provided in the following sections.</p>
</sec>
</sec>
<sec id="j_infor505_s_006">
<label>3</label>
<title>Hierarchical TinyML Scheme</title>
<p>TinyML has brought a new wave of opportunities for embedding intelligence within the massive number of already deployed IoT devices. Although this is a great advance in providing enhanced processing capabilities to these constrained units, the development of distributed computing solutions will permit improving the performance of isolated models, not just in terms of accuracy but also by widening the limited scope of the decisions that an isolated device can make. This is crucial when certain decisions affect a set of elements instead of a single one, hence being greatly advantageous for weaving a web of collective intelligence (Hadj Sassi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_016">2019</xref>).</p>
<fig id="j_infor505_fig_001">
<label>Fig. 1</label>
<caption>
<p>Hierarchical stacked ML scheme.</p>
</caption>
<graphic xlink:href="infor505_g001.jpg"/>
</fig>
<fig id="j_infor505_fig_002">
<label>Fig. 2</label>
<caption>
<p>Hierarchical stacked TinyML workflow.</p>
</caption>
<graphic xlink:href="infor505_g002.jpg"/>
</fig>
<p>Therefore, following the scheme shown in Fig. <xref rid="j_infor505_fig_001">1</xref>, we present a distributed hierarchical stacking-based TinyML solution. We propose an ensemble ML model because single IoT units, given their limited sensing range, are not always able to make an adequate decision that may affect others. Apart from this concatenation of ML models, we also propose to adopt a hierarchical approach by placing these models at different layers: the first level is constituted by IoT end-devices, and the top layer is implemented in an edge-node, both of them leveraging the possibilities brought by TinyML. With this strategy, we intend to obtain a system-wide decision that takes into consideration the individual demands of each IoT device. The workflow of this distributed model is shown in Fig. <xref rid="j_infor505_fig_002">2</xref>.</p>
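The two-layer workflow can be sketched as follows, under assumed names: each end-device runs its TinyML model on local sensor readings and reports only a one-bit decision, and the edge-node's meta-model aggregates the decisions of every device in a zone into one system-wide decision per zone. Both decision functions are illustrative stand-ins for the trained models described later.

```python
# Sketch of the hierarchical workflow: layer 1 (end-devices) produces
# individual decisions; layer 2 (edge-node) aggregates them per zone.
# Thresholds and the majority-vote meta-rule are illustrative assumptions.

def device_decision(reading):
    """Layer 1 (end-device): stand-in for the on-device TinyML model."""
    return 1 if reading["soil_moisture"] < 0.25 else 0

def edge_decision(zone_votes):
    """Layer 2 (edge-node): stand-in meta-model, here a majority vote."""
    return 1 if sum(zone_votes) > len(zone_votes) / 2 else 0

zones = {  # p = 2 zones, r = 3 slave devices each
    "zone_a": [{"soil_moisture": 0.10}, {"soil_moisture": 0.18},
               {"soil_moisture": 0.40}],
    "zone_b": [{"soil_moisture": 0.30}, {"soil_moisture": 0.35},
               {"soil_moisture": 0.22}],
}
irrigate = {zone: edge_decision([device_decision(r) for r in readings])
            for zone, readings in zones.items()}
# zone_a votes [1, 1, 0] -> irrigate; zone_b votes [0, 0, 1] -> do not
```

Note that only the one-bit votes cross the wireless link, not the raw sensor readings, which is the source of the transmission savings discussed below.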
<p>This edge computing-based configuration presents a number of advantages in comparison with typical centralized cloud-computing models. Firstly, there is a clear detachment from the cloud, as local data are processed inside the proposed system. This leads to a reduction in the number of end-device transmissions, which saves energy and avoids malicious attacks in the wireless segment of the communication infrastructure. The latency of the decision-making process is also reduced, especially considering the long transmission times that current state-of-the-art IoT communication technologies, i.e. LPWANs, present. This family of communication technologies is being broadly adopted in present IoT deployments given its long transmission ranges and very low power consumption (Sanchez-Iborra and Cano, <xref ref-type="bibr" rid="j_infor505_ref_033">2016</xref>). An example of an LPWAN-based solution that is extensively used in different IoT scenarios, e.g. smart cities, smart-agriculture, the Internet of Vehicles (IoV), etc., is LoRaWAN (Sanchez-Gomez <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_031">2019</xref>).</p>
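To put those transmission times into perspective, the following sketch computes LoRa time-on-air following the formula published in Semtech's LoRa modem design documentation; the function name and parameter defaults are our own, modelling a typical uplink at 125 kHz bandwidth, coding rate 4/5, 8-symbol preamble, explicit header, and CRC enabled.

```python
import math

# Sketch of LoRa time-on-air per the formula in Semtech's LoRa modem
# documentation; defaults model a common 125 kHz / CR 4/5 uplink.

def lora_time_on_air_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                        preamble_syms=8, explicit_header=True, crc=True):
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol duration (ms)
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low-data-rate optimization
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25) * t_sym + payload_syms * t_sym

# A 20-byte packet takes ~57 ms at SF7 but ~1.3 s at SF12, consistent
# with the packet transmission times of the order of seconds mentioned above.
fast = lora_time_on_air_ms(20, sf=7)
slow = lora_time_on_air_ms(20, sf=12)
```

The gap between spreading factors illustrates why avoiding raw-data uplinks matters most for the distant nodes forced onto high spreading factors.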
<p>Adopting a hierarchical TinyML scheme permits end-devices to form part of the decision process, as their individual decisions are considered by the higher-level instance. Besides, the scalability and reliability of the system are ensured given the modularity of the solution. Finally, this proposal also permits low-cost deployments, as no expensive processing units or data centres are needed. This is achieved thanks to the adoption of the TinyML paradigm and the exploitation, in a distributed way, of the processing capabilities of IoT devices. In the following section, we present the application of our proposal to the specific use-case of smart-agriculture, which is receiving great attention from the AI research community (van Klompenburg <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor505_ref_041">2020</xref>).</p>
</sec>
<sec id="j_infor505_s_007">
<label>4</label>
<title>Experimental Design</title>
<p>In this section, we present the general empirical methodology and the implementation details of our specific use-case. We describe the ML models developed and their conversion into TinyML ones, the employed datasets, and the equipment used in our validation and evaluation experiments.</p>
<sec id="j_infor505_s_008">
<label>4.1</label>
<title>Empirical Methodology</title>
<p>The empirical methodology consists of the design and development of a distributed DSS with two decision levels, aiming to consider the particular needs identified by individual IoT end-devices while dealing with their severe processing and communication limitations. The first decision level (end-device) consists of a set of <italic>p</italic> zones, each containing <italic>r</italic> slave devices, so the number of elements at this level is <inline-formula id="j_infor505_ineq_001"><alternatives><mml:math>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="italic">p</mml:mi></mml:math><tex-math><![CDATA[$r\times p$]]></tex-math></alternatives></inline-formula>. Each IoT device contains a TinyML model that outputs its individual decision using a set of inputs <inline-formula id="j_infor505_ineq_002"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{x_{1}},{x_{2}},\dots ,{x_{n}}\}$]]></tex-math></alternatives></inline-formula> collected from the environment. In turn, the second level consists of a single master device (edge-node) which provides a final-decision TinyML model using as input a categorical vector of <italic>r</italic> dimensions <inline-formula id="j_infor505_ineq_003"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mo>…</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{s_{1}},{s_{2}},\dots ,{s_{r}}\}$]]></tex-math></alternatives></inline-formula>, which represents the outputs of the individual devices within a zone, and provides as output a final decision for that zone.</p>
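<p>The resulting two-level decision flow can be sketched as follows; the two placeholder models below are illustrative stand-ins for the trained TinyML classifiers described later, not the actual ones.</p>

```python
# Sketch of the two-level DSS: p zones, each with r end-devices.
# `device_model` and `edge_model` are hypothetical stand-ins for the
# trained TinyML classifiers, used only to show the data flow.

def device_model(x):
    # End-device level: maps a feature vector {x1, ..., xn}
    # to an individual categorical decision.
    return int(sum(x) > 1.0)  # dummy rule for illustration

def edge_model(s):
    # Edge level: maps the categorical vector {s1, ..., sr}
    # to the final decision for the zone (majority vote here).
    return max(set(s), key=s.count)

def system_decision(zones):
    """zones: list of p zones, each a list of r feature vectors."""
    final = []
    for zone in zones:
        s = [device_model(x) for x in zone]  # first decision level
        final.append(edge_model(s))          # second decision level
    return final

# Two zones, two devices each: the edge-node decides once per zone.
print(system_decision([[[0.9, 0.9], [0.8, 0.6]],
                       [[0.0, 0.1], [0.2, 0.3]]]))  # [1, 0]
```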
<p>The training and evaluation of the different ML models have been carried out in a non-constrained Python environment, using accuracy (for balanced data) and balanced accuracy (for unbalanced data) as performance metrics. The conversion of these ML models into TinyML ones was done using compatible TinyML libraries (specified below), and their evaluation employed Flash memory, SRAM, and latency as figures of merit. As further explained in the following sections, diverse ML algorithms have been investigated under these conditions to select the most adequate one regarding the defined criteria. Finally, as the communication technology connecting end-devices with the edge-node, we have considered an LPWAN-based solution, due to its low power consumption and long coverage range, which are highly valued characteristics for IoT deployments.</p>
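<p>Balanced accuracy is the mean of the per-class recalls, which makes it robust to class imbalance; a minimal stdlib-only sketch (equivalent in spirit to <italic>Scikit-learn</italic>'s <monospace>balanced_accuracy_score</monospace>):</p>

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: robust when classes are unbalanced."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

# 90% of samples are class 0: plain accuracy rewards always predicting 0
# (0.9 here), while balanced accuracy exposes the missed minority class.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5: recall 1.0 on class 0, 0.0 on class 1
```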
<p>Therefore, the empirical methodology consists of the following steps: 
<list>
<list-item id="j_infor505_li_001">
<label>1.</label>
<p>Produce a dataset of <italic>m</italic> samples and <italic>n</italic> features <inline-formula id="j_infor505_ineq_004"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">m</mml:mi>
<mml:mo>×</mml:mo>
<mml:mi mathvariant="italic">n</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(m\times n)$]]></tex-math></alternatives></inline-formula>.</p>
</list-item>
<list-item id="j_infor505_li_002">
<label>2.</label>
<p>Apply data pre-processing when needed (standardization, check for unbalanced data, handling of missing values, feature selection, etc.).</p>
</list-item>
<list-item id="j_infor505_li_003">
<label>3.</label>
<p>Use a <italic>k</italic>-fold cross-validation strategy to determine the training and testing sets.</p>
</list-item>
<list-item id="j_infor505_li_004">
<label>4.</label>
<p>Train and evaluate a set of classifiers using a grid search-based hyper-parameter tuning strategy to identify the best settings for each classifier. For each classifier, we retain the best five configurations, i.e. those with the highest accuracy/balanced accuracy values.</p>
</list-item>
<list-item id="j_infor505_li_005">
<label>5.</label>
<p>Convert these ML models into TinyML ones using adequate libraries.</p>
</list-item>
<list-item id="j_infor505_li_006">
<label>6.</label>
<p>Run and evaluate the TinyML models on a resource-limited device by means of the three defined criteria: Flash memory, SRAM, and latency.</p>
</list-item>
<list-item id="j_infor505_li_007">
<label>7.</label>
<p>Choose the best configuration for each model.</p>
</list-item>
<list-item id="j_infor505_li_008">
<label>8.</label>
<p>Repeat the previous steps for the edge decision level.</p>
</list-item>
<list-item id="j_infor505_li_009">
<label>9.</label>
<p>Connect end-devices and edge-node through an LPWAN link and test connectivity.</p>
</list-item>
<list-item id="j_infor505_li_010">
<label>10.</label>
<p>Evaluate the performance of the whole system.</p>
</list-item>
</list>
</p>
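<p>Steps 3 and 4 can be sketched as follows; the toy classifier, parameter grid, and data below are illustrative assumptions, not the actual configurations reported in Section 5.</p>

```python
import itertools, random

def kfold_indices(n, k=10, seed=0):
    """Step 3: shuffle sample indices and split them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def grid_search_top5(X, y, grid, fit_score, k=10):
    """Step 4: score every hyper-parameter combination with k-fold
    cross-validation and retain the five best configurations."""
    folds = kfold_indices(len(X), k)
    results = []
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid, values))
        scores = []
        for i, test in enumerate(folds):
            train = [j for m, f in enumerate(folds) if m != i for j in f]
            scores.append(fit_score(params,
                                    [X[j] for j in train], [y[j] for j in train],
                                    [X[j] for j in test], [y[j] for j in test]))
        results.append((sum(scores) / k, params))
    return sorted(results, key=lambda r: r[0], reverse=True)[:5]

# Toy stand-in for a classifier: predict class 1 when the first
# feature exceeds a tunable threshold (purely illustrative).
def fit_score(params, Xtr, ytr, Xte, yte):
    return sum((x[0] > params["t"]) == c for x, c in zip(Xte, yte)) / len(yte)

rng = random.Random(1)
X = [[rng.random()] for _ in range(200)]
y = [x[0] > 0.5 for x in X]
best = grid_search_top5(X, y, {"t": [0.1, 0.3, 0.5, 0.7, 0.9]}, fit_score)
print(best[0])  # the best-scoring configuration comes first
```

<p>In practice, the paper performs these two steps with <italic>Scikit-learn</italic>; this stdlib version only makes the selection procedure explicit.</p>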
</sec>
<sec id="j_infor505_s_009">
<label>4.2</label>
<title>Scenario and Problem Description</title>
<fig id="j_infor505_fig_003">
<label>Fig. 3</label>
<caption>
<p>Smart-agriculture use-case.</p>
</caption>
<graphic xlink:href="infor505_g003.jpg"/>
</fig>
<p>We consider the case of a greenhouse equipped with (i) a range of ground sensors that monitor the status of the plantation and (ii) a set of fixed sprinklers that cover certain plantation zones. Each of these elements is equipped with a communication module that connects it with an edge-node (Fig. <xref rid="j_infor505_fig_003">3</xref>). Each ground sensor is able to detect the needs of its surrounding plants in terms of moisture, nutrients, etc. However, as each sprinkler is placed in the greenhouse ceiling and covers an area monitored by a number of sensors, the irrigation decision (and its composition) should be made considering the individual needs of the affected plants. Therefore, each ground sensor first decides the needs of its sensed plants by using its embedded TinyML model and then submits this decision to the edge-node (instead of transmitting all the raw sensed data). Once the edge-node gathers every decision made by the sensors under a common sprinkler, it finally decides the action of this sprinkler by using a top-level meta-TinyML model. As can be seen, the decision made for each sprinkler takes into consideration the individual needs of the irrigated plants thanks to the hierarchical TinyML scheme, hence achieving a greater irrigation precision adapted to the actual needs of the individual plants. This may permit the exploitation of the greenhouse for different types of plantations, as well as an increase in the efficiency of the irrigation and fertigation systems.</p>
</sec>
<sec id="j_infor505_s_010">
<label>4.3</label>
<title>DSS Definition</title>
<p>For the sake of clarity, in the following we explore the proposed hierarchical TinyML scheme by individually describing the two levels of decision explained previously (Fig. <xref rid="j_infor505_fig_001">1</xref>).</p>
<sec id="j_infor505_s_011">
<label>4.3.1</label>
<title>End-Device-Level Decision</title>
<p>Firstly, each deployed sensor should evaluate the status of its monitored plantation. To this end, a series of environmental parameters are tracked in order to infer the real needs of the plants. Concretely, the following parameters have been selected for this task: Air temperature (<italic>T</italic>), soil moisture (<italic>M</italic>), soil PH (<italic>PH</italic>), and soil electrical conductivity (<italic>EC</italic>). Therefore, the resulting vector of features is <inline-formula id="j_infor505_ineq_005"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:mi mathvariant="italic">T</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">M</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mtext mathvariant="italic">PH</mml:mtext>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mtext mathvariant="italic">EC</mml:mtext>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{T,M,\textit{PH},\textit{EC}\}$]]></tex-math></alternatives></inline-formula>, where the first two inputs are employed to detect irrigation needs and the remaining two permit evaluating the level of nutrients in the soil (although an excess of nutrients may require correction by means of extra irrigation). With this information, the TinyML model at this level infers the most advantageous action for the monitored plant(s) and outputs it <inline-formula id="j_infor505_ineq_006"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo></mml:math><tex-math><![CDATA[$(o)$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor505_ineq_007"><alternatives><mml:math>
<mml:mi mathvariant="italic">o</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$o=\{{a_{0}},{a_{1}},{a_{2}}\}$]]></tex-math></alternatives></inline-formula>, where <inline-formula id="j_infor505_ineq_008"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${a_{0}}$]]></tex-math></alternatives></inline-formula> indicates “no action”, <inline-formula id="j_infor505_ineq_009"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${a_{1}}$]]></tex-math></alternatives></inline-formula> represents the “irrigation action” (watering), and <inline-formula id="j_infor505_ineq_010"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${a_{2}}$]]></tex-math></alternatives></inline-formula> symbolizes the “fertigation action” (watering with nutrients). Thereby, the individual decision made by each sensor thanks to its embedded TinyML model is transmitted to the edge-node, where the second level of decision resides.</p>
</sec>
<sec id="j_infor505_s_012">
<label>4.3.2</label>
<title>Edge-Level Decision</title>
<p>As mentioned above, a certain number of plants inside a defined zone share a common sprinkler; therefore, the edge-level decisor should aim at achieving a common benefit for the affected plants. In our specific use-case, we consider that each sprinkler covers a zone monitored by 4 sensors. This distribution is shown in Fig. <xref rid="j_infor505_fig_004">4</xref>, which represents the partition of a greenhouse into 6 zones. Thus, the meta-model placed in the edge-node receives 4 input parameters <inline-formula id="j_infor505_ineq_011"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{s_{1}},{s_{2}},{s_{3}},{s_{4}}\}$]]></tex-math></alternatives></inline-formula>, where <inline-formula id="j_infor505_ineq_012"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${s_{i}}$]]></tex-math></alternatives></inline-formula> represents the decision made in the previous step by each of the 4 sensors within a certain zone. Finally, this model generates a single output (<italic>O</italic>) with three possible commands <inline-formula id="j_infor505_ineq_013"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{A_{0}},{A_{1}},{A_{2}}\}$]]></tex-math></alternatives></inline-formula> for the implicated sprinkler, with similar meanings as those described for the end-device-level model, i.e. “no action”, “irrigation”, and “fertigation”, respectively. Recall that, in order to produce its output, this model takes into consideration each of the implicated sensors’ decisions, hence, the end-devices are involved in the final decision process which directly affects their monitored plants.</p>
<fig id="j_infor505_fig_004">
<label>Fig. 4</label>
<caption>
<p>Elements distribution in the considered scenario.</p>
</caption>
<graphic xlink:href="infor505_g004.jpg"/>
</fig>
</sec>
</sec>
<sec id="j_infor505_s_013">
<label>4.4</label>
<title>Dataset</title>
<p>Given that the proposed scheme presents two different decisors, a separate dataset is needed to train each of the respective models. Regarding the end-device level, we have produced a large dataset of 10,000 samples in which random values have been assigned to the input parameters following a uniform distribution within certain ranges, namely, <inline-formula id="j_infor505_ineq_014"><alternatives><mml:math>
<mml:mi mathvariant="italic">T</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>17</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mspace width="0.1667em"/>
</mml:mrow>
<mml:mrow>
<mml:mo>∘</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mtext>C</mml:mtext>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>33</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mspace width="0.1667em"/>
</mml:mrow>
<mml:mrow>
<mml:mo>∘</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mtext>C</mml:mtext>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$T\in [17{\hspace{0.1667em}^{\circ }}\text{C},33{\hspace{0.1667em}^{\circ }}\text{C}]$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor505_ineq_015"><alternatives><mml:math>
<mml:mi mathvariant="italic">M</mml:mi>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>0</mml:mn>
<mml:mi mathvariant="normal">%</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi mathvariant="normal">%</mml:mi>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$M\in [0\% ,100\% ]$]]></tex-math></alternatives></inline-formula>, <inline-formula id="j_infor505_ineq_016"><alternatives><mml:math>
<mml:mtext mathvariant="italic">PH</mml:mtext>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>14</mml:mn>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$\textit{PH}\in [0,14]$]]></tex-math></alternatives></inline-formula>, and <inline-formula id="j_infor505_ineq_017"><alternatives><mml:math>
<mml:mtext mathvariant="italic">EC</mml:mtext>
<mml:mo stretchy="false">∈</mml:mo>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mn>1.1</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>S</mml:mtext>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mtext>m</mml:mtext>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mn>6.3</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>S</mml:mtext>
<mml:mo mathvariant="normal" stretchy="false">/</mml:mo>
<mml:mtext>m</mml:mtext>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$\textit{EC}\in [1.1\hspace{2.5pt}\text{S}/\text{m},6.3\hspace{2.5pt}\text{S}/\text{m}]$]]></tex-math></alternatives></inline-formula>. Then, making use of the thresholds shown in Table <xref rid="j_infor505_tab_001">1</xref>, an automatic labelling algorithm has been developed to assign the proper action <inline-formula id="j_infor505_ineq_018"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{a_{0}},{a_{1}},{a_{2}}\}$]]></tex-math></alternatives></inline-formula> to each input vector. After a first round of automatic labelling, a subsequent manual tuning has been conducted, especially for samples with values close to the established thresholds. This process was carried out with the support of expert agriculture engineers from the Mohammed VI Polytechnic University. Finally, the dataset has been z-score normalized and partitioned using the <italic>k</italic>-fold cross-validation method, with <inline-formula id="j_infor505_ineq_019"><alternatives><mml:math>
<mml:mi mathvariant="italic">k</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>10</mml:mn></mml:math><tex-math><![CDATA[$k=10$]]></tex-math></alternatives></inline-formula>, for training and validating the different TinyML models that have been evaluated in real IoT devices.</p>
<table-wrap id="j_infor505_tab_001">
<label>Table 1</label>
<caption>
<p>Decision thresholds.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Input condition</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Plantation need</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Temperature <inline-formula id="j_infor505_ineq_020"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mi mathvariant="italic">T</mml:mi>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal">&gt;</mml:mo>
<mml:mn>26</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mspace width="0.1667em"/>
</mml:mrow>
<mml:mrow>
<mml:mo>∘</mml:mo>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$(T)>26{\hspace{0.1667em}^{\circ }}$]]></tex-math></alternatives></inline-formula>C</td>
<td style="vertical-align: top; text-align: left">Irrigation</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Soil Moisture (<italic>M</italic>) &lt; 60%</td>
<td style="vertical-align: top; text-align: left">Irrigation</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Soil PH (<italic>PH</italic>) &lt; 5.5</td>
<td style="vertical-align: top; text-align: left">Fertigation</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Soil PH (<italic>PH</italic>) &gt; 7</td>
<td style="vertical-align: top; text-align: left">Fertigation</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Soil Electrical Conductivity (<italic>EC</italic>) &lt; 2.5 S/m</td>
<td style="vertical-align: top; text-align: left">Fertigation</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Soil Electrical Conductivity (<italic>EC</italic>) &gt; 3.5 S/m</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Irrigation</td>
</tr>
</tbody>
</table>
</table-wrap>
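<p>The automatic labelling algorithm can be sketched directly from the thresholds in Table 1. The priority given to fertigation when several conditions hold simultaneously is an assumption of this sketch, since the paper resolves such conflicting samples manually.</p>

```python
def label_sample(T, M, PH, EC):
    """Assign an action from the Table 1 thresholds.
    Returns 0 (a0: no action), 1 (a1: irrigation) or 2 (a2: fertigation).
    The fertigation-first conflict priority is an assumption."""
    needs_fertigation = PH < 5.5 or PH > 7 or EC < 2.5
    needs_irrigation = T > 26 or M < 60 or EC > 3.5
    if needs_fertigation:
        return 2  # a2: watering with nutrients
    if needs_irrigation:
        return 1  # a1: watering only
    return 0      # a0: no action

print(label_sample(T=24, M=75, PH=6.5, EC=3.0))  # 0: all inputs in range
print(label_sample(T=30, M=75, PH=6.5, EC=3.0))  # 1: too warm
print(label_sample(T=24, M=75, PH=5.0, EC=3.0))  # 2: acidic soil
```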
<p>Regarding the top-level decisor, a different dataset has been generated. Considering that the input vector consists of 4 elements with 3 possible values (<inline-formula id="j_infor505_ineq_021"><alternatives><mml:math>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$\{{a_{0}},{a_{1}},{a_{2}}\}$]]></tex-math></alternatives></inline-formula>), there is a limited range of possible inputs, namely, <inline-formula id="j_infor505_ineq_022"><alternatives><mml:math>
<mml:msup>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mn>81</mml:mn></mml:math><tex-math><![CDATA[${3^{4}}=81$]]></tex-math></alternatives></inline-formula>. We have applied the mode statistical operator to obtain the final decision (<inline-formula id="j_infor505_ineq_023"><alternatives><mml:math>
<mml:mi mathvariant="italic">O</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo fence="true" stretchy="false">{</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo fence="true" stretchy="false">}</mml:mo></mml:math><tex-math><![CDATA[$O=\{{A_{0}},{A_{1}},{A_{2}}\}$]]></tex-math></alternatives></inline-formula>) for each input combination, and we have manually revised each of the 81 four-element input vectors, aiming to select the most adequate output in case of conflict. Given the limited number of possible input samples, all of them have been employed for the training and testing phases of the different TinyML models evaluated in the edge-node.</p>
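<p>This enumeration and mode-based labelling can be sketched as follows; breaking ties by the lowest action index is only a placeholder for the manual conflict resolution described above.</p>

```python
from itertools import product
from collections import Counter

ACTIONS = (0, 1, 2)  # a0 / a1 / a2 from the end-device level

def edge_label(s):
    """Mode of the four sensor decisions; ties broken by the lowest
    action index (the paper resolves such conflicts manually instead)."""
    counts = Counter(s)
    top = max(counts.values())
    return min(a for a, c in counts.items() if c == top)

# Enumerate every possible input vector of the edge-level decisor.
dataset = {combo: edge_label(combo) for combo in product(ACTIONS, repeat=4)}
print(len(dataset))            # 3^4 = 81 possible input vectors
print(dataset[(1, 1, 2, 0)])   # majority of sensors ask for irrigation -> 1
```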
<p>We consider that the followed dataset generation procedures are sufficient for the purpose of validating our proposal at this point of our research. Thus, the generation of datasets from field-sampled data has been left for future work in order to better adjust our models. Besides, for replicability purposes, we have made the end-device-level dataset publicly available.<xref ref-type="fn" rid="j_infor505_fn_003">3</xref><fn id="j_infor505_fn_003"><label><sup>3</sup></label>
<p><uri>https://github.com/ramonjsi/Smart-Agriculture</uri></p></fn></p>
</sec>
<sec id="j_infor505_s_014">
<label>4.5</label>
<title>TinyML Models Generation</title>
<p>As described above, the first step to produce a TinyML model is to obtain a regular ML model on a non-constrained platform. To this end, we have used the well-known Python <italic>Scikit-learn</italic> library to produce a series of ML models with different configurations, which will be described in Section <xref rid="j_infor505_s_016">5</xref>. Concretely, we have considered the following ML algorithms: Multi-Layer Perceptron (MLP), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM), given that all of them are supported by the TinyML toolkits employed in our experiments.</p>
<p>Once the non-constrained ML model is produced and adjusted, it should be ported to run on constrained units. For this task, we have used a series of TinyML toolkits, chosen according to the involved ML algorithm, given that each toolkit is compatible with a limited set of algorithms. We have employed the <italic>emlearn</italic><xref ref-type="fn" rid="j_infor505_fn_004">4</xref><fn id="j_infor505_fn_004"><label><sup>4</sup></label>
<p><uri>https://github.com/emlearn/emlearn</uri></p></fn> and <italic>MicroMLGen</italic><xref ref-type="fn" rid="j_infor505_fn_005">5</xref><fn id="j_infor505_fn_005"><label><sup>5</sup></label>
<p><uri>https://github.com/eloquentarduino/micromlgen</uri></p></fn> frameworks given their efficiency in the porting process and their compatibility with a notable number of models generated by <italic>Scikit-learn</italic> as mentioned previously.</p>
</sec>
<sec id="j_infor505_s_015">
<label>4.6</label>
<title>Equipment</title>
<p>We have selected the Arduino Uno board as the target device for evaluating the performance of the developed models for both of the decisors described above (end-device and edge levels). We have chosen this unit given its popularity, low cost, and notable processing and memory constraints, which make it a good benchmarking tool for efficient IoT developments. It is equipped with a 16 MHz 8-bit processor (ATmega 328p) with flash and Static RAM (SRAM) memories of 32 KB and 2 KB, respectively. Considering these resources, the Arduino Uno belongs to the most constrained class of Microcontroller Units (MCUs) (Class 0) according to the classification in Bormann <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor505_ref_006">2014</xref>).</p>
<p>As the communication technology connecting the deployed sensors and the edge-node, we have selected LoRa, an LPWAN-based solution that permits long-range transmissions with great energy efficiency (Sanchez-Iborra and Cano, <xref ref-type="bibr" rid="j_infor505_ref_033">2016</xref>), which are highly valued characteristics for the scenario under study. However, these attractive characteristics come at the expense of a notably reduced transmission data-rate, leading to very long transmission times, even above a second under certain configurations. Besides, given that LoRa makes use of unlicensed frequency bands, a strict duty cycle-based regulation is established that restricts LoRa transmissions to 1% of the available time, i.e. a maximum of 36 seconds of airtime per hour per device. For these reasons, it is highly desirable to limit the number of transmissions over this type of communication technology, which the proposed solution achieves. For our experiments, we have employed the Semtech SX1272 LoRa modem (Semtech, <xref ref-type="bibr" rid="j_infor505_ref_038">2019</xref>).</p>
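<p>The impact of this duty-cycle constraint is straightforward to quantify; the airtime values used below are illustrative, not measured figures for the SX1272.</p>

```python
def max_uplinks_per_hour(time_on_air_ms, duty_cycle_pct=1):
    """Messages a device may send per hour under a duty-cycle cap."""
    airtime_budget_ms = 3600_000 * duty_cycle_pct // 100  # 36 s at 1%
    return airtime_budget_ms // time_on_air_ms

# A 1% duty cycle allows 36 s of airtime per hour; a hypothetical
# 1200 ms frame then caps the device at 30 uplinks per hour, while a
# fast 50 ms frame allows 720. Sending one compact decision instead of
# raw sensor readings keeps well within this budget.
print(max_uplinks_per_hour(1200))  # 30
print(max_uplinks_per_hour(50))    # 720
```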
</sec>
</sec>
<sec id="j_infor505_s_016">
<label>5</label>
<title>Results</title>
<p>In this section, we present and discuss the performance results obtained for the range of TinyML models generated to implement both decisors explained above, i.e. the end-device-level and edge-level decisors. Please note that we have analysed these models in the lab, as the real field deployment of the presented solution is still in progress.</p>
<sec id="j_infor505_s_017">
<label>5.1</label>
<title>End-Device-Level Decisor</title>
<p>As explained previously, we have evaluated the performance of different types of ML algorithms, namely, Multi-Layer Perceptron (MLP), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM), with different model configurations for each of them. To select the best setup among the evaluated alternatives for each model, the accuracy of its decisions with respect to the labels assigned in the generated dataset (see Section <xref rid="j_infor505_s_013">4.4</xref>) has been adopted as the principal figure of merit. To this end, we have employed the grid search technique, which permits obtaining the optimum configuration for each model.</p>
<p>Table <xref rid="j_infor505_tab_002">2</xref> presents the performance of the best configurations for each of the ML algorithms under consideration and the values assigned to their basic configuration parameters. Regarding accuracy, note the notable performance of all the algorithms, although DT and RF stand out with an almost perfect accuracy of 99.9%. Considering the memory footprints of the models on the Arduino device, the MLP model is the heaviest in terms of flash memory and SRAM. In turn, RF and, especially, DT are lighter models, a highly valued aspect considering the severe storage and memory constraints of the target device. Remarkably, the optimized SVM model exceeded the flash memory available on the device, hence it could not be deployed on it. In order to obtain an SVM model that could be embedded on the selected unit, we had to simplify it considerably, which dramatically reduced its accuracy. This behaviour was already observed in Sanchez-Iborra and Skarmeta (<xref ref-type="bibr" rid="j_infor505_ref_034">2020</xref>).</p>
<table-wrap id="j_infor505_tab_002">
<label>Table 2</label>
<caption>
<p>End-device decisor models’ performance.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Configuration</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">TinyML toolkit</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Flash memory</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SRAM</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Latency</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Accuracy</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">MLP</td>
<td style="vertical-align: top; text-align: left">Hidden layers: 1</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">6516 B</td>
<td style="vertical-align: top; text-align: left">1218 B</td>
<td style="vertical-align: top; text-align: left">4.1 ± 0.2 ms</td>
<td style="vertical-align: top; text-align: left">0.976</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Neurons: 10</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Act. Function: tanh</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Max. iterations: 1000</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left">DT</td>
<td style="vertical-align: top; text-align: left">Criterion: Entropy</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">3544 B</td>
<td style="vertical-align: top; text-align: left">346 B</td>
<td style="vertical-align: top; text-align: left">20 ± 2 μs</td>
<td style="vertical-align: top; text-align: left">0.999</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Depth: 3</td>
<td style="vertical-align: top; text-align: left"><italic>MicroMLGen</italic></td>
<td style="vertical-align: top; text-align: left">3392 B</td>
<td style="vertical-align: top; text-align: left">346 B</td>
<td style="vertical-align: top; text-align: left">13 ± 2 μs</td>
<td style="vertical-align: top; text-align: left">0.999</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">RF</td>
<td style="vertical-align: top; text-align: left">Criterion: Entropy</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">4444 B</td>
<td style="vertical-align: top; text-align: left">368 B</td>
<td style="vertical-align: top; text-align: left">84 ± 7 μs</td>
<td style="vertical-align: top; text-align: left">0.999</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Estimators: 6</td>
<td style="vertical-align: top; text-align: left"><italic>MicroMLGen</italic></td>
<td style="vertical-align: top; text-align: left">3952 B</td>
<td style="vertical-align: top; text-align: left">366 B</td>
<td style="vertical-align: top; text-align: left">79 ± 5 μs</td>
<td style="vertical-align: top; text-align: left">0.999</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Max. depth: 3</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left; border-bottom: solid thin">SVM</td>
<td style="vertical-align: top; text-align: left">Kernel: rbf</td>
<td style="vertical-align: top; text-align: left"><italic>MicroMLGen</italic></td>
<td style="vertical-align: top; text-align: left">out-of-range</td>
<td style="vertical-align: top; text-align: left">349 B</td>
<td style="vertical-align: top; text-align: left">–</td>
<td style="vertical-align: top; text-align: left">0.972</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Gamma: 0.01</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Decision-making latency is another key aspect when evaluating a TinyML model, given the computation restrictions of the target MCU. Again, the DT algorithm presents the best performance, followed by RF, with MLP being the slowest. This behaviour is explained by the simplicity of the former: MLP and RF produce more complex models, as evidenced by the flash and SRAM footprints discussed above. Finally, comparing the performance of the TinyML toolkits under consideration, observe that in all cases the models generated by <italic>MicroMLGen</italic> are lighter and faster than those produced by <italic>emlearn</italic>.</p>
<p>In the light of these results, the DT model generated by <italic>MicroMLGen</italic> is the most suitable one for implementing the end-device-level decisor, permitting each sensor to decide the most appropriate action for its monitored plants.</p>
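<p>To illustrate why the selected DT is so light and fast, note that a shallow tree exported by <italic>MicroMLGen</italic> compiles down to a handful of nested threshold comparisons. The following Python analogue of such generated code uses purely illustrative feature indices and thresholds, not the trained values:</p>

```python
# Hypothetical shallow decision tree for the end-device decisor.
# Input: x = [temperature, soil_moisture, soil_ph, soil_ec] (illustrative order).
# Output classes: 0 = "no action", 1 = "irrigation", 2 = "fertigation".
def predict(x):
    if x[1] <= 0.35:          # low soil moisture: water is needed
        if x[3] <= 1.2:       # low electrical conductivity: nutrients too
            return 2          # fertigation (water + nutrients)
        return 1              # irrigation only
    if x[3] <= 0.8:           # moist but nutrient-poor soil
        return 2              # fertigation
    return 0                  # no action
```

<p>Inference amounts to a few comparisons, which is consistent with the microsecond-scale latencies and few-kilobyte footprints reported in Table <xref rid="j_infor505_tab_002">2</xref>.</p>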
</sec>
<sec id="j_infor505_s_018">
<label>5.2</label>
<title>Edge-Level Decisor</title>
<p>Given that this decisor should not be subject to any uncertainty, we have selected, for each of the considered algorithms, the simplest model that provides a perfect accuracy of 100%. Therefore, after evaluating a large set of different configurations, for the sake of simplicity we only show the performance results of the finally chosen models (Table <xref rid="j_infor505_tab_003">3</xref>). Please note that, as the SVM algorithm was not able to provide a perfect accuracy for this decision model (best result of 81.5% with a linear kernel), it has not been included in the table or in the following discussion.</p>
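<p>The “simplest perfect model” criterion can be expressed as a small search: increase the candidate complexity (here, the DT depth) until cross-validated accuracy reaches 1.0 and keep the first hit. A minimal sketch on a synthetic, perfectly separable dataset, used here as an illustrative stand-in for the edge-level dataset:</p>

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: the label follows a crisp rule over binary features,
# so a sufficiently deep tree can reach a perfect accuracy of 100%.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = ((X[:, 0] == 1) & (X[:, 1] == 1)).astype(int)

best_depth = None
for depth in range(1, 11):                    # try the simplest models first
    clf = DecisionTreeClassifier(criterion="entropy", max_depth=depth,
                                 random_state=0)
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    if acc == 1.0:                            # perfect accuracy required
        best_depth = depth
        break
print("simplest perfect depth:", best_depth)
```

<p>An analogous search per algorithm family leads to configurations like those summarized in Table <xref rid="j_infor505_tab_003">3</xref>.</p>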
<table-wrap id="j_infor505_tab_003">
<label>Table 3</label>
<caption>
<p>Edge-level decisor models’ performance.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Configuration</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">TinyML toolkit</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Flash memory</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SRAM</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Latency</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">MLP</td>
<td style="vertical-align: top; text-align: left">Hidden layers: 8</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">6338 B</td>
<td style="vertical-align: top; text-align: left">944 B</td>
<td style="vertical-align: top; text-align: left">3 ± 0.1 ms</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Neurons: 6</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Act. Function: ReLU</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Max. iterations: 1000</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left">DT</td>
<td style="vertical-align: top; text-align: left">Criterion: Entropy</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">4072 B</td>
<td style="vertical-align: top; text-align: left">354 B</td>
<td style="vertical-align: top; text-align: left">45 ± 3 μs</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Depth: 7</td>
<td style="vertical-align: top; text-align: left"><italic>MicroMLGen</italic></td>
<td style="vertical-align: top; text-align: left">4048 B</td>
<td style="vertical-align: top; text-align: left">356 B</td>
<td style="vertical-align: top; text-align: left">32 ± 2 μs</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left; border-bottom: solid thin">RF</td>
<td style="vertical-align: top; text-align: left">Criterion: Entropy</td>
<td style="vertical-align: top; text-align: left"><italic>emlearn</italic></td>
<td style="vertical-align: top; text-align: left">4990 B</td>
<td style="vertical-align: top; text-align: left">370 B</td>
<td style="vertical-align: top; text-align: left">84 ± 5 μs</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Estimators: 3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><italic>MicroMLGen</italic></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">4724 B</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">368 B</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">81 ± 5 μs</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Regarding the MLP algorithm, the simplest configuration attaining perfect decision accuracy uses 8 hidden layers with 6 neurons each, the ReLU activation function, and a limit of 1000 training iterations. The remaining parameters have been set to the default values of the <italic>Scikit-learn</italic> library (this also applies to the following algorithms). For the DT algorithm, perfect accuracy required the entropy function as split criterion and a tree depth of 7 levels. Finally, the RF algorithm was configured with the same split-criterion function as the DT model and 3 estimators. With these configurations, the memory requirements of each model and their computation delays are shown in Table <xref rid="j_infor505_tab_003">3</xref>.</p>
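<p>For reference, the three chosen edge-level configurations map directly onto <italic>Scikit-learn</italic> constructors; parameters not listed below keep the library defaults, as stated above.</p>

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# MLP: 8 hidden layers with 6 neurons each, ReLU activation, 1000 iterations.
mlp = MLPClassifier(hidden_layer_sizes=(6,) * 8, activation="relu",
                    max_iter=1000)

# DT: entropy split criterion, depth limited to 7 levels.
dt = DecisionTreeClassifier(criterion="entropy", max_depth=7)

# RF: same split criterion as the DT model, 3 estimators.
rf = RandomForestClassifier(criterion="entropy", n_estimators=3)
```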
<p>Comparing the different families of algorithms, the MLP again presents more demanding memory requirements than the DT and RF algorithms, and it is also noticeably slower. As expected, DT models demand less memory and are faster than the RF ones. Finally, comparing the models generated by the <italic>emlearn</italic> and <italic>MicroMLGen</italic> libraries, it can be seen that the latter produces slightly more memory-optimized and faster models than the former.</p>
<p>In the light of the attained results, as in the previous case, the DT model produced by <italic>MicroMLGen</italic> is the most suitable one for implementing this decisor, given its perfect accuracy, its low impact on the device (13% and 17% of the total flash and SRAM memories, respectively), and its quick decision-making time.</p>
</sec>
<sec id="j_infor505_s_019">
<label>5.3</label>
<title>Overall System Performance</title>
<p>In the following, we explore the performance of the whole system, also considering the communication activities between end-devices and the edge-node. As mentioned above, using an LPWAN-based solution such as LoRa permits reliable long-range communications at the expense of a severely reduced data-rate, hence increasing the time needed to complete a transmission. Although one of the key characteristics of LPWAN technologies is their reduced energy consumption, communication activities are still the most power-demanding task for an end-device. For these reasons, we explore the performance of the system from both the latency and the energy-consumption perspectives.</p>
<p>Regarding end-to-end latency, we should consider both processing and transmission times. In LoRa, the transmission time of a given message is markedly determined by its low-level configuration, especially the Spreading Factor (SF) parameter. As the SF increases, the data-rate is reduced; therefore, the robustness of the transmission is enhanced but the transmission time notably grows. In order to explore the system performance, we have made use of the two extreme SF configurations, i.e. SF7 (high data-rate, low transmission robustness) and SF12 (low data-rate, high transmission robustness). The rest of the LoRa configuration parameters have been fixed as follows: Bandwidth (BW): 125 kHz, Coding Rate (CR): 4/5, and CRC check: on. Given that there are only 3 possible messages to be exchanged between the end-devices, the edge-node, and the sprinklers, i.e. “no action”, “irrigation”, and “fertigation”, just 2 bits are needed to encode them, which constitutes the data payload transported in each transmission.</p>
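<p>The transmission times discussed below follow from Semtech’s LoRa time-on-air formula. The sketch assumes a 1-byte payload (the 2-bit command padded to a byte), an 8-symbol preamble, implicit header mode, and low-data-rate optimization for SF11/12; these assumptions are our reading of the setup and reproduce the 25.8 ms (SF7) and 663.5 ms (SF12) figures of Table <xref rid="j_infor505_tab_004">4</xref> up to rounding.</p>

```python
from math import ceil

def lora_time_on_air_ms(sf, payload_bytes=1, bw_hz=125_000, cr=1,
                        preamble_syms=8, crc=True, implicit_header=True):
    """LoRa (SX1272) packet time-on-air in ms; cr=1 encodes coding rate 4/5."""
    t_sym = (2 ** sf) / bw_hz                  # symbol duration, seconds
    de = 1 if sf >= 11 else 0                  # low-data-rate optimization
    t_preamble = (preamble_syms + 4.25) * t_sym
    num = (8 * payload_bytes - 4 * sf + 28
           + 16 * crc - 20 * implicit_header)
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (t_preamble + n_payload * t_sym) * 1000

print(lora_time_on_air_ms(7), lora_time_on_air_ms(12))
```

<p>Under these assumptions, SF7 yields ≈25.9 ms and SF12 ≈663.6 ms per message; since every decision involves two transmissions, communication alone accounts for roughly 52 ms (SF7) or 1.33 s (SF12) of the end-to-end latency.</p>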
<p>Thereby, the end-to-end latency for making a decision with the previously selected models at each level (DTs), assuming that the environmental data have already been collected, is presented in Table <xref rid="j_infor505_tab_004">4</xref>. As can be seen, processing times are negligible in comparison with transmission times. This holds for the finally selected models (DTs), but observe that other algorithms such as MLP would introduce a non-negligible delay (see Table <xref rid="j_infor505_tab_002">2</xref> and Table <xref rid="j_infor505_tab_003">3</xref>), a fact that should be taken into consideration when designing systems such as the one presented. Other delays related to the transmission coordination among end-devices could also be considered, although these aspects are out of the scope of this work. Given that the greatest contribution to the end-to-end latency comes from communication activities, it is necessary to reduce them, for example, by avoiding the transmission of large volumes of raw data from the end-devices to the edge-node, as we propose in our solution.</p>
<table-wrap id="j_infor505_tab_004">
<label>Table 4</label>
<caption>
<p>Activities’ latency.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">End-device decision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">End-device → Edge-node transmission</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Edge-node decision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Edge-node → Sprinkler transmission</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">SF7</td>
<td style="vertical-align: top; text-align: left">13 μs</td>
<td style="vertical-align: top; text-align: left">25.8 ms</td>
<td style="vertical-align: top; text-align: left">32 μs</td>
<td style="vertical-align: top; text-align: left">25.8 ms</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">SF12</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">13 μs</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">663.5 ms</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">32 μs</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">663.5 ms</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_infor505_tab_005">
<label>Table 5</label>
<caption>
<p>Activities’ energy consumption with LoRa’s SF7.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">End-device decision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">End-device → Edge-node transmission/reception</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Edge-node decision</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Edge-node → Sprinkler transmission</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">End-device</td>
<td style="vertical-align: top; text-align: left">0.11 μA</td>
<td style="vertical-align: top; text-align: left">3.2 mA</td>
<td style="vertical-align: top; text-align: left">–</td>
<td style="vertical-align: top; text-align: left">–</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Edge-node</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">–</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.29 mA</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.28 μA</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3.2 mA</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Regarding energy consumption, we have considered the processing and transmission times shown in Table <xref rid="j_infor505_tab_004">4</xref>, as well as the consumption charts found in the datasheets of the employed equipment (the Arduino Uno’s microcontroller (ATmega 328p) and the Semtech SX1272 LoRa modem). We have calculated each device’s energy consumption assuming LoRa’s SF7 and a 20 dB gain, with the microcontroller working at 16 MHz and an operating voltage of 5 V (the default configuration). Please note that this is a theoretical estimation of the consumption of the involved processing and communication activities, without considering other device tasks, e.g. environmental sensing. The attained results are shown in Table <xref rid="j_infor505_tab_005">5</xref>. As discussed in previous sections, communication activities are notably more power-consuming than computation tasks. Nevertheless, the reduced consumption of the LoRa technology and the limited number of transmissions per day permit long device battery lifetimes with the power-banks commonly used in IoT deployments. This is a crucial feature for ensuring the scalability and manageability of the system.</p>
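<p>The dominance of communication over computation can be quantified from Tables <xref rid="j_infor505_tab_004">4</xref> and <xref rid="j_infor505_tab_005">5</xref> by estimating the charge drawn per decision cycle. The one-cycle-per-hour duty cycle below is an illustrative assumption, and sensing and sleep currents are deliberately excluded, as in our estimation.</p>

```python
# End-device charge per decision cycle at SF7 (values from Tables 4 and 5).
I_DECISION_A = 0.11e-6    # 0.11 uA drawn while the DT decisor runs
T_DECISION_S = 13e-6      # 13 us inference latency
I_TX_A = 3.2e-3           # 3.2 mA drawn during the LoRa uplink
T_TX_S = 25.8e-3          # 25.8 ms time-on-air at SF7

q_decision = I_DECISION_A * T_DECISION_S   # coulombs spent deciding
q_tx = I_TX_A * T_TX_S                     # coulombs spent transmitting

# Illustrative duty cycle (assumption): one decision + uplink per hour.
daily_mah = (q_decision + q_tx) * 24 * 1000 / 3600

print(f"decision: {q_decision:.2e} C, tx: {q_tx:.2e} C, "
      f"daily (compute + comms only): {daily_mah:.2e} mAh")
```

<p>Transmission dominates this budget by several orders of magnitude, so reducing the number and size of transmissions, as the proposed architecture does, matters far more for battery lifetime than optimizing inference.</p>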
<p>In the light of the attained results, we consider that the proposed solution may be of high interest for introducing an intelligent, low-cost, and efficient automation system into current non-digitalized greenhouses or other cultivation facilities, hence enabling the transition of traditional agriculture towards the smart-agriculture paradigm.</p>
</sec>
</sec>
<sec id="j_infor505_s_020">
<label>6</label>
<title>Threats to Validity</title>
<p>This section presents the threats that may potentially impact the validity of this empirical study and the measures taken to mitigate them. Three threats to validity are discussed, namely, internal, external, and construct validity.</p>
<sec id="j_infor505_s_021">
<label>6.1</label>
<title>Threats to Internal Validity</title>
<p>This threat concerns the evaluation process, which, if inaccurate, can lead to biased conclusions. To mitigate this possible issue, all the models were trained and tested using 10-fold cross-validation, which prevents the overfitting of our classifiers that may appear when using a single random train-test split. Moreover, we used the grid-search strategy to tune the hyper-parameters of the different employed classifiers in order to find the optimum configuration for each of them.</p>
</sec>
<sec id="j_infor505_s_022">
<label>6.2</label>
<title>Threats to External Validity</title>
<p>External validity relates to the extent to which the results obtained in this study can be generalized outside its context. In our case, the datasets used were generated from empirical knowledge provided by agricultural engineers, concerning both the models’ classification features (temperature, soil moisture, soil pH, and soil electrical conductivity) and the defined classes (irrigation and fertigation). This approach supports the generalization of the findings of this study, especially to real-life case-studies. Besides, we have comprehensively detailed the empirical methodology followed, in order to ease the reproduction of this study in other fields of application.</p>
</sec>
<sec id="j_infor505_s_023">
<label>6.3</label>
<title>Threats to Construct Validity</title>
<p>Construct validity addresses the reliability of the predictive-model performance obtained through this study. To reduce this potential limitation, four criteria were used: three related to the constraints of end-devices in an IoT context (flash memory, SRAM, and latency), and one related to the ML domain (accuracy). Other ML criteria could be added to assess the reliability of our classifiers, such as the Area Under the Curve (AUC), F1-score, precision, and recall. Moreover, statistical tests and ranking methods, which would assess the significance of the performance differences and provide an overall ranking of our classifiers in a real deployment, have been left for future study.</p>
</sec>
</sec>
<sec id="j_infor505_s_024">
<label>7</label>
<title>Conclusion</title>
<p>This work has presented a novel stacking-based ensemble TinyML system for enabling collaborative decision-making between IoT devices and edge-nodes. Our proposal represents a step forward with respect to the state-of-the-art, as it enables the development of hierarchical intelligent IoT systems by adopting an edge-computing approach and exploiting the TinyML paradigm, a combination that had not been addressed in the literature yet. Concretely, the proposed solution permits end-devices to make individual decisions considering their surrounding information. Thereafter, these individual decisions are submitted to a top-level element at the edge, which aggregates them in order to make a system-wide one, aiming at obtaining common benefits for the deployed elements. Without loss of generality, the proposal has been evaluated in a realistic use-case focused on smart-agriculture. To this end, a real implementation has been carried out considering several ML models, which have been embedded on an Arduino Uno unit with LoRa-powered communication capabilities, using two different TinyML frameworks. The attained results show the validity of the proposal, as many different TinyML models can be integrated within the IoT board to perform the desired ML-based tasks. Concretely, the DT algorithm has evidenced the most adequate performance in terms of memory and storage footprint, processing latency, energy consumption, and decision accuracy. Therefore, the presented solution enables the integration of distributed intelligence in current IoT deployments while ensuring long lifetimes of end-devices. This paves the way for further research in the field of embedded distributed intelligence within the scope of the IoT ecosystem and edge computing. Besides, we are currently working on the deployment of the presented system in a production farm located in Ben Guerir (Morocco). This implementation in a real environment will permit us to evaluate other TinyML mechanisms and toolkits while improving the performance of the system presented in this paper.</p>
</sec>
</body>
<back>
<ref-list id="j_infor505_reflist_001">
<title>References</title>
<ref id="j_infor505_ref_001">
<mixed-citation publication-type="journal"><string-name><surname>Abdel-Basset</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Manogaran</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Gamal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>V.</given-names></string-name> (<year>2020</year>). <article-title>A novel intelligent medical decision support model based on soft computing and IoT</article-title>. <source>IEEE Internet of Things Journal</source>, <volume>7</volume>(<issue>5</issue>), <fpage>4160</fpage>–<lpage>4170</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/JIOT.2019.2931647" xlink:type="simple">https://doi.org/10.1109/JIOT.2019.2931647</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8787865/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_002">
<mixed-citation publication-type="journal"><string-name><surname>Alaba</surname>, <given-names>F.A.</given-names></string-name>, <string-name><surname>Othman</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hashem</surname>, <given-names>I.A.T.</given-names></string-name>, <string-name><surname>Alotaibi</surname>, <given-names>F.</given-names></string-name> (<year>2017</year>). <article-title>Internet of things security: a survey</article-title>. <source>Journal of Network and Computer Applications</source>, <volume>88</volume>, <fpage>10</fpage>–<lpage>28</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jnca.2017.04.002" xlink:type="simple">https://doi.org/10.1016/j.jnca.2017.04.002</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S1084804517301455</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_003">
<mixed-citation publication-type="chapter"><string-name><surname>Alongi</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Nicolò</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Danilo</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Terraneo</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Fornaciari</surname>, <given-names>W.</given-names></string-name> (<year>2020</year>). <chapter-title>Tiny neural networks for environmental predictions: an integrated approach with Miosix</chapter-title>. In: <source>Fifth IEEE Workshop on Smart Service Systems (SmartSys 2020)</source>, pp. <fpage>350</fpage>–<lpage>355</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/SMARTCOMP50058.2020.00076" xlink:type="simple">https://doi.org/10.1109/SMARTCOMP50058.2020.00076</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_004">
<mixed-citation publication-type="chapter"><string-name><surname>Ashouri</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Davidsson</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Spalazzese</surname>, <given-names>R.</given-names></string-name> (<year>2018</year>). <chapter-title>Cloud, edge, or both? Towards decision support for designing IoT applications</chapter-title>. In: <source>2018 Fifth International Conference on Internet of Things: Systems, Management and Security</source>, pp. <fpage>155</fpage>–<lpage>162</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IoTSMS.2018.8554827" xlink:type="simple">https://doi.org/10.1109/IoTSMS.2018.8554827</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8554827/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_005">
<mixed-citation publication-type="journal"><string-name><surname>Atitallah</surname>, <given-names>S.B.</given-names></string-name>, <string-name><surname>Driss</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Boulila</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Ghézala</surname>, <given-names>H.B.</given-names></string-name> (<year>2020</year>). <article-title>Leveraging deep learning and IoT big data analytics to support the smart cities development: review and future directions</article-title>. <source>Computer Science Review</source>, <volume>38</volume>, <elocation-id>100303</elocation-id>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.cosrev.2020.100303" xlink:type="simple">https://doi.org/10.1016/j.cosrev.2020.100303</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S1574013720304032</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_006">
<mixed-citation publication-type="other"><string-name><surname>Bormann</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ersue</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Keranen</surname>, <given-names>A.</given-names></string-name> (2014). Terminology for constrained-node networks. In: <italic>IETF RFC 7228</italic>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.17487/RFC7228" xlink:type="simple">https://doi.org/10.17487/RFC7228</ext-link>. <comment><uri>https://www.rfc-editor.org/info/rfc7228</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_007">
<mixed-citation publication-type="chapter"><string-name><surname>Breitbach</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Schafer</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Edinger</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Becker</surname>, <given-names>C.</given-names></string-name> (<year>2019</year>). <chapter-title>Context-aware data and task placement in edge computing environments</chapter-title>. In: <source>IEEE International Conference on Pervasive Computing and Communications (PerCom)</source>, pp. <fpage>1</fpage>–<lpage>10</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/PERCOM.2019.8767386" xlink:type="simple">https://doi.org/10.1109/PERCOM.2019.8767386</ext-link>. <comment><ext-link ext-link-type="uri" xlink:href="https://ieeexplore.ieee.org/document/8767386/">https://ieeexplore.ieee.org/document/8767386/</ext-link></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>Chao</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Wen-hui</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Ran</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Jun-yi</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Ji-ming</surname>, <given-names>L.</given-names></string-name> (<year>2020</year>). <article-title>Research on star/galaxy classification based on stacking ensemble learning</article-title>. <source>Chinese Astronomy and Astrophysics</source>, <volume>44</volume>(<issue>3</issue>), <fpage>345</fpage>–<lpage>355</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.chinastron.2020.08.005" xlink:type="simple">https://doi.org/10.1016/j.chinastron.2020.08.005</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0275106220300783</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_009">
<mixed-citation publication-type="journal"><string-name><surname>Chatzimparmpas</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Martins</surname>, <given-names>R.M.</given-names></string-name>, <string-name><surname>Kucher</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kerren</surname>, <given-names>A.</given-names></string-name> (<year>2021</year>). <article-title>StackGenVis: alignment of data, algorithms, and models for stacking ensemble learning using performance metrics</article-title>. <source>IEEE Transactions on Visualization and Computer Graphics</source>, <volume>27</volume>(<issue>2</issue>), <fpage>1547</fpage>–<lpage>1557</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TVCG.2020.3030352" xlink:type="simple">https://doi.org/10.1109/TVCG.2020.3030352</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/9222343/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_010">
<mixed-citation publication-type="journal"><string-name><surname>Cheng</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>P.-F.</given-names></string-name>, <string-name><surname>Cheng</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Ding</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zheng</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Ge</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Sun</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>J.</given-names></string-name> (<year>2020</year>). <article-title>Ensemble learning approach based on stacking for unmanned surface vehicle’s dynamics</article-title>. <source>Ocean Engineering</source>, <volume>207</volume>, <fpage>107388</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.oceaneng.2020.107388" xlink:type="simple">https://doi.org/10.1016/j.oceaneng.2020.107388</ext-link>. <comment><ext-link ext-link-type="uri" xlink:href="https://linkinghub.elsevier.com/retrieve/pii/S0029801820304170">https://linkinghub.elsevier.com/retrieve/pii/S0029801820304170</ext-link></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_011">
<mixed-citation publication-type="journal"><string-name><surname>Cirillo</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>F.-J.</given-names></string-name>, <string-name><surname>Solmaz</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Kovacs</surname>, <given-names>E.</given-names></string-name> (<year>2019</year>). <article-title>Embracing the future Internet of things</article-title>. <source>Sensors</source>, <volume>19</volume>(<issue>2</issue>), <fpage>351</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/s19020351" xlink:type="simple">https://doi.org/10.3390/s19020351</ext-link>. <comment><uri>https://www.mdpi.com/1424-8220/19/2/351</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>Cui</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Yang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>J.Z.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Ming</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Lu</surname>, <given-names>N.</given-names></string-name> (<year>2019</year>). <article-title>Joint optimization of energy consumption and latency in mobile edge computing for Internet of things</article-title>. <source>IEEE Internet of Things Journal</source>, <volume>6</volume>(<issue>3</issue>), <fpage>4791</fpage>–<lpage>4803</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/JIOT.2018.2869226" xlink:type="simple">https://doi.org/10.1109/JIOT.2018.2869226</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8457190/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_013">
<mixed-citation publication-type="other"><string-name><surname>de Prado</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Donze</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Capotondi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Rusci</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Monnerat</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Benini</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Pazos</surname>, <given-names>N.</given-names></string-name> (<year>2020</year>). <italic>Robust Navigation with TinyML for Autonomous Mini-Vehicles</italic>. arXiv preprint: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2007.00302">arXiv:2007.00302</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_014">
<mixed-citation publication-type="chapter"><string-name><surname>Elwerghemmi</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Heni</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ksantini</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Bouallegue</surname>, <given-names>R.</given-names></string-name> (<year>2019</year>). <chapter-title>Online QoE prediction model based on stacked multiclass incremental support vector machine</chapter-title>. In: <source>8th International Conference on Modeling Simulation and Applied Optimization (ICMSAO)</source>, pp. <fpage>1</fpage>–<lpage>5</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICMSAO.2019.8880302" xlink:type="simple">https://doi.org/10.1109/ICMSAO.2019.8880302</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8880302/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_015">
<mixed-citation publication-type="chapter"><string-name><surname>Emre Isik</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Gormez</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Kaynar</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Aydin</surname>, <given-names>Z.</given-names></string-name> (<year>2018</year>). <chapter-title>NSEM: novel stacked ensemble method for sentiment analysis</chapter-title>. In: <source>International Conference on Artificial Intelligence and Data Processing (IDAP)</source>, pp. <fpage>1</fpage>–<lpage>4</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IDAP.2018.8620913" xlink:type="simple">https://doi.org/10.1109/IDAP.2018.8620913</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8620913/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_016">
<mixed-citation publication-type="journal"><string-name><surname>Hadj Sassi</surname>, <given-names>M.S.</given-names></string-name>, <string-name><surname>Jedidi</surname>, <given-names>F.G.</given-names></string-name>, <string-name><surname>Fourati</surname>, <given-names>L.C.</given-names></string-name> (<year>2019</year>). <article-title>A new architecture for cognitive Internet of things and big data</article-title>. <source>Procedia Computer Science</source>, <volume>159</volume>, <fpage>534</fpage>–<lpage>543</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.procs.2019.09.208" xlink:type="simple">https://doi.org/10.1016/j.procs.2019.09.208</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S1877050919313924</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_017">
<mixed-citation publication-type="chapter"><string-name><surname>Kowsari</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>D.E.</given-names></string-name>, <string-name><surname>Heidarysafa</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Jafari Meimandi</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gerber</surname>, <given-names>M.S.</given-names></string-name>, <string-name><surname>Barnes</surname>, <given-names>L.E.</given-names></string-name> (<year>2017</year>). <chapter-title>HDLTex: hierarchical deep learning for text classification</chapter-title>. In: <source>16th IEEE International Conference on Machine Learning and Applications (ICMLA)</source>, pp. <fpage>364</fpage>–<lpage>371</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICMLA.2017.0-134" xlink:type="simple">https://doi.org/10.1109/ICMLA.2017.0-134</ext-link>. <comment><uri>http://ieeexplore.ieee.org/document/8260658/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_018">
<mixed-citation publication-type="journal"><string-name><surname>Ksia̧żek</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Hammad</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Pławiak</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Acharya</surname>, <given-names>U.R.</given-names></string-name>, <string-name><surname>Tadeusiewicz</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <article-title>Development of novel ensemble model using stacking learning and evolutionary computation techniques for automated hepatocellular carcinoma detection</article-title>. <source>Biocybernetics and Biomedical Engineering</source>, <volume>40</volume>(<issue>4</issue>), <fpage>1512</fpage>–<lpage>1524</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.bbe.2020.08.007" xlink:type="simple">https://doi.org/10.1016/j.bbe.2020.08.007</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0208521620300991</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_019">
<mixed-citation publication-type="chapter"><string-name><surname>Lahade</surname>, <given-names>S.V.</given-names></string-name>, <string-name><surname>Namuduri</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Upadhyay</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Bhansali</surname>, <given-names>S.</given-names></string-name> (<year>2020</year>). <chapter-title>Alcohol sensor calibration on the edge using tiny machine learning (Tiny-ML) hardware</chapter-title>. In: <source>237th ECS Meeting with the 18th International Meeting on Chemical Sensors (IMCS 2020), May 10–14, 2020</source>. <comment>ECS</comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_020">
<mixed-citation publication-type="chapter"><string-name><surname>Liu</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Cheng</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Ozcelebi</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Murphy</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lukkien</surname>, <given-names>J.</given-names></string-name> (<year>2019</year>). <chapter-title>Deep reinforcement learning for IoT network dynamic clustering in edge computing</chapter-title>. In: <source>19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)</source>, pp. <fpage>600</fpage>–<lpage>603</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/CCGRID.2019.00077" xlink:type="simple">https://doi.org/10.1109/CCGRID.2019.00077</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8752691/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_021">
<mixed-citation publication-type="chapter"><string-name><surname>Lopez Pena</surname>, <given-names>M.A.</given-names></string-name>, <string-name><surname>Munoz Fernandez</surname>, <given-names>I.</given-names></string-name> (<year>2019</year>). <chapter-title>SAT-IoT: an architectural model for a high-performance fog/edge/cloud IoT platform</chapter-title>. In: <source>2019 IEEE 5th World Forum on Internet of Things (WF-IoT)</source>, pp. <fpage>633</fpage>–<lpage>638</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/WF-IoT.2019.8767282" xlink:type="simple">https://doi.org/10.1109/WF-IoT.2019.8767282</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8767282/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Marjanovic</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Antonic</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Zarko</surname>, <given-names>I.P.</given-names></string-name> (<year>2018</year>). <article-title>Edge computing architecture for mobile crowdsensing</article-title>. <source>IEEE Access</source>, <volume>6</volume>, <fpage>10662</fpage>–<lpage>10674</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ACCESS.2018.2799707" xlink:type="simple">https://doi.org/10.1109/ACCESS.2018.2799707</ext-link>. <comment><uri>http://ieeexplore.ieee.org/document/8272334/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_023">
<mixed-citation publication-type="journal"><string-name><surname>Mocnej</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Miškuf</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Papcun</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Zolotová</surname>, <given-names>I.</given-names></string-name> (<year>2018</year>). <article-title>Impact of edge computing paradigm on energy consumption in IoT</article-title>. <source>IFAC-PapersOnLine</source>, <volume>51</volume>(<issue>6</issue>), <fpage>162</fpage>–<lpage>167</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ifacol.2018.07.147" xlink:type="simple">https://doi.org/10.1016/j.ifacol.2018.07.147</ext-link>. <comment><uri>https://www.sciencedirect.com/science/article/pii/S2405896318308917</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_024">
<mixed-citation publication-type="chapter"><string-name><surname>Mounica</surname>, <given-names>R.O.</given-names></string-name>, <string-name><surname>Soumya</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Krovvidi</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Chandrika</surname>, <given-names>K.S.</given-names></string-name>, <string-name><surname>Gayathri</surname>, <given-names>R.</given-names></string-name> (<year>2019</year>). <chapter-title>A multi layer ensemble learning framework for learning disability detection in school-aged children</chapter-title>. In: <source>10th International Conference on Computing, Communication and Networking Technologies (ICCCNT)</source>, pp. <fpage>1</fpage>–<lpage>6</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICCCNT45670.2019.8944774" xlink:type="simple">https://doi.org/10.1109/ICCCNT45670.2019.8944774</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8944774/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Mrozek</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Koczur</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Małysiak-Mrozek</surname>, <given-names>B.</given-names></string-name> (<year>2020</year>). <article-title>Fall detection in older adults with mobile IoT devices and machine learning in the cloud and on the edge</article-title>. <source>Information Sciences</source>, <volume>537</volume>, <fpage>132</fpage>–<lpage>147</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ins.2020.05.070" xlink:type="simple">https://doi.org/10.1016/j.ins.2020.05.070</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0020025520304886</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_026">
<mixed-citation publication-type="chapter"><string-name><surname>Pavlyshenko</surname>, <given-names>B.</given-names></string-name> (<year>2018</year>). <chapter-title>Using stacking approaches for machine learning models</chapter-title>. In: <source>IEEE Second International Conference on Data Stream Mining &amp; Processing (DSMP)</source>, pp. <fpage>255</fpage>–<lpage>258</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/DSMP.2018.8478522" xlink:type="simple">https://doi.org/10.1109/DSMP.2018.8478522</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8478522/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_027">
<mixed-citation publication-type="chapter"><string-name><surname>Pontoppidan</surname>, <given-names>N.H.</given-names></string-name> (<year>2020</year>). <chapter-title>Voice separation with tiny ML on the edge</chapter-title>. In: <source>TinyML Summit</source>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_028">
<mixed-citation publication-type="journal"><string-name><surname>Porambage</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Okwuibe</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Liyanage</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Ylianttila</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Taleb</surname>, <given-names>T.</given-names></string-name> (<year>2018</year>). <article-title>Survey on multi-access edge computing for Internet of things realization</article-title>. <source>IEEE Communications Surveys &amp; Tutorials</source>, <volume>20</volume>(<issue>4</issue>), <fpage>2961</fpage>–<lpage>2991</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/COMST.2018.2849509" xlink:type="simple">https://doi.org/10.1109/COMST.2018.2849509</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8391395/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_029">
<mixed-citation publication-type="journal"><string-name><surname>Raj</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Gupta</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Chamola</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Elhence</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Garg</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Atiquzzaman</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Niyato</surname>, <given-names>D.</given-names></string-name> (<year>2021</year>). <article-title>A survey on the role of Internet of things for adopting and promoting Agriculture 4.0</article-title>. <source>Journal of Network and Computer Applications</source>, <volume>187</volume>, <fpage>103107</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jnca.2021.103107" xlink:type="simple">https://doi.org/10.1016/j.jnca.2021.103107</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S1084804521001284</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_030">
<mixed-citation publication-type="journal"><string-name><surname>Rokach</surname>, <given-names>L.</given-names></string-name> (<year>2010</year>). <article-title>Ensemble-based classifiers</article-title>. <source>Artificial Intelligence Review</source>, <volume>33</volume>(<issue>1–2</issue>), <fpage>1</fpage>–<lpage>39</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10462-009-9124-7" xlink:type="simple">https://doi.org/10.1007/s10462-009-9124-7</ext-link>. <comment><uri>http://link.springer.com/10.1007/s10462-009-9124-7</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_031">
<mixed-citation publication-type="chapter"><string-name><surname>Sanchez-Gomez</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gallego-Madrid</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Skarmeta</surname>, <given-names>A.F.</given-names></string-name> (<year>2019</year>). <chapter-title>Performance study of LoRaWAN for smart-city applications</chapter-title>. In: <source>IEEE 2nd 5G World Forum (5GWF)</source>, pp. <fpage>58</fpage>–<lpage>62</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/5GWF.2019.8911676" xlink:type="simple">https://doi.org/10.1109/5GWF.2019.8911676</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8911676/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_032">
<mixed-citation publication-type="journal"><string-name><surname>Sanchez-Gomez</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gallego-Madrid</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Santa</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Skarmeta Gómez</surname>, <given-names>A.F.</given-names></string-name> (<year>2020</year>). <article-title>Impact of SCHC compression and fragmentation in LPWAN: a case study with LoRaWAN</article-title>. <source>Sensors</source>, <volume>20</volume>(<issue>1</issue>), <fpage>280</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/s20010280" xlink:type="simple">https://doi.org/10.3390/s20010280</ext-link>. <comment><uri>https://www.mdpi.com/1424-8220/20/1/280</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_033">
<mixed-citation publication-type="journal"><string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Cano</surname>, <given-names>M.-D.</given-names></string-name> (<year>2016</year>). <article-title>State of the art in LP-WAN solutions for industrial IoT services</article-title>. <source>Sensors</source>, <volume>16</volume>(<issue>5</issue>), <fpage>708</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/s16050708" xlink:type="simple">https://doi.org/10.3390/s16050708</ext-link>. <comment><uri>http://www.mdpi.com/1424-8220/16/5/708/htm</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_034">
<mixed-citation publication-type="journal"><string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Skarmeta</surname>, <given-names>A.F.</given-names></string-name> (<year>2020</year>). <article-title>TinyML-enabled frugal smart objects: challenges and opportunities</article-title>. <source>IEEE Circuits and Systems Magazine</source>, <volume>20</volume>(<issue>3</issue>), <fpage>4</fpage>–<lpage>18</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/MCAS.2020.3005467" xlink:type="simple">https://doi.org/10.1109/MCAS.2020.3005467</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/9166461/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_035">
<mixed-citation publication-type="journal"><string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Sanchez-Gomez</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Skarmeta</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Evolving IoT networks by the confluence of MEC and LP-WAN paradigms</article-title>. <source>Future Generation Computer Systems</source>, <volume>88</volume>, <fpage>199</fpage>–<lpage>208</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.future.2018.05.057" xlink:type="simple">https://doi.org/10.1016/j.future.2018.05.057</ext-link>. <comment><uri>http://linkinghub.elsevier.com/retrieve/pii/S0167739X17324159</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_036">
<mixed-citation publication-type="journal"><string-name><surname>Santa</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fernández</surname>, <given-names>P.J.</given-names></string-name>, <string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Ortiz</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Skarmeta</surname>, <given-names>A.F.</given-names></string-name> (<year>2018</year>). <article-title>Offloading positioning onto network edge</article-title>. <source>Wireless Communications and Mobile Computing</source>, <volume>2018</volume>, <fpage>1</fpage>–<lpage>13</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1155/2018/7868796" xlink:type="simple">https://doi.org/10.1155/2018/7868796</ext-link>. <comment><uri>https://www.hindawi.com/journals/wcmc/2018/7868796/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_037">
<mixed-citation publication-type="journal"><string-name><surname>Santa</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Skarmeta</surname>, <given-names>A.F.</given-names></string-name>, <string-name><surname>Ortiz</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fernandez</surname>, <given-names>P.J.</given-names></string-name>, <string-name><surname>Luis</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Gomes</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Oliveira</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Gomes</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Sanchez-Iborra</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Sargento</surname>, <given-names>S.</given-names></string-name> (<year>2020</year>). <article-title>MIGRATE: mobile device virtualisation through state transfer</article-title>. <source>IEEE Access</source>, <volume>8</volume>, <fpage>25848</fpage>–<lpage>25862</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ACCESS.2020.2971090" xlink:type="simple">https://doi.org/10.1109/ACCESS.2020.2971090</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/8978626/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_038">
<mixed-citation publication-type="other"><collab>Semtech</collab> (<year>2019</year>). <italic>SX1272 LoRa Modem Datasheet v4</italic>. Technical report. <comment><uri>https://www.semtech.com/products/wireless-rf/lora-transceivers/sx1272</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_039">
<mixed-citation publication-type="chapter"><string-name><surname>Silva</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ribeiro</surname>, <given-names>B.</given-names></string-name> (<year>2006</year>). <chapter-title>Two-level hierarchical hybrid SVM-RVM classification model</chapter-title>. In: <source>5th International Conference on Machine Learning and Applications (ICMLA’06)</source>, pp. <fpage>89</fpage>–<lpage>94</lpage>. <isbn>0-7695-2735-3</isbn>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICMLA.2006.52" xlink:type="simple">https://doi.org/10.1109/ICMLA.2006.52</ext-link>. <comment><uri>http://ieeexplore.ieee.org/document/4041475/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_040">
<mixed-citation publication-type="chapter"><string-name><surname>Song</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yau</surname>, <given-names>S.S.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Xue</surname>, <given-names>G.</given-names></string-name> (<year>2017</year>). <chapter-title>An approach to QoS-based task distribution in edge computing networks for IoT applications</chapter-title>. In: <source>IEEE International Conference on Edge Computing (EDGE)</source>, pp. <fpage>32</fpage>–<lpage>39</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IEEE.EDGE.2017.50" xlink:type="simple">https://doi.org/10.1109/IEEE.EDGE.2017.50</ext-link>. <comment><uri>http://ieeexplore.ieee.org/document/8029254/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_041">
<mixed-citation publication-type="journal"><string-name><surname>van Klompenburg</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kassahun</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Catal</surname>, <given-names>C.</given-names></string-name> (<year>2020</year>). <article-title>Crop yield prediction using machine learning: a systematic literature review</article-title>. <source>Computers and Electronics in Agriculture</source>, <volume>177</volume>, <fpage>105709</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.compag.2020.105709" xlink:type="simple">https://doi.org/10.1016/j.compag.2020.105709</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0168169920302301</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_042">
<mixed-citation publication-type="journal"><string-name><surname>Veeramanikandan</surname></string-name>, <string-name><surname>Sankaranarayanan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Rodrigues</surname>, <given-names>J.J.P.C.</given-names></string-name>, <string-name><surname>Sugumaran</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Kozlov</surname>, <given-names>S.</given-names></string-name> (<year>2020</year>). <article-title>Data flow and distributed deep neural network based low latency IoT-edge computation model for big data environment</article-title>. <source>Engineering Applications of Artificial Intelligence</source>, <volume>94</volume>, <fpage>103785</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.engappai.2020.103785" xlink:type="simple">https://doi.org/10.1016/j.engappai.2020.103785</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0952197620301780</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_043">
<mixed-citation publication-type="chapter"><string-name><surname>Vuppalapati</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Ilapakurti</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Kedari</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Vuppalapati</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kedari</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Vuppalapati</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <chapter-title>Democratization of AI, albeit constrained IoT devices &amp; tiny ML, for creating a sustainable food future</chapter-title>. In: <source>3rd International Conference on Information and Computer Technologies (ICICT)</source>, pp. <fpage>525</fpage>–<lpage>530</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICICT50521.2020.00089" xlink:type="simple">https://doi.org/10.1109/ICICT50521.2020.00089</ext-link>. <comment><uri>https://ieeexplore.ieee.org/document/9092247/</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_044">
<mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name> (<year>2021</year>). <article-title>NDN-based IoT with edge computing</article-title>. <source>Future Generation Computer Systems</source>, <volume>115</volume>, <fpage>397</fpage>–<lpage>405</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.future.2020.09.018" xlink:type="simple">https://doi.org/10.1016/j.future.2020.09.018</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0167739X20303903</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_045">
<mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Geng</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yin</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Jin</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <article-title>Stacking-based ensemble learning of decision trees for interpretable prostate cancer detection</article-title>. <source>Applied Soft Computing</source>, <volume>77</volume>, <fpage>188</fpage>–<lpage>204</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.asoc.2019.01.015" xlink:type="simple">https://doi.org/10.1016/j.asoc.2019.01.015</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S1568494619300195</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_046">
<mixed-citation publication-type="book"><string-name><surname>Warden</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Situnayake</surname>, <given-names>D.</given-names></string-name> (<year>2019</year>). <source>TinyML: Machine Learning with Tensorflow Lite on Arduino and Ultra-Low-Power Microcontrollers</source>. <publisher-name>O’Reilly UK Ltd</publisher-name>. <isbn>978-1492052043</isbn>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_047">
<mixed-citation publication-type="journal"><string-name><surname>Wolpert</surname>, <given-names>D.H.</given-names></string-name> (<year>1992</year>). <article-title>Stacked generalization</article-title>. <source>Neural Networks</source>, <volume>5</volume>(<issue>2</issue>), <fpage>241</fpage>–<lpage>259</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/S0893-6080(05)80023-1" xlink:type="simple">https://doi.org/10.1016/S0893-6080(05)80023-1</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0893608005800231</uri></comment>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_048">
<mixed-citation publication-type="other"><string-name><surname>Wong</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Famouri</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Shafiee</surname>, <given-names>M.J.</given-names></string-name> (<year>2020</year>). <source>AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers</source>. arXiv preprint: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2009.14385" xlink:type="simple">arXiv:2009.14385</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor505_ref_049">
<mixed-citation publication-type="journal"><string-name><surname>Xu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Luo</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Peng</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Meng</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Qi</surname>, <given-names>L.</given-names></string-name> (<year>2019</year>). <article-title>A computation offloading method over big data for IoT-enabled cloud-edge computing</article-title>. <source>Future Generation Computer Systems</source>, <volume>95</volume>, <fpage>522</fpage>–<lpage>533</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.future.2018.12.055" xlink:type="simple">https://doi.org/10.1016/j.future.2018.12.055</ext-link>. <comment><uri>https://linkinghub.elsevier.com/retrieve/pii/S0167739X18319770</uri></comment>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>
