<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">INFORMATICA</journal-id>
<journal-title-group><journal-title>Informatica</journal-title></journal-title-group>
<issn pub-type="epub">1822-8844</issn><issn pub-type="ppub">0868-4952</issn><issn-l>0868-4952</issn-l>
<publisher>
<publisher-name>Vilnius University</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">INFOR522</article-id>
<article-id pub-id-type="doi">10.15388/23-INFOR522</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Research Article</subject></subj-group></article-categories>
<title-group>
<article-title>Benchmark for Hyperspectral Unmixing Algorithm Evaluation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4315-729X</contrib-id>
<name><surname>Paura</surname><given-names>Vytautas</given-names></name><email xlink:href="vytautas.paura@mif.stud.vu.lt">vytautas.paura@mif.stud.vu.lt</email><xref ref-type="aff" rid="j_infor522_aff_001"/><xref ref-type="corresp" rid="cor1">∗</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Marcinkevičius</surname><given-names>Virginijus</given-names></name><email xlink:href="virginijus.marcinkevicius@mif.vu.lt">virginijus.marcinkevicius@mif.vu.lt</email><xref ref-type="aff" rid="j_infor522_aff_001"/>
</contrib>
<aff id="j_infor522_aff_001">Institute of Data Science and Digital Technologies, <institution>Vilnius University</institution>, <country>Lithuania</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2023</year></pub-date><pub-date pub-type="epub"><day>15</day><month>6</month><year>2023</year></pub-date><volume>34</volume><issue>2</issue><fpage>285</fpage><lpage>315</lpage><history><date date-type="received"><month>12</month><year>2022</year></date><date date-type="accepted"><month>6</month><year>2023</year></date></history>
<permissions><copyright-statement>© 2023 Vilnius University</copyright-statement><copyright-year>2023</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Over the past decades, many methods have been proposed to solve the linear or nonlinear unmixing of spectra in hyperspectral data. Due to the relatively low spatial resolution of hyperspectral imaging, each image pixel may contain spectra from multiple materials. Hyperspectral unmixing is the process of finding these materials and their abundances. A few main approaches to hyperspectral unmixing have emerged, such as nonnegative matrix factorization (NMF), linear mixture modelling (LMM), and, most recently, autoencoder networks. These methods take different approaches to extracting endmember and abundance information from hyperspectral images. However, because of the huge variation in the hyperspectral data used, it is difficult to determine which methods perform well on which datasets and whether they can generalize to arbitrary input data. To mitigate this problem, we propose a testing methodology for hyperspectral unmixing algorithms and create a standard benchmark for evaluating existing and newly created algorithms. Several experiments on a variety of hyperspectral datasets were carried out in this benchmark to compare openly available algorithms and to determine the best-performing ones.</p>
</abstract>
<kwd-group>
<label>Key words</label>
<kwd>hyperspectral unmixing</kwd>
<kwd>benchmark</kwd>
<kwd>matrix factorization</kwd>
<kwd>autoencoders</kwd>
<kwd>linear mixture models</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="j_infor522_s_001">
<label>1</label>
<title>Introduction</title>
<p>Hyperspectral imagery is used in many different areas due to the information it can capture. It is widely used in agriculture, mineralogy, the food industry, and other fields because it enables fast and accurate analysis with a non-destructive data-gathering method. Hyperspectral cameras usually gather many light bands simultaneously but, in turn, have a low spatial resolution. Because of this, a pixel of a hyperspectral image may contain a mixture of light reflected or emitted by different substances or materials, for example, different minerals captured by the hyperspectral camera while filming a quarry. The gathered light data can be mixed in a linear or non-linear way. Such mixed data may be less useful for analysis; therefore, hyperspectral image unmixing is an important problem that requires solutions. Additionally, hyperspectral cameras may gather a substantial amount of noise, especially when used in open fields, which introduces additional errors into the analysis: reflections from mixed or contaminated objects, atmospheric influences, weather-induced noise (from clouds or rain), and electrical noise from hardware.</p>
<p>To solve the problem of mixed data in hyperspectral pixels, hyperspectral unmixing (HU) methods are used. HU is the process of separating hyperspectral image pixel spectra into a set of spectral signatures, called <italic>endmembers</italic>, and their <italic>abundances</italic> for each pixel. The linear mixing of endmember spectra is represented in equation (<xref rid="j_infor522_eq_001">1</xref>), in which <inline-formula id="j_infor522_ineq_001"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${R_{k}}$]]></tex-math></alternatives></inline-formula> is the spectral value at wavelength <italic>k</italic>, <inline-formula id="j_infor522_ineq_002"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${a_{i}}$]]></tex-math></alternatives></inline-formula> is the abundance of endmember <italic>i</italic>, <inline-formula id="j_infor522_ineq_003"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${E_{i,k}}$]]></tex-math></alternatives></inline-formula> is the spectral value of endmember <italic>i</italic> at wavelength <italic>k</italic>, <italic>ε</italic> is a residual error at wavelength <italic>k</italic>, and <italic>n</italic> is the total number of endmembers. 
<disp-formula id="j_infor522_eq_001">
<label>(1)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>·</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">ε</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {R_{k}}={\sum \limits_{i=1}^{n}}{a_{i}}\cdot {E_{i,k}}+{\varepsilon _{k}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
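The linear mixing model in equation (1) can be sketched in a few lines of NumPy. This is an illustrative toy example with made-up spectra, not any of the benchmarked implementations:

```python
import numpy as np

def linear_mix(E, a, noise_std=0.0, rng=None):
    """Mix endmember spectra E (n x bands) with abundances a (n,)
    following the linear model R_k = sum_i a_i * E_{i,k} + eps_k."""
    rng = rng or np.random.default_rng(0)
    R = a @ E                                   # weighted sum over endmembers
    if noise_std > 0:
        R = R + rng.normal(0.0, noise_std, size=R.shape)  # residual eps_k
    return R

# two toy endmember spectra over 4 bands, mixed with abundances 0.7 / 0.3
E = np.array([[0.2, 0.4, 0.6, 0.8],
              [0.9, 0.7, 0.5, 0.3]])
a = np.array([0.7, 0.3])
R = linear_mix(E, a)
```

Inverting this model, i.e. recovering `E` and `a` from observed `R`, is the unmixing problem the reviewed algorithms address.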
<p>The paper by Bioucas-Dias <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor522_ref_003">2012</xref>) provides a broad review of hyperspectral analysis algorithms, upon which we expand in this paper, because a standardized methodology for testing the performance of HU algorithms is not available. In this paper, we created a benchmarking methodology that allows hyperspectral unmixing methods to be tested in a standardised way. The proposed benchmark tests algorithm robustness to noise, varying numbers of endmembers, and varying image sizes, and evaluates unmixing accuracy.</p>
</sec>
<sec id="j_infor522_s_002">
<label>2</label>
<title>Hyperspectral Unmixing Algorithms</title>
<p>This section reviews available algorithms used for hyperspectral unmixing. It is split into three main parts: supervised algorithms, semi-supervised algorithms, and unsupervised algorithms. For each category, the section describes the algorithms and reports the hyperspectral unmixing results that their authors obtained experimentally. The reviewed algorithms were also checked for whether the authors publicly shared the implementation code. From these openly available algorithms, a few were selected and tested using the created hyperspectral unmixing benchmark. The code created for this paper’s benchmark and algorithm testing implementation is published as open source. The implementation details and code are provided in Section <xref rid="j_infor522_s_020">4</xref>.</p>
<sec id="j_infor522_s_003">
<label>2.1</label>
<title>Supervised Algorithms</title>
<p>Supervised algorithms are machine learning methods, similar to function approximation algorithms, that try to find the connection between input and output data and, in turn, require a collection of input and output (or ground truth) data. Some examples of supervised algorithms are nearest neighbour (hyperspectral image classification, Guo <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_009">2018</xref>), decision trees (for example, hyperspectral classification, Goel <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_008">2003</xref>), linear regression, some types of neural networks, and many others.</p>
<p>Koirala <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor522_ref_019">2019</xref>) propose a supervised hyperspectral unmixing method based on learning a mapping between true hyperspectral image spectra and the corresponding linear spectra composed of the same endmember abundances. The suggested algorithm works in a few steps: 
<list>
<list-item id="j_infor522_li_001">
<label>•</label>
<p>A real hyperspectral dataset is gathered.</p>
</list-item>
<list-item id="j_infor522_li_002">
<label>•</label>
<p>Ground-truth abundances and endmembers are used to linearly mix spectra into an artificial hyperspectral image corresponding to the real dataset.</p>
</list-item>
<list-item id="j_infor522_li_003">
<label>•</label>
<p>Both data sources are input to a machine learning algorithm to learn the mapping between them.</p>
</list-item>
<list-item id="j_infor522_li_004">
<label>•</label>
<p>After training, the model is created and saved.</p>
</list-item>
<list-item id="j_infor522_li_005">
<label>•</label>
<p>The trained model is then tested on a part of the real hyperspectral dataset.</p>
</list-item>
<list-item id="j_infor522_li_006">
<label>•</label>
<p>Linearly mixed spectra are generated from the unmixing output, resulting in the abundance map of the hyperspectral testing dataset.</p>
</list-item>
</list> 
The authors used a neural network and two regression algorithms to learn the mapping between the generated linear and nonlinear training spectra. The algorithms were tested on 10,000 mixed spectra with <inline-formula id="j_infor522_ineq_004"><alternatives><mml:math>
<mml:mn>50</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$50\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> signal-to-noise ratio (SNR) Gaussian noise and spectral signatures from USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>). The spectral mixes were generated using the Hapke model (Hapke, <xref ref-type="bibr" rid="j_infor522_ref_011">1981</xref>).</p>
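The mapping step described above can be sketched as follows. This is a hedged toy illustration: ordinary least squares stands in for the authors' regression models, and a simple bilinear term stands in for the Hapke mixing model; all spectra are randomly generated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_end, n_bands, n_samples = 3, 20, 500

E = rng.uniform(0.1, 0.9, size=(n_end, n_bands))   # toy endmember library
A = rng.dirichlet(np.ones(n_end), size=n_samples)  # abundances (sum to 1)

X_lin = A @ E                                      # linear mixtures
# a toy bilinear term stands in for a true nonlinear (e.g. Hapke) model
X_nonlin = X_lin + 0.2 * (A[:, :1] * A[:, 1:2]) * (E[0] * E[1])

# learn a linear mapping from nonlinear spectra back to linear spectra
W, *_ = np.linalg.lstsq(X_nonlin, X_lin, rcond=None)
X_mapped = X_nonlin @ W
err = np.mean((X_mapped - X_lin) ** 2)
```

Once such a mapping is learned, standard linear unmixing can be applied to the mapped spectra to obtain abundance maps.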
</sec>
<sec id="j_infor522_s_004">
<label>2.2</label>
<title>Semi-Supervised Algorithms</title>
<p>Semi-supervised algorithms combine supervised and unsupervised learning. Because creating a high-quality labelled dataset is a time-consuming and difficult task, semi-supervised machine learning models may be used to speed up this process. These methods take as input a dataset of labelled data. After training on this dataset, the created model can extrapolate data labels to a new collection of unlabelled data. Reviewing automatically labelled data may already be faster than labelling the data by hand, and as the dataset expands, these models become more accurate at labelling new data.</p>
<sec id="j_infor522_s_005">
<label>2.2.1</label>
<title>Sparse Regression</title>
<table-wrap id="j_infor522_tab_001">
<label>Table 1</label>
<caption>
<p>Comparison of the results of sparse regression algorithms.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dataset</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Result</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left">SUnSAL-TV (Iordache <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_016">2012</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples ASTER</td>
<td rowspan="6" style="vertical-align: top; text-align: left">SRE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 12.6753 dB</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">(SNR = 40 dB; 5 signatures)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">ASTER:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 14.6485 dB</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">(SNR = 40 dB; 5 signatures)</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">SUnSAL (Bioucas-Dias and Figueiredo, <xref ref-type="bibr" rid="j_infor522_ref_002">2010</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left">Gaussian Synthetic data based on USGS library samples</td>
<td rowspan="4" style="vertical-align: top; text-align: left">RSNR</td>
<td style="vertical-align: top; text-align: left">Gaussian:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RSNR = 48 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RSNR = 23 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">C-SUnSAL (Bioucas-Dias and Figueiredo, <xref ref-type="bibr" rid="j_infor522_ref_002">2010</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left">Gaussian and Synthetic data based on USGS library samples</td>
<td rowspan="4" style="vertical-align: top; text-align: left">RSNR</td>
<td style="vertical-align: top; text-align: left">Gaussian:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RSNR = 47 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RSNR = 14.5 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">CLSUnSAL (Iordache <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_017">2014</xref>)</td>
<td rowspan="3" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="3" style="vertical-align: top; text-align: left">SRE</td>
<td style="vertical-align: top; text-align: left">SRE = 21.47 dB (SNR = 40 dB; 2 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 13.96 dB (SNR = 40 dB; 4 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 8.79 dB (SNR = 40 dB; 6 endmembers)</td>
</tr>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left">S<sup>2</sup>WSU (Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_042">2018</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="6" style="vertical-align: top; text-align: left">SRE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data 1:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 20.5709 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 41.4053 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data 2:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 19.5999 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 36.5364 dB (SNR = 50 dB)</td>
</tr>
<tr>
<td rowspan="9" style="vertical-align: top; text-align: left">SUSRLR-TV (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_021">2021</xref>)</td>
<td rowspan="9" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="9" style="vertical-align: top; text-align: left">SRE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data 1:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 7.59 dB (SNR = 10 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 24.98 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data 2:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 10.81 dB (SNR = 10 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 35.68 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data 3:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 5.01 dB (SNR = 10 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 22.27 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">MCSU (Qi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_031">2020</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Cuprite dataset Jasper Ridge</td>
<td rowspan="4" style="vertical-align: top; text-align: left">SRE RMSE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 33.0992 dB (SNR = 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Cuprite: RMSE = 0.0575</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge: SRE = 13.5567 dB</td>
</tr>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left">SVASU (Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_041">2022</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Jasper Ridge dataset</td>
<td rowspan="6" style="vertical-align: top; text-align: left">SRE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 34.33 dB (SRE reconstruction)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 1.78 dB (SRE abundance)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge dataset:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 19.56 dB (SRE reconstruction)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 8.14 dB (SRE abundance)</td>
</tr>
<tr>
<td rowspan="8" style="vertical-align: top; text-align: left; border-bottom: solid thin">SBWCRLRU (Su <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_034">2022</xref>)</td>
<td rowspan="8" style="vertical-align: top; text-align: left; border-bottom: solid thin">Synthetic data based on USGS library samples Samson dataset Jasper Ridge dataset</td>
<td rowspan="8" style="vertical-align: top; text-align: left; border-bottom: solid thin">SRE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 20.24 dB (SNR = 20 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 34.66 dB (SNR = 30 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 44.59 dB (SNR = 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Samson dataset:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SRE = 17.03 dB</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge dataset:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">SRE = 17.37 dB</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>A regression problem is learning a function or model capable of estimating the dependent variables from given observations or features. Sparsity refers to data that is not fully populated: in machine learning, sparse data contains many zeros or other non-significant values. In turn, sparse regression is a subcategory of regression algorithms designed to handle such non-densely populated data. The same regression algorithms can be used for sparse regression (linear, lasso, ridge, and others), but an additional step is often required to determine the subset of predictors. Table <xref rid="j_infor522_tab_001">1</xref> shows an overview of algorithm results. More detailed explanations of each algorithm featured in Table <xref rid="j_infor522_tab_001">1</xref> are provided in the paragraphs of this subsection.</p>
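As a minimal illustration of sparse regression, the lasso objective can be minimized with iterative soft-thresholding (ISTA). This sketch uses a random toy library and is not any of the benchmarked algorithms:

```python
import numpy as np

def ista_lasso(D, y, lam=0.05, n_iter=500):
    """Sparse regression (lasso) via iterative soft-thresholding:
    minimize 0.5 * ||D x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the data-fit term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 20))              # toy library: 20 candidate signatures
x_true = np.zeros(20); x_true[[3, 7]] = [0.6, 0.4]   # only 2 active endmembers
y = D @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista_lasso(D, y)
```

The soft-thresholding step is what produces the zeros in the solution, selecting the small subset of library signatures actually present in the pixel.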
<p><italic>SUnSAL and Total Variation</italic> (SUnSAL-TV) (Iordache <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_016">2012</xref>) is a variation of the SUnSAL algorithm with an added total variation regularization that exploits spatial information for better spectral unmixing results. The total variation regularizer accounts for spatial homogeneity because neighbouring pixels are very likely to have similar abundance fractions of the same endmembers. The total variation regularizer is most commonly used as a denoising step in a larger processing pipeline. A similar algorithm using total variation minimization to unmix and increase the hyperspectral image’s spectral resolution was suggested in Guo <italic>et al.</italic> (<xref ref-type="bibr" rid="j_infor522_ref_010">2009</xref>); however, it uses the N-FINDR algorithm (Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_044">2009</xref>) to infer endmembers. The authors provided signal reconstruction error (SRE) results for performance evaluation: <inline-formula id="j_infor522_ineq_005"><alternatives><mml:math>
<mml:mn>12.67</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$12.67\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for USGS (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) dataset and <inline-formula id="j_infor522_ineq_006"><alternatives><mml:math>
<mml:mn>14.64</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$14.64\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for the ASTER dataset (NASA, <xref ref-type="bibr" rid="j_infor522_ref_026">2004</xref>). The given results are lower than those of some other reviewed algorithms, but direct comparison is difficult because different authors create their synthetic data differently.</p>
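The SRE metric reported throughout this subsection is commonly defined as the ratio, in dB, between the signal energy and the reconstruction-error energy. A small sketch, assuming this common definition (the exact convention may vary between papers):

```python
import numpy as np

def sre_db(x_true, x_est):
    """Signal-to-reconstruction error in dB, assuming the common definition
    SRE = 10 * log10(||x||^2 / ||x - x_hat||^2)."""
    x_true = np.asarray(x_true, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    num = np.sum(x_true ** 2)                  # signal energy
    den = np.sum((x_true - x_est) ** 2)        # reconstruction-error energy
    return 10.0 * np.log10(num / den)

x = np.array([1.0, 2.0, 3.0])
score = sre_db(x, 0.9 * x)   # a uniform 10% error gives 20 dB
```

Higher SRE means a more accurate reconstruction, which is why the tabulated values rise with the input SNR.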
<p>The <italic>Sparse Unmixing by variable Splitting and Augmented Lagrangian</italic> (SUnSAL) and <italic>Constrained SUnSAL</italic> (C-SUnSAL) algorithms (Bioucas-Dias and Figueiredo, <xref ref-type="bibr" rid="j_infor522_ref_002">2010</xref>) are based on the alternating direction method of multipliers (ADMM) (Gabay and Mercier, <xref ref-type="bibr" rid="j_infor522_ref_007">1976</xref>). The ADMM algorithm splits a difficult problem into an array of simpler problems. The results provided by the authors are in dB values of the reconstruction signal-to-noise ratio (RSNR) metric, and both algorithms were tested using <inline-formula id="j_infor522_ineq_007"><alternatives><mml:math>
<mml:mn>50</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$50\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> of artificial noise. The SUnSAL algorithm obtained RSNR values of <inline-formula id="j_infor522_ineq_008"><alternatives><mml:math>
<mml:mn>48</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$48\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor522_ineq_009"><alternatives><mml:math>
<mml:mn>23</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$23\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for Gaussian and USGS (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) datasets, while C-SUnSAL obtained <inline-formula id="j_infor522_ineq_010"><alternatives><mml:math>
<mml:mn>47</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$47\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor522_ineq_011"><alternatives><mml:math>
<mml:mn>14.5</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$14.5\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>, respectively.</p>
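The ADMM splitting idea behind SUnSAL can be illustrated on the ℓ1-regularized unmixing objective. This is a simplified sketch (without the nonnegativity and sum-to-one constraints of the full algorithm) in which ADMM alternates between an easy quadratic update and an easy soft-thresholding update:

```python
import numpy as np

def admm_sparse_unmix(D, y, lam=0.05, rho=1.0, n_iter=300):
    """ADMM split of a SUnSAL-style objective (simplified sketch):
    minimize 0.5 * ||D x - y||^2 + lam * ||z||_1  subject to  x = z."""
    n = D.shape[1]
    Q = np.linalg.inv(D.T @ D + rho * np.eye(n))   # prefactored x-update system
    z = np.zeros(n)
    u = np.zeros(n)                                # scaled dual variable
    for _ in range(n_iter):
        x = Q @ (D.T @ y + rho * (z - u))          # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                              # dual ascent step
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(40, 15))                      # toy spectral library
x_true = np.zeros(15); x_true[[2, 9]] = [0.8, 0.5]
y = D @ x_true                                     # noiseless toy observation
x_hat = admm_sparse_unmix(D, y)
```

Each subproblem is simple on its own, which is the point of the ADMM decomposition the paragraph describes.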
<p><italic>Collaborative Sparse Unmixing by variable Splitting and Augmented Lagrangian</italic> (CLSUnSAL) (Iordache <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_017">2014</xref>) is an extension of the SUnSAL algorithm introduced in Bioucas-Dias and Figueiredo (<xref ref-type="bibr" rid="j_infor522_ref_002">2010</xref>). The difference between the two is that SUnSAL performs regression on each pixel independently, while the collaborative algorithm enforces sparsity jointly across all pixels. Algorithm performance results provided by the authors were in the SRE metric with dB as the unit and an artificial noise level of <inline-formula id="j_infor522_ineq_012"><alternatives><mml:math>
<mml:mn>40</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$40\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR. The results were <inline-formula id="j_infor522_ineq_013"><alternatives><mml:math>
<mml:mn>21</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$21\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for 2 endmembers, <inline-formula id="j_infor522_ineq_014"><alternatives><mml:math>
<mml:mn>14</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$14\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for 4 endmembers, and <inline-formula id="j_infor522_ineq_015"><alternatives><mml:math>
<mml:mn>8.7</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$8.7\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for 6 endmembers.</p>
<p><italic>Spectral–Spatial Weighted Sparse Unmixing</italic> (S<sup>2</sup>WSU) (Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_042">2018</xref>) is a hyperspectral unmixing framework that seeks a sparse solution constrained simultaneously in the spectral and spatial domains. It implements ADMM for parameter and coefficient optimization. A dataset generated from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) is used to determine the algorithm’s performance and compare it to other popular solutions. Synthetic cubes were used to test the algorithm with SRE as the reported metric: <inline-formula id="j_infor522_ineq_016"><alternatives><mml:math>
<mml:mn>20.5</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$20.5\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for the first cube and <inline-formula id="j_infor522_ineq_017"><alternatives><mml:math>
<mml:mn>19.6</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$19.6\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for the second cube with given SNR of <inline-formula id="j_infor522_ineq_018"><alternatives><mml:math>
<mml:mn>30</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$30\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>.</p>
<p><italic>Superpixel-based Reweighted Low-Rank and Total Variation</italic> (SUSRLR-TV) (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_021">2021</xref>) is a sparse unmixing algorithm that uses the simple linear iterative clustering (SLIC) algorithm to segment hyperspectral images into homogeneous regions and combines total variation with the ADMM algorithm to calculate abundance maps. The algorithm was tested using three synthetic data cubes created with different abundances and endmembers gathered from the USGS spectral library, and using the Cuprite dataset (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>). The synthetic datasets were used to obtain accurate metrics, while the Cuprite dataset was used to inspect the algorithm’s performance visually.</p>
<p><italic>Spectral-Spatial-Weighted Multiview Collaborative Sparse Unmixing</italic> (MCSU) (Qi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_031">2020</xref>) is a sparse regression hyperspectral unmixing algorithm based on spatial and spectral correlation. Its main idea is to exploit the correlations between adjacent spectral bands and between neighbouring pixels, which are assumed to carry highly correlated information: a hyperspectral camera may capture the same mixture of materials in multiple groups of pixels, or a transitioning mixture of materials that correlates strongly with its neighbouring pixels. The authors provide SRE and root mean squared error (RMSE) metrics for the algorithm’s performance evaluation. Three different datasets were used, with the following results: <inline-formula id="j_infor522_ineq_019"><alternatives><mml:math>
<mml:mtext>SRE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mn>33</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$\text{SRE}=33\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for the simulated dataset, <inline-formula id="j_infor522_ineq_020"><alternatives><mml:math>
<mml:mtext>RMSE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mn>0.057</mml:mn></mml:math><tex-math><![CDATA[$\text{RMSE}=0.057$]]></tex-math></alternatives></inline-formula> for Cuprite dataset (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>), and <inline-formula id="j_infor522_ineq_021"><alternatives><mml:math>
<mml:mtext>SRE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mn>13.55</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$\text{SRE}=13.55\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> for Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) dataset.</p>
<p><italic>Spectral Variability Augmented Sparse Unmixing model</italic> (SVASU) (Zhang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_041">2022</xref>) is a model that takes into account the spectral variability of the same endmember spectra. A spectral variability library is computed via principal component analysis (PCA) decomposition. The algorithm adopts a two-stage decomposition of hyperspectral images: the first stage decomposes the image into endmember and abundance matrices, and the second stage models the reconstruction error of the pixels from the first stage using the spectral variability library. The authors tested the algorithm on a synthetic dataset generated from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>), and on the Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) and Samson (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>) datasets.</p>
<p><italic>Superpixel-Based Weighted Collaborative sparse regression and Reweighted Low-Rank Representation Unmixing</italic> (SBWCRLRU) (Su <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_034">2022</xref>) is a hyperspectral unmixing algorithm that utilizes spatial-spectral data and incorporates superpixel segmentation methods. The segmentation algorithm divides the hyperspectral image into homogeneous regions with similar properties. For superpixel segmentation, the authors adapt the SLIC algorithm, enabling its use on hyperspectral data rather than only on RGB images. To test the algorithm, the authors created a synthetic dataset using USGS library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) data and used three different real-world hyperspectral data cubes: the Samson (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>), Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>), and Cuprite (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>) datasets.</p>
</sec>
<sec id="j_infor522_s_006">
<label>2.2.2</label>
<title>Conclusions from Sparse Regression Related Works Review</title>
<p>Several key conclusions were drawn from the review of semi-supervised hyperspectral unmixing algorithms and the results reported by the authors of these papers: 
<list>
<list-item id="j_infor522_li_007">
<label>•</label>
<p>The most commonly used metric was SRE, with some papers using SNR for synthetically generated datasets.</p>
</list-item>
<list-item id="j_infor522_li_008">
<label>•</label>
<p>The Jasper Ridge dataset was the most commonly used real-world dataset in these papers, with the MCSU algorithm having the lowest SRE for this dataset.</p>
</list-item>
<list-item id="j_infor522_li_009">
<label>•</label>
<p>Most of the synthetic datasets used to test these algorithms had additional artificial noise added; most commonly, noise corresponding to an SNR of <inline-formula id="j_infor522_ineq_022"><alternatives><mml:math>
<mml:mn>30</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$30\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> was used.</p>
</list-item>
<list-item id="j_infor522_li_010">
<label>•</label>
<p>The SUnSAL algorithm is the most influential of the algorithms reviewed, due to its citation count and the number of algorithms derived from it.</p>
</list-item>
<list-item id="j_infor522_li_011">
<label>•</label>
<p>Comparing results on the Jasper Ridge dataset, which should be identical across papers, using the SRE metric, the highest value of <inline-formula id="j_infor522_ineq_023"><alternatives><mml:math>
<mml:mn>19.56</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$19.56\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> was achieved by the SVASU algorithm.</p>
</list-item>
</list>
</p>
</sec>
</sec>
<sec id="j_infor522_s_007">
<label>2.3</label>
<title>Unsupervised Algorithms</title>
<p>These algorithms do not require any labelled data of previously known ground truths to train the models. The main subcategories of unsupervised algorithms reviewed in this paper are linear mixture models (LMM) and non-negative matrix factorization (NMF) methods.</p>
<table-wrap id="j_infor522_tab_002">
<label>Table 2</label>
<caption>
<p>Comparison of the results of nonnegative matrix factorization algorithms.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dataset</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Result</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left">CNMF (Yokoya <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_039">2012</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="2" style="vertical-align: top; text-align: left">PSNR SAE</td>
<td style="vertical-align: top; text-align: left">PSNR = 40.04 dB (inner iter. = 300)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAE = 0.5917 deg (inner iter. = 300)</td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top; text-align: left">GLNMF (Lu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_024">2013</xref>)</td>
<td rowspan="5" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Jasper Ridge dataset</td>
<td rowspan="5" style="vertical-align: top; text-align: left">SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0192 (Gaussian SNR = 20 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.1551 (with noisy bands)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.1359 (without noisy bands)</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">LIDAR-NTF (Kaya <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_018">2021</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="4" style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">RMSE (64 × 64 image, 20 dB) = 0.1214</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE (81 × 81 image, 20 dB) = 0.1197</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE (64 × 64 image, 50 dB) = 0.1216</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE (81 × 81 image, 50 dB) = 0.1185</td>
</tr>
<tr>
<td rowspan="7" style="vertical-align: top; text-align: left">TV-RSNMF (He <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_012">2017</xref>)</td>
<td rowspan="7" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Urban dataset</td>
<td rowspan="7" style="vertical-align: top; text-align: left">SAD RMSE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0452 (SNR = 10 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0060 (SNR = 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.0496 (SNR = 10 dB);</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.0051 (SNR = 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD (mean) = 0.1022</td>
</tr>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left">R-CoNMF (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_022">2016</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Cuprite dataset</td>
<td rowspan="6" style="vertical-align: top; text-align: left">SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 3.68 (SNR = 20 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.66 (SNR = 80 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Cuprite:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 4.6978 (Alunite)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 4.4922 (Muscovite)</td>
</tr>
<tr>
<td rowspan="9" style="vertical-align: top; text-align: left">SGSNMF (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_037">2017</xref>)</td>
<td rowspan="9" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Cuprite dataset UAV-Borne dataset</td>
<td rowspan="9" style="vertical-align: top; text-align: left">SAD RMSE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.007 (3 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.04 (15 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.02 (3 endmembers);</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.06 (15 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Cuprite:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD (mean) = 0.0913</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">UAV:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD (mean) = 0.1185</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">EC-NTF-TV (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_036">2021</xref>)</td>
<td rowspan="3" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Jasper Ridge</td>
<td rowspan="3" style="vertical-align: top; text-align: left">RMSE Mean SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data: RMSE = 0.1287</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data: SAD = 0.0899</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge: SAD = 0.1248</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top; text-align: left">SC-NMF (Lu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_025">2020</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">Cuprite dataset Indiana dataset</td>
<td rowspan="2" style="vertical-align: top; text-align: left">SAD</td>
<td style="vertical-align: top; text-align: left">Mean SAD = 0.0902 (Indiana dataset)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Mean SAD = 0.0887 (Cuprite dataset)</td>
</tr>
<tr>
<td rowspan="8" style="vertical-align: top; text-align: left">CSsRS-NMF (Li X. <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_023">2021</xref>)</td>
<td rowspan="8" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Jasper Ridge dataset Urban dataset</td>
<td rowspan="8" style="vertical-align: top; text-align: left">SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.05 (3 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 1.4 (8 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0841</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.1753 (with noisy bands)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.1711 (without noisy bands)</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">GLNMF (Peng <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_029">2022</xref>)</td>
<td rowspan="3" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="3" style="vertical-align: top; text-align: left">SAD RMSE</td>
<td style="vertical-align: top; text-align: left">Mean SAD: 0.0951</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE: 0.06 (5 endmembers)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE: 0.07 (10 endmembers)</td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top; text-align: left">CANMF-TV (Feng <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_006">2022</xref>)</td>
<td rowspan="5" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples Cuprite dataset</td>
<td rowspan="5" style="vertical-align: top; text-align: left">SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.13 (SNR = 10 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.05 (SNR = 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Cuprite:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0951</td>
</tr>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left; border-bottom: solid thin">SSWNMF (Zhang S. <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_043">2022</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left; border-bottom: solid thin">Synthetic data based on USGS library samples Cuprite dataset Urban dataset</td>
<td rowspan="6" style="vertical-align: top; text-align: left; border-bottom: solid thin">SAD</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data (SNR 20 dB):</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0636</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">USGS synthetic data (SNR 40 dB):</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">SAD = 0.0029</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban: SAD = 0.1034</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Cuprite: SAD = 0.1128</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="j_infor522_s_008">
<label>2.3.1</label>
<title>Nonnegative Matrix Factorization</title>
<p>Nonnegative matrix factorization (NMF) is an algorithm group that, as the name states, factorizes a matrix into two separate matrices with an additional assumption that all matrices have no negative elements. Because hyperspectral data cannot have negative values and, in turn, the endmember and abundance matrices are also not negative, these algorithms are widely used in hyperspectral unmixing. Table <xref rid="j_infor522_tab_002">2</xref> summarizes the datasets and metrics used in testing the algorithms reviewed below and the results achieved by the authors of corresponding papers.</p>
<p>The spectra at each hyperspectral image pixel are assumed to be a linear mixture of several endmembers. Therefore, the image <italic>Z</italic>, which represents the whole hyperspectral image cube consisting of three dimensions, two spatial and one spectral, can be formulated as: 
<disp-formula id="j_infor522_eq_002">
<label>(2)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">W</mml:mi>
<mml:mi mathvariant="italic">H</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">N</mml:mi>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ Z=WH+N,\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>W</italic> is the spectral signature matrix, whose size equals the number of spectral bands (frequently denoted <italic>λ</italic>) times the number of endmembers, <italic>H</italic> is the abundance matrix of size equal to the number of endmembers times the number of pixels, and <italic>N</italic> is the residual data of size equal to the number of spectral bands times the number of pixels. Hyperspectral unmixing is then performed by inverting this formulation, estimating the <italic>W</italic> and <italic>H</italic> matrices from the original hyperspectral image <italic>Z</italic>.</p>
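<p>The factorization in Eq. (2) can be illustrated with a minimal NumPy sketch. It uses the classical Lee-Seung multiplicative updates for nonnegative matrix factorization rather than any of the specific algorithms reviewed in this section; the sizes and iteration count are arbitrary toy values.</p>

```python
import numpy as np

rng = np.random.default_rng(42)
bands, endmembers, pixels = 50, 3, 200

# Ground truth: nonnegative spectral signatures W and abundances H.
W_true = rng.random((bands, endmembers))
H_true = rng.random((endmembers, pixels))
Z = W_true @ H_true + 0.001 * rng.random((bands, pixels))  # Z = WH + N

# Lee-Seung multiplicative updates minimizing ||Z - WH||_F^2; the
# multiplicative form keeps W and H nonnegative at every step.
eps = 1e-9
W = rng.random((bands, endmembers))
H = rng.random((endmembers, pixels))
for _ in range(500):
    H *= (W.T @ Z) / (W.T @ W @ H + eps)
    W *= (Z @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(Z - W @ H) / np.linalg.norm(Z)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Note that NMF is only identifiable up to permutation and scaling of the factors, which is why unmixing papers evaluate recovered endmembers with angle-based metrics such as SAD rather than by direct matrix comparison.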
<p><italic>Coupled Nonnegative Matrix Factorization</italic> (CNMF) (Yokoya <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_039">2012</xref>) is an algorithm that jointly unmixes high-spatial-resolution multispectral data and high-spectral-resolution hyperspectral data to achieve hyperspectral and multispectral data fusion. Multispectral sensors usually gather far fewer separate spectral bands while providing a higher spatial resolution than hyperspectral sensors; the fusion of the two data types is therefore used to increase the spatial resolution of hyperspectral data. The algorithm uses vertex component analysis (VCA) to calculate the initial endmember matrix from the spectral data, with a user-set number of endmembers to find. Peak SNR (PSNR) and spectral angle error (SAE) metrics were used by the authors to determine the performance of the unmixing algorithm. The spectral angle error measures the accuracy of the reconstructed spectra by calculating the angle between the estimated and actual spectra in <italic>λ</italic>-dimensional space; a smaller angle indicates a more accurate spectral reconstruction. A value of <inline-formula id="j_infor522_ineq_024"><alternatives><mml:math>
<mml:mn>40</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$40\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> is reported for the PSNR metric.</p>
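<p>The spectral angle underlying both the SAE metric above and the SAD metric used throughout Table 2 treats each spectrum as a vector in band-space and measures the angle between the estimated and reference vectors. A minimal sketch, with illustrative names, could look like this:</p>

```python
import numpy as np

def spectral_angle(s1, s2, degrees=True):
    """Angle between two spectra viewed as vectors in band-space.
    Smaller angles mean more similar spectral shapes; the measure is
    invariant to scaling (e.g. overall illumination differences)."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    return np.degrees(angle) if degrees else angle

spectrum = np.array([0.2, 0.5, 0.9, 0.4])
print(spectral_angle(spectrum, 2.0 * spectrum))  # ~0 for a scaled copy
```

Scale invariance is the reason angle-based metrics are preferred over Euclidean distance for comparing spectral shapes.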
<p><italic>Graph regularized L</italic><sub>1/2</sub><italic>-NMF</italic> (GLNMF) (Lu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_024">2013</xref>) is a hyperspectral unmixing algorithm that takes into consideration the local geometrical structure of hyperspectral image data, detected using graph regularization and sparsity constraints. A synthetic dataset was created using endmembers from the USGS Spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>), with the number of endmembers varying from 5 to 10. The AVIRIS Cuprite (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>) dataset was used to test the accuracy of abundance estimation for different minerals. In total, six experiments were conducted by the authors to test different performance metrics. Spectral angle distance (SAD) metric (Yuhas <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_040">1992</xref>) results given by the authors are: 0.019 for the synthetic dataset, and for the Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) dataset, 0.155 with noisy bands and 0.135 without noisy bands.</p>
<p><italic>LIDAR-aided total variation regularized Non-negative Tensor Factorization for hyperspectral unmixing</italic> (LIDAR-NTF) (Kaya <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_018">2021</xref>) proposes using a Digital Surface Model (DSM), created from LIDAR data, to provide accurate elevation information about the observed scene. The DSM data is used as a spatial constraint in total variation regularization, increasing the tensor decomposition accuracy, especially in areas of the hyperspectral image with significant height differences between neighbouring pixels. A tensor is a multidimensional array; a two-dimensional tensor is equivalent to a matrix, so the same decomposition methods can be used in that case, while higher-dimensional tensors require different algorithms. Five randomly selected materials from the USGS library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) were used to create a synthetic dataset, and additional Gaussian noise was added to corrupt the data. RMSE values calculated for the synthetic images were: 0.1197 with <inline-formula id="j_infor522_ineq_025"><alternatives><mml:math>
<mml:mn>20</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$20\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> noise and 0.1185 with <inline-formula id="j_infor522_ineq_026"><alternatives><mml:math>
<mml:mn>50</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$50\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> noise.</p>
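<p>Several of the reviewed papers corrupt their synthetic cubes with Gaussian noise at a prescribed SNR. The noise variance is derived from the signal power and the target SNR in dB; the following sketch shows one common way to do this (the function name and toy cube are illustrative, not from the reviewed papers):</p>

```python
import numpy as np

def add_noise_snr(signal, snr_db, rng=None):
    """Add white Gaussian noise so that the resulting signal-to-noise
    ratio (signal power / noise power) equals snr_db decibels."""
    rng = np.random.default_rng(rng)
    signal = np.asarray(signal, dtype=float)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.standard_normal(signal.shape) * np.sqrt(noise_power)
    return signal + noise

# Toy 64x64 cube with 50 spectral bands, corrupted at 20 dB SNR.
cube = np.random.default_rng(1).random((64, 64, 50))
noisy = add_noise_snr(cube, snr_db=20.0, rng=2)
```

Note that a higher SNR in dB means weaker noise, which is why the 50 dB results in Table 2 are generally better than the 20 dB ones.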
<p><italic>Total Variation Regularized Reweighted Sparse NMF</italic> (TV-RSNMF) (He <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_012">2017</xref>) is a blind hyperspectral unmixing algorithm based on nonnegative matrix factorization, implemented with a reweighted sparse regularizer to promote abundance sparsity and a TV regularizer to enhance the spatial information, because nearby pixels are likely to be highly correlated due to similar chemical composition. RMSE values of 0.0496 and 0.0051 are given for <inline-formula id="j_infor522_ineq_027"><alternatives><mml:math>
<mml:mn>10</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$10\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR and <inline-formula id="j_infor522_ineq_028"><alternatives><mml:math>
<mml:mn>40</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$40\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR with the synthetic dataset.</p>
<p><italic>Robust Collaborative Nonnegative Matrix Factorization</italic> (R-CoNMF) (Li <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_022">2016</xref>) is an unmixing algorithm that performs three steps of a hyperspectral unmixing chain. The three steps denoted by the authors are as follows: 
<list>
<list-item id="j_infor522_li_012">
<label>•</label>
<p>Estimation of the number of endmembers in the dataset being analysed.</p>
</list-item>
<list-item id="j_infor522_li_013">
<label>•</label>
<p>Identification of the endmember signatures.</p>
</list-item>
<list-item id="j_infor522_li_014">
<label>•</label>
<p>Estimation of the abundances in each pixel.</p>
</list-item>
</list> 
SAD values of 3.68 and 0.66 for a synthetic dataset with 20 and 80 dB SNR, respectively, are given.</p>
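<p>The last step of such an unmixing chain, estimating abundances once the endmember signatures are known, amounts to a non-negative least-squares problem per pixel. The sketch below solves it with plain projected gradient descent as a generic stand-in; it is not the authors' R-CoNMF method, and all names and sizes are illustrative.</p>

```python
import numpy as np

rng = np.random.default_rng(7)
bands, endmembers, pixels = 40, 4, 30

E = rng.random((bands, endmembers))        # known endmember signatures
A_true = rng.random((endmembers, pixels))
A_true /= A_true.sum(axis=0)               # sum-to-one abundances per pixel
Y = E @ A_true                             # noise-free mixed pixels

# Abundance estimation: min ||Y - EA||_F^2 subject to A >= 0,
# solved by gradient steps followed by projection onto A >= 0.
step = 1.0 / np.linalg.norm(E.T @ E, 2)    # step size from Lipschitz constant
A = np.zeros((endmembers, pixels))
for _ in range(2000):
    A = np.maximum(A - step * (E.T @ (E @ A - Y)), 0.0)

print(np.abs(A - A_true).max())
```

In the noise-free case the true abundances are recovered; with noisy pixels the same procedure gives the constrained least-squares estimate.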
<p><italic>Spatial Group Sparsity regularized NMF</italic> (SGSNMF) (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_037">2017</xref>) is a blind unmixing method that incorporates a spatial group sparsity regularizer, taking into account pixel location (spatial data) and the fact that abundance matrices are sparse. A simulated dataset created from the USGS library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and real datasets were used to test the algorithm. RMSE values of 0.02 for 3 endmembers and 0.06 for 15 endmembers are given for the synthetic dataset.</p>
<p><italic>Endmember Constraint Non-negative Tensor Factorization via Total Variation for hyperspectral unmixing</italic> (EC-NTF-TV) (Wang <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_036">2021</xref>) is an algorithm that uses a proposed endmember constraint to mitigate the high correlation between spectral signatures when estimating endmembers, and a total variation regularization to exploit the spatial correlation when calculating the abundance maps. The authors also use an augmented multiplicative algorithm to solve their abundance map objective function. To test the algorithm’s performance, SAD and RMSE metrics were used with synthetically generated data and the Jasper Ridge dataset (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>). For the Jasper Ridge dataset, a mean SAD score was calculated from the SAD values of each data class; the mean SAD for Jasper Ridge was 0.1248.</p>
<p><italic>Subspace Clustering constrained sparse NMF</italic> (SC-NMF) (Lu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_025">2020</xref>) is a spectral unmixing framework that uses subspace clustering with NMF to improve the precision of the unmixing. A similarity graph is created from a coefficient matrix derived from the subspace clustering algorithm rather than from simple Euclidean distances. A synthetic hyperspectral image was created from the USGS Spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) to test the algorithm. SAD values for the algorithm’s performance are 0.09 for the Indiana dataset and 0.089 for the Cuprite dataset (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>).</p>
<p><italic>Correntropy-based Spatial-spectral Robust Unmixing Model</italic> (CSsRS) (Li X. <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_023">2021</xref>) is an unmixing model that uses a correntropy-based loss function within nonnegative matrix factorization together with a sparsity penalty. The algorithm is tested using a synthetic dataset created from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and real datasets: Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>), and Urban (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>). The authors provide SAD values to evaluate the algorithm’s performance: 0.05 for a synthetic dataset with 3 endmembers, 1.4 for a synthetic dataset with 8 endmembers, 0.084 for the Jasper Ridge dataset, and 0.17 for the Urban dataset.</p>
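<p>The correntropy-based losses used by CSsRS (and by CANMF-TV below) typically replace the least-squares term with a Gaussian-kernel similarity, which saturates for large residuals and is therefore robust to outliers and non-Gaussian noise. A minimal sketch of this idea follows; the exact loss form and the kernel width <code>sigma</code> are illustrative, not the papers' formulations.</p>

```python
import numpy as np

def correntropy_loss(residual, sigma=1.0):
    """Correntropy-induced loss: 1 minus the mean Gaussian kernel of
    the residuals. Behaves like least squares for small errors but
    saturates near 1 for outliers, limiting their influence."""
    r = np.asarray(residual, dtype=float)
    return np.mean(1.0 - np.exp(-(r ** 2) / (2.0 * sigma ** 2)))

small = correntropy_loss(np.full(10, 0.1))              # all small residuals
outlier = correntropy_loss(np.array([0.1] * 9 + [100.0]))  # one huge residual
```

A single huge residual raises the loss by at most 1/N, whereas it would dominate a squared-error loss entirely.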
<p><italic>General Loss-based NMF</italic> (GLNMF) (Peng <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_029">2022</xref>) is a hyperspectral unmixing algorithm that uses a general robust loss function in place of the least-squares loss function. The algorithm is tested using a synthetic dataset created from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>), and the Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) dataset. RMSE values are given: 0.06 for 5 endmembers and 0.07 for 10 endmembers.</p>
<p><italic>Correntropy-based Autoencoder-like Nonnegative Matrix Factorization with Total Variation</italic> (CANMF-TV) (Feng <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_006">2022</xref>) is an unmixing algorithm that builds the unmixing model on a correntropy-induced metric and adds a total variation regularizer to preserve spatial information. Synthetic datasets created from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and the Cuprite (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>) dataset were used to test the algorithm. The authors provide SAD values: 0.13 for a synthetic dataset with <inline-formula id="j_infor522_ineq_029"><alternatives><mml:math>
<mml:mn>10</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$10\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR, 0.05 for a synthetic dataset with <inline-formula id="j_infor522_ineq_030"><alternatives><mml:math>
<mml:mn>40</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$40\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR and 0.095 for the Cuprite dataset.</p>
<p><italic>Spectral–Spatial Weighted sparse NMF</italic> (SSWNMF) (Zhang S. <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_043">2022</xref>) is a hyperspectral unmixing algorithm that introduces spectral and spatial weighting factors into the L1-NMF unmixing model to enhance the sparsity of the abundance matrix. To test the algorithm, the authors used a combination of synthetic and real data: the synthetic dataset was created using USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) data with different amounts of added Gaussian noise, and the real data were the Urban (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>) and Cuprite (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>) datasets.</p>
</sec>
<sec id="j_infor522_s_009">
<label>2.3.2</label>
<title>Conclusions from Nonnegative Matrix Factorization Related Works Review</title>
<p>A few conclusions were drawn from the review of algorithms using nonnegative matrix factorization for hyperspectral unmixing: 
<list>
<list-item id="j_infor522_li_015">
<label>•</label>
<p>The most commonly used metric was SAD; unlike in the semi-supervised algorithms, the SRE metric was not used.</p>
</list-item>
<list-item id="j_infor522_li_016">
<label>•</label>
<p>The Cuprite and Jasper Ridge datasets were the most common real-world datasets used in the reviewed papers.</p>
</list-item>
<list-item id="j_infor522_li_017">
<label>•</label>
<p>The most cited algorithm among those reviewed is CNMF, while LIDAR-NTF is currently the most popular in terms of citations per year since its publication in 2021.</p>
</list-item>
<list-item id="j_infor522_li_018">
<label>•</label>
<p>Based on the SAD values reported by the authors, the best-performing algorithm in this review on the Cuprite dataset is SGSNMF (0.0913). The differences between SGSNMF and the other algorithms evaluated on Cuprite are very small, and on visual inspection of the provided hyperspectral data cube reconstructions they are imperceptible.</p>
</list-item>
</list>
</p>
</sec>
<sec id="j_infor522_s_010">
<label>2.3.3</label>
<title>Autoencoder Networks</title>
<p>Autoencoders are a type of unsupervised learning-based neural network architecture. A bottleneck of artificial neurons forces the input data to be compressed into a small number of features, extracting additional nonlinear information from the data. A few different types of autoencoder networks exist and serve different purposes: 
<list>
<list-item id="j_infor522_li_019">
<label>•</label>
<p>Denoising autoencoder – a network trained to reconstruct the original data from an input corrupted with added noise, which later allows noise to be removed from new data.</p>
</list-item>
<list-item id="j_infor522_li_020">
<label>•</label>
<p>Deep autoencoder – consists of at least 4 encoder and decoder layers, classically built from stacked Restricted Boltzmann Machines.</p>
</list-item>
<list-item id="j_infor522_li_021">
<label>•</label>
<p>Convolutional autoencoder – uses convolutions, exploiting the fact that a signal can be represented as a sum of simpler signals; the encoder expresses the input as a set of such simple signals, and the decoder reconstructs it.</p>
</list-item>
</list> 
The most commonly used autoencoder network for hyperspectral unmixing is the variational autoencoder. The encoder part of the network compresses the data, and the decoder reconstructs the original data using the compressed features as input, so the network can be trained by minimizing the reconstruction error, which measures the difference between input and reconstructed data. After training, the compressed representation can be extracted and fed into other algorithms; this is how hidden, or latent, features of the training data are obtained. The variational part of the network provides distributions of values in the latent space instead of single values as in a regular autoencoder. In a typical diagram of this architecture, the middle (bottleneck) layers are smaller than the input and output layers, and the lines between nodes depict neuron connections and their weights. An overview of algorithm results is shown in Table <xref rid="j_infor522_tab_003">3</xref>, summarising results, metrics, and datasets from each corresponding paper.</p>
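<p>The bottleneck idea described above can be illustrated with a deliberately minimal linear autoencoder trained by gradient descent. This sketch omits the variational component and nonlinearities; all sizes, data, and learning parameters are illustrative and do not correspond to any reviewed model:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-band "spectra" lying near a 2-D subspace.
latent = rng.random((200, 2))
basis = rng.random((2, 8))
X = latent @ basis + 0.01 * rng.standard_normal((200, 8))

# Minimal linear autoencoder: 8 -> 2 (bottleneck) -> 8.
W_enc = 0.1 * rng.standard_normal((8, 2))
W_dec = 0.1 * rng.standard_normal((2, 8))
lr = 0.05
for _ in range(2000):
    Z = X @ W_enc                 # encoder: compress to 2 latent features
    X_hat = Z @ W_dec             # decoder: reconstruct the input
    err = X_hat - X               # reconstruction error to minimize
    # Alternating gradient steps on the mean squared reconstruction error.
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

<p>After training, the columns of <monospace>Z</monospace> play the role of the latent features; the reviewed algorithms replace the linear maps with deep convolutional or recurrent layers and add unmixing-specific constraints.</p>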
<p><italic>Convolutional Neural Network AutoEncoder Unmixing</italic> (CNNAEU) (Palsson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_028">2021</xref>) is a hyperspectral unmixing model based on an autoencoder neural network architecture with convolutional layers. It exploits both the spatial structure of hyperspectral images (HSI) and their spectral information, using a convolutional neural network (CNN) to extract spatial features from the structure of the HSI. The authors give mean MSE values for the datasets used: 0.078 for the Samson dataset (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>), 0.056 for the Urban (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>) dataset, 0.13 for the Houston dataset, and 0.10 for the Apex dataset (Schaepman <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_033">2015</xref>).</p>
<table-wrap id="j_infor522_tab_003">
<label>Table 3</label>
<caption>
<p>Comparison of the results of autoencoder network algorithms.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dataset</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Result</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="12" style="vertical-align: top; text-align: left">CNNAEU (Palsson <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_028">2021</xref>)</td>
<td rowspan="12" style="vertical-align: top; text-align: left">Samson, Urban, Houston, and Apex datasets</td>
<td rowspan="12" style="vertical-align: top; text-align: left">Mean SAD; Mean MSE</td>
<td style="vertical-align: top; text-align: left">Samson:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mSAD = 0.04;</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mMSE = 0.0781</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mSAD = 0.0398;</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mMSE = 0.0562</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Houston:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mSAD = 0.0502;</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mMSE = 0.1299</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Apex:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mSAD = 0.0714;</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mMSE = 0.1031</td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top; text-align: left">DeepGUn (Borsoi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_004">2020</xref>)</td>
<td rowspan="5" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples; Houston; Samson; Jasper Ridge</td>
<td rowspan="5" style="vertical-align: top; text-align: left">normalized RMSE (reconstruction)</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.0448</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Houston: RMSE = 0.2355</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Samson: RMSE = 0.0862</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge: RMSE = 0.1094</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left">DMBU (Su <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_035">2021</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left">Urban and Jasper Ridge</td>
<td rowspan="4" style="vertical-align: top; text-align: left">RMSE; Mean SAD</td>
<td style="vertical-align: top; text-align: left">Urban SAD = 0.2173</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper SAD = 0.1496</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban RMSE = 0.2062</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper RMSE = 0.247</td>
</tr>
<tr>
<td rowspan="6" style="vertical-align: top; text-align: left">Deep HSnet (Dong <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_005">2020</xref>)</td>
<td rowspan="6" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples; Urban dataset</td>
<td rowspan="6" style="vertical-align: top; text-align: left">aRMSE; rRMSE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">aRMSE = 0.3</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">rRMSE = 0.12 (SNR 40 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">aRMSE = 0.3592</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">rRMSE = 0.0869</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">LSTM-DNN (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_046">2021a</xref>)</td>
<td rowspan="3" style="vertical-align: top; text-align: left">Urban dataset</td>
<td style="vertical-align: top; text-align: left">Average SAD</td>
<td style="vertical-align: top; text-align: left">aSAD = 9.2 ± 2.9</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Average SID</td>
<td style="vertical-align: top; text-align: left">aSID (<inline-formula id="j_infor522_ineq_031"><alternatives><mml:math>
<mml:mo>∗</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$\ast {10^{-3}}$]]></tex-math></alternatives></inline-formula>) = 115.7 ± 84.7</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">RMSE (<inline-formula id="j_infor522_ineq_032"><alternatives><mml:math>
<mml:mo>∗</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$\ast {10^{-3}}$]]></tex-math></alternatives></inline-formula>) = 13.4 ± 3.4</td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top; text-align: left">AAS (Hua <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_014">2021</xref>)</td>
<td rowspan="5" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples; Jasper dataset; Samson dataset</td>
<td rowspan="5" style="vertical-align: top; text-align: left">aRMSE (abundance RMSE); eSAD (endmember SAD)</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data (aRMSE) =</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">0.0160 (Dataset 1; SNR = 35 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">0.0339 (Dataset 2; SNR = 35 dB)</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Samson (eSAD) = 0.1062</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper (eSAD) = 0.1593</td>
</tr>
<tr>
<td rowspan="5" style="vertical-align: top; text-align: left">GAUSS (Ranasinghe <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_032">2022</xref>)</td>
<td rowspan="5" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples; Jasper Ridge dataset; Urban dataset; Samson dataset</td>
<td rowspan="5" style="vertical-align: top; text-align: left">average RMSE</td>
<td style="vertical-align: top; text-align: left">USGS synthetic data:</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">RMSE = 0.1816</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Jasper Ridge: RMSE = 0.1446</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Urban: RMSE = 0.1358</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">Samson: RMSE = 0.1945</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top; text-align: left; border-bottom: solid thin">SC-CAE (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_047">2021b</xref>)</td>
<td rowspan="4" style="vertical-align: top; text-align: left; border-bottom: solid thin">Synthetic data based on USGS library samples</td>
<td rowspan="4" style="vertical-align: top; text-align: left; border-bottom: solid thin">mean SAD; mean AAD</td>
<td style="vertical-align: top; text-align: left">mSAD (SNR 20 dB) = 0.0135</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mSAD (SNR 50 dB) = 0.0051</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">mAAD (SNR 20 dB) = 0.0671</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">mAAD (SNR 50 dB) = 0.0306</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><italic>Deep Generative Unmixing algorithm</italic> (DeepGUn) (Borsoi <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_004">2020</xref>) is a spectral unmixing algorithm based on generative models such as generative adversarial networks (GANs) and variational autoencoders (VAEs). According to the authors, their proposed strategy leads to more accurate abundance estimation at a small additional computational cost. The proposed autoencoder architecture consists of 3 hidden encoder layers with rectified linear unit (ReLU) activation functions, 3 hidden decoder layers with ReLU activation functions, and input and output layers with a sigmoid activation function. The experiment was conducted with 4 synthetically created data cubes from USGS Spectral Library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) data and the Houston, Samson (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>), and Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) hyperspectral images. The authors provide RMSE values for these datasets: 0.045 for the synthetic dataset, 0.236 for the Houston dataset, 0.086 for the Samson dataset (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>), and 0.11 for the Jasper Ridge dataset (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>).</p>
<p><italic>Deep autoencoders with Multitask learning for Bilinear hyperspectral Unmixing</italic> (DMBU) (Su <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_035">2021</xref>) is an unmixing algorithm built on deep autoencoder networks and a multitask learning framework. In the proposed method, the authors train two autoencoder networks together by minimizing the errors between the reconstructed data and the original hyperspectral images. Within the multitask learning framework, one model obtains the endmembers and abundances, while a second model estimates the bilinear components of the hyperspectral data. The result is a bilinear mixture model that can more accurately predict the nonlinear interaction of light scattering. The authors used a variety of synthetic and real datasets to test the algorithm’s accuracy and computation time. For the Jasper Ridge dataset (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>), an RMSE of 0.247 and a mean SAD over all classes of 0.150 were achieved.</p>
<p><italic>Deep Half-Siamese Network</italic> (Deep HSNet) (Dong <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_005">2020</xref>) is a hyperspectral unmixing algorithm that consists of two different networks: an endmember-guided network and a reconstruction network. The first network maps extracted endmembers to the abundances, while the reconstruction network is an autoencoder that recreates hyperspectral pixels. Two networks with different parameters were tested on a synthetic dataset created using the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and on the Urban dataset (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>). Values of reconstruction RMSE are given: 0.12 for a synthetic dataset with <inline-formula id="j_infor522_ineq_033"><alternatives><mml:math>
<mml:mn>40</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$40\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula> SNR and 0.087 for the Urban dataset.</p>
<p><italic>LSTM-DNN based autoencoder network for nonlinear hyperspectral image unmixing</italic> (LSTM-DNN) (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_046">2021a</xref>) is a hyperspectral unmixing algorithm that uses a long short-term memory (LSTM) based deep learning network. The authors propose a recurrent neural network (RNN) architecture, specifically LSTM layers, within an autoencoder structure: the encoder calculates hyperspectral endmembers and abundances, and the decoder reconstructs the hyperspectral data cube. To test the algorithm performance, the authors used synthetic datasets created from USGS spectral library data, laboratory-created mixture data, the Urban (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>) dataset, and other scenes. Multiple metrics were calculated: average spectral angle distance (aSAD), average spectral information divergence (aSID), RMSE, and a few others that were not used in the experiment conducted for the Urban dataset. Results for the Urban dataset were: aSAD = <inline-formula id="j_infor522_ineq_034"><alternatives><mml:math>
<mml:mn>9.2</mml:mn>
<mml:mo>±</mml:mo>
<mml:mn>2.9</mml:mn></mml:math><tex-math><![CDATA[$9.2\pm 2.9$]]></tex-math></alternatives></inline-formula>, aSID <inline-formula id="j_infor522_ineq_035"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>∗</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>115.7</mml:mn>
<mml:mo>±</mml:mo>
<mml:mn>84.7</mml:mn></mml:math><tex-math><![CDATA[$(\ast {10^{-3}})-115.7\pm 84.7$]]></tex-math></alternatives></inline-formula>, RMSE <inline-formula id="j_infor522_ineq_036"><alternatives><mml:math>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:mo>∗</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>13.4</mml:mn>
<mml:mo>±</mml:mo>
<mml:mn>3.4</mml:mn></mml:math><tex-math><![CDATA[$(\ast {10^{-3}})-13.4\pm 3.4$]]></tex-math></alternatives></inline-formula>.</p>
<p><italic>Autoencoder network with Adaptive Abundance Smoothing</italic> (AAS) (Hua <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_014">2021</xref>) is a hyperspectral unmixing algorithm based on an autoencoder network with an adaptive spatial smoothing algorithm that improves the unmixing performance. A synthetic dataset created from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and the Samson (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>) and Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) datasets were used to carry out the algorithm benchmark experiments. For the Samson and Jasper datasets, endmember SAD values of 0.11 and 0.16, respectively, are given.</p>
<p><italic>Guided encoder-decoder Architecture for hyperspectral Unmixing with Spatial Smoothness</italic> (GAUSS) (Ranasinghe <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_032">2022</xref>) is a three-network hyperspectral unmixing architecture consisting of an approximation network, an unmixing network, and a mixing network; the first two form the encoder part of the architecture, and the last one is the decoder. The authors also propose a pseudo-ground-truth mechanism to generate better abundance estimates in the decoder network and other parts of the algorithm. Algorithm testing and experimentation were conducted using USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) data and three real hyperspectral datasets: Samson (Zhu, <xref ref-type="bibr" rid="j_infor522_ref_048">2017</xref>), Jasper Ridge (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>), and Urban (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_049">2014a</xref>).</p>
<p><italic>Sparsity Constrained Convolutional AutoEncoder network for hyperspectral image unmixing</italic> (SC-CAE) (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_047">2021b</xref>) is a convolutional autoencoder network algorithm for hyperspectral unmixing with a sparsity constraint. The authors first apply PCA to the hyperspectral data and then feed it into a convolutional autoencoder network that, given enough training data and time, can find abundance maps and spectral endmembers and reconstruct the original hyperspectral data. A combination of synthetic data generated using spectral information from the USGS library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and the Jasper Ridge dataset (Zhu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_050">2014b</xref>) was used to test the performance of the proposed algorithm. Mean spectral angle distance (mSAD) and mean abundance angle distance (mAAD) metrics were used for comparing algorithms. For the synthetic dataset, mSAD values were 0.0135 with an SNR of <inline-formula id="j_infor522_ineq_037"><alternatives><mml:math>
<mml:mn>20</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$20\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>, 0.0051 with an SNR of <inline-formula id="j_infor522_ineq_038"><alternatives><mml:math>
<mml:mn>50</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$50\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>, and mAAD values were 0.0671 with an SNR of <inline-formula id="j_infor522_ineq_039"><alternatives><mml:math>
<mml:mn>20</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$20\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>, 0.0306 with an SNR of <inline-formula id="j_infor522_ineq_040"><alternatives><mml:math>
<mml:mn>50</mml:mn>
<mml:mspace width="2.5pt"/>
<mml:mtext>dB</mml:mtext></mml:math><tex-math><![CDATA[$50\hspace{2.5pt}\text{dB}$]]></tex-math></alternatives></inline-formula>.</p>
</sec>
<sec id="j_infor522_s_011">
<label>2.3.4</label>
<title>Conclusions from Autoencoder Networks Related Works Review</title>
<p>A few conclusions were derived from the review of algorithms that use autoencoder networks to solve the hyperspectral unmixing problem: 
<list>
<list-item id="j_infor522_li_022">
<label>•</label>
<p>The most common metric in the reviewed papers was RMSE, though several variations were used: RMSE of the differences between original and reconstructed hyperspectral data, RMSE averaged over different materials (classes), and a separate RMSE for abundance matrix analysis.</p>
</list-item>
<list-item id="j_infor522_li_023">
<label>•</label>
<p>For autoencoder network algorithms, the most common real-world dataset was the Urban dataset.</p>
</list-item>
<list-item id="j_infor522_li_024">
<label>•</label>
<p>Based on the reported RMSE of hyperspectral data reconstruction error, the algorithm with the lowest value (<inline-formula id="j_infor522_ineq_041"><alternatives><mml:math>
<mml:mn>13.4</mml:mn>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msup></mml:math><tex-math><![CDATA[$13.4\times {10^{-3}}$]]></tex-math></alternatives></inline-formula>) was LSTM-DNN.</p>
</list-item>
<list-item id="j_infor522_li_025">
<label>•</label>
<p>Compared to the algorithms in the semi-supervised and nonnegative matrix factorization categories, the autoencoder network algorithms are newer, with the oldest published in 2020.</p>
</list-item>
</list>
</p>
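<p>The SAD and RMSE values compared throughout these reviews follow the standard definitions. As a quick reference, a minimal NumPy sketch of RMSE, SAD, and SRE (in dB) is given below; individual papers may use different averaging or normalization variants:</p>

```python
import numpy as np

def rmse(reference, estimate):
    """Root mean squared error between two spectra or data cubes."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    return float(np.sqrt(np.mean((reference - estimate) ** 2)))

def sad(a, b):
    """Spectral angle distance (radians) between two spectra."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sre_db(reference, estimate):
    """Signal reconstruction error in dB: power of the reference
    signal over the power of the estimation error."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    return float(10 * np.log10(np.sum(reference ** 2)
                               / np.sum((reference - estimate) ** 2)))

s = np.array([1.0, 2.0, 3.0])
assert rmse(s, s) == 0.0
assert sad(s, 2 * s) < 1e-7   # SAD ignores scaling of the spectrum
```

<p>The scale invariance of SAD is why it is preferred for comparing endmember spectra, while RMSE-type metrics are preferred for abundances and reconstructions.</p>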
</sec>
<sec id="j_infor522_s_012">
<label>2.3.5</label>
<title>Linear Mixture Models</title>
<p>Linear mixture models (LMM) are regression models that simultaneously consider the variation of the dependent and the independent variables. The variations of both types of variables are often called fixed and random effects, and because the model uses both of these effects, it is called a mixed model. The linear mixture model is given in equation (<xref rid="j_infor522_eq_003">3</xref>), where <italic>y</italic> is the outcome variable or mixture, <italic>X</italic> is the matrix of predictors multiplied by the regression coefficients <italic>β</italic>, <italic>Z</italic> is the design matrix of random effects of mixed data groups multiplied by the random-effect vector <italic>u</italic>, and <italic>ε</italic> represents the residuals (noise). An overview of the results, metrics, and datasets used in the experiments is shown in Table <xref rid="j_infor522_tab_004">4</xref>. 
<disp-formula id="j_infor522_eq_003">
<label>(3)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">X</mml:mi>
<mml:mi mathvariant="italic">β</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">Z</mml:mi>
<mml:mi mathvariant="italic">u</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">ε</mml:mi>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ y=X\beta +Zu+\varepsilon .\]]]></tex-math></alternatives>
</disp-formula>
</p>
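<p>Equation (3) can be sketched numerically as follows; every dimension and coefficient value below is illustrative only:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

n = 100                                       # number of observations
X = rng.random((n, 3))                        # fixed-effects design matrix
beta = np.array([1.5, -0.7, 2.0])             # fixed-effects coefficients
Z = (rng.random((n, 4)) < 0.5).astype(float)  # random-effects design (group membership)
u = 0.3 * rng.standard_normal(4)              # random effects per group
eps = 0.05 * rng.standard_normal(n)           # residual noise

y = X @ beta + Z @ u + eps                    # equation (3): y = Xb + Zu + e
```

<p>In the unmixing setting, the fixed-effects term plays the role of the deterministic endmember mixture, while the random-effects and residual terms absorb group-level and pixel-level variability.</p>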
<p><italic>Augmented Linear Mixing Model</italic> (ALMM) (Hong <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_013">2019</xref>) is a modified linear mixture model that uses an endmember dictionary to determine the scaling factors and an additional dictionary to help model the remaining spectral variabilities. The proposed algorithm also implements ADMM-based optimization to solve multi-block optimization problems (Xu <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_038">2012</xref>). In the experiment proposed by the authors, a combination of synthetic data generated from the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) and a hyperspectral image gathered by AVIRIS, called Cuprite (NASA, <xref ref-type="bibr" rid="j_infor522_ref_027">2015</xref>), was used. The reconstruction RMSE given by the authors is 0.0003 for the Cuprite dataset.</p>
<table-wrap id="j_infor522_tab_004">
<label>Table 4</label>
<caption>
<p>Comparison of the linear mixture model and supervised algorithms results.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Dataset</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Result</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">ALMM (Hong <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_013">2019</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples</td>
<td rowspan="2" style="vertical-align: top; text-align: left">rRMSE; aSAM</td>
<td style="vertical-align: top; text-align: left">rRMSE = 0.0003</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">aSAM = 0.0052</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">GP_LM (Koirala <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_019">2019</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples (Hapke generating model)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">Training set 1: RMSE = 19.88</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">Training set 2: RMSE = 3.05</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">KRR_LM (Koirala <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_019">2019</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">Synthetic data based on USGS library samples (Hapke generating model)</td>
<td rowspan="2" style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">Training set 1: RMSE = 31.81</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left"/>
<td style="vertical-align: top; text-align: left">Training set 2: RMSE = 4.05</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">NN_LM (Koirala <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_019">2019</xref>)</td>
<td rowspan="2" style="vertical-align: top; text-align: left; border-bottom: solid thin">Synthetic data based on USGS library samples (Hapke generating model)</td>
<td rowspan="2" style="vertical-align: top; text-align: left; border-bottom: solid thin">RMSE</td>
<td style="vertical-align: top; text-align: left">Training set 1: RMSE = 23.57</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"/>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Training set 2: RMSE = 4.15</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
</sec>
<sec id="j_infor522_s_013">
<label>3</label>
<title>Benchmark Methodology</title>
<p>This section establishes and discusses the methodology used to design the experiments for a hyperspectral unmixing algorithm performance benchmark. The proposed benchmark methodology could serve as a standardized way to test hyperspectral unmixing algorithms, with each experiment probing a different aspect of the algorithms.</p>
<sec id="j_infor522_s_014">
<label>3.1</label>
<title>Datasets</title>
<p>This section describes the datasets used for the algorithm testing experiments. Three different datasets were used to test the various performance metrics of hyperspectral unmixing algorithms. These datasets were chosen due to their popularity, usability, and availability: 
<list>
<list-item id="j_infor522_li_026">
<label>•</label>
<p>A synthetic hyperspectral data cube was created artificially by mixing different amounts of pure spectra from the USGS spectral library. To create the synthetic datasets, version 7 of the USGS spectral library (splib07a) (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>) was used, which contains over 2000 different spectral endmembers with wavelengths ranging from 0.2 to 200 micrometres. Several datasets are generated from the spectral data in this library to conduct the benchmark experiments. The detailed generation process is described in Section <xref rid="j_infor522_s_020">4</xref> and its corresponding subsections.</p>
</list-item>
<list-item id="j_infor522_li_027">
<label>•</label>
<p>A hyperspectral dataset created by the article’s authors (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_045">2019</xref>) in a laboratory setting, containing hyperspectral images and spectral ground truths. The dataset is split into 3 scenes, each containing the spectra of pure-coloured materials mixed in different proportions to create mixed spectra. Unlike the synthetic data created from the USGS library, whose endmembers are usually mixed linearly, these mixtures are created physically and therefore represent a more true-to-life mixing model. The first and third scenes use cyan, magenta, and yellow dyes and their mixtures, while the second uses red, green, blue, and white dyes.</p>
<p>
<fig id="j_infor522_fig_001">
<label>Fig. 1</label>
<caption>
<p>2018 IEEE GRSS data fusion hyperspectral data RGB reconstruction.</p>
</caption>
<graphic xlink:href="infor522_g001.jpg"/>
</fig>
</p>
</list-item>
<list-item id="j_infor522_li_028">
<label>•</label>
<p>IEEE GRSS 2018 data fusion contest hyperspectral dataset (Prasad <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_030">2020</xref>). This dataset was gathered over the University of Houston and consists of 1202 by 4172 pixels, each containing 48 spectral bands with wavelengths from 317 nm to 1047 nm. The RGB reconstruction of the data is shown in Fig. <xref rid="j_infor522_fig_001">1</xref>.</p>
</list-item>
</list>
</p>
</sec>
<sec id="j_infor522_s_015">
<label>3.2</label>
<title>Metrics</title>
<p>To properly test the performance of these algorithms in their various aspects, a few metrics and their variants were chosen. Many different metrics are used for hyperspectral unmixing problems; the most common are root mean squared error, signal reconstruction error, spectral angle mapping, and spectral angle distance. Root mean squared error and signal reconstruction error were selected for this benchmark due to their popularity in hyperspectral unmixing algorithm evaluation and their simplicity in describing the differences between estimated and real spectra:</p>
<list>
<list-item id="j_infor522_li_029">
<label>•</label>
<p>Root mean squared error (RMSE) shows the difference between the predicted spectra and the ground truth values. Authors have used several variations of RMSE to test different aspects of their algorithms, including average RMSE across all endmembers, reconstruction RMSE, and abundance RMSE.</p>
</list-item>
<list-item id="j_infor522_li_030">
<label>•</label>
<p>Signal reconstruction error (SRE) is used to determine the quality of the spectral mixture reconstruction generated by the algorithms. A higher SRE value means a better reconstruction quality.</p>
</list-item>
</list>
<p>Metrics are calculated using these formulas: 
<disp-formula id="j_infor522_eq_004">
<label>(4)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mtext mathvariant="italic">RMSE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo largeop="true" movablelimits="false">∑</mml:mo></mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \textit{RMSE}=\sqrt{\frac{1}{N}{\sum \limits_{i=1}^{N}}{({x_{i}}-{\hat{x}_{i}})^{2}}},\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>N</italic> is the number of values in the vector being tested, <inline-formula id="j_infor522_ineq_042"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${x_{i}}$]]></tex-math></alternatives></inline-formula> is the <italic>i</italic>-th true value, and <inline-formula id="j_infor522_ineq_043"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${\hat{x}_{i}}$]]></tex-math></alternatives></inline-formula> is the <italic>i</italic>-th predicted value. 
<disp-formula id="j_infor522_eq_005">
<label>(5)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mtext mathvariant="italic">SRE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mn>10</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mo movablelimits="false">log</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">(</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mo stretchy="false">‖</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mo stretchy="false">‖</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo fence="true" stretchy="false">]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">E</mml:mi>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mo stretchy="false">‖</mml:mo>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo>−</mml:mo><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover>
<mml:msubsup>
<mml:mrow>
<mml:mo stretchy="false">‖</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo fence="true" stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo mathvariant="normal" fence="true" maxsize="2.03em" minsize="2.03em">)</mml:mo>
<mml:mo mathvariant="normal">,</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ \textit{SRE}=10{\log _{10}}\bigg(\frac{E[\| x{\| _{2}^{2}}]}{E[\| x-\hat{x}{\| _{2}^{2}}]}\bigg),\]]]></tex-math></alternatives>
</disp-formula> 
where <italic>x</italic> is the true value and <inline-formula id="j_infor522_ineq_044"><alternatives><mml:math><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">ˆ</mml:mo></mml:mover></mml:math><tex-math><![CDATA[$\hat{x}$]]></tex-math></alternatives></inline-formula> is the predicted value and <inline-formula id="j_infor522_ineq_045"><alternatives><mml:math>
<mml:mi mathvariant="italic">E</mml:mi>
<mml:mo fence="true" stretchy="false">[</mml:mo>
<mml:mo fence="true" stretchy="false">]</mml:mo></mml:math><tex-math><![CDATA[$E[]$]]></tex-math></alternatives></inline-formula> is the expected value of the expression inside.</p>
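As a reference point, the two metrics can be computed as follows (a minimal Python/NumPy sketch; the function names and array conventions are ours, not taken from the benchmark code):

```python
import numpy as np

def rmse(x_true, x_pred):
    """Root mean squared error between true and predicted values (Eq. (4))."""
    x_true = np.asarray(x_true, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    return np.sqrt(np.mean((x_true - x_pred) ** 2))

def sre(x_true, x_pred):
    """Signal reconstruction error in dB (Eq. (5)); higher is better.

    The expectations E[.] are taken as means over pixels; each pixel's
    spectrum occupies the last axis of the input arrays.
    """
    x_true = np.asarray(x_true, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    signal = np.mean(np.sum(x_true ** 2, axis=-1))
    error = np.mean(np.sum((x_true - x_pred) ** 2, axis=-1))
    return 10.0 * np.log10(signal / error)
```

A perfect reconstruction gives RMSE of 0, while SRE grows without bound as the reconstruction error shrinks.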
</sec>
<sec id="j_infor522_s_016">
<label>3.3</label>
<title>Experiment Steps</title>
<p>To test the different aspects of the algorithms, the main experiment is divided into four parts:</p>
<list>
<list-item id="j_infor522_li_031">
<label>1.</label>
<p>Hyperparameter testing. This experiment measures how each algorithm’s results change as its available hyperparameters are varied. Standard, controlled datasets are created to ensure that the results are affected only by the change in a hyperparameter. This test reveals how strongly an algorithm depends on its hyperparameters and, in turn, how universal it is. The laboratory-created dataset (Zhao <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_045">2019</xref>) was used for this experiment. It was selected for its different data collections, accurate measurements, and the ground truths provided.</p>
</list-item>
<list-item id="j_infor522_li_032">
<label>2.</label>
<p>Endmember robustness. This experiment tests how well an algorithm generalizes and how its overall performance changes when the input number of endmembers is varied. It checks the algorithm’s ability to find endmembers and reconstruct hyperspectral images depending on the scene’s difficulty. Because the number of endmembers must vary, a synthetic dataset was used, built from the IEEE GRSS data (Prasad <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_030">2020</xref>) as a basis combined with the USGS spectral library (Kokaly <italic>et al.</italic>, <xref ref-type="bibr" rid="j_infor522_ref_020">2017</xref>).</p>
</list-item>
<list-item id="j_infor522_li_033">
<label>3.</label>
<p>Robustness to noise. This experiment determines the algorithm’s ability to accurately unmix the hyperspectral image spectra when different levels of artificial noise are added to the image. Algorithms are tested with different amounts of white noise and with a noise profile created from a real-world scenario. The dataset created for the endmember robustness test was used as the base hyperspectral dataset, with a layer of artificial noise added to it.</p>
</list-item>
<list-item id="j_infor522_li_034">
<label>4.</label>
<p>Impact of differences in input image sizes. By setting different sizes of hyperspectral images, the amount of spatial and spectral information changes, affecting the overall performance of algorithms. This also allows us to determine the optimal image size for the most accurate unmixing result and performance combination. It also shows the data required for algorithms to achieve their best accuracy. The same endmember robustness dataset was used and then downscaled using the methodology described below to create the different spatial size hyperspectral images.</p>
</list-item>
</list>
<p>The experiments described above were performed using the different datasets described in Section <xref rid="j_infor522_s_014">3.1</xref>.</p>
</sec>
<sec id="j_infor522_s_017">
<label>3.4</label>
<title>Endmember Robustness Experiment Schema</title>
<p>Endmember robustness testing is done by creating a group of artificially generated datasets according to a set of rules: 
<list>
<list-item id="j_infor522_li_035">
<label>•</label>
<p>Datasets <inline-formula id="j_infor522_ineq_046"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D_{x}}$]]></tex-math></alternatives></inline-formula> (where <italic>x</italic> is the set number of endmembers) are created by selecting the endmembers from USGS spectral library data.</p>
</list-item>
<list-item id="j_infor522_li_036">
<label>•</label>
<p>The numbers of endmembers <italic>x</italic> selected are: from 3 to 10 with a step of 1, from 10 to 30 with a step of 5, and from 30 to 100 with a step of 10.</p>
</list-item>
<list-item id="j_infor522_li_037">
<label>•</label>
<p>For each <inline-formula id="j_infor522_ineq_047"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D_{x}}$]]></tex-math></alternatives></inline-formula> one abundance matrix <inline-formula id="j_infor522_ineq_048"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="italic">equal</mml:mtext>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${A_{\textit{equal}}}$]]></tex-math></alternatives></inline-formula> is created by using an equal abundance of each endmember <italic>x</italic>.</p>
</list-item>
<list-item id="j_infor522_li_038">
<label>•</label>
<p>For each <inline-formula id="j_infor522_ineq_049"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${D_{x}}$]]></tex-math></alternatives></inline-formula> ten abundance matrices <inline-formula id="j_infor522_ineq_050"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${A_{y}}$]]></tex-math></alternatives></inline-formula> are created by randomly generating endmember abundances <italic>y</italic> using a uniform distribution. <italic>y</italic> is normalized to conform to the sum-to-one constraint (equation (<xref rid="j_infor522_eq_006">6</xref>)).</p>
<p>
<fig id="j_infor522_fig_002">
<label>Fig. 2</label>
<caption>
<p>Artificial hyperspectral image RGB representation.</p>
</caption>
<graphic xlink:href="infor522_g002.jpg"/>
</fig>
</p>
</list-item>
<list-item id="j_infor522_li_039">
<label>•</label>
<p>An artificial hyperspectral image <inline-formula id="j_infor522_ineq_051"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${I_{i}}$]]></tex-math></alternatives></inline-formula> (an example RGB representation of such an image is shown in Fig. <xref rid="j_infor522_fig_002">2</xref>) of size 150 by 100 pixels is generated using the abundance matrix <inline-formula id="j_infor522_ineq_052"><alternatives><mml:math>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
</mml:msub></mml:math><tex-math><![CDATA[${A_{y}}$]]></tex-math></alternatives></inline-formula> and endmembers <italic>x</italic>. The image size was selected to represent a realistic hyperspectral image while being kept small to reduce computational resource usage.</p>
</list-item>
</list> 
<disp-formula id="j_infor522_eq_006">
<label>(6)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ {y_{i}}=\frac{{y_{i}}}{{\textstyle\textstyle\sum _{i=1}^{N}}{y_{i}}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
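The generation rules above can be sketched as follows (a simplified illustration; the array shapes, fixed seed, and function name are our assumptions, not the exact benchmark code):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_image(endmembers, height=150, width=100):
    """Mix endmembers linearly with random sum-to-one abundances.

    endmembers: array of shape (x, bands) taken from a spectral library.
    Returns the abundance maps of shape (height, width, x) and the
    mixed image of shape (height, width, bands).
    """
    x = endmembers.shape[0]
    # Uniformly distributed abundances y, normalized per pixel so that
    # each pixel's abundances sum to one (Eq. (6)).
    a = rng.uniform(size=(height, width, x))
    a /= a.sum(axis=-1, keepdims=True)
    image = a @ endmembers  # linear mixing model
    return a, image
```

One equal-abundance matrix per dataset and ten random matrices per dataset, as listed above, would be produced by repeated calls with different draws.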
</sec>
<sec id="j_infor522_s_018">
<label>3.5</label>
<title>Robustness to Noise Experiment Schema</title>
<p>A collection of artificially generated hyperspectral images is created to test the algorithm’s robustness to noise. Then a different amount of noise is added to the images according to these set rules:</p>
<list>
<list-item id="j_infor522_li_040">
<label>•</label>
<p>A collection of 4 different datasets is created with different endmembers using the same methodology as in the endmember robustness experiment in Section <xref rid="j_infor522_s_017">3.4</xref>.</p>
</list-item>
<list-item id="j_infor522_li_041">
<label>•</label>
<p>For each of the 4 datasets, a different amount of artificial noise is added.</p>
</list-item>
<list-item id="j_infor522_li_042">
<label>•</label>
<p>The created noise is measured in SNR dB, in which a lower number means a higher amount of white noise.</p>
</list-item>
<list-item id="j_infor522_li_043">
<label>•</label>
<p>A random noise with a mean value of 0 is generated with the desired SNR dB values of 20, 25, 30, 35, 40, 45, and 50.</p>
<p>
<fig id="j_infor522_fig_003">
<label>Fig. 3</label>
<caption>
<p>Real and artificially created noise profile Pearson correlation comparison.</p>
</caption>
<graphic xlink:href="infor522_g003.jpg"/>
</fig>
</p>
</list-item>
<list-item id="j_infor522_li_044">
<label>•</label>
<p>This noise is then applied to each of the 4 datasets to create noisy images.</p>
</list-item>
</list>
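The white-noise step above can be sketched as follows (an illustrative assumption of how zero-mean noise at a target SNR may be produced; not the exact benchmark code):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_white_noise(image, snr_db):
    """Add zero-mean Gaussian noise scaled to the desired SNR in dB.

    The noise power is derived from the mean signal power, so a lower
    SNR value yields a larger amount of white noise.
    """
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=image.shape)
    return image + noise
```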
<p>In addition to the random white noise generated, a set of noise parameters was extracted from a hyperspectral imaging camera used for research in an uncontrolled field environment. The camera was a BaySpec OCI-F Hyperspectral Imager in the VIS-NIR range (BaySpec, <xref ref-type="bibr" rid="j_infor522_ref_001">2021</xref>). A Pearson correlation coefficient was calculated to measure the amount of noise the camera generated at each wavelength. Each pair of neighbouring wavelengths was taken from a hyperspectral image, and the correlation between their values across the whole image was calculated. Figure <xref rid="j_infor522_fig_003">3</xref> (orange line) shows the correlation coefficient at wavelength index <italic>x</italic> and <inline-formula id="j_infor522_ineq_053"><alternatives><mml:math>
<mml:mi mathvariant="italic">x</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn></mml:math><tex-math><![CDATA[$x-1$]]></tex-math></alternatives></inline-formula>. The same Pearson correlation was calculated for one of the synthetically generated hyperspectral images used in this experiment, and the results are shown in Fig. <xref rid="j_infor522_fig_003">3</xref> (blue line). The Pearson correlation between neighbouring bands in the same image was calculated using formula (<xref rid="j_infor522_eq_007">7</xref>), where <italic>r</italic> is the correlation value, <italic>x</italic> is the first set of values (in this case, the values of a specific wavelength) and <italic>y</italic> is the second set of values (the values of the neighbouring wavelength). The calculation is done by taking a pair of values from the same pixel <italic>i</italic>, calculating the difference to the average value of each set (<inline-formula id="j_infor522_ineq_054"><alternatives><mml:math><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:math><tex-math><![CDATA[$\bar{x}$]]></tex-math></alternatives></inline-formula> and <inline-formula id="j_infor522_ineq_055"><alternatives><mml:math><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover></mml:math><tex-math><![CDATA[$\bar{y}$]]></tex-math></alternatives></inline-formula>), multiplying them, summing the products, and dividing by the square root of the product of the two sums of squared deviations: 
<disp-formula id="j_infor522_eq_007">
<label>(7)</label><alternatives><mml:math display="block">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd>
<mml:mi mathvariant="italic">r</mml:mi>
<mml:mo>=</mml:mo><mml:mstyle displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">x</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mo largeop="false" movablelimits="false">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">n</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mo mathvariant="normal" fence="true" stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>−</mml:mo><mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">¯</mml:mo></mml:mover>
<mml:mo mathvariant="normal" fence="true" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable></mml:math><tex-math><![CDATA[\[ r=\frac{{\textstyle\sum _{i=1}^{n}}({x_{i}}-\bar{x})({y_{i}}-\bar{y})}{\sqrt{{\textstyle\sum _{i=1}^{n}}{({x_{i}}-\bar{x})^{2}}{\textstyle\sum _{i=1}^{n}}{({y_{i}}-\bar{y})^{2}}}}.\]]]></tex-math></alternatives>
</disp-formula>
</p>
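The band-to-band correlation computation can be sketched as follows (a short illustration; <monospace>np.corrcoef</monospace> computes the same quantity as formula (7), and the function name and cube layout are our assumptions):

```python
import numpy as np

def band_correlations(cube):
    """Pearson correlation between each pair of neighbouring spectral
    bands of a hyperspectral cube.

    cube: array of shape (height, width, bands).
    Returns an array of length bands - 1, one value per band pair.
    """
    bands = cube.shape[-1]
    flat = cube.reshape(-1, bands)  # all pixels of each band as one vector
    return np.array([
        np.corrcoef(flat[:, b - 1], flat[:, b])[0, 1]
        for b in range(1, bands)
    ])
```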
<p>A set of artificial noise parameters was found by numerical minimization such that, when the noise was applied to our synthetically generated hyperspectral image, its band-to-band Pearson correlation coefficients closely resembled the real-world camera noise profile. This is a multivariate optimization problem with one variable per neighbouring-band pair, i.e. the number of wavelengths minus 1 variables. Specifically, differential evolution, an evolutionary optimization algorithm from the Python library <italic>scipy</italic>, was used to minimize the difference between the true and artificial correlations. Figure <xref rid="j_infor522_fig_003">3</xref> shows the resulting noise specification. This noise profile was then used to create noise closer to that of a real-world camera and to test how the algorithms perform in this scenario.</p>
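The fitting step can be sketched as follows (a simplified illustration: the per-band noise parametrization, the objective, and the optimizer settings are our assumptions, not the exact benchmark code):

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_noise_scales(clean_cube, target_corr, seed=0):
    """Find per-band noise scales so that the band-to-band Pearson
    correlations of the noisy cube approach a measured target profile.

    clean_cube: array of shape (height, width, bands).
    target_corr: measured correlations, length bands - 1.
    One optimization variable per neighbouring-band pair, as in the text.
    """
    rng = np.random.default_rng(seed)
    bands = clean_cube.shape[-1]
    flat = clean_cube.reshape(-1, bands)
    noise = rng.normal(size=flat.shape)  # fixed noise draw, scaled per band

    def objective(scales):
        # First band is left unscaled; each scale perturbs one band.
        noisy = flat + noise * np.concatenate(([0.0], scales))
        corr = np.array([
            np.corrcoef(noisy[:, b - 1], noisy[:, b])[0, 1]
            for b in range(1, bands)
        ])
        return np.sum((corr - target_corr) ** 2)

    bounds = [(0.0, 1.0)] * (bands - 1)
    result = differential_evolution(objective, bounds, seed=seed,
                                    maxiter=20, tol=1e-3)
    return result.x
```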
</sec>
<sec id="j_infor522_s_019">
<label>3.6</label>
<title>Image Size Difference Experiment Methodology</title>
<p>Algorithm performance testing according to different image sizes was conducted using these steps: 
<list>
<list-item id="j_infor522_li_045">
<label>•</label>
<p>A synthetically generated hyperspectral image dataset with different numbers of endmembers was created using the exact methodology of the endmember robustness experiment described in Section <xref rid="j_infor522_s_017">3.4</xref>.</p>
</list-item>
<list-item id="j_infor522_li_046">
<label>•</label>
<p>These datasets are then downscaled by averaging each area of <inline-formula id="j_infor522_ineq_056"><alternatives><mml:math>
<mml:mn>2</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>2</mml:mn></mml:math><tex-math><![CDATA[$2\times 2$]]></tex-math></alternatives></inline-formula> pixels to one value and each area of <inline-formula id="j_infor522_ineq_057"><alternatives><mml:math>
<mml:mn>3</mml:mn>
<mml:mo>×</mml:mo>
<mml:mn>3</mml:mn></mml:math><tex-math><![CDATA[$3\times 3$]]></tex-math></alternatives></inline-formula> pixels to one value, producing images 4 and 9 times smaller, respectively.</p>
</list-item>
<list-item id="j_infor522_li_047">
<label>•</label>
<p>RMSE and SRE metrics are then calculated on these 3 collections of datasets to compare the results.</p>
</list-item>
</list>
</p>
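The mean-value downscaling step can be sketched as follows (a minimal illustration; cropping the spatial dimensions to a multiple of the factor is our simplifying assumption):

```python
import numpy as np

def downscale(cube, factor):
    """Downscale the spatial dimensions by averaging factor x factor blocks.

    cube: array of shape (height, width, bands); height and width are
    cropped to a multiple of factor before averaging.
    """
    h, w, bands = cube.shape
    h, w = h - h % factor, w - w % factor
    blocks = cube[:h, :w].reshape(h // factor, factor,
                                  w // factor, factor, bands)
    return blocks.mean(axis=(1, 3))  # average each spatial block
```

With a factor of 2 this yields an image 4 times smaller, and with a factor of 3 an image 9 times smaller.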
</sec>
</sec>
<sec id="j_infor522_s_020">
<label>4</label>
<title>Benchmark</title>
<p>In this section, all of the algorithms used in the experiments are described, and the final benchmark results are given. The algorithms were selected based on two main criteria: the code was made public by the authors and is open to use, and the algorithm solves at least one of the hyperspectral unmixing tasks.</p>
<sec id="j_infor522_s_021">
<label>4.1</label>
<title>Tested Algorithms</title>
<p>The algorithm code was gathered from the authors’ GitHub or personal pages. All the code used in the experiments, together with links to the authors’ pages, is provided in the GitHub repository (<uri>https://github.com/VytautasPau/HUBenchmark.git</uri>). The algorithms were implemented in Matlab software and the Python programming language and were run through the Matlab Python engine, which adds overhead to the calculations. For this reason, direct performance comparison with a pure Matlab implementation is not recommended. This subsection describes the algorithms that were tested: 
<list>
<list-item id="j_infor522_li_048">
<label>1.</label>
<p>SUnSAL – solves an l1–l2 norm optimization problem under several constraints: positivity, which requires all resulting abundance values to be greater than or equal to 0, and sum-to-one, which requires the abundances of each pixel to sum to 1. The algorithm minimizes the l1 and l2 regularization norms simultaneously; in other words, it performs a sparse regression on both the linear and the squared values at the same time.</p>
</list-item>
<list-item id="j_infor522_li_049">
<label>2.</label>
<p>SUnSAL-TV – is an extension of the SUnSAL algorithm that adds an isotropic or non-isotropic total variation spatial regularization.</p>
</list-item>
<list-item id="j_infor522_li_050">
<label>3.</label>
<p>S<sup>2</sup>WSU – an algorithm that uses spectral and spatial data at the same time to calculate a sparse unmixing matrix.</p>
</list-item>
<list-item id="j_infor522_li_051">
<label>4.</label>
<p>CNMF – an algorithm that fuses high spatial resolution multispectral data and high spectral resolution hyperspectral data to calculate image endmembers and unmix these spectra.</p>
</list-item>
<list-item id="j_infor522_li_052">
<label>5.</label>
<p>R-CoNMF – algorithm performs 3 important steps to find the endmembers, gather their signatures, and calculate the unmixing matrix.</p>
</list-item>
<list-item id="j_infor522_li_053">
<label>6.</label>
<p>SGSNMF – considers the spatial data and pixel locations and runs under the assumption that unmixing matrices are sparse.</p>
</list-item>
<list-item id="j_infor522_li_054">
<label>7.</label>
<p>RSNMF – a total variation regularized blind unmixing algorithm that considers pixel location and their correlation to nearby pixels.</p>
</list-item>
<list-item id="j_infor522_li_055">
<label>8.</label>
<p>ALMM – a linear model that uses an endmember dictionary to help calculate the spectral variability.</p>
</list-item>
</list>
</p>
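As an illustration of the constrained formulation that SUnSAL solves, a toy solver can be sketched as follows (this is not the authors’ ADMM-based SUnSAL implementation; the projected-gradient scheme, names, and step sizes are ours):

```python
import numpy as np

def sparse_unmix(E, y, lam=0.01, steps=1000, lr=0.01):
    """Toy projected-gradient solver for
        min ||E a - y||^2 + lam * ||a||_1
    subject to a >= 0 (positivity) and sum(a) = 1 (sum-to-one),
    the constraints described for SUnSAL above.

    E: endmember matrix of shape (bands, x); y: pixel spectrum (bands,).
    Returns an abundance vector a of length x.
    """
    a = np.full(E.shape[1], 1.0 / E.shape[1])  # start from equal abundances
    for _ in range(steps):
        # For a >= 0, the l1 penalty gradient reduces to a constant lam.
        grad = 2.0 * E.T @ (E @ a - y) + lam
        a = np.clip(a - lr * grad, 0.0, None)  # enforce positivity
        s = a.sum()
        a = a / s if s > 0 else np.full_like(a, 1.0 / a.size)  # sum-to-one
    return a
```

The projection steps (clipping and renormalization) are a crude stand-in for the ADMM machinery that the real algorithm uses, but they enforce the same two constraints.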
</sec>
<sec id="j_infor522_s_022">
<label>4.2</label>
<title>Benchmarking Results</title>
<p>This subsection describes the results collected by running the created experiments on available algorithms. The code used in creating and running these benchmarks can be accessed at <uri>https://github.com/VytautasPau/HUBenchmark.git</uri>.</p>
<p><bold>Endmember robustness:</bold> This experiment was conducted using an artificial dataset generated with the pattern depicted in Fig. <xref rid="j_infor522_fig_002">2</xref>. The pattern has a ground truth counterpart, a cropped version of the same image in which 20 different classes (collections of more than 2 endmembers assigned to each pixel) are labelled. Twenty-one randomly selected endmembers were mixed with different abundances using this classification pattern. The abundance selection followed a few steps, also shown in the diagram in Fig. <xref rid="j_infor522_fig_004">4</xref>:</p>
<list>
<list-item id="j_infor522_li_056">
<label>•</label>
<p>10 different datasets were created using the same 21 endmembers to introduce statistical variation into the calculations.</p>
<p>
<fig id="j_infor522_fig_004">
<label>Fig. 4</label>
<caption>
<p>Endmember robustness experiment diagram.</p>
</caption>
<graphic xlink:href="infor522_g004.jpg"/>
</fig>
</p>
<p>
<fig id="j_infor522_fig_005">
<label>Fig. 5</label>
<caption>
<p>Endmember robustness experiment result with box plots for each endmember group and algorithm. (Colours: purple – SUnSAL, dark blue – SUnSAL-TV, blue – SGSNMF, light blue – S2WSU, cyan – RSNMF, yellow – R-CoNMF, orange – CNMF, red – ALMM.) A combined synthetic IEEE GRSS and USGS spectral library dataset was used as test data.</p>
</caption>
<graphic xlink:href="infor522_g005.jpg"/>
</fig>
</p>
</list-item>
<list-item id="j_infor522_li_057">
<label>•</label>
<p>Endmembers were randomly selected into groups to create different numbers of endmembers, from 2 to 21, for each of the classes in the pattern.</p>
</list-item>
<list-item id="j_infor522_li_058">
<label>•</label>
<p>For each group of endmembers, uniformly distributed abundances were created.</p>
</list-item>
<list-item id="j_infor522_li_059">
<label>•</label>
<p>The other 9 variations of abundances were randomly generated, giving 10 different mixed hyperspectral images in total.</p>
</list-item>
</list>
<p>Figure <xref rid="j_infor522_fig_005">5</xref> and Table <xref rid="j_infor522_tab_005">5</xref> show the results obtained by running the algorithms and calculating the RMSE metric between the predicted values and the generated ground truth abundances. In Fig. <xref rid="j_infor522_fig_005">5</xref>, almost all algorithms except RSNMF show a consistent RMSE across different numbers of endmembers, with SGSNMF having the largest errors and, in turn, the worst performance, while SUnSAL has the lowest error of all algorithms on average. The SGSNMF and RSNMF algorithms have the widest value distributions; smaller distributions indicate more consistent results, and RSNMF is inconsistent at low numbers of endmembers. Table <xref rid="j_infor522_tab_005">5</xref> presents the same information as Fig. <xref rid="j_infor522_fig_005">5</xref>, but in numerical form, with averaged values instead of distributions.</p>
<p><bold>Robustness to noise:</bold> as in the previous experiment, hyperspectral images were generated using the classified surface image of Fig. <xref rid="j_infor522_fig_002">2</xref> to better represent the distribution of endmembers in the hyperspectral image. As described in the methodology section, an image with 5 endmembers was generated, and artificial noise was added. Figure <xref rid="j_infor522_fig_006">6</xref> shows the RMSE for each tested algorithm, averaged over 10 runs, at different levels of added artificial noise, including noise generated from real camera noise characteristics. The figure shows that for the SUnSAL algorithm the RMSE grows almost linearly with the amount of noise in the image, and its RMSE is the worst of all algorithms when the real-life noise characteristic is used. The other algorithms achieve nearly constant results across all of the noise levels. The RSNMF algorithm has the lowest overall RMSE; its error increases at greater noise levels, but it yields the most accurate unmixing result in the real-noise experiment.</p>
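<p>For illustration, one plausible way to add white Gaussian noise to a hyperspectral cube at a target signal-to-noise ratio is sketched below; the benchmark's actual noise generation, and its real camera noise characteristic, may differ from this assumption.</p>

```python
import numpy as np

def add_noise(cube, snr_db, seed=0):
    # Add white Gaussian noise to a hyperspectral cube so that the overall
    # signal-to-noise ratio equals snr_db (in decibels).
    rng = np.random.default_rng(seed)
    signal_power = np.mean(cube ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    return cube + rng.normal(0.0, np.sqrt(noise_power), cube.shape)

clean = np.ones((32, 32, 100))        # unit-power toy cube
noisy = add_noise(clean, snr_db=20)   # 20 dB SNR -> noise std 0.1 here
```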
<table-wrap id="j_infor522_tab_005">
<label>Table 5</label>
<caption>
<p>Endmember robustness experiment results with average RMSE values for each endmember group and algorithm. (Columns list the tested algorithms; rows correspond to the number of endmembers.)</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">No. of endmembers</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SUnSAL</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SUnSAL-TV</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SGSNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">RSNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">S2WSU</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">R-CoNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">CNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">ALMM</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: char">2</td>
<td style="vertical-align: top; text-align: left">0.00124</td>
<td style="vertical-align: top; text-align: left">0.0386</td>
<td style="vertical-align: top; text-align: left">266.85</td>
<td style="vertical-align: top; text-align: left">0.1</td>
<td style="vertical-align: top; text-align: left"><bold>0.0001</bold></td>
<td style="vertical-align: top; text-align: left">0.088</td>
<td style="vertical-align: top; text-align: left">0.093</td>
<td style="vertical-align: top; text-align: left">0.162</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">3</td>
<td style="vertical-align: top; text-align: left">0.00193</td>
<td style="vertical-align: top; text-align: left">0.038</td>
<td style="vertical-align: top; text-align: left">274.59</td>
<td style="vertical-align: top; text-align: left">0.096</td>
<td style="vertical-align: top; text-align: left">0.0064</td>
<td style="vertical-align: top; text-align: left">0.086</td>
<td style="vertical-align: top; text-align: left">0.091</td>
<td style="vertical-align: top; text-align: left">0.157</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">4</td>
<td style="vertical-align: top; text-align: left">0.00127</td>
<td style="vertical-align: top; text-align: left">0.0494</td>
<td style="vertical-align: top; text-align: left">203.004</td>
<td style="vertical-align: top; text-align: left">0.091</td>
<td style="vertical-align: top; text-align: left">0.0054</td>
<td style="vertical-align: top; text-align: left">0.091</td>
<td style="vertical-align: top; text-align: left">0.083</td>
<td style="vertical-align: top; text-align: left">0.155</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">5</td>
<td style="vertical-align: top; text-align: left">0.00196</td>
<td style="vertical-align: top; text-align: left">0.0424</td>
<td style="vertical-align: top; text-align: left">171.066</td>
<td style="vertical-align: top; text-align: left">0.089</td>
<td style="vertical-align: top; text-align: left">0.0038</td>
<td style="vertical-align: top; text-align: left"><bold>0.082</bold></td>
<td style="vertical-align: top; text-align: left">0.081</td>
<td style="vertical-align: top; text-align: left">0.151</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">6</td>
<td style="vertical-align: top; text-align: left">0.00117</td>
<td style="vertical-align: top; text-align: left">0.0445</td>
<td style="vertical-align: top; text-align: left"><bold>124.579</bold></td>
<td style="vertical-align: top; text-align: left">0.083</td>
<td style="vertical-align: top; text-align: left">0.0066</td>
<td style="vertical-align: top; text-align: left">0.086</td>
<td style="vertical-align: top; text-align: left">0.075</td>
<td style="vertical-align: top; text-align: left">0.151</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">7</td>
<td style="vertical-align: top; text-align: left">0.00189</td>
<td style="vertical-align: top; text-align: left">0.0381</td>
<td style="vertical-align: top; text-align: left">349.082</td>
<td style="vertical-align: top; text-align: left">0.081</td>
<td style="vertical-align: top; text-align: left">0.0076</td>
<td style="vertical-align: top; text-align: left">0.091</td>
<td style="vertical-align: top; text-align: left">0.076</td>
<td style="vertical-align: top; text-align: left">0.143</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">8</td>
<td style="vertical-align: top; text-align: left">0.0021</td>
<td style="vertical-align: top; text-align: left">0.044</td>
<td style="vertical-align: top; text-align: left">219.953</td>
<td style="vertical-align: top; text-align: left">0.076</td>
<td style="vertical-align: top; text-align: left">0.010</td>
<td style="vertical-align: top; text-align: left">0.090</td>
<td style="vertical-align: top; text-align: left">0.072</td>
<td style="vertical-align: top; text-align: left">0.150</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">9</td>
<td style="vertical-align: top; text-align: left">0.00138</td>
<td style="vertical-align: top; text-align: left">0.0411</td>
<td style="vertical-align: top; text-align: left">220.569</td>
<td style="vertical-align: top; text-align: left">0.074</td>
<td style="vertical-align: top; text-align: left">0.013</td>
<td style="vertical-align: top; text-align: left">0.090</td>
<td style="vertical-align: top; text-align: left">0.073</td>
<td style="vertical-align: top; text-align: left">0.148</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">10</td>
<td style="vertical-align: top; text-align: left">0.00124</td>
<td style="vertical-align: top; text-align: left">0.0368</td>
<td style="vertical-align: top; text-align: left">299.613</td>
<td style="vertical-align: top; text-align: left">0.069</td>
<td style="vertical-align: top; text-align: left">0.011</td>
<td style="vertical-align: top; text-align: left">0.090</td>
<td style="vertical-align: top; text-align: left">0.065</td>
<td style="vertical-align: top; text-align: left">0.146</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">11</td>
<td style="vertical-align: top; text-align: left">0.00169</td>
<td style="vertical-align: top; text-align: left">0.0446</td>
<td style="vertical-align: top; text-align: left">342.694</td>
<td style="vertical-align: top; text-align: left">0.067</td>
<td style="vertical-align: top; text-align: left">0.013</td>
<td style="vertical-align: top; text-align: left">0.0907</td>
<td style="vertical-align: top; text-align: left">0.060</td>
<td style="vertical-align: top; text-align: left">0.146</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">12</td>
<td style="vertical-align: top; text-align: left"><bold>0.00111</bold></td>
<td style="vertical-align: top; text-align: left">0.0377</td>
<td style="vertical-align: top; text-align: left">251.486</td>
<td style="vertical-align: top; text-align: left">0.066</td>
<td style="vertical-align: top; text-align: left">0.013</td>
<td style="vertical-align: top; text-align: left">0.0904</td>
<td style="vertical-align: top; text-align: left">0.055</td>
<td style="vertical-align: top; text-align: left">0.141</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">13</td>
<td style="vertical-align: top; text-align: left">0.00152</td>
<td style="vertical-align: top; text-align: left">0.0366</td>
<td style="vertical-align: top; text-align: left">246.71</td>
<td style="vertical-align: top; text-align: left">0.062</td>
<td style="vertical-align: top; text-align: left">0.0169</td>
<td style="vertical-align: top; text-align: left">0.0906</td>
<td style="vertical-align: top; text-align: left">0.057</td>
<td style="vertical-align: top; text-align: left">0.146</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">14</td>
<td style="vertical-align: top; text-align: left">0.00174</td>
<td style="vertical-align: top; text-align: left"><bold>0.0346</bold></td>
<td style="vertical-align: top; text-align: left">202.204</td>
<td style="vertical-align: top; text-align: left">0.061</td>
<td style="vertical-align: top; text-align: left">0.015</td>
<td style="vertical-align: top; text-align: left">0.0899</td>
<td style="vertical-align: top; text-align: left">0.050</td>
<td style="vertical-align: top; text-align: left">0.143</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">15</td>
<td style="vertical-align: top; text-align: left">0.00133</td>
<td style="vertical-align: top; text-align: left">0.0390</td>
<td style="vertical-align: top; text-align: left">404.76</td>
<td style="vertical-align: top; text-align: left">0.0588</td>
<td style="vertical-align: top; text-align: left">0.016</td>
<td style="vertical-align: top; text-align: left">0.0874</td>
<td style="vertical-align: top; text-align: left">0.049</td>
<td style="vertical-align: top; text-align: left">0.145</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">16</td>
<td style="vertical-align: top; text-align: left">0.00139</td>
<td style="vertical-align: top; text-align: left">0.0369</td>
<td style="vertical-align: top; text-align: left">177.42</td>
<td style="vertical-align: top; text-align: left">0.054</td>
<td style="vertical-align: top; text-align: left">0.019</td>
<td style="vertical-align: top; text-align: left">0.0894</td>
<td style="vertical-align: top; text-align: left">0.041</td>
<td style="vertical-align: top; text-align: left">0.139</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">17</td>
<td style="vertical-align: top; text-align: left">0.00171</td>
<td style="vertical-align: top; text-align: left">0.0383</td>
<td style="vertical-align: top; text-align: left">215.487</td>
<td style="vertical-align: top; text-align: left">0.054</td>
<td style="vertical-align: top; text-align: left">0.0209</td>
<td style="vertical-align: top; text-align: left">0.090</td>
<td style="vertical-align: top; text-align: left">0.042</td>
<td style="vertical-align: top; text-align: left">0.142</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">18</td>
<td style="vertical-align: top; text-align: left">0.00145</td>
<td style="vertical-align: top; text-align: left">0.0360</td>
<td style="vertical-align: top; text-align: left">219.009</td>
<td style="vertical-align: top; text-align: left">0.054</td>
<td style="vertical-align: top; text-align: left">0.0207</td>
<td style="vertical-align: top; text-align: left">0.0909</td>
<td style="vertical-align: top; text-align: left">0.042</td>
<td style="vertical-align: top; text-align: left">0.140</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">19</td>
<td style="vertical-align: top; text-align: left">0.00164</td>
<td style="vertical-align: top; text-align: left">0.0391</td>
<td style="vertical-align: top; text-align: left">335.102</td>
<td style="vertical-align: top; text-align: left">0.051</td>
<td style="vertical-align: top; text-align: left">0.0219</td>
<td style="vertical-align: top; text-align: left">0.0909</td>
<td style="vertical-align: top; text-align: left">0.039</td>
<td style="vertical-align: top; text-align: left">0.140</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char">20</td>
<td style="vertical-align: top; text-align: left">0.00183</td>
<td style="vertical-align: top; text-align: left">0.0358</td>
<td style="vertical-align: top; text-align: left">173.696</td>
<td style="vertical-align: top; text-align: left">0.049</td>
<td style="vertical-align: top; text-align: left">0.0232</td>
<td style="vertical-align: top; text-align: left">0.0916</td>
<td style="vertical-align: top; text-align: left"><bold>0.032</bold></td>
<td style="vertical-align: top; text-align: left">0.138</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: char; border-bottom: solid thin">21</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.00163</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.0354</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">218.699</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><bold>0.047</bold></td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.0204</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.0895</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.035</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin"><bold>0.135</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p><bold>Image size difference:</bold> the RMSE results for images down-scaled 9 times are shown in Fig. <xref rid="j_infor522_fig_007">7</xref>, and Fig. <xref rid="j_infor522_fig_008">8</xref> shows the corresponding algorithm performance for images scaled down 4 times. Both figures show similar results, consistent with the RMSE values of the endmember robustness experiment, in which the images were not downscaled.</p>
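<p>As a sketch only, spatial downscaling of a hyperspectral cube is commonly done by block-averaging each band; the exact scaling method used to prepare the benchmark images may differ from this assumption.</p>

```python
import numpy as np

def downscale(cube, factor):
    # Spatially downscale a hyperspectral cube (rows, cols, bands) by
    # block-averaging; rows and cols must be divisible by factor, and the
    # spectral dimension is left untouched.
    r, c, b = cube.shape
    return cube.reshape(r // factor, factor,
                        c // factor, factor, b).mean(axis=(1, 3))

cube = np.arange(16, dtype=float).reshape(4, 4, 1)
small = downscale(cube, 2)   # shape (2, 2, 1); each pixel is a 2x2 mean
```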
<fig id="j_infor522_fig_006">
<label>Fig. 6</label>
<caption>
<p>Algorithm robustness to noise experiment results. A combined synthetic dataset of IEEE GRSS and USGS spectral library with added noise was used as test data.</p>
</caption>
<graphic xlink:href="infor522_g006.jpg"/>
</fig>
<fig id="j_infor522_fig_007">
<label>Fig. 7</label>
<caption>
<p>Algorithm performance with hyperspectral images down-scaled 9 times. A combined synthetic dataset of the IEEE GRSS and USGS spectral libraries, scaled down 9 times, was used.</p>
</caption>
<graphic xlink:href="infor522_g007.jpg"/>
</fig>
<fig id="j_infor522_fig_008">
<label>Fig. 8</label>
<caption>
<p>Algorithm performance with hyperspectral images down-scaled 4 times. A combined synthetic dataset of the IEEE GRSS and USGS spectral libraries, scaled down 4 times, was used.</p>
</caption>
<graphic xlink:href="infor522_g008.jpg"/>
</fig>
<p>To better compare these results, Table <xref rid="j_infor522_tab_006">6</xref> was created, showing the average RMSE and SRE metrics for each algorithm and each set of scaled images. Negative SRE values represent a worse reconstruction than positive values, because the higher the SRE, the better the signal reconstruction. The table shows the effect of image scaling on the results of the tested algorithms. All tested algorithms produced consistent results across the different image scales. The SGSNMF algorithm obtained the worst results, while SUnSAL obtained the lowest RMSE values. The SRE values were consistent across the image scales, with S2WSU showing the largest variation in metric values. Overall, the RSNMF algorithm obtained the most consistent RMSE values across all scales.</p>
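<p>For reference (an illustrative sketch, not the benchmark's implementation), the SRE metric in decibels can be computed as the ratio of signal energy to reconstruction-error energy, which makes clear why negative values indicate a reconstruction whose error energy exceeds the signal energy.</p>

```python
import numpy as np

def sre_db(estimate, truth):
    # Signal-to-reconstruction error in decibels: higher is better;
    # negative values mean the error energy exceeds the signal energy.
    return float(10 * np.log10(np.sum(truth ** 2)
                               / np.sum((truth - estimate) ** 2)))

truth = np.ones(1000)
good = truth + 0.1   # small uniform error -> SRE of 20 dB
bad = truth * 3      # error energy larger than signal energy -> negative SRE
```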
<p>During the benchmark experiment calculations, the time spent by each algorithm was logged to compare the time differences between them. This is not a standardized test, so the time comparison is only relative and depends on the hardware. To compare the running times of the different algorithms on each dataset, all experiments were performed on a desktop computer with a 12-core, 24-thread AMD CPU, 64 GB of RAM, and an Nvidia GTX 1080Ti with 11 GB of VRAM. The average recorded times are shown in Table <xref rid="j_infor522_tab_007">7</xref>.</p>
</sec>
</sec>
<sec id="j_infor522_s_023">
<label>5</label>
<title>Conclusions</title>
<p>In this paper, we analyse different available hyperspectral unmixing algorithms, propose a methodology, and create a benchmark to test these algorithms against each other more accurately. The code for the benchmark is available on GitHub. A hyperparameter testing experiment was conducted to determine the optimal hyperparameters of each tested algorithm. The main conclusion from this experiment was that hyperparameters depend strongly on the datasets used and are not universal. An endmember robustness experiment was created to test the algorithms’ ability to accurately detect the abundances in hyperspectral images with different numbers of endmembers. The robustness-to-noise experiment shows the algorithms’ ability to obtain accurate results despite artificially generated noise added to the same dataset. The image size difference experiment tests the algorithms’ ability to unmix hyperspectral images depending on the size of the given image and, in turn, the amount of spatial and spectral data available. One of the main takeaways from the conducted research is a perceived lack of a standard algorithm testing methodology. Many of the reviewed papers use different metrics, testing methodologies, and hyperspectral datasets to test their algorithms, which makes it difficult to determine the best-performing ones. In this paper, we proposed a hyperspectral unmixing algorithm benchmark to help homogenize this type of algorithm testing. From the conducted hyperspectral unmixing algorithm benchmark experiments, we can conclude:</p>
<list>
<list-item id="j_infor522_li_060">
<label>•</label>
<p>The SUnSAL algorithm obtained the lowest RMSE results (0.008) across all of the experiments except on the dataset with a noise profile resembling a real-world scenario (4.824), which indicates that the algorithm may not be suitable for real-world use, especially if the gathered data tends to be noisy.</p>
<p><table-wrap id="j_infor522_tab_006">
<label>Table 6</label>
<caption>
<p>Image size difference algorithm comparison results.</p>
</caption>
<table>
<thead>
<tr>
<td colspan="2" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td colspan="2" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Algorithm</td>
<td style="vertical-align: top; text-align: left">Downscale</td>
<td style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">SRE</td>
</tr>
</tbody><tbody>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">SUnSAL</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.003</td>
<td style="vertical-align: top; text-align: left">19.800</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">20.095</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.001</td>
<td style="vertical-align: top; text-align: left">16.549</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">SUnSAL-TV</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.046</td>
<td style="vertical-align: top; text-align: left">2.184</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.040</td>
<td style="vertical-align: top; text-align: left">1.643</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.047</td>
<td style="vertical-align: top; text-align: left">1.114</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">SGSNMF</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">257.327</td>
<td style="vertical-align: top; text-align: left">−20.889</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">305.677</td>
<td style="vertical-align: top; text-align: left">−28.925</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">197.180</td>
<td style="vertical-align: top; text-align: left">−32.716</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">S2WSU</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.176</td>
<td style="vertical-align: top; text-align: left">2.125</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.020</td>
<td style="vertical-align: top; text-align: left">4.603</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.042</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">1.558</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<td colspan="2" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin"/>
<td colspan="2" style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Metrics</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left">Algorithm</td>
<td style="vertical-align: top; text-align: left">Downscale</td>
<td style="vertical-align: top; text-align: left">RMSE</td>
<td style="vertical-align: top; text-align: left">SRE</td>
</tr>
</tbody><tbody>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">RSNMF</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.053</td>
<td style="vertical-align: top; text-align: left">0.569</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.051</td>
<td style="vertical-align: top; text-align: left">0.311</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.050</td>
<td style="vertical-align: top; text-align: left">0.452</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">R-CoNMF</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.217</td>
<td style="vertical-align: top; text-align: left">−4.189</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.218</td>
<td style="vertical-align: top; text-align: left">−5.496</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.216</td>
<td style="vertical-align: top; text-align: left">−5.330</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left">CNMF</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.045</td>
<td style="vertical-align: top; text-align: left">−0.025</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.041</td>
<td style="vertical-align: top; text-align: left">−0.153</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">3</td>
<td style="vertical-align: top; text-align: left">0.042</td>
<td style="vertical-align: top; text-align: left">0.009</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top; text-align: left; border-bottom: solid thin">ALMM</td>
<td style="vertical-align: top; text-align: left">1</td>
<td style="vertical-align: top; text-align: left">0.195</td>
<td style="vertical-align: top; text-align: left">−4.982</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left">2</td>
<td style="vertical-align: top; text-align: left">0.204</td>
<td style="vertical-align: top; text-align: left">−5.230</td>
</tr>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">0.200</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">−5.039</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="j_infor522_tab_007">
<label>Table 7</label>
<caption>
<p>Algorithm average calculation times in seconds.</p>
</caption>
<table>
<thead>
<tr>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">Algorithm</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SUnSAL</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SUnSAL-TV</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">SGSNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">S2WSU</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">RSNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">R-CoNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">CNMF</td>
<td style="vertical-align: top; text-align: left; border-top: solid thin; border-bottom: solid thin">ALMM</td>
</tr>
</thead>
<tbody>
<tr>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">Time (s)</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">228</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">2671</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">636</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">7451</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">2106</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">257</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">3855</td>
<td style="vertical-align: top; text-align: left; border-bottom: solid thin">851</td>
</tr>
</tbody>
</table>
</table-wrap></p>
</list-item>
<list-item id="j_infor522_li_061">
<label>•</label>
<p>In the real-world noise scenario, the CNMF algorithm obtained the lowest RMSE result (0.0961). This value was close to half of the next best result, but both values are small (around 0.1 RMSE), so the perceived difference between these results may be minimal.</p>
</list-item>
<list-item id="j_infor522_li_062">
<label>•</label>
<p>The SRE metric shows that the S2WSU (4.603) and SUnSAL (20.095) algorithms achieved the most accurate results in the image size comparison experiment. The difference between these two most accurate algorithms is almost tenfold, and the differences between the best and worst algorithms span several orders of magnitude. However, each algorithm stays within the same SRE magnitude across the three image sizes, showing little to no degradation of results when images are downscaled.</p>
</list-item>
<list-item id="j_infor522_li_063">
<label>•</label>
<p>The image size comparison experiment showed negligible differences in results between the image sizes; from this, we conclude that all of the algorithms are robust to changes in image size, provided the image quality stays the same.</p>
</list-item>
<list-item id="j_infor522_li_064">
<label>•</label>
<p>SUnSAL and R-CoNMF achieved the fastest calculation times of all algorithms, 228 and 257 seconds respectively. Note that this running-time comparison is only relative between the algorithms, as the test was not normalized for other factors such as hardware and software resources.</p>
</list-item>
</list>
</sec>
</body>
<back>
<ref-list id="j_infor522_reflist_001">
<title>References</title>
<ref id="j_infor522_ref_001">
<mixed-citation publication-type="other"><string-name><surname>BaySpec</surname></string-name> (2021). OCI™-F Hyperspectral Imager (VIS-NIR, SWIR). [Read on: 2021-10-12]. <uri>https://www.bayspec.com/spectroscopy/oci-f-hyperspectral-imager/</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_002">
<mixed-citation publication-type="chapter"><string-name><surname>Bioucas-Dias</surname>, <given-names>J.M.</given-names></string-name>, <string-name><surname>Figueiredo</surname>, <given-names>M.A.T.</given-names></string-name> (<year>2010</year>). <chapter-title>Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing</chapter-title>. In: <source>2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing</source>, pp. <fpage>1</fpage>–<lpage>4</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/WHISPERS.2010.5594963" xlink:type="simple">https://doi.org/10.1109/WHISPERS.2010.5594963</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_003">
<mixed-citation publication-type="journal"><string-name><surname>Bioucas-Dias</surname>, <given-names>J.M.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Dobigeon</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Parente</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Gader</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Chanussot</surname>, <given-names>J.</given-names></string-name> (<year>2012</year>). <article-title>Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches</article-title>. <source>IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing</source>, <volume>5</volume>(<issue>2</issue>), <fpage>354</fpage>–<lpage>379</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/JSTARS.2012.2194696" xlink:type="simple">https://doi.org/10.1109/JSTARS.2012.2194696</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_004">
<mixed-citation publication-type="journal"><string-name><surname>Borsoi</surname>, <given-names>R.A.</given-names></string-name>, <string-name><surname>Imbiriba</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Bermudez</surname>, <given-names>J.C.M.</given-names></string-name> (<year>2020</year>). <article-title>Deep generative endmember modeling: an application to unsupervised spectral unmixing</article-title>. <source>IEEE Transactions on Computational Imaging</source>, <volume>6</volume>, <fpage>374</fpage>–<lpage>384</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TCI.2019.2948726" xlink:type="simple">https://doi.org/10.1109/TCI.2019.2948726</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_005">
<mixed-citation publication-type="journal"><string-name><surname>Dong</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Yuan</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Lu</surname>, <given-names>X.</given-names></string-name> (<year>2020</year>). <article-title>Spectral-spatial joint sparse NMF for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>59</volume>(<issue>3</issue>), <fpage>2391</fpage>–<lpage>2402</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.3006109" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.3006109</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_006">
<mixed-citation publication-type="journal"><string-name><surname>Feng</surname>, <given-names>X.-R.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>H.-C.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>H.</given-names></string-name> (<year>2022</year>). <article-title>Correntropy-based autoencoder-like NMF with total variation for hyperspectral unmixing</article-title>. <source>IEEE Geoscience and Remote Sensing Letters</source>, <volume>19</volume>, <fpage>1</fpage>–<lpage>5</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/LGRS.2020.3020896" xlink:type="simple">https://doi.org/10.1109/LGRS.2020.3020896</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_007">
<mixed-citation publication-type="journal"><string-name><surname>Gabay</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Mercier</surname>, <given-names>B.</given-names></string-name> (<year>1976</year>). <article-title>A dual algorithm for the solution of nonlinear variational problems via finite element approximation</article-title>. <source>Computers &amp; Mathematics with Applications</source>, <volume>2</volume>(<issue>1</issue>), <fpage>17</fpage>–<lpage>40</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/0898-1221(76)90003-1" xlink:type="simple">https://doi.org/10.1016/0898-1221(76)90003-1</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_008">
<mixed-citation publication-type="journal"><string-name><surname>Goel</surname>, <given-names>P.K.</given-names></string-name>, <string-name><surname>Prasher</surname>, <given-names>S.O.</given-names></string-name>, <string-name><surname>Patel</surname>, <given-names>R.M.</given-names></string-name>, <string-name><surname>Landry</surname>, <given-names>J.A.</given-names></string-name>, <string-name><surname>Bonnell</surname>, <given-names>R.B.</given-names></string-name>, <string-name><surname>Viau</surname>, <given-names>A.A.</given-names></string-name> (<year>2003</year>). <article-title>Classification of hyperspectral data by decision trees and artificial neural networks to identify weed stress and nitrogen status of corn</article-title>. <source>Computers and Electronics in Agriculture</source>, <volume>39</volume>(<issue>2</issue>), <fpage>67</fpage>–<lpage>93</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/S0168-1699(03)00020-6" xlink:type="simple">https://doi.org/10.1016/S0168-1699(03)00020-6</ext-link>. <uri>https://www.sciencedirect.com/science/article/pii/S0168169903000206</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_009">
<mixed-citation publication-type="journal"><string-name><surname>Guo</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Han</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Bai</surname>, <given-names>Y.</given-names></string-name> (<year>2018</year>). <article-title>K-nearest neighbor combined with guided filter for hyperspectral image classification</article-title>. <source>Procedia Computer Science</source>, <volume>129</volume>, <fpage>159</fpage>–<lpage>165</lpage>. <comment><italic>2017 International Conference on Identification, Information and Knowledge in the Internet of Things</italic></comment>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.procs.2018.03.066" xlink:type="simple">https://doi.org/10.1016/j.procs.2018.03.066</ext-link>. <uri>https://www.sciencedirect.com/science/article/pii/S1877050918302904</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_010">
<mixed-citation publication-type="chapter"><string-name><surname>Guo</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Wittman</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Osher</surname>, <given-names>S.</given-names></string-name> (<year>2009</year>). <chapter-title>L1 unmixing and its application to hyperspectral image enhancement</chapter-title>. In: <source>Proceedings SPIE Conference on Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV</source>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1117/12.818245" xlink:type="simple">https://doi.org/10.1117/12.818245</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_011">
<mixed-citation publication-type="journal"><string-name><surname>Hapke</surname>, <given-names>B.</given-names></string-name> (<year>1981</year>). <article-title>Bidirectional reflectance spectroscopy: 1. Theory</article-title>. <source>Journal of Geophysical Research: Solid Earth</source>, <volume>86</volume>(<issue>B4</issue>), <fpage>3039</fpage>–<lpage>3054</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1029/JB086iB04p03039" xlink:type="simple">https://doi.org/10.1029/JB086iB04p03039</ext-link>. <uri>https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/JB086iB04p03039</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_012">
<mixed-citation publication-type="journal"><string-name><surname>He</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>L.</given-names></string-name> (<year>2017</year>). <article-title>Total variation regularized reweighted sparse nonnegative matrix factorization for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>55</volume>(<issue>7</issue>), <fpage>3909</fpage>–<lpage>3921</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2017.2683719" xlink:type="simple">https://doi.org/10.1109/TGRS.2017.2683719</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_013">
<mixed-citation publication-type="journal"><string-name><surname>Hong</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Yokoya</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Chanussot</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Zhu</surname>, <given-names>X.X.</given-names></string-name> (<year>2019</year>). <article-title>An augmented linear mixing model to address spectral variability for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Image Processing</source>, <volume>28</volume>(<issue>4</issue>), <fpage>1923</fpage>–<lpage>1938</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TIP.2018.2878958" xlink:type="simple">https://doi.org/10.1109/TIP.2018.2878958</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_014">
<mixed-citation publication-type="journal"><string-name><surname>Hua</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Qiu</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Zhao</surname>, <given-names>L.</given-names></string-name> (<year>2021</year>). <article-title>Autoencoder network for hyperspectral unmixing with adaptive abundance smoothing</article-title>. <source>IEEE Geoscience and Remote Sensing Letters</source>, <volume>18</volume>(<issue>9</issue>), <fpage>1640</fpage>–<lpage>1644</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/LGRS.2020.3005999" xlink:type="simple">https://doi.org/10.1109/LGRS.2020.3005999</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_015">
<mixed-citation publication-type="other"><string-name><surname>Houston dataset of Science and Technology</surname></string-name> (2021). Hyperspectral Data Set. [Last read on: 2021-10-10]. <uri>http://lesun.weebly.com/hyperspectral-data-set.html</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_016">
<mixed-citation publication-type="journal"><string-name><surname>Iordache</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bioucas-Dias</surname>, <given-names>J.M.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name> (<year>2012</year>). <article-title>Total variation spatial regularization for sparse hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>50</volume>(<issue>11</issue>), <fpage>4484</fpage>–<lpage>4502</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2012.2191590" xlink:type="simple">https://doi.org/10.1109/TGRS.2012.2191590</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_017">
<mixed-citation publication-type="journal"><string-name><surname>Iordache</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bioucas-Dias</surname>, <given-names>J.M.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name> (<year>2014</year>). <article-title>Collaborative sparse regression for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>52</volume>(<issue>1</issue>), <fpage>341</fpage>–<lpage>354</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2013.2240001" xlink:type="simple">https://doi.org/10.1109/TGRS.2013.2240001</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_018">
<mixed-citation publication-type="chapter"><string-name><surname>Kaya</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ataş</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kahraman</surname>, <given-names>S.</given-names></string-name> (<year>2021</year>). <chapter-title>LiDAR-aided total variation regularized nonnegative tensor factorization for hyperspectral unmixing</chapter-title>. In: <source>2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS</source>, pp. <fpage>5063</fpage>–<lpage>5066</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IGARSS47720.2021.9553137" xlink:type="simple">https://doi.org/10.1109/IGARSS47720.2021.9553137</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_019">
<mixed-citation publication-type="journal"><string-name><surname>Koirala</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Khodadadzadeh</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Contreras</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Zahiri</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Gloaguen</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Scheunders</surname>, <given-names>P.</given-names></string-name> (<year>2019</year>). <article-title>A supervised method for nonlinear hyperspectral unmixing</article-title>. <source>Remote Sensing</source>, <volume>11</volume>(<issue>20</issue>), <fpage>2458</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/rs11202458" xlink:type="simple">https://doi.org/10.3390/rs11202458</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_020">
<mixed-citation publication-type="other"><string-name><surname>Kokaly</surname>, <given-names>R.F.</given-names></string-name>, <string-name><surname>Clark</surname>, <given-names>R.N.</given-names></string-name>, <string-name><surname>Swayze</surname>, <given-names>G.A.</given-names></string-name>, <string-name><surname>Livo</surname>, <given-names>K.E.</given-names></string-name>, <string-name><surname>Hoefen</surname>, <given-names>T.M.</given-names></string-name>, <string-name><surname>Pearson</surname>, <given-names>N.C.</given-names></string-name>, <string-name><surname>Wise</surname>, <given-names>R.A.</given-names></string-name>, <string-name><surname>Benzel</surname>, <given-names>W.M.</given-names></string-name>, <string-name><surname>Lowers</surname>, <given-names>H.A.</given-names></string-name>, <string-name><surname>Driscoll</surname>, <given-names>R.L.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>A.J.</given-names></string-name> (2017). <italic>USGS Spectral Library Version 7</italic>. Technical report, Reston, VA. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3133/ds1035" xlink:type="simple">https://doi.org/10.3133/ds1035</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_021">
<mixed-citation publication-type="journal"><string-name><surname>Li</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Feng</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Zhong</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>L.</given-names></string-name> (<year>2021</year>). <article-title>Superpixel-based reweighted low-rank and total variation sparse unmixing for hyperspectral remote sensing imagery</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>59</volume>(<issue>1</issue>), <fpage>629</fpage>–<lpage>647</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.2994260" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.2994260</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_022">
<mixed-citation publication-type="journal"><string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Bioucas-Dias</surname>, <given-names>J.M.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>L.</given-names></string-name> (<year>2016</year>). <article-title>Robust collaborative nonnegative matrix factorization for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>54</volume>(<issue>10</issue>), <fpage>6076</fpage>–<lpage>6090</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2016.2580702" xlink:type="simple">https://doi.org/10.1109/TGRS.2016.2580702</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_023">
<mixed-citation publication-type="journal"><string-name><surname>Li</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Zhao</surname>, <given-names>L.</given-names></string-name> (<year>2021</year>). <article-title>Correntropy-based spatial-spectral robust sparsity-regularized hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>59</volume>(<issue>2</issue>), <fpage>1453</fpage>–<lpage>1471</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.2999936" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.2999936</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_024">
<mixed-citation publication-type="journal"><string-name><surname>Lu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Yuan</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yan</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>X.</given-names></string-name> (<year>2013</year>). <article-title>Manifold regularized sparse NMF for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>51</volume>(<issue>5</issue>), <fpage>2815</fpage>–<lpage>2826</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2012.2213825" xlink:type="simple">https://doi.org/10.1109/TGRS.2012.2213825</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_025">
<mixed-citation publication-type="journal"><string-name><surname>Lu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Dong</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Yuan</surname>, <given-names>Y.</given-names></string-name> (<year>2020</year>). <article-title>Subspace clustering constrained sparse NMF for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>58</volume>(<issue>5</issue>), <fpage>3007</fpage>–<lpage>3019</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2019.2946751" xlink:type="simple">https://doi.org/10.1109/TGRS.2019.2946751</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_026">
<mixed-citation publication-type="other"><string-name><surname>NASA</surname></string-name> (2004). The Advanced Spaceborne Thermal Emission and Reflection Radiometer. [Last read on: 2021-11-01]. <uri>https://asterweb.jpl.nasa.gov/</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_027">
<mixed-citation publication-type="other"><string-name><surname>NASA</surname></string-name> (2015). AVIRIS Data – Ordering Free AVIRIS Standard Data Products. [Last read on: 2021-10-10]. <uri>https://aviris.jpl.nasa.gov/data/free_data.html</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_028">
<mixed-citation publication-type="journal"><string-name><surname>Palsson</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Ulfarsson</surname>, <given-names>M.O.</given-names></string-name>, <string-name><surname>Sveinsson</surname>, <given-names>J.R.</given-names></string-name> (<year>2021</year>). <article-title>Convolutional autoencoder for spectral–spatial hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>59</volume>(<issue>1</issue>), <fpage>535</fpage>–<lpage>549</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.2992743" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.2992743</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_029">
<mixed-citation publication-type="journal"><string-name><surname>Peng</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Sun</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Jiang</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Zhou</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>Q.</given-names></string-name> (<year>2022</year>). <article-title>A general loss-based nonnegative matrix factorization for hyperspectral unmixing</article-title>. <source>IEEE Geoscience and Remote Sensing Letters</source>, <volume>19</volume>, <fpage>1</fpage>–<lpage>5</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/LGRS.2020.3017233" xlink:type="simple">https://doi.org/10.1109/LGRS.2020.3017233</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_030">
<mixed-citation publication-type="other"><string-name><surname>Prasad</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Le Saux</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Yokoya</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Hansch</surname>, <given-names>R.</given-names></string-name> (2020). 2018 IEEE GRSS data fusion challenge – fusion of multispectral LiDAR and hyperspectral data. <italic>IEEE Dataport</italic>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.21227/jnh9-nz89" xlink:type="simple">https://doi.org/10.21227/jnh9-nz89</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_031">
<mixed-citation publication-type="journal"><string-name><surname>Qi</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Gao</surname>, <given-names>X.</given-names></string-name> (<year>2020</year>). <article-title>Spectral–spatial-weighted multiview collaborative sparse unmixing for hyperspectral images</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>58</volume>(<issue>12</issue>), <fpage>8766</fpage>–<lpage>8779</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.2990476" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.2990476</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_032">
<mixed-citation publication-type="other"><string-name><surname>Ranasinghe</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Weerasooriya</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Godaliyadda</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Herath</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Ekanayake</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Jayasundara</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Ramanayake</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Senarath</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Wickramasinghe</surname>, <given-names>D.</given-names></string-name> (2022). GAUSS: Guided Encoder-Decoder Architecture for Hyperspectral Unmixing with Spatial Smoothness. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.48550/ARXIV.2204.07713" xlink:type="simple">https://doi.org/10.48550/ARXIV.2204.07713</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_033">
<mixed-citation publication-type="journal"><string-name><surname>Schaepman</surname>, <given-names>M.E.</given-names></string-name>, <string-name><surname>Jehle</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Hueni</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>D’Odorico</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Damm</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Weyermann</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Schneider</surname>, <given-names>F.D.</given-names></string-name>, <string-name><surname>Laurent</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Popp</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Seidel</surname>, <given-names>F.C.</given-names></string-name>, <string-name><surname>Lenhard</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gege</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Küchler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Brazile</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kohler</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>De Vos</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Meuleman</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Meynart</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Schläpfer</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Kneubühler</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Itten</surname>, <given-names>K.I.</given-names></string-name> (<year>2015</year>). 
<article-title>Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX)</article-title>. <source>Remote Sensing of Environment</source>, <volume>158</volume>, <fpage>207</fpage>–<lpage>219</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.rse.2014.11.014" xlink:type="simple">https://doi.org/10.1016/j.rse.2014.11.014</ext-link>. <uri>https://www.sciencedirect.com/science/article/pii/S0034425714004568</uri>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_034">
<mixed-citation publication-type="journal"><string-name><surname>Su</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Jia</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Zheng</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>Q.</given-names></string-name> (<year>2022</year>). <article-title>Superpixel-based weighted collaborative sparse regression and reweighted low-rank representation for hyperspectral image unmixing</article-title>. <source>IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing</source>, <volume>15</volume>, <fpage>393</fpage>–<lpage>408</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/JSTARS.2021.3133428" xlink:type="simple">https://doi.org/10.1109/JSTARS.2021.3133428</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_035">
<mixed-citation publication-type="journal"><string-name><surname>Su</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Qi</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Gamba</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name> (<year>2021</year>). <article-title>Deep autoencoders with multitask learning for bilinear hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>59</volume>(<issue>10</issue>), <fpage>8615</fpage>–<lpage>8629</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2020.3041157" xlink:type="simple">https://doi.org/10.1109/TGRS.2020.3041157</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_036">
<mixed-citation publication-type="chapter"><string-name><surname>Wang</surname>, <given-names>J.-J.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>D.-C.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>T.-Z.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>J.</given-names></string-name> (<year>2021</year>). <chapter-title>Endmember constraint non-negative tensor factorization via total variation for hyperspectral unmixing</chapter-title>. In: <source>2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS</source>, pp. <fpage>3313</fpage>–<lpage>3316</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IGARSS47720.2021.9554468" xlink:type="simple">https://doi.org/10.1109/IGARSS47720.2021.9554468</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_037">
<mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Zhong</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>Y.</given-names></string-name> (<year>2017</year>). <article-title>Spatial group sparsity regularized nonnegative matrix factorization for hyperspectral unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>55</volume>(<issue>11</issue>), <fpage>6287</fpage>–<lpage>6304</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2017.2724944" xlink:type="simple">https://doi.org/10.1109/TGRS.2017.2724944</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_038">
<mixed-citation publication-type="journal"><string-name><surname>Xu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yin</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Wen</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name> (<year>2012</year>). <article-title>An alternating direction algorithm for matrix completion with nonnegative factors</article-title>. <source>Frontiers of Mathematics in China</source>, <volume>7</volume>(<issue>2</issue>), <fpage>365</fpage>–<lpage>384</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11464-012-0194-5" xlink:type="simple">https://doi.org/10.1007/s11464-012-0194-5</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_039">
<mixed-citation publication-type="journal"><string-name><surname>Yokoya</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Yairi</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Iwasaki</surname>, <given-names>A.</given-names></string-name> (<year>2012</year>). <article-title>Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>50</volume>(<issue>2</issue>), <fpage>528</fpage>–<lpage>537</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2011.2161320" xlink:type="simple">https://doi.org/10.1109/TGRS.2011.2161320</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_040">
<mixed-citation publication-type="chapter"><string-name><surname>Yuhas</surname>, <given-names>R.H.</given-names></string-name>, <string-name><surname>Goetz</surname>, <given-names>A.F.H.</given-names></string-name>, <string-name><surname>Boardman</surname>, <given-names>J.W.</given-names></string-name> (<year>1992</year>). <chapter-title>Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm</chapter-title>. In: <source>Summaries of the Third Annual JPL Airborne Geoscience Workshop. Vol. 1: AVIRIS Workshop</source>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_041">
<mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Mei</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Xie</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Ma</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Feng</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>Q.</given-names></string-name> (<year>2022</year>). <article-title>Spectral variability augmented sparse unmixing of hyperspectral images</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>60</volume>, <fpage>1</fpage>–<lpage>13</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2022.3169228" xlink:type="simple">https://doi.org/10.1109/TGRS.2022.3169228</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_042">
<mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Spectral–spatial weighted sparse regression for hyperspectral image unmixing</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>56</volume>(<issue>6</issue>), <fpage>3265</fpage>–<lpage>3276</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2018.2797200" xlink:type="simple">https://doi.org/10.1109/TGRS.2018.2797200</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_043">
<mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Plaza</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name> (<year>2022</year>). <article-title>Spectral-spatial hyperspectral unmixing using nonnegative matrix factorization</article-title>. <source>IEEE Transactions on Geoscience and Remote Sensing</source>, <volume>60</volume>, <fpage>1</fpage>–<lpage>13</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/TGRS.2021.3074364" xlink:type="simple">https://doi.org/10.1109/TGRS.2021.3074364</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_044">
<mixed-citation publication-type="chapter"><string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Tong</surname>, <given-names>X.-H.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>M.-L.</given-names></string-name> (<year>2009</year>). <chapter-title>An improved N-FINDR algorithm for endmember extraction in hyperspectral imagery</chapter-title>. In: <source>2009 Joint Urban Remote Sensing Event</source>, pp. <fpage>1</fpage>–<lpage>5</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/URS.2009.5137677" xlink:type="simple">https://doi.org/10.1109/URS.2009.5137677</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_045">
<mixed-citation publication-type="other"><string-name><surname>Zhao</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>He</surname>, <given-names>Z.</given-names></string-name> (<year>2019</year>). <article-title>A laboratory-created dataset with ground-truth for hyperspectral unmixing evaluation</article-title>. <source>CoRR</source>, abs/1902.08347. <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1902.08347" xlink:type="simple">http://arxiv.org/abs/1902.08347</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_046">
<mixed-citation publication-type="journal"><string-name><surname>Zhao</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Yan</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>J.</given-names></string-name> (<year>2021</year>a). <article-title>LSTM-DNN based autoencoder network for nonlinear hyperspectral image unmixing</article-title>. <source>IEEE Journal of Selected Topics in Signal Processing</source>, <volume>15</volume>(<issue>2</issue>), <fpage>295</fpage>–<lpage>309</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/JSTSP.2021.3052361" xlink:type="simple">https://doi.org/10.1109/JSTSP.2021.3052361</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_047">
<mixed-citation publication-type="chapter"><string-name><surname>Zhao</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Liang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Huang</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Xiao</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yu</surname>, <given-names>X.</given-names></string-name> (<year>2021</year>b). <chapter-title>Sparsity constrained convolutional autoencoder network for hyperspectral image unmixing</chapter-title>. In: <source>2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS</source>, pp. <fpage>3317</fpage>–<lpage>3320</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/IGARSS47720.2021.9553239" xlink:type="simple">https://doi.org/10.1109/IGARSS47720.2021.9553239</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_048">
<mixed-citation publication-type="other"><string-name><surname>Zhu</surname>, <given-names>F.</given-names></string-name> (<year>2017</year>). <article-title>Hyperspectral unmixing: ground truth labeling, datasets, benchmark performances and survey</article-title>. <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1708.05125" xlink:type="simple">arXiv:1708.05125</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_049">
<mixed-citation publication-type="journal"><string-name><surname>Zhu</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Fan</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Xiang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Meng</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Pan</surname>, <given-names>C.</given-names></string-name> (<year>2014</year>a). <article-title>Spectral unmixing via data-guided sparsity</article-title>. <source>IEEE Transactions on Image Processing</source>, <volume>23</volume>(<issue>12</issue>), <fpage>5412</fpage>–<lpage>5427</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/tip.2014.2363423" xlink:type="simple">https://doi.org/10.1109/tip.2014.2363423</ext-link>.</mixed-citation>
</ref>
<ref id="j_infor522_ref_050">
<mixed-citation publication-type="journal"><string-name><surname>Zhu</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Xiang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Fan</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Pan</surname>, <given-names>C.</given-names></string-name> (<year>2014</year>b). <article-title>Structured sparse method for hyperspectral unmixing</article-title>. <source>ISPRS Journal of Photogrammetry and Remote Sensing</source>, <volume>88</volume>, <fpage>101</fpage>–<lpage>118</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.isprsjprs.2013.11.014" xlink:type="simple">https://doi.org/10.1016/j.isprsjprs.2013.11.014</ext-link>. <uri>https://www.sciencedirect.com/science/article/pii/S0924271613002761</uri>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>
