<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article"><front><journal-meta><journal-id journal-id-type="publisher-id">INFORMATICA</journal-id><journal-title-group><journal-title>Informatica</journal-title></journal-title-group><issn pub-type="epub">0868-4952</issn><issn pub-type="ppub">0868-4952</issn><publisher><publisher-name>VU</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">INF51-214</article-id><article-id pub-id-type="doi">10.3233/INF-1994-51-214</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research article</subject></subj-group></article-categories><title-group><article-title>Searching for minimum in neural networks</article-title></title-group><contrib-group><contrib contrib-type="Author"><name><surname>Vyšniauskas</surname><given-names>Vytautas</given-names></name><xref ref-type="aff" rid="j_INFORMATICA_aff_000"/></contrib><aff id="j_INFORMATICA_aff_000">Institute of Mathematics and Informatics, 2600 Vilnius, Akademijos St. 4, Lithuania</aff></contrib-group><pub-date pub-type="epub"><day>01</day><month>01</month><year>1994</year></pub-date><volume>5</volume><issue>1-2</issue><fpage>241</fpage><lpage>255</lpage><abstract><p>Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10<sup>3</sup>–10<sup>4</sup>), which makes the optimization of their parameters a nontrivial problem. Surprisingly, however, local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of a neural network is a high-dimensional, multi-extremal problem, global optimization methods would ordinarily be called for. Using a perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization is so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to evaluate a starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for the optimization of neural network parameters.</p></abstract><kwd-group><label>Keywords</label><kwd>neural networks</kwd><kwd>optimization theory</kwd><kwd>pattern recognition</kwd></kwd-group></article-meta></front></article>