'''ALOPEX''' (an acronym from "'''''AL'''gorithms '''O'''f '''P'''attern '''EX'''traction''") is a correlation based machine learning algorithm first proposed by [[Evangelia Micheli-Tzanakou|Tzanakou]] and Harth in 1974.
 
==Principle==
In [[machine learning]], the goal is to train a system to minimize a cost function or, in ALOPEX terminology, a response function. Many training algorithms, such as [[backpropagation]], are susceptible to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences together with a stochastic process to overcome this, in an attempt to reach the absolute minimum (or maximum) of the response function.
 
==Method==
ALOPEX, in its simplest form, is defined by the updating equation:
 
<math>\Delta W_{ij}(n) = \gamma\, \Delta W_{ij}(n-1)\, \Delta R(n) + r_{ij}(n)</math>
 
Where:
*<math>n \geq 0</math> is the iteration or time step.
*<math>\Delta W_{ij}(n)</math> is the difference between the current and previous value of the system variable <math>W_{ij}</math> at iteration <math>n</math>.
*<math>\Delta R(n)</math> is the difference between the current and previous value of the response function <math>R</math> at iteration <math>n</math>.
*<math>\gamma</math> is the learning rate parameter (<math>\gamma < 0</math> minimizes <math>R</math>; <math>\gamma > 0</math> maximizes <math>R</math>).
*<math>r_{ij}(n) \sim N(0, \sigma^2)</math> is a Gaussian noise term.
 
==Discussion==
Essentially, ALOPEX changes each system variable <math>W_{ij}(n)</math> based on the product of the previous change in the variable, <math>\Delta W_{ij}(n-1)</math>, the resulting change in the response function, <math>\Delta R(n)</math>, and the learning rate parameter <math>\gamma</math>. Further, to find the absolute minimum (or maximum), the stochastic process <math>r_{ij}(n)</math> (Gaussian or other) is added to stochastically "push" the algorithm out of any local optima.
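The update rule can be sketched in Python. This is a minimal illustrative sketch, not a reference implementation: it uses the ''sign'' of the correlation <math>\Delta W_{ij} \cdot \Delta R</math> with a fixed step size (a common practical simplification of the raw product) plus Gaussian noise, applied to an assumed toy response function <math>R(W) = \sum W^2</math>; the function name and all parameter values are illustrative.

```python
import numpy as np

# Sketch of an ALOPEX-style update (sign-based variant), assuming a toy
# response function R(W) = sum(W**2); gamma < 0 minimizes R, gamma > 0 maximizes.
def alopex(R, W, gamma=-1.0, delta=0.005, sigma=0.0025, iterations=5000, seed=0):
    rng = np.random.default_rng(seed)
    prev_R = R(W)
    # The first move is pure noise: no history of changes exists yet.
    step = rng.normal(0.0, sigma, size=W.shape)
    W = W + step
    for _ in range(iterations):
        current_R = R(W)
        delta_R = current_R - prev_R   # change in the response
        corr = step * delta_R          # correlate each Delta W_ij with Delta R
        # Move each parameter by a fixed delta in the direction suggested by
        # the correlation, plus Gaussian noise r_ij(n) to escape local optima.
        step = (np.sign(gamma) * delta * np.sign(corr)
                + rng.normal(0.0, sigma, size=W.shape))
        prev_R = current_R
        W = W + step
    return W

# Minimizing R(W) = W_1^2 + W_2^2 from (1, -1); W_final should end up near (0, 0).
W_final = alopex(lambda w: float(np.sum(w**2)), np.array([1.0, -1.0]))
```

Because only the scalar change <math>\Delta R</math> is observed, no gradient of <math>R</math> is ever computed; each parameter infers its own descent direction from the correlation between its last move and the change in the global response.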
 
==References==
*Harth, E., & Tzanakou, E. (1974) Alopex: A stochastic method for determining visual receptive fields. Vision Research, '''14''':1475-1482. [http://dx.doi.org/10.1016/0042-6989(74)90024-8 Abstract from ScienceDirect]
 
[[Category:Classification algorithms]]
[[Category:Neural networks]]
 
 
{{compu-AI-stub}}
