Softmax activation function

The softmax activation function is a neural transfer function. In neural networks, transfer functions calculate a layer's output from its net input.

A paper by Cadieu et al. argues that the softmax is a biologically plausible approximation to the maximum operation.[1] The authors used it to simulate an invariance operation of complex cells,[2] defining it as

y = g\!\left( \frac{\sum_{j=1}^{n} x_j^{q+1}}{k + \sum_{j=1}^{n} x_j^{q}} \right),

where g is a sigmoid function, the x_j are the values of the input nodes, k is a small constant that avoids division by zero, and the exponent q is a parameter controlling the non-linearity.
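
A minimal numerical sketch of this operation, assuming the logistic function for g (the function name and test values below are illustrative, not taken from the paper):

    import numpy as np

    def complex_cell_response(x, q=4.0, k=1e-6):
        """Softmax-like approximation of the maximum over the inputs x.

        q controls the non-linearity; k is a small constant avoiding division by zero.
        """
        x = np.asarray(x, dtype=float)
        s = np.sum(x ** (q + 1)) / (k + np.sum(x ** q))
        return 1.0 / (1.0 + np.exp(-s))  # g: logistic sigmoid (an assumption)

    print(complex_cell_response([0.1, 0.9, 0.3]))  # driven mostly by the largest input

For large q the ratio inside g approaches the largest input, which gives the maximum-like behaviour described above.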

Artificial neural networks

In neural network simulations, the term softmax activation function refers to a similar function defined by[3]

\sigma \colon \mathbb{R}^n \times \{1, \dots, n\} \to (0, 1)
\sigma(\mathbf{q}, i) = \frac{\exp(q_i)}{\sum_{j=1}^{n} \exp(q_j)},

where the vector q is the net input to a softmax node and n is the number of nodes in the softmax layer. It ensures that all of the output values lie between 0 and 1 and that they sum to 1. This is a generalization of the logistic function to multiple variables.
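
A minimal sketch of this definition in Python (subtracting max(q) is a standard numerical-stability trick and does not change the result):

    import numpy as np

    def softmax(q):
        """Return sigma(q, i) for every index i as a vector that sums to 1."""
        q = np.asarray(q, dtype=float)
        e = np.exp(q - np.max(q))  # shifting by max(q) leaves the ratio unchanged
        return e / np.sum(e)

    p = softmax([1.0, 2.0, 3.0])
    print(p, p.sum())  # all values in (0, 1), sum equal to 1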

Since the function maps a vector and a specific index i to a real value, the derivative needs to take the index into account:

\frac{\partial}{\partial q_k} \sigma(\mathbf{q}, i) = \sigma(\mathbf{q}, i) \left( \delta_{ik} - \sigma(\mathbf{q}, k) \right)

Here, the Kronecker delta is used for simplicity (compare the derivative of the sigmoid function, which is likewise expressed via the function itself).
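
A short sketch of the resulting Jacobian matrix, with np.diag supplying the Kronecker-delta term (the softmax computation is repeated inline so the snippet is self-contained):

    import numpy as np

    def softmax_jacobian(q):
        """J[i, k] = d sigma(q, i) / d q_k = sigma_i * (delta_ik - sigma_k)."""
        q = np.asarray(q, dtype=float)
        s = np.exp(q - np.max(q))
        s /= s.sum()
        # np.diag(s) carries the delta_ik * sigma_i term, np.outer(s, s) the sigma_i * sigma_k term
        return np.diag(s) - np.outer(s, s)

    print(softmax_jacobian([1.0, 2.0, 3.0]))  # each row and column sums to 0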

See Multinomial logit for a probability model which uses the softmax activation function.

Reinforcement learning

In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:[4]

P_t(a) = \frac{\exp\left(q_t(a)/\tau\right)}{\sum_{i=1}^{n} \exp\left(q_t(i)/\tau\right)},

where the action value q_t(a) corresponds to the expected reward of taking action a, and τ is called a temperature parameter (in allusion to chemical kinetics). For high temperatures (τ → ∞), all actions have nearly the same probability; the lower the temperature, the more the expected rewards affect the probabilities. For a low temperature (τ → 0⁺), the probability of the action with the highest expected reward tends to 1.
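
A minimal sketch of softmax (Boltzmann) action selection using this formula; the action values and temperatures below are illustrative only:

    import numpy as np

    def softmax_action(action_values, tau, rng=np.random.default_rng()):
        """Sample an action index with probability proportional to exp(q(a) / tau)."""
        q = np.asarray(action_values, dtype=float) / tau
        p = np.exp(q - np.max(q))  # shift for numerical stability
        p /= p.sum()
        return rng.choice(len(p), p=p)

    q_t = [1.0, 2.0, 1.5]                    # hypothetical expected rewards
    print(softmax_action(q_t, tau=0.1))      # low temperature: almost always action 1
    print(softmax_action(q_t, tau=100.0))    # high temperature: close to a uniform choice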

Smooth approximation of maximum

When parameterized by a constant α > 0, the following formulation is a smooth, differentiable approximation of the maximum function:

\mathcal{S}_\alpha\!\left(\{x_i\}_{i=1}^{n}\right) = \frac{\sum_{i=1}^{n} x_i\, e^{\alpha x_i}}{\sum_{i=1}^{n} e^{\alpha x_i}}

𝒮α has the following properties:

  1. 𝒮α → max as α → ∞
  2. 𝒮0 is the average of its inputs
  3. 𝒮α → min as α → −∞
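
A short numerical sketch of 𝒮α and the three limiting behaviours listed above (the test values are arbitrary):

    import numpy as np

    def smooth_max(x, alpha):
        """S_alpha(x) = sum(x_i * exp(alpha * x_i)) / sum(exp(alpha * x_i))."""
        x = np.asarray(x, dtype=float)
        a = alpha * x
        w = np.exp(a - np.max(a))  # shift the exponents for numerical stability
        return np.sum(x * w) / np.sum(w)

    x = [1.0, 2.0, 5.0]
    print(smooth_max(x, 50.0))   # close to max(x) = 5
    print(smooth_max(x, 0.0))    # exactly the average, 8/3
    print(smooth_max(x, -50.0))  # close to min(x) = 1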

The gradient of softmax is given by:

\nabla_{x_i} \mathcal{S}_\alpha\!\left(\{x_i\}_{i=1}^{n}\right) = \frac{e^{\alpha x_i}}{\sum_{j=1}^{n} e^{\alpha x_j}} \left[ 1 + \alpha \left( x_i - \mathcal{S}_\alpha\!\left(\{x_i\}_{i=1}^{n}\right) \right) \right],

which makes the softmax function useful for optimization techniques that use gradient descent.
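
This expression can be checked against a finite-difference estimate; a brief sketch (the smooth_max helper from above is repeated so the snippet is self-contained, and the test values are arbitrary):

    import numpy as np

    def smooth_max(x, alpha):
        a = alpha * np.asarray(x, dtype=float)
        w = np.exp(a - np.max(a))
        return np.sum(np.asarray(x, dtype=float) * w) / np.sum(w)

    def smooth_max_grad(x, alpha):
        """Analytic gradient: softmax weight of x_i times [1 + alpha * (x_i - S_alpha)]."""
        x = np.asarray(x, dtype=float)
        w = np.exp(alpha * x - np.max(alpha * x))
        p = w / np.sum(w)
        return p * (1.0 + alpha * (x - smooth_max(x, alpha)))

    x, alpha, eps = np.array([1.0, 2.0, 5.0]), 0.7, 1e-6
    numeric = [(smooth_max(x + eps * np.eye(3)[i], alpha) - smooth_max(x, alpha)) / eps
               for i in range(3)]
    print(smooth_max_grad(x, alpha))  # should agree with the finite differences below
    print(np.array(numeric))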

Softmax transformation

The softmax function is also used to standardize data that is positively skewed and includes many values near zero. It takes a variable such as revenue or age and transforms the values onto a scale from zero to one. This type of transformation is needed especially when the data spans many orders of magnitude.

For example, revenues for customers could span anywhere from 0 to 300,000. Say the revenue figures range between 3 and 300,000. Expressed in powers of 10, 3 becomes 3×10^0 and 300,000 becomes 3×10^5 (10 raised to the power 0 is 1), so these two numbers span 5 orders of magnitude. Range scaling of this kind is a typical reason to use a function such as the softmax.
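
One way to realize such a zero-to-one rescaling is to standardize the values and then squash them with a logistic function; the recipe sketched below (z-scores followed by a sigmoid) is an assumption about the intended transformation, not a formula given above:

    import numpy as np

    def softmax_scale(values):
        """Map a skewed variable to (0, 1): standardize, then apply the logistic function.

        This particular recipe is only one plausible reading of the softmax
        transformation described in the text.
        """
        v = np.asarray(values, dtype=float)
        z = (v - v.mean()) / v.std()
        return 1.0 / (1.0 + np.exp(-z))

    revenue = np.array([3.0, 40.0, 900.0, 25_000.0, 300_000.0])  # spans ~5 orders of magnitude
    print(softmax_scale(revenue))  # every output lies strictly between 0 and 1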

References

  1. Cadieu, C., Kouh, M., Pasupathy, A., Connor, C. E., Riesenhuber, M., and Poggio, T. "A Model of V4 Shape Selectivity and Invariance." J Neurophysiol 98: 1733–1750, 2007.
  2. Serre, T., Kouh, M., Cadieu, C., Knoblich, U., Kreiman, G., and Poggio, T. "A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex." CBCL Paper 259/AI Memo 2005-036. Cambridge, MA: MIT, 2005.
  3. comp.ai.neural-nets FAQ: "What is a softmax activation function?"
  4. Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998. See in particular the section on softmax action selection.
