'''Extension neural network''' (ENN) is a pattern recognition method introduced by M. H. Wang and C. P. Hung in 2003 to classify instances of data sets. It combines [[neural network]] and extension theory concepts: the fast, adaptive learning capability of a [[neural network]] and the correlation estimation property of extension theory, applied by calculating an extension distance. <br />
ENN has been used in:
* Failure detection in machinery.
* Tissue classification through MRI.
* Fault recognition in automotive engines.
* State-of-charge estimation of lead-acid batteries.
* Classification with incomplete survey data.
 
== Extension Theory ==
Extension theory was first proposed by Cai in 1983 to solve contradictory problems. Whereas classical mathematics deals with the quantities and forms of objects, extension theory represents an object with a matter-element model.
<br />
{{NumBlk|:|<math>R=(N,C,V)</math>|{{EquationRef|1}}}}
<br />
where, in the matter-element <math>R</math>, <math>N</math> is the name or type, <math>C</math> is its set of characteristics and <math>V</math> holds the corresponding value for each characteristic. A concrete example is given in equation 2.
<br />
{{NumBlk|:|<math>
R=\begin{bmatrix}
Yusuf & Height & 178\,\mathrm{cm} \\
      & Weight & 98\,\mathrm{kg}
\end{bmatrix}</math>|{{EquationRef|2}}}}
 
where the characteristics <math>Height</math> and <math>Weight</math> form extension sets. These extension sets are defined by the values <math>V</math>, which are ranges for the corresponding characteristics. Extension theory is concerned with the extension correlation function between matter-element models such as the one in equation 2 and extension sets. The extension correlation function defines the extension space, which consists of pairs of elements and their extension correlation function values. The extension space formula is given in equation 3.
<br />
{{NumBlk|:|<math>
A=\left \{ (x,y)| x\in U, y=K(x) \right \}
</math>|{{EquationRef|3}}}}
<br />
where <math>A</math> is the extension space, <math>U</math> is the object space, <math>K</math> is the extension correlation function, <math>x</math> is an element of the object space and <math>y=K(x)</math> is the corresponding extension correlation function output for <math>x</math>. <math>K(x)</math> maps <math>x</math> to the interval <math>\left [ -\infty,\infty \right ] </math>. A negative value expresses the degree to which an element does not belong to a class, and a positive value the degree to which it does. If <math>x</math> is mapped to <math>\left [ 0,1 \right ] </math>, extension theory reduces to [[fuzzy set]] theory. The correlation function is given in equation 4.
<br />
{{NumBlk|:|<math>
\rho(x,X_{in})=\left|x-\frac{a+b}{2}\right|-\frac{b-a}{2}
</math>
<math>
\rho(x,X_{out})=\left|x-\frac{c+d}{2}\right|-\frac{d-c}{2}
</math>
|{{EquationRef|4}}}}
<br />
where <math>X_{in}</math> and <math>X_{out}</math> are called the concerned domain and the neighborhood domain, with intervals <math>(a,b)</math> and <math>(c,d)</math> respectively. The extended correlation function used to estimate the degree of membership between <math>x</math> and <math>X_{in}</math>, <math>X_{out}</math> is given in equation 5.
<br />
{{NumBlk|:|<math>
K(x)=
\begin{cases}
-\rho(x,X_{in})                      &x\in{X_{in}} \\
\frac{\rho(x,X_{in})}{\rho(x,X_{out})-\rho(x,X_{in})}  &x\not \in{X_{in}}
\end{cases}
</math>
|{{EquationRef|5}}}}
<br />
[[Image:ExtendedCorrelationFunction.jpg|thumb|center|alt=Extension Correlation Function]]
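The distance <math>\rho</math> of equation 4 and the extended correlation function of equation 5 can be sketched in Python as follows (a minimal illustration; the interval values in the usage note are assumptions chosen for the example):

```python
def rho(x, a, b):
    """Distance of x from the interval (a, b): negative inside, positive outside (eq. 4)."""
    return abs(x - (a + b) / 2) - (b - a) / 2

def K(x, concerned, neighborhood):
    """Extended correlation function (eq. 5).
    concerned = (a, b) is X_in, neighborhood = (c, d) is X_out."""
    a, b = concerned
    c, d = neighborhood
    if a <= x <= b:                      # x in X_in
        return -rho(x, a, b)
    return rho(x, a, b) / (rho(x, c, d) - rho(x, a, b))
```

For example, with concerned domain (4, 6) and neighborhood domain (0, 10), `K(5, (4, 6), (0, 10))` is positive (membership) while `K(8, (4, 6), (0, 10))` is negative (non-membership), matching the shape of the plotted function.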
 
== Extension Neural Network ==
The extension neural network resembles a conventional neural network in appearance: a weight vector sits between the input nodes and the output nodes, and the output nodes represent the input nodes after passing them through the weight vector.
 
The total numbers of input and output nodes are denoted by <math>n</math> and <math>n_c</math>, respectively; they equal the numbers of characteristics and classes. Rather than using a single weight value between two nodes as in a [[neural network]], the extension neural network architecture keeps two weight values, a lower and an upper bound. For instance <math>i</math>, <math>x^p_{ij}</math> is the <math>j</math>-th input, which belongs to class <math>p</math>, and <math>o_{ik}</math> is the corresponding output for class <math>k</math>. The output <math>o_{ik}</math> is calculated using the extension distance, as shown in equation 6.
 
{{NumBlk|:|<math>
ED_{ik}=\sum\limits_{j=1}^n \left( \frac{|x_{ij}^p-z_{kj}|-\frac{w_{kj}^U-w_{kj}^L}{2}}{\left|\frac{w_{kj}^U-w_{kj}^L}{2}\right|}+1 \right)
</math>
<math>
k=1,2,\ldots,n_c
</math>
|{{EquationRef|6}}}}
 
The estimated class is found by searching for the minimum among the extension distances calculated for all classes, as summarized in equation 7, where <math>k^*</math> is the estimated class.
 
{{NumBlk|:|<math>
k^* = \arg\min_k(o_{ik})
</math>
|{{EquationRef|7}}}}
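Equations 6 and 7 can be sketched together: the extension distance of an input vector to every class, followed by the arg-min decision. This is a minimal sketch; the array shapes and class encoding (classes indexed 0 to <math>n_c-1</math>) are assumptions for illustration.

```python
import numpy as np

def extension_distance(x, wL, wU):
    """Extension distance of input vector x to each class (eq. 6).
    wL, wU: (n_c, n) arrays of lower/upper weights per class and feature."""
    z = (wU + wL) / 2                          # cluster centers (eq. 9)
    half = (wU - wL) / 2                       # half-widths of the class ranges
    ed = (np.abs(x - z) - half) / np.abs(half) + 1
    return ed.sum(axis=1)                      # one distance per class

def classify(x, wL, wU):
    """Estimated class k* = argmin_k ED_ik (eq. 7)."""
    return int(np.argmin(extension_distance(x, wL, wU)))
```

An input inside a class's ranges contributes at most 1 per feature, so the class whose ranges enclose the input wins the arg-min.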
 
=== Learning Algorithm ===
Each class is composed of ranges of characteristics; these characteristics are the input types, or names, which come from the matter-element model. The weight values in the extension neural network represent these ranges. In the learning algorithm, the weights are first initialized by searching for the maximum and minimum input values of each class, as shown in equation 8.
 
{{NumBlk|:|<math>
w_{kj}^U = \max_i\{x_{ij}^k\}
</math>
<math>
w_{kj}^L = \min_i\{x_{ij}^k\}
</math>
<math>
i=1,2,\ldots,N_p
</math>
<math>
k=1,2,\ldots,n_c
</math>
<math>
j=1,2,\ldots,n
</math>
|{{EquationRef|8}}}}
 
where <math>i</math> is the instance index and <math>j</math> is the input index. This initialization sets the ranges of the classes according to the given training data.
 
After the weights are initialized, the cluster centers are found through equation 9.
 
{{NumBlk|:|<math>
Z_k = \{z_{k1},z_{k2},\ldots,z_{kn}\}
</math>
<math>
z_{kj} = \frac{w_{kj}^U+w_{kj}^L}{2}
</math>
<math>
k=1,2,\ldots,n_c
</math>
<math>
j=1,2,\ldots,n
</math>
|{{EquationRef|9}}}}
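Equations 8 and 9 amount to a per-class min/max scan of the training data followed by taking midpoints. A sketch (the function name and the 0-based label encoding are assumptions for illustration):

```python
import numpy as np

def initialize(X, y, n_c):
    """Min/max weight initialization (eq. 8) and cluster centers (eq. 9).
    X: (N_p, n) training inputs; y: (N_p,) class labels in 0..n_c-1."""
    wU = np.stack([X[y == k].max(axis=0) for k in range(n_c)])  # upper bounds
    wL = np.stack([X[y == k].min(axis=0) for k in range(n_c)])  # lower bounds
    z = (wU + wL) / 2                                           # centers
    return wL, wU, z
```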
 
Before the learning process begins, a target learning performance rate is set, as shown in equation 10,
 
{{NumBlk|:|<math>
E_\tau=\frac{N_m}{N_p}
</math>
|{{EquationRef|10}}}}
 
where <math>N_m</math> is the number of misclassified instances and <math>N_p</math> is the total number of instances. The initialized parameters are used to classify instances with equation 6. If the initialization is not sufficient to reach the target learning performance rate, training is required. In the training step the weights are adjusted to classify the training data more accurately, so the aim is to reduce the learning performance rate. In each iteration <math>E_\tau</math> is checked to determine whether the required learning performance has been reached, and every training instance is used for training. <br />
Instance <math>i</math>, which belongs to class <math>p</math>, is denoted by:
 
<math>
X_{i}^p=\{x_{i1}^p,x_{i2}^p,...,x_{in}^p\}
</math>
 
<math>
1\leq p\leq n_c
</math>
 
Every input data point of <math>X_i^p</math> is used in the extension distance calculation to estimate the class of <math>X_i^p</math>. If the estimated class <math>k^*=p</math>, no update is needed; if <math>k^* \neq p</math>, an update is performed. In the update, the separators, which express the relationship between inputs and classes, are shifted in proportion to the distance between the cluster centers and the data points. <br />
The update formulas:
 
<math>
z_{pj}^{new} = z_{pj}^{old} + \eta (x_{ij}^p-z_{pj}^{old})
</math><br />
<math>
z_{k^*j}^{new} = z_{k^*j}^{old} - \eta (x_{ij}^p-z_{k^*j}^{old})
</math><br />
<math>
w_{pj}^{L(new)} = w_{pj}^{L(old)} + \eta (x_{ij}^p-z_{pj}^{old})
</math><br />
<math>
w_{pj}^{U(new)} = w_{pj}^{U(old)} + \eta (x_{ij}^p-z_{pj}^{old})
</math><br />
<math>
w_{k^*j}^{L(new)} = w_{k^*j}^{L(old)} - \eta (x_{ij}^p-z_{k^*j}^{old})
</math><br />
<math>
w_{k^*j}^{U(new)} = w_{k^*j}^{U(old)} - \eta (x_{ij}^p-z_{k^*j}^{old})
</math>
 
 
 
To classify instance <math>i</math> accurately, the separator of class <math>p</math> for input <math>j</math> moves closer to the data point of instance <math>i</math>, whereas the separator of class <math>k^*</math> for input <math>j</math> moves farther away. In the above image, an update example is given. Assume that instance <math>i</math> belongs to class A but is classified into class B, because the extension distance calculation gives <math>ED_A>ED_B</math>. After the update, the separator of class A moves closer to the data point of instance <math>i</math> and the separator of class B moves farther away. Consequently the extension distance gives <math>ED_B>ED_A</math>, so after the update instance <math>i</math> is classified into class A.
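Putting the pieces together, the whole learning algorithm (min/max initialization, extension-distance classification, and the update for misclassified instances) might be sketched as below. The learning rate, epoch cap, and 0-based labels are illustrative assumptions, and the sketch presumes no class range collapses to a point (which would make a denominator zero in equation 6).

```python
import numpy as np

def train_enn(X, y, n_c, eta=0.1, e_target=0.0, max_epochs=100):
    """End-to-end ENN training sketch following equations 6-10.
    X: (N_p, n) float inputs; y: (N_p,) labels in 0..n_c-1."""
    wU = np.stack([X[y == k].max(axis=0) for k in range(n_c)])  # eq. 8
    wL = np.stack([X[y == k].min(axis=0) for k in range(n_c)])

    def predict(x):
        z, half = (wU + wL) / 2, (wU - wL) / 2                  # eq. 9
        ed = ((np.abs(x - z) - half) / np.abs(half) + 1).sum(axis=1)  # eq. 6
        return int(np.argmin(ed))                               # eq. 7

    for _ in range(max_epochs):
        miss = 0
        for x, p in zip(X, y):
            k = predict(x)
            if k != p:                       # misclassified: apply the update
                miss += 1
                d_p = eta * (x - (wU[p] + wL[p]) / 2)
                d_k = eta * (x - (wU[k] + wL[k]) / 2)
                wL[p] += d_p; wU[p] += d_p   # pull true class toward x
                wL[k] -= d_k; wU[k] -= d_k   # push wrong class away
        if miss / len(X) <= e_target:        # eq. 10: check E_tau
            break
    return wL, wU, predict
```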
 
== References ==
# {{cite doi|10.1016/j.eswa.2008.10.010}}
# Kuei-Hsiang Chao, Meng-Hui Wang, and Chia-Chang Hsu. A novel residual capacity estimation method based on extension neural network for lead-acid batteries. International Symposium on Neural Networks, pages 1145–1154, 2007.
# Kuei-Hsiang Chao, Meng-Hui Wang, Wen-Tsai Sung, and Guan-Jie Huang. Using ENN-1 for fault recognition of automotive engine. Expert Systems with Applications, 37(4):2943–2947, 2010.
# {{cite doi|10.1109/IIH-MSP.2009.141}}
# {{cite doi|10.1016/j.eswa.2007.11.012}}
# Juncai Zhang, Xu Qian, Yu Zhou, and Ai Deng. Condition monitoring method of the equipment based on extension neural network. Chinese Control and Decision Conference, pages 1735–1740, 2010
# {{cite pmid|12850034}}
 
[[Category:Neural networks]]
